Compare commits

...

759 Commits

Author SHA1 Message Date
Jakob Borg
d0ebf06ff8 Translation update 2015-05-03 08:29:29 +02:00
Jakob Borg
687b249034 Don't create stopped folder in staggered versioning (fixes #1749) 2015-05-03 08:22:11 +02:00
Jakob Borg
d1528dcff0 Remove old name conversion from staggered versioning 2015-05-03 08:22:11 +02:00
Jakob Borg
1c31cf6319 Merge pull request #1744 from Zillode/fix-1692
Upgrade running Syncthing instances (fixes #1692)
2015-05-02 15:17:45 +02:00
Lode Hoste
d54c366150 Upgrade running Syncthing instances (fixes #1692) 2015-05-01 23:38:54 +02:00
Jakob Borg
4ff535f883 Twitter link (lazily wrong icon, because not in Glyphicons...) 2015-05-01 23:32:51 +02:00
Jakob Borg
3cbddfe545 Merge pull request #1745 from Zillode/fix-1678
Added test for combining case insensitive and negated patterns (fixes #1678)
2015-05-01 09:20:51 +02:00
Lode Hoste
e3cae69495 Added test for combining case insensitive and negated patterns (fixes #1678) 2015-05-01 00:58:44 +02:00
Audrius Butkevicius
aee40316f8 Merge pull request #1732 from calmh/guisvc
Break out GUI into an API service
2015-04-30 22:15:47 +01:00
Audrius Butkevicius
d754f9ae89 Merge pull request #1742 from calmh/adaptive-cache
Adaptive database cache size
2015-04-30 21:56:40 +01:00
Audrius Butkevicius
3c50b3a9e0 Merge pull request #1741 from calmh/verbose
Verbose logging
2015-04-30 21:54:49 +01:00
Jakob Borg
fb312a71f7 Add verbose logging (fixes #179) 2015-04-30 20:47:21 +02:00
Jakob Borg
136d79eaa3 Break out GUI into an API service 2015-04-30 20:36:07 +02:00
Jakob Borg
834336499a Adaptive database cache size 2015-04-30 20:25:44 +02:00
Jakob Borg
c9da8237df Report usage statistics after transfer bench 2015-04-30 08:43:57 +02:00
Audrius Butkevicius
9638dcda0a Merge pull request #1735 from calmh/kinder-rehashing
Use fewer hasher routines when there are many folders.
2015-04-29 20:43:51 +01:00
Jakob Borg
756c5a2604 Use fewer hasher routines when there are many folders. 2015-04-29 21:26:08 +02:00
Jakob Borg
a9c31652b6 Merge branch 'pr-1725'
* pr-1725:
  typos and spelling correction
2015-04-29 17:09:30 +02:00
dartraiden
32a76901a9 typos and spelling correction 2015-04-29 15:59:47 +02:00
Audrius Butkevicius
a5e11c7489 Merge pull request #1730 from calmh/bug-1721
Don't hang when attempting multicast discovery on non-multicast interfaces
2015-04-29 11:01:25 +01:00
Audrius Butkevicius
f2b12014e1 Merge pull request #1729 from calmh/lint-clean
Run vet and lint. Make us lint clean.
2015-04-29 10:16:36 +01:00
Jakob Borg
60fcaebfdb Run vet and lint. Make us lint clean. 2015-04-29 10:38:02 +02:00
Audrius Butkevicius
32fe2cb659 Merge pull request #1700 from calmh/upnpsvc
Break out UPnP port mapping into a service
2015-04-29 00:18:49 +01:00
Jakob Borg
1207d54fdd Tweak UPnP discovery, avoid non-multicast interfaces (fixes #1721) 2015-04-28 22:32:10 +02:00
Stefan Tatschner
3932884688 Drop systemd README instruction in favor of the wiki 2015-04-28 14:11:12 +02:00
Audrius Butkevicius
19a2042746 Merge pull request #1723 from calmh/bug-1722
Handle conflict with local delete (fixes #1722)
2015-04-28 10:39:22 +01:00
Jakob Borg
4c6eb137da Merge pull request #1720 from AudriusButkevicius/ignores
Matcher is always there
2015-04-28 11:35:12 +02:00
Jakob Borg
57ec2ff915 Handle conflict with local delete (fixes #1722) 2015-04-28 11:34:16 +02:00
Audrius Butkevicius
77a161a087 Matcher checks nil receiver 2015-04-28 10:17:14 +01:00
Jakob Borg
0642402449 Break out UPnP port mapping into a service 2015-04-28 10:25:25 +02:00
Audrius Butkevicius
50d377d9fe Fix integration tests 2015-04-28 00:09:44 +01:00
Jakob Borg
f5211b0697 Add some more cache forbidding headers, for various user agents. 2015-04-27 09:08:55 +02:00
Jakob Borg
fd4ea46fd7 Merge pull request #1708 from jarlebring/upnp_close_conn_fix
Fix for routers that cannot handle many open HTTP connections
2015-04-26 22:54:47 +02:00
Elias Jarlebring
8d8546868d Fix for routers that cannot handle many open HTTP connections
Some routers stop responding when too many successive SOAP requests are sent, which generates an EOF error in DefaultClient.Do(req). This modification sets req.Close = true, following the description at http://stackoverflow.com/questions/17714494/golang-http-request-results-in-eof-errors-when-making-multiple-requests-successi
2015-04-26 22:31:43 +02:00
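A minimal sketch of the technique this commit describes, assuming a plain net/http client; the function, URL, and body are illustrative placeholders, not Syncthing's actual UPnP code:

```go
package upnp

import (
	"bytes"
	"net/http"
)

// soapRequest posts a SOAP body and closes the TCP connection afterwards.
// req.Close = true stops the client from reusing the connection, avoiding
// EOF errors from routers that mishandle HTTP keep-alive.
func soapRequest(url string, body []byte) (*http.Response, error) {
	req, err := http.NewRequest("POST", url, bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Close = true // close after this request; don't reuse the connection
	return http.DefaultClient.Do(req)
}
```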
Audrius Butkevicius
8ce547edeb Fix HTML (fixes #1707) 2015-04-26 20:14:34 +01:00
Jakob Borg
a17c48aed6 Merge pull request #1705 from AudriusButkevicius/glob
Add osutil.Glob to deal with Windows (fixes #1690)
2015-04-26 18:28:25 +02:00
Audrius Butkevicius
d12db3e7b8 Add osutil.Glob to deal with Windows (fixes #1690) 2015-04-26 16:37:50 +01:00
Jakob Borg
15b87ae297 Merge pull request #1704 from jarlebring/upnp_caps
Fix capitalization of HTTP header in SOAP request (fixes #1696)
2015-04-26 20:10:14 +09:00
Jakob Borg
02fdf59839 Add jarlebring 2015-04-26 19:40:02 +09:00
jarlebring
d9da02b7a8 Formatting with gofmt 2015-04-26 12:37:37 +02:00
Elias Jarlebring
8f2ad6418d Fix capitalization of HTTP header in SOAP request (fixes #1696)
Some routers are sensitive to the capitalization of "SOAPAction" in the HTTP header of a SOAP request. This modification follows the recommendation for preserving case in HTTP headers in Go described at http://stackoverflow.com/questions/26351716/how-to-keep-key-case-sensitive-in-request-header-using-golang?lq=1
2015-04-26 12:16:40 +02:00
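The Go detail behind this fix, as a hedged sketch (the helper is illustrative): Header.Set canonicalizes header keys, so the header map is written directly to keep the exact capitalization on the wire.

```go
package upnp

import "net/http"

// setSOAPAction preserves the exact "SOAPAction" capitalization.
// req.Header.Set would canonicalize the key to "Soapaction", which some
// routers reject; assigning to the map directly bypasses canonicalization.
func setSOAPAction(req *http.Request, action string) {
	req.Header["SOAPAction"] = []string{action}
}
```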
Jakob Borg
ff984425a3 Merge pull request #1703 from AudriusButkevicius/page
Add pagination to Out of sync item list (fixes #1509)
2015-04-26 18:42:17 +09:00
Audrius Butkevicius
ac1058359f Rebuild assets 2015-04-26 00:22:30 +01:00
Audrius Butkevicius
9afbca3001 Add pagination to Out of sync item list (fixes #1509) 2015-04-26 00:22:26 +01:00
Audrius Butkevicius
ec3f17cb9c Add angular-dirPagination 2015-04-25 22:52:52 +01:00
Audrius Butkevicius
73b9d5c5f9 Merge pull request #1698 from calmh/pull-order
Configurable file pull order (alphabetic, random, by size or age)
2015-04-25 15:41:02 +01:00
Audrius Butkevicius
ecc8591c95 Merge pull request #1699 from calmh/connsvc
Break out connection handling into a service
2015-04-25 15:37:08 +01:00
Audrius Butkevicius
696b67e4b1 Merge pull request #1697 from calmh/auditsvc
Add audit log feature
2015-04-25 15:34:34 +01:00
Jakob Borg
266a5116a1 Break out connection handling into a service 2015-04-25 23:21:42 +09:00
Jakob Borg
131f2be857 Add audit log feature 2015-04-25 23:20:39 +09:00
Jakob Borg
be7b3a9952 Configurable file pull order (alphabetic, random, by size or age) 2015-04-25 23:20:21 +09:00
Jakob Borg
bb31b1785b Add a service manager to main (future use) 2015-04-25 23:16:46 +09:00
Jakob Borg
2a60f4b1e9 Add .gitattributes; normalize line endings 2015-04-25 23:16:46 +09:00
Jakob Borg
33a4fb5a1a Fix folder check tests 2015-04-25 23:16:46 +09:00
Audrius Butkevicius
aece6e8b6c Merge pull request #1689 from calmh/nolocks
events.Subscription.Poll does not seem to require locking
2015-04-24 10:26:58 +01:00
Jakob Borg
7bf55dd14f events.Subscription.Poll does not seem to require locking
This is a large source of output from the new lock logging, and it
doesn't seem to accomplish anything useful that I can see. Running the
integration tests with the race detector to make sure...
2015-04-24 11:25:42 +09:00
Jakob Borg
e158f17c2b Adjust sync test intervals to be less latency sensitive 2015-04-24 11:25:24 +09:00
Jakob Borg
c5027d9478 Merge branch 'pr-1688'
* pr-1688:
  Minor fixup
  Add tests, fix getCaller, replace wg.Done with wg.Wait
2015-04-24 09:43:52 +09:00
Jakob Borg
36c1d82146 Minor fixup 2015-04-24 09:43:40 +09:00
Audrius Butkevicius
bd4f404d45 Add tests, fix getCaller, replace wg.Done with wg.Wait 2015-04-23 20:09:14 +01:00
Jakob Borg
43d39844f7 Merge pull request #1685 from AudriusButkevicius/mut
Add mutex logging
2015-04-23 21:16:23 +09:00
Audrius Butkevicius
e041a4d212 Track RUnlockers while locking a RWMutex 2015-04-23 11:29:23 +01:00
Audrius Butkevicius
433b923ea7 Add mutex logging 2015-04-23 10:54:14 +01:00
Audrius Butkevicius
f8f1c72b44 Merge pull request #1686 from calmh/major-upgrade-v11
Allow major upgrades
2015-04-23 09:31:58 +01:00
Jakob Borg
542716e216 Allow major upgrades 2015-04-23 17:13:11 +09:00
Jakob Borg
b35958d024 Avoid spurious request for /qr?text={{myID}} (fixes #1679) 2015-04-22 09:37:18 +09:00
Audrius Butkevicius
9ee3541655 Merge pull request #1673 from calmh/filestatus-json
Clean up REST JSON a little further
2015-04-21 17:11:06 +01:00
Jakob Borg
bf7d84c12a Clean up REST JSON a little further 2015-04-21 23:28:58 +09:00
Audrius Butkevicius
34c691087e Merge pull request #1674 from calmh/rc-upgrade
Loosen the requirements on what can be upgraded to what
2015-04-21 08:42:33 +01:00
Jakob Borg
08c383012f Loosen the requirements on what can be upgraded to what 2015-04-21 09:06:10 +09:00
Jakob Borg
e2420495f3 Fix type in device sort (fixes #1668) 2015-04-20 22:18:19 +09:00
Audrius Butkevicius
d530c5eda7 Merge pull request #1665 from calmh/wat
Don't initialize subscription in init()
2015-04-20 08:12:58 +01:00
Audrius Butkevicius
ef7420ecf6 Merge pull request #1666 from calmh/cpu-remind
Reminder in debug output to explain high CPU usage
2015-04-20 08:09:56 +01:00
Jakob Borg
c905a41e2a Reminder in debug output to explain high CPU usage 2015-04-20 14:29:38 +09:00
Jakob Borg
42ff4b5bf0 changelog.go should not be built 2015-04-20 14:03:50 +09:00
Jakob Borg
4fb74a32cc Don't initialize subscription in init()
By doing it init(), the monitor process also gets a subscription thing
running, which is unnecessary (and really confused me when seeing it in
the debug output).
2015-04-20 12:58:58 +09:00
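A sketch of the pattern implied here, with illustrative names and an assumed import path for the events package: the subscription is created lazily on first use instead of in init(), so a process that merely links the package (like the monitor process) never sets one up.

```go
var (
	subOnce sync.Once // assumes "sync" is imported
	sub     *events.Subscription
)

// subscription creates the event subscription on first use rather than in
// init(), sparing processes that never call it.
func subscription() *events.Subscription {
	subOnce.Do(func() {
		sub = events.Default.Subscribe(events.AllEvents)
	})
	return sub
}
```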
Jakob Borg
c741465328 Use versionString() in about modal (fixes #1663) 2015-04-20 08:23:59 +09:00
Jakob Borg
fbca537a40 Merge pull request #1655 from kamadak/fix-nil-deref
Fix nil pointer dereferences in REST with non-existent folders
2015-04-19 17:20:47 +09:00
Jakob Borg
83420b0199 Merge pull request #1654 from AudriusButkevicius/fixes
Fix capitalization (fixes #1652, fixes #1649)
2015-04-19 17:20:05 +09:00
KAMADA Ken'ichi
33d3ba1b45 Fix nil pointer dereferences in REST with non-existent folders 2015-04-18 22:41:47 +09:00
Audrius Butkevicius
497f85a236 Fix capitalization (fixes #1652, fixes #1649) 2015-04-18 11:23:21 +01:00
Audrius Butkevicius
a624c302ab Merge pull request #1648 from calmh/scanner-batches
Don't buffer large files a long time while scanning
2015-04-17 09:05:09 +01:00
Jakob Borg
cebe21a3af Don't buffer large files a long time while scanning 2015-04-17 16:40:09 +09:00
Audrius Butkevicius
9eb679d70a Merge pull request #1647 from calmh/fix-localindexupdated
Homogenize the LocalIndexUpdated event
2015-04-17 08:14:38 +01:00
Jakob Borg
6d84443db8 Homogenize the LocalIndexUpdated event
It had two different formats, and we use "items" instead of "numFiles"
in other places.

(Discovered while documenting :)
2015-04-17 14:22:06 +09:00
Jakob Borg
da8a1f242c Merge pull request #1646 from AudriusButkevicius/readonly
Make targets writeable before removal on Windows (fixes #1610)
2015-04-17 14:21:39 +09:00
Jakob Borg
946d98b71f Merge pull request #1645 from AudriusButkevicius/tests
Fix tests on Windows (fixes #1531)
2015-04-17 14:20:53 +09:00
Audrius Butkevicius
dff51fc707 Make targets writeable before removal on Windows (fixes #1610) 2015-04-16 22:53:53 +01:00
Audrius Butkevicius
7d954dd5d1 Fix tests on Windows (fixes #1531) 2015-04-16 21:18:17 +01:00
Jakob Borg
c6300a5da8 Tone down UPnP errors a little 2015-04-16 23:45:12 +09:00
Jakob Borg
9359daa0d9 Merge branch 'pr-1636'
* pr-1636:
  Store and use _localStorage object
  Fix localStorage feature detection
2015-04-16 23:44:59 +09:00
Jakob Borg
2322e9cff7 Store and use _localStorage object 2015-04-16 23:44:34 +09:00
Jakob Borg
a876e1e348 Merge remote-tracking branch 'syncthing/pr/1636' into pr-1636
* syncthing/pr/1636:
  Fix localStorage feature detection
2015-04-16 23:32:48 +09:00
Jakob Borg
6a863c8f71 Translation update 2015-04-16 23:27:27 +09:00
Jakob Borg
392b006b06 Add Moter8 2015-04-16 23:23:34 +09:00
Audrius Butkevicius
96289f42b7 Merge pull request #1644 from syncthing/timeout
UPnP refactor/fixes
2015-04-16 14:32:16 +01:00
Audrius Butkevicius
1b69c2441c Make UPnP discovery requests on each interface explicitly (fixes #1113) 2015-04-16 14:23:36 +01:00
Audrius Butkevicius
8ca85a4918 Merge pull request #1639 from calmh/events
Improve event handling a little bit.
2015-04-16 14:18:52 +01:00
Audrius Butkevicius
2a31031cbc Add unit suffix to UPnP settings 2015-04-16 10:32:22 +01:00
Audrius Butkevicius
d148cd8ccc Make UPnP timeout configurable 2015-04-16 10:32:12 +01:00
Jakob Borg
d1cc1828b8 Improve ItemStarted/ItemFinished events
 - Remove full details from ItemStarted (unnecessary, incorrect CamelCase)
 - Add "type" ("file" or "dir") to both events
 - Add "action" (what we tried to do - "delete" or "update") to both events.
2015-04-14 23:31:39 +09:00
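A hedged JSON sketch of an event after this change; field names other than "type" and "action" are assumptions for illustration:

```json
{
  "type": "ItemFinished",
  "data": {
    "folder": "default",
    "item": "docs/report.txt",
    "type": "file",
    "action": "update"
  }
}
```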
Jakob Borg
069e8cf122 Don't schedule summaries on all state changes
Prior to this change we schedule summaries on each state change, i.e.
scanning->idle and idle->scanning, which is unnecessary. Now we only do
it on index updates, plus the immediate one on going syncing->idle.
2015-04-14 20:57:42 +09:00
Audrius Butkevicius
45cbcaca6d Merge pull request #1638 from calmh/lstat
Work around broken Lstat on Android
2015-04-14 11:59:20 +01:00
Jakob Borg
102a2db1f3 Work around broken Lstat on Android 2015-04-14 19:53:49 +09:00
Sergey Mishin
9f81c85ca7 Fix localStorage feature detection 2015-04-13 19:07:39 +03:00
Audrius Butkevicius
ba4a6fc0c5 Merge pull request #1633 from calmh/errorstate
Move folder errors to state
2015-04-13 00:48:13 +01:00
Jakob Borg
aa803ce2ff Move folder errors to state
The "Invalid" config attribute is retained for errors discovered during
config loading (empty path, duplicate ID). This can only be set or
cleared at config loading time.

Errors discovered during runtime (I/O problems, etc) are now in the
folder state instead. Changes to these are sent as any other folder
state change.
2015-04-13 07:43:45 +09:00
Jakob Borg
a027a60f5d Correctly feature detect localStorage (fixes #1632) 2015-04-13 06:50:07 +09:00
Jakob Borg
270649535e Merge pull request #1625 from Moter8/patch-1
Reword and clarify some sentences.
2015-04-10 13:48:14 +02:00
Carsten H
cf80ba71f4 Reword and clarify some sentences 2015-04-10 13:46:38 +02:00
Jakob Borg
b74df18a4a Translation update 2015-04-10 13:32:23 +02:00
Jakob Borg
5cd2906a39 Fix NICKS and authors in index.html 2015-04-10 12:57:43 +02:00
Jakob Borg
bc37b69d17 Add ARM to GUI architectures, and fallback for unknowns 2015-04-10 12:45:53 +02:00
Francois-Xavier Gsell
94f6e400ad fix '~' completion in add folder build assets (fix #1478) 2015-04-10 15:42:52 +08:00
Francois-Xavier Gsell
b95a6ccf80 fix '~' completion in add folder (fix #1478) 2015-04-10 15:42:52 +08:00
Jakob Borg
7df9c1b6e4 Merge pull request #1621 from Zillode/fix-no-upgrade
Fix compilation of -noupgrade builds
2015-04-10 08:30:50 +02:00
Lode Hoste
75348c0158 Fix compilation of -noupgrade builds 2015-04-09 22:44:46 +02:00
Audrius Butkevicius
75fb14acaf Fix integration tests 2015-04-09 16:16:39 +01:00
Audrius Butkevicius
5350315b68 Merge pull request #1614 from calmh/new-short-id
Index reset should generate file conflicts (fixes #1613)
2015-04-09 13:48:37 +01:00
Audrius Butkevicius
658e39c270 Merge pull request #1618 from calmh/id-conflict
Check for short ID conflict at startup
2015-04-09 13:40:32 +01:00
Audrius Butkevicius
ef7ce6c7e1 Merge pull request #1619 from calmh/gui-version
GUI version string includes OS and Arch
2015-04-09 12:29:58 +01:00
Jakob Borg
509e2411bf Merge pull request #1616 from syncthing/rates
Fix total transfer rates (fixes #1615)
2015-04-09 13:08:49 +02:00
Jakob Borg
65c906f951 Merge pull request #1617 from syncthing/integ
Try capturing panics
2015-04-09 13:07:46 +02:00
Audrius Butkevicius
1f159e8233 Fix total transfer rates (fixes #1615) 2015-04-09 12:07:21 +01:00
Jakob Borg
936c76119d Index reset should generate file conflicts (fixes #1613) 2015-04-09 13:06:09 +02:00
Jakob Borg
f45865606a Add initial merge and reset conflict tests 2015-04-09 13:06:09 +02:00
Jakob Borg
cfc9776bae Check for short ID conflict at startup 2015-04-09 13:06:00 +02:00
Audrius Butkevicius
0cb7ed9e4e Try capturing panics 2015-04-09 11:49:02 +01:00
Jakob Borg
4b07609458 GUI version string includes OS and Arch
(Useful when debugging via screenshots...)
2015-04-09 11:33:24 +02:00
Audrius Butkevicius
e41e58e781 Merge pull request #1608 from calmh/xdr-update
Update XDR dependency (fixes #1606)
2015-04-08 14:33:53 +01:00
Jakob Borg
f5030f1c2c Update XDR dependency (fixes #1606) 2015-04-08 14:49:29 +02:00
Jakob Borg
2a48fb8e87 Merge pull request #1607 from syncthing/deadlock
Don't run deadlock detection in release mode unless asked to (fixes #1536)
2015-04-08 14:48:20 +02:00
Audrius Butkevicius
df6dbc5fa4 Only run deadlock detection if asked or non-release/beta (fixes #1536) 2015-04-08 13:40:05 +01:00
Jakob Borg
4b1d2839e8 Correct override PATH in test 2015-04-08 14:23:26 +02:00
Audrius Butkevicius
a892f80e86 Merge pull request #1590 from calmh/long-filenames
Handle long filenames on Windows (fixes #1295)
2015-04-08 13:12:25 +01:00
Jakob Borg
b2a79855ae Handle long filenames on Windows (fixes #1295) 2015-04-08 14:05:39 +02:00
Audrius Butkevicius
ff4974178a Merge pull request #1605 from calmh/http-trace
Add HTTP request tracing
2015-04-07 21:10:51 +01:00
Jakob Borg
d7100fd9bc Add HTTP request tracing 2015-04-07 21:52:47 +02:00
Jakob Borg
0bfb40ae51 discourse -> forum 2015-04-07 16:07:16 +02:00
Jakob Borg
11c83670d6 Merge pull request #1601 from syncthing/conns
Fix GUI
2015-04-07 15:35:54 +02:00
Audrius Butkevicius
68ff4f3842 Fix GUI 2015-04-07 14:24:34 +01:00
Jakob Borg
ab25cd09ed Merge pull request #1600 from syncthing/conns
Change /rest/system/connections output (fixes #1487)
2015-04-07 14:29:40 +02:00
Audrius Butkevicius
8f05b8f982 Change /rest/system/connections output (fixes #1487) 2015-04-07 13:21:03 +01:00
Jakob Borg
63ae2f64cf Woops: /rest/system/errors -> /rest/system/error 2015-04-07 13:46:39 +02:00
Jakob Borg
105103fae0 Woops: /rest/system/report -> /rest/svc/report 2015-04-07 13:33:37 +02:00
Jakob Borg
70f4792ab1 Translation update 2015-04-07 12:24:02 +02:00
Jakob Borg
defd9fa322 Merge pull request #1595 from calmh/rest-rework
Tidy up the REST interface URLs (fixes #1593)
2015-04-07 12:20:36 +02:00
Jakob Borg
e884d0fda6 Tidy up the REST interface URLs (fixes #1593) 2015-04-07 12:16:23 +02:00
Audrius Butkevicius
5f6a8fdc20 Merge pull request #1568 from calmh/override
Override needs to twiddle the version a bit more (fixes #1564)
2015-04-07 11:13:43 +01:00
Audrius Butkevicius
196a9ddbb0 Merge pull request #1599 from calmh/cleanup
Clean up config directory of old crap
2015-04-07 10:33:46 +01:00
Audrius Butkevicius
746140bd11 Merge pull request #1573 from calmh/beacon
Use a socket per interface for v6 multicast (fixes #1563)
2015-04-07 10:23:25 +01:00
Jakob Borg
7b99a5fbac Clean up config directory of old crap 2015-04-07 09:25:28 +02:00
Jakob Borg
b74c31e520 Only show Override button in idle state 2015-04-06 23:33:28 +02:00
Jakob Borg
221f43e4bd Use a socket per interface for v6 multicast (fixes #1563) 2015-04-06 20:55:50 +02:00
Jakob Borg
a17333d73e Override needs to twiddle the version a bit more (fixes #1564) 2015-04-06 20:55:40 +02:00
Jakob Borg
207b43499c Merge remote-tracking branch 'syncthing/pr/1577'
* syncthing/pr/1577:
  Add uptime in webgui (fixes #1501)

Conflicts:
	cmd/syncthing/gui.go
	internal/auto/gui.files.go
2015-04-06 20:53:32 +02:00
Jakob Borg
0c0de17b38 Merge pull request #1582 from ralder/webgui-enable-gzip
Enable gzip for static files for webgui
2015-04-06 08:33:57 +02:00
Sergey Mishin
77882e6086 Enable gzip encoding static files for webgui 2015-04-06 03:11:30 +03:00
Audrius Butkevicius
19a9834843 Merge pull request #1589 from calmh/copyconf
Copy configuration struct when sending Changed() events
2015-04-05 20:53:10 +01:00
Audrius Butkevicius
23dab30ca5 Merge pull request #1588 from calmh/dbcommitter
Use separate routine for database updates in puller
2015-04-05 20:43:46 +01:00
ralder
b5d7ce8ebe Add uptime in webgui (fixes #1501) 2015-04-05 22:37:55 +03:00
Jakob Borg
bf4eb4b269 Copy configuration struct when sending Changed() events
Avoids data race. Copy() must be called with lock held.
2015-04-05 21:07:15 +02:00
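A sketch of the race being avoided, with illustrative types and names: the copy is taken while the lock is held, and the event carries the copy, so later config mutations cannot race with event consumers.

```go
package config

import "sync"

// Configuration stands in for the real config struct; illustrative only.
type Configuration struct {
	Options map[string]string
}

// Copy returns a deep copy. It must be called with the owning lock held.
func (c Configuration) Copy() Configuration {
	opts := make(map[string]string, len(c.Options))
	for k, v := range c.Options {
		opts[k] = v
	}
	return Configuration{Options: opts}
}

type wrapper struct {
	mut sync.Mutex
	cfg Configuration
}

// notifyChanged emits a snapshot rather than the live struct.
func (w *wrapper) notifyChanged(emit func(Configuration)) {
	w.mut.Lock()
	snapshot := w.cfg.Copy()
	w.mut.Unlock()
	emit(snapshot)
}
```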
Jakob Borg
a5edb6807e The -reset option now only removes db 2015-04-05 17:40:58 +02:00
Jakob Borg
ecadf30fe7 model: Use separate db commit routine (fixes #1558) 2015-04-05 16:19:14 +02:00
Jakob Borg
515f0db5b4 Benchmark syncing many vs large files 2015-04-05 16:16:15 +02:00
Jakob Borg
c2f367cf70 Merge pull request #1585 from Zillode/extra-case-test
Test combination of prefixes (@benapetr)
2015-04-05 13:37:52 +02:00
Lode Hoste
f21dfea965 Test combination of prefixes (@benapetr) 2015-04-05 11:45:43 +02:00
Audrius Butkevicius
b84cad4db0 Merge pull request #1583 from calmh/configcleanup
Remove handling for old deprecated config versions
2015-04-05 01:31:53 +01:00
Jakob Borg
16ae019c8c Fix AUTHORS 2015-04-04 23:52:23 +02:00
Jakob Borg
bd2051febd Remove handling for old deprecated config versions 2015-04-04 23:37:47 +02:00
Audrius Butkevicius
6fb1e03ed4 Merge pull request #1576 from Zillode/reset-indexes
Update reset API to reflect new use cases.
2015-04-04 22:31:59 +01:00
Jakob Borg
8d41a762b6 Clean up translations, generate assets 2015-04-04 23:21:45 +02:00
Jakob Borg
6d3003716c Merge remote-tracking branch 'syncthing/pr/1553'
* syncthing/pr/1553:
  Add dzarda
  Added an "n", typo in a GUI string
2015-04-04 23:20:48 +02:00
Marc Laporte
2c87c3bac3 Fix some typos 2015-04-04 23:15:07 +02:00
Jakob Borg
739c525a98 model: TestIgnores should not randomly fail 2015-04-04 22:55:24 +02:00
Lode Hoste
ab287ebf40 Update reset API to reflect new use cases.
/rest/reset clears the entire Syncthing DB and restarts the program
/rest/reset?folder=default clears the indexes of the default folder
2015-04-04 22:45:11 +02:00
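A hedged usage sketch of the two endpoints described; the host, port, and API key value are assumptions, and X-API-Key is the usual Syncthing auth header:

```go
package main

import "net/http"

// reset POSTs to a reset endpoint with the API key header set.
func reset(url, apiKey string) error {
	req, err := http.NewRequest("POST", url, nil)
	if err != nil {
		return err
	}
	req.Header.Set("X-API-Key", apiKey)
	_, err = http.DefaultClient.Do(req)
	return err
}

func main() {
	_ = reset("http://localhost:8384/rest/reset", "secret")                // whole DB
	_ = reset("http://localhost:8384/rest/reset?folder=default", "secret") // one folder
}
```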
Jakob Borg
ab6bcab78a fnmatch: Test should pass on Mac/windows 2015-04-04 22:05:15 +02:00
Jakob Borg
6e317896e9 golint: FNM_FOOBAR -> fnmatch.FooBar 2015-04-04 22:03:03 +02:00
Jakob Borg
b08ee3ff81 golint: locHttps -> locHTTPS 2015-04-04 21:59:54 +02:00
Jakob Borg
17fd09102e go vet: t.Errorf -> t.Error 2015-04-04 21:58:21 +02:00
Jakob Borg
a598cd2b18 Auto generate author list in gui/index.html 2015-04-04 10:09:42 +02:00
Jakob Borg
55e434d67a Allow non-word characters at the end of commit messages 2015-04-04 09:50:59 +02:00
Jakob Borg
04d4b5d8a0 NICKS 2015-04-04 09:47:54 +02:00
Jakob Borg
7ea00bcb78 Merge pull request #1570 from ralder/select-language-webgui
Add language select menu in webgui (fixes #981)
2015-04-04 09:47:08 +02:00
ralder
0ab56ffde8 Add ralder 2015-04-04 03:27:02 +03:00
ralder
e7e945533e Add language select menu in webgui (fixes #981) 2015-04-04 02:57:07 +03:00
Jakob Borg
7fd1047832 Merge pull request #1578 from Zillode/fix-config-windows
Expand locations during initialisation (fixes #1575).
2015-04-03 22:04:08 +02:00
Lode Hoste
19dfa88258 Expand locations during initialisation (fixes #1575). 2015-04-03 21:57:19 +02:00
Jakob Borg
65923b5c20 The summary event service should send summary events 2015-04-02 10:07:54 +02:00
Jaroslav Malec
ac731aa50c Add dzarda 2015-04-01 17:12:56 +02:00
Audrius Butkevicius
2aa3182476 Merge pull request #1539 from calmh/locations
Move index to index-v0.11.0.db (new format) and centralize location config
2015-04-01 13:18:49 +01:00
Audrius Butkevicius
529c386943 Merge pull request #1529 from calmh/modeldata
Push model data instead of pull (fixes #1434)
2015-04-01 13:10:44 +01:00
Jakob Borg
b659da8a4b Merge pull request #1548 from Zillode/case-insensitive-ignores
Support case-insensitive ignores (fixes #1511).
2015-04-01 13:46:59 +02:00
Jakob Borg
eba98717c9 Dependency update 2015-04-01 11:58:35 +02:00
Jakob Borg
e4dba99cc0 Immediately recalculate summary when folder state changes syncing->idle 2015-04-01 11:58:27 +02:00
Jakob Borg
454e688c3d Push model data instead of pull (fixes #1434) 2015-04-01 11:46:30 +02:00
Jakob Borg
54752deaa1 Move index to index-v0.11.0.db (new format) and centralize location config 2015-04-01 11:30:28 +02:00
Jakob Borg
a3cf37cb2e Refactor and improve integration tests 2015-04-01 11:13:19 +02:00
Jakob Borg
6459d11d32 Merge pull request #1378 from Zillode/draft-upgrade
Do not consider draft releases or releases with empty assets
2015-04-01 11:03:13 +02:00
Jaroslav Malec
15f2fabaaf Added an "n", typo in a GUI string 2015-03-31 19:27:25 +02:00
Lode Hoste
d6030b8d68 Only consider relevant releases (fixes #1285). 2015-03-31 10:22:28 +02:00
Audrius Butkevicius
e1757ee726 Fix test 2015-03-30 22:49:16 +01:00
Lode Hoste
9fed75d59c Support case-insensitive ignores (fixes #1511). 2015-03-30 22:41:12 +02:00
Audrius Butkevicius
5fe15475a4 Merge pull request #1540 from calmh/conflicts
Handle conflicts when pulling (fixes #220)
2015-03-30 21:20:23 +01:00
Jakob Borg
c7f6f4f48d Merge pull request #1546 from Zillode/fix-reset-dir
Fix -reset folder target
2015-03-30 21:57:09 +02:00
Lode Hoste
747c6c2714 Fix -reset folder target 2015-03-30 17:07:31 +02:00
Jakob Borg
2951f128f6 Translation update for external versioner 2015-03-30 14:15:17 +02:00
Jakob Borg
53cb66eeaf Merge pull request #1537 from syncthing/marker
More graceful handling of folder errors (fixes #762)
2015-03-30 09:30:15 +02:00
Jakob Borg
bcf8f798e2 Merge pull request #1543 from rumpelsepp/systemd
systemd: Fix error code definitions to prevent failed units
2015-03-30 08:37:08 +02:00
Audrius Butkevicius
7406176fad More graceful handling of folder errors (fixes #762)
Checks health before accepting every scanner batch, and recovers
from errors without having to restart.
2015-03-30 08:27:12 +02:00
Jakob Borg
34ba5678c3 Allowed integration test time 60m -> 90m 2015-03-30 08:26:31 +02:00
Stefan Tatschner
da0b78c67a systemd: Fix error code definitions to prevent failed units
The systemd exit code definitions (introduced in c586a17, refs #1324) caused
problems. By default systemd considers a return code != 0 as failed. So when
you click on "Restart" in the WebUI an error appears:

  systemd[953]: syncthing.service: main process exited, code=exited, status=3/NOTIMPLEMENTED
  systemd[953]: Unit syncthing.service entered failed state.
  systemd[953]: syncthing.service failed.
  systemd[953]: syncthing.service holdoff time over, scheduling restart.
  systemd[953]: Started Syncthing - Open Source Continuous File Synchronization.
  systemd[953]: Starting Syncthing - Open Source Continuous File Synchronization...
  syncthing[13222]: [LFKUK] INFO: syncthing v0.10.30 (go1.4.2 linux-amd64 default) builduser@jara 2015-03-29 07:46:44 UTC

To fix this error we have to add the success codes 2, 3, 4 to
"SuccessExitStatus":

  syncthing[13006]: [LFKUK] INFO: Restarting
  syncthing[13006]: [LFKUK] OK: Exiting
  systemd[953]: syncthing.service holdoff time over, scheduling restart.
  systemd[953]: Started Syncthing - Open Source Continuous File Synchronization.
  systemd[953]: Starting Syncthing - Open Source Continuous File Synchronization...
  syncthing[13031]: [LFKUK] INFO: syncthing v0.10.30 (go1.4.2 linux-amd64 default) builduser@jara 2015-03-29 07:46:44 UTC

To make sure that syncthing restarts in case of error, and that
"Restart=on-failure" actually works, let's remove
"RestartPreventExitStatus=1". Systemd considers exit status 1 an error
by default, so the restart will be triggered as expected.
2015-03-30 00:51:06 +02:00
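The resulting unit-file fragment, sketched from the directives named in the commit message (the rest of the unit is omitted):

```ini
[Service]
# Exit codes 2, 3 and 4 are normal syncthing restart/upgrade exits,
# not failures, so tell systemd to treat them as success.
SuccessExitStatus=2 3 4
# RestartPreventExitStatus=1 is removed: status 1 is already a failure
# by default, so Restart=on-failure triggers as intended.
Restart=on-failure
```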
Jakob Borg
47e64ae503 Handle conflicts when pulling (fixes #220) 2015-03-30 00:01:52 +02:00
Audrius Butkevicius
520bb74626 Merge pull request #1541 from calmh/no-gctweak
Remove default GC tweak
2015-03-29 20:01:45 +01:00
Jakob Borg
4beef5cc66 Remove default GC tweak
This reverts the GC behavior to the Go default of triggering GC when the
heap has grown 100% compared to after the previous GC. We were setting
this to 25% to keep memory usage at a minimum, but it has a pretty
severe performance cost (especially when syncing large files) as we keep
triggering GC too often.

This documents the tweak in the `-help` message so users can decide for
themselves, and sticks to whatever the Go runtime developers thinks is
best for the default.
2015-03-29 19:08:22 +02:00
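For context, a sketch of the kind of tweak being reverted, assuming it was applied via runtime/debug (SetGCPercent is the programmatic equivalent of the GOGC environment variable):

```go
package main

import "runtime/debug"

func main() {
	// The removed tweak: collect when the heap grows 25% instead of the
	// default 100%, trading CPU time for lower memory use.
	debug.SetGCPercent(25)
	// ... start the rest of the program ...
}
```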
Jakob Borg
ba575f55ec Merge pull request #1530 from Zillode/multi-scan
Support multiple scan query strings at the same time
2015-03-29 16:02:45 +02:00
Lode Hoste
2012ce02e8 Support multiple scan query strings at the same time 2015-03-28 22:40:13 +01:00
Jakob Borg
c67e2c2a5a Merge pull request #1528 from Zillode/change-gui-port
Change (default) GUI port from 8080 to 8384 ('ST' in ASCII)
2015-03-27 14:08:03 +01:00
Jakob Borg
4b1ce250c1 Merge pull request #1527 from AudriusButkevicius/protochanges
Cherry-picks
2015-03-27 13:44:03 +01:00
Audrius Butkevicius
0401a07507 Change existingBlocks map type 2015-03-26 22:04:34 +00:00
Audrius Butkevicius
c12265499a Response errors may be protocol defined errors 2015-03-26 22:04:34 +00:00
Audrius Butkevicius
0e341832e0 Handle unknown flags at the model 2015-03-26 22:04:34 +00:00
Audrius Butkevicius
489e2e6ad5 Update Model function signatures 2015-03-26 22:04:33 +00:00
Audrius Butkevicius
941f637bca Update protocol package 2015-03-26 22:04:28 +00:00
Audrius Butkevicius
fc0cb704f2 Merge pull request #1523 from calmh/vv
Implement version vectors
2015-03-26 20:46:02 +00:00
Lode Hoste
960c0cbddf Change (default) GUI port from 8080 to 8384 ('ST' in ASCII) 2015-03-26 21:36:06 +01:00
Audrius Butkevicius
9f67d86b30 Merge pull request #1526 from calmh/minreconnect
Don't allow arbitrarily short reconnection intervals (fixes #1524)
2015-03-26 13:01:29 +00:00
Jakob Borg
66f7d83baa Don't allow arbitrarily short reconnection intervals (fixes #1524) 2015-03-26 13:57:57 +01:00
Jakob Borg
b44e87c6e8 Silence go vet composites warning
https://code.google.com/p/go/issues/detail?id=6820
2015-03-25 23:18:43 +01:00
Jakob Borg
6da7f17c4a Implement version vectors 2015-03-25 23:10:34 +01:00
Jakob Borg
b4f45d1e79 Update tests for version vectors 2015-03-25 23:10:33 +01:00
Jakob Borg
7d766bf7c7 Update protocol 2015-03-25 22:52:43 +01:00
Jakob Borg
75dc7e6671 Assets 2015-03-25 22:08:16 +01:00
Audrius Butkevicius
3fd887fc57 Merge pull request #1518 from calmh/negcache
Add negative cache time to global discovery
2015-03-25 21:06:15 +00:00
Audrius Butkevicius
128447a681 Merge pull request #1520 from calmh/flags
Add flags and options for future extensibility (fixes #1027)
2015-03-25 21:04:47 +00:00
Jakob Borg
23bae932c7 Add flags and options for future extensibility (fixes #1027) 2015-03-25 21:21:41 +01:00
Jakob Borg
9701998f82 Add negative cache time to global discovery
This reduces the amount of external queries by not repeating a query for
a given address if we have failed within the last three minutes.
2015-03-25 16:55:42 +01:00
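A minimal sketch of such a negative cache, with illustrative names; the three-minute window comes from the commit message:

```go
package discover

import (
	"sync"
	"time"
)

const negCacheCutoff = 3 * time.Minute

type negCache struct {
	mut    sync.Mutex
	failed map[string]time.Time
}

func newNegCache() *negCache {
	return &negCache{failed: make(map[string]time.Time)}
}

// shouldQuery reports whether addr is safe to query again, i.e. it has
// not failed within the cutoff window.
func (c *negCache) shouldQuery(addr string) bool {
	c.mut.Lock()
	defer c.mut.Unlock()
	t, ok := c.failed[addr]
	return !ok || time.Since(t) >= negCacheCutoff
}

// markFailed records a failed lookup for addr.
func (c *negCache) markFailed(addr string) {
	c.mut.Lock()
	c.failed[addr] = time.Now()
	c.mut.Unlock()
}
```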
Jakob Borg
0289c50ad9 Changelog.go needs a copyright header 2015-03-25 08:55:10 +01:00
Jakob Borg
50490f5b26 Merge pull request #1514 from kamadak/name
List the given name first for consistency with others
2015-03-24 21:10:12 +01:00
Jakob Borg
d12f802027 Merge pull request #1513 from kamadak/dir-perm
Preserve the permission of a newly created directory
2015-03-24 21:09:48 +01:00
KAMADA Ken'ichi
ac7097b4d0 Preserve the permission of a newly created directory
We need an explicit chmod() when creating a new directory.
Otherwise a new directory may be created with a different permission
from the one received from an originating device, because the umask
is applied to the mode given to mkdir().
The incorrect permission is later sent back to the originating device
and the original permission will be lost.
2015-03-23 22:39:16 +09:00
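The fix in miniature, as a sketch with an illustrative helper name: mkdir(2) applies the process umask to the requested mode, so an explicit chmod afterwards yields exactly the mode received from the originating device.

```go
package osutil

import "os"

// mkdirWithMode creates dir with exactly mode, defeating the umask that
// os.Mkdir would otherwise apply to the permission bits.
func mkdirWithMode(dir string, mode os.FileMode) error {
	if err := os.Mkdir(dir, mode); err != nil {
		return err
	}
	return os.Chmod(dir, mode)
}
```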
KAMADA Ken'ichi
66087e4332 List the given name first for consistency with others 2015-03-23 21:38:07 +09:00
Jakob Borg
3706f9bcb8 Merge pull request #1510 from AudriusButkevicius/location
Configure location provider
2015-03-22 18:38:33 +01:00
Audrius Butkevicius
c505218896 Configure location provider 2015-03-22 15:36:40 +00:00
Jakob Borg
6186a746e0 Rewrite changelog.sh in Go 2015-03-22 16:14:52 +01:00
Jakob Borg
3e98bae5ec Fix changelog.sh for Linux 2015-03-22 15:45:40 +01:00
Audrius Butkevicius
2e1e8f764e Fix crash on walker error (fixes #1507) 2015-03-22 14:28:14 +00:00
Jakob Borg
a7492f8612 Send correct MIME type for SVG images (fixes #1506) 2015-03-22 12:57:16 +01:00
Jakob Borg
123b1f01e4 Translation update 2015-03-22 11:33:10 +01:00
Jakob Borg
865f62e3eb Merge pull request #1505 from rumpelsepp/systemd
systemd: Set -logflags to 0, provide -no-browser flag
2015-03-21 23:28:54 +01:00
Stefan Tatschner
3ea93f52ee systemd: Set -logflags to 0, provide -no-browser flag
Syncthing should not try to start a browser when invoked by systemd.
Furthermore we do not need any timestamps in the journal as systemd
already handles this for us.
2015-03-21 20:55:29 +01:00
Audrius Butkevicius
b53e545ebc Merge pull request #1502 from calmh/defaults
Set defaults correctly for autoNormalize
2015-03-21 14:44:46 +00:00
Jakob Borg
157a4c891c Update integration test configs to v10 2015-03-21 15:40:00 +01:00
Jakob Borg
ad9ea07309 Set defaults correctly for autoNormalize
The default:"foo" struct tags aren't actually used for folder configs.
2015-03-21 15:33:31 +01:00
Jakob Borg
fc483cdfc6 Assets 2015-03-20 09:35:02 +01:00
Jakob Borg
9033838cf2 Merge pull request #1492 from alex2108/fix-gui
use lowerCamelCase for file versioning display
2015-03-20 09:34:48 +01:00
Jakob Borg
8e5d2d5905 Merge pull request #1491 from alex2108/master
use Lstat instead of Stat to prevent errors with symlinks
2015-03-20 09:34:10 +01:00
Alexander Graf
18aa66dabb use lowerCamelCase for file versioning display 2015-03-19 18:16:48 +01:00
Alexander Graf
a2f7b78453 use Lstat instead of Stat to prevent errors with symlinks 2015-03-19 17:36:15 +01:00
Jakob Borg
5c026cbe1d Merge pull request #1490 from syncthing/unspecified
Skip unspecified IPs
2015-03-19 14:02:47 +01:00
Audrius Butkevicius
dc51476897 Skip unspecified IPs 2015-03-19 12:44:38 +00:00
Jakob Borg
39eaa577e0 Merge pull request #1489 from syncthing/lans
Print LANs on startup
2015-03-19 13:09:59 +01:00
Jakob Borg
1f006481ee Merge pull request #1484 from alex2108/master
Add external versioner (ref #573)
2015-03-19 13:08:54 +01:00
Audrius Butkevicius
60faabcbe2 Print LANs on startup 2015-03-19 11:07:20 +00:00
Alexander Graf
d3f1eaf1a3 Add external versioner (ref #573) 2015-03-19 11:31:21 +01:00
Audrius Butkevicius
f568e76fd4 Merge pull request #1488 from calmh/utf8
Automatically fix file name normalization errors (fixes #430)
2015-03-19 08:33:08 +00:00
Jakob Borg
e947223aaa Decide once and for all to return filepath.SkipDir or nil 2015-03-19 07:46:13 +01:00
Jakob Borg
8311162be3 Automatically fix file name normalization errors (fixes #430) 2015-03-19 00:21:48 +01:00
Jakob Borg
75523556e8 Use SVG format logos 2015-03-18 12:51:23 +01:00
Audrius Butkevicius
c82b5d4982 Merge pull request #1474 from calmh/refactor-states
Refactor state handling
2015-03-17 19:09:48 +00:00
Jakob Borg
1c3158099c Rename files to match type names 2015-03-17 19:37:06 +01:00
Jakob Borg
bdbca75dfa Refactor state tracking (...)
Move state tracking into the puller/scanner objects. This is a first
step towards resolving #1391.

Rename Puller and Scanner to roFolder and rwFolder as they have more
duties than just pulling and scanning, and don't need to be exported.
2015-03-17 19:37:06 +01:00
Audrius Butkevicius
124b189cc0 Rebuild assets 2015-03-17 17:58:19 +00:00
Audrius Butkevicius
de38b46392 Fix build 2015-03-17 17:54:25 +00:00
Jakob Borg
e1975644d6 Add /rest/filestatus 2015-03-17 17:51:50 +00:00
Jakob Borg
d9fd27a9e8 Merge pull request #1421 from syncthing/mpl
Relicense to MPLv2
2015-03-17 16:42:33 +01:00
Jakob Borg
32425c5561 MPLv2 2015-03-17 16:02:27 +01:00
Johan Vromans
8d20923881 Suppress 'Last File Received' if a node is folder master (fixes #1472) 2015-03-17 08:44:17 +01:00
Jakob Borg
3a35b8b26c Add sciurius 2015-03-17 08:42:27 +01:00
Jakob Borg
36c93b755a Merge pull request #1465 from pascalj/lowercase-api
Use lowerCamelCase for the JSON API (fixes #1338)
2015-03-16 21:43:34 +01:00
Jakob Borg
ea8c3debea Merge pull request #1470 from syncthing/silence
Silence warnings (ref #1388)
2015-03-16 12:01:29 +01:00
Audrius Butkevicius
b2425b2a25 Silence warnings (ref #1388) 2015-03-16 10:47:59 +00:00
Pascal Jungblut
49bc74e7a0 Use lowerCamelCase for the JSON API (fixes #1338)
Replace the current mix of UpperCamelCase and lowerCamelCase with
consistent lowerCamelCase keys for the JSON API. Also adapt the frontend
so it works with the changed API.

Attention: this will break existing consumers of the API.
2015-03-16 10:05:01 +01:00
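Illustrative Go struct tags for the convention this commit establishes; the fields shown are examples, not the full API schema:

```go
package api

import "time"

// Explicit lowerCamelCase JSON keys replace Go's default of using the
// UpperCamelCase field names verbatim.
type connectionStats struct {
	At            time.Time `json:"at"`            // was "At"
	InBytesTotal  int64     `json:"inBytesTotal"`  // was "InBytesTotal"
	OutBytesTotal int64     `json:"outBytesTotal"` // was "OutBytesTotal"
}
```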
Jakob Borg
51c932164f bep/1.0 negotiation can't be a hard error. 2015-03-15 17:49:47 +01:00
Jakob Borg
19e82e93b1 Translation update 2015-03-15 16:42:52 +01:00
Jakob Borg
3f785eaecf Merge pull request #1466 from kamadak/win-w-bits
Do not send group/others-writable bits from Windows.
2015-03-15 16:35:13 +01:00
Jakob Borg
e59c0f38d9 Also build darwin/386 2015-03-15 16:23:45 +01:00
Jakob Borg
64004c6bc0 Alternate email for pascalj 2015-03-15 16:02:34 +01:00
Jakob Borg
422332de7e Add kamadak 2015-03-15 15:19:17 +01:00
Jakob Borg
5a15ba7451 Add pascalj 2015-03-15 15:17:35 +01:00
Jakob Borg
d3686bb1e2 Add kilburn 2015-03-15 15:17:26 +01:00
KAMADA Ken'ichi
3a6eeef580 Do not send group/others-writable bits from Windows.
There is no user/group/others in Windows' read-only attribute,
and all "w" bits are set in os.FileInfo if the file is not read-only.
Do not send these group/others-writable bits to other devices
in order to avoid unexpected world-writable files on other platforms.
2015-03-15 22:14:44 +09:00
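A sketch of the masking described, with an illustrative function name: Windows' read-only attribute has no user/group/others granularity, so the group- and others-writable bits are cleared before the mode is announced.

```go
package scanner

import (
	"os"
	"runtime"
)

// sanitizeMode clears group/others write bits on Windows, where
// os.FileInfo reports all "w" bits set for any non-read-only file.
func sanitizeMode(mode os.FileMode) os.FileMode {
	if runtime.GOOS == "windows" {
		return mode &^ 0022
	}
	return mode
}
```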
Audrius Butkevicius
1dc5c6b8a8 Merge pull request #1457 from calmh/relverv3
Guessing version from directory name is not viable in Go (ref #1449)
2015-03-13 10:42:27 +00:00
Jakob Borg
3532a560d8 Guessing version from directory name is not viable in Go (ref #1449) 2015-03-13 10:10:13 +01:00
Audrius Butkevicius
d2d894d808 Merge pull request #1451 from calmh/relverv2
Get version from RELEASE file if it exists, or guess from directory (fixes #1449)
2015-03-12 10:38:39 +00:00
Jakob Borg
2aa38bfc4b Get version from RELEASE file if it exists, or guess from directory (fixes #1449) 2015-03-12 11:18:23 +01:00
Jakob Borg
9c3cee9ae4 Translation update 2015-03-11 21:15:55 +01:00
Jakob Borg
fc521b5f9d Protocol dep update 2015-03-11 21:13:17 +01:00
Audrius Butkevicius
5253368acc Merge pull request #1448 from calmh/xp
Fall back to %AppData% if %LocalAppData% is blank (fixes #1446)
2015-03-11 20:06:13 +00:00
Jakob Borg
51cfc3d4be Fall back to %AppData% if %LocalAppData% is blank (fixes #1446) 2015-03-11 21:04:10 +01:00
Audrius Butkevicius
80bffd93e7 Merge pull request #1447 from calmh/silencio
Don't yell about discovery listening and resolving (ref #1418)
2015-03-11 20:03:37 +00:00
Jakob Borg
df4f22e899 Don't yell about discovery listening and resolving (ref #1418) 2015-03-11 20:57:20 +01:00
Audrius Butkevicius
8cc70843a5 Merge pull request #1445 from calmh/compression
Compress only metadata by default (fixes #1374)
2015-03-11 19:35:15 +00:00
Jakob Borg
70c841f23a Compress only metadata by default (fixes #1374) 2015-03-11 19:10:57 +01:00
Jakob Borg
7b22e09805 Merge pull request #1438 from moshen/runit-reparenting
Fix syncthing process reparenting with runit
2015-03-10 08:21:41 +01:00
Jakob Borg
c5838c143c Merge pull request #1425 from AudriusButkevicius/laan
Allow not to limit bandwidth in LAN (fixes #1336)
2015-03-10 08:19:29 +01:00
Colin Kennedy
eaf71db7c9 Fix syncthing process reparenting with runit
When you `sudo sv down /etc/service/syncthing/`, the `TERM` signal
isn't propagated or trapped, so syncthing is orphaned and adopted by
init (PID 1).

- Changed call to `chpst` to `exec`
- Moved logging to `log/run` per `runsv` standard
2015-03-10 01:01:52 -05:00
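A hedged sketch of the corrected runit run script (paths and user are placeholders): exec replaces the supervising shell, so runit's TERM reaches syncthing itself instead of orphaning it to PID 1.

```sh
#!/bin/sh
# exec replaces this shell with chpst/syncthing, so `sv down` delivers
# TERM to the service rather than leaving an orphaned child.
exec chpst -u syncthing /usr/local/bin/syncthing -no-browser 2>&1
```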
Jakob Borg
b322b527b3 Add SVG versions of logo 2015-03-10 00:02:46 +01:00
Jakob Borg
cc4b231875 Merge pull request #1429 from moshen/runit-fix
Use chpst instead of djb's setuidgid
2015-03-09 23:50:08 +01:00
Jakob Borg
05642a3e17 Merge remote-tracking branch 'syncthing/pr/1436'
* syncthing/pr/1436:
  Remove red if we managed to report to at least one discovery server (fixes #1427)

Conflicts:
	internal/auto/gui.files.go
2015-03-09 23:47:41 +01:00
Jakob Borg
7dcc6bb579 Add moshen 2015-03-09 23:45:03 +01:00
Audrius Butkevicius
f15c416e59 Remove red if we managed to report to at least one discovery server (fixes #1427) 2015-03-09 21:55:14 +00:00
Audrius Butkevicius
6fa97eeec7 Allow not to limit bandwidth in LAN (fixes #1336) 2015-03-09 20:54:33 +00:00
Jakob Borg
03bbf273b3 Update protocol dep 2015-03-09 21:21:50 +01:00
Audrius Butkevicius
1e376cd3a6 Update assets 2015-03-09 12:13:43 +00:00
Audrius Butkevicius
49cf939c04 Add missing translation strings (fixes #1430) 2015-03-09 12:02:30 +00:00
Colin Kennedy
338394f8c3 Use chpst instead of djb's setuidgid
Some distros (Ubuntu, Debian?) don't link `chpst` to `setuidgid`, as it
could conflict with a djb daemontools installation. If daemontools isn't
going to be referenced in the README, then the example runit config
should reference the runit-packaged utility.
2015-03-08 18:50:54 -05:00
Audrius Butkevicius
575b62d77b Merge pull request #1424 from AudriusButkevicius/scanner
Make sure we start scanning at an indexed location (fixes #1399)
2015-03-08 19:46:21 +00:00
Audrius Butkevicius
57fc0eb5b1 Make sure we start scanning at an indexed location (fixes #1399) 2015-03-08 19:45:47 +00:00
Jakob Borg
3a19fe3663 Merge pull request #1423 from AudriusButkevicius/warn
Silence discovery warnings when v6 not available
2015-03-08 19:50:35 +01:00
Audrius Butkevicius
0c049179b4 Silence discovery warnings when v6 not available (fixes #1418) 2015-03-08 16:49:12 +00:00
Jakob Borg
e22c873ec4 Repair integration tests 2015-03-08 08:41:43 +01:00
Jakob Borg
d644ebab09 Translation update 2015-03-08 07:52:52 +01:00
Jakob Borg
2fdc578a88 Update goleveldb (fixes #1414) 2015-03-08 07:49:02 +01:00
Jakob Borg
aeb3a3f7b5 Handle crap at the end of commit subjects 2015-03-07 22:50:54 +01:00
Audrius Butkevicius
044b7ce070 Merge pull request #1415 from calmh/announce-v6
Add global announce server on IPv6
2015-03-07 21:39:53 +00:00
Jakob Borg
815e538f10 Merge pull request #1401 from Zillode/fix-chmod-android
Fix chmod android
2015-03-07 21:42:44 +01:00
Lode Hoste
758233f001 Do not error when chmod fails while permissions are ignored (fixes #1404). 2015-03-07 21:38:16 +01:00
Jakob Borg
f4f4fda520 String slice uniquification must return a well-defined order, or tests fail 2015-03-07 21:05:30 +01:00
Jakob Borg
1d77aeb69c Add global announce server on IPv6 2015-03-07 21:01:20 +01:00
Jakob Borg
29dbfc647d Add bencurthoys 2015-03-07 14:41:18 +01:00
Jakob Borg
55d9514e83 Merge pull request #1407 from bencurthoys/master
Windows Service Setup.exe
2015-03-07 14:38:43 +01:00
Jakob Borg
46bd7956a3 Merge branch 'pr-1410'
* pr-1410:
  Add test for osutil.InWritableDir
  Exit and error if the target is not a directory
2015-03-07 14:35:54 +01:00
Jakob Borg
e1ee394c26 Add test for osutil.InWritableDir 2015-03-07 14:35:29 +01:00
Lode Hoste
19884ade99 Exit and error if the target is not a directory 2015-03-06 22:02:29 +01:00
bencurthoys
f0a88061db Setup.exe for Windows
NSIS script to build a setup file that installs Syncthing and sets it up as a service.
2015-03-06 15:30:50 +00:00
Audrius Butkevicius
6057138466 Merge pull request #1400 from calmh/negproto
Verify negotiated protocol bep/1.0
2015-03-05 15:16:59 +00:00
Jakob Borg
aaaa6556f3 Some commentary on the initial connection checks 2015-03-05 16:09:20 +01:00
Jakob Borg
4745431cda Verify negotiated protocol bep/1.0 2015-03-05 15:58:16 +01:00
Jakob Borg
0455a948a9 Merge pull request #1337 from AudriusButkevicius/fileslevels
Add /rest/tree API call
2015-03-05 15:24:51 +01:00
Audrius Butkevicius
bf3e249237 Add GlobalDirectoryTree benchmarks 2015-03-04 23:39:33 +00:00
Audrius Butkevicius
fb649e9525 Fix benchmarks, cleanup tests 2015-03-04 23:39:32 +00:00
Audrius Butkevicius
9d1e2d9f46 Add /rest/tree API call 2015-03-04 23:39:27 +00:00
Audrius Butkevicius
9876d93b60 Fix tests on Windows while running as a simple user 2015-03-04 22:39:33 +00:00
Jakob Borg
b3dd05580b Don't follow the prototype chain when looking for a folder name (fixes #1387) 2015-03-01 22:10:34 +01:00
Jakob Borg
32847f33fd Translation update 2015-03-01 21:51:56 +01:00
Jakob Borg
bff9723fe3 Merge branch 'fix-1373'
* fix-1373:
  fixup alterFiles
  Ensure progress when delete-by-rename fails (fixes #1373)
  Handle weird Lstat() returns for disappeared items (ref #1373)
  Alter files into directories and the other way around
2015-03-01 21:51:26 +01:00
Jakob Borg
d114648c16 fixup alterFiles 2015-03-01 21:38:04 +01:00
Jakob Borg
44d0da02d0 Ensure progress when delete-by-rename fails (fixes #1373) 2015-03-01 10:55:48 +01:00
Jakob Borg
c25107eff3 Handle weird Lstat() returns for disappeared items (ref #1373) 2015-03-01 10:55:43 +01:00
Jakob Borg
af5c36d2a8 Merge remote-tracking branch 'syncthing/pr/1373' into fix-1373
* syncthing/pr/1373:
  Alter files into directories and the other way around
2015-03-01 10:55:34 +01:00
Audrius Butkevicius
0828a67145 Merge pull request #1275 from calmh/kv-cache
Namespaced key value store and value cache
2015-02-26 13:51:43 +00:00
Jakob Borg
617fb84983 Also build Solaris, NetBSD by default 2015-02-26 12:21:20 +01:00
Jakob Borg
6f8ac2b61c Refactor: add and use db.NamespacedKV 2015-02-26 09:56:11 +01:00
Jakob Borg
6f2b4b96cf Refactor: use leveldb/util.BytesPrefix 2015-02-26 09:56:11 +01:00
Jakob Borg
3cc288a169 Build binary packages for Dragonfly 2015-02-26 08:58:58 +01:00
Jakob Borg
0bbbf3eb3b Compile on Dragonfly 2015-02-26 08:42:39 +01:00
Jakob Borg
4b1b56fee8 Reduce CPU usage (fixes #1376) 2015-02-25 23:30:24 +01:00
Jakob Borg
154fc59e93 Switch back to original kardianos/osext 2015-02-24 20:44:49 +01:00
Lode Hoste
218c4c128c Alter files into directories and the other way around 2015-02-23 12:12:31 +01:00
Audrius Butkevicius
53f1af0cab Merge pull request #1375 from calmh/fix-987
Attempt recovery of corrupted DB at startup (fixes #987)
2015-02-23 09:17:29 +00:00
Jakob Borg
f9577a38dc Attempt recovery of corrupted DB at startup (fixes #987) 2015-02-23 08:22:39 +01:00
Jakob Borg
fadc7d9ba5 Merge pull request #1366 from krozycki/master
All folder panels collapsed, fixes #1034
2015-02-20 10:18:50 +01:00
Jakob Borg
1e4b2133f6 Also handle ()| in glob patterns (fixes #1365) 2015-02-20 10:12:06 +01:00
Karol Różycki
bfefa6d016 All folder panels collapsed, fixes #1034 2015-02-19 15:48:43 +01:00
Jakob Borg
8b66472949 Fix sync benchmark for latest test changes 2015-02-19 13:15:51 +02:00
Jakob Borg
3b3aa94c4e Refactor out connection related functions to a separate file 2015-02-19 12:05:26 +02:00
Audrius Butkevicius
dc05275670 Merge pull request #1357 from calmh/truncate-v3
Simplify FileInfoTruncated
2015-02-19 10:04:53 +00:00
Jakob Borg
7921082ece Correctly handle ^ and $ in ignore patterns (fixes #1365) 2015-02-19 09:10:32 +02:00
Jakob Borg
efd6a29909 Translation update 2015-02-15 16:55:44 +01:00
Jakob Borg
88c44b303d Simplify FileInfoTruncated 2015-02-15 12:50:03 +01:00
Jakob Borg
e7dbb8ccdc Fix test for unknown flags 2015-02-15 09:51:39 +01:00
Jakob Borg
fe2a743c8d Protocol update 2015-02-15 09:41:32 +01:00
Audrius Butkevicius
1f5c124ac4 Merge pull request #1359 from krozycki/master
Typo fix. Fixes #1358
2015-02-14 15:17:45 +00:00
Karol Różycki
64a5bc038a Typo fix. Fixes #1358 2015-02-14 16:10:43 +01:00
Jakob Borg
5d9396334c Merge pull request #1311 from krozycki/master
Button to rescan all folders (fixes #1151)
2015-02-13 08:36:17 +01:00
Karol Różycki
ec160f1f0a Button to rescan all folders, fixes #1151 2015-02-12 21:03:35 +01:00
Jakob Borg
bbaeca96eb Merge pull request #1326 from rumpelsepp/systemd
Some systemd tweaks
2015-02-12 16:58:54 +01:00
Jakob Borg
3ab779895f Add tnn2 2015-02-12 12:14:56 +01:00
Jakob Borg
203c7360e7 Merge remote-tracking branch 'syncthing/pr/1332'
* syncthing/pr/1332:
  Implement memorySize() for NetBSD
2015-02-12 12:13:35 +01:00
Jakob Borg
a831f174ef Add marclaporte 2015-02-12 12:08:49 +01:00
Stefan Tatschner
153091f52f Some systemd tweaks
- Removed environment file to keep the service file minimal.
  "systemctl edit syncthing.service" does the job if somebody wants
  to customize the service.
- Changed "cmdline.target" to "default.target" as "cmdline.target"
  does not exist in systemd.special:
  http://www.freedesktop.org/software/systemd/man/systemd.special.html
- Added a missing "After=network.target".
- Added a documentation hint, thx @jaystrictor
2015-02-12 09:23:12 +01:00
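The list above, collected into a sketch of the unit fragment it implies (values are illustrative, not the exact shipped file):

```ini
[Unit]
Description=Syncthing - Open Source Continuous File Synchronization
# Documentation hint and network dependency added by this commit:
Documentation=man:syncthing(1)
After=network.target

[Install]
# "cmdline.target" does not exist in systemd.special; use default.target.
WantedBy=default.target
```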
Jakob Borg
35d3af5039 Use syncthing/build:latest for building 2015-02-11 10:17:15 +01:00
Jakob Borg
57e8cd6eab Regenerate assets 2015-02-11 09:35:08 +01:00
Jakob Borg
fc123a71af Merge pull request #1341 from AudriusButkevicius/configtest
Fix tests on Windows
2015-02-11 08:13:54 +01:00
Audrius Butkevicius
f14836cf02 Merge pull request #1346 from uok/match-icons
Match icons for ignore patterns
2015-02-10 21:42:53 +00:00
Ben Schulz
4178feb65f Match icons for ignore patterns 2015-02-10 21:38:56 +01:00
Audrius Butkevicius
2edaf22590 Merge pull request #1345 from calmh/smaller-batches
Reduce memory usage by writing smaller batches
2015-02-10 19:53:53 +00:00
Audrius Butkevicius
acd3dab957 Fix tests on Windows 2015-02-10 19:52:14 +00:00
Jakob Borg
6bbd74adcd Merge pull request #1321 from AudriusButkevicius/bitcheck
Refuse files with unknown bits set (fixes #1276)
2015-02-10 20:27:14 +01:00
Jakob Borg
9bb928bb38 Reduce memory usage by writing smaller batches 2015-02-10 20:24:25 +01:00
Jakob Borg
c87a6c5969 Use single filename for heap profiles 2015-02-10 19:50:27 +01:00
Jakob Borg
c482c13dcb Configurable heap profiling rate 2015-02-10 19:45:32 +01:00
Audrius Butkevicius
b87ed97402 Refuse files with unknown bits set (fixes #1276) 2015-02-09 23:32:33 +00:00
Jakob Borg
ee000dabfd Translation update 2015-02-09 23:03:31 +01:00
Jakob Borg
a73a011ee0 Merge pull request #1323 from AudriusButkevicius/finished
Add ItemFinished event (fixes #1258)
2015-02-09 15:24:10 +01:00
Jakob Borg
2a8e5e2c14 Merge pull request #1304 from AudriusButkevicius/pprof
Add STBLOCKPROFILE
2015-02-09 15:18:42 +01:00
Jakob Borg
5d9a41f712 Merge pull request #1319 from AudriusButkevicius/renames
Fix issues with renames
2015-02-09 15:14:47 +01:00
Jakob Borg
ebcf4b60f6 Merge pull request #1320 from AudriusButkevicius/cache
Remove fd cache (ref #1308)
2015-02-09 15:10:42 +01:00
Jakob Borg
f976b78917 Merge pull request #1322 from AudriusButkevicius/browser
Opening a browser happens in its own routine (fixes #1273)
2015-02-09 15:06:56 +01:00
Tobias Nygren
078790bd0f Implement memorySize() for NetBSD 2015-02-05 20:31:25 +01:00
Audrius Butkevicius
b88c5a89a8 Add rumpelsepp 2015-02-03 00:15:57 +00:00
Audrius Butkevicius
57028e3acc Merge pull request #1325 from rumpelsepp/master
Added exit code definitions to systemd service files (fixes #1324)
2015-02-02 08:04:27 +00:00
Stefan Tatschner
c586a17926 Added exit code definitions to systemd service files (fixes #1324) 2015-02-01 23:55:54 +01:00
Audrius Butkevicius
9d078bac54 Add STBLOCKPROFILE 2015-02-01 19:00:24 +00:00
Audrius Butkevicius
38eaefcabd Add ItemFinished event (fixes #1258) 2015-02-01 18:59:29 +00:00
Audrius Butkevicius
ba8cadc2f1 Opening a browser happens in its own routine (fixes #1273) 2015-02-01 18:59:28 +00:00
Audrius Butkevicius
380d5dfa6d Remove fd cache (ref #1308) 2015-02-01 18:59:24 +00:00
Audrius Butkevicius
32af626630 Fix issues with renames (fixes #1302)
Extra comments explain current issues.
2015-02-01 18:58:27 +00:00
Audrius Butkevicius
8358fedaf4 Fix failing integration tests 2015-02-01 18:57:46 +00:00
Audrius Butkevicius
ec82b0c648 Merge pull request #1309 from uok/fix-override
Fix button "override changes" line break (fixes #1144)
2015-01-29 10:40:14 +00:00
Ben Schulz
81a87f873f Fix button "override changes" line break 2015-01-29 11:28:59 +01:00
Audrius Butkevicius
d91b8ac444 Merge pull request #1299 from krozycki/master
Show information in folder panel if ignore patterns are active, fixes #1279
2015-01-27 19:54:09 +00:00
Karol Różycki
952e51ac75 Show information in folder panel if ignore patterns are active, fixes #1279 2015-01-27 15:27:44 +01:00
Audrius Butkevicius
11267cd44f Merge pull request #1298 from uok/link-usage
Make links to usage report clickable
2015-01-26 20:59:20 +00:00
Ben Schulz
0e59e0aebd Make links to usage report clickable 2015-01-26 17:20:55 +01:00
Audrius Butkevicius
ae1d3b3dd3 Merge pull request #1293 from uok/move-panels
Move panels to top of page (ref #1270)
2015-01-25 09:53:03 +00:00
Ben Schulz
1a91dbee5f Move panels to top of page
- Move panels (new device, new folder, notice) to top of page
- Add icons to panel headers (restart, new folder, notice)
2015-01-23 16:28:30 +01:00
Jakob Borg
fd507e3e41 Merge pull request #1290 from krozycki/master
Ensuring path separator at the end of the folder path. (fixes #1262)
2015-01-22 15:36:32 -08:00
Jakob Borg
9c1a67cf47 Add krozycki 2015-01-22 15:35:58 -08:00
Jakob Borg
69e3824840 Add test for #1262 2015-01-22 15:34:22 -08:00
Karol Różycki
fcb1a98129 Ensuring path separator at the end of the folder path. (fixes #1262) 2015-01-23 00:22:30 +01:00
Audrius Butkevicius
6d942635af Merge pull request #1269 from uok/master
Small improvements for job queue
2015-01-22 22:58:33 +00:00
Audrius Butkevicius
cda2c5d459 Fix integration tests 2015-01-22 22:42:39 +00:00
Jakob Borg
969bb5a742 Fix protocol dependency hash 2015-01-22 13:29:54 -08:00
Jakob Borg
4bccc611c3 Update dependencies 2015-01-22 12:53:10 -08:00
Jakob Borg
d18c4ece0c Merge pull request #1282 from syncthing/integ
Improvements to integration tests
2015-01-22 08:22:46 -08:00
Audrius Butkevicius
25c664b13a Improvements to integration tests 2015-01-22 00:18:08 +00:00
Audrius Butkevicius
7c680c955f Merge pull request #1274 from calmh/minor-tweaks
Integer type policy
2015-01-19 20:40:36 +00:00
Ben Schulz
f037d1b6ca improve job queue 2015-01-19 20:49:19 +01:00
Jakob Borg
2c8b627008 Integer type policy
Integers are for numbers, enabling arithmetic like subtractions and for
loops without getting shot in the foot. Unsigneds are for bitfields.

- "int" for numbers that will always be laughably smaller than four
  billion, and where we don't care about the serialization format.

- "int32" for numbers that will always be laughably smaller than four
  billion, and will be serialized to four bytes.

- "int64" for numbers that may approach four billion or will be
  serialized to eight bytes.

- "uint32" and "uint64" for bitfields, depending on required number of
  bits and serialization format. Likewise "uint8" and "uint16", although
  rare in this project since they don't exist in XDR.

- "int8", "int16" and plain "uint" are almost never useful.
2015-01-19 10:34:36 -08:00
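The policy in miniature, as illustrative declarations:

```go
package policy

var (
	retries   int    // plain count; never anywhere near four billion
	blockSize int32  // always small, but serialized as four bytes (XDR)
	byteCount int64  // may grow large; serialized as eight bytes
	flags     uint32 // bitfield; width per required bits and wire format
)
```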
Jakob Borg
221e3eddd5 Remove leveldb panic workaround
Haven't seen this triggered for a long time...
2015-01-19 10:23:00 -08:00
Jakob Borg
74c39c677b Actually remove test file after test run 2015-01-19 10:23:00 -08:00
Jakob Borg
e6558832bf check-contrib 2015-01-19 10:18:28 -08:00
Jakob Borg
9c50625c55 Merge pull request #1267 from rumpelsepp/doc
Fix documentation link
2015-01-19 10:16:43 -08:00
Jakob Borg
d372435e92 Merge pull request #1264 from AudriusButkevicius/tempclean
Put temporary files in the OS temp directory (fixes #1239)
2015-01-19 10:12:32 -08:00
Audrius Butkevicius
a53facf709 Cleanup temporary files (fixes #1239) 2015-01-18 20:34:47 +00:00
Stefan Tatschner
ffc39dfbcb Fix documentation link 2015-01-18 18:17:55 +01:00
Audrius Butkevicius
cba38b15a9 Check for deleted files 2015-01-18 13:44:10 +00:00
Audrius Butkevicius
5ac7564bfe Merge pull request #1259 from calmh/check-folder-shared
Verify folder<->device permission in Request
2015-01-16 12:07:37 +00:00
Jakob Borg
53cd289b90 Verify folder<->device permission in Request
Requests from valid devices for valid folders should be rejected if the
folder is not shared with that device.
2015-01-16 12:50:51 +01:00
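A sketch of the check with illustrative names and types: even a known device must be refused data from a folder it isn't shared on.

```go
package model

import (
	"fmt"
	"sync"
)

type model struct {
	mut           sync.RWMutex
	folderDevices map[string][]string // folder ID -> device IDs sharing it
}

// checkRequest rejects Request messages for folders that are not shared
// with the requesting device, even when both are otherwise valid.
func (m *model) checkRequest(device, folder string) error {
	m.mut.RLock()
	defer m.mut.RUnlock()
	for _, d := range m.folderDevices[folder] {
		if d == device {
			return nil
		}
	}
	return fmt.Errorf("folder %q not shared with device %s", folder, device)
}
```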
Jakob Borg
8dc13bcf1a go1.4.1 2015-01-16 10:18:54 +01:00
Audrius Butkevicius
261825a89b Merge pull request #1256 from calmh/fix-963
Adds File Versioning to folder info (fixes #963)
2015-01-15 17:25:28 +00:00
Jakob Borg
f47a5a309d Add File Versioning to folder info (fixes #963)
Could potentially use shorter value strings ("Simple" vs "Simple File
Versioning"), but this avoid introducing unnecessary strings to
translate - can always be changed in the future.
2015-01-15 15:50:49 +01:00
Jakob Borg
a40f2b9fa0 Merge pull request #1237 from syncthing/renamer
Efficient renames (fixes #1217)
2015-01-14 00:28:58 +01:00
Jakob Borg
cfcd3892f7 More concise changelog 2015-01-14 00:27:29 +01:00
Jakob Borg
703987f61c Changelog should read in chronological order from the top 2015-01-13 23:11:46 +01:00
Audrius Butkevicius
e50a8917ec Add renames to integration tests 2015-01-13 22:07:14 +00:00
Audrius Butkevicius
74d7c8e625 Efficient renames (fixes #1217) 2015-01-13 22:06:13 +00:00
Jakob Borg
a5d1383fe8 Translation update 2015-01-13 17:27:47 +01:00
Jakob Borg
bf2bcf515c Merge pull request #1247 from Rewt0r/master
Change Bottom Bar links to a new tab/window
2015-01-13 17:26:48 +01:00
Jakob Borg
4c5e94c64b Add Rewt0r 2015-01-13 17:25:51 +01:00
Jakob Borg
b4043216b6 Use bytes.Reader instead of bytes.Buffer for compiled in assets 2015-01-13 16:05:03 +01:00
Michael Jephcote
4371014667 Change Bottom Bar links to a new tab/window
Clicking links in the footer and having them navigate away from the page was getting kind of annoying, so I've added "_blank" as the target of those links.
2015-01-13 14:24:08 +00:00
Jakob Borg
fbb3222d29 Update dependencies 2015-01-13 13:55:35 +01:00
Audrius Butkevicius
4ca3889bed Merge pull request #1246 from syncthing/break-out-proto
Refactor out protocol and luhn (protocol dependency) packages
2015-01-13 12:37:46 +00:00
Jakob Borg
eef1aebe8c Refactor out protocol and luhn (protocol dependency) packages 2015-01-13 13:22:56 +01:00
Audrius Butkevicius
48382c4b59 Merge pull request #1245 from syncthing/indexclean-v2
Also filter out some other obviously invalid filenames (ref #1243)
2015-01-13 11:35:04 +00:00
Jakob Borg
9a45f0b31c Also filter out some other obviously invalid filenames (ref #1243) 2015-01-13 12:28:35 +01:00
Audrius Butkevicius
25c26e2f81 Merge pull request #1244 from syncthing/indexclean
Remove nil filenames from database and indexes (fixes #1243)
2015-01-13 08:36:49 +00:00
Jakob Borg
e4837f14b1 Remove nil filenames from database and indexes (fixes #1243) 2015-01-13 09:20:14 +01:00
Audrius Butkevicius
ce86131d12 Merge pull request #1242 from syncthing/rename-set
Renaming of package internal/files and type files.Set
2015-01-12 21:10:27 +00:00
Jakob Borg
e6c9baf6ef Rename db.Set to db.FileSet 2015-01-12 20:57:39 +01:00
Jakob Borg
8d6db7be31 Rename package internal/files to internal/db 2015-01-12 20:57:22 +01:00
Audrius Butkevicius
a2548b1fd0 Merge pull request #1241 from syncthing/fix-1143
Don't start a new refresh() loop on each UIOnline (fixes #1143)
2015-01-12 11:29:53 +00:00
Jakob Borg
e4658bb99d Don't start a new refresh() loop on each UIOnline (fixes #1143)
Separate out the stuff that should run on each UIOnline from the stuff
that should only run on init.
2015-01-12 12:15:58 +01:00
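The shape of that separation, sketched in Go for brevity (the actual change is in the GUI's JavaScript; names here are illustrative): init-only work is guarded so that repeated online events refresh state without starting another loop.

    package main

    import (
        "fmt"
        "sync"
    )

    var initOnce sync.Once

    func onUIOnline() {
        initOnce.Do(func() {
            fmt.Println("starting refresh() loop") // runs once, ever
        })
        fmt.Println("refreshing state") // runs on every UIOnline event
    }

    func main() {
        onUIOnline()
        onUIOnline() // no second refresh() loop is started
    }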
Jakob Borg
bf2e4a561a Changelog prints output on two lines per commit 2015-01-11 21:58:19 +01:00
Jakob Borg
1816320124 One more translation update 2015-01-11 21:19:42 +01:00
Jakob Borg
f09bfe293d Translation update 2015-01-11 20:31:30 +01:00
Jakob Borg
7b4e8fda4b Modal dialog titles must be manually translated 2015-01-11 14:48:40 +01:00
Audrius Butkevicius
c95812353f Merge pull request #1236 from syncthing/flag-safe
Reject Index and Request messages with unexpected flags
2015-01-11 12:40:40 +00:00
Jakob Borg
b622ec7a28 Reject Index and Request messages with unexpected flags 2015-01-11 13:29:01 +01:00
Jakob Borg
d8fbe7b77f Refactor readerLoop to switch on message type directly 2015-01-11 13:24:56 +01:00
Jakob Borg
dbcac37d91 Can run integration tests between different versions 2015-01-11 09:55:44 +01:00
Jakob Borg
d4d391b34f Change integration test "log" field to "instance" 2015-01-11 09:55:17 +01:00
Jakob Borg
571cf7d490 Merge pull request #1182 from AudriusButkevicius/autoauto
Connecting to a newer node triggers autoupgrade check (fixes #1177)
2015-01-11 09:16:12 +01:00
Jakob Borg
e18b19ca5a Translation base & assets update 2015-01-10 18:15:08 +01:00
Jakob Borg
48651bf482 Merge pull request #1234 from uok/master
Small improvements for footer navbar (ref #1154)
2015-01-10 18:14:47 +01:00
Audrius Butkevicius
5034a41c08 Connecting to a newer node triggers autoupgrade check (fixes #1177) 2015-01-10 17:05:19 +00:00
Ben Schulz
6795173e77 Small improvements for footer navbar 2015-01-10 18:02:27 +01:00
Jakob Borg
219ef996f5 Merge pull request #1226 from syncthing/deregister-fix
All roads lead to Finisher (fixes #1201)
2015-01-10 17:53:01 +01:00
Jakob Borg
00af1db275 Translation base & assets update 2015-01-10 17:51:18 +01:00
Jakob Borg
5935ea896f Merge pull request #1232 from facastagnini/master
"Quick guide to supported patterns" link updated
2015-01-10 17:50:23 +01:00
Jakob Borg
459983c05e Add facastagnini 2015-01-10 17:47:41 +01:00
Jakob Borg
ebf4f029ac Merge pull request #1229 from AudriusButkevicius/cfg-hasher
Make parallel hasher configurable, remove finisher setting (fixes #1199)
2015-01-10 17:45:15 +01:00
Jakob Borg
0eec945df1 Merge pull request #1230 from AudriusButkevicius/separator
Expose and use path separator (fixes #1163)
2015-01-10 17:43:41 +01:00
Jakob Borg
8824b9d68f Merge pull request #1231 from AudriusButkevicius/text
Rename "Last File Synced" to "Last File Received" (fixes #1145)
2015-01-10 17:43:06 +01:00
Jakob Borg
d2862814c5 Merge pull request #1233 from AudriusButkevicius/disco-log
Make discovery logging a bit better (fixes #1188)
2015-01-10 17:42:13 +01:00
Audrius Butkevicius
25fece2d50 Make discovery logging a bit better (fixes #1188) 2015-01-10 16:15:16 +00:00
Federico Castagnini
beb4239d1b The "Quick guide to supported patterns" link now points to the wiki article and will open in a new page/tab to avoid disrupting the settings page. 2015-01-10 10:42:45 -05:00
Audrius Butkevicius
2b78e37d92 Rename "Last File Synced" to "Last File Received" (fixes #1145)
IMHO this is clearer
2015-01-10 14:55:08 +00:00
Audrius Butkevicius
a2070d9ce4 Expose and use path separator (fixes #1163) 2015-01-10 14:51:29 +00:00
Audrius Butkevicius
5827a686b8 Make parallel hasher configurable, remove finisher setting (fixes #1199) 2015-01-10 14:32:20 +00:00
Audrius Butkevicius
dec479532e All roads lead to Finisher (fixes #1201) 2015-01-10 13:45:48 +00:00
Jakob Borg
5d173168cc Merge pull request #1214 from bigbear2nd/master
Added colored status indicator for narrow screens. (Fixes #1084)
2015-01-09 16:23:24 +01:00
bigbear2nd
2aac1cde04 Added colored status indicator for narrow screens. (Fixes #1084) 2015-01-09 20:54:00 +09:00
Audrius Butkevicius
3676f0268f Merge pull request #1220 from syncthing/arm-build
Only build ARMv5 (fixes #1218)
2015-01-09 10:25:50 +00:00
Audrius Butkevicius
a7b75a54bb Merge pull request #1219 from syncthing/refactor-truncated
Refactor stuff around FileInfoTruncated
2015-01-09 10:12:48 +00:00
Jakob Borg
961a87b743 Only build ARMv5 (fixes #1218)
With this change, the build system only builds one ARM variant - ARMv5.
We call the build architecture simply "arm", as this is what
runtime.GOARCH says.
2015-01-09 10:45:15 +01:00
Jakob Borg
e03d59e381 The protocol specs moved again 2015-01-09 08:54:19 +01:00
Jakob Borg
d46ce5003c Implement GetGlobalTruncated 2015-01-09 08:41:02 +01:00
Jakob Borg
4c4143d9be Move FileInfoTruncated to files package
This is where it's used, and it clarifies that it's never used over the
wire.
2015-01-09 08:28:24 +01:00
Jakob Borg
8bc7d259f4 Move FileIntf to files package, expose Iterator type
This is where FileIntf is used, so it should be defined here (it's not
a protocol thing, really).
2015-01-09 08:18:42 +01:00
Jakob Borg
2d047fa428 Remove unused types 2015-01-09 08:14:02 +01:00
Audrius Butkevicius
735d420d40 Merge pull request #1215 from syncthing/new-proto-bc
Add some new protocol fields
2015-01-08 21:54:01 +00:00
Jakob Borg
bc9fc1aece Actually close connection based on unknown protocol version 2015-01-08 22:11:26 +01:00
Jakob Borg
b88e3c99c1 Add fields for future extensibility
This adds a number of fields to the end of existing messages. This is a
backwards compatible change.
2015-01-08 22:11:26 +01:00
Jakob Borg
ce3e6e084c Ensure backwards compatibility before modifying protocol
This change makes sure that things work smoothly when "we" are a newer
version than our peer and have more fields in our messages than they do.
Missing fields will be left at zero/nil.

(The other side will ignore our extra fields, for the same effect.)
2015-01-08 14:25:11 +01:00
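A minimal sketch of the append-only convention these two commits rely on (hypothetical message types, not the actual protocol structs): fields are only ever added at the end, so a newer decoder zero-fills whatever an older peer didn't send, and an older decoder simply ignores the extra trailing fields.

    package main

    import "fmt"

    // Hypothetical V2 message: the Options field was appended at the end
    // in a later protocol version.
    type helloV2 struct {
        ClientName    string
        ClientVersion string
        Options       []string // absent on the wire from an older peer
    }

    func main() {
        // Constructed the way a decoder would leave it after reading an
        // older payload: the appended field stays at its zero value.
        msg := helloV2{ClientName: "syncthing", ClientVersion: "v0.10"}
        fmt.Println(msg.Options == nil) // true; receiving code must tolerate this
    }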
Jakob Borg
2a58ca7697 Merge pull request #1212 from cqcallaw/upnp
Properly handle absolute URLs when parsing UPnP service control URLs
2015-01-08 08:50:30 +01:00
Caleb Callaway
af96f7a0cd Properly handle absolute URLs when parsing UPnP service control URLs
Fixes #1187
2015-01-07 21:23:20 -08:00
Jakob Borg
7d39d1a925 Make it possible to include extra external files into binary packages 2015-01-07 16:15:50 +01:00
Jakob Borg
1b6c700e18 Merge remote-tracking branch 'origin/pr/1198'
* origin/pr/1198:
  Fix rendering issue on firefox when zoomed (fixes #1197)
2015-01-07 14:24:49 +01:00
Tim Abell
6304bd60ee Fix rendering issue on firefox when zoomed (fixes #1197)
issue #1197
2015-01-07 09:27:17 +00:00
Jakob Borg
4ad4417740 Add timabell 2015-01-07 08:36:54 +01:00
Jakob Borg
6a4c259a73 Merge pull request #1196 from AudriusButkevicius/finddevice
Add device finder utility
2015-01-07 08:31:54 +01:00
Audrius Butkevicius
12eabb220d Add device finder utility 2015-01-06 23:12:12 +00:00
Jakob Borg
d68ce2d68c Translation update 2015-01-06 23:12:40 +01:00
Jakob Borg
8e02c040eb Update key ID for signed releases in README (fixes #1180) 2015-01-06 23:06:16 +01:00
Jakob Borg
a7a317c284 The predictableRandom test can only run once successfully (fixes #1184) 2015-01-06 23:03:35 +01:00
Audrius Butkevicius
9d6ef24660 Merge pull request #1194 from syncthing/fix-1186
Use comma-ok idiom to signal files missing in database (fixes #1186)
2015-01-06 21:54:13 +00:00
Jakob Borg
14014408fb Merge pull request #1181 from kozec/stnoupgrade-disable-button
Return HTTP/500 from /rest/upgrade if STNOUPGRADE is defined
2015-01-06 22:54:08 +01:00
kozec
b933e9666a /rest/upgrade returns HTTP/500 if STNOUPGRADE is defined 2015-01-06 22:50:56 +01:00
Jakob Borg
7aff59bcce Add brendanlong 2015-01-06 22:48:01 +01:00
Jakob Borg
8e2760cb3d Merge pull request #1183 from brendanlong/fix-tests-on-go-1.3
Don't use Go 1.4 range syntax in queue_test.go
2015-01-06 22:47:23 +01:00
Brendan Long
7a9fc6dbd3 Don't use Go 1.4 range syntax in queue_test.go, since the listed requirement is Go 1.3. 2015-01-06 15:45:58 -06:00
Jakob Borg
75d0dc251e Use comma-ok idiom to signal files missing in database (fixes #1186)
Prevents us from doing stupid things to the folder root (empty file
path) when nodes disconnect...
2015-01-06 22:40:20 +01:00
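The comma-ok idiom referred to, in a minimal sketch (the map stands in for the database; names are illustrative): the boolean return distinguishes "missing from the database" from "present with a zero value", so callers can skip instead of acting on an empty file path.

    package main

    import "fmt"

    func fileName(db map[string]string, device string) (string, bool) {
        name, ok := db[device]
        return name, ok // ok is false when the device has no entry
    }

    func main() {
        db := map[string]string{"device-a": "docs/readme.txt"}
        if name, ok := fileName(db, "device-b"); !ok {
            _ = name // zero value; must not be treated as the folder root
            fmt.Println("missing in database, skipping")
        }
    }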
Jakob Borg
9a50c4d93f Don't unnecessarily chmod directories when renaming 2015-01-06 22:10:44 +01:00
Audrius Butkevicius
010d5a0192 Merge pull request #1179 from syncthing/httperror
Handle HTTP errors on non-event requests (fixes #1120)
2015-01-05 17:45:18 +00:00
Jakob Borg
cf1594829a Handle HTTP errors on non-event requests (fixes #1120, fixes #807) 2015-01-05 16:03:00 +01:00
Jakob Borg
854d720ce0 Merge pr/988
* commit 'b9817ac':
  add README
  on-failure instead of always as we cannot otherwise kill the service
  systemd units for system/user
2015-01-05 15:14:33 +01:00
Jakob Borg
2f43c74ece Add peterhoeg 2015-01-05 15:14:22 +01:00
Peter Hoeg
b9817ac6b4 add README 2015-01-05 18:29:13 +08:00
Peter Hoeg
1e8da0d494 on-failure instead of always as we cannot otherwise kill the service 2015-01-05 18:29:13 +08:00
Peter Hoeg
c47be7b415 systemd units for system/user 2015-01-05 18:29:13 +08:00
Jakob Borg
d3f6cb860f Translation update 2015-01-04 20:18:14 +01:00
Audrius Butkevicius
83d25f09a3 Fix broken upgrades (fixes #1175) 2015-01-04 18:19:00 +00:00
Audrius Butkevicius
ed747a2d3d Add identicons to device prompts 2015-01-03 23:34:15 +00:00
Jakob Borg
3a8ee4ce2e Merge pull request #1169 from syncthing/pullhash
Hash blocks after receipt, try multiple peers (fixes #1166)
2015-01-04 00:24:07 +01:00
Audrius Butkevicius
5ac01a3af4 Hash blocks after receipt, try multiple peers (fixes #1166) 2015-01-03 23:21:57 +00:00
Jakob Borg
46343f2f9e Merge pull request #1174 from AudriusButkevicius/intro
New device, folder prompts (fixes #120, fixes #330)
2015-01-04 00:16:10 +01:00
Audrius Butkevicius
56ccb5b2ab New device, folder prompts (fixes #120, fixes #330) 2015-01-03 23:06:41 +00:00
Jakob Borg
9a946eed80 Discourse -> Wiki for docs 2015-01-03 16:44:13 +01:00
Audrius Butkevicius
9c6cb0f630 Merge pull request #1172 from syncthing/random-scanintv
Add a random perturbation to the scan interval (fixes #1150)
2015-01-02 15:25:22 +00:00
Audrius Butkevicius
1b066d6965 Merge pull request #1171 from syncthing/jobqueue
Add job queue (replaces #1060)
2015-01-02 15:18:50 +00:00
Jakob Borg
54c3caad53 Add a random perturbation to the scan interval (fixes #1150) 2015-01-02 16:16:16 +01:00
Jakob Borg
9b5e8aaf83 Repair buggy BringToFront 2015-01-02 15:54:04 +01:00
Jakob Borg
5143c09bcf Refactor / cleanup 2015-01-02 15:54:04 +01:00
Jakob Borg
2496185629 Only buffer file names, not full &FileInfo 2015-01-02 15:33:39 +01:00
Jakob Borg
34deb82aea Use slice instead of list, no map
benchmark                           old ns/op     new ns/op     delta
BenchmarkJobQueueBump               345           154498        +44682.03%
BenchmarkJobQueuePushPopDone10k     9437373       3258204       -65.48%

benchmark                           old allocs     new allocs     delta
BenchmarkJobQueueBump               0              0              +0.00%
BenchmarkJobQueuePushPopDone10k     10565          22             -99.79%

benchmark                           old bytes     new bytes     delta
BenchmarkJobQueueBump               0             0             +0.00%
BenchmarkJobQueuePushPopDone10k     1452498       385869        -73.43%
2015-01-02 15:33:39 +01:00
Jakob Borg
8f72ae9da2 Add some benchmarks 2015-01-02 15:33:39 +01:00
Audrius Butkevicius
b753f01ac1 Add tests 2015-01-02 15:33:39 +01:00
Audrius Butkevicius
fd0a147ae6 Add job queue (fixes #629)
Request to terminate currently ongoing downloads and jump to the bumped file
incoming in 3, 2, 1.

Also, this has a slightly strange effect: we pop a job off the queue, but
the copyChannel is still busy and blocks, so the job gets moved to the
progress slice in the jobqueue and looks like it's in progress, which it
isn't, as it's waiting to be picked up from the copyChan.

As a result, the progress emitter doesn't register on the task, and hence the file
doesn't have a progress bar, yet cannot be replaced by a bump.

I guess I can fix the progress bar issue by moving the progressEmitter.Register
call to just before passing the file to the copyChan, but then we are back to
the initial problem of a file with a progress bar but no progress happening,
as it's stuck on a write to copyChan.

I checked if there is a way to check for channel writeability (before popping)
but got struck by lightning just for bringing the idea up in #go-nuts.

My ideal scenario would be to check if copyChan is writeable, pop job from the
queue and shove it down handleFile. This way jobs would stay in the queue while
they cannot be handled, meaning that the `Bump` could bring your file up higher.
2015-01-02 15:33:39 +01:00
Audrius Butkevicius
e94bd90782 Merge pull request #1164 from syncthing/ro-tempfiles
Handle read only temp files after crash/restart
2014-12-31 12:08:37 +00:00
Jakob Borg
ce4b897d0e Handle read only temp files after crash/restart 2014-12-31 13:06:28 +01:00
Jakob Borg
a7694029e2 Make sure to stop processes when exiting integration test 2014-12-31 13:04:06 +01:00
Jakob Borg
1e9110b763 Add debugging utility for manual directory comparison 2014-12-31 13:04:06 +01:00
Jakob Borg
6f3fbbbe49 Improve error checking in integration tests 2014-12-31 13:04:04 +01:00
Jakob Borg
d346ec7bfe Merge pull request #1160 from AudriusButkevicius/upnp
Use unique names for UPnP mappings (fixes #1100, fixes #1128)
2014-12-31 12:56:47 +01:00
Jakob Borg
26a3613397 Merge pull request #1162 from AudriusButkevicius/silence
Silence versioner warnings for unmatched files (fixes #1117)
2014-12-31 12:54:23 +01:00
Jakob Borg
e6318bddf3 Merge pull request #1161 from AudriusButkevicius/upnp2
Use ListenMulticastUDP for multicast sockets (potentially fixes #1113)
2014-12-31 12:53:56 +01:00
Audrius Butkevicius
514bb0beda Silence versioner warnings for unmatched files (fixes #1117) 2014-12-30 22:43:07 +00:00
Audrius Butkevicius
41b1bd2f05 Use ListenMulticastUDP for multicast sockets (potentially fixes #1113) 2014-12-30 22:27:47 +00:00
Audrius Butkevicius
bf40dadf04 Use unique names for UPnP mappings (fixes #1100, fixes #1128) 2014-12-30 21:47:12 +00:00
Jakob Borg
cb1678ebec Clean up folders after -reset test 2014-12-30 11:02:49 +01:00
Jakob Borg
0c1ac568b5 Fix tests with newer goleveldb 2014-12-29 14:50:24 +01:00
Audrius Butkevicius
0f9550c747 Merge pull request #1149 from syncthing/fix-1058
Also check file size when determining if file is unchanged (fixes #1058)
2014-12-29 13:29:00 +00:00
Audrius Butkevicius
b13ae17a47 Merge pull request #1147 from syncthing/fix-1118
Generate a random API key on initial setup (fixes #1118)
2014-12-29 13:28:38 +00:00
Jakob Borg
f762a12d18 Also check file size when determining if file is unchanged (fixes #1058) 2014-12-29 14:24:12 +01:00
Jakob Borg
20d30a80be Generate a random API key on initial setup (fixes #1118)
Also makes the javascript implementation use the same algorithm for
generating random strings.
2014-12-29 13:48:26 +01:00
Audrius Butkevicius
229b218203 Merge pull request #1146 from syncthing/fix-1047
Make auto upgrade careful about breaking changes (fixes #1047)
2014-12-29 11:41:10 +00:00
Jakob Borg
4b668aaca8 Make auto upgrade careful about breaking changes (fixes #1047) 2014-12-29 12:35:06 +01:00
Jakob Borg
8c7f1421c6 Update goleveldb 2014-12-29 12:23:07 +01:00
Jakob Borg
d90b2c1d52 Translation update 2014-12-29 09:42:17 +01:00
Jakob Borg
22f39be197 Exit before attempting to use nil variables on scanning nonexistent folder 2014-12-23 14:14:05 +01:00
Audrius Butkevicius
2fa45436c2 Merge pull request #1140 from syncthing/fix-1133
Refactor ignore handling to fix #1133
2014-12-23 13:01:56 +02:00
Jakob Borg
cadbb6bbce Move ignore handling from index recv to puller (fixes #1133)
With this change we accept updates for ignored files from other devices,
and check the ignore patterns at pull time. When we detect that the
ignore patterns have changed we do a full check of files that we might
now need to pull.
2014-12-23 10:46:02 +01:00
Jakob Borg
2c89f04be7 Refactor ignore handling (...)
This uses persistent Matcher objects that can reload their content and
provide a hash string that can be used to check if it's changed. The
cache is local to each Matcher object instead of kept globally.
2014-12-23 10:46:02 +01:00
Jakob Borg
597011e3a9 Disregard change to removed doc 2014-12-23 10:23:36 +01:00
Audrius Butkevicius
0d433b58ba Merge pull request #1139 from syncthing/check-upgrade-md5
Check upgrade md5
2014-12-22 15:33:19 +02:00
Jakob Borg
cde8ef56e5 Implement manual -upgrade-to option 2014-12-22 12:18:10 +01:00
Jakob Borg
110816c7aa Consolidate Windows/Unix upgrading and check MD5 (fixes #1138) 2014-12-22 12:13:31 +01:00
Jakob Borg
fbb1e168f7 Include MD5 sums in archives 2014-12-22 12:12:34 +01:00
Jakob Borg
23085eb5ae Must verify success of from-network copy during upgrade (ref #1138) 2014-12-22 10:42:47 +01:00
Jakob Borg
7344a6205f Move protocol specs to a separate repo 2014-12-22 09:55:58 +01:00
marco-m
4b76ec40c0 Update DISCOVERY.md
Correct DISCOVERY.md with the changes proposed in the forum (https://discourse.syncthing.net/t/questions-about-the-discovery-protocol/1586)
2014-12-21 22:47:47 +01:00
Audrius Butkevicius
90101d0269 Merge pull request #1134 from syncthing/fix-816
Don't ignore ignored items forever (fixes #816)
2014-12-21 16:18:24 +02:00
Jakob Borg
7ac84c0660 Don't ignore ignored items forever (fixes #816) 2014-12-21 13:55:50 +01:00
Jakob Borg
2090530bbb Improve and clean up integration tests, benchmark. 2014-12-19 12:43:48 +01:00
Jakob Borg
b6cb7ddbaf There is no Legend string right now 2014-12-19 10:18:51 +01:00
Jakob Borg
3422d9335c ... and in NICKS (I should go to bed) 2014-12-18 22:55:04 +01:00
Jakob Borg
e91f9a944e Revert "Update bootstrap" (fixes #1121)
This reverts commit 51cdd38c3e.

Conflicts:
	internal/auto/gui.files.go
2014-12-18 22:32:03 +01:00
Jakob Borg
e7ddc7cf0f ... also in index.html 2014-12-18 22:02:45 +01:00
Jakob Borg
40dfa48756 Rebuild assets 2014-12-18 22:01:38 +01:00
Jakob Borg
579f92cf5f Merge branch 'pr-1115'
* pr-1115:
  Make progress indicators less animated
  put legend above list of needed files
2014-12-18 22:01:27 +01:00
Jakob Borg
4565125da9 Add Cathryne 2014-12-18 21:59:54 +01:00
Jakob Borg
ce13a01e65 Clarify authorship requirements in contribution guidelines 2014-12-18 21:56:52 +01:00
Jakob Borg
618a8682b7 golint style tweaks 2014-12-16 23:33:56 +01:00
Jakob Borg
963077f918 Translation update 2014-12-16 23:20:59 +01:00
Jakob Borg
3704d2d86b Don't exit after creating HTTPS certs (fixes #1103) 2014-12-16 22:55:44 +01:00
Jakob Borg
fc6a029311 gofmt 2014-12-16 22:40:04 +01:00
Jakob Borg
7c7b1e6c2d Merge branch 'update-bootstrap'
* update-bootstrap:
  Fix checkbox breakage in Settings dialog
  Update bootstrap
2014-12-15 09:13:05 +01:00
Jakob Borg
892920039d Fix checkbox breakage in Settings dialog 2014-12-15 09:12:59 +01:00
Jakob Borg
51cdd38c3e Update bootstrap 2014-12-15 08:54:29 +01:00
Jakob Borg
80977bd4c0 Make progress indicators less animated 2014-12-15 00:34:03 +01:00
Cathryne
d8022f94ef put legend above list of needed files 2014-12-13 18:33:20 +01:00
Jakob Borg
1c43587d7d Patch Go for issue #9102 in build env (fixes #1112) 2014-12-13 10:38:05 +01:00
Jakob Borg
b2ed32b118 Command -generate should work on non-existent dir 2014-12-12 21:39:03 +01:00
Jakob Borg
0cc815d816 Need config available for -reset (fixes #1111) 2014-12-12 21:29:57 +01:00
Jakob Borg
d452b7593f Merge branch 'pr-1094'
* pr-1094:
  GUI tweaks for last file synced
  Display last received file and time (fixes #292, fixes #801)
2014-12-12 14:25:12 +01:00
Jakob Borg
5346bdc683 GUI tweaks for last file synced 2014-12-12 14:24:36 +01:00
Jakob Borg
dc5c1e2002 Use Go 1.4 for builds 2014-12-11 12:48:40 +01:00
Jakob Borg
2e48e298a2 Merge pull request #1107 from AudriusButkevicius/cleanup
Remove temporaries during scan (fixes #1092)
2014-12-10 09:19:34 +01:00
Audrius Butkevicius
7a1aaaf5c4 Remove temporaries during scan (fixes #1092) 2014-12-09 23:58:58 +00:00
Audrius Butkevicius
bde92d5cfe Display last received file and time (fixes #292, fixes #801) 2014-12-09 20:24:48 +00:00
Audrius Butkevicius
691f0f4845 Merge pull request #1102 from syncthing/gui-poodle
Protect GUI HTTPS from some attacks
2014-12-09 09:52:21 +00:00
Jakob Borg
fdd458d2fe Protect GUI HTTPS from some attacks
- Disable SSLv3 against POODLE
- Disable RC4 as a weak cipher
- Set the CommonName to the system host name
2014-12-09 10:49:58 +01:00
Jakob Borg
d2c0b8374a Fix integration tests for Windows native 2014-12-08 22:15:10 +01:00
Jakob Borg
c96c78892d Include error in randomness failure panic 2014-12-08 19:40:38 +01:00
Jakob Borg
957643f523 crypto/rand.Reader may not return all entropy immediately 2014-12-08 19:36:08 +01:00
Audrius Butkevicius
749bbec566 Merge pull request #1099 from syncthing/vet-and-lint
Various changes for vet and lint
2014-12-08 17:08:18 +00:00
Jakob Borg
25e363c5fb Style tweaks and some *IDG->IGD in UPnP code 2014-12-08 17:07:55 +01:00
Jakob Borg
febeed3277 config.ConfigWrapper -> config.Wrapper 2014-12-08 16:39:11 +01:00
Jakob Borg
9d07aa006d Various style fixes 2014-12-08 16:36:15 +01:00
Jakob Borg
12d69e25dd Fixes for go vet 2014-12-08 16:19:08 +01:00
Jakob Borg
0c9f1efc75 Run vet and lint during build 2014-12-08 16:12:53 +01:00
Jakob Borg
665b4506e7 Correct check-contrib.sh 2014-12-08 15:44:55 +01:00
Jakob Borg
cb5548ceb8 Fit better in with Jenkins 2014-12-08 15:42:53 +01:00
Jakob Borg
c9492e54f7 Copyright notice 2014-12-08 15:42:53 +01:00
Jakob Borg
12490eafff Script to fail build on missing authors and copyrights 2014-12-08 15:42:53 +01:00
Jakob Borg
6e83d11d5f Translation update 2014-12-08 13:25:27 +01:00
Jakob Borg
9c6aedc91b Merge remote-tracking branch 'origin/pr/1097'
* origin/pr/1097:
  Revert "Cache file descriptors" (fixes #1096)
2014-12-08 13:23:06 +01:00
Audrius Butkevicius
a9339d0627 Revert "Cache file descriptors" (fixes #1096)
This reverts commit 992ad97ad5.

Causes issues on Windows, which uses file locking, meaning we cannot
archive or modify the file while it's open.
2014-12-08 11:56:14 +00:00
Jakob Borg
4d9aa10532 Merge pull request #1093 from syncthing/random
Refactor random string stuff and seeding
2014-12-08 09:40:30 +01:00
Audrius Butkevicius
b00264b594 Copy compression setting while introducing 2014-12-07 22:43:30 +00:00
Jakob Borg
e329c7015e Refactor random string stuff and seeding
Make sure we have a good random seed on the default RNG, that the
predictable RNG is clearly marked as such, that random strings are
actually the length requested, and that they contain a restricted set of
characters only.
2014-12-07 16:47:24 +01:00
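A sketch of the approach described, assuming nothing beyond the standard library (illustrative, not the repository's implementation): each character is drawn from a fixed alphabet using crypto/rand, so the result has exactly the requested length and contains only characters from the restricted set.

    package main

    import (
        "crypto/rand"
        "fmt"
        "math/big"
    )

    const alphabet = "abcdefghijkmnopqrstuvwxyz234567" // restricted set, illustrative

    func randomString(n int) string {
        bs := make([]byte, n)
        for i := range bs {
            idx, err := rand.Int(rand.Reader, big.NewInt(int64(len(alphabet))))
            if err != nil {
                panic(err) // no randomness available; nothing sane to do
            }
            bs[i] = alphabet[int(idx.Int64())]
        }
        return string(bs)
    }

    func main() {
        fmt.Println(randomString(32)) // always 32 characters, all from alphabet
    }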
Jakob Borg
1392cfc72d Actually commit and use new random UR ID 2014-12-07 15:49:17 +01:00
Jakob Borg
c6688d8f89 Include ref#, show author nickname in release notes 2014-12-07 12:52:18 +01:00
Jakob Borg
87abea0ba3 Script for generating the change log 2014-12-07 09:07:13 +01:00
Jakob Borg
f9fcb44f3c Translation update 2014-12-07 08:30:54 +01:00
Jakob Borg
996cbbca38 Merge remote-tracking branch 'origin/pr/1091'
* origin/pr/1091:
  Escape plus sign (fixes #1090)
2014-12-07 08:05:21 +01:00
Jakob Borg
581f4b89bd Merge remote-tracking branch 'origin/pr/977'
* origin/pr/977:
  Cache file descriptors
2014-12-07 08:03:34 +01:00
Audrius Butkevicius
88a347dce0 Escape plus sign (fixes #1090) 2014-12-07 00:10:32 +00:00
Audrius Butkevicius
3e7b197a1d Merge pull request #1074 from syncthing/fix-1071
Handle symlinks in versioning (fixes #1071)
2014-12-06 20:19:25 +00:00
Jakob Borg
94ab06e92f Handle symlinks, Staggered versioner (fixes #1071) 2014-12-06 15:20:35 +01:00
Jakob Borg
d38c81fcff Handle broken symlinks, Simple versioner (fixes #1071) 2014-12-06 15:20:35 +01:00
Jakob Borg
3e26fdfb67 Run filetype and symlink integration tests with versioning (ref #1071) 2014-12-06 15:20:35 +01:00
Jakob Borg
e1be73232d Merge remote-tracking branch 'origin/pr/1086'
* origin/pr/1086:
  Enable URL lang parameter for switching languages (fixes #1080)
2014-12-06 14:41:41 +01:00
Jakob Borg
06fd2268d9 Use Go 1.4 'generate' to create XDR codec 2014-12-06 14:23:10 +01:00
Jakob Borg
15251dfae1 Update calmh/xdr 2014-12-06 14:20:49 +01:00
Dennis Wilson
f62812a8dc Enable URL lang parameter for switching languages (fixes #1080) 2014-12-06 14:11:42 +01:00
Jakob Borg
c6041d2590 Skip dotfiles when generating assets 2014-12-06 13:46:02 +01:00
Jakob Borg
c7e779107c Staggered versioning should use current time for filename tag (fixes #994) 2014-12-06 13:26:46 +01:00
Jakob Borg
7cd25c919f Asset rebuild 2014-12-06 12:36:35 +01:00
Jakob Borg
47d67d3985 Merge remote-tracking branch 'origin/pr/1087'
* origin/pr/1087:
  Folder/device panel header with progress indicator
2014-12-06 12:32:43 +01:00
Jakob Borg
1a7921b46c Merge remote-tracking branch 'origin/pr/1083'
* origin/pr/1083:
  Select repos to share with in node editor dialog (fixes #719)
2014-12-06 12:30:46 +01:00
Jakob Borg
43d569741b Merge remote-tracking branch 'origin/pr/1081'
* origin/pr/1081:
  Revisit -no-console option for Windows
2014-12-06 12:17:46 +01:00
Jakob Borg
52c6869eab Merge remote-tracking branch 'origin/pr/1082'
* origin/pr/1082:
  Scrap IsSymlink for native support on Go 1.4
2014-12-06 12:12:53 +01:00
Jakob Borg
6dff9097a2 Use Go 1.4 build environment (currently 1.4rc2) 2014-12-06 12:12:33 +01:00
Ben Schulz
4ff211662a Folder/device panel header with progress indicator 2014-12-05 18:44:38 +01:00
Audrius Butkevicius
05eab51a0d Select repos to share with in node editor dialog (fixes #719) 2014-12-05 00:22:16 +00:00
Audrius Butkevicius
604a4e7dbc Scrap IsSymlink for native support on Go 1.4
Obviously needs Go 1.4 to go back in.

I am still open to doing fix-ups on the rescan interval on Windows, which
would still allow getting rid of all the Windows code.

Frankly, we could just defer creation of links (like we defer deletions of files)
in hopes that the target gets created, and if it doesn't, well, tough luck, you'll
get a file symlink.

To be honest, nobody would even notice this 'issue' as I am sure nobody on
Windows uses symlinks.

But at the same time, this ugly code is hidden away in some creepy file in
its own module far far away, and the interface that it exports is fine-ish,
so I wouldn't mind keeping it as it is.
2014-12-04 23:02:57 +00:00
Audrius Butkevicius
80dca96ee8 Revisit -no-console option for Windows
The reason for ShowWindow as opposed to your FreeConsole is that if you start up
cmd.exe and do syncthing.exe -no-output, it actually hides the existing cmd.exe
window, as opposed to opening a separate window and then hiding it, which leaves
the existing console hanging while syncthing.exe is running.

I tried playing around with compiling as a GUI application, then, given the option
is not present, allocating a console and redirecting the std streams to the new
console, but that seems ugly as I'd have to make quite a few calls. But that does
get rid of the initial flash.
2014-12-04 21:59:40 +00:00
Jakob Borg
7f97037190 Revert "Merge pull request #1078 from syncthing/go1.4"
This reverts commit b658afd857, reversing
changes made to 591c5dabf4.
2014-12-04 22:24:49 +01:00
Jakob Borg
b658afd857 Merge pull request #1078 from syncthing/go1.4
Use Go 1.4 build env
2014-12-04 20:45:48 +01:00
Audrius Butkevicius
992ad97ad5 Cache file descriptors 2014-12-04 16:18:47 +00:00
Jakob Borg
5af6cbae2c Verify Windows support, report appropriate errors when unsupported 2014-12-04 12:10:25 +01:00
Jakob Borg
2abe792f36 Use same order of parameters as os.Symlink 2014-12-04 11:53:55 +01:00
Jakob Borg
1ff9bb8fdc Remove Windows specific implementation 2014-12-04 06:59:30 +01:00
Jakob Borg
5cb1039daf Use Go 1.4 build environment (currently 1.4rc2) 2014-12-04 06:59:13 +01:00
Jakob Borg
591c5dabf4 Merge pull request #1077 from AudriusButkevicius/round
Avoid rounding errors (fixes #1068)
2014-12-04 05:52:29 +01:00
Audrius Butkevicius
770fff287e Avoid rounding errors (fixes #1068) 2014-12-03 23:44:39 +00:00
Jakob Borg
cea7a179ae Utility to print all info we know about a path 2014-12-03 11:32:10 +01:00
Audrius Butkevicius
d80c40cfbf Merge pull request #1072 from syncthing/rewrite-ignore-cache
Rewrite ignore cache
2014-12-03 09:50:05 +00:00
Jakob Borg
12e83374e9 Verify that a symlink can be removed 2014-12-03 09:05:01 +01:00
Jakob Borg
98344d2e5e Rewrite ignores to fix data race, use fewer maps 2014-12-03 08:39:59 +01:00
Jakob Borg
99dc1eec50 Map is a reference type, does not need * here 2014-12-03 08:39:59 +01:00
Jakob Borg
2a886576a6 Fix announce timers on Solaris (and others, given the right timing) (...)
In the successful case, we start the timer with NewTimer(0), then do a
bunch of stuff during which time it can fire, then reset it with
Reset(0). The result is that two timer firings are queued when we enter
the select loop, so we do two announcements back to back and fail the
tests.
2014-12-03 08:36:45 +01:00
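The general fix pattern for this class of bug, as a hedged sketch (the actual change lives in the discovery code; this only shows the mechanics): stop and drain a timer that may already have fired before calling Reset, so exactly one firing is pending when the select loop is entered.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        timer := time.NewTimer(0)

        // ... setup work, during which the timer can fire ...
        time.Sleep(10 * time.Millisecond)

        // Stop returns false if the timer already fired; drain the stale
        // value so Reset doesn't leave two firings queued.
        if !timer.Stop() {
            select {
            case <-timer.C:
            default:
            }
        }
        timer.Reset(0)

        <-timer.C
        fmt.Println("announce once, not twice")
    }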
Jakob Borg
919d005550 Print detected data races to stdout instead of hiding in a file 2014-12-03 07:47:40 +01:00
Jakob Borg
97abdaca5a Merge pull request #1070 from AudriusButkevicius/staggered
Use unique versions in staggered versioner (fixes #1063)
2014-12-02 23:27:01 +01:00
Jakob Borg
9cc8b7c858 Simple smoke test for parallel scans 2014-12-02 22:13:08 +01:00
Jakob Borg
0726472b91 Update test configs to v7 2014-12-02 22:05:15 +01:00
Audrius Butkevicius
3cbe92d797 Use unique versions in staggered versioner (fixes #1063) 2014-12-02 19:04:12 +00:00
Jakob Borg
72a278c9ed Merge pull request #1065 from syncthing/coc
Add Code of Conduct
2014-12-02 16:22:13 +01:00
Jakob Borg
e567c8adce Merge pull request #1064 from syncthing/contributors
Clarify/formalize contribution policy and commit access
2014-12-02 16:21:36 +01:00
Jakob Borg
dde8045109 Add Code of Conduct 2014-12-02 15:57:31 +01:00
Jakob Borg
c922c4c383 Clarify/formalize contribution policy and commit access 2014-12-02 15:55:45 +01:00
Audrius Butkevicius
bc8907e90d Check if announcement data is available 2014-12-01 19:53:13 +00:00
Jakob Borg
a8ba7786ae Reinstate 'Shared With' until a better alternative emerges (ref #1054) 2014-12-01 20:50:27 +01:00
Jakob Borg
c734e48ad0 Merge pull request #1052 from AudriusButkevicius/disco4
Change to URL based announce server addresses (fixes #943)
2014-12-01 17:53:48 +01:00
Audrius Butkevicius
d30d0b29a9 Fix CSS 2014-12-01 10:30:38 +00:00
Audrius Butkevicius
2912defb97 Add tests for new discovery 2014-12-01 10:30:25 +00:00
Audrius Butkevicius
69f8ac6b56 Change to URL based announce server addresses (fixes #943) 2014-12-01 10:30:25 +00:00
Jakob Borg
e7441ff6e8 DisableSymlinks -> !SymlinksEnabled 2014-12-01 11:27:07 +01:00
Jakob Borg
8a34158fa4 Merge pull request #1053 from AudriusButkevicius/symdis
Add option to disable symlinks (fixes #1017)
2014-12-01 11:22:04 +01:00
Jakob Borg
8d2a6d96f2 Shorter Global Discovery label 2014-12-01 11:14:11 +01:00
Jakob Borg
bb50b677c7 Merge pull request #1037 from snnd/locale-service
Added Locale Service. Minor Controller Refactoring.
2014-12-01 10:38:44 +01:00
Jakob Borg
0fde4b3b2e Use runtime info to determine ARM version for upgrade (fixes #1051) 2014-12-01 10:24:13 +01:00
Dennis Wilson
ee9c109f07 add locale service to GUI. minor cleanup of controller. 2014-12-01 10:00:03 +01:00
Jakob Borg
1219423091 Revert "Figure out GOARM without being told (ref #1051)"
This reverts commit 2d7b0cf94d.

GOARM is not actually embedded and printed by "go env"
2014-12-01 09:39:57 +01:00
Jakob Borg
cf00ab854f Translation update (fixes #1054) 2014-12-01 09:13:58 +01:00
Jakob Borg
c417dcb7e2 Repair Rescan button, cleanup CSS (fixes #1054) 2014-12-01 09:11:16 +01:00
Audrius Butkevicius
7ad711f554 Add option to disable symlinks (fixes #1017) 2014-11-30 22:10:32 +00:00
Jakob Borg
2d7b0cf94d Figure out GOARM without being told (ref #1051) 2014-11-30 21:46:00 +01:00
Jakob Borg
d669c07e8a Increase allowed test runtimes (fixes #1049) 2014-11-30 21:21:37 +01:00
Jakob Borg
8bd52946b4 Merge pull request #1048 from asdil12/goarm
Directly accept GOARM env var for ARM version
2014-11-30 20:57:49 +01:00
Jakob Borg
27e81637be Add asdil12 2014-11-30 20:57:34 +01:00
Jakob Borg
5c67e27a30 Use CSS column layouts in About box 2014-11-30 20:49:49 +01:00
Dominik Heidler
59af9809fe Directly accept GOARM env var for ARM version
As GOARCH defaults to 'arm' on ARM systems, this allows packagers to
specify the ARM version by setting the GOARM env var to 5, 6 or 7.
2014-11-30 17:08:43 +01:00
Jakob Borg
a564510c49 Homogenize folder and device state to 'Up to Date' (fixes #1042) 2014-11-30 13:45:08 +01:00
Jakob Borg
285b614927 Translation update 2014-11-30 13:38:05 +01:00
Audrius Butkevicius
fd2d2c035e Add support for multiple announce servers (fixes #677)
Somebody owes me a beer.
2014-11-30 13:25:06 +01:00
Jakob Borg
78981862be Silence verbose docker build output 2014-11-29 08:30:41 +01:00
Jakob Borg
7f1253ff83 Revert "golang.org/x/tools in Dockerfile"
This reverts commit 5dd5602229.
2014-11-30 10:42:31 +01:00
Jakob Borg
e0265aed05 Increase read timeout on HTTP server, try to not run out of sockets in stress test 2014-11-30 10:38:39 +01:00
Jakob Borg
9d36d88a65 Build std for race in Docker image 2014-11-30 00:30:23 +01:00
Jakob Borg
5dd5602229 golang.org/x/tools in Dockerfile 2014-11-30 00:18:24 +01:00
Jakob Borg
126c4e9a06 Dependency update, new golang.org/x package names 2014-11-30 00:17:00 +01:00
Jakob Borg
5dbaf6ceb0 Use short integration tests by default 2014-11-30 00:07:36 +01:00
Jakob Borg
90de5659ea Data race: sharedPullerState WriteAt+Close 2014-11-29 23:51:53 +01:00
Jakob Borg
367e50edab Fixup integration tests for race detector 2014-11-29 23:41:06 +01:00
Jakob Borg
42b8dafafe Data race: can't access sharedPullerState.closed from the outside 2014-11-29 23:18:56 +01:00
Jakob Borg
577aaf8ad6 Data race: Discoverer.registryLock must cover the contents of registry as well 2014-11-29 23:04:25 +01:00
Jakob Borg
07cdf0364c Data race: ProgressEmitter (debug output only) 2014-11-29 22:51:13 +01:00
Jakob Borg
7f829f0159 Data race: broken locking on model.folderIgnores 2014-11-29 22:38:08 +01:00
Jakob Borg
a918aa97d9 Data race: deviceActivity methods with value receiver :( 2014-11-29 22:38:08 +01:00
Jakob Borg
4fdecc9b85 Run integration tests with -race (fixes #1043) 2014-11-29 22:38:04 +01:00
Jakob Borg
7af25c785d Don't send unnecessary SNI in TLS handshake 2014-11-29 20:58:24 +01:00
Jakob Borg
4de39b205d Only color status text, not panel headings (fixes #1039) 2014-11-29 13:08:00 +01:00
Jakob Borg
2748a2e97f Mark unused devices as 'Unused' and in warning color, show folders per device (fixes #962) 2014-11-29 09:43:05 +01:00
Jakob Borg
2926bbfe15 Mark unshared folders as 'Unshared' and in warning color (fixes #962) 2014-11-29 09:42:51 +01:00
Audrius Butkevicius
254c63763a Remove top margin from checkboxes (fixes #1036) 2014-11-28 15:17:02 +00:00
Jakob Borg
2de834f1f4 Place list of devices to share with in columns, in supported browsers 2014-11-27 21:34:24 +01:00
Jakob Borg
7273eab80e Clean up device panel (...) (ref #964)
- Remove "Synchronization"
- Hide "Compression" when default (on)
- Hide "Introducer" when default (off)
2014-11-27 20:46:36 +01:00
Jakob Borg
13e79c777a Clean up folder panel (...) (fixes #964)
- Remove ID
- Hide "Out of sync" when in sync
- Hide "Folder master" when default (not master)
- Hide "Ignore permissions" when default (not ignored)
- Hide "Rescan interval" when default (60 seconds)
2014-11-27 20:43:00 +01:00
Jakob Borg
8aa7d4b463 Lower the bar for when to stop restarting (fixes #1004) 2014-11-27 20:34:35 +01:00
Jakob Borg
5251f1c9db Use a separate, unique ID for usage reporting (fixes #1000) 2014-11-27 10:00:07 +01:00
Jakob Borg
82e923dfc8 Add kozec 2014-11-26 23:25:52 +01:00
Jakob Borg
decf16b92c Merge pull request #1029 from kozec/master
Add STNOUPGRADE environment variable to prevents autoupgrades
2014-11-26 23:25:45 +01:00
kozec
b84d960a81 Added STNOUPGRADE environment variable; prevents autoupgrades regardless of configuration. 2014-11-26 22:17:01 +01:00
Jakob Borg
34cb305755 Report all rates in bytes per second (fixes #934) 2014-11-26 17:30:52 +01:00
Jakob Borg
ed85bfa915 Don't perform external discovery lookups until local cache has had time to warm (fixes #666) 2014-11-26 17:23:15 +01:00
Jakob Borg
06ef33ff5e Translation strings for new functionality 2014-11-26 13:47:17 +01:00
Jakob Borg
57f121178c Update translate/transifex for new GUI paths 2014-11-26 13:46:34 +01:00
Dennis Wilson
3b88ee623b GUI Rework: reorganized folders and split app.js 2014-11-26 13:43:38 +01:00
Jakob Borg
8588625937 Add snnd 2014-11-26 13:43:26 +01:00
Jakob Borg
3417839726 Merge pull request #1024 from AudriusButkevicius/regexp
Fix versioner regexp's (fixes #1023)
2014-11-26 13:22:57 +01:00
Audrius Butkevicius
c1069052ae Fix versioner regexp's (fixes #1023) 2014-11-25 22:32:18 +00:00
Audrius Butkevicius
ea17542e4b Change progress emitter
1. Do not use cached value for BytesCompleted
2. Refactor JS a bit
3. Allow disabling progress emitter
2014-11-25 22:07:18 +00:00
Audrius Butkevicius
c7d779fe88 Fix tests on Windows 2014-11-25 21:27:10 +00:00
Audrius Butkevicius
a70f3f12c5 Merge pull request #999 from piobpl/master
Showing detailed sync progress (fixes #476)
2014-11-25 20:55:12 +00:00
piobpl
90a31589bb Showing detailed sync progress (fixes #476)
based on commit by Audrius Butkevicius <audrius.butkevicius@gmail.com>
2014-11-25 20:18:35 +01:00
Jakob Borg
b48d9a3a82 Don't panic when lacking symlink support on XP (fixes #1016) 2014-11-24 23:32:11 +01:00
Jakob Borg
0255311bbe Note about IRC channel 2014-11-24 23:07:30 +01:00
Audrius Butkevicius
bd91519df9 Add aria label on cog (closes #1020) 2014-11-24 21:14:14 +00:00
Jakob Borg
58fe8b0cf1 Add example for Solaris SMF running 2014-11-24 13:59:59 +01:00
Jakob Borg
064aa64f20 Point to etc dir in README 2014-11-24 13:49:18 +01:00
Jakob Borg
d9f79853fb Include etc dir in Unix builds 2014-11-24 13:49:15 +01:00
Jakob Borg
2e68ee5c8b Add example for Mac OS X background running 2014-11-24 13:49:15 +01:00
Jakob Borg
a9544ca890 Add example for runit service 2014-11-24 13:48:42 +01:00
463 changed files with 39854 additions and 23503 deletions

.gitattributes vendored Normal file

@@ -0,0 +1,9 @@
# Text files use LF line endings in this repository
* text=auto
# Except the dependencies, which we leave alone
Godeps/** -text=auto
# Diffs on these files are meaningless
gui.files.go -diff
*.svg -diff

.gitignore vendored

@@ -1,12 +1,18 @@
syncthing
./syncthing
syncthing.exe
*.tar.gz
*.zip
*.asc
*.sublime*
.idea/
.jshintrc
coverage.out
files/pidx
bin
perfstats*.csv
coverage.xml
!gui/scripts/syncthing
.DS_Store
syncthing.md5
syncthing.exe.md5
RELEASE

.mailmap Symbolic link

@@ -0,0 +1 @@
NICKS

AUTHORS

@@ -5,25 +5,51 @@ Alexander Graf <register-github@alex-graf.de>
Andrew Dunham <andrew@du.nham.ca>
Audrius Butkevicius <audrius.butkevicius@gmail.com>
Arthur Axel fREW Schmidt <frew@afoolishmanifesto.com> <frioux@gmail.com>
Ben Curthoys <ben@bencurthoys.com>
Ben Schulz <ueomkail@gmail.com> <uok@users.noreply.github.com>
Ben Sidhom <bsidhom@gmail.com>
Brandon Philips <brandon@ifup.org>
Brendan Long <self@brendanlong.com>
Caleb Callaway <enlightened.despot@gmail.com>
Carsten Hagemann <moter8@gmail.com>
Cathryne Linenweaver <cathryne.linenweaver@gmail.com> <Cathryne@users.noreply.github.com>
Chris Joel <chris@scriptolo.gy>
Colin Kennedy <moshen.colin@gmail.com>
Daniel Martí <mvdan@mvdan.cc>
Dennis Wilson <dw@risu.io>
Dominik Heidler <dominik@heidler.eu>
Elias Jarlebring <jarlebring@gmail.com>
Emil Hessman <emil@hessman.se>
Federico Castagnini <federico.castagnini@gmail.com>
Felix Ableitner <me@nutomic.com>
Felix Unterpaintner <bigbear2nd@gmail.com>
Francois-Xavier Gsell <fxgsell@gmail.com>
Gilli Sigurdsson <gilli@vx.is>
Jakob Borg <jakob@nym.se>
James Patterson <jamespatterson@operamail.com> <jpjp@users.noreply.github.com>
Jaroslav Malec <dzardacz@gmail.com>
Jens Diemer <github.com@jensdiemer.de> <git@jensdiemer.de>
Jochen Voss <voss@seehuhn.de>
Johan Vromans <jvromans@squirrel.nl>
Karol Różycki <rozycki.karol@gmail.com>
Ken'ichi Kamada <kamada@nanohz.org>
Lode Hoste <zillode@zillode.be>
Marcin Dziadus <dziadus.marcin@gmail.com>
Marc Laporte <marc@marclaporte.com> <marc@laporte.name>
Marc Pujol <kilburn@la3.org>
Michael Jephcote <rewt0r@gmx.com> <Rewt0r@users.noreply.github.com>
Michael Tilli <pyfisch@gmail.com>
Pascal Jungblut <github@pascalj.com> <mail@pascal-jungblut.com>
Peter Hoeg <peter@speartail.com>
Philippe Schommers <philippe@schommers.be>
Phill Luby <phill.luby@newredo.com>
Piotr Bejda <piotrb10@gmail.com>
Ryan Sullivan <kayoticsully@gmail.com>
Sergey Mishin <ralder@yandex.ru>
Stefan Tatschner <stefan@sevenbyte.org>
Tim Abell <tim@timwise.co.uk>
Tobias Nygren <tnn@nygren.pp.se>
Tomas Cerveny <kozec@kozec.com>
Tully Robinson <tully@tojr.org>
Veeti Paananen <veeti.paananen@rojekti.fi>
Vil Brekin <vilbrekin@gmail.com>

CONDUCT.md Normal file

@@ -0,0 +1,92 @@
## Conduct
* We are committed to providing a friendly, safe and welcoming
environment for all, regardless of gender, sexual orientation,
disability, ethnicity, religion, or similar personal characteristic.
* On IRC, please avoid using overtly sexual nicknames or other nicknames
that might detract from a friendly, safe and welcoming environment for
all.
* Please be kind and courteous. There's no need to be mean or rude.
* Respect that people have differences of opinion and that every design
or implementation choice carries a trade-off and numerous costs. There
is seldom a right answer.
* Please keep unstructured critique to a minimum. If you have solid
ideas you want to experiment with, make a fork and see how it works.
* We will exclude you from interaction if you insult, demean or harass
anyone. That is not welcome behaviour. We interpret the term
"harassment" as including the definition in the <a
href="http://citizencodeofconduct.org/">Citizen Code of Conduct</a>;
if you have any lack of clarity about what might be included in that
concept, please read their definition. In particular, we don't
tolerate behavior that excludes people in socially marginalized
groups.
* Private harassment is also unacceptable. No matter who you are, if you
feel you have been or are being harassed or made uncomfortable by a
community member, please contact one of the channel ops or any of the
Syncthing core team immediately. Whether you're a regular contributor
or a newcomer, we care about making this community a safe place for
you and we've got your back.
* Likewise any spamming, trolling, flaming, baiting or other
attention-stealing behaviour is not welcome.
## Moderation
These are the policies for upholding our community's standards of
conduct in our communication channels, most notably in Syncthing-related
IRC channels and on the web forum.
1. Remarks that violate the Syncthing standards of conduct, including
hateful, hurtful, oppressive, or exclusionary remarks, are not
allowed. (Cursing is allowed, but never targeting another user, and
never in a hateful manner.)
2. Remarks that moderators find inappropriate, whether listed in the
code of conduct or not, are also not allowed.
3. Moderators will first respond to such remarks with a warning.
4. If the warning is unheeded, the user will be "kicked," i.e., kicked
out of the communication channel to cool off.
5. If the user comes back and continues to make trouble, they will be
banned, i.e., indefinitely excluded.
6. Moderators may choose at their discretion to un-ban the user if it
was a first offense and they offer the offended party a genuine
apology.
7. If a moderator bans someone and you think it was unjustified, please
take it up with that moderator, or with a different moderator, **in
private**. Complaints about bans in-channel are not allowed.
8. Moderators are held to a higher standard than other community
members. If a moderator creates an inappropriate situation, they
should expect less leeway than others.
In the Syncthing community we strive to go the extra step to look out
for each other. Don't just aim to be technically unimpeachable, try to
be your best self. In particular, avoid flirting with offensive or
sensitive issues, particularly if they're off-topic; this all too
often leads to unnecessary fights, hurt feelings, and damaged trust;
worse, it can drive people away from the community entirely.
And if someone takes issue with something you said or did, resist the
urge to be defensive. Just stop doing what it was they complained about
and apologize. Even if you feel you were misinterpreted or unfairly
accused, chances are good there was something you could've communicated
better — remember that it's your responsibility to make your fellow
community members comfortable. Everyone wants to get along and we are
all here first and foremost because we want to talk about cool
technology. You will find that people will be eager to assume good
intent and forgive as long as you earn their trust.
*Adapted from the [Rust Code of Conduct](https://github.com/rust-lang/rust/wiki/Note-development-policy#conduct)*
*Adapted from the [Node.js Policy on Trolling](http://blog.izs.me/post/30036893703/policy-on-trolling)*

CONTRIBUTING.md

@@ -33,10 +33,40 @@ latest info on Transifex.
Every contribution is welcome. If you want to contribute but are unsure
where to start, any open issues are fair game! Be prepared for a
[certain amount of review](https://discourse.syncthing.net/t/733); it's
all in the name of quality. :) Following the points below will make this
[certain amount of review](https://github.com/syncthing/syncthing/wiki/FAQ#why-are-you-being-so-hard-on-my-pull-request);
it's all in the name of quality. :) Following the points below will make this
a smoother process.
Individuals making significant and valuable contributions are given
commit-access to the project. If you make a significant contribution and
are not considered for commit-access, please contact any of the
Syncthing core team members.
All nontrivial contributions should go through the pull request
mechanism for internal review. Determining what is "nontrivial" is left
at the discretion of the contributor.
### Authorship
All code authors are listed in the AUTHORS file. Commits must be made
with the same name and email as listed in the AUTHORS file. To
accomplish this, ensure that your git configuration is set correctly
prior to making your first commit;
$ git config --global user.name "Jane Doe"
$ git config --global user.email janedoe@example.com
You must be reachable on the given email address. If you do not wish to
use your real name for whatever reason, using a nickname or pseudonym is
perfectly acceptable.
### Core Team
The Syncthing core team currently consists of the following members;
- Jakob Borg (@calmh)
- Audrius Butkevicius (@AudriusButkevicius)
## Coding Style
- Follow the conventions laid out in [Effective Go](https://golang.org/doc/effective_go.html)
@@ -59,12 +89,12 @@ a smoother process.
feature should probably be a single commit based on the current
`master` branch. You may be asked to "rebase" or "squash" your pull
request to make sure this is the case, especially if there have been
amendments during review.
amendments during review.
## Licensing
All contributions are made under the same GPL license as the rest of the
project, except documentation, user interface text and translation
All contributions are made under the same MPLv2 license as the rest of
the project, except documentation, user interface text and translation
strings which are licensed under the Creative Commons Attribution 4.0
International License. You retain the copyright to code you have
written.
@@ -75,8 +105,8 @@ add yourself as a separate commit in your first pull request.
## Building
[See the documentation](http://discourse.syncthing.net/t/44) on how to
get started with a build environment.
[See the documentation](https://github.com/syncthing/syncthing/wiki/Building)
on how to get started with a build environment.
## Branches
@@ -103,8 +133,8 @@ Yes please!
## Documentation
[Over here!](http://discourse.syncthing.net/category/documentation)
[Over here!](https://github.com/syncthing/syncthing/wiki)
## License
GPLv3
MPLv2

Godeps/Godeps.json generated

@@ -1,61 +1,49 @@
{
"ImportPath": "github.com/syncthing/syncthing",
"GoVersion": "go1.4rc1",
"GoVersion": "go1.4",
"Packages": [
"./cmd/..."
],
"Deps": [
{
"ImportPath": "code.google.com/p/go.crypto/bcrypt",
"Comment": "null-216",
"Rev": "41cd4647fccc72b0b79ef1bd1fe6735e718257cd"
},
{
"ImportPath": "code.google.com/p/go.crypto/blowfish",
"Comment": "null-216",
"Rev": "41cd4647fccc72b0b79ef1bd1fe6735e718257cd"
},
{
"ImportPath": "code.google.com/p/go.text/transform",
"Comment": "null-90",
"Rev": "d65bffbc88a153d23a6d2a864531e6e7c2cde59b"
},
{
"ImportPath": "code.google.com/p/go.text/unicode/norm",
"Comment": "null-90",
"Rev": "d65bffbc88a153d23a6d2a864531e6e7c2cde59b"
},
{
"ImportPath": "github.com/AudriusButkevicius/lfu-go",
"Rev": "164bcecceb92fd6037f4d18a8d97b495ec6ef669"
},
{
"ImportPath": "github.com/bkaradzic/go-lz4",
"Rev": "93a831dcee242be64a9cc9803dda84af25932de7"
},
{
"ImportPath": "github.com/calmh/logger",
"Rev": "f50d32b313bec2933a3e1049f7416a29f3413d29"
"Rev": "4d4e2801954c5581e4c2a80a3d3beb3b3645fd04"
},
{
"ImportPath": "github.com/calmh/osext",
"Rev": "9bf61584e5f1f172e8766ddc9022d9c401faaa5e"
"ImportPath": "github.com/calmh/luhn",
"Rev": "0c8388ff95fa92d4094011e5a04fc99dea3d1632"
},
{
"ImportPath": "github.com/calmh/xdr",
"Rev": "ec3d404f43731551258977b38dd72cf557d00398"
"Rev": "5f7208e86762911861c94f1849eddbfc0a60cbf0"
},
{
"ImportPath": "github.com/juju/ratelimit",
"Rev": "f9f36d11773655c0485207f0ad30dc2655f69d56"
"Rev": "c5abe513796336ee2869745bff0638508450e9c5"
},
{
"ImportPath": "github.com/kardianos/osext",
"Rev": "efacde03154693404c65e7aa7d461ac9014acd0c"
},
{
"ImportPath": "github.com/syncthing/protocol",
"Rev": "e7db2648034fb71b051902a02bc25d4468ed492e"
},
{
"ImportPath": "github.com/syndtr/goleveldb/leveldb",
"Rev": "97e257099d2ab9578151ba85e2641e2cd14d3ca8"
"Rev": "87e4e645d80ae9c537e8f2dee52b28036a5dd75e"
},
{
"ImportPath": "github.com/syndtr/gosnappy/snappy",
"Rev": "ce8acff4829e0c2458a67ead32390ac0a381c862"
"Rev": "156a073208e131d7d2e212cb749feae7c339e846"
},
{
"ImportPath": "github.com/thejerf/suture",
"Rev": "ff19fb384c3fe30f42717967eaa69da91e5f317c"
},
{
"ImportPath": "github.com/vitrun/qart/coding",
@@ -68,6 +56,22 @@
{
"ImportPath": "github.com/vitrun/qart/qr",
"Rev": "ccb109cf25f0cd24474da73b9fee4e7a3e8a8ce0"
},
{
"ImportPath": "golang.org/x/crypto/bcrypt",
"Rev": "c57d4a71915a248dbad846d60825145062b4c18e"
},
{
"ImportPath": "golang.org/x/crypto/blowfish",
"Rev": "c57d4a71915a248dbad846d60825145062b4c18e"
},
{
"ImportPath": "golang.org/x/text/transform",
"Rev": "2076e9cab4147459c82bc81169e46c139d358547"
},
{
"ImportPath": "golang.org/x/text/unicode/norm",
"Rev": "2076e9cab4147459c82bc81169e46c139d358547"
}
]
}


@@ -1,30 +0,0 @@
# Copyright 2011 The Go Authors. All rights reserved.
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file.
maketables: maketables.go triegen.go
go build $^
maketesttables: maketesttables.go triegen.go
go build $^
normregtest: normregtest.go
go build $^
tables: maketables
./maketables > tables.go
gofmt -w tables.go
trietesttables: maketesttables
./maketesttables > triedata_test.go
gofmt -w triedata_test.go
# Downloads from www.unicode.org, so not part
# of standard test scripts.
test: testtables regtest
testtables: maketables
./maketables -test > data_test.go && go test -tags=test
regtest: normregtest
./normregtest


@@ -1,45 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
// Generate test data for trie code.
package main
import (
"fmt"
)
func main() {
printTestTables()
}
// We take the smallest, largest and an arbitrary value for each
// of the UTF-8 sequence lengths.
var testRunes = []rune{
0x01, 0x0C, 0x7F, // 1-byte sequences
0x80, 0x100, 0x7FF, // 2-byte sequences
0x800, 0x999, 0xFFFF, // 3-byte sequences
0x10000, 0x10101, 0x10FFFF, // 4-byte sequences
0x200, 0x201, 0x202, 0x210, 0x215, // five entries in one sparse block
}
const fileHeader = `// Generated by running
// maketesttables
// DO NOT EDIT
package norm
`
func printTestTables() {
fmt.Print(fileHeader)
fmt.Printf("var testRunes = %#v\n\n", testRunes)
t := newNode()
for i, r := range testRunes {
t.insert(r, uint16(i))
}
t.printTables("testdata")
}

File diff suppressed because it is too large


@@ -1,232 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package norm
type valueRange struct {
value uint16 // header: value:stride
lo, hi byte // header: lo:n
}
type trie struct {
index []uint8
values []uint16
sparse []valueRange
sparseOffset []uint16
cutoff uint8 // indices >= cutoff are sparse
}
// lookupValue determines the type of block n and looks up the value for b.
// For n < t.cutoff, the block is a simple lookup table. Otherwise, the block
// is a list of ranges with an accompanying value. Given a matching range r,
// the value for b is by r.value + (b - r.lo) * stride.
func (t *trie) lookupValue(n uint8, b byte) uint16 {
if n < t.cutoff {
return t.values[uint16(n)<<6+uint16(b)]
}
offset := t.sparseOffset[n-t.cutoff]
header := t.sparse[offset]
lo := offset + 1
hi := lo + uint16(header.lo)
for lo < hi {
m := lo + (hi-lo)/2
r := t.sparse[m]
if r.lo <= b && b <= r.hi {
return r.value + uint16(b-r.lo)*header.value
}
if b < r.lo {
hi = m
} else {
lo = m + 1
}
}
return 0
}
const (
t1 = 0x00 // 0000 0000
tx = 0x80 // 1000 0000
t2 = 0xC0 // 1100 0000
t3 = 0xE0 // 1110 0000
t4 = 0xF0 // 1111 0000
t5 = 0xF8 // 1111 1000
t6 = 0xFC // 1111 1100
te = 0xFE // 1111 1110
)
// lookup returns the trie value for the first UTF-8 encoding in s and
// the width in bytes of this encoding. The size will be 0 if s does not
// hold enough bytes to complete the encoding. len(s) must be greater than 0.
func (t *trie) lookup(s []byte) (v uint16, sz int) {
c0 := s[0]
switch {
case c0 < tx:
return t.values[c0], 1
case c0 < t2:
return 0, 1
case c0 < t3:
if len(s) < 2 {
return 0, 0
}
i := t.index[c0]
c1 := s[1]
if c1 < tx || t2 <= c1 {
return 0, 1
}
return t.lookupValue(i, c1), 2
case c0 < t4:
if len(s) < 3 {
return 0, 0
}
i := t.index[c0]
c1 := s[1]
if c1 < tx || t2 <= c1 {
return 0, 1
}
o := uint16(i)<<6 + uint16(c1)
i = t.index[o]
c2 := s[2]
if c2 < tx || t2 <= c2 {
return 0, 2
}
return t.lookupValue(i, c2), 3
case c0 < t5:
if len(s) < 4 {
return 0, 0
}
i := t.index[c0]
c1 := s[1]
if c1 < tx || t2 <= c1 {
return 0, 1
}
o := uint16(i)<<6 + uint16(c1)
i = t.index[o]
c2 := s[2]
if c2 < tx || t2 <= c2 {
return 0, 2
}
o = uint16(i)<<6 + uint16(c2)
i = t.index[o]
c3 := s[3]
if c3 < tx || t2 <= c3 {
return 0, 3
}
return t.lookupValue(i, c3), 4
}
// Illegal rune
return 0, 1
}
// lookupString returns the trie value for the first UTF-8 encoding in s and
// the width in bytes of this encoding. The size will be 0 if s does not
// hold enough bytes to complete the encoding. len(s) must be greater than 0.
func (t *trie) lookupString(s string) (v uint16, sz int) {
c0 := s[0]
switch {
case c0 < tx:
return t.values[c0], 1
case c0 < t2:
return 0, 1
case c0 < t3:
if len(s) < 2 {
return 0, 0
}
i := t.index[c0]
c1 := s[1]
if c1 < tx || t2 <= c1 {
return 0, 1
}
return t.lookupValue(i, c1), 2
case c0 < t4:
if len(s) < 3 {
return 0, 0
}
i := t.index[c0]
c1 := s[1]
if c1 < tx || t2 <= c1 {
return 0, 1
}
o := uint16(i)<<6 + uint16(c1)
i = t.index[o]
c2 := s[2]
if c2 < tx || t2 <= c2 {
return 0, 2
}
return t.lookupValue(i, c2), 3
case c0 < t5:
if len(s) < 4 {
return 0, 0
}
i := t.index[c0]
c1 := s[1]
if c1 < tx || t2 <= c1 {
return 0, 1
}
o := uint16(i)<<6 + uint16(c1)
i = t.index[o]
c2 := s[2]
if c2 < tx || t2 <= c2 {
return 0, 2
}
o = uint16(i)<<6 + uint16(c2)
i = t.index[o]
c3 := s[3]
if c3 < tx || t2 <= c3 {
return 0, 3
}
return t.lookupValue(i, c3), 4
}
// Illegal rune
return 0, 1
}
// lookupUnsafe returns the trie value for the first UTF-8 encoding in s.
// s must hold a full encoding.
func (t *trie) lookupUnsafe(s []byte) uint16 {
c0 := s[0]
if c0 < tx {
return t.values[c0]
}
if c0 < t2 {
return 0
}
i := t.index[c0]
if c0 < t3 {
return t.lookupValue(i, s[1])
}
i = t.index[uint16(i)<<6+uint16(s[1])]
if c0 < t4 {
return t.lookupValue(i, s[2])
}
i = t.index[uint16(i)<<6+uint16(s[2])]
if c0 < t5 {
return t.lookupValue(i, s[3])
}
return 0
}
// lookupStringUnsafe returns the trie value for the first UTF-8 encoding in s.
// s must hold a full encoding.
func (t *trie) lookupStringUnsafe(s string) uint16 {
c0 := s[0]
if c0 < tx {
return t.values[c0]
}
if c0 < t2 {
return 0
}
i := t.index[c0]
if c0 < t3 {
return t.lookupValue(i, s[1])
}
i = t.index[uint16(i)<<6+uint16(s[1])]
if c0 < t4 {
return t.lookupValue(i, s[2])
}
i = t.index[uint16(i)<<6+uint16(s[2])]
if c0 < t5 {
return t.lookupValue(i, s[3])
}
return 0
}


@@ -1,152 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package norm
import (
"testing"
"unicode/utf8"
)
// Test data is located in triedata_test.go; generated by maketesttables.
var testdata = testdataTrie
type rangeTest struct {
block uint8
lookup byte
result uint16
table []valueRange
offsets []uint16
}
var range1Off = []uint16{0, 2}
var range1 = []valueRange{
{0, 1, 0},
{1, 0x80, 0x80},
{0, 2, 0},
{1, 0x80, 0x80},
{9, 0xff, 0xff},
}
var rangeTests = []rangeTest{
{10, 0x80, 1, range1, range1Off},
{10, 0x00, 0, range1, range1Off},
{11, 0x80, 1, range1, range1Off},
{11, 0xff, 9, range1, range1Off},
{11, 0x00, 0, range1, range1Off},
}
func TestLookupSparse(t *testing.T) {
for i, test := range rangeTests {
n := trie{sparse: test.table, sparseOffset: test.offsets, cutoff: 10}
v := n.lookupValue(test.block, test.lookup)
if v != test.result {
t.Errorf("LookupSparse:%d: found %X; want %X", i, v, test.result)
}
}
}
// Test cases for illegal runes.
type trietest struct {
size int
bytes []byte
}
var tests = []trietest{
// illegal runes
{1, []byte{0x80}},
{1, []byte{0xFF}},
{1, []byte{t2, tx - 1}},
{1, []byte{t2, t2}},
{2, []byte{t3, tx, tx - 1}},
{2, []byte{t3, tx, t2}},
{1, []byte{t3, tx - 1, tx}},
{3, []byte{t4, tx, tx, tx - 1}},
{3, []byte{t4, tx, tx, t2}},
{1, []byte{t4, t2, tx, tx - 1}},
{2, []byte{t4, tx, t2, tx - 1}},
// short runes
{0, []byte{t2}},
{0, []byte{t3, tx}},
{0, []byte{t4, tx, tx}},
// we only support UTF-8 up to utf8.UTFMax bytes (4 bytes)
{1, []byte{t5, tx, tx, tx, tx}},
{1, []byte{t6, tx, tx, tx, tx, tx}},
}
func mkUTF8(r rune) ([]byte, int) {
var b [utf8.UTFMax]byte
sz := utf8.EncodeRune(b[:], r)
return b[:sz], sz
}
func TestLookup(t *testing.T) {
for i, tt := range testRunes {
b, szg := mkUTF8(tt)
v, szt := testdata.lookup(b)
if int(v) != i {
t.Errorf("lookup(%U): found value %#x, expected %#x", tt, v, i)
}
if szt != szg {
t.Errorf("lookup(%U): found size %d, expected %d", tt, szt, szg)
}
}
for i, tt := range tests {
v, sz := testdata.lookup(tt.bytes)
if v != 0 {
t.Errorf("lookup of illegal rune, case %d: found value %#x, expected 0", i, v)
}
if sz != tt.size {
t.Errorf("lookup of illegal rune, case %d: found size %d, expected %d", i, sz, tt.size)
}
}
// Verify defaults.
if v, _ := testdata.lookup([]byte{0xC1, 0x8C}); v != 0 {
t.Errorf("lookup of non-existing rune should be 0; found %X", v)
}
}
func TestLookupUnsafe(t *testing.T) {
for i, tt := range testRunes {
b, _ := mkUTF8(tt)
v := testdata.lookupUnsafe(b)
if int(v) != i {
t.Errorf("lookupUnsafe(%U): found value %#x, expected %#x", i, v, i)
}
}
}
func TestLookupString(t *testing.T) {
for i, tt := range testRunes {
b, szg := mkUTF8(tt)
v, szt := testdata.lookupString(string(b))
if int(v) != i {
t.Errorf("lookup(%U): found value %#x, expected %#x", i, v, i)
}
if szt != szg {
t.Errorf("lookup(%U): found size %d, expected %d", i, szt, szg)
}
}
for i, tt := range tests {
v, sz := testdata.lookupString(string(tt.bytes))
if int(v) != 0 {
t.Errorf("lookup of illegal rune, case %d: found value %#x, expected 0", i, v)
}
if sz != tt.size {
t.Errorf("lookup of illegal rune, case %d: found size %d, expected %d", i, sz, tt.size)
}
}
}
func TestLookupStringUnsafe(t *testing.T) {
for i, tt := range testRunes {
b, _ := mkUTF8(tt)
v := testdata.lookupStringUnsafe(string(b))
if int(v) != i {
t.Errorf("lookupUnsafe(%U): found value %#x, expected %#x", i, v, i)
}
}
}


@@ -1,85 +0,0 @@
// Generated by running
// maketesttables
// DO NOT EDIT
package norm
var testRunes = []int32{1, 12, 127, 128, 256, 2047, 2048, 2457, 65535, 65536, 65793, 1114111, 512, 513, 514, 528, 533}
// testdataValues: 192 entries, 384 bytes
// Block 2 is the null block.
var testdataValues = [192]uint16{
// Block 0x0, offset 0x0
0x000c: 0x0001,
// Block 0x1, offset 0x40
0x007f: 0x0002,
// Block 0x2, offset 0x80
}
// testdataSparseOffset: 10 entries, 20 bytes
var testdataSparseOffset = []uint16{0x0, 0x2, 0x4, 0x8, 0xa, 0xc, 0xe, 0x10, 0x12, 0x14}
// testdataSparseValues: 22 entries, 88 bytes
var testdataSparseValues = [22]valueRange{
// Block 0x0, offset 0x1
{value: 0x0000, lo: 0x01},
{value: 0x0003, lo: 0x80, hi: 0x80},
// Block 0x1, offset 0x2
{value: 0x0000, lo: 0x01},
{value: 0x0004, lo: 0x80, hi: 0x80},
// Block 0x2, offset 0x3
{value: 0x0001, lo: 0x03},
{value: 0x000c, lo: 0x80, hi: 0x82},
{value: 0x000f, lo: 0x90, hi: 0x90},
{value: 0x0010, lo: 0x95, hi: 0x95},
// Block 0x3, offset 0x4
{value: 0x0000, lo: 0x01},
{value: 0x0005, lo: 0xbf, hi: 0xbf},
// Block 0x4, offset 0x5
{value: 0x0000, lo: 0x01},
{value: 0x0006, lo: 0x80, hi: 0x80},
// Block 0x5, offset 0x6
{value: 0x0000, lo: 0x01},
{value: 0x0007, lo: 0x99, hi: 0x99},
// Block 0x6, offset 0x7
{value: 0x0000, lo: 0x01},
{value: 0x0008, lo: 0xbf, hi: 0xbf},
// Block 0x7, offset 0x8
{value: 0x0000, lo: 0x01},
{value: 0x0009, lo: 0x80, hi: 0x80},
// Block 0x8, offset 0x9
{value: 0x0000, lo: 0x01},
{value: 0x000a, lo: 0x81, hi: 0x81},
// Block 0x9, offset 0xa
{value: 0x0000, lo: 0x01},
{value: 0x000b, lo: 0xbf, hi: 0xbf},
}
// testdataLookup: 640 bytes
// Block 0 is the null block.
var testdataLookup = [640]uint8{
// Block 0x0, offset 0x0
// Block 0x1, offset 0x40
// Block 0x2, offset 0x80
// Block 0x3, offset 0xc0
0x0c2: 0x01, 0x0c4: 0x02,
0x0c8: 0x03,
0x0df: 0x04,
0x0e0: 0x02,
0x0ef: 0x03,
0x0f0: 0x05, 0x0f4: 0x07,
// Block 0x4, offset 0x100
0x120: 0x05, 0x126: 0x06,
// Block 0x5, offset 0x140
0x17f: 0x07,
// Block 0x6, offset 0x180
0x180: 0x08, 0x184: 0x09,
// Block 0x7, offset 0x1c0
0x1d0: 0x04,
// Block 0x8, offset 0x200
0x23f: 0x0a,
// Block 0x9, offset 0x240
0x24f: 0x06,
}
var testdataTrie = trie{testdataLookup[:], testdataValues[:], testdataSparseValues[:], testdataSparseOffset[:], 1}


@@ -1,317 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
// Trie table generator.
// Used by make*tables tools to generate a go file with trie data structures
// for mapping UTF-8 to a 16-bit value. All but the last byte in a UTF-8 byte
// sequence are used to lookup offsets in the index table to be used for the
// next byte. The last byte is used to index into a table with 16-bit values.
package main
import (
"fmt"
"hash/crc32"
"log"
"unicode/utf8"
)
const (
blockSize = 64
blockOffset = 2 // Subtract two blocks to compensate for the 0x80 added to continuation bytes.
maxSparseEntries = 16
)
// Intermediate trie structure
type trieNode struct {
table [256]*trieNode
value int
b byte
leaf bool
}
func newNode() *trieNode {
return new(trieNode)
}
func (n trieNode) String() string {
s := fmt.Sprint("trieNode{table: { non-nil at index: ")
for i, v := range n.table {
if v != nil {
s += fmt.Sprintf("%d, ", i)
}
}
s += fmt.Sprintf("}, value:%#x, b:%#x leaf:%v}", n.value, n.b, n.leaf)
return s
}
func (n trieNode) isInternal() bool {
internal := true
for i := 0; i < 256; i++ {
if nn := n.table[i]; nn != nil {
if !internal && !nn.leaf {
log.Fatalf("triegen: isInternal: node contains both leaf and non-leaf children (%v)", n)
}
internal = internal && !nn.leaf
}
}
return internal
}
func (n trieNode) mostFrequentStride() int {
counts := make(map[int]int)
v := 0
for _, t := range n.table[0x80 : 0x80+blockSize] {
if t != nil {
if stride := t.value - v; v != 0 && stride >= 0 {
counts[stride]++
}
v = t.value
} else {
v = 0
}
}
var maxs, maxc int
for stride, cnt := range counts {
if cnt > maxc || (cnt == maxc && stride < maxs) {
maxs, maxc = stride, cnt
}
}
return maxs
}
func (n trieNode) countSparseEntries() int {
stride := n.mostFrequentStride()
var count, v int
for _, t := range n.table[0x80 : 0x80+blockSize] {
tv := 0
if t != nil {
tv = t.value
}
if tv-v != stride {
if tv != 0 {
count++
}
}
v = tv
}
return count
}
func (n *trieNode) insert(r rune, value uint16) {
var p [utf8.UTFMax]byte
sz := utf8.EncodeRune(p[:], r)
for i := 0; i < sz; i++ {
if n.leaf {
log.Fatalf("triegen: insert: node (%#v) should not be a leaf", n)
}
nn := n.table[p[i]]
if nn == nil {
nn = newNode()
nn.b = p[i]
n.table[p[i]] = nn
}
n = nn
}
n.value = int(value)
n.leaf = true
}
type nodeIndex struct {
lookupBlocks []*trieNode
valueBlocks []*trieNode
sparseBlocks []*trieNode
sparseOffset []uint16
sparseCount int
lookupBlockIdx map[uint32]int
valueBlockIdx map[uint32]int
}
func newIndex() *nodeIndex {
index := &nodeIndex{}
index.lookupBlocks = make([]*trieNode, 0)
index.valueBlocks = make([]*trieNode, 0)
index.sparseBlocks = make([]*trieNode, 0)
index.sparseOffset = make([]uint16, 1)
index.lookupBlockIdx = make(map[uint32]int)
index.valueBlockIdx = make(map[uint32]int)
return index
}
func computeOffsets(index *nodeIndex, n *trieNode) int {
if n.leaf {
return n.value
}
hasher := crc32.New(crc32.MakeTable(crc32.IEEE))
// We only index continuation bytes.
for i := 0; i < blockSize; i++ {
v := 0
if nn := n.table[0x80+i]; nn != nil {
v = computeOffsets(index, nn)
}
hasher.Write([]byte{uint8(v >> 8), uint8(v)})
}
h := hasher.Sum32()
if n.isInternal() {
v, ok := index.lookupBlockIdx[h]
if !ok {
v = len(index.lookupBlocks) - blockOffset
index.lookupBlocks = append(index.lookupBlocks, n)
index.lookupBlockIdx[h] = v
}
n.value = v
} else {
v, ok := index.valueBlockIdx[h]
if !ok {
if c := n.countSparseEntries(); c > maxSparseEntries {
v = len(index.valueBlocks) - blockOffset
index.valueBlocks = append(index.valueBlocks, n)
index.valueBlockIdx[h] = v
} else {
v = -len(index.sparseOffset)
index.sparseBlocks = append(index.sparseBlocks, n)
index.sparseOffset = append(index.sparseOffset, uint16(index.sparseCount))
index.sparseCount += c + 1
index.valueBlockIdx[h] = v
}
}
n.value = v
}
return n.value
}
func printValueBlock(nr int, n *trieNode, offset int) {
boff := nr * blockSize
fmt.Printf("\n// Block %#x, offset %#x", nr, boff)
var printnewline bool
for i := 0; i < blockSize; i++ {
if i%6 == 0 {
printnewline = true
}
v := 0
if nn := n.table[i+offset]; nn != nil {
v = nn.value
}
if v != 0 {
if printnewline {
fmt.Printf("\n")
printnewline = false
}
fmt.Printf("%#04x:%#04x, ", boff+i, v)
}
}
}
func printSparseBlock(nr int, n *trieNode) {
boff := -n.value
fmt.Printf("\n// Block %#x, offset %#x", nr, boff)
v := 0
//stride := f(n)
stride := n.mostFrequentStride()
c := n.countSparseEntries()
fmt.Printf("\n{value:%#04x,lo:%#02x},", stride, uint8(c))
for i, nn := range n.table[0x80 : 0x80+blockSize] {
nv := 0
if nn != nil {
nv = nn.value
}
if nv-v != stride {
if v != 0 {
fmt.Printf(",hi:%#02x},", 0x80+i-1)
}
if nv != 0 {
fmt.Printf("\n{value:%#04x,lo:%#02x", nv, nn.b)
}
}
v = nv
}
if v != 0 {
fmt.Printf(",hi:%#02x},", 0x80+blockSize-1)
}
}
func printLookupBlock(nr int, n *trieNode, offset, cutoff int) {
boff := nr * blockSize
fmt.Printf("\n// Block %#x, offset %#x", nr, boff)
var printnewline bool
for i := 0; i < blockSize; i++ {
if i%8 == 0 {
printnewline = true
}
v := 0
if nn := n.table[i+offset]; nn != nil {
v = nn.value
}
if v != 0 {
if v < 0 {
v = -v - 1 + cutoff
}
if printnewline {
fmt.Printf("\n")
printnewline = false
}
fmt.Printf("%#03x:%#02x, ", boff+i, v)
}
}
}
// printTables returns the size in bytes of the generated tables.
func (t *trieNode) printTables(name string) int {
index := newIndex()
// Values for 7-bit ASCII are stored in the first two blocks, followed by a null block.
index.valueBlocks = append(index.valueBlocks, nil, nil, nil)
// First bytes of multi-byte UTF-8 code points are indexed in the 4th block.
index.lookupBlocks = append(index.lookupBlocks, nil, nil, nil, nil)
// Index starter bytes of multi-byte UTF-8.
for i := 0xC0; i < 0x100; i++ {
if t.table[i] != nil {
computeOffsets(index, t.table[i])
}
}
nv := len(index.valueBlocks) * blockSize
fmt.Printf("// %sValues: %d entries, %d bytes\n", name, nv, nv*2)
fmt.Printf("// Block 2 is the null block.\n")
fmt.Printf("var %sValues = [%d]uint16 {", name, nv)
printValueBlock(0, t, 0)
printValueBlock(1, t, 64)
printValueBlock(2, newNode(), 0)
for i := 3; i < len(index.valueBlocks); i++ {
printValueBlock(i, index.valueBlocks[i], 0x80)
}
fmt.Print("\n}\n\n")
ls := len(index.sparseBlocks)
fmt.Printf("// %sSparseOffset: %d entries, %d bytes\n", name, ls, ls*2)
fmt.Printf("var %sSparseOffset = %#v\n\n", name, index.sparseOffset[1:])
ns := index.sparseCount
fmt.Printf("// %sSparseValues: %d entries, %d bytes\n", name, ns, ns*4)
fmt.Printf("var %sSparseValues = [%d]valueRange {", name, ns)
for i, n := range index.sparseBlocks {
printSparseBlock(i, n)
}
fmt.Print("\n}\n\n")
cutoff := len(index.valueBlocks) - blockOffset
ni := len(index.lookupBlocks) * blockSize
fmt.Printf("// %sLookup: %d bytes\n", name, ni)
fmt.Printf("// Block 0 is the null block.\n")
fmt.Printf("var %sLookup = [%d]uint8 {", name, ni)
printLookupBlock(0, newNode(), 0, cutoff)
printLookupBlock(1, newNode(), 0, cutoff)
printLookupBlock(2, newNode(), 0, cutoff)
printLookupBlock(3, t, 0xC0, cutoff)
for i := 4; i < len(index.lookupBlocks); i++ {
printLookupBlock(i, index.lookupBlocks[i], 0x80, cutoff)
}
fmt.Print("\n}\n\n")
fmt.Printf("var %sTrie = trie{ %sLookup[:], %sValues[:], %sSparseValues[:], %sSparseOffset[:], %d}\n\n",
name, name, name, name, name, cutoff)
return nv*2 + ns*4 + ni + ls*2
}


@@ -1,19 +0,0 @@
Copyright (C) 2012 Dave Grijalva
Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the "Software"),
to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.


@@ -1,19 +0,0 @@
A simple LFU cache for Go, based on the paper [An O(1) algorithm for implementing the LFU cache eviction scheme](http://dhruvbird.com/lfu.pdf).
Usage:
```go
import "github.com/dgrijalva/lfu-go"
// Create a new cache
c := lfu.New()
// Set some values
c.Set("myKey", myValue)
// Retrieve some values
myValue = c.Get("myKey")
// Evict some values
c.Evict(1)
```
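The cache can also evict automatically and report what it drops, via the `UpperBound`, `LowerBound` and `EvictionChannel` fields defined in lfu.go; a small sketch:

```go
c := lfu.New()
c.UpperBound = 10 // when len exceeds 10 ...
c.LowerBound = 5  // ... evict down to 5 entries
ch := make(chan lfu.Eviction, 8)
c.EvictionChannel = ch // receives the key/value of each evicted entry
```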


@@ -1,156 +0,0 @@
package lfu
import (
"container/list"
"sync"
)
type Eviction struct {
Key string
Value interface{}
}
type Cache struct {
// If len > UpperBound, cache will automatically evict
// down to LowerBound. If either value is 0, this behavior
// is disabled.
UpperBound int
LowerBound int
values map[string]*cacheEntry
freqs *list.List
len int
lock *sync.Mutex
EvictionChannel chan<- Eviction
}
type cacheEntry struct {
key string
value interface{}
freqNode *list.Element
}
type listEntry struct {
entries map[*cacheEntry]byte
freq int
}
func New() *Cache {
c := new(Cache)
c.values = make(map[string]*cacheEntry)
c.freqs = list.New()
c.lock = new(sync.Mutex)
return c
}
func (c *Cache) Get(key string) interface{} {
c.lock.Lock()
defer c.lock.Unlock()
if e, ok := c.values[key]; ok {
c.increment(e)
return e.value
}
return nil
}
func (c *Cache) Set(key string, value interface{}) {
c.lock.Lock()
defer c.lock.Unlock()
if e, ok := c.values[key]; ok {
// value already exists for key. overwrite
e.value = value
c.increment(e)
} else {
// value doesn't exist. insert
e := new(cacheEntry)
e.key = key
e.value = value
c.values[key] = e
c.increment(e)
c.len++
// bounds mgmt
if c.UpperBound > 0 && c.LowerBound > 0 {
if c.len > c.UpperBound {
c.evict(c.len - c.LowerBound)
}
}
}
}
func (c *Cache) Len() int {
c.lock.Lock()
defer c.lock.Unlock()
return c.len
}
func (c *Cache) Evict(count int) int {
c.lock.Lock()
defer c.lock.Unlock()
return c.evict(count)
}
func (c *Cache) evict(count int) int {
// No lock here so it can be called
// from within the lock (during Set)
var evicted int
for i := 0; i < count; {
if place := c.freqs.Front(); place != nil {
for entry := range place.Value.(*listEntry).entries {
if i < count {
if c.EvictionChannel != nil {
c.EvictionChannel <- Eviction{
Key: entry.key,
Value: entry.value,
}
}
delete(c.values, entry.key)
c.remEntry(place, entry)
evicted++
c.len--
i++
}
}
}
}
return evicted
}
func (c *Cache) increment(e *cacheEntry) {
currentPlace := e.freqNode
var nextFreq int
var nextPlace *list.Element
if currentPlace == nil {
// new entry
nextFreq = 1
nextPlace = c.freqs.Front()
} else {
// move up
nextFreq = currentPlace.Value.(*listEntry).freq + 1
nextPlace = currentPlace.Next()
}
if nextPlace == nil || nextPlace.Value.(*listEntry).freq != nextFreq {
// create a new list entry
li := new(listEntry)
li.freq = nextFreq
li.entries = make(map[*cacheEntry]byte)
if currentPlace != nil {
nextPlace = c.freqs.InsertAfter(li, currentPlace)
} else {
nextPlace = c.freqs.PushFront(li)
}
}
e.freqNode = nextPlace
nextPlace.Value.(*listEntry).entries[e] = 1
if currentPlace != nil {
// remove from current position
c.remEntry(currentPlace, e)
}
}
func (c *Cache) remEntry(place *list.Element, entry *cacheEntry) {
entries := place.Value.(*listEntry).entries
delete(entries, entry)
if len(entries) == 0 {
c.freqs.Remove(place)
}
}


@@ -1,68 +0,0 @@
package lfu
import (
"fmt"
"testing"
)
func TestLFU(t *testing.T) {
c := New()
c.Set("a", "a")
if v := c.Get("a"); v != "a" {
t.Errorf("Value was not saved: %v != 'a'", v)
}
if l := c.Len(); l != 1 {
t.Errorf("Length was not updated: %v != 1", l)
}
c.Set("b", "b")
if v := c.Get("b"); v != "b" {
t.Errorf("Value was not saved: %v != 'b'", v)
}
if l := c.Len(); l != 2 {
t.Errorf("Length was not updated: %v != 2", l)
}
c.Get("a")
evicted := c.Evict(1)
if v := c.Get("a"); v != "a" {
t.Errorf("Value was improperly evicted: %v != 'a'", v)
}
if v := c.Get("b"); v != nil {
t.Errorf("Value was not evicted: %v", v)
}
if l := c.Len(); l != 1 {
t.Errorf("Length was not updated: %v != 1", l)
}
if evicted != 1 {
t.Errorf("Number of evicted items is wrong: %v != 1", evicted)
}
}
func TestBoundsMgmt(t *testing.T) {
c := New()
c.UpperBound = 10
c.LowerBound = 5
for i := 0; i < 100; i++ {
c.Set(fmt.Sprintf("%v", i), i)
}
if c.Len() > 10 {
t.Errorf("Bounds management failed to evict properly: %v", c.Len())
}
}
func TestEviction(t *testing.T) {
ch := make(chan Eviction, 1)
c := New()
c.EvictionChannel = ch
c.Set("a", "b")
c.Evict(1)
ev := <-ch
if ev.Key != "a" || ev.Value.(string) != "b" {
t.Error("Incorrect item")
}
}


@@ -1,194 +1,194 @@
/*
* Copyright 2011-2012 Branimir Karadzic. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without modification,
* are permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice, this
* list of conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY COPYRIGHT HOLDER ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
* SHALL COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
* WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
* OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
* THE POSSIBILITY OF SUCH DAMAGE.
*/
package lz4
import (
"encoding/binary"
"errors"
"io"
)
var (
// ErrCorrupt indicates the input was corrupt
ErrCorrupt = errors.New("corrupt input")
)
const (
mlBits = 4
mlMask = (1 << mlBits) - 1
runBits = 8 - mlBits
runMask = (1 << runBits) - 1
)
type decoder struct {
src []byte
dst []byte
spos uint32
dpos uint32
ref uint32
}
func (d *decoder) readByte() (uint8, error) {
if int(d.spos) == len(d.src) {
return 0, io.EOF
}
b := d.src[d.spos]
d.spos++
return b, nil
}
func (d *decoder) getLen() (uint32, error) {
length := uint32(0)
ln, err := d.readByte()
if err != nil {
return 0, ErrCorrupt
}
for ln == 255 {
length += 255
ln, err = d.readByte()
if err != nil {
return 0, ErrCorrupt
}
}
length += uint32(ln)
return length, nil
}
func (d *decoder) cp(length, decr uint32) {
if int(d.ref+length) < int(d.dpos) {
copy(d.dst[d.dpos:], d.dst[d.ref:d.ref+length])
} else {
for ii := uint32(0); ii < length; ii++ {
d.dst[d.dpos+ii] = d.dst[d.ref+ii]
}
}
d.dpos += length
d.ref += length - decr
}
func (d *decoder) finish(err error) error {
if err == io.EOF {
return nil
}
return err
}
// Decode returns the decoded form of src. The returned slice may be a
// subslice of dst if it was large enough to hold the entire decoded block.
func Decode(dst, src []byte) ([]byte, error) {
if len(src) < 4 {
return nil, ErrCorrupt
}
uncompressedLen := binary.LittleEndian.Uint32(src)
if uncompressedLen == 0 {
return nil, nil
}
if uncompressedLen > MaxInputSize {
return nil, ErrTooLarge
}
if dst == nil || len(dst) < int(uncompressedLen) {
dst = make([]byte, uncompressedLen)
}
d := decoder{src: src, dst: dst[:uncompressedLen], spos: 4}
decr := []uint32{0, 3, 2, 3}
for {
code, err := d.readByte()
if err != nil {
return d.dst, d.finish(err)
}
length := uint32(code >> mlBits)
if length == runMask {
ln, err := d.getLen()
if err != nil {
return nil, ErrCorrupt
}
length += ln
}
if int(d.spos+length) > len(d.src) {
return nil, ErrCorrupt
}
for ii := uint32(0); ii < length; ii++ {
d.dst[d.dpos+ii] = d.src[d.spos+ii]
}
d.spos += length
d.dpos += length
if int(d.spos) == len(d.src) {
return d.dst, nil
}
if int(d.spos+2) >= len(d.src) {
return nil, ErrCorrupt
}
back := uint32(d.src[d.spos]) | uint32(d.src[d.spos+1])<<8
if back > d.dpos {
return nil, ErrCorrupt
}
d.spos += 2
d.ref = d.dpos - back
length = uint32(code & mlMask)
if length == mlMask {
ln, err := d.getLen()
if err != nil {
return nil, ErrCorrupt
}
length += ln
}
literal := d.dpos - d.ref
if literal < 4 {
d.cp(4, decr[literal])
} else {
length += 4
}
if d.dpos+length > uncompressedLen {
return nil, ErrCorrupt
}
d.cp(length, 0)
}
}


@@ -1,188 +1,188 @@
/*
* Copyright 2011-2012 Branimir Karadzic. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without modification,
* are permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice, this
* list of conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY COPYRIGHT HOLDER ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
* SHALL COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
* WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
* OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
* THE POSSIBILITY OF SUCH DAMAGE.
*/
package lz4
import "encoding/binary"
import "errors"
const (
minMatch = 4
hashLog = 17
hashTableSize = 1 << hashLog
hashShift = (minMatch * 8) - hashLog
incompressible uint32 = 128
uninitHash = 0x88888888
// MaxInputSize is the largest buffer that can be compressed in a single block
MaxInputSize = 0x7E000000
)
var (
// ErrTooLarge indicates the input buffer was too large
ErrTooLarge = errors.New("input too large")
)
type encoder struct {
src []byte
dst []byte
hashTable []uint32
pos uint32
anchor uint32
dpos uint32
}
// CompressBound returns the maximum length of an lz4 block, given its uncompressed length
func CompressBound(isize int) int {
if isize > MaxInputSize {
return 0
}
return isize + ((isize) / 255) + 16 + 4
}
func (e *encoder) writeLiterals(length, mlLen, pos uint32) {
ln := length
var code byte
if ln > runMask-1 {
code = runMask
} else {
code = byte(ln)
}
if mlLen > mlMask-1 {
e.dst[e.dpos] = (code << mlBits) + byte(mlMask)
} else {
e.dst[e.dpos] = (code << mlBits) + byte(mlLen)
}
e.dpos++
if code == runMask {
ln -= runMask
for ; ln > 254; ln -= 255 {
e.dst[e.dpos] = 255
e.dpos++
}
e.dst[e.dpos] = byte(ln)
e.dpos++
}
for ii := uint32(0); ii < length; ii++ {
e.dst[e.dpos+ii] = e.src[pos+ii]
}
e.dpos += length
}
// Encode returns the encoded form of src. The returned slice may be a
// sub-slice of dst if it was large enough to hold the entire output.
func Encode(dst, src []byte) ([]byte, error) {
if len(src) >= MaxInputSize {
return nil, ErrTooLarge
}
if n := CompressBound(len(src)); len(dst) < n {
dst = make([]byte, n)
}
e := encoder{src: src, dst: dst, hashTable: make([]uint32, hashTableSize)}
binary.LittleEndian.PutUint32(dst, uint32(len(src)))
e.dpos = 4
var (
step uint32 = 1
limit = incompressible
)
for {
if int(e.pos)+12 >= len(e.src) {
e.writeLiterals(uint32(len(e.src))-e.anchor, 0, e.anchor)
return e.dst[:e.dpos], nil
}
sequence := uint32(e.src[e.pos+3])<<24 | uint32(e.src[e.pos+2])<<16 | uint32(e.src[e.pos+1])<<8 | uint32(e.src[e.pos+0])
hash := (sequence * 2654435761) >> hashShift
ref := e.hashTable[hash] + uninitHash
e.hashTable[hash] = e.pos - uninitHash
if ((e.pos-ref)>>16) != 0 || uint32(e.src[ref+3])<<24|uint32(e.src[ref+2])<<16|uint32(e.src[ref+1])<<8|uint32(e.src[ref+0]) != sequence {
if e.pos-e.anchor > limit {
limit <<= 1
step += 1 + (step >> 2)
}
e.pos += step
continue
}
if step > 1 {
e.hashTable[hash] = ref - uninitHash
e.pos -= step - 1
step = 1
continue
}
limit = incompressible
ln := e.pos - e.anchor
back := e.pos - ref
anchor := e.anchor
e.pos += minMatch
ref += minMatch
e.anchor = e.pos
for int(e.pos) < len(e.src)-5 && e.src[e.pos] == e.src[ref] {
e.pos++
ref++
}
mlLen := e.pos - e.anchor
e.writeLiterals(ln, mlLen, anchor)
e.dst[e.dpos] = uint8(back)
e.dst[e.dpos+1] = uint8(back >> 8)
e.dpos += 2
if mlLen > mlMask-1 {
mlLen -= mlMask
for mlLen > 254 {
mlLen -= 255
e.dst[e.dpos] = 255
e.dpos++
}
e.dst[e.dpos] = byte(mlLen)
e.dpos++
}
e.anchor = e.pos
}
}
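Encode and Decode form a matching pair around the 4-byte little-endian length header; a minimal round-trip sketch, assuming the vendored import path github.com/bkaradzic/go-lz4:

package main

import (
	"bytes"
	"fmt"
	"log"

	lz4 "github.com/bkaradzic/go-lz4" // assumed import path for this vendored package
)

func main() {
	src := bytes.Repeat([]byte("syncthing "), 64)
	comp, err := lz4.Encode(nil, src) // allocates up to CompressBound(len(src)) when dst is too small
	if err != nil {
		log.Fatal(err)
	}
	out, err := lz4.Decode(nil, comp) // reads the length header and allocates as needed
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(len(comp), bytes.Equal(src, out)) // compressed size, true
}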


@@ -16,6 +16,7 @@ type LogLevel int
const (
LevelDebug LogLevel = iota
LevelVerbose
LevelInfo
LevelOK
LevelWarn
@@ -83,6 +84,24 @@ func (l *Logger) Debugf(format string, vals ...interface{}) {
l.callHandlers(LevelDebug, s)
}
// Verboseln logs a line with a VERBOSE prefix.
func (l *Logger) Verboseln(vals ...interface{}) {
l.mut.Lock()
defer l.mut.Unlock()
s := fmt.Sprintln(vals...)
l.logger.Output(2, "VERBOSE: "+s)
l.callHandlers(LevelVerbose, s)
}
// Verbosef logs a formatted line with a VERBOSE prefix.
func (l *Logger) Verbosef(format string, vals ...interface{}) {
l.mut.Lock()
defer l.mut.Unlock()
s := fmt.Sprintf(format, vals...)
l.logger.Output(2, "VERBOSE: "+s)
l.callHandlers(LevelVerbose, s)
}
// Infoln logs a line with an INFO prefix.
func (l *Logger) Infoln(vals ...interface{}) {
l.mut.Lock()
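A short sketch of the new level in use, assuming a *Logger value l from this package (message contents hypothetical):

l.Verboseln("connection established to", deviceID)
l.Verbosef("index for %q exchanged in %v", folder, elapsed)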

Godeps/_workspace/src/github.com/calmh/luhn/LICENSE

@@ -0,0 +1,19 @@
Copyright (C) 2014 Jakob Borg
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
- The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@@ -1,17 +1,4 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This program is free software: you can redistribute it and/or modify it
// under the terms of the GNU General Public License as published by the Free
// Software Foundation, either version 3 of the License, or (at your option)
// any later version.
//
// This program is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
// FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
// more details.
//
// You should have received a copy of the GNU General Public License along
// with this program. If not, see <http://www.gnu.org/licenses/>.
// Copyright (C) 2014 Jakob Borg
// Package luhn generates and validates Luhn mod N check digits.
package luhn
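For illustration, a self-contained sketch of Luhn mod N check-digit generation over an arbitrary alphabet; this shows the general algorithm, not necessarily this package's exact API:

import "strings"

// luhnCheckDigit returns the Luhn mod N check character for s, where every
// byte of s must occur in alphabet (illustrative only, not this package's API).
func luhnCheckDigit(alphabet, s string) byte {
	n := len(alphabet)
	factor := 2
	sum := 0
	for i := len(s) - 1; i >= 0; i-- {
		code := strings.IndexByte(alphabet, s[i]) * factor
		sum += code/n + code%n // add the base-N "digits" of the weighted value
		factor = 3 - factor    // alternate 2, 1, 2, 1, ...
	}
	return alphabet[(n-sum%n)%n]
}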


@@ -1,24 +1,11 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This program is free software: you can redistribute it and/or modify it
// under the terms of the GNU General Public License as published by the Free
// Software Foundation, either version 3 of the License, or (at your option)
// any later version.
//
// This program is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
// FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
// more details.
//
// You should have received a copy of the GNU General Public License along
// with this program. If not, see <http://www.gnu.org/licenses/>.
// Copyright (C) 2014 Jakob Borg
package luhn_test
import (
"testing"
"github.com/syncthing/syncthing/internal/luhn"
"github.com/calmh/luhn"
)
func TestGenerate(t *testing.T) {


@@ -1,20 +0,0 @@
Copyright (c) 2012 Daniel Theophanes
This software is provided 'as-is', without any express or implied
warranty. In no event will the authors be held liable for any damages
arising from the use of this software.
Permission is granted to anyone to use this software for any purpose,
including commercial applications, and to alter it and redistribute it
freely, subject to the following restrictions:
1. The origin of this software must not be misrepresented; you must not
claim that you wrote the original software. If you use this software
in a product, an acknowledgment in the product documentation would be
appreciated but is not required.
2. Altered source versions must be plainly marked as such, and must not be
misrepresented as being the original software.
3. This notice may not be removed or altered from any source
distribution.


@@ -1,79 +0,0 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build darwin linux freebsd netbsd windows
package osext
import (
"fmt"
"os"
oexec "os/exec"
"path/filepath"
"runtime"
"testing"
)
const execPath_EnvVar = "OSTEST_OUTPUT_EXECPATH"
func TestExecPath(t *testing.T) {
ep, err := Executable()
if err != nil {
t.Fatalf("ExecPath failed: %v", err)
}
// we want fn to be of the form "dir/prog"
dir := filepath.Dir(filepath.Dir(ep))
fn, err := filepath.Rel(dir, ep)
if err != nil {
t.Fatalf("filepath.Rel: %v", err)
}
cmd := &oexec.Cmd{}
// make child start with a relative program path
cmd.Dir = dir
cmd.Path = fn
// forge argv[0] for the child, so that we can verify we correctly
// get the real path of the executable without being influenced by argv[0].
cmd.Args = []string{"-", "-test.run=XXXX"}
cmd.Env = []string{fmt.Sprintf("%s=1", execPath_EnvVar)}
out, err := cmd.CombinedOutput()
if err != nil {
t.Fatalf("exec(self) failed: %v", err)
}
outs := string(out)
if !filepath.IsAbs(outs) {
t.Fatalf("Child returned %q, want an absolute path", out)
}
if !sameFile(outs, ep) {
t.Fatalf("Child returned %q, not the same file as %q", out, ep)
}
}
func sameFile(fn1, fn2 string) bool {
fi1, err := os.Stat(fn1)
if err != nil {
return false
}
fi2, err := os.Stat(fn2)
if err != nil {
return false
}
return os.SameFile(fi1, fi2)
}
func init() {
if e := os.Getenv(execPath_EnvVar); e != "" {
// first chdir to another path
dir := "/"
if runtime.GOOS == "windows" {
dir = filepath.VolumeName(".")
}
os.Chdir(dir)
if ep, err := Executable(); err != nil {
fmt.Fprint(os.Stderr, "ERROR: ", err)
} else {
fmt.Fprint(os.Stderr, ep)
}
os.Exit(0)
}
}


@@ -67,7 +67,7 @@ func BenchmarkThisEncode(b *testing.B) {
func BenchmarkThisEncoder(b *testing.B) {
w := xdr.NewWriter(ioutil.Discard)
for i := 0; i < b.N; i++ {
_, err := s.encodeXDR(w)
_, err := s.EncodeXDRInto(w)
if err != nil {
b.Fatal(err)
}
@@ -108,7 +108,7 @@ func BenchmarkThisDecoder(b *testing.B) {
r := xdr.NewReader(rr)
var t XDRBenchStruct
for i := 0; i < b.N; i++ {
err := t.decodeXDR(r)
err := t.DecodeXDRFrom(r)
if err != nil {
b.Fatal(err)
}


@@ -26,7 +26,9 @@ XDRBenchStruct Structure:
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| 0x0000 | I3 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| uint8 |
/ /
\ uint8 Structure \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Length of Bs0 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
@@ -69,7 +71,7 @@ struct XDRBenchStruct {
func (o XDRBenchStruct) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.encodeXDR(xw)
return o.EncodeXDRInto(xw)
}
func (o XDRBenchStruct) MarshalXDR() ([]byte, error) {
@@ -87,11 +89,11 @@ func (o XDRBenchStruct) MustMarshalXDR() []byte {
func (o XDRBenchStruct) AppendXDR(bs []byte) ([]byte, error) {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
_, err := o.encodeXDR(xw)
_, err := o.EncodeXDRInto(xw)
return []byte(aw), err
}
func (o XDRBenchStruct) encodeXDR(xw *xdr.Writer) (int, error) {
func (o XDRBenchStruct) EncodeXDRInto(xw *xdr.Writer) (int, error) {
xw.WriteUint64(o.I1)
xw.WriteUint32(o.I2)
xw.WriteUint16(o.I3)
@@ -111,16 +113,16 @@ func (o XDRBenchStruct) encodeXDR(xw *xdr.Writer) (int, error) {
func (o *XDRBenchStruct) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}
func (o *XDRBenchStruct) UnmarshalXDR(bs []byte) error {
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}
func (o *XDRBenchStruct) decodeXDR(xr *xdr.Reader) error {
func (o *XDRBenchStruct) DecodeXDRFrom(xr *xdr.Reader) error {
o.I1 = xr.ReadUint64()
o.I2 = xr.ReadUint32()
o.I3 = xr.ReadUint16()
@@ -155,7 +157,7 @@ struct repeatReader {
func (o repeatReader) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.encodeXDR(xw)
return o.EncodeXDRInto(xw)
}
func (o repeatReader) MarshalXDR() ([]byte, error) {
@@ -173,27 +175,27 @@ func (o repeatReader) MustMarshalXDR() []byte {
func (o repeatReader) AppendXDR(bs []byte) ([]byte, error) {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
_, err := o.encodeXDR(xw)
_, err := o.EncodeXDRInto(xw)
return []byte(aw), err
}
func (o repeatReader) encodeXDR(xw *xdr.Writer) (int, error) {
func (o repeatReader) EncodeXDRInto(xw *xdr.Writer) (int, error) {
xw.WriteBytes(o.data)
return xw.Tot(), xw.Error()
}
func (o *repeatReader) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}
func (o *repeatReader) UnmarshalXDR(bs []byte) error {
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}
func (o *repeatReader) decodeXDR(xr *xdr.Reader) error {
func (o *repeatReader) DecodeXDRFrom(xr *xdr.Reader) error {
o.data = xr.ReadBytes()
return xr.Error()
}


@@ -11,6 +11,8 @@ import (
"go/format"
"go/parser"
"go/token"
"io"
"log"
"os"
"regexp"
"strconv"
@@ -50,7 +52,7 @@ import (
var encodeTpl = template.Must(template.New("encoder").Parse(`
func (o {{.TypeName}}) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.encodeXDR(xw)
return o.EncodeXDRInto(xw)
}//+n
func (o {{.TypeName}}) MarshalXDR() ([]byte, error) {
@@ -68,11 +70,11 @@ func (o {{.TypeName}}) MustMarshalXDR() []byte {
func (o {{.TypeName}}) AppendXDR(bs []byte) ([]byte, error) {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
_, err := o.encodeXDR(xw)
_, err := o.EncodeXDRInto(xw)
return []byte(aw), err
}//+n
func (o {{.TypeName}}) encodeXDR(xw *xdr.Writer) (int, error) {
func (o {{.TypeName}}) EncodeXDRInto(xw *xdr.Writer) (int, error) {
{{range $fieldInfo := .Fields}}
{{if not $fieldInfo.IsSlice}}
{{if ne $fieldInfo.Convert ""}}
@@ -85,7 +87,7 @@ func (o {{.TypeName}}) encodeXDR(xw *xdr.Writer) (int, error) {
{{end}}
xw.Write{{$fieldInfo.Encoder}}(o.{{$fieldInfo.Name}})
{{else}}
_, err := o.{{$fieldInfo.Name}}.encodeXDR(xw)
_, err := o.{{$fieldInfo.Name}}.EncodeXDRInto(xw)
if err != nil {
return xw.Tot(), err
}
@@ -103,7 +105,7 @@ func (o {{.TypeName}}) encodeXDR(xw *xdr.Writer) (int, error) {
{{else if $fieldInfo.IsBasic}}
xw.Write{{$fieldInfo.Encoder}}(o.{{$fieldInfo.Name}}[i])
{{else}}
_, err := o.{{$fieldInfo.Name}}[i].encodeXDR(xw)
_, err := o.{{$fieldInfo.Name}}[i].EncodeXDRInto(xw)
if err != nil {
return xw.Tot(), err
}
@@ -116,16 +118,16 @@ func (o {{.TypeName}}) encodeXDR(xw *xdr.Writer) (int, error) {
func (o *{{.TypeName}}) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}//+n
func (o *{{.TypeName}}) UnmarshalXDR(bs []byte) error {
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}//+n
func (o *{{.TypeName}}) decodeXDR(xr *xdr.Reader) error {
func (o *{{.TypeName}}) DecodeXDRFrom(xr *xdr.Reader) error {
{{range $fieldInfo := .Fields}}
{{if not $fieldInfo.IsSlice}}
{{if ne $fieldInfo.Convert ""}}
@@ -137,10 +139,13 @@ func (o *{{.TypeName}}) decodeXDR(xr *xdr.Reader) error {
o.{{$fieldInfo.Name}} = xr.Read{{$fieldInfo.Encoder}}()
{{end}}
{{else}}
(&o.{{$fieldInfo.Name}}).decodeXDR(xr)
(&o.{{$fieldInfo.Name}}).DecodeXDRFrom(xr)
{{end}}
{{else}}
_{{$fieldInfo.Name}}Size := int(xr.ReadUint32())
if _{{$fieldInfo.Name}}Size < 0 {
return xdr.ElementSizeExceeded("{{$fieldInfo.Name}}", _{{$fieldInfo.Name}}Size, {{$fieldInfo.Max}})
}
{{if ge $fieldInfo.Max 1}}
if _{{$fieldInfo.Name}}Size > {{$fieldInfo.Max}} {
return xdr.ElementSizeExceeded("{{$fieldInfo.Name}}", _{{$fieldInfo.Name}}Size, {{$fieldInfo.Max}})
@@ -153,7 +158,7 @@ func (o *{{.TypeName}}) decodeXDR(xr *xdr.Reader) error {
{{else if $fieldInfo.IsBasic}}
o.{{$fieldInfo.Name}}[i] = xr.Read{{$fieldInfo.Encoder}}()
{{else}}
(&o.{{$fieldInfo.Name}}[i]).decodeXDR(xr)
(&o.{{$fieldInfo.Name}}[i]).DecodeXDRFrom(xr)
{{end}}
}
{{end}}
@@ -255,12 +260,18 @@ func handleStruct(t *ast.StructType) []fieldInfo {
} else {
f = fieldInfo{
Name: fn,
IsBasic: false,
IsSlice: true,
FieldType: tn,
Max: max,
}
}
case *ast.SelectorExpr:
f = fieldInfo{
Name: fn,
FieldType: ft.Sel.Name,
Max: max,
}
}
fs = append(fs, f)
@@ -269,7 +280,7 @@ func handleStruct(t *ast.StructType) []fieldInfo {
return fs
}
func generateCode(s structInfo) {
func generateCode(output io.Writer, s structInfo) {
name := s.Name
fs := s.Fields
@@ -286,7 +297,7 @@ func generateCode(s structInfo) {
if err != nil {
panic(err)
}
fmt.Println(string(bs))
fmt.Fprintln(output, string(bs))
}
func uncamelize(s string) string {
@@ -295,69 +306,71 @@ func uncamelize(s string) string {
})
}
func generateDiagram(s structInfo) {
func generateDiagram(output io.Writer, s structInfo) {
sn := s.Name
fs := s.Fields
fmt.Println(sn + " Structure:")
fmt.Println()
fmt.Println(" 0 1 2 3")
fmt.Println(" 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1")
fmt.Fprintln(output, sn+" Structure:")
fmt.Fprintln(output)
fmt.Fprintln(output, " 0 1 2 3")
fmt.Fprintln(output, " 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1")
line := "+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+"
fmt.Println(line)
fmt.Fprintln(output, line)
for _, f := range fs {
tn := f.FieldType
sl := f.IsSlice
name := uncamelize(f.Name)
if sl {
fmt.Printf("| %s |\n", center("Number of "+name, 61))
fmt.Println(line)
if f.IsSlice {
fmt.Fprintf(output, "| %s |\n", center("Number of "+name, 61))
fmt.Fprintln(output, line)
}
switch tn {
case "bool":
fmt.Printf("| %s |V|\n", center(name+" (V=0 or 1)", 59))
fmt.Println(line)
case "uint16":
fmt.Printf("| %s | %s |\n", center("0x0000", 29), center(name, 29))
fmt.Println(line)
case "uint32":
fmt.Printf("| %s |\n", center(name, 61))
fmt.Println(line)
fmt.Fprintf(output, "| %s |V|\n", center(name+" (V=0 or 1)", 59))
fmt.Fprintln(output, line)
case "int16", "uint16":
fmt.Fprintf(output, "| %s | %s |\n", center("0x0000", 29), center(name, 29))
fmt.Fprintln(output, line)
case "int32", "uint32":
fmt.Fprintf(output, "| %s |\n", center(name, 61))
fmt.Fprintln(output, line)
case "int64", "uint64":
fmt.Printf("| %-61s |\n", "")
fmt.Printf("+ %s +\n", center(name+" (64 bits)", 61))
fmt.Printf("| %-61s |\n", "")
fmt.Println(line)
fmt.Fprintf(output, "| %-61s |\n", "")
fmt.Fprintf(output, "+ %s +\n", center(name+" (64 bits)", 61))
fmt.Fprintf(output, "| %-61s |\n", "")
fmt.Fprintln(output, line)
case "string", "byte": // XXX We assume slice of byte!
fmt.Printf("| %s |\n", center("Length of "+name, 61))
fmt.Println(line)
fmt.Printf("/ %61s /\n", "")
fmt.Printf("\\ %s \\\n", center(name+" (variable length)", 61))
fmt.Printf("/ %61s /\n", "")
fmt.Println(line)
fmt.Fprintf(output, "| %s |\n", center("Length of "+name, 61))
fmt.Fprintln(output, line)
fmt.Fprintf(output, "/ %61s /\n", "")
fmt.Fprintf(output, "\\ %s \\\n", center(name+" (variable length)", 61))
fmt.Fprintf(output, "/ %61s /\n", "")
fmt.Fprintln(output, line)
default:
if sl {
if f.IsSlice {
tn = "Zero or more " + tn + " Structures"
fmt.Printf("/ %s /\n", center("", 61))
fmt.Printf("\\ %s \\\n", center(tn, 61))
fmt.Printf("/ %s /\n", center("", 61))
fmt.Fprintf(output, "/ %s /\n", center("", 61))
fmt.Fprintf(output, "\\ %s \\\n", center(tn, 61))
fmt.Fprintf(output, "/ %s /\n", center("", 61))
} else {
fmt.Printf("| %s |\n", center(tn, 61))
tn = tn + " Structure"
fmt.Fprintf(output, "/ %s /\n", center("", 61))
fmt.Fprintf(output, "\\ %s \\\n", center(tn, 61))
fmt.Fprintf(output, "/ %s /\n", center("", 61))
}
fmt.Println(line)
fmt.Fprintln(output, line)
}
}
fmt.Println()
fmt.Println()
fmt.Fprintln(output)
fmt.Fprintln(output)
}
func generateXdr(s structInfo) {
func generateXdr(output io.Writer, s structInfo) {
sn := s.Name
fs := s.Fields
fmt.Printf("struct %s {\n", sn)
fmt.Fprintf(output, "struct %s {\n", sn)
for _, f := range fs {
tn := f.FieldType
@@ -372,22 +385,24 @@ func generateXdr(s structInfo) {
}
switch tn {
case "int16", "int32":
fmt.Fprintf(output, "\tint %s%s;\n", fn, suf)
case "uint16", "uint32":
fmt.Printf("\tunsigned int %s%s;\n", fn, suf)
fmt.Fprintf(output, "\tunsigned int %s%s;\n", fn, suf)
case "int64":
fmt.Printf("\thyper %s%s;\n", fn, suf)
fmt.Fprintf(output, "\thyper %s%s;\n", fn, suf)
case "uint64":
fmt.Printf("\tunsigned hyper %s%s;\n", fn, suf)
fmt.Fprintf(output, "\tunsigned hyper %s%s;\n", fn, suf)
case "string":
fmt.Printf("\tstring %s<%s>;\n", fn, l)
fmt.Fprintf(output, "\tstring %s<%s>;\n", fn, l)
case "byte":
fmt.Printf("\topaque %s<%s>;\n", fn, l)
fmt.Fprintf(output, "\topaque %s<%s>;\n", fn, l)
default:
fmt.Printf("\t%s %s%s;\n", tn, fn, suf)
fmt.Fprintf(output, "\t%s %s%s;\n", tn, fn, suf)
}
}
fmt.Println("}")
fmt.Println()
fmt.Fprintln(output, "}")
fmt.Fprintln(output)
}
func center(s string, w int) string {
@@ -418,25 +433,35 @@ func inspector(structs *[]structInfo) func(ast.Node) bool {
}
func main() {
outputFile := flag.String("o", "", "Output file, blank for stdout")
flag.Parse()
fname := flag.Arg(0)
fset := token.NewFileSet()
f, err := parser.ParseFile(fset, fname, nil, parser.ParseComments)
if err != nil {
panic(err)
log.Fatal(err)
}
var structs []structInfo
i := inspector(&structs)
ast.Inspect(f, i)
headerTpl.Execute(os.Stdout, map[string]string{"Package": f.Name.Name})
var output io.Writer = os.Stdout
if *outputFile != "" {
fd, err := os.Create(*outputFile)
if err != nil {
log.Fatal(err)
}
output = fd
}
headerTpl.Execute(output, map[string]string{"Package": f.Name.Name})
for _, s := range structs {
fmt.Printf("\n/*\n\n")
generateDiagram(s)
generateXdr(s)
fmt.Printf("*/\n")
generateCode(s)
fmt.Fprintf(output, "\n/*\n\n")
generateDiagram(output, s)
generateXdr(output, s)
fmt.Fprintf(output, "*/\n")
generateCode(output, s)
}
}
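With the new -o flag the generator can write its output file directly instead of relying on shell redirection; a hypothetical invocation (binary and file names assumed):

genxdr -o packets_xdr.go packets.go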


@@ -32,11 +32,11 @@ type TestStruct struct {
type Opaque [32]byte
func (u *Opaque) encodeXDR(w *xdr.Writer) (int, error) {
func (u *Opaque) EncodeXDRInto(w *xdr.Writer) (int, error) {
return w.WriteRaw(u[:])
}
func (u *Opaque) decodeXDR(r *xdr.Reader) (int, error) {
func (u *Opaque) DecodeXDRFrom(r *xdr.Reader) (int, error) {
return r.ReadRaw(u[:])
}


@@ -18,17 +18,23 @@ TestStruct Structure:
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| int |
/ /
\ int Structure \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| int8 |
/ /
\ int8 Structure \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| uint8 |
/ /
\ uint8 Structure \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| int16 |
| 0x0000 | I16 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| 0x0000 | UI16 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| int32 |
| I32 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| UI32 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
@@ -52,7 +58,9 @@ TestStruct Structure:
\ S (variable length) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Opaque |
/ /
\ Opaque Structure \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Number of SS |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
@@ -68,9 +76,9 @@ struct TestStruct {
int I;
int8 I8;
uint8 UI8;
int16 I16;
int I16;
unsigned int UI16;
int32 I32;
int I32;
unsigned int UI32;
hyper I64;
unsigned hyper UI64;
@@ -84,7 +92,7 @@ struct TestStruct {
func (o TestStruct) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.encodeXDR(xw)
return o.EncodeXDRInto(xw)
}
func (o TestStruct) MarshalXDR() ([]byte, error) {
@@ -102,11 +110,11 @@ func (o TestStruct) MustMarshalXDR() []byte {
func (o TestStruct) AppendXDR(bs []byte) ([]byte, error) {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
_, err := o.encodeXDR(xw)
_, err := o.EncodeXDRInto(xw)
return []byte(aw), err
}
func (o TestStruct) encodeXDR(xw *xdr.Writer) (int, error) {
func (o TestStruct) EncodeXDRInto(xw *xdr.Writer) (int, error) {
xw.WriteUint64(uint64(o.I))
xw.WriteUint8(uint8(o.I8))
xw.WriteUint8(o.UI8)
@@ -124,7 +132,7 @@ func (o TestStruct) encodeXDR(xw *xdr.Writer) (int, error) {
return xw.Tot(), xdr.ElementSizeExceeded("S", l, 1024)
}
xw.WriteString(o.S)
_, err := o.C.encodeXDR(xw)
_, err := o.C.EncodeXDRInto(xw)
if err != nil {
return xw.Tot(), err
}
@@ -140,16 +148,16 @@ func (o TestStruct) encodeXDR(xw *xdr.Writer) (int, error) {
func (o *TestStruct) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}
func (o *TestStruct) UnmarshalXDR(bs []byte) error {
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}
func (o *TestStruct) decodeXDR(xr *xdr.Reader) error {
func (o *TestStruct) DecodeXDRFrom(xr *xdr.Reader) error {
o.I = int(xr.ReadUint64())
o.I8 = int8(xr.ReadUint8())
o.UI8 = xr.ReadUint8()
@@ -161,8 +169,11 @@ func (o *TestStruct) decodeXDR(xr *xdr.Reader) error {
o.UI64 = xr.ReadUint64()
o.BS = xr.ReadBytesMax(1024)
o.S = xr.ReadStringMax(1024)
(&o.C).decodeXDR(xr)
(&o.C).DecodeXDRFrom(xr)
_SSSize := int(xr.ReadUint32())
if _SSSize < 0 {
return xdr.ElementSizeExceeded("SS", _SSSize, 1024)
}
if _SSSize > 1024 {
return xdr.ElementSizeExceeded("SS", _SSSize, 1024)
}

View File

@@ -68,7 +68,8 @@ func (r *Reader) ReadBytesMaxInto(max int, dst []byte) []byte {
if r.err != nil {
return nil
}
if max > 0 && l > max {
if l < 0 || max > 0 && l > max {
// l may be negative on 32 bit builds
r.err = ElementSizeExceeded("bytes field", l, max)
return nil
}
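
The extra l < 0 check matters because the length arrives from the wire as a uint32 and is converted to int; on a 32-bit build int is only 32 bits wide, so a large claimed length wraps negative and would otherwise slip past the max comparison. A minimal sketch of the failure mode (hypothetical value; int32 stands in for a 32-bit int):

package main

import "fmt"

func main() {
	raw := uint32(3000000000) // length claimed by a peer
	l := int(int32(raw))      // what int(...) yields on a 32-bit build
	fmt.Println(l, l < 0)     // -1294967296 true
}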
@@ -154,6 +155,10 @@ func (e XDRError) Error() string {
return "xdr " + e.op + ": " + e.err.Error()
}
func (e XDRError) IsEOF() bool {
return e.err == io.EOF
}
func (r *Reader) Error() error {
if r.err == nil {
return nil

View File

@@ -1,3 +1,6 @@
This package contains an efficient token-bucket-based rate limiter.
Copyright (C) 2015 Canonical Ltd.
This software is licensed under the LGPLv3, included below.
As a special exception to the GNU Lesser General Public License version 3

View File

@@ -0,0 +1,27 @@
Copyright (c) 2012 The Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

View File

@@ -0,0 +1,14 @@
### Extensions to the "os" package.
## Find the current Executable and ExecutableFolder.
There is sometimes utility in finding the current executable file
that is running. This can be used for upgrading the current executable
or finding resources located relative to the executable file.
Multi-platform and supports:
* Linux
* OS X
* Windows
* Plan 9
* BSDs.
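
A minimal usage sketch (the import path is an assumption; use whatever path the package is vendored under):

package main

import (
	"fmt"
	"log"

	"github.com/kardianos/osext" // assumed import path
)

func main() {
	exe, err := osext.Executable()
	if err != nil {
		log.Fatal(err)
	}
	folder, err := osext.ExecutableFolder()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("running", exe, "from", folder)
}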

View File

@@ -25,8 +25,3 @@ func ExecutableFolder() (string, error) {
folder, _ := filepath.Split(p)
return folder, nil
}
// Deprecated. Same as Executable().
func GetExePath() (exePath string, err error) {
return Executable()
}

View File

@@ -5,16 +5,16 @@
package osext
import (
"syscall"
"os"
"strconv"
"os"
"strconv"
"syscall"
)
func executable() (string, error) {
f, err := os.Open("/proc/" + strconv.Itoa(os.Getpid()) + "/text")
if err != nil {
return "", err
}
defer f.Close()
return syscall.Fd2path(int(f.Fd()))
f, err := os.Open("/proc/" + strconv.Itoa(os.Getpid()) + "/text")
if err != nil {
return "", err
}
defer f.Close()
return syscall.Fd2path(int(f.Fd()))
}

View File

@@ -2,7 +2,7 @@
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build linux netbsd openbsd solaris
// +build linux netbsd openbsd solaris dragonfly
package osext
@@ -11,15 +11,21 @@ import (
"fmt"
"os"
"runtime"
"strings"
)
func executable() (string, error) {
switch runtime.GOOS {
case "linux":
return os.Readlink("/proc/self/exe")
const deletedSuffix = " (deleted)"
execpath, err := os.Readlink("/proc/self/exe")
if err != nil {
return execpath, err
}
return strings.TrimSuffix(execpath, deletedSuffix), nil
case "netbsd":
return os.Readlink("/proc/curproc/exe")
case "openbsd":
case "openbsd", "dragonfly":
return os.Readlink("/proc/curproc/file")
case "solaris":
return os.Readlink(fmt.Sprintf("/proc/%d/path/a.out", os.Getpid()))

View File

@@ -0,0 +1,180 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build darwin linux freebsd netbsd windows
package osext
import (
"bytes"
"fmt"
"io"
"os"
"os/exec"
"path/filepath"
"runtime"
"testing"
)
const (
executableEnvVar = "OSTEST_OUTPUT_EXECUTABLE"
executableEnvValueMatch = "match"
executableEnvValueDelete = "delete"
)
func TestExecutableMatch(t *testing.T) {
ep, err := Executable()
if err != nil {
t.Fatalf("Executable failed: %v", err)
}
// We want fullpath to be of the form "dir/prog".
dir := filepath.Dir(filepath.Dir(ep))
fullpath, err := filepath.Rel(dir, ep)
if err != nil {
t.Fatalf("filepath.Rel: %v", err)
}
// Make child start with a relative program path.
// Alter argv[0] for child to verify getting real path without argv[0].
cmd := &exec.Cmd{
Dir: dir,
Path: fullpath,
Env: []string{fmt.Sprintf("%s=%s", executableEnvVar, executableEnvValueMatch)},
}
out, err := cmd.CombinedOutput()
if err != nil {
t.Fatalf("exec(self) failed: %v", err)
}
outs := string(out)
if !filepath.IsAbs(outs) {
t.Fatalf("Child returned %q, want an absolute path", out)
}
if !sameFile(outs, ep) {
t.Fatalf("Child returned %q, not the same file as %q", out, ep)
}
}
func TestExecutableDelete(t *testing.T) {
if runtime.GOOS != "linux" {
t.Skip()
}
fpath, err := Executable()
if err != nil {
t.Fatalf("Executable failed: %v", err)
}
r, w := io.Pipe()
stderrBuff := &bytes.Buffer{}
stdoutBuff := &bytes.Buffer{}
cmd := &exec.Cmd{
Path: fpath,
Env: []string{fmt.Sprintf("%s=%s", executableEnvVar, executableEnvValueDelete)},
Stdin: r,
Stderr: stderrBuff,
Stdout: stdoutBuff,
}
err = cmd.Start()
if err != nil {
t.Fatalf("exec(self) start failed: %v", err)
}
tempPath := fpath + "_copy"
_ = os.Remove(tempPath)
err = copyFile(tempPath, fpath)
if err != nil {
t.Fatalf("copy file failed: %v", err)
}
err = os.Remove(fpath)
if err != nil {
t.Fatalf("remove running test file failed: %v", err)
}
err = os.Rename(tempPath, fpath)
if err != nil {
t.Fatalf("rename copy to previous name failed: %v", err)
}
w.Write([]byte{0})
w.Close()
err = cmd.Wait()
if err != nil {
t.Fatalf("exec wait failed: %v", err)
}
childPath := stderrBuff.String()
if !filepath.IsAbs(childPath) {
t.Fatalf("Child returned %q, want an absolute path", childPath)
}
if !sameFile(childPath, fpath) {
t.Fatalf("Child returned %q, not the same file as %q", childPath, fpath)
}
}
func sameFile(fn1, fn2 string) bool {
fi1, err := os.Stat(fn1)
if err != nil {
return false
}
fi2, err := os.Stat(fn2)
if err != nil {
return false
}
return os.SameFile(fi1, fi2)
}
func copyFile(dest, src string) error {
df, err := os.Create(dest)
if err != nil {
return err
}
defer df.Close()
sf, err := os.Open(src)
if err != nil {
return err
}
defer sf.Close()
_, err = io.Copy(df, sf)
return err
}
func TestMain(m *testing.M) {
env := os.Getenv(executableEnvVar)
switch env {
case "":
os.Exit(m.Run())
case executableEnvValueMatch:
// First chdir to another path.
dir := "/"
if runtime.GOOS == "windows" {
dir = filepath.VolumeName(".")
}
os.Chdir(dir)
if ep, err := Executable(); err != nil {
fmt.Fprint(os.Stderr, "ERROR: ", err)
} else {
fmt.Fprint(os.Stderr, ep)
}
case executableEnvValueDelete:
bb := make([]byte, 1)
var err error
n, err := os.Stdin.Read(bb)
if err != nil {
fmt.Fprint(os.Stderr, "ERROR: ", err)
os.Exit(2)
}
if n != 1 {
fmt.Fprint(os.Stderr, "ERROR: n != 1, n == ", n)
os.Exit(2)
}
if ep, err := Executable(); err != nil {
fmt.Fprint(os.Stderr, "ERROR: ", err)
} else {
fmt.Fprint(os.Stderr, ep)
}
}
os.Exit(0)
}

View File

@@ -0,0 +1,4 @@
# This is the official list of Protocol Authors for copyright purposes.
Audrius Butkevicius <audrius.butkevicius@gmail.com>
Jakob Borg <jakob@nym.se>

View File

@@ -0,0 +1,76 @@
## Reporting Bugs
Please file bugs in the [Github Issue
Tracker](https://github.com/syncthing/protocol/issues).
## Contributing Code
Every contribution is welcome. Following the points below will make this
a smoother process.
Individuals making significant and valuable contributions are given
commit-access to the project. If you make a significant contribution and
are not considered for commit-access, please contact any of the
Syncthing core team members.
All nontrivial contributions should go through the pull request
mechanism for internal review. Determining what is "nontrivial" is left
at the discretion of the contributor.
### Authorship
All code authors are listed in the AUTHORS file. Commits must be made
with the same name and email as listed in the AUTHORS file. To
accomplish this, ensure that your git configuration is set correctly
prior to making your first commit:
$ git config --global user.name "Jane Doe"
$ git config --global user.email janedoe@example.com
You must be reachable on the given email address. If you do not wish to
use your real name for whatever reason, using a nickname or pseudonym is
perfectly acceptable.
## Coding Style
- Follow the conventions laid out in [Effective Go](https://golang.org/doc/effective_go.html)
as much as makes sense.
- All text files use Unix line endings.
- Each commit should be `go fmt` clean.
- The commit message subject should be a single short sentence
describing the change, starting with a capital letter.
- Commits that resolve an existing issue must include the issue number
as `(fixes #123)` at the end of the commit message subject.
- Imports are grouped per `goimports` standard; that is, standard
library first, then third party libraries after a blank line.
- A contribution solving a single issue or introducing a single new
feature should probably be a single commit based on the current
`master` branch. You may be asked to "rebase" or "squash" your pull
request to make sure this is the case, especially if there have been
amendments during review.
## Licensing
All contributions are made under the same MIT license as the rest of the
project, except documentation, user interface text and translation
strings which are licensed under the Creative Commons Attribution 4.0
International License. You retain the copyright to code you have
written.
When accepting your first contribution, the maintainer of the project
will ensure that you are added to the AUTHORS file. You are welcome to
add yourself as a separate commit in your first pull request.
## Tests
Yes please!
## License
MIT

View File

@@ -0,0 +1,19 @@
Copyright (C) 2014-2015 The Protocol Authors
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
- The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@@ -0,0 +1,13 @@
The BEPv1 Protocol
==================
[![Latest Build](http://img.shields.io/jenkins/s/http/build.syncthing.net/protocol.svg?style=flat-square)](http://build.syncthing.net/job/protocol/lastBuild/)
[![API Documentation](http://img.shields.io/badge/api-Godoc-blue.svg?style=flat-square)](http://godoc.org/github.com/syncthing/protocol)
[![MIT License](http://img.shields.io/badge/license-MIT-blue.svg?style=flat-square)](http://opensource.org/licenses/MIT)
This is the protocol implementation used by Syncthing.
License
=======
MIT

View File

@@ -1,17 +1,4 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This program is free software: you can redistribute it and/or modify it
// under the terms of the GNU General Public License as published by the Free
// Software Foundation, either version 3 of the License, or (at your option)
// any later version.
//
// This program is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
// FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
// more details.
//
// You should have received a copy of the GNU General Public License along
// with this program. If not, see <http://www.gnu.org/licenses/>.
// Copyright (C) 2014 The Protocol Authors.
package protocol
@@ -26,6 +13,9 @@ type TestModel struct {
name string
offset int64
size int
hash []byte
flags uint32
options []Option
closedCh chan bool
}
@@ -35,17 +25,20 @@ func newTestModel() *TestModel {
}
}
func (t *TestModel) Index(deviceID DeviceID, folder string, files []FileInfo) {
func (t *TestModel) Index(deviceID DeviceID, folder string, files []FileInfo, flags uint32, options []Option) {
}
func (t *TestModel) IndexUpdate(deviceID DeviceID, folder string, files []FileInfo) {
func (t *TestModel) IndexUpdate(deviceID DeviceID, folder string, files []FileInfo, flags uint32, options []Option) {
}
func (t *TestModel) Request(deviceID DeviceID, folder, name string, offset int64, size int) ([]byte, error) {
func (t *TestModel) Request(deviceID DeviceID, folder, name string, offset int64, size int, hash []byte, flags uint32, options []Option) ([]byte, error) {
t.folder = folder
t.name = name
t.offset = offset
t.size = size
t.hash = hash
t.flags = flags
t.options = options
return t.data, nil
}

View File

@@ -0,0 +1,53 @@
// Copyright (C) 2015 The Protocol Authors.
package protocol
import "fmt"
type Compression int
const (
CompressMetadata Compression = iota // zero value is the default, default should be "metadata"
CompressNever
CompressAlways
compressionThreshold = 128 // don't bother compressing messages smaller than this many bytes
)
var compressionMarshal = map[Compression]string{
CompressNever: "never",
CompressMetadata: "metadata",
CompressAlways: "always",
}
var compressionUnmarshal = map[string]Compression{
// Legacy
"false": CompressNever,
"true": CompressMetadata,
// Current
"never": CompressNever,
"metadata": CompressMetadata,
"always": CompressAlways,
}
func (c Compression) String() string {
s, ok := compressionMarshal[c]
if !ok {
return fmt.Sprintf("unknown:%d", c)
}
return s
}
func (c Compression) GoString() string {
return fmt.Sprintf("%q", c.String())
}
func (c Compression) MarshalText() ([]byte, error) {
return []byte(compressionMarshal[c]), nil
}
func (c *Compression) UnmarshalText(bs []byte) error {
*c = compressionUnmarshal[string(bs)]
return nil
}
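
The unmarshal table above keeps old boolean config values working, and any unknown string falls back to the zero value, CompressMetadata. A small sketch (import path per the project README):

package main

import (
	"fmt"

	"github.com/syncthing/protocol"
)

func main() {
	var c protocol.Compression
	_ = c.UnmarshalText([]byte("true")) // legacy boolean value
	fmt.Println(c)                      // metadata

	_ = c.UnmarshalText([]byte("whatever")) // unknown -> zero value
	fmt.Println(c)                          // metadata
}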

View File

@@ -0,0 +1,49 @@
// Copyright (C) 2015 The Protocol Authors.
package protocol
import "testing"
func TestCompressionMarshal(t *testing.T) {
uTestcases := []struct {
s string
c Compression
}{
{"true", CompressMetadata},
{"false", CompressNever},
{"never", CompressNever},
{"metadata", CompressMetadata},
{"always", CompressAlways},
{"whatever", CompressMetadata},
}
mTestcases := []struct {
s string
c Compression
}{
{"never", CompressNever},
{"metadata", CompressMetadata},
{"always", CompressAlways},
}
var c Compression
for _, tc := range uTestcases {
err := c.UnmarshalText([]byte(tc.s))
if err != nil {
t.Error(err)
}
if c != tc.c {
t.Errorf("%s unmarshalled to %d, not %d", tc.s, c, tc.c)
}
}
for _, tc := range mTestcases {
bs, err := tc.c.MarshalText()
if err != nil {
t.Error(err)
}
if s := string(bs); s != tc.s {
t.Errorf("%d marshalled to %q, not %q", tc.c, s, tc.s)
}
}
}

View File

@@ -0,0 +1,62 @@
// Copyright (C) 2014 The Protocol Authors.
package protocol
import (
"io"
"sync/atomic"
"time"
)
type countingReader struct {
io.Reader
tot int64 // bytes
last int64 // unix nanos
}
var (
totalIncoming int64
totalOutgoing int64
)
func (c *countingReader) Read(bs []byte) (int, error) {
n, err := c.Reader.Read(bs)
atomic.AddInt64(&c.tot, int64(n))
atomic.AddInt64(&totalIncoming, int64(n))
atomic.StoreInt64(&c.last, time.Now().UnixNano())
return n, err
}
func (c *countingReader) Tot() int64 {
return atomic.LoadInt64(&c.tot)
}
func (c *countingReader) Last() time.Time {
return time.Unix(0, atomic.LoadInt64(&c.last))
}
type countingWriter struct {
io.Writer
tot int64 // bytes
last int64 // unix nanos
}
func (c *countingWriter) Write(bs []byte) (int, error) {
n, err := c.Writer.Write(bs)
atomic.AddInt64(&c.tot, int64(n))
atomic.AddInt64(&totalOutgoing, int64(n))
atomic.StoreInt64(&c.last, time.Now().UnixNano())
return n, err
}
func (c *countingWriter) Tot() int64 {
return atomic.LoadInt64(&c.tot)
}
func (c *countingWriter) Last() time.Time {
return time.Unix(0, atomic.LoadInt64(&c.last))
}
func TotalInOut() (int64, int64) {
return atomic.LoadInt64(&totalIncoming), atomic.LoadInt64(&totalOutgoing)
}
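
These wrappers are meant to be slotted in once, at connection setup; NewConnection further down does exactly this with the raw reader and writer. A hypothetical in-package helper showing the pattern:

func wrapCounting(r io.Reader, w io.Writer) (*countingReader, *countingWriter) {
	// Everything read and written through the returned wrappers updates
	// both the per-connection and the package-global byte counters.
	return &countingReader{Reader: r}, &countingWriter{Writer: w}
}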

View File

@@ -0,0 +1,15 @@
// Copyright (C) 2014 The Protocol Authors.
package protocol
import (
"os"
"strings"
"github.com/calmh/logger"
)
var (
debug = strings.Contains(os.Getenv("STTRACE"), "protocol") || os.Getenv("STTRACE") == "all"
l = logger.DefaultLogger
)

View File

@@ -1,17 +1,4 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This program is free software: you can redistribute it and/or modify it
// under the terms of the GNU General Public License as published by the Free
// Software Foundation, either version 3 of the License, or (at your option)
// any later version.
//
// This program is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
// FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
// more details.
//
// You should have received a copy of the GNU General Public License along
// with this program. If not, see <http://www.gnu.org/licenses/>.
// Copyright (C) 2014 The Protocol Authors.
package protocol
@@ -19,12 +6,13 @@ import (
"bytes"
"crypto/sha256"
"encoding/base32"
"encoding/binary"
"errors"
"fmt"
"regexp"
"strings"
"github.com/syncthing/syncthing/internal/luhn"
"github.com/calmh/luhn"
)
type DeviceID [32]byte
@@ -80,6 +68,11 @@ func (n DeviceID) Equals(other DeviceID) bool {
return bytes.Compare(n[:], other[:]) == 0
}
// Short returns an integer representing bits 0-63 of the device ID.
func (n DeviceID) Short() uint64 {
return binary.BigEndian.Uint64(n[:])
}
func (n *DeviceID) MarshalText() ([]byte, error) {
return []byte(n.String()), nil
}
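
Short takes the first eight bytes of the ID as a big-endian integer, a compact form suitable for display. A small sketch (import path per the project README):

package main

import (
	"fmt"

	"github.com/syncthing/protocol"
)

func main() {
	var id protocol.DeviceID
	copy(id[:], []byte{1, 2, 3, 4, 5, 6, 7, 8})
	fmt.Printf("%016x\n", id.Short()) // 0102030405060708
}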

View File

@@ -1,17 +1,4 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This program is free software: you can redistribute it and/or modify it
// under the terms of the GNU General Public License as published by the Free
// Software Foundation, either version 3 of the License, or (at your option)
// any later version.
//
// This program is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
// FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
// more details.
//
// You should have received a copy of the GNU General Public License along
// with this program. If not, see <http://www.gnu.org/licenses/>.
// Copyright (C) 2014 The Protocol Authors.
package protocol

View File

@@ -0,0 +1,4 @@
// Copyright (C) 2014 The Protocol Authors.
// Package protocol implements the Block Exchange Protocol.
package protocol

View File

@@ -0,0 +1,51 @@
// Copyright (C) 2014 The Protocol Authors.
package protocol
import (
"errors"
)
const (
ecNoError int32 = iota
ecGeneric
ecNoSuchFile
ecInvalid
)
var (
ErrNoError error = nil
ErrGeneric = errors.New("generic error")
ErrNoSuchFile = errors.New("no such file")
ErrInvalid = errors.New("file is invalid")
)
var lookupError = map[int32]error{
ecNoError: ErrNoError,
ecGeneric: ErrGeneric,
ecNoSuchFile: ErrNoSuchFile,
ecInvalid: ErrInvalid,
}
var lookupCode = map[error]int32{
ErrNoError: ecNoError,
ErrGeneric: ecGeneric,
ErrNoSuchFile: ecNoSuchFile,
ErrInvalid: ecInvalid,
}
func codeToError(errcode int32) error {
err, ok := lookupError[errcode]
if !ok {
return ErrGeneric
}
return err
}
func errorToCode(err error) int32 {
code, ok := lookupCode[err]
if !ok {
return ecGeneric
}
return code
}
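
Both directions fall back to the generic variants for unrecognized values, so the mapping never fails outright. An in-package sketch (hypothetical function):

func exampleErrorCodes() {
	err := codeToError(ecNoSuchFile) // ErrNoSuchFile
	_ = errorToCode(err)             // ecNoSuchFile again
	_ = codeToError(42)              // unknown code -> ErrGeneric
	_ = errorToCode(errors.New("x")) // unknown error -> ecGeneric
}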

View File

@@ -1,17 +1,4 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This program is free software: you can redistribute it and/or modify it
// under the terms of the GNU General Public License as published by the Free
// Software Foundation, either version 3 of the License, or (at your option)
// any later version.
//
// This program is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
// FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
// more details.
//
// You should have received a copy of the GNU General Public License along
// with this program. If not, see <http://www.gnu.org/licenses/>.
// Copyright (C) 2014 The Protocol Authors.
package protocol

View File

@@ -0,0 +1,126 @@
// Copyright (C) 2014 The Protocol Authors.
//go:generate -command genxdr go run ../syncthing/Godeps/_workspace/src/github.com/calmh/xdr/cmd/genxdr/main.go
//go:generate genxdr -o message_xdr.go message.go
package protocol
import "fmt"
type IndexMessage struct {
Folder string
Files []FileInfo
Flags uint32
Options []Option // max:64
}
type FileInfo struct {
Name string // max:8192
Flags uint32
Modified int64
Version Vector
LocalVersion int64
Blocks []BlockInfo
}
func (f FileInfo) String() string {
return fmt.Sprintf("File{Name:%q, Flags:0%o, Modified:%d, Version:%v, Size:%d, Blocks:%v}",
f.Name, f.Flags, f.Modified, f.Version, f.Size(), f.Blocks)
}
func (f FileInfo) Size() (bytes int64) {
if f.IsDeleted() || f.IsDirectory() {
return 128
}
for _, b := range f.Blocks {
bytes += int64(b.Size)
}
return
}
func (f FileInfo) IsDeleted() bool {
return f.Flags&FlagDeleted != 0
}
func (f FileInfo) IsInvalid() bool {
return f.Flags&FlagInvalid != 0
}
func (f FileInfo) IsDirectory() bool {
return f.Flags&FlagDirectory != 0
}
func (f FileInfo) IsSymlink() bool {
return f.Flags&FlagSymlink != 0
}
func (f FileInfo) HasPermissionBits() bool {
return f.Flags&FlagNoPermBits == 0
}
type BlockInfo struct {
Offset int64 // noencode (cache only)
Size int32
Hash []byte // max:64
}
func (b BlockInfo) String() string {
return fmt.Sprintf("Block{%d/%d/%x}", b.Offset, b.Size, b.Hash)
}
type RequestMessage struct {
Folder string // max:64
Name string // max:8192
Offset int64
Size int32
Hash []byte // max:64
Flags uint32
Options []Option // max:64
}
type ResponseMessage struct {
Data []byte
Code int32
}
type ClusterConfigMessage struct {
ClientName string // max:64
ClientVersion string // max:64
Folders []Folder
Options []Option // max:64
}
func (o *ClusterConfigMessage) GetOption(key string) string {
for _, option := range o.Options {
if option.Key == key {
return option.Value
}
}
return ""
}
type Folder struct {
ID string // max:64
Devices []Device
Flags uint32
Options []Option // max:64
}
type Device struct {
ID []byte // max:32
MaxLocalVersion int64
Flags uint32
Options []Option // max:64
}
type Option struct {
Key string // max:64
Value string // max:1024
}
type CloseMessage struct {
Reason string // max:1024
Code int32
}
type EmptyMessage struct{}
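
GetOption above does a linear scan over the option list and returns the empty string for a missing key. A small usage sketch (import path per the project README):

package main

import (
	"fmt"

	"github.com/syncthing/protocol"
)

func main() {
	cc := protocol.ClusterConfigMessage{
		Options: []protocol.Option{{Key: "clientName", Value: "syncthing"}},
	}
	fmt.Println(cc.GetOption("clientName"))    // syncthing
	fmt.Println(cc.GetOption("missing") == "") // true
}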

View File

@@ -30,18 +30,28 @@ IndexMessage Structure:
\ Zero or more FileInfo Structures \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Flags |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Number of Options |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ Zero or more Option Structures \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
struct IndexMessage {
string Folder<64>;
string Folder<>;
FileInfo Files<>;
unsigned int Flags;
Option Options<64>;
}
*/
func (o IndexMessage) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.encodeXDR(xw)
return o.EncodeXDRInto(xw)
}
func (o IndexMessage) MarshalXDR() ([]byte, error) {
@@ -59,18 +69,26 @@ func (o IndexMessage) MustMarshalXDR() []byte {
func (o IndexMessage) AppendXDR(bs []byte) ([]byte, error) {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
_, err := o.encodeXDR(xw)
_, err := o.EncodeXDRInto(xw)
return []byte(aw), err
}
func (o IndexMessage) encodeXDR(xw *xdr.Writer) (int, error) {
if l := len(o.Folder); l > 64 {
return xw.Tot(), xdr.ElementSizeExceeded("Folder", l, 64)
}
func (o IndexMessage) EncodeXDRInto(xw *xdr.Writer) (int, error) {
xw.WriteString(o.Folder)
xw.WriteUint32(uint32(len(o.Files)))
for i := range o.Files {
_, err := o.Files[i].encodeXDR(xw)
_, err := o.Files[i].EncodeXDRInto(xw)
if err != nil {
return xw.Tot(), err
}
}
xw.WriteUint32(o.Flags)
if l := len(o.Options); l > 64 {
return xw.Tot(), xdr.ElementSizeExceeded("Options", l, 64)
}
xw.WriteUint32(uint32(len(o.Options)))
for i := range o.Options {
_, err := o.Options[i].EncodeXDRInto(xw)
if err != nil {
return xw.Tot(), err
}
@@ -80,21 +98,36 @@ func (o IndexMessage) encodeXDR(xw *xdr.Writer) (int, error) {
func (o *IndexMessage) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}
func (o *IndexMessage) UnmarshalXDR(bs []byte) error {
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}
func (o *IndexMessage) decodeXDR(xr *xdr.Reader) error {
o.Folder = xr.ReadStringMax(64)
func (o *IndexMessage) DecodeXDRFrom(xr *xdr.Reader) error {
o.Folder = xr.ReadString()
_FilesSize := int(xr.ReadUint32())
if _FilesSize < 0 {
return xdr.ElementSizeExceeded("Files", _FilesSize, 0)
}
o.Files = make([]FileInfo, _FilesSize)
for i := range o.Files {
(&o.Files[i]).decodeXDR(xr)
(&o.Files[i]).DecodeXDRFrom(xr)
}
o.Flags = xr.ReadUint32()
_OptionsSize := int(xr.ReadUint32())
if _OptionsSize < 0 {
return xdr.ElementSizeExceeded("Options", _OptionsSize, 64)
}
if _OptionsSize > 64 {
return xdr.ElementSizeExceeded("Options", _OptionsSize, 64)
}
o.Options = make([]Option, _OptionsSize)
for i := range o.Options {
(&o.Options[i]).DecodeXDRFrom(xr)
}
return xr.Error()
}
@@ -118,9 +151,9 @@ FileInfo Structure:
+ Modified (64 bits) +
| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| |
+ Version (64 bits) +
| |
/ /
\ Vector Structure \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| |
+ Local Version (64 bits) +
@@ -138,8 +171,8 @@ struct FileInfo {
string Name<8192>;
unsigned int Flags;
hyper Modified;
unsigned hyper Version;
unsigned hyper LocalVersion;
Vector Version;
hyper LocalVersion;
BlockInfo Blocks<>;
}
@@ -147,7 +180,7 @@ struct FileInfo {
func (o FileInfo) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.encodeXDR(xw)
return o.EncodeXDRInto(xw)
}
func (o FileInfo) MarshalXDR() ([]byte, error) {
@@ -165,22 +198,25 @@ func (o FileInfo) MustMarshalXDR() []byte {
func (o FileInfo) AppendXDR(bs []byte) ([]byte, error) {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
_, err := o.encodeXDR(xw)
_, err := o.EncodeXDRInto(xw)
return []byte(aw), err
}
func (o FileInfo) encodeXDR(xw *xdr.Writer) (int, error) {
func (o FileInfo) EncodeXDRInto(xw *xdr.Writer) (int, error) {
if l := len(o.Name); l > 8192 {
return xw.Tot(), xdr.ElementSizeExceeded("Name", l, 8192)
}
xw.WriteString(o.Name)
xw.WriteUint32(o.Flags)
xw.WriteUint64(uint64(o.Modified))
xw.WriteUint64(o.Version)
xw.WriteUint64(o.LocalVersion)
_, err := o.Version.EncodeXDRInto(xw)
if err != nil {
return xw.Tot(), err
}
xw.WriteUint64(uint64(o.LocalVersion))
xw.WriteUint32(uint32(len(o.Blocks)))
for i := range o.Blocks {
_, err := o.Blocks[i].encodeXDR(xw)
_, err := o.Blocks[i].EncodeXDRInto(xw)
if err != nil {
return xw.Tot(), err
}
@@ -190,131 +226,34 @@ func (o FileInfo) encodeXDR(xw *xdr.Writer) (int, error) {
func (o *FileInfo) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}
func (o *FileInfo) UnmarshalXDR(bs []byte) error {
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}
func (o *FileInfo) decodeXDR(xr *xdr.Reader) error {
func (o *FileInfo) DecodeXDRFrom(xr *xdr.Reader) error {
o.Name = xr.ReadStringMax(8192)
o.Flags = xr.ReadUint32()
o.Modified = int64(xr.ReadUint64())
o.Version = xr.ReadUint64()
o.LocalVersion = xr.ReadUint64()
(&o.Version).DecodeXDRFrom(xr)
o.LocalVersion = int64(xr.ReadUint64())
_BlocksSize := int(xr.ReadUint32())
if _BlocksSize < 0 {
return xdr.ElementSizeExceeded("Blocks", _BlocksSize, 0)
}
o.Blocks = make([]BlockInfo, _BlocksSize)
for i := range o.Blocks {
(&o.Blocks[i]).decodeXDR(xr)
(&o.Blocks[i]).DecodeXDRFrom(xr)
}
return xr.Error()
}
/*
FileInfoTruncated Structure:
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Length of Name |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ Name (variable length) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Flags |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| |
+ Modified (64 bits) +
| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| |
+ Version (64 bits) +
| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| |
+ Local Version (64 bits) +
| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Num Blocks |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
struct FileInfoTruncated {
string Name<8192>;
unsigned int Flags;
hyper Modified;
unsigned hyper Version;
unsigned hyper LocalVersion;
unsigned int NumBlocks;
}
*/
func (o FileInfoTruncated) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.encodeXDR(xw)
}
func (o FileInfoTruncated) MarshalXDR() ([]byte, error) {
return o.AppendXDR(make([]byte, 0, 128))
}
func (o FileInfoTruncated) MustMarshalXDR() []byte {
bs, err := o.MarshalXDR()
if err != nil {
panic(err)
}
return bs
}
func (o FileInfoTruncated) AppendXDR(bs []byte) ([]byte, error) {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
_, err := o.encodeXDR(xw)
return []byte(aw), err
}
func (o FileInfoTruncated) encodeXDR(xw *xdr.Writer) (int, error) {
if l := len(o.Name); l > 8192 {
return xw.Tot(), xdr.ElementSizeExceeded("Name", l, 8192)
}
xw.WriteString(o.Name)
xw.WriteUint32(o.Flags)
xw.WriteUint64(uint64(o.Modified))
xw.WriteUint64(o.Version)
xw.WriteUint64(o.LocalVersion)
xw.WriteUint32(o.NumBlocks)
return xw.Tot(), xw.Error()
}
func (o *FileInfoTruncated) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.decodeXDR(xr)
}
func (o *FileInfoTruncated) UnmarshalXDR(bs []byte) error {
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.decodeXDR(xr)
}
func (o *FileInfoTruncated) decodeXDR(xr *xdr.Reader) error {
o.Name = xr.ReadStringMax(8192)
o.Flags = xr.ReadUint32()
o.Modified = int64(xr.ReadUint64())
o.Version = xr.ReadUint64()
o.LocalVersion = xr.ReadUint64()
o.NumBlocks = xr.ReadUint32()
return xr.Error()
}
/*
BlockInfo Structure:
0 1 2 3
@@ -331,7 +270,7 @@ BlockInfo Structure:
struct BlockInfo {
unsigned int Size;
int Size;
opaque Hash<64>;
}
@@ -339,7 +278,7 @@ struct BlockInfo {
func (o BlockInfo) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.encodeXDR(xw)
return o.EncodeXDRInto(xw)
}
func (o BlockInfo) MarshalXDR() ([]byte, error) {
@@ -357,12 +296,12 @@ func (o BlockInfo) MustMarshalXDR() []byte {
func (o BlockInfo) AppendXDR(bs []byte) ([]byte, error) {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
_, err := o.encodeXDR(xw)
_, err := o.EncodeXDRInto(xw)
return []byte(aw), err
}
func (o BlockInfo) encodeXDR(xw *xdr.Writer) (int, error) {
xw.WriteUint32(o.Size)
func (o BlockInfo) EncodeXDRInto(xw *xdr.Writer) (int, error) {
xw.WriteUint32(uint32(o.Size))
if l := len(o.Hash); l > 64 {
return xw.Tot(), xdr.ElementSizeExceeded("Hash", l, 64)
}
@@ -372,17 +311,17 @@ func (o BlockInfo) encodeXDR(xw *xdr.Writer) (int, error) {
func (o *BlockInfo) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}
func (o *BlockInfo) UnmarshalXDR(bs []byte) error {
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}
func (o *BlockInfo) decodeXDR(xr *xdr.Reader) error {
o.Size = xr.ReadUint32()
func (o *BlockInfo) DecodeXDRFrom(xr *xdr.Reader) error {
o.Size = int32(xr.ReadUint32())
o.Hash = xr.ReadBytesMax(64)
return xr.Error()
}
@@ -412,20 +351,37 @@ RequestMessage Structure:
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Size |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Length of Hash |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ Hash (variable length) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Flags |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Number of Options |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ Zero or more Option Structures \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
struct RequestMessage {
string Folder<64>;
string Name<8192>;
unsigned hyper Offset;
unsigned int Size;
hyper Offset;
int Size;
opaque Hash<64>;
unsigned int Flags;
Option Options<64>;
}
*/
func (o RequestMessage) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.encodeXDR(xw)
return o.EncodeXDRInto(xw)
}
func (o RequestMessage) MarshalXDR() ([]byte, error) {
@@ -443,11 +399,11 @@ func (o RequestMessage) MustMarshalXDR() []byte {
func (o RequestMessage) AppendXDR(bs []byte) ([]byte, error) {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
_, err := o.encodeXDR(xw)
_, err := o.EncodeXDRInto(xw)
return []byte(aw), err
}
func (o RequestMessage) encodeXDR(xw *xdr.Writer) (int, error) {
func (o RequestMessage) EncodeXDRInto(xw *xdr.Writer) (int, error) {
if l := len(o.Folder); l > 64 {
return xw.Tot(), xdr.ElementSizeExceeded("Folder", l, 64)
}
@@ -456,27 +412,55 @@ func (o RequestMessage) encodeXDR(xw *xdr.Writer) (int, error) {
return xw.Tot(), xdr.ElementSizeExceeded("Name", l, 8192)
}
xw.WriteString(o.Name)
xw.WriteUint64(o.Offset)
xw.WriteUint32(o.Size)
xw.WriteUint64(uint64(o.Offset))
xw.WriteUint32(uint32(o.Size))
if l := len(o.Hash); l > 64 {
return xw.Tot(), xdr.ElementSizeExceeded("Hash", l, 64)
}
xw.WriteBytes(o.Hash)
xw.WriteUint32(o.Flags)
if l := len(o.Options); l > 64 {
return xw.Tot(), xdr.ElementSizeExceeded("Options", l, 64)
}
xw.WriteUint32(uint32(len(o.Options)))
for i := range o.Options {
_, err := o.Options[i].EncodeXDRInto(xw)
if err != nil {
return xw.Tot(), err
}
}
return xw.Tot(), xw.Error()
}
func (o *RequestMessage) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}
func (o *RequestMessage) UnmarshalXDR(bs []byte) error {
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}
func (o *RequestMessage) decodeXDR(xr *xdr.Reader) error {
func (o *RequestMessage) DecodeXDRFrom(xr *xdr.Reader) error {
o.Folder = xr.ReadStringMax(64)
o.Name = xr.ReadStringMax(8192)
o.Offset = xr.ReadUint64()
o.Size = xr.ReadUint32()
o.Offset = int64(xr.ReadUint64())
o.Size = int32(xr.ReadUint32())
o.Hash = xr.ReadBytesMax(64)
o.Flags = xr.ReadUint32()
_OptionsSize := int(xr.ReadUint32())
if _OptionsSize < 0 {
return xdr.ElementSizeExceeded("Options", _OptionsSize, 64)
}
if _OptionsSize > 64 {
return xdr.ElementSizeExceeded("Options", _OptionsSize, 64)
}
o.Options = make([]Option, _OptionsSize)
for i := range o.Options {
(&o.Options[i]).DecodeXDRFrom(xr)
}
return xr.Error()
}
@@ -493,17 +477,20 @@ ResponseMessage Structure:
\ Data (variable length) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Code |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
struct ResponseMessage {
opaque Data<>;
int Code;
}
*/
func (o ResponseMessage) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.encodeXDR(xw)
return o.EncodeXDRInto(xw)
}
func (o ResponseMessage) MarshalXDR() ([]byte, error) {
@@ -521,28 +508,30 @@ func (o ResponseMessage) MustMarshalXDR() []byte {
func (o ResponseMessage) AppendXDR(bs []byte) ([]byte, error) {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
_, err := o.encodeXDR(xw)
_, err := o.EncodeXDRInto(xw)
return []byte(aw), err
}
func (o ResponseMessage) encodeXDR(xw *xdr.Writer) (int, error) {
func (o ResponseMessage) EncodeXDRInto(xw *xdr.Writer) (int, error) {
xw.WriteBytes(o.Data)
xw.WriteUint32(uint32(o.Code))
return xw.Tot(), xw.Error()
}
func (o *ResponseMessage) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}
func (o *ResponseMessage) UnmarshalXDR(bs []byte) error {
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}
func (o *ResponseMessage) decodeXDR(xr *xdr.Reader) error {
func (o *ResponseMessage) DecodeXDRFrom(xr *xdr.Reader) error {
o.Data = xr.ReadBytes()
o.Code = int32(xr.ReadUint32())
return xr.Error()
}
@@ -582,7 +571,7 @@ ClusterConfigMessage Structure:
struct ClusterConfigMessage {
string ClientName<64>;
string ClientVersion<64>;
Folder Folders<64>;
Folder Folders<>;
Option Options<64>;
}
@@ -590,7 +579,7 @@ struct ClusterConfigMessage {
func (o ClusterConfigMessage) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.encodeXDR(xw)
return o.EncodeXDRInto(xw)
}
func (o ClusterConfigMessage) MarshalXDR() ([]byte, error) {
@@ -608,11 +597,11 @@ func (o ClusterConfigMessage) MustMarshalXDR() []byte {
func (o ClusterConfigMessage) AppendXDR(bs []byte) ([]byte, error) {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
_, err := o.encodeXDR(xw)
_, err := o.EncodeXDRInto(xw)
return []byte(aw), err
}
func (o ClusterConfigMessage) encodeXDR(xw *xdr.Writer) (int, error) {
func (o ClusterConfigMessage) EncodeXDRInto(xw *xdr.Writer) (int, error) {
if l := len(o.ClientName); l > 64 {
return xw.Tot(), xdr.ElementSizeExceeded("ClientName", l, 64)
}
@@ -621,12 +610,9 @@ func (o ClusterConfigMessage) encodeXDR(xw *xdr.Writer) (int, error) {
return xw.Tot(), xdr.ElementSizeExceeded("ClientVersion", l, 64)
}
xw.WriteString(o.ClientVersion)
if l := len(o.Folders); l > 64 {
return xw.Tot(), xdr.ElementSizeExceeded("Folders", l, 64)
}
xw.WriteUint32(uint32(len(o.Folders)))
for i := range o.Folders {
_, err := o.Folders[i].encodeXDR(xw)
_, err := o.Folders[i].EncodeXDRInto(xw)
if err != nil {
return xw.Tot(), err
}
@@ -636,7 +622,7 @@ func (o ClusterConfigMessage) encodeXDR(xw *xdr.Writer) (int, error) {
}
xw.WriteUint32(uint32(len(o.Options)))
for i := range o.Options {
_, err := o.Options[i].encodeXDR(xw)
_, err := o.Options[i].EncodeXDRInto(xw)
if err != nil {
return xw.Tot(), err
}
@@ -646,33 +632,36 @@ func (o ClusterConfigMessage) encodeXDR(xw *xdr.Writer) (int, error) {
func (o *ClusterConfigMessage) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}
func (o *ClusterConfigMessage) UnmarshalXDR(bs []byte) error {
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}
func (o *ClusterConfigMessage) decodeXDR(xr *xdr.Reader) error {
func (o *ClusterConfigMessage) DecodeXDRFrom(xr *xdr.Reader) error {
o.ClientName = xr.ReadStringMax(64)
o.ClientVersion = xr.ReadStringMax(64)
_FoldersSize := int(xr.ReadUint32())
if _FoldersSize > 64 {
return xdr.ElementSizeExceeded("Folders", _FoldersSize, 64)
if _FoldersSize < 0 {
return xdr.ElementSizeExceeded("Folders", _FoldersSize, 0)
}
o.Folders = make([]Folder, _FoldersSize)
for i := range o.Folders {
(&o.Folders[i]).decodeXDR(xr)
(&o.Folders[i]).DecodeXDRFrom(xr)
}
_OptionsSize := int(xr.ReadUint32())
if _OptionsSize < 0 {
return xdr.ElementSizeExceeded("Options", _OptionsSize, 64)
}
if _OptionsSize > 64 {
return xdr.ElementSizeExceeded("Options", _OptionsSize, 64)
}
o.Options = make([]Option, _OptionsSize)
for i := range o.Options {
(&o.Options[i]).decodeXDR(xr)
(&o.Options[i]).DecodeXDRFrom(xr)
}
return xr.Error()
}
@@ -696,18 +685,28 @@ Folder Structure:
\ Zero or more Device Structures \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Flags |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Number of Options |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ Zero or more Option Structures \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
struct Folder {
string ID<64>;
Device Devices<>;
unsigned int Flags;
Option Options<64>;
}
*/
func (o Folder) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.encodeXDR(xw)
return o.EncodeXDRInto(xw)
}
func (o Folder) MarshalXDR() ([]byte, error) {
@@ -725,18 +724,29 @@ func (o Folder) MustMarshalXDR() []byte {
func (o Folder) AppendXDR(bs []byte) ([]byte, error) {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
_, err := o.encodeXDR(xw)
_, err := o.EncodeXDRInto(xw)
return []byte(aw), err
}
func (o Folder) encodeXDR(xw *xdr.Writer) (int, error) {
func (o Folder) EncodeXDRInto(xw *xdr.Writer) (int, error) {
if l := len(o.ID); l > 64 {
return xw.Tot(), xdr.ElementSizeExceeded("ID", l, 64)
}
xw.WriteString(o.ID)
xw.WriteUint32(uint32(len(o.Devices)))
for i := range o.Devices {
_, err := o.Devices[i].encodeXDR(xw)
_, err := o.Devices[i].EncodeXDRInto(xw)
if err != nil {
return xw.Tot(), err
}
}
xw.WriteUint32(o.Flags)
if l := len(o.Options); l > 64 {
return xw.Tot(), xdr.ElementSizeExceeded("Options", l, 64)
}
xw.WriteUint32(uint32(len(o.Options)))
for i := range o.Options {
_, err := o.Options[i].EncodeXDRInto(xw)
if err != nil {
return xw.Tot(), err
}
@@ -746,21 +756,36 @@ func (o Folder) encodeXDR(xw *xdr.Writer) (int, error) {
func (o *Folder) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}
func (o *Folder) UnmarshalXDR(bs []byte) error {
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}
func (o *Folder) decodeXDR(xr *xdr.Reader) error {
func (o *Folder) DecodeXDRFrom(xr *xdr.Reader) error {
o.ID = xr.ReadStringMax(64)
_DevicesSize := int(xr.ReadUint32())
if _DevicesSize < 0 {
return xdr.ElementSizeExceeded("Devices", _DevicesSize, 0)
}
o.Devices = make([]Device, _DevicesSize)
for i := range o.Devices {
(&o.Devices[i]).decodeXDR(xr)
(&o.Devices[i]).DecodeXDRFrom(xr)
}
o.Flags = xr.ReadUint32()
_OptionsSize := int(xr.ReadUint32())
if _OptionsSize < 0 {
return xdr.ElementSizeExceeded("Options", _OptionsSize, 64)
}
if _OptionsSize > 64 {
return xdr.ElementSizeExceeded("Options", _OptionsSize, 64)
}
o.Options = make([]Option, _OptionsSize)
for i := range o.Options {
(&o.Options[i]).DecodeXDRFrom(xr)
}
return xr.Error()
}
@@ -778,25 +803,32 @@ Device Structure:
\ ID (variable length) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Flags |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| |
+ Max Local Version (64 bits) +
| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Flags |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Number of Options |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ Zero or more Option Structures \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
struct Device {
opaque ID<32>;
hyper MaxLocalVersion;
unsigned int Flags;
unsigned hyper MaxLocalVersion;
Option Options<64>;
}
*/
func (o Device) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.encodeXDR(xw)
return o.EncodeXDRInto(xw)
}
func (o Device) MarshalXDR() ([]byte, error) {
@@ -814,35 +846,56 @@ func (o Device) MustMarshalXDR() []byte {
func (o Device) AppendXDR(bs []byte) ([]byte, error) {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
_, err := o.encodeXDR(xw)
_, err := o.EncodeXDRInto(xw)
return []byte(aw), err
}
func (o Device) encodeXDR(xw *xdr.Writer) (int, error) {
func (o Device) EncodeXDRInto(xw *xdr.Writer) (int, error) {
if l := len(o.ID); l > 32 {
return xw.Tot(), xdr.ElementSizeExceeded("ID", l, 32)
}
xw.WriteBytes(o.ID)
xw.WriteUint64(uint64(o.MaxLocalVersion))
xw.WriteUint32(o.Flags)
xw.WriteUint64(o.MaxLocalVersion)
if l := len(o.Options); l > 64 {
return xw.Tot(), xdr.ElementSizeExceeded("Options", l, 64)
}
xw.WriteUint32(uint32(len(o.Options)))
for i := range o.Options {
_, err := o.Options[i].EncodeXDRInto(xw)
if err != nil {
return xw.Tot(), err
}
}
return xw.Tot(), xw.Error()
}
func (o *Device) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}
func (o *Device) UnmarshalXDR(bs []byte) error {
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}
func (o *Device) decodeXDR(xr *xdr.Reader) error {
func (o *Device) DecodeXDRFrom(xr *xdr.Reader) error {
o.ID = xr.ReadBytesMax(32)
o.MaxLocalVersion = int64(xr.ReadUint64())
o.Flags = xr.ReadUint32()
o.MaxLocalVersion = xr.ReadUint64()
_OptionsSize := int(xr.ReadUint32())
if _OptionsSize < 0 {
return xdr.ElementSizeExceeded("Options", _OptionsSize, 64)
}
if _OptionsSize > 64 {
return xdr.ElementSizeExceeded("Options", _OptionsSize, 64)
}
o.Options = make([]Option, _OptionsSize)
for i := range o.Options {
(&o.Options[i]).DecodeXDRFrom(xr)
}
return xr.Error()
}
@@ -876,7 +929,7 @@ struct Option {
func (o Option) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.encodeXDR(xw)
return o.EncodeXDRInto(xw)
}
func (o Option) MarshalXDR() ([]byte, error) {
@@ -894,11 +947,11 @@ func (o Option) MustMarshalXDR() []byte {
func (o Option) AppendXDR(bs []byte) ([]byte, error) {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
_, err := o.encodeXDR(xw)
_, err := o.EncodeXDRInto(xw)
return []byte(aw), err
}
func (o Option) encodeXDR(xw *xdr.Writer) (int, error) {
func (o Option) EncodeXDRInto(xw *xdr.Writer) (int, error) {
if l := len(o.Key); l > 64 {
return xw.Tot(), xdr.ElementSizeExceeded("Key", l, 64)
}
@@ -912,16 +965,16 @@ func (o Option) encodeXDR(xw *xdr.Writer) (int, error) {
func (o *Option) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}
func (o *Option) UnmarshalXDR(bs []byte) error {
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}
func (o *Option) decodeXDR(xr *xdr.Reader) error {
func (o *Option) DecodeXDRFrom(xr *xdr.Reader) error {
o.Key = xr.ReadStringMax(64)
o.Value = xr.ReadStringMax(1024)
return xr.Error()
@@ -940,17 +993,20 @@ CloseMessage Structure:
\ Reason (variable length) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Code |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
struct CloseMessage {
string Reason<1024>;
int Code;
}
*/
func (o CloseMessage) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.encodeXDR(xw)
return o.EncodeXDRInto(xw)
}
func (o CloseMessage) MarshalXDR() ([]byte, error) {
@@ -968,31 +1024,33 @@ func (o CloseMessage) MustMarshalXDR() []byte {
func (o CloseMessage) AppendXDR(bs []byte) ([]byte, error) {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
_, err := o.encodeXDR(xw)
_, err := o.EncodeXDRInto(xw)
return []byte(aw), err
}
func (o CloseMessage) encodeXDR(xw *xdr.Writer) (int, error) {
func (o CloseMessage) EncodeXDRInto(xw *xdr.Writer) (int, error) {
if l := len(o.Reason); l > 1024 {
return xw.Tot(), xdr.ElementSizeExceeded("Reason", l, 1024)
}
xw.WriteString(o.Reason)
xw.WriteUint32(uint32(o.Code))
return xw.Tot(), xw.Error()
}
func (o *CloseMessage) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}
func (o *CloseMessage) UnmarshalXDR(bs []byte) error {
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}
func (o *CloseMessage) decodeXDR(xr *xdr.Reader) error {
func (o *CloseMessage) DecodeXDRFrom(xr *xdr.Reader) error {
o.Reason = xr.ReadStringMax(1024)
o.Code = int32(xr.ReadUint32())
return xr.Error()
}
@@ -1012,7 +1070,7 @@ struct EmptyMessage {
func (o EmptyMessage) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.encodeXDR(xw)
return o.EncodeXDRInto(xw)
}
func (o EmptyMessage) MarshalXDR() ([]byte, error) {
@@ -1030,25 +1088,25 @@ func (o EmptyMessage) MustMarshalXDR() []byte {
func (o EmptyMessage) AppendXDR(bs []byte) ([]byte, error) {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
_, err := o.encodeXDR(xw)
_, err := o.EncodeXDRInto(xw)
return []byte(aw), err
}
func (o EmptyMessage) encodeXDR(xw *xdr.Writer) (int, error) {
func (o EmptyMessage) EncodeXDRInto(xw *xdr.Writer) (int, error) {
return xw.Tot(), xw.Error()
}
func (o *EmptyMessage) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}
func (o *EmptyMessage) UnmarshalXDR(bs []byte) error {
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}
func (o *EmptyMessage) decodeXDR(xr *xdr.Reader) error {
func (o *EmptyMessage) DecodeXDRFrom(xr *xdr.Reader) error {
return xr.Error()
}

View File

@@ -0,0 +1,40 @@
// Copyright (C) 2014 The Protocol Authors.
// +build darwin
package protocol
// Darwin uses NFD normalization
import "golang.org/x/text/unicode/norm"
type nativeModel struct {
next Model
}
func (m nativeModel) Index(deviceID DeviceID, folder string, files []FileInfo, flags uint32, options []Option) {
for i := range files {
files[i].Name = norm.NFD.String(files[i].Name)
}
m.next.Index(deviceID, folder, files, flags, options)
}
func (m nativeModel) IndexUpdate(deviceID DeviceID, folder string, files []FileInfo, flags uint32, options []Option) {
for i := range files {
files[i].Name = norm.NFD.String(files[i].Name)
}
m.next.IndexUpdate(deviceID, folder, files, flags, options)
}
func (m nativeModel) Request(deviceID DeviceID, folder string, name string, offset int64, size int, hash []byte, flags uint32, options []Option) ([]byte, error) {
name = norm.NFD.String(name)
return m.next.Request(deviceID, folder, name, offset, size, hash, flags, options)
}
func (m nativeModel) ClusterConfig(deviceID DeviceID, config ClusterConfigMessage) {
m.next.ClusterConfig(deviceID, config)
}
func (m nativeModel) Close(deviceID DeviceID, err error) {
m.next.Close(deviceID, err)
}
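
The normalization matters because the wire format is NFC (see the Unix model below) while Darwin file systems store names in NFD, so the same visible name has two different byte encodings. A quick illustration:

package main

import (
	"fmt"

	"golang.org/x/text/unicode/norm"
)

func main() {
	nfc := "\u00e9"             // "é" precomposed, as sent on the wire
	nfd := norm.NFD.String(nfc) // "e" plus combining accent, as on Darwin
	fmt.Println(nfc == nfd, len(nfc), len(nfd)) // false 2 3
}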

View File

@@ -0,0 +1,31 @@
// Copyright (C) 2014 The Protocol Authors.
// +build !windows,!darwin
package protocol
// Normal Unixes use NFC and slashes, which is the wire format.
type nativeModel struct {
next Model
}
func (m nativeModel) Index(deviceID DeviceID, folder string, files []FileInfo, flags uint32, options []Option) {
m.next.Index(deviceID, folder, files, flags, options)
}
func (m nativeModel) IndexUpdate(deviceID DeviceID, folder string, files []FileInfo, flags uint32, options []Option) {
m.next.IndexUpdate(deviceID, folder, files, flags, options)
}
func (m nativeModel) Request(deviceID DeviceID, folder string, name string, offset int64, size int, hash []byte, flags uint32, options []Option) ([]byte, error) {
return m.next.Request(deviceID, folder, name, offset, size, hash, flags, options)
}
func (m nativeModel) ClusterConfig(deviceID DeviceID, config ClusterConfigMessage) {
m.next.ClusterConfig(deviceID, config)
}
func (m nativeModel) Close(deviceID DeviceID, err error) {
m.next.Close(deviceID, err)
}

View File

@@ -0,0 +1,63 @@
// Copyright (C) 2014 The Protocol Authors.
// +build windows
package protocol
// Windows uses backslashes as file separator and disallows a bunch of
// characters in the filename
import (
"path/filepath"
"strings"
)
var disallowedCharacters = string([]rune{
'<', '>', ':', '"', '|', '?', '*',
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
11, 12, 13, 14, 15, 16, 17, 18, 19, 20,
21, 22, 23, 24, 25, 26, 27, 28, 29, 30,
31,
})
type nativeModel struct {
next Model
}
func (m nativeModel) Index(deviceID DeviceID, folder string, files []FileInfo, flags uint32, options []Option) {
fixupFiles(files)
m.next.Index(deviceID, folder, files, flags, options)
}
func (m nativeModel) IndexUpdate(deviceID DeviceID, folder string, files []FileInfo, flags uint32, options []Option) {
fixupFiles(files)
m.next.IndexUpdate(deviceID, folder, files, flags, options)
}
func (m nativeModel) Request(deviceID DeviceID, folder string, name string, offset int64, size int, hash []byte, flags uint32, options []Option) ([]byte, error) {
name = filepath.FromSlash(name)
return m.next.Request(deviceID, folder, name, offset, size, hash, flags, options)
}
func (m nativeModel) ClusterConfig(deviceID DeviceID, config ClusterConfigMessage) {
m.next.ClusterConfig(deviceID, config)
}
func (m nativeModel) Close(deviceID DeviceID, err error) {
m.next.Close(deviceID, err)
}
func fixupFiles(files []FileInfo) {
for i, f := range files {
if strings.ContainsAny(f.Name, disallowedCharacters) {
if f.IsDeleted() {
// Don't complain if the file is marked as deleted, since it
// can't possibly exist here anyway.
continue
}
files[i].Flags |= FlagInvalid
l.Warnf("File name %q contains invalid characters; marked as invalid.", f.Name)
}
files[i].Name = filepath.FromSlash(files[i].Name)
}
}

View File

@@ -1,17 +1,4 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This program is free software: you can redistribute it and/or modify it
// under the terms of the GNU General Public License as published by the Free
// Software Foundation, either version 3 of the License, or (at your option)
// any later version.
//
// This program is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
// FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
// more details.
//
// You should have received a copy of the GNU General Public License along
// with this program. If not, see <http://www.gnu.org/licenses/>.
// Copyright (C) 2014 The Protocol Authors.
package protocol
@@ -48,6 +35,7 @@ const (
stateIdxRcvd
)
// FileInfo flags
const (
FlagDeleted uint32 = 1 << 12
FlagInvalid = 1 << 13
@@ -56,9 +44,22 @@ const (
FlagSymlink = 1 << 16
FlagSymlinkMissingTarget = 1 << 17
FlagsAll = (1 << 18) - 1
SymlinkTypeMask = FlagDirectory | FlagSymlinkMissingTarget
)
// IndexMessage message flags (for IndexUpdate)
const (
FlagIndexTemporary uint32 = 1 << iota
)
// Request message flags
const (
FlagRequestTemporary uint32 = 1 << iota
)
// ClusterConfigMessage.Folders.Devices flags
const (
FlagShareTrusted uint32 = 1 << 0
FlagShareReadOnly = 1 << 1
@@ -71,13 +72,17 @@ var (
ErrClosed = errors.New("connection closed")
)
// Specific variants of empty messages...
type pingMessage struct{ EmptyMessage }
type pongMessage struct{ EmptyMessage }
type Model interface {
// An index was received from the peer device
Index(deviceID DeviceID, folder string, files []FileInfo)
Index(deviceID DeviceID, folder string, files []FileInfo, flags uint32, options []Option)
// An index update was received from the peer device
IndexUpdate(deviceID DeviceID, folder string, files []FileInfo)
IndexUpdate(deviceID DeviceID, folder string, files []FileInfo, flags uint32, options []Option)
// A request was made by the peer device
Request(deviceID DeviceID, folder string, name string, offset int64, size int) ([]byte, error)
Request(deviceID DeviceID, folder string, name string, offset int64, size int, hash []byte, flags uint32, options []Option) ([]byte, error)
// A cluster configuration message was received
ClusterConfig(deviceID DeviceID, config ClusterConfigMessage)
// The peer device closed the connection
@@ -87,9 +92,9 @@ type Model interface {
type Connection interface {
ID() DeviceID
Name() string
Index(folder string, files []FileInfo) error
IndexUpdate(folder string, files []FileInfo) error
Request(folder string, name string, offset int64, size int) ([]byte, error)
Index(folder string, files []FileInfo, flags uint32, options []Option) error
IndexUpdate(folder string, files []FileInfo, flags uint32, options []Option) error
Request(folder string, name string, offset int64, size int, hash []byte, flags uint32, options []Option) ([]byte, error)
ClusterConfig(config ClusterConfigMessage)
Statistics() Statistics
}
@@ -113,7 +118,7 @@ type rawConnection struct {
closed chan struct{}
once sync.Once
compressionThreshold int // compress messages larger than this many bytes
compression Compression
rdbuf0 []byte // used & reused by readMessage
rdbuf1 []byte // used & reused by readMessage
@@ -133,30 +138,30 @@ type encodable interface {
AppendXDR([]byte) ([]byte, error)
}
type isEofer interface {
IsEOF() bool
}
const (
pingTimeout = 30 * time.Second
pingIdleTime = 60 * time.Second
)
func NewConnection(deviceID DeviceID, reader io.Reader, writer io.Writer, receiver Model, name string, compress bool) Connection {
func NewConnection(deviceID DeviceID, reader io.Reader, writer io.Writer, receiver Model, name string, compress Compression) Connection {
cr := &countingReader{Reader: reader}
cw := &countingWriter{Writer: writer}
compThres := 1<<31 - 1 // compression disabled
if compress {
compThres = 128 // compress messages that are 128 bytes long or larger
}
c := rawConnection{
id: deviceID,
name: name,
receiver: nativeModel{receiver},
state: stateInitial,
cr: cr,
cw: cw,
outbox: make(chan hdrMsg),
nextID: make(chan int),
closed: make(chan struct{}),
compressionThreshold: compThres,
id: deviceID,
name: name,
receiver: nativeModel{receiver},
state: stateInitial,
cr: cr,
cw: cw,
outbox: make(chan hdrMsg),
nextID: make(chan int),
closed: make(chan struct{}),
compression: compress,
}
go c.readerLoop()
@@ -176,33 +181,43 @@ func (c *rawConnection) Name() string {
}
// Index writes the list of file information to the connected peer device
func (c *rawConnection) Index(folder string, idx []FileInfo) error {
func (c *rawConnection) Index(folder string, idx []FileInfo, flags uint32, options []Option) error {
select {
case <-c.closed:
return ErrClosed
default:
}
c.idxMut.Lock()
c.send(-1, messageTypeIndex, IndexMessage{folder, idx})
c.send(-1, messageTypeIndex, IndexMessage{
Folder: folder,
Files: idx,
Flags: flags,
Options: options,
})
c.idxMut.Unlock()
return nil
}
// IndexUpdate writes the list of file information to the connected peer device as an update
func (c *rawConnection) IndexUpdate(folder string, idx []FileInfo) error {
func (c *rawConnection) IndexUpdate(folder string, idx []FileInfo, flags uint32, options []Option) error {
select {
case <-c.closed:
return ErrClosed
default:
}
c.idxMut.Lock()
c.send(-1, messageTypeIndexUpdate, IndexMessage{folder, idx})
c.send(-1, messageTypeIndexUpdate, IndexMessage{
Folder: folder,
Files: idx,
Flags: flags,
Options: options,
})
c.idxMut.Unlock()
return nil
}
// Request returns the bytes for the specified block after fetching them from the connected peer.
func (c *rawConnection) Request(folder string, name string, offset int64, size int) ([]byte, error) {
func (c *rawConnection) Request(folder string, name string, offset int64, size int, hash []byte, flags uint32, options []Option) ([]byte, error) {
var id int
select {
case id = <-c.nextID:
@@ -218,7 +233,15 @@ func (c *rawConnection) Request(folder string, name string, offset int64, size i
c.awaiting[id] = rc
c.awaitingMut.Unlock()
ok := c.send(id, messageTypeRequest, RequestMessage{folder, name, uint64(offset), uint32(size)})
ok := c.send(id, messageTypeRequest, RequestMessage{
Folder: folder,
Name: name,
Offset: offset,
Size: int32(size),
Hash: hash,
Flags: flags,
Options: options,
})
if !ok {
return nil, ErrClosed
}
@@ -274,48 +297,51 @@ func (c *rawConnection) readerLoop() (err error) {
return err
}
switch hdr.msgType {
case messageTypeIndex:
if c.state < stateCCRcvd {
return fmt.Errorf("protocol error: index message in state %d", c.state)
}
c.handleIndex(msg.(IndexMessage))
c.state = stateIdxRcvd
switch msg := msg.(type) {
case IndexMessage:
switch hdr.msgType {
case messageTypeIndex:
if c.state < stateCCRcvd {
return fmt.Errorf("protocol error: index message in state %d", c.state)
}
c.handleIndex(msg)
c.state = stateIdxRcvd
case messageTypeIndexUpdate:
if c.state < stateIdxRcvd {
return fmt.Errorf("protocol error: index update message in state %d", c.state)
case messageTypeIndexUpdate:
if c.state < stateIdxRcvd {
return fmt.Errorf("protocol error: index update message in state %d", c.state)
}
c.handleIndexUpdate(msg)
}
c.handleIndexUpdate(msg.(IndexMessage))
case messageTypeRequest:
case RequestMessage:
if c.state < stateIdxRcvd {
return fmt.Errorf("protocol error: request message in state %d", c.state)
}
// Requests are handled asynchronously
go c.handleRequest(hdr.msgID, msg.(RequestMessage))
go c.handleRequest(hdr.msgID, msg)
case messageTypeResponse:
case ResponseMessage:
if c.state < stateIdxRcvd {
return fmt.Errorf("protocol error: response message in state %d", c.state)
}
c.handleResponse(hdr.msgID, msg.(ResponseMessage))
c.handleResponse(hdr.msgID, msg)
case messageTypePing:
c.send(hdr.msgID, messageTypePong, EmptyMessage{})
case pingMessage:
c.send(hdr.msgID, messageTypePong, pongMessage{})
case messageTypePong:
case pongMessage:
c.handlePong(hdr.msgID)
case messageTypeClusterConfig:
case ClusterConfigMessage:
if c.state != stateInitial {
return fmt.Errorf("protocol error: cluster config message in state %d", c.state)
}
go c.receiver.ClusterConfig(c.id, msg.(ClusterConfigMessage))
go c.receiver.ClusterConfig(c.id, msg)
c.state = stateCCRcvd
case messageTypeClose:
return errors.New(msg.(CloseMessage).Reason)
case CloseMessage:
return errors.New(msg.Reason)
default:
return fmt.Errorf("protocol error: %s: unknown message type %#x", c.id, hdr.msgType)
@@ -341,6 +367,11 @@ func (c *rawConnection) readMessage() (hdr header, msg encodable, err error) {
l.Debugf("read header %v (msglen=%d)", hdr, msglen)
}
if hdr.version != 0 {
err = fmt.Errorf("unknown protocol version 0x%x", hdr.version)
return
}
if cap(c.rdbuf0) < msglen {
c.rdbuf0 = make([]byte, msglen)
} else {
@@ -376,33 +407,58 @@ func (c *rawConnection) readMessage() (hdr header, msg encodable, err error) {
}
}
// We check each returned error for the XDRError.IsEOF() method.
// IsEOF()==true here means that the message contained fewer fields than
// expected. It does not signify an EOF on the socket, because we've
// successfully read a size value and that many bytes already. New fields
// we expected but the other peer didn't send should be interpreted as
// zero/nil, and if that's not valid we'll verify it somewhere else.
switch hdr.msgType {
case messageTypeIndex, messageTypeIndexUpdate:
var idx IndexMessage
err = idx.UnmarshalXDR(msgBuf)
if xdrErr, ok := err.(isEofer); ok && xdrErr.IsEOF() {
err = nil
}
msg = idx
case messageTypeRequest:
var req RequestMessage
err = req.UnmarshalXDR(msgBuf)
if xdrErr, ok := err.(isEofer); ok && xdrErr.IsEOF() {
err = nil
}
msg = req
case messageTypeResponse:
var resp ResponseMessage
err = resp.UnmarshalXDR(msgBuf)
if xdrErr, ok := err.(isEofer); ok && xdrErr.IsEOF() {
err = nil
}
msg = resp
case messageTypePing, messageTypePong:
msg = EmptyMessage{}
case messageTypePing:
msg = pingMessage{}
case messageTypePong:
msg = pongMessage{}
case messageTypeClusterConfig:
var cc ClusterConfigMessage
err = cc.UnmarshalXDR(msgBuf)
if xdrErr, ok := err.(isEofer); ok && xdrErr.IsEOF() {
err = nil
}
msg = cc
case messageTypeClose:
var cm CloseMessage
err = cm.UnmarshalXDR(msgBuf)
if xdrErr, ok := err.(isEofer); ok && xdrErr.IsEOF() {
err = nil
}
msg = cm
default:
@@ -414,29 +470,58 @@ func (c *rawConnection) readMessage() (hdr header, msg encodable, err error) {
func (c *rawConnection) handleIndex(im IndexMessage) {
if debug {
l.Debugf("Index(%v, %v, %d files)", c.id, im.Folder, len(im.Files))
l.Debugf("Index(%v, %v, %d file, flags %x, opts: %s)", c.id, im.Folder, len(im.Files), im.Flags, im.Options)
}
c.receiver.Index(c.id, im.Folder, im.Files)
c.receiver.Index(c.id, im.Folder, filterIndexMessageFiles(im.Files), im.Flags, im.Options)
}
func (c *rawConnection) handleIndexUpdate(im IndexMessage) {
if debug {
l.Debugf("queueing IndexUpdate(%v, %v, %d files)", c.id, im.Folder, len(im.Files))
l.Debugf("queueing IndexUpdate(%v, %v, %d files, flags %x, opts: %s)", c.id, im.Folder, len(im.Files), im.Flags, im.Options)
}
c.receiver.IndexUpdate(c.id, im.Folder, im.Files)
c.receiver.IndexUpdate(c.id, im.Folder, filterIndexMessageFiles(im.Files), im.Flags, im.Options)
}
func filterIndexMessageFiles(fs []FileInfo) []FileInfo {
var out []FileInfo
for i, f := range fs {
switch f.Name {
case "", ".", "..", "/": // A few obviously invalid filenames
l.Infof("Dropping invalid filename %q from incoming index", f.Name)
if out == nil {
// Most incoming updates won't contain anything invalid, so we
// delay allocating the output slice until we really need it,
// then copy all the so far valid files to it.
out = make([]FileInfo, i, len(fs)-1)
copy(out, fs)
}
default:
if out != nil {
out = append(out, f)
}
}
}
if out != nil {
return out
}
return fs
}
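
The allocation strategy in filterIndexMessageFiles above is worth noting: nothing is copied until the first invalid name is seen, so the common all-valid case returns the input slice untouched. A standalone sketch of the same pattern over plain strings (names here are illustrative):

package main

import "fmt"

// filterValid returns its input unchanged when nothing is dropped and only
// allocates an output slice once the first invalid entry is found.
func filterValid(in []string) []string {
	var out []string
	for i, s := range in {
		switch s {
		case "", ".", "..", "/":
			if out == nil {
				out = make([]string, i, len(in)-1)
				copy(out, in[:i]) // carry over the entries already seen
			}
		default:
			if out != nil {
				out = append(out, s)
			}
		}
	}
	if out != nil {
		return out
	}
	return in // fast path: no allocation, no copy
}

func main() {
	fmt.Println(filterValid([]string{"a", "..", "b"})) // [a b]
	fmt.Println(filterValid([]string{"a", "b"}))       // [a b], input returned as-is
}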
func (c *rawConnection) handleRequest(msgID int, req RequestMessage) {
data, _ := c.receiver.Request(c.id, req.Folder, req.Name, int64(req.Offset), int(req.Size))
data, err := c.receiver.Request(c.id, req.Folder, req.Name, int64(req.Offset), int(req.Size), req.Hash, req.Flags, req.Options)
c.send(msgID, messageTypeResponse, ResponseMessage{data})
c.send(msgID, messageTypeResponse, ResponseMessage{
Data: data,
Code: errorToCode(err),
})
}
func (c *rawConnection) handleResponse(msgID int, resp ResponseMessage) {
c.awaitingMut.Lock()
if rc := c.awaiting[msgID]; rc != nil {
c.awaiting[msgID] = nil
rc <- asyncResult{resp.Data, nil}
rc <- asyncResult{resp.Data, codeToError(resp.Code)}
close(rc)
}
c.awaitingMut.Unlock()
@@ -493,7 +578,15 @@ func (c *rawConnection) writerLoop() {
return
}
if len(uncBuf) >= c.compressionThreshold {
compress := false
switch c.compression {
case CompressAlways:
compress = true
case CompressMetadata:
compress = hm.hdr.msgType != messageTypeResponse
}
if compress && len(uncBuf) >= compressionThreshold {
// Use compression for large messages
hm.hdr.compression = true
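
The switch above is the heart of the new Compression setting: CompressMetadata skips Response messages (bulk file data, often already compressed) but still compresses index traffic. A standalone sketch of that decision, with stand-in names for the mode values, message type check and threshold:

package main

import "fmt"

type compression int

const (
	compressNever compression = iota // stand-ins for the Compression values
	compressMetadata
	compressAlways
)

// shouldCompress mirrors the writerLoop logic: pick by mode, then
// only bother for messages at or above the size threshold.
func shouldCompress(mode compression, isResponse bool, size, threshold int) bool {
	compress := false
	switch mode {
	case compressAlways:
		compress = true
	case compressMetadata:
		compress = !isResponse
	}
	return compress && size >= threshold
}

func main() {
	fmt.Println(shouldCompress(compressMetadata, true, 4096, 128))  // false: file data passes through
	fmt.Println(shouldCompress(compressMetadata, false, 4096, 128)) // true: index metadata is compressed
}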
@@ -630,8 +723,8 @@ func (c *rawConnection) pingerLoop() {
type Statistics struct {
At time.Time
InBytesTotal uint64
OutBytesTotal uint64
InBytesTotal int64
OutBytesTotal int64
}
func (c *rawConnection) Statistics() Statistics {

View File

@@ -1,17 +1,4 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This program is free software: you can redistribute it and/or modify it
// under the terms of the GNU General Public License as published by the Free
// Software Foundation, either version 3 of the License, or (at your option)
// any later version.
//
// This program is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
// FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
// more details.
//
// You should have received a copy of the GNU General Public License along
// with this program. If not, see <http://www.gnu.org/licenses/>.
// Copyright (C) 2014 The Protocol Authors.
package protocol
@@ -80,8 +67,8 @@ func TestPing(t *testing.T) {
ar, aw := io.Pipe()
br, bw := io.Pipe()
c0 := NewConnection(c0ID, ar, bw, nil, "name", true).(wireFormatConnection).next.(*rawConnection)
c1 := NewConnection(c1ID, br, aw, nil, "name", true).(wireFormatConnection).next.(*rawConnection)
c0 := NewConnection(c0ID, ar, bw, nil, "name", CompressAlways).(wireFormatConnection).next.(*rawConnection)
c1 := NewConnection(c1ID, br, aw, nil, "name", CompressAlways).(wireFormatConnection).next.(*rawConnection)
if ok := c0.ping(); !ok {
t.Error("c0 ping failed")
@@ -104,8 +91,8 @@ func TestPingErr(t *testing.T) {
eaw := &ErrPipe{PipeWriter: *aw, max: i, err: e}
ebw := &ErrPipe{PipeWriter: *bw, max: j, err: e}
c0 := NewConnection(c0ID, ar, ebw, m0, "name", true).(wireFormatConnection).next.(*rawConnection)
NewConnection(c1ID, br, eaw, m1, "name", true)
c0 := NewConnection(c0ID, ar, ebw, m0, "name", CompressAlways).(wireFormatConnection).next.(*rawConnection)
NewConnection(c1ID, br, eaw, m1, "name", CompressAlways)
res := c0.ping()
if (i < 8 || j < 8) && res {
@@ -180,8 +167,8 @@ func TestVersionErr(t *testing.T) {
ar, aw := io.Pipe()
br, bw := io.Pipe()
c0 := NewConnection(c0ID, ar, bw, m0, "name", true).(wireFormatConnection).next.(*rawConnection)
NewConnection(c1ID, br, aw, m1, "name", true)
c0 := NewConnection(c0ID, ar, bw, m0, "name", CompressAlways).(wireFormatConnection).next.(*rawConnection)
NewConnection(c1ID, br, aw, m1, "name", CompressAlways)
w := xdr.NewWriter(c0.cw)
w.WriteUint32(encodeHeader(header{
@@ -189,7 +176,7 @@ func TestVersionErr(t *testing.T) {
msgID: 0,
msgType: 0,
}))
w.WriteUint32(0)
w.WriteUint32(0) // Avoids reader closing due to EOF
if !m1.isClosed() {
t.Error("Connection should close due to unknown version")
@@ -203,8 +190,8 @@ func TestTypeErr(t *testing.T) {
ar, aw := io.Pipe()
br, bw := io.Pipe()
c0 := NewConnection(c0ID, ar, bw, m0, "name", true).(wireFormatConnection).next.(*rawConnection)
NewConnection(c1ID, br, aw, m1, "name", true)
c0 := NewConnection(c0ID, ar, bw, m0, "name", CompressAlways).(wireFormatConnection).next.(*rawConnection)
NewConnection(c1ID, br, aw, m1, "name", CompressAlways)
w := xdr.NewWriter(c0.cw)
w.WriteUint32(encodeHeader(header{
@@ -212,7 +199,7 @@ func TestTypeErr(t *testing.T) {
msgID: 0,
msgType: 42,
}))
w.WriteUint32(0)
w.WriteUint32(0) // Avoids reader closing due to EOF
if !m1.isClosed() {
t.Error("Connection should close due to unknown message type")
@@ -226,8 +213,8 @@ func TestClose(t *testing.T) {
ar, aw := io.Pipe()
br, bw := io.Pipe()
c0 := NewConnection(c0ID, ar, bw, m0, "name", true).(wireFormatConnection).next.(*rawConnection)
NewConnection(c1ID, br, aw, m1, "name", true)
c0 := NewConnection(c0ID, ar, bw, m0, "name", CompressAlways).(wireFormatConnection).next.(*rawConnection)
NewConnection(c1ID, br, aw, m1, "name", CompressAlways)
c0.close(nil)
@@ -242,10 +229,10 @@ func TestClose(t *testing.T) {
t.Error("Ping should not return true")
}
c0.Index("default", nil)
c0.Index("default", nil)
c0.Index("default", nil, 0, nil)
c0.Index("default", nil, 0, nil)
if _, err := c0.Request("default", "foo", 0, 0); err == nil {
if _, err := c0.Request("default", "foo", 0, 0, nil, 0, nil); err == nil {
t.Error("Request should return an error")
}
}

View File

@@ -0,0 +1,115 @@
// Copyright (C) 2015 The Protocol Authors.
package protocol
// The Vector type represents a version vector. The zero value is a usable
// version vector. The vector has slice semantics and some operations on it
// are "append-like" in that they may return the same vector modified, or a
// new allocated Vector with the modified contents.
type Vector []Counter
// Counter represents a single counter in the version vector.
type Counter struct {
ID uint64
Value uint64
}
// Update returns a Vector with the index for the specific ID incremented by
// one. If it is possible, the vector v is updated and returned. If it is not,
// a copy will be created, updated and returned.
func (v Vector) Update(ID uint64) Vector {
for i := range v {
if v[i].ID == ID {
// Update an existing index
v[i].Value++
return v
} else if v[i].ID > ID {
// Insert a new index
nv := make(Vector, len(v)+1)
copy(nv, v[:i])
nv[i].ID = ID
nv[i].Value = 1
copy(nv[i+1:], v[i:])
return nv
}
}
// Append a new index
return append(v, Counter{ID, 1})
}
// Merge returns the vector containing the maximum indexes from a and b. If it
// is possible, the vector a is updated and returned. If it is not, a copy
// will be created, updated and returned.
func (a Vector) Merge(b Vector) Vector {
var ai, bi int
for bi < len(b) {
if ai == len(a) {
// We've reached the end of a; all that remains are appends
return append(a, b[bi:]...)
}
if a[ai].ID > b[bi].ID {
// The index from b should be inserted here
n := make(Vector, len(a)+1)
copy(n, a[:ai])
n[ai] = b[bi]
copy(n[ai+1:], a[ai:])
a = n
}
if a[ai].ID == b[bi].ID {
if v := b[bi].Value; v > a[ai].Value {
a[ai].Value = v
}
}
if bi < len(b) && a[ai].ID == b[bi].ID {
bi++
}
ai++
}
return a
}
// Copy returns an identical vector that is not shared with v.
func (v Vector) Copy() Vector {
nv := make(Vector, len(v))
copy(nv, v)
return nv
}
// Equal returns true when the two vectors are equivalent.
func (a Vector) Equal(b Vector) bool {
return a.Compare(b) == Equal
}
// LesserEqual returns true when the two vectors are equivalent or a is Lesser
// than b.
func (a Vector) LesserEqual(b Vector) bool {
comp := a.Compare(b)
return comp == Lesser || comp == Equal
}
// GreaterEqual returns true when the two vectors are equivalent or a is
// Greater than b.
func (a Vector) GreaterEqual(b Vector) bool {
comp := a.Compare(b)
return comp == Greater || comp == Equal
}
// Concurrent returns true when the two vectors are concurrent.
func (a Vector) Concurrent(b Vector) bool {
comp := a.Compare(b)
return comp == ConcurrentGreater || comp == ConcurrentLesser
}
// Counter returns the current value of the given counter ID.
func (v Vector) Counter(id uint64) uint64 {
for _, c := range v {
if c.ID == id {
return c.Value
}
}
return 0
}
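
A test-style usage sketch of the semantics documented above; it would live alongside vector.go in package protocol, and the one rule to remember is to always use the return value of Update and Merge:

package protocol

import "fmt"

func ExampleVector() {
	var v Vector // the zero value is a usable version vector
	v = v.Update(42)                    // append: {42:1}
	v = v.Update(7)                     // insert, IDs stay sorted: {7:1, 42:1}
	v = v.Merge(Vector{Counter{42, 3}}) // per-ID maximum: {7:1, 42:3}
	fmt.Println(v.Counter(42), v.Counter(7))
	// Output: 3 1
}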

View File

@@ -0,0 +1,89 @@
// Copyright (C) 2015 The Protocol Authors.
package protocol
// Ordering represents the relationship between two Vectors.
type Ordering int
const (
Equal Ordering = iota
Greater
Lesser
ConcurrentLesser
ConcurrentGreater
)
// There's really no such thing as "concurrent lesser" and "concurrent
// greater" in version vectors, just "concurrent". But it's useful to be able
// to get a strict ordering between versions for stable sorts and so on, so we
// return both variants. The convenience method Concurrent() can be used to
// check for either case.
// Compare returns the Ordering that describes a's relation to b.
func (a Vector) Compare(b Vector) Ordering {
var ai, bi int // index into a and b
var av, bv Counter // value at current index
result := Equal
for ai < len(a) || bi < len(b) {
var aMissing, bMissing bool
if ai < len(a) {
av = a[ai]
} else {
av = Counter{}
aMissing = true
}
if bi < len(b) {
bv = b[bi]
} else {
bv = Counter{}
bMissing = true
}
switch {
case av.ID == bv.ID:
// We have a counter value for each side
if av.Value > bv.Value {
if result == Lesser {
return ConcurrentLesser
}
result = Greater
} else if av.Value < bv.Value {
if result == Greater {
return ConcurrentGreater
}
result = Lesser
}
case !aMissing && av.ID < bv.ID || bMissing:
// Value is missing on the b side
if av.Value > 0 {
if result == Lesser {
return ConcurrentLesser
}
result = Greater
}
case !bMissing && bv.ID < av.ID || aMissing:
// Value is missing on the a side
if bv.Value > 0 {
if result == Greater {
return ConcurrentGreater
}
result = Lesser
}
}
if ai < len(a) && (av.ID <= bv.ID || bMissing) {
ai++
}
if bi < len(b) && (bv.ID <= av.ID || aMissing) {
bi++
}
}
return result
}
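
A short worked example of the orderings above (test-style, in package protocol): two vectors over the same ID order normally, while counters for disjoint IDs on both sides yield one of the concurrent results:

package protocol

import "fmt"

func ExampleVector_Compare() {
	a := Vector{Counter{1, 2}}
	b := Vector{Counter{1, 1}}
	fmt.Println(a.Compare(b) == Greater) // same ID, higher counter: a is newer

	c := Vector{Counter{2, 1}}
	fmt.Println(a.Compare(c) == ConcurrentGreater) // disjoint IDs: concurrent
	fmt.Println(c.Concurrent(a))                   // the convenience check
	// Output:
	// true
	// true
	// true
}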

View File

@@ -0,0 +1,249 @@
// Copyright (C) 2015 The Protocol Authors.
package protocol
import (
"math"
"testing"
)
func TestCompare(t *testing.T) {
testcases := []struct {
a, b Vector
r Ordering
}{
// Empty vectors are identical
{Vector{}, Vector{}, Equal},
{Vector{}, nil, Equal},
{nil, Vector{}, Equal},
{nil, Vector{Counter{42, 0}}, Equal},
{Vector{}, Vector{Counter{42, 0}}, Equal},
{Vector{Counter{42, 0}}, nil, Equal},
{Vector{Counter{42, 0}}, Vector{}, Equal},
// Zero is the implied value for a missing Counter
{
Vector{Counter{42, 0}},
Vector{Counter{77, 0}},
Equal,
},
// Equal vectors are equal
{
Vector{Counter{42, 33}},
Vector{Counter{42, 33}},
Equal,
},
{
Vector{Counter{42, 33}, Counter{77, 24}},
Vector{Counter{42, 33}, Counter{77, 24}},
Equal,
},
// These a-vectors are all greater than the b-vector
{
Vector{Counter{42, 1}},
nil,
Greater,
},
{
Vector{Counter{42, 1}},
Vector{},
Greater,
},
{
Vector{Counter{0, 1}},
Vector{Counter{0, 0}},
Greater,
},
{
Vector{Counter{42, 1}},
Vector{Counter{42, 0}},
Greater,
},
{
Vector{Counter{math.MaxUint64, 1}},
Vector{Counter{math.MaxUint64, 0}},
Greater,
},
{
Vector{Counter{0, math.MaxUint64}},
Vector{Counter{0, 0}},
Greater,
},
{
Vector{Counter{42, math.MaxUint64}},
Vector{Counter{42, 0}},
Greater,
},
{
Vector{Counter{math.MaxUint64, math.MaxUint64}},
Vector{Counter{math.MaxUint64, 0}},
Greater,
},
{
Vector{Counter{0, math.MaxUint64}},
Vector{Counter{0, math.MaxUint64 - 1}},
Greater,
},
{
Vector{Counter{42, math.MaxUint64}},
Vector{Counter{42, math.MaxUint64 - 1}},
Greater,
},
{
Vector{Counter{math.MaxUint64, math.MaxUint64}},
Vector{Counter{math.MaxUint64, math.MaxUint64 - 1}},
Greater,
},
{
Vector{Counter{42, 2}},
Vector{Counter{42, 1}},
Greater,
},
{
Vector{Counter{22, 22}, Counter{42, 2}},
Vector{Counter{22, 22}, Counter{42, 1}},
Greater,
},
{
Vector{Counter{42, 2}, Counter{77, 3}},
Vector{Counter{42, 1}, Counter{77, 3}},
Greater,
},
{
Vector{Counter{22, 22}, Counter{42, 2}, Counter{77, 3}},
Vector{Counter{22, 22}, Counter{42, 1}, Counter{77, 3}},
Greater,
},
{
Vector{Counter{22, 23}, Counter{42, 2}, Counter{77, 4}},
Vector{Counter{22, 22}, Counter{42, 1}, Counter{77, 3}},
Greater,
},
// These a-vectors are all lesser than the b-vector
{nil, Vector{Counter{42, 1}}, Lesser},
{Vector{}, Vector{Counter{42, 1}}, Lesser},
{
Vector{Counter{42, 0}},
Vector{Counter{42, 1}},
Lesser,
},
{
Vector{Counter{42, 1}},
Vector{Counter{42, 2}},
Lesser,
},
{
Vector{Counter{22, 22}, Counter{42, 1}},
Vector{Counter{22, 22}, Counter{42, 2}},
Lesser,
},
{
Vector{Counter{42, 1}, Counter{77, 3}},
Vector{Counter{42, 2}, Counter{77, 3}},
Lesser,
},
{
Vector{Counter{22, 22}, Counter{42, 1}, Counter{77, 3}},
Vector{Counter{22, 22}, Counter{42, 2}, Counter{77, 3}},
Lesser,
},
{
Vector{Counter{22, 22}, Counter{42, 1}, Counter{77, 3}},
Vector{Counter{22, 23}, Counter{42, 2}, Counter{77, 4}},
Lesser,
},
// These are all in conflict
{
Vector{Counter{42, 2}},
Vector{Counter{43, 1}},
ConcurrentGreater,
},
{
Vector{Counter{43, 1}},
Vector{Counter{42, 2}},
ConcurrentLesser,
},
{
Vector{Counter{22, 23}, Counter{42, 1}},
Vector{Counter{22, 22}, Counter{42, 2}},
ConcurrentGreater,
},
{
Vector{Counter{22, 21}, Counter{42, 2}},
Vector{Counter{22, 22}, Counter{42, 1}},
ConcurrentLesser,
},
{
Vector{Counter{22, 21}, Counter{42, 2}, Counter{43, 1}},
Vector{Counter{20, 1}, Counter{22, 22}, Counter{42, 1}},
ConcurrentLesser,
},
}
for i, tc := range testcases {
// Test real Compare
if r := tc.a.Compare(tc.b); r != tc.r {
t.Errorf("%d: %+v.Compare(%+v) == %v (expected %v)", i, tc.a, tc.b, r, tc.r)
}
// Test convenience functions
switch tc.r {
case Greater:
if tc.a.Equal(tc.b) {
t.Errorf("%+v == %+v", tc.a, tc.b)
}
if tc.a.Concurrent(tc.b) {
t.Errorf("%+v concurrent %+v", tc.a, tc.b)
}
if !tc.a.GreaterEqual(tc.b) {
t.Errorf("%+v not >= %+v", tc.a, tc.b)
}
if tc.a.LesserEqual(tc.b) {
t.Errorf("%+v <= %+v", tc.a, tc.b)
}
case Lesser:
if tc.a.Concurrent(tc.b) {
t.Errorf("%+v concurrent %+v", tc.a, tc.b)
}
if tc.a.Equal(tc.b) {
t.Errorf("%+v == %+v", tc.a, tc.b)
}
if tc.a.GreaterEqual(tc.b) {
t.Errorf("%+v >= %+v", tc.a, tc.b)
}
if !tc.a.LesserEqual(tc.b) {
t.Errorf("%+v not <= %+v", tc.a, tc.b)
}
case Equal:
if tc.a.Concurrent(tc.b) {
t.Errorf("%+v concurrent %+v", tc.a, tc.b)
}
if !tc.a.Equal(tc.b) {
t.Errorf("%+v not == %+v", tc.a, tc.b)
}
if !tc.a.GreaterEqual(tc.b) {
t.Errorf("%+v not <= %+v", tc.a, tc.b)
}
if !tc.a.LesserEqual(tc.b) {
t.Errorf("%+v not <= %+v", tc.a, tc.b)
}
case ConcurrentLesser, ConcurrentGreater:
if !tc.a.Concurrent(tc.b) {
t.Errorf("%+v not concurrent %+v", tc.a, tc.b)
}
if tc.a.Equal(tc.b) {
t.Errorf("%+v == %+v", tc.a, tc.b)
}
if tc.a.GreaterEqual(tc.b) {
t.Errorf("%+v >= %+v", tc.a, tc.b)
}
if tc.a.LesserEqual(tc.b) {
t.Errorf("%+v <= %+v", tc.a, tc.b)
}
}
}
}

View File

@@ -0,0 +1,134 @@
// Copyright (C) 2015 The Protocol Authors.
package protocol
import "testing"
func TestUpdate(t *testing.T) {
var v Vector
// Append
v = v.Update(42)
expected := Vector{Counter{42, 1}}
if v.Compare(expected) != Equal {
t.Errorf("Update error, %+v != %+v", v, expected)
}
// Insert at front
v = v.Update(36)
expected = Vector{Counter{36, 1}, Counter{42, 1}}
if v.Compare(expected) != Equal {
t.Errorf("Update error, %+v != %+v", v, expected)
}
// Insert in middle
v = v.Update(37)
expected = Vector{Counter{36, 1}, Counter{37, 1}, Counter{42, 1}}
if v.Compare(expected) != Equal {
t.Errorf("Update error, %+v != %+v", v, expected)
}
// Update existing
v = v.Update(37)
expected = Vector{Counter{36, 1}, Counter{37, 2}, Counter{42, 1}}
if v.Compare(expected) != Equal {
t.Errorf("Update error, %+v != %+v", v, expected)
}
}
func TestCopy(t *testing.T) {
v0 := Vector{Counter{42, 1}}
v1 := v0.Copy()
v1 = v1.Update(42)
if v0.Compare(v1) != Lesser {
t.Errorf("Copy error, %+v should be ancestor of %+v", v0, v1)
}
}
func TestMerge(t *testing.T) {
testcases := []struct {
a, b, m Vector
}{
// No-ops
{
Vector{},
Vector{},
Vector{},
},
{
Vector{Counter{22, 1}, Counter{42, 1}},
Vector{Counter{22, 1}, Counter{42, 1}},
Vector{Counter{22, 1}, Counter{42, 1}},
},
// Appends
{
Vector{},
Vector{Counter{22, 1}, Counter{42, 1}},
Vector{Counter{22, 1}, Counter{42, 1}},
},
{
Vector{Counter{22, 1}},
Vector{Counter{42, 1}},
Vector{Counter{22, 1}, Counter{42, 1}},
},
{
Vector{Counter{22, 1}},
Vector{Counter{22, 1}, Counter{42, 1}},
Vector{Counter{22, 1}, Counter{42, 1}},
},
// Insert
{
Vector{Counter{22, 1}, Counter{42, 1}},
Vector{Counter{22, 1}, Counter{23, 2}, Counter{42, 1}},
Vector{Counter{22, 1}, Counter{23, 2}, Counter{42, 1}},
},
{
Vector{Counter{42, 1}},
Vector{Counter{22, 1}},
Vector{Counter{22, 1}, Counter{42, 1}},
},
// Update
{
Vector{Counter{22, 1}, Counter{42, 2}},
Vector{Counter{22, 2}, Counter{42, 1}},
Vector{Counter{22, 2}, Counter{42, 2}},
},
// All of the above
{
Vector{Counter{10, 1}, Counter{20, 2}, Counter{30, 1}},
Vector{Counter{5, 1}, Counter{10, 2}, Counter{15, 1}, Counter{20, 1}, Counter{25, 1}, Counter{35, 1}},
Vector{Counter{5, 1}, Counter{10, 2}, Counter{15, 1}, Counter{20, 2}, Counter{25, 1}, Counter{30, 1}, Counter{35, 1}},
},
}
for i, tc := range testcases {
if m := tc.a.Merge(tc.b); m.Compare(tc.m) != Equal {
t.Errorf("%d: %+v.Merge(%+v) == %+v (expected %+v)", i, tc.a, tc.b, m, tc.m)
}
}
}
func TestCounterValue(t *testing.T) {
v0 := Vector{Counter{42, 1}, Counter{64, 5}}
if v0.Counter(42) != 1 {
t.Error("Counter error, %d != %d", v0.Counter(42), 1)
}
if v0.Counter(64) != 5 {
t.Error("Counter error, %d != %d", v0.Counter(64), 5)
}
if v0.Counter(72) != 0 {
t.Error("Counter error, %d != %d", v0.Counter(72), 0)
}
}

View File

@@ -0,0 +1,38 @@
// Copyright (C) 2015 The Protocol Authors.
package protocol
// This stuff is hacked up manually because genxdr doesn't support 'type
// Vector []Counter' declarations and it was tricky when I tried to add it...
type xdrWriter interface {
WriteUint32(uint32) (int, error)
WriteUint64(uint64) (int, error)
}
type xdrReader interface {
ReadUint32() uint32
ReadUint64() uint64
}
// EncodeXDRInto encodes the vector as an XDR object into the given XDR
// encoder.
func (v Vector) EncodeXDRInto(w xdrWriter) (int, error) {
w.WriteUint32(uint32(len(v)))
for i := range v {
w.WriteUint64(v[i].ID)
w.WriteUint64(v[i].Value)
}
return 4 + 16*len(v), nil
}
// DecodeXDRFrom decodes the XDR objects from the given reader into itself.
func (v *Vector) DecodeXDRFrom(r xdrReader) error {
l := int(r.ReadUint32())
n := make(Vector, l)
for i := range n {
n[i].ID = r.ReadUint64()
n[i].Value = r.ReadUint64()
}
*v = n
return nil
}
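
A round-trip sketch of the two methods above. The memXDR type here is a toy in-memory implementation of the xdrWriter and xdrReader interfaces, purely for illustration; the real code passes in the XDR encoder and decoder used by the generated message code:

package protocol

import "fmt"

// memXDR stores each written word in a slice; good enough to exercise
// EncodeXDRInto and DecodeXDRFrom in isolation.
type memXDR struct {
	buf []uint64
	pos int
}

func (m *memXDR) WriteUint32(v uint32) (int, error) { m.buf = append(m.buf, uint64(v)); return 4, nil }
func (m *memXDR) WriteUint64(v uint64) (int, error) { m.buf = append(m.buf, v); return 8, nil }
func (m *memXDR) ReadUint32() uint32                { v := m.buf[m.pos]; m.pos++; return uint32(v) }
func (m *memXDR) ReadUint64() uint64                { v := m.buf[m.pos]; m.pos++; return v }

func ExampleVector_EncodeXDRInto() {
	v := Vector{Counter{42, 1}, Counter{64, 5}}
	m := &memXDR{}
	v.EncodeXDRInto(m)
	var w Vector
	w.DecodeXDRFrom(m)
	fmt.Println(w.Equal(v))
	// Output: true
}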

View File

@@ -1,24 +1,11 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This program is free software: you can redistribute it and/or modify it
// under the terms of the GNU General Public License as published by the Free
// Software Foundation, either version 3 of the License, or (at your option)
// any later version.
//
// This program is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
// FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
// more details.
//
// You should have received a copy of the GNU General Public License along
// with this program. If not, see <http://www.gnu.org/licenses/>.
// Copyright (C) 2014 The Protocol Authors.
package protocol
import (
"path/filepath"
"code.google.com/p/go.text/unicode/norm"
"golang.org/x/text/unicode/norm"
)
type wireFormatConnection struct {
@@ -33,7 +20,7 @@ func (c wireFormatConnection) Name() string {
return c.next.Name()
}
func (c wireFormatConnection) Index(folder string, fs []FileInfo) error {
func (c wireFormatConnection) Index(folder string, fs []FileInfo, flags uint32, options []Option) error {
var myFs = make([]FileInfo, len(fs))
copy(myFs, fs)
@@ -41,10 +28,10 @@ func (c wireFormatConnection) Index(folder string, fs []FileInfo) error {
myFs[i].Name = norm.NFC.String(filepath.ToSlash(myFs[i].Name))
}
return c.next.Index(folder, myFs)
return c.next.Index(folder, myFs, flags, options)
}
func (c wireFormatConnection) IndexUpdate(folder string, fs []FileInfo) error {
func (c wireFormatConnection) IndexUpdate(folder string, fs []FileInfo, flags uint32, options []Option) error {
var myFs = make([]FileInfo, len(fs))
copy(myFs, fs)
@@ -52,12 +39,12 @@ func (c wireFormatConnection) IndexUpdate(folder string, fs []FileInfo) error {
myFs[i].Name = norm.NFC.String(filepath.ToSlash(myFs[i].Name))
}
return c.next.IndexUpdate(folder, myFs)
return c.next.IndexUpdate(folder, myFs, flags, options)
}
func (c wireFormatConnection) Request(folder, name string, offset int64, size int) ([]byte, error) {
func (c wireFormatConnection) Request(folder, name string, offset int64, size int, hash []byte, flags uint32, options []Option) ([]byte, error) {
name = norm.NFC.String(filepath.ToSlash(name))
return c.next.Request(folder, name, offset, size)
return c.next.Request(folder, name, offset, size, hash, flags, options)
}
func (c wireFormatConnection) ClusterConfig(config ClusterConfigMessage) {

View File

@@ -4,7 +4,7 @@
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
// +build go1.3
// +build !go1.2
package leveldb

View File

@@ -0,0 +1,30 @@
// Copyright (c) 2012, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
// +build !go1.2
package cache
import (
"math/rand"
"testing"
"time"
)
func BenchmarkLRUCache(b *testing.B) {
c := NewCache(NewLRU(10000))
b.SetParallelism(10)
b.RunParallel(func(pb *testing.PB) {
r := rand.New(rand.NewSource(time.Now().UnixNano()))
for pb.Next() {
key := uint64(r.Intn(1000000))
c.Get(0, key, func() (int, Value) {
return 1, key
}).Release()
}
})
}

View File

@@ -8,152 +8,669 @@
package cache
import (
"sync"
"sync/atomic"
"unsafe"
"github.com/syndtr/goleveldb/leveldb/util"
)
// SetFunc is the function that will be called by Namespace.Get to create
// a cache object. If charge is less than one the cache object will not be
// registered to the cache tree; if value is nil the cache object will not
// be created.
type SetFunc func() (charge int, value interface{})
// DelFin is the function that will be called as the result of a delete operation.
// exist == true indicates that the object exists; pending == true indicates
// that the deletion has already happened but isn't finished yet (waiting for
// all handles to be released). exist == false means the object doesn't exist.
type DelFin func(exist, pending bool)
// PurgeFin is the function that will be called as the result of a purge operation.
type PurgeFin func(ns, key uint64)
// Cache is a cache tree. A cache instance must be goroutine-safe.
type Cache interface {
// SetCapacity sets cache tree capacity.
SetCapacity(capacity int)
// Capacity returns cache tree capacity.
// Cacher provides an interface for implementing caching functionality.
// An implementation must be goroutine-safe.
type Cacher interface {
// Capacity returns cache capacity.
Capacity() int
// Used returns used cache tree capacity.
Used() int
// SetCapacity sets cache capacity.
SetCapacity(capacity int)
// Size returns the total size of alive cache objects.
Size() int
// Promote promotes the 'cache node'.
Promote(n *Node)
// NumObjects returns the number of alive objects.
NumObjects() int
// Ban evicts the 'cache node' and prevent subsequent 'promote'.
Ban(n *Node)
// GetNamespace gets the cache namespace with the given id.
// GetNamespace never returns nil.
GetNamespace(id uint64) Namespace
// Evict evicts the 'cache node'.
Evict(n *Node)
// PurgeNamespace purges cache namespace with the given id from this cache tree.
// Also read Namespace.Purge.
PurgeNamespace(id uint64, fin PurgeFin)
// EvictNS evicts 'cache node' with the given namespace.
EvictNS(ns uint64)
// ZapNamespace detaches cache namespace with the given id from this cache tree.
// Also read Namespace.Zap.
ZapNamespace(id uint64)
// EvictAll evicts all 'cache node'.
EvictAll()
// Purge purges all cache namespaces from this cache tree.
// This behaves the same as calling the Namespace.Purge method on every cache namespace.
Purge(fin PurgeFin)
// Zap detaches all cache namespaces from this cache tree.
// This behaves the same as calling the Namespace.Zap method on every cache namespace.
Zap()
// Close closes the 'cache tree'
Close() error
}
// Namespace is a cache namespace. A namespace instance must be goroutine-safe.
type Namespace interface {
// Get gets the cache object with the given key.
// If the cache object is not found and setf is not nil, Get will atomically
// create the cache object by calling setf. Otherwise Get returns nil.
//
// The returned cache handle should be released after use by calling the
// Release method.
Get(key uint64, setf SetFunc) Handle
// Value is a 'cacheable object'. It may implement util.Releaser; if
// so, the Release method will be called once the object is released.
type Value interface{}
// Delete removes the cache object with the given key from the cache tree.
// A deleted cache object will be released as soon as all of its handles have
// been released.
// Delete only happens once; a subsequent delete will consider the cache
// object nonexistent, even if it isn't released yet.
//
// If not nil, fin will be called if the cache object doesn't exist or when
// it is finally released.
//
// Delete returns true if such a cache object exists and has never been deleted.
Delete(key uint64, fin DelFin) bool
// Purge removes all cache objects within this namespace from the cache tree.
// This is the same as doing a delete on all cache objects.
//
// If not nil, fin will be called on each cache object when it is finally
// released.
Purge(fin PurgeFin)
// Zap detaches the namespace from the cache tree and releases all its
// cache objects. A zapped namespace can never be filled again.
// Calling Get on a zapped namespace will always return nil.
Zap()
type CacheGetter struct {
Cache *Cache
NS uint64
}
// Handle is a cache handle.
type Handle interface {
// Release releases this cache handle. This method can be safely called
// multiple times.
Release()
// Value returns the value of this cache handle.
// Value returns nil after this cache handle has been released.
Value() interface{}
func (g *CacheGetter) Get(key uint64, setFunc func() (size int, value Value)) *Handle {
return g.Cache.Get(g.NS, key, setFunc)
}
// The hash tables implementation is based on:
// "Dynamic-Sized Nonblocking Hash Tables", by Yujie Liu, Kunlong Zhang, and Michael Spear. ACM Symposium on Principles of Distributed Computing, Jul 2014.
const (
DelNotExist = iota
DelExist
DelPendig
mInitialSize = 1 << 4
mOverflowThreshold = 1 << 5
mOverflowGrowThreshold = 1 << 7
)
// Namespace state.
type nsState int
const (
nsEffective nsState = iota
nsZapped
)
// Node state.
type nodeState int
const (
nodeZero nodeState = iota
nodeEffective
nodeEvicted
nodeDeleted
)
// Fake handle.
type fakeHandle struct {
value interface{}
fin func()
once uint32
type mBucket struct {
mu sync.Mutex
node []*Node
frozen bool
}
func (h *fakeHandle) Value() interface{} {
if atomic.LoadUint32(&h.once) == 0 {
return h.value
func (b *mBucket) freeze() []*Node {
b.mu.Lock()
defer b.mu.Unlock()
if !b.frozen {
b.frozen = true
}
return b.node
}
func (b *mBucket) get(r *Cache, h *mNode, hash uint32, ns, key uint64, noset bool) (done, added bool, n *Node) {
b.mu.Lock()
if b.frozen {
b.mu.Unlock()
return
}
// Scan the node.
for _, n := range b.node {
if n.hash == hash && n.ns == ns && n.key == key {
atomic.AddInt32(&n.ref, 1)
b.mu.Unlock()
return true, false, n
}
}
// Get only.
if noset {
b.mu.Unlock()
return true, false, nil
}
// Create node.
n = &Node{
r: r,
hash: hash,
ns: ns,
key: key,
ref: 1,
}
// Add node to bucket.
b.node = append(b.node, n)
bLen := len(b.node)
b.mu.Unlock()
// Update counter.
grow := atomic.AddInt32(&r.nodes, 1) >= h.growThreshold
if bLen > mOverflowThreshold {
grow = grow || atomic.AddInt32(&h.overflow, 1) >= mOverflowGrowThreshold
}
// Grow.
if grow && atomic.CompareAndSwapInt32(&h.resizeInProgess, 0, 1) {
nhLen := len(h.buckets) << 1
nh := &mNode{
buckets: make([]unsafe.Pointer, nhLen),
mask: uint32(nhLen) - 1,
pred: unsafe.Pointer(h),
growThreshold: int32(nhLen * mOverflowThreshold),
shrinkThreshold: int32(nhLen >> 1),
}
ok := atomic.CompareAndSwapPointer(&r.mHead, unsafe.Pointer(h), unsafe.Pointer(nh))
if !ok {
panic("BUG: failed swapping head")
}
go nh.initBuckets()
}
return true, true, n
}
func (b *mBucket) delete(r *Cache, h *mNode, hash uint32, ns, key uint64) (done, deleted bool) {
b.mu.Lock()
if b.frozen {
b.mu.Unlock()
return
}
// Scan the node.
var (
n *Node
bLen int
)
for i := range b.node {
n = b.node[i]
if n.ns == ns && n.key == key {
if atomic.LoadInt32(&n.ref) == 0 {
deleted = true
// Call releaser.
if n.value != nil {
if r, ok := n.value.(util.Releaser); ok {
r.Release()
}
n.value = nil
}
// Remove node from bucket.
b.node = append(b.node[:i], b.node[i+1:]...)
bLen = len(b.node)
}
break
}
}
b.mu.Unlock()
if deleted {
// Call OnDel.
for _, f := range n.onDel {
f()
}
// Update counter.
atomic.AddInt32(&r.size, int32(n.size)*-1)
shrink := atomic.AddInt32(&r.nodes, -1) < h.shrinkThreshold
if bLen >= mOverflowThreshold {
atomic.AddInt32(&h.overflow, -1)
}
// Shrink.
if shrink && len(h.buckets) > mInitialSize && atomic.CompareAndSwapInt32(&h.resizeInProgess, 0, 1) {
nhLen := len(h.buckets) >> 1
nh := &mNode{
buckets: make([]unsafe.Pointer, nhLen),
mask: uint32(nhLen) - 1,
pred: unsafe.Pointer(h),
growThreshold: int32(nhLen * mOverflowThreshold),
shrinkThreshold: int32(nhLen >> 1),
}
ok := atomic.CompareAndSwapPointer(&r.mHead, unsafe.Pointer(h), unsafe.Pointer(nh))
if !ok {
panic("BUG: failed swapping head")
}
go nh.initBuckets()
}
}
return true, deleted
}
type mNode struct {
buckets []unsafe.Pointer // []*mBucket
mask uint32
pred unsafe.Pointer // *mNode
resizeInProgess int32
overflow int32
growThreshold int32
shrinkThreshold int32
}
func (n *mNode) initBucket(i uint32) *mBucket {
if b := (*mBucket)(atomic.LoadPointer(&n.buckets[i])); b != nil {
return b
}
p := (*mNode)(atomic.LoadPointer(&n.pred))
if p != nil {
var node []*Node
if n.mask > p.mask {
// Grow.
pb := (*mBucket)(atomic.LoadPointer(&p.buckets[i&p.mask]))
if pb == nil {
pb = p.initBucket(i & p.mask)
}
m := pb.freeze()
// Split nodes.
for _, x := range m {
if x.hash&n.mask == i {
node = append(node, x)
}
}
} else {
// Shrink.
pb0 := (*mBucket)(atomic.LoadPointer(&p.buckets[i]))
if pb0 == nil {
pb0 = p.initBucket(i)
}
pb1 := (*mBucket)(atomic.LoadPointer(&p.buckets[i+uint32(len(n.buckets))]))
if pb1 == nil {
pb1 = p.initBucket(i + uint32(len(n.buckets)))
}
m0 := pb0.freeze()
m1 := pb1.freeze()
// Merge nodes.
node = make([]*Node, 0, len(m0)+len(m1))
node = append(node, m0...)
node = append(node, m1...)
}
b := &mBucket{node: node}
if atomic.CompareAndSwapPointer(&n.buckets[i], nil, unsafe.Pointer(b)) {
if len(node) > mOverflowThreshold {
atomic.AddInt32(&n.overflow, int32(len(node)-mOverflowThreshold))
}
return b
}
}
return (*mBucket)(atomic.LoadPointer(&n.buckets[i]))
}
func (n *mNode) initBuckets() {
for i := range n.buckets {
n.initBucket(uint32(i))
}
atomic.StorePointer(&n.pred, nil)
}
// Cache is a 'cache map'.
type Cache struct {
mu sync.RWMutex
mHead unsafe.Pointer // *mNode
nodes int32
size int32
cacher Cacher
closed bool
}
// NewCache creates a new 'cache map'. The cacher is optional and
// may be nil.
func NewCache(cacher Cacher) *Cache {
h := &mNode{
buckets: make([]unsafe.Pointer, mInitialSize),
mask: mInitialSize - 1,
growThreshold: int32(mInitialSize * mOverflowThreshold),
shrinkThreshold: 0,
}
for i := range h.buckets {
h.buckets[i] = unsafe.Pointer(&mBucket{})
}
r := &Cache{
mHead: unsafe.Pointer(h),
cacher: cacher,
}
return r
}
func (r *Cache) getBucket(hash uint32) (*mNode, *mBucket) {
h := (*mNode)(atomic.LoadPointer(&r.mHead))
i := hash & h.mask
b := (*mBucket)(atomic.LoadPointer(&h.buckets[i]))
if b == nil {
b = h.initBucket(i)
}
return h, b
}
func (r *Cache) delete(n *Node) bool {
for {
h, b := r.getBucket(n.hash)
done, deleted := b.delete(r, h, n.hash, n.ns, n.key)
if done {
return deleted
}
}
return false
}
// Nodes returns the number of 'cache nodes' in the map.
func (r *Cache) Nodes() int {
return int(atomic.LoadInt32(&r.nodes))
}
// Size returns the sum of 'cache node' sizes in the map.
func (r *Cache) Size() int {
return int(atomic.LoadInt32(&r.size))
}
// Capacity returns cache capacity.
func (r *Cache) Capacity() int {
if r.cacher == nil {
return 0
}
return r.cacher.Capacity()
}
// SetCapacity sets cache capacity.
func (r *Cache) SetCapacity(capacity int) {
if r.cacher != nil {
r.cacher.SetCapacity(capacity)
}
}
// Get gets the 'cache node' with the given namespace and key.
// If the 'cache node' is not found and setFunc is not nil, Get will
// atomically create the 'cache node' by calling setFunc. Otherwise Get
// returns nil.
//
// The returned 'cache handle' should be released after use by calling the
// Release method.
func (r *Cache) Get(ns, key uint64, setFunc func() (size int, value Value)) *Handle {
r.mu.RLock()
defer r.mu.RUnlock()
if r.closed {
return nil
}
hash := murmur32(ns, key, 0xf00)
for {
h, b := r.getBucket(hash)
done, _, n := b.get(r, h, hash, ns, key, setFunc == nil)
if done {
if n != nil {
n.mu.Lock()
if n.value == nil {
if setFunc == nil {
n.mu.Unlock()
n.unref()
return nil
}
n.size, n.value = setFunc()
if n.value == nil {
n.size = 0
n.mu.Unlock()
n.unref()
return nil
}
atomic.AddInt32(&r.size, int32(n.size))
}
n.mu.Unlock()
if r.cacher != nil {
r.cacher.Promote(n)
}
return &Handle{unsafe.Pointer(n)}
}
break
}
}
return nil
}
func (h *fakeHandle) Release() {
if !atomic.CompareAndSwapUint32(&h.once, 0, 1) {
// Delete removes and bans the 'cache node' with the given namespace and key.
// A banned 'cache node' will never be inserted into the 'cache tree'. The ban
// only applies to that particular 'cache node', so when a 'cache node' is
// recreated it will not be banned.
//
// If onDel is not nil, then it will be executed if such a 'cache node'
// doesn't exist or once the 'cache node' is released.
//
// Delete returns true if such a 'cache node' exists.
func (r *Cache) Delete(ns, key uint64, onDel func()) bool {
r.mu.RLock()
defer r.mu.RUnlock()
if r.closed {
return false
}
hash := murmur32(ns, key, 0xf00)
for {
h, b := r.getBucket(hash)
done, _, n := b.get(r, h, hash, ns, key, true)
if done {
if n != nil {
if onDel != nil {
n.mu.Lock()
n.onDel = append(n.onDel, onDel)
n.mu.Unlock()
}
if r.cacher != nil {
r.cacher.Ban(n)
}
n.unref()
return true
}
break
}
}
if onDel != nil {
onDel()
}
return false
}
// Evict evicts the 'cache node' with the given namespace and key. This will
// simply call Cacher.Evict.
//
// Evict returns true if such a 'cache node' exists.
func (r *Cache) Evict(ns, key uint64) bool {
r.mu.RLock()
defer r.mu.RUnlock()
if r.closed {
return false
}
hash := murmur32(ns, key, 0xf00)
for {
h, b := r.getBucket(hash)
done, _, n := b.get(r, h, hash, ns, key, true)
if done {
if n != nil {
if r.cacher != nil {
r.cacher.Evict(n)
}
n.unref()
return true
}
break
}
}
return false
}
// EvictNS evicts 'cache node' with the given namespace. This will
// simply call Cacher.EvictNS.
func (r *Cache) EvictNS(ns uint64) {
r.mu.RLock()
defer r.mu.RUnlock()
if r.closed {
return
}
if h.fin != nil {
h.fin()
h.fin = nil
if r.cacher != nil {
r.cacher.EvictNS(ns)
}
}
// EvictAll evicts all 'cache node'. This will simply call Cacher.EvictAll.
func (r *Cache) EvictAll() {
r.mu.RLock()
defer r.mu.RUnlock()
if r.closed {
return
}
if r.cacher != nil {
r.cacher.EvictAll()
}
}
// Close closes the 'cache map' and releases all 'cache node'.
func (r *Cache) Close() error {
r.mu.Lock()
if !r.closed {
r.closed = true
if r.cacher != nil {
if err := r.cacher.Close(); err != nil {
return err
}
}
h := (*mNode)(r.mHead)
h.initBuckets()
for i := range h.buckets {
b := (*mBucket)(h.buckets[i])
for _, n := range b.node {
// Call releaser.
if n.value != nil {
if r, ok := n.value.(util.Releaser); ok {
r.Release()
}
n.value = nil
}
// Call OnDel.
for _, f := range n.onDel {
f()
}
}
}
}
r.mu.Unlock()
return nil
}
// Node is a 'cache node'.
type Node struct {
r *Cache
hash uint32
ns, key uint64
mu sync.Mutex
size int
value Value
ref int32
onDel []func()
CacheData unsafe.Pointer
}
// NS returns this 'cache node' namespace.
func (n *Node) NS() uint64 {
return n.ns
}
// Key returns this 'cache node' key.
func (n *Node) Key() uint64 {
return n.key
}
// Size returns this 'cache node' size.
func (n *Node) Size() int {
return n.size
}
// Value returns this 'cache node' value.
func (n *Node) Value() Value {
return n.value
}
// Ref returns this 'cache node' ref counter.
func (n *Node) Ref() int32 {
return atomic.LoadInt32(&n.ref)
}
// GetHandle returns a handle for this 'cache node'.
func (n *Node) GetHandle() *Handle {
if atomic.AddInt32(&n.ref, 1) <= 1 {
panic("BUG: Node.GetHandle on zero ref")
}
return &Handle{unsafe.Pointer(n)}
}
func (n *Node) unref() {
if atomic.AddInt32(&n.ref, -1) == 0 {
n.r.delete(n)
}
}
func (n *Node) unrefLocked() {
if atomic.AddInt32(&n.ref, -1) == 0 {
n.r.mu.RLock()
if !n.r.closed {
n.r.delete(n)
}
n.r.mu.RUnlock()
}
}
type Handle struct {
n unsafe.Pointer // *Node
}
func (h *Handle) Value() Value {
n := (*Node)(atomic.LoadPointer(&h.n))
if n != nil {
return n.value
}
return nil
}
func (h *Handle) Release() {
nPtr := atomic.LoadPointer(&h.n)
if nPtr != nil && atomic.CompareAndSwapPointer(&h.n, nPtr, nil) {
n := (*Node)(nPtr)
n.unrefLocked()
}
}
func murmur32(ns, key uint64, seed uint32) uint32 {
const (
m = uint32(0x5bd1e995)
r = 24
)
k1 := uint32(ns >> 32)
k2 := uint32(ns)
k3 := uint32(key >> 32)
k4 := uint32(key)
k1 *= m
k1 ^= k1 >> r
k1 *= m
k2 *= m
k2 ^= k2 >> r
k2 *= m
k3 *= m
k3 ^= k3 >> r
k3 *= m
k4 *= m
k4 ^= k4 >> r
k4 *= m
h := seed
h *= m
h ^= k1
h *= m
h ^= k2
h *= m
h ^= k3
h *= m
h ^= k4
h ^= h >> 13
h *= m
h ^= h >> 15
return h
}
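
A minimal usage sketch of the new map-style API above (test-style, in package cache): Get creates on miss via the set function, hands back a handle, and every handle must be released before the node can be deleted or evicted. The LRU capacity here is arbitrary:

package cache

import "fmt"

func ExampleCache() {
	c := NewCache(NewLRU(16)) // the Cacher is optional; pass nil for no eviction policy
	h := c.Get(0, 42, func() (int, Value) {
		return 1, "hello" // charge 1 unit, created only on the first miss
	})
	fmt.Println(h.Value().(string))
	h.Release() // release so the node can later be evicted or deleted
	// Output: hello
}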

View File

@@ -13,11 +13,26 @@ import (
"sync/atomic"
"testing"
"time"
"unsafe"
)
type int32o int32
func (o *int32o) acquire() {
if atomic.AddInt32((*int32)(o), 1) != 1 {
panic("BUG: invalid ref")
}
}
func (o *int32o) Release() {
if atomic.AddInt32((*int32)(o), -1) != 0 {
panic("BUG: invalid ref")
}
}
type releaserFunc struct {
fn func()
value interface{}
value Value
}
func (r releaserFunc) Release() {
@@ -26,8 +41,8 @@ func (r releaserFunc) Release() {
}
}
func set(ns Namespace, key uint64, value interface{}, charge int, relf func()) Handle {
return ns.Get(key, func() (int, interface{}) {
func set(c *Cache, ns, key uint64, value Value, charge int, relf func()) *Handle {
return c.Get(ns, key, func() (int, Value) {
if relf != nil {
return charge, releaserFunc{relf, value}
} else {
@@ -36,7 +51,246 @@ func set(ns Namespace, key uint64, value interface{}, charge int, relf func()) H
})
}
func TestCache_HitMiss(t *testing.T) {
func TestCacheMap(t *testing.T) {
runtime.GOMAXPROCS(runtime.NumCPU())
nsx := []struct {
nobjects, nhandles, concurrent, repeat int
}{
{10000, 400, 50, 3},
{100000, 1000, 100, 10},
}
var (
objects [][]int32o
handles [][]unsafe.Pointer
)
for _, x := range nsx {
objects = append(objects, make([]int32o, x.nobjects))
handles = append(handles, make([]unsafe.Pointer, x.nhandles))
}
c := NewCache(nil)
wg := new(sync.WaitGroup)
var done int32
for ns, x := range nsx {
for i := 0; i < x.concurrent; i++ {
wg.Add(1)
go func(ns, i, repeat int, objects []int32o, handles []unsafe.Pointer) {
defer wg.Done()
r := rand.New(rand.NewSource(time.Now().UnixNano()))
for j := len(objects) * repeat; j >= 0; j-- {
key := uint64(r.Intn(len(objects)))
h := c.Get(uint64(ns), key, func() (int, Value) {
o := &objects[key]
o.acquire()
return 1, o
})
if v := h.Value().(*int32o); v != &objects[key] {
t.Fatalf("#%d invalid value: want=%p got=%p", ns, &objects[key], v)
}
if objects[key] != 1 {
t.Fatalf("#%d invalid object %d: %d", ns, key, objects[key])
}
if !atomic.CompareAndSwapPointer(&handles[r.Intn(len(handles))], nil, unsafe.Pointer(h)) {
h.Release()
}
}
}(ns, i, x.repeat, objects[ns], handles[ns])
}
go func(handles []unsafe.Pointer) {
r := rand.New(rand.NewSource(time.Now().UnixNano()))
for atomic.LoadInt32(&done) == 0 {
i := r.Intn(len(handles))
h := (*Handle)(atomic.LoadPointer(&handles[i]))
if h != nil && atomic.CompareAndSwapPointer(&handles[i], unsafe.Pointer(h), nil) {
h.Release()
}
time.Sleep(time.Millisecond)
}
}(handles[ns])
}
go func() {
handles := make([]*Handle, 100000)
for atomic.LoadInt32(&done) == 0 {
for i := range handles {
handles[i] = c.Get(999999999, uint64(i), func() (int, Value) {
return 1, 1
})
}
for _, h := range handles {
h.Release()
}
}
}()
wg.Wait()
atomic.StoreInt32(&done, 1)
for _, handles0 := range handles {
for i := range handles0 {
h := (*Handle)(atomic.LoadPointer(&handles0[i]))
if h != nil && atomic.CompareAndSwapPointer(&handles0[i], unsafe.Pointer(h), nil) {
h.Release()
}
}
}
for ns, objects0 := range objects {
for i, o := range objects0 {
if o != 0 {
t.Fatalf("invalid object #%d.%d: ref=%d", ns, i, o)
}
}
}
}
func TestCacheMap_NodesAndSize(t *testing.T) {
c := NewCache(nil)
if c.Nodes() != 0 {
t.Errorf("invalid nodes counter: want=%d got=%d", 0, c.Nodes())
}
if c.Size() != 0 {
t.Errorf("invalid size counter: want=%d got=%d", 0, c.Size())
}
set(c, 0, 1, 1, 1, nil)
set(c, 0, 2, 2, 2, nil)
set(c, 1, 1, 3, 3, nil)
set(c, 2, 1, 4, 1, nil)
if c.Nodes() != 4 {
t.Errorf("invalid nodes counter: want=%d got=%d", 4, c.Nodes())
}
if c.Size() != 7 {
t.Errorf("invalid size counter: want=%d got=%d", 4, c.Size())
}
}
func TestLRUCache_Capacity(t *testing.T) {
c := NewCache(NewLRU(10))
if c.Capacity() != 10 {
t.Errorf("invalid capacity: want=%d got=%d", 10, c.Capacity())
}
set(c, 0, 1, 1, 1, nil).Release()
set(c, 0, 2, 2, 2, nil).Release()
set(c, 1, 1, 3, 3, nil).Release()
set(c, 2, 1, 4, 1, nil).Release()
set(c, 2, 2, 5, 1, nil).Release()
set(c, 2, 3, 6, 1, nil).Release()
set(c, 2, 4, 7, 1, nil).Release()
set(c, 2, 5, 8, 1, nil).Release()
if c.Nodes() != 7 {
t.Errorf("invalid nodes counter: want=%d got=%d", 7, c.Nodes())
}
if c.Size() != 10 {
t.Errorf("invalid size counter: want=%d got=%d", 10, c.Size())
}
c.SetCapacity(9)
if c.Capacity() != 9 {
t.Errorf("invalid capacity: want=%d got=%d", 9, c.Capacity())
}
if c.Nodes() != 6 {
t.Errorf("invalid nodes counter: want=%d got=%d", 6, c.Nodes())
}
if c.Size() != 8 {
t.Errorf("invalid size counter: want=%d got=%d", 8, c.Size())
}
}
func TestCacheMap_NilValue(t *testing.T) {
c := NewCache(NewLRU(10))
h := c.Get(0, 0, func() (size int, value Value) {
return 1, nil
})
if h != nil {
t.Error("cache handle is non-nil")
}
if c.Nodes() != 0 {
t.Errorf("invalid nodes counter: want=%d got=%d", 0, c.Nodes())
}
if c.Size() != 0 {
t.Errorf("invalid size counter: want=%d got=%d", 0, c.Size())
}
}
func TestLRUCache_GetLatency(t *testing.T) {
runtime.GOMAXPROCS(runtime.NumCPU())
const (
concurrentSet = 30
concurrentGet = 3
duration = 3 * time.Second
delay = 3 * time.Millisecond
maxkey = 100000
)
var (
set, getHit, getAll int32
getMaxLatency, getDuration int64
)
c := NewCache(NewLRU(5000))
wg := &sync.WaitGroup{}
until := time.Now().Add(duration)
for i := 0; i < concurrentSet; i++ {
wg.Add(1)
go func(i int) {
defer wg.Done()
r := rand.New(rand.NewSource(time.Now().UnixNano()))
for time.Now().Before(until) {
c.Get(0, uint64(r.Intn(maxkey)), func() (int, Value) {
time.Sleep(delay)
atomic.AddInt32(&set, 1)
return 1, 1
}).Release()
}
}(i)
}
for i := 0; i < concurrentGet; i++ {
wg.Add(1)
go func(i int) {
defer wg.Done()
r := rand.New(rand.NewSource(time.Now().UnixNano()))
for {
mark := time.Now()
if mark.Before(until) {
h := c.Get(0, uint64(r.Intn(maxkey)), nil)
latency := int64(time.Now().Sub(mark))
m := atomic.LoadInt64(&getMaxLatency)
if latency > m {
atomic.CompareAndSwapInt64(&getMaxLatency, m, latency)
}
atomic.AddInt64(&getDuration, latency)
if h != nil {
atomic.AddInt32(&getHit, 1)
h.Release()
}
atomic.AddInt32(&getAll, 1)
} else {
break
}
}
}(i)
}
wg.Wait()
getAvgLatency := time.Duration(getDuration) / time.Duration(getAll)
t.Logf("set=%d getHit=%d getAll=%d getMaxLatency=%v getAvgLatency=%v",
set, getHit, getAll, time.Duration(getMaxLatency), getAvgLatency)
if getAvgLatency > delay/3 {
t.Errorf("get avg latency > %v: got=%v", delay/3, getAvgLatency)
}
}
func TestLRUCache_HitMiss(t *testing.T) {
cases := []struct {
key uint64
value string
@@ -54,14 +308,13 @@ func TestCache_HitMiss(t *testing.T) {
}
setfin := 0
c := NewLRUCache(1000)
ns := c.GetNamespace(0)
c := NewCache(NewLRU(1000))
for i, x := range cases {
set(ns, x.key, x.value, len(x.value), func() {
set(c, 0, x.key, x.value, len(x.value), func() {
setfin++
}).Release()
for j, y := range cases {
h := ns.Get(y.key, nil)
h := c.Get(0, y.key, nil)
if j <= i {
// should hit
if h == nil {
@@ -85,7 +338,7 @@ func TestCache_HitMiss(t *testing.T) {
for i, x := range cases {
finalizerOk := false
ns.Delete(x.key, func(exist, pending bool) {
c.Delete(0, x.key, func() {
finalizerOk = true
})
@@ -94,7 +347,7 @@ func TestCache_HitMiss(t *testing.T) {
}
for j, y := range cases {
h := ns.Get(y.key, nil)
h := c.Get(0, y.key, nil)
if j > i {
// should hit
if h == nil {
@@ -122,20 +375,19 @@ func TestCache_HitMiss(t *testing.T) {
}
func TestLRUCache_Eviction(t *testing.T) {
c := NewLRUCache(12)
ns := c.GetNamespace(0)
o1 := set(ns, 1, 1, 1, nil)
set(ns, 2, 2, 1, nil).Release()
set(ns, 3, 3, 1, nil).Release()
set(ns, 4, 4, 1, nil).Release()
set(ns, 5, 5, 1, nil).Release()
if h := ns.Get(2, nil); h != nil { // 1,3,4,5,2
c := NewCache(NewLRU(12))
o1 := set(c, 0, 1, 1, 1, nil)
set(c, 0, 2, 2, 1, nil).Release()
set(c, 0, 3, 3, 1, nil).Release()
set(c, 0, 4, 4, 1, nil).Release()
set(c, 0, 5, 5, 1, nil).Release()
if h := c.Get(0, 2, nil); h != nil { // 1,3,4,5,2
h.Release()
}
set(ns, 9, 9, 10, nil).Release() // 5,2,9
set(c, 0, 9, 9, 10, nil).Release() // 5,2,9
for _, key := range []uint64{9, 2, 5, 1} {
h := ns.Get(key, nil)
h := c.Get(0, key, nil)
if h == nil {
t.Errorf("miss for key '%d'", key)
} else {
@@ -147,7 +399,7 @@ func TestLRUCache_Eviction(t *testing.T) {
}
o1.Release()
for _, key := range []uint64{1, 2, 5} {
h := ns.Get(key, nil)
h := c.Get(0, key, nil)
if h == nil {
t.Errorf("miss for key '%d'", key)
} else {
@@ -158,7 +410,7 @@ func TestLRUCache_Eviction(t *testing.T) {
}
}
for _, key := range []uint64{3, 4, 9} {
h := ns.Get(key, nil)
h := c.Get(0, key, nil)
if h != nil {
t.Errorf("hit for key '%d'", key)
if x := h.Value().(int); x != int(key) {
@@ -169,487 +421,134 @@ func TestLRUCache_Eviction(t *testing.T) {
}
}
func TestLRUCache_SetGet(t *testing.T) {
c := NewLRUCache(13)
ns := c.GetNamespace(0)
for i := 0; i < 200; i++ {
n := uint64(rand.Intn(99999) % 20)
set(ns, n, n, 1, nil).Release()
if h := ns.Get(n, nil); h != nil {
if h.Value() == nil {
t.Errorf("key '%d' contains nil value", n)
} else {
if x := h.Value().(uint64); x != n {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", n, n, x)
}
}
h.Release()
} else {
t.Errorf("key '%d' doesn't exist", n)
}
}
}
func TestLRUCache_Evict(t *testing.T) {
c := NewCache(NewLRU(6))
set(c, 0, 1, 1, 1, nil).Release()
set(c, 0, 2, 2, 1, nil).Release()
set(c, 1, 1, 4, 1, nil).Release()
set(c, 1, 2, 5, 1, nil).Release()
set(c, 2, 1, 6, 1, nil).Release()
set(c, 2, 2, 7, 1, nil).Release()
for ns := 0; ns < 3; ns++ {
for key := 1; key < 3; key++ {
if h := c.Get(uint64(ns), uint64(key), nil); h != nil {
h.Release()
} else {
t.Errorf("Cache.Get on #%d.%d return nil", ns, key)
}
}
}
if ok := c.Evict(0, 1); !ok {
t.Error("first Cache.Evict on #0.1 return false")
}
if ok := c.Evict(0, 1); ok {
t.Error("second Cache.Evict on #0.1 return true")
}
if h := c.Get(0, 1, nil); h != nil {
t.Errorf("Cache.Get on #0.1 return non-nil: %v", h.Value())
}
c.EvictNS(1)
if h := c.Get(1, 1, nil); h != nil {
t.Errorf("Cache.Get on #1.1 return non-nil: %v", h.Value())
}
if h := c.Get(1, 2, nil); h != nil {
t.Errorf("Cache.Get on #1.2 return non-nil: %v", h.Value())
}
c.EvictAll()
for ns := 0; ns < 3; ns++ {
for key := 1; key < 3; key++ {
if h := c.Get(uint64(ns), uint64(key), nil); h != nil {
t.Errorf("Cache.Get on #%d.%d return non-nil: %v", ns, key, h.Value())
}
}
}
}
func TestLRUCache_Delete(t *testing.T) {
delFuncCalled := 0
delFunc := func() {
delFuncCalled++
}
c := NewCache(NewLRU(2))
set(c, 0, 1, 1, 1, nil).Release()
set(c, 0, 2, 2, 1, nil).Release()
if ok := c.Delete(0, 1, delFunc); !ok {
t.Error("Cache.Delete on #1 return false")
}
if h := c.Get(0, 1, nil); h != nil {
t.Errorf("Cache.Get on #1 return non-nil: %v", h.Value())
}
if ok := c.Delete(0, 1, delFunc); ok {
t.Error("Cache.Delete on #1 return true")
}
h2 := c.Get(0, 2, nil)
if h2 == nil {
t.Error("Cache.Get on #2 return nil")
}
if ok := c.Delete(0, 2, delFunc); !ok {
t.Error("(1) Cache.Delete on #2 return false")
}
if ok := c.Delete(0, 2, delFunc); !ok {
t.Error("(2) Cache.Delete on #2 return false")
}
set(c, 0, 3, 3, 1, nil).Release()
set(c, 0, 4, 4, 1, nil).Release()
c.Get(0, 2, nil).Release()
for key := 2; key <= 4; key++ {
if h := c.Get(0, uint64(key), nil); h != nil {
h.Release()
} else {
t.Errorf("Cache.Get on #%d return nil", key)
}
}
h2.Release()
if h := c.Get(0, 2, nil); h != nil {
t.Errorf("Cache.Get on #2 return non-nil: %v", h.Value())
}
if delFuncCalled != 4 {
t.Errorf("delFunc isn't called 4 times: got=%d", delFuncCalled)
}
}
func TestLRUCache_Purge(t *testing.T) {
c := NewLRUCache(3)
ns1 := c.GetNamespace(0)
o1 := set(ns1, 1, 1, 1, nil)
o2 := set(ns1, 2, 2, 1, nil)
ns1.Purge(nil)
set(ns1, 3, 3, 1, nil).Release()
for _, key := range []uint64{1, 2, 3} {
h := ns1.Get(key, nil)
if h == nil {
t.Errorf("miss for key '%d'", key)
} else {
if x := h.Value().(int); x != int(key) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", key, key, x)
}
h.Release()
}
}
o1.Release()
o2.Release()
for _, key := range []uint64{1, 2} {
h := ns1.Get(key, nil)
if h != nil {
t.Errorf("hit for key '%d'", key)
if x := h.Value().(int); x != int(key) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", key, key, x)
}
h.Release()
}
}
}
type testingCacheObjectCounter struct {
created uint
released uint
}
func (c *testingCacheObjectCounter) createOne() {
c.created++
}
func (c *testingCacheObjectCounter) releaseOne() {
c.released++
}
type testingCacheObject struct {
t *testing.T
cnt *testingCacheObjectCounter
ns, key uint64
releaseCalled bool
}
func (x *testingCacheObject) Release() {
if !x.releaseCalled {
x.releaseCalled = true
x.cnt.releaseOne()
} else {
x.t.Errorf("duplicate setfin NS#%d KEY#%d", x.ns, x.key)
}
}
func TestLRUCache_ConcurrentSetGet(t *testing.T) {
runtime.GOMAXPROCS(runtime.NumCPU())
seed := time.Now().UnixNano()
t.Logf("seed=%d", seed)
const (
N = 2000000
M = 4000
C = 3
)
var set, get uint32
wg := &sync.WaitGroup{}
c := NewLRUCache(M / 4)
for ni := uint64(0); ni < C; ni++ {
r0 := rand.New(rand.NewSource(seed + int64(ni)))
r1 := rand.New(rand.NewSource(seed + int64(ni) + 1))
ns := c.GetNamespace(ni)
wg.Add(2)
go func(ns Namespace, r *rand.Rand) {
for i := 0; i < N; i++ {
x := uint64(r.Int63n(M))
o := ns.Get(x, func() (int, interface{}) {
atomic.AddUint32(&set, 1)
return 1, x
})
if v := o.Value().(uint64); v != x {
t.Errorf("#%d invalid value, got=%d", x, v)
}
o.Release()
}
wg.Done()
}(ns, r0)
go func(ns Namespace, r *rand.Rand) {
for i := 0; i < N; i++ {
x := uint64(r.Int63n(M))
o := ns.Get(x, nil)
if o != nil {
atomic.AddUint32(&get, 1)
if v := o.Value().(uint64); v != x {
t.Errorf("#%d invalid value, got=%d", x, v)
}
o.Release()
}
}
wg.Done()
}(ns, r1)
}
wg.Wait()
t.Logf("set=%d get=%d", set, get)
}
func TestLRUCache_Finalizer(t *testing.T) {
const (
capacity = 100
goroutines = 100
iterations = 10000
keymax = 8000
)
cnt := &testingCacheObjectCounter{}
c := NewLRUCache(capacity)
type instance struct {
seed int64
rnd *rand.Rand
nsid uint64
ns Namespace
effective int
handles []Handle
handlesMap map[uint64]int
delete bool
purge bool
zap bool
wantDel int
delfinCalled int
delfinCalledAll int
delfinCalledEff int
purgefinCalled int
}
instanceGet := func(p *instance, key uint64) {
h := p.ns.Get(key, func() (charge int, value interface{}) {
to := &testingCacheObject{
t: t, cnt: cnt,
ns: p.nsid,
key: key,
}
p.effective++
cnt.createOne()
return 1, releaserFunc{func() {
to.Release()
p.effective--
}, to}
})
p.handles = append(p.handles, h)
p.handlesMap[key] = p.handlesMap[key] + 1
}
instanceRelease := func(p *instance, i int) {
h := p.handles[i]
key := h.Value().(releaserFunc).value.(*testingCacheObject).key
if n := p.handlesMap[key]; n == 0 {
t.Fatal("key ref == 0")
} else if n > 1 {
p.handlesMap[key] = n - 1
} else {
delete(p.handlesMap, key)
}
h.Release()
p.handles = append(p.handles[:i], p.handles[i+1:]...)
p.handles[len(p.handles) : len(p.handles)+1][0] = nil
}
seed := time.Now().UnixNano()
t.Logf("seed=%d", seed)
instances := make([]*instance, goroutines)
for i := range instances {
p := &instance{}
p.handlesMap = make(map[uint64]int)
p.seed = seed + int64(i)
p.rnd = rand.New(rand.NewSource(p.seed))
p.nsid = uint64(i)
p.ns = c.GetNamespace(p.nsid)
p.delete = i%6 == 0
p.purge = i%8 == 0
p.zap = i%12 == 0 || i%3 == 0
instances[i] = p
}
runr := rand.New(rand.NewSource(seed - 1))
run := func(rnd *rand.Rand, x []*instance, init func(p *instance) bool, fn func(p *instance, i int) bool) {
var (
rx []*instance
rn []int
)
if init == nil {
rx = append([]*instance{}, x...)
rn = make([]int, len(x))
} else {
for _, p := range x {
if init(p) {
rx = append(rx, p)
rn = append(rn, 0)
}
}
}
for len(rx) > 0 {
i := rand.Intn(len(rx))
if fn(rx[i], rn[i]) {
rn[i]++
} else {
rx = append(rx[:i], rx[i+1:]...)
rn = append(rn[:i], rn[i+1:]...)
}
t.Errorf("Cache.Get on #%d return nil", key)
}
}
// Get and release.
run(runr, instances, nil, func(p *instance, i int) bool {
if i < iterations {
if len(p.handles) == 0 || p.rnd.Int()%2 == 0 {
instanceGet(p, uint64(p.rnd.Intn(keymax)))
} else {
instanceRelease(p, p.rnd.Intn(len(p.handles)))
}
return true
} else {
return false
}
})
if used, cap := c.Used(), c.Capacity(); used > cap {
t.Errorf("Used > capacity, used=%d cap=%d", used, cap)
}
}
// Check effective objects.
for i, p := range instances {
if int(p.effective) < len(p.handlesMap) {
t.Errorf("#%d effective objects < acquired handle, eo=%d ah=%d", i, p.effective, len(p.handlesMap))
}
}
if want := int(cnt.created - cnt.released); c.Size() != want {
t.Errorf("Invalid cache size, want=%d got=%d", want, c.Size())
}
// First delete.
run(runr, instances, func(p *instance) bool {
p.wantDel = p.effective
return p.delete
}, func(p *instance, i int) bool {
key := uint64(i)
if key < keymax {
_, wantExist := p.handlesMap[key]
gotExist := p.ns.Delete(key, func(exist, pending bool) {
p.delfinCalledAll++
if exist {
p.delfinCalledEff++
}
})
if !gotExist && wantExist {
t.Errorf("delete on NS#%d KEY#%d not found", p.nsid, key)
}
return true
} else {
return false
}
})
// Second delete.
run(runr, instances, func(p *instance) bool {
p.delfinCalled = 0
return p.delete
}, func(p *instance, i int) bool {
key := uint64(i)
if key < keymax {
gotExist := p.ns.Delete(key, func(exist, pending bool) {
if exist && !pending {
t.Errorf("delete fin on NS#%d KEY#%d exist and not pending for deletion", p.nsid, key)
}
p.delfinCalled++
})
if gotExist {
t.Errorf("delete on NS#%d KEY#%d found", p.nsid, key)
}
return true
} else {
if p.delfinCalled != keymax {
t.Errorf("(2) NS#%d not all delete fin called, diff=%d", p.nsid, keymax-p.delfinCalled)
}
return false
}
})
// Purge.
run(runr, instances, func(p *instance) bool {
return p.purge
}, func(p *instance, i int) bool {
p.ns.Purge(func(ns, key uint64) {
p.purgefinCalled++
})
return false
})
if want := int(cnt.created - cnt.released); c.Size() != want {
t.Errorf("Invalid cache size, want=%d got=%d", want, c.Size())
}
// Release.
run(runr, instances, func(p *instance) bool {
return !p.zap
}, func(p *instance, i int) bool {
if len(p.handles) > 0 {
instanceRelease(p, len(p.handles)-1)
return true
} else {
return false
}
})
if want := int(cnt.created - cnt.released); c.Size() != want {
t.Errorf("Invalid cache size, want=%d got=%d", want, c.Size())
}
// Zap.
run(runr, instances, func(p *instance) bool {
return p.zap
}, func(p *instance, i int) bool {
p.ns.Zap()
p.handles = nil
p.handlesMap = nil
return false
})
if want := int(cnt.created - cnt.released); c.Size() != want {
t.Errorf("Invalid cache size, want=%d got=%d", want, c.Size())
}
if notrel, used := int(cnt.created-cnt.released), c.Used(); notrel != used {
t.Errorf("Invalid used value, want=%d got=%d", notrel, used)
}
c.Purge(nil)
for _, p := range instances {
if p.delete {
if p.delfinCalledAll != keymax {
t.Errorf("#%d not all delete fin called, purge=%v zap=%v diff=%d", p.nsid, p.purge, p.zap, keymax-p.delfinCalledAll)
}
if p.delfinCalledEff != p.wantDel {
t.Errorf("#%d not all effective delete fin called, diff=%d", p.nsid, p.wantDel-p.delfinCalledEff)
}
if p.purge && p.purgefinCalled > 0 {
t.Errorf("#%d some purge fin called, delete=%v zap=%v n=%d", p.nsid, p.delete, p.zap, p.purgefinCalled)
}
} else {
if p.purge {
if p.purgefinCalled != p.wantDel {
t.Errorf("#%d not all purge fin called, delete=%v zap=%v diff=%d", p.nsid, p.delete, p.zap, p.wantDel-p.purgefinCalled)
}
}
}
}
if cnt.created != cnt.released {
t.Errorf("Some cache object weren't released, created=%d released=%d", cnt.created, cnt.released)
if delFuncCalled != 4 {
t.Errorf("delFunc isn't called 4 times: got=%d", delFuncCalled)
}
}
func BenchmarkLRUCache_Set(b *testing.B) {
c := NewLRUCache(0)
ns := c.GetNamespace(0)
b.ResetTimer()
for i := uint64(0); i < uint64(b.N); i++ {
set(ns, i, "", 1, nil)
}
}
func BenchmarkLRUCache_Get(b *testing.B) {
c := NewLRUCache(0)
ns := c.GetNamespace(0)
b.ResetTimer()
for i := uint64(0); i < uint64(b.N); i++ {
set(ns, i, "", 1, nil)
}
b.ResetTimer()
for i := uint64(0); i < uint64(b.N); i++ {
ns.Get(i, nil)
}
}
func BenchmarkLRUCache_Get2(b *testing.B) {
c := NewLRUCache(0)
ns := c.GetNamespace(0)
b.ResetTimer()
for i := uint64(0); i < uint64(b.N); i++ {
set(ns, i, "", 1, nil)
}
b.ResetTimer()
for i := uint64(0); i < uint64(b.N); i++ {
ns.Get(i, func() (charge int, value interface{}) {
return 0, nil
})
}
}
func BenchmarkLRUCache_Release(b *testing.B) {
c := NewLRUCache(0)
ns := c.GetNamespace(0)
handles := make([]Handle, b.N)
for i := uint64(0); i < uint64(b.N); i++ {
handles[i] = set(ns, i, "", 1, nil)
}
b.ResetTimer()
for _, h := range handles {
h.Release()
}
}
func BenchmarkLRUCache_SetRelease(b *testing.B) {
capacity := b.N / 100
if capacity <= 0 {
capacity = 10
}
c := NewLRUCache(capacity)
ns := c.GetNamespace(0)
b.ResetTimer()
for i := uint64(0); i < uint64(b.N); i++ {
set(ns, i, "", 1, nil).Release()
}
}
func BenchmarkLRUCache_SetReleaseTwice(b *testing.B) {
capacity := b.N / 100
if capacity <= 0 {
capacity = 10
}
c := NewLRUCache(capacity)
ns := c.GetNamespace(0)
b.ResetTimer()
na := b.N / 2
nb := b.N - na
for i := uint64(0); i < uint64(na); i++ {
set(ns, i, "", 1, nil).Release()
}
for i := uint64(0); i < uint64(nb); i++ {
set(ns, i, "", 1, nil).Release()
}
}
func TestLRUCache_Close(t *testing.T) {
relFuncCalled := 0
relFunc := func() {
relFuncCalled++
}
delFuncCalled := 0
delFunc := func() {
delFuncCalled++
}
c := NewCache(NewLRU(2))
set(c, 0, 1, 1, 1, relFunc).Release()
set(c, 0, 2, 2, 1, relFunc).Release()
h3 := set(c, 0, 3, 3, 1, relFunc)
if h3 == nil {
t.Error("Cache.Get on #3 return nil")
}
if ok := c.Delete(0, 3, delFunc); !ok {
t.Error("Cache.Delete on #3 return false")
}
c.Close()
if relFuncCalled != 3 {
t.Errorf("relFunc isn't called 3 times: got=%d", relFuncCalled)
}
if delFuncCalled != 1 {
t.Errorf("delFunc isn't called 1 times: got=%d", delFuncCalled)
}
}
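Taken together, the rewritten tests document the new cache surface: a single *Cache front-end over a pluggable Cacher, with entries addressed by a (namespace, key) pair and guarded by explicit handles. A minimal usage sketch based only on the calls exercised above:

package main

import (
    "fmt"

    "github.com/syndtr/goleveldb/leveldb/cache"
)

func main() {
    c := cache.NewCache(cache.NewLRU(128)) // LRU cacher, capacity 128

    // Get returns an existing handle or invokes the setter to populate
    // the entry; returning a nil value aborts the insert (see
    // TestCacheMap_NilValue above).
    h := c.Get(0, 42, func() (size int, value cache.Value) {
        return 1, "forty-two"
    })
    if h != nil {
        fmt.Println(h.Value()) // "forty-two"
        h.Release()            // every handle must be released
    }

    c.Delete(0, 42, func() {}) // fin fires once the entry is gone for good
    c.EvictNS(0)               // drop everything in namespace 0
    c.Close()
}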

View File

@@ -0,0 +1,195 @@
// Copyright (c) 2012, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package cache
import (
"sync"
"unsafe"
)
type lruNode struct {
n *Node
h *Handle
ban bool
next, prev *lruNode
}
func (n *lruNode) insert(at *lruNode) {
x := at.next
at.next = n
n.prev = at
n.next = x
x.prev = n
}
func (n *lruNode) remove() {
if n.prev != nil {
n.prev.next = n.next
n.next.prev = n.prev
n.prev = nil
n.next = nil
} else {
panic("BUG: removing removed node")
}
}
type lru struct {
mu sync.Mutex
capacity int
used int
recent lruNode
}
func (r *lru) reset() {
r.recent.next = &r.recent
r.recent.prev = &r.recent
r.used = 0
}
func (r *lru) Capacity() int {
r.mu.Lock()
defer r.mu.Unlock()
return r.capacity
}
func (r *lru) SetCapacity(capacity int) {
var evicted []*lruNode
r.mu.Lock()
r.capacity = capacity
for r.used > r.capacity {
rn := r.recent.prev
if rn == nil {
panic("BUG: invalid LRU used or capacity counter")
}
rn.remove()
rn.n.CacheData = nil
r.used -= rn.n.Size()
evicted = append(evicted, rn)
}
r.mu.Unlock()
for _, rn := range evicted {
rn.h.Release()
}
}
func (r *lru) Promote(n *Node) {
var evicted []*lruNode
r.mu.Lock()
if n.CacheData == nil {
if n.Size() <= r.capacity {
rn := &lruNode{n: n, h: n.GetHandle()}
rn.insert(&r.recent)
n.CacheData = unsafe.Pointer(rn)
r.used += n.Size()
for r.used > r.capacity {
rn := r.recent.prev
if rn == nil {
panic("BUG: invalid LRU used or capacity counter")
}
rn.remove()
rn.n.CacheData = nil
r.used -= rn.n.Size()
evicted = append(evicted, rn)
}
}
} else {
rn := (*lruNode)(n.CacheData)
if !rn.ban {
rn.remove()
rn.insert(&r.recent)
}
}
r.mu.Unlock()
for _, rn := range evicted {
rn.h.Release()
}
}
func (r *lru) Ban(n *Node) {
r.mu.Lock()
if n.CacheData == nil {
n.CacheData = unsafe.Pointer(&lruNode{n: n, ban: true})
} else {
rn := (*lruNode)(n.CacheData)
if !rn.ban {
rn.remove()
rn.ban = true
r.used -= rn.n.Size()
r.mu.Unlock()
rn.h.Release()
rn.h = nil
return
}
}
r.mu.Unlock()
}
func (r *lru) Evict(n *Node) {
r.mu.Lock()
rn := (*lruNode)(n.CacheData)
if rn == nil || rn.ban {
r.mu.Unlock()
return
}
n.CacheData = nil
r.mu.Unlock()
rn.h.Release()
}
func (r *lru) EvictNS(ns uint64) {
var evicted []*lruNode
r.mu.Lock()
for e := r.recent.prev; e != &r.recent; {
rn := e
e = e.prev
if rn.n.NS() == ns {
rn.remove()
rn.n.CacheData = nil
r.used -= rn.n.Size()
evicted = append(evicted, rn)
}
}
r.mu.Unlock()
for _, rn := range evicted {
rn.h.Release()
}
}
func (r *lru) EvictAll() {
r.mu.Lock()
back := r.recent.prev
for rn := back; rn != &r.recent; rn = rn.prev {
rn.n.CacheData = nil
}
r.reset()
r.mu.Unlock()
for rn := back; rn != &r.recent; rn = rn.prev {
rn.h.Release()
}
}
func (r *lru) Close() error {
return nil
}
// NewLRU creates a new LRU-cache.
func NewLRU(capacity int) Cacher {
r := &lru{capacity: capacity}
r.reset()
return r
}
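The lru type above keeps no table of its own: per-entry state lives in the node's CacheData pointer, and each resident entry is pinned by a handle obtained via GetHandle, so eviction is just list removal plus a Release. Judging from the methods lru implements, the Cacher contract it satisfies is presumably the following (the canonical declaration lives in cache.go, which is not part of this diff, and may differ in detail):

// Cacher as inferred from the lru implementation above.
type Cacher interface {
    Capacity() int
    SetCapacity(capacity int)
    Promote(n *Node)   // record a hit; may trigger eviction of others
    Ban(n *Node)       // keep the node out of the cache permanently
    Evict(n *Node)     // drop a single node
    EvictNS(ns uint64) // drop a whole namespace
    EvictAll()         // drop everything
    Close() error
}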

View File

@@ -1,622 +0,0 @@
// Copyright (c) 2012, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package cache
import (
"sync"
"sync/atomic"
"github.com/syndtr/goleveldb/leveldb/util"
)
// The LLRB implementation was taken from https://github.com/petar/GoLLRB,
// which contains the following header:
//
// Copyright 2010 Petar Maymounkov. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// lruCache represent a LRU cache state.
type lruCache struct {
mu sync.Mutex
recent lruNode
table map[uint64]*lruNs
capacity int
used, size, alive int
}
// NewLRUCache creates a new initialized LRU cache with the given capacity.
func NewLRUCache(capacity int) Cache {
c := &lruCache{
table: make(map[uint64]*lruNs),
capacity: capacity,
}
c.recent.rNext = &c.recent
c.recent.rPrev = &c.recent
return c
}
func (c *lruCache) Capacity() int {
c.mu.Lock()
defer c.mu.Unlock()
return c.capacity
}
func (c *lruCache) Used() int {
c.mu.Lock()
defer c.mu.Unlock()
return c.used
}
func (c *lruCache) Size() int {
c.mu.Lock()
defer c.mu.Unlock()
return c.size
}
func (c *lruCache) NumObjects() int {
c.mu.Lock()
defer c.mu.Unlock()
return c.alive
}
// SetCapacity set cache capacity.
func (c *lruCache) SetCapacity(capacity int) {
c.mu.Lock()
c.capacity = capacity
c.evict()
c.mu.Unlock()
}
// GetNamespace return namespace object for given id.
func (c *lruCache) GetNamespace(id uint64) Namespace {
c.mu.Lock()
defer c.mu.Unlock()
if ns, ok := c.table[id]; ok {
return ns
}
ns := &lruNs{lru: c, id: id}
c.table[id] = ns
return ns
}
func (c *lruCache) ZapNamespace(id uint64) {
c.mu.Lock()
if ns, exist := c.table[id]; exist {
ns.zapNB()
delete(c.table, id)
}
c.mu.Unlock()
}
func (c *lruCache) PurgeNamespace(id uint64, fin PurgeFin) {
c.mu.Lock()
if ns, exist := c.table[id]; exist {
ns.purgeNB(fin)
}
c.mu.Unlock()
}
// Purge purge entire cache.
func (c *lruCache) Purge(fin PurgeFin) {
c.mu.Lock()
for _, ns := range c.table {
ns.purgeNB(fin)
}
c.mu.Unlock()
}
func (c *lruCache) Zap() {
c.mu.Lock()
for _, ns := range c.table {
ns.zapNB()
}
c.table = make(map[uint64]*lruNs)
c.mu.Unlock()
}
func (c *lruCache) evict() {
top := &c.recent
for n := c.recent.rPrev; c.used > c.capacity && n != top; {
if n.state != nodeEffective {
panic("evicting non effective node")
}
n.state = nodeEvicted
n.rRemove()
n.derefNB()
c.used -= n.charge
n = c.recent.rPrev
}
}
type lruNs struct {
lru *lruCache
id uint64
rbRoot *lruNode
state nsState
}
func (ns *lruNs) rbGetOrCreateNode(h *lruNode, key uint64) (hn, n *lruNode) {
if h == nil {
n = &lruNode{ns: ns, key: key}
return n, n
}
if key < h.key {
hn, n = ns.rbGetOrCreateNode(h.rbLeft, key)
if hn != nil {
h.rbLeft = hn
} else {
return nil, n
}
} else if key > h.key {
hn, n = ns.rbGetOrCreateNode(h.rbRight, key)
if hn != nil {
h.rbRight = hn
} else {
return nil, n
}
} else {
return nil, h
}
if rbIsRed(h.rbRight) && !rbIsRed(h.rbLeft) {
h = rbRotLeft(h)
}
if rbIsRed(h.rbLeft) && rbIsRed(h.rbLeft.rbLeft) {
h = rbRotRight(h)
}
if rbIsRed(h.rbLeft) && rbIsRed(h.rbRight) {
rbFlip(h)
}
return h, n
}
func (ns *lruNs) getOrCreateNode(key uint64) *lruNode {
hn, n := ns.rbGetOrCreateNode(ns.rbRoot, key)
if hn != nil {
ns.rbRoot = hn
ns.rbRoot.rbBlack = true
}
return n
}
func (ns *lruNs) rbGetNode(key uint64) *lruNode {
h := ns.rbRoot
for h != nil {
switch {
case key < h.key:
h = h.rbLeft
case key > h.key:
h = h.rbRight
default:
return h
}
}
return nil
}
func (ns *lruNs) getNode(key uint64) *lruNode {
return ns.rbGetNode(key)
}
func (ns *lruNs) rbDeleteNode(h *lruNode, key uint64) *lruNode {
if h == nil {
return nil
}
if key < h.key {
if h.rbLeft == nil { // key not present. Nothing to delete
return h
}
if !rbIsRed(h.rbLeft) && !rbIsRed(h.rbLeft.rbLeft) {
h = rbMoveLeft(h)
}
h.rbLeft = ns.rbDeleteNode(h.rbLeft, key)
} else {
if rbIsRed(h.rbLeft) {
h = rbRotRight(h)
}
// If @key equals @h.key and no right children at @h
if h.key == key && h.rbRight == nil {
return nil
}
if h.rbRight != nil && !rbIsRed(h.rbRight) && !rbIsRed(h.rbRight.rbLeft) {
h = rbMoveRight(h)
}
// If @key equals @h.key, and (from above) 'h.Right != nil'
if h.key == key {
var x *lruNode
h.rbRight, x = rbDeleteMin(h.rbRight)
if x == nil {
panic("logic")
}
x.rbLeft, h.rbLeft = h.rbLeft, nil
x.rbRight, h.rbRight = h.rbRight, nil
x.rbBlack = h.rbBlack
h = x
} else { // Else, @key is bigger than @h.key
h.rbRight = ns.rbDeleteNode(h.rbRight, key)
}
}
return rbFixup(h)
}
func (ns *lruNs) deleteNode(key uint64) {
ns.rbRoot = ns.rbDeleteNode(ns.rbRoot, key)
if ns.rbRoot != nil {
ns.rbRoot.rbBlack = true
}
}
func (ns *lruNs) rbIterateNodes(h *lruNode, pivot uint64, iter func(n *lruNode) bool) bool {
if h == nil {
return true
}
if h.key >= pivot {
if !ns.rbIterateNodes(h.rbLeft, pivot, iter) {
return false
}
if !iter(h) {
return false
}
}
return ns.rbIterateNodes(h.rbRight, pivot, iter)
}
func (ns *lruNs) iterateNodes(iter func(n *lruNode) bool) {
ns.rbIterateNodes(ns.rbRoot, 0, iter)
}
func (ns *lruNs) Get(key uint64, setf SetFunc) Handle {
ns.lru.mu.Lock()
defer ns.lru.mu.Unlock()
if ns.state != nsEffective {
return nil
}
var n *lruNode
if setf == nil {
n = ns.getNode(key)
if n == nil {
return nil
}
} else {
n = ns.getOrCreateNode(key)
}
switch n.state {
case nodeZero:
charge, value := setf()
if value == nil {
ns.deleteNode(key)
return nil
}
if charge < 0 {
charge = 0
}
n.value = value
n.charge = charge
n.state = nodeEvicted
ns.lru.size += charge
ns.lru.alive++
fallthrough
case nodeEvicted:
if n.charge == 0 {
break
}
// Insert to recent list.
n.state = nodeEffective
n.ref++
ns.lru.used += n.charge
ns.lru.evict()
fallthrough
case nodeEffective:
// Bump to front.
n.rRemove()
n.rInsert(&ns.lru.recent)
case nodeDeleted:
// Do nothing.
default:
panic("invalid state")
}
n.ref++
return &lruHandle{node: n}
}
func (ns *lruNs) Delete(key uint64, fin DelFin) bool {
ns.lru.mu.Lock()
defer ns.lru.mu.Unlock()
if ns.state != nsEffective {
if fin != nil {
fin(false, false)
}
return false
}
n := ns.getNode(key)
if n == nil {
if fin != nil {
fin(false, false)
}
return false
}
switch n.state {
case nodeEffective:
ns.lru.used -= n.charge
n.state = nodeDeleted
n.delfin = fin
n.rRemove()
n.derefNB()
case nodeEvicted:
n.state = nodeDeleted
n.delfin = fin
case nodeDeleted:
if fin != nil {
fin(true, true)
}
return false
default:
panic("invalid state")
}
return true
}
func (ns *lruNs) purgeNB(fin PurgeFin) {
if ns.state == nsEffective {
var nodes []*lruNode
ns.iterateNodes(func(n *lruNode) bool {
nodes = append(nodes, n)
return true
})
for _, n := range nodes {
switch n.state {
case nodeEffective:
ns.lru.used -= n.charge
n.state = nodeDeleted
n.purgefin = fin
n.rRemove()
n.derefNB()
case nodeEvicted:
n.state = nodeDeleted
n.purgefin = fin
case nodeDeleted:
default:
panic("invalid state")
}
}
}
}
func (ns *lruNs) Purge(fin PurgeFin) {
ns.lru.mu.Lock()
ns.purgeNB(fin)
ns.lru.mu.Unlock()
}
func (ns *lruNs) zapNB() {
if ns.state == nsEffective {
ns.state = nsZapped
ns.iterateNodes(func(n *lruNode) bool {
if n.state == nodeEffective {
ns.lru.used -= n.charge
n.rRemove()
}
ns.lru.size -= n.charge
n.state = nodeDeleted
n.fin()
return true
})
ns.rbRoot = nil
}
}
func (ns *lruNs) Zap() {
ns.lru.mu.Lock()
ns.zapNB()
delete(ns.lru.table, ns.id)
ns.lru.mu.Unlock()
}
type lruNode struct {
ns *lruNs
rNext, rPrev *lruNode
rbLeft, rbRight *lruNode
rbBlack bool
key uint64
value interface{}
charge int
ref int
state nodeState
delfin DelFin
purgefin PurgeFin
}
func (n *lruNode) rInsert(at *lruNode) {
x := at.rNext
at.rNext = n
n.rPrev = at
n.rNext = x
x.rPrev = n
}
func (n *lruNode) rRemove() bool {
if n.rPrev == nil {
return false
}
n.rPrev.rNext = n.rNext
n.rNext.rPrev = n.rPrev
n.rPrev = nil
n.rNext = nil
return true
}
func (n *lruNode) fin() {
if r, ok := n.value.(util.Releaser); ok {
r.Release()
}
if n.purgefin != nil {
if n.delfin != nil {
panic("conflicting delete and purge fin")
}
n.purgefin(n.ns.id, n.key)
n.purgefin = nil
} else if n.delfin != nil {
n.delfin(true, false)
n.delfin = nil
}
}
func (n *lruNode) derefNB() {
n.ref--
if n.ref == 0 {
if n.ns.state == nsEffective {
// Remove element.
n.ns.deleteNode(n.key)
n.ns.lru.size -= n.charge
n.ns.lru.alive--
n.fin()
}
n.value = nil
} else if n.ref < 0 {
panic("leveldb/cache: lruCache: negative node reference")
}
}
func (n *lruNode) deref() {
n.ns.lru.mu.Lock()
n.derefNB()
n.ns.lru.mu.Unlock()
}
type lruHandle struct {
node *lruNode
once uint32
}
func (h *lruHandle) Value() interface{} {
if atomic.LoadUint32(&h.once) == 0 {
return h.node.value
}
return nil
}
func (h *lruHandle) Release() {
if !atomic.CompareAndSwapUint32(&h.once, 0, 1) {
return
}
h.node.deref()
h.node = nil
}
func rbIsRed(h *lruNode) bool {
if h == nil {
return false
}
return !h.rbBlack
}
func rbRotLeft(h *lruNode) *lruNode {
x := h.rbRight
if x.rbBlack {
panic("rotating a black link")
}
h.rbRight = x.rbLeft
x.rbLeft = h
x.rbBlack = h.rbBlack
h.rbBlack = false
return x
}
func rbRotRight(h *lruNode) *lruNode {
x := h.rbLeft
if x.rbBlack {
panic("rotating a black link")
}
h.rbLeft = x.rbRight
x.rbRight = h
x.rbBlack = h.rbBlack
h.rbBlack = false
return x
}
func rbFlip(h *lruNode) {
h.rbBlack = !h.rbBlack
h.rbLeft.rbBlack = !h.rbLeft.rbBlack
h.rbRight.rbBlack = !h.rbRight.rbBlack
}
func rbMoveLeft(h *lruNode) *lruNode {
rbFlip(h)
if rbIsRed(h.rbRight.rbLeft) {
h.rbRight = rbRotRight(h.rbRight)
h = rbRotLeft(h)
rbFlip(h)
}
return h
}
func rbMoveRight(h *lruNode) *lruNode {
rbFlip(h)
if rbIsRed(h.rbLeft.rbLeft) {
h = rbRotRight(h)
rbFlip(h)
}
return h
}
func rbFixup(h *lruNode) *lruNode {
if rbIsRed(h.rbRight) {
h = rbRotLeft(h)
}
if rbIsRed(h.rbLeft) && rbIsRed(h.rbLeft.rbLeft) {
h = rbRotRight(h)
}
if rbIsRed(h.rbLeft) && rbIsRed(h.rbRight) {
rbFlip(h)
}
return h
}
func rbDeleteMin(h *lruNode) (hn, n *lruNode) {
if h == nil {
return nil, nil
}
if h.rbLeft == nil {
return nil, h
}
if !rbIsRed(h.rbLeft) && !rbIsRed(h.rbLeft.rbLeft) {
h = rbMoveLeft(h)
}
h.rbLeft, n = rbDeleteMin(h.rbLeft)
return rbFixup(h), n
}

View File

@@ -9,14 +9,12 @@ package leveldb
import (
"bytes"
"fmt"
"io"
"math/rand"
"testing"
"github.com/syndtr/goleveldb/leveldb/cache"
"github.com/syndtr/goleveldb/leveldb/filter"
"github.com/syndtr/goleveldb/leveldb/opt"
"github.com/syndtr/goleveldb/leveldb/storage"
"io"
"math/rand"
"testing"
)
const ctValSize = 1000
@@ -33,8 +31,8 @@ func newDbCorruptHarnessWopt(t *testing.T, o *opt.Options) *dbCorruptHarness {
func newDbCorruptHarness(t *testing.T) *dbCorruptHarness {
return newDbCorruptHarnessWopt(t, &opt.Options{
BlockCache: cache.NewLRUCache(100),
Strict: opt.StrictJournalChecksum,
BlockCacheCapacity: 100,
Strict: opt.StrictJournalChecksum,
})
}
@@ -269,9 +267,9 @@ func TestCorruptDB_TableIndex(t *testing.T) {
func TestCorruptDB_MissingManifest(t *testing.T) {
rnd := rand.New(rand.NewSource(0x0badda7a))
h := newDbCorruptHarnessWopt(t, &opt.Options{
BlockCache: cache.NewLRUCache(100),
Strict: opt.StrictJournalChecksum,
WriteBuffer: 1000 * 60,
BlockCacheCapacity: 100,
Strict: opt.StrictJournalChecksum,
WriteBuffer: 1000 * 60,
})
h.build(1000)
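The harness changes show the option migration in miniature: instead of handing the DB a ready-made cache object, callers now state a capacity and let the session build and own the cache. Side by side (the two literals compile against the old and new API respectively; the values are the ones used above):

// Before: caller constructs and owns the block cache.
o := &opt.Options{
    BlockCache: cache.NewLRUCache(100),
    Strict:     opt.StrictJournalChecksum,
}

// After: caller declares a capacity; the session builds the cache
// and closes it on shutdown.
o := &opt.Options{
    BlockCacheCapacity: 100,
    Strict:             opt.StrictJournalChecksum,
}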

View File

@@ -347,12 +347,14 @@ func recoverTable(s *session, o *opt.Options) error {
return err
}
iter := tr.NewIterator(nil, nil)
iter.(iterator.ErrorCallbackSetter).SetErrorCallback(func(err error) {
if errors.IsCorrupted(err) {
s.logf("table@recovery block corruption @%d %q", file.Num(), err)
tcorruptedBlock++
}
})
if itererr, ok := iter.(iterator.ErrorCallbackSetter); ok {
itererr.SetErrorCallback(func(err error) {
if errors.IsCorrupted(err) {
s.logf("table@recovery block corruption @%d %q", file.Num(), err)
tcorruptedBlock++
}
})
}
// Scan the table.
for iter.Next() {
@@ -823,8 +825,8 @@ func (db *DB) GetProperty(name string) (value string, err error) {
case p == "blockpool":
value = fmt.Sprintf("%v", db.s.tops.bpool)
case p == "cachedblock":
if bc := db.s.o.GetBlockCache(); bc != nil {
value = fmt.Sprintf("%d", bc.Size())
if db.s.tops.bcache != nil {
value = fmt.Sprintf("%d", db.s.tops.bcache.Size())
} else {
value = "<nil>"
}
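The recovery fix replaces an unconditional interface assertion with the comma-ok form, so iterators that do not expose an error callback no longer panic during table recovery. The pattern in isolation (types here are illustrative, not goleveldb's):

package main

import "fmt"

type ErrorCallbackSetter interface {
    SetErrorCallback(func(err error))
}

type plainIter struct{} // implements no SetErrorCallback

func main() {
    var iter interface{} = plainIter{}

    // iter.(ErrorCallbackSetter) without the ok flag would panic here;
    // the guarded form simply skips the optional capability.
    if ecs, ok := iter.(ErrorCallbackSetter); ok {
        ecs.SetErrorCallback(func(err error) { fmt.Println("corruption:", err) })
    }
}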

View File

@@ -8,6 +8,7 @@ package leveldb
import (
"container/list"
"fmt"
"runtime"
"sync"
"sync/atomic"
@@ -89,6 +90,10 @@ func (db *DB) newSnapshot() *Snapshot {
return snap
}
func (snap *Snapshot) String() string {
return fmt.Sprintf("leveldb.Snapshot{%d}", snap.elem.seq)
}
// Get gets the value for the given key. It returns ErrNotFound if
// the DB does not contains the key.
//

View File

@@ -1271,7 +1271,7 @@ func TestDB_DeletionMarkers2(t *testing.T) {
}
func TestDB_CompactionTableOpenError(t *testing.T) {
h := newDbHarnessWopt(t, &opt.Options{CachedOpenFiles: -1})
h := newDbHarnessWopt(t, &opt.Options{OpenFilesCacheCapacity: -1})
defer h.close()
im := 10
@@ -1629,8 +1629,8 @@ func TestDB_ManualCompaction(t *testing.T) {
func TestDB_BloomFilter(t *testing.T) {
h := newDbHarnessWopt(t, &opt.Options{
BlockCache: opt.NoCache,
Filter: filter.NewBloomFilter(10),
DisableBlockCache: true,
Filter: filter.NewBloomFilter(10),
})
defer h.close()
@@ -2066,8 +2066,8 @@ func TestDB_GetProperties(t *testing.T) {
func TestDB_GoleveldbIssue72and83(t *testing.T) {
h := newDbHarnessWopt(t, &opt.Options{
WriteBuffer: 1 * opt.MiB,
CachedOpenFiles: 3,
WriteBuffer: 1 * opt.MiB,
OpenFilesCacheCapacity: 3,
})
defer h.close()
@@ -2200,7 +2200,7 @@ func TestDB_GoleveldbIssue72and83(t *testing.T) {
func TestDB_TransientError(t *testing.T) {
h := newDbHarnessWopt(t, &opt.Options{
WriteBuffer: 128 * opt.KiB,
CachedOpenFiles: 3,
OpenFilesCacheCapacity: 3,
DisableCompactionBackoff: true,
})
defer h.close()
@@ -2410,7 +2410,7 @@ func TestDB_TableCompactionBuilder(t *testing.T) {
CompactionTableSize: 43 * opt.KiB,
CompactionExpandLimitFactor: 1,
CompactionGPOverlapsFactor: 1,
BlockCache: opt.NoCache,
DisableBlockCache: true,
}
s, err := newSession(stor, o)
if err != nil {

View File

@@ -112,9 +112,9 @@ func (db *DB) flush(n int) (mem *memDB, nn int, err error) {
db.writeDelay += time.Since(start)
db.writeDelayN++
} else if db.writeDelayN > 0 {
db.logf("db@write was delayed N·%d T·%v", db.writeDelayN, db.writeDelay)
db.writeDelay = 0
db.writeDelayN = 0
db.logf("db@write was delayed N·%d T·%v", db.writeDelayN, db.writeDelay)
}
return
}

View File

@@ -17,14 +17,14 @@ import (
var _ = testutil.Defer(func() {
Describe("Leveldb external", func() {
o := &opt.Options{
BlockCache: opt.NoCache,
BlockRestartInterval: 5,
BlockSize: 80,
Compression: opt.NoCompression,
CachedOpenFiles: -1,
Strict: opt.StrictAll,
WriteBuffer: 1000,
CompactionTableSize: 2000,
DisableBlockCache: true,
BlockRestartInterval: 5,
BlockSize: 80,
Compression: opt.NoCompression,
OpenFilesCacheCapacity: -1,
Strict: opt.StrictAll,
WriteBuffer: 1000,
CompactionTableSize: 2000,
}
Describe("write test", func() {

View File

@@ -106,7 +106,7 @@ func (ik iKey) assert() {
panic("leveldb: nil iKey")
}
if len(ik) < 8 {
panic(fmt.Sprintf("leveldb: iKey %q, len=%d: invalid length", ik, len(ik)))
panic(fmt.Sprintf("leveldb: iKey %q, len=%d: invalid length", []byte(ik), len(ik)))
}
}
@@ -124,7 +124,7 @@ func (ik iKey) parseNum() (seq uint64, kt kType) {
num := ik.num()
seq, kt = uint64(num>>8), kType(num&0xff)
if kt > ktVal {
panic(fmt.Sprintf("leveldb: iKey %q, len=%d: invalid type %#x", ik, len(ik), kt))
panic(fmt.Sprintf("leveldb: iKey %q, len=%d: invalid type %#x", []byte(ik), len(ik), kt))
}
return
}

View File

@@ -20,8 +20,9 @@ const (
GiB = MiB * 1024
)
const (
DefaultBlockCacheSize = 8 * MiB
var (
DefaultBlockCacher = LRUCacher
DefaultBlockCacheCapacity = 8 * MiB
DefaultBlockRestartInterval = 16
DefaultBlockSize = 4 * KiB
DefaultCompactionExpandLimitFactor = 25
@@ -33,7 +34,8 @@ const (
DefaultCompactionTotalSize = 10 * MiB
DefaultCompactionTotalSizeMultiplier = 10.0
DefaultCompressionType = SnappyCompression
DefaultCachedOpenFiles = 500
DefaultOpenFilesCacher = LRUCacher
DefaultOpenFilesCacheCapacity = 500
DefaultMaxMemCompationLevel = 2
DefaultNumLevel = 7
DefaultWriteBuffer = 4 * MiB
@@ -41,22 +43,33 @@ const (
DefaultWriteL0SlowdownTrigger = 8
)
type noCache struct{}
// Cacher is a caching algorithm.
type Cacher interface {
New(capacity int) cache.Cacher
}
func (noCache) SetCapacity(capacity int) {}
func (noCache) Capacity() int { return 0 }
func (noCache) Used() int { return 0 }
func (noCache) Size() int { return 0 }
func (noCache) NumObjects() int { return 0 }
func (noCache) GetNamespace(id uint64) cache.Namespace { return nil }
func (noCache) PurgeNamespace(id uint64, fin cache.PurgeFin) {}
func (noCache) ZapNamespace(id uint64) {}
func (noCache) Purge(fin cache.PurgeFin) {}
func (noCache) Zap() {}
type CacherFunc struct {
NewFunc func(capacity int) cache.Cacher
}
var NoCache cache.Cache = noCache{}
func (f *CacherFunc) New(capacity int) cache.Cacher {
if f.NewFunc != nil {
return f.NewFunc(capacity)
}
return nil
}
// Compression is the per-block compression algorithm to use.
func noCacher(int) cache.Cacher { return nil }
var (
// LRUCacher is the LRU-cache algorithm.
LRUCacher = &CacherFunc{cache.NewLRU}
// NoCacher is the value to disable the caching algorithm.
NoCacher = &CacherFunc{}
)
// Compression is the 'sorted table' block compression algorithm to use.
type Compression uint
func (c Compression) String() string {
@@ -133,16 +146,17 @@ type Options struct {
// The default value is nil
AltFilters []filter.Filter
// BlockCache provides per-block caching for LevelDB. Specify NoCache to
// disable block caching.
// BlockCacher provides cache algorithm for LevelDB 'sorted table' block caching.
// Specify NoCacher to disable caching algorithm.
//
// By default LevelDB will create an LRU-cache with a capacity of BlockCacheSize.
BlockCache cache.Cache
// The default value is LRUCacher.
BlockCacher Cacher
// BlockCacheSize defines the capacity of the default 'block cache'.
// BlockCacheCapacity defines the capacity of the 'sorted table' block caching.
// Use -1 for zero; this has the same effect as specifying NoCacher for BlockCacher.
//
// The default value is 8MiB.
BlockCacheSize int
BlockCacheCapacity int
// BlockRestartInterval is the number of keys between restart points for
// delta encoding of keys.
@@ -156,13 +170,6 @@ type Options struct {
// The default value is 4KiB.
BlockSize int
// CachedOpenFiles defines the number of open files to keep around when not
// in use; the count includes files still in use.
// Set this to a negative value to disable caching.
//
// The default value is 500.
CachedOpenFiles int
// CompactionExpandLimitFactor limits compaction size after expanded.
// This will be multiplied by table size limit at compaction target level.
//
@@ -237,11 +244,17 @@ type Options struct {
// The default value uses the same ordering as bytes.Compare.
Comparer comparer.Comparer
// Compression defines the per-block compression to use.
// Compression defines the 'sorted table' block compression to use.
//
// The default value (DefaultCompression) uses snappy compression.
Compression Compression
// DisableBlockCache allows disabling the use of cache.Cache functionality on
// 'sorted table' blocks.
//
// The default value is false.
DisableBlockCache bool
// DisableCompactionBackoff allows disabling compaction retry backoff.
//
// The default value is false.
@@ -288,6 +301,18 @@ type Options struct {
// The default is 7.
NumLevel int
// OpenFilesCacher provides cache algorithm for open files caching.
// Specify NoCacher to disable caching algorithm.
//
// The default value is LRUCacher.
OpenFilesCacher Cacher
// OpenFilesCacheCapacity defines the capacity of the open files caching.
// Use -1 for zero; this has the same effect as specifying NoCacher for OpenFilesCacher.
//
// The default value is 500.
OpenFilesCacheCapacity int
// Strict defines the DB strict level.
Strict Strict
@@ -320,18 +345,22 @@ func (o *Options) GetAltFilters() []filter.Filter {
return o.AltFilters
}
func (o *Options) GetBlockCache() cache.Cache {
if o == nil {
return nil
}
return o.BlockCache
}
func (o *Options) GetBlockCacher() Cacher {
if o == nil || o.BlockCacher == nil {
return DefaultBlockCacher
} else if o.BlockCacher == NoCacher {
return nil
}
return o.BlockCacher
}
func (o *Options) GetBlockCacheSize() int {
if o == nil || o.BlockCacheSize <= 0 {
return DefaultBlockCacheSize
}
return o.BlockCacheSize
}
func (o *Options) GetBlockCacheCapacity() int {
if o == nil || o.BlockCacheCapacity == 0 {
return DefaultBlockCacheCapacity
} else if o.BlockCacheCapacity < 0 {
return 0
}
return o.BlockCacheCapacity
}
func (o *Options) GetBlockRestartInterval() int {
@@ -348,15 +377,6 @@ func (o *Options) GetBlockSize() int {
return o.BlockSize
}
func (o *Options) GetCachedOpenFiles() int {
if o == nil || o.CachedOpenFiles == 0 {
return DefaultCachedOpenFiles
} else if o.CachedOpenFiles < 0 {
return 0
}
return o.CachedOpenFiles
}
func (o *Options) GetCompactionExpandLimit(level int) int {
factor := DefaultCompactionExpandLimitFactor
if o != nil && o.CompactionExpandLimitFactor > 0 {
@@ -477,7 +497,7 @@ func (o *Options) GetMaxMemCompationLevel() int {
if o != nil {
if o.MaxMemCompationLevel > 0 {
level = o.MaxMemCompationLevel
} else if o.MaxMemCompationLevel == -1 {
} else if o.MaxMemCompationLevel < 0 {
level = 0
}
}
@@ -494,6 +514,25 @@ func (o *Options) GetNumLevel() int {
return o.NumLevel
}
func (o *Options) GetOpenFilesCacher() Cacher {
if o == nil || o.OpenFilesCacher == nil {
return DefaultOpenFilesCacher
}
if o.OpenFilesCacher == NoCacher {
return nil
}
return o.OpenFilesCacher
}
func (o *Options) GetOpenFilesCacheCapacity() int {
if o == nil || o.OpenFilesCacheCapacity == 0 {
return DefaultOpenFilesCacheCapacity
} else if o.OpenFilesCacheCapacity < 0 {
return 0
}
return o.OpenFilesCacheCapacity
}
func (o *Options) GetStrict(strict Strict) bool {
if o == nil || o.Strict == 0 {
return DefaultStrict&strict != 0
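Putting the new knobs together, here is a configuration sketch built only from the options and getters in this diff; note the -1 convention, which the getters map to a capacity of zero:

package main

import "github.com/syndtr/goleveldb/leveldb/opt"

func main() {
    o := &opt.Options{
        BlockCacher:            opt.LRUCacher, // the default; NoCacher disables
        BlockCacheCapacity:     16 * opt.MiB,  // default is 8 MiB
        OpenFilesCacheCapacity: -1,            // -1 means zero, i.e. disabled
    }
    _ = o.GetBlockCacheCapacity()     // 16 MiB
    _ = o.GetOpenFilesCacheCapacity() // 0
}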

View File

@@ -7,7 +7,6 @@
package leveldb
import (
"github.com/syndtr/goleveldb/leveldb/cache"
"github.com/syndtr/goleveldb/leveldb/filter"
"github.com/syndtr/goleveldb/leveldb/opt"
)
@@ -32,13 +31,6 @@ func (s *session) setOptions(o *opt.Options) {
no.AltFilters[i] = &iFilter{filter}
}
}
// Block cache.
switch o.GetBlockCache() {
case nil:
no.BlockCache = cache.NewLRUCache(o.GetBlockCacheSize())
case opt.NoCache:
no.BlockCache = nil
}
// Comparer.
s.icmp = &iComparer{o.GetComparer()}
no.Comparer = s.icmp

View File

@@ -73,7 +73,7 @@ func newSession(stor storage.Storage, o *opt.Options) (s *session, err error) {
stCompPtrs: make([]iKey, o.GetNumLevel()),
}
s.setOptions(o)
s.tops = newTableOps(s, s.o.GetCachedOpenFiles())
s.tops = newTableOps(s)
s.setVersion(newVersion(s))
s.log("log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed")
return
@@ -82,9 +82,6 @@ func newSession(stor storage.Storage, o *opt.Options) (s *session, err error) {
// Close session.
func (s *session) close() {
s.tops.close()
if bc := s.o.GetBlockCache(); bc != nil {
bc.Purge(nil)
}
if s.manifest != nil {
s.manifest.Close()
}

View File

@@ -221,7 +221,7 @@ func (fs *fileStorage) GetManifest() (f File, err error) {
fs.log(fmt.Sprintf("skipping %s: invalid file name", fn))
continue
}
if _, e1 := strconv.ParseUint(fn[7:], 10, 0); e1 != nil {
if _, e1 := strconv.ParseUint(fn[8:], 10, 0); e1 != nil {
fs.log(fmt.Sprintf("skipping %s: invalid file num: %v", fn, e1))
continue
}

View File

@@ -286,10 +286,10 @@ func (x *tFilesSortByNum) Less(i, j int) bool {
// Table operations.
type tOps struct {
s *session
cache cache.Cache
cacheNS cache.Namespace
bpool *util.BufferPool
s *session
cache *cache.Cache
bcache *cache.Cache
bpool *util.BufferPool
}
// Creates an empty table and returns table writer.
@@ -338,26 +338,28 @@ func (t *tOps) createFrom(src iterator.Iterator) (f *tFile, n int, err error) {
// Opens table. It returns a cache handle, which should
// be released after use.
func (t *tOps) open(f *tFile) (ch cache.Handle, err error) {
func (t *tOps) open(f *tFile) (ch *cache.Handle, err error) {
num := f.file.Num()
ch = t.cacheNS.Get(num, func() (charge int, value interface{}) {
ch = t.cache.Get(0, num, func() (size int, value cache.Value) {
var r storage.Reader
r, err = f.file.Open()
if err != nil {
return 0, nil
}
var bcacheNS cache.Namespace
if bc := t.s.o.GetBlockCache(); bc != nil {
bcacheNS = bc.GetNamespace(num)
var bcache *cache.CacheGetter
if t.bcache != nil {
bcache = &cache.CacheGetter{Cache: t.bcache, NS: num}
}
var tr *table.Reader
tr, err = table.NewReader(r, int64(f.size), storage.NewFileInfo(f.file), bcacheNS, t.bpool, t.s.o.Options)
tr, err = table.NewReader(r, int64(f.size), storage.NewFileInfo(f.file), bcache, t.bpool, t.s.o.Options)
if err != nil {
r.Close()
return 0, nil
}
return 1, tr
})
if ch == nil && err == nil {
err = ErrClosed
@@ -412,16 +414,14 @@ func (t *tOps) newIterator(f *tFile, slice *util.Range, ro *opt.ReadOptions) ite
// no one uses the table.
func (t *tOps) remove(f *tFile) {
num := f.file.Num()
t.cacheNS.Delete(num, func(exist, pending bool) {
if !pending {
if err := f.file.Remove(); err != nil {
t.s.logf("table@remove removing @%d %q", num, err)
} else {
t.s.logf("table@remove removed @%d", num)
}
if bc := t.s.o.GetBlockCache(); bc != nil {
bc.ZapNamespace(num)
}
t.cache.Delete(0, num, func() {
if err := f.file.Remove(); err != nil {
t.s.logf("table@remove removing @%d %q", num, err)
} else {
t.s.logf("table@remove removed @%d", num)
}
if t.bcache != nil {
t.bcache.EvictNS(num)
}
})
}
@@ -429,18 +429,34 @@ func (t *tOps) remove(f *tFile) {
// Closes the table ops instance. It will close all tables,
// regardless of whether they are still in use.
func (t *tOps) close() {
t.cache.Zap()
t.bpool.Close()
t.cache.Close()
if t.bcache != nil {
t.bcache.Close()
}
}
// Creates new initialized table ops instance.
func newTableOps(s *session, cacheCap int) *tOps {
c := cache.NewLRUCache(cacheCap)
func newTableOps(s *session) *tOps {
var (
cacher cache.Cacher
bcache *cache.Cache
)
if s.o.GetOpenFilesCacheCapacity() > 0 {
cacher = cache.NewLRU(s.o.GetOpenFilesCacheCapacity())
}
if !s.o.DisableBlockCache {
var bcacher cache.Cacher
if s.o.GetBlockCacheCapacity() > 0 {
bcacher = cache.NewLRU(s.o.GetBlockCacheCapacity())
}
bcache = cache.NewCache(bcacher)
}
return &tOps{
s: s,
cache: c,
cacheNS: c.GetNamespace(0),
bpool: util.NewBufferPool(s.o.GetBlockSize() + 5),
s: s,
cache: cache.NewCache(cacher),
bcache: bcache,
bpool: util.NewBufferPool(s.o.GetBlockSize() + 5),
}
}
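newTableOps now wires two caches: the table cache (always a *cache.Cache, possibly with a nil Cacher behind it) and an optional block cache that each reader receives as a cache.CacheGetter, which simply fixes the namespace to the table's file number. A sketch of the getter in isolation, using only calls that appear in this diff:

package main

import (
    "fmt"

    "github.com/syndtr/goleveldb/leveldb/cache"
)

func main() {
    bcache := cache.NewCache(cache.NewLRU(8 * 1024 * 1024))

    // One getter per table file: the file number is the namespace, so
    // removing a table maps to a single EvictNS call (see remove above).
    getter := &cache.CacheGetter{Cache: bcache, NS: 42}

    h := getter.Get(1000, func() (size int, value cache.Value) {
        return 1, "block@1000" // stand-in for a decoded block
    })
    if h != nil {
        fmt.Println(h.Value())
        h.Release()
    }

    bcache.EvictNS(42) // table 42 deleted; drop all of its cached blocks
}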

View File

@@ -509,7 +509,7 @@ type Reader struct {
mu sync.RWMutex
fi *storage.FileInfo
reader io.ReaderAt
cache cache.Namespace
cache *cache.CacheGetter
err error
bpool *util.BufferPool
// Options
@@ -613,18 +613,22 @@ func (r *Reader) readBlock(bh blockHandle, verifyChecksum bool) (*block, error)
func (r *Reader) readBlockCached(bh blockHandle, verifyChecksum, fillCache bool) (*block, util.Releaser, error) {
if r.cache != nil {
var err error
ch := r.cache.Get(bh.offset, func() (charge int, value interface{}) {
if !fillCache {
return 0, nil
}
var b *block
b, err = r.readBlock(bh, verifyChecksum)
if err != nil {
return 0, nil
}
return cap(b.data), b
})
var (
err error
ch *cache.Handle
)
if fillCache {
ch = r.cache.Get(bh.offset, func() (size int, value cache.Value) {
var b *block
b, err = r.readBlock(bh, verifyChecksum)
if err != nil {
return 0, nil
}
return cap(b.data), b
})
} else {
ch = r.cache.Get(bh.offset, nil)
}
if ch != nil {
b, ok := ch.Value().(*block)
if !ok {
@@ -667,18 +671,22 @@ func (r *Reader) readFilterBlock(bh blockHandle) (*filterBlock, error) {
func (r *Reader) readFilterBlockCached(bh blockHandle, fillCache bool) (*filterBlock, util.Releaser, error) {
if r.cache != nil {
var err error
ch := r.cache.Get(bh.offset, func() (charge int, value interface{}) {
if !fillCache {
return 0, nil
}
var b *filterBlock
b, err = r.readFilterBlock(bh)
if err != nil {
return 0, nil
}
return cap(b.data), b
})
var (
err error
ch *cache.Handle
)
if fillCache {
ch = r.cache.Get(bh.offset, func() (size int, value cache.Value) {
var b *filterBlock
b, err = r.readFilterBlock(bh)
if err != nil {
return 0, nil
}
return cap(b.data), b
})
} else {
ch = r.cache.Get(bh.offset, nil)
}
if ch != nil {
b, ok := ch.Value().(*filterBlock)
if !ok {
@@ -980,7 +988,7 @@ func (r *Reader) Release() {
// The fi, cache and bpool are optional and can be nil.
//
// The returned table reader instance is goroutine-safe.
func NewReader(f io.ReaderAt, size int64, fi *storage.FileInfo, cache cache.Namespace, bpool *util.BufferPool, o *opt.Options) (*Reader, error) {
func NewReader(f io.ReaderAt, size int64, fi *storage.FileInfo, cache *cache.CacheGetter, bpool *util.BufferPool, o *opt.Options) (*Reader, error) {
if f == nil {
return nil, errors.New("leveldb/table: nil file")
}

View File

@@ -7,10 +7,15 @@ package snappy
import (
"encoding/binary"
"errors"
"io"
)
// ErrCorrupt reports that the input is invalid.
var ErrCorrupt = errors.New("snappy: corrupt input")
var (
// ErrCorrupt reports that the input is invalid.
ErrCorrupt = errors.New("snappy: corrupt input")
// ErrUnsupported reports that the input isn't supported.
ErrUnsupported = errors.New("snappy: unsupported input")
)
// DecodedLen returns the length of the decoded block.
func DecodedLen(src []byte) (int, error) {
@@ -122,3 +127,166 @@ func Decode(dst, src []byte) ([]byte, error) {
}
return dst[:d], nil
}
// NewReader returns a new Reader that decompresses from r, using the framing
// format described at
// https://code.google.com/p/snappy/source/browse/trunk/framing_format.txt
func NewReader(r io.Reader) *Reader {
return &Reader{
r: r,
decoded: make([]byte, maxUncompressedChunkLen),
buf: make([]byte, MaxEncodedLen(maxUncompressedChunkLen)+checksumSize),
}
}
// Reader is an io.Reader that can read Snappy-compressed bytes.
type Reader struct {
r io.Reader
err error
decoded []byte
buf []byte
// decoded[i:j] contains decoded bytes that have not yet been passed on.
i, j int
readHeader bool
}
// Reset discards any buffered data, resets all state, and switches the Snappy
// reader to read from r. This permits reusing a Reader rather than allocating
// a new one.
func (r *Reader) Reset(reader io.Reader) {
r.r = reader
r.err = nil
r.i = 0
r.j = 0
r.readHeader = false
}
func (r *Reader) readFull(p []byte) (ok bool) {
if _, r.err = io.ReadFull(r.r, p); r.err != nil {
if r.err == io.ErrUnexpectedEOF {
r.err = ErrCorrupt
}
return false
}
return true
}
// Read satisfies the io.Reader interface.
func (r *Reader) Read(p []byte) (int, error) {
if r.err != nil {
return 0, r.err
}
for {
if r.i < r.j {
n := copy(p, r.decoded[r.i:r.j])
r.i += n
return n, nil
}
if !r.readFull(r.buf[:4]) {
return 0, r.err
}
chunkType := r.buf[0]
if !r.readHeader {
if chunkType != chunkTypeStreamIdentifier {
r.err = ErrCorrupt
return 0, r.err
}
r.readHeader = true
}
chunkLen := int(r.buf[1]) | int(r.buf[2])<<8 | int(r.buf[3])<<16
if chunkLen > len(r.buf) {
r.err = ErrUnsupported
return 0, r.err
}
// The chunk types are specified at
// https://code.google.com/p/snappy/source/browse/trunk/framing_format.txt
switch chunkType {
case chunkTypeCompressedData:
// Section 4.2. Compressed data (chunk type 0x00).
if chunkLen < checksumSize {
r.err = ErrCorrupt
return 0, r.err
}
buf := r.buf[:chunkLen]
if !r.readFull(buf) {
return 0, r.err
}
checksum := uint32(buf[0]) | uint32(buf[1])<<8 | uint32(buf[2])<<16 | uint32(buf[3])<<24
buf = buf[checksumSize:]
n, err := DecodedLen(buf)
if err != nil {
r.err = err
return 0, r.err
}
if n > len(r.decoded) {
r.err = ErrCorrupt
return 0, r.err
}
if _, err := Decode(r.decoded, buf); err != nil {
r.err = err
return 0, r.err
}
if crc(r.decoded[:n]) != checksum {
r.err = ErrCorrupt
return 0, r.err
}
r.i, r.j = 0, n
continue
case chunkTypeUncompressedData:
// Section 4.3. Uncompressed data (chunk type 0x01).
if chunkLen < checksumSize {
r.err = ErrCorrupt
return 0, r.err
}
buf := r.buf[:checksumSize]
if !r.readFull(buf) {
return 0, r.err
}
checksum := uint32(buf[0]) | uint32(buf[1])<<8 | uint32(buf[2])<<16 | uint32(buf[3])<<24
// Read directly into r.decoded instead of via r.buf.
n := chunkLen - checksumSize
if !r.readFull(r.decoded[:n]) {
return 0, r.err
}
if crc(r.decoded[:n]) != checksum {
r.err = ErrCorrupt
return 0, r.err
}
r.i, r.j = 0, n
continue
case chunkTypeStreamIdentifier:
// Section 4.1. Stream identifier (chunk type 0xff).
if chunkLen != len(magicBody) {
r.err = ErrCorrupt
return 0, r.err
}
if !r.readFull(r.buf[:len(magicBody)]) {
return 0, r.err
}
for i := 0; i < len(magicBody); i++ {
if r.buf[i] != magicBody[i] {
r.err = ErrCorrupt
return 0, r.err
}
}
continue
}
if chunkType <= 0x7f {
// Section 4.5. Reserved unskippable chunks (chunk types 0x02-0x7f).
r.err = ErrUnsupported
return 0, r.err
} else {
// Section 4.4 Padding (chunk type 0xfe).
// Section 4.6. Reserved skippable chunks (chunk types 0x80-0xfd).
if !r.readFull(r.buf[:chunkLen]) {
return 0, r.err
}
}
}
}

View File

@@ -6,6 +6,7 @@ package snappy
import (
"encoding/binary"
"io"
)
// We limit how far copy back-references can go, the same as the C++ code.
@@ -172,3 +173,86 @@ func MaxEncodedLen(srcLen int) int {
// This last factor dominates the blowup, so the final estimate is:
return 32 + srcLen + srcLen/6
}
// NewWriter returns a new Writer that compresses to w, using the framing
// format described at
// https://code.google.com/p/snappy/source/browse/trunk/framing_format.txt
func NewWriter(w io.Writer) *Writer {
return &Writer{
w: w,
enc: make([]byte, MaxEncodedLen(maxUncompressedChunkLen)),
}
}
// Writer is an io.Writer that can write Snappy-compressed bytes.
type Writer struct {
w io.Writer
err error
enc []byte
buf [checksumSize + chunkHeaderSize]byte
wroteHeader bool
}
// Reset discards the writer's state and switches the Snappy writer to write to
// w. This permits reusing a Writer rather than allocating a new one.
func (w *Writer) Reset(writer io.Writer) {
w.w = writer
w.err = nil
w.wroteHeader = false
}
// Write satisfies the io.Writer interface.
func (w *Writer) Write(p []byte) (n int, errRet error) {
if w.err != nil {
return 0, w.err
}
if !w.wroteHeader {
copy(w.enc, magicChunk)
if _, err := w.w.Write(w.enc[:len(magicChunk)]); err != nil {
w.err = err
return n, err
}
w.wroteHeader = true
}
for len(p) > 0 {
var uncompressed []byte
if len(p) > maxUncompressedChunkLen {
uncompressed, p = p[:maxUncompressedChunkLen], p[maxUncompressedChunkLen:]
} else {
uncompressed, p = p, nil
}
checksum := crc(uncompressed)
// Compress the buffer, discarding the result if the improvement
// isn't at least 12.5%.
chunkType := uint8(chunkTypeCompressedData)
chunkBody, err := Encode(w.enc, uncompressed)
if err != nil {
w.err = err
return n, err
}
if len(chunkBody) >= len(uncompressed)-len(uncompressed)/8 {
chunkType, chunkBody = chunkTypeUncompressedData, uncompressed
}
chunkLen := 4 + len(chunkBody)
w.buf[0] = chunkType
w.buf[1] = uint8(chunkLen >> 0)
w.buf[2] = uint8(chunkLen >> 8)
w.buf[3] = uint8(chunkLen >> 16)
w.buf[4] = uint8(checksum >> 0)
w.buf[5] = uint8(checksum >> 8)
w.buf[6] = uint8(checksum >> 16)
w.buf[7] = uint8(checksum >> 24)
if _, err = w.w.Write(w.buf[:]); err != nil {
w.err = err
return n, err
}
if _, err = w.w.Write(chunkBody); err != nil {
w.err = err
return n, err
}
n += len(uncompressed)
}
return n, nil
}
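Together the new Reader and Writer implement the snappy framing format which, unlike the raw Encode/Decode pair, is streamable and self-delimiting. A round-trip sketch modeled on TestFramingFormat below (the import path is illustrative; use whatever path this vendored copy lives at):

package main

import (
    "bytes"
    "fmt"
    "io/ioutil"

    "github.com/syndtr/goleveldb/leveldb/snappy"
)

func main() {
    src := bytes.Repeat([]byte("framing format demo\n"), 1000)

    // Compress through the framed Writer...
    buf := new(bytes.Buffer)
    if _, err := snappy.NewWriter(buf).Write(src); err != nil {
        panic(err)
    }

    // ...then decompress through the framed Reader.
    dst, err := ioutil.ReadAll(snappy.NewReader(buf))
    if err != nil {
        panic(err)
    }
    fmt.Println(bytes.Equal(dst, src)) // true
}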

View File

@@ -8,6 +8,10 @@
// The C++ snappy implementation is at http://code.google.com/p/snappy/
package snappy
import (
"hash/crc32"
)
/*
Each encoded block begins with the varint-encoded length of the decoded data,
followed by a sequence of chunks. Chunks begin and end on byte boundaries. The
@@ -36,3 +40,29 @@ const (
tagCopy2 = 0x02
tagCopy4 = 0x03
)
const (
checksumSize = 4
chunkHeaderSize = 4
magicChunk = "\xff\x06\x00\x00" + magicBody
magicBody = "sNaPpY"
// https://code.google.com/p/snappy/source/browse/trunk/framing_format.txt says
// that "the uncompressed data in a chunk must be no longer than 65536 bytes".
maxUncompressedChunkLen = 65536
)
const (
chunkTypeCompressedData = 0x00
chunkTypeUncompressedData = 0x01
chunkTypePadding = 0xfe
chunkTypeStreamIdentifier = 0xff
)
var crcTable = crc32.MakeTable(crc32.Castagnoli)
// crc implements the checksum specified in section 3 of
// https://code.google.com/p/snappy/source/browse/trunk/framing_format.txt
func crc(b []byte) uint32 {
c := crc32.Update(0, crcTable, b)
return uint32(c>>15|c<<17) + 0xa282ead8
}
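The final addition masks the CRC as the framing spec requires: rotate the CRC-32C right by 15 bits (for 32-bit values, c>>15 | c<<17 is exactly that rotation) and add a constant, so checksums of data that itself contains CRCs do not collide trivially. The masking is invertible, as this standalone sketch checks:

package main

import "fmt"

const magic = 0xa282ead8

// mask: rotate right by 15, then add the constant (mod 2^32).
func mask(c uint32) uint32 {
    return (c>>15 | c<<17) + magic
}

// unmask: subtract the constant, then rotate left by 15.
func unmask(m uint32) uint32 {
    x := m - magic
    return x<<15 | x>>17
}

func main() {
    for _, c := range []uint32{0, 1, 0xdeadbeef, 0xffffffff} {
        if unmask(mask(c)) != c {
            panic("mask/unmask mismatch")
        }
    }
    fmt.Println("mask/unmask round-trips")
}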

View File

@@ -18,7 +18,10 @@ import (
"testing"
)
var download = flag.Bool("download", false, "If true, download any missing files before running benchmarks")
var (
download = flag.Bool("download", false, "If true, download any missing files before running benchmarks")
testdata = flag.String("testdata", "testdata", "Directory containing the test data")
)
func roundtrip(b, ebuf, dbuf []byte) error {
e, err := Encode(ebuf, b)
@@ -55,11 +58,11 @@ func TestSmallCopy(t *testing.T) {
}
func TestSmallRand(t *testing.T) {
rand.Seed(27354294)
rng := rand.New(rand.NewSource(27354294))
for n := 1; n < 20000; n += 23 {
b := make([]byte, n)
for i, _ := range b {
b[i] = uint8(rand.Uint32())
for i := range b {
b[i] = uint8(rng.Uint32())
}
if err := roundtrip(b, nil, nil); err != nil {
t.Fatal(err)
@@ -70,7 +73,7 @@ func TestSmallRand(t *testing.T) {
func TestSmallRegular(t *testing.T) {
for n := 1; n < 20000; n += 23 {
b := make([]byte, n)
for i, _ := range b {
for i := range b {
b[i] = uint8(i%10 + 'a')
}
if err := roundtrip(b, nil, nil); err != nil {
@@ -79,6 +82,120 @@ func TestSmallRegular(t *testing.T) {
}
}
func cmp(a, b []byte) error {
if len(a) != len(b) {
return fmt.Errorf("got %d bytes, want %d", len(a), len(b))
}
for i := range a {
if a[i] != b[i] {
return fmt.Errorf("byte #%d: got 0x%02x, want 0x%02x", i, a[i], b[i])
}
}
return nil
}
func TestFramingFormat(t *testing.T) {
// src is composed of alternating 1e5-sized sequences of random
// (incompressible) bytes and repeated (compressible) bytes. 1e5 was chosen
// because it is larger than maxUncompressedChunkLen (64k).
src := make([]byte, 1e6)
rng := rand.New(rand.NewSource(1))
for i := 0; i < 10; i++ {
if i%2 == 0 {
for j := 0; j < 1e5; j++ {
src[1e5*i+j] = uint8(rng.Intn(256))
}
} else {
for j := 0; j < 1e5; j++ {
src[1e5*i+j] = uint8(i)
}
}
}
buf := new(bytes.Buffer)
if _, err := NewWriter(buf).Write(src); err != nil {
t.Fatalf("Write: encoding: %v", err)
}
dst, err := ioutil.ReadAll(NewReader(buf))
if err != nil {
t.Fatalf("ReadAll: decoding: %v", err)
}
if err := cmp(dst, src); err != nil {
t.Fatal(err)
}
}
func TestReaderReset(t *testing.T) {
gold := bytes.Repeat([]byte("All that is gold does not glitter,\n"), 10000)
buf := new(bytes.Buffer)
if _, err := NewWriter(buf).Write(gold); err != nil {
t.Fatalf("Write: %v", err)
}
encoded, invalid, partial := buf.String(), "invalid", "partial"
r := NewReader(nil)
for i, s := range []string{encoded, invalid, partial, encoded, partial, invalid, encoded, encoded} {
if s == partial {
r.Reset(strings.NewReader(encoded))
if _, err := r.Read(make([]byte, 101)); err != nil {
t.Errorf("#%d: %v", i, err)
continue
}
continue
}
r.Reset(strings.NewReader(s))
got, err := ioutil.ReadAll(r)
switch s {
case encoded:
if err != nil {
t.Errorf("#%d: %v", i, err)
continue
}
if err := cmp(got, gold); err != nil {
t.Errorf("#%d: %v", i, err)
continue
}
case invalid:
if err == nil {
t.Errorf("#%d: got nil error, want non-nil", i)
continue
}
}
}
}
func TestWriterReset(t *testing.T) {
gold := bytes.Repeat([]byte("Not all those who wander are lost;\n"), 10000)
var gots, wants [][]byte
const n = 20
w, failed := NewWriter(nil), false
for i := 0; i <= n; i++ {
buf := new(bytes.Buffer)
w.Reset(buf)
want := gold[:len(gold)*i/n]
if _, err := w.Write(want); err != nil {
t.Errorf("#%d: Write: %v", i, err)
failed = true
continue
}
got, err := ioutil.ReadAll(NewReader(buf))
if err != nil {
t.Errorf("#%d: ReadAll: %v", i, err)
failed = true
continue
}
gots = append(gots, got)
wants = append(wants, want)
}
if failed {
return
}
for i := range gots {
if err := cmp(gots[i], wants[i]); err != nil {
t.Errorf("#%d: %v", i, err)
}
}
}
func benchDecode(b *testing.B, src []byte) {
encoded, err := Encode(nil, src)
if err != nil {
@@ -102,7 +219,7 @@ func benchEncode(b *testing.B, src []byte) {
}
}
func readFile(b *testing.B, filename string) []byte {
func readFile(b testing.TB, filename string) []byte {
src, err := ioutil.ReadFile(filename)
if err != nil {
b.Fatalf("failed reading %s: %s", filename, err)
@@ -144,7 +261,7 @@ func BenchmarkWordsEncode1e5(b *testing.B) { benchWords(b, 1e5, false) }
func BenchmarkWordsEncode1e6(b *testing.B) { benchWords(b, 1e6, false) }
// testFiles' values are copied directly from
-// https://code.google.com/p/snappy/source/browse/trunk/snappy_unittest.cc.
+// https://raw.githubusercontent.com/google/snappy/master/snappy_unittest.cc
// The label field is unused in snappy-go.
var testFiles = []struct {
label string
@@ -152,29 +269,36 @@ var testFiles = []struct {
}{
{"html", "html"},
{"urls", "urls.10K"},
{"jpg", "house.jpg"},
{"pdf", "mapreduce-osdi-1.pdf"},
{"jpg", "fireworks.jpeg"},
{"jpg_200", "fireworks.jpeg"},
{"pdf", "paper-100k.pdf"},
{"html4", "html_x_4"},
{"cp", "cp.html"},
{"c", "fields.c"},
{"lsp", "grammar.lsp"},
{"xls", "kennedy.xls"},
{"txt1", "alice29.txt"},
{"txt2", "asyoulik.txt"},
{"txt3", "lcet10.txt"},
{"txt4", "plrabn12.txt"},
{"bin", "ptt5"},
{"sum", "sum"},
{"man", "xargs.1"},
{"pb", "geo.protodata"},
{"gaviota", "kppkn.gtb"},
}
// The test data files are present at this canonical URL.
const baseURL = "https://snappy.googlecode.com/svn/trunk/testdata/"
const baseURL = "https://raw.githubusercontent.com/google/snappy/master/testdata/"
func downloadTestdata(basename string) (errRet error) {
filename := filepath.Join("testdata", basename)
filename := filepath.Join(*testdata, basename)
if stat, err := os.Stat(filename); err == nil && stat.Size() != 0 {
return nil
}
if !*download {
return fmt.Errorf("test data not found; skipping benchmark without the -download flag")
}
// Download the official snappy C++ implementation reference test data
// files for benchmarking.
if err := os.Mkdir(*testdata, 0777); err != nil && !os.IsExist(err) {
return fmt.Errorf("failed to create testdata: %s", err)
}
f, err := os.Create(filename)
if err != nil {
return fmt.Errorf("failed to create %s: %s", filename, err)
@@ -185,36 +309,27 @@ func downloadTestdata(basename string) (errRet error) {
os.Remove(filename)
}
}()
-resp, err := http.Get(baseURL + basename)
+url := baseURL + basename
+resp, err := http.Get(url)
if err != nil {
return fmt.Errorf("failed to download %s: %s", baseURL+basename, err)
return fmt.Errorf("failed to download %s: %s", url, err)
}
defer resp.Body.Close()
if s := resp.StatusCode; s != http.StatusOK {
return fmt.Errorf("downloading %s: HTTP status code %d (%s)", url, s, http.StatusText(s))
}
_, err = io.Copy(f, resp.Body)
if err != nil {
return fmt.Errorf("failed to write %s: %s", filename, err)
return fmt.Errorf("failed to download %s to %s: %s", url, filename, err)
}
return nil
}
func benchFile(b *testing.B, n int, decode bool) {
filename := filepath.Join("testdata", testFiles[n].filename)
if stat, err := os.Stat(filename); err != nil || stat.Size() == 0 {
if !*download {
b.Fatal("test data not found; skipping benchmark without the -download flag")
}
// Download the official snappy C++ implementation reference test data
// files for benchmarking.
if err := os.Mkdir("testdata", 0777); err != nil && !os.IsExist(err) {
b.Fatalf("failed to create testdata: %s", err)
}
for _, tf := range testFiles {
if err := downloadTestdata(tf.filename); err != nil {
b.Fatalf("failed to download testdata: %s", err)
}
}
if err := downloadTestdata(testFiles[n].filename); err != nil {
b.Fatalf("failed to download testdata: %s", err)
}
data := readFile(b, filename)
data := readFile(b, filepath.Join(*testdata, testFiles[n].filename))
if decode {
benchDecode(b, data)
} else {
@@ -235,12 +350,6 @@ func Benchmark_UFlat8(b *testing.B) { benchFile(b, 8, true) }
func Benchmark_UFlat9(b *testing.B) { benchFile(b, 9, true) }
func Benchmark_UFlat10(b *testing.B) { benchFile(b, 10, true) }
func Benchmark_UFlat11(b *testing.B) { benchFile(b, 11, true) }
-func Benchmark_UFlat12(b *testing.B) { benchFile(b, 12, true) }
-func Benchmark_UFlat13(b *testing.B) { benchFile(b, 13, true) }
-func Benchmark_UFlat14(b *testing.B) { benchFile(b, 14, true) }
-func Benchmark_UFlat15(b *testing.B) { benchFile(b, 15, true) }
-func Benchmark_UFlat16(b *testing.B) { benchFile(b, 16, true) }
-func Benchmark_UFlat17(b *testing.B) { benchFile(b, 17, true) }
func Benchmark_ZFlat0(b *testing.B) { benchFile(b, 0, false) }
func Benchmark_ZFlat1(b *testing.B) { benchFile(b, 1, false) }
func Benchmark_ZFlat2(b *testing.B) { benchFile(b, 2, false) }
@@ -253,9 +362,3 @@ func Benchmark_ZFlat8(b *testing.B) { benchFile(b, 8, false) }
func Benchmark_ZFlat9(b *testing.B) { benchFile(b, 9, false) }
func Benchmark_ZFlat10(b *testing.B) { benchFile(b, 10, false) }
func Benchmark_ZFlat11(b *testing.B) { benchFile(b, 11, false) }
-func Benchmark_ZFlat12(b *testing.B) { benchFile(b, 12, false) }
-func Benchmark_ZFlat13(b *testing.B) { benchFile(b, 13, false) }
-func Benchmark_ZFlat14(b *testing.B) { benchFile(b, 14, false) }
-func Benchmark_ZFlat15(b *testing.B) { benchFile(b, 15, false) }
-func Benchmark_ZFlat16(b *testing.B) { benchFile(b, 16, false) }
-func Benchmark_ZFlat17(b *testing.B) { benchFile(b, 17, false) }
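// To actually fetch the reference files and run these benchmarks, pass the
// -download flag that downloadTestdata checks above (assuming the flag is
// registered elsewhere in this package), e.g.:
//
//	go test -bench=. -download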


@@ -0,0 +1,7 @@
language: go
go:
- 1.1
- 1.2
- 1.3
- 1.4
- tip


@@ -0,0 +1,19 @@
Copyright (c) 2014 Barracuda Networks, Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.


@@ -0,0 +1,45 @@
Suture
======
[![Build Status](https://travis-ci.org/thejerf/suture.png?branch=master)](https://travis-ci.org/thejerf/suture)
Suture provides Erlang-ish supervisor trees for Go. "Supervisor trees" ->
"sutree" -> "suture" -> holds your code together when it's trying to die.
This is intended to be a production-quality library going into code that I
will be very early on the phone tree to support when it goes down. However,
it has not been deployed into something quite that serious yet. (I will
update this statement when that changes.)
It is intended to deal gracefully with the real failure cases that can
occur with supervision trees (such as burning all your CPU time endlessly
restarting dead services), while also making no unnecessary demands on the
"service" code, and providing hooks to perform adequate logging with in a
production environment.
[A blog post describing the design decisions](http://www.jerf.org/iri/post/2930)
is available.
This module is fully covered with [godoc](http://godoc.org/github.com/thejerf/suture),
including an example, usage, and everything else you might expect from a
README.md on GitHub. (DRY.)
This is not currently tagged with particular git tags for Go, as it is
still considered alpha code. As I move this into production and feel more
confident about it, I'll give it relevant tags.
Code Signing
------------
Starting with the commit after ac7cf8591b, I will be signing this repository
with the ["jerf" keybase account](https://keybase.io/jerf).
Aspiration
----------
One of the big wins the Erlang community gets from pervasive OTP support
is the ease of distributing libraries that fit cleanly into the OTP
paradigm. It ought to someday be considered a good
idea to distribute libraries that provide some sort of supervisor tree
functionality out of the box. It is possible to provide this functionality
without explicitly depending on the Suture library.
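As a compact illustration of the ideas above, a minimal program might look
like the following sketch (the Printer service and every name in it is
illustrative; only NewSimple, Add, Serve and the Service interface come
from the library):

    package main

    import (
        "fmt"
        "time"

        "github.com/thejerf/suture"
    )

    // Printer is a hypothetical service: it prints a line every second
    // until told to stop.
    type Printer struct {
        stop chan struct{}
    }

    func (p *Printer) Serve() {
        for {
            select {
            case <-time.After(time.Second):
                fmt.Println("tick")
            case <-p.stop:
                return
            }
        }
    }

    // Stop does not return until Serve has acted on the request,
    // per the Service contract.
    func (p *Printer) Stop() {
        p.stop <- struct{}{}
    }

    func main() {
        supervisor := suture.NewSimple("main")
        supervisor.Add(&Printer{stop: make(chan struct{})})
        supervisor.Serve() // runs until supervisor.Stop() is called
    }

Since Supervisors are themselves Services, the same pattern nests: Add one
supervisor to another to build the tree.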


@@ -0,0 +1,12 @@
#!/bin/bash
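# Pre-commit hook: block the commit if golint reports anything (or exits
# non-zero), then run the package tests; a test failure also blocks the commit.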
GOLINTOUT=$(golint *go)
if [ ! -z "$GOLINTOUT" -o "$?" != 0 ]; then
echo golint failed:
echo $GOLINTOUT
exit 1
fi
go test


@@ -0,0 +1,650 @@
/*
Package suture provides Erlang-like supervisor trees.
This implements Erlang-esque supervisor trees, as adapted for Go. This is
intended to be an industrial-strength implementation, but it has not yet
been deployed in a hostile environment. (It's headed there, though.)
Supervisor Tree -> SuTree -> suture -> holds your code together when it's
trying to fall apart.
Why use Suture?
* You want to write bullet-resistant services that will remain available
despite unforeseen failure.
* You need the code to be smart enough not to consume 100% of the CPU
restarting things.
* You want to easily compose multiple such services in one program.
* You want the Erlang programmers to stop lording their supervision
trees over you.
Suture has 100% test coverage, and is golint clean. This doesn't prove it
free of bugs, but it shows I care.
A blog post describing the design decisions is available at
http://www.jerf.org/iri/post/2930 .
Using Suture
To idiomatically use Suture, create a Supervisor which is your top level
"application" supervisor. This will often occur in your program's "main"
function.
Create "Service"s, which implement the Service interface. .Add() them
to your Supervisor. Supervisors are also services, so you can create a
tree structure here, depending on the exact combination of restarts
you want to create.
Finally, as what is probably the last line of your main() function, call
.Serve() on your top level supervisor. This will start all the services
you've defined.
See the Example for an example, using a simple service that serves out
incrementing integers.
*/
package suture
import (
"errors"
"fmt"
"log"
"math"
"runtime"
"sync/atomic"
"time"
)
const (
notRunning = iota
normal
paused
)
type supervisorID uint32
type serviceID uint32
var currentSupervisorID uint32
// ErrWrongSupervisor is returned by the (*Supervisor).Remove method
// if you pass a ServiceToken from the wrong Supervisor.
var ErrWrongSupervisor = errors.New("wrong supervisor for this service token, no service removed")
// ServiceToken is an opaque identifier that can be used to terminate a service that
// has been Add()ed to a Supervisor.
type ServiceToken struct {
id uint64
}
/*
Supervisor is the core type of the module that represents a Supervisor.
Supervisors should be constructed either by New or NewSimple.
Once constructed, a Supervisor should be started in one of three ways:
1. Calling .Serve().
2. Calling .ServeBackground().
3. Adding it to an existing Supervisor.
Calling Serve will cause the supervisor to run until it is shut down by
an external user calling Stop() on it. If that never happens, it simply
runs forever. I suggest creating your services in Supervisors, then making
a Serve() call on your top-level Supervisor be the last line of your main
func.
Calling ServeBackground will CORRECTLY start the supervisor running in a
new goroutine. You do not want to just:
go supervisor.Serve()
because that will briefly create a race condition as it starts up, if you
try to .Add() services immediately afterward.
*/
type Supervisor struct {
Name string
id supervisorID
failureDecay float64
failureThreshold float64
failureBackoff time.Duration
timeout time.Duration
log func(string)
services map[serviceID]Service
lastFail time.Time
failures float64
restartQueue []serviceID
state uint8
serviceCounter serviceID
control chan supervisorMessage
resumeTimer <-chan time.Time
// The testing uses the ability to grab these individual logging functions
// and get inside of suture's handling at a deep level.
// If you ever come up with some need to get into these, submit a pull
// request to make them public and some smidge of justification, and
// I'll happily do it.
logBadStop func(Service)
logFailure func(service Service, currentFailures float64, failureThreshold float64, restarting bool, error interface{}, stacktrace []byte)
logBackoff func(*Supervisor, bool)
// avoid a dependency on github.com/thejerf/abtime by just implementing
// a minimal chunk.
getNow func() time.Time
getResume func(time.Duration) <-chan time.Time
}
// Spec is used to pass arguments to the New function to create a
// supervisor. See the New function for full documentation.
type Spec struct {
Log func(string)
FailureDecay float64
FailureThreshold float64
FailureBackoff time.Duration
Timeout time.Duration
}
/*
New is the full constructor function for a supervisor.
The name is a friendly human name for the supervisor, used in logging. Suture
does not care if this is unique, but it is good for your sanity if it is.
If not set, the following values are used:
* Log: A function is created that uses log.Print.
* FailureDecay: 30 seconds
* FailureThreshold: 5 failures
* FailureBackoff: 15 seconds
* Timeout: 10 seconds
The Log function will be called when errors occur. Suture will log the
following:
* When a service has failed, with a descriptive message about the
current backoff status, and whether it was immediately restarted
* When the supervisor has gone into its backoff mode, and when it
exits it
* When a service fails to stop
The FailureDecay, FailureThreshold, and FailureBackoff settings control
how failures are handled, in order to avoid the supervisor failure case
where the program does nothing but restart failed services. If you do not
care how failures behave, the default values should be fine for the
vast majority of services, but if you want the details:
The supervisor tracks the number of failures that have occurred, with an
exponential decay on the count. Every FailureDecay seconds, the number of
failures that have occurred is cut in half. (This is done smoothly with an
exponential function.) When a failure occurs, the number of failures
is incremented by one. When the number of failures passes the
FailureThreshold, the entire supervisor waits for FailureBackoff seconds
before attempting any further restarts, at which point it resets its
failure count to zero.
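Expressed as a formula, a failure at time t updates the count as

	failures = failures*0.5^((t-lastFail)/FailureDecay) + 1

which is exactly the computation handleFailedService performs below.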
Timeout is how long Suture will wait for a service to properly terminate.
*/
func New(name string, spec Spec) (s *Supervisor) {
s = new(Supervisor)
s.Name = name
s.id = supervisorID(atomic.AddUint32(&currentSupervisorID, 1))
if spec.Log == nil {
s.log = func(msg string) {
log.Printf("Supervisor %s: %s", s.Name, msg)
}
} else {
s.log = spec.Log
}
if spec.FailureDecay == 0 {
s.failureDecay = 30
} else {
s.failureDecay = spec.FailureDecay
}
if spec.FailureThreshold == 0 {
s.failureThreshold = 5
} else {
s.failureThreshold = spec.FailureThreshold
}
if spec.FailureBackoff == 0 {
s.failureBackoff = time.Second * 15
} else {
s.failureBackoff = spec.FailureBackoff
}
if spec.Timeout == 0 {
s.timeout = time.Second * 10
} else {
s.timeout = spec.Timeout
}
// overriding these allows for testing the threshold behavior
s.getNow = time.Now
s.getResume = time.After
s.control = make(chan supervisorMessage)
s.services = make(map[serviceID]Service)
s.restartQueue = make([]serviceID, 0, 1)
s.resumeTimer = make(chan time.Time)
// set up the default logging handlers
s.logBadStop = func(service Service) {
s.log(fmt.Sprintf("Service %s failed to terminate in a timely manner", serviceName(service)))
}
s.logFailure = func(service Service, failures float64, threshold float64, restarting bool, err interface{}, st []byte) {
var errString string
e, canError := err.(error)
if canError {
errString = e.Error()
} else {
errString = fmt.Sprintf("%#v", err)
}
s.log(fmt.Sprintf("Failed service '%s' (%f failures of %f), restarting: %#v, error: %s, stacktrace: %s", serviceName(service), failures, threshold, restarting, errString, string(st)))
}
s.logBackoff = func(s *Supervisor, entering bool) {
if entering {
s.log("Entering the backoff state.")
} else {
s.log("Exiting backoff state.")
}
}
return
}
func serviceName(service Service) (serviceName string) {
stringer, canStringer := service.(fmt.Stringer)
if canStringer {
serviceName = stringer.String()
} else {
serviceName = fmt.Sprintf("%#v", service)
}
return
}
// NewSimple is a convenience function to create a service with just a name
// and the sensible defaults.
func NewSimple(name string) *Supervisor {
return New(name, Spec{})
}
/*
Service is the interface that describes a service to a Supervisor.
Serve Method
The Serve method is called by a Supervisor to start the service.
The service should execute within the goroutine that this is
called in. If this function either returns or panics, the Supervisor
will call it again.
A Serve method SHOULD do as much cleanup of the state as possible,
to prevent any corruption in the previous state from crashing the
service again.
Stop Method
This method is used by the supervisor to stop the service. Calling this
directly on a Service given to a Supervisor will simply result in the
Service being restarted; use the Supervisor's .Remove(ServiceToken) method
to stop a service. A supervisor will call .Stop() only once. Thus, it may
be as destructive as it likes to get the service to stop.
Once Stop has been called on a Service, the Service SHOULD NOT be
reused in any other supervisor! Because of the impossibility of
guaranteeing that the service has actually stopped in Go, you can't
prove that you won't be starting two goroutines using the exact
same memory to store state, causing completely unpredictable behavior.
Stop should not return until the service has actually stopped.
"Stopped" here is defined as "the service will stop servicing any
further requests in the future". For instance, a common implementation
is to receive a message on a dedicated "stop" channel and immediately
return. Once the stop command has been processed, the service is
stopped.
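As an illustrative sketch of that pattern (MyService, requests, handle,
and stopChan are hypothetical names, not part of this package):

	func (m *MyService) Serve() {
		for {
			select {
			case req := <-m.requests:
				m.handle(req)
			case <-m.stopChan:
				return
			}
		}
	}

	func (m *MyService) Stop() {
		m.stopChan <- struct{}{}
	}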
Another common Stop implementation is to forcibly close an open socket
or other resource, which will cause detectable errors to manifest in the
service code. Bear in mind that using this approach correctly requires
a bit more work to handle the chance of a Stop command arriving before
the resource has been created.
If a service does not Stop within the supervisor's timeout duration, a log
entry will be made with a descriptive string to that effect. This does
not guarantee that the service is hung; it may still get around to being
properly stopped in the future. Until the service is fully stopped,
both the service and the spawned goroutine trying to stop it will be
"leaked".
Stringer Interface
It is not mandatory to implement the fmt.Stringer interface on your
service, but if your Service does happen to implement that, the log
messages that describe your service will use that when naming the
service. Otherwise, you'll see the GoString of your service object,
obtained via fmt.Sprintf("%#v", service).
*/
type Service interface {
Serve()
Stop()
}
/*
Add adds a service to this supervisor.
If the supervisor is currently running, the service will be started
immediately. If the supervisor is not currently running, the service
will be started when the supervisor is.
The returned ServiceToken may be passed to the Remove method of the Supervisor
to terminate the service.
*/
func (s *Supervisor) Add(service Service) ServiceToken {
if s == nil {
panic("can't add service to nil *suture.Supervisor")
}
if s.state == notRunning {
id := s.serviceCounter
s.serviceCounter++
s.services[id] = service
s.restartQueue = append(s.restartQueue, id)
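// The token packs the supervisor ID into the high 32 bits and the
// service ID into the low 32 bits; Remove splits it the same way to
// check that a token belongs to this supervisor.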
return ServiceToken{uint64(s.id)<<32 | uint64(id)}
}
response := make(chan serviceID)
s.control <- addService{service, response}
return ServiceToken{uint64(s.id)<<32 | uint64(<-response)}
}
// ServeBackground starts running a supervisor in its own goroutine. This
// method does not return until it is safe to use .Add() on the Supervisor.
func (s *Supervisor) ServeBackground() {
go s.Serve()
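// sync() sends a no-op message down the control channel; by the time it
// is received, the main loop in Serve is running, so .Add() is safe.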
s.sync()
}
/*
Serve starts the supervisor. You should call this on the top-level supervisor,
but nothing else.
*/
func (s *Supervisor) Serve() {
if s == nil {
panic("Can't serve with a nil *suture.Supervisor")
}
if s.id == 0 {
panic("Can't call Serve on an incorrectly-constructed *suture.Supervisor")
}
defer func() {
s.state = notRunning
}()
if s.state != notRunning {
// FIXME: Don't explain why I don't need a semaphore, just use one
// This doesn't use a semaphore because it's just a sanity check.
panic("Running a supervisor while it is already running?")
}
s.state = normal
// for all the services I currently know about, start them
for _, id := range s.restartQueue {
service, present := s.services[id]
if present {
s.runService(service, id)
}
}
s.restartQueue = make([]serviceID, 0, 1)
for {
select {
case m := <-s.control:
switch msg := m.(type) {
case serviceFailed:
s.handleFailedService(msg.id, msg.err, msg.stacktrace)
case serviceEnded:
service, monitored := s.services[msg.id]
if monitored {
s.handleFailedService(msg.id, fmt.Sprintf("%s returned unexpectedly", service), []byte("[unknown stack trace]"))
}
case addService:
id := s.serviceCounter
s.serviceCounter++
s.services[id] = msg.service
s.runService(msg.service, id)
msg.response <- id
case removeService:
s.removeService(msg.id)
case stopSupervisor:
for id := range s.services {
s.removeService(id)
}
return
case listServices:
services := []Service{}
for _, service := range s.services {
services = append(services, service)
}
msg.c <- services
case syncSupervisor:
// this does nothing on purpose; its sole purpose is to
// introduce a sync point via the channel receive
case panicSupervisor:
// used only by tests
panic("Panicking as requested!")
}
case <-s.resumeTimer:
// We're resuming normal operation after a pause due to
// excessive thrashing
// FIXME: Ought to permit some spacing of these functions, rather
// than simply hammering through them
s.state = normal
s.failures = 0
s.logBackoff(s, false)
for _, id := range s.restartQueue {
service, present := s.services[id]
if present {
s.runService(service, id)
}
}
s.restartQueue = make([]serviceID, 0, 1)
}
}
}
func (s *Supervisor) handleFailedService(id serviceID, err interface{}, stacktrace []byte) {
now := s.getNow()
if s.lastFail.IsZero() {
s.lastFail = now
s.failures = 1.0
} else {
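// Halve the failure count once per failureDecay interval elapsed
// since the last failure, then add one for this failure.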
sinceLastFail := now.Sub(s.lastFail).Seconds()
intervals := sinceLastFail / s.failureDecay
s.failures = s.failures*math.Pow(.5, intervals) + 1
}
if s.failures > s.failureThreshold {
s.state = paused
s.logBackoff(s, true)
s.resumeTimer = s.getResume(s.failureBackoff)
}
s.lastFail = now
failedService, monitored := s.services[id]
// It is possible for a service to be no longer monitored
// by the time we get here. In that case, just ignore it.
if monitored {
if s.state == normal {
s.runService(failedService, id)
s.logFailure(failedService, s.failures, s.failureThreshold, true, err, stacktrace)
} else {
// FIXME: When restarting, check that the service still
// exists (it may have been stopped in the meantime)
s.restartQueue = append(s.restartQueue, id)
s.logFailure(failedService, s.failures, s.failureThreshold, false, err, stacktrace)
}
}
}
func (s *Supervisor) runService(service Service, id serviceID) {
go func() {
defer func() {
if r := recover(); r != nil {
buf := make([]byte, 65535)
written := runtime.Stack(buf, false)
buf = buf[:written]
s.fail(id, r, buf)
}
}()
service.Serve()
s.serviceEnded(id)
}()
}
func (s *Supervisor) removeService(id serviceID) {
service, present := s.services[id]
if present {
delete(s.services, id)
go func() {
successChan := make(chan bool)
go func() {
service.Stop()
successChan <- true
}()
failChan := s.getResume(s.timeout)
select {
case <-successChan:
// Life is good!
case <-failChan:
s.logBadStop(service)
}
}()
}
}
// String implements the fmt.Stringer interface.
func (s *Supervisor) String() string {
return s.Name
}
// sum type pattern for type-safe message passing; see
// http://www.jerf.org/iri/post/2917
type supervisorMessage interface {
isSupervisorMessage()
}
/*
Remove will remove the given service from the Supervisor, and attempt to Stop() it.
The ServiceToken comes from the Add() call.
*/
func (s *Supervisor) Remove(id ServiceToken) error {
sID := supervisorID(id.id >> 32)
if sID != s.id {
return ErrWrongSupervisor
}
s.control <- removeService{serviceID(id.id & 0xffffffff)}
return nil
}
/*
Services returns a []Service containing a snapshot of the services this
Supervisor is managing.
*/
func (s *Supervisor) Services() []Service {
ls := listServices{make(chan []Service)}
s.control <- ls
return <-ls.c
}
type listServices struct {
c chan []Service
}
func (ls listServices) isSupervisorMessage() {}
type removeService struct {
id serviceID
}
func (rs removeService) isSupervisorMessage() {}
func (s *Supervisor) sync() {
s.control <- syncSupervisor{}
}
type syncSupervisor struct {
}
func (ss syncSupervisor) isSupervisorMessage() {}
func (s *Supervisor) fail(id serviceID, err interface{}, stacktrace []byte) {
s.control <- serviceFailed{id, err, stacktrace}
}
type serviceFailed struct {
id serviceID
err interface{}
stacktrace []byte
}
func (sf serviceFailed) isSupervisorMessage() {}
func (s *Supervisor) serviceEnded(id serviceID) {
s.control <- serviceEnded{id}
}
type serviceEnded struct {
id serviceID
}
func (s serviceEnded) isSupervisorMessage() {}
// added by the Add() method
type addService struct {
service Service
response chan serviceID
}
func (as addService) isSupervisorMessage() {}
// Stop stops the Supervisor.
func (s *Supervisor) Stop() {
s.control <- stopSupervisor{}
}
type stopSupervisor struct {
}
func (ss stopSupervisor) isSupervisorMessage() {}
func (s *Supervisor) panic() {
s.control <- panicSupervisor{}
}
type panicSupervisor struct {
}
func (ps panicSupervisor) isSupervisorMessage() {}


@@ -0,0 +1,49 @@
package suture
import "fmt"
type Incrementor struct {
current int
next chan int
stop chan bool
}
func (i *Incrementor) Stop() {
fmt.Println("Stopping the service")
i.stop <- true
}
func (i *Incrementor) Serve() {
for {
select {
case i.next <- i.current:
i.current++
case <-i.stop:
// We sync here just to guarantee the output of "Stopping the service",
// so this passes the test reliably.
// Most services would simply "return" here.
i.stop <- true
return
}
}
}
func ExampleNew_simple() {
supervisor := NewSimple("Supervisor")
service := &Incrementor{0, make(chan int), make(chan bool)}
supervisor.Add(service)
go supervisor.ServeBackground()
fmt.Println("Got:", <-service.next)
fmt.Println("Got:", <-service.next)
supervisor.Stop()
// We sync here just to guarantee the output of "Stopping the service"
<-service.stop
// Output:
// Got: 0
// Got: 1
// Stopping the service
}


@@ -0,0 +1,592 @@
package suture
import (
"errors"
"fmt"
"reflect"
"sync"
"testing"
"time"
)
const (
Happy = iota
Fail
Panic
Hang
UseStopChan
)
var everMultistarted = false
// Test that supervisors work perfectly when everything is hunky dory.
func TestTheHappyCase(t *testing.T) {
t.Parallel()
s := NewSimple("A")
if s.String() != "A" {
t.Fatal("Can't get name from a supervisor")
}
service := NewService("B")
s.Add(service)
go s.Serve()
<-service.started
// If we stop the service, it just gets restarted
service.Stop()
<-service.started
// And it is shut down when we stop the supervisor
service.take <- UseStopChan
s.Stop()
<-service.stop
}
// Test that adding to a running supervisor does indeed start the service.
func TestAddingToRunningSupervisor(t *testing.T) {
t.Parallel()
s := NewSimple("A1")
s.ServeBackground()
defer s.Stop()
service := NewService("B1")
s.Add(service)
<-service.started
services := s.Services()
if !reflect.DeepEqual([]Service{service}, services) {
t.Fatal("Can't get list of services as expected.")
}
}
// Test what happens when services fail.
func TestFailures(t *testing.T) {
t.Parallel()
s := NewSimple("A2")
s.failureThreshold = 3.5
go s.Serve()
defer func() {
// to avoid deadlocks during shutdown, we must not send things out on
// channels while we're shutting down (this undoes the logFailure
// override about 25 lines down)
s.logFailure = func(Service, float64, float64, bool, interface{}, []byte) {}
s.Stop()
}()
s.sync()
service1 := NewService("B2")
service2 := NewService("C2")
s.Add(service1)
<-service1.started
s.Add(service2)
<-service2.started
nowFeeder := NewNowFeeder()
pastVal := time.Unix(1000000, 0)
nowFeeder.appendTimes(pastVal)
s.getNow = nowFeeder.getter
resumeChan := make(chan time.Time)
s.getResume = func(d time.Duration) <-chan time.Time {
return resumeChan
}
failNotify := make(chan bool)
// we use this channel to synchronize with the failure handling below
s.logFailure = func(s Service, cf float64, ft float64, r bool, error interface{}, stacktrace []byte) {
failNotify <- r
}
// All that setup was for this: Service1, please return now.
service1.take <- Fail
restarted := <-failNotify
<-service1.started
if !restarted || s.failures != 1 || s.lastFail != pastVal {
t.Fatal("Did not fail in the expected manner")
}
// Getting past this means the service was restarted.
service1.take <- Happy
// Service2, your turn.
service2.take <- Fail
nowFeeder.appendTimes(pastVal)
restarted = <-failNotify
<-service2.started
if !restarted || s.failures != 2 || s.lastFail != pastVal {
t.Fatal("Did not fail in the expected manner")
}
// And you're back. (That is, the correct service was restarted.)
service2.take <- Happy
// Now, one failureDecay later, is everything working correctly?
oneDecayLater := time.Unix(1000030, 0)
nowFeeder.appendTimes(oneDecayLater)
service2.take <- Fail
restarted = <-failNotify
<-service2.started
// playing a bit fast and loose here with floating point, but...
// we get 2 by taking the current failure value of 2, decaying it
// by one interval, which cuts it in half to 1, then adding 1 again,
// all of which "should" be precise
if !restarted || s.failures != 2 || s.lastFail != oneDecayLater {
t.Fatal("Did not decay properly", s.lastFail, oneDecayLater)
}
// For a change of pace, service1 would you be so kind as to panic?
nowFeeder.appendTimes(oneDecayLater)
service1.take <- Panic
restarted = <-failNotify
<-service1.started
if !restarted || s.failures != 3 || s.lastFail != oneDecayLater {
t.Fatal("Did not correctly recover from a panic")
}
nowFeeder.appendTimes(oneDecayLater)
backingoff := make(chan bool)
s.logBackoff = func(s *Supervisor, backingOff bool) {
backingoff <- backingOff
}
// And with this failure, we trigger the backoff code.
service1.take <- Fail
backoff := <-backingoff
restarted = <-failNotify
if !backoff || restarted || s.failures != 4 {
t.Fatal("Broke past the threshold but did not log correctly", s.failures)
}
if service1.existing != 0 {
t.Fatal("service1 still exists according to itself?")
}
// service2 is still running, because we don't shut anything down in a
// backoff, we just stop restarting.
service2.take <- Happy
var correct bool
timer := time.NewTimer(time.Millisecond * 10)
// verify the service has not been restarted
// hard to get around race conditions here without simply using a timer...
select {
case service1.take <- Happy:
correct = false
case <-timer.C:
correct = true
}
if !correct {
t.Fatal("Restarted the service during the backoff interval")
}
// tell the supervisor the restart interval has passed
resumeChan <- time.Time{}
backoff = <-backingoff
<-service1.started
s.sync()
if s.failures != 0 {
t.Fatal("Did not reset failure count after coming back from timeout.")
}
nowFeeder.appendTimes(oneDecayLater)
service1.take <- Fail
restarted = <-failNotify
<-service1.started
if !restarted || backoff {
t.Fatal("For some reason, got that we were backing off again.", restarted, backoff)
}
}
func TestRunningAlreadyRunning(t *testing.T) {
t.Parallel()
s := NewSimple("A3")
go s.Serve()
defer s.Stop()
// ensure the supervisor has made it to its main loop
s.sync()
var errored bool
func() {
defer func() {
if r := recover(); r != nil {
errored = true
}
}()
s.Serve()
}()
if !errored {
t.Fatal("Supervisor failed to prevent itself from double-running.")
}
}
func TestFullConstruction(t *testing.T) {
t.Parallel()
s := New("Moo", Spec{
Log: func(string) {},
FailureDecay: 1,
FailureThreshold: 2,
FailureBackoff: 3,
Timeout: time.Second * 29,
})
if s.String() != "Moo" || s.failureDecay != 1 || s.failureThreshold != 2 || s.failureBackoff != 3 || s.timeout != time.Second*29 {
t.Fatal("Full construction failed somehow")
}
}
// This is mostly for coverage testing.
func TestDefaultLogging(t *testing.T) {
t.Parallel()
s := NewSimple("A4")
service := NewService("B4")
s.Add(service)
s.failureThreshold = .5
s.failureBackoff = time.Millisecond * 25
go s.Serve()
s.sync()
<-service.started
resumeChan := make(chan time.Time)
s.getResume = func(d time.Duration) <-chan time.Time {
return resumeChan
}
service.take <- UseStopChan
service.take <- Fail
<-service.stop
resumeChan <- time.Time{}
<-service.started
service.take <- Happy
serviceName(&BarelyService{})
s.logBadStop(service)
s.logFailure(service, 1, 1, true, errors.New("test error"), []byte{})
s.Stop()
}
func TestNestedSupervisors(t *testing.T) {
t.Parallel()
super1 := NewSimple("Top5")
super2 := NewSimple("Nested5")
service := NewService("Service5")
super1.Add(super2)
super2.Add(service)
go super1.Serve()
super1.sync()
<-service.started
service.take <- Happy
super1.Stop()
}
func TestStoppingSupervisorStopsServices(t *testing.T) {
t.Parallel()
s := NewSimple("Top6")
service := NewService("Service 6")
s.Add(service)
go s.Serve()
s.sync()
<-service.started
service.take <- UseStopChan
s.Stop()
<-service.stop
}
func TestStoppingStillWorksWithHungServices(t *testing.T) {
t.Parallel()
s := NewSimple("Top7")
service := NewService("Service WillHang7")
s.Add(service)
go s.Serve()
<-service.started
service.take <- UseStopChan
service.take <- Hang
resumeChan := make(chan time.Time)
s.getResume = func(d time.Duration) <-chan time.Time {
return resumeChan
}
failNotify := make(chan struct{})
s.logBadStop = func(s Service) {
failNotify <- struct{}{}
}
s.Stop()
resumeChan <- time.Time{}
<-failNotify
service.release <- true
<-service.stop
}
func TestRemoveService(t *testing.T) {
t.Parallel()
s := NewSimple("Top")
service := NewService("ServiceToRemove8")
id := s.Add(service)
go s.Serve()
<-service.started
service.take <- UseStopChan
err := s.Remove(id)
if err != nil {
t.Fatal("Removing service somehow failed")
}
<-service.stop
err = s.Remove(ServiceToken{1<<36 + 1})
if err != ErrWrongSupervisor {
t.Fatal("Did not detect that the ServiceToken was wrong")
}
}
func TestFailureToConstruct(t *testing.T) {
t.Parallel()
var s *Supervisor
panics(func() {
s.Serve()
})
s = new(Supervisor)
panics(func() {
s.Serve()
})
}
func TestFailingSupervisors(t *testing.T) {
t.Parallel()
// This is a bit of a complicated test, so let me explain what
// all this is doing:
// 1. Set up a top-level supervisor with a hair-trigger backoff.
// 2. Add a supervisor to that.
// 3. To that supervisor, add a service.
// 4. Panic the supervisor in the middle, sending the top-level into
// backoff.
// 5. Kill the lower level service too.
// 6. Verify that when the top-level service comes out of backoff,
// the service ends up restarted as expected.
// Ultimately, we can't have more than a best-effort recovery here.
// A panic'ed supervisor can't really be trusted to have consistent state,
// and without *that*, we can't trust it to do anything sensible with
// the children it may have been running. So unlike Erlang, we can't
// really expect to be able to safely restart them or anything.
// Really, the "correct" answer is that the Supervisor must never panic,
// but in the event that it does, this verifies that it at least tries
// to get on with life.
// This also tests that if a Supervisor itself panics, and one of its
// monitored services goes down in the meantime, that the monitored
// service also gets correctly restarted when the supervisor does.
s1 := NewSimple("Top9")
s2 := NewSimple("Nested9")
service := NewService("Service9")
s1.Add(s2)
s2.Add(service)
go s1.Serve()
<-service.started
s1.failureThreshold = .5
// let us control precisely when s1 comes back
resumeChan := make(chan time.Time)
s1.getResume = func(d time.Duration) <-chan time.Time {
return resumeChan
}
failNotify := make(chan string)
// we use this channel to synchronize with the failure handling below
s1.logFailure = func(s Service, cf float64, ft float64, r bool, error interface{}, stacktrace []byte) {
failNotify <- fmt.Sprintf("%s", s)
}
s2.panic()
failing := <-failNotify
// that's enough sync to guarantee this:
if failing != "Nested9" || s1.state != paused {
t.Fatal("Top-level supervisor did not go into backoff as expected")
}
service.take <- Fail
resumeChan <- time.Time{}
<-service.started
}
func TestNilSupervisorAdd(t *testing.T) {
t.Parallel()
var s *Supervisor
defer func() {
if r := recover(); r == nil {
t.Fatal("did not panic as expected on nil add")
}
}()
s.Add(s)
}
// http://golangtutorials.blogspot.com/2011/10/gotest-unit-testing-and-benchmarking-go.html
// claims test functions are run in the same order as in the source file...
// I'm not sure if this is part of the contract, though. Especially in the
// face of "t.Parallel()"...
func TestEverMultistarted(t *testing.T) {
if everMultistarted {
t.Fatal("Seem to have multistarted a service at some point, bummer.")
}
}
// A test service that can be induced to fail, panic, or hang on demand.
func NewService(name string) *FailableService {
return &FailableService{name, make(chan bool), make(chan int),
make(chan bool, 1), make(chan bool), make(chan bool), 0}
}
type FailableService struct {
name string
started chan bool
take chan int
shutdown chan bool
release chan bool
stop chan bool
existing int
}
func (s *FailableService) Serve() {
if s.existing != 0 {
everMultistarted = true
panic("Multi-started the same service! " + s.name)
}
s.existing += 1
s.started <- true
useStopChan := false
for {
select {
case val := <-s.take:
switch val {
case Happy:
// Do nothing on purpose. Life is good!
case Fail:
s.existing -= 1
if useStopChan {
s.stop <- true
}
return
case Panic:
s.existing -= 1
panic("Panic!")
case Hang:
// or more specifically, "hang until I release you"
<-s.release
case UseStopChan:
useStopChan = true
}
case <-s.shutdown:
s.existing -= 1
if useStopChan {
s.stop <- true
}
return
}
}
}
func (s *FailableService) String() string {
return s.name
}
func (s *FailableService) Stop() {
s.shutdown <- true
}
type NowFeeder struct {
values []time.Time
getter func() time.Time
m sync.Mutex
}
// This is used to test serviceName; it's a service without a Stringer.
type BarelyService struct{}
func (bs *BarelyService) Serve() {}
func (bs *BarelyService) Stop() {}
func NewNowFeeder() (nf *NowFeeder) {
nf = new(NowFeeder)
nf.getter = func() time.Time {
nf.m.Lock()
defer nf.m.Unlock()
if len(nf.values) > 0 {
ret := nf.values[0]
nf.values = nf.values[1:]
return ret
}
panic("Ran out of values for NowFeeder")
}
return
}
func (nf *NowFeeder) appendTimes(t ...time.Time) {
nf.m.Lock()
defer nf.m.Unlock()
nf.values = append(nf.values, t...)
}
func panics(doesItPanic func()) (panics bool) {
defer func() {
if r := recover(); r != nil {
panics = true
}
}()
doesItPanic()
return
}


@@ -8,11 +8,11 @@ package bcrypt
// The code is a port of Provos and Mazières's C implementation.
import (
"code.google.com/p/go.crypto/blowfish"
"crypto/rand"
"crypto/subtle"
"errors"
"fmt"
"golang.org/x/crypto/blowfish"
"io"
"strconv"
)
