Compare commits


410 Commits

Author SHA1 Message Date
Jakob Borg
03e17ec1dd Docs & translations update 2015-10-02 08:08:07 +02:00
Jakob Borg
892e470ba5 Authors update 2015-10-02 08:07:28 +02:00
Jakob Borg
f8b835c6be Create missing directories 2015-10-02 08:01:00 +02:00
Jakob Borg
5fab25effd Correctly report errors encountered parsing ignores (fixes #2309, fixes #2296)
An error on opening .stignore will satisfy os.IsNotExist() and not be
reported. Other errors will be reported and stop the folder, including
is-not-exist errors from #include, as these are passed through fmt.Errorf.

Also fixes a minor issue where we would not print the cause of a folder
stopping to the log.

Also fixes a minor issue with the capitalization of errors.
2015-10-02 08:00:16 +02:00
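
A minimal illustration of the behavior this commit leans on (file name and
message are illustrative): os.IsNotExist() matches on the error's concrete
type, so an error passed through fmt.Errorf no longer satisfies it.

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		_, err := os.Open(".stignore")             // assume the file is missing
		fmt.Println(os.IsNotExist(err))            // true: err is an *os.PathError
		wrapped := fmt.Errorf("#include: %v", err) // flattened into a plain string error
		fmt.Println(os.IsNotExist(wrapped))        // false: the concrete type is gone
	}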
Jakob Borg
d746b043c8 Subscribing to events should not bump event ID (fixes #2329) 2015-10-02 07:59:24 +02:00
Jakob Borg
beb643c666 Don't check for free space on master folders (fixes #2236)
Also clean up the logic a little; I thought it was a bit muddled.
2015-10-02 07:58:53 +02:00
Stefan Tatschner
d039df46c0 Pull syncthing-inotify.service as an optional dep
This patch adds syncthing-inotify.service as an optional dependency of
syncthing.service. That means that if syncthing-inotify.service is
available, it is started and stopped together with syncthing.

See discussion here:
https://forum.syncthing.net/t/gnome-shell-extension-syncthing-icon/5759
2015-10-02 07:58:24 +02:00
Jakob Borg
b7e5948b09 Update suture for data race bug 2015-10-02 07:57:32 +02:00
Jakob Borg
e3dd072022 Docs & translation update 2015-09-13 11:46:17 +02:00
Jakob Borg
b98a8bafb3 Don't require free disk space when we might only update metadata
Instead, make sure we do the check as part of CheckFolderHealth before
pulling, and individually per file to try to not run out of space at
that stage.

(The latter is far from foolproof, as we may pull lots of stuff in
parallel, but it's worth a try.)
2015-09-13 11:44:54 +02:00
Jakob Borg
4b5c44b61c Only check pull file size if check is enabled (ref #2241) 2015-09-06 17:13:33 +02:00
Jakob Borg
41c131d840 Allow fractional free space percentages (fixes #2233) 2015-09-06 08:47:52 +02:00
Jakob Borg
66df2e57ae Correctly handle (?i) in ignores (fixes #1953) 2015-09-03 08:22:53 +02:00
Jakob Borg
4a6e9c189c Adjust defaults for number of hashers based on OS
https://forum.syncthing.net/t/syncthing-is-such-a-massive-resource-hog/5494/19?u=calmh
2015-09-03 08:22:34 +02:00
Jakob Borg
2eece95ed2 Allow -logfile on all platforms (fixes #2004) 2015-08-30 13:37:18 +02:00
Jakob Borg
008b07a52f Line height on headers to avoid cut-off descenders 2015-08-30 13:36:57 +02:00
Jakob Borg
4fd01ae552 Translation update 2015-08-30 13:35:39 +02:00
Jakob Borg
8e6cf4bf4e Bind to IPv6 multicast group instead of ::
This makes it possible to run multiple instances on the same box, all
receiving local discovery packets. Tested on Mac and Windows; supposed to
work on at least Linux too. For Windows, there may be issues with XP and
earlier, but meh...
2015-08-27 18:59:15 +02:00
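
A sketch of the bind-to-group approach using the standard net package; the
group address and port below are placeholders, not Syncthing's actual
discovery endpoint. net.ListenMulticastUDP binds to the group with address
reuse, so several processes can listen at once:

	package main

	import (
		"log"
		"net"
	)

	func main() {
		group := &net.UDPAddr{IP: net.ParseIP("ff02::1234"), Port: 21027}
		conn, err := net.ListenMulticastUDP("udp6", nil, group) // nil: all interfaces
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		buf := make([]byte, 1500)
		n, src, err := conn.ReadFromUDP(buf)
		if err != nil {
			log.Fatal(err)
		}
		log.Printf("%d bytes from %v", n, src)
	}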
Jakob Borg
ace8451604 Don't trust response header (fixes #2186)
Either Angular or the browser sometimes returns a cached response header,
causing a flap between requests that return the new version and requests
that return the old one. Here, instead, we trust the actual data
returned by the uncached /rest/system/version call.
2015-08-25 15:55:51 +02:00
Jakob Borg
feffc0416f Fix events timeout errors
Resetting the timeout doesn't fully cut it, as the timer may fire after we
have already received an event, with the stale expiry delivered later.
This should fix it well enough for
the moment. https://github.com/golang/go/issues/11513
2015-08-24 09:40:21 +02:00
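
The pitfall behind the linked Go issue, in a standalone sketch: resetting a
timer that has already fired, without draining its channel, lets the stale
expiry be delivered during a later wait. The safe pattern is stop, drain,
then reset:

	// drainReset restarts t for a new wait. Without the drain, an expiry
	// from a previous period can still be sitting in t.C and fire spuriously.
	func drainReset(t *time.Timer, d time.Duration) {
		if !t.Stop() {
			select {
			case <-t.C:
			default:
			}
		}
		t.Reset(d)
	}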
Audrius Butkevicius
42dfa45d52 Try harder removing the temp file 2015-08-23 15:53:00 +02:00
Jakob Borg
706926543e Report reason for no IPv6 multicast with STTRACE=discover 2015-08-23 15:52:28 +02:00
Jakob Borg
a024cefd35 IPv6 multicast on Windows (fixes #1817) 2015-08-23 15:52:03 +02:00
Jakob Borg
42dcb3d2cc Translation & docs update 2015-08-23 12:03:28 +02:00
Jakob Borg
ed1852f8f6 Update protocol dependency 2015-08-20 12:14:47 +02:00
Zillode
e2be051558 Merge pull request #2169 from calmh/restartmon
Retain standard streams over restart (fixes #2155)
2015-08-19 23:16:32 +02:00
Audrius Butkevicius
50702eda94 Merge pull request #2171 from Zillode/staggered-test
Add unit test for staggered versioning (fixes #2165)
2015-08-19 20:11:19 +01:00
Jakob Borg
59eeafbdfa Merge pull request #2174 from alex2108/master
Fix time zone error in staggered versioning (fixes #2165)
2015-08-19 20:25:24 +02:00
Alexander Graf
abc606608c Fix time zone error in staggered versioning (fixes #2165) 2015-08-19 17:23:50 +02:00
Jakob Borg
1487552b48 s/in/at/ (fixes #2158) 2015-08-19 10:42:48 +02:00
Jakob Borg
c7dbe18df6 Newest first should be different from oldest first (fixes #2161) 2015-08-19 09:42:52 +02:00
Jakob Borg
c2bc3358cc Merge pull request #2168 from calmh/codename
Add release code name
2015-08-19 08:31:22 +02:00
Lode Hoste
47a1494d68 Add unit test for staggered versioning (fixes #2165) 2015-08-18 19:52:58 +02:00
Jakob Borg
dbb388719e Retain standard streams over restart (fixes #2155) 2015-08-18 17:24:50 +02:00
Jakob Borg
283c91548a Add release code name
I figured we're missing out on being cool and awesome by not having an
alphabetically based release code name like the big guys. This commit
fixes that. I've unilaterally decided on a theme of "$metal $bug"
because metals are kind of cool, and bugs, well, ...
2015-08-18 13:33:36 +02:00
Audrius Butkevicius
38b93bd310 Merge pull request #2167 from uok/fixicon
Fix missing folder master icon
2015-08-18 09:52:22 +01:00
Ben Schulz
8dcc30ac83 Fix missing folder master icon 2015-08-18 10:40:18 +02:00
Jakob Borg
0ee123375d Merge remote-tracking branch 'syncthing/pr/2117'
* syncthing/pr/2117:
  Disable testing upgrade endpoint because it fails when disconnected
2015-08-18 09:15:00 +02:00
Jakob Borg
be18cbef8b Update dependencies 2015-08-18 08:56:07 +02:00
Lode Hoste
8eb494c13e Disable testing upgrade endpoint because it fails when disconnected 2015-08-17 22:08:35 +02:00
Audrius Butkevicius
b6b6375f70 Merge pull request #2163 from calmh/dbrecover
Recover from 'corrupted or incomplete CURRENT file' etc
2015-08-16 15:58:51 +01:00
Jakob Borg
8783688391 Recover from 'corrupted or incomplete CURRENT file' etc (fixes #2017) 2015-08-16 16:36:06 +02:00
Jakob Borg
98afc3e99c Docs & translation update 2015-08-16 15:29:48 +02:00
Audrius Butkevicius
50a1858367 Merge pull request #2136 from calmh/noarchivedir
Clarify and correct handling of existing files/directories when pulling
2015-08-15 14:31:38 +01:00
Audrius Butkevicius
f3f586773b Merge pull request #2160 from calmh/rlimit
Increase open file (fd) limit if possible
2015-08-15 14:31:20 +01:00
Jakob Borg
61a182077f Clarify and correct handling of existing files/directories when pulling
This fixes a corner case I discovered in the symlink branch, where we
unexpectedly succeed in "replacing" an entire non-empty directory tree
with a file or symlink. This happens when archiving is in use, as we
then just move the entire tree away into the archive. This is wrong as
we should just archive files and fail on non-empty dirs in all cases.

New handling first checks what the (old) thing is, and if it's a
directory or symlink just does the delete, otherwise does conflict
handling or archiving as appropriate.
2015-08-15 15:29:59 +02:00
Jakob Borg
1c9513e770 Increase open file (fd) limit if possible
This will decrease the risk of running out of file descriptors for the
database and other bad things, which could otherwise potentially happen
if we're serving lots of requests and scanning in parallel, etc.

Windows doesn't have a per-process open file limit like Unix does, so we
don't need to worry about it there.
2015-08-15 15:28:53 +02:00
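
A hedged sketch of the Unix side of this, using syscall; real code would
also cap the value where the reported hard limit exceeds what the kernel
accepts (macOS, notably):

	// +build !windows

	package main

	import (
		"log"
		"syscall"
	)

	// maximizeFDLimit raises the soft open-file limit to the hard limit.
	func maximizeFDLimit() error {
		var lim syscall.Rlimit
		if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
			return err
		}
		lim.Cur = lim.Max
		return syscall.Setrlimit(syscall.RLIMIT_NOFILE, &lim)
	}

	func main() {
		if err := maximizeFDLimit(); err != nil {
			log.Println("raising fd limit:", err)
		}
	}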
Jakob Borg
5e5eb9bf8e Update test configs to v11 2015-08-14 14:19:43 +02:00
Audrius Butkevicius
7a9bb65e03 Merge pull request #2156 from calmh/stuckatzero
Don't get stuck at "Syncing 0%" when adding a new folder
2015-08-14 10:42:01 +01:00
Jakob Borg
a5345ac71e Don't get stuck at "Syncing 0%" when adding a new folder
The number of copiers and pullers is set to default at config loading
time, but the new folder configuration doesn't pass through config
loading so we start up with 0 copiers and 0 pullers and hence get stuck.
I moved the default handling to the puller itself instead. I think this
way is also cleaner as we get to keep the 0 in the config and the puller
gets to decide the defaults on its own.
2015-08-14 10:35:51 +02:00
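
The zero-means-default idea, as a sketch with illustrative names and
default value (not the actual puller code):

	// resolveCopiers treats a configured 0 as "use the default", deciding
	// the default where the value is consumed rather than at config load.
	func resolveCopiers(configured int) int {
		if configured == 0 {
			return 1 // hypothetical default
		}
		return configured
	}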
Jakob Borg
ae5079f7b4 Update lang-en.json 2015-08-14 09:13:09 +02:00
Jakob Borg
ea1ecfbc38 Merge pull request #2147 from uok/dontbesonegative
Prevent negative values for number inputs
2015-08-14 09:12:10 +02:00
Jakob Borg
a84b6b4bcc Merge pull request #2145 from uok/awesome
Change to Font Awesome icon font (fixes #2138)
2015-08-14 09:11:14 +02:00
Ben Schulz
93023128fd Prevent negative values for number inputs
- settings: incoming/outgoing rate limit - min: 0
- folder: maximum age (staggered file versioning) - min: 0
- help texts for validation
2015-08-13 15:56:10 +02:00
Ben Schulz
77157f16a1 Change to Font Awesome icon font (fixes #2138)
- remove Glyphicon assets and customize bootstrap CSS
- add Font Awesome v4.4.0 assets
- replace Glyphicons with Font Awesome icons in HTML
- add icons to modal headers
- add attribution for Font Awesome
- format HTML source code for buttons
2015-08-13 15:41:51 +02:00
Jakob Borg
99736e5066 Ensure dir before files ordering when scanning 2015-08-13 13:01:50 +02:00
Jakob Borg
681306b7a1 Clean up the scripts a bit (...)
- Move the Go files into script/ instead of random places
- Rewrite check-contrib.sh into check-authors.go and check-copyright.go
- Clean up build.sh a little bit
2015-08-13 12:35:26 +02:00
Jakob Borg
5f36c9d4de Merge pull request #2152 from Zillode/fix-gui-reorg
Revert small changes made during reorg GUI
2015-08-12 21:47:25 +02:00
Lode Hoste
6cbc8791b1 Revert small changes made during reorg GUI 2015-08-12 21:31:34 +02:00
Jakob Borg
c08de67b0d Remove erroneous file 2015-08-11 17:05:59 +02:00
Audrius Butkevicius
b21e18dfad Merge pull request #2149 from kamadak/fix-delete-folder
Fix deleting a folder.
2015-08-11 00:14:26 +01:00
KAMADA Ken'ichi
1e497915be Fix deleting a folder.
Removing a folder does not work in the "Edit Folder" dialog.
This bug was introduced in 26d52be.
2015-08-11 07:48:48 +09:00
Audrius Butkevicius
3704c41dda Merge pull request #2146 from uok/double
Remove double slashes in directives (fixes #2143)
2015-08-10 11:45:03 +01:00
Ben Schulz
6ff31ac666 Remove double slashes in directives (fixes #2143) 2015-08-10 11:48:39 +02:00
Jakob Borg
a2f73a7d35 Allow specifying Docker image to use for building 2015-08-09 14:40:18 +02:00
Jakob Borg
1492e57676 Minor typo in UPnP service description list 2015-08-09 14:14:13 +02:00
Jakob Borg
7504fc53b6 Merge branch 'v0.11'
* v0.11:
  Translations and docs update
  Enable browser caching of static resources
  Handle multiple case insensitivity prefixes in ignores (fixes #2134)
  Make rescan available for unshared folders
  Add timeout for peek (fixes #1035)
  Fix TestReset when Syncthing shuts down too fast
  Clarify password in integration tests
  Properly rename config files during integration tests (fixes #1769)
2015-08-09 11:58:18 +02:00
Jakob Borg
daa2bcefad Translations and docs update 2015-08-09 11:56:22 +02:00
Jakob Borg
49aa9399be Repair config tests 2015-08-09 11:46:28 +02:00
Jakob Borg
a71090df81 Enable browser caching of static resources
This sends the Cache-Control header to allow caching of static resources,
and checks the If-Modified-Since header to allow browser to use the
cached resource on refresh.
2015-08-09 11:36:06 +02:00
Jakob Borg
0bfcafc5c6 Handle multiple case insensitivity prefixes in ignores (fixes #2134) 2015-08-09 11:35:12 +02:00
Lode Hoste
161d5c8379 Make rescan available for unshared folders 2015-08-09 11:34:44 +02:00
Audrius Butkevicius
5cfb578170 Add timeout for peek (fixes #1035) 2015-08-09 11:34:30 +02:00
Lode Hoste
9b0d47e9eb Fix TestReset when Syncthing shuts down too fast 2015-08-09 11:34:21 +02:00
Lode Hoste
13f4706067 Clarify password in integration tests 2015-08-09 11:34:10 +02:00
Lode Hoste
7ebdb1736f Properly rename config files during integration tests (fixes #1769) 2015-08-09 11:34:05 +02:00
Jakob Borg
2bcb57c994 Merge branch 'pr-2066'
* pr-2066:
  Configurable home disk percentage, translations
  Add minimum disk free percentage to GUI
  Stop folder when running out of disk space (fixes #2057)
2015-08-09 10:38:33 +02:00
Jakob Borg
a2df691c7d Configurable home disk percentage, translations 2015-08-09 10:37:23 +02:00
Lode Hoste
58f1191f2d Add minimum disk free percentage to GUI 2015-08-09 10:37:23 +02:00
Lode Hoste
dfaa999291 Stop folder when running out of disk space (fixes #2057)
& tweaks by calmh
2015-08-09 10:37:23 +02:00
Jakob Borg
6a58033f2b Merge pull request #2124 from calmh/go15
Updates for Go 1.5
2015-08-09 09:37:09 +02:00
Jakob Borg
7705a6c1f1 mv internal lib 2015-08-09 09:35:26 +02:00
Jakob Borg
0a803891a4 Updates for Go 1.5 2015-08-09 09:35:25 +02:00
Jakob Borg
7d620a93b9 Merge pull request #2141 from kamadak/fix-setting-addrlist
Fix editing address lists in the Settings dialog.
2015-08-09 09:11:30 +02:00
KAMADA Ken'ichi
3f3b2f4c99 Fix editing address lists in the Settings dialog.
Setting "Sync Protocol Listen Addresses" and "Global Discovery
Server" in the Settings dialog does not work.  This bug seems to
have been introduced in 26d52be.
2015-08-09 15:52:22 +09:00
Audrius Butkevicius
9eb4089710 Merge pull request #2137 from calmh/caching
Enable browser caching of static resources
2015-08-08 13:20:58 +01:00
Jakob Borg
257d1afdf8 Enable browser caching of static resources
This sends the Cache-Control header to allow caching of static resources,
and checks the If-Modified-Since header to allow browser to use the
cached resource on refresh. Also fixes some paths that caused redirects
(core//foo -> core/foo)
2015-08-08 13:50:18 +02:00
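
Roughly what such a handler does, sketched with net/http; the max-age value
and wrapper shape are assumptions, not the actual middleware:

	package main

	import (
		"net/http"
		"time"
	)

	// cacheStatic advertises cacheability and answers If-Modified-Since
	// with 304 Not Modified when the assets haven't changed since modTime.
	func cacheStatic(h http.Handler, modTime time.Time) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			w.Header().Set("Cache-Control", "max-age=3600, public")
			ims, err := time.Parse(http.TimeFormat, r.Header.Get("If-Modified-Since"))
			if err == nil && !modTime.Truncate(time.Second).After(ims) {
				w.WriteHeader(http.StatusNotModified)
				return
			}
			h.ServeHTTP(w, r)
		})
	}

	func main() {
		assets := http.FileServer(http.Dir("gui"))
		http.Handle("/", cacheStatic(assets, time.Now()))
		http.ListenAndServe("127.0.0.1:8080", nil)
	}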
Audrius Butkevicius
dad1fb7805 Merge pull request #2135 from calmh/caseinsign
Handle multiple case insensitivity prefixes in ignores (fixes #2134)
2015-08-08 12:04:46 +01:00
Jakob Borg
e312fdd4f8 Handle multiple case insensitivity prefixes in ignores (fixes #2134) 2015-08-08 11:58:20 +02:00
Jakob Borg
9b6681d543 Use grid instead of three-column (fixes #2130) 2015-08-08 11:37:19 +02:00
Jakob Borg
bcc04623c1 Fix editing device address (fixes #2129) 2015-08-08 11:29:13 +02:00
Jakob Borg
d75d162058 Merge pull request #2126 from AudriusButkevicius/peek
Add timeout for peek (fixes #1035)
2015-08-07 17:30:54 +02:00
Audrius Butkevicius
8f2294bbd4 Merge pull request #2128 from Zillode/fix-rescan-btn-unshared
Make rescan available for unshared folders
2015-08-06 20:54:41 +01:00
Lode Hoste
d1b91c7619 Make rescan available for unshared folders 2015-08-06 21:52:29 +02:00
Audrius Butkevicius
1b6b481fcc Add timeout for peek (fixes #1035) 2015-08-06 12:07:34 +01:00
Jakob Borg
dd64ba1910 Merge pull request #2116 from Zillode/fix-test-reset
Fix TestReset when Syncthing shuts down too fast
2015-08-05 09:47:23 +02:00
Jakob Borg
82a5d1cd79 Merge pull request #2108 from Zillode/fix-delete-button
Rename 'delete' to 'remove' (fixes #1007)
2015-08-05 09:34:27 +02:00
Jakob Borg
c26c172d01 Merge pull request #2103 from Zillode/fix-integration-rename-windows
Fix integration rename windows
2015-08-05 08:59:53 +02:00
Lode Hoste
a7b7aaa7cb Fix TestReset when Syncthing shuts down too fast 2015-08-04 09:20:46 +02:00
Lode Hoste
4e5c02c05c Clarify password in integration tests 2015-08-03 20:03:50 +02:00
Lode Hoste
2baf61fda3 Properly rename config files during integration tests (fixes #1769) 2015-08-03 20:03:50 +02:00
Lode Hoste
219a25fe80 Rename 'delete' to 'remove' (fixes #1007) 2015-08-03 20:01:26 +02:00
Jakob Borg
b1dd704819 Rebuild assets 2015-08-02 09:41:22 +02:00
Jakob Borg
9513e91d66 Flatten GUI tree somewhat
The very deep tree structure didn't really agree with me, sorry. This
makes the core module rather large, but on the other hand that just
highlights that it is rather large.
2015-08-02 09:24:01 +02:00
Dennis Wilson
26d52bedb3 Squashed commit of pull request #1954 2015-08-02 09:21:46 +02:00
Jakob Borg
19451e0654 Translation and docs update 2015-08-02 09:19:32 +02:00
Jakob Borg
fa922d7792 Send index immediately on local change 2015-08-02 08:08:24 +02:00
Jakob Borg
bbe1de3119 Merge pull request #2100 from AudriusButkevicius/memory
Use protocol provided buffer for requests (fixes #1157)
2015-08-02 08:07:52 +02:00
Jakob Borg
f87e9b596d Merge pull request #2106 from Zillode/scan-deletes
Reduce scanning effort
2015-08-02 07:42:39 +02:00
Jakob Borg
917e12952e Merge pull request #2109 from Zillode/update-credits
Update credits for dependencies (fixes #2082)
2015-08-02 07:25:45 +02:00
Audrius Butkevicius
1977c526e4 Use protocol provided buffers for requests (fixes #1157) 2015-08-01 12:35:47 +01:00
Audrius Butkevicius
b63351074c Update protocol package 2015-08-01 12:22:19 +01:00
Lode Hoste
d78eb1247e Update credits for dependencies (fixes #2082) 2015-07-31 23:19:09 +02:00
Lode Hoste
9b9fe0d65c Reduce scanning effort 2015-07-31 21:32:49 +02:00
Jakob Borg
5a2db802d9 Fix TestReset 2015-07-28 21:31:01 +04:00
Jakob Borg
d3972b88f2 Remove set.ReplaceWithDelete (dead code) 2015-07-28 21:09:43 +04:00
Audrius Butkevicius
e62cf13760 Add stwatchfile 2015-07-27 19:00:22 +01:00
Jakob Borg
a5f6e3bba0 Update translations and docs 2015-07-26 11:25:40 +02:00
Jakob Borg
d170660c25 Usage -> Statistics 2015-07-24 12:04:16 +02:00
Audrius Butkevicius
2202aaed51 Merge pull request #2087 from calmh/norestart
Add folders without restart (fixes #2063)
2015-07-24 08:06:36 +01:00
Audrius Butkevicius
cbefcd50cf Merge pull request #2088 from calmh/usagedata
Link to usage data (ref syncthing/website#17)
2015-07-24 07:55:41 +01:00
Jakob Borg
1acfa291a0 Link to usage data (ref syncthing/website#17) 2015-07-24 08:53:33 +02:00
Jakob Borg
21accd534c Add folders without restart (fixes #2063) 2015-07-24 08:20:57 +02:00
Audrius Butkevicius
de89d7a976 Merge pull request #2084 from calmh/norestart
Add devices without restart (fixes #2083)
2015-07-23 11:07:27 +01:00
Jakob Borg
319abebd70 Update all dependencies 2015-07-22 12:09:44 +02:00
Jakob Borg
76480adda5 Add devices without restart (fixes #2083) 2015-07-22 10:43:47 +02:00
Jakob Borg
e205f8afbb Don't error integration tests on unexpected EOF at Stop() 2015-07-22 10:43:33 +02:00
Audrius Butkevicius
42dc51784e Merge pull request #2079 from snnd/bugfix_reloads
fix(core): prevent endless reload on cache requests
2015-07-21 23:01:15 +01:00
Dennis Wilson
a4a46f480d refactor(core): eliminate global state of guiVersion and deviceId 2015-07-21 23:47:35 +02:00
Dennis Wilson
e34be16237 style(core): add simple flow, hoisting, stacktrace infos 2015-07-21 23:41:10 +02:00
Dennis Wilson
dfcc166918 fix(core): prevent endless reload on cache requests 2015-07-21 22:35:51 +02:00
Audrius Butkevicius
895d56ed04 Merge pull request #2076 from calmh/ignoredelete
Optionally ignore remote deletes
2015-07-21 19:48:06 +01:00
Jakob Borg
12eab4a8ba Optionally ignore remote deletes (fixes #1254) 2015-07-21 20:39:19 +02:00
Zillode
3eb2b1f7a2 Fix Systemd readme link 2015-07-21 20:32:35 +02:00
Audrius Butkevicius
6ecc9bf93a Merge pull request #2077 from calmh/conflictwins
Determine conflict winner based on change type and modification time (fixes #1848)
2015-07-21 15:05:08 +01:00
Jakob Borg
da4ebb6535 Determine conflict winner based on change type and modification time (fixes #1848) 2015-07-21 15:39:20 +02:00
Jakob Borg
6e4d33c741 Don't run tests with audit on 2015-07-20 15:46:14 +02:00
Jakob Borg
d3387e2a28 Make sure CPU profile actually gets written before exiting 2015-07-20 15:34:40 +02:00
Jakob Borg
491452a19d Improve performance for syncing many small files quickly
Without this, as soon as we'd touched 1000 files in the last minute
(which can happen), we got stuck doing cache cleaning all the time,
burning a lot of CPU time.
2015-07-20 15:30:44 +02:00
Jakob Borg
7d3257b222 Use soft shutdown when running tests 2015-07-20 15:05:15 +02:00
Jakob Borg
1836ef2884 Squashed commit of pull request #1981
Conflicts:
	gui/scripts/syncthing/core/controllers/syncthingController.js
	internal/auto/gui.files.go
2015-07-20 14:48:03 +02:00
Jakob Borg
43d6322d0f Merge pull request #2061 from calmh/atomicwriter
Add osutil.AtomicWriter (take two)
2015-07-20 14:27:36 +02:00
Jakob Borg
f0684d83e9 Add osutil.AtomicWriter
This captures the common pattern of writing to a temp file and moving it
to its real name only if everything went well. It reduces the amount of
code in some places where we do this, but maybe not as much as I
expected because the upgrade thing is still a special snowflake...
2015-07-20 14:27:14 +02:00
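
The pattern the commit message describes, as a standalone sketch (2015-era
ioutil; error handling simplified):

	package main

	import (
		"io/ioutil"
		"os"
		"path/filepath"
	)

	// writeAtomic writes data to a temp file in the target's own directory
	// (so the rename stays on one filesystem), then renames it into place,
	// so readers never see a half-written file.
	func writeAtomic(path string, data []byte) error {
		tmp, err := ioutil.TempFile(filepath.Dir(path), "tmp-")
		if err != nil {
			return err
		}
		defer os.Remove(tmp.Name()) // harmless once the rename has succeeded
		if _, err := tmp.Write(data); err != nil {
			tmp.Close()
			return err
		}
		if err := tmp.Close(); err != nil {
			return err
		}
		return os.Rename(tmp.Name(), path)
	}

	func main() {
		_ = writeAtomic("config.xml", []byte("<configuration/>\n"))
	}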
Jakob Borg
3f3170818d Merge pull request #2072 from dva/patch-1
Double curly brace notation displaying
2015-07-20 14:25:27 +02:00
Jakob Borg
7683096fe1 Add dva 2015-07-20 14:25:07 +02:00
Jakob Borg
bb438bfb17 Squashed commit of pull request #1990
commit 4eb3ff55ba
Merge: ddb3fea a04b005
Author: Brian R. Becker <brbecker@gmail.com>
Date:   Sat Jul 11 20:45:30 2015 -0700

    Merge remote-tracking branch 'upstream/master'

commit ddb3fea0d9
Author: Brian R. Becker <brbecker@gmail.com>
Date:   Mon Jun 22 11:36:58 2015 -0700

    Corrected spelling in GUI message
2015-07-20 14:22:36 +02:00
Jakob Borg
a11aa295de Squashed commit of pull request #1875
commit d60fbce311
Author: Jacek Szafarkiewicz <szafar@linux.pl>
Date:   Mon Jun 1 11:16:36 2015 +0200

    Correct order of deb files

commit 3b2ecfcc45
Merge: f4daebb c23a601
Author: Jacek Szafarkiewicz <szafar@linux.pl>
Date:   Mon Jun 1 11:15:06 2015 +0200

    Merge github.com:syncthing/syncthing

    Conflicts:
    	build.go

commit f4daebb851
Author: Jacek Szafarkiewicz <szafar@linux.pl>
Date:   Tue May 26 12:58:25 2015 +0200

    Add me to AUTHORS

commit 9e77f4bea0
Author: Jacek Szafarkiewicz <szafar@linux.pl>
Date:   Tue May 26 12:57:40 2015 +0200

    Add systemd files to deb package
2015-07-20 14:18:07 +02:00
Denis A.
9a50f4ac1f Double curly brace notation displaying
Prevent double curly brace notation from displaying momentarily before angular.js compiles/interpolates the document
2015-07-20 12:18:48 +03:00
Jakob Borg
59e829e595 Translation & docs update 2015-07-19 13:34:11 +02:00
Jakob Borg
78dca5fe8b Assets & translations 2015-07-16 14:03:21 +02:00
Jakob Borg
8c816f64e4 Merge pull request #2064 from uok/infotext
Improve info text for device addresses (fixes #2044)
2015-07-16 14:02:35 +02:00
Ben Schulz
22c525e3fe Improve info text for device addresses (fixes #2044)
2015-07-16 10:15:33 +02:00
Jakob Borg
f3f6b03d85 Don't let folder ID escape into HTML tag ID:s (fixes #2059) 2015-07-16 08:13:10 +02:00
Audrius Butkevicius
00bebc317e Merge pull request #2062 from canton7/feature/issue-2041
Allow #editIgnores to scroll in browser (fixes #2041)
2015-07-15 17:48:15 +01:00
Antony Male
8f38e83aaf Allow #editIgnores to scroll in browser (fixes #2041)
class modal-open is applied to <body>, which ultimately means that the
browser will scroll to the modal's content. However, #editFolder was
finishing its close animation (and removing this modal-open class)
after #editIgnores had set modal-open (and had started its open
animation). The end result is that <body> ends up without modal-open
when #editIgnores is open, and so the browser doesn't properly scroll.

Instead, only open the #editIgnores once #editFolder has finished closing.
2015-07-15 14:20:57 +01:00
Jakob Borg
8fab7ec5e3 Decrease timing sensitivity of ignore.TestCache 2015-07-14 12:12:57 +02:00
Jakob Borg
50eb968109 Merge pull request #2060 from brgmnn/master
Added select-on-click for remote 'Device ID' and 'API Key'
2015-07-14 10:55:32 +02:00
Daniel Bergmann
569314be45 Added select-on-click for remote 'Device ID' and 'API Key'
Now these uneditable boxes match the behaviour when clicking on the ID
box in Actions > Show ID.
2015-07-13 17:01:13 +01:00
Jakob Borg
909d60464e Revert "Merge pull request #2053 from calmh/atomicwriter" (fixes #2058)
This reverts commit b611f72e08, reversing
changes made to a04b005e93.
2015-07-13 12:47:32 +02:00
Jakob Borg
d855abf8b0 Translation & docs update 2015-07-13 08:24:04 +02:00
Audrius Butkevicius
b611f72e08 Merge pull request #2053 from calmh/atomicwriter
Add osutil.AtomicWriter
2015-07-12 12:10:54 +01:00
Jakob Borg
44e3bec42e Add osutil.AtomicWriter
This captures the common pattern of writing to a temp file and moving it
to its real name only if everything went well. It reduces the amount of
code in some places where we do this, but maybe not as much as I
expected because the upgrade thing is still a special snowflake...
2015-07-12 14:28:59 +10:00
Jakob Borg
a04b005e93 Revert "Let suture logging bubble upwards"
This reverts commit 1b837116e6.
2015-07-11 11:12:20 +10:00
Jakob Borg
1b837116e6 Let suture logging bubble upwards 2015-07-11 10:52:57 +10:00
Jakob Borg
d16b04b683 Protocol dep update because I screwed up previous one 2015-07-10 19:58:56 +10:00
Audrius Butkevicius
d2e7a8004d Merge pull request #2048 from calmh/clusterconfigrace
Make sure connection is added to m.protoConn and m.rawConn before it's Start()ed (fixes #2034)
2015-07-10 08:46:31 +01:00
Jakob Borg
0c28216ee5 Make sure connection is added to m.protoConn and m.rawConn before it's Start()ed (fixes #2034) 2015-07-10 16:43:41 +10:00
Jakob Borg
5bb8ea7449 Remove one of two emits of events.DeviceConnected (ref #2034) 2015-07-10 16:12:59 +10:00
Jakob Borg
b78c515724 Debug output on request errors 2015-07-10 16:01:10 +10:00
Audrius Butkevicius
cb1a7a7bdc Update panic instructions (fixes #2039) 2015-07-09 22:43:16 +01:00
Audrius Butkevicius
b8dcb7c884 Update protocol package (fixes #2040) 2015-07-09 22:39:22 +01:00
Audrius Butkevicius
1ded554a15 Fix advanced option saving (fixes #2042) 2015-07-09 21:58:58 +01:00
Jakob Borg
bc0ce7b820 launchd: Log to files 2015-07-06 21:45:49 +02:00
Jakob Borg
1da3a57fe7 Asset rebuild 2015-07-05 19:44:55 +02:00
Jakob Borg
8697982302 Add canton7 2015-07-05 19:44:19 +02:00
Jakob Borg
c4168cf855 Merge pull request #2030 from canton7/feature/issue-2029
Use bootstrap grid instead of column-count:3 for aligning checkboxes (fixes #2029)
2015-07-05 19:41:44 +02:00
Antony Male
7023d3ca2b Use bootstrap grid instead of column-count:3 for aligning checkboxes (fixes #2029)
Upgrading to bootstrap 3.3.5 meant that checkboxes inside a div with
column-count:3 set would be unclickable in Chrome: in fact, the entire
div appears to sit on top of its contents, making interaction impossible.

This affected both the 'show folder with these devices' and 'these devices
can access this folder' sections of the UI.

I'm not sure what the underlying cause is, but moving to Bootstrap's
grid system appears to work around the issue. Devices/folders have to be
explicitly split out into rows, otherwise the final element appears offset.

To do this grouping by row, a new filter (groupFilter) has been added, which
turns an input of e.g. [1, 2, 3, 4, 5] with a groupSize of 3 into
[[1, 2, 3], [4, 5]]. However altering the collection in this way throws
Angular into an infinite watch loop, terminating in infdig. m59peacemaker's
pmkr.filterStabilize (MIT) was added to work around this issue.

This also has the nice side-effect of wrapping the list of devices/folders
when the screen width decreases.

See also:
 - #2027 (bootstrap update which triggered this issue)
 - #1121 (last time it happened)
2015-07-05 17:21:02 +01:00
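
The grouping the new filter performs is plain chunking; sketched here in Go
for illustration (the actual filter is AngularJS):

	// chunk splits xs into consecutive groups of at most size elements,
	// e.g. chunk([]int{1, 2, 3, 4, 5}, 3) gives [[1 2 3] [4 5]].
	func chunk(xs []int, size int) [][]int {
		if len(xs) == 0 {
			return nil
		}
		var out [][]int
		for len(xs) > size {
			out = append(out, xs[:size])
			xs = xs[size:]
		}
		return append(out, xs)
	}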
Jakob Borg
57a5d13c47 Translation and docs update 2015-07-05 11:24:21 +02:00
Jakob Borg
500b96240b Don't show Failed Items on folder masters 2015-07-05 11:21:15 +02:00
Jakob Borg
13d961d41d Correctly show Override button when out of sync 2015-07-05 11:20:59 +02:00
Jakob Borg
fddc4c2fc0 Rebuild assets 2015-07-05 11:13:35 +02:00
Jakob Borg
061ec7369f Merge pull request #2027 from calmh/bootstrap
Update bootstrap
2015-07-05 11:07:13 +02:00
Jakob Borg
8366dbd8e0 Add link to home page (fixes #1993, fixes #1999) 2015-07-05 11:06:29 +02:00
Jakob Borg
b02047e4b5 Update Bootstrap 3.1.0 -> 3.3.5 2015-07-05 11:05:38 +02:00
Jakob Borg
e9545c4961 Merge pull request #2022 from brgmnn/master
Preserve setgid bit on local directories (fixes #2012)
2015-07-04 19:37:21 +02:00
Audrius Butkevicius
966a2b1df5 Merge pull request #2023 from calmh/advedit
Advanced configuration dialog
2015-07-04 15:08:40 +01:00
Jakob Borg
dec6540967 Implement "advanced configuration" dialog (fixes #2010) 2015-07-04 13:47:43 +02:00
Daniel Bergmann
3fe1673ce9 Preserve setgid bit on local directories (fixes #2012)
When setting the permissions on directories with ignore permissions off,
preserve the setgid bit's original value instead of clearing it.
2015-07-04 09:01:34 +01:00
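
A sketch of preserving the bit, assuming the standard os package; not
Syncthing's actual code:

	// chmodKeepSetgid applies perm to a directory but carries over the
	// directory's existing setgid bit instead of clearing it.
	func chmodKeepSetgid(path string, perm os.FileMode) error {
		fi, err := os.Stat(path)
		if err != nil {
			return err
		}
		perm = perm&^os.ModeSetgid | fi.Mode()&os.ModeSetgid
		return os.Chmod(path, perm)
	}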
Jakob Borg
e9e13474c9 Merge pull request #2021 from brgmnn/master
Fixed add device button being overlapped by footer (fixes #1950)
2015-07-03 08:56:33 +02:00
Daniel Bergmann
aee9093848 Fixed add device button being overlapped by footer (fixes #1950) 2015-07-02 16:16:42 +01:00
Jakob Borg
76822c7c34 Merge pull request #2018 from brgmnn/master
Added select ID text on click to gui
2015-07-02 11:01:37 +02:00
Daniel Bergmann
5c18d34d89 Added a contact email address for myself.
Added myself to the AUTHORS, NICKS and GUI contributors files. Also
fixed the sort order in AUTHORS and NICKS when adding myself as there
were a couple of entries in both that were not quite in alphabetical
order.
2015-07-02 09:45:22 +01:00
Daniel Bergmann
970a9c7552 Added select ID text on click to gui 2015-07-02 09:34:12 +01:00
Audrius Butkevicius
37a42dc408 Fix CSRF tests (fixes #2009) 2015-06-30 19:38:27 +01:00
Audrius Butkevicius
a03c9f9457 Merge pull request #2001 from calmh/failed-files
Show failed files in web UI
2015-06-30 15:26:24 +01:00
Jakob Borg
60004ebff1 Show FolderErrors result in UI (fixes #1437) 2015-06-30 14:41:48 +02:00
Jakob Borg
2d9fcf6828 Collect puller errors, send FolderErrors event 2015-06-30 14:41:47 +02:00
Jakob Borg
c8ac9721d7 Translation and docs update 2015-06-28 21:10:57 +02:00
Audrius Butkevicius
b1b68b58fe Update protocol package 2015-06-28 11:40:53 +01:00
Jakob Borg
ca21db9481 Merge pull request #2006 from AudriusButkevicius/timeout
Make ping timeout configurable (fixes #1751)
2015-06-28 07:45:02 +02:00
Audrius Butkevicius
93ad803073 Make ping timeout configurable (fixes #1751) 2015-06-27 12:34:41 +01:00
Audrius Butkevicius
6cc7f70a65 Update protocol package 2015-06-27 12:07:42 +01:00
Jakob Borg
2b0c33f74d Merge pull request #1996 from AudriusButkevicius/checkrace
Potential race between folder being added and scan (fixes #1986)
2015-06-26 12:56:07 +02:00
Audrius Butkevicius
dae1d36a23 Trim string slices upon loading config (fixes #1750) 2015-06-25 16:50:27 +01:00
Audrius Butkevicius
824fa8f17a Fix go lint warnings 2015-06-24 22:05:27 +01:00
Audrius Butkevicius
31cd0b943c Potential race between folder being added and scan (potentially fixes #1986) 2015-06-24 21:59:03 +01:00
Jakob Borg
070eced2f6 Merge pull request #1985 from calmh/fix-reset
Fix reset DB
2015-06-24 14:07:15 +02:00
Audrius Butkevicius
986f8dfb2e Remove dead variable 2015-06-24 08:43:33 +01:00
Audrius Butkevicius
8c0c03eb38 Merge pull request #1989 from AudriusButkevicius/session
Use different session cookies per device
2015-06-23 13:56:14 +01:00
Audrius Butkevicius
fd9bc20bc5 Merge pull request #1988 from calmh/dups
Don't rename duplicate folders (fixes #1675)
2015-06-23 11:17:30 +01:00
Audrius Butkevicius
089fca2319 Use different session cookies per device 2015-06-22 19:51:46 +01:00
Jakob Borg
e936890927 Don't rename duplicate folders (fixes #1675)
Renaming them puts the user in a difficult situation as they can't
rename them back in the GUI. This way, they need to fix the config in
the same way it got broken (manual editing or external tool).
2015-06-22 11:27:47 +02:00
Zillode
0450d48f89 Merge pull request #1856 from calmh/fix-1391
Serialize scans (fixes #1391)
2015-06-21 14:26:17 +02:00
Jakob Borg
2463819a3d Update translations and docs 2015-06-21 11:45:54 +02:00
Jakob Borg
2b2cae2d50 Fix reset DB
The reset of all folders failed when there was no data for a given
folder, as it was not returned by db.ListFolders then. But we don't
really care about that, we can "reset" it anyway...
2015-06-21 09:35:41 +02:00
Zillode
0f1b40da71 Merge pull request #1982 from calmh/fix-1978
Sanitize rescan interval values (fixes #1978)
2015-06-20 23:28:18 +02:00
Jakob Borg
f73d5a9ab2 Serialize scans and pulls (fixes #1391) 2015-06-20 23:01:40 +02:00
Jakob Borg
4eb0e24c6e Sanitize rescan interval values (fixes #1978) 2015-06-20 23:01:30 +02:00
Jakob Borg
1d2235abe7 Model must be running for tests 2015-06-20 23:00:33 +02:00
Jakob Borg
d347e54acb Don't start model until services have been added (fixes #1969) 2015-06-20 20:04:47 +02:00
Jakob Borg
b5198d8119 Merge pull request #1968 from calmh/newtests
Refactored integration tests
2015-06-20 19:24:04 +02:00
Jakob Borg
b8b5c5ff34 Merge pull request #1913 from Zillode/fix-reset
Fix 'reset' Rest API on windows
2015-06-20 11:43:05 +02:00
Jakob Borg
a738490a3b Update translation strings 2015-06-20 11:40:05 +02:00
Jakob Borg
54a8de2059 Merge remote-tracking branch 'syncthing/pr/1979'
* syncthing/pr/1979:
  Display Local State Summary (All Folders)
2015-06-20 11:39:31 +02:00
Jakob Borg
cb2c0e7ac5 Add fti7 2015-06-20 11:32:55 +02:00
Frank Isemann
510d309b8a Display Local State Summary (All Folders) 2015-06-19 21:52:19 +02:00
Jakob Borg
a7ce2a7aa5 Merge pull request #1963 from wsgcsysadmin/master
Put thisDeviceName first in page title to make small browser tabs distinguishable
2015-06-19 08:51:24 +02:00
Jakob Borg
cfe24ecdd9 Add wsgcsysadmin 2015-06-19 08:50:36 +02:00
Jakob Borg
c5e9cb025c Merge pull request #1977 from Zillode/fix-1976
Corrected API response when resetting folder (fixes #1976)
2015-06-19 08:48:40 +02:00
Jakob Borg
c3d07d60ca Refactored integration tests
Added internal/rc to remote control a Syncthing process and made the
"awaiting sync" determination reliable.
2015-06-19 08:47:47 +02:00
Lode Hoste
a0897a7456 Corrected API response when resetting folder (fixes #1976) 2015-06-19 08:30:19 +02:00
WSGCSysadmin
c50ba9267c Put thisDeviceName first in page title 2015-06-18 10:53:37 -05:00
Jakob Borg
423e69916c Merge pull request #1962 from Zillode/fix-pull-order-gui
Set default pull order in webGUI
2015-06-18 16:51:10 +02:00
Lode Hoste
b56c76f8ad Fix 'reset' Rest API on windows 2015-06-18 12:45:08 +02:00
Lode Hoste
cb2d2f000f Set default pull order in webGUI 2015-06-18 12:42:00 +02:00
Audrius Butkevicius
69af77a3bd Merge pull request #1967 from calmh/woops
Incorrect error condition on shortcuts
2015-06-18 11:07:07 +01:00
Jakob Borg
7767746d3e Incorrect error condition on shortcuts 2015-06-18 11:55:43 +02:00
Audrius Butkevicius
7219aaeb89 Merge pull request #1966 from calmh/overrideevents
Generate LocalIndexUpdated events in Override
2015-06-18 10:17:38 +01:00
Jakob Borg
7af1863e81 Generate LocalIndexUpdated events in Override
The override should look like we detected the changes locally, or the
GUI and other things won't update correctly.

This is/was caught by the Override integration test, on my newly
refactored integration test suite which is soon ready for prime time, so
a test is coming. :)
2015-06-18 10:49:24 +02:00
Jakob Borg
4beb42bf45 Merge pull request #1959 from AudriusButkevicius/lastfile
Add label next to "Last file received" (fixes #1952)
2015-06-18 10:45:27 +02:00
Audrius Butkevicius
12a3086a9e Add label next to "Last file received" (fixes #1952) 2015-06-16 12:12:34 +01:00
Audrius Butkevicius
198725216f Merge pull request #1957 from calmh/myid
Include myID in the StartupComplete event
2015-06-16 08:46:50 +01:00
Audrius Butkevicius
08647f1267 Merge pull request #1956 from calmh/earlyevents
Fix API event subscription
2015-06-16 08:46:29 +01:00
Audrius Butkevicius
87811efc30 Merge pull request #1955 from calmh/localver
Add version to LocalIndexUpdate event.
2015-06-16 08:45:09 +01:00
Jakob Borg
82c3e6f87f Include myID in the StartupComplete event
Nice to have...
2015-06-16 09:27:06 +02:00
Jakob Borg
1ac40a3043 Fix API event subscription
The API never got the first few events ("Starting" etc) as it subscribed
too late. Instead, set up a subscription for it early on. If the API is
configured not to run this is unnecessary but doesn't hurt very much.
2015-06-16 09:17:58 +02:00
Jakob Borg
127b0c3332 Add version to LocalIndexUpdate event.
Allows correlating LocalIndexUpdate events on one device with RemoteIndexUpdated on another to make sure the cluster has converged.
2015-06-16 08:30:15 +02:00
Jakob Borg
a6d9150b14 Skip extra newline between assets 2015-06-15 23:13:43 +02:00
Jakob Borg
7e5197c566 Help link for folder master 2015-06-15 22:34:39 +02:00
Jakob Borg
2d217e72bd Dependency update 2015-06-15 21:10:33 +02:00
Jakob Borg
12331cc62b Merge pull request #1943 from ralder/webgui-events-in-service
webgui: moved events controller to events service
2015-06-15 20:33:27 +02:00
Sergey Mishin
2449f1e1b6 webgui: moved events controller to events service 2015-06-15 19:05:46 +03:00
Audrius Butkevicius
6a6593c656 Merge pull request #1948 from calmh/symwarning
Don't warn about irrelevant symlinks, print path to culprit (ref #1945)
2015-06-15 10:41:51 +01:00
Jakob Borg
ad220d61f9 Merge pull request #1951 from AudriusButkevicius/voodoo
Voodoo
2015-06-15 11:31:03 +02:00
Audrius Butkevicius
1e35383b4d Merge pull request #1947 from calmh/metadata
Differentiate between content and metadata updates in ItemStarted/ItemFinished
2015-06-15 10:26:09 +01:00
Audrius Butkevicius
c8457ab005 Voodoo 2015-06-15 10:22:44 +01:00
Jakob Borg
070a308217 Don't warn about irrelevant symlinks, print path to culprit (ref #1945)
This skips the warning about "unsupported symlinks" for invalid or
deleted symlinks (and as a side effect also accepts them into the index,
which should be fine). It also prints the affected file paths to the
log. This should be in the hypothetical list of "errored files" we
should present instead of the cryptic "puller stopped" message in the
future...
2015-06-15 00:47:13 +02:00
Jakob Borg
c1f4477376 ItemStarted can be map[string]string 2015-06-14 22:59:21 +02:00
Jakob Borg
d728320ece New ItemStart/Finished type 'metadata' for shortcut updates 2015-06-14 22:56:41 +02:00
Audrius Butkevicius
fee0d7168a Merge pull request #1946 from calmh/guiassets
Default GUI override dir
2015-06-14 21:40:56 +01:00
Jakob Borg
7c23b32de3 Default GUI override dir
If STGUIASSETS is not set, look for assets in $confdir/gui by default.
Simplifies deploying overrides and stuff.
2015-06-14 22:28:40 +02:00
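
The fallback order, sketched; confDir stands in for however the
configuration location is obtained:

	// assetDir returns the GUI asset directory: an explicit STGUIASSETS
	// wins, otherwise gui/ under the configuration directory is used.
	func assetDir(confDir string) string {
		if dir := os.Getenv("STGUIASSETS"); dir != "" {
			return dir
		}
		return filepath.Join(confDir, "gui")
	}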
Jakob Borg
1437952aee Translation and docs update 2015-06-14 13:52:00 +02:00
Jakob Borg
a162157301 Add translation strings for trash can versioning 2015-06-14 11:08:25 +02:00
Audrius Butkevicius
6316bf3582 Merge pull request #1941 from AudriusButkevicius/errors
Correctly set and clear errors for missing folders (fixes #1937)
2015-06-13 23:47:20 +01:00
Audrius Butkevicius
1d856b4723 Correctly set and clear errors for missing folders (fixes #1937) 2015-06-13 23:45:54 +01:00
Audrius Butkevicius
7f56d5c23a Merge pull request #1935 from calmh/trashcan
Add trash can file versioning (fixes #1931)
2015-06-12 13:20:37 +01:00
Jakob Borg
a778763851 Add trash can file versioning (fixes #1931) 2015-06-12 13:30:49 +02:00
Audrius Butkevicius
983d7ec265 Merge pull request #1933 from calmh/fix-1907
More resilient broadcast handling (fixes #1907)
2015-06-11 14:59:31 +01:00
Jakob Borg
297769ef57 More resilient broadcast handling (fixes #1907)
My theory is that some error condition on the socket results in it
blocking for writes, which maybe also blocks reads... This separates the
two into separate services with their own sockets, with restarts and
retries as appropriate on write timeouts and read/write errors. It
should be more robust, hopefully, but I have a hard time testing the
actual error conditions...
2015-06-11 15:06:05 +02:00
Jakob Borg
885d050e5f Correct docs site link 2015-06-11 14:24:39 +02:00
Jakob Borg
8fb4ce6cad Merge pull request #1927 from ralder/patch-1
fix disappeared status of folder after restart syncthing
2015-06-11 08:47:53 +02:00
Jakob Borg
42738ab54d Add missing copyright notice 2015-06-11 08:46:57 +02:00
ralder
7d1250620e fix disappeared folder status after syncthing restart
Sometimes after the syncthing process restarts, '/rest/db/status' for a folder may return data with 'state' set to the empty string
2015-06-10 16:48:16 +03:00
Jakob Borg
5c49b93c67 Links are nice, too 2015-06-10 00:04:53 +02:00
Jakob Borg
9a11f81fd3 Point to contribution guidelines and docs 2015-06-10 00:02:39 +02:00
Audrius Butkevicius
cba2e972fd Merge pull request #1810 from calmh/cfg-commit
Configuration commit thingy
2015-06-09 15:09:21 +01:00
Jakob Borg
76ad925842 Refactor config commit stuff to support restartless updates better
Includes restartless updates of the GUI settings (listening port etc) as
a proof of concept.
2015-06-09 15:41:22 +02:00
Jakob Borg
ef6f52f688 Correctly handle nil error in verbose logging (fixes #1921) 2015-06-09 09:04:03 +02:00
Audrius Butkevicius
197bfa9f11 Merge pull request #1919 from calmh/fix-1918
Start folders before GUI/API (fixes #1918)
2015-06-08 11:13:02 +01:00
Jakob Borg
145f8c7435 Start folders before GUI/API (fixes #1918) 2015-06-08 11:04:09 +02:00
Jakob Borg
a8b43ae598 Translation / docs update 2015-06-07 12:57:26 +02:00
Audrius Butkevicius
567f19bf68 Do not overwrite error value 2015-06-06 08:30:01 +01:00
Jakob Borg
5cd4cd2271 Merge pull request #1912 from AudriusButkevicius/warnings
Silence discovery warnings (fixes #1388)
2015-06-04 19:29:42 +02:00
Audrius Butkevicius
4180569443 Silence discovery warnings (fixes #1388)
We no longer perform the net.InterfaceAddrs() check in the constructor, as
failing there meant we wouldn't start the read loop, which completely kills it.
2015-06-04 12:59:06 +01:00
Jakob Borg
5f4a92c8e6 Correct link 2015-06-03 19:47:39 +02:00
Jakob Borg
ccf3fed950 Merge remote-tracking branch 'syncthing/pr/1911'
* syncthing/pr/1911:
  replaced (not all) wiki links to new location docs.syncthing.net
2015-06-03 19:24:30 +02:00
Lars K.W. Gohlke
3626003f68 replaced (not all) wiki links to new location docs.syncthing.net 2015-06-03 19:09:36 +02:00
Audrius Butkevicius
11cb040ad1 Merge pull request #1880 from calmh/itemfinished-err
Fix ItemFinished
2015-06-03 16:50:33 +01:00
Jakob Borg
c1761cab49 Trigger ItemFinished when temp file creation fails instead of failing silently 2015-06-03 16:28:31 +02:00
Jakob Borg
25b25b5434 Merge pull request #1885 from AudriusButkevicius/moar-checks
Additional cases for detecting folders disappearing
2015-06-03 08:40:21 +02:00
Jakob Borg
8bdf66d9c0 Merge pull request #1906 from ralder/fix-style-top-menu-for-small-devices
fix language menu for small screen devices
2015-06-03 08:38:40 +02:00
Sergey Mishin
ccebdd142a fix webgui top menu for small screen devices 2015-06-02 17:48:31 +03:00
Jakob Borg
e952da7f91 Ensure we always have an up to date list of language names 2015-06-02 08:47:26 +02:00
Jakob Borg
5bd1e4a167 Re-add mistakenly removed languages 2015-06-02 08:33:36 +02:00
Jakob Borg
6d3de41751 Merge pull request #1905 from ralder/fix-missing-languages
fix missing languages (fixes #1902)
2015-06-02 08:12:06 +02:00
Sergey Mishin
f11bac6705 fix missing languages (fixes #1902) 2015-06-02 01:19:21 +03:00
Jakob Borg
c23a601cc6 Random number is too large for 32 bit archs (fixes #1894) 2015-06-01 09:33:13 +02:00
Jakob Borg
36d4c69fd6 Translation update 2015-05-31 09:37:02 +02:00
Jakob Borg
ca7e7fa0c4 Docs link, capitalization 2015-05-31 09:35:17 +02:00
Jakob Borg
1f6dd5dbb9 Add man pages to Debian package 2015-05-30 13:11:17 +02:00
Jakob Borg
db52646655 Include generated man pages 2015-05-30 13:05:55 +02:00
Jakob Borg
8bb18fa988 Create 'prerelease' step to run before releases 2015-05-30 10:39:27 +02:00
Jakob Borg
f0edaf2f8c Merge pull request #1884 from calmh/helplink
Show help link, add icons, tweak icon spacing
2015-05-30 10:32:16 +02:00
Jakob Borg
d632e3aa1b Show help link, add icons, tweak icon spacing 2015-05-30 10:31:45 +02:00
Audrius Butkevicius
f0e58fa804 Additional cases for detecting folders disappearing 2015-05-27 22:46:10 +01:00
Jakob Borg
ceced09d02 Merge pull request #1882 from calmh/folderstats
Reduce db writes for small files
2015-05-27 22:02:04 +02:00
Jakob Borg
7d48115b90 Reduce db writes for small files
We introduced the dbUpdater routine to handle many small files
efficiently, but the folder stats call is almost equally expensive as it
results in two distinct write transactions to the database. This moves
it to the same routine.

(Doesn't make a *huge* difference with leveldb actually, but reduces the
50k-files benchmark time by 25% on my experimental bolt branch...)
2015-05-27 18:36:26 +02:00
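
The single-routine batching idea, in a hedged sketch; the type and channel
shape are illustrative, not the real dbUpdater:

	type fileUpdate struct{ name string } // illustrative; the real type carries metadata

	// dbUpdater drains whatever updates are queued and hands them to commit
	// as one batch, i.e. one write transaction instead of one per file.
	// commit must not retain the slice.
	func dbUpdater(updates <-chan fileUpdate, commit func([]fileUpdate)) {
		batch := make([]fileUpdate, 0, 64)
		for u := range updates {
			batch = append(batch, u)
		drain:
			for {
				select {
				case u, ok := <-updates:
					if !ok {
						break drain
					}
					batch = append(batch, u)
				default:
					break drain
				}
			}
			commit(batch)
			batch = batch[:0]
		}
	}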
Bart De Vries
d2205228fb Make syncthing honor both the ignorePerms and FlagNoPermBits settings (fixes #1871) 2015-05-26 16:27:26 +02:00
Audrius Butkevicius
5417fb7287 Merge pull request #1872 from calmh/large-file-transfer
Large file transfer
2015-05-25 17:13:58 +01:00
Jakob Borg
3f59d6daff Add validation cache 2015-05-25 13:19:18 +02:00
Jakob Borg
0d659933aa Merge pull request #1868 from mogwa1/umask
Change permissions of newly created files and directories (fixes #1339)
2015-05-25 08:34:34 +02:00
Jakob Borg
3e8eabe833 Add mogwa1 2015-05-25 08:07:59 +02:00
Bart De Vries
badfc77339 Change permissions of newly created files and directories (fixes #1339) 2015-05-25 00:12:51 +02:00
Audrius Butkevicius
c51d3e59ea Merge pull request #1862 from calmh/allocs
Reduce allocations during iteration
2015-05-24 12:31:39 +01:00
Jakob Borg
c0d02a65c3 Reduce allocations during iteration
Reuses key byte slices to reduce allocations:

	benchmark                    old allocs     new allocs     delta
	Benchmark10kReplace          178112         168615         -5.33%
	Benchmark10kUpdateChg        567954         561557         -1.13%
	Benchmark10kUpdateSme        238691         228680         -4.19%
	Benchmark10kNeed2k           105382         103383         -1.90%
	Benchmark10kHaveFullList     60230          60230          +0.00%
	Benchmark10kGlobal           237484         227493         -4.21%

	benchmark                    old bytes     new bytes     delta
	Benchmark10kReplace          8725368       7661788       -12.19%
	Benchmark10kUpdateChg        58155152      57441025      -1.23%
	Benchmark10kUpdateSme        16130875      14996717      -7.03%
	Benchmark10kNeed2k           6561685       6338283       -3.40%
	Benchmark10kHaveFullList     7611112       7611135       +0.00%
	Benchmark10kGlobal           21415056      20300723      -5.20%
2015-05-24 13:08:14 +02:00
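
The kind of reuse involved, sketched generically; iter and process are
stand-ins for the database iterator and its consumer:

	// Copy each key into a single reused buffer instead of allocating per
	// row; the backing array grows once and is recycled across iterations.
	buf := make([]byte, 0, 64)
	for iter.Next() {
		buf = append(buf[:0], iter.Key()...)
		process(buf) // process must not retain buf past the call
	}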
Audrius Butkevicius
3a203b8d83 Merge pull request #1861 from calmh/fix-1860
Be more lenient against errors when deleting (fixes #1860)
2015-05-24 00:58:46 +01:00
Audrius Butkevicius
feecdcc7a4 Merge pull request #1854 from calmh/eventmemallocs
Reuse a timer instead of allocating a new one in subscription.Poll
2015-05-23 23:23:46 +01:00
Audrius Butkevicius
51ad533be6 Merge pull request #1859 from calmh/fix-1858
UPnP discovery results must not be collected in a background goroutine (fixes #1858)
2015-05-23 22:58:09 +01:00
Jakob Borg
29da0bc8f5 Be more lenient against errors when deleting (fixes #1860) 2015-05-23 23:57:41 +02:00
Jakob Borg
bccf7fc2a8 Merge pull request #1857 from calmh/testhax
Refactor integration tests to be a little cleaner and more stable, I hope
2015-05-23 23:57:21 +02:00
Jakob Borg
7b6b5981c4 UPnP discovery results must not be collected in a background goroutine (fixes #1858) 2015-05-23 23:27:02 +02:00
Jakob Borg
9463192224 Refactor integration tests to be a little cleaner and more stable, I hope 2015-05-23 23:26:23 +02:00
Audrius Butkevicius
d1689f0012 Merge pull request #1855 from calmh/dboptsagain
Reduce db write cache to (2*) 4 MiB
2015-05-23 20:38:41 +01:00
Jakob Borg
8ed67fe349 Reduce db write cache to (2*) 4 MiB
I haven't been able to reproduce any performance advantage of having it
set higher and it reduces the memory footprint a bit.
2015-05-23 21:05:52 +02:00
Jakob Borg
90a1d99785 Reuse a timer instead of allocating a new one in subscription.Poll
This is surprisingly memory expensive when Poll gets called a lot, such
as when syncing lots of small files generating itemstarted/itemfinished
events. It's line number three in this heap profile on the
TestBenchmarkManyFiles test:

	jb@syno:~/src/github.com/syncthing/syncthing/test (master) $ go tool pprof ../bin/syncthing heap-13194.pprof
	Entering interactive mode (type "help" for commands)
	(pprof) top
	80.91MB of 83.05MB total (97.42%)
	Dropped 1024 nodes (cum <= 0.42MB)
	Showing top 10 nodes out of 85 (cum >= 1.75MB)
	      flat  flat%   sum%        cum   cum%
	      32MB 38.53% 38.53%    32.01MB 38.54%  github.com/syndtr/goleveldb/leveldb/memdb.New
	   22.16MB 26.68% 65.21%    22.16MB 26.68%  github.com/syndtr/goleveldb/leveldb/util.(*BufferPool).Get
	   13.02MB 15.68% 80.89%    13.02MB 15.68%  time.NewTimer
	    6.94MB  8.35% 89.24%     6.94MB  8.35%  github.com/syndtr/goleveldb/leveldb/memdb.(*DB).Put
	    3.18MB  3.82% 93.06%     3.18MB  3.82%  github.com/calmh/xdr.(*Reader).ReadBytesMaxInto

With this change the allocation is removed entirely.
2015-05-23 20:38:41 +02:00
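
What the change amounts to, in a hedged sketch with illustrative names:
time.After allocates a fresh timer on every call, so a hot Poll path
allocates constantly, while one reusable timer per subscription does not:

	type subscription struct {
		events chan string // illustrative payload type
		timer  *time.Timer
	}

	// Poll waits for the next event or a timeout, reusing s.timer instead
	// of allocating via time.After on every call.
	func (s *subscription) Poll(timeout time.Duration) (string, bool) {
		if !s.timer.Stop() {
			select {
			case <-s.timer.C: // drain a stale expiry from a previous call
			default:
			}
		}
		s.timer.Reset(timeout)
		select {
		case ev := <-s.events:
			return ev, true
		case <-s.timer.C:
			return "", false
		}
	}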
Jakob Borg
e215cf6fb8 Add some REST API benchmarks 2015-05-23 20:15:54 +02:00
Jakob Borg
e827c0bd94 Run benchmarks in docker-all instead 2015-05-23 15:17:19 +02:00
Jakob Borg
8dd7e4e6b5 Run benchmarks when running tests 2015-05-23 15:08:17 +02:00
Jakob Borg
a2b94f4e06 Build Debian armhf and armel 2015-05-23 13:10:33 +02:00
Jakob Borg
8b0037ffab Make check-contribs a little more generous in recognizing a copyright header 2015-05-21 21:42:46 +02:00
Jakob Borg
4ab03f3bef Update CONTRIBUTING to not encourage changing AUTHORS 2015-05-21 20:58:17 +02:00
Jakob Borg
3c3db52f49 Merge pull request #1842 from Zillode/fix-1822
Support the creation of top-level folders on Windows (fixes #1822)
2015-05-21 20:50:52 +02:00
Jakob Borg
5b1e884659 Merge pull request #1847 from Zillode/fix-1833
Show date and time for web GUI notification (fixes #1833)
2015-05-21 20:49:04 +02:00
Lode Hoste
7ec6740e26 Show date and time for web GUI notification (fixes #1833) 2015-05-21 19:44:45 +02:00
Lode Hoste
f12b8c19be Support the creation of top-level folders on Windows (fixes #1822) 2015-05-21 19:21:19 +02:00
Jakob Borg
76174d31ce Merge pull request #1843 from Zillode/fix-1831
Set permanent UPnP lease when required
2015-05-21 12:50:48 +02:00
Lode Hoste
5042248260 Set permanent UPnP lease when required (fixes #1831) 2015-05-21 10:48:40 +02:00
Lode Hoste
1df6589533 Add status code to SOAP response 2015-05-21 09:54:39 +02:00
Jakob Borg
12f76b448c Merge pull request #1826 from AudriusButkevicius/screwflags
Don't check interface flags on Windows
2015-05-19 14:59:16 +02:00
Audrius Butkevicius
f112ef34f6 Don't check interface flags on Windows 2015-05-17 16:42:26 +01:00
Jakob Borg
e4b57a978f Translation update 2015-05-15 10:46:47 +02:00
Audrius Butkevicius
b51e09e0e8 Merge pull request #1811 from calmh/bc
Further reduce maximum db block cache
2015-05-15 11:12:25 +03:00
Jakob Borg
a3ba3f895c Further reduce maximum db block cache 2015-05-14 20:59:59 +02:00
Jakob Borg
947a129e12 Use updateLocals to ensure event generation 2015-05-14 08:56:42 +02:00
Jakob Borg
f3fe6a6cbd Assure existence of a folder marker in the test 2015-05-14 08:46:00 +02:00
Jakob Borg
9d150bef9f Refactor GetMtime for early return 2015-05-14 08:26:21 +02:00
Jakob Borg
65be18cc93 Merge pull request #1804 from cdhowie/virtual-mtimes
Implement virtual mtime support for Android
2015-05-14 08:14:29 +02:00
Jakob Borg
e27ea63900 Add cdhowie 2015-05-14 08:11:02 +02:00
Chris Howie
aa96f7b660 Virtual mtime support for environments that don't support altering mtimes (fixes #831) 2015-05-13 14:57:29 +00:00
Jakob Borg
053690d885 Don't attempt reschedule with zero interval
Can happen on manual rescans of folders with zero interval.
2015-05-13 16:51:08 +02:00
Jakob Borg
b9fc6397a3 Merge pull request #1791 from Zillode/test-master
Add unit test for overriding ignored files (fixes #1701)
2015-05-13 14:50:12 +02:00
Audrius Butkevicius
19b9f15da3 Merge pull request #1803 from calmh/kvspaceclear
Reset a namespaced kv-store
2015-05-13 15:43:52 +03:00
Jakob Borg
0b9441e1a4 Reset a namespaced kv-store 2015-05-13 14:41:47 +02:00
Audrius Butkevicius
2324d7420c Merge pull request #1800 from calmh/ursvc
Break out usage reporting into a service
2015-05-13 15:41:44 +03:00
Jakob Borg
c6b2ca8b19 Break out usage reporting into a service 2015-05-13 14:39:27 +02:00
Audrius Butkevicius
83ea8dc577 Merge pull request #1801 from calmh/fix-1799
Only restart global discovery on UPnP change if it was enabled to start with (fixes #1799)
2015-05-12 18:49:24 +03:00
Jakob Borg
d898277f62 stindex: add some missing newlines 2015-05-12 11:47:30 +02:00
Jakob Borg
a289cfb986 Only restart global discovery on UPnP change if it was enabled to start with (fixes #1799) 2015-05-12 09:48:55 +02:00
Jakob Borg
d603998617 Debian maintainer is release team 2015-05-11 21:59:40 +02:00
Jakob Borg
8bf9f4f5ab Build debian skeleton 2015-05-11 21:34:43 +02:00
Jakob Borg
2c4e6f2926 wip 2015-05-11 19:04:39 +02:00
Jakob Borg
21fbbc50cd hax 2015-05-11 18:39:53 +02:00
Audrius Butkevicius
99512418da Merge pull request #1796 from calmh/adaptive-cache-tweak
Tweak the database block cache size, and add config for it
2015-05-11 11:06:31 +03:00
Jakob Borg
c2f2d8771f Tweak the database block cache size, and add config for it 2015-05-11 09:08:25 +02:00
Jakob Borg
179c9ee8cc Merge pull request #1792 from Zillode/tempname
Retain part of the meaningful filename, add a unit test, and do a small refactoring
2015-05-10 20:44:55 +02:00
Jakob Borg
eb8a505287 Merge pull request #1774 from Zillode/fix-rename-windows
If rename works we are happy (fixes #1767)
2015-05-10 19:04:03 +02:00
Lode Hoste
a7482a3644 Added simple unit test for temporary filenames 2015-05-10 18:57:27 +02:00
Lode Hoste
a692348336 Retain meaningful names for temporary files 2015-05-10 18:57:27 +02:00
Jakob Borg
285dcc30cf Use md5 hash of filename for temporary file (fixes #1786) 2015-05-10 18:57:27 +02:00
Jakob Borg
bcf51aed83 Translation update 2015-05-10 14:59:12 +02:00
Zillode
1a11ce6211 Merge pull request #1784 from calmh/fix-1782
Implement upnpSvc.Stop() (fixes #1782)
2015-05-10 14:47:02 +02:00
Lode Hoste
10021c97a6 Add unit test for overriding ignored files (fixes #1701) 2015-05-10 11:23:29 +02:00
Audrius Butkevicius
e869e3c534 Merge pull request #1789 from calmh/reduce-default-lease
Set default UPnP lease time to 60 minutes
2015-05-10 01:43:10 +03:00
Lode Hoste
f6416285db If rename works we are happy (fixes #1767) 2015-05-10 00:00:24 +02:00
Jakob Borg
7f0593cd2d Set default UPnP lease time to 60 minutes 2015-05-09 22:17:53 +02:00
Jakob Borg
e4c41718d8 Fix upnp mapping name 2015-05-09 20:04:15 +02:00
Jakob Borg
7234553990 Implement upnpSvc.Stop() (fixes #1782) 2015-05-09 12:49:58 +02:00
Jakob Borg
5761efdb73 Remove system/editor specific ignores (ref #1778) 2015-05-08 20:34:06 +02:00
Jakob Borg
1133192a86 Merge remote-tracking branch 'syncthing/pr/1779'
* syncthing/pr/1779:
  Remove stray VIM swap file
2015-05-08 20:32:02 +02:00
Chris Howie
af3288043a Remove stray VIM swap file 2015-05-08 16:11:32 +00:00
Jakob Borg
24a348f6a8 Use larger files for large file transfer benchmark 2015-05-08 14:59:37 +02:00
Audrius Butkevicius
5528b6c231 Merge pull request #1775 from calmh/fix-1765
Trigger pull check on remote index updates (fixes #1765)
2015-05-08 11:45:03 +03:00
Jakob Borg
245bd1eb17 Trigger pull check on remote index updates (fixes #1765)
Without this, when an index update comes in we only do a new pull if the
remote `localVersion` was increased. But it may not be, because the
index is sent alphabetically and the file with the highest local version
may come first. In that case we'll never do a new pull when the rest of
the index comes in, and we'll be stuck in idle but with lots of out of
sync data.
2015-05-08 10:02:46 +02:00
Jakob Borg
03506db76c Fix rename with capitalization test 2015-05-08 10:02:19 +02:00
Jakob Borg
cb5ef26020 Revert "Enforce line endings when cloning (fixes #1766)"
This reverts commit 2361a0dd6e.
2015-05-07 22:55:31 +02:00
Jakob Borg
2493ae4c2c Merge pull request #1773 from Zillode/fix-browser
Do not launch browser when running integration tests
2015-05-07 22:23:47 +02:00
Jakob Borg
2bd88344ad Merge pull request #1772 from Zillode/fix-lf
Enforce line endings when cloning (fixes #1766)
2015-05-07 22:03:20 +02:00
Lode Hoste
4950980d69 Don't launch browser for integration tests 2015-05-07 20:59:12 +02:00
Lode Hoste
2361a0dd6e Enforce line endings when cloning (fixes #1766) 2015-05-07 20:32:18 +02:00
Audrius Butkevicius
152cdd1750 Merge pull request #1771 from calmh/stindex
Simplify stindex
2015-05-07 11:13:49 +03:00
Jakob Borg
31797a5831 Simplify stindex 2015-05-07 09:59:04 +02:00
Jakob Borg
9308c42cff Skip boring concurrency test in internal/db 2015-05-07 09:57:49 +02:00
Audrius Butkevicius
c096cd34b9 Merge pull request #1763 from calmh/win-executable
Set the execute bit on Windows executables (fixes #1762)
2015-05-06 08:28:06 +03:00
Jakob Borg
5fc0808f28 Set the execute bit on Windows executables (fixes #1762) 2015-05-05 21:45:26 +02:00
Jakob Borg
e6866ee980 strings.TrimLeft is not actually TrimPrefix 2015-05-05 13:53:11 +02:00
Jakob Borg
45631d30b0 Merge pull request #1756 from LordLandon/master
Fix #1728 by using latest version property
2015-05-04 18:34:34 +02:00
Lord Landon "Warbles" Agahnim
0f432a0844 Use actual release version for release note link (fixes #1728) 2015-05-04 08:23:24 -07:00
Jakob Borg
df59bc7194 Merge pull request #1747 from Zillode/fix-1743
Partial fix for #1743
2015-05-04 10:51:40 +02:00
Jakob Borg
ff4706e450 Merge branch 'pr-1748'
* pr-1748:
  Reschedule before scan
  Use a channel instead of locks
  Reschedule the next scan interval (fixes #1591)
2015-05-04 10:40:14 +02:00
Jakob Borg
fc013bd04c Add LordLandon 2015-05-04 10:36:40 +02:00
Audrius Butkevicius
7e29a8b927 Merge pull request #1755 from calmh/fix-1754
Wait for stdout/stderr to close (fixes #1754)
2015-05-03 22:45:00 +03:00
Jakob Borg
67ae7a0b6c Wait for stdout/stderr to close (fixes #1754) 2015-05-03 17:36:01 +02:00
Jakob Borg
bd5a64bac0 Reschedule before scan 2015-05-03 14:18:50 +02:00
Jakob Borg
1bd85d8baf Use a channel instead of locks 2015-05-03 14:18:32 +02:00
Lode Hoste
fe34b08ece Reschedule the next scan interval (fixes #1591) 2015-05-03 12:48:44 +02:00
Jakob Borg
33048f88b8 Merge pull request #1752 from alex2108/master
Distinguish files with same name but different extension in staggered versioner (fixes #1738)
2015-05-03 12:32:20 +02:00
Alexander Graf
0ec01f4e78 Distinguish files with same name but different extension in staggered versioner (fixes #1738) 2015-05-03 10:36:46 +02:00
Lode Hoste
67f0c9bef0 Do not remove the file when renaming on a case-insensitive platform (fixes #1743) 2015-05-01 23:43:45 +02:00
Lode Hoste
58b15f9452 Limit alterfiles to a single operation per file 2015-05-01 13:03:03 +02:00
Lode Hoste
dedca59ac9 Add capitalization changes to integration tests 2015-05-01 12:11:57 +02:00
519 changed files with 35929 additions and 9214 deletions

.gitignore vendored

@@ -3,8 +3,6 @@ syncthing.exe
*.tar.gz
*.zip
*.asc
*.sublime*
.idea/
.jshintrc
coverage.out
files/pidx
@@ -12,7 +10,7 @@ bin
perfstats*.csv
coverage.xml
!gui/scripts/syncthing
.DS_Store
syncthing.md5
syncthing.exe.md5
RELEASE
deb

AUTHORS

@@ -1,30 +1,40 @@
# This is the official list of Syncthing authors for copyright purposes.
Aaron Bieber <qbit@deftly.net>
Adam Piggott <aD@simplypeachy.co.uk> <simplypeachy@users.noreply.github.com>
Alexander Graf <register-github@alex-graf.de>
Andrew Dunham <andrew@du.nham.ca>
Audrius Butkevicius <audrius.butkevicius@gmail.com>
Antony Male <antony.male@gmail.com>
Arthur Axel fREW Schmidt <frew@afoolishmanifesto.com> <frioux@gmail.com>
Audrius Butkevicius <audrius.butkevicius@gmail.com>
Bart De Vries <devriesb@gmail.com>
Ben Curthoys <ben@bencurthoys.com>
Ben Schulz <ueomkail@gmail.com> <uok@users.noreply.github.com>
Ben Sidhom <bsidhom@gmail.com>
Brandon Philips <brandon@ifup.org>
Brendan Long <self@brendanlong.com>
Brian R. Becker <brbecker@gmail.com>
Caleb Callaway <enlightened.despot@gmail.com>
Carsten Hagemann <moter8@gmail.com>
Cathryne Linenweaver <cathryne.linenweaver@gmail.com> <Cathryne@users.noreply.github.com>
Chris Howie <me@chrishowie.com>
Chris Joel <chris@scriptolo.gy>
Colin Kennedy <moshen.colin@gmail.com>
Daniel Bergmann <dan.arne.bergmann@gmail.com> <brgmnn@users.noreply.github.com>
Daniel Martí <mvdan@mvdan.cc>
Denis A. <denisva@gmail.com>
Dennis Wilson <dw@risu.io>
Dominik Heidler <dominik@heidler.eu>
Elias Jarlebring <jarlebring@gmail.com>
Emil Hessman <emil@hessman.se>
Erik Meitner <e.meitner@willystreet.coop>
Federico Castagnini <federico.castagnini@gmail.com>
Felix Ableitner <me@nutomic.com>
Felix Unterpaintner <bigbear2nd@gmail.com>
Francois-Xavier Gsell <fxgsell@gmail.com>
Frank Isemann <frank@isemann.name>
Gilli Sigurdsson <gilli@vx.is>
Jacek Szafarkiewicz <szafar@linux.pl>
Jakob Borg <jakob@nym.se>
James Patterson <jamespatterson@operamail.com> <jpjp@users.noreply.github.com>
Jaroslav Malec <dzardacz@gmail.com>
@@ -34,9 +44,11 @@ Johan Vromans <jvromans@squirrel.nl>
Karol Różycki <rozycki.karol@gmail.com>
Ken'ichi Kamada <kamada@nanohz.org>
Lode Hoste <zillode@zillode.be>
Marcin Dziadus <dziadus.marcin@gmail.com>
Lord Landon Agahnim <lordlandon@gmail.com>
Marc Laporte <marc@marclaporte.com> <marc@laporte.name>
Marc Pujol <kilburn@la3.org>
Marcin Dziadus <dziadus.marcin@gmail.com>
Matt Burke <mburke@amplify.com>
Michael Jephcote <rewt0r@gmx.com> <Rewt0r@users.noreply.github.com>
Michael Tilli <pyfisch@gmail.com>
Pascal Jungblut <github@pascalj.com> <mail@pascal-jungblut.com>
@@ -46,7 +58,7 @@ Phill Luby <phill.luby@newredo.com>
Piotr Bejda <piotrb10@gmail.com>
Ryan Sullivan <kayoticsully@gmail.com>
Sergey Mishin <ralder@yandex.ru>
Stefan Tatschner <stefan@sevenbyte.org>
Stefan Tatschner <stefan@sevenbyte.org> <rumpelsepp@sevenbyte.org>
Tim Abell <tim@timwise.co.uk>
Tobias Nygren <tnn@nygren.pp.se>
Tomas Cerveny <kozec@kozec.com>

CONTRIBUTING.md

@@ -32,64 +32,15 @@ latest info on Transifex.
## Contributing Code
Every contribution is welcome. If you want to contribute but are unsure
where to start, any open issues are fair game! Be prepared for a
[certain amount of review](https://github.com/syncthing/syncthing/wiki/FAQ#why-are-you-being-so-hard-on-my-pull-request);
it's all in the name of quality. :) Following the points below will make this
a smoother process.
where to start, any open issues are fair game! See the [Contribution
Guidelines](http://docs.syncthing.net/dev/contributing.html) for the full
story on committing code.
Individuals making significant and valuable contributions are given
commit-access to the project. If you make a significant contribution and
are not considered for commit-access, please contact any of the
Syncthing core team members.
## Contributing Documentation
All nontrivial contributions should go through the pull request
mechanism for internal review. Determining what is "nontrivial" is left
at the discretion of the contributor.
### Authorship
All code authors are listed in the AUTHORS file. Commits must be made
with the same name and email as listed in the AUTHORS file. To
accomplish this, ensure that your git configuration is set correctly
prior to making your first commit;
$ git config --global user.name "Jane Doe"
$ git config --global user.email janedoe@example.com
You must be reachable on the given email address. If you do not wish to
use your real name for whatever reason, using a nickname or pseudonym is
perfectly acceptable.
### Core Team
The Syncthing core team currently consists of the following members;
- Jakob Borg (@calmh)
- Audrius Butkevicius (@AudriusButkevicius)
## Coding Style
- Follow the conventions laid out in [Effective Go](https://golang.org/doc/effective_go.html)
as much as makes sense.
- All text files use Unix line endings.
- Each commit should be `go fmt` clean.
- The commit message subject should be a single short sentence
describing the change, starting with a capital letter.
- Commits that resolve an existing issue must include the issue number
as `(fixes #123)` at the end of the commit message subject.
- Imports are grouped per `goimports` standard; that is, standard
library first, then third party libraries after a blank line.
- A contribution solving a single issue or introducing a single new
feature should probably be a single commit based on the current
`master` branch. You may be asked to "rebase" or "squash" your pull
request to make sure this is the case, especially if there have been
amendments during review.
Updates to the [documentation site](http://docs.syncthing.net/) can be
made as pull requests on the [documentation
repository](https://github.com/syncthing/docs).
## Licensing
@@ -99,42 +50,3 @@ strings which are licensed under the Creative Commons Attribution 4.0
International License. You retain the copyright to code you have
written.
When accepting your first contribution, the maintainer of the project
will ensure that you are added to the AUTHORS file. You are welcome to
add yourself as a separate commit in your first pull request.
## Building
[See the documentation](https://github.com/syncthing/syncthing/wiki/Building)
on how to get started with a build environment.
## Branches
- `master` is the main branch containing good code that will end up in
the next release. You should base your work on it. It won't ever be
rebased or force-pushed to.
- `vx.y` branches exist to make patch releases on otherwise obsolete
minor releases. Should only contain fixes cherry picked from master.
Don't base any work on them.
- Other branches are probably topic branches and may be subject to
rebasing. Don't base any work on them unless you specifically know
otherwise.
## Tags
All releases are tagged semver style as `vx.y.z`. Release tags are
signed by GPG key BCE524C7.
## Tests
Yes please!
## Documentation
[Over here!](https://github.com/syncthing/syncthing/wiki)
## License
MPLv2
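
As an aside (not part of the file change above): the import grouping that the removed style section mandates, per `goimports`, looks like the sketch below; the `calmh/xdr` import is merely an example third-party package taken from this repository's dependency list.

```go
package main

import (
	"fmt" // standard library group first
	"os"

	"github.com/calmh/xdr" // third-party group after a blank line
)

func main() {
	fmt.Fprintln(os.Stdout, xdr.ElementSizeExceeded("Files", 2, 1))
}
```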

Godeps/Godeps.json generated

@@ -1,17 +1,21 @@
{
"ImportPath": "github.com/syncthing/syncthing",
"GoVersion": "go1.4",
"GoVersion": "go1.5",
"Packages": [
"./cmd/..."
],
"Deps": [
{
"ImportPath": "github.com/bkaradzic/go-lz4",
"Rev": "93a831dcee242be64a9cc9803dda84af25932de7"
"Rev": "d47913b1412890a261b9fefae99d72d2bf5aebd8"
},
{
"ImportPath": "github.com/calmh/du",
"Rev": "3c0690cca16228b97741327b1b6781397afbdb24"
},
{
"ImportPath": "github.com/calmh/logger",
"Rev": "4d4e2801954c5581e4c2a80a3d3beb3b3645fd04"
"Rev": "c96f6a1a8c7b6bf2f4860c667867d90174799eb2"
},
{
"ImportPath": "github.com/calmh/luhn",
@@ -21,29 +25,30 @@
"ImportPath": "github.com/calmh/xdr",
"Rev": "5f7208e86762911861c94f1849eddbfc0a60cbf0"
},
{
"ImportPath": "github.com/golang/snappy",
"Rev": "723cc1e459b8eea2dea4583200fd60757d40097a"
},
{
"ImportPath": "github.com/juju/ratelimit",
"Rev": "c5abe513796336ee2869745bff0638508450e9c5"
"Rev": "772f5c38e468398c4511514f4f6aa9a4185bc0a0"
},
{
"ImportPath": "github.com/kardianos/osext",
"Rev": "efacde03154693404c65e7aa7d461ac9014acd0c"
"Rev": "6e7f843663477789fac7c02def0d0909e969b4e5"
},
{
"ImportPath": "github.com/syncthing/protocol",
"Rev": "e7db2648034fb71b051902a02bc25d4468ed492e"
"Rev": "388a29bbe21d8772ee4c29f4520aa8040309607d"
},
{
"ImportPath": "github.com/syndtr/goleveldb/leveldb",
"Rev": "87e4e645d80ae9c537e8f2dee52b28036a5dd75e"
},
{
"ImportPath": "github.com/syndtr/gosnappy/snappy",
"Rev": "156a073208e131d7d2e212cb749feae7c339e846"
"Rev": "b743d92d3215f11c9b5ce8830fafe1f16786adf4"
},
{
"ImportPath": "github.com/thejerf/suture",
"Rev": "ff19fb384c3fe30f42717967eaa69da91e5f317c"
"Comment": "v1.0.1",
"Rev": "99c1f2d613756768fc4299acd9dc621e11ed3fd7"
},
{
"ImportPath": "github.com/vitrun/qart/coding",
@@ -59,19 +64,27 @@
},
{
"ImportPath": "golang.org/x/crypto/bcrypt",
"Rev": "c57d4a71915a248dbad846d60825145062b4c18e"
"Rev": "c16968172724c0b5e8bdc6ad33f5a79443a44cd7"
},
{
"ImportPath": "golang.org/x/crypto/blowfish",
"Rev": "c57d4a71915a248dbad846d60825145062b4c18e"
"Rev": "c16968172724c0b5e8bdc6ad33f5a79443a44cd7"
},
{
"ImportPath": "golang.org/x/net/internal/iana",
"Rev": "66f0418ca41253f8d1a024eb9754e9441a8e79b9"
},
{
"ImportPath": "golang.org/x/net/ipv6",
"Rev": "66f0418ca41253f8d1a024eb9754e9441a8e79b9"
},
{
"ImportPath": "golang.org/x/text/transform",
"Rev": "2076e9cab4147459c82bc81169e46c139d358547"
"Rev": "723492b65e225eafcba054e76ba18bb9c5ac1ea2"
},
{
"ImportPath": "golang.org/x/text/unicode/norm",
"Rev": "2076e9cab4147459c82bc81169e46c139d358547"
"Rev": "723492b65e225eafcba054e76ba18bb9c5ac1ea2"
}
]
}


@@ -4,4 +4,5 @@ go:
- 1.1
- 1.2
- 1.3
- 1.4
- tip


@@ -4,7 +4,7 @@ go-lz4
go-lz4 is port of LZ4 lossless compression algorithm to Go. The original C code
is located at:
https://code.google.com/p/lz4/
https://github.com/Cyan4973/lz4
Status
------


@@ -0,0 +1,23 @@
// +build gofuzz
package lz4
import "encoding/binary"
func Fuzz(data []byte) int {
if len(data) < 4 {
return 0
}
ln := binary.LittleEndian.Uint32(data)
if ln > (1 << 21) {
return 0
}
if _, err := Decode(nil, data); err != nil {
return 0
}
return 1
}


@@ -141,7 +141,7 @@ func Decode(dst, src []byte) ([]byte, error) {
length += ln
}
if int(d.spos+length) > len(d.src) {
if int(d.spos+length) > len(d.src) || int(d.dpos+length) > len(d.dst) {
return nil, ErrCorrupt
}
@@ -179,7 +179,12 @@ func Decode(dst, src []byte) ([]byte, error) {
}
literal := d.dpos - d.ref
if literal < 4 {
if int(d.dpos+4) > len(d.dst) {
return nil, ErrCorrupt
}
d.cp(4, decr[literal])
} else {
length += 4


@@ -25,8 +25,10 @@
package lz4
import "encoding/binary"
import "errors"
import (
"encoding/binary"
"errors"
)
const (
minMatch = 4

Godeps/_workspace/src/github.com/calmh/du/LICENSE generated vendored Normal file

@@ -0,0 +1,24 @@
This is free and unencumbered software released into the public domain.
Anyone is free to copy, modify, publish, use, compile, sell, or
distribute this software, either in source code form or as a compiled
binary, for any purpose, commercial or non-commercial, and by any
means.
In jurisdictions that recognize copyright laws, the author or authors
of this software dedicate any and all copyright interest in the
software to the public domain. We make this dedication for the benefit
of the public at large and to the detriment of our heirs and
successors. We intend this dedication to be an overt act of
relinquishment in perpetuity of all present and future rights to this
software under copyright law.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
For more information, please refer to <http://unlicense.org>

Godeps/_workspace/src/github.com/calmh/du/README.md generated vendored Normal file

@@ -0,0 +1,14 @@
du
==
Get total and available disk space on a given volume.
Documentation
-------------
http://godoc.org/github.com/calmh/du
License
-------
Public Domain


@@ -0,0 +1,21 @@
package main
import (
"fmt"
"log"
"os"
"github.com/calmh/du"
)
var KB = int64(1024)
func main() {
usage, err := du.Get(os.Args[1])
if err != nil {
log.Fatal(err)
}
fmt.Println("Free:", usage.FreeBytes/(KB*KB), "MiB")
fmt.Println("Available:", usage.AvailBytes/(KB*KB), "MiB")
fmt.Println("Size:", usage.TotalBytes/(KB*KB), "MiB")
}


@@ -0,0 +1,8 @@
package du
// Usage holds information about total and available storage on a volume.
type Usage struct {
TotalBytes int64 // Size of volume
FreeBytes int64 // Unused size
AvailBytes int64 // Available to a non-privileged user
}


@@ -0,0 +1,24 @@
// +build !windows,!netbsd,!openbsd,!solaris
package du
import (
"path/filepath"
"syscall"
)
// Get returns the Usage of a given path, or an error if usage data is
// unavailable.
func Get(path string) (Usage, error) {
var stat syscall.Statfs_t
err := syscall.Statfs(filepath.Clean(path), &stat)
if err != nil {
return Usage{}, err
}
u := Usage{
FreeBytes: int64(stat.Bfree) * int64(stat.Bsize),
TotalBytes: int64(stat.Blocks) * int64(stat.Bsize),
AvailBytes: int64(stat.Bavail) * int64(stat.Bsize),
}
return u, nil
}


@@ -0,0 +1,13 @@
// +build netbsd openbsd solaris
package du
import "errors"
var ErrUnsupported = errors.New("unsupported platform")
// Get returns the Usage of a given path, or an error if usage data is
// unavailable.
func Get(path string) (Usage, error) {
return Usage{}, ErrUnsupported
}


@@ -0,0 +1,27 @@
package du
import (
"syscall"
"unsafe"
)
// Get returns the Usage of a given path, or an error if usage data is
// unavailable.
func Get(path string) (Usage, error) {
h := syscall.MustLoadDLL("kernel32.dll")
c := h.MustFindProc("GetDiskFreeSpaceExW")
var u Usage
ret, _, err := c.Call(
uintptr(unsafe.Pointer(syscall.StringToUTF16Ptr(path))),
uintptr(unsafe.Pointer(&u.FreeBytes)),
uintptr(unsafe.Pointer(&u.TotalBytes)),
uintptr(unsafe.Pointer(&u.AvailBytes)))
if ret == 0 {
return u, err
}
return u, nil
}


@@ -6,6 +6,7 @@ package logger
import (
"fmt"
"io/ioutil"
"log"
"os"
"strings"
@@ -37,6 +38,13 @@ type Logger struct {
var DefaultLogger = New()
func New() *Logger {
if os.Getenv("LOGGER_DISCARD") != "" {
// Hack to completely disable logging, for example when running benchmarks.
return &Logger{
logger: log.New(ioutil.Discard, "", 0),
}
}
return &Logger{
logger: log.New(os.Stdout, "", log.Ltime),
}

Godeps/_workspace/src/github.com/golang/snappy/AUTHORS generated vendored Normal file

@@ -0,0 +1,14 @@
# This is the official list of Snappy-Go authors for copyright purposes.
# This file is distinct from the CONTRIBUTORS files.
# See the latter for an explanation.
# Names should be added to this file as
# Name or Organization <email address>
# The email address is not required for organizations.
# Please keep the list sorted.
Damian Gryski <dgryski@gmail.com>
Google Inc.
Jan Mercl <0xjnml@gmail.com>
Sebastien Binet <seb.binet@gmail.com>


@@ -0,0 +1,36 @@
# This is the official list of people who can contribute
# (and typically have contributed) code to the Snappy-Go repository.
# The AUTHORS file lists the copyright holders; this file
# lists people. For example, Google employees are listed here
# but not in AUTHORS, because Google holds the copyright.
#
# The submission process automatically checks to make sure
# that people submitting code are listed in this file (by email address).
#
# Names should be added to this file only after verifying that
# the individual or the individual's organization has agreed to
# the appropriate Contributor License Agreement, found here:
#
# http://code.google.com/legal/individual-cla-v1.0.html
# http://code.google.com/legal/corporate-cla-v1.0.html
#
# The agreement for individuals can be filled out on the web.
#
# When adding J Random Contributor's name to this file,
# either J's name or J's organization's name should be
# added to the AUTHORS file, depending on whether the
# individual or corporate CLA was used.
# Names should be added to this file like so:
# Name <email address>
# Please keep the list sorted.
Damian Gryski <dgryski@gmail.com>
Jan Mercl <0xjnml@gmail.com>
Kai Backman <kaib@golang.org>
Marc-Antoine Ruel <maruel@chromium.org>
Nigel Tao <nigeltao@golang.org>
Rob Pike <r@golang.org>
Russ Cox <rsc@golang.org>
Sebastien Binet <seb.binet@gmail.com>

Godeps/_workspace/src/github.com/golang/snappy/LICENSE generated vendored Normal file

@@ -0,0 +1,27 @@
Copyright (c) 2011 The Snappy-Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


@@ -0,0 +1,7 @@
The Snappy compression format in the Go programming language.
To download and install from source:
$ go get github.com/golang/snappy
Unless otherwise noted, the Snappy-Go source files are distributed
under the BSD-style license found in the LICENSE file.


@@ -13,6 +13,8 @@ import (
var (
// ErrCorrupt reports that the input is invalid.
ErrCorrupt = errors.New("snappy: corrupt input")
// ErrTooLarge reports that the uncompressed length is too large.
ErrTooLarge = errors.New("snappy: decoded block is too large")
// ErrUnsupported reports that the input isn't supported.
ErrUnsupported = errors.New("snappy: unsupported input")
)
@@ -27,11 +29,13 @@ func DecodedLen(src []byte) (int, error) {
// that the length header occupied.
func decodedLen(src []byte) (blockLen, headerLen int, err error) {
v, n := binary.Uvarint(src)
if n == 0 {
if n <= 0 || v > 0xffffffff {
return 0, 0, ErrCorrupt
}
if uint64(int(v)) != v {
return 0, 0, errors.New("snappy: decoded block is too large")
const wordSize = 32 << (^uint(0) >> 32 & 1)
if wordSize == 32 && v > 0x7fffffff {
return 0, 0, ErrTooLarge
}
return int(v), n, nil
}
@@ -56,7 +60,7 @@ func Decode(dst, src []byte) ([]byte, error) {
x := uint(src[s] >> 2)
switch {
case x < 60:
s += 1
s++
case x == 60:
s += 2
if s > len(src) {
@@ -130,7 +134,7 @@ func Decode(dst, src []byte) ([]byte, error) {
// NewReader returns a new Reader that decompresses from r, using the framing
// format described at
// https://code.google.com/p/snappy/source/browse/trunk/framing_format.txt
// https://github.com/google/snappy/blob/master/framing_format.txt
func NewReader(r io.Reader) *Reader {
return &Reader{
r: r,
@@ -200,7 +204,7 @@ func (r *Reader) Read(p []byte) (int, error) {
}
// The chunk types are specified at
// https://code.google.com/p/snappy/source/browse/trunk/framing_format.txt
// https://github.com/google/snappy/blob/master/framing_format.txt
switch chunkType {
case chunkTypeCompressedData:
// Section 4.2. Compressed data (chunk type 0x00).
@@ -280,13 +284,11 @@ func (r *Reader) Read(p []byte) (int, error) {
// Section 4.5. Reserved unskippable chunks (chunk types 0x02-0x7f).
r.err = ErrUnsupported
return 0, r.err
} else {
// Section 4.4 Padding (chunk type 0xfe).
// Section 4.6. Reserved skippable chunks (chunk types 0x80-0xfd).
if !r.readFull(r.buf[:chunkLen]) {
return 0, r.err
}
}
// Section 4.4 Padding (chunk type 0xfe).
// Section 4.6. Reserved skippable chunks (chunk types 0x80-0xfd).
if !r.readFull(r.buf[:chunkLen]) {
return 0, r.err
}
}
}


@@ -79,7 +79,7 @@ func emitCopy(dst []byte, offset, length int) int {
// slice of dst if dst was large enough to hold the entire encoded block.
// Otherwise, a newly allocated slice will be returned.
// It is valid to pass a nil dst.
func Encode(dst, src []byte) ([]byte, error) {
func Encode(dst, src []byte) []byte {
if n := MaxEncodedLen(len(src)); len(dst) < n {
dst = make([]byte, n)
}
@@ -92,7 +92,7 @@ func Encode(dst, src []byte) ([]byte, error) {
if len(src) != 0 {
d += emitLiteral(dst[d:], src)
}
return dst[:d], nil
return dst[:d]
}
// Initialize the hash table. Its size ranges from 1<<8 to 1<<14 inclusive.
@@ -145,7 +145,7 @@ func Encode(dst, src []byte) ([]byte, error) {
if lit != len(src) {
d += emitLiteral(dst[d:], src[lit:])
}
return dst[:d], nil
return dst[:d]
}
// MaxEncodedLen returns the maximum length of a snappy block, given its
@@ -176,7 +176,7 @@ func MaxEncodedLen(srcLen int) int {
// NewWriter returns a new Writer that compresses to w, using the framing
// format described at
// https://code.google.com/p/snappy/source/browse/trunk/framing_format.txt
// https://github.com/google/snappy/blob/master/framing_format.txt
func NewWriter(w io.Writer) *Writer {
return &Writer{
w: w,
@@ -226,11 +226,7 @@ func (w *Writer) Write(p []byte) (n int, errRet error) {
// Compress the buffer, discarding the result if the improvement
// isn't at least 12.5%.
chunkType := uint8(chunkTypeCompressedData)
chunkBody, err := Encode(w.enc, uncompressed)
if err != nil {
w.err = err
return n, err
}
chunkBody := Encode(w.enc, uncompressed)
if len(chunkBody) >= len(uncompressed)-len(uncompressed)/8 {
chunkType, chunkBody = chunkTypeUncompressedData, uncompressed
}
@@ -244,11 +240,11 @@ func (w *Writer) Write(p []byte) (n int, errRet error) {
w.buf[5] = uint8(checksum >> 8)
w.buf[6] = uint8(checksum >> 16)
w.buf[7] = uint8(checksum >> 24)
if _, err = w.w.Write(w.buf[:]); err != nil {
if _, err := w.w.Write(w.buf[:]); err != nil {
w.err = err
return n, err
}
if _, err = w.w.Write(chunkBody); err != nil {
if _, err := w.w.Write(chunkBody); err != nil {
w.err = err
return n, err
}


@@ -5,7 +5,7 @@
// Package snappy implements the snappy block-based compression format.
// It aims for very high speeds and reasonable compression.
//
// The C++ snappy implementation is at http://code.google.com/p/snappy/
// The C++ snappy implementation is at https://github.com/google/snappy
package snappy
import (
@@ -46,7 +46,7 @@ const (
chunkHeaderSize = 4
magicChunk = "\xff\x06\x00\x00" + magicBody
magicBody = "sNaPpY"
// https://code.google.com/p/snappy/source/browse/trunk/framing_format.txt says
// https://github.com/google/snappy/blob/master/framing_format.txt says
// that "the uncompressed data in a chunk must be no longer than 65536 bytes".
maxUncompressedChunkLen = 65536
)
@@ -61,7 +61,7 @@ const (
var crcTable = crc32.MakeTable(crc32.Castagnoli)
// crc implements the checksum specified in section 3 of
// https://code.google.com/p/snappy/source/browse/trunk/framing_format.txt
// https://github.com/google/snappy/blob/master/framing_format.txt
func crc(b []byte) uint32 {
c := crc32.Update(0, crcTable, b)
return uint32(c>>15|c<<17) + 0xa282ead8


@@ -24,11 +24,7 @@ var (
)
func roundtrip(b, ebuf, dbuf []byte) error {
e, err := Encode(ebuf, b)
if err != nil {
return fmt.Errorf("encoding error: %v", err)
}
d, err := Decode(dbuf, e)
d, err := Decode(dbuf, Encode(ebuf, b))
if err != nil {
return fmt.Errorf("decoding error: %v", err)
}
@@ -82,6 +78,26 @@ func TestSmallRegular(t *testing.T) {
}
}
func TestInvalidVarint(t *testing.T) {
data := []byte("\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x00")
if _, err := DecodedLen(data); err != ErrCorrupt {
t.Errorf("DecodedLen: got %v, want ErrCorrupt", err)
}
if _, err := Decode(nil, data); err != ErrCorrupt {
t.Errorf("Decode: got %v, want ErrCorrupt", err)
}
// The encoded varint overflows 32 bits
data = []byte("\xff\xff\xff\xff\xff\x00")
if _, err := DecodedLen(data); err != ErrCorrupt {
t.Errorf("DecodedLen: got %v, want ErrCorrupt", err)
}
if _, err := Decode(nil, data); err != ErrCorrupt {
t.Errorf("Decode: got %v, want ErrCorrupt", err)
}
}
func cmp(a, b []byte) error {
if len(a) != len(b) {
return fmt.Errorf("got %d bytes, want %d", len(a), len(b))
@@ -197,10 +213,7 @@ func TestWriterReset(t *testing.T) {
}
func benchDecode(b *testing.B, src []byte) {
encoded, err := Encode(nil, src)
if err != nil {
b.Fatal(err)
}
encoded := Encode(nil, src)
// Bandwidth is in amount of uncompressed data.
b.SetBytes(int64(len(src)))
b.ResetTimer()
@@ -222,7 +235,7 @@ func benchEncode(b *testing.B, src []byte) {
func readFile(b testing.TB, filename string) []byte {
src, err := ioutil.ReadFile(filename)
if err != nil {
b.Fatalf("failed reading %s: %s", filename, err)
b.Skipf("skipping benchmark: %v", err)
}
if len(src) == 0 {
b.Fatalf("%s has zero length", filename)
@@ -284,14 +297,14 @@ var testFiles = []struct {
// The test data files are present at this canonical URL.
const baseURL = "https://raw.githubusercontent.com/google/snappy/master/testdata/"
func downloadTestdata(basename string) (errRet error) {
func downloadTestdata(b *testing.B, basename string) (errRet error) {
filename := filepath.Join(*testdata, basename)
if stat, err := os.Stat(filename); err == nil && stat.Size() != 0 {
return nil
}
if !*download {
return fmt.Errorf("test data not found; skipping benchmark without the -download flag")
b.Skipf("test data not found; skipping benchmark without the -download flag")
}
// Download the official snappy C++ implementation reference test data
// files for benchmarking.
@@ -326,7 +339,7 @@ func downloadTestdata(basename string) (errRet error) {
}
func benchFile(b *testing.B, n int, decode bool) {
if err := downloadTestdata(testFiles[n].filename); err != nil {
if err := downloadTestdata(b, testFiles[n].filename); err != nil {
b.Fatalf("failed to download testdata: %s", err)
}
data := readFile(b, filepath.Join(*testdata, testFiles[n].filename))


@@ -1,5 +1,8 @@
This package contains an efficient token-bucket-based rate limiter.
Copyright (C) 2015 Canonical Ltd.
All files in this repository are licensed as follows. If you contribute
to this repository, it is assumed that you license your contribution
under the same license unless you state otherwise.
All files Copyright (C) 2015 Canonical Ltd. unless otherwise specified in the file.
This software is licensed under the LGPLv3, included below.


@@ -20,7 +20,7 @@ token in the bucket represents one byte.
```go
func Writer(w io.Writer, bucket *Bucket) io.Writer
```
Writer returns a reader that is rate limited by the given token bucket. Each
Writer returns a writer that is rate limited by the given token bucket. Each
token in the bucket represents one byte.
#### type Bucket


@@ -2,7 +2,8 @@
// Licensed under the LGPLv3 with static-linking exception.
// See LICENCE file for details.
// The ratelimit package provides an efficient token bucket implementation.
// The ratelimit package provides an efficient token bucket implementation
// that can be used to limit the rate of arbitrary things.
// See http://en.wikipedia.org/wiki/Token_bucket.
package ratelimit
@@ -10,6 +11,7 @@ import (
"strconv"
"sync"
"time"
"math"
)
// Bucket represents a token bucket that fills at a predetermined rate.
@@ -55,7 +57,7 @@ func NewBucketWithRate(rate float64, capacity int64) *Bucket {
continue
}
tb := NewBucketWithQuantum(fillInterval, capacity, quantum)
if diff := abs(tb.Rate() - rate); diff/rate <= rateMargin {
if diff := math.Abs(tb.Rate() - rate); diff/rate <= rateMargin {
return tb
}
}
@@ -217,10 +219,3 @@ func (tb *Bucket) adjust(now time.Time) (currentTick int64) {
tb.availTick = currentTick
return
}
func abs(f float64) float64 {
if f < 0 {
return -f
}
return f
}
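
As an aside (not part of the diff): a minimal usage sketch of the token-bucket API documented above, using only the two functions the vendored docs show, `NewBucketWithRate` and `Writer` — one token per byte, so the bucket rate caps write throughput:

```go
package main

import (
	"io"
	"os"
	"strings"

	"github.com/juju/ratelimit"
)

func main() {
	// ~64 KiB/s sustained rate, with a 64 KiB burst capacity.
	bucket := ratelimit.NewBucketWithRate(64*1024, 64*1024)
	// Wrap stdout so writes block until enough tokens are available.
	limited := ratelimit.Writer(os.Stdout, bucket)
	io.Copy(limited, strings.NewReader("rate limited output\n"))
}
```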


@@ -4,7 +4,9 @@
There is sometimes utility in finding the current executable file
that is running. This can be used for upgrading the current executable
or finding resources located relative to the executable file.
or finding resources located relative to the executable file. Both
working directory and the os.Args[0] value are arbitrary and cannot
be relied on; os.Args[0] can be "faked".
Multi-platform and supports:
* Linux


@@ -16,12 +16,12 @@ func Executable() (string, error) {
}
// Returns same path as Executable, returns just the folder
// path. Excludes the executable name.
// path. Excludes the executable name and any trailing slash.
func ExecutableFolder() (string, error) {
p, err := Executable()
if err != nil {
return "", err
}
folder, _ := filepath.Split(p)
return folder, nil
return filepath.Dir(p), nil
}
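
A small sketch (again, not part of the diff) contrasting `os.Args[0]` with the osext calls changed here; per the README text above, only the latter can be relied on:

```go
package main

import (
	"fmt"
	"os"

	"github.com/kardianos/osext"
)

func main() {
	fmt.Println("argv[0]:", os.Args[0]) // arbitrary; can be faked by the caller
	if p, err := osext.Executable(); err == nil {
		fmt.Println("executable:", p)
	}
	if d, err := osext.ExecutableFolder(); err == nil {
		fmt.Println("folder:", d) // no trailing separator, per the change above
	}
}
```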


@@ -17,12 +17,14 @@ import (
func executable() (string, error) {
switch runtime.GOOS {
case "linux":
const deletedSuffix = " (deleted)"
const deletedTag = " (deleted)"
execpath, err := os.Readlink("/proc/self/exe")
if err != nil {
return execpath, err
}
return strings.TrimSuffix(execpath, deletedSuffix), nil
execpath = strings.TrimSuffix(execpath, deletedTag)
execpath = strings.TrimPrefix(execpath, deletedTag)
return execpath, nil
case "netbsd":
return os.Readlink("/proc/curproc/exe")
case "openbsd", "dragonfly":


@@ -24,6 +24,29 @@ const (
executableEnvValueDelete = "delete"
)
func TestPrintExecutable(t *testing.T) {
ef, err := Executable()
if err != nil {
t.Fatalf("Executable failed: %v", err)
}
t.Log("Executable:", ef)
}
func TestPrintExecutableFolder(t *testing.T) {
ef, err := ExecutableFolder()
if err != nil {
t.Fatalf("ExecutableFolder failed: %v", err)
}
t.Log("Executable Folder:", ef)
}
func TestExecutableFolder(t *testing.T) {
ef, err := ExecutableFolder()
if err != nil {
t.Fatalf("ExecutableFolder failed: %v", err)
}
if ef[len(ef)-1] == filepath.Separator {
t.Fatal("ExecutableFolder ends with a trailing slash.")
}
}
func TestExecutableMatch(t *testing.T) {
ep, err := Executable()
if err != nil {


@@ -31,15 +31,16 @@ func (t *TestModel) Index(deviceID DeviceID, folder string, files []FileInfo, fl
func (t *TestModel) IndexUpdate(deviceID DeviceID, folder string, files []FileInfo, flags uint32, options []Option) {
}
func (t *TestModel) Request(deviceID DeviceID, folder, name string, offset int64, size int, hash []byte, flags uint32, options []Option) ([]byte, error) {
func (t *TestModel) Request(deviceID DeviceID, folder, name string, offset int64, hash []byte, flags uint32, options []Option, buf []byte) error {
t.folder = folder
t.name = name
t.offset = offset
t.size = size
t.size = len(buf)
t.hash = hash
t.flags = flags
t.options = options
return t.data, nil
copy(buf, t.data)
return nil
}
func (t *TestModel) Close(deviceID DeviceID, err error) {


@@ -0,0 +1,23 @@
// Copyright (C) 2015 The Protocol Authors.
package protocol
import "testing"
func TestWinsConflict(t *testing.T) {
testcases := [][2]FileInfo{
// The first should always win over the second
{{Modified: 42}, {Modified: 41}},
{{Modified: 41}, {Modified: 42, Flags: FlagDeleted}},
{{Modified: 41, Version: Vector{{42, 2}, {43, 1}}}, {Modified: 41, Version: Vector{{42, 1}, {43, 2}}}},
}
for _, tc := range testcases {
if !tc[0].WinsConflict(tc[1]) {
t.Errorf("%v should win over %v", tc[0], tc[1])
}
if tc[1].WinsConflict(tc[0]) {
t.Errorf("%v should not win over %v", tc[1], tc[0])
}
}
}


@@ -0,0 +1,70 @@
// Copyright (C) 2015 The Protocol Authors.
// +build gofuzz
package protocol
import (
"bytes"
"encoding/binary"
"encoding/hex"
"fmt"
"reflect"
"sync"
)
func Fuzz(data []byte) int {
// Regenerate the length, or we'll most commonly exit quickly due to an
// unexpected eof which is uninteresting.
if len(data) > 8 {
binary.BigEndian.PutUint32(data[4:], uint32(len(data))-8)
}
// Setup a rawConnection we'll use to parse the message.
c := rawConnection{
cr: &countingReader{Reader: bytes.NewReader(data)},
closed: make(chan struct{}),
pool: sync.Pool{
New: func() interface{} {
return make([]byte, BlockSize)
},
},
}
// Attempt to parse the message.
hdr, msg, err := c.readMessage()
if err != nil {
return 0
}
// If parsing worked, attempt to encode it again.
newBs, err := msg.AppendXDR(nil)
if err != nil {
panic("not encodable")
}
// Create an appropriate header for the re-encoding.
newMsg := make([]byte, 8)
binary.BigEndian.PutUint32(newMsg, encodeHeader(hdr))
binary.BigEndian.PutUint32(newMsg[4:], uint32(len(newBs)))
newMsg = append(newMsg, newBs...)
// Use the rawConnection to parse the re-encoding.
c.cr = &countingReader{Reader: bytes.NewReader(newMsg)}
hdr2, msg2, err := c.readMessage()
if err != nil {
fmt.Println("Initial:\n" + hex.Dump(data))
fmt.Println("New:\n" + hex.Dump(newMsg))
panic("not parseable after re-encode: " + err.Error())
}
// Make sure the data is the same as it was before.
if hdr != hdr2 {
panic("headers differ")
}
if !reflect.DeepEqual(msg, msg2) {
panic("contents differ")
}
return 1
}


@@ -0,0 +1,89 @@
// Copyright (C) 2015 The Protocol Authors.
// +build gofuzz
package protocol
import (
"encoding/binary"
"fmt"
"io/ioutil"
"os"
"strings"
"testing"
"testing/quick"
)
// This can be used to generate a corpus of valid messages as a starting point
// for the fuzzer.
func TestGenerateCorpus(t *testing.T) {
t.Skip("Use to generate initial corpus only")
n := 0
check := func(idx IndexMessage) bool {
for i := range idx.Options {
if len(idx.Options[i].Key) > 64 {
idx.Options[i].Key = idx.Options[i].Key[:64]
}
}
hdr := header{
version: 0,
msgID: 42,
msgType: messageTypeIndex,
compression: false,
}
msgBs := idx.MustMarshalXDR()
buf := make([]byte, 8)
binary.BigEndian.PutUint32(buf, encodeHeader(hdr))
binary.BigEndian.PutUint32(buf[4:], uint32(len(msgBs)))
buf = append(buf, msgBs...)
ioutil.WriteFile(fmt.Sprintf("testdata/corpus/test-%03d.xdr", n), buf, 0644)
n++
return true
}
if err := quick.Check(check, &quick.Config{MaxCount: 1000}); err != nil {
t.Fatal(err)
}
}
// Tests any crashers found by the fuzzer, for closer investigation.
func TestCrashers(t *testing.T) {
testFiles(t, "testdata/crashers")
}
// Tests the entire corpus, which should PASS before the fuzzer starts
// fuzzing.
func TestCorpus(t *testing.T) {
testFiles(t, "testdata/corpus")
}
func testFiles(t *testing.T, dir string) {
fd, err := os.Open(dir)
if err != nil {
t.Fatal(err)
}
crashers, err := fd.Readdirnames(-1)
if err != nil {
t.Fatal(err)
}
for _, name := range crashers {
if strings.HasSuffix(name, ".output") {
continue
}
if strings.HasSuffix(name, ".quoted") {
continue
}
t.Log(name)
crasher, err := ioutil.ReadFile(dir + "/" + name)
if err != nil {
t.Fatal(err)
}
Fuzz(crasher)
}
}


@@ -9,7 +9,7 @@ import "fmt"
type IndexMessage struct {
Folder string
Files []FileInfo
Files []FileInfo // max:1000000
Flags uint32
Options []Option // max:64
}
@@ -20,7 +20,7 @@ type FileInfo struct {
Modified int64
Version Vector
LocalVersion int64
Blocks []BlockInfo
Blocks []BlockInfo // max:1000000
}
func (f FileInfo) String() string {
@@ -58,6 +58,31 @@ func (f FileInfo) HasPermissionBits() bool {
return f.Flags&FlagNoPermBits == 0
}
// WinsConflict returns true if "f" is the one to choose when it is in
// conflict with "other".
func (f FileInfo) WinsConflict(other FileInfo) bool {
// If a modification is in conflict with a delete, we pick the
// modification.
if !f.IsDeleted() && other.IsDeleted() {
return true
}
if f.IsDeleted() && !other.IsDeleted() {
return false
}
// The one with the newer modification time wins.
if f.Modified > other.Modified {
return true
}
if f.Modified < other.Modified {
return false
}
// The modification times were equal. Use the device ID in the version
// vector as tie breaker.
return f.Version.Compare(other.Version) == ConcurrentGreater
}
type BlockInfo struct {
Offset int64 // noencode (cache only)
Size int32
@@ -84,9 +109,9 @@ type ResponseMessage struct {
}
type ClusterConfigMessage struct {
ClientName string // max:64
ClientVersion string // max:64
Folders []Folder
ClientName string // max:64
ClientVersion string // max:64
Folders []Folder // max:1000000
Options []Option // max:64
}
@@ -100,8 +125,8 @@ func (o *ClusterConfigMessage) GetOption(key string) string {
}
type Folder struct {
ID string // max:64
Devices []Device
ID string // max:64
Devices []Device // max:1000000
Flags uint32
Options []Option // max:64
}


@@ -42,7 +42,7 @@ IndexMessage Structure:
struct IndexMessage {
string Folder<>;
FileInfo Files<>;
FileInfo Files<1000000>;
unsigned int Flags;
Option Options<64>;
}
@@ -75,6 +75,9 @@ func (o IndexMessage) AppendXDR(bs []byte) ([]byte, error) {
func (o IndexMessage) EncodeXDRInto(xw *xdr.Writer) (int, error) {
xw.WriteString(o.Folder)
if l := len(o.Files); l > 1000000 {
return xw.Tot(), xdr.ElementSizeExceeded("Files", l, 1000000)
}
xw.WriteUint32(uint32(len(o.Files)))
for i := range o.Files {
_, err := o.Files[i].EncodeXDRInto(xw)
@@ -111,7 +114,10 @@ func (o *IndexMessage) DecodeXDRFrom(xr *xdr.Reader) error {
o.Folder = xr.ReadString()
_FilesSize := int(xr.ReadUint32())
if _FilesSize < 0 {
return xdr.ElementSizeExceeded("Files", _FilesSize, 0)
return xdr.ElementSizeExceeded("Files", _FilesSize, 1000000)
}
if _FilesSize > 1000000 {
return xdr.ElementSizeExceeded("Files", _FilesSize, 1000000)
}
o.Files = make([]FileInfo, _FilesSize)
for i := range o.Files {
@@ -173,7 +179,7 @@ struct FileInfo {
hyper Modified;
Vector Version;
hyper LocalVersion;
BlockInfo Blocks<>;
BlockInfo Blocks<1000000>;
}
*/
@@ -214,6 +220,9 @@ func (o FileInfo) EncodeXDRInto(xw *xdr.Writer) (int, error) {
return xw.Tot(), err
}
xw.WriteUint64(uint64(o.LocalVersion))
if l := len(o.Blocks); l > 1000000 {
return xw.Tot(), xdr.ElementSizeExceeded("Blocks", l, 1000000)
}
xw.WriteUint32(uint32(len(o.Blocks)))
for i := range o.Blocks {
_, err := o.Blocks[i].EncodeXDRInto(xw)
@@ -243,7 +252,10 @@ func (o *FileInfo) DecodeXDRFrom(xr *xdr.Reader) error {
o.LocalVersion = int64(xr.ReadUint64())
_BlocksSize := int(xr.ReadUint32())
if _BlocksSize < 0 {
return xdr.ElementSizeExceeded("Blocks", _BlocksSize, 0)
return xdr.ElementSizeExceeded("Blocks", _BlocksSize, 1000000)
}
if _BlocksSize > 1000000 {
return xdr.ElementSizeExceeded("Blocks", _BlocksSize, 1000000)
}
o.Blocks = make([]BlockInfo, _BlocksSize)
for i := range o.Blocks {
@@ -571,7 +583,7 @@ ClusterConfigMessage Structure:
struct ClusterConfigMessage {
string ClientName<64>;
string ClientVersion<64>;
Folder Folders<>;
Folder Folders<1000000>;
Option Options<64>;
}
@@ -610,6 +622,9 @@ func (o ClusterConfigMessage) EncodeXDRInto(xw *xdr.Writer) (int, error) {
return xw.Tot(), xdr.ElementSizeExceeded("ClientVersion", l, 64)
}
xw.WriteString(o.ClientVersion)
if l := len(o.Folders); l > 1000000 {
return xw.Tot(), xdr.ElementSizeExceeded("Folders", l, 1000000)
}
xw.WriteUint32(uint32(len(o.Folders)))
for i := range o.Folders {
_, err := o.Folders[i].EncodeXDRInto(xw)
@@ -646,7 +661,10 @@ func (o *ClusterConfigMessage) DecodeXDRFrom(xr *xdr.Reader) error {
o.ClientVersion = xr.ReadStringMax(64)
_FoldersSize := int(xr.ReadUint32())
if _FoldersSize < 0 {
return xdr.ElementSizeExceeded("Folders", _FoldersSize, 0)
return xdr.ElementSizeExceeded("Folders", _FoldersSize, 1000000)
}
if _FoldersSize > 1000000 {
return xdr.ElementSizeExceeded("Folders", _FoldersSize, 1000000)
}
o.Folders = make([]Folder, _FoldersSize)
for i := range o.Folders {
@@ -697,7 +715,7 @@ Folder Structure:
struct Folder {
string ID<64>;
Device Devices<>;
Device Devices<1000000>;
unsigned int Flags;
Option Options<64>;
}
@@ -733,6 +751,9 @@ func (o Folder) EncodeXDRInto(xw *xdr.Writer) (int, error) {
return xw.Tot(), xdr.ElementSizeExceeded("ID", l, 64)
}
xw.WriteString(o.ID)
if l := len(o.Devices); l > 1000000 {
return xw.Tot(), xdr.ElementSizeExceeded("Devices", l, 1000000)
}
xw.WriteUint32(uint32(len(o.Devices)))
for i := range o.Devices {
_, err := o.Devices[i].EncodeXDRInto(xw)
@@ -769,7 +790,10 @@ func (o *Folder) DecodeXDRFrom(xr *xdr.Reader) error {
o.ID = xr.ReadStringMax(64)
_DevicesSize := int(xr.ReadUint32())
if _DevicesSize < 0 {
return xdr.ElementSizeExceeded("Devices", _DevicesSize, 0)
return xdr.ElementSizeExceeded("Devices", _DevicesSize, 1000000)
}
if _DevicesSize > 1000000 {
return xdr.ElementSizeExceeded("Devices", _DevicesSize, 1000000)
}
o.Devices = make([]Device, _DevicesSize)
for i := range o.Devices {


@@ -26,9 +26,9 @@ func (m nativeModel) IndexUpdate(deviceID DeviceID, folder string, files []FileI
m.next.IndexUpdate(deviceID, folder, files, flags, options)
}
func (m nativeModel) Request(deviceID DeviceID, folder string, name string, offset int64, size int, hash []byte, flags uint32, options []Option) ([]byte, error) {
func (m nativeModel) Request(deviceID DeviceID, folder string, name string, offset int64, hash []byte, flags uint32, options []Option, buf []byte) error {
name = norm.NFD.String(name)
return m.next.Request(deviceID, folder, name, offset, size, hash, flags, options)
return m.next.Request(deviceID, folder, name, offset, hash, flags, options, buf)
}
func (m nativeModel) ClusterConfig(deviceID DeviceID, config ClusterConfigMessage) {


@@ -18,8 +18,8 @@ func (m nativeModel) IndexUpdate(deviceID DeviceID, folder string, files []FileI
m.next.IndexUpdate(deviceID, folder, files, flags, options)
}
func (m nativeModel) Request(deviceID DeviceID, folder string, name string, offset int64, size int, hash []byte, flags uint32, options []Option) ([]byte, error) {
return m.next.Request(deviceID, folder, name, offset, size, hash, flags, options)
func (m nativeModel) Request(deviceID DeviceID, folder string, name string, offset int64, hash []byte, flags uint32, options []Option, buf []byte) error {
return m.next.Request(deviceID, folder, name, offset, hash, flags, options, buf)
}
func (m nativeModel) ClusterConfig(deviceID DeviceID, config ClusterConfigMessage) {


@@ -25,18 +25,18 @@ type nativeModel struct {
}
func (m nativeModel) Index(deviceID DeviceID, folder string, files []FileInfo, flags uint32, options []Option) {
fixupFiles(files)
fixupFiles(folder, files)
m.next.Index(deviceID, folder, files, flags, options)
}
func (m nativeModel) IndexUpdate(deviceID DeviceID, folder string, files []FileInfo, flags uint32, options []Option) {
fixupFiles(files)
fixupFiles(folder, files)
m.next.IndexUpdate(deviceID, folder, files, flags, options)
}
func (m nativeModel) Request(deviceID DeviceID, folder string, name string, offset int64, size int, hash []byte, flags uint32, options []Option) ([]byte, error) {
func (m nativeModel) Request(deviceID DeviceID, folder string, name string, offset int64, hash []byte, flags uint32, options []Option, buf []byte) error {
name = filepath.FromSlash(name)
return m.next.Request(deviceID, folder, name, offset, size, hash, flags, options)
return m.next.Request(deviceID, folder, name, offset, hash, flags, options, buf)
}
func (m nativeModel) ClusterConfig(deviceID DeviceID, config ClusterConfigMessage) {
@@ -47,7 +47,7 @@ func (m nativeModel) Close(deviceID DeviceID, err error) {
m.next.Close(deviceID, err)
}
func fixupFiles(files []FileInfo) {
func fixupFiles(folder string, files []FileInfo) {
for i, f := range files {
if strings.ContainsAny(f.Name, disallowedCharacters) {
if f.IsDeleted() {
@@ -56,7 +56,7 @@ func fixupFiles(files []FileInfo) {
continue
}
files[i].Flags |= FlagInvalid
l.Warnf("File name %q contains invalid characters; marked as invalid.", f.Name)
l.Warnf("File name %q (folder %q) contains invalid characters; marked as invalid.", f.Name, folder)
}
files[i].Name = filepath.FromSlash(files[i].Name)
}


@@ -15,7 +15,11 @@ import (
)
const (
BlockSize = 128 * 1024
// Data block size (128 KiB)
BlockSize = 128 << 10
// We reject messages larger than this when encountered on the wire. (64 MiB)
MaxMessageLen = 64 << 20
)
const (
@@ -31,8 +35,7 @@ const (
const (
stateInitial = iota
stateCCRcvd
stateIdxRcvd
stateReady
)
// FileInfo flags
@@ -82,7 +85,7 @@ type Model interface {
// An index update was received from the peer device
IndexUpdate(deviceID DeviceID, folder string, files []FileInfo, flags uint32, options []Option)
// A request was made by the peer device
Request(deviceID DeviceID, folder string, name string, offset int64, size int, hash []byte, flags uint32, options []Option) ([]byte, error)
Request(deviceID DeviceID, folder string, name string, offset int64, hash []byte, flags uint32, options []Option, buf []byte) error
// A cluster configuration message was received
ClusterConfig(deviceID DeviceID, config ClusterConfigMessage)
// The peer device closed the connection
@@ -90,6 +93,7 @@ type Model interface {
}
type Connection interface {
Start()
ID() DeviceID
Name() string
Index(folder string, files []FileInfo, flags uint32, options []Option) error
@@ -103,7 +107,6 @@ type rawConnection struct {
id DeviceID
name string
receiver Model
state int
cr *countingReader
cw *countingWriter
@@ -113,11 +116,11 @@ type rawConnection struct {
idxMut sync.Mutex // ensures serialization of Index calls
nextID chan int
outbox chan hdrMsg
closed chan struct{}
once sync.Once
nextID chan int
outbox chan hdrMsg
closed chan struct{}
once sync.Once
pool sync.Pool
compression Compression
rdbuf0 []byte // used & reused by readMessage
@@ -130,8 +133,9 @@ type asyncResult struct {
}
type hdrMsg struct {
hdr header
msg encodable
hdr header
msg encodable
done chan struct{}
}
type encodable interface {
@@ -142,9 +146,9 @@ type isEofer interface {
IsEOF() bool
}
const (
pingTimeout = 30 * time.Second
pingIdleTime = 60 * time.Second
var (
PingTimeout = 30 * time.Second
PingIdleTime = 60 * time.Second
)
func NewConnection(deviceID DeviceID, reader io.Reader, writer io.Writer, receiver Model, name string, compress Compression) Connection {
@@ -152,24 +156,32 @@ func NewConnection(deviceID DeviceID, reader io.Reader, writer io.Writer, receiv
cw := &countingWriter{Writer: writer}
c := rawConnection{
id: deviceID,
name: name,
receiver: nativeModel{receiver},
state: stateInitial,
cr: cr,
cw: cw,
outbox: make(chan hdrMsg),
nextID: make(chan int),
closed: make(chan struct{}),
id: deviceID,
name: name,
receiver: nativeModel{receiver},
cr: cr,
cw: cw,
outbox: make(chan hdrMsg),
nextID: make(chan int),
closed: make(chan struct{}),
pool: sync.Pool{
New: func() interface{} {
return make([]byte, BlockSize)
},
},
compression: compress,
}
return wireFormatConnection{&c}
}
// Start creates the goroutines for sending and receiving of messages. It must
// be called exactly once after creating a connection.
func (c *rawConnection) Start() {
go c.readerLoop()
go c.writerLoop()
go c.pingerLoop()
go c.idGenerator()
return wireFormatConnection{&c}
}
func (c *rawConnection) ID() DeviceID {
@@ -193,7 +205,7 @@ func (c *rawConnection) Index(folder string, idx []FileInfo, flags uint32, optio
Files: idx,
Flags: flags,
Options: options,
})
}, nil)
c.idxMut.Unlock()
return nil
}
@@ -211,7 +223,7 @@ func (c *rawConnection) IndexUpdate(folder string, idx []FileInfo, flags uint32,
Files: idx,
Flags: flags,
Options: options,
})
}, nil)
c.idxMut.Unlock()
return nil
}
@@ -241,7 +253,7 @@ func (c *rawConnection) Request(folder string, name string, offset int64, size i
Hash: hash,
Flags: flags,
Options: options,
})
}, nil)
if !ok {
return nil, ErrClosed
}
@@ -255,7 +267,7 @@ func (c *rawConnection) Request(folder string, name string, offset int64, size i
// ClusterConfig send the cluster configuration message to the peer and returns any error
func (c *rawConnection) ClusterConfig(config ClusterConfigMessage) {
c.send(-1, messageTypeClusterConfig, config)
c.send(-1, messageTypeClusterConfig, config, nil)
}
func (c *rawConnection) ping() bool {
@@ -271,7 +283,7 @@ func (c *rawConnection) ping() bool {
c.awaiting[id] = rc
c.awaitingMut.Unlock()
ok := c.send(id, messageTypePing, nil)
ok := c.send(id, messageTypePing, nil, nil)
if !ok {
return false
}
@@ -285,6 +297,7 @@ func (c *rawConnection) readerLoop() (err error) {
c.close(err)
}()
state := stateInitial
for {
select {
case <-c.closed:
@@ -298,47 +311,54 @@ func (c *rawConnection) readerLoop() (err error) {
}
switch msg := msg.(type) {
case ClusterConfigMessage:
if state != stateInitial {
return fmt.Errorf("protocol error: cluster config message in state %d", state)
}
go c.receiver.ClusterConfig(c.id, msg)
state = stateReady
case IndexMessage:
switch hdr.msgType {
case messageTypeIndex:
if c.state < stateCCRcvd {
return fmt.Errorf("protocol error: index message in state %d", c.state)
if state != stateReady {
return fmt.Errorf("protocol error: index message in state %d", state)
}
c.handleIndex(msg)
c.state = stateIdxRcvd
state = stateReady
case messageTypeIndexUpdate:
if c.state < stateIdxRcvd {
return fmt.Errorf("protocol error: index update message in state %d", c.state)
if state != stateReady {
return fmt.Errorf("protocol error: index update message in state %d", state)
}
c.handleIndexUpdate(msg)
state = stateReady
}
case RequestMessage:
if c.state < stateIdxRcvd {
return fmt.Errorf("protocol error: request message in state %d", c.state)
if state != stateReady {
return fmt.Errorf("protocol error: request message in state %d", state)
}
// Requests are handled asynchronously
go c.handleRequest(hdr.msgID, msg)
case ResponseMessage:
if c.state < stateIdxRcvd {
return fmt.Errorf("protocol error: response message in state %d", c.state)
if state != stateReady {
return fmt.Errorf("protocol error: response message in state %d", state)
}
c.handleResponse(hdr.msgID, msg)
case pingMessage:
c.send(hdr.msgID, messageTypePong, pongMessage{})
if state != stateReady {
return fmt.Errorf("protocol error: ping message in state %d", state)
}
c.send(hdr.msgID, messageTypePong, pongMessage{}, nil)
case pongMessage:
c.handlePong(hdr.msgID)
case ClusterConfigMessage:
if c.state != stateInitial {
return fmt.Errorf("protocol error: cluster config message in state %d", c.state)
if state != stateReady {
return fmt.Errorf("protocol error: pong message in state %d", state)
}
go c.receiver.ClusterConfig(c.id, msg)
c.state = stateCCRcvd
c.handlePong(hdr.msgID)
case CloseMessage:
return errors.New(msg.Reason)
@@ -367,6 +387,11 @@ func (c *rawConnection) readMessage() (hdr header, msg encodable, err error) {
l.Debugf("read header %v (msglen=%d)", hdr, msglen)
}
if msglen > MaxMessageLen {
err = fmt.Errorf("message length %d exceeds maximum %d", msglen, MaxMessageLen)
return
}
if hdr.version != 0 {
err = fmt.Errorf("unknown protocol version 0x%x", hdr.version)
return
@@ -387,7 +412,7 @@ func (c *rawConnection) readMessage() (hdr header, msg encodable, err error) {
}
msgBuf := c.rdbuf0
if hdr.compression {
if hdr.compression && msglen > 0 {
c.rdbuf1 = c.rdbuf1[:cap(c.rdbuf1)]
c.rdbuf1, err = lz4.Decode(c.rdbuf1, c.rdbuf0)
if err != nil {
@@ -509,12 +534,36 @@ func filterIndexMessageFiles(fs []FileInfo) []FileInfo {
}
func (c *rawConnection) handleRequest(msgID int, req RequestMessage) {
data, err := c.receiver.Request(c.id, req.Folder, req.Name, int64(req.Offset), int(req.Size), req.Hash, req.Flags, req.Options)
size := int(req.Size)
usePool := size <= BlockSize
c.send(msgID, messageTypeResponse, ResponseMessage{
Data: data,
Code: errorToCode(err),
})
var buf []byte
var done chan struct{}
if usePool {
buf = c.pool.Get().([]byte)[:size]
done = make(chan struct{})
} else {
buf = make([]byte, size)
}
err := c.receiver.Request(c.id, req.Folder, req.Name, int64(req.Offset), req.Hash, req.Flags, req.Options, buf)
if err != nil {
c.send(msgID, messageTypeResponse, ResponseMessage{
Data: nil,
Code: errorToCode(err),
}, done)
} else {
c.send(msgID, messageTypeResponse, ResponseMessage{
Data: buf,
Code: errorToCode(err),
}, done)
}
if usePool {
<-done
c.pool.Put(buf)
}
}
func (c *rawConnection) handleResponse(msgID int, resp ResponseMessage) {
@@ -537,7 +586,7 @@ func (c *rawConnection) handlePong(msgID int) {
c.awaitingMut.Unlock()
}
func (c *rawConnection) send(msgID int, msgType int, msg encodable) bool {
func (c *rawConnection) send(msgID int, msgType int, msg encodable, done chan struct{}) bool {
if msgID < 0 {
select {
case id := <-c.nextID:
@@ -554,7 +603,7 @@ func (c *rawConnection) send(msgID int, msgType int, msg encodable) bool {
}
select {
case c.outbox <- hdrMsg{hdr, msg}:
case c.outbox <- hdrMsg{hdr, msg, done}:
return true
case <-c.closed:
return false
@@ -573,6 +622,9 @@ func (c *rawConnection) writerLoop() {
if hm.msg != nil {
// Uncompressed message in uncBuf
uncBuf, err = hm.msg.AppendXDR(uncBuf[:0])
if hm.done != nil {
close(hm.done)
}
if err != nil {
c.close(err)
return
@@ -679,17 +731,17 @@ func (c *rawConnection) idGenerator() {
func (c *rawConnection) pingerLoop() {
var rc = make(chan bool, 1)
ticker := time.Tick(pingIdleTime / 2)
ticker := time.Tick(PingIdleTime / 2)
for {
select {
case <-ticker:
if d := time.Since(c.cr.Last()); d < pingIdleTime {
if d := time.Since(c.cr.Last()); d < PingIdleTime {
if debug {
l.Debugln(c.id, "ping skipped after rd", d)
}
continue
}
if d := time.Since(c.cw.Last()); d < pingIdleTime {
if d := time.Since(c.cw.Last()); d < PingIdleTime {
if debug {
l.Debugln(c.id, "ping skipped after wr", d)
}
@@ -709,7 +761,7 @@ func (c *rawConnection) pingerLoop() {
if !ok {
c.close(fmt.Errorf("ping failure"))
}
case <-time.After(pingTimeout):
case <-time.After(PingTimeout):
c.close(fmt.Errorf("ping timeout"))
case <-c.closed:
return
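
An editorial sketch of the buffer reuse this file's changes introduce, reduced to its essentials with hypothetical `handle`/`encode` names: block-sized response buffers come from a `sync.Pool` and go back only after the writer signals, via the `done` channel, that it has serialized them.

```go
package main

import "sync"

const BlockSize = 128 << 10 // as in the protocol constants above

var pool = sync.Pool{
	New: func() interface{} { return make([]byte, BlockSize) },
}

// handle mimics handleRequest: take a pooled buffer for a block-sized
// request, hand it to the writer, and recycle it once the writer is done.
func handle(size int, encode func(buf []byte, done chan struct{})) {
	buf := pool.Get().([]byte)[:size]
	done := make(chan struct{})
	encode(buf, done) // the writer loop closes done after encoding buf
	<-done
	pool.Put(buf) // only now is it safe to reuse the buffer
}

func main() {
	handle(10, func(buf []byte, done chan struct{}) {
		copy(buf, "0123456789")
		close(done)
	})
}
```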


@@ -67,8 +67,12 @@ func TestPing(t *testing.T) {
ar, aw := io.Pipe()
br, bw := io.Pipe()
c0 := NewConnection(c0ID, ar, bw, nil, "name", CompressAlways).(wireFormatConnection).next.(*rawConnection)
c1 := NewConnection(c1ID, br, aw, nil, "name", CompressAlways).(wireFormatConnection).next.(*rawConnection)
c0 := NewConnection(c0ID, ar, bw, newTestModel(), "name", CompressAlways).(wireFormatConnection).next.(*rawConnection)
c0.Start()
c1 := NewConnection(c1ID, br, aw, newTestModel(), "name", CompressAlways).(wireFormatConnection).next.(*rawConnection)
c1.Start()
c0.ClusterConfig(ClusterConfigMessage{})
c1.ClusterConfig(ClusterConfigMessage{})
if ok := c0.ping(); !ok {
t.Error("c0 ping failed")
@@ -81,8 +85,8 @@ func TestPing(t *testing.T) {
func TestPingErr(t *testing.T) {
e := errors.New("something broke")
for i := 0; i < 16; i++ {
for j := 0; j < 16; j++ {
for i := 0; i < 32; i++ {
for j := 0; j < 32; j++ {
m0 := newTestModel()
m1 := newTestModel()
@@ -92,12 +96,18 @@ func TestPingErr(t *testing.T) {
ebw := &ErrPipe{PipeWriter: *bw, max: j, err: e}
c0 := NewConnection(c0ID, ar, ebw, m0, "name", CompressAlways).(wireFormatConnection).next.(*rawConnection)
NewConnection(c1ID, br, eaw, m1, "name", CompressAlways)
c0.Start()
c1 := NewConnection(c1ID, br, eaw, m1, "name", CompressAlways)
c1.Start()
c0.ClusterConfig(ClusterConfigMessage{})
c1.ClusterConfig(ClusterConfigMessage{})
res := c0.ping()
if (i < 8 || j < 8) && res {
// This should have resulted in failure, as there is no way an empty ClusterConfig plus a Ping message fits in eight bytes.
t.Errorf("Unexpected ping success; i=%d, j=%d", i, j)
} else if (i >= 12 && j >= 12) && !res {
} else if (i >= 28 && j >= 28) && !res {
// This should have worked though, as 28 bytes is plenty for both.
t.Errorf("Unexpected ping fail; i=%d, j=%d", i, j)
}
}
@@ -168,7 +178,11 @@ func TestVersionErr(t *testing.T) {
br, bw := io.Pipe()
c0 := NewConnection(c0ID, ar, bw, m0, "name", CompressAlways).(wireFormatConnection).next.(*rawConnection)
NewConnection(c1ID, br, aw, m1, "name", CompressAlways)
c0.Start()
c1 := NewConnection(c1ID, br, aw, m1, "name", CompressAlways)
c1.Start()
c0.ClusterConfig(ClusterConfigMessage{})
c1.ClusterConfig(ClusterConfigMessage{})
w := xdr.NewWriter(c0.cw)
w.WriteUint32(encodeHeader(header{
@@ -191,7 +205,11 @@ func TestTypeErr(t *testing.T) {
br, bw := io.Pipe()
c0 := NewConnection(c0ID, ar, bw, m0, "name", CompressAlways).(wireFormatConnection).next.(*rawConnection)
NewConnection(c1ID, br, aw, m1, "name", CompressAlways)
c0.Start()
c1 := NewConnection(c1ID, br, aw, m1, "name", CompressAlways)
c1.Start()
c0.ClusterConfig(ClusterConfigMessage{})
c1.ClusterConfig(ClusterConfigMessage{})
w := xdr.NewWriter(c0.cw)
w.WriteUint32(encodeHeader(header{
@@ -214,7 +232,11 @@ func TestClose(t *testing.T) {
br, bw := io.Pipe()
c0 := NewConnection(c0ID, ar, bw, m0, "name", CompressAlways).(wireFormatConnection).next.(*rawConnection)
NewConnection(c1ID, br, aw, m1, "name", CompressAlways)
c0.Start()
c1 := NewConnection(c1ID, br, aw, m1, "name", CompressAlways)
c1.Start()
c0.ClusterConfig(ClusterConfigMessage{})
c1.ClusterConfig(ClusterConfigMessage{})
c0.close(nil)

View File

@@ -2,6 +2,8 @@
package protocol
import "github.com/calmh/xdr"
// This stuff is hacked up manually because genxdr doesn't support 'type
// Vector []Counter' declarations and it was tricky when I tried to add it...
@@ -28,6 +30,9 @@ func (v Vector) EncodeXDRInto(w xdrWriter) (int, error) {
// DecodeXDRFrom decodes the XDR objects from the given reader into itself.
func (v *Vector) DecodeXDRFrom(r xdrReader) error {
l := int(r.ReadUint32())
if l > 1e6 {
return xdr.ElementSizeExceeded("number of counters", l, 1e6)
}
n := make(Vector, l)
for i := range n {
n[i].ID = r.ReadUint64()

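The new length guard above is the standard defense when decoding length-prefixed arrays from untrusted peers: reject absurd element counts before allocating. A small standard-library-only sketch of the same idea (the one-million limit mirrors the diff; decodeVectorLen is illustrative):

package main

import (
	"encoding/binary"
	"errors"
	"fmt"
)

const maxCounters = 1000000

func decodeVectorLen(b []byte) (int, error) {
	if len(b) < 4 {
		return 0, errors.New("short buffer")
	}
	l := int(binary.BigEndian.Uint32(b))
	if l > maxCounters {
		// Without this check a hostile peer could make us allocate
		// gigabytes from a four-byte length header.
		return 0, fmt.Errorf("number of counters %d exceeds maximum %d", l, maxCounters)
	}
	return l, nil
}

func main() {
	fmt.Println(decodeVectorLen([]byte{0xff, 0xff, 0xff, 0xff}))
}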
View File

@@ -12,6 +12,10 @@ type wireFormatConnection struct {
next Connection
}
func (c wireFormatConnection) Start() {
c.next.Start()
}
func (c wireFormatConnection) ID() DeviceID {
return c.next.ID()
}

View File

@@ -63,13 +63,14 @@ type DB struct {
journalAckC chan error
// Compaction.
tcompCmdC chan cCmd
tcompPauseC chan chan<- struct{}
mcompCmdC chan cCmd
compErrC chan error
compPerErrC chan error
compErrSetC chan error
compStats []cStats
tcompCmdC chan cCmd
tcompPauseC chan chan<- struct{}
mcompCmdC chan cCmd
compErrC chan error
compPerErrC chan error
compErrSetC chan error
compWriteLocking bool
compStats []cStats
// Close.
closeW sync.WaitGroup
@@ -108,28 +109,44 @@ func openDB(s *session) (*DB, error) {
closeC: make(chan struct{}),
}
if err := db.recoverJournal(); err != nil {
return nil, err
}
// Read-only mode.
readOnly := s.o.GetReadOnly()
// Remove any obsolete files.
if err := db.checkAndCleanFiles(); err != nil {
// Close journal.
if db.journal != nil {
db.journal.Close()
db.journalWriter.Close()
if readOnly {
// Recover journals (read-only mode).
if err := db.recoverJournalRO(); err != nil {
return nil, err
}
return nil, err
} else {
// Recover journals.
if err := db.recoverJournal(); err != nil {
return nil, err
}
// Remove any obsolete files.
if err := db.checkAndCleanFiles(); err != nil {
// Close journal.
if db.journal != nil {
db.journal.Close()
db.journalWriter.Close()
}
return nil, err
}
}
// Doesn't need to be included in the wait group.
go db.compactionError()
go db.mpoolDrain()
db.closeW.Add(3)
go db.tCompaction()
go db.mCompaction()
go db.jWriter()
if readOnly {
db.SetReadOnly()
} else {
db.closeW.Add(3)
go db.tCompaction()
go db.mCompaction()
go db.jWriter()
}
s.logf("db@open done T·%v", time.Since(start))
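Usage-wise, the new openDB branch means a caller can open a database without ever taking the write path: journals are recovered into a memdb only, and the compaction goroutines are never started. A hedged example (the path is illustrative); reads behave normally while any write fails with ErrReadOnly:

package main

import (
	"fmt"

	"github.com/syndtr/goleveldb/leveldb"
	"github.com/syndtr/goleveldb/leveldb/opt"
)

func main() {
	// Path is illustrative; the DB must already exist on disk.
	db, err := leveldb.OpenFile("/tmp/demo.db", &opt.Options{ReadOnly: true})
	if err != nil {
		fmt.Println("open:", err)
		return
	}
	defer db.Close()

	// Reads behave normally; any write is rejected.
	if err := db.Put([]byte("k"), []byte("v"), nil); err == leveldb.ErrReadOnly {
		fmt.Println("write rejected:", err)
	}
}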
@@ -274,8 +291,9 @@ func recoverTable(s *session, o *opt.Options) error {
// We will drop corrupted table.
strict = o.GetStrict(opt.StrictRecovery)
noSync = o.GetNoSync()
rec = &sessionRecord{numLevel: o.GetNumLevel()}
rec = &sessionRecord{}
bpool = util.NewBufferPool(o.GetBlockSize() + 5)
)
buildTable := func(iter iterator.Iterator) (tmp storage.File, size int64, err error) {
@@ -311,9 +329,11 @@ func recoverTable(s *session, o *opt.Options) error {
if err != nil {
return
}
err = writer.Sync()
if err != nil {
return
if !noSync {
err = writer.Sync()
if err != nil {
return
}
}
size = int64(tw.BytesLen())
return
@@ -450,132 +470,136 @@ func recoverTable(s *session, o *opt.Options) error {
}
func (db *DB) recoverJournal() error {
// Get all tables and sort it by file number.
journalFiles_, err := db.s.getFiles(storage.TypeJournal)
// Get all journals and sort them by file number.
allJournalFiles, err := db.s.getFiles(storage.TypeJournal)
if err != nil {
return err
}
journalFiles := files(journalFiles_)
journalFiles.sort()
files(allJournalFiles).sort()
// Discard older journal.
prev := -1
for i, file := range journalFiles {
if file.Num() >= db.s.stJournalNum {
if prev >= 0 {
i--
journalFiles[i] = journalFiles[prev]
}
journalFiles = journalFiles[i:]
break
} else if file.Num() == db.s.stPrevJournalNum {
prev = i
// Journals that will be recovered.
var recJournalFiles []storage.File
for _, jf := range allJournalFiles {
if jf.Num() >= db.s.stJournalNum || jf.Num() == db.s.stPrevJournalNum {
recJournalFiles = append(recJournalFiles, jf)
}
}
var jr *journal.Reader
var of storage.File
var mem *memdb.DB
batch := new(Batch)
cm := newCMem(db.s)
buf := new(util.Buffer)
// Options.
strict := db.s.o.GetStrict(opt.StrictJournal)
checksum := db.s.o.GetStrict(opt.StrictJournalChecksum)
writeBuffer := db.s.o.GetWriteBuffer()
recoverJournal := func(file storage.File) error {
db.logf("journal@recovery recovering @%d", file.Num())
reader, err := file.Open()
if err != nil {
return err
}
defer reader.Close()
var (
of storage.File // Obsolete file.
rec = &sessionRecord{}
)
// Create/reset journal reader instance.
if jr == nil {
jr = journal.NewReader(reader, dropper{db.s, file}, strict, checksum)
} else {
jr.Reset(reader, dropper{db.s, file}, strict, checksum)
}
// Flush memdb and remove obsolete journal file.
if of != nil {
if mem.Len() > 0 {
if err := cm.flush(mem, 0); err != nil {
return err
}
}
if err := cm.commit(file.Num(), db.seq); err != nil {
return err
}
cm.reset()
of.Remove()
of = nil
}
// Replay journal to memdb.
mem.Reset()
for {
r, err := jr.Next()
if err != nil {
if err == io.EOF {
break
}
return errors.SetFile(err, file)
}
buf.Reset()
if _, err := buf.ReadFrom(r); err != nil {
if err == io.ErrUnexpectedEOF {
// This is the error returned due to corruption when strict == false.
continue
} else {
return errors.SetFile(err, file)
}
}
if err := batch.memDecodeAndReplay(db.seq, buf.Bytes(), mem); err != nil {
if strict || !errors.IsCorrupted(err) {
return errors.SetFile(err, file)
} else {
db.s.logf("journal error: %v (skipped)", err)
// We won't apply sequence number as it might be corrupted.
continue
}
}
// Save sequence number.
db.seq = batch.seq + uint64(batch.Len())
// Flush it if large enough.
if mem.Size() >= writeBuffer {
if err := cm.flush(mem, 0); err != nil {
return err
}
mem.Reset()
}
}
of = file
return nil
}
// Recover all journals.
if len(journalFiles) > 0 {
db.logf("journal@recovery F·%d", len(journalFiles))
// Recover journals.
if len(recJournalFiles) > 0 {
db.logf("journal@recovery F·%d", len(recJournalFiles))
// Mark file number as used.
db.s.markFileNum(journalFiles[len(journalFiles)-1].Num())
db.s.markFileNum(recJournalFiles[len(recJournalFiles)-1].Num())
mem = memdb.New(db.s.icmp, writeBuffer)
for _, file := range journalFiles {
if err := recoverJournal(file); err != nil {
var (
// Options.
strict = db.s.o.GetStrict(opt.StrictJournal)
checksum = db.s.o.GetStrict(opt.StrictJournalChecksum)
writeBuffer = db.s.o.GetWriteBuffer()
jr *journal.Reader
mdb = memdb.New(db.s.icmp, writeBuffer)
buf = &util.Buffer{}
batch = &Batch{}
)
for _, jf := range recJournalFiles {
db.logf("journal@recovery recovering @%d", jf.Num())
fr, err := jf.Open()
if err != nil {
return err
}
// Create or reset journal reader instance.
if jr == nil {
jr = journal.NewReader(fr, dropper{db.s, jf}, strict, checksum)
} else {
jr.Reset(fr, dropper{db.s, jf}, strict, checksum)
}
// Flush memdb and remove obsolete journal file.
if of != nil {
if mdb.Len() > 0 {
if _, err := db.s.flushMemdb(rec, mdb, -1); err != nil {
fr.Close()
return err
}
}
rec.setJournalNum(jf.Num())
rec.setSeqNum(db.seq)
if err := db.s.commit(rec); err != nil {
fr.Close()
return err
}
rec.resetAddedTables()
of.Remove()
of = nil
}
// Replay journal to memdb.
mdb.Reset()
for {
r, err := jr.Next()
if err != nil {
if err == io.EOF {
break
}
fr.Close()
return errors.SetFile(err, jf)
}
buf.Reset()
if _, err := buf.ReadFrom(r); err != nil {
if err == io.ErrUnexpectedEOF {
// This is the error returned due to corruption when strict == false.
continue
}
fr.Close()
return errors.SetFile(err, jf)
}
if err := batch.memDecodeAndReplay(db.seq, buf.Bytes(), mdb); err != nil {
if !strict && errors.IsCorrupted(err) {
db.s.logf("journal error: %v (skipped)", err)
// We won't apply sequence number as it might be corrupted.
continue
}
fr.Close()
return errors.SetFile(err, jf)
}
// Save sequence number.
db.seq = batch.seq + uint64(batch.Len())
// Flush it if large enough.
if mdb.Size() >= writeBuffer {
if _, err := db.s.flushMemdb(rec, mdb, 0); err != nil {
fr.Close()
return err
}
mdb.Reset()
}
}
fr.Close()
of = jf
}
// Flush the last journal.
if mem.Len() > 0 {
if err := cm.flush(mem, 0); err != nil {
// Flush the last memdb.
if mdb.Len() > 0 {
if _, err := db.s.flushMemdb(rec, mdb, 0); err != nil {
return err
}
}
@@ -587,8 +611,10 @@ func (db *DB) recoverJournal() error {
}
// Commit.
if err := cm.commit(db.journalFile.Num(), db.seq); err != nil {
// Close journal.
rec.setJournalNum(db.journalFile.Num())
rec.setSeqNum(db.seq)
if err := db.s.commit(rec); err != nil {
// Close journal on error.
if db.journal != nil {
db.journal.Close()
db.journalWriter.Close()
@@ -604,6 +630,103 @@ func (db *DB) recoverJournal() error {
return nil
}
func (db *DB) recoverJournalRO() error {
// Get all journals and sort them by file number.
allJournalFiles, err := db.s.getFiles(storage.TypeJournal)
if err != nil {
return err
}
files(allJournalFiles).sort()
// Journals that will be recovered.
var recJournalFiles []storage.File
for _, jf := range allJournalFiles {
if jf.Num() >= db.s.stJournalNum || jf.Num() == db.s.stPrevJournalNum {
recJournalFiles = append(recJournalFiles, jf)
}
}
var (
// Options.
strict = db.s.o.GetStrict(opt.StrictJournal)
checksum = db.s.o.GetStrict(opt.StrictJournalChecksum)
writeBuffer = db.s.o.GetWriteBuffer()
mdb = memdb.New(db.s.icmp, writeBuffer)
)
// Recover journals.
if len(recJournalFiles) > 0 {
db.logf("journal@recovery RO·Mode F·%d", len(recJournalFiles))
var (
jr *journal.Reader
buf = &util.Buffer{}
batch = &Batch{}
)
for _, jf := range recJournalFiles {
db.logf("journal@recovery recovering @%d", jf.Num())
fr, err := jf.Open()
if err != nil {
return err
}
// Create or reset journal reader instance.
if jr == nil {
jr = journal.NewReader(fr, dropper{db.s, jf}, strict, checksum)
} else {
jr.Reset(fr, dropper{db.s, jf}, strict, checksum)
}
// Replay journal to memdb.
for {
r, err := jr.Next()
if err != nil {
if err == io.EOF {
break
}
fr.Close()
return errors.SetFile(err, jf)
}
buf.Reset()
if _, err := buf.ReadFrom(r); err != nil {
if err == io.ErrUnexpectedEOF {
// This is the error returned due to corruption when strict == false.
continue
}
fr.Close()
return errors.SetFile(err, jf)
}
if err := batch.memDecodeAndReplay(db.seq, buf.Bytes(), mdb); err != nil {
if !strict && errors.IsCorrupted(err) {
db.s.logf("journal error: %v (skipped)", err)
// We won't apply sequence number as it might be corrupted.
continue
}
fr.Close()
return errors.SetFile(err, jf)
}
// Save sequence number.
db.seq = batch.seq + uint64(batch.Len())
}
fr.Close()
}
}
// Set memDB.
db.mem = &memDB{db: db, DB: mdb, ref: 1}
return nil
}
func (db *DB) get(key []byte, seq uint64, ro *opt.ReadOptions) (value []byte, err error) {
ikey := newIkey(key, seq, ktSeek)
@@ -614,7 +737,7 @@ func (db *DB) get(key []byte, seq uint64, ro *opt.ReadOptions) (value []byte, er
}
defer m.decref()
mk, mv, me := m.mdb.Find(ikey)
mk, mv, me := m.Find(ikey)
if me == nil {
ukey, _, kt, kerr := parseIkey(mk)
if kerr != nil {
@@ -652,7 +775,7 @@ func (db *DB) has(key []byte, seq uint64, ro *opt.ReadOptions) (ret bool, err er
}
defer m.decref()
mk, _, me := m.mdb.Find(ikey)
mk, _, me := m.Find(ikey)
if me == nil {
ukey, _, kt, kerr := parseIkey(mk)
if kerr != nil {
@@ -784,7 +907,7 @@ func (db *DB) GetProperty(name string) (value string, err error) {
const prefix = "leveldb."
if !strings.HasPrefix(name, prefix) {
return "", errors.New("leveldb: GetProperty: unknown property: " + name)
return "", ErrNotFound
}
p := name[len(prefix):]
@@ -798,7 +921,7 @@ func (db *DB) GetProperty(name string) (value string, err error) {
var rest string
n, _ := fmt.Sscanf(p[len(numFilesPrefix):], "%d%s", &level, &rest)
if n != 1 || int(level) >= db.s.o.GetNumLevel() {
err = errors.New("leveldb: GetProperty: invalid property: " + name)
err = ErrNotFound
} else {
value = fmt.Sprint(v.tLen(int(level)))
}
@@ -837,7 +960,7 @@ func (db *DB) GetProperty(name string) (value string, err error) {
case p == "aliveiters":
value = fmt.Sprintf("%d", atomic.LoadInt32(&db.aliveIters))
default:
err = errors.New("leveldb: GetProperty: unknown property: " + name)
err = ErrNotFound
}
return
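With the changes above, GetProperty reports unknown or invalid property names uniformly as ErrNotFound instead of ad-hoc error strings, so callers can test for it directly. A brief sketch against an in-memory store:

package main

import (
	"fmt"

	"github.com/syndtr/goleveldb/leveldb"
	"github.com/syndtr/goleveldb/leveldb/storage"
)

func main() {
	db, err := leveldb.Open(storage.NewMemStorage(), nil)
	if err != nil {
		panic(err)
	}
	defer db.Close()

	if _, err := db.GetProperty("leveldb.no-such-property"); err == leveldb.ErrNotFound {
		fmt.Println("unknown property reported as ErrNotFound")
	}
}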
@@ -900,6 +1023,9 @@ func (db *DB) Close() error {
var err error
select {
case err = <-db.compErrC:
if err == ErrReadOnly {
err = nil
}
default:
}

View File

@@ -11,7 +11,6 @@ import (
"time"
"github.com/syndtr/goleveldb/leveldb/errors"
"github.com/syndtr/goleveldb/leveldb/memdb"
"github.com/syndtr/goleveldb/leveldb/opt"
)
@@ -62,58 +61,8 @@ func (p *cStatsStaging) stopTimer() {
}
}
type cMem struct {
s *session
level int
rec *sessionRecord
}
func newCMem(s *session) *cMem {
return &cMem{s: s, rec: &sessionRecord{numLevel: s.o.GetNumLevel()}}
}
func (c *cMem) flush(mem *memdb.DB, level int) error {
s := c.s
// Write memdb to table.
iter := mem.NewIterator(nil)
defer iter.Release()
t, n, err := s.tops.createFrom(iter)
if err != nil {
return err
}
// Pick level.
if level < 0 {
v := s.version()
level = v.pickLevel(t.imin.ukey(), t.imax.ukey())
v.release()
}
c.rec.addTableFile(level, t)
s.logf("mem@flush created L%d@%d N·%d S·%s %q:%q", level, t.file.Num(), n, shortenb(int(t.size)), t.imin, t.imax)
c.level = level
return nil
}
func (c *cMem) reset() {
c.rec = &sessionRecord{numLevel: c.s.o.GetNumLevel()}
}
func (c *cMem) commit(journal, seq uint64) error {
c.rec.setJournalNum(journal)
c.rec.setSeqNum(seq)
// Commit changes.
return c.s.commit(c.rec)
}
func (db *DB) compactionError() {
var (
err error
wlocked bool
)
var err error
noerr:
// No error.
for {
@@ -121,7 +70,7 @@ noerr:
case err = <-db.compErrSetC:
switch {
case err == nil:
case errors.IsCorrupted(err):
case err == ErrReadOnly, errors.IsCorrupted(err):
goto hasperr
default:
goto haserr
@@ -139,7 +88,7 @@ haserr:
switch {
case err == nil:
goto noerr
case errors.IsCorrupted(err):
case err == ErrReadOnly, errors.IsCorrupted(err):
goto hasperr
default:
}
@@ -155,9 +104,9 @@ hasperr:
case db.compPerErrC <- err:
case db.writeLockC <- struct{}{}:
// Hold write lock, so that write won't pass-through.
wlocked = true
db.compWriteLocking = true
case _, _ = <-db.closeC:
if wlocked {
if db.compWriteLocking {
// We should release the lock or Close will hang.
<-db.writeLockC
}
@@ -287,21 +236,18 @@ func (db *DB) compactionExitTransact() {
}
func (db *DB) memCompaction() {
mem := db.getFrozenMem()
if mem == nil {
mdb := db.getFrozenMem()
if mdb == nil {
return
}
defer mem.decref()
defer mdb.decref()
c := newCMem(db.s)
stats := new(cStatsStaging)
db.logf("mem@flush N·%d S·%s", mem.mdb.Len(), shortenb(mem.mdb.Size()))
db.logf("memdb@flush N·%d S·%s", mdb.Len(), shortenb(mdb.Size()))
// Don't compact empty memdb.
if mem.mdb.Len() == 0 {
db.logf("mem@flush skipping")
// drop frozen mem
if mdb.Len() == 0 {
db.logf("memdb@flush skipping")
// drop frozen memdb
db.dropFrozenMem()
return
}
@@ -317,13 +263,20 @@ func (db *DB) memCompaction() {
return
}
db.compactionTransactFunc("mem@flush", func(cnt *compactionTransactCounter) (err error) {
var (
rec = &sessionRecord{}
stats = &cStatsStaging{}
flushLevel int
)
db.compactionTransactFunc("memdb@flush", func(cnt *compactionTransactCounter) (err error) {
stats.startTimer()
defer stats.stopTimer()
return c.flush(mem.mdb, -1)
flushLevel, err = db.s.flushMemdb(rec, mdb.DB, -1)
stats.stopTimer()
return
}, func() error {
for _, r := range c.rec.addedTables {
db.logf("mem@flush revert @%d", r.num)
for _, r := range rec.addedTables {
db.logf("memdb@flush revert @%d", r.num)
f := db.s.getTableFile(r.num)
if err := f.Remove(); err != nil {
return err
@@ -332,20 +285,23 @@ func (db *DB) memCompaction() {
return nil
})
db.compactionTransactFunc("mem@commit", func(cnt *compactionTransactCounter) (err error) {
db.compactionTransactFunc("memdb@commit", func(cnt *compactionTransactCounter) (err error) {
stats.startTimer()
defer stats.stopTimer()
return c.commit(db.journalFile.Num(), db.frozenSeq)
rec.setJournalNum(db.journalFile.Num())
rec.setSeqNum(db.frozenSeq)
err = db.s.commit(rec)
stats.stopTimer()
return
}, nil)
db.logf("mem@flush committed F·%d T·%v", len(c.rec.addedTables), stats.duration)
db.logf("memdb@flush committed F·%d T·%v", len(rec.addedTables), stats.duration)
for _, r := range c.rec.addedTables {
for _, r := range rec.addedTables {
stats.write += r.size
}
db.compStats[c.level].add(stats)
db.compStats[flushLevel].add(stats)
// Drop frozen mem.
// Drop frozen memdb.
db.dropFrozenMem()
// Resume table compaction.
@@ -557,7 +513,7 @@ func (b *tableCompactionBuilder) revert() error {
func (db *DB) tableCompaction(c *compaction, noTrivial bool) {
defer c.release()
rec := &sessionRecord{numLevel: db.s.o.GetNumLevel()}
rec := &sessionRecord{}
rec.addCompPtr(c.level, c.imax)
if !noTrivial && c.trivial() {

View File

@@ -8,6 +8,7 @@ package leveldb
import (
"errors"
"math/rand"
"runtime"
"sync"
"sync/atomic"
@@ -39,11 +40,11 @@ func (db *DB) newRawIterator(slice *util.Range, ro *opt.ReadOptions) iterator.It
ti := v.getIterators(slice, ro)
n := len(ti) + 2
i := make([]iterator.Iterator, 0, n)
emi := em.mdb.NewIterator(slice)
emi := em.NewIterator(slice)
emi.SetReleaser(&memdbReleaser{m: em})
i = append(i, emi)
if fm != nil {
fmi := fm.mdb.NewIterator(slice)
fmi := fm.NewIterator(slice)
fmi.SetReleaser(&memdbReleaser{m: fm})
i = append(i, fmi)
}
@@ -80,6 +81,10 @@ func (db *DB) newIterator(seq uint64, slice *util.Range, ro *opt.ReadOptions) *d
return iter
}
func (db *DB) iterSamplingRate() int {
return rand.Intn(2 * db.s.o.GetIteratorSamplingRate())
}
type dir int
const (
@@ -98,11 +103,21 @@ type dbIter struct {
seq uint64
strict bool
dir dir
key []byte
value []byte
err error
releaser util.Releaser
smaplingGap int
dir dir
key []byte
value []byte
err error
releaser util.Releaser
}
func (i *dbIter) sampleSeek() {
ikey := i.iter.Key()
i.smaplingGap -= len(ikey) + len(i.iter.Value())
for i.smaplingGap < 0 {
i.smaplingGap += i.db.iterSamplingRate()
i.db.sampleSeek(ikey)
}
}
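The sampling logic above spends a byte budget (smaplingGap) as the iterator walks entries; whenever the budget underflows, it refills by a random amount around IteratorSamplingRate and records a seek sample, which can eventually trigger compaction of a hot table. A toy standalone model of that accounting (the rate and entry size are made up):

package main

import (
	"fmt"
	"math/rand"
)

// samplingRate is an assumed stand-in for opt.IteratorSamplingRate.
const samplingRate = 1 << 20

func main() {
	gap := rand.Intn(2 * samplingRate) // initial budget, as in iterSamplingRate
	samples := 0
	for i := 0; i < 100000; i++ {
		const entrySize = 64 // pretend every key+value pair is 64 bytes
		gap -= entrySize
		for gap < 0 {
			gap += rand.Intn(2 * samplingRate)
			samples++ // the real code calls db.sampleSeek(ikey) here
		}
	}
	fmt.Println("seek samples over 100000 entries:", samples)
}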
func (i *dbIter) setErr(err error) {
@@ -175,6 +190,7 @@ func (i *dbIter) Seek(key []byte) bool {
func (i *dbIter) next() bool {
for {
if ukey, seq, kt, kerr := parseIkey(i.iter.Key()); kerr == nil {
i.sampleSeek()
if seq <= i.seq {
switch kt {
case ktDel:
@@ -225,6 +241,7 @@ func (i *dbIter) prev() bool {
if i.iter.Valid() {
for {
if ukey, seq, kt, kerr := parseIkey(i.iter.Key()); kerr == nil {
i.sampleSeek()
if seq <= i.seq {
if !del && i.icmp.uCompare(ukey, i.key) < 0 {
return true
@@ -266,6 +283,7 @@ func (i *dbIter) Prev() bool {
case dirForward:
for i.iter.Prev() {
if ukey, _, _, kerr := parseIkey(i.iter.Key()); kerr == nil {
i.sampleSeek()
if i.icmp.uCompare(ukey, i.key) < 0 {
goto cont
}

View File

@@ -15,8 +15,8 @@ import (
)
type memDB struct {
db *DB
mdb *memdb.DB
db *DB
*memdb.DB
ref int32
}
@@ -27,12 +27,12 @@ func (m *memDB) incref() {
func (m *memDB) decref() {
if ref := atomic.AddInt32(&m.ref, -1); ref == 0 {
// Only put back memdb with std capacity.
if m.mdb.Capacity() == m.db.s.o.GetWriteBuffer() {
m.mdb.Reset()
m.db.mpoolPut(m.mdb)
if m.Capacity() == m.db.s.o.GetWriteBuffer() {
m.Reset()
m.db.mpoolPut(m.DB)
}
m.db = nil
m.mdb = nil
m.DB = nil
} else if ref < 0 {
panic("negative memdb ref")
}
@@ -48,6 +48,15 @@ func (db *DB) addSeq(delta uint64) {
atomic.AddUint64(&db.seq, delta)
}
func (db *DB) sampleSeek(ikey iKey) {
v := db.s.version()
if v.sampleSeek(ikey) {
// Trigger table compaction.
db.compSendTrigger(db.tcompCmdC)
}
v.release()
}
func (db *DB) mpoolPut(mem *memdb.DB) {
defer func() {
recover()
@@ -117,7 +126,7 @@ func (db *DB) newMem(n int) (mem *memDB, err error) {
}
mem = &memDB{
db: db,
mdb: mdb,
DB: mdb,
ref: 2,
}
db.mem = mem

View File

@@ -405,19 +405,21 @@ func (h *dbHarness) compactRange(min, max string) {
t.Log("DB range compaction done")
}
func (h *dbHarness) sizeAssert(start, limit string, low, hi uint64) {
t := h.t
db := h.db
s, err := db.SizeOf([]util.Range{
func (h *dbHarness) sizeOf(start, limit string) uint64 {
sz, err := h.db.SizeOf([]util.Range{
{[]byte(start), []byte(limit)},
})
if err != nil {
t.Error("SizeOf: got error: ", err)
h.t.Error("SizeOf: got error: ", err)
}
if s.Sum() < low || s.Sum() > hi {
t.Errorf("sizeof %q to %q not in range, want %d - %d, got %d",
shorten(start), shorten(limit), low, hi, s.Sum())
return sz.Sum()
}
func (h *dbHarness) sizeAssert(start, limit string, low, hi uint64) {
sz := h.sizeOf(start, limit)
if sz < low || sz > hi {
h.t.Errorf("sizeOf %q to %q not in range, want %d - %d, got %d",
shorten(start), shorten(limit), low, hi, sz)
}
}
@@ -2443,7 +2445,7 @@ func TestDB_TableCompactionBuilder(t *testing.T) {
if err != nil {
t.Fatal(err)
}
rec := &sessionRecord{numLevel: s.o.GetNumLevel()}
rec := &sessionRecord{}
rec.addTableFile(i, tf)
if err := s.commit(rec); err != nil {
t.Fatal(err)
@@ -2453,7 +2455,7 @@ func TestDB_TableCompactionBuilder(t *testing.T) {
// Build grandparent.
v := s.version()
c := newCompaction(s, v, 1, append(tFiles{}, v.tables[1]...))
rec := &sessionRecord{numLevel: s.o.GetNumLevel()}
rec := &sessionRecord{}
b := &tableCompactionBuilder{
s: s,
c: c,
@@ -2477,7 +2479,7 @@ func TestDB_TableCompactionBuilder(t *testing.T) {
// Build level-1.
v = s.version()
c = newCompaction(s, v, 0, append(tFiles{}, v.tables[0]...))
rec = &sessionRecord{numLevel: s.o.GetNumLevel()}
rec = &sessionRecord{}
b = &tableCompactionBuilder{
s: s,
c: c,
@@ -2521,7 +2523,7 @@ func TestDB_TableCompactionBuilder(t *testing.T) {
// Compaction with transient error.
v = s.version()
c = newCompaction(s, v, 1, append(tFiles{}, v.tables[1]...))
rec = &sessionRecord{numLevel: s.o.GetNumLevel()}
rec = &sessionRecord{}
b = &tableCompactionBuilder{
s: s,
c: c,
@@ -2577,3 +2579,123 @@ func TestDB_TableCompactionBuilder(t *testing.T) {
}
v.release()
}
func testDB_IterTriggeredCompaction(t *testing.T, limitDiv int) {
const (
vSize = 200 * opt.KiB
tSize = 100 * opt.MiB
mIter = 100
n = tSize / vSize
)
h := newDbHarnessWopt(t, &opt.Options{
Compression: opt.NoCompression,
DisableBlockCache: true,
})
defer h.close()
key := func(x int) string {
return fmt.Sprintf("v%06d", x)
}
// Fill.
value := strings.Repeat("x", vSize)
for i := 0; i < n; i++ {
h.put(key(i), value)
}
h.compactMem()
// Delete all.
for i := 0; i < n; i++ {
h.delete(key(i))
}
h.compactMem()
var (
limit = n / limitDiv
startKey = key(0)
limitKey = key(limit)
maxKey = key(n)
slice = &util.Range{Limit: []byte(limitKey)}
initialSize0 = h.sizeOf(startKey, limitKey)
initialSize1 = h.sizeOf(limitKey, maxKey)
)
t.Logf("initial size %s [rest %s]", shortenb(int(initialSize0)), shortenb(int(initialSize1)))
for r := 0; true; r++ {
if r >= mIter {
t.Fatal("taking too long to compact")
}
// Iterates.
iter := h.db.NewIterator(slice, h.ro)
for iter.Next() {
}
if err := iter.Error(); err != nil {
t.Fatalf("Iter err: %v", err)
}
iter.Release()
// Wait compaction.
h.waitCompaction()
// Check size.
size0 := h.sizeOf(startKey, limitKey)
size1 := h.sizeOf(limitKey, maxKey)
t.Logf("#%03d size %s [rest %s]", r, shortenb(int(size0)), shortenb(int(size1)))
if size0 < initialSize0/10 {
break
}
}
if initialSize1 > 0 {
h.sizeAssert(limitKey, maxKey, initialSize1/4-opt.MiB, initialSize1+opt.MiB)
}
}
func TestDB_IterTriggeredCompaction(t *testing.T) {
testDB_IterTriggeredCompaction(t, 1)
}
func TestDB_IterTriggeredCompactionHalf(t *testing.T) {
testDB_IterTriggeredCompaction(t, 2)
}
func TestDB_ReadOnly(t *testing.T) {
h := newDbHarness(t)
defer h.close()
h.put("foo", "v1")
h.put("bar", "v2")
h.compactMem()
h.put("xfoo", "v1")
h.put("xbar", "v2")
t.Log("Trigger read-only")
if err := h.db.SetReadOnly(); err != nil {
h.close()
t.Fatalf("SetReadOnly error: %v", err)
}
h.stor.SetEmuErr(storage.TypeAll, tsOpCreate, tsOpReplace, tsOpRemove, tsOpWrite, tsOpWrite, tsOpSync)
ro := func(key, value, wantValue string) {
if err := h.db.Put([]byte(key), []byte(value), h.wo); err != ErrReadOnly {
t.Fatalf("unexpected error: %v", err)
}
h.getVal(key, wantValue)
}
ro("foo", "vx", "v1")
h.o.ReadOnly = true
h.reopenDB()
ro("foo", "vx", "v1")
ro("bar", "vx", "v2")
h.assertNumKeys(4)
}

View File

@@ -63,24 +63,24 @@ func (db *DB) rotateMem(n int) (mem *memDB, err error) {
return
}
func (db *DB) flush(n int) (mem *memDB, nn int, err error) {
func (db *DB) flush(n int) (mdb *memDB, mdbFree int, err error) {
delayed := false
flush := func() (retry bool) {
v := db.s.version()
defer v.release()
mem = db.getEffectiveMem()
mdb = db.getEffectiveMem()
defer func() {
if retry {
mem.decref()
mem = nil
mdb.decref()
mdb = nil
}
}()
nn = mem.mdb.Free()
mdbFree = mdb.Free()
switch {
case v.tLen(0) >= db.s.o.GetWriteL0SlowdownTrigger() && !delayed:
delayed = true
time.Sleep(time.Millisecond)
case nn >= n:
case mdbFree >= n:
return false
case v.tLen(0) >= db.s.o.GetWriteL0PauseTrigger():
delayed = true
@@ -90,15 +90,15 @@ func (db *DB) flush(n int) (mem *memDB, nn int, err error) {
}
default:
// Allow memdb to grow if it has no entry.
if mem.mdb.Len() == 0 {
nn = n
if mdb.Len() == 0 {
mdbFree = n
} else {
mem.decref()
mem, err = db.rotateMem(n)
mdb.decref()
mdb, err = db.rotateMem(n)
if err == nil {
nn = mem.mdb.Free()
mdbFree = mdb.Free()
} else {
nn = 0
mdbFree = 0
}
}
return false
@@ -129,7 +129,7 @@ func (db *DB) Write(b *Batch, wo *opt.WriteOptions) (err error) {
return
}
b.init(wo.GetSync())
b.init(wo.GetSync() && !db.s.o.GetNoSync())
// The write happen synchronously.
select {
@@ -157,18 +157,18 @@ func (db *DB) Write(b *Batch, wo *opt.WriteOptions) (err error) {
}
}()
mem, memFree, err := db.flush(b.size())
mdb, mdbFree, err := db.flush(b.size())
if err != nil {
return
}
defer mem.decref()
defer mdb.decref()
// Calculate maximum size of the batch.
m := 1 << 20
if x := b.size(); x <= 128<<10 {
m = x + (128 << 10)
}
m = minInt(m, memFree)
m = minInt(m, mdbFree)
// Merge with other batch.
drain:
@@ -197,7 +197,7 @@ drain:
select {
case db.journalC <- b:
// Write into memdb
if berr := b.memReplay(mem.mdb); berr != nil {
if berr := b.memReplay(mdb.DB); berr != nil {
panic(berr)
}
case err = <-db.compPerErrC:
@@ -211,7 +211,7 @@ drain:
case err = <-db.journalAckC:
if err != nil {
// Revert memdb if error detected
if berr := b.revertMemReplay(mem.mdb); berr != nil {
if berr := b.revertMemReplay(mdb.DB); berr != nil {
panic(berr)
}
return
@@ -225,7 +225,7 @@ drain:
if err != nil {
return
}
if berr := b.memReplay(mem.mdb); berr != nil {
if berr := b.memReplay(mdb.DB); berr != nil {
panic(berr)
}
}
@@ -233,7 +233,7 @@ drain:
// Set last seq number.
db.addSeq(uint64(b.Len()))
if b.size() >= memFree {
if b.size() >= mdbFree {
db.rotateMem(0)
}
return
@@ -249,8 +249,7 @@ func (db *DB) Put(key, value []byte, wo *opt.WriteOptions) error {
return db.Write(b, wo)
}
// Delete deletes the value for the given key. It returns ErrNotFound if
// the DB does not contain the key.
// Delete deletes the value for the given key.
//
// It is safe to modify the contents of the arguments after Delete returns.
func (db *DB) Delete(key []byte, wo *opt.WriteOptions) error {
@@ -290,9 +289,9 @@ func (db *DB) CompactRange(r util.Range) error {
}
// Check for overlaps in memdb.
mem := db.getEffectiveMem()
defer mem.decref()
if isMemOverlaps(db.s.icmp, mem.mdb, r.Start, r.Limit) {
mdb := db.getEffectiveMem()
defer mdb.decref()
if isMemOverlaps(db.s.icmp, mdb.DB, r.Start, r.Limit) {
// Memdb compaction.
if _, err := db.rotateMem(0); err != nil {
<-db.writeLockC
@@ -309,3 +308,31 @@ func (db *DB) CompactRange(r util.Range) error {
// Table compaction.
return db.compSendRange(db.tcompCmdC, -1, r.Start, r.Limit)
}
// SetReadOnly makes DB read-only. It will stay read-only until reopened.
func (db *DB) SetReadOnly() error {
if err := db.ok(); err != nil {
return err
}
// Lock writer.
select {
case db.writeLockC <- struct{}{}:
db.compWriteLocking = true
case err := <-db.compPerErrC:
return err
case _, _ = <-db.closeC:
return ErrClosed
}
// Set compaction read-only.
select {
case db.compErrSetC <- ErrReadOnly:
case perr := <-db.compPerErrC:
return perr
case _, _ = <-db.closeC:
return ErrClosed
}
return nil
}
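SetReadOnly complements the ReadOnly open option: it flips an already-open database to read-only at runtime by taking the write lock and parking the compaction error loop on ErrReadOnly. A hedged usage sketch:

package main

import (
	"fmt"

	"github.com/syndtr/goleveldb/leveldb"
	"github.com/syndtr/goleveldb/leveldb/storage"
)

func main() {
	db, err := leveldb.Open(storage.NewMemStorage(), nil)
	if err != nil {
		panic(err)
	}
	defer db.Close()

	db.Put([]byte("k"), []byte("v1"), nil) // writable so far

	if err := db.SetReadOnly(); err != nil {
		panic(err)
	}
	if err := db.Put([]byte("k"), []byte("v2"), nil); err == leveldb.ErrReadOnly {
		fmt.Println("writes now rejected:", err)
	}
}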

View File

@@ -12,6 +12,7 @@ import (
var (
ErrNotFound = errors.ErrNotFound
ErrReadOnly = errors.New("leveldb: read-only mode")
ErrSnapshotReleased = errors.New("leveldb: snapshot released")
ErrIterReleased = errors.New("leveldb: iterator released")
ErrClosed = errors.New("leveldb: closed")

View File

@@ -52,12 +52,14 @@ func IsCorrupted(err error) bool {
switch err.(type) {
case *ErrCorrupted:
return true
case *storage.ErrCorrupted:
return true
}
return false
}
// ErrMissingFiles is the type indicating a corruption due to missing
// files.
// files. ErrMissingFiles is always wrapped with ErrCorrupted.
type ErrMissingFiles struct {
Files []*storage.FileInfo
}

View File

@@ -206,6 +206,7 @@ func (p *DB) randHeight() (h int) {
return
}
// Must hold RW-lock if prev == true, as it uses the shared prevNode slice.
func (p *DB) findGE(key []byte, prev bool) (int, bool) {
node := 0
h := p.maxHeight - 1
@@ -302,7 +303,7 @@ func (p *DB) Put(key []byte, value []byte) error {
node := len(p.nodeData)
p.nodeData = append(p.nodeData, kvOffset, len(key), len(value), h)
for i, n := range p.prevNode[:h] {
m := n + 4 + i
m := n + nNext + i
p.nodeData = append(p.nodeData, p.nodeData[m])
p.nodeData[m] = node
}
@@ -434,20 +435,22 @@ func (p *DB) Len() int {
// Reset resets the DB to its initial empty state. Allows reuse of the buffer.
func (p *DB) Reset() {
p.mu.Lock()
p.rnd = rand.New(rand.NewSource(0xdeadbeef))
p.maxHeight = 1
p.n = 0
p.kvSize = 0
p.kvData = p.kvData[:0]
p.nodeData = p.nodeData[:4+tMaxHeight]
p.nodeData = p.nodeData[:nNext+tMaxHeight]
p.nodeData[nKV] = 0
p.nodeData[nKey] = 0
p.nodeData[nVal] = 0
p.nodeData[nHeight] = tMaxHeight
for n := 0; n < tMaxHeight; n++ {
p.nodeData[4+n] = 0
p.nodeData[nNext+n] = 0
p.prevNode[n] = 0
}
p.mu.Unlock()
}
// New creates a new initialized in-memory key/value DB. The capacity

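The Reset change above swaps the magic offset 4 for the named constant nNext. For orientation, a toy sketch of the assumed node layout in nodeData — [kvOffset, keyLen, valueLen, height, next0..next(h-1)] — where the next-pointer tower starts at index nNext == 4:

package main

import "fmt"

// Assumed field offsets mirroring the memdb node layout.
const (
	nKV     = iota // offset of the key/value bytes in kvData
	nKey           // key length
	nVal           // value length
	nHeight        // tower height
	nNext          // first next-pointer; this is the old magic 4
)

func main() {
	// One node: kv at offset 100, 3-byte key, 5-byte value, height 2,
	// followed by two next-pointers.
	node := []int{100, 3, 5, 2, 0, 0}
	for level := 0; level < node[nHeight]; level++ {
		fmt.Println("next pointer at level", level, "=", node[nNext+level])
	}
}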
View File

@@ -34,10 +34,11 @@ var (
DefaultCompactionTotalSize = 10 * MiB
DefaultCompactionTotalSizeMultiplier = 10.0
DefaultCompressionType = SnappyCompression
DefaultOpenFilesCacher = LRUCacher
DefaultOpenFilesCacheCapacity = 500
DefaultIteratorSamplingRate = 1 * MiB
DefaultMaxMemCompationLevel = 2
DefaultNumLevel = 7
DefaultOpenFilesCacher = LRUCacher
DefaultOpenFilesCacheCapacity = 500
DefaultWriteBuffer = 4 * MiB
DefaultWriteL0PauseTrigger = 12
DefaultWriteL0SlowdownTrigger = 8
@@ -249,6 +250,11 @@ type Options struct {
// The default value (DefaultCompression) uses snappy compression.
Compression Compression
// DisableBufferPool allows disabling use of util.BufferPool functionality.
//
// The default value is false.
DisableBufferPool bool
// DisableBlockCache allows disabling use of cache.Cache functionality on
// 'sorted table' blocks.
//
@@ -288,6 +294,13 @@ type Options struct {
// The default value is nil.
Filter filter.Filter
// IteratorSamplingRate defines the approximate gap (in bytes) between read
// samples of an iterator. The samples will be used to determine when
// compaction should be triggered.
//
// The default is 1MiB.
IteratorSamplingRate int
// MaxMemCompationLevel defines maximum level a newly compacted 'memdb'
// will be pushed into if it doesn't create overlap. This should be less than
// NumLevel. Use -1 for level-0.
@@ -295,6 +308,11 @@ type Options struct {
// The default is 2.
MaxMemCompationLevel int
// NoSync allows completely disabling fsync.
//
// The default is false.
NoSync bool
// NumLevel defines the number of database levels. The level count shouldn't change
// between opens, or the database will panic.
//
@@ -313,6 +331,11 @@ type Options struct {
// The default value is 500.
OpenFilesCacheCapacity int
// If true, opens the DB in read-only mode.
//
// The default value is false.
ReadOnly bool
// Strict defines the DB strict level.
Strict Strict
@@ -464,6 +487,20 @@ func (o *Options) GetCompression() Compression {
return o.Compression
}
func (o *Options) GetDisableBufferPool() bool {
if o == nil {
return false
}
return o.DisableBufferPool
}
func (o *Options) GetDisableBlockCache() bool {
if o == nil {
return false
}
return o.DisableBlockCache
}
func (o *Options) GetDisableCompactionBackoff() bool {
if o == nil {
return false
@@ -492,6 +529,13 @@ func (o *Options) GetFilter() filter.Filter {
return o.Filter
}
func (o *Options) GetIteratorSamplingRate() int {
if o == nil || o.IteratorSamplingRate <= 0 {
return DefaultIteratorSamplingRate
}
return o.IteratorSamplingRate
}
func (o *Options) GetMaxMemCompationLevel() int {
level := DefaultMaxMemCompationLevel
if o != nil {
@@ -507,6 +551,13 @@ func (o *Options) GetMaxMemCompationLevel() int {
return level
}
func (o *Options) GetNoSync() bool {
if o == nil {
return false
}
return o.NoSync
}
func (o *Options) GetNumLevel() int {
if o == nil || o.NumLevel <= 0 {
return DefaultNumLevel
@@ -533,6 +584,13 @@ func (o *Options) GetOpenFilesCacheCapacity() int {
return o.OpenFilesCacheCapacity
}
func (o *Options) GetReadOnly() bool {
if o == nil {
return false
}
return o.ReadOnly
}
func (o *Options) GetStrict(strict Strict) bool {
if o == nil || o.Strict == 0 {
return DefaultStrict&strict != 0

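Taken together, the new knobs added to this file can be set like below. A hedged sketch; the values and path are illustrative, and leaving a field zero falls back to the documented default:

package main

import (
	"github.com/syndtr/goleveldb/leveldb"
	"github.com/syndtr/goleveldb/leveldb/opt"
)

func main() {
	o := &opt.Options{
		NoSync:               true,        // skip fsync on journal and table writes
		DisableBufferPool:    true,        // don't use util.BufferPool for block reads
		IteratorSamplingRate: 2 * opt.MiB, // gap between iterator read samples
	}
	db, err := leveldb.OpenFile("/tmp/demo-opts.db", o)
	if err != nil {
		panic(err)
	}
	db.Close()
}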
View File

@@ -11,10 +11,8 @@ import (
"io"
"os"
"sync"
"sync/atomic"
"github.com/syndtr/goleveldb/leveldb/errors"
"github.com/syndtr/goleveldb/leveldb/iterator"
"github.com/syndtr/goleveldb/leveldb/journal"
"github.com/syndtr/goleveldb/leveldb/opt"
"github.com/syndtr/goleveldb/leveldb/storage"
@@ -127,11 +125,16 @@ func (s *session) recover() (err error) {
return
}
defer reader.Close()
strict := s.o.GetStrict(opt.StrictManifest)
jr := journal.NewReader(reader, dropper{s, m}, strict, true)
staging := s.stVersion.newStaging()
rec := &sessionRecord{numLevel: s.o.GetNumLevel()}
var (
// Options.
numLevel = s.o.GetNumLevel()
strict = s.o.GetStrict(opt.StrictManifest)
jr = journal.NewReader(reader, dropper{s, m}, strict, true)
rec = &sessionRecord{}
staging = s.stVersion.newStaging()
)
for {
var r io.Reader
r, err = jr.Next()
@@ -143,7 +146,7 @@ func (s *session) recover() (err error) {
return errors.SetFile(err, m)
}
err = rec.decode(r)
err = rec.decode(r, numLevel)
if err == nil {
// save compact pointers
for _, r := range rec.compPtrs {
@@ -206,250 +209,3 @@ func (s *session) commit(r *sessionRecord) (err error) {
return
}
// Pick a compaction based on current state; need external synchronization.
func (s *session) pickCompaction() *compaction {
v := s.version()
var level int
var t0 tFiles
if v.cScore >= 1 {
level = v.cLevel
cptr := s.stCompPtrs[level]
tables := v.tables[level]
for _, t := range tables {
if cptr == nil || s.icmp.Compare(t.imax, cptr) > 0 {
t0 = append(t0, t)
break
}
}
if len(t0) == 0 {
t0 = append(t0, tables[0])
}
} else {
if p := atomic.LoadPointer(&v.cSeek); p != nil {
ts := (*tSet)(p)
level = ts.level
t0 = append(t0, ts.table)
} else {
v.release()
return nil
}
}
return newCompaction(s, v, level, t0)
}
// Create compaction from given level and range; need external synchronization.
func (s *session) getCompactionRange(level int, umin, umax []byte) *compaction {
v := s.version()
t0 := v.tables[level].getOverlaps(nil, s.icmp, umin, umax, level == 0)
if len(t0) == 0 {
v.release()
return nil
}
// Avoid compacting too much in one shot in case the range is large.
// But we cannot do this for level-0 since level-0 files can overlap
// and we must not pick one file and drop another older file if the
// two files overlap.
if level > 0 {
limit := uint64(v.s.o.GetCompactionSourceLimit(level))
total := uint64(0)
for i, t := range t0 {
total += t.size
if total >= limit {
s.logf("table@compaction limiting F·%d -> F·%d", len(t0), i+1)
t0 = t0[:i+1]
break
}
}
}
return newCompaction(s, v, level, t0)
}
func newCompaction(s *session, v *version, level int, t0 tFiles) *compaction {
c := &compaction{
s: s,
v: v,
level: level,
tables: [2]tFiles{t0, nil},
maxGPOverlaps: uint64(s.o.GetCompactionGPOverlaps(level)),
tPtrs: make([]int, s.o.GetNumLevel()),
}
c.expand()
c.save()
return c
}
// compaction represents a compaction state.
type compaction struct {
s *session
v *version
level int
tables [2]tFiles
maxGPOverlaps uint64
gp tFiles
gpi int
seenKey bool
gpOverlappedBytes uint64
imin, imax iKey
tPtrs []int
released bool
snapGPI int
snapSeenKey bool
snapGPOverlappedBytes uint64
snapTPtrs []int
}
func (c *compaction) save() {
c.snapGPI = c.gpi
c.snapSeenKey = c.seenKey
c.snapGPOverlappedBytes = c.gpOverlappedBytes
c.snapTPtrs = append(c.snapTPtrs[:0], c.tPtrs...)
}
func (c *compaction) restore() {
c.gpi = c.snapGPI
c.seenKey = c.snapSeenKey
c.gpOverlappedBytes = c.snapGPOverlappedBytes
c.tPtrs = append(c.tPtrs[:0], c.snapTPtrs...)
}
func (c *compaction) release() {
if !c.released {
c.released = true
c.v.release()
}
}
// Expand compacted tables; need external synchronization.
func (c *compaction) expand() {
limit := uint64(c.s.o.GetCompactionExpandLimit(c.level))
vt0, vt1 := c.v.tables[c.level], c.v.tables[c.level+1]
t0, t1 := c.tables[0], c.tables[1]
imin, imax := t0.getRange(c.s.icmp)
// We expand t0 here just in case a ukey hops across tables.
t0 = vt0.getOverlaps(t0, c.s.icmp, imin.ukey(), imax.ukey(), c.level == 0)
if len(t0) != len(c.tables[0]) {
imin, imax = t0.getRange(c.s.icmp)
}
t1 = vt1.getOverlaps(t1, c.s.icmp, imin.ukey(), imax.ukey(), false)
// Get entire range covered by compaction.
amin, amax := append(t0, t1...).getRange(c.s.icmp)
// See if we can grow the number of inputs in "level" without
// changing the number of "level+1" files we pick up.
if len(t1) > 0 {
exp0 := vt0.getOverlaps(nil, c.s.icmp, amin.ukey(), amax.ukey(), c.level == 0)
if len(exp0) > len(t0) && t1.size()+exp0.size() < limit {
xmin, xmax := exp0.getRange(c.s.icmp)
exp1 := vt1.getOverlaps(nil, c.s.icmp, xmin.ukey(), xmax.ukey(), false)
if len(exp1) == len(t1) {
c.s.logf("table@compaction expanding L%d+L%d (F·%d S·%s)+(F·%d S·%s) -> (F·%d S·%s)+(F·%d S·%s)",
c.level, c.level+1, len(t0), shortenb(int(t0.size())), len(t1), shortenb(int(t1.size())),
len(exp0), shortenb(int(exp0.size())), len(exp1), shortenb(int(exp1.size())))
imin, imax = xmin, xmax
t0, t1 = exp0, exp1
amin, amax = append(t0, t1...).getRange(c.s.icmp)
}
}
}
// Compute the set of grandparent files that overlap this compaction
// (parent == level+1; grandparent == level+2)
if c.level+2 < c.s.o.GetNumLevel() {
c.gp = c.v.tables[c.level+2].getOverlaps(c.gp, c.s.icmp, amin.ukey(), amax.ukey(), false)
}
c.tables[0], c.tables[1] = t0, t1
c.imin, c.imax = imin, imax
}
// Check whether compaction is trivial.
func (c *compaction) trivial() bool {
return len(c.tables[0]) == 1 && len(c.tables[1]) == 0 && c.gp.size() <= c.maxGPOverlaps
}
func (c *compaction) baseLevelForKey(ukey []byte) bool {
for level, tables := range c.v.tables[c.level+2:] {
for c.tPtrs[level] < len(tables) {
t := tables[c.tPtrs[level]]
if c.s.icmp.uCompare(ukey, t.imax.ukey()) <= 0 {
// We've advanced far enough.
if c.s.icmp.uCompare(ukey, t.imin.ukey()) >= 0 {
// Key falls in this file's range, so definitely not base level.
return false
}
break
}
c.tPtrs[level]++
}
}
return true
}
func (c *compaction) shouldStopBefore(ikey iKey) bool {
for ; c.gpi < len(c.gp); c.gpi++ {
gp := c.gp[c.gpi]
if c.s.icmp.Compare(ikey, gp.imax) <= 0 {
break
}
if c.seenKey {
c.gpOverlappedBytes += gp.size
}
}
c.seenKey = true
if c.gpOverlappedBytes > c.maxGPOverlaps {
// Too much overlap for current output; start new output.
c.gpOverlappedBytes = 0
return true
}
return false
}
// Creates an iterator.
func (c *compaction) newIterator() iterator.Iterator {
// Creates iterator slice.
icap := len(c.tables)
if c.level == 0 {
// Special case for level-0
icap = len(c.tables[0]) + 1
}
its := make([]iterator.Iterator, 0, icap)
// Options.
ro := &opt.ReadOptions{
DontFillCache: true,
Strict: opt.StrictOverride,
}
strict := c.s.o.GetStrict(opt.StrictCompaction)
if strict {
ro.Strict |= opt.StrictReader
}
for i, tables := range c.tables {
if len(tables) == 0 {
continue
}
// Level-0 tables are not sorted and may overlap each other.
if c.level+i == 0 {
for _, t := range tables {
its = append(its, c.s.tops.newIterator(t, nil, ro))
}
} else {
it := iterator.NewIndexedIterator(tables.newIndexIterator(c.s.tops, c.s.icmp, nil, ro), strict)
its = append(its, it)
}
}
return iterator.NewMergedIterator(its, c.s.icmp, strict)
}

View File

@@ -0,0 +1,287 @@
// Copyright (c) 2012, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package leveldb
import (
"sync/atomic"
"github.com/syndtr/goleveldb/leveldb/iterator"
"github.com/syndtr/goleveldb/leveldb/memdb"
"github.com/syndtr/goleveldb/leveldb/opt"
)
func (s *session) pickMemdbLevel(umin, umax []byte) int {
v := s.version()
defer v.release()
return v.pickMemdbLevel(umin, umax)
}
func (s *session) flushMemdb(rec *sessionRecord, mdb *memdb.DB, level int) (level_ int, err error) {
// Create sorted table.
iter := mdb.NewIterator(nil)
defer iter.Release()
t, n, err := s.tops.createFrom(iter)
if err != nil {
return level, err
}
// Pick level and add to record.
if level < 0 {
level = s.pickMemdbLevel(t.imin.ukey(), t.imax.ukey())
}
rec.addTableFile(level, t)
s.logf("memdb@flush created L%d@%d N·%d S·%s %q:%q", level, t.file.Num(), n, shortenb(int(t.size)), t.imin, t.imax)
return level, nil
}
// Pick a compaction based on current state; need external synchronization.
func (s *session) pickCompaction() *compaction {
v := s.version()
var level int
var t0 tFiles
if v.cScore >= 1 {
level = v.cLevel
cptr := s.stCompPtrs[level]
tables := v.tables[level]
for _, t := range tables {
if cptr == nil || s.icmp.Compare(t.imax, cptr) > 0 {
t0 = append(t0, t)
break
}
}
if len(t0) == 0 {
t0 = append(t0, tables[0])
}
} else {
if p := atomic.LoadPointer(&v.cSeek); p != nil {
ts := (*tSet)(p)
level = ts.level
t0 = append(t0, ts.table)
} else {
v.release()
return nil
}
}
return newCompaction(s, v, level, t0)
}
// Create compaction from given level and range; need external synchronization.
func (s *session) getCompactionRange(level int, umin, umax []byte) *compaction {
v := s.version()
t0 := v.tables[level].getOverlaps(nil, s.icmp, umin, umax, level == 0)
if len(t0) == 0 {
v.release()
return nil
}
// Avoid compacting too much in one shot in case the range is large.
// But we cannot do this for level-0 since level-0 files can overlap
// and we must not pick one file and drop another older file if the
// two files overlap.
if level > 0 {
limit := uint64(v.s.o.GetCompactionSourceLimit(level))
total := uint64(0)
for i, t := range t0 {
total += t.size
if total >= limit {
s.logf("table@compaction limiting F·%d -> F·%d", len(t0), i+1)
t0 = t0[:i+1]
break
}
}
}
return newCompaction(s, v, level, t0)
}
func newCompaction(s *session, v *version, level int, t0 tFiles) *compaction {
c := &compaction{
s: s,
v: v,
level: level,
tables: [2]tFiles{t0, nil},
maxGPOverlaps: uint64(s.o.GetCompactionGPOverlaps(level)),
tPtrs: make([]int, s.o.GetNumLevel()),
}
c.expand()
c.save()
return c
}
// compaction represents a compaction state.
type compaction struct {
s *session
v *version
level int
tables [2]tFiles
maxGPOverlaps uint64
gp tFiles
gpi int
seenKey bool
gpOverlappedBytes uint64
imin, imax iKey
tPtrs []int
released bool
snapGPI int
snapSeenKey bool
snapGPOverlappedBytes uint64
snapTPtrs []int
}
func (c *compaction) save() {
c.snapGPI = c.gpi
c.snapSeenKey = c.seenKey
c.snapGPOverlappedBytes = c.gpOverlappedBytes
c.snapTPtrs = append(c.snapTPtrs[:0], c.tPtrs...)
}
func (c *compaction) restore() {
c.gpi = c.snapGPI
c.seenKey = c.snapSeenKey
c.gpOverlappedBytes = c.snapGPOverlappedBytes
c.tPtrs = append(c.tPtrs[:0], c.snapTPtrs...)
}
func (c *compaction) release() {
if !c.released {
c.released = true
c.v.release()
}
}
// Expand compacted tables; need external synchronization.
func (c *compaction) expand() {
limit := uint64(c.s.o.GetCompactionExpandLimit(c.level))
vt0, vt1 := c.v.tables[c.level], c.v.tables[c.level+1]
t0, t1 := c.tables[0], c.tables[1]
imin, imax := t0.getRange(c.s.icmp)
// We expand t0 here just in case a ukey hops across tables.
t0 = vt0.getOverlaps(t0, c.s.icmp, imin.ukey(), imax.ukey(), c.level == 0)
if len(t0) != len(c.tables[0]) {
imin, imax = t0.getRange(c.s.icmp)
}
t1 = vt1.getOverlaps(t1, c.s.icmp, imin.ukey(), imax.ukey(), false)
// Get entire range covered by compaction.
amin, amax := append(t0, t1...).getRange(c.s.icmp)
// See if we can grow the number of inputs in "level" without
// changing the number of "level+1" files we pick up.
if len(t1) > 0 {
exp0 := vt0.getOverlaps(nil, c.s.icmp, amin.ukey(), amax.ukey(), c.level == 0)
if len(exp0) > len(t0) && t1.size()+exp0.size() < limit {
xmin, xmax := exp0.getRange(c.s.icmp)
exp1 := vt1.getOverlaps(nil, c.s.icmp, xmin.ukey(), xmax.ukey(), false)
if len(exp1) == len(t1) {
c.s.logf("table@compaction expanding L%d+L%d (F·%d S·%s)+(F·%d S·%s) -> (F·%d S·%s)+(F·%d S·%s)",
c.level, c.level+1, len(t0), shortenb(int(t0.size())), len(t1), shortenb(int(t1.size())),
len(exp0), shortenb(int(exp0.size())), len(exp1), shortenb(int(exp1.size())))
imin, imax = xmin, xmax
t0, t1 = exp0, exp1
amin, amax = append(t0, t1...).getRange(c.s.icmp)
}
}
}
// Compute the set of grandparent files that overlap this compaction
// (parent == level+1; grandparent == level+2)
if c.level+2 < c.s.o.GetNumLevel() {
c.gp = c.v.tables[c.level+2].getOverlaps(c.gp, c.s.icmp, amin.ukey(), amax.ukey(), false)
}
c.tables[0], c.tables[1] = t0, t1
c.imin, c.imax = imin, imax
}
// Check whether compaction is trivial.
func (c *compaction) trivial() bool {
return len(c.tables[0]) == 1 && len(c.tables[1]) == 0 && c.gp.size() <= c.maxGPOverlaps
}
func (c *compaction) baseLevelForKey(ukey []byte) bool {
for level, tables := range c.v.tables[c.level+2:] {
for c.tPtrs[level] < len(tables) {
t := tables[c.tPtrs[level]]
if c.s.icmp.uCompare(ukey, t.imax.ukey()) <= 0 {
// We've advanced far enough.
if c.s.icmp.uCompare(ukey, t.imin.ukey()) >= 0 {
// Key falls in this file's range, so definitely not base level.
return false
}
break
}
c.tPtrs[level]++
}
}
return true
}
func (c *compaction) shouldStopBefore(ikey iKey) bool {
for ; c.gpi < len(c.gp); c.gpi++ {
gp := c.gp[c.gpi]
if c.s.icmp.Compare(ikey, gp.imax) <= 0 {
break
}
if c.seenKey {
c.gpOverlappedBytes += gp.size
}
}
c.seenKey = true
if c.gpOverlappedBytes > c.maxGPOverlaps {
// Too much overlap for current output; start new output.
c.gpOverlappedBytes = 0
return true
}
return false
}
// Creates an iterator.
func (c *compaction) newIterator() iterator.Iterator {
// Creates iterator slice.
icap := len(c.tables)
if c.level == 0 {
// Special case for level-0.
icap = len(c.tables[0]) + 1
}
its := make([]iterator.Iterator, 0, icap)
// Options.
ro := &opt.ReadOptions{
DontFillCache: true,
Strict: opt.StrictOverride,
}
strict := c.s.o.GetStrict(opt.StrictCompaction)
if strict {
ro.Strict |= opt.StrictReader
}
for i, tables := range c.tables {
if len(tables) == 0 {
continue
}
// Level-0 tables are not sorted and may overlap each other.
if c.level+i == 0 {
for _, t := range tables {
its = append(its, c.s.tops.newIterator(t, nil, ro))
}
} else {
it := iterator.NewIndexedIterator(tables.newIndexIterator(c.s.tops, c.s.icmp, nil, ro), strict)
its = append(its, it)
}
}
return iterator.NewMergedIterator(its, c.s.icmp, strict)
}

View File

@@ -52,8 +52,6 @@ type dtRecord struct {
}
type sessionRecord struct {
numLevel int
hasRec int
comparer string
journalNum uint64
@@ -230,7 +228,7 @@ func (p *sessionRecord) readBytes(field string, r byteReader) []byte {
return x
}
func (p *sessionRecord) readLevel(field string, r io.ByteReader) int {
func (p *sessionRecord) readLevel(field string, r io.ByteReader, numLevel int) int {
if p.err != nil {
return 0
}
@@ -238,14 +236,14 @@ func (p *sessionRecord) readLevel(field string, r io.ByteReader) int {
if p.err != nil {
return 0
}
if x >= uint64(p.numLevel) {
if x >= uint64(numLevel) {
p.err = errors.NewErrCorrupted(nil, &ErrManifestCorrupted{field, "invalid level number"})
return 0
}
return int(x)
}
func (p *sessionRecord) decode(r io.Reader) error {
func (p *sessionRecord) decode(r io.Reader, numLevel int) error {
br, ok := r.(byteReader)
if !ok {
br = bufio.NewReader(r)
@@ -286,13 +284,13 @@ func (p *sessionRecord) decode(r io.Reader) error {
p.setSeqNum(x)
}
case recCompPtr:
level := p.readLevel("comp-ptr.level", br)
level := p.readLevel("comp-ptr.level", br, numLevel)
ikey := p.readBytes("comp-ptr.ikey", br)
if p.err == nil {
p.addCompPtr(level, iKey(ikey))
}
case recAddTable:
level := p.readLevel("add-table.level", br)
level := p.readLevel("add-table.level", br, numLevel)
num := p.readUvarint("add-table.num", br)
size := p.readUvarint("add-table.size", br)
imin := p.readBytes("add-table.imin", br)
@@ -301,7 +299,7 @@ func (p *sessionRecord) decode(r io.Reader) error {
p.addTable(level, num, size, imin, imax)
}
case recDelTable:
level := p.readLevel("del-table.level", br)
level := p.readLevel("del-table.level", br, numLevel)
num := p.readUvarint("del-table.num", br)
if p.err == nil {
p.delTable(level, num)

View File

@@ -19,8 +19,8 @@ func decodeEncode(v *sessionRecord) (res bool, err error) {
if err != nil {
return
}
v2 := &sessionRecord{numLevel: opt.DefaultNumLevel}
err = v.decode(b)
v2 := &sessionRecord{}
err = v.decode(b, opt.DefaultNumLevel)
if err != nil {
return
}
@@ -34,7 +34,7 @@ func decodeEncode(v *sessionRecord) (res bool, err error) {
func TestSessionRecord_EncodeDecode(t *testing.T) {
big := uint64(1) << 50
v := &sessionRecord{numLevel: opt.DefaultNumLevel}
v := &sessionRecord{}
i := uint64(0)
test := func() {
res, err := decodeEncode(v)

View File

@@ -182,7 +182,7 @@ func (s *session) newManifest(rec *sessionRecord, v *version) (err error) {
defer v.release()
}
if rec == nil {
rec = &sessionRecord{numLevel: s.o.GetNumLevel()}
rec = &sessionRecord{}
}
s.fillRecord(rec, true)
v.fillRecord(rec)
@@ -240,9 +240,11 @@ func (s *session) flushManifest(rec *sessionRecord) (err error) {
if err != nil {
return
}
err = s.manifestWriter.Sync()
if err != nil {
return
if !s.o.GetNoSync() {
err = s.manifestWriter.Sync()
if err != nil {
return
}
}
s.recordCommited(rec)
return

View File

@@ -243,7 +243,10 @@ func (fs *fileStorage) GetManifest() (f File, err error) {
rem = append(rem, fn)
}
if !pend1 || cerr == nil {
cerr = fmt.Errorf("leveldb/storage: corrupted or incomplete %s file", fn)
cerr = &ErrCorrupted{
File: fsParseName(filepath.Base(fn)),
Err: errors.New("leveldb/storage: corrupted or incomplete manifest file"),
}
}
} else if f != nil && f1.Num() < f.Num() {
fs.log(fmt.Sprintf("skipping %s: obsolete", fn))
@@ -326,8 +329,7 @@ func (fs *fileStorage) Close() error {
runtime.SetFinalizer(fs, nil)
if fs.open > 0 {
fs.log(fmt.Sprintf("refuse to close, %d files still open", fs.open))
return fmt.Errorf("leveldb/storage: cannot close, %d files still open", fs.open)
fs.log(fmt.Sprintf("close: warning, %d files still open", fs.open))
}
fs.open = -1
e1 := fs.logw.Close()
@@ -505,30 +507,37 @@ func (f *file) path() string {
return filepath.Join(f.fs.path, f.name())
}
func (f *file) parse(name string) bool {
var num uint64
func fsParseName(name string) *FileInfo {
fi := &FileInfo{}
var tail string
_, err := fmt.Sscanf(name, "%d.%s", &num, &tail)
_, err := fmt.Sscanf(name, "%d.%s", &fi.Num, &tail)
if err == nil {
switch tail {
case "log":
f.t = TypeJournal
fi.Type = TypeJournal
case "ldb", "sst":
f.t = TypeTable
fi.Type = TypeTable
case "tmp":
f.t = TypeTemp
fi.Type = TypeTemp
default:
return false
return nil
}
f.num = num
return true
return fi
}
n, _ := fmt.Sscanf(name, "MANIFEST-%d%s", &num, &tail)
n, _ := fmt.Sscanf(name, "MANIFEST-%d%s", &fi.Num, &tail)
if n == 1 {
f.t = TypeManifest
f.num = num
return true
fi.Type = TypeManifest
return fi
}
return false
return nil
}
func (f *file) parse(name string) bool {
fi := fsParseName(name)
if fi == nil {
return false
}
f.t = fi.Type
f.num = fi.Num
return true
}

View File

@@ -50,13 +50,23 @@ func rename(oldpath, newpath string) error {
return os.Rename(oldpath, newpath)
}
func isErrInvalid(err error) bool {
if err == os.ErrInvalid {
return true
}
if syserr, ok := err.(*os.SyscallError); ok && syserr.Err == syscall.EINVAL {
return true
}
return false
}
func syncDir(name string) error {
f, err := os.Open(name)
if err != nil {
return err
}
defer f.Close()
if err := f.Sync(); err != nil {
if err := f.Sync(); err != nil && !isErrInvalid(err) {
return err
}
return nil

View File

@@ -46,6 +46,22 @@ var (
ErrClosed = errors.New("leveldb/storage: closed")
)
// ErrCorrupted is the type that wraps errors that indicate corruption of
// a file. Package storage has its own type instead of using
// errors.ErrCorrupted to prevent circular import.
type ErrCorrupted struct {
File *FileInfo
Err error
}
func (e *ErrCorrupted) Error() string {
if e.File != nil {
return fmt.Sprintf("%v [file=%v]", e.Err, e.File)
} else {
return e.Err.Error()
}
}
// Syncer is the interface that wraps basic Sync method.
type Syncer interface {
// Sync commits the current contents of the file to stable storage.

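Because IsCorrupted in leveldb/errors now also matches *storage.ErrCorrupted (see the earlier hunk), corruption surfaced by the storage layer is classified the same way as leveldb's own corruption errors. A short sketch (the file metadata is made up):

package main

import (
	"fmt"

	lerrors "github.com/syndtr/goleveldb/leveldb/errors"
	"github.com/syndtr/goleveldb/leveldb/storage"
)

func main() {
	err := &storage.ErrCorrupted{
		File: &storage.FileInfo{Type: storage.TypeManifest, Num: 7},
		Err:  fmt.Errorf("incomplete manifest"),
	}
	fmt.Println(err)                      // message plus [file=...] suffix
	fmt.Println(lerrors.IsCorrupted(err)) // true
}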
View File

@@ -42,6 +42,8 @@ type tsOp uint
const (
tsOpOpen tsOp = iota
tsOpCreate
tsOpReplace
tsOpRemove
tsOpRead
tsOpReadAt
tsOpWrite
@@ -241,6 +243,10 @@ func (tf tsFile) Replace(newfile storage.File) (err error) {
if err != nil {
return
}
if tf.shouldErr(tsOpReplace) {
err = errors.New("leveldb.testStorage: emulated replace error")
return
}
err = tf.File.Replace(newfile.(tsFile).File)
if err != nil {
ts.t.Errorf("E: cannot replace file, num=%d type=%v: %v", tf.Num(), tf.Type(), err)
@@ -258,6 +264,10 @@ func (tf tsFile) Remove() (err error) {
if err != nil {
return
}
if tf.shouldErr(tsOpRemove) {
err = errors.New("leveldb.testStorage: emulated remove error")
return
}
err = tf.File.Remove()
if err != nil {
ts.t.Errorf("E: cannot remove file, num=%d type=%v: %v", tf.Num(), tf.Type(), err)

View File

@@ -287,6 +287,7 @@ func (x *tFilesSortByNum) Less(i, j int) bool {
// Table operations.
type tOps struct {
s *session
noSync bool
cache *cache.Cache
bcache *cache.Cache
bpool *util.BufferPool
@@ -441,22 +442,27 @@ func newTableOps(s *session) *tOps {
var (
cacher cache.Cacher
bcache *cache.Cache
bpool *util.BufferPool
)
if s.o.GetOpenFilesCacheCapacity() > 0 {
cacher = cache.NewLRU(s.o.GetOpenFilesCacheCapacity())
}
if !s.o.DisableBlockCache {
if !s.o.GetDisableBlockCache() {
var bcacher cache.Cacher
if s.o.GetBlockCacheCapacity() > 0 {
bcacher = cache.NewLRU(s.o.GetBlockCacheCapacity())
}
bcache = cache.NewCache(bcacher)
}
if !s.o.GetDisableBufferPool() {
bpool = util.NewBufferPool(s.o.GetBlockSize() + 5)
}
return &tOps{
s: s,
noSync: s.o.GetNoSync(),
cache: cache.NewCache(cacher),
bcache: bcache,
bpool: bpool,
}
}
@@ -501,9 +507,11 @@ func (w *tWriter) finish() (f *tFile, err error) {
if err != nil {
return
}
if !w.t.noSync {
err = w.w.Sync()
if err != nil {
return
}
}
f = newTableFile(w.file, uint64(w.tw.BytesLen()), iKey(w.first), iKey(w.last))
return

View File

@@ -14,7 +14,7 @@ import (
"strings"
"sync"
"github.com/syndtr/gosnappy/snappy"
"github.com/golang/snappy"
"github.com/syndtr/goleveldb/leveldb/cache"
"github.com/syndtr/goleveldb/leveldb/comparer"

View File

@@ -12,7 +12,7 @@ import (
"fmt"
"io"
"github.com/syndtr/gosnappy/snappy"
"github.com/golang/snappy"
"github.com/syndtr/goleveldb/leveldb/comparer"
"github.com/syndtr/goleveldb/leveldb/filter"
@@ -167,11 +167,7 @@ func (w *Writer) writeBlock(buf *util.Buffer, compression opt.Compression) (bh b
if n := snappy.MaxEncodedLen(buf.Len()) + blockTrailerLen; len(w.compressionScratch) < n {
w.compressionScratch = make([]byte, n)
}
compressed := snappy.Encode(w.compressionScratch, buf.Bytes())
n := len(compressed)
b = compressed[:n+blockTrailerLen]
b[n] = blockTypeSnappyCompression

View File

@@ -136,9 +136,8 @@ func (v *version) get(ikey iKey, ro *opt.ReadOptions, noValue bool) (value []byt
if !tseek {
if tset == nil {
tset = &tSet{level, t}
} else {
tseek = true
}
}
@@ -203,6 +202,28 @@ func (v *version) get(ikey iKey, ro *opt.ReadOptions, noValue bool) (value []byt
return true
})
if tseek && tset.table.consumeSeek() <= 0 {
tcomp = atomic.CompareAndSwapPointer(&v.cSeek, nil, unsafe.Pointer(tset))
}
return
}
func (v *version) sampleSeek(ikey iKey) (tcomp bool) {
var tset *tSet
v.walkOverlapping(ikey, func(level int, t *tFile) bool {
if tset == nil {
tset = &tSet{level, t}
return true
} else {
if tset.table.consumeSeek() <= 0 {
tcomp = atomic.CompareAndSwapPointer(&v.cSeek, nil, unsafe.Pointer(tset))
}
return false
}
}, nil)
return
}
@@ -279,7 +300,7 @@ func (v *version) offsetOf(ikey iKey) (n uint64, err error) {
return
}
func (v *version) pickMemdbLevel(umin, umax []byte) (level int) {
if !v.tables[0].overlaps(v.s.icmp, umin, umax, true) {
var overlaps tFiles
maxLevel := v.s.o.GetMaxMemCompationLevel()

View File

@@ -1,4 +1,4 @@
Copyright (c) 2014-2015 Barracuda Networks, Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

View File

@@ -43,3 +43,14 @@ easily fit into the OTP paradigm. It ought to someday be considered a good
idea to distribute libraries that provide some sort of supervisor tree
functionality out of the box. It is possible to provide this functionality
without explicitly depending on the Suture library.
Changelog
---------
suture uses semantic versioning.
1. 1.0.0
* Initial release.
2. 1.0.1
* Fixed data race on the .state variable.

View File

@@ -36,6 +36,13 @@ to your Supervisor. Supervisors are also services, so you can create a
tree structure here, depending on the exact combination of restarts
you want to create.
As a special case, when adding Supervisors to Supervisors, the "sub"
supervisor will have the "super" supervisor's Log function copied.
This allows you to set one log function on the "top" supervisor, and
have it propagate down to all the sub-supervisors. This also allows
libraries or modules to provide Supervisors without having to commit
their users to a particular logging method.
Finally, as what is probably the last line of your main() function, call
.Serve() on your top level supervisor. This will start all the services
you've defined.
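As a minimal sketch of the pattern just described (the Worker type and the names here are illustrative, not part of suture):
package main

import "github.com/thejerf/suture"

// Worker is a hypothetical Service: any type with Serve and Stop qualifies.
type Worker struct{ stop chan struct{} }

func (w *Worker) Serve() { <-w.stop } // run until asked to stop
func (w *Worker) Stop()  { close(w.stop) }

func main() {
top := suture.NewSimple("top")
sub := suture.NewSimple("sub") // inherits top's Log function when added
sub.Add(&Worker{stop: make(chan struct{})})
top.Add(sub)
top.Serve() // typically the last line of main(); starts every service in the tree
}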
@@ -52,6 +59,7 @@ import (
"log"
"math"
"runtime"
"sync"
"sync/atomic"
"time"
)
@@ -116,7 +124,6 @@ type Supervisor struct {
lastFail time.Time
failures float64
restartQueue []serviceID
serviceCounter serviceID
control chan supervisorMessage
resumeTimer <-chan time.Time
@@ -126,14 +133,19 @@ type Supervisor struct {
// If you ever come up with some need to get into these, submit a pull
// request to make them public and some smidge of justification, and
// I'll happily do it.
// But since I've now changed the signature on these once, I'm glad I
// didn't start with them public... :)
logBadStop func(*Supervisor, Service)
logFailure func(supervisor *Supervisor, service Service, currentFailures float64, failureThreshold float64, restarting bool, error interface{}, stacktrace []byte)
logBackoff func(*Supervisor, bool)
// avoid a dependency on github.com/thejerf/abtime by just implementing
// a minimal chunk.
getNow func() time.Time
getResume func(time.Duration) <-chan time.Time
sync.Mutex
state uint8
}
// Spec is used to pass arguments to the New function to create a
@@ -233,10 +245,10 @@ func New(name string, spec Spec) (s *Supervisor) {
s.resumeTimer = make(chan time.Time)
// set up the default logging handlers
s.logBadStop = func(supervisor *Supervisor, service Service) {
s.log(fmt.Sprintf("%s: Service %s failed to terminate in a timely manner", serviceName(supervisor), serviceName(service)))
}
s.logFailure = func(supervisor *Supervisor, service Service, failures float64, threshold float64, restarting bool, err interface{}, st []byte) {
var errString string
e, canError := err.(error)
@@ -246,7 +258,7 @@ func New(name string, spec Spec) (s *Supervisor) {
errString = fmt.Sprintf("%#v", err)
}
s.log(fmt.Sprintf("Failed service '%s' (%f failures of %f), restarting: %#v, error: %s, stacktrace: %s", serviceName(service), failures, threshold, restarting, errString, string(st)))
s.log(fmt.Sprintf("%s: Failed service '%s' (%f failures of %f), restarting: %#v, error: %s, stacktrace: %s", serviceName(supervisor), serviceName(service), failures, threshold, restarting, errString, string(st)))
}
s.logBackoff = func(s *Supervisor, entering bool) {
if entering {
@@ -346,12 +358,25 @@ will be started when the supervisor is.
The returned ServiceID may be passed to the Remove method of the Supervisor
to terminate the service.
As a special behavior, if the service added is itself a supervisor, the
supervisor being added will copy the Log function from the Supervisor it
is being added to. This allows factoring out providing a Supervisor
from its logging.
*/
func (s *Supervisor) Add(service Service) ServiceToken {
if s == nil {
panic("can't add service to nil *suture.Supervisor")
}
if supervisor, isSupervisor := service.(*Supervisor); isSupervisor {
supervisor.logBadStop = s.logBadStop
supervisor.logFailure = s.logFailure
supervisor.logBackoff = s.logBackoff
}
s.Lock()
if s.state == notRunning {
id := s.serviceCounter
s.serviceCounter++
@@ -359,8 +384,10 @@ func (s *Supervisor) Add(service Service) ServiceToken {
s.services[id] = service
s.restartQueue = append(s.restartQueue, id)
s.Unlock()
return ServiceToken{uint64(s.id)<<32 | uint64(id)}
}
s.Unlock()
response := make(chan serviceID)
s.control <- addService{service, response}
@@ -387,16 +414,19 @@ func (s *Supervisor) Serve() {
}
defer func() {
s.Lock()
s.state = notRunning
s.Unlock()
}()
s.Lock()
if s.state != notRunning {
// This doesn't use a semaphore because it's just a sanity check.
s.Unlock()
panic("Running a supervisor while it is already running?")
}
s.state = normal
s.Unlock()
// for all the services I currently know about, start them
for _, id := range s.restartQueue {
@@ -451,7 +481,9 @@ func (s *Supervisor) Serve() {
// excessive thrashing
// FIXME: Ought to permit some spacing of these functions, rather
// than simply hammering through them
s.Lock()
s.state = normal
s.Unlock()
s.failures = 0
s.logBackoff(s, false)
for _, id := range s.restartQueue {
@@ -478,7 +510,9 @@ func (s *Supervisor) handleFailedService(id serviceID, err interface{}, stacktra
}
if s.failures > s.failureThreshold {
s.Lock()
s.state = paused
s.Unlock()
s.logBackoff(s, true)
s.resumeTimer = s.getResume(s.failureBackoff)
}
@@ -490,14 +524,20 @@ func (s *Supervisor) handleFailedService(id serviceID, err interface{}, stacktra
// It is possible for a service to be no longer monitored
// by the time we get here. In that case, just ignore it.
if monitored {
// this may look dangerous because the state could change, but this
// code is only ever run in the one goroutine that is permitted to
// change the state, so nothing else will.
s.Lock()
curState := s.state
s.Unlock()
if curState == normal {
s.runService(failedService, id)
s.logFailure(s, failedService, s.failures, s.failureThreshold, true, err, stacktrace)
} else {
// FIXME: When restarting, check that the service still
// exists (it may have been stopped in the meantime)
s.restartQueue = append(s.restartQueue, id)
s.logFailure(s, failedService, s.failures, s.failureThreshold, false, err, stacktrace)
}
}
}
@@ -536,7 +576,7 @@ func (s *Supervisor) removeService(id serviceID) {
case <-successChan:
// Life is good!
case <-failChan:
s.logBadStop(s, service)
}
}()
}

View File

@@ -17,7 +17,7 @@ func (i *Incrementor) Serve() {
for {
select {
case i.next <- i.current:
i.current++
case <-i.stop:
// We sync here just to guarantee the output of "Stopping the service",
// so this passes the test reliably.

View File

@@ -77,7 +77,7 @@ func TestFailures(t *testing.T) {
// to avoid deadlocks during shutdown, we have to not try to send
// things out on channels while we're shutting down (this undoes the
// logFailure override about 25 lines down)
s.logFailure = func(*Supervisor, Service, float64, float64, bool, interface{}, []byte) {}
s.Stop()
}()
s.sync()
@@ -102,7 +102,7 @@ func TestFailures(t *testing.T) {
failNotify := make(chan bool)
// use this to synchronize on here
s.logFailure = func(supervisor *Supervisor, s Service, cf float64, ft float64, r bool, error interface{}, stacktrace []byte) {
failNotify <- r
}
@@ -276,8 +276,8 @@ func TestDefaultLogging(t *testing.T) {
serviceName(&BarelyService{})
s.logBadStop(s, service)
s.logFailure(s, service, 1, 1, true, errors.New("test error"), []byte{})
s.Stop()
}
@@ -289,9 +289,17 @@ func TestNestedSupervisors(t *testing.T) {
super2 := NewSimple("Nested5")
service := NewService("Service5")
super2.logBadStop = func(*Supervisor, Service) {
panic("Failed to copy logBadStop")
}
super1.Add(super2)
super2.Add(service)
// test the functions got copied from super1; if this panics, it didn't
// get copied
super2.logBadStop(super2, service)
go super1.Serve()
super1.sync()
@@ -340,7 +348,7 @@ func TestStoppingStillWorksWithHungServices(t *testing.T) {
return resumeChan
}
failNotify := make(chan struct{})
s.logBadStop = func(supervisor *Supervisor, s Service) {
failNotify <- struct{}{}
}
@@ -438,7 +446,7 @@ func TestFailingSupervisors(t *testing.T) {
}
failNotify := make(chan string)
// use this to synchronize on here
s1.logFailure = func(supervisor *Supervisor, s Service, cf float64, ft float64, r bool, error interface{}, stacktrace []byte) {
failNotify <- fmt.Sprintf("%s", s)
}
@@ -470,6 +478,22 @@ func TestNilSupervisorAdd(t *testing.T) {
s.Add(s)
}
// https://github.com/thejerf/suture/issues/11
//
// The purpose of this test is to verify that it does not cause data races,
// so there are no obvious assertions.
func TestIssue11(t *testing.T) {
t.Parallel()
s := NewSimple("main")
s.ServeBackground()
subsuper := NewSimple("sub")
s.Add(subsuper)
subsuper.Add(NewService("may cause data race"))
}
// http://golangtutorials.blogspot.com/2011/10/gotest-unit-testing-and-benchmarking-go.html
// claims test function are run in the same order as the source file...
// I'm not sure if this is part of the contract, though. Especially in the
@@ -501,7 +525,7 @@ func (s *FailableService) Serve() {
everMultistarted = true
panic("Multi-started the same service! " + s.name)
}
s.existing++
s.started <- true
@@ -514,13 +538,13 @@ func (s *FailableService) Serve() {
case Happy:
// Do nothing on purpose. Life is good!
case Fail:
s.existing--
if useStopChan {
s.stop <- true
}
return
case Panic:
s.existing--
panic("Panic!")
case Hang:
// or more specifically, "hang until I release you"
@@ -529,7 +553,7 @@ func (s *FailableService) Serve() {
useStopChan = true
}
case <-s.shutdown:
s.existing--
if useStopChan {
s.stop <- true
}

View File

@@ -0,0 +1,181 @@
// go generate gen.go
// GENERATED BY THE COMMAND ABOVE; DO NOT EDIT
// Package iana provides protocol number resources managed by the Internet Assigned Numbers Authority (IANA).
package iana
// Differentiated Services Field Codepoints (DSCP), Updated: 2013-06-25
const (
DiffServCS0 = 0x0 // CS0
DiffServCS1 = 0x20 // CS1
DiffServCS2 = 0x40 // CS2
DiffServCS3 = 0x60 // CS3
DiffServCS4 = 0x80 // CS4
DiffServCS5 = 0xa0 // CS5
DiffServCS6 = 0xc0 // CS6
DiffServCS7 = 0xe0 // CS7
DiffServAF11 = 0x28 // AF11
DiffServAF12 = 0x30 // AF12
DiffServAF13 = 0x38 // AF13
DiffServAF21 = 0x48 // AF21
DiffServAF22 = 0x50 // AF22
DiffServAF23 = 0x58 // AF23
DiffServAF31 = 0x68 // AF31
DiffServAF32 = 0x70 // AF32
DiffServAF33 = 0x78 // AF33
DiffServAF41 = 0x88 // AF41
DiffServAF42 = 0x90 // AF42
DiffServAF43 = 0x98 // AF43
DiffServEFPHB = 0xb8 // EF PHB
DiffServVOICEADMIT = 0xb0 // VOICE-ADMIT
)
// IPv4 TOS Byte and IPv6 Traffic Class Octet, Updated: 2001-09-06
const (
NotECNTransport = 0x0 // Not-ECT (Not ECN-Capable Transport)
ECNTransport1 = 0x1 // ECT(1) (ECN-Capable Transport(1))
ECNTransport0 = 0x2 // ECT(0) (ECN-Capable Transport(0))
CongestionExperienced = 0x3 // CE (Congestion Experienced)
)
// Protocol Numbers, Updated: 2015-06-23
const (
ProtocolIP = 0 // IPv4 encapsulation, pseudo protocol number
ProtocolHOPOPT = 0 // IPv6 Hop-by-Hop Option
ProtocolICMP = 1 // Internet Control Message
ProtocolIGMP = 2 // Internet Group Management
ProtocolGGP = 3 // Gateway-to-Gateway
ProtocolIPv4 = 4 // IPv4 encapsulation
ProtocolST = 5 // Stream
ProtocolTCP = 6 // Transmission Control
ProtocolCBT = 7 // CBT
ProtocolEGP = 8 // Exterior Gateway Protocol
ProtocolIGP = 9 // any private interior gateway (used by Cisco for their IGRP)
ProtocolBBNRCCMON = 10 // BBN RCC Monitoring
ProtocolNVPII = 11 // Network Voice Protocol
ProtocolPUP = 12 // PUP
ProtocolARGUS = 13 // ARGUS
ProtocolEMCON = 14 // EMCON
ProtocolXNET = 15 // Cross Net Debugger
ProtocolCHAOS = 16 // Chaos
ProtocolUDP = 17 // User Datagram
ProtocolMUX = 18 // Multiplexing
ProtocolDCNMEAS = 19 // DCN Measurement Subsystems
ProtocolHMP = 20 // Host Monitoring
ProtocolPRM = 21 // Packet Radio Measurement
ProtocolXNSIDP = 22 // XEROX NS IDP
ProtocolTRUNK1 = 23 // Trunk-1
ProtocolTRUNK2 = 24 // Trunk-2
ProtocolLEAF1 = 25 // Leaf-1
ProtocolLEAF2 = 26 // Leaf-2
ProtocolRDP = 27 // Reliable Data Protocol
ProtocolIRTP = 28 // Internet Reliable Transaction
ProtocolISOTP4 = 29 // ISO Transport Protocol Class 4
ProtocolNETBLT = 30 // Bulk Data Transfer Protocol
ProtocolMFENSP = 31 // MFE Network Services Protocol
ProtocolMERITINP = 32 // MERIT Internodal Protocol
ProtocolDCCP = 33 // Datagram Congestion Control Protocol
Protocol3PC = 34 // Third Party Connect Protocol
ProtocolIDPR = 35 // Inter-Domain Policy Routing Protocol
ProtocolXTP = 36 // XTP
ProtocolDDP = 37 // Datagram Delivery Protocol
ProtocolIDPRCMTP = 38 // IDPR Control Message Transport Proto
ProtocolTPPP = 39 // TP++ Transport Protocol
ProtocolIL = 40 // IL Transport Protocol
ProtocolIPv6 = 41 // IPv6 encapsulation
ProtocolSDRP = 42 // Source Demand Routing Protocol
ProtocolIPv6Route = 43 // Routing Header for IPv6
ProtocolIPv6Frag = 44 // Fragment Header for IPv6
ProtocolIDRP = 45 // Inter-Domain Routing Protocol
ProtocolRSVP = 46 // Reservation Protocol
ProtocolGRE = 47 // Generic Routing Encapsulation
ProtocolDSR = 48 // Dynamic Source Routing Protocol
ProtocolBNA = 49 // BNA
ProtocolESP = 50 // Encap Security Payload
ProtocolAH = 51 // Authentication Header
ProtocolINLSP = 52 // Integrated Net Layer Security TUBA
ProtocolNARP = 54 // NBMA Address Resolution Protocol
ProtocolMOBILE = 55 // IP Mobility
ProtocolTLSP = 56 // Transport Layer Security Protocol using Kryptonet key management
ProtocolSKIP = 57 // SKIP
ProtocolIPv6ICMP = 58 // ICMP for IPv6
ProtocolIPv6NoNxt = 59 // No Next Header for IPv6
ProtocolIPv6Opts = 60 // Destination Options for IPv6
ProtocolCFTP = 62 // CFTP
ProtocolSATEXPAK = 64 // SATNET and Backroom EXPAK
ProtocolKRYPTOLAN = 65 // Kryptolan
ProtocolRVD = 66 // MIT Remote Virtual Disk Protocol
ProtocolIPPC = 67 // Internet Pluribus Packet Core
ProtocolSATMON = 69 // SATNET Monitoring
ProtocolVISA = 70 // VISA Protocol
ProtocolIPCV = 71 // Internet Packet Core Utility
ProtocolCPNX = 72 // Computer Protocol Network Executive
ProtocolCPHB = 73 // Computer Protocol Heart Beat
ProtocolWSN = 74 // Wang Span Network
ProtocolPVP = 75 // Packet Video Protocol
ProtocolBRSATMON = 76 // Backroom SATNET Monitoring
ProtocolSUNND = 77 // SUN ND PROTOCOL-Temporary
ProtocolWBMON = 78 // WIDEBAND Monitoring
ProtocolWBEXPAK = 79 // WIDEBAND EXPAK
ProtocolISOIP = 80 // ISO Internet Protocol
ProtocolVMTP = 81 // VMTP
ProtocolSECUREVMTP = 82 // SECURE-VMTP
ProtocolVINES = 83 // VINES
ProtocolTTP = 84 // Transaction Transport Protocol
ProtocolIPTM = 84 // Internet Protocol Traffic Manager
ProtocolNSFNETIGP = 85 // NSFNET-IGP
ProtocolDGP = 86 // Dissimilar Gateway Protocol
ProtocolTCF = 87 // TCF
ProtocolEIGRP = 88 // EIGRP
ProtocolOSPFIGP = 89 // OSPFIGP
ProtocolSpriteRPC = 90 // Sprite RPC Protocol
ProtocolLARP = 91 // Locus Address Resolution Protocol
ProtocolMTP = 92 // Multicast Transport Protocol
ProtocolAX25 = 93 // AX.25 Frames
ProtocolIPIP = 94 // IP-within-IP Encapsulation Protocol
ProtocolSCCSP = 96 // Semaphore Communications Sec. Pro.
ProtocolETHERIP = 97 // Ethernet-within-IP Encapsulation
ProtocolENCAP = 98 // Encapsulation Header
ProtocolGMTP = 100 // GMTP
ProtocolIFMP = 101 // Ipsilon Flow Management Protocol
ProtocolPNNI = 102 // PNNI over IP
ProtocolPIM = 103 // Protocol Independent Multicast
ProtocolARIS = 104 // ARIS
ProtocolSCPS = 105 // SCPS
ProtocolQNX = 106 // QNX
ProtocolAN = 107 // Active Networks
ProtocolIPComp = 108 // IP Payload Compression Protocol
ProtocolSNP = 109 // Sitara Networks Protocol
ProtocolCompaqPeer = 110 // Compaq Peer Protocol
ProtocolIPXinIP = 111 // IPX in IP
ProtocolVRRP = 112 // Virtual Router Redundancy Protocol
ProtocolPGM = 113 // PGM Reliable Transport Protocol
ProtocolL2TP = 115 // Layer Two Tunneling Protocol
ProtocolDDX = 116 // D-II Data Exchange (DDX)
ProtocolIATP = 117 // Interactive Agent Transfer Protocol
ProtocolSTP = 118 // Schedule Transfer Protocol
ProtocolSRP = 119 // SpectraLink Radio Protocol
ProtocolUTI = 120 // UTI
ProtocolSMP = 121 // Simple Message Protocol
ProtocolPTP = 123 // Performance Transparency Protocol
ProtocolISIS = 124 // ISIS over IPv4
ProtocolFIRE = 125 // FIRE
ProtocolCRTP = 126 // Combat Radio Transport Protocol
ProtocolCRUDP = 127 // Combat Radio User Datagram
ProtocolSSCOPMCE = 128 // SSCOPMCE
ProtocolIPLT = 129 // IPLT
ProtocolSPS = 130 // Secure Packet Shield
ProtocolPIPE = 131 // Private IP Encapsulation within IP
ProtocolSCTP = 132 // Stream Control Transmission Protocol
ProtocolFC = 133 // Fibre Channel
ProtocolRSVPE2EIGNORE = 134 // RSVP-E2E-IGNORE
ProtocolMobilityHeader = 135 // Mobility Header
ProtocolUDPLite = 136 // UDPLite
ProtocolMPLSinIP = 137 // MPLS-in-IP
ProtocolMANET = 138 // MANET Protocols
ProtocolHIP = 139 // Host Identity Protocol
ProtocolShim6 = 140 // Shim6 Protocol
ProtocolWESP = 141 // Wrapped Encapsulating Security Payload
ProtocolROHC = 142 // Robust Header Compression
ProtocolReserved = 255 // Reserved
)

View File

@@ -0,0 +1,293 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
//go:generate go run gen.go
// This program generates internet protocol constants and tables by
// reading IANA protocol registries.
package main
import (
"bytes"
"encoding/xml"
"fmt"
"go/format"
"io"
"io/ioutil"
"net/http"
"os"
"strconv"
"strings"
)
var registries = []struct {
url string
parse func(io.Writer, io.Reader) error
}{
{
"http://www.iana.org/assignments/dscp-registry/dscp-registry.xml",
parseDSCPRegistry,
},
{
"http://www.iana.org/assignments/ipv4-tos-byte/ipv4-tos-byte.xml",
parseTOSTCByte,
},
{
"http://www.iana.org/assignments/protocol-numbers/protocol-numbers.xml",
parseProtocolNumbers,
},
}
func main() {
var bb bytes.Buffer
fmt.Fprintf(&bb, "// go generate gen.go\n")
fmt.Fprintf(&bb, "// GENERATED BY THE COMMAND ABOVE; DO NOT EDIT\n\n")
fmt.Fprintf(&bb, "// Package iana provides protocol number resources managed by the Internet Assigned Numbers Authority (IANA).\n")
fmt.Fprintf(&bb, `package iana // import "golang.org/x/net/internal/iana"`+"\n\n")
for _, r := range registries {
resp, err := http.Get(r.url)
if err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
fmt.Fprintf(os.Stderr, "got HTTP status code %v for %v\n", resp.StatusCode, r.url)
os.Exit(1)
}
if err := r.parse(&bb, resp.Body); err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
fmt.Fprintf(&bb, "\n")
}
b, err := format.Source(bb.Bytes())
if err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
if err := ioutil.WriteFile("const.go", b, 0644); err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
}
func parseDSCPRegistry(w io.Writer, r io.Reader) error {
dec := xml.NewDecoder(r)
var dr dscpRegistry
if err := dec.Decode(&dr); err != nil {
return err
}
drs := dr.escape()
fmt.Fprintf(w, "// %s, Updated: %s\n", dr.Title, dr.Updated)
fmt.Fprintf(w, "const (\n")
for _, dr := range drs {
fmt.Fprintf(w, "DiffServ%s = %#x", dr.Name, dr.Value)
fmt.Fprintf(w, "// %s\n", dr.OrigName)
}
fmt.Fprintf(w, ")\n")
return nil
}
type dscpRegistry struct {
XMLName xml.Name `xml:"registry"`
Title string `xml:"title"`
Updated string `xml:"updated"`
Note string `xml:"note"`
RegTitle string `xml:"registry>title"`
PoolRecords []struct {
Name string `xml:"name"`
Space string `xml:"space"`
} `xml:"registry>record"`
Records []struct {
Name string `xml:"name"`
Space string `xml:"space"`
} `xml:"registry>registry>record"`
}
type canonDSCPRecord struct {
OrigName string
Name string
Value int
}
func (drr *dscpRegistry) escape() []canonDSCPRecord {
drs := make([]canonDSCPRecord, len(drr.Records))
sr := strings.NewReplacer(
"+", "",
"-", "",
"/", "",
".", "",
" ", "",
)
for i, dr := range drr.Records {
s := strings.TrimSpace(dr.Name)
drs[i].OrigName = s
drs[i].Name = sr.Replace(s)
n, err := strconv.ParseUint(dr.Space, 2, 8)
if err != nil {
continue
}
drs[i].Value = int(n) << 2
}
return drs
}
func parseTOSTCByte(w io.Writer, r io.Reader) error {
dec := xml.NewDecoder(r)
var ttb tosTCByte
if err := dec.Decode(&ttb); err != nil {
return err
}
trs := ttb.escape()
fmt.Fprintf(w, "// %s, Updated: %s\n", ttb.Title, ttb.Updated)
fmt.Fprintf(w, "const (\n")
for _, tr := range trs {
fmt.Fprintf(w, "%s = %#x", tr.Keyword, tr.Value)
fmt.Fprintf(w, "// %s\n", tr.OrigKeyword)
}
fmt.Fprintf(w, ")\n")
return nil
}
type tosTCByte struct {
XMLName xml.Name `xml:"registry"`
Title string `xml:"title"`
Updated string `xml:"updated"`
Note string `xml:"note"`
RegTitle string `xml:"registry>title"`
Records []struct {
Binary string `xml:"binary"`
Keyword string `xml:"keyword"`
} `xml:"registry>record"`
}
type canonTOSTCByteRecord struct {
OrigKeyword string
Keyword string
Value int
}
func (ttb *tosTCByte) escape() []canonTOSTCByteRecord {
trs := make([]canonTOSTCByteRecord, len(ttb.Records))
sr := strings.NewReplacer(
"Capable", "",
"(", "",
")", "",
"+", "",
"-", "",
"/", "",
".", "",
" ", "",
)
for i, tr := range ttb.Records {
s := strings.TrimSpace(tr.Keyword)
trs[i].OrigKeyword = s
ss := strings.Split(s, " ")
if len(ss) > 1 {
trs[i].Keyword = strings.Join(ss[1:], " ")
} else {
trs[i].Keyword = ss[0]
}
trs[i].Keyword = sr.Replace(trs[i].Keyword)
n, err := strconv.ParseUint(tr.Binary, 2, 8)
if err != nil {
continue
}
trs[i].Value = int(n)
}
return trs
}
func parseProtocolNumbers(w io.Writer, r io.Reader) error {
dec := xml.NewDecoder(r)
var pn protocolNumbers
if err := dec.Decode(&pn); err != nil {
return err
}
prs := pn.escape()
prs = append([]canonProtocolRecord{{
Name: "IP",
Descr: "IPv4 encapsulation, pseudo protocol number",
Value: 0,
}}, prs...)
fmt.Fprintf(w, "// %s, Updated: %s\n", pn.Title, pn.Updated)
fmt.Fprintf(w, "const (\n")
for _, pr := range prs {
if pr.Name == "" {
continue
}
fmt.Fprintf(w, "Protocol%s = %d", pr.Name, pr.Value)
s := pr.Descr
if s == "" {
s = pr.OrigName
}
fmt.Fprintf(w, "// %s\n", s)
}
fmt.Fprintf(w, ")\n")
return nil
}
type protocolNumbers struct {
XMLName xml.Name `xml:"registry"`
Title string `xml:"title"`
Updated string `xml:"updated"`
RegTitle string `xml:"registry>title"`
Note string `xml:"registry>note"`
Records []struct {
Value string `xml:"value"`
Name string `xml:"name"`
Descr string `xml:"description"`
} `xml:"registry>record"`
}
type canonProtocolRecord struct {
OrigName string
Name string
Descr string
Value int
}
func (pn *protocolNumbers) escape() []canonProtocolRecord {
prs := make([]canonProtocolRecord, len(pn.Records))
sr := strings.NewReplacer(
"-in-", "in",
"-within-", "within",
"-over-", "over",
"+", "P",
"-", "",
"/", "",
".", "",
" ", "",
)
for i, pr := range pn.Records {
if strings.Contains(pr.Name, "Deprecated") ||
strings.Contains(pr.Name, "deprecated") {
continue
}
prs[i].OrigName = pr.Name
s := strings.TrimSpace(pr.Name)
switch pr.Name {
case "ISIS over IPv4":
prs[i].Name = "ISIS"
case "manet":
prs[i].Name = "MANET"
default:
prs[i].Name = sr.Replace(s)
}
ss := strings.Split(pr.Descr, "\n")
for i := range ss {
ss[i] = strings.TrimSpace(ss[i])
}
if len(ss) > 1 {
prs[i].Descr = strings.Join(ss, " ")
} else {
prs[i].Descr = ss[0]
}
prs[i].Value, _ = strconv.Atoi(pr.Value)
}
return prs
}

Godeps/_workspace/src/golang.org/x/net/ipv6/control.go
View File

@@ -0,0 +1,92 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package ipv6
import (
"errors"
"fmt"
"net"
"sync"
)
var (
errMissingAddress = errors.New("missing address")
errInvalidConnType = errors.New("invalid conn type")
errNoSuchInterface = errors.New("no such interface")
)
// Note that RFC 3542 obsoletes RFC 2292 but OS X Snow Leopard and the
// former still support RFC 2292 only. Please be aware that almost
// all protocol implementations prohibit using a combination of RFC
// 2292 and RFC 3542 for some practical reasons.
type rawOpt struct {
sync.RWMutex
cflags ControlFlags
}
func (c *rawOpt) set(f ControlFlags) { c.cflags |= f }
func (c *rawOpt) clear(f ControlFlags) { c.cflags &^= f }
func (c *rawOpt) isset(f ControlFlags) bool { return c.cflags&f != 0 }
// A ControlFlags represents per packet basis IP-level socket option
// control flags.
type ControlFlags uint
const (
FlagTrafficClass ControlFlags = 1 << iota // pass the traffic class on the received packet
FlagHopLimit // pass the hop limit on the received packet
FlagSrc // pass the source address on the received packet
FlagDst // pass the destination address on the received packet
FlagInterface // pass the interface index on the received packet
FlagPathMTU // pass the path MTU on the received packet path
)
const flagPacketInfo = FlagDst | FlagInterface
// A ControlMessage represents per packet basis IP-level socket
// options.
type ControlMessage struct {
// Receiving socket options: SetControlMessage allows you to
// receive the options from the protocol stack using the ReadFrom
// method of PacketConn.
//
// Specifying socket options: passing a ControlMessage to the
// WriteTo method of PacketConn allows you to send the options
// to the protocol stack.
//
TrafficClass int // traffic class, must be 1 <= value <= 255 when specifying
HopLimit int // hop limit, must be 1 <= value <= 255 when specifying
Src net.IP // source address, specifying only
Dst net.IP // destination address, receiving only
IfIndex int // interface index, must be 1 <= value when specifying
NextHop net.IP // next hop address, specifying only
MTU int // path MTU, receiving only
}
func (cm *ControlMessage) String() string {
if cm == nil {
return "<nil>"
}
return fmt.Sprintf("tclass: %#x, hoplim: %v, src: %v, dst: %v, ifindex: %v, nexthop: %v, mtu: %v", cm.TrafficClass, cm.HopLimit, cm.Src, cm.Dst, cm.IfIndex, cm.NextHop, cm.MTU)
}
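A minimal sketch of the two roles described above, receiving per-packet options on a UDP socket; the address and buffer size are arbitrary choices for the example:
c, err := net.ListenPacket("udp6", "[::1]:0")
if err != nil {
// handle error
}
p := ipv6.NewPacketConn(c)
// ask the stack to deliver the hop limit and interface index per packet;
// this can fail on platforms without control message support
if err := p.SetControlMessage(ipv6.FlagHopLimit|ipv6.FlagInterface, true); err != nil {
// handle error
}
b := make([]byte, 1500)
n, cm, src, err := p.ReadFrom(b) // cm carries the received options (may be nil)
_, _, _, _ = n, cm, src, err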
// Ancillary data socket options
const (
ctlTrafficClass = iota // header field
ctlHopLimit // header field
ctlPacketInfo // inbound or outbound packet path
ctlNextHop // nexthop
ctlPathMTU // path mtu
ctlMax
)
// A ctlOpt represents a binding for ancillary data socket option.
type ctlOpt struct {
name int // option name, must be equal or greater than 1
length int // option length
marshal func([]byte, *ControlMessage) []byte
parse func(*ControlMessage, []byte)
}

View File

@@ -0,0 +1,56 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build darwin
package ipv6
import (
"syscall"
"unsafe"
"golang.org/x/net/internal/iana"
)
func marshal2292HopLimit(b []byte, cm *ControlMessage) []byte {
m := (*syscall.Cmsghdr)(unsafe.Pointer(&b[0]))
m.Level = iana.ProtocolIPv6
m.Type = sysIPV6_2292HOPLIMIT
m.SetLen(syscall.CmsgLen(4))
if cm != nil {
data := b[syscall.CmsgLen(0):]
// TODO(mikio): fix potential misaligned memory access
*(*int32)(unsafe.Pointer(&data[:4][0])) = int32(cm.HopLimit)
}
return b[syscall.CmsgSpace(4):]
}
func marshal2292PacketInfo(b []byte, cm *ControlMessage) []byte {
m := (*syscall.Cmsghdr)(unsafe.Pointer(&b[0]))
m.Level = iana.ProtocolIPv6
m.Type = sysIPV6_2292PKTINFO
m.SetLen(syscall.CmsgLen(sysSizeofInet6Pktinfo))
if cm != nil {
pi := (*sysInet6Pktinfo)(unsafe.Pointer(&b[syscall.CmsgLen(0)]))
if ip := cm.Src.To16(); ip != nil && ip.To4() == nil {
copy(pi.Addr[:], ip)
}
if cm.IfIndex > 0 {
pi.setIfindex(cm.IfIndex)
}
}
return b[syscall.CmsgSpace(sysSizeofInet6Pktinfo):]
}
func marshal2292NextHop(b []byte, cm *ControlMessage) []byte {
m := (*syscall.Cmsghdr)(unsafe.Pointer(&b[0]))
m.Level = iana.ProtocolIPv6
m.Type = sysIPV6_2292NEXTHOP
m.SetLen(syscall.CmsgLen(sysSizeofSockaddrInet6))
if cm != nil {
sa := (*sysSockaddrInet6)(unsafe.Pointer(&b[syscall.CmsgLen(0)]))
sa.setSockaddr(cm.NextHop, cm.IfIndex)
}
return b[syscall.CmsgSpace(sysSizeofSockaddrInet6):]
}

View File

@@ -0,0 +1,103 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build darwin dragonfly freebsd linux netbsd openbsd
package ipv6
import (
"syscall"
"unsafe"
"golang.org/x/net/internal/iana"
)
func marshalTrafficClass(b []byte, cm *ControlMessage) []byte {
m := (*syscall.Cmsghdr)(unsafe.Pointer(&b[0]))
m.Level = iana.ProtocolIPv6
m.Type = sysIPV6_TCLASS
m.SetLen(syscall.CmsgLen(4))
if cm != nil {
data := b[syscall.CmsgLen(0):]
// TODO(mikio): fix potential misaligned memory access
*(*int32)(unsafe.Pointer(&data[:4][0])) = int32(cm.TrafficClass)
}
return b[syscall.CmsgSpace(4):]
}
func parseTrafficClass(cm *ControlMessage, b []byte) {
// TODO(mikio): fix potential misaligned memory access
cm.TrafficClass = int(*(*int32)(unsafe.Pointer(&b[:4][0])))
}
func marshalHopLimit(b []byte, cm *ControlMessage) []byte {
m := (*syscall.Cmsghdr)(unsafe.Pointer(&b[0]))
m.Level = iana.ProtocolIPv6
m.Type = sysIPV6_HOPLIMIT
m.SetLen(syscall.CmsgLen(4))
if cm != nil {
data := b[syscall.CmsgLen(0):]
// TODO(mikio): fix potential misaligned memory access
*(*int32)(unsafe.Pointer(&data[:4][0])) = int32(cm.HopLimit)
}
return b[syscall.CmsgSpace(4):]
}
func parseHopLimit(cm *ControlMessage, b []byte) {
// TODO(mikio): fix potential misaligned memory access
cm.HopLimit = int(*(*int32)(unsafe.Pointer(&b[:4][0])))
}
func marshalPacketInfo(b []byte, cm *ControlMessage) []byte {
m := (*syscall.Cmsghdr)(unsafe.Pointer(&b[0]))
m.Level = iana.ProtocolIPv6
m.Type = sysIPV6_PKTINFO
m.SetLen(syscall.CmsgLen(sysSizeofInet6Pktinfo))
if cm != nil {
pi := (*sysInet6Pktinfo)(unsafe.Pointer(&b[syscall.CmsgLen(0)]))
if ip := cm.Src.To16(); ip != nil && ip.To4() == nil {
copy(pi.Addr[:], ip)
}
if cm.IfIndex > 0 {
pi.setIfindex(cm.IfIndex)
}
}
return b[syscall.CmsgSpace(sysSizeofInet6Pktinfo):]
}
func parsePacketInfo(cm *ControlMessage, b []byte) {
pi := (*sysInet6Pktinfo)(unsafe.Pointer(&b[0]))
cm.Dst = pi.Addr[:]
cm.IfIndex = int(pi.Ifindex)
}
func marshalNextHop(b []byte, cm *ControlMessage) []byte {
m := (*syscall.Cmsghdr)(unsafe.Pointer(&b[0]))
m.Level = iana.ProtocolIPv6
m.Type = sysIPV6_NEXTHOP
m.SetLen(syscall.CmsgLen(sysSizeofSockaddrInet6))
if cm != nil {
sa := (*sysSockaddrInet6)(unsafe.Pointer(&b[syscall.CmsgLen(0)]))
sa.setSockaddr(cm.NextHop, cm.IfIndex)
}
return b[syscall.CmsgSpace(sysSizeofSockaddrInet6):]
}
func parseNextHop(cm *ControlMessage, b []byte) {
}
func marshalPathMTU(b []byte, cm *ControlMessage) []byte {
m := (*syscall.Cmsghdr)(unsafe.Pointer(&b[0]))
m.Level = iana.ProtocolIPv6
m.Type = sysIPV6_PATHMTU
m.SetLen(syscall.CmsgLen(sysSizeofIPv6Mtuinfo))
return b[syscall.CmsgSpace(sysSizeofIPv6Mtuinfo):]
}
func parsePathMTU(cm *ControlMessage, b []byte) {
mi := (*sysIPv6Mtuinfo)(unsafe.Pointer(&b[0]))
cm.Dst = mi.Addr.Addr[:]
cm.IfIndex = int(mi.Addr.Scope_id)
cm.MTU = int(mi.Mtu)
}

View File

@@ -0,0 +1,23 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build nacl plan9 solaris
package ipv6
func setControlMessage(fd int, opt *rawOpt, cf ControlFlags, on bool) error {
return errOpNoSupport
}
func newControlMessage(opt *rawOpt) (oob []byte) {
return nil
}
func parseControlMessage(b []byte) (*ControlMessage, error) {
return nil, errOpNoSupport
}
func marshalControlMessage(cm *ControlMessage) (oob []byte) {
return nil
}

View File

@@ -0,0 +1,166 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build darwin dragonfly freebsd linux netbsd openbsd
package ipv6
import (
"os"
"syscall"
"golang.org/x/net/internal/iana"
)
func setControlMessage(fd int, opt *rawOpt, cf ControlFlags, on bool) error {
opt.Lock()
defer opt.Unlock()
if cf&FlagTrafficClass != 0 && sockOpts[ssoReceiveTrafficClass].name > 0 {
if err := setInt(fd, &sockOpts[ssoReceiveTrafficClass], boolint(on)); err != nil {
return err
}
if on {
opt.set(FlagTrafficClass)
} else {
opt.clear(FlagTrafficClass)
}
}
if cf&FlagHopLimit != 0 && sockOpts[ssoReceiveHopLimit].name > 0 {
if err := setInt(fd, &sockOpts[ssoReceiveHopLimit], boolint(on)); err != nil {
return err
}
if on {
opt.set(FlagHopLimit)
} else {
opt.clear(FlagHopLimit)
}
}
if cf&flagPacketInfo != 0 && sockOpts[ssoReceivePacketInfo].name > 0 {
if err := setInt(fd, &sockOpts[ssoReceivePacketInfo], boolint(on)); err != nil {
return err
}
if on {
opt.set(cf & flagPacketInfo)
} else {
opt.clear(cf & flagPacketInfo)
}
}
if cf&FlagPathMTU != 0 && sockOpts[ssoReceivePathMTU].name > 0 {
if err := setInt(fd, &sockOpts[ssoReceivePathMTU], boolint(on)); err != nil {
return err
}
if on {
opt.set(FlagPathMTU)
} else {
opt.clear(FlagPathMTU)
}
}
return nil
}
func newControlMessage(opt *rawOpt) (oob []byte) {
opt.RLock()
var l int
if opt.isset(FlagTrafficClass) && ctlOpts[ctlTrafficClass].name > 0 {
l += syscall.CmsgSpace(ctlOpts[ctlTrafficClass].length)
}
if opt.isset(FlagHopLimit) && ctlOpts[ctlHopLimit].name > 0 {
l += syscall.CmsgSpace(ctlOpts[ctlHopLimit].length)
}
if opt.isset(flagPacketInfo) && ctlOpts[ctlPacketInfo].name > 0 {
l += syscall.CmsgSpace(ctlOpts[ctlPacketInfo].length)
}
if opt.isset(FlagPathMTU) && ctlOpts[ctlPathMTU].name > 0 {
l += syscall.CmsgSpace(ctlOpts[ctlPathMTU].length)
}
if l > 0 {
oob = make([]byte, l)
b := oob
if opt.isset(FlagTrafficClass) && ctlOpts[ctlTrafficClass].name > 0 {
b = ctlOpts[ctlTrafficClass].marshal(b, nil)
}
if opt.isset(FlagHopLimit) && ctlOpts[ctlHopLimit].name > 0 {
b = ctlOpts[ctlHopLimit].marshal(b, nil)
}
if opt.isset(flagPacketInfo) && ctlOpts[ctlPacketInfo].name > 0 {
b = ctlOpts[ctlPacketInfo].marshal(b, nil)
}
if opt.isset(FlagPathMTU) && ctlOpts[ctlPathMTU].name > 0 {
b = ctlOpts[ctlPathMTU].marshal(b, nil)
}
}
opt.RUnlock()
return
}
func parseControlMessage(b []byte) (*ControlMessage, error) {
if len(b) == 0 {
return nil, nil
}
cmsgs, err := syscall.ParseSocketControlMessage(b)
if err != nil {
return nil, os.NewSyscallError("parse socket control message", err)
}
cm := &ControlMessage{}
for _, m := range cmsgs {
if m.Header.Level != iana.ProtocolIPv6 {
continue
}
switch int(m.Header.Type) {
case ctlOpts[ctlTrafficClass].name:
ctlOpts[ctlTrafficClass].parse(cm, m.Data[:])
case ctlOpts[ctlHopLimit].name:
ctlOpts[ctlHopLimit].parse(cm, m.Data[:])
case ctlOpts[ctlPacketInfo].name:
ctlOpts[ctlPacketInfo].parse(cm, m.Data[:])
case ctlOpts[ctlPathMTU].name:
ctlOpts[ctlPathMTU].parse(cm, m.Data[:])
}
}
return cm, nil
}
func marshalControlMessage(cm *ControlMessage) (oob []byte) {
if cm == nil {
return
}
var l int
tclass := false
if ctlOpts[ctlTrafficClass].name > 0 && cm.TrafficClass > 0 {
tclass = true
l += syscall.CmsgSpace(ctlOpts[ctlTrafficClass].length)
}
hoplimit := false
if ctlOpts[ctlHopLimit].name > 0 && cm.HopLimit > 0 {
hoplimit = true
l += syscall.CmsgSpace(ctlOpts[ctlHopLimit].length)
}
pktinfo := false
if ctlOpts[ctlPacketInfo].name > 0 && (cm.Src.To16() != nil && cm.Src.To4() == nil || cm.IfIndex > 0) {
pktinfo = true
l += syscall.CmsgSpace(ctlOpts[ctlPacketInfo].length)
}
nexthop := false
if ctlOpts[ctlNextHop].name > 0 && cm.NextHop.To16() != nil && cm.NextHop.To4() == nil {
nexthop = true
l += syscall.CmsgSpace(ctlOpts[ctlNextHop].length)
}
if l > 0 {
oob = make([]byte, l)
b := oob
if tclass {
b = ctlOpts[ctlTrafficClass].marshal(b, cm)
}
if hoplimit {
b = ctlOpts[ctlHopLimit].marshal(b, cm)
}
if pktinfo {
b = ctlOpts[ctlPacketInfo].marshal(b, cm)
}
if nexthop {
b = ctlOpts[ctlNextHop].marshal(b, cm)
}
}
return
}

View File

@@ -0,0 +1,27 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package ipv6
import "syscall"
func setControlMessage(fd syscall.Handle, opt *rawOpt, cf ControlFlags, on bool) error {
// TODO(mikio): implement this
return syscall.EWINDOWS
}
func newControlMessage(opt *rawOpt) (oob []byte) {
// TODO(mikio): implement this
return nil
}
func parseControlMessage(b []byte) (*ControlMessage, error) {
// TODO(mikio): implement this
return nil, syscall.EWINDOWS
}
func marshalControlMessage(cm *ControlMessage) (oob []byte) {
// TODO(mikio): implement this
return nil
}

View File

@@ -0,0 +1,112 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
// +godefs map struct_in6_addr [16]byte /* in6_addr */
package ipv6
/*
#define __APPLE_USE_RFC_3542
#include <netinet/in.h>
#include <netinet/icmp6.h>
*/
import "C"
const (
sysIPV6_UNICAST_HOPS = C.IPV6_UNICAST_HOPS
sysIPV6_MULTICAST_IF = C.IPV6_MULTICAST_IF
sysIPV6_MULTICAST_HOPS = C.IPV6_MULTICAST_HOPS
sysIPV6_MULTICAST_LOOP = C.IPV6_MULTICAST_LOOP
sysIPV6_JOIN_GROUP = C.IPV6_JOIN_GROUP
sysIPV6_LEAVE_GROUP = C.IPV6_LEAVE_GROUP
sysIPV6_PORTRANGE = C.IPV6_PORTRANGE
sysICMP6_FILTER = C.ICMP6_FILTER
sysIPV6_2292PKTINFO = C.IPV6_2292PKTINFO
sysIPV6_2292HOPLIMIT = C.IPV6_2292HOPLIMIT
sysIPV6_2292NEXTHOP = C.IPV6_2292NEXTHOP
sysIPV6_2292HOPOPTS = C.IPV6_2292HOPOPTS
sysIPV6_2292DSTOPTS = C.IPV6_2292DSTOPTS
sysIPV6_2292RTHDR = C.IPV6_2292RTHDR
sysIPV6_2292PKTOPTIONS = C.IPV6_2292PKTOPTIONS
sysIPV6_CHECKSUM = C.IPV6_CHECKSUM
sysIPV6_V6ONLY = C.IPV6_V6ONLY
sysIPV6_IPSEC_POLICY = C.IPV6_IPSEC_POLICY
sysIPV6_RECVTCLASS = C.IPV6_RECVTCLASS
sysIPV6_TCLASS = C.IPV6_TCLASS
sysIPV6_RTHDRDSTOPTS = C.IPV6_RTHDRDSTOPTS
sysIPV6_RECVPKTINFO = C.IPV6_RECVPKTINFO
sysIPV6_RECVHOPLIMIT = C.IPV6_RECVHOPLIMIT
sysIPV6_RECVRTHDR = C.IPV6_RECVRTHDR
sysIPV6_RECVHOPOPTS = C.IPV6_RECVHOPOPTS
sysIPV6_RECVDSTOPTS = C.IPV6_RECVDSTOPTS
sysIPV6_USE_MIN_MTU = C.IPV6_USE_MIN_MTU
sysIPV6_RECVPATHMTU = C.IPV6_RECVPATHMTU
sysIPV6_PATHMTU = C.IPV6_PATHMTU
sysIPV6_PKTINFO = C.IPV6_PKTINFO
sysIPV6_HOPLIMIT = C.IPV6_HOPLIMIT
sysIPV6_NEXTHOP = C.IPV6_NEXTHOP
sysIPV6_HOPOPTS = C.IPV6_HOPOPTS
sysIPV6_DSTOPTS = C.IPV6_DSTOPTS
sysIPV6_RTHDR = C.IPV6_RTHDR
sysIPV6_AUTOFLOWLABEL = C.IPV6_AUTOFLOWLABEL
sysIPV6_DONTFRAG = C.IPV6_DONTFRAG
sysIPV6_PREFER_TEMPADDR = C.IPV6_PREFER_TEMPADDR
sysIPV6_MSFILTER = C.IPV6_MSFILTER
sysMCAST_JOIN_GROUP = C.MCAST_JOIN_GROUP
sysMCAST_LEAVE_GROUP = C.MCAST_LEAVE_GROUP
sysMCAST_JOIN_SOURCE_GROUP = C.MCAST_JOIN_SOURCE_GROUP
sysMCAST_LEAVE_SOURCE_GROUP = C.MCAST_LEAVE_SOURCE_GROUP
sysMCAST_BLOCK_SOURCE = C.MCAST_BLOCK_SOURCE
sysMCAST_UNBLOCK_SOURCE = C.MCAST_UNBLOCK_SOURCE
sysIPV6_BOUND_IF = C.IPV6_BOUND_IF
sysIPV6_PORTRANGE_DEFAULT = C.IPV6_PORTRANGE_DEFAULT
sysIPV6_PORTRANGE_HIGH = C.IPV6_PORTRANGE_HIGH
sysIPV6_PORTRANGE_LOW = C.IPV6_PORTRANGE_LOW
sysSizeofSockaddrStorage = C.sizeof_struct_sockaddr_storage
sysSizeofSockaddrInet6 = C.sizeof_struct_sockaddr_in6
sysSizeofInet6Pktinfo = C.sizeof_struct_in6_pktinfo
sysSizeofIPv6Mtuinfo = C.sizeof_struct_ip6_mtuinfo
sysSizeofIPv6Mreq = C.sizeof_struct_ipv6_mreq
sysSizeofGroupReq = C.sizeof_struct_group_req
sysSizeofGroupSourceReq = C.sizeof_struct_group_source_req
sysSizeofICMPv6Filter = C.sizeof_struct_icmp6_filter
)
type sysSockaddrStorage C.struct_sockaddr_storage
type sysSockaddrInet6 C.struct_sockaddr_in6
type sysInet6Pktinfo C.struct_in6_pktinfo
type sysIPv6Mtuinfo C.struct_ip6_mtuinfo
type sysIPv6Mreq C.struct_ipv6_mreq
type sysICMPv6Filter C.struct_icmp6_filter
type sysGroupReq C.struct_group_req
type sysGroupSourceReq C.struct_group_source_req

View File

@@ -0,0 +1,84 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
// +godefs map struct_in6_addr [16]byte /* in6_addr */
package ipv6
/*
#include <sys/param.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/icmp6.h>
*/
import "C"
const (
sysIPV6_UNICAST_HOPS = C.IPV6_UNICAST_HOPS
sysIPV6_MULTICAST_IF = C.IPV6_MULTICAST_IF
sysIPV6_MULTICAST_HOPS = C.IPV6_MULTICAST_HOPS
sysIPV6_MULTICAST_LOOP = C.IPV6_MULTICAST_LOOP
sysIPV6_JOIN_GROUP = C.IPV6_JOIN_GROUP
sysIPV6_LEAVE_GROUP = C.IPV6_LEAVE_GROUP
sysIPV6_PORTRANGE = C.IPV6_PORTRANGE
sysICMP6_FILTER = C.ICMP6_FILTER
sysIPV6_CHECKSUM = C.IPV6_CHECKSUM
sysIPV6_V6ONLY = C.IPV6_V6ONLY
sysIPV6_IPSEC_POLICY = C.IPV6_IPSEC_POLICY
sysIPV6_RTHDRDSTOPTS = C.IPV6_RTHDRDSTOPTS
sysIPV6_RECVPKTINFO = C.IPV6_RECVPKTINFO
sysIPV6_RECVHOPLIMIT = C.IPV6_RECVHOPLIMIT
sysIPV6_RECVRTHDR = C.IPV6_RECVRTHDR
sysIPV6_RECVHOPOPTS = C.IPV6_RECVHOPOPTS
sysIPV6_RECVDSTOPTS = C.IPV6_RECVDSTOPTS
sysIPV6_USE_MIN_MTU = C.IPV6_USE_MIN_MTU
sysIPV6_RECVPATHMTU = C.IPV6_RECVPATHMTU
sysIPV6_PATHMTU = C.IPV6_PATHMTU
sysIPV6_PKTINFO = C.IPV6_PKTINFO
sysIPV6_HOPLIMIT = C.IPV6_HOPLIMIT
sysIPV6_NEXTHOP = C.IPV6_NEXTHOP
sysIPV6_HOPOPTS = C.IPV6_HOPOPTS
sysIPV6_DSTOPTS = C.IPV6_DSTOPTS
sysIPV6_RTHDR = C.IPV6_RTHDR
sysIPV6_RECVTCLASS = C.IPV6_RECVTCLASS
sysIPV6_AUTOFLOWLABEL = C.IPV6_AUTOFLOWLABEL
sysIPV6_TCLASS = C.IPV6_TCLASS
sysIPV6_DONTFRAG = C.IPV6_DONTFRAG
sysIPV6_PREFER_TEMPADDR = C.IPV6_PREFER_TEMPADDR
sysIPV6_PORTRANGE_DEFAULT = C.IPV6_PORTRANGE_DEFAULT
sysIPV6_PORTRANGE_HIGH = C.IPV6_PORTRANGE_HIGH
sysIPV6_PORTRANGE_LOW = C.IPV6_PORTRANGE_LOW
sysSizeofSockaddrInet6 = C.sizeof_struct_sockaddr_in6
sysSizeofInet6Pktinfo = C.sizeof_struct_in6_pktinfo
sysSizeofIPv6Mtuinfo = C.sizeof_struct_ip6_mtuinfo
sysSizeofIPv6Mreq = C.sizeof_struct_ipv6_mreq
sysSizeofICMPv6Filter = C.sizeof_struct_icmp6_filter
)
type sysSockaddrInet6 C.struct_sockaddr_in6
type sysInet6Pktinfo C.struct_in6_pktinfo
type sysIPv6Mtuinfo C.struct_ip6_mtuinfo
type sysIPv6Mreq C.struct_ipv6_mreq
type sysICMPv6Filter C.struct_icmp6_filter

View File

@@ -0,0 +1,105 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
// +godefs map struct_in6_addr [16]byte /* in6_addr */
package ipv6
/*
#include <sys/param.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/icmp6.h>
*/
import "C"
const (
sysIPV6_UNICAST_HOPS = C.IPV6_UNICAST_HOPS
sysIPV6_MULTICAST_IF = C.IPV6_MULTICAST_IF
sysIPV6_MULTICAST_HOPS = C.IPV6_MULTICAST_HOPS
sysIPV6_MULTICAST_LOOP = C.IPV6_MULTICAST_LOOP
sysIPV6_JOIN_GROUP = C.IPV6_JOIN_GROUP
sysIPV6_LEAVE_GROUP = C.IPV6_LEAVE_GROUP
sysIPV6_PORTRANGE = C.IPV6_PORTRANGE
sysICMP6_FILTER = C.ICMP6_FILTER
sysIPV6_CHECKSUM = C.IPV6_CHECKSUM
sysIPV6_V6ONLY = C.IPV6_V6ONLY
sysIPV6_IPSEC_POLICY = C.IPV6_IPSEC_POLICY
sysIPV6_RTHDRDSTOPTS = C.IPV6_RTHDRDSTOPTS
sysIPV6_RECVPKTINFO = C.IPV6_RECVPKTINFO
sysIPV6_RECVHOPLIMIT = C.IPV6_RECVHOPLIMIT
sysIPV6_RECVRTHDR = C.IPV6_RECVRTHDR
sysIPV6_RECVHOPOPTS = C.IPV6_RECVHOPOPTS
sysIPV6_RECVDSTOPTS = C.IPV6_RECVDSTOPTS
sysIPV6_USE_MIN_MTU = C.IPV6_USE_MIN_MTU
sysIPV6_RECVPATHMTU = C.IPV6_RECVPATHMTU
sysIPV6_PATHMTU = C.IPV6_PATHMTU
sysIPV6_PKTINFO = C.IPV6_PKTINFO
sysIPV6_HOPLIMIT = C.IPV6_HOPLIMIT
sysIPV6_NEXTHOP = C.IPV6_NEXTHOP
sysIPV6_HOPOPTS = C.IPV6_HOPOPTS
sysIPV6_DSTOPTS = C.IPV6_DSTOPTS
sysIPV6_RTHDR = C.IPV6_RTHDR
sysIPV6_RECVTCLASS = C.IPV6_RECVTCLASS
sysIPV6_AUTOFLOWLABEL = C.IPV6_AUTOFLOWLABEL
sysIPV6_TCLASS = C.IPV6_TCLASS
sysIPV6_DONTFRAG = C.IPV6_DONTFRAG
sysIPV6_PREFER_TEMPADDR = C.IPV6_PREFER_TEMPADDR
sysIPV6_BINDANY = C.IPV6_BINDANY
sysIPV6_MSFILTER = C.IPV6_MSFILTER
sysMCAST_JOIN_GROUP = C.MCAST_JOIN_GROUP
sysMCAST_LEAVE_GROUP = C.MCAST_LEAVE_GROUP
sysMCAST_JOIN_SOURCE_GROUP = C.MCAST_JOIN_SOURCE_GROUP
sysMCAST_LEAVE_SOURCE_GROUP = C.MCAST_LEAVE_SOURCE_GROUP
sysMCAST_BLOCK_SOURCE = C.MCAST_BLOCK_SOURCE
sysMCAST_UNBLOCK_SOURCE = C.MCAST_UNBLOCK_SOURCE
sysIPV6_PORTRANGE_DEFAULT = C.IPV6_PORTRANGE_DEFAULT
sysIPV6_PORTRANGE_HIGH = C.IPV6_PORTRANGE_HIGH
sysIPV6_PORTRANGE_LOW = C.IPV6_PORTRANGE_LOW
sysSizeofSockaddrStorage = C.sizeof_struct_sockaddr_storage
sysSizeofSockaddrInet6 = C.sizeof_struct_sockaddr_in6
sysSizeofInet6Pktinfo = C.sizeof_struct_in6_pktinfo
sysSizeofIPv6Mtuinfo = C.sizeof_struct_ip6_mtuinfo
sysSizeofIPv6Mreq = C.sizeof_struct_ipv6_mreq
sysSizeofGroupReq = C.sizeof_struct_group_req
sysSizeofGroupSourceReq = C.sizeof_struct_group_source_req
sysSizeofICMPv6Filter = C.sizeof_struct_icmp6_filter
)
type sysSockaddrStorage C.struct_sockaddr_storage
type sysSockaddrInet6 C.struct_sockaddr_in6
type sysInet6Pktinfo C.struct_in6_pktinfo
type sysIPv6Mtuinfo C.struct_ip6_mtuinfo
type sysIPv6Mreq C.struct_ipv6_mreq
type sysGroupReq C.struct_group_req
type sysGroupSourceReq C.struct_group_source_req
type sysICMPv6Filter C.struct_icmp6_filter

View File

@@ -0,0 +1,136 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
// +godefs map struct_in6_addr [16]byte /* in6_addr */
package ipv6
/*
#include <linux/in.h>
#include <linux/in6.h>
#include <linux/ipv6.h>
#include <linux/icmpv6.h>
*/
import "C"
const (
sysIPV6_ADDRFORM = C.IPV6_ADDRFORM
sysIPV6_2292PKTINFO = C.IPV6_2292PKTINFO
sysIPV6_2292HOPOPTS = C.IPV6_2292HOPOPTS
sysIPV6_2292DSTOPTS = C.IPV6_2292DSTOPTS
sysIPV6_2292RTHDR = C.IPV6_2292RTHDR
sysIPV6_2292PKTOPTIONS = C.IPV6_2292PKTOPTIONS
sysIPV6_CHECKSUM = C.IPV6_CHECKSUM
sysIPV6_2292HOPLIMIT = C.IPV6_2292HOPLIMIT
sysIPV6_NEXTHOP = C.IPV6_NEXTHOP
sysIPV6_FLOWINFO = C.IPV6_FLOWINFO
sysIPV6_UNICAST_HOPS = C.IPV6_UNICAST_HOPS
sysIPV6_MULTICAST_IF = C.IPV6_MULTICAST_IF
sysIPV6_MULTICAST_HOPS = C.IPV6_MULTICAST_HOPS
sysIPV6_MULTICAST_LOOP = C.IPV6_MULTICAST_LOOP
sysIPV6_ADD_MEMBERSHIP = C.IPV6_ADD_MEMBERSHIP
sysIPV6_DROP_MEMBERSHIP = C.IPV6_DROP_MEMBERSHIP
sysMCAST_JOIN_GROUP = C.MCAST_JOIN_GROUP
sysMCAST_LEAVE_GROUP = C.MCAST_LEAVE_GROUP
sysMCAST_JOIN_SOURCE_GROUP = C.MCAST_JOIN_SOURCE_GROUP
sysMCAST_LEAVE_SOURCE_GROUP = C.MCAST_LEAVE_SOURCE_GROUP
sysMCAST_BLOCK_SOURCE = C.MCAST_BLOCK_SOURCE
sysMCAST_UNBLOCK_SOURCE = C.MCAST_UNBLOCK_SOURCE
sysMCAST_MSFILTER = C.MCAST_MSFILTER
sysIPV6_ROUTER_ALERT = C.IPV6_ROUTER_ALERT
sysIPV6_MTU_DISCOVER = C.IPV6_MTU_DISCOVER
sysIPV6_MTU = C.IPV6_MTU
sysIPV6_RECVERR = C.IPV6_RECVERR
sysIPV6_V6ONLY = C.IPV6_V6ONLY
sysIPV6_JOIN_ANYCAST = C.IPV6_JOIN_ANYCAST
sysIPV6_LEAVE_ANYCAST = C.IPV6_LEAVE_ANYCAST
//sysIPV6_PMTUDISC_DONT = C.IPV6_PMTUDISC_DONT
//sysIPV6_PMTUDISC_WANT = C.IPV6_PMTUDISC_WANT
//sysIPV6_PMTUDISC_DO = C.IPV6_PMTUDISC_DO
//sysIPV6_PMTUDISC_PROBE = C.IPV6_PMTUDISC_PROBE
//sysIPV6_PMTUDISC_INTERFACE = C.IPV6_PMTUDISC_INTERFACE
//sysIPV6_PMTUDISC_OMIT = C.IPV6_PMTUDISC_OMIT
sysIPV6_FLOWLABEL_MGR = C.IPV6_FLOWLABEL_MGR
sysIPV6_FLOWINFO_SEND = C.IPV6_FLOWINFO_SEND
sysIPV6_IPSEC_POLICY = C.IPV6_IPSEC_POLICY
sysIPV6_XFRM_POLICY = C.IPV6_XFRM_POLICY
sysIPV6_RECVPKTINFO = C.IPV6_RECVPKTINFO
sysIPV6_PKTINFO = C.IPV6_PKTINFO
sysIPV6_RECVHOPLIMIT = C.IPV6_RECVHOPLIMIT
sysIPV6_HOPLIMIT = C.IPV6_HOPLIMIT
sysIPV6_RECVHOPOPTS = C.IPV6_RECVHOPOPTS
sysIPV6_HOPOPTS = C.IPV6_HOPOPTS
sysIPV6_RTHDRDSTOPTS = C.IPV6_RTHDRDSTOPTS
sysIPV6_RECVRTHDR = C.IPV6_RECVRTHDR
sysIPV6_RTHDR = C.IPV6_RTHDR
sysIPV6_RECVDSTOPTS = C.IPV6_RECVDSTOPTS
sysIPV6_DSTOPTS = C.IPV6_DSTOPTS
sysIPV6_RECVPATHMTU = C.IPV6_RECVPATHMTU
sysIPV6_PATHMTU = C.IPV6_PATHMTU
sysIPV6_DONTFRAG = C.IPV6_DONTFRAG
sysIPV6_RECVTCLASS = C.IPV6_RECVTCLASS
sysIPV6_TCLASS = C.IPV6_TCLASS
sysIPV6_ADDR_PREFERENCES = C.IPV6_ADDR_PREFERENCES
sysIPV6_PREFER_SRC_TMP = C.IPV6_PREFER_SRC_TMP
sysIPV6_PREFER_SRC_PUBLIC = C.IPV6_PREFER_SRC_PUBLIC
sysIPV6_PREFER_SRC_PUBTMP_DEFAULT = C.IPV6_PREFER_SRC_PUBTMP_DEFAULT
sysIPV6_PREFER_SRC_COA = C.IPV6_PREFER_SRC_COA
sysIPV6_PREFER_SRC_HOME = C.IPV6_PREFER_SRC_HOME
sysIPV6_PREFER_SRC_CGA = C.IPV6_PREFER_SRC_CGA
sysIPV6_PREFER_SRC_NONCGA = C.IPV6_PREFER_SRC_NONCGA
sysIPV6_MINHOPCOUNT = C.IPV6_MINHOPCOUNT
sysIPV6_ORIGDSTADDR = C.IPV6_ORIGDSTADDR
sysIPV6_RECVORIGDSTADDR = C.IPV6_RECVORIGDSTADDR
sysIPV6_TRANSPARENT = C.IPV6_TRANSPARENT
sysIPV6_UNICAST_IF = C.IPV6_UNICAST_IF
sysICMPV6_FILTER = C.ICMPV6_FILTER
sysICMPV6_FILTER_BLOCK = C.ICMPV6_FILTER_BLOCK
sysICMPV6_FILTER_PASS = C.ICMPV6_FILTER_PASS
sysICMPV6_FILTER_BLOCKOTHERS = C.ICMPV6_FILTER_BLOCKOTHERS
sysICMPV6_FILTER_PASSONLY = C.ICMPV6_FILTER_PASSONLY
sysSizeofKernelSockaddrStorage = C.sizeof_struct___kernel_sockaddr_storage
sysSizeofSockaddrInet6 = C.sizeof_struct_sockaddr_in6
sysSizeofInet6Pktinfo = C.sizeof_struct_in6_pktinfo
sysSizeofIPv6Mtuinfo = C.sizeof_struct_ip6_mtuinfo
sysSizeofIPv6FlowlabelReq = C.sizeof_struct_in6_flowlabel_req
sysSizeofIPv6Mreq = C.sizeof_struct_ipv6_mreq
sysSizeofGroupReq = C.sizeof_struct_group_req
sysSizeofGroupSourceReq = C.sizeof_struct_group_source_req
sysSizeofICMPv6Filter = C.sizeof_struct_icmp6_filter
)
type sysKernelSockaddrStorage C.struct___kernel_sockaddr_storage
type sysSockaddrInet6 C.struct_sockaddr_in6
type sysInet6Pktinfo C.struct_in6_pktinfo
type sysIPv6Mtuinfo C.struct_ip6_mtuinfo
type sysIPv6FlowlabelReq C.struct_in6_flowlabel_req
type sysIPv6Mreq C.struct_ipv6_mreq
type sysGroupReq C.struct_group_req
type sysGroupSourceReq C.struct_group_source_req
type sysICMPv6Filter C.struct_icmp6_filter

View File

@@ -0,0 +1,80 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
// +godefs map struct_in6_addr [16]byte /* in6_addr */
package ipv6
/*
#include <sys/param.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/icmp6.h>
*/
import "C"
const (
sysIPV6_UNICAST_HOPS = C.IPV6_UNICAST_HOPS
sysIPV6_MULTICAST_IF = C.IPV6_MULTICAST_IF
sysIPV6_MULTICAST_HOPS = C.IPV6_MULTICAST_HOPS
sysIPV6_MULTICAST_LOOP = C.IPV6_MULTICAST_LOOP
sysIPV6_JOIN_GROUP = C.IPV6_JOIN_GROUP
sysIPV6_LEAVE_GROUP = C.IPV6_LEAVE_GROUP
sysIPV6_PORTRANGE = C.IPV6_PORTRANGE
sysICMP6_FILTER = C.ICMP6_FILTER
sysIPV6_CHECKSUM = C.IPV6_CHECKSUM
sysIPV6_V6ONLY = C.IPV6_V6ONLY
sysIPV6_IPSEC_POLICY = C.IPV6_IPSEC_POLICY
sysIPV6_RTHDRDSTOPTS = C.IPV6_RTHDRDSTOPTS
sysIPV6_RECVPKTINFO = C.IPV6_RECVPKTINFO
sysIPV6_RECVHOPLIMIT = C.IPV6_RECVHOPLIMIT
sysIPV6_RECVRTHDR = C.IPV6_RECVRTHDR
sysIPV6_RECVHOPOPTS = C.IPV6_RECVHOPOPTS
sysIPV6_RECVDSTOPTS = C.IPV6_RECVDSTOPTS
sysIPV6_USE_MIN_MTU = C.IPV6_USE_MIN_MTU
sysIPV6_RECVPATHMTU = C.IPV6_RECVPATHMTU
sysIPV6_PATHMTU = C.IPV6_PATHMTU
sysIPV6_PKTINFO = C.IPV6_PKTINFO
sysIPV6_HOPLIMIT = C.IPV6_HOPLIMIT
sysIPV6_NEXTHOP = C.IPV6_NEXTHOP
sysIPV6_HOPOPTS = C.IPV6_HOPOPTS
sysIPV6_DSTOPTS = C.IPV6_DSTOPTS
sysIPV6_RTHDR = C.IPV6_RTHDR
sysIPV6_RECVTCLASS = C.IPV6_RECVTCLASS
sysIPV6_TCLASS = C.IPV6_TCLASS
sysIPV6_DONTFRAG = C.IPV6_DONTFRAG
sysIPV6_PORTRANGE_DEFAULT = C.IPV6_PORTRANGE_DEFAULT
sysIPV6_PORTRANGE_HIGH = C.IPV6_PORTRANGE_HIGH
sysIPV6_PORTRANGE_LOW = C.IPV6_PORTRANGE_LOW
sysSizeofSockaddrInet6 = C.sizeof_struct_sockaddr_in6
sysSizeofInet6Pktinfo = C.sizeof_struct_in6_pktinfo
sysSizeofIPv6Mtuinfo = C.sizeof_struct_ip6_mtuinfo
sysSizeofIPv6Mreq = C.sizeof_struct_ipv6_mreq
sysSizeofICMPv6Filter = C.sizeof_struct_icmp6_filter
)
type sysSockaddrInet6 C.struct_sockaddr_in6
type sysInet6Pktinfo C.struct_in6_pktinfo
type sysIPv6Mtuinfo C.struct_ip6_mtuinfo
type sysIPv6Mreq C.struct_ipv6_mreq
type sysICMPv6Filter C.struct_icmp6_filter

View File

@@ -0,0 +1,89 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
// +godefs map struct_in6_addr [16]byte /* in6_addr */
package ipv6
/*
#include <sys/param.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/icmp6.h>
*/
import "C"
const (
sysIPV6_UNICAST_HOPS = C.IPV6_UNICAST_HOPS
sysIPV6_MULTICAST_IF = C.IPV6_MULTICAST_IF
sysIPV6_MULTICAST_HOPS = C.IPV6_MULTICAST_HOPS
sysIPV6_MULTICAST_LOOP = C.IPV6_MULTICAST_LOOP
sysIPV6_JOIN_GROUP = C.IPV6_JOIN_GROUP
sysIPV6_LEAVE_GROUP = C.IPV6_LEAVE_GROUP
sysIPV6_PORTRANGE = C.IPV6_PORTRANGE
sysICMP6_FILTER = C.ICMP6_FILTER
sysIPV6_CHECKSUM = C.IPV6_CHECKSUM
sysIPV6_V6ONLY = C.IPV6_V6ONLY
sysIPV6_RTHDRDSTOPTS = C.IPV6_RTHDRDSTOPTS
sysIPV6_RECVPKTINFO = C.IPV6_RECVPKTINFO
sysIPV6_RECVHOPLIMIT = C.IPV6_RECVHOPLIMIT
sysIPV6_RECVRTHDR = C.IPV6_RECVRTHDR
sysIPV6_RECVHOPOPTS = C.IPV6_RECVHOPOPTS
sysIPV6_RECVDSTOPTS = C.IPV6_RECVDSTOPTS
sysIPV6_USE_MIN_MTU = C.IPV6_USE_MIN_MTU
sysIPV6_RECVPATHMTU = C.IPV6_RECVPATHMTU
sysIPV6_PATHMTU = C.IPV6_PATHMTU
sysIPV6_PKTINFO = C.IPV6_PKTINFO
sysIPV6_HOPLIMIT = C.IPV6_HOPLIMIT
sysIPV6_NEXTHOP = C.IPV6_NEXTHOP
sysIPV6_HOPOPTS = C.IPV6_HOPOPTS
sysIPV6_DSTOPTS = C.IPV6_DSTOPTS
sysIPV6_RTHDR = C.IPV6_RTHDR
sysIPV6_AUTH_LEVEL = C.IPV6_AUTH_LEVEL
sysIPV6_ESP_TRANS_LEVEL = C.IPV6_ESP_TRANS_LEVEL
sysIPV6_ESP_NETWORK_LEVEL = C.IPV6_ESP_NETWORK_LEVEL
sysIPSEC6_OUTSA = C.IPSEC6_OUTSA
sysIPV6_RECVTCLASS = C.IPV6_RECVTCLASS
sysIPV6_AUTOFLOWLABEL = C.IPV6_AUTOFLOWLABEL
sysIPV6_IPCOMP_LEVEL = C.IPV6_IPCOMP_LEVEL
sysIPV6_TCLASS = C.IPV6_TCLASS
sysIPV6_DONTFRAG = C.IPV6_DONTFRAG
sysIPV6_PIPEX = C.IPV6_PIPEX
sysIPV6_RTABLE = C.IPV6_RTABLE
sysIPV6_PORTRANGE_DEFAULT = C.IPV6_PORTRANGE_DEFAULT
sysIPV6_PORTRANGE_HIGH = C.IPV6_PORTRANGE_HIGH
sysIPV6_PORTRANGE_LOW = C.IPV6_PORTRANGE_LOW
sysSizeofSockaddrInet6 = C.sizeof_struct_sockaddr_in6
sysSizeofInet6Pktinfo = C.sizeof_struct_in6_pktinfo
sysSizeofIPv6Mtuinfo = C.sizeof_struct_ip6_mtuinfo
sysSizeofIPv6Mreq = C.sizeof_struct_ipv6_mreq
sysSizeofICMPv6Filter = C.sizeof_struct_icmp6_filter
)
type sysSockaddrInet6 C.struct_sockaddr_in6
type sysInet6Pktinfo C.struct_in6_pktinfo
type sysIPv6Mtuinfo C.struct_ip6_mtuinfo
type sysIPv6Mreq C.struct_ipv6_mreq
type sysICMPv6Filter C.struct_icmp6_filter


@@ -0,0 +1,96 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
// +godefs map struct_in6_addr [16]byte /* in6_addr */
package ipv6
/*
#include <netinet/in.h>
#include <netinet/icmp6.h>
*/
import "C"
const (
sysIPV6_UNICAST_HOPS = C.IPV6_UNICAST_HOPS
sysIPV6_MULTICAST_IF = C.IPV6_MULTICAST_IF
sysIPV6_MULTICAST_HOPS = C.IPV6_MULTICAST_HOPS
sysIPV6_MULTICAST_LOOP = C.IPV6_MULTICAST_LOOP
sysIPV6_JOIN_GROUP = C.IPV6_JOIN_GROUP
sysIPV6_LEAVE_GROUP = C.IPV6_LEAVE_GROUP
sysIPV6_PKTINFO = C.IPV6_PKTINFO
sysIPV6_HOPLIMIT = C.IPV6_HOPLIMIT
sysIPV6_NEXTHOP = C.IPV6_NEXTHOP
sysIPV6_HOPOPTS = C.IPV6_HOPOPTS
sysIPV6_DSTOPTS = C.IPV6_DSTOPTS
sysIPV6_RTHDR = C.IPV6_RTHDR
sysIPV6_RTHDRDSTOPTS = C.IPV6_RTHDRDSTOPTS
sysIPV6_RECVPKTINFO = C.IPV6_RECVPKTINFO
sysIPV6_RECVHOPLIMIT = C.IPV6_RECVHOPLIMIT
sysIPV6_RECVHOPOPTS = C.IPV6_RECVHOPOPTS
sysIPV6_RECVRTHDR = C.IPV6_RECVRTHDR
sysIPV6_RECVRTHDRDSTOPTS = C.IPV6_RECVRTHDRDSTOPTS
sysIPV6_CHECKSUM = C.IPV6_CHECKSUM
sysIPV6_RECVTCLASS = C.IPV6_RECVTCLASS
sysIPV6_USE_MIN_MTU = C.IPV6_USE_MIN_MTU
sysIPV6_DONTFRAG = C.IPV6_DONTFRAG
sysIPV6_SEC_OPT = C.IPV6_SEC_OPT
sysIPV6_SRC_PREFERENCES = C.IPV6_SRC_PREFERENCES
sysIPV6_RECVPATHMTU = C.IPV6_RECVPATHMTU
sysIPV6_PATHMTU = C.IPV6_PATHMTU
sysIPV6_TCLASS = C.IPV6_TCLASS
sysIPV6_V6ONLY = C.IPV6_V6ONLY
sysIPV6_RECVDSTOPTS = C.IPV6_RECVDSTOPTS
sysIPV6_PREFER_SRC_HOME = C.IPV6_PREFER_SRC_HOME
sysIPV6_PREFER_SRC_COA = C.IPV6_PREFER_SRC_COA
sysIPV6_PREFER_SRC_PUBLIC = C.IPV6_PREFER_SRC_PUBLIC
sysIPV6_PREFER_SRC_TMP = C.IPV6_PREFER_SRC_TMP
sysIPV6_PREFER_SRC_NONCGA = C.IPV6_PREFER_SRC_NONCGA
sysIPV6_PREFER_SRC_CGA = C.IPV6_PREFER_SRC_CGA
sysIPV6_PREFER_SRC_MIPMASK = C.IPV6_PREFER_SRC_MIPMASK
sysIPV6_PREFER_SRC_MIPDEFAULT = C.IPV6_PREFER_SRC_MIPDEFAULT
sysIPV6_PREFER_SRC_TMPMASK = C.IPV6_PREFER_SRC_TMPMASK
sysIPV6_PREFER_SRC_TMPDEFAULT = C.IPV6_PREFER_SRC_TMPDEFAULT
sysIPV6_PREFER_SRC_CGAMASK = C.IPV6_PREFER_SRC_CGAMASK
sysIPV6_PREFER_SRC_CGADEFAULT = C.IPV6_PREFER_SRC_CGADEFAULT
sysIPV6_PREFER_SRC_MASK = C.IPV6_PREFER_SRC_MASK
sysIPV6_PREFER_SRC_DEFAULT = C.IPV6_PREFER_SRC_DEFAULT
sysIPV6_BOUND_IF = C.IPV6_BOUND_IF
sysIPV6_UNSPEC_SRC = C.IPV6_UNSPEC_SRC
sysICMP6_FILTER = C.ICMP6_FILTER
sysSizeofSockaddrInet6 = C.sizeof_struct_sockaddr_in6
sysSizeofInet6Pktinfo = C.sizeof_struct_in6_pktinfo
sysSizeofIPv6Mtuinfo = C.sizeof_struct_ip6_mtuinfo
sysSizeofIPv6Mreq = C.sizeof_struct_ipv6_mreq
sysSizeofICMPv6Filter = C.sizeof_struct_icmp6_filter
)
type sysSockaddrInet6 C.struct_sockaddr_in6
type sysInet6Pktinfo C.struct_in6_pktinfo
type sysIPv6Mtuinfo C.struct_ip6_mtuinfo
type sysIPv6Mreq C.struct_ipv6_mreq
type sysICMPv6Filter C.struct_icmp6_filter


@@ -0,0 +1,288 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build darwin dragonfly freebsd linux netbsd openbsd windows
package ipv6
import (
"net"
"syscall"
)
// MulticastHopLimit returns the hop limit field value for outgoing
// multicast packets.
func (c *dgramOpt) MulticastHopLimit() (int, error) {
if !c.ok() {
return 0, syscall.EINVAL
}
fd, err := c.sysfd()
if err != nil {
return 0, err
}
return getInt(fd, &sockOpts[ssoMulticastHopLimit])
}
// SetMulticastHopLimit sets the hop limit field value for future
// outgoing multicast packets.
func (c *dgramOpt) SetMulticastHopLimit(hoplim int) error {
if !c.ok() {
return syscall.EINVAL
}
fd, err := c.sysfd()
if err != nil {
return err
}
return setInt(fd, &sockOpts[ssoMulticastHopLimit], hoplim)
}
// MulticastInterface returns the default interface for multicast
// packet transmissions.
func (c *dgramOpt) MulticastInterface() (*net.Interface, error) {
if !c.ok() {
return nil, syscall.EINVAL
}
fd, err := c.sysfd()
if err != nil {
return nil, err
}
return getInterface(fd, &sockOpts[ssoMulticastInterface])
}
// SetMulticastInterface sets the default interface for future
// multicast packet transmissions.
func (c *dgramOpt) SetMulticastInterface(ifi *net.Interface) error {
if !c.ok() {
return syscall.EINVAL
}
fd, err := c.sysfd()
if err != nil {
return err
}
return setInterface(fd, &sockOpts[ssoMulticastInterface], ifi)
}
// MulticastLoopback reports whether transmitted multicast packets
// should be copied and sent back to the originator.
func (c *dgramOpt) MulticastLoopback() (bool, error) {
if !c.ok() {
return false, syscall.EINVAL
}
fd, err := c.sysfd()
if err != nil {
return false, err
}
on, err := getInt(fd, &sockOpts[ssoMulticastLoopback])
if err != nil {
return false, err
}
return on == 1, nil
}
// SetMulticastLoopback sets whether transmitted multicast packets
// should be copied and sent back to the originator.
func (c *dgramOpt) SetMulticastLoopback(on bool) error {
if !c.ok() {
return syscall.EINVAL
}
fd, err := c.sysfd()
if err != nil {
return err
}
return setInt(fd, &sockOpts[ssoMulticastLoopback], boolint(on))
}
// JoinGroup joins the group address group on the interface ifi.
// By default all sources that can cast data to group are accepted.
// It's possible to mute and unmute data transmission from a specific
// source by using ExcludeSourceSpecificGroup and
// IncludeSourceSpecificGroup.
// JoinGroup uses the system assigned multicast interface when ifi is
// nil, although this is not recommended because the assignment
// depends on platforms and sometimes it might require routing
// configuration.
func (c *dgramOpt) JoinGroup(ifi *net.Interface, group net.Addr) error {
if !c.ok() {
return syscall.EINVAL
}
fd, err := c.sysfd()
if err != nil {
return err
}
grp := netAddrToIP16(group)
if grp == nil {
return errMissingAddress
}
return setGroup(fd, &sockOpts[ssoJoinGroup], ifi, grp)
}
// LeaveGroup leaves the group address group on the interface ifi
// regardless of whether the group is any-source group or
// source-specific group.
func (c *dgramOpt) LeaveGroup(ifi *net.Interface, group net.Addr) error {
if !c.ok() {
return syscall.EINVAL
}
fd, err := c.sysfd()
if err != nil {
return err
}
grp := netAddrToIP16(group)
if grp == nil {
return errMissingAddress
}
return setGroup(fd, &sockOpts[ssoLeaveGroup], ifi, grp)
}
// JoinSourceSpecificGroup joins the source-specific group comprising
// group and source on the interface ifi.
// JoinSourceSpecificGroup uses the system assigned multicast
// interface when ifi is nil, although this is not recommended because
// the assignment depends on platforms and sometimes it might require
// routing configuration.
func (c *dgramOpt) JoinSourceSpecificGroup(ifi *net.Interface, group, source net.Addr) error {
if !c.ok() {
return syscall.EINVAL
}
fd, err := c.sysfd()
if err != nil {
return err
}
grp := netAddrToIP16(group)
if grp == nil {
return errMissingAddress
}
src := netAddrToIP16(source)
if src == nil {
return errMissingAddress
}
return setSourceGroup(fd, &sockOpts[ssoJoinSourceGroup], ifi, grp, src)
}
// LeaveSourceSpecificGroup leaves the source-specific group on the
// interface ifi.
func (c *dgramOpt) LeaveSourceSpecificGroup(ifi *net.Interface, group, source net.Addr) error {
if !c.ok() {
return syscall.EINVAL
}
fd, err := c.sysfd()
if err != nil {
return err
}
grp := netAddrToIP16(group)
if grp == nil {
return errMissingAddress
}
src := netAddrToIP16(source)
if src == nil {
return errMissingAddress
}
return setSourceGroup(fd, &sockOpts[ssoLeaveSourceGroup], ifi, grp, src)
}
// ExcludeSourceSpecificGroup excludes the source-specific group from
// the already joined any-source groups by JoinGroup on the interface
// ifi.
func (c *dgramOpt) ExcludeSourceSpecificGroup(ifi *net.Interface, group, source net.Addr) error {
if !c.ok() {
return syscall.EINVAL
}
fd, err := c.sysfd()
if err != nil {
return err
}
grp := netAddrToIP16(group)
if grp == nil {
return errMissingAddress
}
src := netAddrToIP16(source)
if src == nil {
return errMissingAddress
}
return setSourceGroup(fd, &sockOpts[ssoBlockSourceGroup], ifi, grp, src)
}
// IncludeSourceSpecificGroup includes the excluded source-specific
// group by ExcludeSourceSpecificGroup again on the interface ifi.
func (c *dgramOpt) IncludeSourceSpecificGroup(ifi *net.Interface, group, source net.Addr) error {
if !c.ok() {
return syscall.EINVAL
}
fd, err := c.sysfd()
if err != nil {
return err
}
grp := netAddrToIP16(group)
if grp == nil {
return errMissingAddress
}
src := netAddrToIP16(source)
if src == nil {
return errMissingAddress
}
return setSourceGroup(fd, &sockOpts[ssoUnblockSourceGroup], ifi, grp, src)
}
// Checksum reports whether the kernel will compute, store or verify a
// checksum for both incoming and outgoing packets. If on is true, it
// returns an offset in bytes into the data of where the checksum
// field is located.
func (c *dgramOpt) Checksum() (on bool, offset int, err error) {
if !c.ok() {
return false, 0, syscall.EINVAL
}
fd, err := c.sysfd()
if err != nil {
return false, 0, err
}
offset, err = getInt(fd, &sockOpts[ssoChecksum])
if err != nil {
return false, 0, err
}
if offset < 0 {
return false, 0, nil
}
return true, offset, nil
}
// SetChecksum enables the kernel checksum processing. If on is true,
// the offset should be an offset in bytes into the data of where the
// checksum field is located.
func (c *dgramOpt) SetChecksum(on bool, offset int) error {
if !c.ok() {
return syscall.EINVAL
}
fd, err := c.sysfd()
if err != nil {
return err
}
if !on {
offset = -1
}
return setInt(fd, &sockOpts[ssoChecksum], offset)
}
// ICMPFilter returns an ICMP filter.
func (c *dgramOpt) ICMPFilter() (*ICMPFilter, error) {
if !c.ok() {
return nil, syscall.EINVAL
}
fd, err := c.sysfd()
if err != nil {
return nil, err
}
return getICMPFilter(fd, &sockOpts[ssoICMPFilter])
}
// SetICMPFilter deploys the ICMP filter.
func (c *dgramOpt) SetICMPFilter(f *ICMPFilter) error {
if !c.ok() {
return syscall.EINVAL
}
fd, err := c.sysfd()
if err != nil {
return err
}
return setICMPFilter(fd, &sockOpts[ssoICMPFilter], f)
}
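
A minimal usage sketch (not part of the diff) tying together the checksum and ICMP filter options above on a raw ICMPv6 socket. Raw sockets need elevated privileges; the wildcard address "::" and the checksum offset 2 (the ICMPv6 checksum field) are assumptions for illustration, consistent with the package's own examples.

package main

import (
	"log"
	"net"

	"golang.org/x/net/ipv6"
)

func main() {
	c, err := net.ListenPacket("ip6:58", "::") // ICMP for IPv6
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()
	p := ipv6.NewPacketConn(c)
	// Let the kernel compute and verify checksums at offset 2,
	// where the ICMPv6 checksum field is located.
	if err := p.SetChecksum(true, 2); err != nil {
		log.Fatal(err)
	}
	// Block everything except echo replies.
	var f ipv6.ICMPFilter
	f.SetAll(true)
	f.Accept(ipv6.ICMPTypeEchoReply)
	if err := p.SetICMPFilter(&f); err != nil {
		log.Fatal(err)
	}
}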


@@ -0,0 +1,119 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build nacl plan9 solaris
package ipv6
import "net"
// MulticastHopLimit returns the hop limit field value for outgoing
// multicast packets.
func (c *dgramOpt) MulticastHopLimit() (int, error) {
return 0, errOpNoSupport
}
// SetMulticastHopLimit sets the hop limit field value for future
// outgoing multicast packets.
func (c *dgramOpt) SetMulticastHopLimit(hoplim int) error {
return errOpNoSupport
}
// MulticastInterface returns the default interface for multicast
// packet transmissions.
func (c *dgramOpt) MulticastInterface() (*net.Interface, error) {
return nil, errOpNoSupport
}
// SetMulticastInterface sets the default interface for future
// multicast packet transmissions.
func (c *dgramOpt) SetMulticastInterface(ifi *net.Interface) error {
return errOpNoSupport
}
// MulticastLoopback reports whether transmitted multicast packets
// should be copied and sent back to the originator.
func (c *dgramOpt) MulticastLoopback() (bool, error) {
return false, errOpNoSupport
}
// SetMulticastLoopback sets whether transmitted multicast packets
// should be copied and sent back to the originator.
func (c *dgramOpt) SetMulticastLoopback(on bool) error {
return errOpNoSupport
}
// JoinGroup joins the group address group on the interface ifi.
// By default all sources that can cast data to group are accepted.
// It's possible to mute and unmute data transmission from a specific
// source by using ExcludeSourceSpecificGroup and
// IncludeSourceSpecificGroup.
// JoinGroup uses the system assigned multicast interface when ifi is
// nil, although this is not recommended because the assignment
// depends on platforms and sometimes it might require routing
// configuration.
func (c *dgramOpt) JoinGroup(ifi *net.Interface, group net.Addr) error {
return errOpNoSupport
}
// LeaveGroup leaves the group address group on the interface ifi
// regardless of whether the group is any-source group or
// source-specific group.
func (c *dgramOpt) LeaveGroup(ifi *net.Interface, group net.Addr) error {
return errOpNoSupport
}
// JoinSourceSpecificGroup joins the source-specific group comprising
// group and source on the interface ifi.
// JoinSourceSpecificGroup uses the system assigned multicast
// interface when ifi is nil, although this is not recommended because
// the assignment depends on platforms and sometimes it might require
// routing configuration.
func (c *dgramOpt) JoinSourceSpecificGroup(ifi *net.Interface, group, source net.Addr) error {
return errOpNoSupport
}
// LeaveSourceSpecificGroup leaves the source-specific group on the
// interface ifi.
func (c *dgramOpt) LeaveSourceSpecificGroup(ifi *net.Interface, group, source net.Addr) error {
return errOpNoSupport
}
// ExcludeSourceSpecificGroup excludes the source-specific group from
// the already joined any-source groups by JoinGroup on the interface
// ifi.
func (c *dgramOpt) ExcludeSourceSpecificGroup(ifi *net.Interface, group, source net.Addr) error {
return errOpNoSupport
}
// IncludeSourceSpecificGroup includes the excluded source-specific
// group by ExcludeSourceSpecificGroup again on the interface ifi.
func (c *dgramOpt) IncludeSourceSpecificGroup(ifi *net.Interface, group, source net.Addr) error {
return errOpNoSupport
}
// Checksum reports whether the kernel will compute, store or verify a
// checksum for both incoming and outgoing packets. If on is true, it
// returns an offset in bytes into the data of where the checksum
// field is located.
func (c *dgramOpt) Checksum() (on bool, offset int, err error) {
return false, 0, errOpNoSupport
}
// SetChecksum enables the kernel checksum processing. If on is true,
// the offset should be an offset in bytes into the data of where the
// checksum field is located.
func (c *dgramOpt) SetChecksum(on bool, offset int) error {
return errOpNoSupport
}
// ICMPFilter returns an ICMP filter.
func (c *dgramOpt) ICMPFilter() (*ICMPFilter, error) {
return nil, errOpNoSupport
}
// SetICMPFilter deploys the ICMP filter.
func (c *dgramOpt) SetICMPFilter(f *ICMPFilter) error {
return errOpNoSupport
}

Godeps/_workspace/src/golang.org/x/net/ipv6/doc.go (generated, vendored)

@@ -0,0 +1,240 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package ipv6 implements IP-level socket options for the Internet
// Protocol version 6.
//
// The package provides IP-level socket options that allow
// manipulation of IPv6 facilities.
//
// The IPv6 protocol is defined in RFC 2460.
// Basic and advanced socket interface extensions are defined in RFC
// 3493 and RFC 3542.
// Socket interface extensions for multicast source filters are
// defined in RFC 3678.
// MLDv1 and MLDv2 are defined in RFC 2710 and RFC 3810.
// Source-specific multicast is defined in RFC 4607.
//
//
// Unicasting
//
// The options for unicasting are available for net.TCPConn,
// net.UDPConn and net.IPConn which are created as network connections
// that use the IPv6 transport. When a single TCP connection carrying
// a data flow of multiple packets needs to indicate the flow is
// important, ipv6.Conn is used to set the traffic class field on the
// IPv6 header for each packet.
//
// ln, err := net.Listen("tcp6", "[::]:1024")
// if err != nil {
// // error handling
// }
// defer ln.Close()
// for {
// c, err := ln.Accept()
// if err != nil {
// // error handling
// }
// go func(c net.Conn) {
// defer c.Close()
//
// The outgoing packets will be labeled DiffServ assured forwarding
// class 1 low drop precedence, known as AF11 packets.
//
// if err := ipv6.NewConn(c).SetTrafficClass(DiffServAF11); err != nil {
// // error handling
// }
// if _, err := c.Write(data); err != nil {
// // error handling
// }
// }(c)
// }
//
//
// Multicasting
//
// The options for multicasting are available for net.UDPConn and
// net.IPConn which are created as network connections that use the
// IPv6 transport. A few network facilities must be prepared before
// you begin multicasting, at a minimum joining network interfaces and
// multicast groups.
//
// en0, err := net.InterfaceByName("en0")
// if err != nil {
// // error handling
// }
// en1, err := net.InterfaceByIndex(911)
// if err != nil {
// // error handling
// }
// group := net.ParseIP("ff02::114")
//
// First, an application listens to an appropriate address with an
// appropriate service port.
//
// c, err := net.ListenPacket("udp6", "[::]:1024")
// if err != nil {
// // error handling
// }
// defer c.Close()
//
// Second, the application joins multicast groups and starts listening to
// the groups on the specified network interfaces. Note that the
// service port for transport layer protocol does not matter with this
// operation as joining groups affects only network and link layer
// protocols, such as IPv6 and Ethernet.
//
// p := ipv6.NewPacketConn(c)
// if err := p.JoinGroup(en0, &net.UDPAddr{IP: group}); err != nil {
// // error handling
// }
// if err := p.JoinGroup(en1, &net.UDPAddr{IP: group}); err != nil {
// // error handling
// }
//
// The application might enable per-packet control message
// transmissions between itself and the protocol stack within the
// kernel. When the application needs the destination address of an
// incoming packet, SetControlMessage of ipv6.PacketConn is used to
// enable control message transmissions.
//
// if err := p.SetControlMessage(ipv6.FlagDst, true); err != nil {
// // error handling
// }
//
// The application could identify whether the received packets are
// of interest by using the control message that contains the
// destination address of the received packet.
//
// b := make([]byte, 1500)
// for {
// n, rcm, src, err := p.ReadFrom(b)
// if err != nil {
// // error handling
// }
// if rcm.Dst.IsMulticast() {
// if rcm.Dst.Equal(group) {
// // joined group, do something
// } else {
// // unknown group, discard
// continue
// }
// }
//
// The application can also send both unicast and multicast packets.
//
// p.SetTrafficClass(DiffServCS0)
// p.SetHopLimit(16)
// if _, err := p.WriteTo(data[:n], nil, src); err != nil {
// // error handling
// }
// dst := &net.UDPAddr{IP: group, Port: 1024}
// wcm := ipv6.ControlMessage{TrafficClass: DiffServCS7, HopLimit: 1}
// for _, ifi := range []*net.Interface{en0, en1} {
// wcm.IfIndex = ifi.Index
// if _, err := p.WriteTo(data[:n], &wcm, dst); err != nil {
// // error handling
// }
// }
// }
//
//
// More multicasting
//
// An application that uses PacketConn may join multiple multicast
// groups. For example, a UDP listener with port 1024 might join two
// different groups across two different network interfaces by
// using:
//
// c, err := net.ListenPacket("udp6", "[::]:1024")
// if err != nil {
// // error handling
// }
// defer c.Close()
// p := ipv6.NewPacketConn(c)
// if err := p.JoinGroup(en0, &net.UDPAddr{IP: net.ParseIP("ff02::1:114")}); err != nil {
// // error handling
// }
// if err := p.JoinGroup(en0, &net.UDPAddr{IP: net.ParseIP("ff02::2:114")}); err != nil {
// // error handling
// }
// if err := p.JoinGroup(en1, &net.UDPAddr{IP: net.ParseIP("ff02::2:114")}); err != nil {
// // error handling
// }
//
// It is possible for multiple UDP listeners that listen on the same
// UDP port to join the same multicast group. The net package will
// provide a socket that listens to a wildcard address with a
// reusable UDP port when an appropriate multicast address prefix is
// passed to net.ListenPacket or net.ListenUDP.
//
// c1, err := net.ListenPacket("udp6", "[ff02::]:1024")
// if err != nil {
// // error handling
// }
// defer c1.Close()
// c2, err := net.ListenPacket("udp6", "[ff02::]:1024")
// if err != nil {
// // error handling
// }
// defer c2.Close()
// p1 := ipv6.NewPacketConn(c1)
// if err := p1.JoinGroup(en0, &net.UDPAddr{IP: net.ParseIP("ff02::114")}); err != nil {
// // error handling
// }
// p2 := ipv6.NewPacketConn(c2)
// if err := p2.JoinGroup(en0, &net.UDPAddr{IP: net.ParseIP("ff02::114")}); err != nil {
// // error handling
// }
//
// It is also possible for the application to leave or rejoin a
// multicast group on the network interface.
//
// if err := p.LeaveGroup(en0, &net.UDPAddr{IP: net.ParseIP("ff02::114")}); err != nil {
// // error handling
// }
// if err := p.JoinGroup(en0, &net.UDPAddr{IP: net.ParseIP("ff01::114")}); err != nil {
// // error handling
// }
//
//
// Source-specific multicasting
//
// An application that uses PacketConn on an MLDv2-supported platform
// can join source-specific multicast groups.
// The application may use JoinSourceSpecificGroup and
// LeaveSourceSpecificGroup for the operation known as "include" mode,
//
// ssmgroup := net.UDPAddr{IP: net.ParseIP("ff32::8000:9")}
// ssmsource := net.UDPAddr{IP: net.ParseIP("fe80::cafe")}
// if err := p.JoinSourceSpecificGroup(en0, &ssmgroup, &ssmsource); err != nil {
// // error handling
// }
// if err := p.LeaveSourceSpecificGroup(en0, &ssmgroup, &ssmsource); err != nil {
// // error handling
// }
//
// or JoinGroup, ExcludeSourceSpecificGroup,
// IncludeSourceSpecificGroup and LeaveGroup for the operation known
// as "exclude" mode.
//
// exclsource := net.UDPAddr{IP: net.ParseIP("fe80::dead")}
// if err := p.JoinGroup(en0, &ssmgroup); err != nil {
// // error handling
// }
// if err := p.ExcludeSourceSpecificGroup(en0, &ssmgroup, &exclsource); err != nil {
// // error handling
// }
// if err := p.LeaveGroup(en0, &ssmgroup); err != nil {
// // error handling
// }
//
// Note that what happens when an application running on a platform
// without MLDv2 support uses JoinSourceSpecificGroup and
// LeaveSourceSpecificGroup depends on the platform implementation.
// In general the platform tries to fall back to MLDv1 and starts
// listening to multicast traffic.
// In the fallback case, ExcludeSourceSpecificGroup and
// IncludeSourceSpecificGroup may return an error.
package ipv6

Godeps/_workspace/src/golang.org/x/net/ipv6/endpoint.go (generated, vendored)

@@ -0,0 +1,123 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package ipv6
import (
"net"
"syscall"
"time"
)
// A Conn represents a network endpoint that uses IPv6 transport.
// It allows setting basic IP-level socket options such as traffic
// class and hop limit.
type Conn struct {
genericOpt
}
type genericOpt struct {
net.Conn
}
func (c *genericOpt) ok() bool { return c != nil && c.Conn != nil }
// PathMTU returns a path MTU value for the destination associated
// with the endpoint.
func (c *Conn) PathMTU() (int, error) {
if !c.genericOpt.ok() {
return 0, syscall.EINVAL
}
fd, err := c.genericOpt.sysfd()
if err != nil {
return 0, err
}
_, mtu, err := getMTUInfo(fd, &sockOpts[ssoPathMTU])
if err != nil {
return 0, err
}
return mtu, nil
}
// NewConn returns a new Conn.
func NewConn(c net.Conn) *Conn {
return &Conn{
genericOpt: genericOpt{Conn: c},
}
}
// A PacketConn represents a packet network endpoint that uses IPv6
// transport. It is used to control several IP-level socket options
// including IPv6 header manipulation. It also provides datagram
// based network I/O methods specific to the IPv6 and higher layer
// protocols such as OSPF, GRE, and UDP.
type PacketConn struct {
genericOpt
dgramOpt
payloadHandler
}
type dgramOpt struct {
net.PacketConn
}
func (c *dgramOpt) ok() bool { return c != nil && c.PacketConn != nil }
// SetControlMessage enables receiving per-packet IP-level socket
// options.
func (c *PacketConn) SetControlMessage(cf ControlFlags, on bool) error {
if !c.payloadHandler.ok() {
return syscall.EINVAL
}
fd, err := c.payloadHandler.sysfd()
if err != nil {
return err
}
return setControlMessage(fd, &c.payloadHandler.rawOpt, cf, on)
}
// SetDeadline sets the read and write deadlines associated with the
// endpoint.
func (c *PacketConn) SetDeadline(t time.Time) error {
if !c.payloadHandler.ok() {
return syscall.EINVAL
}
return c.payloadHandler.SetDeadline(t)
}
// SetReadDeadline sets the read deadline associated with the
// endpoint.
func (c *PacketConn) SetReadDeadline(t time.Time) error {
if !c.payloadHandler.ok() {
return syscall.EINVAL
}
return c.payloadHandler.SetReadDeadline(t)
}
// SetWriteDeadline sets the write deadline associated with the
// endpoint.
func (c *PacketConn) SetWriteDeadline(t time.Time) error {
if !c.payloadHandler.ok() {
return syscall.EINVAL
}
return c.payloadHandler.SetWriteDeadline(t)
}
// Close closes the endpoint.
func (c *PacketConn) Close() error {
if !c.payloadHandler.ok() {
return syscall.EINVAL
}
return c.payloadHandler.Close()
}
// NewPacketConn returns a new PacketConn using c as its underlying
// transport.
func NewPacketConn(c net.PacketConn) *PacketConn {
return &PacketConn{
genericOpt: genericOpt{Conn: c.(net.Conn)},
dgramOpt: dgramOpt{PacketConn: c},
payloadHandler: payloadHandler{PacketConn: c},
}
}
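
A minimal sketch (not part of the diff) of how the wrappers above are meant to be used: an ordinary net.PacketConn is wrapped by NewPacketConn, which then forwards deadlines to the underlying connection. The address "[::1]:0" is an arbitrary assumption.

package main

import (
	"log"
	"net"
	"time"

	"golang.org/x/net/ipv6"
)

func main() {
	c, err := net.ListenPacket("udp6", "[::1]:0")
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()
	p := ipv6.NewPacketConn(c)
	// Deadlines set on the wrapper apply to the wrapped connection.
	if err := p.SetReadDeadline(time.Now().Add(5 * time.Second)); err != nil {
		log.Fatal(err)
	}
	b := make([]byte, 1500)
	if _, _, _, err := p.ReadFrom(b); err != nil {
		log.Println(err) // expect a timeout here
	}
}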


@@ -0,0 +1,214 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package ipv6_test
import (
"fmt"
"log"
"net"
"os"
"time"
"golang.org/x/net/icmp"
"golang.org/x/net/ipv6"
)
func ExampleConn_markingTCP() {
ln, err := net.Listen("tcp6", "[::]:1024")
if err != nil {
log.Fatal(err)
}
defer ln.Close()
for {
c, err := ln.Accept()
if err != nil {
log.Fatal(err)
}
go func(c net.Conn) {
defer c.Close()
p := ipv6.NewConn(c)
if err := p.SetTrafficClass(0x28); err != nil { // DSCP AF11
log.Fatal(err)
}
if err := p.SetHopLimit(128); err != nil {
log.Fatal(err)
}
if _, err := c.Write([]byte("HELLO-R-U-THERE-ACK")); err != nil {
log.Fatal(err)
}
}(c)
}
}
func ExamplePacketConn_servingOneShotMulticastDNS() {
c, err := net.ListenPacket("udp6", "[::]:5353") // mDNS over UDP
if err != nil {
log.Fatal(err)
}
defer c.Close()
p := ipv6.NewPacketConn(c)
en0, err := net.InterfaceByName("en0")
if err != nil {
log.Fatal(err)
}
mDNSLinkLocal := net.UDPAddr{IP: net.ParseIP("ff02::fb")}
if err := p.JoinGroup(en0, &mDNSLinkLocal); err != nil {
log.Fatal(err)
}
defer p.LeaveGroup(en0, &mDNSLinkLocal)
if err := p.SetControlMessage(ipv6.FlagDst|ipv6.FlagInterface, true); err != nil {
log.Fatal(err)
}
var wcm ipv6.ControlMessage
b := make([]byte, 1500)
for {
_, rcm, peer, err := p.ReadFrom(b)
if err != nil {
log.Fatal(err)
}
if !rcm.Dst.IsMulticast() || !rcm.Dst.Equal(mDNSLinkLocal.IP) {
continue
}
wcm.IfIndex = rcm.IfIndex
answers := []byte("FAKE-MDNS-ANSWERS") // fake mDNS answers, you need to implement this
if _, err := p.WriteTo(answers, &wcm, peer); err != nil {
log.Fatal(err)
}
}
}
func ExamplePacketConn_tracingIPPacketRoute() {
// Tracing an IP packet route to www.google.com.
const host = "www.google.com"
ips, err := net.LookupIP(host)
if err != nil {
log.Fatal(err)
}
var dst net.IPAddr
for _, ip := range ips {
if ip.To16() != nil && ip.To4() == nil {
dst.IP = ip
fmt.Printf("using %v for tracing an IP packet route to %s\n", dst.IP, host)
break
}
}
if dst.IP == nil {
log.Fatal("no AAAA record found")
}
c, err := net.ListenPacket("ip6:58", "::") // ICMP for IPv6
if err != nil {
log.Fatal(err)
}
defer c.Close()
p := ipv6.NewPacketConn(c)
if err := p.SetControlMessage(ipv6.FlagHopLimit|ipv6.FlagSrc|ipv6.FlagDst|ipv6.FlagInterface, true); err != nil {
log.Fatal(err)
}
wm := icmp.Message{
Type: ipv6.ICMPTypeEchoRequest, Code: 0,
Body: &icmp.Echo{
ID: os.Getpid() & 0xffff,
Data: []byte("HELLO-R-U-THERE"),
},
}
var f ipv6.ICMPFilter
f.SetAll(true)
f.Accept(ipv6.ICMPTypeTimeExceeded)
f.Accept(ipv6.ICMPTypeEchoReply)
if err := p.SetICMPFilter(&f); err != nil {
log.Fatal(err)
}
var wcm ipv6.ControlMessage
rb := make([]byte, 1500)
for i := 1; i <= 64; i++ { // up to 64 hops
wm.Body.(*icmp.Echo).Seq = i
wb, err := wm.Marshal(nil)
if err != nil {
log.Fatal(err)
}
// In the real world there are usually several
// traffic-engineered paths for each hop.
// You may need to probe a few times to each hop.
begin := time.Now()
wcm.HopLimit = i
if _, err := p.WriteTo(wb, &wcm, &dst); err != nil {
log.Fatal(err)
}
if err := p.SetReadDeadline(time.Now().Add(3 * time.Second)); err != nil {
log.Fatal(err)
}
n, rcm, peer, err := p.ReadFrom(rb)
if err != nil {
if err, ok := err.(net.Error); ok && err.Timeout() {
fmt.Printf("%v\t*\n", i)
continue
}
log.Fatal(err)
}
rm, err := icmp.ParseMessage(58, rb[:n])
if err != nil {
log.Fatal(err)
}
rtt := time.Since(begin)
// In the real world you need to determine whether the
// received message is yours using ControlMessage.Src,
// ControlMessage.Dst, icmp.Echo.ID and icmp.Echo.Seq.
switch rm.Type {
case ipv6.ICMPTypeTimeExceeded:
names, _ := net.LookupAddr(peer.String())
fmt.Printf("%d\t%v %+v %v\n\t%+v\n", i, peer, names, rtt, rcm)
case ipv6.ICMPTypeEchoReply:
names, _ := net.LookupAddr(peer.String())
fmt.Printf("%d\t%v %+v %v\n\t%+v\n", i, peer, names, rtt, rcm)
return
}
}
}
func ExamplePacketConn_advertisingOSPFHello() {
c, err := net.ListenPacket("ip6:89", "::") // OSPF for IPv6
if err != nil {
log.Fatal(err)
}
defer c.Close()
p := ipv6.NewPacketConn(c)
en0, err := net.InterfaceByName("en0")
if err != nil {
log.Fatal(err)
}
allSPFRouters := net.IPAddr{IP: net.ParseIP("ff02::5")}
if err := p.JoinGroup(en0, &allSPFRouters); err != nil {
log.Fatal(err)
}
defer p.LeaveGroup(en0, &allSPFRouters)
hello := make([]byte, 24) // fake hello data, you need to implement this
ospf := make([]byte, 16) // fake ospf header, you need to implement this
ospf[0] = 3 // version 3
ospf[1] = 1 // hello packet
ospf = append(ospf, hello...)
if err := p.SetChecksum(true, 12); err != nil {
log.Fatal(err)
}
cm := ipv6.ControlMessage{
TrafficClass: 0xc0, // DSCP CS6
HopLimit: 1,
IfIndex: en0.Index,
}
if _, err := p.WriteTo(ospf, &cm, &allSPFRouters); err != nil {
log.Fatal(err)
}
}

Godeps/_workspace/src/golang.org/x/net/ipv6/gen.go (generated, vendored)

@@ -0,0 +1,208 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
//go:generate go run gen.go
// This program generates system adaptation constants and types,
// internet protocol constants and tables by reading template files
// and IANA protocol registries.
package main
import (
"bytes"
"encoding/xml"
"fmt"
"go/format"
"io"
"io/ioutil"
"net/http"
"os"
"os/exec"
"runtime"
"strconv"
"strings"
)
func main() {
if err := genzsys(); err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
if err := geniana(); err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
}
func genzsys() error {
defs := "defs_" + runtime.GOOS + ".go"
f, err := os.Open(defs)
if err != nil {
if os.IsNotExist(err) {
return nil
}
return err
}
f.Close()
cmd := exec.Command("go", "tool", "cgo", "-godefs", defs)
b, err := cmd.Output()
if err != nil {
return err
}
// The ipv6 package still supports Go 1.2, so we need to take
// care of platforms added in Go 1.3 and above so that the
// generated files keep building with Go 1.2.
switch {
case runtime.GOOS == "dragonfly" || runtime.GOOS == "solaris":
b = bytes.Replace(b, []byte("package ipv6\n"), []byte("// +build "+runtime.GOOS+"\n\npackage ipv6\n"), 1)
case runtime.GOOS == "linux" && (runtime.GOARCH == "arm64" || runtime.GOARCH == "ppc64" || runtime.GOARCH == "ppc64le"):
b = bytes.Replace(b, []byte("package ipv6\n"), []byte("// +build "+runtime.GOOS+","+runtime.GOARCH+"\n\npackage ipv6\n"), 1)
}
b, err = format.Source(b)
if err != nil {
return err
}
zsys := "zsys_" + runtime.GOOS + ".go"
switch runtime.GOOS {
case "freebsd", "linux":
zsys = "zsys_" + runtime.GOOS + "_" + runtime.GOARCH + ".go"
}
if err := ioutil.WriteFile(zsys, b, 0644); err != nil {
return err
}
return nil
}
var registries = []struct {
url string
parse func(io.Writer, io.Reader) error
}{
{
"http://www.iana.org/assignments/icmpv6-parameters/icmpv6-parameters.xml",
parseICMPv6Parameters,
},
}
func geniana() error {
var bb bytes.Buffer
fmt.Fprintf(&bb, "// go generate gen.go\n")
fmt.Fprintf(&bb, "// GENERATED BY THE COMMAND ABOVE; DO NOT EDIT\n\n")
fmt.Fprintf(&bb, "package ipv6\n\n")
for _, r := range registries {
resp, err := http.Get(r.url)
if err != nil {
return err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return fmt.Errorf("got HTTP status code %v for %v\n", resp.StatusCode, r.url)
}
if err := r.parse(&bb, resp.Body); err != nil {
return err
}
fmt.Fprintf(&bb, "\n")
}
b, err := format.Source(bb.Bytes())
if err != nil {
return err
}
if err := ioutil.WriteFile("iana.go", b, 0644); err != nil {
return err
}
return nil
}
func parseICMPv6Parameters(w io.Writer, r io.Reader) error {
dec := xml.NewDecoder(r)
var icp icmpv6Parameters
if err := dec.Decode(&icp); err != nil {
return err
}
prs := icp.escape()
fmt.Fprintf(w, "// %s, Updated: %s\n", icp.Title, icp.Updated)
fmt.Fprintf(w, "const (\n")
for _, pr := range prs {
if pr.Name == "" {
continue
}
fmt.Fprintf(w, "ICMPType%s ICMPType = %d", pr.Name, pr.Value)
fmt.Fprintf(w, "// %s\n", pr.OrigName)
}
fmt.Fprintf(w, ")\n\n")
fmt.Fprintf(w, "// %s, Updated: %s\n", icp.Title, icp.Updated)
fmt.Fprintf(w, "var icmpTypes = map[ICMPType]string{\n")
for _, pr := range prs {
if pr.Name == "" {
continue
}
fmt.Fprintf(w, "%d: %q,\n", pr.Value, strings.ToLower(pr.OrigName))
}
fmt.Fprintf(w, "}\n")
return nil
}
type icmpv6Parameters struct {
XMLName xml.Name `xml:"registry"`
Title string `xml:"title"`
Updated string `xml:"updated"`
Registries []struct {
Title string `xml:"title"`
Records []struct {
Value string `xml:"value"`
Name string `xml:"name"`
} `xml:"record"`
} `xml:"registry"`
}
type canonICMPv6ParamRecord struct {
OrigName string
Name string
Value int
}
func (icp *icmpv6Parameters) escape() []canonICMPv6ParamRecord {
id := -1
for i, r := range icp.Registries {
if strings.Contains(r.Title, "Type") || strings.Contains(r.Title, "type") {
id = i
break
}
}
if id < 0 {
return nil
}
prs := make([]canonICMPv6ParamRecord, len(icp.Registries[id].Records))
sr := strings.NewReplacer(
"Messages", "",
"Message", "",
"ICMP", "",
"+", "P",
"-", "",
"/", "",
".", "",
" ", "",
)
for i, pr := range icp.Registries[id].Records {
if strings.Contains(pr.Name, "Reserved") ||
strings.Contains(pr.Name, "Unassigned") ||
strings.Contains(pr.Name, "Deprecated") ||
strings.Contains(pr.Name, "Experiment") ||
strings.Contains(pr.Name, "experiment") {
continue
}
ss := strings.Split(pr.Name, "\n")
if len(ss) > 1 {
prs[i].Name = strings.Join(ss, " ")
} else {
prs[i].Name = ss[0]
}
s := strings.TrimSpace(prs[i].Name)
prs[i].OrigName = s
prs[i].Name = sr.Replace(s)
prs[i].Value, _ = strconv.Atoi(pr.Value)
}
return prs
}
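
The XML decoding above leans on encoding/xml's struct tags; here is a self-contained sketch (not part of the diff) of the same pattern against a tiny inline registry document. The registry type and sample document are fabricated for illustration.

package main

import (
	"encoding/xml"
	"fmt"
	"log"
)

// registry mirrors the shape of icmpv6Parameters on a smaller scale:
// nested anonymous structs with xml tags map elements to fields.
type registry struct {
	XMLName xml.Name `xml:"registry"`
	Title   string   `xml:"title"`
	Records []struct {
		Value string `xml:"value"`
		Name  string `xml:"name"`
	} `xml:"record"`
}

func main() {
	const doc = `<registry><title>Demo</title>
<record><value>1</value><name>Destination Unreachable</name></record>
</registry>`
	var r registry
	if err := xml.Unmarshal([]byte(doc), &r); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s: %s = %s\n", r.Title, r.Records[0].Name, r.Records[0].Value)
}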


@@ -0,0 +1,60 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build darwin dragonfly freebsd linux netbsd openbsd windows
package ipv6
import "syscall"
// TrafficClass returns the traffic class field value for outgoing
// packets.
func (c *genericOpt) TrafficClass() (int, error) {
if !c.ok() {
return 0, syscall.EINVAL
}
fd, err := c.sysfd()
if err != nil {
return 0, err
}
return getInt(fd, &sockOpts[ssoTrafficClass])
}
// SetTrafficClass sets the traffic class field value for future
// outgoing packets.
func (c *genericOpt) SetTrafficClass(tclass int) error {
if !c.ok() {
return syscall.EINVAL
}
fd, err := c.sysfd()
if err != nil {
return err
}
return setInt(fd, &sockOpts[ssoTrafficClass], tclass)
}
// HopLimit returns the hop limit field value for outgoing packets.
func (c *genericOpt) HopLimit() (int, error) {
if !c.ok() {
return 0, syscall.EINVAL
}
fd, err := c.sysfd()
if err != nil {
return 0, err
}
return getInt(fd, &sockOpts[ssoHopLimit])
}
// SetHopLimit sets the hop limit field value for future outgoing
// packets.
func (c *genericOpt) SetHopLimit(hoplim int) error {
if !c.ok() {
return syscall.EINVAL
}
fd, err := c.sysfd()
if err != nil {
return err
}
return setInt(fd, &sockOpts[ssoHopLimit], hoplim)
}
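
A minimal sketch (not part of the diff) exercising the getters and setters above through ipv6.NewConn. The dial target "[::1]:1024" and the AF11 traffic class value are assumptions for illustration.

package main

import (
	"log"
	"net"

	"golang.org/x/net/ipv6"
)

func main() {
	c, err := net.Dial("tcp6", "[::1]:1024")
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()
	p := ipv6.NewConn(c)
	if err := p.SetTrafficClass(0x28); err != nil { // DSCP AF11
		log.Fatal(err)
	}
	if err := p.SetHopLimit(64); err != nil {
		log.Fatal(err)
	}
	tc, err := p.TrafficClass()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("traffic class: %#x", tc)
}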


@@ -0,0 +1,30 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build nacl plan9 solaris
package ipv6
// TrafficClass returns the traffic class field value for outgoing
// packets.
func (c *genericOpt) TrafficClass() (int, error) {
return 0, errOpNoSupport
}
// SetTrafficClass sets the traffic class field value for future
// outgoing packets.
func (c *genericOpt) SetTrafficClass(tclass int) error {
return errOpNoSupport
}
// HopLimit returns the hop limit field value for outgoing packets.
func (c *genericOpt) HopLimit() (int, error) {
return 0, errOpNoSupport
}
// SetHopLimit sets the hop limit field value for future outgoing
// packets.
func (c *genericOpt) SetHopLimit(hoplim int) error {
return errOpNoSupport
}

Godeps/_workspace/src/golang.org/x/net/ipv6/header.go (generated, vendored)

@@ -0,0 +1,55 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package ipv6
import (
"errors"
"fmt"
"net"
)
const (
Version = 6 // protocol version
HeaderLen = 40 // header length
)
// A Header represents an IPv6 base header.
type Header struct {
Version int // protocol version
TrafficClass int // traffic class
FlowLabel int // flow label
PayloadLen int // payload length
NextHeader int // next header
HopLimit int // hop limit
Src net.IP // source address
Dst net.IP // destination address
}
func (h *Header) String() string {
if h == nil {
return "<nil>"
}
return fmt.Sprintf("ver: %v, tclass: %#x, flowlbl: %#x, payloadlen: %v, nxthdr: %v, hoplim: %v, src: %v, dst: %v", h.Version, h.TrafficClass, h.FlowLabel, h.PayloadLen, h.NextHeader, h.HopLimit, h.Src, h.Dst)
}
// ParseHeader parses b as an IPv6 base header.
func ParseHeader(b []byte) (*Header, error) {
if len(b) < HeaderLen {
return nil, errors.New("header too short")
}
h := &Header{
Version: int(b[0]) >> 4,
TrafficClass: int(b[0]&0x0f)<<4 | int(b[1])>>4,
FlowLabel: int(b[1]&0x0f)<<16 | int(b[2])<<8 | int(b[3]),
PayloadLen: int(b[4])<<8 | int(b[5]),
NextHeader: int(b[6]),
HopLimit: int(b[7]),
}
h.Src = make(net.IP, net.IPv6len)
copy(h.Src, b[8:24])
h.Dst = make(net.IP, net.IPv6len)
copy(h.Dst, b[24:40])
return h, nil
}
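
A minimal sketch (not part of the diff) of ParseHeader on a hand-built 40-byte buffer; the field values are fabricated for illustration.

package main

import (
	"fmt"
	"log"

	"golang.org/x/net/ipv6"
)

func main() {
	b := make([]byte, ipv6.HeaderLen)
	b[0] = 0x60 // version 6, traffic class 0
	b[6] = 58   // next header: ICMPv6
	b[7] = 64   // hop limit
	h, err := ipv6.ParseHeader(b)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(h) // ver: 6, ..., nxthdr: 58, hoplim: 64, ...
}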


@@ -0,0 +1,50 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package ipv6_test
import (
"net"
"reflect"
"testing"
"golang.org/x/net/internal/iana"
"golang.org/x/net/ipv6"
)
var (
wireHeaderFromKernel = [ipv6.HeaderLen]byte{
0x69, 0x8b, 0xee, 0xf1,
0xca, 0xfe, 0x2c, 0x01,
0x20, 0x01, 0x0d, 0xb8,
0x00, 0x01, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x01,
0x20, 0x01, 0x0d, 0xb8,
0x00, 0x02, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x01,
}
testHeader = &ipv6.Header{
Version: ipv6.Version,
TrafficClass: iana.DiffServAF43,
FlowLabel: 0xbeef1,
PayloadLen: 0xcafe,
NextHeader: iana.ProtocolIPv6Frag,
HopLimit: 1,
Src: net.ParseIP("2001:db8:1::1"),
Dst: net.ParseIP("2001:db8:2::1"),
}
)
func TestParseHeader(t *testing.T) {
h, err := ipv6.ParseHeader(wireHeaderFromKernel[:])
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(h, testHeader) {
t.Fatalf("got %#v; want %#v", h, testHeader)
}
}

Godeps/_workspace/src/golang.org/x/net/ipv6/helper.go (generated, vendored)

@@ -0,0 +1,33 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package ipv6
import (
"errors"
"net"
)
var errOpNoSupport = errors.New("operation not supported")
func boolint(b bool) int {
if b {
return 1
}
return 0
}
func netAddrToIP16(a net.Addr) net.IP {
switch v := a.(type) {
case *net.UDPAddr:
if ip := v.IP.To16(); ip != nil && ip.To4() == nil {
return ip
}
case *net.IPAddr:
if ip := v.IP.To16(); ip != nil && ip.To4() == nil {
return ip
}
}
return nil
}
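
netAddrToIP16 is unexported, but the To16/To4 check it relies on is a general idiom worth calling out; a standalone sketch of the same test follows (the helper name isIPv6 is mine, not the package's).

package main

import (
	"fmt"
	"net"
)

// isIPv6 mirrors netAddrToIP16's check: the address has a 16-byte
// form but no 4-byte form, which excludes IPv4 and IPv4-mapped
// addresses.
func isIPv6(ip net.IP) bool {
	return ip.To16() != nil && ip.To4() == nil
}

func main() {
	fmt.Println(isIPv6(net.ParseIP("ff02::114"))) // true
	fmt.Println(isIPv6(net.ParseIP("192.0.2.1"))) // false
}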


@@ -0,0 +1,19 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build nacl plan9 solaris
package ipv6
func (c *genericOpt) sysfd() (int, error) {
return 0, errOpNoSupport
}
func (c *dgramOpt) sysfd() (int, error) {
return 0, errOpNoSupport
}
func (c *payloadHandler) sysfd() (int, error) {
return 0, errOpNoSupport
}

Some files were not shown because too many files have changed in this diff.