Compare commits

...

1228 Commits

Author SHA1 Message Date
Jakob Borg
ad8c266f76 Update docs & translation 2015-11-15 18:15:10 +01:00
Jakob Borg
807c3bdcc7 Merge pull request #2473 from syncthing/revert-2470-nostoppedrescan
Revert "Don't show "Rescan" button for stopped folders"
2015-11-14 21:10:52 +01:00
Audrius Butkevicius
62efbd17de Revert "Don't show "Rescan" button for stopped folders" 2015-11-14 14:01:22 -05:00
Audrius Butkevicius
d99350fd61 Merge pull request #2470 from calmh/nostoppedrescan
Don't show "Rescan" button for stopped folders
2015-11-14 09:09:28 -05:00
Jakob Borg
40c70a9a2a Don't show "Rescan" button for stopped folders 2015-11-14 12:14:09 +01:00
Audrius Butkevicius
8fb7f40a6a Merge pull request #2468 from calmh/removefolder
Remove folder without restart (fixes #2262)
2015-11-13 09:51:58 -05:00
Jakob Borg
2f12d41d9d Don't dirty blockmap key between lookups (fixes #2455) 2015-11-13 15:44:30 +01:00
Jakob Borg
9fbdb6b305 Cancel a running scan 2015-11-13 15:30:21 +01:00
Jakob Borg
73285cadb6 Remove folder without restart (fixes #2262) 2015-11-13 13:32:52 +01:00
Jakob Borg
56f1c295b6 Merge pull request #2462 from AudriusButkevicius/mtimes
Use virtualMtime when deciding if a file is up to date
2015-11-12 08:48:45 +01:00
Jakob Borg
7f76ed8413 Fix build 2015-11-12 08:46:13 +01:00
AudriusButkevicius
f7edd36931 Use virtualMtime when deciding if a file is up to date 2015-11-12 03:40:06 +00:00
Audrius Butkevicius
c39f2b7c05 Merge pull request #2463 from boone/fix_typos
Fix typos.
2015-11-11 21:29:34 -05:00
Mike Boone
342036408e Fix typos. 2015-11-11 21:20:34 -05:00
Jakob Borg
f4904fce17 Merge pull request #2449 from Stefan-Code/upgrade-system
made upgrade-system smarter (fixes #2446)
2015-11-10 21:15:02 +01:00
Stefan Kuntz
2abb2de753 Made upgrade-system smarter (fixes #2446) 2015-11-10 17:41:50 +01:00
Jakob Borg
ef0a0db07e More local discovery URL debugging (ref #2444) 2015-11-10 10:16:55 +01:00
Jakob Borg
a45795efec Add tylerbrazier 2015-11-10 08:20:16 +01:00
Jakob Borg
88ae353aef Merge pull request #2443 from tylerbrazier/master
Audit logins with new Login event (fixes #2377)
2015-11-10 08:19:03 +01:00
Tyler Brazier
97b9690711 Audit logins with new LoginAttempt event (fixes #2377) 2015-11-10 00:49:51 -05:00
Audrius Butkevicius
48ce356d5c Merge pull request #2448 from calmh/connhandling
Refactor out methods resolveAddresses, connectDirect, connectViaRelay
2015-11-09 22:01:06 -05:00
Audrius Butkevicius
92451f94bc Merge pull request #2450 from syncthing/Zillode-patch-1
Correct commentary for ConnectionStats
2015-11-09 18:32:19 -05:00
Zillode
593632045d Correct commentary for ConnectionStats
Related to https://github.com/syncthing/syncthing-android/issues/473 and 944d9c84a0
ConnectionStats returns connection information for all devices, even when disconnected.
2015-11-09 23:58:06 +01:00
Jakob Borg
e3c55ef307 Fix address list in DeviceDiscovered, add debug prints (ref #2444) 2015-11-09 21:36:17 +01:00
Jakob Borg
edf1730fd2 Merge pull request #2447 from alex2108/master
Add default-v4 and default-v6 as options for discovery
2015-11-09 16:02:37 +01:00
Alexander Graf
34cd8e3f95 Add default-v4 and default-v6 as options for discovery 2015-11-09 15:56:46 +01:00
Jakob Borg
242cce022a Refactor out methods resolveAddresses, connectDirect, connectViaRelay 2015-11-09 15:35:32 +01:00
Jakob Borg
19bf51cefb Revert "Merge pull request #2440 from Stefan-Code/master"
This reverts commit 81bc6bf34b, reversing
changes made to 7de736e8d0.

Unfortunately this tricks the upgrade system into picking the wrong
binary. We need to fix the upgrade system before merging this.
2015-11-09 14:23:26 +01:00
Jakob Borg
74a2e80142 Update docs & translations 2015-11-09 14:00:10 +01:00
Jakob Borg
b9b630e3b6 Change certificate on discovery-2 2015-11-09 13:58:44 +01:00
Jakob Borg
f0bdf833d1 Oh come *on* 2015-11-09 12:39:09 +01:00
Jakob Borg
81bc6bf34b Merge pull request #2440 from Stefan-Code/master
Added ufw firewall application preset
2015-11-09 12:19:42 +01:00
Jakob Borg
7de736e8d0 Ok then, lower case 2015-11-09 12:17:23 +01:00
Jakob Borg
cd097e0fce Add Stefan-Code 2015-11-09 12:14:32 +01:00
Stefan-Code
cc81a7ccfe added ufw firewall application preset (fixes #2435) 2015-11-09 11:58:59 +01:00
Jakob Borg
58d320c270 String slice formatting 2015-11-08 18:06:06 +01:00
Jakob Borg
59565fd1d1 Woops 2015-11-07 11:25:00 +01:00
Jakob Borg
55592137a2 Use constructor functions for FolderConfiguration and DeviceConfiguration 2015-11-07 09:50:04 +01:00
Jakob Borg
58523060f0 Actually do negative caching on failed discovery lookups (fixes #2434) 2015-11-06 17:14:20 +01:00
Audrius Butkevicius
07d53be9fc Merge pull request #2432 from calmh/cachepath
Cache the folderconfig Path() call
2015-11-06 08:37:39 +00:00
Jakob Borg
d4b0235a8b Correctly report the default relay server in usage stats 2015-11-06 07:16:15 +00:00
Jakob Borg
34aa41e17b Cache the Path() call, as it's quite expensive and called a lot 2015-11-06 07:11:22 +00:00
Jakob Borg
36f6a9347c Benchmark must use *db.Instance 2015-11-05 17:46:53 +00:00
Jakob Borg
d49d386ef2 Docs and translation update 2015-11-05 15:47:06 +00:00
Jakob Borg
00c363829c Refactor: move folder prepare to its own function 2015-11-05 08:01:47 +00:00
Audrius Butkevicius
a9691dbdf4 Merge pull request #2430 from calmh/jsondecode
Run JSON decoding through the usual setting of defaults and fixing up
2015-11-04 20:44:56 +00:00
Jakob Borg
9df701906f Run JSON decoding through the usual setting of defaults and fixing up
I see no reason not to do this, and it gives a unified place (the prepare()
call) to initialize cached attributes and so on.
2015-11-04 20:33:10 +00:00
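
A minimal sketch of the idea above, with illustrative names and fields (not the actual lib/config code): whatever the input format, the decoded configuration goes through the same prepare() step that applies defaults and initializes cached attributes.

    package config

    import (
        "encoding/json"
        "io"
    )

    type Configuration struct {
        Version         int `json:"version" xml:"version,attr"`
        RescanIntervalS int `json:"rescanIntervalS" xml:"rescanIntervalS"`
    }

    // prepare is the one place where defaults and cached attributes are set,
    // shared by the XML and JSON decode paths.
    func (c *Configuration) prepare() {
        if c.RescanIntervalS <= 0 {
            c.RescanIntervalS = 60
        }
    }

    // ReadJSON decodes a configuration and runs it through the usual fixups.
    func ReadJSON(r io.Reader) (Configuration, error) {
        var cfg Configuration
        if err := json.NewDecoder(r).Decode(&cfg); err != nil {
            return Configuration{}, err
        }
        cfg.prepare()
        return cfg, nil
    }
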
Jakob Borg
283671fa9d Remove old dead code 2015-11-04 20:15:36 +00:00
Jakob Borg
435c29755d We haven't had cleartext passwords in the config for ages 2015-11-04 20:15:11 +00:00
Jakob Borg
686f91777c Don't force rescan dirs and symlinks
We can't look for changed modtime on these as we don't track the modtime
to start with.
2015-11-04 19:53:07 +00:00
Audrius Butkevicius
2aa028facb Add user-agent header, capitalize headers as others seem to do it (fixes #2422) 2015-10-31 15:36:08 +00:00
Audrius Butkevicius
b4bbd050c2 Merge pull request #2424 from calmh/dbinstance
We should pass around db.Instance instead of leveldb.DB
2015-10-31 12:51:23 +00:00
Jakob Borg
2a4fc28318 We should pass around db.Instance instead of leveldb.DB
We're going to need the db.Instance to keep some state, and for that to
work we need the same one passed around everywhere. Hence this moves the
leveldb-specific file opening stuff into the db package and exports the
dbInstance type.
2015-10-31 12:35:30 +01:00
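
A minimal sketch of the resulting shape (details are illustrative; the real db package carries much more): the leveldb-specific file opening lives in the db package, and callers pass around the exported *Instance, which can later grow state and helper methods.

    package db

    import "github.com/syndtr/goleveldb/leveldb"

    // Instance wraps the raw *leveldb.DB so database-level state and helpers
    // can hang off a single type that everyone shares.
    type Instance struct {
        *leveldb.DB
    }

    // Open opens (or creates) the database at path and returns the wrapped
    // instance that the rest of the code passes around instead of *leveldb.DB.
    func Open(path string) (*Instance, error) {
        ldb, err := leveldb.OpenFile(path, nil)
        if err != nil {
            return nil, err
        }
        return &Instance{DB: ldb}, nil
    }
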
Jakob Borg
313485e406 Remove file that snuck in by mistake 2015-10-31 11:38:59 +01:00
Jakob Borg
faf4267c73 Refactor: the various db key functions should be instance methods 2015-10-31 11:27:04 +01:00
Jakob Borg
e6277d799f Undo incorrect revert of folder ID in test config 2015-10-31 11:27:04 +01:00
Jakob Borg
cdbc8004fb Comment pedantry 2015-10-31 11:16:07 +01:00
Audrius Butkevicius
1fac2f686d Merge pull request #2423 from calmh/urls
Creating a correct URL is more difficult than just slapping on a scheme (fixes #2316)
2015-10-30 20:50:32 +00:00
Jakob Borg
08c8d679ac Creating a correct URL is more difficult than just slapping on a scheme (fixes #2316) 2015-10-30 21:22:40 +01:00
Jakob Borg
48c34b7234 Translation update 2015-10-30 10:23:09 +01:00
Audrius Butkevicius
28603f0d2c Merge pull request #2420 from calmh/closelog
Enable log rotation by automatically closing log file (fixes #2251)
2015-10-29 15:25:51 +00:00
Jakob Borg
b2855f02fe Enable log rotation by automatically closing log file (fixes #2251) 2015-10-29 16:04:07 +01:00
Audrius Butkevicius
bef3d88076 Merge pull request #2418 from calmh/fix2416
Rescan changed files before pulling on top of them (fixes #2416)
2015-10-29 08:15:36 +00:00
Jakob Borg
e1a8ea7dec Rescan changed files before pulling on top of them (fixes #2416) 2015-10-29 09:12:37 +01:00
Jakob Borg
c4ad97136f Move leveldb instance and transactions into separate files 2015-10-29 08:07:51 +01:00
Audrius Butkevicius
eab1d6782b Merge pull request #2415 from calmh/dbkeys
Add database and transaction instances
2015-10-28 21:50:29 +00:00
Jakob Borg
fd7b8ec77e Neater transaction handling 2015-10-28 22:04:00 +01:00
Jakob Borg
e28c991331 Create an instance type to tie database methods to 2015-10-28 21:03:05 +01:00
Jakob Borg
a52811dfa3 Don't use godep to run tests 2015-10-28 09:22:07 +01:00
Jakob Borg
9e210d705d The PublicKey() method is an addition in Go 1.4 2015-10-27 16:03:14 +01:00
Jakob Borg
c42f1b53ab pulorder.go -> pullorder.go 2015-10-27 12:14:14 +01:00
Jakob Borg
d171173e90 AlwaysLocalNets should not default to null 2015-10-27 12:04:51 +01:00
Jakob Borg
679f0f9363 Fix some config Copy() things we had forgotten 2015-10-27 11:53:42 +01:00
Jakob Borg
724c1e297f Remove handling of config versions < 10 (v0.11.0) 2015-10-27 11:46:33 +01:00
Jakob Borg
83154569b1 Refactor config types into separate files 2015-10-27 11:37:03 +01:00
Jakob Borg
e3c0fba34b Must not call hex.Dump in non-debug mode... 2015-10-27 10:27:18 +01:00
Jakob Borg
2b6a6b91f3 Remove unused struct field 2015-10-27 09:55:05 +01:00
Audrius Butkevicius
09a555fdd2 Merge pull request #2410 from calmh/hashalloc
Reduce allocations in HashFile
2015-10-27 08:45:38 +00:00
Jakob Borg
dc32f7f0a3 Reduce allocations in HashFile
By using copyBuffer we avoid a buffer allocation for each block we hash,
and by allocating space for the hashes up front we get one large backing
array instead of a small one for each block. For a 17 MiB file this
makes quite a difference in the amount of memory allocated:

	benchmark               old ns/op     new ns/op     delta
	BenchmarkHashFile-8     102045110     100459158     -1.55%

	benchmark               old allocs     new allocs     delta
	BenchmarkHashFile-8     415            144            -65.30%

	benchmark               old bytes     new bytes     delta
	BenchmarkHashFile-8     4504296       48104         -98.93%
2015-10-27 09:37:27 +01:00
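
A minimal sketch of the two techniques, assuming SHA-256 block hashes and illustrative names (not the real lib/scanner code): one copy buffer is reused for every block via io.CopyBuffer, and all digests are appended into a single pre-allocated backing array.

    package scanner

    import (
        "crypto/sha256"
        "io"
    )

    // hashBlocks hashes r in blockSize chunks, reusing one copy buffer and one
    // digest backing array across all blocks instead of allocating per block.
    func hashBlocks(r io.Reader, size, blockSize int64) ([][]byte, error) {
        n := int((size + blockSize - 1) / blockSize)
        backing := make([]byte, 0, n*sha256.Size) // one allocation for all hashes
        hashes := make([][]byte, 0, n)
        buf := make([]byte, 64<<10) // reused by io.CopyBuffer for each block
        h := sha256.New()
        for i := 0; i < n; i++ {
            h.Reset()
            if _, err := io.CopyBuffer(h, io.LimitReader(r, blockSize), buf); err != nil {
                return nil, err
            }
            backing = h.Sum(backing)
            hashes = append(hashes, backing[len(backing)-sha256.Size:])
        }
        return hashes, nil
    }
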
Jakob Borg
1efd8d6c75 Add benchmark of HashFile 2015-10-27 09:30:34 +01:00
Jakob Borg
898fc72313 Fixup NICKS/authors 2015-10-27 08:38:25 +01:00
Jakob Borg
21c5806cbf Merge pull request #2405 from acogdev/master
Documentation and examples for autostarting with Upstart
2015-10-27 08:37:14 +01:00
Jakob Borg
464e6bec95 Log lines in REST should have lower case keys 2015-10-27 08:22:35 +01:00
Audrius Butkevicius
2ae832d919 Fix typo introduced 2015-10-25 21:10:55 +00:00
Audrius Butkevicius
5b03c2d949 Remove dead code 2015-10-25 20:46:09 +00:00
Audrius Butkevicius
f629a998a0 Change errNoDevice message to something more human 2015-10-25 13:27:26 +00:00
Jake Peterson
fe88781bc8 Changed system conf file to use $USER 2015-10-24 14:53:08 -06:00
Audrius Butkevicius
e725c97967 Merge pull request #2406 from syncthing/fix-non-local-local-networks
Consider 'AlwaysLocalNets' in bandwidth limiters
2015-10-24 13:07:04 +01:00
Matt Burke
63caf22671 Consider 'AlwaysLocalNets' in bandwidth limiters
'AlwaysLocalNets' was getting printed, but not getting used
when setting up connections. Now, the nets that should be
considered local are printed and used.
2015-10-24 01:14:25 -04:00
Jake Peterson
44790b1333 Added Jake Peterson to AUTHORS 2015-10-23 22:49:25 -06:00
Jake Peterson
b40bb64612 Documentation and examples for Ubuntu-like Linux systems using
Upstart as the init system.
2015-10-23 22:37:35 -06:00
Audrius Butkevicius
7b5ab29a6d Because I am a muppet 2015-10-23 20:21:21 +01:00
Audrius Butkevicius
4fd614be09 Add a different mode to stindex 2015-10-23 20:02:38 +01:00
Audrius Butkevicius
73236e58c5 Close channel after the client is stopped 2015-10-22 23:09:02 +01:00
Jakob Borg
32414853c6 Fix Raleway font 2015-10-22 21:08:24 +02:00
Jakob Borg
f3dc78d457 Don't deadlock after checking relay client status (fixes #2404) 2015-10-22 20:32:15 +02:00
Jakob Borg
a32ac62208 Don't expect ending slash on Windows 2015-10-22 13:49:41 +02:00
Jakob Borg
d7a934cf0e Paths must not end with slash on Windows 2015-10-22 11:39:34 +02:00
Jakob Borg
503491392d Correct amount of stack unwinding for debug prints 2015-10-22 11:38:45 +02:00
Jakob Borg
b3a2bf367b Tweak new folder defaults 2015-10-22 09:01:10 +02:00
Jakob Borg
c19eff4872 Revive remote client version in the GUI 2015-10-22 08:53:28 +02:00
Jakob Borg
2941a813c2 Fix upgrade tests 2015-10-22 08:35:48 +02:00
Jakob Borg
0a022d38fa Upgrade lib should use same criteria for beta check as main 2015-10-22 08:28:35 +02:00
Jakob Borg
7ed3b3dd3a Docs update 2015-10-22 08:14:43 +02:00
Jakob Borg
ce52963d2b Update test configs to modern v0.12 defaults 2015-10-22 08:06:17 +02:00
Jakob Borg
9a1922fdc6 Merge pull request #2385 from AudriusButkevicius/you-are-next
Add separate client for dynamic relays (fixes #2368)
2015-10-22 08:01:22 +02:00
Audrius Butkevicius
967424a538 Merge pull request #2402 from calmh/truncfaster
Don't load block list in ...Truncated methods
2015-10-21 23:03:48 +01:00
Jakob Borg
83131103cf Don't load block list in ...Truncated methods
Speeds up and reduces allocations on those operations, at the price of
having a manually tweaked XDR decoder for FileInfoTruncated.

benchmark                         old ns/op      new ns/op      delta
BenchmarkReplaceAll-8             1868198122     1880206886     +0.64%
BenchmarkUpdateOneChanged-8       231852         172695         -25.51%
BenchmarkUpdateOneUnchanged-8     230624         179341         -22.24%
BenchmarkNeedHalf-8               104601744      109461427      +4.65%
BenchmarkHave-8                   29102480       34105026       +17.19%
BenchmarkGlobal-8                 150547687      172778045      +14.77%
BenchmarkNeedHalfTruncated-8      102471355      76564986       -25.28%
BenchmarkHaveTruncated-8          28758368       14277481       -50.35%
BenchmarkGlobalTruncated-8        151192913      106070136      -29.84%

benchmark                         old allocs     new allocs     delta
BenchmarkReplaceAll-8             555577         557554         +0.36%
BenchmarkUpdateOneChanged-8       1135           587            -48.28%
BenchmarkUpdateOneUnchanged-8     1135           587            -48.28%
BenchmarkNeedHalf-8               374780         374775         -0.00%
BenchmarkHave-8                   151992         152085         +0.06%
BenchmarkGlobal-8                 530033         530135         +0.02%
BenchmarkNeedHalfTruncated-8      374699         22160          -94.09%
BenchmarkHaveTruncated-8          151834         4904           -96.77%
BenchmarkGlobalTruncated-8        530037         30536          -94.24%

benchmark                         old bytes      new bytes      delta
BenchmarkReplaceAll-8             1765116216     1765305376     +0.01%
BenchmarkUpdateOneChanged-8       135085         93043          -31.12%
BenchmarkUpdateOneUnchanged-8     134976         92928          -31.15%
BenchmarkNeedHalf-8               44758752       44751791       -0.02%
BenchmarkHave-8                   11845052       11967172       +1.03%
BenchmarkGlobal-8                 80431136       80431065       -0.00%
BenchmarkNeedHalfTruncated-8      46526459       18243543       -60.79%
BenchmarkHaveTruncated-8          11348357       418998         -96.31%
BenchmarkGlobalTruncated-8        80977672       43116991       -46.75%
2015-10-21 23:49:10 +02:00
Audrius Butkevicius
9f4a0d3216 Merge pull request #2401 from calmh/blockmap2
Performance tweaks on leveldb code and blockmap
2015-10-21 22:32:10 +01:00
Jakob Borg
c1591a5efd Only run benchmarks with -tags benchmark
Avoids creating temp database and stuff on a normal test run
2015-10-21 23:19:26 +02:00
Jakob Borg
918ef4dff8 Use batches in blockmap, speeds up and reduces memory usage on large Replace and Update ops
benchmark                         old ns/op      new ns/op      delta
BenchmarkReplaceAll-8             2880834572     1868198122     -35.15%
BenchmarkUpdateOneChanged-8       236596         231852         -2.01%
BenchmarkUpdateOneUnchanged-8     227326         230624         +1.45%
BenchmarkNeedHalf-8               105151538      104601744      -0.52%
BenchmarkHave-8                   28827492       29102480       +0.95%
BenchmarkGlobal-8                 150768724      150547687      -0.15%
BenchmarkNeedHalfTruncated-8      104434216      102471355      -1.88%
BenchmarkHaveTruncated-8          27860093       28758368       +3.22%
BenchmarkGlobalTruncated-8        149972888      151192913      +0.81%

benchmark                         old allocs     new allocs     delta
BenchmarkReplaceAll-8             555451         555577         +0.02%
BenchmarkUpdateOneChanged-8       1135           1135           +0.00%
BenchmarkUpdateOneUnchanged-8     1135           1135           +0.00%
BenchmarkNeedHalf-8               374779         374780         +0.00%
BenchmarkHave-8                   151996         151992         -0.00%
BenchmarkGlobal-8                 530066         530033         -0.01%
BenchmarkNeedHalfTruncated-8      374702         374699         -0.00%
BenchmarkHaveTruncated-8          151834         151834         +0.00%
BenchmarkGlobalTruncated-8        530049         530037         -0.00%

benchmark                         old bytes      new bytes      delta
BenchmarkReplaceAll-8             5018351912     1765116216     -64.83%
BenchmarkUpdateOneChanged-8       135085         135085         +0.00%
BenchmarkUpdateOneUnchanged-8     134976         134976         +0.00%
BenchmarkNeedHalf-8               44769400       44758752       -0.02%
BenchmarkHave-8                   11930612       11845052       -0.72%
BenchmarkGlobal-8                 81523668       80431136       -1.34%
BenchmarkNeedHalfTruncated-8      46692342       46526459       -0.36%
BenchmarkHaveTruncated-8          11348357       11348357       +0.00%
BenchmarkGlobalTruncated-8        81843956       80977672       -1.06%
2015-10-21 23:05:23 +02:00
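
A minimal sketch of the batching idea with goleveldb (keys and values are assumed to be prepared elsewhere; this is not the actual blockmap code): the per-block writes are accumulated in a leveldb.Batch and committed once, rather than issuing an individual Put per key.

    package db

    import "github.com/syndtr/goleveldb/leveldb"

    // addFileBlocks writes one key/value pair per block, committing them as a
    // single batch so a large Replace or Update does one write instead of many.
    func addFileBlocks(ldb *leveldb.DB, keys, vals [][]byte) error {
        batch := new(leveldb.Batch)
        for i := range keys {
            batch.Put(keys[i], vals[i])
        }
        return ldb.Write(batch, nil)
    }
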
Jakob Borg
0d9a04c713 Reuse blockkey, speeds up large Update and Replace calls
benchmark                         old ns/op      new ns/op      delta
BenchmarkReplaceAll-8             2866418930     2880834572     +0.50%
BenchmarkUpdateOneChanged-8       226635         236596         +4.40%
BenchmarkUpdateOneUnchanged-8     229090         227326         -0.77%
BenchmarkNeedHalf-8               104483393      105151538      +0.64%
BenchmarkHave-8                   29288220       28827492       -1.57%
BenchmarkGlobal-8                 159269126      150768724      -5.34%
BenchmarkNeedHalfTruncated-8      108235000      104434216      -3.51%
BenchmarkHaveTruncated-8          28945489       27860093       -3.75%
BenchmarkGlobalTruncated-8        149355833      149972888      +0.41%

benchmark                         old allocs     new allocs     delta
BenchmarkReplaceAll-8             1054944        555451         -47.35%
BenchmarkUpdateOneChanged-8       1135           1135           +0.00%
BenchmarkUpdateOneUnchanged-8     1135           1135           +0.00%
BenchmarkNeedHalf-8               374777         374779         +0.00%
BenchmarkHave-8                   151995         151996         +0.00%
BenchmarkGlobal-8                 530063         530066         +0.00%
BenchmarkNeedHalfTruncated-8      374699         374702         +0.00%
BenchmarkHaveTruncated-8          151834         151834         +0.00%
BenchmarkGlobalTruncated-8        530021         530049         +0.01%

benchmark                         old bytes      new bytes      delta
BenchmarkReplaceAll-8             5074297112     5018351912     -1.10%
BenchmarkUpdateOneChanged-8       135097         135085         -0.01%
BenchmarkUpdateOneUnchanged-8     134976         134976         +0.00%
BenchmarkNeedHalf-8               44759436       44769400       +0.02%
BenchmarkHave-8                   11911138       11930612       +0.16%
BenchmarkGlobal-8                 81609867       81523668       -0.11%
BenchmarkNeedHalfTruncated-8      46588024       46692342       +0.22%
BenchmarkHaveTruncated-8          11348354       11348357       +0.00%
BenchmarkGlobalTruncated-8        79485168       81843956       +2.97%
2015-10-21 23:05:23 +02:00
Jakob Borg
0c0c69f0cf The GC runs are legacy and slow things down quite a bit
benchmark                         old ns/op      new ns/op      delta
BenchmarkReplaceAll-8             2942370526     2866418930     -2.58%
BenchmarkUpdateOneChanged-8       7402489        226635         -96.94%
BenchmarkUpdateOneUnchanged-8     7298777        229090         -96.86%
BenchmarkNeedHalf-8               113608416      104483393      -8.03%
BenchmarkHave-8                   29834263       29288220       -1.83%
BenchmarkGlobal-8                 162773699      159269126      -2.15%
BenchmarkNeedHalfTruncated-8      111943400      108235000      -3.31%
BenchmarkHaveTruncated-8          29490369       28945489       -1.85%
BenchmarkGlobalTruncated-8        165841081      149355833      -9.94%

benchmark                         old allocs     new allocs     delta
BenchmarkReplaceAll-8             1054942        1054944        +0.00%
BenchmarkUpdateOneChanged-8       1149           1135           -1.22%
BenchmarkUpdateOneUnchanged-8     1135           1135           +0.00%
BenchmarkNeedHalf-8               374774         374777         +0.00%
BenchmarkHave-8                   151995         151995         +0.00%
BenchmarkGlobal-8                 530042         530063         +0.00%
BenchmarkNeedHalfTruncated-8      374697         374699         +0.00%
BenchmarkHaveTruncated-8          151834         151834         +0.00%
BenchmarkGlobalTruncated-8        530050         530021         -0.01%

benchmark                         old bytes      new bytes      delta
BenchmarkReplaceAll-8             5074294728     5074297112     +0.00%
BenchmarkUpdateOneChanged-8       141048         135097         -4.22%
BenchmarkUpdateOneUnchanged-8     134976         134976         +0.00%
BenchmarkNeedHalf-8               44734813       44759436       +0.06%
BenchmarkHave-8                   11911634       11911138       -0.00%
BenchmarkGlobal-8                 80436854       81609867       +1.46%
BenchmarkNeedHalfTruncated-8      46514673       46588024       +0.16%
BenchmarkHaveTruncated-8          11348357       11348354       -0.00%
BenchmarkGlobalTruncated-8        81730740       79485168       -2.75%
2015-10-21 23:05:22 +02:00
Jakob Borg
943e80e26c Make benchmarks more realistic 2015-10-21 23:04:29 +02:00
Audrius Butkevicius
058a327584 Merge pull request #2397 from calmh/localsize
Keep LocalSize & GlobalSize data in RAM
2015-10-21 21:05:18 +01:00
Jakob Borg
1eca4170f7 Add test for LocalSize/GlobalSize results 2015-10-21 21:58:48 +02:00
Jakob Borg
c268e4ad1b Also keep GlobalSize in RAM 2015-10-21 21:58:48 +02:00
Jakob Borg
d4f81e8791 Keep LocalSize data in RAM 2015-10-21 21:58:48 +02:00
Audrius Butkevicius
4f0680c3c8 Add separate client for dynamic relays (fixes #2368)
Did some manual tests in the playground, such as kicking off two clients in parallel: the first one connecting,
the second one getting a message about already being connected and falling back to the second address.
2015-10-21 20:08:14 +01:00
Jakob Borg
8c26fe44c3 Actually run protocol tests faster with -short (on Go 1.5...) 2015-10-21 14:45:18 +02:00
Jakob Borg
8c7d9f3dd2 Protocol tests should run faster with -short 2015-10-21 14:35:59 +02:00
Jakob Borg
f241b7e79a Global discovery should time out (fixes #2389) 2015-10-21 14:24:55 +02:00
Audrius Butkevicius
dc1f3503be Merge pull request #2391 from burkemw3/warn-overwrite-config-files
Emit warning when sync could overwrite configuration
2015-10-20 21:57:16 +01:00
Matt Burke
c2a5e180b8 Emit warning when sync could overwrite configuration
Overwriting configuration files is likely to happen if a
user syncs their home directories across computers. In this
case, the biggest risk is that all nodes will end up with
the same certificate and thus Device ID.

When the model prepares a folder for syncing, it checks to
see if the configuration files this instance is using are
getting synced. If they are getting synced, and they aren't
getting ignored, a warning is emitted. The model is used
so that when a new folder is added dynamically, a warning
is also emitted.

This will not prevent a user from shooting themselves in
the foot, and will not cover all cases (e.g. symlinks).
It should provide _something_ for many users in this
situation to go on, though.
2015-10-20 12:22:27 -04:00
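
A minimal sketch of that check, with illustrative names (the actual model code differs): when a folder is prepared, warn if the configuration directory of the running instance lies inside the folder path and is not matched by the ignore patterns.

    package model

    import (
        "log"
        "path/filepath"
        "strings"
    )

    // warnIfConfigInFolder emits the warning described above. ignored reports
    // whether a relative path is covered by the folder's ignore patterns.
    func warnIfConfigInFolder(folderPath, configDir string, ignored func(string) bool) {
        rel, err := filepath.Rel(folderPath, configDir)
        if err != nil || strings.HasPrefix(rel, "..") {
            return // the config directory is outside the folder
        }
        if ignored(rel) {
            return // covered by ignores, so it won't be overwritten
        }
        log.Printf("Warning: folder %q contains the configuration directory %q; "+
            "syncing it can overwrite certificates and give devices the same ID",
            folderPath, configDir)
    }
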
Jakob Borg
7351217489 Relative GOBIN not allowed in Go 1.5.2+ 2015-10-20 15:59:38 +02:00
Jakob Borg
aa42aafe33 Don't panic on clean shutdown 2015-10-20 15:59:37 +02:00
Stefan Tatschner
b2da0120d6 Add syncthing-localdisco.7 2015-10-20 14:06:14 +02:00
Jakob Borg
9e84e09c26 Update specs link 2015-10-20 13:41:24 +02:00
Jakob Borg
32c1a9bc45 Docs & translation update 2015-10-20 09:59:50 +02:00
Audrius Butkevicius
a7e95922c1 Merge pull request #2395 from calmh/printhashrate
Print the single thread hash performance at startup
2015-10-20 08:10:52 +01:00
Jakob Borg
1392d0bc14 Print the single thread hash performance at startup 2015-10-20 08:51:59 +02:00
Jakob Borg
0f9fa9507e Tests must use locking to avoid race (fixes #2394) 2015-10-20 08:51:31 +02:00
Jakob Borg
1087535d8f Add address for burkemw3 2015-10-20 08:38:12 +02:00
Audrius Butkevicius
0e51f51979 Merge pull request #2379 from calmh/nodbvalidate
Don't validate requests against the database
2015-10-19 16:57:58 +01:00
Jakob Borg
90e0141ac5 Request() should return protocol errors 2015-10-19 15:14:41 +02:00
Jakob Borg
8435a8678e Don't validate requests against the database 2015-10-19 15:14:41 +02:00
Jakob Borg
bd2888fc3b Include maxConflicts -1 in test configs 2015-10-19 15:14:06 +02:00
Jakob Borg
6578ffe2c9 Merge pull request #2320 from AudriusButkevicius/proto
More proto changes
2015-10-18 08:47:30 +02:00
Jakob Borg
175340522f Merge pull request #2375 from AudriusButkevicius/proxy
Add proxy support (fixes #271)
2015-10-18 08:45:17 +02:00
Audrius Butkevicius
afc917b582 Fix tests 2015-10-17 09:48:41 +01:00
Audrius Butkevicius
9f4cd7716e Add more information about the folders to ClusterConfig 2015-10-17 09:46:46 +01:00
Jakob Borg
29b0017445 Merge pull request #2386 from AudriusButkevicius/epoint
Change relaypoolsrv endpoint
2015-10-17 09:14:35 +09:00
Jakob Borg
910a7c619a Merge pull request #2381 from AudriusButkevicius/maxcon
Allow limiting max conflicts (fixes #2282)
2015-10-17 09:10:31 +09:00
Audrius Butkevicius
273fac2028 Change relaypoolsrv endpoint
Just in case we want to show some stats in the future, such as a Geo-IP based map of where relays are, with dot sizes proportional to their global rate limits,
together with potentially how much data has been transferred in total and how many sessions there are, by crawling relay status pages etc ;)
2015-10-17 00:10:01 +01:00
Audrius Butkevicius
a323d85d32 Add more information about the device to ClusterConfig 2015-10-16 19:40:12 +01:00
Audrius Butkevicius
491a33de0b Move device name into the protocol messages 2015-10-16 19:40:12 +01:00
Audrius Butkevicius
d6a0a44432 Update xdr 2015-10-16 19:40:11 +01:00
Audrius Butkevicius
752533489a Allow limiting max conflicts (fixes #2282) 2015-10-16 19:26:38 +01:00
Audrius Butkevicius
e4e3c19e96 Our dialer sets up TCP options 2015-10-16 19:18:22 +01:00
Jakob Borg
4ddb066728 Update lang-en 2015-10-16 18:54:41 +09:00
Jakob Borg
958bbbc8cb Fix mateon1 2015-10-16 18:54:07 +09:00
Jakob Borg
e15be5c2bf Merge pull request #2354 from eipiminus1/issue1361
Add trailing folder separator to allow symlinks as folder path (fixes #1361)
2015-10-16 09:29:37 +09:00
Jakob Borg
cc436dc8cb Merge pull request #2372 from calmh/fix2371
Option -gui-address should accept scheme prefixes
2015-10-16 09:25:41 +09:00
Audrius Butkevicius
abbcd1f436 Patch up HTTP clients 2015-10-15 21:02:17 +01:00
Audrius Butkevicius
db494f2afc God damn godeps 2015-10-15 21:01:48 +01:00
Audrius Butkevicius
985ea29940 Add proxy support (fixes #271) 2015-10-15 21:01:42 +01:00
Jakob Borg
76359da58e Apparently -race adds some stuff gocov doesn't like. Let's try this instead. 2015-10-14 15:41:15 +09:00
Jakob Borg
368cd44558 Fix race conditions in model tests 2015-10-14 14:41:16 +09:00
Jakob Borg
cc1387ec0c Tests should be run with -race 2015-10-14 14:41:16 +09:00
Jakob Borg
7c79985a29 Clarify listen address 2015-10-13 22:07:22 +09:00
Jakob Borg
2b56961b54 ... with alternate email 2015-10-13 08:39:35 +09:00
Jakob Borg
ff9920cbdc Add eipiminus1 2015-10-13 08:38:03 +09:00
Jakob Borg
953a67bc3a Option -gui-address should accept scheme prefixes (fixes #2371) 2015-10-13 08:26:07 +09:00
Audrius Butkevicius
29343aec3a Fix division by zero (fixes #2373) 2015-10-12 18:57:15 +01:00
Audrius Butkevicius
2972472179 Add missing close 2015-10-12 17:10:59 +01:00
Jakob Borg
240e7b0835 Log/error fields changed name 2015-10-12 14:18:53 +09:00
Jakob Borg
baf5191433 Add mateon1 2015-10-12 14:13:38 +09:00
Jakob Borg
2645e87766 Errors may now be null, and that's fine 2015-10-12 10:12:57 +09:00
Jakob Borg
ec8bc02d33 Silence spurious debug (fixes #2369) 2015-10-12 10:11:58 +09:00
Yannic A
054bc970e2 Add trailing folder separator to allow symlinks as folder path (fixes #1361) 2015-10-10 19:38:59 +02:00
Jakob Borg
c1c41242bb Merge pull request #2362 from AudriusButkevicius/sleepysleep
Make puller pause configurable
2015-10-10 20:13:10 +09:00
Audrius Butkevicius
169ff73d26 Merge pull request #2365 from mateon1/master
Change Out of Sync message in folder details (fixes #2364)
2015-10-10 11:37:24 +01:00
Audrius Butkevicius
d985ed553a Make puller pause configurable 2015-10-10 11:36:09 +01:00
Mateon1
dea1ef24d9 Change Out of Sync message in folder details (fixes #2364) 2015-10-09 01:34:11 +02:00
Jakob Borg
49f29a0453 Merge pull request #2355 from calmh/trace3
New debug logging infrastructure
2015-10-05 21:01:37 +09:00
Jakob Borg
76af9ba53d Implement facility based logger, debugging via REST API
This implements a new debug/trace infrastructure based on a slightly
hacked up logger. Instead of the traditional "if debug { ... }" I've
rewritten the logger to have no-op Debugln and Debugf, unless debugging
has been enabled for a given "facility". The "facility" is just a
string, typically a package name.

This will be slightly slower than before; but not that much as it's
mostly a function call that returns immediately. For the cases where it
matters (the Debugln takes a hex.Dump() of something for example, and
it's not in a very occasional "if err != nil" branch) there is an
l.ShouldDebug(facility) that is fast enough to be used like the old "if
debug".

The point of all this is that we can now toggle debugging for the
various packages on and off at runtime. There's a new method
/rest/system/debug that can be POSTed a set of facilities to enable and
disable debug for, or GET from to get a list of facilities with
descriptions and their current debug status.

Similarly a /rest/system/log?since=... can grab the latest log entries,
up to 250 of them (hardcoded constant in main.go) plus the initial few.

Not implemented in this commit (but planned) is a simple debug GUI
available on /debug that shows the current log in an easily pasteable
format and has checkboxes to enable the various debug facilities.

The debug instructions to a user then becomes "visit this URL, check
these boxes, reproduce your problem, copy and paste the log". The actual
log viewer on the hypothetical /debug URL can poll regularly for new log
entries and thus bypass the 250 line limit.

The existing STTRACE=foo variable is still obeyed and just sets the
start state of the system.
2015-10-03 18:09:53 +02:00
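
A minimal sketch of such a facility-based logger (hypothetical type and method names; the real lib/logger differs): Debugln is effectively a no-op unless the facility has been enabled, and ShouldDebug lets hot paths skip building expensive debug output entirely.

    package logger

    import (
        "log"
        "sync"
    )

    type FacilityLogger struct {
        mut     sync.Mutex
        enabled map[string]bool
    }

    // SetDebug toggles debugging for one facility at runtime, e.g. from a
    // /rest/system/debug handler.
    func (l *FacilityLogger) SetDebug(facility string, on bool) {
        l.mut.Lock()
        defer l.mut.Unlock()
        if l.enabled == nil {
            l.enabled = make(map[string]bool)
        }
        l.enabled[facility] = on
    }

    // ShouldDebug is cheap enough to guard expensive debug-only work.
    func (l *FacilityLogger) ShouldDebug(facility string) bool {
        l.mut.Lock()
        defer l.mut.Unlock()
        return l.enabled[facility]
    }

    // Debugln logs only when the facility is enabled; otherwise it returns
    // immediately, which is what makes always-on debug calls affordable.
    func (l *FacilityLogger) Debugln(facility string, args ...interface{}) {
        if !l.ShouldDebug(facility) {
            return
        }
        log.Println(append([]interface{}{facility + ":"}, args...)...)
    }
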
Jakob Borg
2de364414f Adopt calmh/logger into lib/logger 2015-10-03 18:09:53 +02:00
Audrius Butkevicius
5ae84970e7 Merge pull request #2352 from uok/morespace
Add space for scrolling (fixes #2351)
2015-10-03 15:49:20 +01:00
Ben Schulz
141b0d38a6 Add space for scrolling (fixes #2351)
Add space at bottom for scrolling on small resolutions
2015-10-03 16:15:06 +02:00
Jakob Borg
44891b6924 Merge pull request #2349 from rumpelsepp/man-update
Update refresh.sh to fetch missing manpages
2015-10-02 09:13:59 +02:00
Stefan Tatschner
f008588307 Update refresh.sh to fetch missing manpages
syncthing-bep(7) and syncthing-localdisco(7) have been added.
2015-10-02 08:54:20 +02:00
Audrius Butkevicius
e481d03b5e Merge pull request #2347 from uok/clickselect
Select text on click
2015-10-01 14:57:48 +01:00
Ben Schulz
a8a73b60c4 Add select text on click
Add (again?) select text on click for:
- device ID
- remote device ID
- API key
2015-10-01 14:34:23 +02:00
Audrius Butkevicius
7e8b76e8ea Merge pull request #2346 from calmh/dststatdir
Create missing directories
2015-10-01 10:47:16 +01:00
Jakob Borg
36c746bd9f Create missing directories 2015-10-01 09:43:16 +02:00
Audrius Butkevicius
7476c583e7 Merge pull request #2343 from calmh/discoprio2
Add discovery source priorities (fixes #2339)
2015-10-01 08:22:17 +01:00
Jakob Borg
89928ca8e4 Add discovery source priorities (fixes #2339)
Sources are given a priority, lower being better, when added to a
CachingMux.
2015-10-01 08:45:40 +02:00
Jakob Borg
38a3bf3ada Merge pull request #2317 from simplypeachy/patch-1
Add missing parameter in disk space warning
2015-10-01 08:10:36 +02:00
Jakob Borg
96b3d31b42 Add simplypeachy 2015-10-01 08:02:38 +02:00
Jakob Borg
362ae5c4bb Add copyright 2015-09-30 21:40:27 +02:00
Jakob Borg
dc303c2a71 Merge pull request #2340 from syncthing/revert-2337-caseins
Revert "Case insensitive renames, part 1"
2015-09-30 21:40:15 +02:00
Jakob Borg
be2ca0ea22 Revert "Case insensitive renames, part 1" 2015-09-30 21:40:04 +02:00
Audrius Butkevicius
460cb19839 Merge pull request #2337 from calmh/caseins
Case insensitive renames, part 1
2015-09-30 20:35:55 +01:00
Jakob Borg
ddfebb17cf Case insensitive renames, part 1 2015-09-30 12:41:29 +02:00
Audrius Butkevicius
375c9dd116 Merge pull request #2336 from calmh/overrides
Fix STGUIAPIKEY and STGUIADDR overrides (fixes #2335)
2015-09-30 08:47:44 +01:00
Jakob Borg
15716a0772 Fix STGUIAPIKEY and STGUIADDR overrides (fixes #2335)
Also removes STGUIAUTH and corresponding --gui-authentication as this
seems fundamentally insecure and I'm unsure of the actual use case for
it?
2015-09-30 09:36:11 +02:00
Audrius Butkevicius
6f6c1cd330 Merge pull request #2333 from calmh/brokenupgrade
Remove global cfg variable (fixes #2294)
2015-09-29 19:55:30 +01:00
Jakob Borg
36ac757c3a Remove global cfg variable (fixes #2294)
Not necessarily the easiest way to fix just this bug, but the root cause
was using the (at that point uninitialized) cfg variable, so it seemed
sensible to just get rid of it to avoid that kind of crap.
2015-09-29 20:23:15 +02:00
Audrius Butkevicius
b614cfffcb Merge pull request #2330 from calmh/eventids
Subscribing to events should not bump event ID (fixes #2329)
2015-09-29 19:05:57 +01:00
Audrius Butkevicius
b58a52c7b4 Merge pull request #2332 from calmh/brokenignore
Correctly report errors encountered parsing ignores (fixes #2309)
2015-09-29 17:15:07 +01:00
Jakob Borg
a80fc1b062 Correctly report errors encountered parsing ignores (fixes #2309, fixes #2296)
An error opening .stignore is not reported if it satisfies os.IsNotExist().
Other errors are reported and stop the folder, including is-not-exist
errors from #include, as these are passed through fmt.Errorf.

Also fixes a minor issue where we would not print the cause of the folder
stopping to the log.

Also fixes a minor issue with capitalization of errors.
2015-09-29 18:04:18 +02:00
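
A minimal sketch of that error policy, with illustrative names (not the real lib/ignore code): a missing top-level .stignore is fine, any other error is returned and stops the folder, and a failing #include is wrapped with fmt.Errorf so it no longer satisfies os.IsNotExist and therefore gets reported.

    package ignore

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // Load returns the patterns from the top-level .stignore. A missing file
    // is fine; any other error stops the folder.
    func Load(path string) ([]string, error) {
        fd, err := os.Open(path)
        if os.IsNotExist(err) {
            return nil, nil
        }
        if err != nil {
            return nil, err
        }
        defer fd.Close()
        return parse(fd)
    }

    // parse reads patterns; a failing #include is wrapped with fmt.Errorf, so
    // it no longer satisfies os.IsNotExist and is reported to the caller.
    func parse(fd *os.File) ([]string, error) {
        var patterns []string
        sc := bufio.NewScanner(fd)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if inc := strings.TrimPrefix(line, "#include "); inc != line {
                sub, err := os.Open(inc)
                if err != nil {
                    return nil, fmt.Errorf("include %s: %v", inc, err)
                }
                more, err := parse(sub)
                sub.Close()
                if err != nil {
                    return nil, err
                }
                patterns = append(patterns, more...)
                continue
            }
            if line != "" {
                patterns = append(patterns, line)
            }
        }
        return patterns, sc.Err()
    }
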
Audrius Butkevicius
90d18189da Merge pull request #2331 from calmh/unique
CachingMux should return unique addresses only (fixes #2321)
2015-09-29 16:43:49 +01:00
Jakob Borg
22a2e95126 CachingMux should return unique addresses only (fixes #2321) 2015-09-29 17:40:39 +02:00
Jakob Borg
11e1a99e14 Subscribing to events should not bump event ID (fixes #2329) 2015-09-29 17:17:09 +02:00
Jakob Borg
3c6bfb880d Docs and translation update 2015-09-27 22:31:19 +02:00
Audrius Butkevicius
ad2c05c3f5 Merge pull request #2322 from calmh/colons
Don't naively join host and port using colon (fixes #2316)
2015-09-27 20:52:29 +01:00
Jakob Borg
fa75f54a05 Don't naively join host and port using colon (fixes #2316) 2015-09-27 21:44:08 +02:00
Audrius Butkevicius
6405fd4770 Merge pull request #2240 from calmh/masterfreespace
Don't check for free space on master folders (fixes #2236)
2015-09-27 11:11:03 +01:00
simplypeachy
f42f20c70c Update rwfolder.go
Revert copyright/grammar changes.
2015-09-27 09:50:54 +01:00
Jakob Borg
209bc59e8c Don't check for free space on master folders (fixes #2236)
Also clean up the logic a little; I thought it was a bit muddled.
2015-09-27 09:33:11 +02:00
Jakob Borg
84ee86f6b3 Merge pull request #2261 from rumpelsepp/readme-update
Cleanup markdown inline link mess
2015-09-27 09:28:51 +02:00
simplypeachy
ed7791d824 Add missing parameter in disk space warning
Plus copyright bump, minor grammatical fix.
2015-09-26 16:46:42 +01:00
Stefan Tatschner
0580b10385 Cleanup markdown inline link mess
I just wanted to add the freenode webchat link, because people who are
not used to irc can join the chatroom instantly. I tried to clean up the
markdown file a bit and removed the links to the footer; that makes the
"source code" less ugly.
2015-09-26 16:42:15 +02:00
Jakob Borg
47c7351b16 Clean up connection.Model interface a little 2015-09-26 13:19:22 +02:00
Jakob Borg
b158072a15 Merge pull request #2189 from burkemw3/lib-ify-connections
Decouple connections service from model
2015-09-26 13:18:23 +02:00
Jakob Borg
b6486c26e6 Add burkemw3 2015-09-26 13:15:25 +02:00
Audrius Butkevicius
da0ac96704 Merge pull request #2312 from rumpelsepp/systemd
Pull syncthing-inotify.service as an optional dep
2015-09-25 23:24:44 +01:00
Stefan Tatschner
af1fbda892 Pull syncthing-inotify.service as an optional dep
This patch adds syncthing-inotify.service as an optional dependency to
syncthing.service. That means that if syncthing-inotify.service is
available, it is started and stopped along with syncthing.

See discussion here:
https://forum.syncthing.net/t/gnome-shell-extension-syncthing-icon/5759
2015-09-25 23:39:03 +02:00
Matt Burke
2234c45c19 Decouple connections service from model
The connections service no longer depends directly on the
syncthing model object, but on an interface instead. This
makes it drastically easier to write clients that handle
the model differently, but still want to benefit from
existing and future connections changes in the core.

This was motivated by burkemw3's interest in creating a
FUSE client that can present a view of the global model,
but not have all of the file data locally.

The actual decoupling was done by adding a connections.Model
interface. This interface is effectively an extension of the
protocol.Model interface that also handles connections
alongside the modified service.
2015-09-25 12:19:30 -04:00
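
A minimal sketch of the decoupling (method sets and types here are illustrative placeholders, not the real interfaces): the connections service is written against a small Model interface instead of the concrete syncthing model, so a FUSE-style client can plug in its own implementation.

    package connections

    // Model is what the connections service needs from whoever owns the data:
    // the protocol-level callbacks it forwards, plus connection lifecycle
    // handling that used to be done by poking the concrete model directly.
    type Model interface {
        Index(deviceID, folder string, files []FileInfo)
        Request(deviceID, folder, name string, offset int64, size int) ([]byte, error)
        AddConnection(conn Connection)
    }

    // Placeholder types so the sketch stands alone.
    type FileInfo struct {
        Name string
        Size int64
    }

    type Connection interface {
        ID() string
        Close() error
    }
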
Audrius Butkevicius
fd38fb684a Merge pull request #2307 from calmh/relaxlabels
Relax folder label restrictions
2015-09-25 13:05:28 +01:00
Jakob Borg
e0a16e08dd Relax folder label restrictions 2015-09-25 13:45:58 +02:00
Jakob Borg
43189dfe3a This unexpected EOF is really quite expected 2015-09-24 14:19:21 +02:00
Audrius Butkevicius
46d4f6037d Merge pull request #2303 from calmh/disco
Encapsulate local discovery address in struct
2015-09-22 23:07:49 +01:00
Jakob Borg
e522811a52 Encapsulate local discovery address in struct
The XDR encoder doesn't understand slices of strings very well. It can
encode and decode them, but there's no way to set limits on the length
of the strings themselves (only on the length of the slice), and the
generated diagrams are incorrect. This trivially works around the problem,
while also documenting what the string actually is (a URL).
2015-09-22 23:28:00 +02:00
Jakob Borg
6124bbb12a There is no local discovery query packet 2015-09-22 23:10:05 +02:00
Jakob Borg
fbf911cf7e Unbreak comments 2015-09-22 21:51:05 +02:00
Jakob Borg
7fdfa81fb8 Fix vet and lint complaints 2015-09-22 20:34:24 +02:00
Audrius Butkevicius
a4673f3007 Merge pull request #2302 from calmh/includeproto
Move external packages into the fold
2015-09-22 19:23:58 +01:00
Jakob Borg
24c499d282 Clean up deps 2015-09-22 19:39:07 +02:00
Jakob Borg
4581c57478 Fix import paths 2015-09-22 19:38:46 +02:00
Jakob Borg
f177924629 Rejiggle lib/relaysrv/* -> lib/relay/* 2015-09-22 19:37:03 +02:00
Jakob Borg
633e888ba7 Add 'lib/relaysrv/' from commit '6e126fb97e2ff566d35f8d8824e86793d22b2147'
git-subtree-dir: lib/relaysrv
git-subtree-mainline: 4316992d95
git-subtree-split: 6e126fb97e
2015-09-22 19:34:52 +02:00
Jakob Borg
4316992d95 Add 'lib/protocol/' from commit 'f91191218b192ace841c878f161832d19c09145a'
git-subtree-dir: lib/protocol
git-subtree-mainline: 5ecb8bdd8a
git-subtree-split: f91191218b
2015-09-22 19:34:29 +02:00
Jakob Borg
5ecb8bdd8a Correct success/error handling for multicast/broadcast sends 2015-09-22 16:04:48 +02:00
Jakob Borg
3b81d4b8a5 Update suture for data race bug 2015-09-21 15:48:37 +02:00
Jakob Borg
6e3b3dc4e7 Comment typo fix 2015-09-21 14:20:33 +02:00
Jakob Borg
8d421a62d2 Usage reporting should recognize new discovery server IPs 2015-09-21 10:54:21 +02:00
Jakob Borg
185b0690c8 Further forgotten copyright notices 2015-09-21 10:43:36 +02:00
Jakob Borg
bd0e97023e We don't need a separate subscription lock
We forgot to lock it during replace, causing a data race. This is simpler.
2015-09-21 10:42:07 +02:00
Jakob Borg
34ff0706a3 Add missing copyright notice 2015-09-21 10:34:20 +02:00
Jakob Borg
acba61babb Ping handling changes in protocol, removed from config here 2015-09-21 10:14:27 +02:00
Audrius Butkevicius
f91191218b Merge pull request #19 from syncthing/pingfix
Simplify and improve the ping mechanism
2015-09-21 09:00:40 +01:00
Jakob Borg
05c79ac8c2 Simplify and improve the ping mechanism
This should resolve the spurious ping timeouts we've had on low powered
boxes. Those errors are the result of us requiring a timely Pong
response to our Pings. However this is unnecessarily strict - as long as
we've received *anything* recently, we know the other peer is alive. So
the new mechanism removes the Pong message entirely and separates the
ping check into two routines:

 - One that makes sure to send ping periodically, if nothing else has
   been sent. This guarantees a message sent every 45-90 seconds.

 - One that checks how long it was since we last received a message. If
   it's longer than 300 seconds, we trigger an ErrTimeout.

So we're guaranteed to detect a connection failure in 300 + 300/2
seconds (due to how often the check runs) and we may detect it much
sooner if we get an actual error on the ping write (a connection reset
or so).

This is more sluggish than before but I think that's an OK price to pay
for making it actually work out of the box.

This removes the configurability of it, as the timeout on one side is
dependent on the send interval on the other side. Do we still need it
configurable?
2015-09-21 08:51:42 +02:00
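
A minimal sketch of the two routines, with illustrative intervals and plumbing (the real protocol code differs): one routine sends a ping when nothing has been written for a while, guaranteeing some traffic every 45-90 seconds; the other declares the connection dead when nothing at all has been received within the receive timeout.

    package protocol

    import (
        "errors"
        "sync"
        "time"
    )

    const (
        pingSendInterval = 90 * time.Second
        receiveTimeout   = 300 * time.Second
    )

    type pinger struct {
        mut      sync.Mutex
        lastSend time.Time
        lastRecv time.Time
        sendPing func()      // writes a Ping message to the peer
        fail     func(error) // tears down the connection
    }

    // pingSender guarantees that something is sent every 45-90 seconds: if no
    // message has gone out for half the interval, it sends a ping.
    func (p *pinger) pingSender() {
        ticker := time.NewTicker(pingSendInterval / 2)
        defer ticker.Stop()
        for range ticker.C {
            p.mut.Lock()
            idle := time.Since(p.lastSend)
            p.mut.Unlock()
            if idle > pingSendInterval/2 {
                p.sendPing()
            }
        }
    }

    // pingReceiver triggers a timeout when nothing has been received for the
    // full receive timeout, so a dead connection is detected within 300+150 s.
    func (p *pinger) pingReceiver() {
        ticker := time.NewTicker(receiveTimeout / 2)
        defer ticker.Stop()
        for range ticker.C {
            p.mut.Lock()
            silent := time.Since(p.lastRecv)
            p.mut.Unlock()
            if silent > receiveTimeout {
                p.fail(errors.New("ping timeout"))
                return
            }
        }
    }
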
Jakob Borg
24d2a93c0d Change default discovery server names 2015-09-20 22:30:31 +02:00
Audrius Butkevicius
de43080228 Merge pull request #2275 from calmh/tlsdisco
New global discovery protocol over HTTPS (fixes #628)
2015-09-20 20:16:04 +01:00
Jakob Borg
b0cd7be39b New global discovery protocol over HTTPS (fixes #628, fixes #1907) 2015-09-20 21:10:53 +02:00
Jakob Borg
a7169a6348 Add AppVeyor for Windows builds 2015-09-20 15:18:50 +02:00
Audrius Butkevicius
6e126fb97e Merge pull request #8 from syncthing/latency
Connected clients should know their own latency
2015-09-20 12:47:37 +01:00
Jakob Borg
22783d8f6c Connected clients should know their own latency 2015-09-20 13:40:24 +02:00
Jakob Borg
1d710bdcd9 Update lang-en.json 2015-09-18 14:11:01 +02:00
Audrius Butkevicius
5feffba1ff Merge pull request #2285 from uok/patch-1
Fix translation (fixes #2284)
2015-09-18 09:42:43 +01:00
Ben S.
6c56586a6b Add missing translation (fixes #2284) 2015-09-18 10:26:32 +02:00
Audrius Butkevicius
a32b66c7c3 Merge pull request #2278 from calmh/relaydep
lib/relay need not depend on lib/model any more
2015-09-14 20:16:31 +01:00
Jakob Borg
7e3c06191e lib/relay need not depend on lib/model any more 2015-09-14 20:19:39 +02:00
Jakob Borg
372c96c6b2 Merge pull request #5 from syncthing/info
Tweaks
2015-09-14 16:20:08 +02:00
Audrius Butkevicius
7073b8721a Merge pull request #7 from syncthing/ping
Server should respond to ping
2015-09-14 12:49:12 +01:00
Jakob Borg
fccb9c0bf4 Server should respond to ping 2015-09-14 13:46:20 +02:00
Audrius Butkevicius
3d09090c4e Merge pull request #2273 from calmh/relaydeps
Invert initialization dependence on relay/conns
2015-09-14 10:16:06 +01:00
Jakob Borg
596a49c112 Invert initialization dependence on relay/conns
This makes it so we can initialize the relay management and then give
that to the connection management, instead of the other way around.

This is important to me in the discovery revamp I'm doing, as otherwise
I get a circular dependency when constructing stuff, with relaying
depending on connection, connection depending on discovery, and
discovery depending on relaying.

With this fixed, discovery will depend on relaying, and connection will
depend on both discovery and relaying.
2015-09-14 10:21:55 +02:00
Jakob Borg
95fc253d6b Rename externalAddr to addressLister
It's going to have to list internal addresses too.
2015-09-13 18:09:44 +02:00
Jakob Borg
e6d5372029 Fix -no-upgrade 2015-09-13 18:04:58 +02:00
Jakob Borg
8e5c692244 Merge pull request #2269 from calmh/externaladdr
Add external address tracker object
2015-09-13 17:39:51 +02:00
Jakob Borg
e694c664e5 Add external address tracker object 2015-09-13 07:56:13 +02:00
Audrius Butkevicius
8779f93746 Merge pull request #2270 from calmh/diskspaceagain
Don't require free disk space when we might only update metadata
2015-09-12 22:11:26 +01:00
Jakob Borg
1f0f5c1e23 Don't require free disk space when we might only update metadata
Instead, make sure we do the check as part of CheckFolderHealth before
pulling, and individually per file to try to not run out of space at
that stage.

(The latter is far from foolproof, as we may pull lots of stuff in
parallel, but it's worth a try.)
2015-09-12 23:00:43 +02:00
Audrius Butkevicius
f9f12131ae Drop all sessions when we realize a node has gone away 2015-09-11 22:29:50 +01:00
Audrius Butkevicius
7e0106da0c Tweaks
1. Advertise relay server parameters so that clients can decide whether or not to connect
2. Generate a certificate if it's not there.
2015-09-11 20:06:14 +01:00
Audrius Butkevicius
aaf6bf3cd2 Merge pull request #2264 from calmh/customlan
Add custom networks that are considered local (internal routing, VPN etc)
2015-09-11 15:21:37 +01:00
Jakob Borg
fa95c82daf Add custom networks that are considered local (internal routing, VPN etc)
Allows things like this in the <options> element:

  <alwaysLocalNet>10.0.0.0/8</alwaysLocalNet>
2015-09-11 15:10:41 +02:00
Audrius Butkevicius
0da7142a59 Merge pull request #2231 from calmh/symtype
lib/symlinks need not depend on protocol
2015-09-11 12:28:38 +01:00
Jakob Borg
446a938b06 lib/symlinks need not depend on protocol 2015-09-11 12:55:25 +02:00
Audrius Butkevicius
21c3994650 Merge pull request #2260 from rumpelsepp/authors-update
Add my email to AUTHORS as well
2015-09-10 20:59:38 +01:00
Stefan Tatschner
f90f3a5ca6 Add my email to AUTHORS as well
The pull request has been merged faster than I was able to fix that
mistake. :) Sorry.
2015-09-10 21:23:38 +02:00
Audrius Butkevicius
a34d2b72c0 Merge pull request #2259 from rumpelsepp/mailmap-update
Add my "community email address" to NICKS
2015-09-10 20:18:51 +01:00
Stefan Tatschner
9d617dcfec Add my "community email address" to NICKS
Since the amount of email I receive has increased over time, I would like
to separate my private email address from my "community email address".
2015-09-10 21:14:22 +02:00
Audrius Butkevicius
55506a0fc3 Merge pull request #2257 from rumpelsepp/docs-update
Update etc/linux-systemd/README.md
2015-09-10 20:12:31 +01:00
Stefan Tatschner
19c03f504f Update etc/linux-systemd/README.md 2015-09-10 21:04:52 +02:00
Jakob Borg
985a3436e2 Always show columns headers (accessibility) 2015-09-10 15:05:08 +02:00
Jakob Borg
9dae87c80c Allow configuration of releases URL 2015-09-10 14:16:44 +02:00
Jakob Borg
46364a38c6 Allow configuration of usage reporting URL 2015-09-10 14:08:40 +02:00
Jakob Borg
148b2b9d02 Fix crash when relaying or global discovery is disabled (fixes #2246) 2015-09-09 12:58:57 +02:00
Jakob Borg
64354b51c9 Generate certs with SHA256 signature instead of SHA1
Doesn't matter at all for BEP, but the same stuff is used by the web UI
and modern browsers are starting to dislike SHA1 extra much.
2015-09-09 12:55:17 +02:00
AudriusButkevicius
d180bc794b Handle 403 2015-09-07 18:12:18 +01:00
AudriusButkevicius
eab5fd5bdd Join relay pool by default 2015-09-07 09:21:23 +01:00
AudriusButkevicius
e3ca797dad Receive the invite, otherwise stop blocks, add extra arguments 2015-09-06 20:25:53 +01:00
Audrius Butkevicius
f2db1c8ab2 Merge pull request #2245 from calmh/ur
Relay server info, urVersion in ur
2015-09-06 20:24:00 +01:00
Jakob Borg
36b8a75ede Relay server info, urVersion in ur 2015-09-06 21:15:46 +02:00
AudriusButkevicius
11b2815b88 Add a test method, fix nil pointer panic 2015-09-06 18:35:38 +01:00
Audrius Butkevicius
0a42b85c06 Merge pull request #2242 from calmh/ur
Add interesting fields to usage report (fixes #559)
2015-09-06 17:19:18 +01:00
Jakob Borg
baf231e3b6 Add interesting fields to usage report (fixes #559) 2015-09-06 18:17:30 +02:00
Jakob Borg
e26e85b6d6 Only check pull file size if check is enabled (ref #2241) 2015-09-06 17:13:00 +02:00
Audrius Butkevicius
b2193b23e5 Merge pull request #2237 from calmh/freespace
Allow fractional percentages (fixes #2233)
2015-09-05 12:29:03 +01:00
Jakob Borg
2af3a92833 Allow fractional percentages (fixes #2233) 2015-09-05 12:39:15 +02:00
Jakob Borg
d2af6dcf38 CircleCI just plain doesn't work for us. 2015-09-04 15:31:03 +02:00
Jakob Borg
dd6b66167e Ok, last attempt now. 2015-09-04 15:17:23 +02:00
Jakob Borg
51a4a88a81 Dammit, CircleCI 2015-09-04 14:59:32 +02:00
Jakob Borg
516efb21cf Seriously CircleCI, come on... 2015-09-04 14:53:10 +02:00
Jakob Borg
2d4397af53 Clean out before build/test on CircleCI 2015-09-04 14:39:22 +02:00
Jakob Borg
06f319a380 lib/stats need not depend on protocol 2015-09-04 13:23:18 +02:00
Jakob Borg
37ed5a01e0 Fix sudden nil pointer dereference in walk 2015-09-04 13:13:08 +02:00
Jakob Borg
4a9997e449 lib/db need not depend on lib/config 2015-09-04 12:01:00 +02:00
Jakob Borg
54269553c8 lib/scanner need not depend on lib/ignore 2015-09-04 11:50:47 +02:00
AudriusButkevicius
541d05df1b Use new method name 2015-09-02 22:02:17 +01:00
AudriusButkevicius
3d6ea23511 Clarify names 2015-09-02 22:00:51 +01:00
AudriusButkevicius
c0554c9fbf Use a single socket for relaying 2015-09-02 21:35:52 +01:00
AudriusButkevicius
61130ea191 Add AcceptNoWrap to DowngradingListener 2015-09-02 21:25:56 +01:00
Jakob Borg
2581e56503 Use raw strings to describe regexes, avoids double escaping 2015-09-02 22:19:45 +02:00
Jakob Borg
fea0ae7f2f Merge pull request #2225 from AudriusButkevicius/onerelay
Pick a single relay (fixes #2182)
2015-09-02 22:15:00 +02:00
AudriusButkevicius
3299438cbd Move TLS utilities into a separate package 2015-09-02 21:05:54 +01:00
Audrius Butkevicius
5c160048df Merge pull request #2226 from calmh/ignores
Correctly handle (?i) in ignores (fixes #1953)
2015-09-02 20:44:32 +01:00
Jakob Borg
e3e1036dda Correctly handle (?i) in ignores (fixes #1953) 2015-09-02 21:12:41 +02:00
AudriusButkevicius
876d7ac85e Pick a single relay (fixes #2182) 2015-09-02 18:05:34 +01:00
Jakob Borg
70b37dc469 Add specific build setup for CircleCI 2015-09-02 14:56:16 +02:00
Jakob Borg
6393f69138 Second opinion build status via Circleci 2015-09-02 11:57:53 +02:00
Audrius Butkevicius
0ca39f6c65 Merge pull request #2220 from calmh/hashertweaks
Adjust defaults for number of hashers based on OS
2015-09-01 10:48:45 +01:00
Jakob Borg
02493251d5 Adjust defaults for number of hashers based on OS
https://forum.syncthing.net/t/syncthing-is-such-a-massive-resource-hog/5494/19?u=calmh
2015-09-01 10:30:35 +02:00
Jakob Borg
ff04648112 Remove leftovers from signing 2015-08-31 17:46:48 +02:00
Jakob Borg
55002d7adf Signing is done by stsigtool only 2015-08-30 20:50:07 +02:00
Jakob Borg
0664c6b5b0 Translation & docs update 2015-08-30 14:38:47 +02:00
Jakob Borg
a4ebac147b Harmonize rendering of identicons with other icons (fixes #2212) 2015-08-30 14:25:11 +02:00
AudriusButkevicius
cf802dc67e Hide .stignore (fixes #2114) 2015-08-30 12:59:01 +01:00
Jakob Borg
84365882de Fix tests 2015-08-28 09:01:21 +02:00
Jakob Borg
37fb6473b3 Woops, fix scanner test 2015-08-28 08:57:37 +02:00
Jakob Borg
ba676f2810 Dividing by zero is frowned upon 2015-08-27 21:41:39 +02:00
Jakob Borg
b3d7c622c3 Show folder scan progress in -verbose, hide local index updates 2015-08-27 21:37:41 +02:00
Jakob Borg
bc016e360e Refactor: ints used in arithmetic should be signed 2015-08-27 21:37:12 +02:00
Jakob Borg
594918dd3c Line height on headers to avoid cut off descenders 2015-08-27 21:28:07 +02:00
Jakob Borg
ec5feb41e8 Merge remote-tracking branch 'syncthing/pr/2200'
* syncthing/pr/2200:
  Add scan percentages (fixes #1030)
2015-08-27 21:25:45 +02:00
AudriusButkevicius
94c52e3a77 Add scan percentages (fixes #1030) 2015-08-27 19:20:43 +01:00
AudriusButkevicius
875de4f637 Use new address schema when creating default config 2015-08-27 19:18:45 +01:00
Audrius Butkevicius
72fa5a69f1 Merge pull request #2206 from calmh/logfile
Allow -logfile on all platforms (fixes #2004)
2015-08-27 18:23:03 +01:00
Jakob Borg
d63e54237b Allow -logfile on all platforms (fixes #2004) 2015-08-27 19:11:10 +02:00
Audrius Butkevicius
e256d93b43 Merge pull request #2205 from calmh/mclisten
Bind to IPv6 multicast group instead of ::
2015-08-27 17:50:47 +01:00
Audrius Butkevicius
4d12df5424 Merge pull request #2203 from calmh/discoport
Local discovery should use the same port on v4 as v6 (fixes #2201)
2015-08-27 17:45:13 +01:00
Jakob Borg
cae120fd4d Bind to IPv6 multicast group instead of ::
This makes it possible to run multiple instances on the same box, all
receiving local discovery packets. Tested on Mac, Windows, supposed to
work on at least Linux too. For Windows, there may be issues with XP and
earlier, but meh...
2015-08-27 17:51:15 +02:00
Jakob Borg
be332a6223 Local discovery should use the same port on v4 as v6 (fixes #2201) 2015-08-27 16:04:21 +02:00
Jakob Borg
bda0bb6f13 Merge pull request #2197 from kozec/instance-id
Feature request: startTime in system/status
2015-08-27 08:23:01 +02:00
AudriusButkevicius
68c5dcd83d Add CachedSize field 2015-08-26 22:33:03 +01:00
kozec
9bdcadf634 Added startTime into system/status REST call 2015-08-26 20:28:34 +02:00
Audrius Butkevicius
037be433f8 Merge pull request #2195 from calmh/noreload
Don't trust response header (fixes #2186)
2015-08-25 14:53:45 +01:00
Jakob Borg
2bed62dd9e Don't trust response header (fixes #2186)
Either Angular or the browser sometimes returns cached repsonse header,
causing a flap between requests that return the new version and requests
that return the old one. Here, instead, we trust the actual data
returned by the uncached /rest/system/version call.
2015-08-25 15:40:24 +02:00
Jakob Borg
a27bc4ebea stsigtool should use the built in key by default 2015-08-24 16:24:00 +02:00
Jakob Borg
d6e34761dc Fix events timeout errors
Resetting the timeout doesn't fully cut it, as the timer may fire after we
got an event and the stale expiry be delivered later. This should fix it well enough for
the moment. https://github.com/golang/go/issues/11513
2015-08-24 09:38:39 +02:00
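
A minimal sketch of the underlying timer pitfall (illustrative code, not the actual events API): once the timer has already fired, Reset alone leaves the stale expiry in the channel, so it has to be drained first, otherwise a later select reads a timeout that belonged to the previous wait.

    package events

    import "time"

    // waitLoop handles events until no event arrives within timeout.
    func waitLoop(evs <-chan string, timeout time.Duration, handle func(string)) {
        t := time.NewTimer(timeout)
        defer t.Stop()
        for {
            select {
            case ev := <-evs:
                handle(ev)
                if !t.Stop() {
                    <-t.C // drain the stale expiry before reusing the timer
                }
                t.Reset(timeout)
            case <-t.C:
                return // genuine timeout: no event within the window
            }
        }
    }
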
Audrius Butkevicius
98effcd8e3 Merge pull request #2188 from calmh/pausedevs
Pause and resume devices (ref #215)
2015-08-23 21:33:15 +01:00
Jakob Borg
baa87bc823 Command line switch -paused 2015-08-23 22:03:58 +02:00
Jakob Borg
944d9c84a0 Pause and resume devices (ref #215) 2015-08-23 22:00:21 +02:00
Jakob Borg
1e447741ee Update dependencies 2015-08-23 15:57:26 +02:00
Jakob Borg
4405ac7386 Report reason for no IPv6 multicast with STTRACE=discover 2015-08-23 15:50:57 +02:00
Audrius Butkevicius
e1190f0f0f Merge pull request #2184 from calmh/mc
Multicast double whammy!
2015-08-23 14:28:44 +01:00
Jakob Borg
a7f2416c0c IPv6 multicast on Windows (fixes #1817) 2015-08-23 15:14:26 +02:00
Jakob Borg
40d0100132 Change default IPv6 multicast address (fixes #2090) 2015-08-23 14:59:38 +02:00
Audrius Butkevicius
37f7c48cfc Merge pull request #2181 from calmh/refactor2
Small refactorings on relay
2015-08-23 13:02:06 +01:00
Jakob Borg
21adf752c8 Refactor: slightly simplify relay.Svc 2015-08-23 09:39:53 +02:00
Jakob Borg
aec143b882 Refactor: make IntermediateConnection more like Connection 2015-08-23 08:55:32 +02:00
Jakob Borg
f691040936 Refactor: s/Basic/Direct/ on connection type 2015-08-23 08:43:33 +02:00
Audrius Butkevicius
42acf0ed60 Try harder removing the temp file 2015-08-22 14:18:19 +01:00
Jakob Borg
ca4a3589e5 Increase event test timeout; the build server is slow, especially under -race 2015-08-21 13:26:16 +02:00
Jakob Borg
2eead17224 Actually map the key into Docker 2015-08-21 13:24:50 +02:00
Jakob Borg
626b26a227 Pass -sign parameter to build.go from Docker if key is present 2015-08-21 13:13:12 +02:00
Audrius Butkevicius
c46db0761e Merge pull request #2179 from calmh/signedrels
Use signed releases for automatic upgrade
2015-08-21 09:58:57 +01:00
Jakob Borg
cfed06697d Only accept correctly signed upgrades 2015-08-21 10:36:28 +02:00
Jakob Borg
a0d9183b14 Sign binaries when given "-sign keyfile" option 2015-08-21 09:33:46 +02:00
Jakob Borg
d3eb674b30 Add a signature package and stsigtool CLI utility 2015-08-21 09:31:17 +02:00
Jakob Borg
7fe1fdd8c7 Improve status reporter 2015-08-20 14:29:57 +02:00
Jakob Borg
37cbe68204 Fix broken connection close 2015-08-20 13:58:07 +02:00
Jakob Borg
f76a66fc55 Very basic status service 2015-08-20 12:59:44 +02:00
Jakob Borg
d7949aa58e I contribute stuff 2015-08-20 12:33:52 +02:00
Jakob Borg
f0c0c5483f Cleaner build 2015-08-20 12:33:11 +02:00
Jakob Borg
7d444021bb Update protocol dependency 2015-08-20 12:14:08 +02:00
Audrius Butkevicius
388a29bbe2 Merge pull request #17 from syncthing/fuzzing
Add some robustness for failure modes detected by go-fuzz
2015-08-20 09:47:47 +01:00
Jakob Borg
a03dd1bd41 Update test configs to v12 2015-08-20 09:38:47 +02:00
Jakob Borg
4b366f2857 This is now the v0.12 branch 2015-08-20 09:19:55 +02:00
Jakob Borg
c87faace6b Merge remote-tracking branch 'syncthing/pr/1995'
* syncthing/pr/1995:
  Add switch to disable relays
  Do not start relay service unless explicitly asked for, or global announcement server is running
  Add dynamic relay lookup (DDoS relays.syncthing.net!)
  Discovery clients now take an announcer, global discovery is delayed
  Expose connection type and relay status in the UI
  Add dependencies (fixes #1364)
  Check relays for available devices
  Add incoming connection relay service
  Add unsubscribe to config
  Connections have types
  Large refactoring/feature commit
2015-08-20 09:13:37 +02:00
Zillode
e2be051558 Merge pull request #2169 from calmh/restartmon
Retain standard streams over restart (fixes #2155)
2015-08-19 23:16:32 +02:00
Audrius Butkevicius
1e8b185377 Add switch to disable relays 2015-08-19 21:13:40 +01:00
Audrius Butkevicius
031804827f Do not start relay service unless explicitly asked for, or global announcement server is running 2015-08-19 21:13:10 +01:00
Audrius Butkevicius
6cccd9b6fc Add dynamic relay lookup (DDoS relays.syncthing.net!) 2015-08-19 21:12:34 +01:00
Audrius Butkevicius
687fbb0a7e Discovery clients now take an announcer, global discovery is delayed 2015-08-19 21:12:00 +01:00
Audrius Butkevicius
8f2db99c86 Expose connection type and relay status in the UI 2015-08-19 21:11:55 +01:00
Audrius Butkevicius
2c0f8dc546 Add dependencies (fixes #1364) 2015-08-19 21:06:46 +01:00
Audrius Butkevicius
a388fb0bb7 Check relays for available devices 2015-08-19 20:57:37 +01:00
Audrius Butkevicius
27465353c1 Add incoming connection relay service 2015-08-19 20:57:33 +01:00
Audrius Butkevicius
c2ccab4361 Add unsubscribe to config 2015-08-19 20:55:33 +01:00
Audrius Butkevicius
bb876eac82 Connections have types 2015-08-19 20:55:29 +01:00
Audrius Butkevicius
34c04babbe Large refactoring/feature commit
1. Change listen addresses to URIs
2. Break out connectionSvc to support listeners and dialers based on schema
3. Add relay announcement and lookups part of discovery service
2015-08-19 20:53:01 +01:00
Audrius Butkevicius
7c6a310179 Fix after package move 2015-08-19 20:49:34 +01:00
Audrius Butkevicius
50702eda94 Merge pull request #2171 from Zillode/staggered-test
Add unit test for staggered versioning (fixes #2165)
2015-08-19 20:11:19 +01:00
Jakob Borg
59eeafbdfa Merge pull request #2174 from alex2108/master
Fix time zone error in staggered versioning (fixes #2165)
2015-08-19 20:25:24 +02:00
Alexander Graf
abc606608c Fix time zone error in staggered versioning (fixes #2165) 2015-08-19 17:23:50 +02:00
Jakob Borg
1487552b48 s/in/at/ (fixes #2158) 2015-08-19 10:42:48 +02:00
Jakob Borg
c7dbe18df6 Newest first should be different from oldest first (fixes #2161) 2015-08-19 09:42:52 +02:00
Jakob Borg
c2bc3358cc Merge pull request #2168 from calmh/codename
Add release code name
2015-08-19 08:31:22 +02:00
Lode Hoste
47a1494d68 Add unit test for staggered versioning (fixes #2165) 2015-08-18 19:52:58 +02:00
Jakob Borg
dbb388719e Retain standard streams over restart (fixes #2155) 2015-08-18 17:24:50 +02:00
Jakob Borg
283c91548a Add release code name
I figured we're missing out on being cool and awesome by not having an
alphabetically based release code name like the big guys. This commit
fixes that. I've unilaterally decided on a theme of "$metal $bug"
because metals are kind of cool, and bugs, well, ...
2015-08-18 13:33:36 +02:00
Audrius Butkevicius
38b93bd310 Merge pull request #2167 from uok/fixicon
Fix missing folder master icon
2015-08-18 09:52:22 +01:00
Ben Schulz
8dcc30ac83 Fix missing folder master icon 2015-08-18 10:40:18 +02:00
Jakob Borg
0ee123375d Merge remote-tracking branch 'syncthing/pr/2117'
* syncthing/pr/2117:
  Disable testing upgrade endpoint because it fails when disconnected
2015-08-18 09:15:00 +02:00
Jakob Borg
be18cbef8b Update dependencies 2015-08-18 08:56:07 +02:00
Jakob Borg
6a36ec63d7 Empty messages with the compression bit set should be accepted 2015-08-18 08:46:40 +02:00
Jakob Borg
9c8b907ff1 All slice types must have limits
The XDR unmarshaller allocates a []T when it sees a slice type and reads
the expected length, so we must always limit the length in order to
avoid allocating too much memory when encountering corruption.
2015-08-18 08:46:40 +02:00
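To make the constraint concrete, the sketch below decodes a length-prefixed slice but validates the element count and element sizes against limits before allocating. The function names and limits are illustrative, not the xdr package's real API.

    package sketch

    import (
        "encoding/binary"
        "fmt"
        "io"
    )

    // readUint32 reads a big-endian 32-bit length prefix.
    func readUint32(r io.Reader) (uint32, error) {
        var b [4]byte
        if _, err := io.ReadFull(r, b[:]); err != nil {
            return 0, err
        }
        return binary.BigEndian.Uint32(b[:]), nil
    }

    // decodeByteSlices decodes a slice of byte slices, refusing element
    // counts and sizes above the given limits so corrupt input cannot force
    // an enormous allocation.
    func decodeByteSlices(r io.Reader, maxElems, maxElemSize uint32) ([][]byte, error) {
        n, err := readUint32(r)
        if err != nil {
            return nil, err
        }
        if n > maxElems {
            return nil, fmt.Errorf("element count %d exceeds limit %d", n, maxElems)
        }
        out := make([][]byte, 0, n)
        for i := uint32(0); i < n; i++ {
            l, err := readUint32(r)
            if err != nil {
                return nil, err
            }
            if l > maxElemSize {
                return nil, fmt.Errorf("element size %d exceeds limit %d", l, maxElemSize)
            }
            buf := make([]byte, l)
            if _, err := io.ReadFull(r, buf); err != nil {
                return nil, err
            }
            out = append(out, buf)
        }
        return out, nil
    }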
Jakob Borg
f769df16e8 Reject unreasonably large messages
We allocate a []byte to read the message into, so if the header says the
message is several gigabytes large, we may run into trouble. In reality,
a message should never be that large so we impose a limit.
2015-08-18 08:46:40 +02:00
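A minimal sketch of the size check described above: read the length header, refuse anything above a cap, and only then allocate the receive buffer. The cap value and function name are illustrative.

    package sketch

    import (
        "encoding/binary"
        "fmt"
        "io"
    )

    const maxMessageSize = 64 << 20 // illustrative 64 MiB cap

    // readMessage reads one length-prefixed message, refusing to allocate a
    // buffer for anything larger than maxMessageSize.
    func readMessage(r io.Reader) ([]byte, error) {
        var hdr [4]byte
        if _, err := io.ReadFull(r, hdr[:]); err != nil {
            return nil, err
        }
        size := binary.BigEndian.Uint32(hdr[:])
        if size > maxMessageSize {
            return nil, fmt.Errorf("message of %d bytes exceeds %d byte limit", size, maxMessageSize)
        }
        buf := make([]byte, size)
        if _, err := io.ReadFull(r, buf); err != nil {
            return nil, err
        }
        return buf, nil
    }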
Jakob Borg
c6f5075721 Enable testing with go-fuzz 2015-08-18 08:46:40 +02:00
Lode Hoste
8eb494c13e Disable testing upgrade endpoint because it fails when disconnected 2015-08-17 22:08:35 +02:00
Audrius Butkevicius
b6b6375f70 Merge pull request #2163 from calmh/dbrecover
Recover from 'corrupted or incomplete CURRENT file' etc
2015-08-16 15:58:51 +01:00
Jakob Borg
8783688391 Recover from 'corrupted or incomplete CURRENT file' etc (fixes #2017) 2015-08-16 16:36:06 +02:00
Jakob Borg
98afc3e99c Docs & translation update 2015-08-16 15:29:48 +02:00
Audrius Butkevicius
50a1858367 Merge pull request #2136 from calmh/noarchivedir
Clarify and correct handling of existing files/directories when pulling
2015-08-15 14:31:38 +01:00
Audrius Butkevicius
f3f586773b Merge pull request #2160 from calmh/rlimit
Increase open file (fd) limit if possible
2015-08-15 14:31:20 +01:00
Jakob Borg
61a182077f Clarify and correct handling of existing files/directories when pulling
This fixes a corner case I discovered in the symlink branch, where we
unexpectedly succeed in "replacing" an entire non-empty directory tree
with a file or symlink. This happens when archiving is in use, as we
then just move the entire tree away into the archive. This is wrong as
we should just archive files and fail on non-empty dirs in all cases.

New handling first checks what the (old) thing is, and if it's a
directory or symlink just does the delete, otherwise does conflict
handling or archiving as appropriate.
2015-08-15 15:29:59 +02:00
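The decision described above can be sketched as: look at what currently occupies the path, delete it directly if it is a directory or symlink (letting the delete fail on non-empty directories), and otherwise hand the file to conflict handling or archiving. The archive callback below is hypothetical.

    package sketch

    import "os"

    // clearForReplacement decides what to do with whatever currently occupies
    // path before a new file or symlink is put there. Directories and symlinks
    // are removed directly (os.Remove fails on non-empty directories, which is
    // the desired behaviour); regular files go through the archive callback.
    func clearForReplacement(path string, archive func(string) error) error {
        info, err := os.Lstat(path)
        if os.IsNotExist(err) {
            return nil // nothing is in the way
        }
        if err != nil {
            return err
        }
        if info.IsDir() || info.Mode()&os.ModeSymlink != 0 {
            return os.Remove(path)
        }
        return archive(path)
    }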
Jakob Borg
1c9513e770 Increase open file (fd) limit if possible
This will decrease the risk of running out of file descriptors for the
database and other bad things, which could otherwise potentially happen
if we're serving lots of requests and scanning in parallel, etc.

Windows doesn't have a per process open file limit like Unix so we don't
need to worry about it there.
2015-08-15 15:28:53 +02:00
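A sketch of raising the soft open-file limit towards the hard limit on Unix-like systems; the real implementation may differ, and as noted above the call simply isn't needed on Windows.

    //go:build !windows

    package sketch

    import (
        "log"
        "syscall"
    )

    // raiseFDLimit bumps the soft RLIMIT_NOFILE towards the hard limit so the
    // process is less likely to run out of file descriptors under load.
    func raiseFDLimit() {
        var lim syscall.Rlimit
        if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
            log.Println("getrlimit:", err)
            return
        }
        if lim.Cur < lim.Max {
            lim.Cur = lim.Max
            if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
                log.Println("setrlimit:", err)
            }
        }
    }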
Jakob Borg
5e5eb9bf8e Update test configs to v11 2015-08-14 14:19:43 +02:00
Audrius Butkevicius
7a9bb65e03 Merge pull request #2156 from calmh/stuckatzero
Don't get stuck at "Syncing 0%" when adding a new folder
2015-08-14 10:42:01 +01:00
Jakob Borg
a5345ac71e Don't get stuck at "Syncing 0%" when adding a new folder
The number of copiers and pullers is set to default at config loading
time, but the new folder configuration doesn't pass through config
loading so we start up with 0 copiers and 0 pullers and hence get stuck.
I moved the default handling to the puller itself instead. I think this
way is also cleaner as we get to keep the 0 in the config and the puller
gets to decide the defaults on its own.
2015-08-14 10:35:51 +02:00
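In other words, a zero value in the folder configuration now means "use the default", resolved where the workers are started rather than at config load time. A minimal sketch with made-up default values (the project's actual numbers may differ):

    package sketch

    // Illustrative defaults, not the project's actual values.
    const (
        defaultCopiers = 1
        defaultPullers = 16
    )

    // effectiveWorkers maps a configured value of 0 to the default, so a
    // folder added at runtime (which never went through config loading)
    // still gets a sane number of copiers and pullers.
    func effectiveWorkers(cfgCopiers, cfgPullers int) (copiers, pullers int) {
        copiers, pullers = cfgCopiers, cfgPullers
        if copiers == 0 {
            copiers = defaultCopiers
        }
        if pullers == 0 {
            pullers = defaultPullers
        }
        return copiers, pullers
    }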
Jakob Borg
ae5079f7b4 Update lang-en.json 2015-08-14 09:13:09 +02:00
Jakob Borg
ea1ecfbc38 Merge pull request #2147 from uok/dontbesonegative
Prevent negative values for number inputs
2015-08-14 09:12:10 +02:00
Jakob Borg
a84b6b4bcc Merge pull request #2145 from uok/awesome
Change to Font Awesome icon font (fixes #2138)
2015-08-14 09:11:14 +02:00
Ben Schulz
93023128fd Prevent negative values for number inputs
- settings: incoming/outgoing rate limit - min: 0
- folder: maximum age (staggered file versioning) - min: 0
- help texts for validation
2015-08-13 15:56:10 +02:00
Ben Schulz
77157f16a1 Change to Font Awesome icon font (fixes #2138)
- remove Glyphicon assets and customize bootstrap CSS
- add Font Awesome v4.4.0 assets
- replace Glyphicons with Font Awesome icons in HTML
- add icons to modal headers
- add attribution for Font Awesome
- format HTML source code for buttons
2015-08-13 15:41:51 +02:00
Jakob Borg
99736e5066 Ensure dir before files ordering when scanning 2015-08-13 13:01:50 +02:00
Jakob Borg
681306b7a1 Clean up the scripts a bit (...)
- Move the Go files into script/ instead of random places
- Rewrite check-contrib.sh into check-authors.go and check-copyright.go
- Clean up build.sh a little bit
2015-08-13 12:35:26 +02:00
Jakob Borg
5f36c9d4de Merge pull request #2152 from Zillode/fix-gui-reorg
Revert small changes made during reorg GUI
2015-08-12 21:47:25 +02:00
Lode Hoste
6cbc8791b1 Revert small changes made during reorg GUI 2015-08-12 21:31:34 +02:00
Jakob Borg
c08de67b0d Remove erroneous file 2015-08-11 17:05:59 +02:00
Audrius Butkevicius
b21e18dfad Merge pull request #2149 from kamadak/fix-delete-folder
Fix deleting a folder.
2015-08-11 00:14:26 +01:00
KAMADA Ken'ichi
1e497915be Fix deleting a folder.
Removing a folder does not work in the "Edit Folder" dialog.
This bug was introduced in 26d52be.
2015-08-11 07:48:48 +09:00
Audrius Butkevicius
3704c41dda Merge pull request #2146 from uok/double
Remove double slashes in directives (fixes #2143)
2015-08-10 11:45:03 +01:00
Ben Schulz
6ff31ac666 Remove double slashes in directives (fixes #2143) 2015-08-10 11:48:39 +02:00
Jakob Borg
a2f73a7d35 Allow specifying Docker image to use for building 2015-08-09 14:40:18 +02:00
Jakob Borg
1492e57676 Minor typo in UPnP service description list 2015-08-09 14:14:13 +02:00
Jakob Borg
7504fc53b6 Merge branch 'v0.11'
* v0.11:
  Translations and docs update
  Enable browser caching of static resources
  Handle multiple case insensitivity prefixes in ignores (fixes #2134)
  Make rescan available for unshared folders
  Add timeout for peek (fixes #1035)
  Fix TestReset when Syncthing shuts down too fast
  Clarify password in integration tests
  Properly rename config files during integration tests (fixes #1769)
2015-08-09 11:58:18 +02:00
Jakob Borg
daa2bcefad Translations and docs update 2015-08-09 11:56:22 +02:00
Jakob Borg
49aa9399be Repair config tests 2015-08-09 11:46:28 +02:00
Jakob Borg
a71090df81 Enable browser caching of static resources
This sends the Cache-Control header to allow caching of static resources,
and checks the If-Modified-Since header to allow the browser to use the
cached resource on refresh.
2015-08-09 11:36:06 +02:00
Jakob Borg
0bfcafc5c6 Handle multiple case insensitivity prefixes in ignores (fixes #2134) 2015-08-09 11:35:12 +02:00
Lode Hoste
161d5c8379 Make rescan available for unshared folders 2015-08-09 11:34:44 +02:00
Audrius Butkevicius
5cfb578170 Add timeout for peek (fixes #1035) 2015-08-09 11:34:30 +02:00
Lode Hoste
9b0d47e9eb Fix TestReset when Syncthing shuts down too fast 2015-08-09 11:34:21 +02:00
Lode Hoste
13f4706067 Clarify password in integration tests 2015-08-09 11:34:10 +02:00
Lode Hoste
7ebdb1736f Properly rename config files during integration tests (fixes #1769) 2015-08-09 11:34:05 +02:00
Jakob Borg
2bcb57c994 Merge branch 'pr-2066'
* pr-2066:
  Configurable home disk percentage, translations
  Add minimum disk free percentage to GUI
  Stop folder when running out of disk space (fixes #2057)
2015-08-09 10:38:33 +02:00
Jakob Borg
a2df691c7d Configurable home disk percentage, translations 2015-08-09 10:37:23 +02:00
Lode Hoste
58f1191f2d Add minimum disk free percentage to GUI 2015-08-09 10:37:23 +02:00
Lode Hoste
dfaa999291 Stop folder when running out of disk space (fixes #2057)
& tweaks by calmh
2015-08-09 10:37:23 +02:00
Jakob Borg
a693698279 Mend tests 2015-08-09 10:00:28 +02:00
Jakob Borg
6a58033f2b Merge pull request #2124 from calmh/go15
Updates for Go 1.5
2015-08-09 09:37:09 +02:00
Jakob Borg
7705a6c1f1 mv internal lib 2015-08-09 09:35:26 +02:00
Jakob Borg
0a803891a4 Updates for Go 1.5 2015-08-09 09:35:25 +02:00
Jakob Borg
7d620a93b9 Merge pull request #2141 from kamadak/fix-setting-addrlist
Fix editing address lists in the Settings dialog.
2015-08-09 09:11:30 +02:00
KAMADA Ken'ichi
3f3b2f4c99 Fix editing address lists in the Settings dialog.
Setting "Sync Protocol Listen Addresses" and "Global Discovery
Server" in the Settings dialog does not work.  This bug seems to
have been introduced in 26d52be.
2015-08-09 15:52:22 +09:00
Audrius Butkevicius
9eb4089710 Merge pull request #2137 from calmh/caching
Enable browser caching of static resources
2015-08-08 13:20:58 +01:00
Jakob Borg
257d1afdf8 Enable browser caching of static resources
This sends the Cache-Control header to allow caching of static resources,
and checks the If-Modified-Since header to allow the browser to use the
cached resource on refresh. Also fixes some paths that caused redirects
(core//foo -> core/foo)
2015-08-08 13:50:18 +02:00
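A rough sketch of the header handling described above using net/http: set Cache-Control on static assets and answer conditional requests with 304 Not Modified when the asset hasn't changed since the If-Modified-Since date. The one-hour lifetime and wrapper name are illustrative, not the project's actual handler.

    package sketch

    import (
        "net/http"
        "time"
    )

    // cacheStatic wraps a static-asset handler, allowing the browser to cache
    // responses and short-circuiting conditional requests with 304.
    func cacheStatic(next http.Handler, modTime time.Time) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Header().Set("Cache-Control", "public, max-age=3600")
            if ims := r.Header.Get("If-Modified-Since"); ims != "" {
                if t, err := http.ParseTime(ims); err == nil && !modTime.Truncate(time.Second).After(t) {
                    w.WriteHeader(http.StatusNotModified)
                    return
                }
            }
            w.Header().Set("Last-Modified", modTime.UTC().Format(http.TimeFormat))
            next.ServeHTTP(w, r)
        })
    }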
Audrius Butkevicius
dad1fb7805 Merge pull request #2135 from calmh/caseinsign
Handle multiple case insensitivity prefixes in ignores (fixes #2134)
2015-08-08 12:04:46 +01:00
Jakob Borg
e312fdd4f8 Handle multiple case insensitivity prefixes in ignores (fixes #2134) 2015-08-08 11:58:20 +02:00
Jakob Borg
9b6681d543 Use grid instead of three-column (fixes #2130) 2015-08-08 11:37:19 +02:00
Jakob Borg
bcc04623c1 Fix editing device address (fixes #2129) 2015-08-08 11:29:13 +02:00
Jakob Borg
d75d162058 Merge pull request #2126 from AudriusButkevicius/peek
Add timeout for peek (fixes #1035)
2015-08-07 17:30:54 +02:00
Audrius Butkevicius
8f2294bbd4 Merge pull request #2128 from Zillode/fix-rescan-btn-unshared
Make rescan available for unshared folders
2015-08-06 20:54:41 +01:00
Lode Hoste
d1b91c7619 Make rescan available for unshared folders 2015-08-06 21:52:29 +02:00
Audrius Butkevicius
1b6b481fcc Add timeout for peek (fixes #1035) 2015-08-06 12:07:34 +01:00
Jakob Borg
dd64ba1910 Merge pull request #2116 from Zillode/fix-test-reset
Fix TestReset when Syncthing shuts down too fast
2015-08-05 09:47:23 +02:00
Jakob Borg
82a5d1cd79 Merge pull request #2108 from Zillode/fix-delete-button
Rename 'delete' to 'remove' (fixes #1007)
2015-08-05 09:34:27 +02:00
Jakob Borg
c26c172d01 Merge pull request #2103 from Zillode/fix-integration-rename-windows
Fix integration rename windows
2015-08-05 08:59:53 +02:00
Lode Hoste
a7b7aaa7cb Fix TestReset when Syncthing shuts down too fast 2015-08-04 09:20:46 +02:00
Lode Hoste
4e5c02c05c Clarify password in integration tests 2015-08-03 20:03:50 +02:00
Lode Hoste
2baf61fda3 Properly rename config files during integration tests (fixes #1769) 2015-08-03 20:03:50 +02:00
Lode Hoste
219a25fe80 Rename 'delete' to 'remove' (fixes #1007) 2015-08-03 20:01:26 +02:00
Jakob Borg
b1dd704819 Rebuild assets 2015-08-02 09:41:22 +02:00
Jakob Borg
9513e91d66 Flatten GUI tree somewhat
The very deep tree structure didn't really agree with me, sorry. This
makes the core module rather large, but on the other hand that just
highlights that it is rather large.
2015-08-02 09:24:01 +02:00
Dennis Wilson
26d52bedb3 Squashed commit of pull request #1954 2015-08-02 09:21:46 +02:00
Jakob Borg
19451e0654 Translation and docs update 2015-08-02 09:19:32 +02:00
Jakob Borg
fa922d7792 Send index immediately on local change 2015-08-02 08:08:24 +02:00
Jakob Borg
4949e3ba41 Merge pull request #16 from syncthing/bufs
Use bytepool for response buffers
2015-08-02 08:08:10 +02:00
Jakob Borg
bbe1de3119 Merge pull request #2100 from AudriusButkevicius/memory
Use protocol provided buffer for requests (fixes #1157)
2015-08-02 08:07:52 +02:00
Jakob Borg
f87e9b596d Merge pull request #2106 from Zillode/scan-deletes
Reduce scanning effort
2015-08-02 07:42:39 +02:00
Jakob Borg
917e12952e Merge pull request #2109 from Zillode/update-credits
Update credits for dependencies (fixes #2082)
2015-08-02 07:25:45 +02:00
Audrius Butkevicius
1977c526e4 Use protocol provided buffers for requests (fixes #1157) 2015-08-01 12:35:47 +01:00
Audrius Butkevicius
b63351074c Update protocol package 2015-08-01 12:22:19 +01:00
Audrius Butkevicius
ebcdea63c0 Use sync.Pool for response buffers 2015-08-01 12:21:49 +01:00
Lode Hoste
d78eb1247e Update credits for dependencies (fixes #2082) 2015-07-31 23:19:09 +02:00
Lode Hoste
9b9fe0d65c Reduce scanning effort 2015-07-31 21:32:49 +02:00
Jakob Borg
5a2db802d9 Fix TestReset 2015-07-28 21:31:01 +04:00
Jakob Borg
d3972b88f2 Remove set.ReplaceWithDelete (dead code) 2015-07-28 21:09:43 +04:00
Audrius Butkevicius
e62cf13760 Add stwatchfile 2015-07-27 19:00:22 +01:00
Jakob Borg
a5f6e3bba0 Update translations and docs 2015-07-26 11:25:40 +02:00
Jakob Borg
d170660c25 Usage -> Statistics 2015-07-24 12:04:16 +02:00
Audrius Butkevicius
2202aaed51 Merge pull request #2087 from calmh/norestart
Add folders without restart (fixes #2063)
2015-07-24 08:06:36 +01:00
Audrius Butkevicius
cbefcd50cf Merge pull request #2088 from calmh/usagedata
Link to usage data (ref syncthing/website#17)
2015-07-24 07:55:41 +01:00
Jakob Borg
1acfa291a0 Link to usage data (ref syncthing/website#17) 2015-07-24 08:53:33 +02:00
Jakob Borg
21accd534c Add folders without restart (fixes #2063) 2015-07-24 08:20:57 +02:00
Audrius Butkevicius
eb29989dff Connection errors are debug errors 2015-07-23 20:53:16 +01:00
Audrius Butkevicius
de89d7a976 Merge pull request #2084 from calmh/norestart
Add devices without restart (fixes #2083)
2015-07-23 11:07:27 +01:00
Audrius Butkevicius
78ef42daa1 Add ability to lookup relay status 2015-07-22 22:34:05 +01:00
Jakob Borg
319abebd70 Update all dependencies 2015-07-22 12:09:44 +02:00
Jakob Borg
76480adda5 Add devices without restart (fixes #2083) 2015-07-22 10:43:47 +02:00
Jakob Borg
e205f8afbb Don't error integration tests on unexpected EOF at Stop() 2015-07-22 10:43:33 +02:00
Audrius Butkevicius
42dc51784e Merge pull request #2079 from snnd/bugfix_reloads
fix(core): prevent endless reload on cache requests
2015-07-21 23:01:15 +01:00
Dennis Wilson
a4a46f480d refactor(core): eliminate global state of guiVersion and deviceId 2015-07-21 23:47:35 +02:00
Dennis Wilson
e34be16237 style(core): add simple flow, hoisting, stacktrace infos 2015-07-21 23:41:10 +02:00
Dennis Wilson
dfcc166918 fix(core): prevent endless reload on cache requests 2015-07-21 22:35:51 +02:00
Audrius Butkevicius
895d56ed04 Merge pull request #2076 from calmh/ignoredelete
Optionally ignore remote deletes
2015-07-21 19:48:06 +01:00
Jakob Borg
12eab4a8ba Optionally ignore remote deletes (fixes #1254) 2015-07-21 20:39:19 +02:00
Zillode
3eb2b1f7a2 Fix Systemd readme link 2015-07-21 20:32:35 +02:00
Audrius Butkevicius
6ecc9bf93a Merge pull request #2077 from calmh/conflictwins
Determine conflict winner based on change type and modification time (fixes #1848)
2015-07-21 15:05:08 +01:00
Audrius Butkevicius
22e24fc387 Merge pull request #15 from syncthing/conflictwins
Base for better conflict resolution
2015-07-21 14:45:17 +01:00
Jakob Borg
516d88b072 Base for better conflict resolution 2015-07-21 15:40:02 +02:00
Jakob Borg
da4ebb6535 Determine conflict winner based on change type and modification time (fixes #1848) 2015-07-21 15:39:20 +02:00
Audrius Butkevicius
77457e91e9 Merge pull request #1 from syncthing/review
Code review
2015-07-20 18:43:31 +01:00
Jakob Borg
6e4d33c741 Don't run tests with audit on 2015-07-20 15:46:14 +02:00
Jakob Borg
d3387e2a28 Make sure CPU profile actually gets written before exiting 2015-07-20 15:34:40 +02:00
Jakob Borg
491452a19d Improve performance for syncing many small files quickly
Without this, as soon as we'd touched 1000 files in the last minute
(which can happen), we got stuck doing cache cleaning all the time,
burning a lot of CPU time.
2015-07-20 15:30:44 +02:00
Jakob Borg
7d3257b222 Use soft shutdown when running tests 2015-07-20 15:05:15 +02:00
Jakob Borg
1836ef2884 Squashed commit of pull request #1981
Conflicts:
	gui/scripts/syncthing/core/controllers/syncthingController.js
	internal/auto/gui.files.go
2015-07-20 14:48:03 +02:00
Jakob Borg
43d6322d0f Merge pull request #2061 from calmh/atomicwriter
Add osutil.AtomicWriter (take two)
2015-07-20 14:27:36 +02:00
Jakob Borg
f0684d83e9 Add osutil.AtomicWriter
This captures the common pattern of writing to a temp file and moving it
to its real name only if everything went well. It reduces the amount of
code in some places where we do this, but maybe not as much as I
expected because the upgrade thing is still a special snowflake...
2015-07-20 14:27:14 +02:00
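The temp-file-then-rename pattern that osutil.AtomicWriter captures looks roughly like this; a sketch using today's standard library helpers, not the package's actual interface.

    package sketch

    import (
        "os"
        "path/filepath"
    )

    // atomicWriteFile writes data to a temporary file in the target directory
    // and renames it over the real name only once everything has succeeded,
    // so readers never observe a partially written file.
    func atomicWriteFile(path string, data []byte, perm os.FileMode) error {
        tmp, err := os.CreateTemp(filepath.Dir(path), ".tmp-*")
        if err != nil {
            return err
        }
        defer os.Remove(tmp.Name()) // cleans up on failure; harmless after a successful rename

        if _, err := tmp.Write(data); err != nil {
            tmp.Close()
            return err
        }
        if err := tmp.Chmod(perm); err != nil {
            tmp.Close()
            return err
        }
        if err := tmp.Close(); err != nil {
            return err
        }
        return os.Rename(tmp.Name(), path)
    }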
Jakob Borg
3f3170818d Merge pull request #2072 from dva/patch-1
Double curly brace notation displaying
2015-07-20 14:25:27 +02:00
Jakob Borg
7683096fe1 Add dva 2015-07-20 14:25:07 +02:00
Jakob Borg
bb438bfb17 Squashed commit of pull request #1990
commit 4eb3ff55ba
Merge: ddb3fea a04b005
Author: Brian R. Becker <brbecker@gmail.com>
Date:   Sat Jul 11 20:45:30 2015 -0700

    Merge remote-tracking branch 'upstream/master'

commit ddb3fea0d9
Author: Brian R. Becker <brbecker@gmail.com>
Date:   Mon Jun 22 11:36:58 2015 -0700

    Corrected spelling in GUI message
2015-07-20 14:22:36 +02:00
Jakob Borg
a11aa295de Squashed commit of pull request #1875
commit d60fbce311
Author: Jacek Szafarkiewicz <szafar@linux.pl>
Date:   Mon Jun 1 11:16:36 2015 +0200

    Correct order of deb files

commit 3b2ecfcc45
Merge: f4daebb c23a601
Author: Jacek Szafarkiewicz <szafar@linux.pl>
Date:   Mon Jun 1 11:15:06 2015 +0200

    Merge github.com:syncthing/syncthing

    Conflicts:
    	build.go

commit f4daebb851
Author: Jacek Szafarkiewicz <szafar@linux.pl>
Date:   Tue May 26 12:58:25 2015 +0200

    Add me to AUTHORS

commit 9e77f4bea0
Author: Jacek Szafarkiewicz <szafar@linux.pl>
Date:   Tue May 26 12:57:40 2015 +0200

    Add systemd files to deb package
2015-07-20 14:18:07 +02:00
Jakob Borg
049d92b525 Style and minor fixes, client package 2015-07-20 14:04:34 +02:00
Jakob Borg
f9bd59f031 Style and minor fixes, main package 2015-07-20 14:04:34 +02:00
Audrius Butkevicius
6d0d5bd566 Merge pull request #2 from syncthing/ratelimit
Implement global and per session rate limiting
2015-07-20 12:57:55 +01:00
Jakob Borg
35d20a19bc Implement global and per session rate limiting 2015-07-20 13:37:11 +02:00
Jakob Borg
dab1c4cfc9 Build script from discosrv 2015-07-20 12:11:06 +02:00
Denis A.
9a50f4ac1f Double curly brace notation displaying
Prevent double curly brace notation from displaying momentarily before angular.js compiles/interpolates the document
2015-07-20 12:18:48 +03:00
Jakob Borg
59e829e595 Translation & docs update 2015-07-19 13:34:11 +02:00
Audrius Butkevicius
f86946c6df Fix bugs 2015-07-17 22:04:02 +01:00
Audrius Butkevicius
e97f75cad5 Change receiver type, add GoStringer 2015-07-17 21:49:45 +01:00
Audrius Butkevicius
2505f82ce5 General cleanup 2015-07-17 20:17:49 +01:00
Jakob Borg
78dca5fe8b Assets & translations 2015-07-16 14:03:21 +02:00
Jakob Borg
8c816f64e4 Merge pull request #2064 from uok/infotext
Improve info text for device addresses (fixes #2044)
2015-07-16 14:02:35 +02:00
Ben Schulz
22c525e3fe Improve info text for device addresses (fixes #2044)
Improve info text for device addresses (#2044)
2015-07-16 10:15:33 +02:00
Jakob Borg
f3f6b03d85 Don't let folder ID escape into HTML tag ID:s (fixes #2059) 2015-07-16 08:13:10 +02:00
Audrius Butkevicius
00bebc317e Merge pull request #2062 from canton7/feature/issue-2041
Allow #editIgnores to scroll in browser (fixes #2041)
2015-07-15 17:48:15 +01:00
Antony Male
8f38e83aaf Allow #editIgnores to scroll in browser (fixes #2041)
class modal-open is applied to <body>, which ultimately means that the
browser will scroll to the modal's content. However, #editFolder was
finishing its close animation (and removing this modal-open class)
after #editIgnores had set modal-open (and had started its open
animation). The end result is that <body> ends up without modal-open
when #editIgnores is open, and so the browser doesn't properly scroll.

Instead, only open the #editIgnores once #editFolder has finished closing.
2015-07-15 14:20:57 +01:00
Jakob Borg
8fab7ec5e3 Decrease timing sensitivity of ignore.TestCache 2015-07-14 12:12:57 +02:00
Jakob Borg
50eb968109 Merge pull request #2060 from brgmnn/master
Added select-on-click for remote 'Device ID' and 'API Key'
2015-07-14 10:55:32 +02:00
Daniel Bergmann
569314be45 Added select-on-click for remote 'Device ID' and 'API Key'
Now these uneditable boxes match the behaviour when clicking on the ID
box in Actions > Show ID.
2015-07-13 17:01:13 +01:00
Jakob Borg
909d60464e Revert "Merge pull request #2053 from calmh/atomicwriter" (fixes #2058)
This reverts commit b611f72e08, reversing
changes made to a04b005e93.
2015-07-13 12:47:32 +02:00
Jakob Borg
d855abf8b0 Translation & docs update 2015-07-13 08:24:04 +02:00
Audrius Butkevicius
b611f72e08 Merge pull request #2053 from calmh/atomicwriter
Add osutil.AtomicWriter
2015-07-12 12:10:54 +01:00
Jakob Borg
44e3bec42e Add osutil.AtomicWriter
This captures the common pattern of writing to a temp file and moving it
to its real name only if everything went well. It reduces the amount of
code in some places where we do this, but maybe not as much as I
expected because the upgrade thing is still a special snowflake...
2015-07-12 14:28:59 +10:00
Jakob Borg
a04b005e93 Revert "Let suture logging bubble upwards"
This reverts commit 1b837116e6.
2015-07-11 11:12:20 +10:00
Jakob Borg
1b837116e6 Let suture logging bubble upwards 2015-07-11 10:52:57 +10:00
Jakob Borg
d16b04b683 Protocol dep update because I screwed up previous one 2015-07-10 19:58:56 +10:00
Audrius Butkevicius
d2e7a8004d Merge pull request #2048 from calmh/clusterconfigrace
Make sure connection is added to m.protoConn and m.rawConn before it's Start()ed (fixes #2034)
2015-07-10 08:46:31 +01:00
Audrius Butkevicius
7996ef0d45 Merge pull request #14 from syncthing/clusterconfigrace
Connection now needs explicit Start()
2015-07-10 08:31:57 +01:00
Jakob Borg
0c28216ee5 Make sure connection is added to m.protoConn and m.rawConn before it's Start()ed (fixes #2034) 2015-07-10 16:43:41 +10:00
Jakob Borg
b05c1a5bb9 Connection now needs explicit Start() 2015-07-10 16:40:39 +10:00
Jakob Borg
5bb8ea7449 Remove one of two emits of events.DeviceConnected (ref #2034) 2015-07-10 16:12:59 +10:00
Jakob Borg
b78c515724 Debug output on request errors 2015-07-10 16:01:10 +10:00
Audrius Butkevicius
cb1a7a7bdc Update panic instructions (fixes #2039) 2015-07-09 22:43:16 +01:00
Audrius Butkevicius
b8dcb7c884 Update protocol package (fixes #2040) 2015-07-09 22:39:22 +01:00
Audrius Butkevicius
9dd6f848bd Name the folder in error messages 2015-07-09 22:38:21 +01:00
Audrius Butkevicius
1ded554a15 Fix advanced option saving (fixes #2042) 2015-07-09 21:58:58 +01:00
Jakob Borg
bc0ce7b820 launchd: Log to files 2015-07-06 21:45:49 +02:00
Jakob Borg
1da3a57fe7 Asset rebuild 2015-07-05 19:44:55 +02:00
Jakob Borg
8697982302 Add canton7 2015-07-05 19:44:19 +02:00
Jakob Borg
c4168cf855 Merge pull request #2030 from canton7/feature/issue-2029
Use bootstrap grid instead of column-count:3 for aligning checkboxes (fixes #2029)
2015-07-05 19:41:44 +02:00
Antony Male
7023d3ca2b Use bootstrap grid instead of column-count:3 for aligning checkboxes (fixes #2029)
Upgrading to bootstrap 3.3.5 meant that checkboxes inside a div with
column-count:3 set would be unclickable in Chrome: in fact, the entire
div appears to sit on top of its contents, making interaction impossible.

This affected both the 'show folder with these devices' and 'these devices
can access this folder' sections of the UI.

I'm not sure what the underlying cause is, but moving to Bootstrap's
grid system appears to work around the issue. Devices/folders have to be
explicitly split out into rows, otherwise the final element appears offset.

To do this grouping by row, a new filter (groupFilter) has been added, which
turns an input of e.g. [1, 2, 3, 4, 5] with a groupSize of 3 into
[[1, 2, 3], [4, 5]]. However altering the collection in this way throws
Angular into an infinite watch loop, terminating in infdig. m59peacemaker's
pmkr.filterStabilize (MIT) was added to work around this issue.

This also has the nice side-effect of wrapping the list of devices/folders
when the screen width decreases.

See also:
 - #2027 (bootstrap update which triggered this issue)
 - #1121 (last time it happened)
2015-07-05 17:21:02 +01:00
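The grouping the filter performs is a plain chunking transformation; a sketch of the same behaviour in Go rather than Angular, under the assumption that only the shape of the output matters here.

    package sketch

    // group splits items into rows of at most size elements, e.g. size 3
    // turns [1 2 3 4 5] into [[1 2 3] [4 5]].
    func group(items []int, size int) [][]int {
        if size <= 0 {
            return nil
        }
        var rows [][]int
        for len(items) > 0 {
            n := size
            if n > len(items) {
                n = len(items)
            }
            rows = append(rows, items[:n])
            items = items[n:]
        }
        return rows
    }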
Jakob Borg
57a5d13c47 Translation and docs update 2015-07-05 11:24:21 +02:00
Jakob Borg
500b96240b Don't show Failed Items on folder masters 2015-07-05 11:21:15 +02:00
Jakob Borg
13d961d41d Correctly show Override button when out of sync 2015-07-05 11:20:59 +02:00
Jakob Borg
fddc4c2fc0 Rebuild assets 2015-07-05 11:13:35 +02:00
Jakob Borg
061ec7369f Merge pull request #2027 from calmh/bootstrap
Update bootstrap
2015-07-05 11:07:13 +02:00
Jakob Borg
8366dbd8e0 Add link to home page (fixes #1993, fixes #1999) 2015-07-05 11:06:29 +02:00
Jakob Borg
b02047e4b5 Update Bootstrap 3.1.0 -> 3.3.5 2015-07-05 11:05:38 +02:00
Jakob Borg
e9545c4961 Merge pull request #2022 from brgmnn/master
Preserve setgid bit on local directories (fixes #2012)
2015-07-04 19:37:21 +02:00
Audrius Butkevicius
966a2b1df5 Merge pull request #2023 from calmh/advedit
Advanced configuration dialog
2015-07-04 15:08:40 +01:00
Jakob Borg
dec6540967 Implement "advanced configuration" dialog (fixes #2010) 2015-07-04 13:47:43 +02:00
Daniel Bergmann
3fe1673ce9 Preserve setgid bit on local directories (fixes #2012)
When setting the permissions on directories with ignore permissions off,
preserve the setgid bit at its original value instead of clearing it.
2015-07-04 09:01:34 +01:00
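A sketch of the idea: read the directory's current mode and carry its setgid bit over into the permissions being applied. The function name is illustrative.

    package sketch

    import "os"

    // chmodPreservingSetgid applies perm to dir while keeping the directory's
    // existing setgid bit, instead of clearing it.
    func chmodPreservingSetgid(dir string, perm os.FileMode) error {
        info, err := os.Stat(dir)
        if err != nil {
            return err
        }
        return os.Chmod(dir, perm|(info.Mode()&os.ModeSetgid))
    }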
Jakob Borg
e9e13474c9 Merge pull request #2021 from brgmnn/master
Fixed add device button being overlapped by footer (fixes #1950)
2015-07-03 08:56:33 +02:00
Daniel Bergmann
aee9093848 Fixed add device button being overlapped by footer (fixes #1950) 2015-07-02 16:16:42 +01:00
Jakob Borg
76822c7c34 Merge pull request #2018 from brgmnn/master
Added select ID text on click to gui
2015-07-02 11:01:37 +02:00
Daniel Bergmann
5c18d34d89 Added a contact email address for myself.
Added myself to the AUTHORS, NICKS and GUI contributors files. Also
fixed the sort order in AUTHORS and NICKS when adding myself as there
were a couple of entries in both that were not quite in alphabetical
order.
2015-07-02 09:45:22 +01:00
Daniel Bergmann
970a9c7552 Added select ID text on click to gui 2015-07-02 09:34:12 +01:00
Audrius Butkevicius
37a42dc408 Fix CSRF tests (fixes #2009) 2015-06-30 19:38:27 +01:00
Audrius Butkevicius
a03c9f9457 Merge pull request #2001 from calmh/failed-files
Show failed files in web UI
2015-06-30 15:26:24 +01:00
Jakob Borg
60004ebff1 Show FolderErrors result in UI (fixes #1437) 2015-06-30 14:41:48 +02:00
Jakob Borg
2d9fcf6828 Collect puller errors, send FolderErrors event 2015-06-30 14:41:47 +02:00
Audrius Butkevicius
e1959afb6b Change EOL 2015-06-28 21:18:38 +01:00
Audrius Butkevicius
c68c78d412 Do scheme validation in the client 2015-06-28 20:34:28 +01:00
Jakob Borg
c8ac9721d7 Translation and docs update 2015-06-28 21:10:57 +02:00
Audrius Butkevicius
b72d31f87f Progress 2015-06-28 19:57:13 +01:00
Audrius Butkevicius
b1b68b58fe Update protocol package 2015-06-28 11:40:53 +01:00
Audrius Butkevicius
95e15c95f2 Merge pull request #13 from syncthing/timeout
Expose timeouts in protocol
2015-06-28 11:39:51 +01:00
Jakob Borg
ca21db9481 Merge pull request #2006 from AudriusButkevicius/timeout
Make ping timeout configurable (fixes #1751)
2015-06-28 07:45:02 +02:00
Audrius Butkevicius
93ad803073 Make ping timeout configurable (fixes #1751) 2015-06-27 12:34:41 +01:00
Audrius Butkevicius
6cc7f70a65 Update protocol package 2015-06-27 12:07:42 +01:00
Audrius Butkevicius
9f871a3726 Expose timeouts in protocol 2015-06-27 11:07:44 +01:00
Audrius Butkevicius
e19e2c123e Merge pull request #12 from syncthing/ccreq
Enforce ClusterConfiguration at start, then no ordering
2015-06-26 14:44:56 +01:00
Jakob Borg
cbe44e1fff Enforce ClusterConfiguration at start, then no ordering 2015-06-26 15:38:56 +02:00
Jakob Borg
2b0c33f74d Merge pull request #1996 from AudriusButkevicius/checkrace
Potential race between folder being added and scan (fixes #1986)
2015-06-26 12:56:07 +02:00
Audrius Butkevicius
dae1d36a23 Trim string slices upon loading config (fixes #1750) 2015-06-25 16:50:27 +01:00
Audrius Butkevicius
824fa8f17a Fix go lint warnings 2015-06-24 22:05:27 +01:00
Audrius Butkevicius
31cd0b943c Potential race between folder being added and scan (potentially fixes #1986) 2015-06-24 21:59:03 +01:00
Audrius Butkevicius
8e191c8e6b Add initial code 2015-06-24 15:02:23 +01:00
Jakob Borg
070eced2f6 Merge pull request #1985 from calmh/fix-reset
Fix reset DB
2015-06-24 14:07:15 +02:00
Audrius Butkevicius
986f8dfb2e Remove dead variable 2015-06-24 08:43:33 +01:00
Audrius Butkevicius
19d742b9e4 Initial commit 2015-06-24 00:34:16 +01:00
Audrius Butkevicius
8c0c03eb38 Merge pull request #1989 from AudriusButkevicius/session
Use different session cookies per device
2015-06-23 13:56:14 +01:00
Audrius Butkevicius
fd9bc20bc5 Merge pull request #1988 from calmh/dups
Don't rename duplicate folders (fixes #1675)
2015-06-23 11:17:30 +01:00
Audrius Butkevicius
089fca2319 Use different session cookies per device 2015-06-22 19:51:46 +01:00
Jakob Borg
e936890927 Don't rename duplicate folders (fixes #1675)
Renaming them puts the user in a difficult situation as they can't
rename them back in the GUI. This way, they need to fix the config in
the same way it got broken (manual editing or external tool).
2015-06-22 11:27:47 +02:00
Zillode
0450d48f89 Merge pull request #1856 from calmh/fix-1391
Serialize scans (fixes #1391)
2015-06-21 14:26:17 +02:00
Jakob Borg
2463819a3d Update translations and docs 2015-06-21 11:45:54 +02:00
Jakob Borg
2b2cae2d50 Fix reset DB
The reset of all folders failed when there was no data for a given
folder, as it was not returned by db.ListFolders then. But we don't
really care about that, we can "reset" it anyway...
2015-06-21 09:35:41 +02:00
Zillode
0f1b40da71 Merge pull request #1982 from calmh/fix-1978
Sanitize rescan interval values (fixes #1978)
2015-06-20 23:28:18 +02:00
Jakob Borg
f73d5a9ab2 Serialize scans and pulls (fixes #1391) 2015-06-20 23:01:40 +02:00
Jakob Borg
4eb0e24c6e Sanitize rescan interval values (fixes #1978) 2015-06-20 23:01:30 +02:00
Jakob Borg
1d2235abe7 Model must be running for tests 2015-06-20 23:00:33 +02:00
Jakob Borg
d347e54acb Don't start model until services have been added (fixes #1969) 2015-06-20 20:04:47 +02:00
Jakob Borg
b5198d8119 Merge pull request #1968 from calmh/newtests
Refactored integration tests
2015-06-20 19:24:04 +02:00
Jakob Borg
b8b5c5ff34 Merge pull request #1913 from Zillode/fix-reset
Fix 'reset' Rest API on windows
2015-06-20 11:43:05 +02:00
Jakob Borg
a738490a3b Update translation strings 2015-06-20 11:40:05 +02:00
Jakob Borg
54a8de2059 Merge remote-tracking branch 'syncthing/pr/1979'
* syncthing/pr/1979:
  Display Local State Summary (All Folders)
2015-06-20 11:39:31 +02:00
Jakob Borg
cb2c0e7ac5 Add fti7 2015-06-20 11:32:55 +02:00
Frank Isemann
510d309b8a Display Local State Summary (All Folders) 2015-06-19 21:52:19 +02:00
Jakob Borg
a7ce2a7aa5 Merge pull request #1963 from wsgcsysadmin/master
Put thisDeviceName first in page tile to make small browser tabs distinguishable
2015-06-19 08:51:24 +02:00
Jakob Borg
cfe24ecdd9 Add wsgcsysadmin 2015-06-19 08:50:36 +02:00
Jakob Borg
c5e9cb025c Merge pull request #1977 from Zillode/fix-1976
Corrected API response when resetting folder (fixes #1976)
2015-06-19 08:48:40 +02:00
Jakob Borg
c3d07d60ca Refactored integration tests
Added internal/rc to remote control a Syncthing process and made the
"awaiting sync" determination reliable.
2015-06-19 08:47:47 +02:00
Lode Hoste
a0897a7456 Corrected API response when resetting folder (fixes #1976) 2015-06-19 08:30:19 +02:00
WSGCSysadmin
c50ba9267c Put thisDeviceName first in page title 2015-06-18 10:53:37 -05:00
Jakob Borg
423e69916c Merge pull request #1962 from Zillode/fix-pull-order-gui
Set default pull order in webGUI
2015-06-18 16:51:10 +02:00
Lode Hoste
b56c76f8ad Fix 'reset' Rest API on windows 2015-06-18 12:45:08 +02:00
Lode Hoste
cb2d2f000f Set default pull order in webGUI 2015-06-18 12:42:00 +02:00
Audrius Butkevicius
69af77a3bd Merge pull request #1967 from calmh/woops
Incorrect error condition on shortcuts
2015-06-18 11:07:07 +01:00
Jakob Borg
7767746d3e Incorrect error condition on shortcuts 2015-06-18 11:55:43 +02:00
Audrius Butkevicius
7219aaeb89 Merge pull request #1966 from calmh/overrideevents
Generate LocalIndexUpdated events in Override
2015-06-18 10:17:38 +01:00
Jakob Borg
7af1863e81 Generate LocalIndexUpdated events in Override
The override should look like we detected the changes locally, or the
GUI and other things won't update correctly.

This is/was caught by the Override integration test, on my newly
refactored integration test suite which is soon ready for prime time, so
a test is coming. :)
2015-06-18 10:49:24 +02:00
Jakob Borg
4beb42bf45 Merge pull request #1959 from AudriusButkevicius/lastfile
Add label next to "Last file received" (fixes #1952)
2015-06-18 10:45:27 +02:00
Audrius Butkevicius
12a3086a9e Add label next to "Last file received" (fixes #1952) 2015-06-16 12:12:34 +01:00
Audrius Butkevicius
198725216f Merge pull request #1957 from calmh/myid
Include myID in the StartupComplete event
2015-06-16 08:46:50 +01:00
Audrius Butkevicius
08647f1267 Merge pull request #1956 from calmh/earlyevents
Fix API event subscription
2015-06-16 08:46:29 +01:00
Audrius Butkevicius
87811efc30 Merge pull request #1955 from calmh/localver
Add version to LocalIndexUpdate event.
2015-06-16 08:45:09 +01:00
Jakob Borg
82c3e6f87f Include myID in the StartupComplete event
Nice to have...
2015-06-16 09:27:06 +02:00
Jakob Borg
1ac40a3043 Fix API event subscription
The API never got the first few events ("Starting" etc) as it subscribed
too late. Instead, set up a subscription for it early on. If the API is
configured not to run this is unnecessary but doesn't hurt very much.
2015-06-16 09:17:58 +02:00
Jakob Borg
127b0c3332 Add version to LocalIndexUpdate event.
Allows correlating LocalIndexUpdate events on one device with RemoteIndexUpdated on another to make sure the cluster has converged.
2015-06-16 08:30:15 +02:00
Jakob Borg
a6d9150b14 Skip extra newline between assets 2015-06-15 23:13:43 +02:00
Jakob Borg
7e5197c566 Help link for folder master 2015-06-15 22:34:39 +02:00
Jakob Borg
2d217e72bd Dependency update 2015-06-15 21:10:33 +02:00
Jakob Borg
12331cc62b Merge pull request #1943 from ralder/webgui-events-in-service
webgui: moved events controller to events service
2015-06-15 20:33:27 +02:00
Sergey Mishin
2449f1e1b6 webgui: moved events controller to events service 2015-06-15 19:05:46 +03:00
Audrius Butkevicius
6a6593c656 Merge pull request #1948 from calmh/symwarning
Don't warn about irrelevant symlinks, print path to culprit (ref #1945)
2015-06-15 10:41:51 +01:00
Jakob Borg
ad220d61f9 Merge pull request #1951 from AudriusButkevicius/voodoo
Voodoo
2015-06-15 11:31:03 +02:00
Audrius Butkevicius
1e35383b4d Merge pull request #1947 from calmh/metadata
Differentiate between content and metadata updates in ItemStarted/ItemFinished
2015-06-15 10:26:09 +01:00
Audrius Butkevicius
c8457ab005 Voodoo 2015-06-15 10:22:44 +01:00
Jakob Borg
070a308217 Don't warn about irrelevant symlinks, print path to culprit (ref #1945)
This skips the warning about "unsupported symlinks" for invalid or
deleted symlinks (and as a side effect also accepts them into the index,
which should be fine). It also prints the affected file paths to the
log. This should be in the hypothetical list of "errored files" we
should present instead of the cryptic "puller stopped" message in the
future...
2015-06-15 00:47:13 +02:00
Jakob Borg
c1f4477376 ItemStarted can be map[string]string 2015-06-14 22:59:21 +02:00
Jakob Borg
d728320ece New ItemStart/Finished type 'metadata' for shortcut updates 2015-06-14 22:56:41 +02:00
Audrius Butkevicius
fee0d7168a Merge pull request #1946 from calmh/guiassets
Default GUI override dir
2015-06-14 21:40:56 +01:00
Jakob Borg
7c23b32de3 Default GUI override dir
If STGUIASSETS is not set, look for assets in $confdir/gui by default.
Simplifies deploying overrides and stuff.
2015-06-14 22:28:40 +02:00
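The lookup described above amounts to a simple fallback; a minimal sketch, assuming the config directory is passed in by the caller.

    package sketch

    import (
        "os"
        "path/filepath"
    )

    // guiAssetsDir returns the GUI override directory: STGUIASSETS if set,
    // otherwise <confDir>/gui.
    func guiAssetsDir(confDir string) string {
        if dir := os.Getenv("STGUIASSETS"); dir != "" {
            return dir
        }
        return filepath.Join(confDir, "gui")
    }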
Jakob Borg
1437952aee Translation and docs update 2015-06-14 13:52:00 +02:00
Jakob Borg
a162157301 Add translation strings for trash can versioning 2015-06-14 11:08:25 +02:00
Audrius Butkevicius
6316bf3582 Merge pull request #1941 from AudriusButkevicius/errors
Correctly set and clear errors for missing folders (fixes #1937)
2015-06-13 23:47:20 +01:00
Audrius Butkevicius
1d856b4723 Correctly set and clear errors for missing folders (fixes #1937) 2015-06-13 23:45:54 +01:00
Audrius Butkevicius
7f56d5c23a Merge pull request #1935 from calmh/trashcan
Add trash can file versioning (fixes #1931)
2015-06-12 13:20:37 +01:00
Jakob Borg
a778763851 Add trash can file versioning (fixes #1931) 2015-06-12 13:30:49 +02:00
Audrius Butkevicius
983d7ec265 Merge pull request #1933 from calmh/fix-1907
More resilient broadcast handling (fixes #1907)
2015-06-11 14:59:31 +01:00
Jakob Borg
297769ef57 More resilient broadcast handling (fixes #1907)
My theory is that some error condition on the socket results in it
blocking for writes, which maybe also blocks reads... This separates the
two into separate services with their own socket, with restarts and
retries as appropriate on write timeouts and read/write errors. It
should be more robust, hopefully, but I have a hard time testing the
actual error conditions...
2015-06-11 15:06:05 +02:00
Jakob Borg
885d050e5f Correct docs site link 2015-06-11 14:24:39 +02:00
Jakob Borg
8fb4ce6cad Merge pull request #1927 from ralder/patch-1
fix disappeared status of folder after restart syncthing
2015-06-11 08:47:53 +02:00
Jakob Borg
42738ab54d Add missing copyright notice 2015-06-11 08:46:57 +02:00
ralder
7d1250620e fix disappeared status of folder after restart syncthing
Sometimes, after the syncthing process restarts, '/rest/db/status' for a folder may return data with 'state' set to an empty string.
2015-06-10 16:48:16 +03:00
Jakob Borg
5c49b93c67 Links are nice, too 2015-06-10 00:04:53 +02:00
Jakob Borg
9a11f81fd3 Point to contribution guidelines and docs 2015-06-10 00:02:39 +02:00
Audrius Butkevicius
cba2e972fd Merge pull request #1810 from calmh/cfg-commit
Configuration commit thingy
2015-06-09 15:09:21 +01:00
Jakob Borg
76ad925842 Refactor config commit stuff to support restartless updates better
Includes restartless updates of the GUI settings (listening port etc) as
a proof of concept.
2015-06-09 15:41:22 +02:00
Jakob Borg
ef6f52f688 Correctly handle nil error in verbose logging (fixes #1921) 2015-06-09 09:04:03 +02:00
Audrius Butkevicius
197bfa9f11 Merge pull request #1919 from calmh/fix-1918
Start folders before GUI/API (fixes #1918)
2015-06-08 11:13:02 +01:00
Jakob Borg
145f8c7435 Start folders before GUI/API (fixes #1918) 2015-06-08 11:04:09 +02:00
Jakob Borg
a8b43ae598 Translation / docs update 2015-06-07 12:57:26 +02:00
Audrius Butkevicius
567f19bf68 Do not overwrite error value 2015-06-06 08:30:01 +01:00
Jakob Borg
5cd4cd2271 Merge pull request #1912 from AudriusButkevicius/warnings
Silence discovery warnings (fixes #1388)
2015-06-04 19:29:42 +02:00
Audrius Butkevicius
4180569443 Silence discovery warnings (fixes #1388)
Not performing net.InterfaceAddrs() check in the constructor, as that means we wouldn't start
the read loop, which completely kills it.
2015-06-04 12:59:06 +01:00
Jakob Borg
5f4a92c8e6 Correct link 2015-06-03 19:47:39 +02:00
Jakob Borg
ccf3fed950 Merge remote-tracking branch 'syncthing/pr/1911'
* syncthing/pr/1911:
  replaced (not all) wiki links to new location docs.syncthing.net
2015-06-03 19:24:30 +02:00
Lars K.W. Gohlke
3626003f68 replaced (not all) wiki links to new location docs.syncthing.net 2015-06-03 19:09:36 +02:00
Audrius Butkevicius
11cb040ad1 Merge pull request #1880 from calmh/itemfinished-err
Fix ItemFinished
2015-06-03 16:50:33 +01:00
Jakob Borg
c1761cab49 Trigger ItemFinished when temp file creation fails instead of failing silently 2015-06-03 16:28:31 +02:00
Jakob Borg
25b25b5434 Merge pull request #1885 from AudriusButkevicius/moar-checks
Additional cases for detecting folders disappearing
2015-06-03 08:40:21 +02:00
Jakob Borg
8bdf66d9c0 Merge pull request #1906 from ralder/fix-style-top-menu-for-small-devices
fix language menu for small screen devices
2015-06-03 08:38:40 +02:00
Sergey Mishin
ccebdd142a fix webgui top menu for small screen devices 2015-06-02 17:48:31 +03:00
Jakob Borg
e952da7f91 Ensure we always have an up to date list of language names 2015-06-02 08:47:26 +02:00
Jakob Borg
5bd1e4a167 Re-add mistakenly removed languages 2015-06-02 08:33:36 +02:00
Jakob Borg
6d3de41751 Merge pull request #1905 from ralder/fix-missing-languages
fix missing languages (fixes #1902)
2015-06-02 08:12:06 +02:00
Sergey Mishin
f11bac6705 fix missing languages (fixes #1902) 2015-06-02 01:19:21 +03:00
Jakob Borg
c23a601cc6 Random number is too large for 32 bit archs (fixes #1894) 2015-06-01 09:33:13 +02:00
Jakob Borg
36d4c69fd6 Translation update 2015-05-31 09:37:02 +02:00
Jakob Borg
ca7e7fa0c4 Docs link, capitalization 2015-05-31 09:35:17 +02:00
Jakob Borg
1f6dd5dbb9 Add man pages to Debian package 2015-05-30 13:11:17 +02:00
Jakob Borg
db52646655 Include generated man pages 2015-05-30 13:05:55 +02:00
Jakob Borg
8bb18fa988 Create 'prerelease' step to run before releases 2015-05-30 10:39:27 +02:00
Jakob Borg
f0edaf2f8c Merge pull request #1884 from calmh/helplink
Show help link, add icons, tweak icon spacing
2015-05-30 10:32:16 +02:00
Jakob Borg
d632e3aa1b Show help link, add icons, tweak icon spacing 2015-05-30 10:31:45 +02:00
Audrius Butkevicius
f0e58fa804 Additional cases for detecting folders disappearing 2015-05-27 22:46:10 +01:00
Jakob Borg
ceced09d02 Merge pull request #1882 from calmh/folderstats
Reduce db writes for small files
2015-05-27 22:02:04 +02:00
Jakob Borg
7d48115b90 Reduce db writes for small files
We introduced the dbUpdater routine to handle many small files
efficiently, but the folder stats call is almost equally expensive as it
results in two distinct write transactions to the database. This moves
it to the same routine.

(Doesn't make a *huge* difference with leveldb actually, but reduces the
50k-files benchmark time by 25% on my experimental bolt branch...)
2015-05-27 18:36:26 +02:00
Bart De Vries
d2205228fb Make syncthing honor both the ignorePerms and FlagNoPermBits settings (fixes #1871) 2015-05-26 16:27:26 +02:00
Audrius Butkevicius
5417fb7287 Merge pull request #1872 from calmh/large-file-transfer
Large file transfer
2015-05-25 17:13:58 +01:00
Jakob Borg
3f59d6daff Add validation cache 2015-05-25 13:19:18 +02:00
Jakob Borg
0d659933aa Merge pull request #1868 from mogwa1/umask
Change permissions of newly created files and directories (fixes #1339)
2015-05-25 08:34:34 +02:00
Jakob Borg
3e8eabe833 Add mogwa1 2015-05-25 08:07:59 +02:00
Bart De Vries
badfc77339 Change permissions of newly created files and directories (fixes #1339) 2015-05-25 00:12:51 +02:00
Audrius Butkevicius
c51d3e59ea Merge pull request #1862 from calmh/allocs
Reduce allocations during iteration
2015-05-24 12:31:39 +01:00
Jakob Borg
c0d02a65c3 Reduce allocations during iteration
Reuses key byte slices to reduce allocations:

	benchmark                    old allocs     new allocs     delta
	Benchmark10kReplace          178112         168615         -5.33%
	Benchmark10kUpdateChg        567954         561557         -1.13%
	Benchmark10kUpdateSme        238691         228680         -4.19%
	Benchmark10kNeed2k           105382         103383         -1.90%
	Benchmark10kHaveFullList     60230          60230          +0.00%
	Benchmark10kGlobal           237484         227493         -4.21%

	benchmark                    old bytes     new bytes     delta
	Benchmark10kReplace          8725368       7661788       -12.19%
	Benchmark10kUpdateChg        58155152      57441025      -1.23%
	Benchmark10kUpdateSme        16130875      14996717      -7.03%
	Benchmark10kNeed2k           6561685       6338283       -3.40%
	Benchmark10kHaveFullList     7611112       7611135       +0.00%
	Benchmark10kGlobal           21415056      20300723      -5.20%
2015-05-24 13:08:14 +02:00
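The gain in the numbers above comes from appending into a caller-provided key buffer instead of allocating a fresh slice for every iterated item. A sketch of the pattern; the key layout here is made up, not the database's actual format.

    package sketch

    // appendKey builds a database key into buf, reusing its backing array
    // across iterations instead of allocating a new slice per item. The
    // caller keeps passing the returned slice back in.
    func appendKey(buf, folder, device, name []byte) []byte {
        buf = buf[:0]
        buf = append(buf, folder...)
        buf = append(buf, device...)
        buf = append(buf, name...)
        return buf
    }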
Audrius Butkevicius
3a203b8d83 Merge pull request #1861 from calmh/fix-1860
Be more lenient against errors when deleting (fixes #1860)
2015-05-24 00:58:46 +01:00
Audrius Butkevicius
feecdcc7a4 Merge pull request #1854 from calmh/eventmemallocs
Reuse a timer instead of allocating a new one in subscription.Poll
2015-05-23 23:23:46 +01:00
Audrius Butkevicius
51ad533be6 Merge pull request #1859 from calmh/fix-1858
UPnP discovery results must not be collected in a background goroutine (fixes #1858)
2015-05-23 22:58:09 +01:00
Jakob Borg
29da0bc8f5 Be more lenient against errors when deleting (fixes #1860) 2015-05-23 23:57:41 +02:00
Jakob Borg
bccf7fc2a8 Merge pull request #1857 from calmh/testhax
Refactor integration tests to be a little cleaner and more stable, I hope
2015-05-23 23:57:21 +02:00
Jakob Borg
7b6b5981c4 UPnP discovery results must not be collected in a background goroutine (fixes #1858) 2015-05-23 23:27:02 +02:00
Jakob Borg
9463192224 Refactor integration tests to be a little cleaner and more stable, I hope 2015-05-23 23:26:23 +02:00
Audrius Butkevicius
d1689f0012 Merge pull request #1855 from calmh/dboptsagain
Reduce db write cache to (2*) 4 MiB
2015-05-23 20:38:41 +01:00
Jakob Borg
8ed67fe349 Reduce db write cache to (2*) 4 MiB
I haven't been able to reproduce any performance advantage of having it
set higher and it reduces the memory footprint a bit.
2015-05-23 21:05:52 +02:00
Jakob Borg
90a1d99785 Reuse a timer instead of allocating a new one in subscription.Poll
This is surprisingly memory expensive when Poll gets called a lot, such
as when syncing lots of small files generating itemstarted/itemfinished
events. It's line number three in this heap profile on the
TestBenchmarkManyFiles test:

	jb@syno:~/src/github.com/syncthing/syncthing/test (master) $ go tool pprof ../bin/syncthing heap-13194.pprof
	Entering interactive mode (type "help" for commands)
	(pprof) top
	80.91MB of 83.05MB total (97.42%)
	Dropped 1024 nodes (cum <= 0.42MB)
	Showing top 10 nodes out of 85 (cum >= 1.75MB)
	      flat  flat%   sum%        cum   cum%
	      32MB 38.53% 38.53%    32.01MB 38.54%  github.com/syndtr/goleveldb/leveldb/memdb.New
	   22.16MB 26.68% 65.21%    22.16MB 26.68%  github.com/syndtr/goleveldb/leveldb/util.(*BufferPool).Get
	   13.02MB 15.68% 80.89%    13.02MB 15.68%  time.NewTimer
	    6.94MB  8.35% 89.24%     6.94MB  8.35%  github.com/syndtr/goleveldb/leveldb/memdb.(*DB).Put
	    3.18MB  3.82% 93.06%     3.18MB  3.82%  github.com/calmh/xdr.(*Reader).ReadBytesMaxInto

With this change the allocation is removed entirely.
2015-05-23 20:38:41 +02:00
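The allocation shows up because each Poll call creates a fresh time.Timer; keeping one timer per subscription and resetting it avoids that. A sketch of the pattern with the usual stop-and-drain before each Reset, not the events package's actual code; the types and channel element are illustrative.

    package sketch

    import (
        "errors"
        "time"
    )

    var errTimeout = errors.New("timeout")

    // subscription owns one reusable timer instead of allocating a new one
    // on every Poll call.
    type subscription struct {
        events <-chan string
        timer  *time.Timer
    }

    func newSubscription(events <-chan string) *subscription {
        t := time.NewTimer(0)
        <-t.C // start with an expired, drained timer
        return &subscription{events: events, timer: t}
    }

    // Poll returns the next event or errTimeout, reusing s.timer across calls.
    func (s *subscription) Poll(timeout time.Duration) (string, error) {
        if !s.timer.Stop() {
            select {
            case <-s.timer.C:
            default:
            }
        }
        s.timer.Reset(timeout)

        select {
        case ev := <-s.events:
            return ev, nil
        case <-s.timer.C:
            return "", errTimeout
        }
    }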
Jakob Borg
e215cf6fb8 Add some REST API benchmarks 2015-05-23 20:15:54 +02:00
Jakob Borg
e827c0bd94 Run benchmarks in docker-all instead 2015-05-23 15:17:19 +02:00
Jakob Borg
8dd7e4e6b5 Run benchmarks when running tests 2015-05-23 15:08:17 +02:00
Jakob Borg
a2b94f4e06 Build Debian armhf and armel 2015-05-23 13:10:33 +02:00
Jakob Borg
8b0037ffab Make check-contribs a little more generous in recognizing a copyright header 2015-05-21 21:42:46 +02:00
Jakob Borg
4ab03f3bef Update CONTRIBUTING to not encourage changing AUTHORS 2015-05-21 20:58:17 +02:00
Jakob Borg
3c3db52f49 Merge pull request #1842 from Zillode/fix-1822
Support the creation of top-level folders on Windows (fixes #1822)
2015-05-21 20:50:52 +02:00
Jakob Borg
5b1e884659 Merge pull request #1847 from Zillode/fix-1833
Show date and time for web GUI notification (fixes #1833)
2015-05-21 20:49:04 +02:00
Lode Hoste
7ec6740e26 Show date and time for web GUI notification (fixes #1833) 2015-05-21 19:44:45 +02:00
Lode Hoste
f12b8c19be Support the creation of top-level folders on Windows (fixes #1822) 2015-05-21 19:21:19 +02:00
Jakob Borg
76174d31ce Merge pull request #1843 from Zillode/fix-1831
Set permanent UPnP lease when required
2015-05-21 12:50:48 +02:00
Lode Hoste
5042248260 Set permanent UPnP lease when required (fixes #1831) 2015-05-21 10:48:40 +02:00
Lode Hoste
1df6589533 Add status code to SOAP response 2015-05-21 09:54:39 +02:00
Jakob Borg
12f76b448c Merge pull request #1826 from AudriusButkevicius/screwflags
Don't check interface flags on Windows
2015-05-19 14:59:16 +02:00
Audrius Butkevicius
f112ef34f6 Don't check interface flags on Windows 2015-05-17 16:42:26 +01:00
Jakob Borg
e4b57a978f Translation update 2015-05-15 10:46:47 +02:00
Audrius Butkevicius
b51e09e0e8 Merge pull request #1811 from calmh/bc
Further reduce maximum db block cache
2015-05-15 11:12:25 +03:00
Jakob Borg
a3ba3f895c Further reduce maximum db block cache 2015-05-14 20:59:59 +02:00
Jakob Borg
947a129e12 Use updateLocals to ensure event generation 2015-05-14 08:56:42 +02:00
Jakob Borg
f3fe6a6cbd Assure existence of a folder marker in the test 2015-05-14 08:46:00 +02:00
Jakob Borg
9d150bef9f Refactor GetMtime for early return 2015-05-14 08:26:21 +02:00
Jakob Borg
65be18cc93 Merge pull request #1804 from cdhowie/virtual-mtimes
Implement virtual mtime support for Android
2015-05-14 08:14:29 +02:00
Jakob Borg
e27ea63900 Add cdhowie 2015-05-14 08:11:02 +02:00
Chris Howie
aa96f7b660 Virtual mtime support for environments that don't support altering mtimes (fixes #831) 2015-05-13 14:57:29 +00:00
Jakob Borg
053690d885 Don't attempt reschedule with zero interval
Can happen on manual rescans of folders with zero interval.
2015-05-13 16:51:08 +02:00
Jakob Borg
b9fc6397a3 Merge pull request #1791 from Zillode/test-master
Add unit test for overriding ignored files (fixes #1701)
2015-05-13 14:50:12 +02:00
Audrius Butkevicius
19b9f15da3 Merge pull request #1803 from calmh/kvspaceclear
Reset a namespaced kv-store
2015-05-13 15:43:52 +03:00
Jakob Borg
0b9441e1a4 Reset a namespaced kv-store 2015-05-13 14:41:47 +02:00
Audrius Butkevicius
2324d7420c Merge pull request #1800 from calmh/ursvc
Break out usage reporting into a service
2015-05-13 15:41:44 +03:00
Jakob Borg
c6b2ca8b19 Break out usage reporting into a service 2015-05-13 14:39:27 +02:00
Audrius Butkevicius
83ea8dc577 Merge pull request #1801 from calmh/fix-1799
Only restart global discovery on UPnP change if it was enabled to start with (fixes #1799)
2015-05-12 18:49:24 +03:00
Jakob Borg
d898277f62 stindex: add some missing newlines 2015-05-12 11:47:30 +02:00
Jakob Borg
a289cfb986 Only restart global discovery on UPnP change if it was enabled to start with (fixes #1799) 2015-05-12 09:48:55 +02:00
Jakob Borg
d603998617 Debian maintainer is release team 2015-05-11 21:59:40 +02:00
Jakob Borg
8bf9f4f5ab Build debian skeleton 2015-05-11 21:34:43 +02:00
Jakob Borg
2c4e6f2926 wip 2015-05-11 19:04:39 +02:00
Jakob Borg
21fbbc50cd hax 2015-05-11 18:39:53 +02:00
Audrius Butkevicius
99512418da Merge pull request #1796 from calmh/adaptive-cache-tweak
Tweak the database block cache size, and add config for it
2015-05-11 11:06:31 +03:00
Jakob Borg
c2f2d8771f Tweak the database block cache size, and add config for it 2015-05-11 09:08:25 +02:00
Jakob Borg
179c9ee8cc Merge pull request #1792 from Zillode/tempname
Retains part of meaningful filename, added unit test and did a small refactoring
2015-05-10 20:44:55 +02:00
Jakob Borg
eb8a505287 Merge pull request #1774 from Zillode/fix-rename-windows
If rename works we are happy (fixes #1767)
2015-05-10 19:04:03 +02:00
Lode Hoste
a7482a3644 Added simple unit test for temporary filenames 2015-05-10 18:57:27 +02:00
Lode Hoste
a692348336 Retain meaningful names for temporary files 2015-05-10 18:57:27 +02:00
Jakob Borg
285dcc30cf Use md5 hash of filename for temporary file (fixes #1786) 2015-05-10 18:57:27 +02:00
Jakob Borg
bcf51aed83 Translation update 2015-05-10 14:59:12 +02:00
Zillode
1a11ce6211 Merge pull request #1784 from calmh/fix-1782
Implement upnpSvc.Stop() (fixes #1782)
2015-05-10 14:47:02 +02:00
Lode Hoste
10021c97a6 Add unit test for overriding ignored files (fixes #1701) 2015-05-10 11:23:29 +02:00
Audrius Butkevicius
e869e3c534 Merge pull request #1789 from calmh/reduce-default-lease
Set default UPnP lease time to 60 minutes
2015-05-10 01:43:10 +03:00
Lode Hoste
f6416285db If rename works we are happy (fixes #1767) 2015-05-10 00:00:24 +02:00
Jakob Borg
7f0593cd2d Set default UPnP lease time to 60 minutes 2015-05-09 22:17:53 +02:00
Jakob Borg
e4c41718d8 Fix upnp mapping name 2015-05-09 20:04:15 +02:00
Jakob Borg
7234553990 Implement upnpSvc.Stop() (fixes #1782) 2015-05-09 12:49:58 +02:00
Jakob Borg
5761efdb73 Remove system/editor specific ignores (ref #1778) 2015-05-08 20:34:06 +02:00
Jakob Borg
1133192a86 Merge remote-tracking branch 'syncthing/pr/1779'
* syncthing/pr/1779:
  Remove stray VIM swap file
2015-05-08 20:32:02 +02:00
Chris Howie
af3288043a Remove stray VIM swap file 2015-05-08 16:11:32 +00:00
Jakob Borg
24a348f6a8 Use larger files for large file transfer benchmark 2015-05-08 14:59:37 +02:00
Audrius Butkevicius
5528b6c231 Merge pull request #1775 from calmh/fix-1765
Trigger pull check on remote index updates (fixes #1765)
2015-05-08 11:45:03 +03:00
Jakob Borg
245bd1eb17 Trigger pull check on remote index updates (fixes #1765)
Without this, when an index update comes in we only do a new pull if the
remote `localVersion` was increased. But it may not be, because the
index is sent alphabetically and the file with the highest local version
may come first. In that case we'll never do a new pull when the rest of
the index comes in, and we'll be stuck in idle but with lots of out of
sync data.
2015-05-08 10:02:46 +02:00
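
A hypothetical sketch of the signalling this describes (names are illustrative, not the real puller code): nudge the puller on every incoming index update instead of relying on localVersion comparisons, since an unnecessary pull check is cheap and a missed one is not.

    package pullerdemo

    // pullNeeded carries at most one pending pull request.
    var pullNeeded = make(chan struct{}, 1)

    // onIndexUpdate signals the puller without blocking.
    func onIndexUpdate() {
    	select {
    	case pullNeeded <- struct{}{}:
    	default: // a pull check is already pending
    	}
    }
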
Jakob Borg
03506db76c Fix rename with capitalization test 2015-05-08 10:02:19 +02:00
Jakob Borg
cb5ef26020 Revert "Enforce line endings when cloning (fixes #1766)"
This reverts commit 2361a0dd6e.
2015-05-07 22:55:31 +02:00
Jakob Borg
2493ae4c2c Merge pull request #1773 from Zillode/fix-browser
Do not launch browser when running integration tests
2015-05-07 22:23:47 +02:00
Jakob Borg
2bd88344ad Merge pull request #1772 from Zillode/fix-lf
Enforce line endings when cloning (fixes #1766)
2015-05-07 22:03:20 +02:00
Lode Hoste
4950980d69 Don't launch browser for integration tests 2015-05-07 20:59:12 +02:00
Lode Hoste
2361a0dd6e Enforce line endings when cloning (fixes #1766) 2015-05-07 20:32:18 +02:00
Audrius Butkevicius
152cdd1750 Merge pull request #1771 from calmh/stindex
Simplify stindex
2015-05-07 11:13:49 +03:00
Jakob Borg
31797a5831 Simplify stindex 2015-05-07 09:59:04 +02:00
Jakob Borg
9308c42cff Skip boring concurrency test in internal/db 2015-05-07 09:57:49 +02:00
Audrius Butkevicius
c096cd34b9 Merge pull request #1763 from calmh/win-executable
Set the execute bit on Windows executables (fixes #1762)
2015-05-06 08:28:06 +03:00
Jakob Borg
5fc0808f28 Set the execute bit on Windows executables (fixes #1762) 2015-05-05 21:45:26 +02:00
Jakob Borg
e6866ee980 strings.TrimLeft is not actually TrimPrefix 2015-05-05 13:53:11 +02:00
Jakob Borg
45631d30b0 Merge pull request #1756 from LordLandon/master
Fix #1728 by using latest version property
2015-05-04 18:34:34 +02:00
Lord Landon "Warbles" Agahnim
0f432a0844 Use actual release version for release note link (fixes #1728) 2015-05-04 08:23:24 -07:00
Jakob Borg
df59bc7194 Merge pull request #1747 from Zillode/fix-1743
Partial fix for #1743
2015-05-04 10:51:40 +02:00
Jakob Borg
ff4706e450 Merge branch 'pr-1748'
* pr-1748:
  Reschedule before scan
  Use a channel instead of locks
  Reschedule the next scan interval (fixes #1591)
2015-05-04 10:40:14 +02:00
Jakob Borg
fc013bd04c Add LordLandon 2015-05-04 10:36:40 +02:00
Audrius Butkevicius
7e29a8b927 Merge pull request #1755 from calmh/fix-1754
Wait for stdout/stderr to close (fixes #1754)
2015-05-03 22:45:00 +03:00
Jakob Borg
67ae7a0b6c Wait for stdout/stderr to close (fixes #1754) 2015-05-03 17:36:01 +02:00
Jakob Borg
bd5a64bac0 Reschedule before scan 2015-05-03 14:18:50 +02:00
Jakob Borg
1bd85d8baf Use a channel instead of locks 2015-05-03 14:18:32 +02:00
Lode Hoste
fe34b08ece Reschedule the next scan interval (fixes #1591) 2015-05-03 12:48:44 +02:00
Jakob Borg
33048f88b8 Merge pull request #1752 from alex2108/master
Distinguish files with same name but different extension in staggered versioner (fixes #1738)
2015-05-03 12:32:20 +02:00
Alexander Graf
0ec01f4e78 Distinguish files with same name but different extension in staggered versioner (fixes #1738) 2015-05-03 10:36:46 +02:00
Jakob Borg
d0ebf06ff8 Translation update 2015-05-03 08:29:29 +02:00
Jakob Borg
687b249034 Don't create stopped folder in staggered versioning (fixes #1749) 2015-05-03 08:22:11 +02:00
Jakob Borg
d1528dcff0 Remove old name conversion from staggered versioning 2015-05-03 08:22:11 +02:00
Jakob Borg
1c31cf6319 Merge pull request #1744 from Zillode/fix-1692
Upgrade running Syncthing instances (fixes #1692)
2015-05-02 15:17:45 +02:00
Lode Hoste
67f0c9bef0 Do not remove the file when renaming on a case-insensitive platform (fixes #1743) 2015-05-01 23:43:45 +02:00
Lode Hoste
d54c366150 Upgrade running Syncthing instances (fixes #1692) 2015-05-01 23:38:54 +02:00
Jakob Borg
4ff535f883 Twitter link (lazily wrong icon, because not in Glyphicons...) 2015-05-01 23:32:51 +02:00
Lode Hoste
58b15f9452 Limit alterfiles to a single operation per file 2015-05-01 13:03:03 +02:00
Lode Hoste
dedca59ac9 Add capitalization changes to integration tests 2015-05-01 12:11:57 +02:00
Jakob Borg
3cbddfe545 Merge pull request #1745 from Zillode/fix-1678
Added test for combining case insensitive and negated patterns (fixes #1678)
2015-05-01 09:20:51 +02:00
Lode Hoste
e3cae69495 Added test for combining case insensitive and negated patterns (fixes #1678) 2015-05-01 00:58:44 +02:00
Audrius Butkevicius
aee40316f8 Merge pull request #1732 from calmh/guisvc
Break out GUI into an API service
2015-04-30 22:15:47 +01:00
Audrius Butkevicius
d754f9ae89 Merge pull request #1742 from calmh/adaptive-cache
Adaptive database cache size
2015-04-30 21:56:40 +01:00
Audrius Butkevicius
3c50b3a9e0 Merge pull request #1741 from calmh/verbose
Verbose logging
2015-04-30 21:54:49 +01:00
Jakob Borg
fb312a71f7 Add verbose logging (fixes #179) 2015-04-30 20:47:21 +02:00
Jakob Borg
136d79eaa3 Break out GUI into an API service 2015-04-30 20:36:07 +02:00
Jakob Borg
834336499a Adaptive database cache size 2015-04-30 20:25:44 +02:00
Jakob Borg
c9da8237df Report usage statistics after transfer bench 2015-04-30 08:43:57 +02:00
Audrius Butkevicius
9638dcda0a Merge pull request #1735 from calmh/kinder-rehashing
Use fewer hasher routines when there are many folders.
2015-04-29 20:43:51 +01:00
Jakob Borg
756c5a2604 Use fewer hasher routines when there are many folders. 2015-04-29 21:26:08 +02:00
Jakob Borg
a9c31652b6 Merge branch 'pr-1725'
* pr-1725:
  typos and spelling correction
2015-04-29 17:09:30 +02:00
dartraiden
32a76901a9 typos and spelling correction 2015-04-29 15:59:47 +02:00
Audrius Butkevicius
a5e11c7489 Merge pull request #1730 from calmh/bug-1721
Don't hang when attempting multicast discovery on non-multicast interfaces
2015-04-29 11:01:25 +01:00
Audrius Butkevicius
f2b12014e1 Merge pull request #1729 from calmh/lint-clean
Run vet and lint. Make us lint clean.
2015-04-29 10:16:36 +01:00
Jakob Borg
60fcaebfdb Run vet and lint. Make us lint clean. 2015-04-29 10:38:02 +02:00
Audrius Butkevicius
32fe2cb659 Merge pull request #1700 from calmh/upnpsvc
Break out UPnP port mapping into a service
2015-04-29 00:18:49 +01:00
Jakob Borg
1207d54fdd Tweak UPnP discovery, avoid non-multicast interfaces (fixes #1721) 2015-04-28 22:32:10 +02:00
Stefan Tatschner
3932884688 Drop systemd README instruction in favor of the wiki 2015-04-28 14:11:12 +02:00
Audrius Butkevicius
19a2042746 Merge pull request #1723 from calmh/bug-1722
Handle conflict with local delete (fixes #1722)
2015-04-28 10:39:22 +01:00
Jakob Borg
4c6eb137da Merge pull request #1720 from AudriusButkevicius/ignores
Matcher is always there
2015-04-28 11:35:12 +02:00
Jakob Borg
57ec2ff915 Handle conflict with local delete (fixes #1722) 2015-04-28 11:34:16 +02:00
Audrius Butkevicius
77a161a087 Matcher checks nil receiver 2015-04-28 10:17:14 +01:00
Jakob Borg
0642402449 Break out UPnP port mapping into a service 2015-04-28 10:25:25 +02:00
Audrius Butkevicius
50d377d9fe Fix integration tests 2015-04-28 00:09:44 +01:00
Jakob Borg
f5211b0697 Add some more cache forbidding headers, for various user agents. 2015-04-27 09:08:55 +02:00
Jakob Borg
fd4ea46fd7 Merge pull request #1708 from jarlebring/upnp_close_conn_fix
Fix for routers that cannot handle many open HTTP connections
2015-04-26 22:54:47 +02:00
Elias Jarlebring
8d8546868d Fix for routers that cannot handle many open HTTP connections
Some routers stop responding when too many successive SOAP requests are sent, which surfaces as an EOF error from DefaultClient.Do(req). This modification sets req.Close = true, following the description at http://stackoverflow.com/questions/17714494/golang-http-request-results-in-eof-errors-when-making-multiple-requests-successi
2015-04-26 22:31:43 +02:00
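
A minimal Go sketch of the fix described above (hypothetical helper name, not the project's actual UPnP code): setting req.Close = true makes the client send Connection: close and not reuse the socket for the next request.

    package upnpdemo

    import (
    	"bytes"
    	"net/http"
    )

    // soapRequest sends one SOAP request and closes the connection
    // afterwards, so routers that mishandle keep-alive do not return EOF
    // on the following request.
    func soapRequest(url string, body []byte) (*http.Response, error) {
    	req, err := http.NewRequest("POST", url, bytes.NewReader(body))
    	if err != nil {
    		return nil, err
    	}
    	req.Header.Set("Content-Type", `text/xml; charset="utf-8"`)
    	req.Close = true // close the connection after this request
    	return http.DefaultClient.Do(req)
    }
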
Audrius Butkevicius
8ce547edeb Fix HTML (fixes #1707) 2015-04-26 20:14:34 +01:00
Jakob Borg
a17c48aed6 Merge pull request #1705 from AudriusButkevicius/glob
Add osutil.Glob to deal with Windows (fixes #1690)
2015-04-26 18:28:25 +02:00
Audrius Butkevicius
d12db3e7b8 Add osutil.Glob to deal with Windows (fixes #1690) 2015-04-26 16:37:50 +01:00
Jakob Borg
15b87ae297 Merge pull request #1704 from jarlebring/upnp_caps
Fix capitalization in HTTP-header in SOAP request (fixes #1696)
2015-04-26 20:10:14 +09:00
Jakob Borg
02fdf59839 Add jarlebring 2015-04-26 19:40:02 +09:00
jarlebring
d9da02b7a8 Formatting with gofmt 2015-04-26 12:37:37 +02:00
Elias Jarlebring
8f2ad6418d Fix capitalization in HTTP-header in SOAP request (fixes #1696)
Some routers are sensitive to the capitalization of "SOAPAction" in the HTTP header of the SOAP request. This modification follows the recommendation for preserving capitalization of HTTP headers in Go described at http://stackoverflow.com/questions/26351716/how-to-keep-key-case-sensitive-in-request-header-using-golang?lq=1
2015-04-26 12:16:40 +02:00
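
A minimal Go sketch of the technique (hypothetical helper, not the actual code): Header.Set would canonicalize the key to "Soapaction", so the header map is assigned directly to keep the exact capitalization on the wire.

    package upnpdemo

    import (
    	"bytes"
    	"net/http"
    )

    // newSOAPRequest builds a request whose SOAPAction header keeps its
    // original casing by writing the header map directly instead of
    // going through Header.Set.
    func newSOAPRequest(url, action string, body []byte) (*http.Request, error) {
    	req, err := http.NewRequest("POST", url, bytes.NewReader(body))
    	if err != nil {
    		return nil, err
    	}
    	req.Header["SOAPAction"] = []string{`"` + action + `"`}
    	return req, nil
    }
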
Jakob Borg
ff984425a3 Merge pull request #1703 from AudriusButkevicius/page
Add pagination to Out of sync item list (fixes #1509)
2015-04-26 18:42:17 +09:00
Audrius Butkevicius
ac1058359f Rebuild assets 2015-04-26 00:22:30 +01:00
Audrius Butkevicius
9afbca3001 Add pagination to Out of sync item list (fixes #1509) 2015-04-26 00:22:26 +01:00
Audrius Butkevicius
ec3f17cb9c Add angular-dirPagination 2015-04-25 22:52:52 +01:00
Audrius Butkevicius
73b9d5c5f9 Merge pull request #1698 from calmh/pull-order
Configurable file pull order (alphabetic, random, by size or age)
2015-04-25 15:41:02 +01:00
Audrius Butkevicius
ecc8591c95 Merge pull request #1699 from calmh/connsvc
Break out connection handling into a service
2015-04-25 15:37:08 +01:00
Audrius Butkevicius
696b67e4b1 Merge pull request #1697 from calmh/auditsvc
Add audit log feature
2015-04-25 15:34:34 +01:00
Jakob Borg
266a5116a1 Break out connection handling into a service 2015-04-25 23:21:42 +09:00
Jakob Borg
131f2be857 Add audit log feature 2015-04-25 23:20:39 +09:00
Jakob Borg
be7b3a9952 Configurable file pull order (alphabetic, random, by size or age) 2015-04-25 23:20:21 +09:00
Jakob Borg
bb31b1785b Add a service manager to main (future use) 2015-04-25 23:16:46 +09:00
Jakob Borg
2a60f4b1e9 Add .gitattributes; normalize line endings 2015-04-25 23:16:46 +09:00
Jakob Borg
33a4fb5a1a Fix folder check tests 2015-04-25 23:16:46 +09:00
Audrius Butkevicius
aece6e8b6c Merge pull request #1689 from calmh/nolocks
events.Subscription.Poll does not seem to require locking
2015-04-24 10:26:58 +01:00
Jakob Borg
7bf55dd14f events.Subscription.Poll does not seem to require locking
This is a large source of output from the new lock logging, and it
doesn't seem to accomplish anything useful that I can see. Running
the integration tests with the race detector to make sure...
2015-04-24 11:25:42 +09:00
Jakob Borg
e158f17c2b Adjust sync test intervals to be less latency sensitive 2015-04-24 11:25:24 +09:00
Jakob Borg
c5027d9478 Merge branch 'pr-1688'
* pr-1688:
  Minor fixup
  Add tests, fix getCaller, replace wg.Done with wg.Wait
2015-04-24 09:43:52 +09:00
Jakob Borg
36c1d82146 Minor fixup 2015-04-24 09:43:40 +09:00
Audrius Butkevicius
bd4f404d45 Add tests, fix getCaller, replace wg.Done with wg.Wait 2015-04-23 20:09:14 +01:00
Jakob Borg
43d39844f7 Merge pull request #1685 from AudriusButkevicius/mut
Add mutex logging
2015-04-23 21:16:23 +09:00
Audrius Butkevicius
e041a4d212 Track RUnlockers while locking a RWMutex 2015-04-23 11:29:23 +01:00
Audrius Butkevicius
433b923ea7 Add mutex logging 2015-04-23 10:54:14 +01:00
Audrius Butkevicius
f8f1c72b44 Merge pull request #1686 from calmh/major-upgrade-v11
Allow major upgrades
2015-04-23 09:31:58 +01:00
Jakob Borg
542716e216 Allow major upgrades 2015-04-23 17:13:11 +09:00
Jakob Borg
b35958d024 Avoid spurious request for /qr?text={{myID}} (fixes #1679) 2015-04-22 09:37:18 +09:00
Audrius Butkevicius
9ee3541655 Merge pull request #1673 from calmh/filestatus-json
Clean up REST JSON a little further
2015-04-21 17:11:06 +01:00
Jakob Borg
bf7d84c12a Clean up REST JSON a little further 2015-04-21 23:28:58 +09:00
Audrius Butkevicius
34c691087e Merge pull request #1674 from calmh/rc-upgrade
Loosen the requirements on what can be upgraded to what
2015-04-21 08:42:33 +01:00
Jakob Borg
08c383012f Loosen the requirements on what can be upgraded to what 2015-04-21 09:06:10 +09:00
Jakob Borg
e2420495f3 Fix type in device sort (fixes #1668) 2015-04-20 22:18:19 +09:00
Audrius Butkevicius
d530c5eda7 Merge pull request #1665 from calmh/wat
Don't initialize subscription in init()
2015-04-20 08:12:58 +01:00
Audrius Butkevicius
ef7420ecf6 Merge pull request #1666 from calmh/cpu-remind
Reminder in debug output to explain high CPU usage
2015-04-20 08:09:56 +01:00
Jakob Borg
c905a41e2a Reminder in debug output to explain high CPU usage 2015-04-20 14:29:38 +09:00
Jakob Borg
42ff4b5bf0 changelog.go should not be built 2015-04-20 14:03:50 +09:00
Jakob Borg
4fb74a32cc Don't initialize subscription in init()
By doing it in init(), the monitor process also gets a subscription thing
running, which is unnecessary (and really confused me when seeing it in
the debug output).
2015-04-20 12:58:58 +09:00
Jakob Borg
c741465328 Use versionString() in about modal (fixes #1663) 2015-04-20 08:23:59 +09:00
Jakob Borg
fbca537a40 Merge pull request #1655 from kamadak/fix-nil-deref
Fix nil pointer dereferences in REST with non-existent folders
2015-04-19 17:20:47 +09:00
Jakob Borg
83420b0199 Merge pull request #1654 from AudriusButkevicius/fixes
Fix capitalization (fixes #1652, fixes #1649)
2015-04-19 17:20:05 +09:00
KAMADA Ken'ichi
33d3ba1b45 Fix nil pointer dereferences in REST with non-existent folders 2015-04-18 22:41:47 +09:00
Audrius Butkevicius
497f85a236 Fix capitalization (fixes #1652, fixes #1649) 2015-04-18 11:23:21 +01:00
Audrius Butkevicius
a624c302ab Merge pull request #1648 from calmh/scanner-batches
Don't buffer large files a long time while scanning
2015-04-17 09:05:09 +01:00
Jakob Borg
cebe21a3af Don't buffer large files a long time while scanning 2015-04-17 16:40:09 +09:00
Audrius Butkevicius
9eb679d70a Merge pull request #1647 from calmh/fix-localindexupdated
Homogenize the LocalIndexUpdated event
2015-04-17 08:14:38 +01:00
Jakob Borg
6d84443db8 Homogenize the LocalIndexUpdated event
It had two different formats, and we use "items" instead of "numFiles"
in other places.

(Discovered while documenting :)
2015-04-17 14:22:06 +09:00
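
As a rough illustration of the homogenized payload (only the "items" field is taken from the text above; the other field and the values are assumptions):

    package eventsdemo

    // localIndexUpdated shows the shape of the event data, using "items"
    // rather than "numFiles".
    var localIndexUpdated = map[string]interface{}{
    	"folder": "default",
    	"items":  1234,
    }
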
Jakob Borg
da8a1f242c Merge pull request #1646 from AudriusButkevicius/readonly
Make targets writeable before removal on Windows (fixes #1610)
2015-04-17 14:21:39 +09:00
Jakob Borg
946d98b71f Merge pull request #1645 from AudriusButkevicius/tests
Fix tests on Windows (fixes #1531)
2015-04-17 14:20:53 +09:00
Audrius Butkevicius
dff51fc707 Make targets writeable before removal on Windows (fixes #1610) 2015-04-16 22:53:53 +01:00
Audrius Butkevicius
7d954dd5d1 Fix tests on Windows (fixes #1531) 2015-04-16 21:18:17 +01:00
Jakob Borg
c6300a5da8 Tone down UPnP errors a little 2015-04-16 23:45:12 +09:00
Jakob Borg
9359daa0d9 Merge branch 'pr-1636'
* pr-1636:
  Store and use _localStorage object
  fix using detect localStorage
2015-04-16 23:44:59 +09:00
Jakob Borg
2322e9cff7 Store and use _localStorage object 2015-04-16 23:44:34 +09:00
Jakob Borg
a876e1e348 Merge remote-tracking branch 'syncthing/pr/1636' into pr-1636
* syncthing/pr/1636:
  fix using detect localStorage
2015-04-16 23:32:48 +09:00
Jakob Borg
6a863c8f71 Translation update 2015-04-16 23:27:27 +09:00
Jakob Borg
392b006b06 Add Moter8 2015-04-16 23:23:34 +09:00
Audrius Butkevicius
96289f42b7 Merge pull request #1644 from syncthing/timeout
UPnP refactor/fixes
2015-04-16 14:32:16 +01:00
Audrius Butkevicius
1b69c2441c Make UPnP discovery requests on each interface explicitly (fixes #1113) 2015-04-16 14:23:36 +01:00
Audrius Butkevicius
8ca85a4918 Merge pull request #1639 from calmh/events
Improve event handling a little bit.
2015-04-16 14:18:52 +01:00
Audrius Butkevicius
2a31031cbc Add unit suffix to UPnP settings 2015-04-16 10:32:22 +01:00
Audrius Butkevicius
d148cd8ccc Make UPnP timeout configurable 2015-04-16 10:32:12 +01:00
Jakob Borg
d1cc1828b8 Improve ItemStarted/ItemFinished events
- Remove full details from ItemStarted (unnecessary, incorrect CamelCase)
- Add "type" ("file" or "dir") to both events
- Add "action" (what we tried to do - "delete" or "update") to both events.
2015-04-14 23:31:39 +09:00
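
As a rough illustration of the resulting event data (the values and the surrounding event API are assumptions; only the field names come from the list above):

    package eventsdemo

    // itemStarted shows the fields an ItemStarted/ItemFinished payload
    // carries after this change.
    var itemStarted = map[string]string{
    	"folder": "default",
    	"item":   "docs/readme.txt",
    	"type":   "file",   // "file" or "dir"
    	"action": "update", // "update" or "delete"
    }
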
Jakob Borg
069e8cf122 Don't schedule summaries on all state changes
Prior to this change we schedule summaries on each state change, i.e.
scanning->idle and idle->scanning, which is unnecessary. Now we only do
it on index updates, plus the immediate one on going syncing->idle.
2015-04-14 20:57:42 +09:00
Audrius Butkevicius
45cbcaca6d Merge pull request #1638 from calmh/lstat
Work around broken Lstat on Android
2015-04-14 11:59:20 +01:00
Jakob Borg
102a2db1f3 Work around broken Lstat on Android 2015-04-14 19:53:49 +09:00
Sergey Mishin
9f81c85ca7 fix using detect localStorage 2015-04-13 19:07:39 +03:00
Audrius Butkevicius
ba4a6fc0c5 Merge pull request #1633 from calmh/errorstate
Move folder errors to state
2015-04-13 00:48:13 +01:00
Jakob Borg
aa803ce2ff Move folder errors to state
The "Invalid" config attribute is retained for errors discovered during
config loading (empty path, duplicate ID). This can only be set or
cleared at config loading time.

Errors discovered during runtime (I/O problems, etc) are now in the
folder state instead. Changes to these are sent as any other folder
state change.
2015-04-13 07:43:45 +09:00
Jakob Borg
a027a60f5d Correctly feature detect localStorage (fixes #1632) 2015-04-13 06:50:07 +09:00
Jakob Borg
270649535e Merge pull request #1625 from Moter8/patch-1
Reword and clarify some sentences.
2015-04-10 13:48:14 +02:00
Carsten H
cf80ba71f4 Reword and clarify some sentences 2015-04-10 13:46:38 +02:00
Jakob Borg
b74df18a4a Translation update 2015-04-10 13:32:23 +02:00
Jakob Borg
5cd2906a39 Fix NICKS and authors in index.html 2015-04-10 12:57:43 +02:00
Jakob Borg
bc37b69d17 Add ARM to GUI architectures, and fallback for unknowns 2015-04-10 12:45:53 +02:00
Francois-Xavier Gsell
94f6e400ad fix '~' completion in add folder build assets (fix #1478) 2015-04-10 15:42:52 +08:00
Francois-Xavier Gsell
b95a6ccf80 fix '~' completion in add folder (fix #1478) 2015-04-10 15:42:52 +08:00
Jakob Borg
7df9c1b6e4 Merge pull request #1621 from Zillode/fix-no-upgrade
Fix compilation of -noupgrade builds
2015-04-10 08:30:50 +02:00
Lode Hoste
75348c0158 Fix compilation of -noupgrade builds 2015-04-09 22:44:46 +02:00
Audrius Butkevicius
75fb14acaf Fix integration tests 2015-04-09 16:16:39 +01:00
Audrius Butkevicius
5350315b68 Merge pull request #1614 from calmh/new-short-id
Index reset should generate file conflicts (fixes #1613)
2015-04-09 13:48:37 +01:00
Audrius Butkevicius
658e39c270 Merge pull request #1618 from calmh/id-conflict
Check for short ID conflict at startup
2015-04-09 13:40:32 +01:00
Audrius Butkevicius
ef7ce6c7e1 Merge pull request #1619 from calmh/gui-version
GUI version string includes OS and Arch
2015-04-09 12:29:58 +01:00
Jakob Borg
509e2411bf Merge pull request #1616 from syncthing/rates
Fix total transfer rates (fixes #1615)
2015-04-09 13:08:49 +02:00
Jakob Borg
65c906f951 Merge pull request #1617 from syncthing/integ
Try capturing panics
2015-04-09 13:07:46 +02:00
Audrius Butkevicius
1f159e8233 Fix total transfer rates (fixes #1615) 2015-04-09 12:07:21 +01:00
Jakob Borg
936c76119d Index reset should generate file conflicts (fixes #1613) 2015-04-09 13:06:09 +02:00
Jakob Borg
f45865606a Add initial merge and reset conflict tests 2015-04-09 13:06:09 +02:00
Jakob Borg
cfc9776bae Check for short ID conflict at startup 2015-04-09 13:06:00 +02:00
Jakob Borg
e7db264803 Extract counter value from vector 2015-04-09 12:51:21 +02:00
Audrius Butkevicius
0cb7ed9e4e Try capturing panics 2015-04-09 11:49:02 +01:00
Jakob Borg
4b07609458 GUI version string includes OS and Arch
(Useful when debugging via screenshots...)
2015-04-09 11:33:24 +02:00
Audrius Butkevicius
e41e58e781 Merge pull request #1608 from calmh/xdr-update
Update XDR dependency (fixes #1606)
2015-04-08 14:33:53 +01:00
Jakob Borg
f5030f1c2c Update XDR dependency (fixes #1606) 2015-04-08 14:49:29 +02:00
Jakob Borg
2a48fb8e87 Merge pull request #1607 from syncthing/deadlock
Don't run deadlock detection in release mode unless asked to (fixes #1536)
2015-04-08 14:48:20 +02:00
Jakob Borg
3d8a71fdb2 Generate with updated XDR package 2015-04-08 14:46:08 +02:00
Audrius Butkevicius
df6dbc5fa4 Only run deadlock detection if asked or non-release/beta (fixes #1536) 2015-04-08 13:40:05 +01:00
Jakob Borg
4b1d2839e8 Correct override PATH in test 2015-04-08 14:23:26 +02:00
Audrius Butkevicius
a892f80e86 Merge pull request #1590 from calmh/long-filenames
Handle long filenames on Windows (fixes #1295)
2015-04-08 13:12:25 +01:00
Jakob Borg
b2a79855ae Handle long filenames on Windows (fixes #1295) 2015-04-08 14:05:39 +02:00
Audrius Butkevicius
ff4974178a Merge pull request #1605 from calmh/http-trace
Add HTTP request tracing
2015-04-07 21:10:51 +01:00
Jakob Borg
d7100fd9bc Add HTTP request tracing 2015-04-07 21:52:47 +02:00
Jakob Borg
0bfb40ae51 discourse -> forum 2015-04-07 16:07:16 +02:00
Jakob Borg
11c83670d6 Merge pull request #1601 from syncthing/conns
Fix GUI
2015-04-07 15:35:54 +02:00
Audrius Butkevicius
68ff4f3842 Fix GUI 2015-04-07 14:24:34 +01:00
Jakob Borg
ab25cd09ed Merge pull request #1600 from syncthing/conns
Change /rest/system/connections output (fixes #1487)
2015-04-07 14:29:40 +02:00
Audrius Butkevicius
8f05b8f982 Change /rest/system/connections output (fixes #1487) 2015-04-07 13:21:03 +01:00
Jakob Borg
63ae2f64cf Woops: /rest/system/errors -> /rest/system/error 2015-04-07 13:46:39 +02:00
Jakob Borg
105103fae0 Woops: /rest/system/report -> /rest/svc/report 2015-04-07 13:33:37 +02:00
Jakob Borg
70f4792ab1 Translation update 2015-04-07 12:24:02 +02:00
Jakob Borg
defd9fa322 Merge pull request #1595 from calmh/rest-rework
Tidy up the REST interface URLs (fixes #1593)
2015-04-07 12:20:36 +02:00
Jakob Borg
e884d0fda6 Tidy up the REST interface URLs (fixes #1593) 2015-04-07 12:16:23 +02:00
Audrius Butkevicius
5f6a8fdc20 Merge pull request #1568 from calmh/override
Override needs to twiddle the version a bit more (fixes #1564)
2015-04-07 11:13:43 +01:00
Audrius Butkevicius
196a9ddbb0 Merge pull request #1599 from calmh/cleanup
Clean up config directory of old crap
2015-04-07 10:33:46 +01:00
Audrius Butkevicius
746140bd11 Merge pull request #1573 from calmh/beacon
Use a socket per interface for v6 multicast (fixes #1563)
2015-04-07 10:23:25 +01:00
Jakob Borg
7b99a5fbac Clean up config directory of old crap 2015-04-07 09:25:28 +02:00
Jakob Borg
b74c31e520 Only show Override button in idle state 2015-04-06 23:33:28 +02:00
Jakob Borg
221f43e4bd Use a socket per interface for v6 multicast (fixes #1563) 2015-04-06 20:55:50 +02:00
Jakob Borg
a17333d73e Override needs to twiddle the version a bit more (fixes #1564) 2015-04-06 20:55:40 +02:00
Jakob Borg
207b43499c Merge remote-tracking branch 'syncthing/pr/1577'
* syncthing/pr/1577:
  Add uptime in webgui (fixes #1501)

Conflicts:
	cmd/syncthing/gui.go
	internal/auto/gui.files.go
2015-04-06 20:53:32 +02:00
Jakob Borg
0c0de17b38 Merge pull request #1582 from ralder/webgui-enable-gzip
Enable gzip for static files for webgui
2015-04-06 08:33:57 +02:00
Sergey Mishin
77882e6086 Enable gzip encoding static files for webgui 2015-04-06 03:11:30 +03:00
Audrius Butkevicius
19a9834843 Merge pull request #1589 from calmh/copyconf
Copy configuration struct when sending Changed() events
2015-04-05 20:53:10 +01:00
Audrius Butkevicius
23dab30ca5 Merge pull request #1588 from calmh/dbcommitter
Use separate routine for database updates in puller
2015-04-05 20:43:46 +01:00
ralder
b5d7ce8ebe Add uptime in webgui (fixes #1501) 2015-04-05 22:37:55 +03:00
Jakob Borg
bf4eb4b269 Copy configuration struct when sending Changed() events
Avoids data race. Copy() must be called with lock held.
2015-04-05 21:07:15 +02:00
Jakob Borg
a5edb6807e The -reset option now only removes db 2015-04-05 17:40:58 +02:00
Jakob Borg
ecadf30fe7 model: Use separate db commit routine (fixes #1558) 2015-04-05 16:19:14 +02:00
Jakob Borg
515f0db5b4 Benchmark syncing many vs large files 2015-04-05 16:16:15 +02:00
Jakob Borg
c2f367cf70 Merge pull request #1585 from Zillode/extra-case-test
Test combination of prefixes (@benapetr)
2015-04-05 13:37:52 +02:00
Lode Hoste
f21dfea965 Test combination of prefixes (@benapetr) 2015-04-05 11:45:43 +02:00
Audrius Butkevicius
b84cad4db0 Merge pull request #1583 from calmh/configcleanup
Remove handling for old deprecated config versions
2015-04-05 01:31:53 +01:00
Jakob Borg
16ae019c8c Fix AUTHORS 2015-04-04 23:52:23 +02:00
Jakob Borg
bd2051febd Remove handling for old deprecated config versions 2015-04-04 23:37:47 +02:00
Audrius Butkevicius
6fb1e03ed4 Merge pull request #1576 from Zillode/reset-indexes
Update reset API to reflect new use cases.
2015-04-04 22:31:59 +01:00
Jakob Borg
8d41a762b6 Clean up translations, generate assets 2015-04-04 23:21:45 +02:00
Jakob Borg
6d3003716c Merge remote-tracking branch 'syncthing/pr/1553'
* syncthing/pr/1553:
  Add dzarda
  Added an "n", typo in a GUI string
2015-04-04 23:20:48 +02:00
Marc Laporte
2c87c3bac3 Fix some typos 2015-04-04 23:15:07 +02:00
Jakob Borg
739c525a98 model: TestIgnores should not randomly fail 2015-04-04 22:55:24 +02:00
Lode Hoste
ab287ebf40 Update reset API to reflect new use cases.
/rest/reset clears the entire Syncthing DB and restarts the program
/rest/reset?folder=default clears the indexes of the default folder
2015-04-04 22:45:11 +02:00
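
A hypothetical client call against these endpoints, assuming the usual X-API-Key header, the default GUI address and the folder passed as a query parameter:

    package restdemo

    import "net/http"

    // resetFolder asks a local Syncthing instance to clear the indexes of
    // one folder. The folder name should be query-escaped in real code.
    func resetFolder(apiKey, folder string) error {
    	url := "http://localhost:8384/rest/reset?folder=" + folder
    	req, err := http.NewRequest("POST", url, nil)
    	if err != nil {
    		return err
    	}
    	req.Header.Set("X-API-Key", apiKey)
    	resp, err := http.DefaultClient.Do(req)
    	if err != nil {
    		return err
    	}
    	return resp.Body.Close()
    }
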
Jakob Borg
ab6bcab78a fnmatch: Test should pass on Mac/windows 2015-04-04 22:05:15 +02:00
Jakob Borg
6e317896e9 golint: FNM_FOOBAR -> fnmatch.FooBar 2015-04-04 22:03:03 +02:00
Jakob Borg
b08ee3ff81 golint: locHttps -> locHTTPS 2015-04-04 21:59:54 +02:00
Jakob Borg
17fd09102e go vet: t.Errorf -> t.Error 2015-04-04 21:58:21 +02:00
Jakob Borg
a598cd2b18 Auto generate author list in gui/index.html 2015-04-04 10:09:42 +02:00
Jakob Borg
55e434d67a Allow non-word characters at the end of commit messages 2015-04-04 09:50:59 +02:00
Jakob Borg
04d4b5d8a0 NICKS 2015-04-04 09:47:54 +02:00
Jakob Borg
7ea00bcb78 Merge pull request #1570 from ralder/select-language-webgui
Add language select menu in webgui (fixes #981)
2015-04-04 09:47:08 +02:00
ralder
0ab56ffde8 Add ralder 2015-04-04 03:27:02 +03:00
ralder
e7e945533e Add language select menu in webgui (fixes #981) 2015-04-04 02:57:07 +03:00
Jakob Borg
7fd1047832 Merge pull request #1578 from Zillode/fix-config-windows
Expand locations during initialisation (fixes #1575).
2015-04-03 22:04:08 +02:00
Lode Hoste
19dfa88258 Expand locations during initialisation (fixes #1575). 2015-04-03 21:57:19 +02:00
Jakob Borg
65923b5c20 The summary event service should send summary events 2015-04-02 10:07:54 +02:00
Jaroslav Malec
ac731aa50c Add dzarda 2015-04-01 17:12:56 +02:00
Audrius Butkevicius
2aa3182476 Merge pull request #1539 from calmh/locations
Move index to index-v0.11.0.db (new format) and centralize location config
2015-04-01 13:18:49 +01:00
Audrius Butkevicius
529c386943 Merge pull request #1529 from calmh/modeldata
Push model data instead of pull (fixes #1434)
2015-04-01 13:10:44 +01:00
Jakob Borg
b659da8a4b Merge pull request #1548 from Zillode/case-insensitive-ignores
Support case-insensitive ignores (fixes #1511).
2015-04-01 13:46:59 +02:00
Jakob Borg
eba98717c9 Dependency update 2015-04-01 11:58:35 +02:00
Jakob Borg
e4dba99cc0 Immediately recalculate summary when folder state changes syncing->idle 2015-04-01 11:58:27 +02:00
Jakob Borg
454e688c3d Push model data instead of pull (fixes #1434) 2015-04-01 11:46:30 +02:00
Jakob Borg
54752deaa1 Move index to index-v0.11.0.db (new format) and centralize location config 2015-04-01 11:30:28 +02:00
Jakob Borg
a3cf37cb2e Refactor and improve integration tests 2015-04-01 11:13:19 +02:00
Jakob Borg
6459d11d32 Merge pull request #1378 from Zillode/draft-upgrade
Do not consider draft releases or releases with empty assets
2015-04-01 11:03:13 +02:00
Jaroslav Malec
15f2fabaaf Added an "n", typo in a GUI string 2015-03-31 19:27:25 +02:00
Lode Hoste
d6030b8d68 Only consider relevant releases (fixes #1285). 2015-03-31 10:22:28 +02:00
Audrius Butkevicius
e1757ee726 Fix test 2015-03-30 22:49:16 +01:00
Lode Hoste
9fed75d59c Support case-insensitive ignores (fixes #1511). 2015-03-30 22:41:12 +02:00
Audrius Butkevicius
5fe15475a4 Merge pull request #1540 from calmh/conflicts
Handle conflicts when pulling (fixes #220)
2015-03-30 21:20:23 +01:00
Jakob Borg
c7f6f4f48d Merge pull request #1546 from Zillode/fix-reset-dir
Fix -reset folder target
2015-03-30 21:57:09 +02:00
Lode Hoste
747c6c2714 Fix -reset folder target 2015-03-30 17:07:31 +02:00
Jakob Borg
2951f128f6 Translation update for external versioner 2015-03-30 14:15:17 +02:00
Jakob Borg
53cb66eeaf Merge pull request #1537 from syncthing/marker
More graceful handling on folder errors (fixes #762)
2015-03-30 09:30:15 +02:00
Jakob Borg
bcf8f798e2 Merge pull request #1543 from rumpelsepp/systemd
systemd: Fix error code definitions to prevent failed units
2015-03-30 08:37:08 +02:00
Audrius Butkevicius
7406176fad More graceful handling on folder errors (fixes #762)
Checks health before accepting every scanner batch, and also
recovers from errors without having to restart.
2015-03-30 08:27:12 +02:00
Jakob Borg
34ba5678c3 Allowed integration test time 60m -> 90m 2015-03-30 08:26:31 +02:00
Stefan Tatschner
da0b78c67a systemd: Fix error code definitions to prevent failed units
The systemd exit code definitions (introduced in c586a17, refs #1324) caused
problems. By default, systemd considers a return code != 0 as failed. So when
you click on "Restart" in the WebUI an error appears:

  systemd[953]: syncthing.service: main process exited, code=exited, status=3/NOTIMPLEMENTED
  systemd[953]: Unit syncthing.service entered failed state.
  systemd[953]: syncthing.service failed.
  systemd[953]: syncthing.service holdoff time over, scheduling restart.
  systemd[953]: Started Syncthing - Open Source Continuous File Synchronization.
  systemd[953]: Starting Syncthing - Open Source Continuous File Synchronization...
  syncthing[13222]: [LFKUK] INFO: syncthing v0.10.30 (go1.4.2 linux-amd64 default) builduser@jara 2015-03-29 07:46:44 UTC

To fix this error we have to add the "success codes" 2, 3, 4 to
"SuccessExitStatus":

  syncthing[13006]: [LFKUK] INFO: Restarting
  syncthing[13006]: [LFKUK] OK: Exiting
  systemd[953]: syncthing.service holdoff time over, scheduling restart.
  systemd[953]: Started Syncthing - Open Source Continuous File Synchronization.
  systemd[953]: Starting Syncthing - Open Source Continuous File Synchronization...
  syncthing[13031]: [LFKUK] INFO: syncthing v0.10.30 (go1.4.2 linux-amd64 default) builduser@jara 2015-03-29 07:46:44 UTC

To make sure that syncthing restarts in case of error, and to make sure
that "Restart=on-failure" actually works, let's remove
"RestartPreventExitStatus=1". Systemd considers this an error by default,
and the restart will then be triggered successfully.
2015-03-30 00:51:06 +02:00
Jakob Borg
47e64ae503 Handle conflicts when pulling (fixes #220) 2015-03-30 00:01:52 +02:00
Audrius Butkevicius
520bb74626 Merge pull request #1541 from calmh/no-gctweak
Remove default GC tweak
2015-03-29 20:01:45 +01:00
Jakob Borg
4beef5cc66 Remove default GC tweak
This reverts the GC behavior to the Go default of triggering GC when the
heap has grown 100% compared to after the previous GC. We were setting
this to 25% to keep memory usage at a minimum, but it has a pretty
severe performance cost (especially when syncing large files) as we keep
triggering GC too often.

This documents the tweak in the `-help` message so users can decide for
themselves, and sticks to whatever the Go runtime developers think is
best for the default.
2015-03-29 19:08:22 +02:00
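
For reference, the removed tweak amounts to lowering the GC target from the default 100% heap growth to 25%, which users can still opt into themselves via the GOGC environment variable or at runtime (hypothetical function name, sketch only):

    package gcdemo

    import "runtime/debug"

    // lowMemoryMode trades CPU time for a smaller heap by collecting
    // when the heap has grown 25% instead of the default 100%.
    func lowMemoryMode() {
    	debug.SetGCPercent(25)
    }
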
Jakob Borg
ba575f55ec Merge pull request #1530 from Zillode/multi-scan
Support multiple scan query strings at the same time
2015-03-29 16:02:45 +02:00
Lode Hoste
2012ce02e8 Support multiple scan query strings at the same time 2015-03-28 22:40:13 +01:00
Jakob Borg
c67e2c2a5a Merge pull request #1528 from Zillode/change-gui-port
Change (default) GUI port from 8080 to 8384 ('S' = 83, 'T' = 84 in ASCII)
2015-03-27 14:08:03 +01:00
Jakob Borg
4b1ce250c1 Merge pull request #1527 from AudriusButkevicius/protochanges
Cherry-picks
2015-03-27 13:44:03 +01:00
Audrius Butkevicius
0401a07507 Change existingBlocks map type 2015-03-26 22:04:34 +00:00
Audrius Butkevicius
c12265499a Response errors may be protocol defined errors 2015-03-26 22:04:34 +00:00
Audrius Butkevicius
0e341832e0 Handle unknown flags at the model 2015-03-26 22:04:34 +00:00
Audrius Butkevicius
489e2e6ad5 Update Model function signatures 2015-03-26 22:04:33 +00:00
Audrius Butkevicius
941f637bca Update protocol package 2015-03-26 22:04:28 +00:00
Jakob Borg
6277c0595c Merge pull request #3 from syncthing/changes
Changes for temp indexes
2015-03-26 22:31:45 +01:00
Audrius Butkevicius
aa9eda1979 Add IndexTemporary and RequestTemporary flags 2015-03-26 21:30:20 +00:00
Audrius Butkevicius
1d76efcbcd Remove duplication 2015-03-26 21:30:19 +00:00
Audrius Butkevicius
34c2c1ec16 Send and receive Request error codes 2015-03-26 21:30:19 +00:00
Audrius Butkevicius
bf7fea9a0a Rename error to code, update xdr path 2015-03-26 21:30:19 +00:00
Audrius Butkevicius
fdf15f3ca3 Flag checking is now responsibility of the model 2015-03-26 21:30:18 +00:00
Audrius Butkevicius
1cb5875b20 Expose flags, options in Index{,Update} 2015-03-26 21:30:18 +00:00
Audrius Butkevicius
1a59a5478f Expose hash, flags, options in Request 2015-03-26 21:30:17 +00:00
Audrius Butkevicius
fc0cb704f2 Merge pull request #1523 from calmh/vv
Implement version vectors
2015-03-26 20:46:02 +00:00
Lode Hoste
960c0cbddf Change (default) GUI port from 8080 to 8384 ('S' = 83, 'T' = 84 in ASCII) 2015-03-26 21:36:06 +01:00
Audrius Butkevicius
5750443371 Merge pull request #10 from syncthing/vv
Implement version vectors
2015-03-26 13:25:04 +00:00
Audrius Butkevicius
9f67d86b30 Merge pull request #1526 from calmh/minreconnect
Don't allow arbitrarily short reconnection intervals (fixes #1524)
2015-03-26 13:01:29 +00:00
Jakob Borg
66f7d83baa Don't allow arbitrarily short reconnection intervals (fixes #1524) 2015-03-26 13:57:57 +01:00
Jakob Borg
b44e87c6e8 Silence go vet composites warning
https://code.google.com/p/go/issues/detail?id=6820
2015-03-25 23:18:43 +01:00
Jakob Borg
6da7f17c4a Implement version vectors 2015-03-25 23:10:34 +01:00
Jakob Borg
b4f45d1e79 Update tests for version vectors 2015-03-25 23:10:33 +01:00
Jakob Borg
7d766bf7c7 Update protocol 2015-03-25 22:52:43 +01:00
Jakob Borg
f9132cae85 Implement version vectors 2015-03-25 22:49:53 +01:00
Jakob Borg
75dc7e6671 Assets 2015-03-25 22:08:16 +01:00
Audrius Butkevicius
3fd887fc57 Merge pull request #1518 from calmh/negcache
Add negative cache time to global discovery
2015-03-25 21:06:15 +00:00
Audrius Butkevicius
128447a681 Merge pull request #1520 from calmh/flags
Add flags and options for future extensibility (fixes #1027)
2015-03-25 21:04:47 +00:00
Audrius Butkevicius
17149741a7 Merge pull request #9 from syncthing/flags
Add flags and options for future extensibility
2015-03-25 21:04:14 +00:00
Jakob Borg
23bae932c7 Add flags and options for future extensibility (fixes #1027) 2015-03-25 21:21:41 +01:00
Jakob Borg
d2ec40bb67 Add flags and options for future extensibility 2015-03-25 21:20:04 +01:00
Jakob Borg
9701998f82 Add negative cache time to global discovery
This reduces the number of external queries by not repeating a query for
a given address if one has failed within the last three minutes.
2015-03-25 16:55:42 +01:00
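
A minimal sketch of such a negative cache (hypothetical type and method names, not the actual discovery code): remember when a lookup last failed and skip repeating it for three minutes.

    package discodemo

    import (
    	"sync"
    	"time"
    )

    // negCache remembers the last failed lookup time per address.
    type negCache struct {
    	mut      sync.Mutex
    	lastFail map[string]time.Time
    }

    // ShouldSkip reports whether a lookup for addr failed recently.
    func (c *negCache) ShouldSkip(addr string) bool {
    	c.mut.Lock()
    	defer c.mut.Unlock()
    	return time.Since(c.lastFail[addr]) < 3*time.Minute
    }

    // Failed records a failed lookup for addr.
    func (c *negCache) Failed(addr string) {
    	c.mut.Lock()
    	defer c.mut.Unlock()
    	if c.lastFail == nil {
    		c.lastFail = make(map[string]time.Time)
    	}
    	c.lastFail[addr] = time.Now()
    }
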
Jakob Borg
0289c50ad9 Changelog.go needs a copyright header 2015-03-25 08:55:10 +01:00
Jakob Borg
50490f5b26 Merge pull request #1514 from kamadak/name
List the given name first for consistency with others
2015-03-24 21:10:12 +01:00
Jakob Borg
d12f802027 Merge pull request #1513 from kamadak/dir-perm
Preserve the permission of a newly created directory
2015-03-24 21:09:48 +01:00
KAMADA Ken'ichi
ac7097b4d0 Preserve the permission of a newly created directory
We need an explicit chmod() when creating a new directory.
Otherwise a new directory may be created with a different permission
from the one received from an originating device, because the umask
is applied to the mode given to mkdir().
The incorrect permission is later sent back to the originating device
and the original permission will be lost.
2015-03-23 22:39:16 +09:00
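
A minimal Go sketch of the mkdir-then-chmod pattern described above (hypothetical helper name): mkdir alone is subject to the process umask, while an explicit chmod is not.

    package permdemo

    import "os"

    // mkdirWithMode creates a directory and then forces the exact mode,
    // since the umask is applied to the mode given to Mkdir.
    func mkdirWithMode(path string, mode os.FileMode) error {
    	if err := os.Mkdir(path, mode); err != nil {
    		return err
    	}
    	return os.Chmod(path, mode) // not affected by the umask
    }
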
KAMADA Ken'ichi
66087e4332 List the given name first for consistency with others 2015-03-23 21:38:07 +09:00
Jakob Borg
3706f9bcb8 Merge pull request #1510 from AudriusButkevicius/location
Configure location provider
2015-03-22 18:38:33 +01:00
Audrius Butkevicius
c505218896 Configure location provider 2015-03-22 15:36:40 +00:00
Jakob Borg
6186a746e0 Rewrite changelog.sh in Go 2015-03-22 16:14:52 +01:00
Jakob Borg
3e98bae5ec Fix changelog.sh for Linux 2015-03-22 15:45:40 +01:00
Audrius Butkevicius
2e1e8f764e Fix crash on walker error (fixes #1507) 2015-03-22 14:28:14 +00:00
Jakob Borg
a7492f8612 Send correct MIME type for SVG images (fixes #1506) 2015-03-22 12:57:16 +01:00
Jakob Borg
123b1f01e4 Translation update 2015-03-22 11:33:10 +01:00
Jakob Borg
865f62e3eb Merge pull request #1505 from rumpelsepp/systemd
systemd: Set -logflags to 0, provide -no-browser flag
2015-03-21 23:28:54 +01:00
Stefan Tatschner
3ea93f52ee systemd: Set -logflags to 0, provide -no-browser flag
Syncthing should not try to start a browser when invoked by systemd.
Furthermore we do not need any timestamps in the journal as systemd
already handles this for us.
2015-03-21 20:55:29 +01:00
Audrius Butkevicius
b53e545ebc Merge pull request #1502 from calmh/defaults
Set defaults correctly for autoNormalize
2015-03-21 14:44:46 +00:00
Jakob Borg
157a4c891c Update integration test configs to v10 2015-03-21 15:40:00 +01:00
Jakob Borg
ad9ea07309 Set defaults correctly for autoNormalize
The default:"foo" struct tags aren't actually used for folder configs.
2015-03-21 15:33:31 +01:00
Jakob Borg
fc483cdfc6 Assets 2015-03-20 09:35:02 +01:00
Jakob Borg
9033838cf2 Merge pull request #1492 from alex2108/fix-gui
use lowerCamelCase for file versioning display
2015-03-20 09:34:48 +01:00
Jakob Borg
8e5d2d5905 Merge pull request #1491 from alex2108/master
use Lstat instead of Stat to prevent errors with symlinks
2015-03-20 09:34:10 +01:00
Alexander Graf
18aa66dabb use lowerCamelCase for file versioning display 2015-03-19 18:16:48 +01:00
Alexander Graf
a2f7b78453 use Lstat instead of Stat to prevent errors with symlinks 2015-03-19 17:36:15 +01:00
Jakob Borg
5c026cbe1d Merge pull request #1490 from syncthing/unspecified
Skip unspecified IPs
2015-03-19 14:02:47 +01:00
Audrius Butkevicius
dc51476897 Skip unspecified IPs 2015-03-19 12:44:38 +00:00
Jakob Borg
39eaa577e0 Merge pull request #1489 from syncthing/lans
Print LANs on startup
2015-03-19 13:09:59 +01:00
Jakob Borg
1f006481ee Merge pull request #1484 from alex2108/master
Add external versioner (ref #573)
2015-03-19 13:08:54 +01:00
Audrius Butkevicius
60faabcbe2 Print LANs on startup 2015-03-19 11:07:20 +00:00
Alexander Graf
d3f1eaf1a3 Add external versioner (ref #573) 2015-03-19 11:31:21 +01:00
Audrius Butkevicius
f568e76fd4 Merge pull request #1488 from calmh/utf8
Automatically fix file name normalization errors (fixes #430)
2015-03-19 08:33:08 +00:00
Jakob Borg
e947223aaa Decide once and for all to return filepath.SkipDir or nil 2015-03-19 07:46:13 +01:00
Jakob Borg
8311162be3 Automatically fix file name normalization errors (fixes #430) 2015-03-19 00:21:48 +01:00
Jakob Borg
75523556e8 Use SVG format logos 2015-03-18 12:51:23 +01:00
Audrius Butkevicius
c82b5d4982 Merge pull request #1474 from calmh/refactor-states
Refactor state handling
2015-03-17 19:09:48 +00:00
Jakob Borg
1c3158099c Rename files to match type names 2015-03-17 19:37:06 +01:00
Jakob Borg
bdbca75dfa Refactor state tracking (...)
Move state tracking into the puller/scanner objects. This is a first
step towards resolving #1391.

Rename Puller and Scanner to roFolder and rwFolder as they have more
duties than just pulling and scanning, and don't need to be exported.
2015-03-17 19:37:06 +01:00
Audrius Butkevicius
124b189cc0 Rebuild assets 2015-03-17 17:58:19 +00:00
Audrius Butkevicius
de38b46392 Fix build 2015-03-17 17:54:25 +00:00
Jakob Borg
e1975644d6 Add /rest/filestatus 2015-03-17 17:51:50 +00:00
Jakob Borg
d9fd27a9e8 Merge pull request #1421 from syncthing/mpl
Relicense to MPLv2
2015-03-17 16:42:33 +01:00
Jakob Borg
32425c5561 MPLv2 2015-03-17 16:02:27 +01:00
Johan Vromans
8d20923881 Suppress 'Last File Received' if a node is folder master (fixes #1472) 2015-03-17 08:44:17 +01:00
Jakob Borg
3a35b8b26c Add sciurius 2015-03-17 08:42:27 +01:00
Jakob Borg
36c93b755a Merge pull request #1465 from pascalj/lowercase-api
Use lowerCamelCase for the JSON API (fixes #1338)
2015-03-16 21:43:34 +01:00
Jakob Borg
ea8c3debea Merge pull request #1470 from syncthing/silence
Silence warnings (ref #1388)
2015-03-16 12:01:29 +01:00
Audrius Butkevicius
b2425b2a25 Silence warnings (ref #1388) 2015-03-16 10:47:59 +00:00
Pascal Jungblut
49bc74e7a0 Use lowerCamelCase for the JSON API (fixes #1338)
Replace the current mix of UpperCamelCase und lowerCamelCase with
consistent lowerCamelCase keys for the JSON API. Also adapt the frontend
so it works with the changed API.

Attention: this will break existing consumers of the API.
2015-03-16 10:05:01 +01:00
Jakob Borg
51c932164f bep/1.0 negotiation can't be a hard error. 2015-03-15 17:49:47 +01:00
Jakob Borg
19e82e93b1 Translation update 2015-03-15 16:42:52 +01:00
Jakob Borg
3f785eaecf Merge pull request #1466 from kamadak/win-w-bits
Do not send group/others-writable bits from Windows.
2015-03-15 16:35:13 +01:00
Jakob Borg
e59c0f38d9 Also build darwin/386 2015-03-15 16:23:45 +01:00
Jakob Borg
64004c6bc0 Alternate email for pascalj 2015-03-15 16:02:34 +01:00
Jakob Borg
422332de7e Add kamadak 2015-03-15 15:19:17 +01:00
Jakob Borg
5a15ba7451 Add pascalj 2015-03-15 15:17:35 +01:00
Jakob Borg
d3686bb1e2 Add kilburn 2015-03-15 15:17:26 +01:00
KAMADA Ken'ichi
3a6eeef580 Do not send group/others-writable bits from Windows.
There is no user/group/others in Windows' read-only attribute,
and all "w" bits are set in os.FileInfo if the file is not read-only.
Do not send these group/others-writable bits to other devices
in order to avoid unexpected world-writable files on other platforms.
2015-03-15 22:14:44 +09:00
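
A minimal sketch of the masking described above (hypothetical function name, not the actual scanner code): strip the group- and others-writable bits before announcing the mode from Windows.

    package permdemo

    import (
    	"os"
    	"runtime"
    )

    // outgoingMode clears g+w and o+w on Windows, where the read-only
    // attribute has no user/group/others distinction.
    func outgoingMode(mode os.FileMode) os.FileMode {
    	if runtime.GOOS == "windows" {
    		mode &^= 0022
    	}
    	return mode
    }
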
Audrius Butkevicius
1dc5c6b8a8 Merge pull request #1457 from calmh/relverv3
Guessing version from directory name is not viable in Go (ref #1449)
2015-03-13 10:42:27 +00:00
Jakob Borg
3532a560d8 Guessing version from directory name is not viable in Go (ref #1449) 2015-03-13 10:10:13 +01:00
Audrius Butkevicius
d2d894d808 Merge pull request #1451 from calmh/relverv2
Get version from RELEASE file if it exists, or guess from directory (fixes #1449)
2015-03-12 10:38:39 +00:00
Jakob Borg
2aa38bfc4b Get version from RELEASE file if it exists, or guess from directory (fixes #1449) 2015-03-12 11:18:23 +01:00
Jakob Borg
9c3cee9ae4 Translation update 2015-03-11 21:15:55 +01:00
Jakob Borg
fc521b5f9d Protocol dep update 2015-03-11 21:13:17 +01:00
Jakob Borg
1a4398cc55 Tests should actually pass 2015-03-11 21:11:43 +01:00
Audrius Butkevicius
5253368acc Merge pull request #1448 from calmh/xp
Fall back to %AppData% if %LocalAppData% is blank (fixes #1446)
2015-03-11 20:06:13 +00:00
Jakob Borg
51cfc3d4be Fall back to %AppData% if %LocalAppData% is blank (fixes #1446) 2015-03-11 21:04:10 +01:00
Audrius Butkevicius
80bffd93e7 Merge pull request #1447 from calmh/silencio
Don't yell about discovery listening and resolving (ref #1418)
2015-03-11 20:03:37 +00:00
Jakob Borg
df4f22e899 Don't yell about discovery listening and resolving (ref #1418) 2015-03-11 20:57:20 +01:00
Audrius Butkevicius
529d91fb9d Merge pull request #7 from syncthing/compression
Add more fine grained compression control
2015-03-11 19:36:02 +00:00
Audrius Butkevicius
8cc70843a5 Merge pull request #1445 from calmh/compression
Compress only metadata by default (fixes #1374)
2015-03-11 19:35:15 +00:00
Jakob Borg
70c841f23a Compress only metadata by default (fixes #1374) 2015-03-11 19:10:57 +01:00
Jakob Borg
108b4e2e10 Add more fine grained compression control 2015-03-11 19:09:58 +01:00
Jakob Borg
7b22e09805 Merge pull request #1438 from moshen/runit-reparenting
Fix syncthing process reparenting with runit
2015-03-10 08:21:41 +01:00
Jakob Borg
c5838c143c Merge pull request #1425 from AudriusButkevicius/laan
Allow not to limit bandwidth in LAN (fixes #1336)
2015-03-10 08:19:29 +01:00
Colin Kennedy
eaf71db7c9 Fix syncthing process reparenting with runit
When you run `sudo sv down /etc/service/syncthing/`, the `TERM` signal
isn't propagated or trapped, so syncthing is orphaned and adopted by
init (PID 1).

- Changed call to `chpst` to `exec`
- Moved logging to `log/run` per `runsv` standard
2015-03-10 01:01:52 -05:00
Jakob Borg
b322b527b3 Add SVG versions of logo 2015-03-10 00:02:46 +01:00
Jakob Borg
cc4b231875 Merge pull request #1429 from moshen/runit-fix
Use chpst instead of the djb-named setuidgid
2015-03-09 23:50:08 +01:00
Jakob Borg
05642a3e17 Merge remote-tracking branch 'syncthing/pr/1436'
* syncthing/pr/1436:
Remove red if we managed to report to at least one discovery server (fixes #1427)

Conflicts:
	internal/auto/gui.files.go
2015-03-09 23:47:41 +01:00
Jakob Borg
7dcc6bb579 Add moshen 2015-03-09 23:45:03 +01:00
Audrius Butkevicius
f15c416e59 Remove red if we managed to report to at least one discovery server (fixes #1427) 2015-03-09 21:55:14 +00:00
Audrius Butkevicius
6fa97eeec7 Allow not to limit bandwidth in LAN (fixes #1336) 2015-03-09 20:54:33 +00:00
Jakob Borg
03bbf273b3 Update protocol dep 2015-03-09 21:21:50 +01:00
Jakob Borg
cd0cce4195 gofmt 2015-03-09 21:21:20 +01:00
Jakob Borg
9f1a72ec88 Merge pull request #6 from syncthing/limit
Remove 64 folder limit
2015-03-09 21:02:07 +01:00
Audrius Butkevicius
1e376cd3a6 Update assets 2015-03-09 12:13:43 +00:00
Audrius Butkevicius
49cf939c04 Add missing translation strings (fixes #1430) 2015-03-09 12:02:30 +00:00
Colin Kennedy
338394f8c3 Use chpst instead of the djb-named setuidgid
Some distros (Ubuntu, Debian?) don't link `chpst` to `setuidgid`, as it
could conflict with a djb daemontools installation. If daemontools isn't
going to be referenced in the README, then the example runit config
should reference the runit-packaged utility.
2015-03-08 18:50:54 -05:00
Audrius Butkevicius
575b62d77b Merge pull request #1424 from AudriusButkevicius/scanner
Make sure we start scanning at an indexed location (fixes #1399)
2015-03-08 19:46:21 +00:00
Audrius Butkevicius
57fc0eb5b1 Make sure we start scanning at an indexed location (fixes #1399) 2015-03-08 19:45:47 +00:00
Jakob Borg
3a19fe3663 Merge pull request #1423 from AudriusButkevicius/warn
Silence discovery warnings when v6 not available
2015-03-08 19:50:35 +01:00
Audrius Butkevicius
a10c621e33 Remove 64 folder limit 2015-03-08 16:55:01 +00:00
Audrius Butkevicius
0c049179b4 Silence discovery warnings when v6 not available (fixes #1418) 2015-03-08 16:49:12 +00:00
Jakob Borg
e22c873ec4 Repair integration tests 2015-03-08 08:41:43 +01:00
Jakob Borg
d644ebab09 Translation update 2015-03-08 07:52:52 +01:00
Jakob Borg
2fdc578a88 Update goleveldb (fixes #1414) 2015-03-08 07:49:02 +01:00
Jakob Borg
aeb3a3f7b5 Handle crap at the end of commit subjects 2015-03-07 22:50:54 +01:00
Audrius Butkevicius
044b7ce070 Merge pull request #1415 from calmh/announce-v6
Add global announce server on IPv6
2015-03-07 21:39:53 +00:00
Jakob Borg
815e538f10 Merge pull request #1401 from Zillode/fix-chmod-android
Fix chmod android
2015-03-07 21:42:44 +01:00
Lode Hoste
758233f001 Do not error when chmod fails while permissions are ignored (fixes #1404). 2015-03-07 21:38:16 +01:00
Jakob Borg
f4f4fda520 String slice uniquification must return a well defined order, or tests fail 2015-03-07 21:05:30 +01:00
Jakob Borg
1d77aeb69c Add global announce server on IPv6 2015-03-07 21:01:20 +01:00
Jakob Borg
29dbfc647d Add bencurthoys 2015-03-07 14:41:18 +01:00
Jakob Borg
55d9514e83 Merge pull request #1407 from bencurthoys/master
Windows Service Setup.exe
2015-03-07 14:38:43 +01:00
Jakob Borg
46bd7956a3 Merge branch 'pr-1410'
* pr-1410:
  Add test for osutil.InWritableDir
  Exit and error if the target is not a directory
2015-03-07 14:35:54 +01:00
Jakob Borg
e1ee394c26 Add test for osutil.InWritableDir 2015-03-07 14:35:29 +01:00
Lode Hoste
19884ade99 Exit and error if the target is not a directory 2015-03-06 22:02:29 +01:00
bencurthoys
f0a88061db Setup.exe for Windows
NSIS script to build a setup file that installs Syncthing and sets it up as a service.
2015-03-06 15:30:50 +00:00
Audrius Butkevicius
6057138466 Merge pull request #1400 from calmh/negproto
Verify negotiated protocol bep/1.0
2015-03-05 15:16:59 +00:00
Jakob Borg
aaaa6556f3 Some commentary on the initial connection checks 2015-03-05 16:09:20 +01:00
Jakob Borg
4745431cda Verify negotiated protocol bep/1.0 2015-03-05 15:58:16 +01:00
Jakob Borg
0455a948a9 Merge pull request #1337 from AudriusButkevicius/fileslevels
Add /rest/tree API call
2015-03-05 15:24:51 +01:00
Audrius Butkevicius
bf3e249237 Add GlobalDirectoryTree benchmarks 2015-03-04 23:39:33 +00:00
Audrius Butkevicius
fb649e9525 Fix benchmarks, cleanup tests 2015-03-04 23:39:32 +00:00
Audrius Butkevicius
9d1e2d9f46 Add /rest/tree API call 2015-03-04 23:39:27 +00:00
Audrius Butkevicius
9876d93b60 Fix tests on Windows while running as a simple user 2015-03-04 22:39:33 +00:00
Jakob Borg
b3dd05580b Don't follow the prototype chain when looking for a folder name (fixes #1387) 2015-03-01 22:10:34 +01:00
Jakob Borg
32847f33fd Translation update 2015-03-01 21:51:56 +01:00
Jakob Borg
bff9723fe3 Merge branch 'fix-1373'
* fix-1373:
  fixup alterFiles
  Ensure progress when delete-by-rename fails (fixes #1373)
  Handle weird Lstat() returns for disappeared items (ref #1373)
  Alter files into directories and the other way around
2015-03-01 21:51:26 +01:00
Jakob Borg
d114648c16 fixup alterFiles 2015-03-01 21:38:04 +01:00
Jakob Borg
44d0da02d0 Ensure progress when delete-by-rename fails (fixes #1373) 2015-03-01 10:55:48 +01:00
Jakob Borg
c25107eff3 Handle weird Lstat() returns for disappeared items (ref #1373) 2015-03-01 10:55:43 +01:00
Jakob Borg
af5c36d2a8 Merge remote-tracking branch 'syncthing/pr/1373' into fix-1373
* syncthing/pr/1373:
  Alter files into directories and the other way around
2015-03-01 10:55:34 +01:00
Audrius Butkevicius
0828a67145 Merge pull request #1275 from calmh/kv-cache
Namespaced key value store and value cache
2015-02-26 13:51:43 +00:00
Jakob Borg
617fb84983 Also build Solaris, NetBSD by default 2015-02-26 12:21:20 +01:00
Jakob Borg
6f8ac2b61c Refactor: add and use db.NamespacedKV 2015-02-26 09:56:11 +01:00
Jakob Borg
6f2b4b96cf Refactor: use leveldb/util.BytesPrefix 2015-02-26 09:56:11 +01:00
Jakob Borg
3cc288a169 Build binary packages for Dragonfly 2015-02-26 08:58:58 +01:00
Jakob Borg
0bbbf3eb3b Compile on Dragonfly 2015-02-26 08:42:39 +01:00
Jakob Borg
4b1b56fee8 Reduce CPU usage (fixes #1376) 2015-02-25 23:30:24 +01:00
Jakob Borg
154fc59e93 Switch back to original kardianos/osext 2015-02-24 20:44:49 +01:00
Lode Hoste
218c4c128c Alter files into directories and the other way around 2015-02-23 12:12:31 +01:00
Audrius Butkevicius
53f1af0cab Merge pull request #1375 from calmh/fix-987
Attempt recovery of corrupted DB at startup (fixes #987)
2015-02-23 09:17:29 +00:00
Jakob Borg
f9577a38dc Attempt recovery of corrupted DB at startup (fixes #987) 2015-02-23 08:22:39 +01:00
Jakob Borg
fadc7d9ba5 Merge pull request #1366 from krozycki/master
All folder panels collapsed, fixes #1034
2015-02-20 10:18:50 +01:00
Jakob Borg
1e4b2133f6 Also handle ()| in glob patterns (fixes #1365) 2015-02-20 10:12:06 +01:00
Karol Różycki
bfefa6d016 All folder panels collapsed, fixes #1034 2015-02-19 15:48:43 +01:00
Jakob Borg
8b66472949 Fix sync benchmark for latest test changes 2015-02-19 13:15:51 +02:00
Jakob Borg
3b3aa94c4e Refactor out connection related functions to a separate file 2015-02-19 12:05:26 +02:00
Audrius Butkevicius
dc05275670 Merge pull request #1357 from calmh/truncate-v3
Simplify FileInfoTruncated
2015-02-19 10:04:53 +00:00
Jakob Borg
7921082ece Correctly handle ^ and $ in ignore patterns (fixes #1365) 2015-02-19 09:10:32 +02:00
Jakob Borg
88c44b303d Simplify FileInfoTruncated 2015-02-15 12:50:03 +01:00
Jakob Borg
2e2d479103 Merge pull request #5 from syncthing/iota
We are not using iota
2015-02-15 09:40:19 +01:00
Audrius Butkevicius
35f0e355bf We are not using iota 2015-02-12 21:59:33 +00:00
Jakob Borg
4395711d26 Merge pull request #4 from syncthing/allflags
Add FlagsAll bit mask
2015-02-09 15:45:05 +01:00
Audrius Butkevicius
aba915037f Add FlagsAll bit mask 2015-02-01 16:33:52 +00:00
Audrius Butkevicius
442e93d3fc Merge pull request #2 from syncthing/integers
Integer type policy
2015-01-19 20:40:17 +00:00
Jakob Borg
3450b5f80c Integer type policy
Integers are for numbers, enabling arithmetic like subtractions and for
loops without getting shot in the foot. Unsigneds are for bitfields.

- "int" for numbers that will always be laughably smaller than four
  billion, and where we don't care about the serialization format.

- "int32" for numbers that will always be laughably smaller than four
  billion, and will be serialized to four bytes.

- "int64" for numbers that may approach four billion or will be
  serialized to eight bytes.

- "uint32" and "uint64" for bitfields, depending on required number of
  bits and serialization format. Likewise "uint8" and "uint16", although
  rare in this project since they don't exist in XDR.

- "int8", "int16" and plain "uint" are almost never useful.
2015-01-18 02:13:25 +01:00
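
A hypothetical struct illustrating how the policy above reads in practice (names and fields are invented for illustration only):

    package policydemo

    // example follows the integer type policy.
    type example struct {
    	Items  int    // plain count, far below four billion, not serialized
    	Blocks int32  // small number, serialized as four bytes
    	Size   int64  // may be large, serialized as eight bytes
    	Flags  uint32 // bitfield, serialized as four bytes
    }
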
Jakob Borg
f76b5d8002 rm '.gitignore' 2015-01-13 13:47:16 +01:00
Audrius Butkevicius
7e54868206 Merge pull request #1 from syncthing/mit
Relicense as MIT
2015-01-13 12:38:13 +00:00
Jakob Borg
d84a8e6404 Relicense as MIT 2015-01-13 13:31:14 +01:00
Jakob Borg
4833b6085c The luhn package moved 2015-01-13 13:20:29 +01:00
Jakob Borg
2ceaca8828 Add documentation copied from Syncthing 2015-01-13 13:09:59 +01:00
Jakob Borg
d02158c0ef Also filter out some other obviously invalid filenames (ref #1243) 2015-01-13 12:28:35 +01:00
Jakob Borg
6213c4f2cd Remove nil filenames from database and indexes (fixes #1243) 2015-01-13 09:20:14 +01:00
Jakob Borg
cd34eea017 Reject Index and Request messages with unexpected flags 2015-01-11 13:29:01 +01:00
Jakob Borg
7a0a702ec0 Refactor readerLoop to switch on message type directly 2015-01-11 13:24:56 +01:00
Jakob Borg
d9ed8e125e Move FileInfoTruncated to files package
This is where it's used, and it clarifies that it's never used over the
wire.
2015-01-09 08:28:24 +01:00
Jakob Borg
36708a5067 Move FileIntf to files package, expose Iterator type
This is where FileIntf is used, so it should be defined here (it's not
a protocol thing, really).
2015-01-09 08:18:42 +01:00
Jakob Borg
8c32955da1 Actually close connection based on unknown protocol version 2015-01-08 22:11:26 +01:00
Jakob Borg
c111ed4b20 Add fields for future extensibility
This adds a number of fields to the end of existing messages. This is a
backwards compatible change.
2015-01-08 22:11:26 +01:00
Jakob Borg
a3ea9427d1 Ensure backwards compatibility before modifying protocol
This change makes sure that things work smoothly when "we" are a newer
version than our peer and have more fields in our messages than they do.
Missing fields will be left at zero/nil.

(The other side will ignore our extra fields, for the same effect.)
2015-01-08 14:25:11 +01:00
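
The wire format in these two commits is XDR, but the zero-value behaviour they rely on can be sketched compactly with a hypothetical message type (the names and the JSON encoding below are illustrative only, not the real protocol code): fields an older peer never sent simply decode to their zero values.

package main

import (
	"encoding/json"
	"fmt"
)

// helloMessage is a hypothetical message type. MaxLocalVersion stands in for
// a field added in a newer protocol revision; older peers don't send it.
type helloMessage struct {
	DeviceName      string `json:"deviceName"`
	MaxLocalVersion int64  `json:"maxLocalVersion"`
}

func main() {
	oldWire := []byte(`{"deviceName":"syncbox"}`) // message from an older peer

	var msg helloMessage
	if err := json.Unmarshal(oldWire, &msg); err != nil {
		panic(err)
	}
	// The field the old peer never sent is left at its zero value.
	fmt.Println(msg.DeviceName, msg.MaxLocalVersion) // syncbox 0
}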
Audrius Butkevicius
09b534b8a3 Add job queue (fixes #629)
A request to terminate currently ongoing downloads and jump to the bumped file
is incoming in 3, 2, 1.

Also, this has a slightly strange effect: we pop a job off the queue, but the
copyChannel is still busy and blocks, so the job gets moved to the progress
slice in the jobqueue and looks like it's in progress, which it isn't, as it's
still waiting to be picked up from the copyChan.

As a result, the progress emitter doesn't register the task, hence the file
doesn't have a progress bar, yet it cannot be replaced by a bump either.

I guess I can fix the progress bar issue by moving the progressEmiter.Register
call to just before passing the file to the copyChan, but then we are back to
the initial problem: a file with a progress bar but no progress happening, as
it's stuck on the write to copyChan.

I checked whether there is a way to test a channel for writeability (before
popping) but got struck by lightning just for bringing the idea up in #go-nuts.

My ideal scenario would be to check if copyChan is writeable, pop a job from
the queue and shove it down handleFile. This way jobs would stay in the queue
while they cannot be handled, meaning that `Bump` could bring your file up
higher.
2015-01-02 15:33:39 +01:00
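
For what it's worth, the "check if copyChan is writeable" idea above maps onto Go's non-blocking send idiom: a channel can't be tested for writeability without committing to a send, but a select with a default case only pops the job when the send would actually go through. A minimal sketch, using a made-up job type and queue rather than the real jobqueue:

package main

import "fmt"

type job struct{ name string }

// tryDispatch attempts a non-blocking send to copyChan. The job is only
// popped and sent if the channel can accept it right now; otherwise it stays
// queued and remains bumpable.
func tryDispatch(queue *[]job, copyChan chan<- job) bool {
	if len(*queue) == 0 {
		return false
	}
	next := (*queue)[0]
	select {
	case copyChan <- next:
		*queue = (*queue)[1:] // pop only after the send succeeded
		return true
	default:
		return false // channel busy; job stays in the queue
	}
}

func main() {
	queue := []job{{"a"}, {"b"}}
	copyChan := make(chan job, 1)
	fmt.Println(tryDispatch(&queue, copyChan)) // true: buffered slot free
	fmt.Println(tryDispatch(&queue, copyChan)) // false: channel full
}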
Jakob Borg
190c61ba2f Use Go 1.4 'generate' to create XDR codec 2014-12-06 14:23:10 +01:00
Jakob Borg
dc71ec734d Dependency update, new golang.org/x package names 2014-11-30 00:17:00 +01:00
Audrius Butkevicius
3af96e50bd Use custom structure for /need calls (fixes #1001)
Also, remove trimming by number of blocks as this no longer affects the size
of the response.
2014-11-23 00:52:48 +00:00
Audrius Butkevicius
ddc56c8a0d Add symlink support at the protocol level 2014-11-20 16:32:00 +01:00
Audrius Butkevicius
e0da2764c9 Code smell 2014-11-20 16:32:00 +01:00
Jakob Borg
ad29093ac1 Use more inclusive copyright header 2014-11-17 12:54:42 +01:00
Jakob Borg
28610a9a42 Break out logger as a reusable component 2014-10-26 13:16:54 +01:00
Jakob Borg
65eb528e2d Update xdr; handle marshalling errors 2014-10-21 09:20:14 +02:00
Audrius Butkevicius
ec9d68960f Remove 64 device limit 2014-10-20 21:46:53 +01:00
Audrius Butkevicius
c618eba9a9 Implement BlockMap 2014-10-16 12:26:27 +02:00
Jakob Borg
1ef8378a30 FileInfoTruncated.String() for stindex' benefit 2014-10-16 09:26:24 +02:00
Jakob Borg
43289103cb Relicense to GPL 2014-10-01 07:53:59 +02:00
Audrius Butkevicius
1bc5632771 Run go fmt -w 2014-09-28 14:23:08 +01:00
Audrius Butkevicius
4b488a2d28 Rename Repository -> Folder, Node -> Device (fixes #739) 2014-09-28 14:23:07 +01:00
Jakob Borg
f28367bcfc Move top level packages to internal. 2014-09-27 09:42:10 +02:00
722 changed files with 70988 additions and 24660 deletions

9
.gitattributes vendored Normal file
View File

@@ -0,0 +1,9 @@
# Text files use LF line endings in this repository
* text=auto
# Except the dependencies, which we leave alone
Godeps/** -text=auto
# Diffs on these files are meaningless
gui.files.go -diff
*.svg -diff

13
.gitignore vendored
View File

@@ -1,17 +1,16 @@
./syncthing
syncthing
!gui/syncthing
!Godeps/_workspace/src/github.com/syncthing
syncthing.exe
*.tar.gz
*.zip
*.asc
*.sublime*
.idea/
.jshintrc
coverage.out
files/pidx
bin
perfstats*.csv
coverage.xml
!gui/scripts/syncthing
.DS_Store
syncthing.md5
syncthing.exe.md5
syncthing.sig
RELEASE
deb

34
AUTHORS
View File

@@ -1,44 +1,72 @@
# This is the official list of Syncthing authors for copyright purposes.
Aaron Bieber <qbit@deftly.net>
Adam Piggott <aD@simplypeachy.co.uk> <simplypeachy@users.noreply.github.com>
Alexander Graf <register-github@alex-graf.de>
Andrew Dunham <andrew@du.nham.ca>
Audrius Butkevicius <audrius.butkevicius@gmail.com>
Antony Male <antony.male@gmail.com>
Arthur Axel fREW Schmidt <frew@afoolishmanifesto.com> <frioux@gmail.com>
Audrius Butkevicius <audrius.butkevicius@gmail.com>
Bart De Vries <devriesb@gmail.com>
Ben Curthoys <ben@bencurthoys.com>
Ben Schulz <ueomkail@gmail.com> <uok@users.noreply.github.com>
Ben Sidhom <bsidhom@gmail.com>
Brandon Philips <brandon@ifup.org>
Brendan Long <self@brendanlong.com>
Brian R. Becker <brbecker@gmail.com>
Caleb Callaway <enlightened.despot@gmail.com>
Carsten Hagemann <moter8@gmail.com>
Cathryne Linenweaver <cathryne.linenweaver@gmail.com> <Cathryne@users.noreply.github.com>
Chris Howie <me@chrishowie.com>
Chris Joel <chris@scriptolo.gy>
Colin Kennedy <moshen.colin@gmail.com>
Daniel Bergmann <dan.arne.bergmann@gmail.com> <brgmnn@users.noreply.github.com>
Daniel Martí <mvdan@mvdan.cc>
Denis A. <denisva@gmail.com>
Dennis Wilson <dw@risu.io>
Dominik Heidler <dominik@heidler.eu>
Elias Jarlebring <jarlebring@gmail.com>
Emil Hessman <emil@hessman.se>
Erik Meitner <e.meitner@willystreet.coop>
Federico Castagnini <federico.castagnini@gmail.com>
Felix Ableitner <me@nutomic.com>
Felix Unterpaintner <bigbear2nd@gmail.com>
Francois-Xavier Gsell <fxgsell@gmail.com>
Frank Isemann <frank@isemann.name>
Gilli Sigurdsson <gilli@vx.is>
Jacek Szafarkiewicz <szafar@linux.pl>
Jakob Borg <jakob@nym.se>
Jake Peterson <jake@acogdev.com>
James Patterson <jamespatterson@operamail.com> <jpjp@users.noreply.github.com>
Jaroslav Malec <dzardacz@gmail.com>
Jens Diemer <github.com@jensdiemer.de> <git@jensdiemer.de>
Jochen Voss <voss@seehuhn.de>
Johan Vromans <jvromans@squirrel.nl>
Karol Różycki <rozycki.karol@gmail.com>
Ken'ichi Kamada <kamada@nanohz.org>
Lode Hoste <zillode@zillode.be>
Lord Landon Agahnim <lordlandon@gmail.com>
Marc Laporte <marc@marclaporte.com> <marc@laporte.name>
Marc Pujol <kilburn@la3.org>
Marcin Dziadus <dziadus.marcin@gmail.com>
Marc Laporte <marc@marclaporte.com>
Mateusz Naściszewski <matin1111@wp.pl>
Matt Burke <mburke@amplify.com> <burkemw3@gmail.com>
Michael Jephcote <rewt0r@gmx.com> <Rewt0r@users.noreply.github.com>
Michael Tilli <pyfisch@gmail.com>
Pascal Jungblut <github@pascalj.com> <mail@pascal-jungblut.com>
Peter Hoeg <peter@speartail.com>
Philippe Schommers <philippe@schommers.be>
Phill Luby <phill.luby@newredo.com>
Piotr Bejda <piotrb10@gmail.com>
Ryan Sullivan <kayoticsully@gmail.com>
Stefan Tatschner <stefan@sevenbyte.org>
Sergey Mishin <ralder@yandex.ru>
Stefan Tatschner <stefan@sevenbyte.org> <rumpelsepp@sevenbyte.org>
Stefan Kuntz <stefan.github@gmail.com> <Stefan.github@gmail.com>
Tim Abell <tim@timwise.co.uk>
Tobias Nygren <tnn@nygren.pp.se>
Tomas Cerveny <kozec@kozec.com>
Tully Robinson <tully@tojr.org>
Tyler Brazier <tyler@tylerbrazier.com>
Veeti Paananen <veeti.paananen@rojekti.fi>
Vil Brekin <vilbrekin@gmail.com>
Yannic A. <eipiminusone+github@gmail.com> <eipiminus1@users.noreply.github.com>

View File

@@ -32,109 +32,21 @@ latest info on Transifex.
## Contributing Code
Every contribution is welcome. If you want to contribute but are unsure
where to start, any open issues are fair game! Be prepared for a
[certain amount of review](https://github.com/syncthing/syncthing/wiki/FAQ#why-are-you-being-so-hard-on-my-pull-request);
it's all in the name of quality. :) Following the points below will make this
a smoother process.
where to start, any open issues are fair game! See the [Contribution
Guidelines](http://docs.syncthing.net/dev/contributing.html) for the full
story on committing code.
Individuals making significant and valuable contributions are given
commit-access to the project. If you make a significant contribution and
are not considered for commit-access, please contact any of the
Syncthing core team members.
## Contributing Documentation
All nontrivial contributions should go through the pull request
mechanism for internal review. Determining what is "nontrivial" is left
at the discretion of the contributor.
### Authorship
All code authors are listed in the AUTHORS file. Commits must be made
with the same name and email as listed in the AUTHORS file. To
accomplish this, ensure that your git configuration is set correctly
prior to making your first commit;
$ git config --global user.name "Jane Doe"
$ git config --global user.email janedoe@example.com
You must be reachable on the given email address. If you do not wish to
use your real name for whatever reason, using a nickname or pseudonym is
perfectly acceptable.
### Core Team
The Syncthing core team currently consists of the following members;
- Jakob Borg (@calmh)
- Audrius Butkevicius (@AudriusButkevicius)
## Coding Style
- Follow the conventions laid out in [Effective Go](https://golang.org/doc/effective_go.html)
as much as makes sense.
- All text files use Unix line endings.
- Each commit should be `go fmt` clean.
- The commit message subject should be a single short sentence
describing the change, starting with a capital letter.
- Commits that resolve an existing issue must include the issue number
as `(fixes #123)` at the end of the commit message subject.
- Imports are grouped per `goimports` standard; that is, standard
library first, then third party libraries after a blank line.
- A contribution solving a single issue or introducing a single new
feature should probably be a single commit based on the current
`master` branch. You may be asked to "rebase" or "squash" your pull
request to make sure this is the case, especially if there have been
amendments during review.
Updates to the [documentation site](http://docs.syncthing.net/) can be
made as pull requests on the [documentation
repository](https://github.com/syncthing/docs).
## Licensing
All contributions are made under the same GPL license as the rest of the
project, except documentation, user interface text and translation
All contributions are made under the same MPLv2 license as the rest of
the project, except documentation, user interface text and translation
strings which are licensed under the Creative Commons Attribution 4.0
International License. You retain the copyright to code you have
written.
When accepting your first contribution, the maintainer of the project
will ensure that you are added to the AUTHORS file. You are welcome to
add yourself as a separate commit in your first pull request.
## Building
[See the documentation](https://github.com/syncthing/syncthing/wiki/Building)
on how to get started with a build environment.
## Branches
- `master` is the main branch containing good code that will end up in
the next release. You should base your work on it. It won't ever be
rebased or force-pushed to.
- `vx.y` branches exist to make patch releases on otherwise obsolete
minor releases. Should only contain fixes cherry picked from master.
Don't base any work on them.
- Other branches are probably topic branches and may be subject to
rebasing. Don't base any work on them unless you specifically know
otherwise.
## Tags
All releases are tagged semver style as `vx.y.z`. Release tags are
signed by GPG key BCE524C7.
## Tests
Yes please!
## Documentation
[Over here!](https://github.com/syncthing/syncthing/wiki)
## License
GPLv3

49
Godeps/Godeps.json generated
View File

@@ -1,45 +1,46 @@
{
"ImportPath": "github.com/syncthing/syncthing",
"GoVersion": "go1.4",
"GoVersion": "go1.5.1",
"Packages": [
"./cmd/..."
],
"Deps": [
{
"ImportPath": "github.com/bkaradzic/go-lz4",
"Rev": "93a831dcee242be64a9cc9803dda84af25932de7"
"Rev": "74ddf82598bc4745b965729e9c6a463bedd33049"
},
{
"ImportPath": "github.com/calmh/logger",
"Rev": "f50d32b313bec2933a3e1049f7416a29f3413d29"
"ImportPath": "github.com/calmh/du",
"Rev": "3c0690cca16228b97741327b1b6781397afbdb24"
},
{
"ImportPath": "github.com/calmh/luhn",
"Rev": "0c8388ff95fa92d4094011e5a04fc99dea3d1632"
},
{
"ImportPath": "github.com/calmh/osext",
"Rev": "9bf61584e5f1f172e8766ddc9022d9c401faaa5e"
"ImportPath": "github.com/calmh/xdr",
"Rev": "47c0042d09a827b81ee62497f99e5e0c7f0bd31c"
},
{
"ImportPath": "github.com/calmh/xdr",
"Rev": "ff948d7666c5e0fd18d398f6278881724d36a90b"
"ImportPath": "github.com/golang/snappy",
"Rev": "723cc1e459b8eea2dea4583200fd60757d40097a"
},
{
"ImportPath": "github.com/juju/ratelimit",
"Rev": "f9f36d11773655c0485207f0ad30dc2655f69d56"
"Rev": "772f5c38e468398c4511514f4f6aa9a4185bc0a0"
},
{
"ImportPath": "github.com/syncthing/protocol",
"Rev": "2e2d479103df8fb721d55d59b0198d6c24f4865b"
"ImportPath": "github.com/kardianos/osext",
"Rev": "6e7f843663477789fac7c02def0d0909e969b4e5"
},
{
"ImportPath": "github.com/syndtr/goleveldb/leveldb",
"Rev": "63c9e642efad852f49e20a6f90194cae112fd2ac"
"Rev": "1a9d62f03ea92815b46fcaab357cfd4df264b1a0"
},
{
"ImportPath": "github.com/syndtr/gosnappy/snappy",
"Rev": "ce8acff4829e0c2458a67ead32390ac0a381c862"
"ImportPath": "github.com/thejerf/suture",
"Comment": "v1.0.1",
"Rev": "99c1f2d613756768fc4299acd9dc621e11ed3fd7"
},
{
"ImportPath": "github.com/vitrun/qart/coding",
@@ -55,19 +56,31 @@
},
{
"ImportPath": "golang.org/x/crypto/bcrypt",
"Rev": "4ed45ec682102c643324fae5dff8dab085b6c300"
"Rev": "81bf7719a6b7ce9b665598222362b50122dfc13b"
},
{
"ImportPath": "golang.org/x/crypto/blowfish",
"Rev": "4ed45ec682102c643324fae5dff8dab085b6c300"
"Rev": "81bf7719a6b7ce9b665598222362b50122dfc13b"
},
{
"ImportPath": "golang.org/x/net/internal/iana",
"Rev": "4b709d93778b93d2f34943e3142c71578d83ad31"
},
{
"ImportPath": "golang.org/x/net/ipv6",
"Rev": "4b709d93778b93d2f34943e3142c71578d83ad31"
},
{
"ImportPath": "golang.org/x/net/proxy",
"Rev": "4b709d93778b93d2f34943e3142c71578d83ad31"
},
{
"ImportPath": "golang.org/x/text/transform",
"Rev": "c980adc4a823548817b9c47d38c6ca6b7d7d8b6a"
"Rev": "723492b65e225eafcba054e76ba18bb9c5ac1ea2"
},
{
"ImportPath": "golang.org/x/text/unicode/norm",
"Rev": "c980adc4a823548817b9c47d38c6ca6b7d7d8b6a"
"Rev": "723492b65e225eafcba054e76ba18bb9c5ac1ea2"
}
]
}

View File

@@ -4,4 +4,6 @@ go:
- 1.1
- 1.2
- 1.3
- 1.4
- 1.5
- tip

View File

@@ -4,7 +4,7 @@ go-lz4
go-lz4 is port of LZ4 lossless compression algorithm to Go. The original C code
is located at:
https://code.google.com/p/lz4/
https://github.com/Cyan4973/lz4
Status
------

View File

@@ -0,0 +1,23 @@
// +build gofuzz
package lz4
import "encoding/binary"
func Fuzz(data []byte) int {
if len(data) < 4 {
return 0
}
ln := binary.LittleEndian.Uint32(data)
if ln > (1 << 21) {
return 0
}
if _, err := Decode(nil, data); err != nil {
return 0
}
return 1
}

View File

@@ -141,7 +141,7 @@ func Decode(dst, src []byte) ([]byte, error) {
length += ln
}
if int(d.spos+length) > len(d.src) {
if int(d.spos+length) > len(d.src) || int(d.dpos+length) > len(d.dst) {
return nil, ErrCorrupt
}
@@ -179,7 +179,12 @@ func Decode(dst, src []byte) ([]byte, error) {
}
literal := d.dpos - d.ref
if literal < 4 {
if int(d.dpos+4) > len(d.dst) {
return nil, ErrCorrupt
}
d.cp(4, decr[literal])
} else {
length += 4

View File

@@ -25,8 +25,10 @@
package lz4
import "encoding/binary"
import "errors"
import (
"encoding/binary"
"errors"
)
const (
minMatch = 4

24
Godeps/_workspace/src/github.com/calmh/du/LICENSE generated vendored Normal file
View File

@@ -0,0 +1,24 @@
This is free and unencumbered software released into the public domain.
Anyone is free to copy, modify, publish, use, compile, sell, or
distribute this software, either in source code form or as a compiled
binary, for any purpose, commercial or non-commercial, and by any
means.
In jurisdictions that recognize copyright laws, the author or authors
of this software dedicate any and all copyright interest in the
software to the public domain. We make this dedication for the benefit
of the public at large and to the detriment of our heirs and
successors. We intend this dedication to be an overt act of
relinquishment in perpetuity of all present and future rights to this
software under copyright law.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
For more information, please refer to <http://unlicense.org>

14
Godeps/_workspace/src/github.com/calmh/du/README.md generated vendored Normal file
View File

@@ -0,0 +1,14 @@
du
==
Get total and available disk space on a given volume.
Documentation
-------------
http://godoc.org/github.com/calmh/du
License
-------
Public Domain

View File

@@ -0,0 +1,21 @@
package main
import (
"fmt"
"log"
"os"
"github.com/calmh/du"
)
var KB = int64(1024)
func main() {
usage, err := du.Get(os.Args[1])
if err != nil {
log.Fatal(err)
}
fmt.Println("Free:", usage.FreeBytes/(KB*KB), "MiB")
fmt.Println("Available:", usage.AvailBytes/(KB*KB), "MiB")
fmt.Println("Size:", usage.TotalBytes/(KB*KB), "MiB")
}

View File

@@ -0,0 +1,8 @@
package du
// Usage holds information about total and available storage on a volume.
type Usage struct {
TotalBytes int64 // Size of volume
FreeBytes int64 // Unused size
AvailBytes int64 // Available to a non-privileged user
}

View File

@@ -0,0 +1,24 @@
// +build !windows,!netbsd,!openbsd,!solaris
package du
import (
"path/filepath"
"syscall"
)
// Get returns the Usage of a given path, or an error if usage data is
// unavailable.
func Get(path string) (Usage, error) {
var stat syscall.Statfs_t
err := syscall.Statfs(filepath.Clean(path), &stat)
if err != nil {
return Usage{}, err
}
u := Usage{
FreeBytes: int64(stat.Bfree) * int64(stat.Bsize),
TotalBytes: int64(stat.Blocks) * int64(stat.Bsize),
AvailBytes: int64(stat.Bavail) * int64(stat.Bsize),
}
return u, nil
}

View File

@@ -0,0 +1,13 @@
// +build netbsd openbsd solaris
package du
import "errors"
var ErrUnsupported = errors.New("unsupported platform")
// Get returns the Usage of a given path, or an error if usage data is
// unavailable.
func Get(path string) (Usage, error) {
return Usage{}, ErrUnsupported
}

View File

@@ -0,0 +1,27 @@
package du
import (
"syscall"
"unsafe"
)
// Get returns the Usage of a given path, or an error if usage data is
// unavailable.
func Get(path string) (Usage, error) {
h := syscall.MustLoadDLL("kernel32.dll")
c := h.MustFindProc("GetDiskFreeSpaceExW")
var u Usage
ret, _, err := c.Call(
uintptr(unsafe.Pointer(syscall.StringToUTF16Ptr(path))),
uintptr(unsafe.Pointer(&u.FreeBytes)),
uintptr(unsafe.Pointer(&u.TotalBytes)),
uintptr(unsafe.Pointer(&u.AvailBytes)))
if ret == 0 {
return u, err
}
return u, nil
}

View File

@@ -1,15 +0,0 @@
logger
======
A small wrapper around `log` to provide log levels.
Documentation
-------------
http://godoc.org/github.com/calmh/logger
License
-------
MIT

View File

@@ -1,160 +0,0 @@
// Copyright (C) 2014 Jakob Borg. All rights reserved. Use of this source code
// is governed by an MIT-style license that can be found in the LICENSE file.
// Package logger implements a standardized logger with callback functionality
package logger
import (
"fmt"
"log"
"os"
"strings"
"sync"
)
type LogLevel int
const (
LevelDebug LogLevel = iota
LevelInfo
LevelOK
LevelWarn
LevelFatal
NumLevels
)
// A MessageHandler is called with the log level and message text.
type MessageHandler func(l LogLevel, msg string)
type Logger struct {
logger *log.Logger
handlers [NumLevels][]MessageHandler
mut sync.Mutex
}
// The default logger logs to standard output with a time prefix.
var DefaultLogger = New()
func New() *Logger {
return &Logger{
logger: log.New(os.Stdout, "", log.Ltime),
}
}
// AddHandler registers a new MessageHandler to receive messages with the
// specified log level or above.
func (l *Logger) AddHandler(level LogLevel, h MessageHandler) {
l.mut.Lock()
defer l.mut.Unlock()
l.handlers[level] = append(l.handlers[level], h)
}
// See log.SetFlags
func (l *Logger) SetFlags(flag int) {
l.logger.SetFlags(flag)
}
// See log.SetPrefix
func (l *Logger) SetPrefix(prefix string) {
l.logger.SetPrefix(prefix)
}
func (l *Logger) callHandlers(level LogLevel, s string) {
for _, h := range l.handlers[level] {
h(level, strings.TrimSpace(s))
}
}
// Debugln logs a line with a DEBUG prefix.
func (l *Logger) Debugln(vals ...interface{}) {
l.mut.Lock()
defer l.mut.Unlock()
s := fmt.Sprintln(vals...)
l.logger.Output(2, "DEBUG: "+s)
l.callHandlers(LevelDebug, s)
}
// Debugf logs a formatted line with a DEBUG prefix.
func (l *Logger) Debugf(format string, vals ...interface{}) {
l.mut.Lock()
defer l.mut.Unlock()
s := fmt.Sprintf(format, vals...)
l.logger.Output(2, "DEBUG: "+s)
l.callHandlers(LevelDebug, s)
}
// Infoln logs a line with an INFO prefix.
func (l *Logger) Infoln(vals ...interface{}) {
l.mut.Lock()
defer l.mut.Unlock()
s := fmt.Sprintln(vals...)
l.logger.Output(2, "INFO: "+s)
l.callHandlers(LevelInfo, s)
}
// Infof logs a formatted line with an INFO prefix.
func (l *Logger) Infof(format string, vals ...interface{}) {
l.mut.Lock()
defer l.mut.Unlock()
s := fmt.Sprintf(format, vals...)
l.logger.Output(2, "INFO: "+s)
l.callHandlers(LevelInfo, s)
}
// Okln logs a line with an OK prefix.
func (l *Logger) Okln(vals ...interface{}) {
l.mut.Lock()
defer l.mut.Unlock()
s := fmt.Sprintln(vals...)
l.logger.Output(2, "OK: "+s)
l.callHandlers(LevelOK, s)
}
// Okf logs a formatted line with an OK prefix.
func (l *Logger) Okf(format string, vals ...interface{}) {
l.mut.Lock()
defer l.mut.Unlock()
s := fmt.Sprintf(format, vals...)
l.logger.Output(2, "OK: "+s)
l.callHandlers(LevelOK, s)
}
// Warnln logs a formatted line with a WARNING prefix.
func (l *Logger) Warnln(vals ...interface{}) {
l.mut.Lock()
defer l.mut.Unlock()
s := fmt.Sprintln(vals...)
l.logger.Output(2, "WARNING: "+s)
l.callHandlers(LevelWarn, s)
}
// Warnf logs a formatted line with a WARNING prefix.
func (l *Logger) Warnf(format string, vals ...interface{}) {
l.mut.Lock()
defer l.mut.Unlock()
s := fmt.Sprintf(format, vals...)
l.logger.Output(2, "WARNING: "+s)
l.callHandlers(LevelWarn, s)
}
// Fatalln logs a line with a FATAL prefix and exits the process with exit
// code 1.
func (l *Logger) Fatalln(vals ...interface{}) {
l.mut.Lock()
defer l.mut.Unlock()
s := fmt.Sprintln(vals...)
l.logger.Output(2, "FATAL: "+s)
l.callHandlers(LevelFatal, s)
os.Exit(1)
}
// Fatalf logs a formatted line with a FATAL prefix and exits the process with
// exit code 1.
func (l *Logger) Fatalf(format string, vals ...interface{}) {
l.mut.Lock()
defer l.mut.Unlock()
s := fmt.Sprintf(format, vals...)
l.logger.Output(2, "FATAL: "+s)
l.callHandlers(LevelFatal, s)
os.Exit(1)
}

View File

@@ -1,58 +0,0 @@
// Copyright (C) 2014 Jakob Borg. All rights reserved. Use of this source code
// is governed by an MIT-style license that can be found in the LICENSE file.
package logger
import (
"strings"
"testing"
)
func TestAPI(t *testing.T) {
l := New()
l.SetFlags(0)
l.SetPrefix("testing")
debug := 0
l.AddHandler(LevelDebug, checkFunc(t, LevelDebug, "test 0", &debug))
info := 0
l.AddHandler(LevelInfo, checkFunc(t, LevelInfo, "test 1", &info))
warn := 0
l.AddHandler(LevelWarn, checkFunc(t, LevelWarn, "test 2", &warn))
ok := 0
l.AddHandler(LevelOK, checkFunc(t, LevelOK, "test 3", &ok))
l.Debugf("test %d", 0)
l.Debugln("test", 0)
l.Infof("test %d", 1)
l.Infoln("test", 1)
l.Warnf("test %d", 2)
l.Warnln("test", 2)
l.Okf("test %d", 3)
l.Okln("test", 3)
if debug != 2 {
t.Errorf("Debug handler called %d != 2 times", debug)
}
if info != 2 {
t.Errorf("Info handler called %d != 2 times", info)
}
if warn != 2 {
t.Errorf("Warn handler called %d != 2 times", warn)
}
if ok != 2 {
t.Errorf("Ok handler called %d != 2 times", ok)
}
}
func checkFunc(t *testing.T, expectl LogLevel, expectmsg string, counter *int) func(LogLevel, string) {
return func(l LogLevel, msg string) {
*counter++
if l != expectl {
t.Errorf("Incorrect message level %d != %d", l, expectl)
}
if !strings.HasSuffix(msg, expectmsg) {
t.Errorf("%q does not end with %q", msg, expectmsg)
}
}
}

View File

@@ -1,20 +0,0 @@
Copyright (c) 2012 Daniel Theophanes
This software is provided 'as-is', without any express or implied
warranty. In no event will the authors be held liable for any damages
arising from the use of this software.
Permission is granted to anyone to use this software for any purpose,
including commercial applications, and to alter it and redistribute it
freely, subject to the following restrictions:
1. The origin of this software must not be misrepresented; you must not
claim that you wrote the original software. If you use this software
in a product, an acknowledgment in the product documentation would be
appreciated but is not required.
2. Altered source versions must be plainly marked as such, and must not be
misrepresented as being the original software.
3. This notice may not be removed or altered from any source
distribution.

View File

@@ -1,79 +0,0 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build darwin linux freebsd netbsd windows
package osext
import (
"fmt"
"os"
oexec "os/exec"
"path/filepath"
"runtime"
"testing"
)
const execPath_EnvVar = "OSTEST_OUTPUT_EXECPATH"
func TestExecPath(t *testing.T) {
ep, err := Executable()
if err != nil {
t.Fatalf("ExecPath failed: %v", err)
}
// we want fn to be of the form "dir/prog"
dir := filepath.Dir(filepath.Dir(ep))
fn, err := filepath.Rel(dir, ep)
if err != nil {
t.Fatalf("filepath.Rel: %v", err)
}
cmd := &oexec.Cmd{}
// make child start with a relative program path
cmd.Dir = dir
cmd.Path = fn
// forge argv[0] for child, so that we can verify we could correctly
// get real path of the executable without influenced by argv[0].
cmd.Args = []string{"-", "-test.run=XXXX"}
cmd.Env = []string{fmt.Sprintf("%s=1", execPath_EnvVar)}
out, err := cmd.CombinedOutput()
if err != nil {
t.Fatalf("exec(self) failed: %v", err)
}
outs := string(out)
if !filepath.IsAbs(outs) {
t.Fatalf("Child returned %q, want an absolute path", out)
}
if !sameFile(outs, ep) {
t.Fatalf("Child returned %q, not the same file as %q", out, ep)
}
}
func sameFile(fn1, fn2 string) bool {
fi1, err := os.Stat(fn1)
if err != nil {
return false
}
fi2, err := os.Stat(fn2)
if err != nil {
return false
}
return os.SameFile(fi1, fi2)
}
func init() {
if e := os.Getenv(execPath_EnvVar); e != "" {
// first chdir to another path
dir := "/"
if runtime.GOOS == "windows" {
dir = filepath.VolumeName(".")
}
os.Chdir(dir)
if ep, err := Executable(); err != nil {
fmt.Fprint(os.Stderr, "ERROR: ", err)
} else {
fmt.Fprint(os.Stderr, ep)
}
os.Exit(0)
}
}

View File

@@ -4,7 +4,7 @@ go:
install:
- export PATH=$PATH:$HOME/gopath/bin
- go get code.google.com/p/go.tools/cmd/cover
- go get golang.org/x/tools/cover
- go get github.com/mattn/goveralls
script:

View File

@@ -1,7 +1,7 @@
xdr
===
[![Build Status](https://img.shields.io/travis/calmh/xdr.svg?style=flat)](https://travis-ci.org/calmh/xdr)
[![Build Status](https://img.shields.io/circleci/project/calmh/xdr.svg?style=flat-square)](https://circleci.com/gh/calmh/xdr)
[![Coverage Status](https://img.shields.io/coveralls/calmh/xdr.svg?style=flat)](https://coveralls.io/r/calmh/xdr?branch=master)
[![API Documentation](http://img.shields.io/badge/api-Godoc-blue.svg?style=flat)](http://godoc.org/github.com/calmh/xdr)
[![MIT License](http://img.shields.io/badge/license-MIT-blue.svg?style=flat)](http://opensource.org/licenses/MIT)

View File

@@ -67,7 +67,7 @@ func BenchmarkThisEncode(b *testing.B) {
func BenchmarkThisEncoder(b *testing.B) {
w := xdr.NewWriter(ioutil.Discard)
for i := 0; i < b.N; i++ {
_, err := s.encodeXDR(w)
_, err := s.EncodeXDRInto(w)
if err != nil {
b.Fatal(err)
}
@@ -108,7 +108,7 @@ func BenchmarkThisDecoder(b *testing.B) {
r := xdr.NewReader(rr)
var t XDRBenchStruct
for i := 0; i < b.N; i++ {
err := t.decodeXDR(r)
err := t.DecodeXDRFrom(r)
if err != nil {
b.Fatal(err)
}

View File

@@ -26,7 +26,9 @@ XDRBenchStruct Structure:
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| 0x0000 | I3 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| uint8 |
/ /
\ uint8 Structure \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Length of Bs0 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
@@ -69,7 +71,7 @@ struct XDRBenchStruct {
func (o XDRBenchStruct) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.encodeXDR(xw)
return o.EncodeXDRInto(xw)
}
func (o XDRBenchStruct) MarshalXDR() ([]byte, error) {
@@ -87,11 +89,11 @@ func (o XDRBenchStruct) MustMarshalXDR() []byte {
func (o XDRBenchStruct) AppendXDR(bs []byte) ([]byte, error) {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
_, err := o.encodeXDR(xw)
_, err := o.EncodeXDRInto(xw)
return []byte(aw), err
}
func (o XDRBenchStruct) encodeXDR(xw *xdr.Writer) (int, error) {
func (o XDRBenchStruct) EncodeXDRInto(xw *xdr.Writer) (int, error) {
xw.WriteUint64(o.I1)
xw.WriteUint32(o.I2)
xw.WriteUint16(o.I3)
@@ -111,16 +113,16 @@ func (o XDRBenchStruct) encodeXDR(xw *xdr.Writer) (int, error) {
func (o *XDRBenchStruct) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}
func (o *XDRBenchStruct) UnmarshalXDR(bs []byte) error {
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}
func (o *XDRBenchStruct) decodeXDR(xr *xdr.Reader) error {
func (o *XDRBenchStruct) DecodeXDRFrom(xr *xdr.Reader) error {
o.I1 = xr.ReadUint64()
o.I2 = xr.ReadUint32()
o.I3 = xr.ReadUint16()
@@ -155,7 +157,7 @@ struct repeatReader {
func (o repeatReader) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.encodeXDR(xw)
return o.EncodeXDRInto(xw)
}
func (o repeatReader) MarshalXDR() ([]byte, error) {
@@ -173,27 +175,27 @@ func (o repeatReader) MustMarshalXDR() []byte {
func (o repeatReader) AppendXDR(bs []byte) ([]byte, error) {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
_, err := o.encodeXDR(xw)
_, err := o.EncodeXDRInto(xw)
return []byte(aw), err
}
func (o repeatReader) encodeXDR(xw *xdr.Writer) (int, error) {
func (o repeatReader) EncodeXDRInto(xw *xdr.Writer) (int, error) {
xw.WriteBytes(o.data)
return xw.Tot(), xw.Error()
}
func (o *repeatReader) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}
func (o *repeatReader) UnmarshalXDR(bs []byte) error {
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}
func (o *repeatReader) decodeXDR(xr *xdr.Reader) error {
func (o *repeatReader) DecodeXDRFrom(xr *xdr.Reader) error {
o.data = xr.ReadBytes()
return xr.Error()
}

View File

@@ -28,6 +28,7 @@ type fieldInfo struct {
Encoder string // the encoder name, i.e. "Uint64" for Read/WriteUint64
Convert string // what to convert to when encoding, i.e. "uint64"
Max int // max size for slices and strings
Submax int // max size for strings inside slices
}
type structInfo struct {
@@ -52,7 +53,7 @@ import (
var encodeTpl = template.Must(template.New("encoder").Parse(`
func (o {{.TypeName}}) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.encodeXDR(xw)
return o.EncodeXDRInto(xw)
}//+n
func (o {{.TypeName}}) MarshalXDR() ([]byte, error) {
@@ -70,11 +71,11 @@ func (o {{.TypeName}}) MustMarshalXDR() []byte {
func (o {{.TypeName}}) AppendXDR(bs []byte) ([]byte, error) {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
_, err := o.encodeXDR(xw)
_, err := o.EncodeXDRInto(xw)
return []byte(aw), err
}//+n
func (o {{.TypeName}}) encodeXDR(xw *xdr.Writer) (int, error) {
func (o {{.TypeName}}) EncodeXDRInto(xw *xdr.Writer) (int, error) {
{{range $fieldInfo := .Fields}}
{{if not $fieldInfo.IsSlice}}
{{if ne $fieldInfo.Convert ""}}
@@ -87,7 +88,7 @@ func (o {{.TypeName}}) encodeXDR(xw *xdr.Writer) (int, error) {
{{end}}
xw.Write{{$fieldInfo.Encoder}}(o.{{$fieldInfo.Name}})
{{else}}
_, err := o.{{$fieldInfo.Name}}.encodeXDR(xw)
_, err := o.{{$fieldInfo.Name}}.EncodeXDRInto(xw)
if err != nil {
return xw.Tot(), err
}
@@ -105,7 +106,7 @@ func (o {{.TypeName}}) encodeXDR(xw *xdr.Writer) (int, error) {
{{else if $fieldInfo.IsBasic}}
xw.Write{{$fieldInfo.Encoder}}(o.{{$fieldInfo.Name}}[i])
{{else}}
_, err := o.{{$fieldInfo.Name}}[i].encodeXDR(xw)
_, err := o.{{$fieldInfo.Name}}[i].EncodeXDRInto(xw)
if err != nil {
return xw.Tot(), err
}
@@ -118,16 +119,16 @@ func (o {{.TypeName}}) encodeXDR(xw *xdr.Writer) (int, error) {
func (o *{{.TypeName}}) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}//+n
func (o *{{.TypeName}}) UnmarshalXDR(bs []byte) error {
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}//+n
func (o *{{.TypeName}}) decodeXDR(xr *xdr.Reader) error {
func (o *{{.TypeName}}) DecodeXDRFrom(xr *xdr.Reader) error {
{{range $fieldInfo := .Fields}}
{{if not $fieldInfo.IsSlice}}
{{if ne $fieldInfo.Convert ""}}
@@ -139,10 +140,13 @@ func (o *{{.TypeName}}) decodeXDR(xr *xdr.Reader) error {
o.{{$fieldInfo.Name}} = xr.Read{{$fieldInfo.Encoder}}()
{{end}}
{{else}}
(&o.{{$fieldInfo.Name}}).decodeXDR(xr)
(&o.{{$fieldInfo.Name}}).DecodeXDRFrom(xr)
{{end}}
{{else}}
_{{$fieldInfo.Name}}Size := int(xr.ReadUint32())
if _{{$fieldInfo.Name}}Size < 0 {
return xdr.ElementSizeExceeded("{{$fieldInfo.Name}}", _{{$fieldInfo.Name}}Size, {{$fieldInfo.Max}})
}
{{if ge $fieldInfo.Max 1}}
if _{{$fieldInfo.Name}}Size > {{$fieldInfo.Max}} {
return xdr.ElementSizeExceeded("{{$fieldInfo.Name}}", _{{$fieldInfo.Name}}Size, {{$fieldInfo.Max}})
@@ -153,9 +157,13 @@ func (o *{{.TypeName}}) decodeXDR(xr *xdr.Reader) error {
{{if ne $fieldInfo.Convert ""}}
o.{{$fieldInfo.Name}}[i] = {{$fieldInfo.FieldType}}(xr.Read{{$fieldInfo.Encoder}}())
{{else if $fieldInfo.IsBasic}}
o.{{$fieldInfo.Name}}[i] = xr.Read{{$fieldInfo.Encoder}}()
{{if ge $fieldInfo.Submax 1}}
o.{{$fieldInfo.Name}}[i] = xr.Read{{$fieldInfo.Encoder}}Max({{$fieldInfo.Submax}})
{{else}}
o.{{$fieldInfo.Name}}[i] = xr.Read{{$fieldInfo.Encoder}}()
{{end}}
{{else}}
(&o.{{$fieldInfo.Name}}[i]).decodeXDR(xr)
(&o.{{$fieldInfo.Name}}[i]).DecodeXDRFrom(xr)
{{end}}
}
{{end}}
@@ -163,7 +171,7 @@ func (o *{{.TypeName}}) decodeXDR(xr *xdr.Reader) error {
return xr.Error()
}`))
var maxRe = regexp.MustCompile(`\Wmax:(\d+)`)
var maxRe = regexp.MustCompile(`(?:\Wmax:)(\d+)(?:\s*,\s*(\d+))?`)
type typeSet struct {
Type string
@@ -195,11 +203,15 @@ func handleStruct(t *ast.StructType) []fieldInfo {
}
fn := sf.Names[0].Name
var max = 0
var max1, max2 int
if sf.Comment != nil {
c := sf.Comment.List[0].Text
if m := maxRe.FindStringSubmatch(c); m != nil {
max, _ = strconv.Atoi(m[1])
m := maxRe.FindStringSubmatch(c)
if len(m) >= 2 {
max1, _ = strconv.Atoi(m[1])
}
if len(m) >= 3 {
max2, _ = strconv.Atoi(m[2])
}
if strings.Contains(c, "noencode") {
continue
@@ -217,14 +229,16 @@ func handleStruct(t *ast.StructType) []fieldInfo {
FieldType: tn,
Encoder: enc.Encoder,
Convert: enc.Type,
Max: max,
Max: max1,
Submax: max2,
}
} else {
f = fieldInfo{
Name: fn,
IsBasic: false,
FieldType: tn,
Max: max,
Max: max1,
Submax: max2,
}
}
@@ -242,7 +256,8 @@ func handleStruct(t *ast.StructType) []fieldInfo {
FieldType: tn,
Encoder: enc.Encoder,
Convert: enc.Type,
Max: max,
Max: max1,
Submax: max2,
}
} else if enc, ok := xdrEncoders[tn]; ok {
f = fieldInfo{
@@ -252,17 +267,26 @@ func handleStruct(t *ast.StructType) []fieldInfo {
FieldType: tn,
Encoder: enc.Encoder,
Convert: enc.Type,
Max: max,
Max: max1,
Submax: max2,
}
} else {
f = fieldInfo{
Name: fn,
IsBasic: false,
IsSlice: true,
FieldType: tn,
Max: max,
Max: max1,
Submax: max2,
}
}
case *ast.SelectorExpr:
f = fieldInfo{
Name: fn,
FieldType: ft.Sel.Name,
Max: max1,
Submax: max2,
}
}
fs = append(fs, f)
@@ -310,10 +334,9 @@ func generateDiagram(output io.Writer, s structInfo) {
for _, f := range fs {
tn := f.FieldType
sl := f.IsSlice
name := uncamelize(f.Name)
if sl {
if f.IsSlice {
fmt.Fprintf(output, "| %s |\n", center("Number of "+name, 61))
fmt.Fprintln(output, line)
}
@@ -340,13 +363,16 @@ func generateDiagram(output io.Writer, s structInfo) {
fmt.Fprintf(output, "/ %61s /\n", "")
fmt.Fprintln(output, line)
default:
if sl {
if f.IsSlice {
tn = "Zero or more " + tn + " Structures"
fmt.Fprintf(output, "/ %s /\n", center("", 61))
fmt.Fprintf(output, "\\ %s \\\n", center(tn, 61))
fmt.Fprintf(output, "/ %s /\n", center("", 61))
} else {
fmt.Fprintf(output, "| %s |\n", center(tn, 61))
tn = tn + " Structure"
fmt.Fprintf(output, "/ %s /\n", center("", 61))
fmt.Fprintf(output, "\\ %s \\\n", center(tn, 61))
fmt.Fprintf(output, "/ %s /\n", center("", 61))
}
fmt.Fprintln(output, line)
}

View File

@@ -32,11 +32,11 @@ type TestStruct struct {
type Opaque [32]byte
func (u *Opaque) encodeXDR(w *xdr.Writer) (int, error) {
func (u *Opaque) EncodeXDRInto(w *xdr.Writer) (int, error) {
return w.WriteRaw(u[:])
}
func (u *Opaque) decodeXDR(r *xdr.Reader) (int, error) {
func (u *Opaque) DecodeXDRFrom(r *xdr.Reader) (int, error) {
return r.ReadRaw(u[:])
}

View File

@@ -18,17 +18,23 @@ TestStruct Structure:
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| int |
/ /
\ int Structure \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| int8 |
/ /
\ int8 Structure \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| uint8 |
/ /
\ uint8 Structure \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| int16 |
| 0x0000 | I16 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| 0x0000 | UI16 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| int32 |
| I32 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| UI32 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
@@ -52,7 +58,9 @@ TestStruct Structure:
\ S (variable length) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Opaque |
/ /
\ Opaque Structure \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Number of SS |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
@@ -68,9 +76,9 @@ struct TestStruct {
int I;
int8 I8;
uint8 UI8;
int16 I16;
int I16;
unsigned int UI16;
int32 I32;
int I32;
unsigned int UI32;
hyper I64;
unsigned hyper UI64;
@@ -84,7 +92,7 @@ struct TestStruct {
func (o TestStruct) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.encodeXDR(xw)
return o.EncodeXDRInto(xw)
}
func (o TestStruct) MarshalXDR() ([]byte, error) {
@@ -102,11 +110,11 @@ func (o TestStruct) MustMarshalXDR() []byte {
func (o TestStruct) AppendXDR(bs []byte) ([]byte, error) {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
_, err := o.encodeXDR(xw)
_, err := o.EncodeXDRInto(xw)
return []byte(aw), err
}
func (o TestStruct) encodeXDR(xw *xdr.Writer) (int, error) {
func (o TestStruct) EncodeXDRInto(xw *xdr.Writer) (int, error) {
xw.WriteUint64(uint64(o.I))
xw.WriteUint8(uint8(o.I8))
xw.WriteUint8(o.UI8)
@@ -124,7 +132,7 @@ func (o TestStruct) encodeXDR(xw *xdr.Writer) (int, error) {
return xw.Tot(), xdr.ElementSizeExceeded("S", l, 1024)
}
xw.WriteString(o.S)
_, err := o.C.encodeXDR(xw)
_, err := o.C.EncodeXDRInto(xw)
if err != nil {
return xw.Tot(), err
}
@@ -140,16 +148,16 @@ func (o TestStruct) encodeXDR(xw *xdr.Writer) (int, error) {
func (o *TestStruct) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}
func (o *TestStruct) UnmarshalXDR(bs []byte) error {
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.decodeXDR(xr)
return o.DecodeXDRFrom(xr)
}
func (o *TestStruct) decodeXDR(xr *xdr.Reader) error {
func (o *TestStruct) DecodeXDRFrom(xr *xdr.Reader) error {
o.I = int(xr.ReadUint64())
o.I8 = int8(xr.ReadUint8())
o.UI8 = xr.ReadUint8()
@@ -161,8 +169,11 @@ func (o *TestStruct) decodeXDR(xr *xdr.Reader) error {
o.UI64 = xr.ReadUint64()
o.BS = xr.ReadBytesMax(1024)
o.S = xr.ReadStringMax(1024)
(&o.C).decodeXDR(xr)
(&o.C).DecodeXDRFrom(xr)
_SSSize := int(xr.ReadUint32())
if _SSSize < 0 {
return xdr.ElementSizeExceeded("SS", _SSSize, 1024)
}
if _SSSize > 1024 {
return xdr.ElementSizeExceeded("SS", _SSSize, 1024)
}

View File

@@ -68,7 +68,8 @@ func (r *Reader) ReadBytesMaxInto(max int, dst []byte) []byte {
if r.err != nil {
return nil
}
if max > 0 && l > max {
if l < 0 || max > 0 && l > max {
// l may be negative on 32 bit builds
r.err = ElementSizeExceeded("bytes field", l, max)
return nil
}

14
Godeps/_workspace/src/github.com/golang/snappy/AUTHORS generated vendored Normal file
View File

@@ -0,0 +1,14 @@
# This is the official list of Snappy-Go authors for copyright purposes.
# This file is distinct from the CONTRIBUTORS files.
# See the latter for an explanation.
# Names should be added to this file as
# Name or Organization <email address>
# The email address is not required for organizations.
# Please keep the list sorted.
Damian Gryski <dgryski@gmail.com>
Google Inc.
Jan Mercl <0xjnml@gmail.com>
Sebastien Binet <seb.binet@gmail.com>

View File

@@ -0,0 +1,36 @@
# This is the official list of people who can contribute
# (and typically have contributed) code to the Snappy-Go repository.
# The AUTHORS file lists the copyright holders; this file
# lists people. For example, Google employees are listed here
# but not in AUTHORS, because Google holds the copyright.
#
# The submission process automatically checks to make sure
# that people submitting code are listed in this file (by email address).
#
# Names should be added to this file only after verifying that
# the individual or the individual's organization has agreed to
# the appropriate Contributor License Agreement, found here:
#
# http://code.google.com/legal/individual-cla-v1.0.html
# http://code.google.com/legal/corporate-cla-v1.0.html
#
# The agreement for individuals can be filled out on the web.
#
# When adding J Random Contributor's name to this file,
# either J's name or J's organization's name should be
# added to the AUTHORS file, depending on whether the
# individual or corporate CLA was used.
# Names should be added to this file like so:
# Name <email address>
# Please keep the list sorted.
Damian Gryski <dgryski@gmail.com>
Jan Mercl <0xjnml@gmail.com>
Kai Backman <kaib@golang.org>
Marc-Antoine Ruel <maruel@chromium.org>
Nigel Tao <nigeltao@golang.org>
Rob Pike <r@golang.org>
Russ Cox <rsc@golang.org>
Sebastien Binet <seb.binet@gmail.com>

27
Godeps/_workspace/src/github.com/golang/snappy/LICENSE generated vendored Normal file
View File

@@ -0,0 +1,27 @@
Copyright (c) 2011 The Snappy-Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

View File

@@ -0,0 +1,7 @@
The Snappy compression format in the Go programming language.
To download and install from source:
$ go get github.com/golang/snappy
Unless otherwise noted, the Snappy-Go source files are distributed
under the BSD-style license found in the LICENSE file.

View File

@@ -0,0 +1,294 @@
// Copyright 2011 The Snappy-Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package snappy
import (
"encoding/binary"
"errors"
"io"
)
var (
// ErrCorrupt reports that the input is invalid.
ErrCorrupt = errors.New("snappy: corrupt input")
// ErrTooLarge reports that the uncompressed length is too large.
ErrTooLarge = errors.New("snappy: decoded block is too large")
// ErrUnsupported reports that the input isn't supported.
ErrUnsupported = errors.New("snappy: unsupported input")
)
// DecodedLen returns the length of the decoded block.
func DecodedLen(src []byte) (int, error) {
v, _, err := decodedLen(src)
return v, err
}
// decodedLen returns the length of the decoded block and the number of bytes
// that the length header occupied.
func decodedLen(src []byte) (blockLen, headerLen int, err error) {
v, n := binary.Uvarint(src)
if n <= 0 || v > 0xffffffff {
return 0, 0, ErrCorrupt
}
const wordSize = 32 << (^uint(0) >> 32 & 1)
if wordSize == 32 && v > 0x7fffffff {
return 0, 0, ErrTooLarge
}
return int(v), n, nil
}
// Decode returns the decoded form of src. The returned slice may be a sub-
// slice of dst if dst was large enough to hold the entire decoded block.
// Otherwise, a newly allocated slice will be returned.
// It is valid to pass a nil dst.
func Decode(dst, src []byte) ([]byte, error) {
dLen, s, err := decodedLen(src)
if err != nil {
return nil, err
}
if len(dst) < dLen {
dst = make([]byte, dLen)
}
var d, offset, length int
for s < len(src) {
switch src[s] & 0x03 {
case tagLiteral:
x := uint(src[s] >> 2)
switch {
case x < 60:
s++
case x == 60:
s += 2
if s > len(src) {
return nil, ErrCorrupt
}
x = uint(src[s-1])
case x == 61:
s += 3
if s > len(src) {
return nil, ErrCorrupt
}
x = uint(src[s-2]) | uint(src[s-1])<<8
case x == 62:
s += 4
if s > len(src) {
return nil, ErrCorrupt
}
x = uint(src[s-3]) | uint(src[s-2])<<8 | uint(src[s-1])<<16
case x == 63:
s += 5
if s > len(src) {
return nil, ErrCorrupt
}
x = uint(src[s-4]) | uint(src[s-3])<<8 | uint(src[s-2])<<16 | uint(src[s-1])<<24
}
length = int(x + 1)
if length <= 0 {
return nil, errors.New("snappy: unsupported literal length")
}
if length > len(dst)-d || length > len(src)-s {
return nil, ErrCorrupt
}
copy(dst[d:], src[s:s+length])
d += length
s += length
continue
case tagCopy1:
s += 2
if s > len(src) {
return nil, ErrCorrupt
}
length = 4 + int(src[s-2])>>2&0x7
offset = int(src[s-2])&0xe0<<3 | int(src[s-1])
case tagCopy2:
s += 3
if s > len(src) {
return nil, ErrCorrupt
}
length = 1 + int(src[s-3])>>2
offset = int(src[s-2]) | int(src[s-1])<<8
case tagCopy4:
return nil, errors.New("snappy: unsupported COPY_4 tag")
}
end := d + length
if offset > d || end > len(dst) {
return nil, ErrCorrupt
}
for ; d < end; d++ {
dst[d] = dst[d-offset]
}
}
if d != dLen {
return nil, ErrCorrupt
}
return dst[:d], nil
}
// NewReader returns a new Reader that decompresses from r, using the framing
// format described at
// https://github.com/google/snappy/blob/master/framing_format.txt
func NewReader(r io.Reader) *Reader {
return &Reader{
r: r,
decoded: make([]byte, maxUncompressedChunkLen),
buf: make([]byte, MaxEncodedLen(maxUncompressedChunkLen)+checksumSize),
}
}
// Reader is an io.Reader that can read Snappy-compressed bytes.
type Reader struct {
r io.Reader
err error
decoded []byte
buf []byte
// decoded[i:j] contains decoded bytes that have not yet been passed on.
i, j int
readHeader bool
}
// Reset discards any buffered data, resets all state, and switches the Snappy
// reader to read from r. This permits reusing a Reader rather than allocating
// a new one.
func (r *Reader) Reset(reader io.Reader) {
r.r = reader
r.err = nil
r.i = 0
r.j = 0
r.readHeader = false
}
func (r *Reader) readFull(p []byte) (ok bool) {
if _, r.err = io.ReadFull(r.r, p); r.err != nil {
if r.err == io.ErrUnexpectedEOF {
r.err = ErrCorrupt
}
return false
}
return true
}
// Read satisfies the io.Reader interface.
func (r *Reader) Read(p []byte) (int, error) {
if r.err != nil {
return 0, r.err
}
for {
if r.i < r.j {
n := copy(p, r.decoded[r.i:r.j])
r.i += n
return n, nil
}
if !r.readFull(r.buf[:4]) {
return 0, r.err
}
chunkType := r.buf[0]
if !r.readHeader {
if chunkType != chunkTypeStreamIdentifier {
r.err = ErrCorrupt
return 0, r.err
}
r.readHeader = true
}
chunkLen := int(r.buf[1]) | int(r.buf[2])<<8 | int(r.buf[3])<<16
if chunkLen > len(r.buf) {
r.err = ErrUnsupported
return 0, r.err
}
// The chunk types are specified at
// https://github.com/google/snappy/blob/master/framing_format.txt
switch chunkType {
case chunkTypeCompressedData:
// Section 4.2. Compressed data (chunk type 0x00).
if chunkLen < checksumSize {
r.err = ErrCorrupt
return 0, r.err
}
buf := r.buf[:chunkLen]
if !r.readFull(buf) {
return 0, r.err
}
checksum := uint32(buf[0]) | uint32(buf[1])<<8 | uint32(buf[2])<<16 | uint32(buf[3])<<24
buf = buf[checksumSize:]
n, err := DecodedLen(buf)
if err != nil {
r.err = err
return 0, r.err
}
if n > len(r.decoded) {
r.err = ErrCorrupt
return 0, r.err
}
if _, err := Decode(r.decoded, buf); err != nil {
r.err = err
return 0, r.err
}
if crc(r.decoded[:n]) != checksum {
r.err = ErrCorrupt
return 0, r.err
}
r.i, r.j = 0, n
continue
case chunkTypeUncompressedData:
// Section 4.3. Uncompressed data (chunk type 0x01).
if chunkLen < checksumSize {
r.err = ErrCorrupt
return 0, r.err
}
buf := r.buf[:checksumSize]
if !r.readFull(buf) {
return 0, r.err
}
checksum := uint32(buf[0]) | uint32(buf[1])<<8 | uint32(buf[2])<<16 | uint32(buf[3])<<24
// Read directly into r.decoded instead of via r.buf.
n := chunkLen - checksumSize
if !r.readFull(r.decoded[:n]) {
return 0, r.err
}
if crc(r.decoded[:n]) != checksum {
r.err = ErrCorrupt
return 0, r.err
}
r.i, r.j = 0, n
continue
case chunkTypeStreamIdentifier:
// Section 4.1. Stream identifier (chunk type 0xff).
if chunkLen != len(magicBody) {
r.err = ErrCorrupt
return 0, r.err
}
if !r.readFull(r.buf[:len(magicBody)]) {
return 0, r.err
}
for i := 0; i < len(magicBody); i++ {
if r.buf[i] != magicBody[i] {
r.err = ErrCorrupt
return 0, r.err
}
}
continue
}
if chunkType <= 0x7f {
// Section 4.5. Reserved unskippable chunks (chunk types 0x02-0x7f).
r.err = ErrUnsupported
return 0, r.err
}
// Section 4.4 Padding (chunk type 0xfe).
// Section 4.6. Reserved skippable chunks (chunk types 0x80-0xfd).
if !r.readFull(r.buf[:chunkLen]) {
return 0, r.err
}
}
}

View File

@@ -6,6 +6,7 @@ package snappy
import (
"encoding/binary"
"io"
)
// We limit how far copy back-references can go, the same as the C++ code.
@@ -78,7 +79,7 @@ func emitCopy(dst []byte, offset, length int) int {
// slice of dst if dst was large enough to hold the entire encoded block.
// Otherwise, a newly allocated slice will be returned.
// It is valid to pass a nil dst.
func Encode(dst, src []byte) ([]byte, error) {
func Encode(dst, src []byte) []byte {
if n := MaxEncodedLen(len(src)); len(dst) < n {
dst = make([]byte, n)
}
@@ -91,7 +92,7 @@ func Encode(dst, src []byte) ([]byte, error) {
if len(src) != 0 {
d += emitLiteral(dst[d:], src)
}
return dst[:d], nil
return dst[:d]
}
// Initialize the hash table. Its size ranges from 1<<8 to 1<<14 inclusive.
@@ -144,7 +145,7 @@ func Encode(dst, src []byte) ([]byte, error) {
if lit != len(src) {
d += emitLiteral(dst[d:], src[lit:])
}
return dst[:d], nil
return dst[:d]
}
// MaxEncodedLen returns the maximum length of a snappy block, given its
@@ -172,3 +173,82 @@ func MaxEncodedLen(srcLen int) int {
// This last factor dominates the blowup, so the final estimate is:
return 32 + srcLen + srcLen/6
}
// NewWriter returns a new Writer that compresses to w, using the framing
// format described at
// https://github.com/google/snappy/blob/master/framing_format.txt
func NewWriter(w io.Writer) *Writer {
return &Writer{
w: w,
enc: make([]byte, MaxEncodedLen(maxUncompressedChunkLen)),
}
}
// Writer is an io.Writer that can write Snappy-compressed bytes.
type Writer struct {
w io.Writer
err error
enc []byte
buf [checksumSize + chunkHeaderSize]byte
wroteHeader bool
}
// Reset discards the writer's state and switches the Snappy writer to write to
// w. This permits reusing a Writer rather than allocating a new one.
func (w *Writer) Reset(writer io.Writer) {
w.w = writer
w.err = nil
w.wroteHeader = false
}
// Write satisfies the io.Writer interface.
func (w *Writer) Write(p []byte) (n int, errRet error) {
if w.err != nil {
return 0, w.err
}
if !w.wroteHeader {
copy(w.enc, magicChunk)
if _, err := w.w.Write(w.enc[:len(magicChunk)]); err != nil {
w.err = err
return n, err
}
w.wroteHeader = true
}
for len(p) > 0 {
var uncompressed []byte
if len(p) > maxUncompressedChunkLen {
uncompressed, p = p[:maxUncompressedChunkLen], p[maxUncompressedChunkLen:]
} else {
uncompressed, p = p, nil
}
checksum := crc(uncompressed)
// Compress the buffer, discarding the result if the improvement
// isn't at least 12.5%.
chunkType := uint8(chunkTypeCompressedData)
chunkBody := Encode(w.enc, uncompressed)
if len(chunkBody) >= len(uncompressed)-len(uncompressed)/8 {
chunkType, chunkBody = chunkTypeUncompressedData, uncompressed
}
chunkLen := 4 + len(chunkBody)
w.buf[0] = chunkType
w.buf[1] = uint8(chunkLen >> 0)
w.buf[2] = uint8(chunkLen >> 8)
w.buf[3] = uint8(chunkLen >> 16)
w.buf[4] = uint8(checksum >> 0)
w.buf[5] = uint8(checksum >> 8)
w.buf[6] = uint8(checksum >> 16)
w.buf[7] = uint8(checksum >> 24)
if _, err := w.w.Write(w.buf[:]); err != nil {
w.err = err
return n, err
}
if _, err := w.w.Write(chunkBody); err != nil {
w.err = err
return n, err
}
n += len(uncompressed)
}
return n, nil
}

View File

@@ -5,9 +5,13 @@
// Package snappy implements the snappy block-based compression format.
// It aims for very high speeds and reasonable compression.
//
// The C++ snappy implementation is at http://code.google.com/p/snappy/
// The C++ snappy implementation is at https://github.com/google/snappy
package snappy
import (
"hash/crc32"
)
/*
Each encoded block begins with the varint-encoded length of the decoded data,
followed by a sequence of chunks. Chunks begin and end on byte boundaries. The
@@ -36,3 +40,29 @@ const (
tagCopy2 = 0x02
tagCopy4 = 0x03
)
const (
checksumSize = 4
chunkHeaderSize = 4
magicChunk = "\xff\x06\x00\x00" + magicBody
magicBody = "sNaPpY"
// https://github.com/google/snappy/blob/master/framing_format.txt says
// that "the uncompressed data in a chunk must be no longer than 65536 bytes".
maxUncompressedChunkLen = 65536
)
const (
chunkTypeCompressedData = 0x00
chunkTypeUncompressedData = 0x01
chunkTypePadding = 0xfe
chunkTypeStreamIdentifier = 0xff
)
var crcTable = crc32.MakeTable(crc32.Castagnoli)
// crc implements the checksum specified in section 3 of
// https://github.com/google/snappy/blob/master/framing_format.txt
func crc(b []byte) uint32 {
c := crc32.Update(0, crcTable, b)
return uint32(c>>15|c<<17) + 0xa282ead8
}

View File

@@ -18,14 +18,13 @@ import (
"testing"
)
var download = flag.Bool("download", false, "If true, download any missing files before running benchmarks")
var (
download = flag.Bool("download", false, "If true, download any missing files before running benchmarks")
testdata = flag.String("testdata", "testdata", "Directory containing the test data")
)
func roundtrip(b, ebuf, dbuf []byte) error {
e, err := Encode(ebuf, b)
if err != nil {
return fmt.Errorf("encoding error: %v", err)
}
d, err := Decode(dbuf, e)
d, err := Decode(dbuf, Encode(ebuf, b))
if err != nil {
return fmt.Errorf("decoding error: %v", err)
}
@@ -55,11 +54,11 @@ func TestSmallCopy(t *testing.T) {
}
func TestSmallRand(t *testing.T) {
rand.Seed(27354294)
rng := rand.New(rand.NewSource(27354294))
for n := 1; n < 20000; n += 23 {
b := make([]byte, n)
for i, _ := range b {
b[i] = uint8(rand.Uint32())
for i := range b {
b[i] = uint8(rng.Uint32())
}
if err := roundtrip(b, nil, nil); err != nil {
t.Fatal(err)
@@ -70,7 +69,7 @@ func TestSmallRand(t *testing.T) {
func TestSmallRegular(t *testing.T) {
for n := 1; n < 20000; n += 23 {
b := make([]byte, n)
for i, _ := range b {
for i := range b {
b[i] = uint8(i%10 + 'a')
}
if err := roundtrip(b, nil, nil); err != nil {
@@ -79,11 +78,142 @@ func TestSmallRegular(t *testing.T) {
}
}
func benchDecode(b *testing.B, src []byte) {
encoded, err := Encode(nil, src)
if err != nil {
b.Fatal(err)
func TestInvalidVarint(t *testing.T) {
data := []byte("\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x00")
if _, err := DecodedLen(data); err != ErrCorrupt {
t.Errorf("DecodedLen: got %v, want ErrCorrupt", err)
}
if _, err := Decode(nil, data); err != ErrCorrupt {
t.Errorf("Decode: got %v, want ErrCorrupt", err)
}
// The encoded varint overflows 32 bits
data = []byte("\xff\xff\xff\xff\xff\x00")
if _, err := DecodedLen(data); err != ErrCorrupt {
t.Errorf("DecodedLen: got %v, want ErrCorrupt", err)
}
if _, err := Decode(nil, data); err != ErrCorrupt {
t.Errorf("Decode: got %v, want ErrCorrupt", err)
}
}
func cmp(a, b []byte) error {
if len(a) != len(b) {
return fmt.Errorf("got %d bytes, want %d", len(a), len(b))
}
for i := range a {
if a[i] != b[i] {
return fmt.Errorf("byte #%d: got 0x%02x, want 0x%02x", i, a[i], b[i])
}
}
return nil
}
func TestFramingFormat(t *testing.T) {
// src is comprised of alternating 1e5-sized sequences of random
// (incompressible) bytes and repeated (compressible) bytes. 1e5 was chosen
// because it is larger than maxUncompressedChunkLen (64k).
src := make([]byte, 1e6)
rng := rand.New(rand.NewSource(1))
for i := 0; i < 10; i++ {
if i%2 == 0 {
for j := 0; j < 1e5; j++ {
src[1e5*i+j] = uint8(rng.Intn(256))
}
} else {
for j := 0; j < 1e5; j++ {
src[1e5*i+j] = uint8(i)
}
}
}
buf := new(bytes.Buffer)
if _, err := NewWriter(buf).Write(src); err != nil {
t.Fatalf("Write: encoding: %v", err)
}
dst, err := ioutil.ReadAll(NewReader(buf))
if err != nil {
t.Fatalf("ReadAll: decoding: %v", err)
}
if err := cmp(dst, src); err != nil {
t.Fatal(err)
}
}
func TestReaderReset(t *testing.T) {
gold := bytes.Repeat([]byte("All that is gold does not glitter,\n"), 10000)
buf := new(bytes.Buffer)
if _, err := NewWriter(buf).Write(gold); err != nil {
t.Fatalf("Write: %v", err)
}
encoded, invalid, partial := buf.String(), "invalid", "partial"
r := NewReader(nil)
for i, s := range []string{encoded, invalid, partial, encoded, partial, invalid, encoded, encoded} {
if s == partial {
r.Reset(strings.NewReader(encoded))
if _, err := r.Read(make([]byte, 101)); err != nil {
t.Errorf("#%d: %v", i, err)
continue
}
continue
}
r.Reset(strings.NewReader(s))
got, err := ioutil.ReadAll(r)
switch s {
case encoded:
if err != nil {
t.Errorf("#%d: %v", i, err)
continue
}
if err := cmp(got, gold); err != nil {
t.Errorf("#%d: %v", i, err)
continue
}
case invalid:
if err == nil {
t.Errorf("#%d: got nil error, want non-nil", i)
continue
}
}
}
}
func TestWriterReset(t *testing.T) {
gold := bytes.Repeat([]byte("Not all those who wander are lost;\n"), 10000)
var gots, wants [][]byte
const n = 20
w, failed := NewWriter(nil), false
for i := 0; i <= n; i++ {
buf := new(bytes.Buffer)
w.Reset(buf)
want := gold[:len(gold)*i/n]
if _, err := w.Write(want); err != nil {
t.Errorf("#%d: Write: %v", i, err)
failed = true
continue
}
got, err := ioutil.ReadAll(NewReader(buf))
if err != nil {
t.Errorf("#%d: ReadAll: %v", i, err)
failed = true
continue
}
gots = append(gots, got)
wants = append(wants, want)
}
if failed {
return
}
for i := range gots {
if err := cmp(gots[i], wants[i]); err != nil {
t.Errorf("#%d: %v", i, err)
}
}
}
func benchDecode(b *testing.B, src []byte) {
encoded := Encode(nil, src)
// Bandwidth is in amount of uncompressed data.
b.SetBytes(int64(len(src)))
b.ResetTimer()
@@ -102,10 +232,10 @@ func benchEncode(b *testing.B, src []byte) {
}
}
func readFile(b *testing.B, filename string) []byte {
func readFile(b testing.TB, filename string) []byte {
src, err := ioutil.ReadFile(filename)
if err != nil {
b.Fatalf("failed reading %s: %s", filename, err)
b.Skipf("skipping benchmark: %v", err)
}
if len(src) == 0 {
b.Fatalf("%s has zero length", filename)
@@ -144,7 +274,7 @@ func BenchmarkWordsEncode1e5(b *testing.B) { benchWords(b, 1e5, false) }
func BenchmarkWordsEncode1e6(b *testing.B) { benchWords(b, 1e6, false) }
// testFiles' values are copied directly from
// https://code.google.com/p/snappy/source/browse/trunk/snappy_unittest.cc.
// https://raw.githubusercontent.com/google/snappy/master/snappy_unittest.cc
// The label field is unused in snappy-go.
var testFiles = []struct {
label string
@@ -152,29 +282,36 @@ var testFiles = []struct {
}{
{"html", "html"},
{"urls", "urls.10K"},
{"jpg", "house.jpg"},
{"pdf", "mapreduce-osdi-1.pdf"},
{"jpg", "fireworks.jpeg"},
{"jpg_200", "fireworks.jpeg"},
{"pdf", "paper-100k.pdf"},
{"html4", "html_x_4"},
{"cp", "cp.html"},
{"c", "fields.c"},
{"lsp", "grammar.lsp"},
{"xls", "kennedy.xls"},
{"txt1", "alice29.txt"},
{"txt2", "asyoulik.txt"},
{"txt3", "lcet10.txt"},
{"txt4", "plrabn12.txt"},
{"bin", "ptt5"},
{"sum", "sum"},
{"man", "xargs.1"},
{"pb", "geo.protodata"},
{"gaviota", "kppkn.gtb"},
}
// The test data files are present at this canonical URL.
const baseURL = "https://snappy.googlecode.com/svn/trunk/testdata/"
const baseURL = "https://raw.githubusercontent.com/google/snappy/master/testdata/"
func downloadTestdata(b *testing.B, basename string) (errRet error) {
filename := filepath.Join(*testdata, basename)
if stat, err := os.Stat(filename); err == nil && stat.Size() != 0 {
return nil
}
if !*download {
b.Skipf("test data not found; skipping benchmark without the -download flag")
}
// Download the official snappy C++ implementation reference test data
// files for benchmarking.
if err := os.Mkdir(*testdata, 0777); err != nil && !os.IsExist(err) {
return fmt.Errorf("failed to create testdata: %s", err)
}
func downloadTestdata(basename string) (errRet error) {
filename := filepath.Join("testdata", basename)
f, err := os.Create(filename)
if err != nil {
return fmt.Errorf("failed to create %s: %s", filename, err)
@@ -185,36 +322,27 @@ func downloadTestdata(basename string) (errRet error) {
os.Remove(filename)
}
}()
resp, err := http.Get(baseURL + basename)
url := baseURL + basename
resp, err := http.Get(url)
if err != nil {
return fmt.Errorf("failed to download %s: %s", baseURL+basename, err)
return fmt.Errorf("failed to download %s: %s", url, err)
}
defer resp.Body.Close()
if s := resp.StatusCode; s != http.StatusOK {
return fmt.Errorf("downloading %s: HTTP status code %d (%s)", url, s, http.StatusText(s))
}
_, err = io.Copy(f, resp.Body)
if err != nil {
return fmt.Errorf("failed to write %s: %s", filename, err)
return fmt.Errorf("failed to download %s to %s: %s", url, filename, err)
}
return nil
}
func benchFile(b *testing.B, n int, decode bool) {
filename := filepath.Join("testdata", testFiles[n].filename)
if stat, err := os.Stat(filename); err != nil || stat.Size() == 0 {
if !*download {
b.Fatal("test data not found; skipping benchmark without the -download flag")
}
// Download the official snappy C++ implementation reference test data
// files for benchmarking.
if err := os.Mkdir("testdata", 0777); err != nil && !os.IsExist(err) {
b.Fatalf("failed to create testdata: %s", err)
}
for _, tf := range testFiles {
if err := downloadTestdata(tf.filename); err != nil {
b.Fatalf("failed to download testdata: %s", err)
}
}
if err := downloadTestdata(b, testFiles[n].filename); err != nil {
b.Fatalf("failed to download testdata: %s", err)
}
data := readFile(b, filename)
data := readFile(b, filepath.Join(*testdata, testFiles[n].filename))
if decode {
benchDecode(b, data)
} else {
@@ -235,12 +363,6 @@ func Benchmark_UFlat8(b *testing.B) { benchFile(b, 8, true) }
func Benchmark_UFlat9(b *testing.B) { benchFile(b, 9, true) }
func Benchmark_UFlat10(b *testing.B) { benchFile(b, 10, true) }
func Benchmark_UFlat11(b *testing.B) { benchFile(b, 11, true) }
func Benchmark_UFlat12(b *testing.B) { benchFile(b, 12, true) }
func Benchmark_UFlat13(b *testing.B) { benchFile(b, 13, true) }
func Benchmark_UFlat14(b *testing.B) { benchFile(b, 14, true) }
func Benchmark_UFlat15(b *testing.B) { benchFile(b, 15, true) }
func Benchmark_UFlat16(b *testing.B) { benchFile(b, 16, true) }
func Benchmark_UFlat17(b *testing.B) { benchFile(b, 17, true) }
func Benchmark_ZFlat0(b *testing.B) { benchFile(b, 0, false) }
func Benchmark_ZFlat1(b *testing.B) { benchFile(b, 1, false) }
func Benchmark_ZFlat2(b *testing.B) { benchFile(b, 2, false) }
@@ -253,9 +375,3 @@ func Benchmark_ZFlat8(b *testing.B) { benchFile(b, 8, false) }
func Benchmark_ZFlat9(b *testing.B) { benchFile(b, 9, false) }
func Benchmark_ZFlat10(b *testing.B) { benchFile(b, 10, false) }
func Benchmark_ZFlat11(b *testing.B) { benchFile(b, 11, false) }
func Benchmark_ZFlat12(b *testing.B) { benchFile(b, 12, false) }
func Benchmark_ZFlat13(b *testing.B) { benchFile(b, 13, false) }
func Benchmark_ZFlat14(b *testing.B) { benchFile(b, 14, false) }
func Benchmark_ZFlat15(b *testing.B) { benchFile(b, 15, false) }
func Benchmark_ZFlat16(b *testing.B) { benchFile(b, 16, false) }
func Benchmark_ZFlat17(b *testing.B) { benchFile(b, 17, false) }

View File

@@ -1,3 +1,9 @@
All files in this repository are licensed as follows. If you contribute
to this repository, it is assumed that you license your contribution
under the same license unless you state otherwise.
All files Copyright (C) 2015 Canonical Ltd. unless otherwise specified in the file.
This software is licensed under the LGPLv3, included below.
As a special exception to the GNU Lesser General Public License version 3

View File

@@ -20,7 +20,7 @@ token in the bucket represents one byte.
```go
func Writer(w io.Writer, bucket *Bucket) io.Writer
```
Writer returns a reader that is rate limited by the given token bucket. Each
Writer returns a writer that is rate limited by the given token bucket. Each
token in the bucket represents one byte.
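
A short usage sketch of the rate-limited writer, assuming the github.com/juju/ratelimit import path and the NewBucketWithRate constructor shown further down in this package; the 64 KiB rate is illustrative:

```go
package main

import (
	"io"
	"log"
	"os"

	"github.com/juju/ratelimit"
)

func main() {
	// Fill rate of 64 KiB/s with a burst capacity of 64 KiB; each token is one byte.
	bucket := ratelimit.NewBucketWithRate(64*1024, 64*1024)

	// Wrap stdout so writes are throttled by the bucket.
	w := ratelimit.Writer(os.Stdout, bucket)
	if _, err := io.Copy(w, os.Stdin); err != nil {
		log.Fatal(err)
	}
}
```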
#### type Bucket

View File

@@ -2,7 +2,8 @@
// Licensed under the LGPLv3 with static-linking exception.
// See LICENCE file for details.
// The ratelimit package provides an efficient token bucket implementation.
// The ratelimit package provides an efficient token bucket implementation
// that can be used to limit the rate of arbitrary things.
// See http://en.wikipedia.org/wiki/Token_bucket.
package ratelimit
@@ -10,6 +11,7 @@ import (
"strconv"
"sync"
"time"
"math"
)
// Bucket represents a token bucket that fills at a predetermined rate.
@@ -55,7 +57,7 @@ func NewBucketWithRate(rate float64, capacity int64) *Bucket {
continue
}
tb := NewBucketWithQuantum(fillInterval, capacity, quantum)
if diff := abs(tb.Rate() - rate); diff/rate <= rateMargin {
if diff := math.Abs(tb.Rate() - rate); diff/rate <= rateMargin {
return tb
}
}
@@ -217,10 +219,3 @@ func (tb *Bucket) adjust(now time.Time) (currentTick int64) {
tb.availTick = currentTick
return
}
func abs(f float64) float64 {
if f < 0 {
return -f
}
return f
}

View File

@@ -0,0 +1,27 @@
Copyright (c) 2012 The Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

View File

@@ -0,0 +1,16 @@
## Extensions to the "os" package.
### Find the current Executable and ExecutableFolder.
It is sometimes useful to find the current executable file that is
running, for example to upgrade the running binary or to locate
resources relative to the executable. Neither the working directory
nor the os.Args[0] value can be relied on for this; os.Args[0] can be
"faked".
Multi-platform and supports:
* Linux
* OS X
* Windows
* Plan 9
* BSDs.
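
A minimal usage sketch, assuming the github.com/kardianos/osext import path used for this vendored copy:

```go
package main

import (
	"fmt"
	"log"

	"github.com/kardianos/osext"
)

func main() {
	// Absolute path of the running binary.
	exe, err := osext.Executable()
	if err != nil {
		log.Fatal(err)
	}
	// Directory containing the binary, without the executable name
	// or a trailing slash.
	dir, err := osext.ExecutableFolder()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("executable:", exe)
	fmt.Println("folder:", dir)
}
```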

View File

@@ -16,17 +16,12 @@ func Executable() (string, error) {
}
// Returns same path as Executable, returns just the folder
// path. Excludes the executable name.
// path. Excludes the executable name and any trailing slash.
func ExecutableFolder() (string, error) {
p, err := Executable()
if err != nil {
return "", err
}
folder, _ := filepath.Split(p)
return folder, nil
}
// Depricated. Same as Executable().
func GetExePath() (exePath string, err error) {
return Executable()
return filepath.Dir(p), nil
}

View File

@@ -5,16 +5,16 @@
package osext
import (
"syscall"
"os"
"strconv"
"os"
"strconv"
"syscall"
)
func executable() (string, error) {
f, err := os.Open("/proc/" + strconv.Itoa(os.Getpid()) + "/text")
if err != nil {
return "", err
}
defer f.Close()
return syscall.Fd2path(int(f.Fd()))
f, err := os.Open("/proc/" + strconv.Itoa(os.Getpid()) + "/text")
if err != nil {
return "", err
}
defer f.Close()
return syscall.Fd2path(int(f.Fd()))
}

View File

@@ -2,7 +2,7 @@
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build linux netbsd openbsd solaris
// +build linux netbsd openbsd solaris dragonfly
package osext
@@ -11,15 +11,23 @@ import (
"fmt"
"os"
"runtime"
"strings"
)
func executable() (string, error) {
switch runtime.GOOS {
case "linux":
return os.Readlink("/proc/self/exe")
const deletedTag = " (deleted)"
execpath, err := os.Readlink("/proc/self/exe")
if err != nil {
return execpath, err
}
execpath = strings.TrimSuffix(execpath, deletedTag)
execpath = strings.TrimPrefix(execpath, deletedTag)
return execpath, nil
case "netbsd":
return os.Readlink("/proc/curproc/exe")
case "openbsd":
case "openbsd", "dragonfly":
return os.Readlink("/proc/curproc/file")
case "solaris":
return os.Readlink(fmt.Sprintf("/proc/%d/path/a.out", os.Getpid()))

View File

@@ -0,0 +1,203 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build darwin linux freebsd netbsd windows
package osext
import (
"bytes"
"fmt"
"io"
"os"
"os/exec"
"path/filepath"
"runtime"
"testing"
)
const (
executableEnvVar = "OSTEST_OUTPUT_EXECUTABLE"
executableEnvValueMatch = "match"
executableEnvValueDelete = "delete"
)
func TestPrintExecutable(t *testing.T) {
ef, err := Executable()
if err != nil {
t.Fatalf("Executable failed: %v", err)
}
t.Log("Executable:", ef)
}
func TestPrintExecutableFolder(t *testing.T) {
ef, err := ExecutableFolder()
if err != nil {
t.Fatalf("ExecutableFolder failed: %v", err)
}
t.Log("Executable Folder:", ef)
}
func TestExecutableFolder(t *testing.T) {
ef, err := ExecutableFolder()
if err != nil {
t.Fatalf("ExecutableFolder failed: %v", err)
}
if ef[len(ef)-1] == filepath.Separator {
t.Fatal("ExecutableFolder ends with a trailing slash.")
}
}
func TestExecutableMatch(t *testing.T) {
ep, err := Executable()
if err != nil {
t.Fatalf("Executable failed: %v", err)
}
// fullpath to be of the form "dir/prog".
dir := filepath.Dir(filepath.Dir(ep))
fullpath, err := filepath.Rel(dir, ep)
if err != nil {
t.Fatalf("filepath.Rel: %v", err)
}
// Make child start with a relative program path.
// Alter argv[0] for child to verify getting real path without argv[0].
cmd := &exec.Cmd{
Dir: dir,
Path: fullpath,
Env: []string{fmt.Sprintf("%s=%s", executableEnvVar, executableEnvValueMatch)},
}
out, err := cmd.CombinedOutput()
if err != nil {
t.Fatalf("exec(self) failed: %v", err)
}
outs := string(out)
if !filepath.IsAbs(outs) {
t.Fatalf("Child returned %q, want an absolute path", out)
}
if !sameFile(outs, ep) {
t.Fatalf("Child returned %q, not the same file as %q", out, ep)
}
}
func TestExecutableDelete(t *testing.T) {
if runtime.GOOS != "linux" {
t.Skip()
}
fpath, err := Executable()
if err != nil {
t.Fatalf("Executable failed: %v", err)
}
r, w := io.Pipe()
stderrBuff := &bytes.Buffer{}
stdoutBuff := &bytes.Buffer{}
cmd := &exec.Cmd{
Path: fpath,
Env: []string{fmt.Sprintf("%s=%s", executableEnvVar, executableEnvValueDelete)},
Stdin: r,
Stderr: stderrBuff,
Stdout: stdoutBuff,
}
err = cmd.Start()
if err != nil {
t.Fatalf("exec(self) start failed: %v", err)
}
tempPath := fpath + "_copy"
_ = os.Remove(tempPath)
err = copyFile(tempPath, fpath)
if err != nil {
t.Fatalf("copy file failed: %v", err)
}
err = os.Remove(fpath)
if err != nil {
t.Fatalf("remove running test file failed: %v", err)
}
err = os.Rename(tempPath, fpath)
if err != nil {
t.Fatalf("rename copy to previous name failed: %v", err)
}
w.Write([]byte{0})
w.Close()
err = cmd.Wait()
if err != nil {
t.Fatalf("exec wait failed: %v", err)
}
childPath := stderrBuff.String()
if !filepath.IsAbs(childPath) {
t.Fatalf("Child returned %q, want an absolute path", childPath)
}
if !sameFile(childPath, fpath) {
t.Fatalf("Child returned %q, not the same file as %q", childPath, fpath)
}
}
func sameFile(fn1, fn2 string) bool {
fi1, err := os.Stat(fn1)
if err != nil {
return false
}
fi2, err := os.Stat(fn2)
if err != nil {
return false
}
return os.SameFile(fi1, fi2)
}
func copyFile(dest, src string) error {
df, err := os.Create(dest)
if err != nil {
return err
}
defer df.Close()
sf, err := os.Open(src)
if err != nil {
return err
}
defer sf.Close()
_, err = io.Copy(df, sf)
return err
}
func TestMain(m *testing.M) {
env := os.Getenv(executableEnvVar)
switch env {
case "":
os.Exit(m.Run())
case executableEnvValueMatch:
// First chdir to another path.
dir := "/"
if runtime.GOOS == "windows" {
dir = filepath.VolumeName(".")
}
os.Chdir(dir)
if ep, err := Executable(); err != nil {
fmt.Fprint(os.Stderr, "ERROR: ", err)
} else {
fmt.Fprint(os.Stderr, ep)
}
case executableEnvValueDelete:
bb := make([]byte, 1)
var err error
n, err := os.Stdin.Read(bb)
if err != nil {
fmt.Fprint(os.Stderr, "ERROR: ", err)
os.Exit(2)
}
if n != 1 {
fmt.Fprint(os.Stderr, "ERROR: n != 1, n == ", n)
os.Exit(2)
}
if ep, err := Executable(); err != nil {
fmt.Fprint(os.Stderr, "ERROR: ", err)
} else {
fmt.Fprint(os.Stderr, ep)
}
}
os.Exit(0)
}

View File

@@ -1,15 +0,0 @@
// Copyright (C) 2014 The Protocol Authors.
package protocol
import (
"os"
"strings"
"github.com/calmh/logger"
)
var (
debug = strings.Contains(os.Getenv("STTRACE"), "protocol") || os.Getenv("STTRACE") == "all"
l = logger.DefaultLogger
)

View File

@@ -4,7 +4,7 @@
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
// +build go1.3
// +build !go1.2
package leveldb

View File

@@ -0,0 +1,30 @@
// Copyright (c) 2012, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
// +build !go1.2
package cache
import (
"math/rand"
"testing"
)
func BenchmarkLRUCache(b *testing.B) {
c := NewCache(NewLRU(10000))
b.SetParallelism(10)
b.RunParallel(func(pb *testing.PB) {
r := rand.New(rand.NewSource(time.Now().UnixNano()))
for pb.Next() {
key := uint64(r.Intn(1000000))
c.Get(0, key, func() (int, Value) {
return 1, key
}).Release()
}
})
}

View File

@@ -552,19 +552,3 @@ func TestLRUCache_Close(t *testing.T) {
t.Errorf("delFunc isn't called 1 times: got=%d", delFuncCalled)
}
}
func BenchmarkLRUCache(b *testing.B) {
c := NewCache(NewLRU(10000))
b.SetParallelism(10)
b.RunParallel(func(pb *testing.PB) {
r := rand.New(rand.NewSource(time.Now().UnixNano()))
for pb.Next() {
key := uint64(r.Intn(1000000))
c.Get(0, key, func() (int, Value) {
return 1, key
}).Release()
}
})
}

View File

@@ -63,13 +63,14 @@ type DB struct {
journalAckC chan error
// Compaction.
tcompCmdC chan cCmd
tcompPauseC chan chan<- struct{}
mcompCmdC chan cCmd
compErrC chan error
compPerErrC chan error
compErrSetC chan error
compStats []cStats
tcompCmdC chan cCmd
tcompPauseC chan chan<- struct{}
mcompCmdC chan cCmd
compErrC chan error
compPerErrC chan error
compErrSetC chan error
compWriteLocking bool
compStats []cStats
// Close.
closeW sync.WaitGroup
@@ -108,28 +109,44 @@ func openDB(s *session) (*DB, error) {
closeC: make(chan struct{}),
}
if err := db.recoverJournal(); err != nil {
return nil, err
}
// Read-only mode.
readOnly := s.o.GetReadOnly()
// Remove any obsolete files.
if err := db.checkAndCleanFiles(); err != nil {
// Close journal.
if db.journal != nil {
db.journal.Close()
db.journalWriter.Close()
if readOnly {
// Recover journals (read-only mode).
if err := db.recoverJournalRO(); err != nil {
return nil, err
}
return nil, err
} else {
// Recover journals.
if err := db.recoverJournal(); err != nil {
return nil, err
}
// Remove any obsolete files.
if err := db.checkAndCleanFiles(); err != nil {
// Close journal.
if db.journal != nil {
db.journal.Close()
db.journalWriter.Close()
}
return nil, err
}
}
// Doesn't need to be included in the wait group.
go db.compactionError()
go db.mpoolDrain()
db.closeW.Add(3)
go db.tCompaction()
go db.mCompaction()
go db.jWriter()
if readOnly {
db.SetReadOnly()
} else {
db.closeW.Add(3)
go db.tCompaction()
go db.mCompaction()
go db.jWriter()
}
s.logf("db@open done T·%v", time.Since(start))
@@ -274,8 +291,9 @@ func recoverTable(s *session, o *opt.Options) error {
// We will drop corrupted table.
strict = o.GetStrict(opt.StrictRecovery)
noSync = o.GetNoSync()
rec = &sessionRecord{numLevel: o.GetNumLevel()}
rec = &sessionRecord{}
bpool = util.NewBufferPool(o.GetBlockSize() + 5)
)
buildTable := func(iter iterator.Iterator) (tmp storage.File, size int64, err error) {
@@ -311,9 +329,11 @@ func recoverTable(s *session, o *opt.Options) error {
if err != nil {
return
}
err = writer.Sync()
if err != nil {
return
if !noSync {
err = writer.Sync()
if err != nil {
return
}
}
size = int64(tw.BytesLen())
return
@@ -347,12 +367,14 @@ func recoverTable(s *session, o *opt.Options) error {
return err
}
iter := tr.NewIterator(nil, nil)
iter.(iterator.ErrorCallbackSetter).SetErrorCallback(func(err error) {
if errors.IsCorrupted(err) {
s.logf("table@recovery block corruption @%d %q", file.Num(), err)
tcorruptedBlock++
}
})
if itererr, ok := iter.(iterator.ErrorCallbackSetter); ok {
itererr.SetErrorCallback(func(err error) {
if errors.IsCorrupted(err) {
s.logf("table@recovery block corruption @%d %q", file.Num(), err)
tcorruptedBlock++
}
})
}
// Scan the table.
for iter.Next() {
@@ -448,132 +470,136 @@ func recoverTable(s *session, o *opt.Options) error {
}
func (db *DB) recoverJournal() error {
// Get all tables and sort it by file number.
journalFiles_, err := db.s.getFiles(storage.TypeJournal)
// Get all journals and sort it by file number.
allJournalFiles, err := db.s.getFiles(storage.TypeJournal)
if err != nil {
return err
}
journalFiles := files(journalFiles_)
journalFiles.sort()
files(allJournalFiles).sort()
// Discard older journal.
prev := -1
for i, file := range journalFiles {
if file.Num() >= db.s.stJournalNum {
if prev >= 0 {
i--
journalFiles[i] = journalFiles[prev]
}
journalFiles = journalFiles[i:]
break
} else if file.Num() == db.s.stPrevJournalNum {
prev = i
// Journals that will be recovered.
var recJournalFiles []storage.File
for _, jf := range allJournalFiles {
if jf.Num() >= db.s.stJournalNum || jf.Num() == db.s.stPrevJournalNum {
recJournalFiles = append(recJournalFiles, jf)
}
}
var jr *journal.Reader
var of storage.File
var mem *memdb.DB
batch := new(Batch)
cm := newCMem(db.s)
buf := new(util.Buffer)
// Options.
strict := db.s.o.GetStrict(opt.StrictJournal)
checksum := db.s.o.GetStrict(opt.StrictJournalChecksum)
writeBuffer := db.s.o.GetWriteBuffer()
recoverJournal := func(file storage.File) error {
db.logf("journal@recovery recovering @%d", file.Num())
reader, err := file.Open()
if err != nil {
return err
}
defer reader.Close()
var (
of storage.File // Obsolete file.
rec = &sessionRecord{}
)
// Create/reset journal reader instance.
if jr == nil {
jr = journal.NewReader(reader, dropper{db.s, file}, strict, checksum)
} else {
jr.Reset(reader, dropper{db.s, file}, strict, checksum)
}
// Flush memdb and remove obsolete journal file.
if of != nil {
if mem.Len() > 0 {
if err := cm.flush(mem, 0); err != nil {
return err
}
}
if err := cm.commit(file.Num(), db.seq); err != nil {
return err
}
cm.reset()
of.Remove()
of = nil
}
// Replay journal to memdb.
mem.Reset()
for {
r, err := jr.Next()
if err != nil {
if err == io.EOF {
break
}
return errors.SetFile(err, file)
}
buf.Reset()
if _, err := buf.ReadFrom(r); err != nil {
if err == io.ErrUnexpectedEOF {
// This is error returned due to corruption, with strict == false.
continue
} else {
return errors.SetFile(err, file)
}
}
if err := batch.memDecodeAndReplay(db.seq, buf.Bytes(), mem); err != nil {
if strict || !errors.IsCorrupted(err) {
return errors.SetFile(err, file)
} else {
db.s.logf("journal error: %v (skipped)", err)
// We won't apply sequence number as it might be corrupted.
continue
}
}
// Save sequence number.
db.seq = batch.seq + uint64(batch.Len())
// Flush it if large enough.
if mem.Size() >= writeBuffer {
if err := cm.flush(mem, 0); err != nil {
return err
}
mem.Reset()
}
}
of = file
return nil
}
// Recover all journals.
if len(journalFiles) > 0 {
db.logf("journal@recovery F·%d", len(journalFiles))
// Recover journals.
if len(recJournalFiles) > 0 {
db.logf("journal@recovery F·%d", len(recJournalFiles))
// Mark file number as used.
db.s.markFileNum(journalFiles[len(journalFiles)-1].Num())
db.s.markFileNum(recJournalFiles[len(recJournalFiles)-1].Num())
mem = memdb.New(db.s.icmp, writeBuffer)
for _, file := range journalFiles {
if err := recoverJournal(file); err != nil {
var (
// Options.
strict = db.s.o.GetStrict(opt.StrictJournal)
checksum = db.s.o.GetStrict(opt.StrictJournalChecksum)
writeBuffer = db.s.o.GetWriteBuffer()
jr *journal.Reader
mdb = memdb.New(db.s.icmp, writeBuffer)
buf = &util.Buffer{}
batch = &Batch{}
)
for _, jf := range recJournalFiles {
db.logf("journal@recovery recovering @%d", jf.Num())
fr, err := jf.Open()
if err != nil {
return err
}
// Create or reset journal reader instance.
if jr == nil {
jr = journal.NewReader(fr, dropper{db.s, jf}, strict, checksum)
} else {
jr.Reset(fr, dropper{db.s, jf}, strict, checksum)
}
// Flush memdb and remove obsolete journal file.
if of != nil {
if mdb.Len() > 0 {
if _, err := db.s.flushMemdb(rec, mdb, -1); err != nil {
fr.Close()
return err
}
}
rec.setJournalNum(jf.Num())
rec.setSeqNum(db.seq)
if err := db.s.commit(rec); err != nil {
fr.Close()
return err
}
rec.resetAddedTables()
of.Remove()
of = nil
}
// Replay journal to memdb.
mdb.Reset()
for {
r, err := jr.Next()
if err != nil {
if err == io.EOF {
break
}
fr.Close()
return errors.SetFile(err, jf)
}
buf.Reset()
if _, err := buf.ReadFrom(r); err != nil {
if err == io.ErrUnexpectedEOF {
// This is error returned due to corruption, with strict == false.
continue
}
fr.Close()
return errors.SetFile(err, jf)
}
if err := batch.memDecodeAndReplay(db.seq, buf.Bytes(), mdb); err != nil {
if !strict && errors.IsCorrupted(err) {
db.s.logf("journal error: %v (skipped)", err)
// We won't apply sequence number as it might be corrupted.
continue
}
fr.Close()
return errors.SetFile(err, jf)
}
// Save sequence number.
db.seq = batch.seq + uint64(batch.Len())
// Flush it if large enough.
if mdb.Size() >= writeBuffer {
if _, err := db.s.flushMemdb(rec, mdb, 0); err != nil {
fr.Close()
return err
}
mdb.Reset()
}
}
fr.Close()
of = jf
}
// Flush the last journal.
if mem.Len() > 0 {
if err := cm.flush(mem, 0); err != nil {
// Flush the last memdb.
if mdb.Len() > 0 {
if _, err := db.s.flushMemdb(rec, mdb, 0); err != nil {
return err
}
}
@@ -585,8 +611,10 @@ func (db *DB) recoverJournal() error {
}
// Commit.
if err := cm.commit(db.journalFile.Num(), db.seq); err != nil {
// Close journal.
rec.setJournalNum(db.journalFile.Num())
rec.setSeqNum(db.seq)
if err := db.s.commit(rec); err != nil {
// Close journal on error.
if db.journal != nil {
db.journal.Close()
db.journalWriter.Close()
@@ -602,6 +630,103 @@ func (db *DB) recoverJournal() error {
return nil
}
func (db *DB) recoverJournalRO() error {
// Get all journals and sort it by file number.
allJournalFiles, err := db.s.getFiles(storage.TypeJournal)
if err != nil {
return err
}
files(allJournalFiles).sort()
// Journals that will be recovered.
var recJournalFiles []storage.File
for _, jf := range allJournalFiles {
if jf.Num() >= db.s.stJournalNum || jf.Num() == db.s.stPrevJournalNum {
recJournalFiles = append(recJournalFiles, jf)
}
}
var (
// Options.
strict = db.s.o.GetStrict(opt.StrictJournal)
checksum = db.s.o.GetStrict(opt.StrictJournalChecksum)
writeBuffer = db.s.o.GetWriteBuffer()
mdb = memdb.New(db.s.icmp, writeBuffer)
)
// Recover journals.
if len(recJournalFiles) > 0 {
db.logf("journal@recovery RO·Mode F·%d", len(recJournalFiles))
var (
jr *journal.Reader
buf = &util.Buffer{}
batch = &Batch{}
)
for _, jf := range recJournalFiles {
db.logf("journal@recovery recovering @%d", jf.Num())
fr, err := jf.Open()
if err != nil {
return err
}
// Create or reset journal reader instance.
if jr == nil {
jr = journal.NewReader(fr, dropper{db.s, jf}, strict, checksum)
} else {
jr.Reset(fr, dropper{db.s, jf}, strict, checksum)
}
// Replay journal to memdb.
for {
r, err := jr.Next()
if err != nil {
if err == io.EOF {
break
}
fr.Close()
return errors.SetFile(err, jf)
}
buf.Reset()
if _, err := buf.ReadFrom(r); err != nil {
if err == io.ErrUnexpectedEOF {
// This is error returned due to corruption, with strict == false.
continue
}
fr.Close()
return errors.SetFile(err, jf)
}
if err := batch.memDecodeAndReplay(db.seq, buf.Bytes(), mdb); err != nil {
if !strict && errors.IsCorrupted(err) {
db.s.logf("journal error: %v (skipped)", err)
// We won't apply sequence number as it might be corrupted.
continue
}
fr.Close()
return errors.SetFile(err, jf)
}
// Save sequence number.
db.seq = batch.seq + uint64(batch.Len())
}
fr.Close()
}
}
// Set memDB.
db.mem = &memDB{db: db, DB: mdb, ref: 1}
return nil
}
func (db *DB) get(key []byte, seq uint64, ro *opt.ReadOptions) (value []byte, err error) {
ikey := newIkey(key, seq, ktSeek)
@@ -612,7 +737,7 @@ func (db *DB) get(key []byte, seq uint64, ro *opt.ReadOptions) (value []byte, er
}
defer m.decref()
mk, mv, me := m.mdb.Find(ikey)
mk, mv, me := m.Find(ikey)
if me == nil {
ukey, _, kt, kerr := parseIkey(mk)
if kerr != nil {
@@ -650,7 +775,7 @@ func (db *DB) has(key []byte, seq uint64, ro *opt.ReadOptions) (ret bool, err er
}
defer m.decref()
mk, _, me := m.mdb.Find(ikey)
mk, _, me := m.Find(ikey)
if me == nil {
ukey, _, kt, kerr := parseIkey(mk)
if kerr != nil {
@@ -782,7 +907,7 @@ func (db *DB) GetProperty(name string) (value string, err error) {
const prefix = "leveldb."
if !strings.HasPrefix(name, prefix) {
return "", errors.New("leveldb: GetProperty: unknown property: " + name)
return "", ErrNotFound
}
p := name[len(prefix):]
@@ -796,7 +921,7 @@ func (db *DB) GetProperty(name string) (value string, err error) {
var rest string
n, _ := fmt.Sscanf(p[len(numFilesPrefix):], "%d%s", &level, &rest)
if n != 1 || int(level) >= db.s.o.GetNumLevel() {
err = errors.New("leveldb: GetProperty: invalid property: " + name)
err = ErrNotFound
} else {
value = fmt.Sprint(v.tLen(int(level)))
}
@@ -835,7 +960,7 @@ func (db *DB) GetProperty(name string) (value string, err error) {
case p == "aliveiters":
value = fmt.Sprintf("%d", atomic.LoadInt32(&db.aliveIters))
default:
err = errors.New("leveldb: GetProperty: unknown property: " + name)
err = ErrNotFound
}
return
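
From the caller's side, the revised GetProperty reports unknown or malformed property names with ErrNotFound instead of ad-hoc error strings. A hedged sketch; the database path is illustrative:

```go
package main

import (
	"fmt"
	"log"

	"github.com/syndtr/goleveldb/leveldb"
)

func main() {
	db, err := leveldb.OpenFile("path/to/db", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Known properties keep the "leveldb." prefix, e.g. "leveldb.aliveiters".
	if v, err := db.GetProperty("leveldb.aliveiters"); err == nil {
		fmt.Println("alive iterators:", v)
	}
	// Unknown properties now yield ErrNotFound.
	if _, err := db.GetProperty("leveldb.no-such-property"); err == leveldb.ErrNotFound {
		fmt.Println("unknown property")
	}
}
```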
@@ -898,6 +1023,9 @@ func (db *DB) Close() error {
var err error
select {
case err = <-db.compErrC:
if err == ErrReadOnly {
err = nil
}
default:
}

View File

@@ -11,7 +11,6 @@ import (
"time"
"github.com/syndtr/goleveldb/leveldb/errors"
"github.com/syndtr/goleveldb/leveldb/memdb"
"github.com/syndtr/goleveldb/leveldb/opt"
)
@@ -62,58 +61,8 @@ func (p *cStatsStaging) stopTimer() {
}
}
type cMem struct {
s *session
level int
rec *sessionRecord
}
func newCMem(s *session) *cMem {
return &cMem{s: s, rec: &sessionRecord{numLevel: s.o.GetNumLevel()}}
}
func (c *cMem) flush(mem *memdb.DB, level int) error {
s := c.s
// Write memdb to table.
iter := mem.NewIterator(nil)
defer iter.Release()
t, n, err := s.tops.createFrom(iter)
if err != nil {
return err
}
// Pick level.
if level < 0 {
v := s.version()
level = v.pickLevel(t.imin.ukey(), t.imax.ukey())
v.release()
}
c.rec.addTableFile(level, t)
s.logf("mem@flush created L%d@%d N·%d S·%s %q:%q", level, t.file.Num(), n, shortenb(int(t.size)), t.imin, t.imax)
c.level = level
return nil
}
func (c *cMem) reset() {
c.rec = &sessionRecord{numLevel: c.s.o.GetNumLevel()}
}
func (c *cMem) commit(journal, seq uint64) error {
c.rec.setJournalNum(journal)
c.rec.setSeqNum(seq)
// Commit changes.
return c.s.commit(c.rec)
}
func (db *DB) compactionError() {
var (
err error
wlocked bool
)
var err error
noerr:
// No error.
for {
@@ -121,7 +70,7 @@ noerr:
case err = <-db.compErrSetC:
switch {
case err == nil:
case errors.IsCorrupted(err):
case err == ErrReadOnly, errors.IsCorrupted(err):
goto hasperr
default:
goto haserr
@@ -139,7 +88,7 @@ haserr:
switch {
case err == nil:
goto noerr
case errors.IsCorrupted(err):
case err == ErrReadOnly, errors.IsCorrupted(err):
goto hasperr
default:
}
@@ -155,9 +104,9 @@ hasperr:
case db.compPerErrC <- err:
case db.writeLockC <- struct{}{}:
// Hold write lock, so that write won't pass-through.
wlocked = true
db.compWriteLocking = true
case _, _ = <-db.closeC:
if wlocked {
if db.compWriteLocking {
// We should release the lock or Close will hang.
<-db.writeLockC
}
@@ -287,21 +236,18 @@ func (db *DB) compactionExitTransact() {
}
func (db *DB) memCompaction() {
mem := db.getFrozenMem()
if mem == nil {
mdb := db.getFrozenMem()
if mdb == nil {
return
}
defer mem.decref()
defer mdb.decref()
c := newCMem(db.s)
stats := new(cStatsStaging)
db.logf("mem@flush N·%d S·%s", mem.mdb.Len(), shortenb(mem.mdb.Size()))
db.logf("memdb@flush N·%d S·%s", mdb.Len(), shortenb(mdb.Size()))
// Don't compact empty memdb.
if mem.mdb.Len() == 0 {
db.logf("mem@flush skipping")
// drop frozen mem
if mdb.Len() == 0 {
db.logf("memdb@flush skipping")
// drop frozen memdb
db.dropFrozenMem()
return
}
@@ -317,13 +263,20 @@ func (db *DB) memCompaction() {
return
}
db.compactionTransactFunc("mem@flush", func(cnt *compactionTransactCounter) (err error) {
var (
rec = &sessionRecord{}
stats = &cStatsStaging{}
flushLevel int
)
db.compactionTransactFunc("memdb@flush", func(cnt *compactionTransactCounter) (err error) {
stats.startTimer()
defer stats.stopTimer()
return c.flush(mem.mdb, -1)
flushLevel, err = db.s.flushMemdb(rec, mdb.DB, -1)
stats.stopTimer()
return
}, func() error {
for _, r := range c.rec.addedTables {
db.logf("mem@flush revert @%d", r.num)
for _, r := range rec.addedTables {
db.logf("memdb@flush revert @%d", r.num)
f := db.s.getTableFile(r.num)
if err := f.Remove(); err != nil {
return err
@@ -332,20 +285,23 @@ func (db *DB) memCompaction() {
return nil
})
db.compactionTransactFunc("mem@commit", func(cnt *compactionTransactCounter) (err error) {
db.compactionTransactFunc("memdb@commit", func(cnt *compactionTransactCounter) (err error) {
stats.startTimer()
defer stats.stopTimer()
return c.commit(db.journalFile.Num(), db.frozenSeq)
rec.setJournalNum(db.journalFile.Num())
rec.setSeqNum(db.frozenSeq)
err = db.s.commit(rec)
stats.stopTimer()
return
}, nil)
db.logf("mem@flush committed F·%d T·%v", len(c.rec.addedTables), stats.duration)
db.logf("memdb@flush committed F·%d T·%v", len(rec.addedTables), stats.duration)
for _, r := range c.rec.addedTables {
for _, r := range rec.addedTables {
stats.write += r.size
}
db.compStats[c.level].add(stats)
db.compStats[flushLevel].add(stats)
// Drop frozen mem.
// Drop frozen memdb.
db.dropFrozenMem()
// Resume table compaction.
@@ -557,7 +513,7 @@ func (b *tableCompactionBuilder) revert() error {
func (db *DB) tableCompaction(c *compaction, noTrivial bool) {
defer c.release()
rec := &sessionRecord{numLevel: db.s.o.GetNumLevel()}
rec := &sessionRecord{}
rec.addCompPtr(c.level, c.imax)
if !noTrivial && c.trivial() {

View File

@@ -8,6 +8,7 @@ package leveldb
import (
"errors"
"math/rand"
"runtime"
"sync"
"sync/atomic"
@@ -39,11 +40,11 @@ func (db *DB) newRawIterator(slice *util.Range, ro *opt.ReadOptions) iterator.It
ti := v.getIterators(slice, ro)
n := len(ti) + 2
i := make([]iterator.Iterator, 0, n)
emi := em.mdb.NewIterator(slice)
emi := em.NewIterator(slice)
emi.SetReleaser(&memdbReleaser{m: em})
i = append(i, emi)
if fm != nil {
fmi := fm.mdb.NewIterator(slice)
fmi := fm.NewIterator(slice)
fmi.SetReleaser(&memdbReleaser{m: fm})
i = append(i, fmi)
}
@@ -80,6 +81,10 @@ func (db *DB) newIterator(seq uint64, slice *util.Range, ro *opt.ReadOptions) *d
return iter
}
func (db *DB) iterSamplingRate() int {
return rand.Intn(2 * db.s.o.GetIteratorSamplingRate())
}
type dir int
const (
@@ -98,11 +103,21 @@ type dbIter struct {
seq uint64
strict bool
dir dir
key []byte
value []byte
err error
releaser util.Releaser
smaplingGap int
dir dir
key []byte
value []byte
err error
releaser util.Releaser
}
func (i *dbIter) sampleSeek() {
ikey := i.iter.Key()
i.smaplingGap -= len(ikey) + len(i.iter.Value())
for i.smaplingGap < 0 {
i.smaplingGap += i.db.iterSamplingRate()
i.db.sampleSeek(ikey)
}
}
func (i *dbIter) setErr(err error) {
@@ -175,6 +190,7 @@ func (i *dbIter) Seek(key []byte) bool {
func (i *dbIter) next() bool {
for {
if ukey, seq, kt, kerr := parseIkey(i.iter.Key()); kerr == nil {
i.sampleSeek()
if seq <= i.seq {
switch kt {
case ktDel:
@@ -225,6 +241,7 @@ func (i *dbIter) prev() bool {
if i.iter.Valid() {
for {
if ukey, seq, kt, kerr := parseIkey(i.iter.Key()); kerr == nil {
i.sampleSeek()
if seq <= i.seq {
if !del && i.icmp.uCompare(ukey, i.key) < 0 {
return true
@@ -266,6 +283,7 @@ func (i *dbIter) Prev() bool {
case dirForward:
for i.iter.Prev() {
if ukey, _, _, kerr := parseIkey(i.iter.Key()); kerr == nil {
i.sampleSeek()
if i.icmp.uCompare(ukey, i.key) < 0 {
goto cont
}

View File

@@ -15,8 +15,8 @@ import (
)
type memDB struct {
db *DB
mdb *memdb.DB
db *DB
*memdb.DB
ref int32
}
@@ -27,12 +27,12 @@ func (m *memDB) incref() {
func (m *memDB) decref() {
if ref := atomic.AddInt32(&m.ref, -1); ref == 0 {
// Only put back memdb with std capacity.
if m.mdb.Capacity() == m.db.s.o.GetWriteBuffer() {
m.mdb.Reset()
m.db.mpoolPut(m.mdb)
if m.Capacity() == m.db.s.o.GetWriteBuffer() {
m.Reset()
m.db.mpoolPut(m.DB)
}
m.db = nil
m.mdb = nil
m.DB = nil
} else if ref < 0 {
panic("negative memdb ref")
}
@@ -48,6 +48,15 @@ func (db *DB) addSeq(delta uint64) {
atomic.AddUint64(&db.seq, delta)
}
func (db *DB) sampleSeek(ikey iKey) {
v := db.s.version()
if v.sampleSeek(ikey) {
// Trigger table compaction.
db.compSendTrigger(db.tcompCmdC)
}
v.release()
}
func (db *DB) mpoolPut(mem *memdb.DB) {
defer func() {
recover()
@@ -117,7 +126,7 @@ func (db *DB) newMem(n int) (mem *memDB, err error) {
}
mem = &memDB{
db: db,
mdb: mdb,
DB: mdb,
ref: 2,
}
db.mem = mem

View File

@@ -405,19 +405,21 @@ func (h *dbHarness) compactRange(min, max string) {
t.Log("DB range compaction done")
}
func (h *dbHarness) sizeAssert(start, limit string, low, hi uint64) {
t := h.t
db := h.db
s, err := db.SizeOf([]util.Range{
func (h *dbHarness) sizeOf(start, limit string) uint64 {
sz, err := h.db.SizeOf([]util.Range{
{[]byte(start), []byte(limit)},
})
if err != nil {
t.Error("SizeOf: got error: ", err)
h.t.Error("SizeOf: got error: ", err)
}
if s.Sum() < low || s.Sum() > hi {
t.Errorf("sizeof %q to %q not in range, want %d - %d, got %d",
shorten(start), shorten(limit), low, hi, s.Sum())
return sz.Sum()
}
func (h *dbHarness) sizeAssert(start, limit string, low, hi uint64) {
sz := h.sizeOf(start, limit)
if sz < low || sz > hi {
h.t.Errorf("sizeOf %q to %q not in range, want %d - %d, got %d",
shorten(start), shorten(limit), low, hi, sz)
}
}
@@ -2443,7 +2445,7 @@ func TestDB_TableCompactionBuilder(t *testing.T) {
if err != nil {
t.Fatal(err)
}
rec := &sessionRecord{numLevel: s.o.GetNumLevel()}
rec := &sessionRecord{}
rec.addTableFile(i, tf)
if err := s.commit(rec); err != nil {
t.Fatal(err)
@@ -2453,7 +2455,7 @@ func TestDB_TableCompactionBuilder(t *testing.T) {
// Build grandparent.
v := s.version()
c := newCompaction(s, v, 1, append(tFiles{}, v.tables[1]...))
rec := &sessionRecord{numLevel: s.o.GetNumLevel()}
rec := &sessionRecord{}
b := &tableCompactionBuilder{
s: s,
c: c,
@@ -2477,7 +2479,7 @@ func TestDB_TableCompactionBuilder(t *testing.T) {
// Build level-1.
v = s.version()
c = newCompaction(s, v, 0, append(tFiles{}, v.tables[0]...))
rec = &sessionRecord{numLevel: s.o.GetNumLevel()}
rec = &sessionRecord{}
b = &tableCompactionBuilder{
s: s,
c: c,
@@ -2521,7 +2523,7 @@ func TestDB_TableCompactionBuilder(t *testing.T) {
// Compaction with transient error.
v = s.version()
c = newCompaction(s, v, 1, append(tFiles{}, v.tables[1]...))
rec = &sessionRecord{numLevel: s.o.GetNumLevel()}
rec = &sessionRecord{}
b = &tableCompactionBuilder{
s: s,
c: c,
@@ -2577,3 +2579,123 @@ func TestDB_TableCompactionBuilder(t *testing.T) {
}
v.release()
}
func testDB_IterTriggeredCompaction(t *testing.T, limitDiv int) {
const (
vSize = 200 * opt.KiB
tSize = 100 * opt.MiB
mIter = 100
n = tSize / vSize
)
h := newDbHarnessWopt(t, &opt.Options{
Compression: opt.NoCompression,
DisableBlockCache: true,
})
defer h.close()
key := func(x int) string {
return fmt.Sprintf("v%06d", x)
}
// Fill.
value := strings.Repeat("x", vSize)
for i := 0; i < n; i++ {
h.put(key(i), value)
}
h.compactMem()
// Delete all.
for i := 0; i < n; i++ {
h.delete(key(i))
}
h.compactMem()
var (
limit = n / limitDiv
startKey = key(0)
limitKey = key(limit)
maxKey = key(n)
slice = &util.Range{Limit: []byte(limitKey)}
initialSize0 = h.sizeOf(startKey, limitKey)
initialSize1 = h.sizeOf(limitKey, maxKey)
)
t.Logf("inital size %s [rest %s]", shortenb(int(initialSize0)), shortenb(int(initialSize1)))
for r := 0; true; r++ {
if r >= mIter {
t.Fatal("taking too long to compact")
}
// Iterates.
iter := h.db.NewIterator(slice, h.ro)
for iter.Next() {
}
if err := iter.Error(); err != nil {
t.Fatalf("Iter err: %v", err)
}
iter.Release()
// Wait compaction.
h.waitCompaction()
// Check size.
size0 := h.sizeOf(startKey, limitKey)
size1 := h.sizeOf(limitKey, maxKey)
t.Logf("#%03d size %s [rest %s]", r, shortenb(int(size0)), shortenb(int(size1)))
if size0 < initialSize0/10 {
break
}
}
if initialSize1 > 0 {
h.sizeAssert(limitKey, maxKey, initialSize1/4-opt.MiB, initialSize1+opt.MiB)
}
}
func TestDB_IterTriggeredCompaction(t *testing.T) {
testDB_IterTriggeredCompaction(t, 1)
}
func TestDB_IterTriggeredCompactionHalf(t *testing.T) {
testDB_IterTriggeredCompaction(t, 2)
}
func TestDB_ReadOnly(t *testing.T) {
h := newDbHarness(t)
defer h.close()
h.put("foo", "v1")
h.put("bar", "v2")
h.compactMem()
h.put("xfoo", "v1")
h.put("xbar", "v2")
t.Log("Trigger read-only")
if err := h.db.SetReadOnly(); err != nil {
h.close()
t.Fatalf("SetReadOnly error: %v", err)
}
h.stor.SetEmuErr(storage.TypeAll, tsOpCreate, tsOpReplace, tsOpRemove, tsOpWrite, tsOpWrite, tsOpSync)
ro := func(key, value, wantValue string) {
if err := h.db.Put([]byte(key), []byte(value), h.wo); err != ErrReadOnly {
t.Fatalf("unexpected error: %v", err)
}
h.getVal(key, wantValue)
}
ro("foo", "vx", "v1")
h.o.ReadOnly = true
h.reopenDB()
ro("foo", "vx", "v1")
ro("bar", "vx", "v2")
h.assertNumKeys(4)
}

View File

@@ -63,24 +63,24 @@ func (db *DB) rotateMem(n int) (mem *memDB, err error) {
return
}
func (db *DB) flush(n int) (mem *memDB, nn int, err error) {
func (db *DB) flush(n int) (mdb *memDB, mdbFree int, err error) {
delayed := false
flush := func() (retry bool) {
v := db.s.version()
defer v.release()
mem = db.getEffectiveMem()
mdb = db.getEffectiveMem()
defer func() {
if retry {
mem.decref()
mem = nil
mdb.decref()
mdb = nil
}
}()
nn = mem.mdb.Free()
mdbFree = mdb.Free()
switch {
case v.tLen(0) >= db.s.o.GetWriteL0SlowdownTrigger() && !delayed:
delayed = true
time.Sleep(time.Millisecond)
case nn >= n:
case mdbFree >= n:
return false
case v.tLen(0) >= db.s.o.GetWriteL0PauseTrigger():
delayed = true
@@ -90,15 +90,15 @@ func (db *DB) flush(n int) (mem *memDB, nn int, err error) {
}
default:
// Allow memdb to grow if it has no entry.
if mem.mdb.Len() == 0 {
nn = n
if mdb.Len() == 0 {
mdbFree = n
} else {
mem.decref()
mem, err = db.rotateMem(n)
mdb.decref()
mdb, err = db.rotateMem(n)
if err == nil {
nn = mem.mdb.Free()
mdbFree = mdb.Free()
} else {
nn = 0
mdbFree = 0
}
}
return false
@@ -129,7 +129,7 @@ func (db *DB) Write(b *Batch, wo *opt.WriteOptions) (err error) {
return
}
b.init(wo.GetSync())
b.init(wo.GetSync() && !db.s.o.GetNoSync())
// The write happen synchronously.
select {
@@ -157,18 +157,18 @@ func (db *DB) Write(b *Batch, wo *opt.WriteOptions) (err error) {
}
}()
mem, memFree, err := db.flush(b.size())
mdb, mdbFree, err := db.flush(b.size())
if err != nil {
return
}
defer mem.decref()
defer mdb.decref()
// Calculate maximum size of the batch.
m := 1 << 20
if x := b.size(); x <= 128<<10 {
m = x + (128 << 10)
}
m = minInt(m, memFree)
m = minInt(m, mdbFree)
// Merge with other batch.
drain:
@@ -197,7 +197,7 @@ drain:
select {
case db.journalC <- b:
// Write into memdb
if berr := b.memReplay(mem.mdb); berr != nil {
if berr := b.memReplay(mdb.DB); berr != nil {
panic(berr)
}
case err = <-db.compPerErrC:
@@ -211,7 +211,7 @@ drain:
case err = <-db.journalAckC:
if err != nil {
// Revert memdb if error detected
if berr := b.revertMemReplay(mem.mdb); berr != nil {
if berr := b.revertMemReplay(mdb.DB); berr != nil {
panic(berr)
}
return
@@ -225,7 +225,7 @@ drain:
if err != nil {
return
}
if berr := b.memReplay(mem.mdb); berr != nil {
if berr := b.memReplay(mdb.DB); berr != nil {
panic(berr)
}
}
@@ -233,7 +233,7 @@ drain:
// Set last seq number.
db.addSeq(uint64(b.Len()))
if b.size() >= memFree {
if b.size() >= mdbFree {
db.rotateMem(0)
}
return
@@ -249,8 +249,7 @@ func (db *DB) Put(key, value []byte, wo *opt.WriteOptions) error {
return db.Write(b, wo)
}
// Delete deletes the value for the given key. It returns ErrNotFound if
// the DB does not contain the key.
// Delete deletes the value for the given key.
//
// It is safe to modify the contents of the arguments after Delete returns.
func (db *DB) Delete(key []byte, wo *opt.WriteOptions) error {
@@ -290,9 +289,9 @@ func (db *DB) CompactRange(r util.Range) error {
}
// Check for overlaps in memdb.
mem := db.getEffectiveMem()
defer mem.decref()
if isMemOverlaps(db.s.icmp, mem.mdb, r.Start, r.Limit) {
mdb := db.getEffectiveMem()
defer mdb.decref()
if isMemOverlaps(db.s.icmp, mdb.DB, r.Start, r.Limit) {
// Memdb compaction.
if _, err := db.rotateMem(0); err != nil {
<-db.writeLockC
@@ -309,3 +308,31 @@ func (db *DB) CompactRange(r util.Range) error {
// Table compaction.
return db.compSendRange(db.tcompCmdC, -1, r.Start, r.Limit)
}
// SetReadOnly makes DB read-only. It will stay read-only until reopened.
func (db *DB) SetReadOnly() error {
if err := db.ok(); err != nil {
return err
}
// Lock writer.
select {
case db.writeLockC <- struct{}{}:
db.compWriteLocking = true
case err := <-db.compPerErrC:
return err
case _, _ = <-db.closeC:
return ErrClosed
}
// Set compaction read-only.
select {
case db.compErrSetC <- ErrReadOnly:
case perr := <-db.compPerErrC:
return perr
case _, _ = <-db.closeC:
return ErrClosed
}
return nil
}
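
A minimal usage sketch of the new SetReadOnly call; the path and error handling are illustrative:

```go
package main

import (
	"log"

	"github.com/syndtr/goleveldb/leveldb"
)

func main() {
	db, err := leveldb.OpenFile("path/to/db", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Flip the open database into read-only mode; it stays read-only
	// until it is reopened.
	if err := db.SetReadOnly(); err != nil {
		log.Fatal(err)
	}
	if err := db.Put([]byte("k"), []byte("v"), nil); err == leveldb.ErrReadOnly {
		log.Println("writes are rejected:", err)
	}
}
```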

View File

@@ -12,6 +12,7 @@ import (
var (
ErrNotFound = errors.ErrNotFound
ErrReadOnly = errors.New("leveldb: read-only mode")
ErrSnapshotReleased = errors.New("leveldb: snapshot released")
ErrIterReleased = errors.New("leveldb: iterator released")
ErrClosed = errors.New("leveldb: closed")

View File

@@ -52,12 +52,14 @@ func IsCorrupted(err error) bool {
switch err.(type) {
case *ErrCorrupted:
return true
case *storage.ErrCorrupted:
return true
}
return false
}
// ErrMissingFiles is the type indicating a corruption due to missing
// files.
// files. ErrMissingFiles is always wrapped with ErrCorrupted.
type ErrMissingFiles struct {
Files []*storage.FileInfo
}

View File

@@ -206,6 +206,7 @@ func (p *DB) randHeight() (h int) {
return
}
// Must hold RW-lock if prev == true, as it uses the shared prevNode slice.
func (p *DB) findGE(key []byte, prev bool) (int, bool) {
node := 0
h := p.maxHeight - 1
@@ -302,7 +303,7 @@ func (p *DB) Put(key []byte, value []byte) error {
node := len(p.nodeData)
p.nodeData = append(p.nodeData, kvOffset, len(key), len(value), h)
for i, n := range p.prevNode[:h] {
m := n + 4 + i
m := n + nNext + i
p.nodeData = append(p.nodeData, p.nodeData[m])
p.nodeData[m] = node
}
@@ -434,20 +435,22 @@ func (p *DB) Len() int {
// Reset resets the DB to initial empty state. Allows reuse the buffer.
func (p *DB) Reset() {
p.mu.Lock()
p.rnd = rand.New(rand.NewSource(0xdeadbeef))
p.maxHeight = 1
p.n = 0
p.kvSize = 0
p.kvData = p.kvData[:0]
p.nodeData = p.nodeData[:4+tMaxHeight]
p.nodeData = p.nodeData[:nNext+tMaxHeight]
p.nodeData[nKV] = 0
p.nodeData[nKey] = 0
p.nodeData[nVal] = 0
p.nodeData[nHeight] = tMaxHeight
for n := 0; n < tMaxHeight; n++ {
p.nodeData[4+n] = 0
p.nodeData[nNext+n] = 0
p.prevNode[n] = 0
}
p.mu.Unlock()
}
// New creates a new initialized in-memory key/value DB. The capacity

View File

@@ -34,10 +34,11 @@ var (
DefaultCompactionTotalSize = 10 * MiB
DefaultCompactionTotalSizeMultiplier = 10.0
DefaultCompressionType = SnappyCompression
DefaultOpenFilesCacher = LRUCacher
DefaultOpenFilesCacheCapacity = 500
DefaultIteratorSamplingRate = 1 * MiB
DefaultMaxMemCompationLevel = 2
DefaultNumLevel = 7
DefaultOpenFilesCacher = LRUCacher
DefaultOpenFilesCacheCapacity = 500
DefaultWriteBuffer = 4 * MiB
DefaultWriteL0PauseTrigger = 12
DefaultWriteL0SlowdownTrigger = 8
@@ -153,7 +154,7 @@ type Options struct {
BlockCacher Cacher
// BlockCacheCapacity defines the capacity of the 'sorted table' block caching.
// Use -1 for zero, this has same effect with specifying NoCacher to BlockCacher.
// Use -1 for zero, this has same effect as specifying NoCacher to BlockCacher.
//
// The default value is 8MiB.
BlockCacheCapacity int
@@ -249,6 +250,11 @@ type Options struct {
// The default value (DefaultCompression) uses snappy compression.
Compression Compression
// DisableBufferPool disables use of util.BufferPool functionality.
//
// The default value is false.
DisableBufferPool bool
// DisableBlockCache disables use of cache.Cache functionality on the
// 'sorted table' block.
//
@@ -288,6 +294,13 @@ type Options struct {
// The default value is nil.
Filter filter.Filter
// IteratorSamplingRate defines the approximate gap (in bytes) between read
// samples taken by an iterator. The samples are used to determine when a
// compaction should be triggered.
//
// The default is 1MiB.
IteratorSamplingRate int
// MaxMemCompationLevel defines the maximum level a newly compacted 'memdb'
// will be pushed into if it doesn't create overlap. This should be less than
// NumLevel. Use -1 for level-0.
@@ -295,6 +308,11 @@ type Options struct {
// The default is 2.
MaxMemCompationLevel int
// NoSync completely disables fsync.
//
// The default is false.
NoSync bool
// NumLevel defines the number of database levels. The value shouldn't be
// changed between opens, or the database will panic.
//
@@ -308,11 +326,16 @@ type Options struct {
OpenFilesCacher Cacher
// OpenFilesCacheCapacity defines the capacity of the open files caching.
// Use -1 for zero, this has same effect with specifying NoCacher to OpenFilesCacher.
// Use -1 for zero, this has same effect as specifying NoCacher to OpenFilesCacher.
//
// The default value is 500.
OpenFilesCacheCapacity int
// If true, the DB is opened in read-only mode.
//
// The default value is false.
ReadOnly bool
// Strict defines the DB strict level.
Strict Strict
@@ -355,9 +378,9 @@ func (o *Options) GetBlockCacher() Cacher {
}
func (o *Options) GetBlockCacheCapacity() int {
if o == nil || o.BlockCacheCapacity <= 0 {
if o == nil || o.BlockCacheCapacity == 0 {
return DefaultBlockCacheCapacity
} else if o.BlockCacheCapacity == -1 {
} else if o.BlockCacheCapacity < 0 {
return 0
}
return o.BlockCacheCapacity
@@ -464,6 +487,20 @@ func (o *Options) GetCompression() Compression {
return o.Compression
}
func (o *Options) GetDisableBufferPool() bool {
if o == nil {
return false
}
return o.DisableBufferPool
}
func (o *Options) GetDisableBlockCache() bool {
if o == nil {
return false
}
return o.DisableBlockCache
}
func (o *Options) GetDisableCompactionBackoff() bool {
if o == nil {
return false
@@ -492,12 +529,19 @@ func (o *Options) GetFilter() filter.Filter {
return o.Filter
}
func (o *Options) GetIteratorSamplingRate() int {
if o == nil || o.IteratorSamplingRate <= 0 {
return DefaultIteratorSamplingRate
}
return o.IteratorSamplingRate
}
func (o *Options) GetMaxMemCompationLevel() int {
level := DefaultMaxMemCompationLevel
if o != nil {
if o.MaxMemCompationLevel > 0 {
level = o.MaxMemCompationLevel
} else if o.MaxMemCompationLevel == -1 {
} else if o.MaxMemCompationLevel < 0 {
level = 0
}
}
@@ -507,6 +551,13 @@ func (o *Options) GetMaxMemCompationLevel() int {
return level
}
func (o *Options) GetNoSync() bool {
if o == nil {
return false
}
return o.NoSync
}
func (o *Options) GetNumLevel() int {
if o == nil || o.NumLevel <= 0 {
return DefaultNumLevel
@@ -525,14 +576,21 @@ func (o *Options) GetOpenFilesCacher() Cacher {
}
func (o *Options) GetOpenFilesCacheCapacity() int {
if o == nil || o.OpenFilesCacheCapacity <= 0 {
if o == nil || o.OpenFilesCacheCapacity == 0 {
return DefaultOpenFilesCacheCapacity
} else if o.OpenFilesCacheCapacity == -1 {
} else if o.OpenFilesCacheCapacity < 0 {
return 0
}
return o.OpenFilesCacheCapacity
}
func (o *Options) GetReadOnly() bool {
if o == nil {
return false
}
return o.ReadOnly
}
func (o *Options) GetStrict(strict Strict) bool {
if o == nil || o.Strict == 0 {
return DefaultStrict&strict != 0

View File

@@ -11,10 +11,8 @@ import (
"io"
"os"
"sync"
"sync/atomic"
"github.com/syndtr/goleveldb/leveldb/errors"
"github.com/syndtr/goleveldb/leveldb/iterator"
"github.com/syndtr/goleveldb/leveldb/journal"
"github.com/syndtr/goleveldb/leveldb/opt"
"github.com/syndtr/goleveldb/leveldb/storage"
@@ -127,11 +125,16 @@ func (s *session) recover() (err error) {
return
}
defer reader.Close()
strict := s.o.GetStrict(opt.StrictManifest)
jr := journal.NewReader(reader, dropper{s, m}, strict, true)
staging := s.stVersion.newStaging()
rec := &sessionRecord{numLevel: s.o.GetNumLevel()}
var (
// Options.
numLevel = s.o.GetNumLevel()
strict = s.o.GetStrict(opt.StrictManifest)
jr = journal.NewReader(reader, dropper{s, m}, strict, true)
rec = &sessionRecord{}
staging = s.stVersion.newStaging()
)
for {
var r io.Reader
r, err = jr.Next()
@@ -143,7 +146,7 @@ func (s *session) recover() (err error) {
return errors.SetFile(err, m)
}
err = rec.decode(r)
err = rec.decode(r, numLevel)
if err == nil {
// save compact pointers
for _, r := range rec.compPtrs {
@@ -206,250 +209,3 @@ func (s *session) commit(r *sessionRecord) (err error) {
return
}
// Pick a compaction based on current state; need external synchronization.
func (s *session) pickCompaction() *compaction {
v := s.version()
var level int
var t0 tFiles
if v.cScore >= 1 {
level = v.cLevel
cptr := s.stCompPtrs[level]
tables := v.tables[level]
for _, t := range tables {
if cptr == nil || s.icmp.Compare(t.imax, cptr) > 0 {
t0 = append(t0, t)
break
}
}
if len(t0) == 0 {
t0 = append(t0, tables[0])
}
} else {
if p := atomic.LoadPointer(&v.cSeek); p != nil {
ts := (*tSet)(p)
level = ts.level
t0 = append(t0, ts.table)
} else {
v.release()
return nil
}
}
return newCompaction(s, v, level, t0)
}
// Create compaction from given level and range; need external synchronization.
func (s *session) getCompactionRange(level int, umin, umax []byte) *compaction {
v := s.version()
t0 := v.tables[level].getOverlaps(nil, s.icmp, umin, umax, level == 0)
if len(t0) == 0 {
v.release()
return nil
}
// Avoid compacting too much in one shot in case the range is large.
// But we cannot do this for level-0 since level-0 files can overlap
// and we must not pick one file and drop another older file if the
// two files overlap.
if level > 0 {
limit := uint64(v.s.o.GetCompactionSourceLimit(level))
total := uint64(0)
for i, t := range t0 {
total += t.size
if total >= limit {
s.logf("table@compaction limiting F·%d -> F·%d", len(t0), i+1)
t0 = t0[:i+1]
break
}
}
}
return newCompaction(s, v, level, t0)
}
func newCompaction(s *session, v *version, level int, t0 tFiles) *compaction {
c := &compaction{
s: s,
v: v,
level: level,
tables: [2]tFiles{t0, nil},
maxGPOverlaps: uint64(s.o.GetCompactionGPOverlaps(level)),
tPtrs: make([]int, s.o.GetNumLevel()),
}
c.expand()
c.save()
return c
}
// compaction represents a compaction state.
type compaction struct {
s *session
v *version
level int
tables [2]tFiles
maxGPOverlaps uint64
gp tFiles
gpi int
seenKey bool
gpOverlappedBytes uint64
imin, imax iKey
tPtrs []int
released bool
snapGPI int
snapSeenKey bool
snapGPOverlappedBytes uint64
snapTPtrs []int
}
func (c *compaction) save() {
c.snapGPI = c.gpi
c.snapSeenKey = c.seenKey
c.snapGPOverlappedBytes = c.gpOverlappedBytes
c.snapTPtrs = append(c.snapTPtrs[:0], c.tPtrs...)
}
func (c *compaction) restore() {
c.gpi = c.snapGPI
c.seenKey = c.snapSeenKey
c.gpOverlappedBytes = c.snapGPOverlappedBytes
c.tPtrs = append(c.tPtrs[:0], c.snapTPtrs...)
}
func (c *compaction) release() {
if !c.released {
c.released = true
c.v.release()
}
}
// Expand compacted tables; need external synchronization.
func (c *compaction) expand() {
limit := uint64(c.s.o.GetCompactionExpandLimit(c.level))
vt0, vt1 := c.v.tables[c.level], c.v.tables[c.level+1]
t0, t1 := c.tables[0], c.tables[1]
imin, imax := t0.getRange(c.s.icmp)
// We expand t0 here just in case a ukey hops across tables.
t0 = vt0.getOverlaps(t0, c.s.icmp, imin.ukey(), imax.ukey(), c.level == 0)
if len(t0) != len(c.tables[0]) {
imin, imax = t0.getRange(c.s.icmp)
}
t1 = vt1.getOverlaps(t1, c.s.icmp, imin.ukey(), imax.ukey(), false)
// Get entire range covered by compaction.
amin, amax := append(t0, t1...).getRange(c.s.icmp)
// See if we can grow the number of inputs in "level" without
// changing the number of "level+1" files we pick up.
if len(t1) > 0 {
exp0 := vt0.getOverlaps(nil, c.s.icmp, amin.ukey(), amax.ukey(), c.level == 0)
if len(exp0) > len(t0) && t1.size()+exp0.size() < limit {
xmin, xmax := exp0.getRange(c.s.icmp)
exp1 := vt1.getOverlaps(nil, c.s.icmp, xmin.ukey(), xmax.ukey(), false)
if len(exp1) == len(t1) {
c.s.logf("table@compaction expanding L%d+L%d (F·%d S·%s)+(F·%d S·%s) -> (F·%d S·%s)+(F·%d S·%s)",
c.level, c.level+1, len(t0), shortenb(int(t0.size())), len(t1), shortenb(int(t1.size())),
len(exp0), shortenb(int(exp0.size())), len(exp1), shortenb(int(exp1.size())))
imin, imax = xmin, xmax
t0, t1 = exp0, exp1
amin, amax = append(t0, t1...).getRange(c.s.icmp)
}
}
}
// Compute the set of grandparent files that overlap this compaction
// (parent == level+1; grandparent == level+2)
if c.level+2 < c.s.o.GetNumLevel() {
c.gp = c.v.tables[c.level+2].getOverlaps(c.gp, c.s.icmp, amin.ukey(), amax.ukey(), false)
}
c.tables[0], c.tables[1] = t0, t1
c.imin, c.imax = imin, imax
}
// Check whether compaction is trivial.
func (c *compaction) trivial() bool {
return len(c.tables[0]) == 1 && len(c.tables[1]) == 0 && c.gp.size() <= c.maxGPOverlaps
}
func (c *compaction) baseLevelForKey(ukey []byte) bool {
for level, tables := range c.v.tables[c.level+2:] {
for c.tPtrs[level] < len(tables) {
t := tables[c.tPtrs[level]]
if c.s.icmp.uCompare(ukey, t.imax.ukey()) <= 0 {
// We've advanced far enough.
if c.s.icmp.uCompare(ukey, t.imin.ukey()) >= 0 {
// Key falls in this file's range, so definitely not base level.
return false
}
break
}
c.tPtrs[level]++
}
}
return true
}
func (c *compaction) shouldStopBefore(ikey iKey) bool {
for ; c.gpi < len(c.gp); c.gpi++ {
gp := c.gp[c.gpi]
if c.s.icmp.Compare(ikey, gp.imax) <= 0 {
break
}
if c.seenKey {
c.gpOverlappedBytes += gp.size
}
}
c.seenKey = true
if c.gpOverlappedBytes > c.maxGPOverlaps {
// Too much overlap for current output; start new output.
c.gpOverlappedBytes = 0
return true
}
return false
}
// Creates an iterator.
func (c *compaction) newIterator() iterator.Iterator {
// Creates iterator slice.
icap := len(c.tables)
if c.level == 0 {
// Special case for level-0
icap = len(c.tables[0]) + 1
}
its := make([]iterator.Iterator, 0, icap)
// Options.
ro := &opt.ReadOptions{
DontFillCache: true,
Strict: opt.StrictOverride,
}
strict := c.s.o.GetStrict(opt.StrictCompaction)
if strict {
ro.Strict |= opt.StrictReader
}
for i, tables := range c.tables {
if len(tables) == 0 {
continue
}
// Level-0 is not sorted and its files may overlap each other.
if c.level+i == 0 {
for _, t := range tables {
its = append(its, c.s.tops.newIterator(t, nil, ro))
}
} else {
it := iterator.NewIndexedIterator(tables.newIndexIterator(c.s.tops, c.s.icmp, nil, ro), strict)
its = append(its, it)
}
}
return iterator.NewMergedIterator(its, c.s.icmp, strict)
}

View File

@@ -0,0 +1,287 @@
// Copyright (c) 2012, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package leveldb
import (
"sync/atomic"
"github.com/syndtr/goleveldb/leveldb/iterator"
"github.com/syndtr/goleveldb/leveldb/memdb"
"github.com/syndtr/goleveldb/leveldb/opt"
)
func (s *session) pickMemdbLevel(umin, umax []byte) int {
v := s.version()
defer v.release()
return v.pickMemdbLevel(umin, umax)
}
func (s *session) flushMemdb(rec *sessionRecord, mdb *memdb.DB, level int) (level_ int, err error) {
// Create sorted table.
iter := mdb.NewIterator(nil)
defer iter.Release()
t, n, err := s.tops.createFrom(iter)
if err != nil {
return level, err
}
// Pick level and add to record.
if level < 0 {
level = s.pickMemdbLevel(t.imin.ukey(), t.imax.ukey())
}
rec.addTableFile(level, t)
s.logf("memdb@flush created L%d@%d N·%d S·%s %q:%q", level, t.file.Num(), n, shortenb(int(t.size)), t.imin, t.imax)
return level, nil
}
// Pick a compaction based on current state; need external synchronization.
func (s *session) pickCompaction() *compaction {
v := s.version()
var level int
var t0 tFiles
if v.cScore >= 1 {
level = v.cLevel
cptr := s.stCompPtrs[level]
tables := v.tables[level]
for _, t := range tables {
if cptr == nil || s.icmp.Compare(t.imax, cptr) > 0 {
t0 = append(t0, t)
break
}
}
if len(t0) == 0 {
t0 = append(t0, tables[0])
}
} else {
if p := atomic.LoadPointer(&v.cSeek); p != nil {
ts := (*tSet)(p)
level = ts.level
t0 = append(t0, ts.table)
} else {
v.release()
return nil
}
}
return newCompaction(s, v, level, t0)
}
// Create compaction from given level and range; need external synchronization.
func (s *session) getCompactionRange(level int, umin, umax []byte) *compaction {
v := s.version()
t0 := v.tables[level].getOverlaps(nil, s.icmp, umin, umax, level == 0)
if len(t0) == 0 {
v.release()
return nil
}
// Avoid compacting too much in one shot in case the range is large.
// But we cannot do this for level-0 since level-0 files can overlap
// and we must not pick one file and drop another older file if the
// two files overlap.
if level > 0 {
limit := uint64(v.s.o.GetCompactionSourceLimit(level))
total := uint64(0)
for i, t := range t0 {
total += t.size
if total >= limit {
s.logf("table@compaction limiting F·%d -> F·%d", len(t0), i+1)
t0 = t0[:i+1]
break
}
}
}
return newCompaction(s, v, level, t0)
}
func newCompaction(s *session, v *version, level int, t0 tFiles) *compaction {
c := &compaction{
s: s,
v: v,
level: level,
tables: [2]tFiles{t0, nil},
maxGPOverlaps: uint64(s.o.GetCompactionGPOverlaps(level)),
tPtrs: make([]int, s.o.GetNumLevel()),
}
c.expand()
c.save()
return c
}
// compaction represents a compaction state.
type compaction struct {
s *session
v *version
level int
tables [2]tFiles
maxGPOverlaps uint64
gp tFiles
gpi int
seenKey bool
gpOverlappedBytes uint64
imin, imax iKey
tPtrs []int
released bool
snapGPI int
snapSeenKey bool
snapGPOverlappedBytes uint64
snapTPtrs []int
}
func (c *compaction) save() {
c.snapGPI = c.gpi
c.snapSeenKey = c.seenKey
c.snapGPOverlappedBytes = c.gpOverlappedBytes
c.snapTPtrs = append(c.snapTPtrs[:0], c.tPtrs...)
}
func (c *compaction) restore() {
c.gpi = c.snapGPI
c.seenKey = c.snapSeenKey
c.gpOverlappedBytes = c.snapGPOverlappedBytes
c.tPtrs = append(c.tPtrs[:0], c.snapTPtrs...)
}
func (c *compaction) release() {
if !c.released {
c.released = true
c.v.release()
}
}
// Expand compacted tables; need external synchronization.
func (c *compaction) expand() {
limit := uint64(c.s.o.GetCompactionExpandLimit(c.level))
vt0, vt1 := c.v.tables[c.level], c.v.tables[c.level+1]
t0, t1 := c.tables[0], c.tables[1]
imin, imax := t0.getRange(c.s.icmp)
// We expand t0 here just in case a ukey hops across tables.
t0 = vt0.getOverlaps(t0, c.s.icmp, imin.ukey(), imax.ukey(), c.level == 0)
if len(t0) != len(c.tables[0]) {
imin, imax = t0.getRange(c.s.icmp)
}
t1 = vt1.getOverlaps(t1, c.s.icmp, imin.ukey(), imax.ukey(), false)
// Get entire range covered by compaction.
amin, amax := append(t0, t1...).getRange(c.s.icmp)
// See if we can grow the number of inputs in "level" without
// changing the number of "level+1" files we pick up.
if len(t1) > 0 {
exp0 := vt0.getOverlaps(nil, c.s.icmp, amin.ukey(), amax.ukey(), c.level == 0)
if len(exp0) > len(t0) && t1.size()+exp0.size() < limit {
xmin, xmax := exp0.getRange(c.s.icmp)
exp1 := vt1.getOverlaps(nil, c.s.icmp, xmin.ukey(), xmax.ukey(), false)
if len(exp1) == len(t1) {
c.s.logf("table@compaction expanding L%d+L%d (F·%d S·%s)+(F·%d S·%s) -> (F·%d S·%s)+(F·%d S·%s)",
c.level, c.level+1, len(t0), shortenb(int(t0.size())), len(t1), shortenb(int(t1.size())),
len(exp0), shortenb(int(exp0.size())), len(exp1), shortenb(int(exp1.size())))
imin, imax = xmin, xmax
t0, t1 = exp0, exp1
amin, amax = append(t0, t1...).getRange(c.s.icmp)
}
}
}
// Compute the set of grandparent files that overlap this compaction
// (parent == level+1; grandparent == level+2)
if c.level+2 < c.s.o.GetNumLevel() {
c.gp = c.v.tables[c.level+2].getOverlaps(c.gp, c.s.icmp, amin.ukey(), amax.ukey(), false)
}
c.tables[0], c.tables[1] = t0, t1
c.imin, c.imax = imin, imax
}
// Check whether compaction is trivial.
func (c *compaction) trivial() bool {
return len(c.tables[0]) == 1 && len(c.tables[1]) == 0 && c.gp.size() <= c.maxGPOverlaps
}
func (c *compaction) baseLevelForKey(ukey []byte) bool {
for level, tables := range c.v.tables[c.level+2:] {
for c.tPtrs[level] < len(tables) {
t := tables[c.tPtrs[level]]
if c.s.icmp.uCompare(ukey, t.imax.ukey()) <= 0 {
// We've advanced far enough.
if c.s.icmp.uCompare(ukey, t.imin.ukey()) >= 0 {
// Key falls in this file's range, so definitely not base level.
return false
}
break
}
c.tPtrs[level]++
}
}
return true
}
func (c *compaction) shouldStopBefore(ikey iKey) bool {
for ; c.gpi < len(c.gp); c.gpi++ {
gp := c.gp[c.gpi]
if c.s.icmp.Compare(ikey, gp.imax) <= 0 {
break
}
if c.seenKey {
c.gpOverlappedBytes += gp.size
}
}
c.seenKey = true
if c.gpOverlappedBytes > c.maxGPOverlaps {
// Too much overlap for current output; start new output.
c.gpOverlappedBytes = 0
return true
}
return false
}
// Creates an iterator.
func (c *compaction) newIterator() iterator.Iterator {
// Creates iterator slice.
icap := len(c.tables)
if c.level == 0 {
// Special case for level-0.
icap = len(c.tables[0]) + 1
}
its := make([]iterator.Iterator, 0, icap)
// Options.
ro := &opt.ReadOptions{
DontFillCache: true,
Strict: opt.StrictOverride,
}
strict := c.s.o.GetStrict(opt.StrictCompaction)
if strict {
ro.Strict |= opt.StrictReader
}
for i, tables := range c.tables {
if len(tables) == 0 {
continue
}
// Level-0 is not sorted and its files may overlap each other.
if c.level+i == 0 {
for _, t := range tables {
its = append(its, c.s.tops.newIterator(t, nil, ro))
}
} else {
it := iterator.NewIndexedIterator(tables.newIndexIterator(c.s.tops, c.s.icmp, nil, ro), strict)
its = append(its, it)
}
}
return iterator.NewMergedIterator(its, c.s.icmp, strict)
}

View File

@@ -52,8 +52,6 @@ type dtRecord struct {
}
type sessionRecord struct {
numLevel int
hasRec int
comparer string
journalNum uint64
@@ -230,7 +228,7 @@ func (p *sessionRecord) readBytes(field string, r byteReader) []byte {
return x
}
func (p *sessionRecord) readLevel(field string, r io.ByteReader) int {
func (p *sessionRecord) readLevel(field string, r io.ByteReader, numLevel int) int {
if p.err != nil {
return 0
}
@@ -238,14 +236,14 @@ func (p *sessionRecord) readLevel(field string, r io.ByteReader) int {
if p.err != nil {
return 0
}
if x >= uint64(p.numLevel) {
if x >= uint64(numLevel) {
p.err = errors.NewErrCorrupted(nil, &ErrManifestCorrupted{field, "invalid level number"})
return 0
}
return int(x)
}
func (p *sessionRecord) decode(r io.Reader) error {
func (p *sessionRecord) decode(r io.Reader, numLevel int) error {
br, ok := r.(byteReader)
if !ok {
br = bufio.NewReader(r)
@@ -286,13 +284,13 @@ func (p *sessionRecord) decode(r io.Reader) error {
p.setSeqNum(x)
}
case recCompPtr:
level := p.readLevel("comp-ptr.level", br)
level := p.readLevel("comp-ptr.level", br, numLevel)
ikey := p.readBytes("comp-ptr.ikey", br)
if p.err == nil {
p.addCompPtr(level, iKey(ikey))
}
case recAddTable:
level := p.readLevel("add-table.level", br)
level := p.readLevel("add-table.level", br, numLevel)
num := p.readUvarint("add-table.num", br)
size := p.readUvarint("add-table.size", br)
imin := p.readBytes("add-table.imin", br)
@@ -301,7 +299,7 @@ func (p *sessionRecord) decode(r io.Reader) error {
p.addTable(level, num, size, imin, imax)
}
case recDelTable:
level := p.readLevel("del-table.level", br)
level := p.readLevel("del-table.level", br, numLevel)
num := p.readUvarint("del-table.num", br)
if p.err == nil {
p.delTable(level, num)

View File

@@ -19,8 +19,8 @@ func decodeEncode(v *sessionRecord) (res bool, err error) {
if err != nil {
return
}
v2 := &sessionRecord{numLevel: opt.DefaultNumLevel}
err = v.decode(b)
v2 := &sessionRecord{}
err = v.decode(b, opt.DefaultNumLevel)
if err != nil {
return
}
@@ -34,7 +34,7 @@ func decodeEncode(v *sessionRecord) (res bool, err error) {
func TestSessionRecord_EncodeDecode(t *testing.T) {
big := uint64(1) << 50
v := &sessionRecord{numLevel: opt.DefaultNumLevel}
v := &sessionRecord{}
i := uint64(0)
test := func() {
res, err := decodeEncode(v)

View File

@@ -182,7 +182,7 @@ func (s *session) newManifest(rec *sessionRecord, v *version) (err error) {
defer v.release()
}
if rec == nil {
rec = &sessionRecord{numLevel: s.o.GetNumLevel()}
rec = &sessionRecord{}
}
s.fillRecord(rec, true)
v.fillRecord(rec)
@@ -240,9 +240,11 @@ func (s *session) flushManifest(rec *sessionRecord) (err error) {
if err != nil {
return
}
err = s.manifestWriter.Sync()
if err != nil {
return
if !s.o.GetNoSync() {
err = s.manifestWriter.Sync()
if err != nil {
return
}
}
s.recordCommited(rec)
return

View File

@@ -243,7 +243,10 @@ func (fs *fileStorage) GetManifest() (f File, err error) {
rem = append(rem, fn)
}
if !pend1 || cerr == nil {
cerr = fmt.Errorf("leveldb/storage: corrupted or incomplete %s file", fn)
cerr = &ErrCorrupted{
File: fsParseName(filepath.Base(fn)),
Err: errors.New("leveldb/storage: corrupted or incomplete manifest file"),
}
}
} else if f != nil && f1.Num() < f.Num() {
fs.log(fmt.Sprintf("skipping %s: obsolete", fn))
@@ -326,8 +329,7 @@ func (fs *fileStorage) Close() error {
runtime.SetFinalizer(fs, nil)
if fs.open > 0 {
fs.log(fmt.Sprintf("refuse to close, %d files still open", fs.open))
return fmt.Errorf("leveldb/storage: cannot close, %d files still open", fs.open)
fs.log(fmt.Sprintf("close: warning, %d files still open", fs.open))
}
fs.open = -1
e1 := fs.logw.Close()
@@ -505,30 +507,37 @@ func (f *file) path() string {
return filepath.Join(f.fs.path, f.name())
}
func (f *file) parse(name string) bool {
var num uint64
func fsParseName(name string) *FileInfo {
fi := &FileInfo{}
var tail string
_, err := fmt.Sscanf(name, "%d.%s", &num, &tail)
_, err := fmt.Sscanf(name, "%d.%s", &fi.Num, &tail)
if err == nil {
switch tail {
case "log":
f.t = TypeJournal
fi.Type = TypeJournal
case "ldb", "sst":
f.t = TypeTable
fi.Type = TypeTable
case "tmp":
f.t = TypeTemp
fi.Type = TypeTemp
default:
return false
return nil
}
f.num = num
return true
return fi
}
n, _ := fmt.Sscanf(name, "MANIFEST-%d%s", &num, &tail)
n, _ := fmt.Sscanf(name, "MANIFEST-%d%s", &fi.Num, &tail)
if n == 1 {
f.t = TypeManifest
f.num = num
return true
fi.Type = TypeManifest
return fi
}
return false
return nil
}
func (f *file) parse(name string) bool {
fi := fsParseName(name)
if fi == nil {
return false
}
f.t = fi.Type
f.num = fi.Num
return true
}

View File

@@ -50,13 +50,23 @@ func rename(oldpath, newpath string) error {
return os.Rename(oldpath, newpath)
}
func isErrInvalid(err error) bool {
if err == os.ErrInvalid {
return true
}
if syserr, ok := err.(*os.SyscallError); ok && syserr.Err == syscall.EINVAL {
return true
}
return false
}
func syncDir(name string) error {
f, err := os.Open(name)
if err != nil {
return err
}
defer f.Close()
if err := f.Sync(); err != nil {
if err := f.Sync(); err != nil && !isErrInvalid(err) {
return err
}
return nil

View File

@@ -46,6 +46,22 @@ var (
ErrClosed = errors.New("leveldb/storage: closed")
)
// ErrCorrupted is the type that wraps errors that indicate corruption of
// a file. Package storage has its own type instead of using
// errors.ErrCorrupted to prevent circular import.
type ErrCorrupted struct {
File *FileInfo
Err error
}
func (e *ErrCorrupted) Error() string {
if e.File != nil {
return fmt.Sprintf("%v [file=%v]", e.Err, e.File)
} else {
return e.Err.Error()
}
}
// Syncer is the interface that wraps basic Sync method.
type Syncer interface {
// Sync commits the current contents of the file to stable storage.

View File

@@ -42,6 +42,8 @@ type tsOp uint
const (
tsOpOpen tsOp = iota
tsOpCreate
tsOpReplace
tsOpRemove
tsOpRead
tsOpReadAt
tsOpWrite
@@ -241,6 +243,10 @@ func (tf tsFile) Replace(newfile storage.File) (err error) {
if err != nil {
return
}
if tf.shouldErr(tsOpReplace) {
err = errors.New("leveldb.testStorage: emulated replace error")
return
}
err = tf.File.Replace(newfile.(tsFile).File)
if err != nil {
ts.t.Errorf("E: cannot replace file, num=%d type=%v: %v", tf.Num(), tf.Type(), err)
@@ -258,6 +264,10 @@ func (tf tsFile) Remove() (err error) {
if err != nil {
return
}
if tf.shouldErr(tsOpRemove) {
err = errors.New("leveldb.testStorage: emulated remove error")
return
}
err = tf.File.Remove()
if err != nil {
ts.t.Errorf("E: cannot remove file, num=%d type=%v: %v", tf.Num(), tf.Type(), err)

View File

@@ -287,6 +287,7 @@ func (x *tFilesSortByNum) Less(i, j int) bool {
// Table operations.
type tOps struct {
s *session
noSync bool
cache *cache.Cache
bcache *cache.Cache
bpool *util.BufferPool
@@ -441,22 +442,27 @@ func newTableOps(s *session) *tOps {
var (
cacher cache.Cacher
bcache *cache.Cache
bpool *util.BufferPool
)
if s.o.GetOpenFilesCacheCapacity() > 0 {
cacher = cache.NewLRU(s.o.GetOpenFilesCacheCapacity())
}
if !s.o.DisableBlockCache {
if !s.o.GetDisableBlockCache() {
var bcacher cache.Cacher
if s.o.GetBlockCacheCapacity() > 0 {
bcacher = cache.NewLRU(s.o.GetBlockCacheCapacity())
}
bcache = cache.NewCache(bcacher)
}
if !s.o.GetDisableBufferPool() {
bpool = util.NewBufferPool(s.o.GetBlockSize() + 5)
}
return &tOps{
s: s,
noSync: s.o.GetNoSync(),
cache: cache.NewCache(cacher),
bcache: bcache,
bpool: util.NewBufferPool(s.o.GetBlockSize() + 5),
bpool: bpool,
}
}
@@ -501,9 +507,11 @@ func (w *tWriter) finish() (f *tFile, err error) {
if err != nil {
return
}
err = w.w.Sync()
if err != nil {
return
if !w.t.noSync {
err = w.w.Sync()
if err != nil {
return
}
}
f = newTableFile(w.file, uint64(w.tw.BytesLen()), iKey(w.first), iKey(w.last))
return
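
The hunks above thread the new NoSync option through table writing: newTableOps caches it and tWriter.finish skips the explicit Sync call when it is set (the manifest writer earlier in this changeset gets the same treatment). A minimal usage sketch, assuming the standard goleveldb OpenFile entry point; the path and key/value below are placeholders for illustration, not part of this changeset:

package main

import (
	"log"

	"github.com/syndtr/goleveldb/leveldb"
	"github.com/syndtr/goleveldb/leveldb/opt"
)

func main() {
	// NoSync trades durability for throughput: with it set, the table and
	// manifest writers in this changeset no longer Sync after each write.
	db, err := leveldb.OpenFile("demo.db", &opt.Options{NoSync: true})
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := db.Put([]byte("key"), []byte("value"), nil); err != nil {
		log.Fatal(err)
	}
}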

View File

@@ -14,7 +14,7 @@ import (
"strings"
"sync"
"github.com/syndtr/gosnappy/snappy"
"github.com/golang/snappy"
"github.com/syndtr/goleveldb/leveldb/cache"
"github.com/syndtr/goleveldb/leveldb/comparer"

View File

@@ -12,7 +12,7 @@ import (
"fmt"
"io"
"github.com/syndtr/gosnappy/snappy"
"github.com/golang/snappy"
"github.com/syndtr/goleveldb/leveldb/comparer"
"github.com/syndtr/goleveldb/leveldb/filter"
@@ -167,11 +167,7 @@ func (w *Writer) writeBlock(buf *util.Buffer, compression opt.Compression) (bh b
if n := snappy.MaxEncodedLen(buf.Len()) + blockTrailerLen; len(w.compressionScratch) < n {
w.compressionScratch = make([]byte, n)
}
var compressed []byte
compressed, err = snappy.Encode(w.compressionScratch, buf.Bytes())
if err != nil {
return
}
compressed := snappy.Encode(w.compressionScratch, buf.Bytes())
n := len(compressed)
b = compressed[:n+blockTrailerLen]
b[n] = blockTypeSnappyCompression

View File

@@ -201,6 +201,7 @@ func (p *BufferPool) String() string {
func (p *BufferPool) drain() {
ticker := time.NewTicker(2 * time.Second)
defer ticker.Stop()
for {
select {
case <-ticker.C:

View File

@@ -136,9 +136,8 @@ func (v *version) get(ikey iKey, ro *opt.ReadOptions, noValue bool) (value []byt
if !tseek {
if tset == nil {
tset = &tSet{level, t}
} else if tset.table.consumeSeek() <= 0 {
} else {
tseek = true
tcomp = atomic.CompareAndSwapPointer(&v.cSeek, nil, unsafe.Pointer(tset))
}
}
@@ -203,6 +202,28 @@ func (v *version) get(ikey iKey, ro *opt.ReadOptions, noValue bool) (value []byt
return true
})
if tseek && tset.table.consumeSeek() <= 0 {
tcomp = atomic.CompareAndSwapPointer(&v.cSeek, nil, unsafe.Pointer(tset))
}
return
}
func (v *version) sampleSeek(ikey iKey) (tcomp bool) {
var tset *tSet
v.walkOverlapping(ikey, func(level int, t *tFile) bool {
if tset == nil {
tset = &tSet{level, t}
return true
} else {
if tset.table.consumeSeek() <= 0 {
tcomp = atomic.CompareAndSwapPointer(&v.cSeek, nil, unsafe.Pointer(tset))
}
return false
}
}, nil)
return
}
@@ -279,7 +300,7 @@ func (v *version) offsetOf(ikey iKey) (n uint64, err error) {
return
}
func (v *version) pickLevel(umin, umax []byte) (level int) {
func (v *version) pickMemdbLevel(umin, umax []byte) (level int) {
if !v.tables[0].overlaps(v.s.icmp, umin, umax, true) {
var overlaps tFiles
maxLevel := v.s.o.GetMaxMemCompationLevel()

View File

@@ -1,124 +0,0 @@
// Copyright 2011 The Snappy-Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package snappy
import (
"encoding/binary"
"errors"
)
// ErrCorrupt reports that the input is invalid.
var ErrCorrupt = errors.New("snappy: corrupt input")
// DecodedLen returns the length of the decoded block.
func DecodedLen(src []byte) (int, error) {
v, _, err := decodedLen(src)
return v, err
}
// decodedLen returns the length of the decoded block and the number of bytes
// that the length header occupied.
func decodedLen(src []byte) (blockLen, headerLen int, err error) {
v, n := binary.Uvarint(src)
if n == 0 {
return 0, 0, ErrCorrupt
}
if uint64(int(v)) != v {
return 0, 0, errors.New("snappy: decoded block is too large")
}
return int(v), n, nil
}
// Decode returns the decoded form of src. The returned slice may be a sub-
// slice of dst if dst was large enough to hold the entire decoded block.
// Otherwise, a newly allocated slice will be returned.
// It is valid to pass a nil dst.
func Decode(dst, src []byte) ([]byte, error) {
dLen, s, err := decodedLen(src)
if err != nil {
return nil, err
}
if len(dst) < dLen {
dst = make([]byte, dLen)
}
var d, offset, length int
for s < len(src) {
switch src[s] & 0x03 {
case tagLiteral:
x := uint(src[s] >> 2)
switch {
case x < 60:
s += 1
case x == 60:
s += 2
if s > len(src) {
return nil, ErrCorrupt
}
x = uint(src[s-1])
case x == 61:
s += 3
if s > len(src) {
return nil, ErrCorrupt
}
x = uint(src[s-2]) | uint(src[s-1])<<8
case x == 62:
s += 4
if s > len(src) {
return nil, ErrCorrupt
}
x = uint(src[s-3]) | uint(src[s-2])<<8 | uint(src[s-1])<<16
case x == 63:
s += 5
if s > len(src) {
return nil, ErrCorrupt
}
x = uint(src[s-4]) | uint(src[s-3])<<8 | uint(src[s-2])<<16 | uint(src[s-1])<<24
}
length = int(x + 1)
if length <= 0 {
return nil, errors.New("snappy: unsupported literal length")
}
if length > len(dst)-d || length > len(src)-s {
return nil, ErrCorrupt
}
copy(dst[d:], src[s:s+length])
d += length
s += length
continue
case tagCopy1:
s += 2
if s > len(src) {
return nil, ErrCorrupt
}
length = 4 + int(src[s-2])>>2&0x7
offset = int(src[s-2])&0xe0<<3 | int(src[s-1])
case tagCopy2:
s += 3
if s > len(src) {
return nil, ErrCorrupt
}
length = 1 + int(src[s-3])>>2
offset = int(src[s-2]) | int(src[s-1])<<8
case tagCopy4:
return nil, errors.New("snappy: unsupported COPY_4 tag")
}
end := d + length
if offset > d || end > len(dst) {
return nil, ErrCorrupt
}
for ; d < end; d++ {
dst[d] = dst[d-offset]
}
}
if d != dLen {
return nil, ErrCorrupt
}
return dst[:d], nil
}

View File

@@ -0,0 +1,7 @@
language: go
go:
- 1.1
- 1.2
- 1.3
- 1.4
- tip

View File

@@ -0,0 +1,19 @@
Copyright (c) 2014-2015 Barracuda Networks, Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

View File

@@ -0,0 +1,50 @@
Suture
======
[![Build Status](https://travis-ci.org/thejerf/suture.png?branch=master)](https://travis-ci.org/thejerf/suture)
Suture provides Erlang-ish supervisor trees for Go. "Supervisor trees" ->
"sutree" -> "suture" -> holds your code together when it's trying to die.
This library has hit maturity, and isn't expected to be changed
radically. This can also be imported via gopkg.in/thejerf/suture.v1.
It is intended to deal gracefully with the real failure cases that can
occur with supervision trees (such as burning all your CPU time endlessly
restarting dead services), while also making no unnecessary demands on the
"service" code, and providing hooks to perform adequate logging within a
production environment.
[A blog post describing the design decisions](http://www.jerf.org/iri/post/2930)
is available.
This module is fully covered with [godoc](http://godoc.org/github.com/thejerf/suture),
including an example, usage, and everything else you might expect from a
README.md on GitHub. (DRY.)
Code Signing
------------
Starting with the commit after ac7cf8591b, I will be signing this repository
with the ["jerf" keybase account](https://keybase.io/jerf).
Aspiration
----------
One of the big wins the Erlang community has with their pervasive OTP
support is that it makes it easy for them to distribute libraries that
easily fit into the OTP paradigm. It ought to someday be considered a good
idea to distribute libraries that provide some sort of supervisor tree
functionality out of the box. It is possible to provide this functionality
without explicitly depending on the Suture library.
Changelog
---------
suture uses semantic versioning.
1. 1.0.0
* Initial release.
2. 1.0.1
* Fixed data race on the .state variable.
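
The README defers usage details to the godoc. As a rough sketch of the API vendored below (NewSimple, Add, ServeBackground, Stop, and the two-method Service interface), here is a hypothetical service wired into a supervisor; the Printer type and the timings are illustrative only and not part of the library:

package main

import (
	"fmt"
	"time"

	"github.com/thejerf/suture"
)

// Printer is a hypothetical Service: Serve runs until Stop closes its channel.
type Printer struct {
	stop chan struct{}
}

func (p *Printer) Serve() {
	for {
		select {
		case <-p.stop:
			return
		case <-time.After(time.Second):
			fmt.Println("tick")
		}
	}
}

func (p *Printer) Stop() { close(p.stop) }

func main() {
	sup := suture.NewSimple("main")
	sup.Add(&Printer{stop: make(chan struct{})})
	// ServeBackground avoids the startup race the package docs warn about
	// with a bare `go sup.Serve()`.
	sup.ServeBackground()
	time.Sleep(3 * time.Second)
	sup.Stop()
}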

View File

@@ -0,0 +1,12 @@
#!/bin/bash
GOLINTOUT=$(golint *go)
if [ ! -z "$GOLINTOUT" -o "$?" != 0 ]; then
echo golint failed:
echo $GOLINTOUT
exit 1
fi
go test

View File

@@ -0,0 +1,690 @@
/*
Package suture provides Erlang-like supervisor trees.
This implements Erlang-esque supervisor trees, as adapted for Go. This is
intended to be an industrial-strength implementation, but it has not yet
been deployed in a hostile environment. (It's headed there, though.)
Supervisor Tree -> SuTree -> suture -> holds your code together when it's
trying to fall apart.
Why use Suture?
* You want to write bullet-resistant services that will remain available
despite unforeseen failure.
* You need the code to be smart enough not to consume 100% of the CPU
restarting things.
* You want to easily compose multiple such services in one program.
* You want the Erlang programmers to stop lording their supervision
trees over you.
Suture has 100% test coverage, and is golint clean. This doesn't prove it
free of bugs, but it shows I care.
A blog post describing the design decisions is available at
http://www.jerf.org/iri/post/2930 .
Using Suture
To idiomatically use Suture, create a Supervisor which is your top level
"application" supervisor. This will often occur in your program's "main"
function.
Create "Service"s, which implement the Service interface. .Add() them
to your Supervisor. Supervisors are also services, so you can create a
tree structure here, depending on the exact combination of restarts
you want to create.
As a special case, when adding Supervisors to Supervisors, the "sub"
supervisor will have the "super" supervisor's Log function copied.
This allows you to set one log function on the "top" supervisor, and
have it propagate down to all the sub-supervisors. This also allows
libraries or modules to provide Supervisors without having to commit
their users to a particular logging method.
Finally, as what is probably the last line of your main() function, call
.Serve() on your top level supervisor. This will start all the services
you've defined.
See the Example for an example, using a simple service that serves out
incrementing integers.
*/
package suture
import (
"errors"
"fmt"
"log"
"math"
"runtime"
"sync"
"sync/atomic"
"time"
)
const (
notRunning = iota
normal
paused
)
type supervisorID uint32
type serviceID uint32
var currentSupervisorID uint32
// ErrWrongSupervisor is returned by the (*Supervisor).Remove method
// if you pass a ServiceToken from the wrong Supervisor.
var ErrWrongSupervisor = errors.New("wrong supervisor for this service token, no service removed")
// ServiceToken is an opaque identifier that can be used to terminate a service that
// has been Add()ed to a Supervisor.
type ServiceToken struct {
id uint64
}
/*
Supervisor is the core type of the module that represents a Supervisor.
Supervisors should be constructed either by New or NewSimple.
Once constructed, a Supervisor should be started in one of three ways:
1. Calling .Serve().
2. Calling .ServeBackground().
3. Adding it to an existing Supervisor.
Calling Serve will cause the supervisor to run until it is shut down by
an external user calling Stop() on it. If that never happens, it simply
runs forever. I suggest creating your services in Supervisors, then making
a Serve() call on your top-level Supervisor be the last line of your main
func.
Calling ServeBackground will CORRECTLY start the supervisor running in a
new goroutine. You do not want to just:
go supervisor.Serve()
because that will briefly create a race condition as it starts up, if you
try to .Add() services immediately afterward.
*/
type Supervisor struct {
Name string
id supervisorID
failureDecay float64
failureThreshold float64
failureBackoff time.Duration
timeout time.Duration
log func(string)
services map[serviceID]Service
lastFail time.Time
failures float64
restartQueue []serviceID
serviceCounter serviceID
control chan supervisorMessage
resumeTimer <-chan time.Time
// The testing uses the ability to grab these individual logging functions
// and get inside of suture's handling at a deep level.
// If you ever come up with some need to get into these, submit a pull
// request to make them public and some smidge of justification, and
// I'll happily do it.
// But since I've now changed the signature on these once, I'm glad I
// didn't start with them public... :)
logBadStop func(*Supervisor, Service)
logFailure func(supervisor *Supervisor, service Service, currentFailures float64, failureThreshold float64, restarting bool, error interface{}, stacktrace []byte)
logBackoff func(*Supervisor, bool)
// avoid a dependency on github.com/thejerf/abtime by just implementing
// a minimal chunk.
getNow func() time.Time
getResume func(time.Duration) <-chan time.Time
sync.Mutex
state uint8
}
// Spec is used to pass arguments to the New function to create a
// supervisor. See the New function for full documentation.
type Spec struct {
Log func(string)
FailureDecay float64
FailureThreshold float64
FailureBackoff time.Duration
Timeout time.Duration
}
/*
New is the full constructor function for a supervisor.
The name is a friendly human name for the supervisor, used in logging. Suture
does not care if this is unique, but it is good for your sanity if it is.
If not set, the following values are used:
* Log: A function is created that uses log.Print.
* FailureDecay: 30 seconds
* FailureThreshold: 5 failures
* FailureBackoff: 15 seconds
* Timeout: 10 seconds
The Log function will be called when errors occur. Suture will log the
following:
* When a service has failed, with a descriptive message about the
current backoff status, and whether it was immediately restarted
* When the supervisor has gone into its backoff mode, and when it
exits it
* When a service fails to stop
The FailureDecay, FailureThreshold, and FailureBackoff settings control how failures
are handled, in order to avoid the supervisor failure case where the
program does nothing but restart failed services. If you do not
care how failures behave, the default values should be fine for the
vast majority of services, but if you want the details:
The supervisor tracks the number of failures that have occurred, with an
exponential decay on the count. Every FailureDecay seconds, the number of
failures that have occurred is cut in half. (This is done smoothly with an
exponential function.) When a failure occurs, the number of failures
is incremented by one. When the number of failures passes the
FailureThreshold, the entire service waits for FailureBackoff seconds
before attempting any further restarts, at which point it resets its
failure count to zero.
Timeout is how long Suture will wait for a service to properly terminate.
*/
func New(name string, spec Spec) (s *Supervisor) {
s = new(Supervisor)
s.Name = name
s.id = supervisorID(atomic.AddUint32(&currentSupervisorID, 1))
if spec.Log == nil {
s.log = func(msg string) {
log.Print(fmt.Sprintf("Supervisor %s: %s", s.Name, msg))
}
} else {
s.log = spec.Log
}
if spec.FailureDecay == 0 {
s.failureDecay = 30
} else {
s.failureDecay = spec.FailureDecay
}
if spec.FailureThreshold == 0 {
s.failureThreshold = 5
} else {
s.failureThreshold = spec.FailureThreshold
}
if spec.FailureBackoff == 0 {
s.failureBackoff = time.Second * 15
} else {
s.failureBackoff = spec.FailureBackoff
}
if spec.Timeout == 0 {
s.timeout = time.Second * 10
} else {
s.timeout = spec.Timeout
}
// overriding these allows for testing the threshold behavior
s.getNow = time.Now
s.getResume = time.After
s.control = make(chan supervisorMessage)
s.services = make(map[serviceID]Service)
s.restartQueue = make([]serviceID, 0, 1)
s.resumeTimer = make(chan time.Time)
// set up the default logging handlers
s.logBadStop = func(supervisor *Supervisor, service Service) {
s.log(fmt.Sprintf("%s: Service %s failed to terminate in a timely manner", serviceName(supervisor), serviceName(service)))
}
s.logFailure = func(supervisor *Supervisor, service Service, failures float64, threshold float64, restarting bool, err interface{}, st []byte) {
var errString string
e, canError := err.(error)
if canError {
errString = e.Error()
} else {
errString = fmt.Sprintf("%#v", err)
}
s.log(fmt.Sprintf("%s: Failed service '%s' (%f failures of %f), restarting: %#v, error: %s, stacktrace: %s", serviceName(supervisor), serviceName(service), failures, threshold, restarting, errString, string(st)))
}
s.logBackoff = func(s *Supervisor, entering bool) {
if entering {
s.log("Entering the backoff state.")
} else {
s.log("Exiting backoff state.")
}
}
return
}
func serviceName(service Service) (serviceName string) {
stringer, canStringer := service.(fmt.Stringer)
if canStringer {
serviceName = stringer.String()
} else {
serviceName = fmt.Sprintf("%#v", service)
}
return
}
// NewSimple is a convenience function to create a service with just a name
// and the sensible defaults.
func NewSimple(name string) *Supervisor {
return New(name, Spec{})
}
/*
Service is the interface that describes a service to a Supervisor.
Serve Method
The Serve method is called by a Supervisor to start the service.
The service should execute within the goroutine that this is
called in. If this function either returns or panics, the Supervisor
will call it again.
A Serve method SHOULD do as much cleanup of the state as possible,
to prevent any corruption in the previous state from crashing the
service again.
Stop Method
This method is used by the supervisor to stop the service. Calling this
directly on a Service given to a Supervisor will simply result in the
Service being restarted; use the Supervisor's .Remove(ServiceToken) method
to stop a service. A supervisor will call .Stop() only once. Thus, it may
be as destructive as it likes to get the service to stop.
Once Stop has been called on a Service, the Service SHOULD NOT be
reused in any other supervisor! Because of the impossibility of
guaranteeing that the service has actually stopped in Go, you can't
prove that you won't be starting two goroutines using the exact
same memory to store state, causing completely unpredictable behavior.
Stop should not return until the service has actually stopped.
"Stopped" here is defined as "the service will stop servicing any
further requests in the future". For instance, a common implementation
is to receive a message on a dedicated "stop" channel and immediately
return. Once the stop command has been processed, the service is
stopped.
Another common Stop implementation is to forcibly close an open socket
or other resource, which will cause detectable errors to manifest in the
service code. Bear in mind that using this approach perfectly correctly
requires a bit more work to handle the chance of a Stop
command coming in before the resource has been created.
If a service does not Stop within the supervisor's timeout duration, a log
entry will be made with a descriptive string to that effect. This does
not guarantee that the service is hung; it may still get around to being
properly stopped in the future. Until the service is fully stopped,
both the service and the spawned goroutine trying to stop it will be
"leaked".
Stringer Interface
It is not mandatory to implement the fmt.Stringer interface on your
service, but if your Service does happen to implement that, the log
messages that describe your service will use that when naming the
service. Otherwise, you'll see the GoString of your service object,
obtained via fmt.Sprintf("%#v", service).
*/
type Service interface {
Serve()
Stop()
}
/*
Add adds a service to this supervisor.
If the supervisor is currently running, the service will be started
immediately. If the supervisor is not currently running, the service
will be started when the supervisor is.
The returned ServiceID may be passed to the Remove method of the Supervisor
to terminate the service.
As a special behavior, if the service added is itself a supervisor, the
supervisor being added will copy the Log function from the Supervisor it
is being added to. This allows factoring out providing a Supervisor
from its logging.
*/
func (s *Supervisor) Add(service Service) ServiceToken {
if s == nil {
panic("can't add service to nil *suture.Supervisor")
}
if supervisor, isSupervisor := service.(*Supervisor); isSupervisor {
supervisor.logBadStop = s.logBadStop
supervisor.logFailure = s.logFailure
supervisor.logBackoff = s.logBackoff
}
s.Lock()
if s.state == notRunning {
id := s.serviceCounter
s.serviceCounter++
s.services[id] = service
s.restartQueue = append(s.restartQueue, id)
s.Unlock()
return ServiceToken{uint64(s.id)<<32 | uint64(id)}
}
s.Unlock()
response := make(chan serviceID)
s.control <- addService{service, response}
return ServiceToken{uint64(s.id)<<32 | uint64(<-response)}
}
// ServeBackground starts running a supervisor in its own goroutine. This
// method does not return until it is safe to use .Add() on the Supervisor.
func (s *Supervisor) ServeBackground() {
go s.Serve()
s.sync()
}
/*
Serve starts the supervisor. You should call this on the top-level supervisor,
but nothing else.
*/
func (s *Supervisor) Serve() {
if s == nil {
panic("Can't serve with a nil *suture.Supervisor")
}
if s.id == 0 {
panic("Can't call Serve on an incorrectly-constructed *suture.Supervisor")
}
defer func() {
s.Lock()
s.state = notRunning
s.Unlock()
}()
s.Lock()
if s.state != notRunning {
s.Unlock()
panic("Running a supervisor while it is already running?")
}
s.state = normal
s.Unlock()
// for all the services I currently know about, start them
for _, id := range s.restartQueue {
service, present := s.services[id]
if present {
s.runService(service, id)
}
}
s.restartQueue = make([]serviceID, 0, 1)
for {
select {
case m := <-s.control:
switch msg := m.(type) {
case serviceFailed:
s.handleFailedService(msg.id, msg.err, msg.stacktrace)
case serviceEnded:
service, monitored := s.services[msg.id]
if monitored {
s.handleFailedService(msg.id, fmt.Sprintf("%s returned unexpectedly", service), []byte("[unknown stack trace]"))
}
case addService:
id := s.serviceCounter
s.serviceCounter++
s.services[id] = msg.service
s.runService(msg.service, id)
msg.response <- id
case removeService:
s.removeService(msg.id)
case stopSupervisor:
for id := range s.services {
s.removeService(id)
}
return
case listServices:
services := []Service{}
for _, service := range s.services {
services = append(services, service)
}
msg.c <- services
case syncSupervisor:
// this does nothing on purpose; its sole purpose is to
// introduce a sync point via the channel receive
case panicSupervisor:
// used only by tests
panic("Panicking as requested!")
}
case _ = <-s.resumeTimer:
// We're resuming normal operation after a pause due to
// excessive thrashing
// FIXME: Ought to permit some spacing of these functions, rather
// than simply hammering through them
s.Lock()
s.state = normal
s.Unlock()
s.failures = 0
s.logBackoff(s, false)
for _, id := range s.restartQueue {
service, present := s.services[id]
if present {
s.runService(service, id)
}
}
s.restartQueue = make([]serviceID, 0, 1)
}
}
}
func (s *Supervisor) handleFailedService(id serviceID, err interface{}, stacktrace []byte) {
now := s.getNow()
if s.lastFail.IsZero() {
s.lastFail = now
s.failures = 1.0
} else {
sinceLastFail := now.Sub(s.lastFail).Seconds()
intervals := sinceLastFail / s.failureDecay
s.failures = s.failures*math.Pow(.5, intervals) + 1
}
if s.failures > s.failureThreshold {
s.Lock()
s.state = paused
s.Unlock()
s.logBackoff(s, true)
s.resumeTimer = s.getResume(s.failureBackoff)
}
s.lastFail = now
failedService, monitored := s.services[id]
// It is possible for a service to be no longer monitored
// by the time we get here. In that case, just ignore it.
if monitored {
// this may look dangerous because the state could change, but this
// code is only ever run in the one goroutine that is permitted to
// change the state, so nothing else will.
s.Lock()
curState := s.state
s.Unlock()
if curState == normal {
s.runService(failedService, id)
s.logFailure(s, failedService, s.failures, s.failureThreshold, true, err, stacktrace)
} else {
// FIXME: When restarting, check that the service still
// exists (it may have been stopped in the meantime)
s.restartQueue = append(s.restartQueue, id)
s.logFailure(s, failedService, s.failures, s.failureThreshold, false, err, stacktrace)
}
}
}
func (s *Supervisor) runService(service Service, id serviceID) {
go func() {
defer func() {
if r := recover(); r != nil {
buf := make([]byte, 65535, 65535)
written := runtime.Stack(buf, false)
buf = buf[:written]
s.fail(id, r, buf)
}
}()
service.Serve()
s.serviceEnded(id)
}()
}
func (s *Supervisor) removeService(id serviceID) {
service, present := s.services[id]
if present {
delete(s.services, id)
go func() {
successChan := make(chan bool)
go func() {
service.Stop()
successChan <- true
}()
failChan := s.getResume(s.timeout)
select {
case <-successChan:
// Life is good!
case <-failChan:
s.logBadStop(s, service)
}
}()
}
}
// String implements the fmt.Stringer interface.
func (s *Supervisor) String() string {
return s.Name
}
// sum type pattern for type-safe message passing; see
// http://www.jerf.org/iri/post/2917
type supervisorMessage interface {
isSupervisorMessage()
}
/*
Remove will remove the given service from the Supervisor, and attempt to Stop() it.
The ServiceID token comes from the Add() call.
*/
func (s *Supervisor) Remove(id ServiceToken) error {
sID := supervisorID(id.id >> 32)
if sID != s.id {
return ErrWrongSupervisor
}
s.control <- removeService{serviceID(id.id & 0xffffffff)}
return nil
}
/*
Services returns a []Service containing a snapshot of the services this
Supervisor is managing.
*/
func (s *Supervisor) Services() []Service {
ls := listServices{make(chan []Service)}
s.control <- ls
return <-ls.c
}
type listServices struct {
c chan []Service
}
func (ls listServices) isSupervisorMessage() {}
type removeService struct {
id serviceID
}
func (rs removeService) isSupervisorMessage() {}
func (s *Supervisor) sync() {
s.control <- syncSupervisor{}
}
type syncSupervisor struct {
}
func (ss syncSupervisor) isSupervisorMessage() {}
func (s *Supervisor) fail(id serviceID, err interface{}, stacktrace []byte) {
s.control <- serviceFailed{id, err, stacktrace}
}
type serviceFailed struct {
id serviceID
err interface{}
stacktrace []byte
}
func (sf serviceFailed) isSupervisorMessage() {}
func (s *Supervisor) serviceEnded(id serviceID) {
s.control <- serviceEnded{id}
}
type serviceEnded struct {
id serviceID
}
func (s serviceEnded) isSupervisorMessage() {}
// added by the Add() method
type addService struct {
service Service
response chan serviceID
}
func (as addService) isSupervisorMessage() {}
// Stop stops the Supervisor.
func (s *Supervisor) Stop() {
s.control <- stopSupervisor{}
}
type stopSupervisor struct {
}
func (ss stopSupervisor) isSupervisorMessage() {}
func (s *Supervisor) panic() {
s.control <- panicSupervisor{}
}
type panicSupervisor struct {
}
func (ps panicSupervisor) isSupervisorMessage() {}
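
As a worked example of the failure accounting documented on New above (an editor's sketch; decayedFailures is a hypothetical helper, not part of the package): with the default FailureDecay of 30 seconds, a failure arriving 30 seconds after the previous one decays the running count by one half-life before adding one, exactly as handleFailedService computes with failures*math.Pow(.5, intervals)+1.

package main

import (
	"fmt"
	"math"
)

// decayedFailures mirrors suture's handleFailedService arithmetic: the running
// count is halved once per FailureDecay interval elapsed since the last
// failure, then the new failure adds one.
func decayedFailures(prev, secondsSinceLastFail, failureDecay float64) float64 {
	intervals := secondsSinceLastFail / failureDecay
	return prev*math.Pow(0.5, intervals) + 1
}

func main() {
	fmt.Println(decayedFailures(1, 30, 30))   // 1.5: one half-life, plus the new failure
	fmt.Println(decayedFailures(1.5, 60, 30)) // 1.375: two half-lives, plus the new failure
}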

View File

@@ -0,0 +1,49 @@
package suture
import "fmt"
type Incrementor struct {
current int
next chan int
stop chan bool
}
func (i *Incrementor) Stop() {
fmt.Println("Stopping the service")
i.stop <- true
}
func (i *Incrementor) Serve() {
for {
select {
case i.next <- i.current:
i.current++
case <-i.stop:
// We sync here just to guarantee the output of "Stopping the service",
// so this passes the test reliably.
// Most services would simply "return" here.
i.stop <- true
return
}
}
}
func ExampleNew_simple() {
supervisor := NewSimple("Supervisor")
service := &Incrementor{0, make(chan int), make(chan bool)}
supervisor.Add(service)
go supervisor.ServeBackground()
fmt.Println("Got:", <-service.next)
fmt.Println("Got:", <-service.next)
supervisor.Stop()
// We sync here just to guarantee the output of "Stopping the service"
<-service.stop
// Output:
// Got: 0
// Got: 1
// Stopping the service
}

View File

@@ -0,0 +1,616 @@
package suture
import (
"errors"
"fmt"
"reflect"
"sync"
"testing"
"time"
)
const (
Happy = iota
Fail
Panic
Hang
UseStopChan
)
var everMultistarted = false
// Test that supervisors work perfectly when everything is hunky dory.
func TestTheHappyCase(t *testing.T) {
t.Parallel()
s := NewSimple("A")
if s.String() != "A" {
t.Fatal("Can't get name from a supervisor")
}
service := NewService("B")
s.Add(service)
go s.Serve()
<-service.started
// If we stop the service, it just gets restarted
service.Stop()
<-service.started
// And it is shut down when we stop the supervisor
service.take <- UseStopChan
s.Stop()
<-service.stop
}
// Test that adding to a running supervisor does indeed start the service.
func TestAddingToRunningSupervisor(t *testing.T) {
t.Parallel()
s := NewSimple("A1")
s.ServeBackground()
defer s.Stop()
service := NewService("B1")
s.Add(service)
<-service.started
services := s.Services()
if !reflect.DeepEqual([]Service{service}, services) {
t.Fatal("Can't get list of services as expected.")
}
}
// Test what happens when services fail.
func TestFailures(t *testing.T) {
t.Parallel()
s := NewSimple("A2")
s.failureThreshold = 3.5
go s.Serve()
defer func() {
// to avoid deadlocks during shutdown, we have to not try to send
// things out on channels while we're shutting down (this undoes the
// logFailure override about 25 lines down)
s.logFailure = func(*Supervisor, Service, float64, float64, bool, interface{}, []byte) {}
s.Stop()
}()
s.sync()
service1 := NewService("B2")
service2 := NewService("C2")
s.Add(service1)
<-service1.started
s.Add(service2)
<-service2.started
nowFeeder := NewNowFeeder()
pastVal := time.Unix(1000000, 0)
nowFeeder.appendTimes(pastVal)
s.getNow = nowFeeder.getter
resumeChan := make(chan time.Time)
s.getResume = func(d time.Duration) <-chan time.Time {
return resumeChan
}
failNotify := make(chan bool)
// use this to synchronize on here
s.logFailure = func(supervisor *Supervisor, s Service, cf float64, ft float64, r bool, error interface{}, stacktrace []byte) {
failNotify <- r
}
// All that setup was for this: Service1, please return now.
service1.take <- Fail
restarted := <-failNotify
<-service1.started
if !restarted || s.failures != 1 || s.lastFail != pastVal {
t.Fatal("Did not fail in the expected manner")
}
// Getting past this means the service was restarted.
service1.take <- Happy
// Service2, your turn.
service2.take <- Fail
nowFeeder.appendTimes(pastVal)
restarted = <-failNotify
<-service2.started
if !restarted || s.failures != 2 || s.lastFail != pastVal {
t.Fatal("Did not fail in the expected manner")
}
// And you're back. (That is, the correct service was restarted.)
service2.take <- Happy
// Now, one failureDecay later, is everything working correctly?
oneDecayLater := time.Unix(1000030, 0)
nowFeeder.appendTimes(oneDecayLater)
service2.take <- Fail
restarted = <-failNotify
<-service2.started
// playing a bit fast and loose here with floating point, but...
// we get 2 by taking the current failure value of 2, decaying it
// by one interval, which cuts it in half to 1, then adding 1 again,
// all of which "should" be precise
if !restarted || s.failures != 2 || s.lastFail != oneDecayLater {
t.Fatal("Did not decay properly", s.lastFail, oneDecayLater)
}
// For a change of pace, service1 would you be so kind as to panic?
nowFeeder.appendTimes(oneDecayLater)
service1.take <- Panic
restarted = <-failNotify
<-service1.started
if !restarted || s.failures != 3 || s.lastFail != oneDecayLater {
t.Fatal("Did not correctly recover from a panic")
}
nowFeeder.appendTimes(oneDecayLater)
backingoff := make(chan bool)
s.logBackoff = func(s *Supervisor, backingOff bool) {
backingoff <- backingOff
}
// And with this failure, we trigger the backoff code.
service1.take <- Fail
backoff := <-backingoff
restarted = <-failNotify
if !backoff || restarted || s.failures != 4 {
t.Fatal("Broke past the threshold but did not log correctly", s.failures)
}
if service1.existing != 0 {
t.Fatal("service1 still exists according to itself?")
}
// service2 is still running, because we don't shut anything down in a
// backoff, we just stop restarting.
service2.take <- Happy
var correct bool
timer := time.NewTimer(time.Millisecond * 10)
// verify the service has not been restarted
// hard to get around race conditions here without simply using a timer...
select {
case service1.take <- Happy:
correct = false
case <-timer.C:
correct = true
}
if !correct {
t.Fatal("Restarted the service during the backoff interval")
}
// tell the supervisor the restart interval has passed
resumeChan <- time.Time{}
backoff = <-backingoff
<-service1.started
s.sync()
if s.failures != 0 {
t.Fatal("Did not reset failure count after coming back from timeout.")
}
nowFeeder.appendTimes(oneDecayLater)
service1.take <- Fail
restarted = <-failNotify
<-service1.started
if !restarted || backoff {
t.Fatal("For some reason, got that we were backing off again.", restarted, backoff)
}
}
func TestRunningAlreadyRunning(t *testing.T) {
t.Parallel()
s := NewSimple("A3")
go s.Serve()
defer s.Stop()
// ensure the supervisor has made it to its main loop
s.sync()
var errored bool
func() {
defer func() {
if r := recover(); r != nil {
errored = true
}
}()
s.Serve()
}()
if !errored {
t.Fatal("Supervisor failed to prevent itself from double-running.")
}
}
func TestFullConstruction(t *testing.T) {
t.Parallel()
s := New("Moo", Spec{
Log: func(string) {},
FailureDecay: 1,
FailureThreshold: 2,
FailureBackoff: 3,
Timeout: time.Second * 29,
})
if s.String() != "Moo" || s.failureDecay != 1 || s.failureThreshold != 2 || s.failureBackoff != 3 || s.timeout != time.Second*29 {
t.Fatal("Full construction failed somehow")
}
}
// This is mostly for coverage testing.
func TestDefaultLogging(t *testing.T) {
t.Parallel()
s := NewSimple("A4")
service := NewService("B4")
s.Add(service)
s.failureThreshold = .5
s.failureBackoff = time.Millisecond * 25
go s.Serve()
s.sync()
<-service.started
resumeChan := make(chan time.Time)
s.getResume = func(d time.Duration) <-chan time.Time {
return resumeChan
}
service.take <- UseStopChan
service.take <- Fail
<-service.stop
resumeChan <- time.Time{}
<-service.started
service.take <- Happy
serviceName(&BarelyService{})
s.logBadStop(s, service)
s.logFailure(s, service, 1, 1, true, errors.New("test error"), []byte{})
s.Stop()
}
func TestNestedSupervisors(t *testing.T) {
t.Parallel()
super1 := NewSimple("Top5")
super2 := NewSimple("Nested5")
service := NewService("Service5")
super2.logBadStop = func(*Supervisor, Service) {
panic("Failed to copy logBadStop")
}
super1.Add(super2)
super2.Add(service)
// test the functions got copied from super1; if this panics, it didn't
// get copied
super2.logBadStop(super2, service)
go super1.Serve()
super1.sync()
<-service.started
service.take <- Happy
super1.Stop()
}
func TestStoppingSupervisorStopsServices(t *testing.T) {
t.Parallel()
s := NewSimple("Top6")
service := NewService("Service 6")
s.Add(service)
go s.Serve()
s.sync()
<-service.started
service.take <- UseStopChan
s.Stop()
<-service.stop
}
func TestStoppingStillWorksWithHungServices(t *testing.T) {
t.Parallel()
s := NewSimple("Top7")
service := NewService("Service WillHang7")
s.Add(service)
go s.Serve()
<-service.started
service.take <- UseStopChan
service.take <- Hang
resumeChan := make(chan time.Time)
s.getResume = func(d time.Duration) <-chan time.Time {
return resumeChan
}
failNotify := make(chan struct{})
s.logBadStop = func(supervisor *Supervisor, s Service) {
failNotify <- struct{}{}
}
s.Stop()
resumeChan <- time.Time{}
<-failNotify
service.release <- true
<-service.stop
}
func TestRemoveService(t *testing.T) {
t.Parallel()
s := NewSimple("Top")
service := NewService("ServiceToRemove8")
id := s.Add(service)
go s.Serve()
<-service.started
service.take <- UseStopChan
err := s.Remove(id)
if err != nil {
t.Fatal("Removing service somehow failed")
}
<-service.stop
err = s.Remove(ServiceToken{1<<36 + 1})
if err != ErrWrongSupervisor {
t.Fatal("Did not detect that the ServiceToken was wrong")
}
}
func TestFailureToConstruct(t *testing.T) {
t.Parallel()
var s *Supervisor
panics(func() {
s.Serve()
})
s = new(Supervisor)
panics(func() {
s.Serve()
})
}
func TestFailingSupervisors(t *testing.T) {
t.Parallel()
// This is a bit of a complicated test, so let me explain what
// all this is doing:
// 1. Set up a top-level supervisor with a hair-trigger backoff.
// 2. Add a supervisor to that.
// 3. To that supervisor, add a service.
// 4. Panic the supervisor in the middle, sending the top-level into
// backoff.
// 5. Kill the lower level service too.
// 6. Verify that when the top-level service comes out of backoff,
// the service ends up restarted as expected.
// Ultimately, we can't have more than a best-effort recovery here.
// A panic'ed supervisor can't really be trusted to have consistent state,
// and without *that*, we can't trust it to do anything sensible with
// the children it may have been running. So unlike Erlang, we can't
// really expect to be able to safely restart them or anything.
// Really, the "correct" answer is that the Supervisor must never panic,
// but in the event that it does, this verifies that it at least tries
// to get on with life.
// This also tests that if a Supervisor itself panics, and one of its
// monitored services goes down in the meantime, that the monitored
// service also gets correctly restarted when the supervisor does.
s1 := NewSimple("Top9")
s2 := NewSimple("Nested9")
service := NewService("Service9")
s1.Add(s2)
s2.Add(service)
go s1.Serve()
<-service.started
s1.failureThreshold = .5
// let us control precisely when s1 comes back
resumeChan := make(chan time.Time)
s1.getResume = func(d time.Duration) <-chan time.Time {
return resumeChan
}
failNotify := make(chan string)
// use this to synchronize on here
s1.logFailure = func(supervisor *Supervisor, s Service, cf float64, ft float64, r bool, error interface{}, stacktrace []byte) {
failNotify <- fmt.Sprintf("%s", s)
}
s2.panic()
failing := <-failNotify
// that's enough sync to guarantee this:
if failing != "Nested9" || s1.state != paused {
t.Fatal("Top-level supervisor did not go into backoff as expected")
}
service.take <- Fail
resumeChan <- time.Time{}
<-service.started
}
func TestNilSupervisorAdd(t *testing.T) {
t.Parallel()
var s *Supervisor
defer func() {
if r := recover(); r == nil {
t.Fatal("did not panic as expected on nil add")
}
}()
s.Add(s)
}
// https://github.com/thejerf/suture/issues/11
//
// The purpose of this test is to verify that it does not cause data races,
// so there are no obvious assertions.
func TestIssue11(t *testing.T) {
t.Parallel()
s := NewSimple("main")
s.ServeBackground()
subsuper := NewSimple("sub")
s.Add(subsuper)
subsuper.Add(NewService("may cause data race"))
}
// http://golangtutorials.blogspot.com/2011/10/gotest-unit-testing-and-benchmarking-go.html
// claims test functions are run in the same order as they appear in the source file...
// I'm not sure if this is part of the contract, though. Especially in the
// face of "t.Parallel()"...
func TestEverMultistarted(t *testing.T) {
if everMultistarted {
t.Fatal("Seem to have multistarted a service at some point, bummer.")
}
}
// A test service that can be induced to fail, panic, or hang on demand.
func NewService(name string) *FailableService {
return &FailableService{name, make(chan bool), make(chan int),
make(chan bool, 1), make(chan bool), make(chan bool), 0}
}
type FailableService struct {
name string
started chan bool
take chan int
shutdown chan bool
release chan bool
stop chan bool
existing int
}
func (s *FailableService) Serve() {
if s.existing != 0 {
everMultistarted = true
panic("Multi-started the same service! " + s.name)
}
s.existing++
s.started <- true
useStopChan := false
for {
select {
case val := <-s.take:
switch val {
case Happy:
// Do nothing on purpose. Life is good!
case Fail:
s.existing--
if useStopChan {
s.stop <- true
}
return
case Panic:
s.existing--
panic("Panic!")
case Hang:
// or more specifically, "hang until I release you"
<-s.release
case UseStopChan:
useStopChan = true
}
case <-s.shutdown:
s.existing--
if useStopChan {
s.stop <- true
}
return
}
}
}
func (s *FailableService) String() string {
return s.name
}
func (s *FailableService) Stop() {
s.shutdown <- true
}
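// A minimal sketch of the channel protocol the tests above use to drive a
// FailableService (hedged; the supervisor and service names here are
// illustrative, everything else is defined in this file):
//
//	service := NewService("example")
//	sup := NewSimple("ExampleSup")
//	sup.Add(service)
//	go sup.Serve()
//	<-service.started           // Serve has begun
//	service.take <- Fail        // induce a failure; the supervisor restarts it
//	<-service.started           // restarted by the supervisor
//	service.take <- UseStopChan // have the next termination signal service.stop
//	sup.Stop()                  // stop the supervisor and its services
//	<-service.stop              // the service acknowledged the shutdown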
type NowFeeder struct {
values []time.Time
getter func() time.Time
m sync.Mutex
}
// This is used to test serviceName; it's a service without a Stringer.
type BarelyService struct{}
func (bs *BarelyService) Serve() {}
func (bs *BarelyService) Stop() {}
func NewNowFeeder() (nf *NowFeeder) {
nf = new(NowFeeder)
nf.getter = func() time.Time {
nf.m.Lock()
defer nf.m.Unlock()
if len(nf.values) > 0 {
ret := nf.values[0]
nf.values = nf.values[1:]
return ret
}
panic("Ran out of values for NowFeeder")
}
return
}
func (nf *NowFeeder) appendTimes(t ...time.Time) {
nf.m.Lock()
defer nf.m.Unlock()
nf.values = append(nf.values, t...)
}
func panics(doesItPanic func()) (panics bool) {
defer func() {
if r := recover(); r != nil {
panics = true
}
}()
doesItPanic()
return
}


@@ -0,0 +1,181 @@
// go generate gen.go
// GENERATED BY THE COMMAND ABOVE; DO NOT EDIT
// Package iana provides protocol number resources managed by the Internet Assigned Numbers Authority (IANA).
package iana // import "golang.org/x/net/internal/iana"
// Differentiated Services Field Codepoints (DSCP), Updated: 2013-06-25
const (
DiffServCS0 = 0x0 // CS0
DiffServCS1 = 0x20 // CS1
DiffServCS2 = 0x40 // CS2
DiffServCS3 = 0x60 // CS3
DiffServCS4 = 0x80 // CS4
DiffServCS5 = 0xa0 // CS5
DiffServCS6 = 0xc0 // CS6
DiffServCS7 = 0xe0 // CS7
DiffServAF11 = 0x28 // AF11
DiffServAF12 = 0x30 // AF12
DiffServAF13 = 0x38 // AF13
DiffServAF21 = 0x48 // AF21
DiffServAF22 = 0x50 // AF22
DiffServAF23 = 0x58 // AF23
DiffServAF31 = 0x68 // AF31
DiffServAF32 = 0x70 // AF32
DiffServAF33 = 0x78 // AF33
DiffServAF41 = 0x88 // AF41
DiffServAF42 = 0x90 // AF42
DiffServAF43 = 0x98 // AF43
DiffServEFPHB = 0xb8 // EF PHB
DiffServVOICEADMIT = 0xb0 // VOICE-ADMIT
)
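// A hedged usage sketch (not part of the generated file): these codepoints
// are plain integers for the IPv4 TOS byte / IPv6 traffic class, e.g. with
// golang.org/x/net/ipv6 (the net.PacketConn c is assumed to exist):
//
//	p := ipv6.NewPacketConn(c)
//	_ = p.SetTrafficClass(iana.DiffServAF41) // mark outgoing packets as AF41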
// IPv4 TOS Byte and IPv6 Traffic Class Octet, Updated: 2001-09-06
const (
NotECNTransport = 0x0 // Not-ECT (Not ECN-Capable Transport)
ECNTransport1 = 0x1 // ECT(1) (ECN-Capable Transport(1))
ECNTransport0 = 0x2 // ECT(0) (ECN-Capable Transport(0))
CongestionExperienced = 0x3 // CE (Congestion Experienced)
)
// Protocol Numbers, Updated: 2015-06-23
const (
ProtocolIP = 0 // IPv4 encapsulation, pseudo protocol number
ProtocolHOPOPT = 0 // IPv6 Hop-by-Hop Option
ProtocolICMP = 1 // Internet Control Message
ProtocolIGMP = 2 // Internet Group Management
ProtocolGGP = 3 // Gateway-to-Gateway
ProtocolIPv4 = 4 // IPv4 encapsulation
ProtocolST = 5 // Stream
ProtocolTCP = 6 // Transmission Control
ProtocolCBT = 7 // CBT
ProtocolEGP = 8 // Exterior Gateway Protocol
ProtocolIGP = 9 // any private interior gateway (used by Cisco for their IGRP)
ProtocolBBNRCCMON = 10 // BBN RCC Monitoring
ProtocolNVPII = 11 // Network Voice Protocol
ProtocolPUP = 12 // PUP
ProtocolARGUS = 13 // ARGUS
ProtocolEMCON = 14 // EMCON
ProtocolXNET = 15 // Cross Net Debugger
ProtocolCHAOS = 16 // Chaos
ProtocolUDP = 17 // User Datagram
ProtocolMUX = 18 // Multiplexing
ProtocolDCNMEAS = 19 // DCN Measurement Subsystems
ProtocolHMP = 20 // Host Monitoring
ProtocolPRM = 21 // Packet Radio Measurement
ProtocolXNSIDP = 22 // XEROX NS IDP
ProtocolTRUNK1 = 23 // Trunk-1
ProtocolTRUNK2 = 24 // Trunk-2
ProtocolLEAF1 = 25 // Leaf-1
ProtocolLEAF2 = 26 // Leaf-2
ProtocolRDP = 27 // Reliable Data Protocol
ProtocolIRTP = 28 // Internet Reliable Transaction
ProtocolISOTP4 = 29 // ISO Transport Protocol Class 4
ProtocolNETBLT = 30 // Bulk Data Transfer Protocol
ProtocolMFENSP = 31 // MFE Network Services Protocol
ProtocolMERITINP = 32 // MERIT Internodal Protocol
ProtocolDCCP = 33 // Datagram Congestion Control Protocol
Protocol3PC = 34 // Third Party Connect Protocol
ProtocolIDPR = 35 // Inter-Domain Policy Routing Protocol
ProtocolXTP = 36 // XTP
ProtocolDDP = 37 // Datagram Delivery Protocol
ProtocolIDPRCMTP = 38 // IDPR Control Message Transport Proto
ProtocolTPPP = 39 // TP++ Transport Protocol
ProtocolIL = 40 // IL Transport Protocol
ProtocolIPv6 = 41 // IPv6 encapsulation
ProtocolSDRP = 42 // Source Demand Routing Protocol
ProtocolIPv6Route = 43 // Routing Header for IPv6
ProtocolIPv6Frag = 44 // Fragment Header for IPv6
ProtocolIDRP = 45 // Inter-Domain Routing Protocol
ProtocolRSVP = 46 // Reservation Protocol
ProtocolGRE = 47 // Generic Routing Encapsulation
ProtocolDSR = 48 // Dynamic Source Routing Protocol
ProtocolBNA = 49 // BNA
ProtocolESP = 50 // Encap Security Payload
ProtocolAH = 51 // Authentication Header
ProtocolINLSP = 52 // Integrated Net Layer Security TUBA
ProtocolNARP = 54 // NBMA Address Resolution Protocol
ProtocolMOBILE = 55 // IP Mobility
ProtocolTLSP = 56 // Transport Layer Security Protocol using Kryptonet key management
ProtocolSKIP = 57 // SKIP
ProtocolIPv6ICMP = 58 // ICMP for IPv6
ProtocolIPv6NoNxt = 59 // No Next Header for IPv6
ProtocolIPv6Opts = 60 // Destination Options for IPv6
ProtocolCFTP = 62 // CFTP
ProtocolSATEXPAK = 64 // SATNET and Backroom EXPAK
ProtocolKRYPTOLAN = 65 // Kryptolan
ProtocolRVD = 66 // MIT Remote Virtual Disk Protocol
ProtocolIPPC = 67 // Internet Pluribus Packet Core
ProtocolSATMON = 69 // SATNET Monitoring
ProtocolVISA = 70 // VISA Protocol
ProtocolIPCV = 71 // Internet Packet Core Utility
ProtocolCPNX = 72 // Computer Protocol Network Executive
ProtocolCPHB = 73 // Computer Protocol Heart Beat
ProtocolWSN = 74 // Wang Span Network
ProtocolPVP = 75 // Packet Video Protocol
ProtocolBRSATMON = 76 // Backroom SATNET Monitoring
ProtocolSUNND = 77 // SUN ND PROTOCOL-Temporary
ProtocolWBMON = 78 // WIDEBAND Monitoring
ProtocolWBEXPAK = 79 // WIDEBAND EXPAK
ProtocolISOIP = 80 // ISO Internet Protocol
ProtocolVMTP = 81 // VMTP
ProtocolSECUREVMTP = 82 // SECURE-VMTP
ProtocolVINES = 83 // VINES
ProtocolTTP = 84 // Transaction Transport Protocol
ProtocolIPTM = 84 // Internet Protocol Traffic Manager
ProtocolNSFNETIGP = 85 // NSFNET-IGP
ProtocolDGP = 86 // Dissimilar Gateway Protocol
ProtocolTCF = 87 // TCF
ProtocolEIGRP = 88 // EIGRP
ProtocolOSPFIGP = 89 // OSPFIGP
ProtocolSpriteRPC = 90 // Sprite RPC Protocol
ProtocolLARP = 91 // Locus Address Resolution Protocol
ProtocolMTP = 92 // Multicast Transport Protocol
ProtocolAX25 = 93 // AX.25 Frames
ProtocolIPIP = 94 // IP-within-IP Encapsulation Protocol
ProtocolSCCSP = 96 // Semaphore Communications Sec. Pro.
ProtocolETHERIP = 97 // Ethernet-within-IP Encapsulation
ProtocolENCAP = 98 // Encapsulation Header
ProtocolGMTP = 100 // GMTP
ProtocolIFMP = 101 // Ipsilon Flow Management Protocol
ProtocolPNNI = 102 // PNNI over IP
ProtocolPIM = 103 // Protocol Independent Multicast
ProtocolARIS = 104 // ARIS
ProtocolSCPS = 105 // SCPS
ProtocolQNX = 106 // QNX
ProtocolAN = 107 // Active Networks
ProtocolIPComp = 108 // IP Payload Compression Protocol
ProtocolSNP = 109 // Sitara Networks Protocol
ProtocolCompaqPeer = 110 // Compaq Peer Protocol
ProtocolIPXinIP = 111 // IPX in IP
ProtocolVRRP = 112 // Virtual Router Redundancy Protocol
ProtocolPGM = 113 // PGM Reliable Transport Protocol
ProtocolL2TP = 115 // Layer Two Tunneling Protocol
ProtocolDDX = 116 // D-II Data Exchange (DDX)
ProtocolIATP = 117 // Interactive Agent Transfer Protocol
ProtocolSTP = 118 // Schedule Transfer Protocol
ProtocolSRP = 119 // SpectraLink Radio Protocol
ProtocolUTI = 120 // UTI
ProtocolSMP = 121 // Simple Message Protocol
ProtocolPTP = 123 // Performance Transparency Protocol
ProtocolISIS = 124 // ISIS over IPv4
ProtocolFIRE = 125 // FIRE
ProtocolCRTP = 126 // Combat Radio Transport Protocol
ProtocolCRUDP = 127 // Combat Radio User Datagram
ProtocolSSCOPMCE = 128 // SSCOPMCE
ProtocolIPLT = 129 // IPLT
ProtocolSPS = 130 // Secure Packet Shield
ProtocolPIPE = 131 // Private IP Encapsulation within IP
ProtocolSCTP = 132 // Stream Control Transmission Protocol
ProtocolFC = 133 // Fibre Channel
ProtocolRSVPE2EIGNORE = 134 // RSVP-E2E-IGNORE
ProtocolMobilityHeader = 135 // Mobility Header
ProtocolUDPLite = 136 // UDPLite
ProtocolMPLSinIP = 137 // MPLS-in-IP
ProtocolMANET = 138 // MANET Protocols
ProtocolHIP = 139 // Host Identity Protocol
ProtocolShim6 = 140 // Shim6 Protocol
ProtocolWESP = 141 // Wrapped Encapsulating Security Payload
ProtocolROHC = 142 // Robust Header Compression
ProtocolReserved = 255 // Reserved
)
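// A short illustration of where these numbers appear (hedged; it mirrors the
// control-message parsing later in this vendor tree): socket control message
// headers carry the protocol number as their Level, so a parser can filter
// IPv6-level options with:
//
//	if m.Header.Level != iana.ProtocolIPv6 { // m: syscall.SocketControlMessage
//		continue
//	}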


@@ -0,0 +1,293 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
//go:generate go run gen.go
// This program generates internet protocol constants and tables by
// reading IANA protocol registries.
package main
import (
"bytes"
"encoding/xml"
"fmt"
"go/format"
"io"
"io/ioutil"
"net/http"
"os"
"strconv"
"strings"
)
var registries = []struct {
url string
parse func(io.Writer, io.Reader) error
}{
{
"http://www.iana.org/assignments/dscp-registry/dscp-registry.xml",
parseDSCPRegistry,
},
{
"http://www.iana.org/assignments/ipv4-tos-byte/ipv4-tos-byte.xml",
parseTOSTCByte,
},
{
"http://www.iana.org/assignments/protocol-numbers/protocol-numbers.xml",
parseProtocolNumbers,
},
}
func main() {
var bb bytes.Buffer
fmt.Fprintf(&bb, "// go generate gen.go\n")
fmt.Fprintf(&bb, "// GENERATED BY THE COMMAND ABOVE; DO NOT EDIT\n\n")
fmt.Fprintf(&bb, "// Package iana provides protocol number resources managed by the Internet Assigned Numbers Authority (IANA).\n")
fmt.Fprintf(&bb, `package iana // import "golang.org/x/net/internal/iana"`+"\n\n")
for _, r := range registries {
resp, err := http.Get(r.url)
if err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
fmt.Fprintf(os.Stderr, "got HTTP status code %v for %v\n", resp.StatusCode, r.url)
os.Exit(1)
}
if err := r.parse(&bb, resp.Body); err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
fmt.Fprintf(&bb, "\n")
}
b, err := format.Source(bb.Bytes())
if err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
if err := ioutil.WriteFile("const.go", b, 0644); err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
}
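// Regenerating const.go: running `go generate` in this package executes the
// directive above (`go run gen.go`), which fetches the three registries,
// renders the const blocks, formats the result with format.Source, and
// rewrites const.go.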
func parseDSCPRegistry(w io.Writer, r io.Reader) error {
dec := xml.NewDecoder(r)
var dr dscpRegistry
if err := dec.Decode(&dr); err != nil {
return err
}
drs := dr.escape()
fmt.Fprintf(w, "// %s, Updated: %s\n", dr.Title, dr.Updated)
fmt.Fprintf(w, "const (\n")
for _, dr := range drs {
fmt.Fprintf(w, "DiffServ%s = %#x", dr.Name, dr.Value)
fmt.Fprintf(w, "// %s\n", dr.OrigName)
}
fmt.Fprintf(w, ")\n")
return nil
}
type dscpRegistry struct {
XMLName xml.Name `xml:"registry"`
Title string `xml:"title"`
Updated string `xml:"updated"`
Note string `xml:"note"`
RegTitle string `xml:"registry>title"`
PoolRecords []struct {
Name string `xml:"name"`
Space string `xml:"space"`
} `xml:"registry>record"`
Records []struct {
Name string `xml:"name"`
Space string `xml:"space"`
} `xml:"registry>registry>record"`
}
type canonDSCPRecord struct {
OrigName string
Name string
Value int
}
func (drr *dscpRegistry) escape() []canonDSCPRecord {
drs := make([]canonDSCPRecord, len(drr.Records))
sr := strings.NewReplacer(
"+", "",
"-", "",
"/", "",
".", "",
" ", "",
)
for i, dr := range drr.Records {
s := strings.TrimSpace(dr.Name)
drs[i].OrigName = s
drs[i].Name = sr.Replace(s)
n, err := strconv.ParseUint(dr.Space, 2, 8)
if err != nil {
continue
}
drs[i].Value = int(n) << 2
}
return drs
}
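// Worked example of the shift above: the registry stores the 6-bit DSCP
// field (CS1 is binary 001000, i.e. 0x08); shifting it left by two places it
// in the top six bits of the TOS/traffic-class octet, which is why const.go
// ends up with DiffServCS1 = 0x20.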
func parseTOSTCByte(w io.Writer, r io.Reader) error {
dec := xml.NewDecoder(r)
var ttb tosTCByte
if err := dec.Decode(&ttb); err != nil {
return err
}
trs := ttb.escape()
fmt.Fprintf(w, "// %s, Updated: %s\n", ttb.Title, ttb.Updated)
fmt.Fprintf(w, "const (\n")
for _, tr := range trs {
fmt.Fprintf(w, "%s = %#x", tr.Keyword, tr.Value)
fmt.Fprintf(w, "// %s\n", tr.OrigKeyword)
}
fmt.Fprintf(w, ")\n")
return nil
}
type tosTCByte struct {
XMLName xml.Name `xml:"registry"`
Title string `xml:"title"`
Updated string `xml:"updated"`
Note string `xml:"note"`
RegTitle string `xml:"registry>title"`
Records []struct {
Binary string `xml:"binary"`
Keyword string `xml:"keyword"`
} `xml:"registry>record"`
}
type canonTOSTCByteRecord struct {
OrigKeyword string
Keyword string
Value int
}
func (ttb *tosTCByte) escape() []canonTOSTCByteRecord {
trs := make([]canonTOSTCByteRecord, len(ttb.Records))
sr := strings.NewReplacer(
"Capable", "",
"(", "",
")", "",
"+", "",
"-", "",
"/", "",
".", "",
" ", "",
)
for i, tr := range ttb.Records {
s := strings.TrimSpace(tr.Keyword)
trs[i].OrigKeyword = s
ss := strings.Split(s, " ")
if len(ss) > 1 {
trs[i].Keyword = strings.Join(ss[1:], " ")
} else {
trs[i].Keyword = ss[0]
}
trs[i].Keyword = sr.Replace(trs[i].Keyword)
n, err := strconv.ParseUint(tr.Binary, 2, 8)
if err != nil {
continue
}
trs[i].Value = int(n)
}
return trs
}
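// Worked example: the record keyword "Not-ECT (Not ECN-Capable Transport)"
// drops its leading token, then the replacer strips "Capable", parentheses,
// hyphens and spaces, yielding NotECNTransport; its binary field "00" parses
// to 0x0, matching const.go.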
func parseProtocolNumbers(w io.Writer, r io.Reader) error {
dec := xml.NewDecoder(r)
var pn protocolNumbers
if err := dec.Decode(&pn); err != nil {
return err
}
prs := pn.escape()
prs = append([]canonProtocolRecord{{
Name: "IP",
Descr: "IPv4 encapsulation, pseudo protocol number",
Value: 0,
}}, prs...)
fmt.Fprintf(w, "// %s, Updated: %s\n", pn.Title, pn.Updated)
fmt.Fprintf(w, "const (\n")
for _, pr := range prs {
if pr.Name == "" {
continue
}
fmt.Fprintf(w, "Protocol%s = %d", pr.Name, pr.Value)
s := pr.Descr
if s == "" {
s = pr.OrigName
}
fmt.Fprintf(w, "// %s\n", s)
}
fmt.Fprintf(w, ")\n")
return nil
}
type protocolNumbers struct {
XMLName xml.Name `xml:"registry"`
Title string `xml:"title"`
Updated string `xml:"updated"`
RegTitle string `xml:"registry>title"`
Note string `xml:"registry>note"`
Records []struct {
Value string `xml:"value"`
Name string `xml:"name"`
Descr string `xml:"description"`
} `xml:"registry>record"`
}
type canonProtocolRecord struct {
OrigName string
Name string
Descr string
Value int
}
func (pn *protocolNumbers) escape() []canonProtocolRecord {
prs := make([]canonProtocolRecord, len(pn.Records))
sr := strings.NewReplacer(
"-in-", "in",
"-within-", "within",
"-over-", "over",
"+", "P",
"-", "",
"/", "",
".", "",
" ", "",
)
for i, pr := range pn.Records {
if strings.Contains(pr.Name, "Deprecated") ||
strings.Contains(pr.Name, "deprecated") {
continue
}
prs[i].OrigName = pr.Name
s := strings.TrimSpace(pr.Name)
switch pr.Name {
case "ISIS over IPv4":
prs[i].Name = "ISIS"
case "manet":
prs[i].Name = "MANET"
default:
prs[i].Name = sr.Replace(s)
}
ss := strings.Split(pr.Descr, "\n")
for i := range ss {
ss[i] = strings.TrimSpace(ss[i])
}
if len(ss) > 1 {
prs[i].Descr = strings.Join(ss, " ")
} else {
prs[i].Descr = ss[0]
}
prs[i].Value, _ = strconv.Atoi(pr.Value)
}
return prs
}
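// Worked example: the registry entry "MPLS-in-IP" (value 137) has "-in-"
// rewritten to "in", producing ProtocolMPLSinIP = 137 in const.go, and
// "TP++" becomes ProtocolTPPP via the "+" -> "P" rule.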

Godeps/_workspace/src/golang.org/x/net/ipv6/control.go (generated, vendored)

@@ -0,0 +1,92 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package ipv6
import (
"errors"
"fmt"
"net"
"sync"
)
var (
errMissingAddress = errors.New("missing address")
errInvalidConnType = errors.New("invalid conn type")
errNoSuchInterface = errors.New("no such interface")
)
// Note that RFC 3542 obsoletes RFC 2292, but OS X Snow Leopard and
// earlier still support only RFC 2292. Please be aware that almost
// all protocol implementations prohibit mixing RFC 2292 and RFC 3542
// options for practical reasons.
type rawOpt struct {
sync.RWMutex
cflags ControlFlags
}
func (c *rawOpt) set(f ControlFlags) { c.cflags |= f }
func (c *rawOpt) clear(f ControlFlags) { c.cflags &^= f }
func (c *rawOpt) isset(f ControlFlags) bool { return c.cflags&f != 0 }
// A ControlFlags represents per-packet IP-level socket option
// control flags.
type ControlFlags uint
const (
FlagTrafficClass ControlFlags = 1 << iota // pass the traffic class on the received packet
FlagHopLimit // pass the hop limit on the received packet
FlagSrc // pass the source address on the received packet
FlagDst // pass the destination address on the received packet
FlagInterface // pass the interface index on the received packet
FlagPathMTU // pass the path MTU on the received packet path
)
const flagPacketInfo = FlagDst | FlagInterface
// A ControlMessage represents per-packet IP-level socket
// options.
type ControlMessage struct {
// Receiving socket options: SetControlMessage allows the
// options to be received from the protocol stack via the
// ReadFrom method of PacketConn.
//
// Specifying socket options: passing a ControlMessage to the
// WriteTo method of PacketConn sends the options to the
// protocol stack.
//
TrafficClass int // traffic class, must be 1 <= value <= 255 when specifying
HopLimit int // hop limit, must be 1 <= value <= 255 when specifying
Src net.IP // source address, specifying only
Dst net.IP // destination address, receiving only
IfIndex int // interface index, must be 1 <= value when specifying
NextHop net.IP // next hop address, specifying only
MTU int // path MTU, receiving only
}
func (cm *ControlMessage) String() string {
if cm == nil {
return "<nil>"
}
return fmt.Sprintf("tclass: %#x, hoplim: %v, src: %v, dst: %v, ifindex: %v, nexthop: %v, mtu: %v", cm.TrafficClass, cm.HopLimit, cm.Src, cm.Dst, cm.IfIndex, cm.NextHop, cm.MTU)
}
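// A minimal usage sketch for the exported API (hedged; the UDP listener and
// buffer size are illustrative, not part of this file):
//
//	c, err := net.ListenPacket("udp6", "[::1]:0")
//	if err != nil {
//		// handle error
//	}
//	p := ipv6.NewPacketConn(c)
//	_ = p.SetControlMessage(ipv6.FlagHopLimit|ipv6.FlagDst|ipv6.FlagInterface, true)
//	b := make([]byte, 1500)
//	n, cm, src, err := p.ReadFrom(b) // cm is a *ControlMessage as documented above
//	_ = n; _ = cm; _ = src; _ = err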
// Ancillary data socket options
const (
ctlTrafficClass = iota // header field
ctlHopLimit // header field
ctlPacketInfo // inbound or outbound packet path
ctlNextHop // nexthop
ctlPathMTU // path mtu
ctlMax
)
// A ctlOpt represents a binding for an ancillary data socket option.
type ctlOpt struct {
name int // option name, must be equal to or greater than 1
length int // option length
marshal func([]byte, *ControlMessage) []byte
parse func(*ControlMessage, []byte)
}


@@ -0,0 +1,56 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build darwin
package ipv6
import (
"syscall"
"unsafe"
"golang.org/x/net/internal/iana"
)
func marshal2292HopLimit(b []byte, cm *ControlMessage) []byte {
m := (*syscall.Cmsghdr)(unsafe.Pointer(&b[0]))
m.Level = iana.ProtocolIPv6
m.Type = sysIPV6_2292HOPLIMIT
m.SetLen(syscall.CmsgLen(4))
if cm != nil {
data := b[syscall.CmsgLen(0):]
// TODO(mikio): fix potential misaligned memory access
*(*int32)(unsafe.Pointer(&data[:4][0])) = int32(cm.HopLimit)
}
return b[syscall.CmsgSpace(4):]
}
func marshal2292PacketInfo(b []byte, cm *ControlMessage) []byte {
m := (*syscall.Cmsghdr)(unsafe.Pointer(&b[0]))
m.Level = iana.ProtocolIPv6
m.Type = sysIPV6_2292PKTINFO
m.SetLen(syscall.CmsgLen(sysSizeofInet6Pktinfo))
if cm != nil {
pi := (*sysInet6Pktinfo)(unsafe.Pointer(&b[syscall.CmsgLen(0)]))
if ip := cm.Src.To16(); ip != nil && ip.To4() == nil {
copy(pi.Addr[:], ip)
}
if cm.IfIndex > 0 {
pi.setIfindex(cm.IfIndex)
}
}
return b[syscall.CmsgSpace(sysSizeofInet6Pktinfo):]
}
func marshal2292NextHop(b []byte, cm *ControlMessage) []byte {
m := (*syscall.Cmsghdr)(unsafe.Pointer(&b[0]))
m.Level = iana.ProtocolIPv6
m.Type = sysIPV6_2292NEXTHOP
m.SetLen(syscall.CmsgLen(sysSizeofSockaddrInet6))
if cm != nil {
sa := (*sysSockaddrInet6)(unsafe.Pointer(&b[syscall.CmsgLen(0)]))
sa.setSockaddr(cm.NextHop, cm.IfIndex)
}
return b[syscall.CmsgSpace(sysSizeofSockaddrInet6):]
}


@@ -0,0 +1,103 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build darwin dragonfly freebsd linux netbsd openbsd
package ipv6
import (
"syscall"
"unsafe"
"golang.org/x/net/internal/iana"
)
func marshalTrafficClass(b []byte, cm *ControlMessage) []byte {
m := (*syscall.Cmsghdr)(unsafe.Pointer(&b[0]))
m.Level = iana.ProtocolIPv6
m.Type = sysIPV6_TCLASS
m.SetLen(syscall.CmsgLen(4))
if cm != nil {
data := b[syscall.CmsgLen(0):]
// TODO(mikio): fix potential misaligned memory access
*(*int32)(unsafe.Pointer(&data[:4][0])) = int32(cm.TrafficClass)
}
return b[syscall.CmsgSpace(4):]
}
func parseTrafficClass(cm *ControlMessage, b []byte) {
// TODO(mikio): fix potential misaligned memory access
cm.TrafficClass = int(*(*int32)(unsafe.Pointer(&b[:4][0])))
}
func marshalHopLimit(b []byte, cm *ControlMessage) []byte {
m := (*syscall.Cmsghdr)(unsafe.Pointer(&b[0]))
m.Level = iana.ProtocolIPv6
m.Type = sysIPV6_HOPLIMIT
m.SetLen(syscall.CmsgLen(4))
if cm != nil {
data := b[syscall.CmsgLen(0):]
// TODO(mikio): fix potential misaligned memory access
*(*int32)(unsafe.Pointer(&data[:4][0])) = int32(cm.HopLimit)
}
return b[syscall.CmsgSpace(4):]
}
func parseHopLimit(cm *ControlMessage, b []byte) {
// TODO(mikio): fix potential misaligned memory access
cm.HopLimit = int(*(*int32)(unsafe.Pointer(&b[:4][0])))
}
func marshalPacketInfo(b []byte, cm *ControlMessage) []byte {
m := (*syscall.Cmsghdr)(unsafe.Pointer(&b[0]))
m.Level = iana.ProtocolIPv6
m.Type = sysIPV6_PKTINFO
m.SetLen(syscall.CmsgLen(sysSizeofInet6Pktinfo))
if cm != nil {
pi := (*sysInet6Pktinfo)(unsafe.Pointer(&b[syscall.CmsgLen(0)]))
if ip := cm.Src.To16(); ip != nil && ip.To4() == nil {
copy(pi.Addr[:], ip)
}
if cm.IfIndex > 0 {
pi.setIfindex(cm.IfIndex)
}
}
return b[syscall.CmsgSpace(sysSizeofInet6Pktinfo):]
}
func parsePacketInfo(cm *ControlMessage, b []byte) {
pi := (*sysInet6Pktinfo)(unsafe.Pointer(&b[0]))
cm.Dst = pi.Addr[:]
cm.IfIndex = int(pi.Ifindex)
}
func marshalNextHop(b []byte, cm *ControlMessage) []byte {
m := (*syscall.Cmsghdr)(unsafe.Pointer(&b[0]))
m.Level = iana.ProtocolIPv6
m.Type = sysIPV6_NEXTHOP
m.SetLen(syscall.CmsgLen(sysSizeofSockaddrInet6))
if cm != nil {
sa := (*sysSockaddrInet6)(unsafe.Pointer(&b[syscall.CmsgLen(0)]))
sa.setSockaddr(cm.NextHop, cm.IfIndex)
}
return b[syscall.CmsgSpace(sysSizeofSockaddrInet6):]
}
func parseNextHop(cm *ControlMessage, b []byte) {
}
func marshalPathMTU(b []byte, cm *ControlMessage) []byte {
m := (*syscall.Cmsghdr)(unsafe.Pointer(&b[0]))
m.Level = iana.ProtocolIPv6
m.Type = sysIPV6_PATHMTU
m.SetLen(syscall.CmsgLen(sysSizeofIPv6Mtuinfo))
return b[syscall.CmsgSpace(sysSizeofIPv6Mtuinfo):]
}
func parsePathMTU(cm *ControlMessage, b []byte) {
mi := (*sysIPv6Mtuinfo)(unsafe.Pointer(&b[0]))
cm.Dst = mi.Addr.Addr[:]
cm.IfIndex = int(mi.Addr.Scope_id)
cm.MTU = int(mi.Mtu)
}


@@ -0,0 +1,23 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build nacl plan9 solaris
package ipv6
func setControlMessage(fd int, opt *rawOpt, cf ControlFlags, on bool) error {
return errOpNoSupport
}
func newControlMessage(opt *rawOpt) (oob []byte) {
return nil
}
func parseControlMessage(b []byte) (*ControlMessage, error) {
return nil, errOpNoSupport
}
func marshalControlMessage(cm *ControlMessage) (oob []byte) {
return nil
}


@@ -0,0 +1,166 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build darwin dragonfly freebsd linux netbsd openbsd
package ipv6
import (
"os"
"syscall"
"golang.org/x/net/internal/iana"
)
func setControlMessage(fd int, opt *rawOpt, cf ControlFlags, on bool) error {
opt.Lock()
defer opt.Unlock()
if cf&FlagTrafficClass != 0 && sockOpts[ssoReceiveTrafficClass].name > 0 {
if err := setInt(fd, &sockOpts[ssoReceiveTrafficClass], boolint(on)); err != nil {
return err
}
if on {
opt.set(FlagTrafficClass)
} else {
opt.clear(FlagTrafficClass)
}
}
if cf&FlagHopLimit != 0 && sockOpts[ssoReceiveHopLimit].name > 0 {
if err := setInt(fd, &sockOpts[ssoReceiveHopLimit], boolint(on)); err != nil {
return err
}
if on {
opt.set(FlagHopLimit)
} else {
opt.clear(FlagHopLimit)
}
}
if cf&flagPacketInfo != 0 && sockOpts[ssoReceivePacketInfo].name > 0 {
if err := setInt(fd, &sockOpts[ssoReceivePacketInfo], boolint(on)); err != nil {
return err
}
if on {
opt.set(cf & flagPacketInfo)
} else {
opt.clear(cf & flagPacketInfo)
}
}
if cf&FlagPathMTU != 0 && sockOpts[ssoReceivePathMTU].name > 0 {
if err := setInt(fd, &sockOpts[ssoReceivePathMTU], boolint(on)); err != nil {
return err
}
if on {
opt.set(FlagPathMTU)
} else {
opt.clear(FlagPathMTU)
}
}
return nil
}
func newControlMessage(opt *rawOpt) (oob []byte) {
opt.RLock()
var l int
if opt.isset(FlagTrafficClass) && ctlOpts[ctlTrafficClass].name > 0 {
l += syscall.CmsgSpace(ctlOpts[ctlTrafficClass].length)
}
if opt.isset(FlagHopLimit) && ctlOpts[ctlHopLimit].name > 0 {
l += syscall.CmsgSpace(ctlOpts[ctlHopLimit].length)
}
if opt.isset(flagPacketInfo) && ctlOpts[ctlPacketInfo].name > 0 {
l += syscall.CmsgSpace(ctlOpts[ctlPacketInfo].length)
}
if opt.isset(FlagPathMTU) && ctlOpts[ctlPathMTU].name > 0 {
l += syscall.CmsgSpace(ctlOpts[ctlPathMTU].length)
}
if l > 0 {
oob = make([]byte, l)
b := oob
if opt.isset(FlagTrafficClass) && ctlOpts[ctlTrafficClass].name > 0 {
b = ctlOpts[ctlTrafficClass].marshal(b, nil)
}
if opt.isset(FlagHopLimit) && ctlOpts[ctlHopLimit].name > 0 {
b = ctlOpts[ctlHopLimit].marshal(b, nil)
}
if opt.isset(flagPacketInfo) && ctlOpts[ctlPacketInfo].name > 0 {
b = ctlOpts[ctlPacketInfo].marshal(b, nil)
}
if opt.isset(FlagPathMTU) && ctlOpts[ctlPathMTU].name > 0 {
b = ctlOpts[ctlPathMTU].marshal(b, nil)
}
}
opt.RUnlock()
return
}
func parseControlMessage(b []byte) (*ControlMessage, error) {
if len(b) == 0 {
return nil, nil
}
cmsgs, err := syscall.ParseSocketControlMessage(b)
if err != nil {
return nil, os.NewSyscallError("parse socket control message", err)
}
cm := &ControlMessage{}
for _, m := range cmsgs {
if m.Header.Level != iana.ProtocolIPv6 {
continue
}
switch int(m.Header.Type) {
case ctlOpts[ctlTrafficClass].name:
ctlOpts[ctlTrafficClass].parse(cm, m.Data[:])
case ctlOpts[ctlHopLimit].name:
ctlOpts[ctlHopLimit].parse(cm, m.Data[:])
case ctlOpts[ctlPacketInfo].name:
ctlOpts[ctlPacketInfo].parse(cm, m.Data[:])
case ctlOpts[ctlPathMTU].name:
ctlOpts[ctlPathMTU].parse(cm, m.Data[:])
}
}
return cm, nil
}
func marshalControlMessage(cm *ControlMessage) (oob []byte) {
if cm == nil {
return
}
var l int
tclass := false
if ctlOpts[ctlTrafficClass].name > 0 && cm.TrafficClass > 0 {
tclass = true
l += syscall.CmsgSpace(ctlOpts[ctlTrafficClass].length)
}
hoplimit := false
if ctlOpts[ctlHopLimit].name > 0 && cm.HopLimit > 0 {
hoplimit = true
l += syscall.CmsgSpace(ctlOpts[ctlHopLimit].length)
}
pktinfo := false
if ctlOpts[ctlPacketInfo].name > 0 && (cm.Src.To16() != nil && cm.Src.To4() == nil || cm.IfIndex > 0) {
pktinfo = true
l += syscall.CmsgSpace(ctlOpts[ctlPacketInfo].length)
}
nexthop := false
if ctlOpts[ctlNextHop].name > 0 && cm.NextHop.To16() != nil && cm.NextHop.To4() == nil {
nexthop = true
l += syscall.CmsgSpace(ctlOpts[ctlNextHop].length)
}
if l > 0 {
oob = make([]byte, l)
b := oob
if tclass {
b = ctlOpts[ctlTrafficClass].marshal(b, cm)
}
if hoplimit {
b = ctlOpts[ctlHopLimit].marshal(b, cm)
}
if pktinfo {
b = ctlOpts[ctlPacketInfo].marshal(b, cm)
}
if nexthop {
b = ctlOpts[ctlNextHop].marshal(b, cm)
}
}
return
}


@@ -0,0 +1,27 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package ipv6
import "syscall"
func setControlMessage(fd syscall.Handle, opt *rawOpt, cf ControlFlags, on bool) error {
// TODO(mikio): implement this
return syscall.EWINDOWS
}
func newControlMessage(opt *rawOpt) (oob []byte) {
// TODO(mikio): implement this
return nil
}
func parseControlMessage(b []byte) (*ControlMessage, error) {
// TODO(mikio): implement this
return nil, syscall.EWINDOWS
}
func marshalControlMessage(cm *ControlMessage) (oob []byte) {
// TODO(mikio): implement this
return nil
}


@@ -0,0 +1,112 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
// +godefs map struct_in6_addr [16]byte /* in6_addr */
package ipv6
/*
#define __APPLE_USE_RFC_3542
#include <netinet/in.h>
#include <netinet/icmp6.h>
*/
import "C"
const (
sysIPV6_UNICAST_HOPS = C.IPV6_UNICAST_HOPS
sysIPV6_MULTICAST_IF = C.IPV6_MULTICAST_IF
sysIPV6_MULTICAST_HOPS = C.IPV6_MULTICAST_HOPS
sysIPV6_MULTICAST_LOOP = C.IPV6_MULTICAST_LOOP
sysIPV6_JOIN_GROUP = C.IPV6_JOIN_GROUP
sysIPV6_LEAVE_GROUP = C.IPV6_LEAVE_GROUP
sysIPV6_PORTRANGE = C.IPV6_PORTRANGE
sysICMP6_FILTER = C.ICMP6_FILTER
sysIPV6_2292PKTINFO = C.IPV6_2292PKTINFO
sysIPV6_2292HOPLIMIT = C.IPV6_2292HOPLIMIT
sysIPV6_2292NEXTHOP = C.IPV6_2292NEXTHOP
sysIPV6_2292HOPOPTS = C.IPV6_2292HOPOPTS
sysIPV6_2292DSTOPTS = C.IPV6_2292DSTOPTS
sysIPV6_2292RTHDR = C.IPV6_2292RTHDR
sysIPV6_2292PKTOPTIONS = C.IPV6_2292PKTOPTIONS
sysIPV6_CHECKSUM = C.IPV6_CHECKSUM
sysIPV6_V6ONLY = C.IPV6_V6ONLY
sysIPV6_IPSEC_POLICY = C.IPV6_IPSEC_POLICY
sysIPV6_RECVTCLASS = C.IPV6_RECVTCLASS
sysIPV6_TCLASS = C.IPV6_TCLASS
sysIPV6_RTHDRDSTOPTS = C.IPV6_RTHDRDSTOPTS
sysIPV6_RECVPKTINFO = C.IPV6_RECVPKTINFO
sysIPV6_RECVHOPLIMIT = C.IPV6_RECVHOPLIMIT
sysIPV6_RECVRTHDR = C.IPV6_RECVRTHDR
sysIPV6_RECVHOPOPTS = C.IPV6_RECVHOPOPTS
sysIPV6_RECVDSTOPTS = C.IPV6_RECVDSTOPTS
sysIPV6_USE_MIN_MTU = C.IPV6_USE_MIN_MTU
sysIPV6_RECVPATHMTU = C.IPV6_RECVPATHMTU
sysIPV6_PATHMTU = C.IPV6_PATHMTU
sysIPV6_PKTINFO = C.IPV6_PKTINFO
sysIPV6_HOPLIMIT = C.IPV6_HOPLIMIT
sysIPV6_NEXTHOP = C.IPV6_NEXTHOP
sysIPV6_HOPOPTS = C.IPV6_HOPOPTS
sysIPV6_DSTOPTS = C.IPV6_DSTOPTS
sysIPV6_RTHDR = C.IPV6_RTHDR
sysIPV6_AUTOFLOWLABEL = C.IPV6_AUTOFLOWLABEL
sysIPV6_DONTFRAG = C.IPV6_DONTFRAG
sysIPV6_PREFER_TEMPADDR = C.IPV6_PREFER_TEMPADDR
sysIPV6_MSFILTER = C.IPV6_MSFILTER
sysMCAST_JOIN_GROUP = C.MCAST_JOIN_GROUP
sysMCAST_LEAVE_GROUP = C.MCAST_LEAVE_GROUP
sysMCAST_JOIN_SOURCE_GROUP = C.MCAST_JOIN_SOURCE_GROUP
sysMCAST_LEAVE_SOURCE_GROUP = C.MCAST_LEAVE_SOURCE_GROUP
sysMCAST_BLOCK_SOURCE = C.MCAST_BLOCK_SOURCE
sysMCAST_UNBLOCK_SOURCE = C.MCAST_UNBLOCK_SOURCE
sysIPV6_BOUND_IF = C.IPV6_BOUND_IF
sysIPV6_PORTRANGE_DEFAULT = C.IPV6_PORTRANGE_DEFAULT
sysIPV6_PORTRANGE_HIGH = C.IPV6_PORTRANGE_HIGH
sysIPV6_PORTRANGE_LOW = C.IPV6_PORTRANGE_LOW
sysSizeofSockaddrStorage = C.sizeof_struct_sockaddr_storage
sysSizeofSockaddrInet6 = C.sizeof_struct_sockaddr_in6
sysSizeofInet6Pktinfo = C.sizeof_struct_in6_pktinfo
sysSizeofIPv6Mtuinfo = C.sizeof_struct_ip6_mtuinfo
sysSizeofIPv6Mreq = C.sizeof_struct_ipv6_mreq
sysSizeofGroupReq = C.sizeof_struct_group_req
sysSizeofGroupSourceReq = C.sizeof_struct_group_source_req
sysSizeofICMPv6Filter = C.sizeof_struct_icmp6_filter
)
type sysSockaddrStorage C.struct_sockaddr_storage
type sysSockaddrInet6 C.struct_sockaddr_in6
type sysInet6Pktinfo C.struct_in6_pktinfo
type sysIPv6Mtuinfo C.struct_ip6_mtuinfo
type sysIPv6Mreq C.struct_ipv6_mreq
type sysICMPv6Filter C.struct_icmp6_filter
type sysGroupReq C.struct_group_req
type sysGroupSourceReq C.struct_group_source_req
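// These defs_*.go files are inputs to `go tool cgo -godefs`: they build only
// under the "ignore" tag and exist so the platform-specific zsys_*.go files
// can be regenerated (exact invocation assumed; see the x/net generation
// scripts).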


@@ -0,0 +1,84 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
// +godefs map struct_in6_addr [16]byte /* in6_addr */
package ipv6
/*
#include <sys/param.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/icmp6.h>
*/
import "C"
const (
sysIPV6_UNICAST_HOPS = C.IPV6_UNICAST_HOPS
sysIPV6_MULTICAST_IF = C.IPV6_MULTICAST_IF
sysIPV6_MULTICAST_HOPS = C.IPV6_MULTICAST_HOPS
sysIPV6_MULTICAST_LOOP = C.IPV6_MULTICAST_LOOP
sysIPV6_JOIN_GROUP = C.IPV6_JOIN_GROUP
sysIPV6_LEAVE_GROUP = C.IPV6_LEAVE_GROUP
sysIPV6_PORTRANGE = C.IPV6_PORTRANGE
sysICMP6_FILTER = C.ICMP6_FILTER
sysIPV6_CHECKSUM = C.IPV6_CHECKSUM
sysIPV6_V6ONLY = C.IPV6_V6ONLY
sysIPV6_IPSEC_POLICY = C.IPV6_IPSEC_POLICY
sysIPV6_RTHDRDSTOPTS = C.IPV6_RTHDRDSTOPTS
sysIPV6_RECVPKTINFO = C.IPV6_RECVPKTINFO
sysIPV6_RECVHOPLIMIT = C.IPV6_RECVHOPLIMIT
sysIPV6_RECVRTHDR = C.IPV6_RECVRTHDR
sysIPV6_RECVHOPOPTS = C.IPV6_RECVHOPOPTS
sysIPV6_RECVDSTOPTS = C.IPV6_RECVDSTOPTS
sysIPV6_USE_MIN_MTU = C.IPV6_USE_MIN_MTU
sysIPV6_RECVPATHMTU = C.IPV6_RECVPATHMTU
sysIPV6_PATHMTU = C.IPV6_PATHMTU
sysIPV6_PKTINFO = C.IPV6_PKTINFO
sysIPV6_HOPLIMIT = C.IPV6_HOPLIMIT
sysIPV6_NEXTHOP = C.IPV6_NEXTHOP
sysIPV6_HOPOPTS = C.IPV6_HOPOPTS
sysIPV6_DSTOPTS = C.IPV6_DSTOPTS
sysIPV6_RTHDR = C.IPV6_RTHDR
sysIPV6_RECVTCLASS = C.IPV6_RECVTCLASS
sysIPV6_AUTOFLOWLABEL = C.IPV6_AUTOFLOWLABEL
sysIPV6_TCLASS = C.IPV6_TCLASS
sysIPV6_DONTFRAG = C.IPV6_DONTFRAG
sysIPV6_PREFER_TEMPADDR = C.IPV6_PREFER_TEMPADDR
sysIPV6_PORTRANGE_DEFAULT = C.IPV6_PORTRANGE_DEFAULT
sysIPV6_PORTRANGE_HIGH = C.IPV6_PORTRANGE_HIGH
sysIPV6_PORTRANGE_LOW = C.IPV6_PORTRANGE_LOW
sysSizeofSockaddrInet6 = C.sizeof_struct_sockaddr_in6
sysSizeofInet6Pktinfo = C.sizeof_struct_in6_pktinfo
sysSizeofIPv6Mtuinfo = C.sizeof_struct_ip6_mtuinfo
sysSizeofIPv6Mreq = C.sizeof_struct_ipv6_mreq
sysSizeofICMPv6Filter = C.sizeof_struct_icmp6_filter
)
type sysSockaddrInet6 C.struct_sockaddr_in6
type sysInet6Pktinfo C.struct_in6_pktinfo
type sysIPv6Mtuinfo C.struct_ip6_mtuinfo
type sysIPv6Mreq C.struct_ipv6_mreq
type sysICMPv6Filter C.struct_icmp6_filter


@@ -0,0 +1,105 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
// +godefs map struct_in6_addr [16]byte /* in6_addr */
package ipv6
/*
#include <sys/param.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/icmp6.h>
*/
import "C"
const (
sysIPV6_UNICAST_HOPS = C.IPV6_UNICAST_HOPS
sysIPV6_MULTICAST_IF = C.IPV6_MULTICAST_IF
sysIPV6_MULTICAST_HOPS = C.IPV6_MULTICAST_HOPS
sysIPV6_MULTICAST_LOOP = C.IPV6_MULTICAST_LOOP
sysIPV6_JOIN_GROUP = C.IPV6_JOIN_GROUP
sysIPV6_LEAVE_GROUP = C.IPV6_LEAVE_GROUP
sysIPV6_PORTRANGE = C.IPV6_PORTRANGE
sysICMP6_FILTER = C.ICMP6_FILTER
sysIPV6_CHECKSUM = C.IPV6_CHECKSUM
sysIPV6_V6ONLY = C.IPV6_V6ONLY
sysIPV6_IPSEC_POLICY = C.IPV6_IPSEC_POLICY
sysIPV6_RTHDRDSTOPTS = C.IPV6_RTHDRDSTOPTS
sysIPV6_RECVPKTINFO = C.IPV6_RECVPKTINFO
sysIPV6_RECVHOPLIMIT = C.IPV6_RECVHOPLIMIT
sysIPV6_RECVRTHDR = C.IPV6_RECVRTHDR
sysIPV6_RECVHOPOPTS = C.IPV6_RECVHOPOPTS
sysIPV6_RECVDSTOPTS = C.IPV6_RECVDSTOPTS
sysIPV6_USE_MIN_MTU = C.IPV6_USE_MIN_MTU
sysIPV6_RECVPATHMTU = C.IPV6_RECVPATHMTU
sysIPV6_PATHMTU = C.IPV6_PATHMTU
sysIPV6_PKTINFO = C.IPV6_PKTINFO
sysIPV6_HOPLIMIT = C.IPV6_HOPLIMIT
sysIPV6_NEXTHOP = C.IPV6_NEXTHOP
sysIPV6_HOPOPTS = C.IPV6_HOPOPTS
sysIPV6_DSTOPTS = C.IPV6_DSTOPTS
sysIPV6_RTHDR = C.IPV6_RTHDR
sysIPV6_RECVTCLASS = C.IPV6_RECVTCLASS
sysIPV6_AUTOFLOWLABEL = C.IPV6_AUTOFLOWLABEL
sysIPV6_TCLASS = C.IPV6_TCLASS
sysIPV6_DONTFRAG = C.IPV6_DONTFRAG
sysIPV6_PREFER_TEMPADDR = C.IPV6_PREFER_TEMPADDR
sysIPV6_BINDANY = C.IPV6_BINDANY
sysIPV6_MSFILTER = C.IPV6_MSFILTER
sysMCAST_JOIN_GROUP = C.MCAST_JOIN_GROUP
sysMCAST_LEAVE_GROUP = C.MCAST_LEAVE_GROUP
sysMCAST_JOIN_SOURCE_GROUP = C.MCAST_JOIN_SOURCE_GROUP
sysMCAST_LEAVE_SOURCE_GROUP = C.MCAST_LEAVE_SOURCE_GROUP
sysMCAST_BLOCK_SOURCE = C.MCAST_BLOCK_SOURCE
sysMCAST_UNBLOCK_SOURCE = C.MCAST_UNBLOCK_SOURCE
sysIPV6_PORTRANGE_DEFAULT = C.IPV6_PORTRANGE_DEFAULT
sysIPV6_PORTRANGE_HIGH = C.IPV6_PORTRANGE_HIGH
sysIPV6_PORTRANGE_LOW = C.IPV6_PORTRANGE_LOW
sysSizeofSockaddrStorage = C.sizeof_struct_sockaddr_storage
sysSizeofSockaddrInet6 = C.sizeof_struct_sockaddr_in6
sysSizeofInet6Pktinfo = C.sizeof_struct_in6_pktinfo
sysSizeofIPv6Mtuinfo = C.sizeof_struct_ip6_mtuinfo
sysSizeofIPv6Mreq = C.sizeof_struct_ipv6_mreq
sysSizeofGroupReq = C.sizeof_struct_group_req
sysSizeofGroupSourceReq = C.sizeof_struct_group_source_req
sysSizeofICMPv6Filter = C.sizeof_struct_icmp6_filter
)
type sysSockaddrStorage C.struct_sockaddr_storage
type sysSockaddrInet6 C.struct_sockaddr_in6
type sysInet6Pktinfo C.struct_in6_pktinfo
type sysIPv6Mtuinfo C.struct_ip6_mtuinfo
type sysIPv6Mreq C.struct_ipv6_mreq
type sysGroupReq C.struct_group_req
type sysGroupSourceReq C.struct_group_source_req
type sysICMPv6Filter C.struct_icmp6_filter


@@ -0,0 +1,136 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
// +godefs map struct_in6_addr [16]byte /* in6_addr */
package ipv6
/*
#include <linux/in.h>
#include <linux/in6.h>
#include <linux/ipv6.h>
#include <linux/icmpv6.h>
*/
import "C"
const (
sysIPV6_ADDRFORM = C.IPV6_ADDRFORM
sysIPV6_2292PKTINFO = C.IPV6_2292PKTINFO
sysIPV6_2292HOPOPTS = C.IPV6_2292HOPOPTS
sysIPV6_2292DSTOPTS = C.IPV6_2292DSTOPTS
sysIPV6_2292RTHDR = C.IPV6_2292RTHDR
sysIPV6_2292PKTOPTIONS = C.IPV6_2292PKTOPTIONS
sysIPV6_CHECKSUM = C.IPV6_CHECKSUM
sysIPV6_2292HOPLIMIT = C.IPV6_2292HOPLIMIT
sysIPV6_NEXTHOP = C.IPV6_NEXTHOP
sysIPV6_FLOWINFO = C.IPV6_FLOWINFO
sysIPV6_UNICAST_HOPS = C.IPV6_UNICAST_HOPS
sysIPV6_MULTICAST_IF = C.IPV6_MULTICAST_IF
sysIPV6_MULTICAST_HOPS = C.IPV6_MULTICAST_HOPS
sysIPV6_MULTICAST_LOOP = C.IPV6_MULTICAST_LOOP
sysIPV6_ADD_MEMBERSHIP = C.IPV6_ADD_MEMBERSHIP
sysIPV6_DROP_MEMBERSHIP = C.IPV6_DROP_MEMBERSHIP
sysMCAST_JOIN_GROUP = C.MCAST_JOIN_GROUP
sysMCAST_LEAVE_GROUP = C.MCAST_LEAVE_GROUP
sysMCAST_JOIN_SOURCE_GROUP = C.MCAST_JOIN_SOURCE_GROUP
sysMCAST_LEAVE_SOURCE_GROUP = C.MCAST_LEAVE_SOURCE_GROUP
sysMCAST_BLOCK_SOURCE = C.MCAST_BLOCK_SOURCE
sysMCAST_UNBLOCK_SOURCE = C.MCAST_UNBLOCK_SOURCE
sysMCAST_MSFILTER = C.MCAST_MSFILTER
sysIPV6_ROUTER_ALERT = C.IPV6_ROUTER_ALERT
sysIPV6_MTU_DISCOVER = C.IPV6_MTU_DISCOVER
sysIPV6_MTU = C.IPV6_MTU
sysIPV6_RECVERR = C.IPV6_RECVERR
sysIPV6_V6ONLY = C.IPV6_V6ONLY
sysIPV6_JOIN_ANYCAST = C.IPV6_JOIN_ANYCAST
sysIPV6_LEAVE_ANYCAST = C.IPV6_LEAVE_ANYCAST
//sysIPV6_PMTUDISC_DONT = C.IPV6_PMTUDISC_DONT
//sysIPV6_PMTUDISC_WANT = C.IPV6_PMTUDISC_WANT
//sysIPV6_PMTUDISC_DO = C.IPV6_PMTUDISC_DO
//sysIPV6_PMTUDISC_PROBE = C.IPV6_PMTUDISC_PROBE
//sysIPV6_PMTUDISC_INTERFACE = C.IPV6_PMTUDISC_INTERFACE
//sysIPV6_PMTUDISC_OMIT = C.IPV6_PMTUDISC_OMIT
sysIPV6_FLOWLABEL_MGR = C.IPV6_FLOWLABEL_MGR
sysIPV6_FLOWINFO_SEND = C.IPV6_FLOWINFO_SEND
sysIPV6_IPSEC_POLICY = C.IPV6_IPSEC_POLICY
sysIPV6_XFRM_POLICY = C.IPV6_XFRM_POLICY
sysIPV6_RECVPKTINFO = C.IPV6_RECVPKTINFO
sysIPV6_PKTINFO = C.IPV6_PKTINFO
sysIPV6_RECVHOPLIMIT = C.IPV6_RECVHOPLIMIT
sysIPV6_HOPLIMIT = C.IPV6_HOPLIMIT
sysIPV6_RECVHOPOPTS = C.IPV6_RECVHOPOPTS
sysIPV6_HOPOPTS = C.IPV6_HOPOPTS
sysIPV6_RTHDRDSTOPTS = C.IPV6_RTHDRDSTOPTS
sysIPV6_RECVRTHDR = C.IPV6_RECVRTHDR
sysIPV6_RTHDR = C.IPV6_RTHDR
sysIPV6_RECVDSTOPTS = C.IPV6_RECVDSTOPTS
sysIPV6_DSTOPTS = C.IPV6_DSTOPTS
sysIPV6_RECVPATHMTU = C.IPV6_RECVPATHMTU
sysIPV6_PATHMTU = C.IPV6_PATHMTU
sysIPV6_DONTFRAG = C.IPV6_DONTFRAG
sysIPV6_RECVTCLASS = C.IPV6_RECVTCLASS
sysIPV6_TCLASS = C.IPV6_TCLASS
sysIPV6_ADDR_PREFERENCES = C.IPV6_ADDR_PREFERENCES
sysIPV6_PREFER_SRC_TMP = C.IPV6_PREFER_SRC_TMP
sysIPV6_PREFER_SRC_PUBLIC = C.IPV6_PREFER_SRC_PUBLIC
sysIPV6_PREFER_SRC_PUBTMP_DEFAULT = C.IPV6_PREFER_SRC_PUBTMP_DEFAULT
sysIPV6_PREFER_SRC_COA = C.IPV6_PREFER_SRC_COA
sysIPV6_PREFER_SRC_HOME = C.IPV6_PREFER_SRC_HOME
sysIPV6_PREFER_SRC_CGA = C.IPV6_PREFER_SRC_CGA
sysIPV6_PREFER_SRC_NONCGA = C.IPV6_PREFER_SRC_NONCGA
sysIPV6_MINHOPCOUNT = C.IPV6_MINHOPCOUNT
sysIPV6_ORIGDSTADDR = C.IPV6_ORIGDSTADDR
sysIPV6_RECVORIGDSTADDR = C.IPV6_RECVORIGDSTADDR
sysIPV6_TRANSPARENT = C.IPV6_TRANSPARENT
sysIPV6_UNICAST_IF = C.IPV6_UNICAST_IF
sysICMPV6_FILTER = C.ICMPV6_FILTER
sysICMPV6_FILTER_BLOCK = C.ICMPV6_FILTER_BLOCK
sysICMPV6_FILTER_PASS = C.ICMPV6_FILTER_PASS
sysICMPV6_FILTER_BLOCKOTHERS = C.ICMPV6_FILTER_BLOCKOTHERS
sysICMPV6_FILTER_PASSONLY = C.ICMPV6_FILTER_PASSONLY
sysSizeofKernelSockaddrStorage = C.sizeof_struct___kernel_sockaddr_storage
sysSizeofSockaddrInet6 = C.sizeof_struct_sockaddr_in6
sysSizeofInet6Pktinfo = C.sizeof_struct_in6_pktinfo
sysSizeofIPv6Mtuinfo = C.sizeof_struct_ip6_mtuinfo
sysSizeofIPv6FlowlabelReq = C.sizeof_struct_in6_flowlabel_req
sysSizeofIPv6Mreq = C.sizeof_struct_ipv6_mreq
sysSizeofGroupReq = C.sizeof_struct_group_req
sysSizeofGroupSourceReq = C.sizeof_struct_group_source_req
sysSizeofICMPv6Filter = C.sizeof_struct_icmp6_filter
)
type sysKernelSockaddrStorage C.struct___kernel_sockaddr_storage
type sysSockaddrInet6 C.struct_sockaddr_in6
type sysInet6Pktinfo C.struct_in6_pktinfo
type sysIPv6Mtuinfo C.struct_ip6_mtuinfo
type sysIPv6FlowlabelReq C.struct_in6_flowlabel_req
type sysIPv6Mreq C.struct_ipv6_mreq
type sysGroupReq C.struct_group_req
type sysGroupSourceReq C.struct_group_source_req
type sysICMPv6Filter C.struct_icmp6_filter


@@ -0,0 +1,80 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
// +godefs map struct_in6_addr [16]byte /* in6_addr */
package ipv6
/*
#include <sys/param.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/icmp6.h>
*/
import "C"
const (
sysIPV6_UNICAST_HOPS = C.IPV6_UNICAST_HOPS
sysIPV6_MULTICAST_IF = C.IPV6_MULTICAST_IF
sysIPV6_MULTICAST_HOPS = C.IPV6_MULTICAST_HOPS
sysIPV6_MULTICAST_LOOP = C.IPV6_MULTICAST_LOOP
sysIPV6_JOIN_GROUP = C.IPV6_JOIN_GROUP
sysIPV6_LEAVE_GROUP = C.IPV6_LEAVE_GROUP
sysIPV6_PORTRANGE = C.IPV6_PORTRANGE
sysICMP6_FILTER = C.ICMP6_FILTER
sysIPV6_CHECKSUM = C.IPV6_CHECKSUM
sysIPV6_V6ONLY = C.IPV6_V6ONLY
sysIPV6_IPSEC_POLICY = C.IPV6_IPSEC_POLICY
sysIPV6_RTHDRDSTOPTS = C.IPV6_RTHDRDSTOPTS
sysIPV6_RECVPKTINFO = C.IPV6_RECVPKTINFO
sysIPV6_RECVHOPLIMIT = C.IPV6_RECVHOPLIMIT
sysIPV6_RECVRTHDR = C.IPV6_RECVRTHDR
sysIPV6_RECVHOPOPTS = C.IPV6_RECVHOPOPTS
sysIPV6_RECVDSTOPTS = C.IPV6_RECVDSTOPTS
sysIPV6_USE_MIN_MTU = C.IPV6_USE_MIN_MTU
sysIPV6_RECVPATHMTU = C.IPV6_RECVPATHMTU
sysIPV6_PATHMTU = C.IPV6_PATHMTU
sysIPV6_PKTINFO = C.IPV6_PKTINFO
sysIPV6_HOPLIMIT = C.IPV6_HOPLIMIT
sysIPV6_NEXTHOP = C.IPV6_NEXTHOP
sysIPV6_HOPOPTS = C.IPV6_HOPOPTS
sysIPV6_DSTOPTS = C.IPV6_DSTOPTS
sysIPV6_RTHDR = C.IPV6_RTHDR
sysIPV6_RECVTCLASS = C.IPV6_RECVTCLASS
sysIPV6_TCLASS = C.IPV6_TCLASS
sysIPV6_DONTFRAG = C.IPV6_DONTFRAG
sysIPV6_PORTRANGE_DEFAULT = C.IPV6_PORTRANGE_DEFAULT
sysIPV6_PORTRANGE_HIGH = C.IPV6_PORTRANGE_HIGH
sysIPV6_PORTRANGE_LOW = C.IPV6_PORTRANGE_LOW
sysSizeofSockaddrInet6 = C.sizeof_struct_sockaddr_in6
sysSizeofInet6Pktinfo = C.sizeof_struct_in6_pktinfo
sysSizeofIPv6Mtuinfo = C.sizeof_struct_ip6_mtuinfo
sysSizeofIPv6Mreq = C.sizeof_struct_ipv6_mreq
sysSizeofICMPv6Filter = C.sizeof_struct_icmp6_filter
)
type sysSockaddrInet6 C.struct_sockaddr_in6
type sysInet6Pktinfo C.struct_in6_pktinfo
type sysIPv6Mtuinfo C.struct_ip6_mtuinfo
type sysIPv6Mreq C.struct_ipv6_mreq
type sysICMPv6Filter C.struct_icmp6_filter


@@ -0,0 +1,89 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
// +godefs map struct_in6_addr [16]byte /* in6_addr */
package ipv6
/*
#include <sys/param.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/icmp6.h>
*/
import "C"
const (
sysIPV6_UNICAST_HOPS = C.IPV6_UNICAST_HOPS
sysIPV6_MULTICAST_IF = C.IPV6_MULTICAST_IF
sysIPV6_MULTICAST_HOPS = C.IPV6_MULTICAST_HOPS
sysIPV6_MULTICAST_LOOP = C.IPV6_MULTICAST_LOOP
sysIPV6_JOIN_GROUP = C.IPV6_JOIN_GROUP
sysIPV6_LEAVE_GROUP = C.IPV6_LEAVE_GROUP
sysIPV6_PORTRANGE = C.IPV6_PORTRANGE
sysICMP6_FILTER = C.ICMP6_FILTER
sysIPV6_CHECKSUM = C.IPV6_CHECKSUM
sysIPV6_V6ONLY = C.IPV6_V6ONLY
sysIPV6_RTHDRDSTOPTS = C.IPV6_RTHDRDSTOPTS
sysIPV6_RECVPKTINFO = C.IPV6_RECVPKTINFO
sysIPV6_RECVHOPLIMIT = C.IPV6_RECVHOPLIMIT
sysIPV6_RECVRTHDR = C.IPV6_RECVRTHDR
sysIPV6_RECVHOPOPTS = C.IPV6_RECVHOPOPTS
sysIPV6_RECVDSTOPTS = C.IPV6_RECVDSTOPTS
sysIPV6_USE_MIN_MTU = C.IPV6_USE_MIN_MTU
sysIPV6_RECVPATHMTU = C.IPV6_RECVPATHMTU
sysIPV6_PATHMTU = C.IPV6_PATHMTU
sysIPV6_PKTINFO = C.IPV6_PKTINFO
sysIPV6_HOPLIMIT = C.IPV6_HOPLIMIT
sysIPV6_NEXTHOP = C.IPV6_NEXTHOP
sysIPV6_HOPOPTS = C.IPV6_HOPOPTS
sysIPV6_DSTOPTS = C.IPV6_DSTOPTS
sysIPV6_RTHDR = C.IPV6_RTHDR
sysIPV6_AUTH_LEVEL = C.IPV6_AUTH_LEVEL
sysIPV6_ESP_TRANS_LEVEL = C.IPV6_ESP_TRANS_LEVEL
sysIPV6_ESP_NETWORK_LEVEL = C.IPV6_ESP_NETWORK_LEVEL
sysIPSEC6_OUTSA = C.IPSEC6_OUTSA
sysIPV6_RECVTCLASS = C.IPV6_RECVTCLASS
sysIPV6_AUTOFLOWLABEL = C.IPV6_AUTOFLOWLABEL
sysIPV6_IPCOMP_LEVEL = C.IPV6_IPCOMP_LEVEL
sysIPV6_TCLASS = C.IPV6_TCLASS
sysIPV6_DONTFRAG = C.IPV6_DONTFRAG
sysIPV6_PIPEX = C.IPV6_PIPEX
sysIPV6_RTABLE = C.IPV6_RTABLE
sysIPV6_PORTRANGE_DEFAULT = C.IPV6_PORTRANGE_DEFAULT
sysIPV6_PORTRANGE_HIGH = C.IPV6_PORTRANGE_HIGH
sysIPV6_PORTRANGE_LOW = C.IPV6_PORTRANGE_LOW
sysSizeofSockaddrInet6 = C.sizeof_struct_sockaddr_in6
sysSizeofInet6Pktinfo = C.sizeof_struct_in6_pktinfo
sysSizeofIPv6Mtuinfo = C.sizeof_struct_ip6_mtuinfo
sysSizeofIPv6Mreq = C.sizeof_struct_ipv6_mreq
sysSizeofICMPv6Filter = C.sizeof_struct_icmp6_filter
)
type sysSockaddrInet6 C.struct_sockaddr_in6
type sysInet6Pktinfo C.struct_in6_pktinfo
type sysIPv6Mtuinfo C.struct_ip6_mtuinfo
type sysIPv6Mreq C.struct_ipv6_mreq
type sysICMPv6Filter C.struct_icmp6_filter


@@ -0,0 +1,96 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
// +godefs map struct_in6_addr [16]byte /* in6_addr */
package ipv6
/*
#include <netinet/in.h>
#include <netinet/icmp6.h>
*/
import "C"
const (
sysIPV6_UNICAST_HOPS = C.IPV6_UNICAST_HOPS
sysIPV6_MULTICAST_IF = C.IPV6_MULTICAST_IF
sysIPV6_MULTICAST_HOPS = C.IPV6_MULTICAST_HOPS
sysIPV6_MULTICAST_LOOP = C.IPV6_MULTICAST_LOOP
sysIPV6_JOIN_GROUP = C.IPV6_JOIN_GROUP
sysIPV6_LEAVE_GROUP = C.IPV6_LEAVE_GROUP
sysIPV6_PKTINFO = C.IPV6_PKTINFO
sysIPV6_HOPLIMIT = C.IPV6_HOPLIMIT
sysIPV6_NEXTHOP = C.IPV6_NEXTHOP
sysIPV6_HOPOPTS = C.IPV6_HOPOPTS
sysIPV6_DSTOPTS = C.IPV6_DSTOPTS
sysIPV6_RTHDR = C.IPV6_RTHDR
sysIPV6_RTHDRDSTOPTS = C.IPV6_RTHDRDSTOPTS
sysIPV6_RECVPKTINFO = C.IPV6_RECVPKTINFO
sysIPV6_RECVHOPLIMIT = C.IPV6_RECVHOPLIMIT
sysIPV6_RECVHOPOPTS = C.IPV6_RECVHOPOPTS
sysIPV6_RECVRTHDR = C.IPV6_RECVRTHDR
sysIPV6_RECVRTHDRDSTOPTS = C.IPV6_RECVRTHDRDSTOPTS
sysIPV6_CHECKSUM = C.IPV6_CHECKSUM
sysIPV6_RECVTCLASS = C.IPV6_RECVTCLASS
sysIPV6_USE_MIN_MTU = C.IPV6_USE_MIN_MTU
sysIPV6_DONTFRAG = C.IPV6_DONTFRAG
sysIPV6_SEC_OPT = C.IPV6_SEC_OPT
sysIPV6_SRC_PREFERENCES = C.IPV6_SRC_PREFERENCES
sysIPV6_RECVPATHMTU = C.IPV6_RECVPATHMTU
sysIPV6_PATHMTU = C.IPV6_PATHMTU
sysIPV6_TCLASS = C.IPV6_TCLASS
sysIPV6_V6ONLY = C.IPV6_V6ONLY
sysIPV6_RECVDSTOPTS = C.IPV6_RECVDSTOPTS
sysIPV6_PREFER_SRC_HOME = C.IPV6_PREFER_SRC_HOME
sysIPV6_PREFER_SRC_COA = C.IPV6_PREFER_SRC_COA
sysIPV6_PREFER_SRC_PUBLIC = C.IPV6_PREFER_SRC_PUBLIC
sysIPV6_PREFER_SRC_TMP = C.IPV6_PREFER_SRC_TMP
sysIPV6_PREFER_SRC_NONCGA = C.IPV6_PREFER_SRC_NONCGA
sysIPV6_PREFER_SRC_CGA = C.IPV6_PREFER_SRC_CGA
sysIPV6_PREFER_SRC_MIPMASK = C.IPV6_PREFER_SRC_MIPMASK
sysIPV6_PREFER_SRC_MIPDEFAULT = C.IPV6_PREFER_SRC_MIPDEFAULT
sysIPV6_PREFER_SRC_TMPMASK = C.IPV6_PREFER_SRC_TMPMASK
sysIPV6_PREFER_SRC_TMPDEFAULT = C.IPV6_PREFER_SRC_TMPDEFAULT
sysIPV6_PREFER_SRC_CGAMASK = C.IPV6_PREFER_SRC_CGAMASK
sysIPV6_PREFER_SRC_CGADEFAULT = C.IPV6_PREFER_SRC_CGADEFAULT
sysIPV6_PREFER_SRC_MASK = C.IPV6_PREFER_SRC_MASK
sysIPV6_PREFER_SRC_DEFAULT = C.IPV6_PREFER_SRC_DEFAULT
sysIPV6_BOUND_IF = C.IPV6_BOUND_IF
sysIPV6_UNSPEC_SRC = C.IPV6_UNSPEC_SRC
sysICMP6_FILTER = C.ICMP6_FILTER
sysSizeofSockaddrInet6 = C.sizeof_struct_sockaddr_in6
sysSizeofInet6Pktinfo = C.sizeof_struct_in6_pktinfo
sysSizeofIPv6Mtuinfo = C.sizeof_struct_ip6_mtuinfo
sysSizeofIPv6Mreq = C.sizeof_struct_ipv6_mreq
sysSizeofICMPv6Filter = C.sizeof_struct_icmp6_filter
)
type sysSockaddrInet6 C.struct_sockaddr_in6
type sysInet6Pktinfo C.struct_in6_pktinfo
type sysIPv6Mtuinfo C.struct_ip6_mtuinfo
type sysIPv6Mreq C.struct_ipv6_mreq
type sysICMPv6Filter C.struct_icmp6_filter

Some files were not shown because too many files have changed in this diff.