Compare commits

...

263 Commits

Author SHA1 Message Date
Audrius Butkevicius
83d25f09a3 Fix broken upgrades (fixes #1175) 2015-01-04 18:19:00 +00:00
Audrius Butkevicius
ed747a2d3d Add identicons to device prompts 2015-01-03 23:34:15 +00:00
Jakob Borg
3a8ee4ce2e Merge pull request #1169 from syncthing/pullhash
Hash blocks after receipt, try multiple peers (fixes #1166)
2015-01-04 00:24:07 +01:00
Audrius Butkevicius
5ac01a3af4 Hash blocks after receipt, try multiple peers (fixes #1166) 2015-01-03 23:21:57 +00:00
Jakob Borg
46343f2f9e Merge pull request #1174 from AudriusButkevicius/intro
New device, folder prompts (fixes #120, fixes #330)
2015-01-04 00:16:10 +01:00
Audrius Butkevicius
56ccb5b2ab New device, folder prompts (fixes #120, fixes #330) 2015-01-03 23:06:41 +00:00
Jakob Borg
9a946eed80 Discourse -> Wiki for docs 2015-01-03 16:44:13 +01:00
Audrius Butkevicius
9c6cb0f630 Merge pull request #1172 from syncthing/random-scanintv
Add a random perturbation to the scan interval (fixes #1150)
2015-01-02 15:25:22 +00:00
Audrius Butkevicius
1b066d6965 Merge pull request #1171 from syncthing/jobqueue
Add job queue (replaces #1060)
2015-01-02 15:18:50 +00:00
Jakob Borg
54c3caad53 Add a random perturbation to the scan interval (fixes #1150) 2015-01-02 16:16:16 +01:00
Jakob Borg
9b5e8aaf83 Repair buggy BringToFront 2015-01-02 15:54:04 +01:00
Jakob Borg
5143c09bcf Refactor / cleanup 2015-01-02 15:54:04 +01:00
Jakob Borg
2496185629 Only buffer file names, not full &FileInfo 2015-01-02 15:33:39 +01:00
Jakob Borg
34deb82aea Use slice instead of list, no map
benchmark                           old ns/op     new ns/op     delta
BenchmarkJobQueueBump               345           154498        +44682.03%
BenchmarkJobQueuePushPopDone10k     9437373       3258204       -65.48%

benchmark                           old allocs     new allocs     delta
BenchmarkJobQueueBump               0              0              +0.00%
BenchmarkJobQueuePushPopDone10k     10565          22             -99.79%

benchmark                           old bytes     new bytes     delta
BenchmarkJobQueueBump               0             0             +0.00%
BenchmarkJobQueuePushPopDone10k     1452498       385869        -73.43%
2015-01-02 15:33:39 +01:00
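As an illustration of the change above, here is a minimal sketch of a slice-backed job queue in the spirit of this commit; the names (JobQueue, Push, Pop, BringToFront) follow the benchmarks above, but the code is an assumption for illustration, not the actual syncthing implementation. Storing bare file names in a slice removes nearly all per-job allocations (hence the -99.79% allocs in PushPopDone), while BringToFront becomes a linear scan and copy (hence the slower Bump).

// Sketch: slice-backed job queue. Illustrative only.
package jobqueue

import "sync"

type JobQueue struct {
	mut    sync.Mutex
	queued []string // bare file names, not full FileInfo structs
}

// Push appends a file name to the back of the queue (amortized O(1)).
func (q *JobQueue) Push(file string) {
	q.mut.Lock()
	q.queued = append(q.queued, file)
	q.mut.Unlock()
}

// Pop removes and returns the file name at the head of the queue.
func (q *JobQueue) Pop() (string, bool) {
	q.mut.Lock()
	defer q.mut.Unlock()
	if len(q.queued) == 0 {
		return "", false
	}
	file := q.queued[0]
	q.queued = q.queued[1:]
	return file, true
}

// BringToFront moves the named file to the head: a linear scan plus a
// copy, which is why Bump is slower here than in a linked list, while
// allocations drop to nearly zero.
func (q *JobQueue) BringToFront(file string) {
	q.mut.Lock()
	defer q.mut.Unlock()
	for i, f := range q.queued {
		if f == file {
			copy(q.queued[1:i+1], q.queued[:i])
			q.queued[0] = f
			return
		}
	}
}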
Jakob Borg
8f72ae9da2 Add some benchmarks 2015-01-02 15:33:39 +01:00
Audrius Butkevicius
b753f01ac1 Add tests 2015-01-02 15:33:39 +01:00
Audrius Butkevicius
fd0a147ae6 Add job queue (fixes #629)
Request to terminate currently ongoing downloads and jump to the bumped file
incoming in 3, 2, 1.

Also, this has a slightly strange effect: we pop a job off the queue, but
the copyChannel is still busy and blocks, so the job gets moved to the
progress slice in the jobqueue and looks like it's in progress, which it
isn't, as it's still waiting to be picked up from the copyChan.

As a result, the progress emitter doesn't register the task, and hence the
file doesn't have a progress bar, but it cannot be replaced by a bump either.

I guess I could fix the progress bar issue by moving the
progressEmitter.Register just before passing the file to the copyChan, but
then we are back to the initial problem of a file with a progress bar but no
progress happening, as it's stuck on the write to copyChan.

I checked if there is a way to test a channel for writeability (before
popping) but got struck by lightning just for bringing the idea up in
#go-nuts.

My ideal scenario would be to check whether copyChan is writeable, pop a job
from the queue and shove it down handleFile. This way jobs would stay in the
queue while they cannot be handled, meaning that `Bump` could bring your file
up higher.
2015-01-02 15:33:39 +01:00
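For readers following the channel-writeability discussion above: Go offers no direct test for whether a send would block, but a select with a default case gives a non-blocking send, which approximates the ideal scenario described (only pop once a copier can actually take the job). A minimal, self-contained sketch; copyChan, queue and peek are illustrative names, not the actual puller code.

// Sketch: pop a job only if the handoff would succeed. Illustrative only.
package main

import "fmt"

func peek(q []string) (string, bool) {
	if len(q) == 0 {
		return "", false
	}
	return q[0], true
}

func main() {
	copyChan := make(chan string) // unbuffered: no receiver is ready here
	queue := []string{"somefile", "otherfile"}

	if job, ok := peek(queue); ok {
		select {
		case copyChan <- job:
			queue = queue[1:] // pop only after the send succeeded
		default:
			// Send would block: the job stays queued, so a Bump
			// can still reorder it.
			fmt.Println("copier busy, job stays queued:", job)
		}
	}
}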
Audrius Butkevicius
e94bd90782 Merge pull request #1164 from syncthing/ro-tempfiles
Handle read only temp files after crash/restart
2014-12-31 12:08:37 +00:00
Jakob Borg
ce4b897d0e Handle read only temp files after crash/restart 2014-12-31 13:06:28 +01:00
Jakob Borg
a7694029e2 Make sure to stop processes when exiting integration test 2014-12-31 13:04:06 +01:00
Jakob Borg
1e9110b763 Add debugging utility for manual directory comparison 2014-12-31 13:04:06 +01:00
Jakob Borg
6f3fbbbe49 Improve error checking in integration tests 2014-12-31 13:04:04 +01:00
Jakob Borg
d346ec7bfe Merge pull request #1160 from AudriusButkevicius/upnp
Use unique names for UPnP mappings (fixes #1100, fixes #1128)
2014-12-31 12:56:47 +01:00
Jakob Borg
26a3613397 Merge pull request #1162 from AudriusButkevicius/silence
Silence versioner warnings for unmatched files (fixes #1117)
2014-12-31 12:54:23 +01:00
Jakob Borg
e6318bddf3 Merge pull request #1161 from AudriusButkevicius/upnp2
Use ListenMulticastUDP for multicast sockets (potentially fixes #1113)
2014-12-31 12:53:56 +01:00
Audrius Butkevicius
514bb0beda Silence versioner warnings for unmatched files (fixes #1117) 2014-12-30 22:43:07 +00:00
Audrius Butkevicius
41b1bd2f05 Use ListenMulticastUDP for multicast sockets (potentially fixes #1113) 2014-12-30 22:27:47 +00:00
Audrius Butkevicius
bf40dadf04 Use unique names for UPnP mappings (fixes #1100, fixes #1128) 2014-12-30 21:47:12 +00:00
Jakob Borg
cb1678ebec Clean up folders after -reset test 2014-12-30 11:02:49 +01:00
Jakob Borg
0c1ac568b5 Fix tests with newer goleveldb 2014-12-29 14:50:24 +01:00
Audrius Butkevicius
0f9550c747 Merge pull request #1149 from syncthing/fix-1058
Also check file size when determining if file is unchanged (fixes #1058)
2014-12-29 13:29:00 +00:00
Audrius Butkevicius
b13ae17a47 Merge pull request #1147 from syncthing/fix-1118
Generate a random API key on initial setup (fixes #1118)
2014-12-29 13:28:38 +00:00
Jakob Borg
f762a12d18 Also check file size when determining if file is unchanged (fixes #1058) 2014-12-29 14:24:12 +01:00
Jakob Borg
20d30a80be Generate a random API key on initial setup (fixes #1118)
Also makes the javascript implementation use the same algorithm for
generating random strings.
2014-12-29 13:48:26 +01:00
Audrius Butkevicius
229b218203 Merge pull request #1146 from syncthing/fix-1047
Make auto upgrade careful about breaking changes (fixes #1047)
2014-12-29 11:41:10 +00:00
Jakob Borg
4b668aaca8 Make auto upgrade careful about breaking changes (fixes #1047) 2014-12-29 12:35:06 +01:00
Jakob Borg
8c7f1421c6 Update goleveldb 2014-12-29 12:23:07 +01:00
Jakob Borg
d90b2c1d52 Translation update 2014-12-29 09:42:17 +01:00
Jakob Borg
22f39be197 Exit before attempting to use nil variables on scanning nonexistent folder 2014-12-23 14:14:05 +01:00
Audrius Butkevicius
2fa45436c2 Merge pull request #1140 from syncthing/fix-1133
Refactor ignore handling to fix #1133
2014-12-23 13:01:56 +02:00
Jakob Borg
cadbb6bbce Move ignore handling from index recv to puller (fixes #1133)
With this change we accept updates for ignored files from other devices,
and check the ignore patterns at pull time. When we detect that the
ignore patterns have changed we do a full check of files that we might
now need to pull.
2014-12-23 10:46:02 +01:00
Jakob Borg
2c89f04be7 Refactor ignore handling (...)
This uses persistent Matcher objects that can reload their content and
provide a hash string that can be used to check if it's changed. The
cache is local to each Matcher object instead of kept globally.
2014-12-23 10:46:02 +01:00
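A minimal sketch of such a persistent Matcher, assuming a Load method and a pattern hash; the shape is illustrative, not syncthing's actual ignore package.

// Sketch: reloadable ignore matcher with a change-detection hash.
package ignore

import (
	"crypto/md5"
	"fmt"
	"sync"
)

type Matcher struct {
	mut      sync.Mutex
	patterns []string
	hash     string
}

// Load replaces the pattern set and recomputes the content hash.
func (m *Matcher) Load(patterns []string) {
	h := md5.New()
	for _, p := range patterns {
		fmt.Fprintln(h, p)
	}
	m.mut.Lock()
	m.patterns = patterns
	m.hash = fmt.Sprintf("%x", h.Sum(nil))
	m.mut.Unlock()
}

// Hash returns a string that changes whenever the patterns change, so a
// puller can detect changed ignores and re-check files it skipped before.
func (m *Matcher) Hash() string {
	m.mut.Lock()
	defer m.mut.Unlock()
	return m.hash
}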
Jakob Borg
597011e3a9 Disregard change to removed doc 2014-12-23 10:23:36 +01:00
Audrius Butkevicius
0d433b58ba Merge pull request #1139 from syncthing/check-upgrade-md5
Check upgrade md5
2014-12-22 15:33:19 +02:00
Jakob Borg
cde8ef56e5 Implement manual -upgrade-to option 2014-12-22 12:18:10 +01:00
Jakob Borg
110816c7aa Consolidate Windows/Unix upgrading and check MD5 (fixes #1138) 2014-12-22 12:13:31 +01:00
Jakob Borg
fbb1e168f7 Include MD5 sums in archives 2014-12-22 12:12:34 +01:00
Jakob Borg
23085eb5ae Must verify success of from-network copy during upgrade (ref #1138) 2014-12-22 10:42:47 +01:00
Jakob Borg
7344a6205f Move protocol specs to a separate repo 2014-12-22 09:55:58 +01:00
marco-m
4b76ec40c0 Update DISCOVERY.md
Correct DISCOVERY.md with the changes proposed in the forum (https://discourse.syncthing.net/t/questions-about-the-discovery-protocol/1586)
2014-12-21 22:47:47 +01:00
Audrius Butkevicius
90101d0269 Merge pull request #1134 from syncthing/fix-816
Don't ignore ignored items forever (fixes #816)
2014-12-21 16:18:24 +02:00
Jakob Borg
7ac84c0660 Don't ignore ignored items forever (fixes #816) 2014-12-21 13:55:50 +01:00
Jakob Borg
2090530bbb Improve and clean up integration tests, benchmark. 2014-12-19 12:43:48 +01:00
Jakob Borg
b6cb7ddbaf There is no Legend string right now 2014-12-19 10:18:51 +01:00
Jakob Borg
3422d9335c ... and in NICKS (I should go to bed) 2014-12-18 22:55:04 +01:00
Jakob Borg
e91f9a944e Revert "Update bootstrap" (fixes #1121)
This reverts commit 51cdd38c3e.

Conflicts:
	internal/auto/gui.files.go
2014-12-18 22:32:03 +01:00
Jakob Borg
e7ddc7cf0f ... also in index.html 2014-12-18 22:02:45 +01:00
Jakob Borg
40dfa48756 Rebuild assets 2014-12-18 22:01:38 +01:00
Jakob Borg
579f92cf5f Merge branch 'pr-1115'
* pr-1115:
  Make progress indicators less animated
  put legend above list of needed files
2014-12-18 22:01:27 +01:00
Jakob Borg
4565125da9 Add Cathryne 2014-12-18 21:59:54 +01:00
Jakob Borg
ce13a01e65 Clarify authorship requirements in contribution guidelines 2014-12-18 21:56:52 +01:00
Jakob Borg
618a8682b7 golint style tweaks 2014-12-16 23:33:56 +01:00
Jakob Borg
963077f918 Translation update 2014-12-16 23:20:59 +01:00
Jakob Borg
3704d2d86b Don't exit after creating HTTPS certs (fixes #1103) 2014-12-16 22:55:44 +01:00
Jakob Borg
fc6a029311 gofmt 2014-12-16 22:40:04 +01:00
Jakob Borg
7c7b1e6c2d Merge branch 'update-bootstrap'
* update-bootstrap:
  Fix checkbox breakage in Settings dialog
  Update bootstrap
2014-12-15 09:13:05 +01:00
Jakob Borg
892920039d Fix checkbox breakage in Settings dialog 2014-12-15 09:12:59 +01:00
Jakob Borg
51cdd38c3e Update bootstrap 2014-12-15 08:54:29 +01:00
Jakob Borg
80977bd4c0 Make progress indicators less animated 2014-12-15 00:34:03 +01:00
Cathryne
d8022f94ef put legend above list of needed files 2014-12-13 18:33:20 +01:00
Jakob Borg
1c43587d7d Patch Go for issue #9102 in build env (fixes #1112) 2014-12-13 10:38:05 +01:00
Jakob Borg
b2ed32b118 Command -generate should work on non-existent dir 2014-12-12 21:39:03 +01:00
Jakob Borg
0cc815d816 Need config available for -reset (fixes #1111) 2014-12-12 21:29:57 +01:00
Jakob Borg
d452b7593f Merge branch 'pr-1094'
* pr-1094:
  GUI tweaks for last file synced
  Display last received file and time (fixes #292, fixes #801)
2014-12-12 14:25:12 +01:00
Jakob Borg
5346bdc683 GUI tweaks for last file synced 2014-12-12 14:24:36 +01:00
Jakob Borg
dc5c1e2002 Use Go 1.4 for builds 2014-12-11 12:48:40 +01:00
Jakob Borg
2e48e298a2 Merge pull request #1107 from AudriusButkevicius/cleanup
Remove temporaries during scan (fixes #1092)
2014-12-10 09:19:34 +01:00
Audrius Butkevicius
7a1aaaf5c4 Remove temporaries during scan (fixes #1092) 2014-12-09 23:58:58 +00:00
Audrius Butkevicius
bde92d5cfe Display last received file and time (fixes #292, fixes #801) 2014-12-09 20:24:48 +00:00
Audrius Butkevicius
691f0f4845 Merge pull request #1102 from syncthing/gui-poodle
Protect GUI HTTPS from some attacks
2014-12-09 09:52:21 +00:00
Jakob Borg
fdd458d2fe Protect GUI HTTPS from some attacks
- Disable SSLv3 against POODLE
- Disable RC4 as a weak cipher
- Set the CommonName to the system host name
2014-12-09 10:49:58 +01:00
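For reference, a sketch of how the first two points look in Go's crypto/tls; the cipher suite list here is an assumption for illustration, not the exact set syncthing ships, and the CommonName change happens at certificate generation time so it is not shown.

// Sketch: TLS config with SSLv3 and RC4 ruled out. Illustrative only.
package gui

import "crypto/tls"

func hardenedTLSConfig() *tls.Config {
	return &tls.Config{
		MinVersion: tls.VersionTLS10, // SSLv3 is never offered (POODLE)
		CipherSuites: []uint16{ // RC4 suites deliberately absent
			tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
			tls.TLS_RSA_WITH_AES_256_CBC_SHA,
			tls.TLS_RSA_WITH_AES_128_CBC_SHA,
		},
	}
}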
Jakob Borg
d2c0b8374a Fix integration tests for Windows native 2014-12-08 22:15:10 +01:00
Jakob Borg
c96c78892d Include error in randomness failure panic 2014-12-08 19:40:38 +01:00
Jakob Borg
957643f523 crypto/rand.Reader may not return all entropy immediately 2014-12-08 19:36:08 +01:00
Audrius Butkevicius
749bbec566 Merge pull request #1099 from syncthing/vet-and-lint
Various changes for vet and lint
2014-12-08 17:08:18 +00:00
Jakob Borg
25e363c5fb Style tweaks and some *IDG->IGD in UPnP code 2014-12-08 17:07:55 +01:00
Jakob Borg
febeed3277 config.ConfigWrapper -> config.Wrapper 2014-12-08 16:39:11 +01:00
Jakob Borg
9d07aa006d Various style fixes 2014-12-08 16:36:15 +01:00
Jakob Borg
12d69e25dd Fixes for go vet 2014-12-08 16:19:08 +01:00
Jakob Borg
0c9f1efc75 Run vet and lint during build 2014-12-08 16:12:53 +01:00
Jakob Borg
665b4506e7 Correct check-contrib.sh 2014-12-08 15:44:55 +01:00
Jakob Borg
cb5548ceb8 Fit better in with Jenkins 2014-12-08 15:42:53 +01:00
Jakob Borg
c9492e54f7 Copyright notice 2014-12-08 15:42:53 +01:00
Jakob Borg
12490eafff Script to fail build on missing authors and copyrights 2014-12-08 15:42:53 +01:00
Jakob Borg
6e83d11d5f Translation update 2014-12-08 13:25:27 +01:00
Jakob Borg
9c6aedc91b Merge remote-tracking branch 'origin/pr/1097'
* origin/pr/1097:
  Revert "Cache file descriptors" (fixes #1096)
2014-12-08 13:23:06 +01:00
Audrius Butkevicius
a9339d0627 Revert "Cache file descriptors" (fixes #1096)
This reverts commit 992ad97ad5.

Causes issues on Windows, which uses file locking, meaning we cannot
archive or modify a file while it's open.
2014-12-08 11:56:14 +00:00
Jakob Borg
4d9aa10532 Merge pull request #1093 from syncthing/random
Refactor random string stuff and seeding
2014-12-08 09:40:30 +01:00
Audrius Butkevicius
b00264b594 Copy compression setting while introducing 2014-12-07 22:43:30 +00:00
Jakob Borg
e329c7015e Refactor random string stuff and seeding
Make sure we have a good random seed on the default RNG, that the
predictable RNG is clearly marked as such, that random strings are
actually the length requested, and that they contain a restricted set of
characters only.
2014-12-07 16:47:24 +01:00
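A minimal sketch with the properties described above: seeded from crypto/rand rather than the predictable default RNG, exactly the requested length, and drawn from a restricted character set. Illustrative, not the actual syncthing implementation.

// Sketch: fixed-length random string over a restricted alphabet.
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

const chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ234567"

func randomString(n int) string {
	b := make([]byte, n)
	limit := big.NewInt(int64(len(chars)))
	for i := range b {
		// rand.Int is uniform in [0, limit), avoiding modulo bias.
		idx, err := rand.Int(rand.Reader, limit)
		if err != nil {
			panic(err) // no usable entropy source
		}
		b[i] = chars[idx.Int64()]
	}
	return string(b)
}

func main() {
	fmt.Println(randomString(32)) // e.g. a freshly generated API key
}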
Jakob Borg
1392cfc72d Actually commit and use new random UR ID 2014-12-07 15:49:17 +01:00
Jakob Borg
c6688d8f89 Include ref#, show author nickname in release notes 2014-12-07 12:52:18 +01:00
Jakob Borg
87abea0ba3 Script for generating the change log 2014-12-07 09:07:13 +01:00
Jakob Borg
f9fcb44f3c Translation update 2014-12-07 08:30:54 +01:00
Jakob Borg
996cbbca38 Merge remote-tracking branch 'origin/pr/1091'
* origin/pr/1091:
  Escape plus sign (fixes #1090)
2014-12-07 08:05:21 +01:00
Jakob Borg
581f4b89bd Merge remote-tracking branch 'origin/pr/977'
* origin/pr/977:
  Cache file descriptors
2014-12-07 08:03:34 +01:00
Audrius Butkevicius
88a347dce0 Escape plus sign (fixes #1090) 2014-12-07 00:10:32 +00:00
Audrius Butkevicius
3e7b197a1d Merge pull request #1074 from syncthing/fix-1071
Handle symlinks in versioning (fixes #1071)
2014-12-06 20:19:25 +00:00
Jakob Borg
94ab06e92f Handle symlinks, Staggered versioner (fixes #1071) 2014-12-06 15:20:35 +01:00
Jakob Borg
d38c81fcff Handle broken symlinks, Simple versioner (fixes #1071) 2014-12-06 15:20:35 +01:00
Jakob Borg
3e26fdfb67 Run filetype and symlink integration tests with versioning (ref #1071) 2014-12-06 15:20:35 +01:00
Jakob Borg
e1be73232d Merge remote-tracking branch 'origin/pr/1086'
* origin/pr/1086:
  Enable URL lang parameter for switching languages (fixes #1080)
2014-12-06 14:41:41 +01:00
Jakob Borg
06fd2268d9 Use Go 1.4 'generate' to create XDR codec 2014-12-06 14:23:10 +01:00
Jakob Borg
15251dfae1 Update calmh/xdr 2014-12-06 14:20:49 +01:00
Dennis Wilson
f62812a8dc Enable URL lang parameter for switching languages (fixes #1080) 2014-12-06 14:11:42 +01:00
Jakob Borg
c6041d2590 Skip dotfiles when generating assets 2014-12-06 13:46:02 +01:00
Jakob Borg
c7e779107c Staggered versioning should use current time for filename tag (fixes #994) 2014-12-06 13:26:46 +01:00
Jakob Borg
7cd25c919f Asset rebuild 2014-12-06 12:36:35 +01:00
Jakob Borg
47d67d3985 Merge remote-tracking branch 'origin/pr/1087'
* origin/pr/1087:
  Folder/device panel header with progress indicator
2014-12-06 12:32:43 +01:00
Jakob Borg
1a7921b46c Merge remote-tracking branch 'origin/pr/1083'
* origin/pr/1083:
  Select repos to share with in node editor dialog (fixes #719)
2014-12-06 12:30:46 +01:00
Jakob Borg
43d569741b Merge remote-tracking branch 'origin/pr/1081'
* origin/pr/1081:
  Revisit -no-console option for Windows
2014-12-06 12:17:46 +01:00
Jakob Borg
52c6869eab Merge remote-tracking branch 'origin/pr/1082'
* origin/pr/1082:
  Scrap IsSymlink for native support on Go 1.4
2014-12-06 12:12:53 +01:00
Jakob Borg
6dff9097a2 Use Go 1.4 build environment (currently 1.4rc2) 2014-12-06 12:12:33 +01:00
Ben Schulz
4ff211662a Folder/device panel header with progress indicator 2014-12-05 18:44:38 +01:00
Audrius Butkevicius
05eab51a0d Select repos to share with in node editor dialog (fixes #719) 2014-12-05 00:22:16 +00:00
Audrius Butkevicius
604a4e7dbc Scrap IsSymlink for native support on Go 1.4
Obviously needs Go 1.4 to go back in.

I am still open to doing fix-ups on the rescan interval on Windows, which
would still allow getting rid of all the Windows code.

Frankly, we could just defer creation of links (like we defer deletions of
files) in hopes that the target gets created, and if it doesn't, well, tough
luck, you'll get a file symlink.

To be honest, nobody would even notice this 'issue', as I am sure nobody on
Windows uses symlinks.

But at the same time, this ugly code is hidden away in some creepy file in
its own module far, far away, and the interface it exports is fine-ish, so I
wouldn't mind keeping it as it is.
2014-12-04 23:02:57 +00:00
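With native symlink support in Go 1.4, the check the custom package used to provide reduces to a plain Lstat, roughly as below; this is a sketch of the standard-library route, not the removed shim.

// Sketch: symlink detection via the standard library (Go 1.4+ on Windows).
package main

import (
	"fmt"
	"os"
)

func isSymlink(path string) (bool, error) {
	fi, err := os.Lstat(path) // Lstat does not follow the link itself
	if err != nil {
		return false, err
	}
	return fi.Mode()&os.ModeSymlink != 0, nil
}

func main() {
	ok, err := isSymlink("some/path")
	fmt.Println(ok, err)
}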
Audrius Butkevicius
80dca96ee8 Revisit -no-console option for Windows
The reason for ShowWindow as opposed to your FreeConsole is that if you start
up cmd.exe and do syncthing.exe -no-output, it actually hides the existing
cmd.exe window as opposed to opening a separate window and then hiding it,
which leaves the existing console hanging while syncthing.exe is running.

I tried playing around with compiling as a GUI app, then allocating a console
if the option is not present and redirecting the std streams to the new
console, but that seems ugly, as I'd have to make quite a few calls. But that
does get rid of the initial flash.
2014-12-04 21:59:40 +00:00
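A Windows-only sketch of the ShowWindow approach discussed above: hide the console window attached to the current process. GetConsoleWindow and ShowWindow are the real Win32 entry points; the wiring around them is illustrative.

// Sketch: hide the attached console window (compiles on Windows only).
package main

import "syscall"

const swHide = 0 // Win32 SW_HIDE

func hideConsole() {
	kernel32 := syscall.NewLazyDLL("kernel32.dll")
	user32 := syscall.NewLazyDLL("user32.dll")

	hwnd, _, _ := kernel32.NewProc("GetConsoleWindow").Call()
	if hwnd != 0 {
		// Hides the window itself; started from cmd.exe, this is the
		// existing cmd.exe window, which is the behaviour described
		// in the commit message.
		user32.NewProc("ShowWindow").Call(hwnd, uintptr(swHide))
	}
}

func main() { hideConsole() }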
Jakob Borg
7f97037190 Revert "Merge pull request #1078 from syncthing/go1.4"
This reverts commit b658afd857, reversing
changes made to 591c5dabf4.
2014-12-04 22:24:49 +01:00
Jakob Borg
b658afd857 Merge pull request #1078 from syncthing/go1.4
Use Go 1.4 build env
2014-12-04 20:45:48 +01:00
Audrius Butkevicius
992ad97ad5 Cache file descriptors 2014-12-04 16:18:47 +00:00
Jakob Borg
5af6cbae2c Verify Windows support, report appropriate errors when unsupported 2014-12-04 12:10:25 +01:00
Jakob Borg
2abe792f36 Use same order of parameters as os.Symlink 2014-12-04 11:53:55 +01:00
Jakob Borg
1ff9bb8fdc Remove Windows specific implementation 2014-12-04 06:59:30 +01:00
Jakob Borg
5cb1039daf Use Go 1.4 build environment (currently 1.4rc2) 2014-12-04 06:59:13 +01:00
Jakob Borg
591c5dabf4 Merge pull request #1077 from AudriusButkevicius/round
Avoid rounding errors (fixes #1068)
2014-12-04 05:52:29 +01:00
Audrius Butkevicius
770fff287e Avoid rounding errors (fixes #1068) 2014-12-03 23:44:39 +00:00
Jakob Borg
cea7a179ae Utility to print all info we know about a path 2014-12-03 11:32:10 +01:00
Audrius Butkevicius
d80c40cfbf Merge pull request #1072 from syncthing/rewrite-ignore-cache
Rewrite ignore cache
2014-12-03 09:50:05 +00:00
Jakob Borg
12e83374e9 Verify that a symlink can be removed 2014-12-03 09:05:01 +01:00
Jakob Borg
98344d2e5e Rewrite ignores to fix data race, use fewer maps 2014-12-03 08:39:59 +01:00
Jakob Borg
99dc1eec50 Map is a reference type, does not need * here 2014-12-03 08:39:59 +01:00
Jakob Borg
2a886576a6 Fix announce timers on Solaris (and others, given the right timing) (...)
In the successful case, we start the timer with NewTimer(0), then do a
bunch of stuff during which time it can fire, then reset it with
Reset(0). The result is that two timer firings are queued when we enter
the select loop, so we do two announcements back to back and fail the
tests.
2014-12-03 08:36:45 +01:00
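The standard remedy for the pitfall described above is to drain the timer's channel before resetting it, as in this small sketch (illustrative, not the actual discovery code):

// Sketch: safe timer reset, avoiding a queued stale tick.
package main

import (
	"fmt"
	"time"
)

func main() {
	t := time.NewTimer(0)
	time.Sleep(10 * time.Millisecond) // timer fires; a tick is now queued

	// Stop the timer and drain any pending tick before resetting,
	// so the select loop sees exactly one firing, not two.
	if !t.Stop() {
		<-t.C
	}
	t.Reset(0)

	<-t.C
	fmt.Println("single announcement, as intended")
}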
Jakob Borg
919d005550 Print detected data races to stdout instead of hiding in a file 2014-12-03 07:47:40 +01:00
Jakob Borg
97abdaca5a Merge pull request #1070 from AudriusButkevicius/staggered
Use unique versions in staggered versioner (fixes #1063)
2014-12-02 23:27:01 +01:00
Jakob Borg
9cc8b7c858 Simple smoke test for parallel scans 2014-12-02 22:13:08 +01:00
Jakob Borg
0726472b91 Update test configs to v7 2014-12-02 22:05:15 +01:00
Audrius Butkevicius
3cbe92d797 Use unique versions in staggered versioner (fixes #1063) 2014-12-02 19:04:12 +00:00
Jakob Borg
72a278c9ed Merge pull request #1065 from syncthing/coc
Add Code of Conduct
2014-12-02 16:22:13 +01:00
Jakob Borg
e567c8adce Merge pull request #1064 from syncthing/contributors
Clarify/formalize contribution policy and commit access
2014-12-02 16:21:36 +01:00
Jakob Borg
dde8045109 Add Code of Conduct 2014-12-02 15:57:31 +01:00
Jakob Borg
c922c4c383 Clarify/formalize contribution policy and commit access 2014-12-02 15:55:45 +01:00
Audrius Butkevicius
bc8907e90d Check if announcement data is available 2014-12-01 19:53:13 +00:00
Jakob Borg
a8ba7786ae Reinstate 'Shared With' until a better alternative emerges (ref #1054) 2014-12-01 20:50:27 +01:00
Jakob Borg
c734e48ad0 Merge pull request #1052 from AudriusButkevicius/disco4
Change to URL based announce server addresses (fixes #943)
2014-12-01 17:53:48 +01:00
Audrius Butkevicius
d30d0b29a9 Fix CSS 2014-12-01 10:30:38 +00:00
Audrius Butkevicius
2912defb97 Add tests for new discovery 2014-12-01 10:30:25 +00:00
Audrius Butkevicius
69f8ac6b56 Change to URL based announce server addresses (fixes #943) 2014-12-01 10:30:25 +00:00
Jakob Borg
e7441ff6e8 DisableSymlinks -> !SymlinksEnabled 2014-12-01 11:27:07 +01:00
Jakob Borg
8a34158fa4 Merge pull request #1053 from AudriusButkevicius/symdis
Add option to disable symlinks (fixes #1017)
2014-12-01 11:22:04 +01:00
Jakob Borg
8d2a6d96f2 Shorter Global Discovery label 2014-12-01 11:14:11 +01:00
Jakob Borg
bb50b677c7 Merge pull request #1037 from snnd/locale-service
Added Locale Service. Minor Controller Refactoring.
2014-12-01 10:38:44 +01:00
Jakob Borg
0fde4b3b2e Use runtime info to determine ARM version for upgrade (fixes #1051) 2014-12-01 10:24:13 +01:00
Dennis Wilson
ee9c109f07 add locale service to GUI. minor cleanup of controller. 2014-12-01 10:00:03 +01:00
Jakob Borg
1219423091 Revert "Figure out GOARM without being told (ref #1051)"
This reverts commit 2d7b0cf94d.

GOARM is not actually embedded and printed by "go env"
2014-12-01 09:39:57 +01:00
Jakob Borg
cf00ab854f Translation update (fixes #1054) 2014-12-01 09:13:58 +01:00
Jakob Borg
c417dcb7e2 Repair Rescan button, cleanup CSS (fixes #1054) 2014-12-01 09:11:16 +01:00
Audrius Butkevicius
7ad711f554 Add option to disable symlinks (fixes #1017) 2014-11-30 22:10:32 +00:00
Jakob Borg
2d7b0cf94d Figure out GOARM without being told (ref #1051) 2014-11-30 21:46:00 +01:00
Jakob Borg
d669c07e8a Increase allowed test runtimes (fixes #1049) 2014-11-30 21:21:37 +01:00
Jakob Borg
8bd52946b4 Merge pull request #1048 from asdil12/goarm
Directly accept GOARM env var for ARM version
2014-11-30 20:57:49 +01:00
Jakob Borg
27e81637be Add asdil12 2014-11-30 20:57:34 +01:00
Jakob Borg
5c67e27a30 Use CSS column layouts in About box 2014-11-30 20:49:49 +01:00
Dominik Heidler
59af9809fe Directly accept GOARM env var for ARM version
As GOARCH defaults to 'arm' on arm systems this allows packagers to
specify the arm version by setting the GOARM env var to 5, 6 or 7.
2014-11-30 17:08:43 +01:00
Jakob Borg
a564510c49 Homogenize folder and device state to 'Up to Date' (fixes #1042) 2014-11-30 13:45:08 +01:00
Jakob Borg
285b614927 Translation update 2014-11-30 13:38:05 +01:00
Audrius Butkevicius
fd2d2c035e Add support for multiple announce servers (fixes #677)
Somebody owes me a beer.
2014-11-30 13:25:06 +01:00
Jakob Borg
78981862be Silence verbose docker build output 2014-11-29 08:30:41 +01:00
Jakob Borg
7f1253ff83 Revert "golang.org/x/tools in Dockerfile"
This reverts commit 5dd5602229.
2014-11-30 10:42:31 +01:00
Jakob Borg
e0265aed05 Increase read timeout on HTTP server, try to not run out of sockets in stress test 2014-11-30 10:38:39 +01:00
Jakob Borg
9d36d88a65 Build std for race in Docker image 2014-11-30 00:30:23 +01:00
Jakob Borg
5dd5602229 golang.org/x/tools in Dockerfile 2014-11-30 00:18:24 +01:00
Jakob Borg
126c4e9a06 Dependency update, new golang.org/x package names 2014-11-30 00:17:00 +01:00
Jakob Borg
5dbaf6ceb0 Use short integration tests by default 2014-11-30 00:07:36 +01:00
Jakob Borg
90de5659ea Data race: sharedPullerState WriteAt+Close 2014-11-29 23:51:53 +01:00
Jakob Borg
367e50edab Fixup integration tests for race detector 2014-11-29 23:41:06 +01:00
Jakob Borg
42b8dafafe Data race: can't access sharedPullerState.closed from the outside 2014-11-29 23:18:56 +01:00
Jakob Borg
577aaf8ad6 Data race: Discoverer.registryLock must cover the contents of registry as well 2014-11-29 23:04:25 +01:00
Jakob Borg
07cdf0364c Data race: ProgressEmitter (debug output only) 2014-11-29 22:51:13 +01:00
Jakob Borg
7f829f0159 Data race: broken locking on model.folderIgnores 2014-11-29 22:38:08 +01:00
Jakob Borg
a918aa97d9 Data race: deviceActivity methods with value receiver :( 2014-11-29 22:38:08 +01:00
Jakob Borg
4fdecc9b85 Run integration tests with -race (fixes #1043) 2014-11-29 22:38:04 +01:00
Jakob Borg
7af25c785d Don't send unnecessary SNI in TLS handshake 2014-11-29 20:58:24 +01:00
Jakob Borg
4de39b205d Only color status text, not panel headings (fixes #1039) 2014-11-29 13:08:00 +01:00
Jakob Borg
2748a2e97f Mark unused devices as 'Unused' and in warning color, show folders per device (fixes #962) 2014-11-29 09:43:05 +01:00
Jakob Borg
2926bbfe15 Mark unshared folders as 'Unshared' and in warning color (fixes #962) 2014-11-29 09:42:51 +01:00
Audrius Butkevicius
254c63763a Remove top margin from checkboxes (fixes #1036) 2014-11-28 15:17:02 +00:00
Jakob Borg
2de834f1f4 Place list of devices to share with in columns, in supported browsers 2014-11-27 21:34:24 +01:00
Jakob Borg
7273eab80e Clean up device panel (...) (ref #964)
- Remove "Synchronization"
- Hide "Compression" when default (on)
- Hide "Introducer" when default (off)
2014-11-27 20:46:36 +01:00
Jakob Borg
13e79c777a Clean up folder panel (...) (fixes #964)
- Remove ID
- Hide "Out of sync" when in sync
- Hide "Folder master" when default (not master)
- Hide "Ignore permissions" when default (not ignored)
- Hide "Rescan interval" when default (60 seconds)
2014-11-27 20:43:00 +01:00
Jakob Borg
8aa7d4b463 Lower the bar for when to stop restarting (fixes #1004) 2014-11-27 20:34:35 +01:00
Jakob Borg
5251f1c9db Use a separate, unique ID for usage reporting (fixes #1000) 2014-11-27 10:00:07 +01:00
Jakob Borg
82e923dfc8 Add kozec 2014-11-26 23:25:52 +01:00
Jakob Borg
decf16b92c Merge pull request #1029 from kozec/master
Add STNOUPGRADE environment variable to prevent autoupgrades
2014-11-26 23:25:45 +01:00
kozec
b84d960a81 Added STNOUPGRADE environment variable; prevents autoupgrades regardless of configuration. 2014-11-26 22:17:01 +01:00
Jakob Borg
34cb305755 Report all rates in bytes per second (fixes #934) 2014-11-26 17:30:52 +01:00
Jakob Borg
ed85bfa915 Don't perform external discovery lookups until local cache has had time to warm (fixes #666) 2014-11-26 17:23:15 +01:00
Jakob Borg
06ef33ff5e Translation strings for new functionality 2014-11-26 13:47:17 +01:00
Jakob Borg
57f121178c Update translate/transifex for new GUI paths 2014-11-26 13:46:34 +01:00
Dennis Wilson
3b88ee623b GUI Rework: reorganized folders and split app.js 2014-11-26 13:43:38 +01:00
Jakob Borg
8588625937 Add snnd 2014-11-26 13:43:26 +01:00
Jakob Borg
3417839726 Merge pull request #1024 from AudriusButkevicius/regexp
Fix versioner regexp's (fixes #1023)
2014-11-26 13:22:57 +01:00
Audrius Butkevicius
c1069052ae Fix versioner regexp's (fixes #1023) 2014-11-25 22:32:18 +00:00
Audrius Butkevicius
ea17542e4b Change progress emitter
1. Do not use cached value for BytesCompleted
2. Refactor JS a bit
3. Allow disabling progress emitter
2014-11-25 22:07:18 +00:00
Audrius Butkevicius
c7d779fe88 Fix tests on Windows 2014-11-25 21:27:10 +00:00
Audrius Butkevicius
a70f3f12c5 Merge pull request #999 from piobpl/master
Showing detailed sync progress (fixes #476)
2014-11-25 20:55:12 +00:00
piobpl
90a31589bb Showing detailed sync progress (fixes #476)
based on commit by Audrius Butkevicius <audrius.butkevicius@gmail.com>
2014-11-25 20:18:35 +01:00
Jakob Borg
b48d9a3a82 Don't panic when lacking symlink support on XP (fixes #1016) 2014-11-24 23:32:11 +01:00
Jakob Borg
0255311bbe Note about IRC channel 2014-11-24 23:07:30 +01:00
Audrius Butkevicius
bd91519df9 Add aria label on cog (closes #1020) 2014-11-24 21:14:14 +00:00
Jakob Borg
58fe8b0cf1 Add example for Solaris SMF running 2014-11-24 13:59:59 +01:00
Jakob Borg
064aa64f20 Point to etc dir in README 2014-11-24 13:49:18 +01:00
Jakob Borg
d9f79853fb Include etc dir in Unix builds 2014-11-24 13:49:15 +01:00
Jakob Borg
2e68ee5c8b Add example for Mac OS X background running 2014-11-24 13:49:15 +01:00
Jakob Borg
a9544ca890 Add example for runit service 2014-11-24 13:48:42 +01:00
Jakob Borg
9a549a853b Update goleveldb 2014-11-24 11:57:31 +01:00
Jakob Borg
2dad769a00 Only run Go based integration tests in Docker 2014-11-24 11:49:49 +01:00
Jakob Borg
0ceb14dbf6 Merge pull request #1013 from syncthing/timestamp-before-ext
Use file~timestamp.ext for version (fixes #1010)
2014-11-24 11:44:56 +01:00
Jakob Borg
bab1e26d9b Use source data for genfiles that is guaranteed to exist 2014-11-24 11:37:00 +01:00
Jakob Borg
9a91cc232c Use file~timestamp.ext for version (fixes #1010) 2014-11-24 11:02:14 +01:00
Jakob Borg
5a46cf1d48 Be a little more generous with HTTP timeouts 2014-11-24 10:16:47 +01:00
Jakob Borg
f1e241940b Translation update 2014-11-24 10:10:01 +01:00
Jakob Borg
47b344ba12 Merge pull request #1006 from AudriusButkevicius/defaults
Populate correct defaults
2014-11-24 08:18:31 +01:00
Jakob Borg
afbb06a72f Tests may dirty workspace 2014-11-23 23:10:08 +01:00
Jakob Borg
e336cd463f Tests may take longer than 60 seconds to complete 2014-11-23 23:10:07 +01:00
Jakob Borg
3a8315971e Run integration tests under Docker 2014-11-23 22:31:07 +01:00
Jakob Borg
4ccfa98771 Correct command in README 2014-11-23 22:01:07 +01:00
Jakob Borg
1db120bf06 Improve docker image and build 2014-11-23 21:46:18 +01:00
Audrius Butkevicius
262cf63956 Populate correct defaults 2014-11-23 18:45:45 +00:00
Jakob Borg
fe2ae4c6c3 Merge pull request #997 from syncthing/lig
Minor fixes
2014-11-23 11:35:19 +01:00
Jakob Borg
16d9944dbb Merge pull request #1002 from AudriusButkevicius/routine-cfg
Make copiers, pullers and finishers configurable
2014-11-23 11:29:58 +01:00
Jakob Borg
e9956cc71e Merge pull request #1003 from AudriusButkevicius/needtrim
Use custom structure for /need calls (fixes #1001)
2014-11-23 11:28:38 +01:00
Audrius Butkevicius
59a85c1d75 Use custom structure for /need calls (fixes #1001)
Also, remove trimming by number of blocks as this no longer affects the size
of the response.
2014-11-23 00:52:48 +00:00
Audrius Butkevicius
4427149a38 Make copiers, pullers and finishers configurable
Complements #999
2014-11-23 00:02:12 +00:00
Audrius Butkevicius
20dee618ea Populate ignores upon adding a folder (fixes #996) 2014-11-22 02:22:09 +00:00
Audrius Butkevicius
37ebbb53be Replace directories/links with files (fixes #580) 2014-11-22 02:22:03 +00:00
Jakob Borg
ba019efaf1 Use a docker container for full builds 2014-11-21 06:48:24 +01:00
Jakob Borg
ce948fc512 Don't leave read only dir around, fails clean 2014-11-20 23:34:14 +01:00
Jakob Borg
2cd9e7fb55 Merge pull request #953 from syncthing/symlink
Symlink support
2014-11-20 16:34:12 +01:00
Jakob Borg
1e2d151684 Copyright notice update 2014-11-20 16:33:16 +01:00
Jakob Borg
ce5651f5fa Integration tests for symlinks 2014-11-20 16:32:01 +01:00
Audrius Butkevicius
20ba0bf4ed Update PROTOCOL.md 2014-11-20 16:32:01 +01:00
Audrius Butkevicius
c325ffd0f8 Add symlink support (fixes #873) 2014-11-20 16:32:00 +01:00
Audrius Butkevicius
6e88d9688b Implement symlinks package 2014-11-20 16:32:00 +01:00
Audrius Butkevicius
bf898f10fb Add symlink support at the protocol level 2014-11-20 16:32:00 +01:00
Audrius Butkevicius
c891999e1d Move filename conversion into osutil 2014-11-20 16:32:00 +01:00
Audrius Butkevicius
938e287501 Code smell 2014-11-20 16:32:00 +01:00
Jakob Borg
edcfc32b1a Add integration test (disabled) for file->dir and dir->file replacement (ref #580) 2014-11-20 16:23:58 +01:00
Jakob Borg
904b211d98 Merge pull request #990 from bigbear2nd/master
Add directory separator to autocomplete. Fixes #984
2014-11-20 16:09:57 +01:00
bigbear2nd
af08567f24 Add directory separator to autocomplete. Fixes #984 2014-11-20 00:26:06 +09:00
Jakob Borg
75ef658962 Correct file mode bits 2014-11-19 07:39:01 +04:00
Jakob Borg
fe2dd79838 Clean up global discovery timer handling 2014-11-19 01:03:43 +04:00
Jakob Borg
bbe7e6525d Finalize s/CONTRIBUTORS/AUTHORS/ 2014-11-18 18:13:19 +04:00
Jakob Borg
ef20df719c Remove redundant style section 2014-11-18 17:18:10 +04:00
271 changed files with 19592 additions and 15209 deletions

.gitignore vendored (5 lines changed)

@@ -4,9 +4,14 @@ syncthing.exe
*.zip
*.asc
*.sublime*
.idea/
.jshintrc
coverage.out
files/pidx
bin
perfstats*.csv
coverage.xml
!gui/scripts/syncthing
.DS_Store
syncthing.md5
syncthing.exe.md5

.mailmap (symbolic link, 1 line changed)

@@ -0,0 +1 @@
NICKS


@@ -5,11 +5,15 @@ Alexander Graf <register-github@alex-graf.de>
Andrew Dunham <andrew@du.nham.ca>
Audrius Butkevicius <audrius.butkevicius@gmail.com>
Arthur Axel fREW Schmidt <frew@afoolishmanifesto.com> <frioux@gmail.com>
Ben Schulz <ueomkail@gmail.com> <uok@users.noreply.github.com>
Ben Sidhom <bsidhom@gmail.com>
Brandon Philips <brandon@ifup.org>
Caleb Callaway <enlightened.despot@gmail.com>
Cathryne Linenweaver <cathryne.linenweaver@gmail.com> <Cathryne@users.noreply.github.com>
Chris Joel <chris@scriptolo.gy>
Daniel Martí <mvdan@mvdan.cc>
Dennis Wilson <dw@risu.io>
Dominik Heidler <dominik@heidler.eu>
Emil Hessman <emil@hessman.se>
Felix Ableitner <me@nutomic.com>
Felix Unterpaintner <bigbear2nd@gmail.com>
@@ -23,7 +27,9 @@ Marcin Dziadus <dziadus.marcin@gmail.com>
Michael Tilli <pyfisch@gmail.com>
Philippe Schommers <philippe@schommers.be>
Phill Luby <phill.luby@newredo.com>
Piotr Bejda <piotrb10@gmail.com>
Ryan Sullivan <kayoticsully@gmail.com>
Tomas Cerveny <kozec@kozec.com>
Tully Robinson <tully@tojr.org>
Veeti Paananen <veeti.paananen@rojekti.fi>
Vil Brekin <vilbrekin@gmail.com>

CONDUCT.md (new file, 92 lines)

@@ -0,0 +1,92 @@
## Conduct
* We are committed to providing a friendly, safe and welcoming
environment for all, regardless of gender, sexual orientation,
disability, ethnicity, religion, or similar personal characteristic.
* On IRC, please avoid using overtly sexual nicknames or other nicknames
that might detract from a friendly, safe and welcoming environment for
all.
* Please be kind and courteous. There's no need to be mean or rude.
* Respect that people have differences of opinion and that every design
or implementation choice carries a trade-off and numerous costs. There
is seldom a right answer.
* Please keep unstructured critique to a minimum. If you have solid
ideas you want to experiment with, make a fork and see how it works.
* We will exclude you from interaction if you insult, demean or harass
anyone. That is not welcome behaviour. We interpret the term
"harassment" as including the definition in the <a
href="http://citizencodeofconduct.org/">Citizen Code of Conduct</a>;
if you have any lack of clarity about what might be included in that
concept, please read their definition. In particular, we don't
tolerate behavior that excludes people in socially marginalized
groups.
* Private harassment is also unacceptable. No matter who you are, if you
feel you have been or are being harassed or made uncomfortable by a
community member, please contact one of the channel ops or any of the
Syncthing core team immediately. Whether you're a regular contributor
or a newcomer, we care about making this community a safe place for
you and we've got your back.
* Likewise any spamming, trolling, flaming, baiting or other
attention-stealing behaviour is not welcome.
## Moderation
These are the policies for upholding our community's standards of
conduct in our communication channels, most notably in Syncthing-related
IRC channels and on the web forum.
1. Remarks that violate the Syncthing standards of conduct, including
hateful, hurtful, oppressive, or exclusionary remarks, are not
allowed. (Cursing is allowed, but never targeting another user, and
never in a hateful manner.)
2. Remarks that moderators find inappropriate, whether listed in the
code of conduct or not, are also not allowed.
3. Moderators will first respond to such remarks with a warning.
4. If the warning is unheeded, the user will be "kicked," i.e., kicked
out of the communication channel to cool off.
5. If the user comes back and continues to make trouble, they will be
banned, i.e., indefinitely excluded.
6. Moderators may choose at their discretion to un-ban the user if it
was a first offense and they offer the offended party a genuine
apology.
7. If a moderator bans someone and you think it was unjustified, please
take it up with that moderator, or with a different moderator, **in
private**. Complaints about bans in-channel are not allowed.
8. Moderators are held to a higher standard than other community
members. If a moderator creates an inappropriate situation, they
should expect less leeway than others.
In the Syncthing community we strive to go the extra step to look out
for each other. Don't just aim to be technically unimpeachable, try to
be your best self. In particular, avoid flirting with offensive or
sensitive issues, particularly if they're off-topic; this all too
often leads to unnecessary fights, hurt feelings, and damaged trust;
worse, it can drive people away from the community entirely.
And if someone takes issue with something you said or did, resist the
urge to be defensive. Just stop doing what it was they complained about
and apologize. Even if you feel you were misinterpreted or unfairly
accused, chances are good there was something you could've communicated
better — remember that it's your responsibility to make your fellow
community members comfortable. Everyone wants to get along and we are
all here first and foremost because we want to talk about cool
technology. You will find that people will be eager to assume good
intent and forgive as long as you earn their trust.
*Adapted from the [Rust Code of Conduct](https://github.com/rust-lang/rust/wiki/Note-development-policy#conduct)*
*Adapted from the [Node.js Policy on Trolling](http://blog.izs.me/post/30036893703/policy-on-trolling)*


@@ -37,6 +37,36 @@ where to start, any open issues are fair game! Be prepared for a
all in the name of quality. :) Following the points below will make this
a smoother process.
Individuals making significant and valuable contributions are given
commit-access to the project. If you make a significant contribution and
are not considered for commit-access, please contact any of the
Syncthing core team members.
All nontrivial contributions should go through the pull request
mechanism for internal review. Determining what is "nontrivial" is left
at the discretion of the contributor.
### Authorship
All code authors are listed in the AUTHORS file. Commits must be made
with the same name and email as listed in the AUTHORS file. To
accomplish this, ensure that your git configuration is set correctly
prior to making your first commit;
$ git config --global user.name "Jane Doe"
$ git config --global user.email janedoe@example.com
You must be reachable on the given email address. If you do not wish to
use your real name for whatever reason, using a nickname or pseudonym is
perfectly acceptable.
### Core Team
The Syncthing core team currently consists of the following members;
- Jakob Borg (@calmh)
- Audrius Butkevicius (@AudriusButkevicius)
## Coding Style
- Follow the conventions laid out in [Effective Go](https://golang.org/doc/effective_go.html)
@@ -59,7 +89,7 @@ a smoother process.
feature should probably be a single commit based on the current
`master` branch. You may be asked to "rebase" or "squash" your pull
request to make sure this is the case, especially if there have been
amendments during review.
amendments during review.
## Licensing
@@ -70,8 +100,8 @@ International License. You retain the copyright to code you have
written.
When accepting your first contribution, the maintainer of the project
will ensure that you are added to the CONTRIBUTORS file. You are welcome
to add yourself as a separate commit in your first pull request.
will ensure that you are added to the AUTHORS file. You are welcome to
add yourself as a separate commit in your first pull request.
## Building
@@ -101,12 +131,6 @@ signed by GPG key BCE524C7.
Yes please!
## Style
- `go fmt`
- Unix line breaks
## Documentation
[Over here!](http://discourse.syncthing.net/category/documentation)

Godeps/Godeps.json (generated, 42 lines changed)

@@ -1,30 +1,10 @@
{
"ImportPath": "github.com/syncthing/syncthing",
"GoVersion": "go1.3.3",
"GoVersion": "go1.4",
"Packages": [
"./cmd/..."
],
"Deps": [
{
"ImportPath": "code.google.com/p/go.crypto/bcrypt",
"Comment": "null-216",
"Rev": "41cd4647fccc72b0b79ef1bd1fe6735e718257cd"
},
{
"ImportPath": "code.google.com/p/go.crypto/blowfish",
"Comment": "null-216",
"Rev": "41cd4647fccc72b0b79ef1bd1fe6735e718257cd"
},
{
"ImportPath": "code.google.com/p/go.text/transform",
"Comment": "null-90",
"Rev": "d65bffbc88a153d23a6d2a864531e6e7c2cde59b"
},
{
"ImportPath": "code.google.com/p/go.text/unicode/norm",
"Comment": "null-90",
"Rev": "d65bffbc88a153d23a6d2a864531e6e7c2cde59b"
},
{
"ImportPath": "github.com/AudriusButkevicius/lfu-go",
"Rev": "164bcecceb92fd6037f4d18a8d97b495ec6ef669"
@@ -43,7 +23,7 @@
},
{
"ImportPath": "github.com/calmh/xdr",
"Rev": "ec3d404f43731551258977b38dd72cf557d00398"
"Rev": "45c46b7db7ff83b8b9ee09bbd95f36ab50043ece"
},
{
"ImportPath": "github.com/juju/ratelimit",
@@ -51,7 +31,7 @@
},
{
"ImportPath": "github.com/syndtr/goleveldb/leveldb",
"Rev": "d8d1d2a5cc2d34c950dffa2f554525415d59f737"
"Rev": "63c9e642efad852f49e20a6f90194cae112fd2ac"
},
{
"ImportPath": "github.com/syndtr/gosnappy/snappy",
@@ -68,6 +48,22 @@
{
"ImportPath": "github.com/vitrun/qart/qr",
"Rev": "ccb109cf25f0cd24474da73b9fee4e7a3e8a8ce0"
},
{
"ImportPath": "golang.org/x/crypto/bcrypt",
"Rev": "731db29863ea7213d9556d0170afb38987f401d4"
},
{
"ImportPath": "golang.org/x/crypto/blowfish",
"Rev": "731db29863ea7213d9556d0170afb38987f401d4"
},
{
"ImportPath": "golang.org/x/text/transform",
"Rev": "985ee5acfaf1ff6712c7c99438752f8e09416ccb"
},
{
"ImportPath": "golang.org/x/text/unicode/norm",
"Rev": "985ee5acfaf1ff6712c7c99438752f8e09416ccb"
}
]
}


@@ -1,45 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
// Generate test data for trie code.
package main
import (
"fmt"
)
func main() {
printTestTables()
}
// We take the smallest, largest and an arbitrary value for each
// of the UTF-8 sequence lengths.
var testRunes = []rune{
0x01, 0x0C, 0x7F, // 1-byte sequences
0x80, 0x100, 0x7FF, // 2-byte sequences
0x800, 0x999, 0xFFFF, // 3-byte sequences
0x10000, 0x10101, 0x10FFFF, // 4-byte sequences
0x200, 0x201, 0x202, 0x210, 0x215, // five entries in one sparse block
}
const fileHeader = `// Generated by running
// maketesttables
// DO NOT EDIT
package norm
`
func printTestTables() {
fmt.Print(fileHeader)
fmt.Printf("var testRunes = %#v\n\n", testRunes)
t := newNode()
for i, r := range testRunes {
t.insert(r, uint16(i))
}
t.printTables("testdata")
}

File diff suppressed because it is too large.


@@ -1,232 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package norm
type valueRange struct {
value uint16 // header: value:stride
lo, hi byte // header: lo:n
}
type trie struct {
index []uint8
values []uint16
sparse []valueRange
sparseOffset []uint16
cutoff uint8 // indices >= cutoff are sparse
}
// lookupValue determines the type of block n and looks up the value for b.
// For n < t.cutoff, the block is a simple lookup table. Otherwise, the block
// is a list of ranges with an accompanying value. Given a matching range r,
// the value for b is by r.value + (b - r.lo) * stride.
func (t *trie) lookupValue(n uint8, b byte) uint16 {
if n < t.cutoff {
return t.values[uint16(n)<<6+uint16(b)]
}
offset := t.sparseOffset[n-t.cutoff]
header := t.sparse[offset]
lo := offset + 1
hi := lo + uint16(header.lo)
for lo < hi {
m := lo + (hi-lo)/2
r := t.sparse[m]
if r.lo <= b && b <= r.hi {
return r.value + uint16(b-r.lo)*header.value
}
if b < r.lo {
hi = m
} else {
lo = m + 1
}
}
return 0
}
const (
t1 = 0x00 // 0000 0000
tx = 0x80 // 1000 0000
t2 = 0xC0 // 1100 0000
t3 = 0xE0 // 1110 0000
t4 = 0xF0 // 1111 0000
t5 = 0xF8 // 1111 1000
t6 = 0xFC // 1111 1100
te = 0xFE // 1111 1110
)
// lookup returns the trie value for the first UTF-8 encoding in s and
// the width in bytes of this encoding. The size will be 0 if s does not
// hold enough bytes to complete the encoding. len(s) must be greater than 0.
func (t *trie) lookup(s []byte) (v uint16, sz int) {
c0 := s[0]
switch {
case c0 < tx:
return t.values[c0], 1
case c0 < t2:
return 0, 1
case c0 < t3:
if len(s) < 2 {
return 0, 0
}
i := t.index[c0]
c1 := s[1]
if c1 < tx || t2 <= c1 {
return 0, 1
}
return t.lookupValue(i, c1), 2
case c0 < t4:
if len(s) < 3 {
return 0, 0
}
i := t.index[c0]
c1 := s[1]
if c1 < tx || t2 <= c1 {
return 0, 1
}
o := uint16(i)<<6 + uint16(c1)
i = t.index[o]
c2 := s[2]
if c2 < tx || t2 <= c2 {
return 0, 2
}
return t.lookupValue(i, c2), 3
case c0 < t5:
if len(s) < 4 {
return 0, 0
}
i := t.index[c0]
c1 := s[1]
if c1 < tx || t2 <= c1 {
return 0, 1
}
o := uint16(i)<<6 + uint16(c1)
i = t.index[o]
c2 := s[2]
if c2 < tx || t2 <= c2 {
return 0, 2
}
o = uint16(i)<<6 + uint16(c2)
i = t.index[o]
c3 := s[3]
if c3 < tx || t2 <= c3 {
return 0, 3
}
return t.lookupValue(i, c3), 4
}
// Illegal rune
return 0, 1
}
// lookupString returns the trie value for the first UTF-8 encoding in s and
// the width in bytes of this encoding. The size will be 0 if s does not
// hold enough bytes to complete the encoding. len(s) must be greater than 0.
func (t *trie) lookupString(s string) (v uint16, sz int) {
c0 := s[0]
switch {
case c0 < tx:
return t.values[c0], 1
case c0 < t2:
return 0, 1
case c0 < t3:
if len(s) < 2 {
return 0, 0
}
i := t.index[c0]
c1 := s[1]
if c1 < tx || t2 <= c1 {
return 0, 1
}
return t.lookupValue(i, c1), 2
case c0 < t4:
if len(s) < 3 {
return 0, 0
}
i := t.index[c0]
c1 := s[1]
if c1 < tx || t2 <= c1 {
return 0, 1
}
o := uint16(i)<<6 + uint16(c1)
i = t.index[o]
c2 := s[2]
if c2 < tx || t2 <= c2 {
return 0, 2
}
return t.lookupValue(i, c2), 3
case c0 < t5:
if len(s) < 4 {
return 0, 0
}
i := t.index[c0]
c1 := s[1]
if c1 < tx || t2 <= c1 {
return 0, 1
}
o := uint16(i)<<6 + uint16(c1)
i = t.index[o]
c2 := s[2]
if c2 < tx || t2 <= c2 {
return 0, 2
}
o = uint16(i)<<6 + uint16(c2)
i = t.index[o]
c3 := s[3]
if c3 < tx || t2 <= c3 {
return 0, 3
}
return t.lookupValue(i, c3), 4
}
// Illegal rune
return 0, 1
}
// lookupUnsafe returns the trie value for the first UTF-8 encoding in s.
// s must hold a full encoding.
func (t *trie) lookupUnsafe(s []byte) uint16 {
c0 := s[0]
if c0 < tx {
return t.values[c0]
}
if c0 < t2 {
return 0
}
i := t.index[c0]
if c0 < t3 {
return t.lookupValue(i, s[1])
}
i = t.index[uint16(i)<<6+uint16(s[1])]
if c0 < t4 {
return t.lookupValue(i, s[2])
}
i = t.index[uint16(i)<<6+uint16(s[2])]
if c0 < t5 {
return t.lookupValue(i, s[3])
}
return 0
}
// lookupStringUnsafe returns the trie value for the first UTF-8 encoding in s.
// s must hold a full encoding.
func (t *trie) lookupStringUnsafe(s string) uint16 {
c0 := s[0]
if c0 < tx {
return t.values[c0]
}
if c0 < t2 {
return 0
}
i := t.index[c0]
if c0 < t3 {
return t.lookupValue(i, s[1])
}
i = t.index[uint16(i)<<6+uint16(s[1])]
if c0 < t4 {
return t.lookupValue(i, s[2])
}
i = t.index[uint16(i)<<6+uint16(s[2])]
if c0 < t5 {
return t.lookupValue(i, s[3])
}
return 0
}


@@ -1,152 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package norm
import (
"testing"
"unicode/utf8"
)
// Test data is located in triedata_test.go; generated by maketesttables.
var testdata = testdataTrie
type rangeTest struct {
block uint8
lookup byte
result uint16
table []valueRange
offsets []uint16
}
var range1Off = []uint16{0, 2}
var range1 = []valueRange{
{0, 1, 0},
{1, 0x80, 0x80},
{0, 2, 0},
{1, 0x80, 0x80},
{9, 0xff, 0xff},
}
var rangeTests = []rangeTest{
{10, 0x80, 1, range1, range1Off},
{10, 0x00, 0, range1, range1Off},
{11, 0x80, 1, range1, range1Off},
{11, 0xff, 9, range1, range1Off},
{11, 0x00, 0, range1, range1Off},
}
func TestLookupSparse(t *testing.T) {
for i, test := range rangeTests {
n := trie{sparse: test.table, sparseOffset: test.offsets, cutoff: 10}
v := n.lookupValue(test.block, test.lookup)
if v != test.result {
t.Errorf("LookupSparse:%d: found %X; want %X", i, v, test.result)
}
}
}
// Test cases for illegal runes.
type trietest struct {
size int
bytes []byte
}
var tests = []trietest{
// illegal runes
{1, []byte{0x80}},
{1, []byte{0xFF}},
{1, []byte{t2, tx - 1}},
{1, []byte{t2, t2}},
{2, []byte{t3, tx, tx - 1}},
{2, []byte{t3, tx, t2}},
{1, []byte{t3, tx - 1, tx}},
{3, []byte{t4, tx, tx, tx - 1}},
{3, []byte{t4, tx, tx, t2}},
{1, []byte{t4, t2, tx, tx - 1}},
{2, []byte{t4, tx, t2, tx - 1}},
// short runes
{0, []byte{t2}},
{0, []byte{t3, tx}},
{0, []byte{t4, tx, tx}},
// we only support UTF-8 up to utf8.UTFMax bytes (4 bytes)
{1, []byte{t5, tx, tx, tx, tx}},
{1, []byte{t6, tx, tx, tx, tx, tx}},
}
func mkUTF8(r rune) ([]byte, int) {
var b [utf8.UTFMax]byte
sz := utf8.EncodeRune(b[:], r)
return b[:sz], sz
}
func TestLookup(t *testing.T) {
for i, tt := range testRunes {
b, szg := mkUTF8(tt)
v, szt := testdata.lookup(b)
if int(v) != i {
t.Errorf("lookup(%U): found value %#x, expected %#x", tt, v, i)
}
if szt != szg {
t.Errorf("lookup(%U): found size %d, expected %d", tt, szt, szg)
}
}
for i, tt := range tests {
v, sz := testdata.lookup(tt.bytes)
if v != 0 {
t.Errorf("lookup of illegal rune, case %d: found value %#x, expected 0", i, v)
}
if sz != tt.size {
t.Errorf("lookup of illegal rune, case %d: found size %d, expected %d", i, sz, tt.size)
}
}
// Verify defaults.
if v, _ := testdata.lookup([]byte{0xC1, 0x8C}); v != 0 {
t.Errorf("lookup of non-existing rune should be 0; found %X", v)
}
}
func TestLookupUnsafe(t *testing.T) {
for i, tt := range testRunes {
b, _ := mkUTF8(tt)
v := testdata.lookupUnsafe(b)
if int(v) != i {
t.Errorf("lookupUnsafe(%U): found value %#x, expected %#x", i, v, i)
}
}
}
func TestLookupString(t *testing.T) {
for i, tt := range testRunes {
b, szg := mkUTF8(tt)
v, szt := testdata.lookupString(string(b))
if int(v) != i {
t.Errorf("lookup(%U): found value %#x, expected %#x", i, v, i)
}
if szt != szg {
t.Errorf("lookup(%U): found size %d, expected %d", i, szt, szg)
}
}
for i, tt := range tests {
v, sz := testdata.lookupString(string(tt.bytes))
if int(v) != 0 {
t.Errorf("lookup of illegal rune, case %d: found value %#x, expected 0", i, v)
}
if sz != tt.size {
t.Errorf("lookup of illegal rune, case %d: found size %d, expected %d", i, sz, tt.size)
}
}
}
func TestLookupStringUnsafe(t *testing.T) {
for i, tt := range testRunes {
b, _ := mkUTF8(tt)
v := testdata.lookupStringUnsafe(string(b))
if int(v) != i {
t.Errorf("lookupUnsafe(%U): found value %#x, expected %#x", i, v, i)
}
}
}


@@ -1,85 +0,0 @@
// Generated by running
// maketesttables
// DO NOT EDIT
package norm
var testRunes = []int32{1, 12, 127, 128, 256, 2047, 2048, 2457, 65535, 65536, 65793, 1114111, 512, 513, 514, 528, 533}
// testdataValues: 192 entries, 384 bytes
// Block 2 is the null block.
var testdataValues = [192]uint16{
// Block 0x0, offset 0x0
0x000c: 0x0001,
// Block 0x1, offset 0x40
0x007f: 0x0002,
// Block 0x2, offset 0x80
}
// testdataSparseOffset: 10 entries, 20 bytes
var testdataSparseOffset = []uint16{0x0, 0x2, 0x4, 0x8, 0xa, 0xc, 0xe, 0x10, 0x12, 0x14}
// testdataSparseValues: 22 entries, 88 bytes
var testdataSparseValues = [22]valueRange{
// Block 0x0, offset 0x1
{value: 0x0000, lo: 0x01},
{value: 0x0003, lo: 0x80, hi: 0x80},
// Block 0x1, offset 0x2
{value: 0x0000, lo: 0x01},
{value: 0x0004, lo: 0x80, hi: 0x80},
// Block 0x2, offset 0x3
{value: 0x0001, lo: 0x03},
{value: 0x000c, lo: 0x80, hi: 0x82},
{value: 0x000f, lo: 0x90, hi: 0x90},
{value: 0x0010, lo: 0x95, hi: 0x95},
// Block 0x3, offset 0x4
{value: 0x0000, lo: 0x01},
{value: 0x0005, lo: 0xbf, hi: 0xbf},
// Block 0x4, offset 0x5
{value: 0x0000, lo: 0x01},
{value: 0x0006, lo: 0x80, hi: 0x80},
// Block 0x5, offset 0x6
{value: 0x0000, lo: 0x01},
{value: 0x0007, lo: 0x99, hi: 0x99},
// Block 0x6, offset 0x7
{value: 0x0000, lo: 0x01},
{value: 0x0008, lo: 0xbf, hi: 0xbf},
// Block 0x7, offset 0x8
{value: 0x0000, lo: 0x01},
{value: 0x0009, lo: 0x80, hi: 0x80},
// Block 0x8, offset 0x9
{value: 0x0000, lo: 0x01},
{value: 0x000a, lo: 0x81, hi: 0x81},
// Block 0x9, offset 0xa
{value: 0x0000, lo: 0x01},
{value: 0x000b, lo: 0xbf, hi: 0xbf},
}
// testdataLookup: 640 bytes
// Block 0 is the null block.
var testdataLookup = [640]uint8{
// Block 0x0, offset 0x0
// Block 0x1, offset 0x40
// Block 0x2, offset 0x80
// Block 0x3, offset 0xc0
0x0c2: 0x01, 0x0c4: 0x02,
0x0c8: 0x03,
0x0df: 0x04,
0x0e0: 0x02,
0x0ef: 0x03,
0x0f0: 0x05, 0x0f4: 0x07,
// Block 0x4, offset 0x100
0x120: 0x05, 0x126: 0x06,
// Block 0x5, offset 0x140
0x17f: 0x07,
// Block 0x6, offset 0x180
0x180: 0x08, 0x184: 0x09,
// Block 0x7, offset 0x1c0
0x1d0: 0x04,
// Block 0x8, offset 0x200
0x23f: 0x0a,
// Block 0x9, offset 0x240
0x24f: 0x06,
}
var testdataTrie = trie{testdataLookup[:], testdataValues[:], testdataSparseValues[:], testdataSparseOffset[:], 1}


@@ -1,317 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
// Trie table generator.
// Used by make*tables tools to generate a go file with trie data structures
// for mapping UTF-8 to a 16-bit value. All but the last byte in a UTF-8 byte
// sequence are used to lookup offsets in the index table to be used for the
// next byte. The last byte is used to index into a table with 16-bit values.
package main
import (
"fmt"
"hash/crc32"
"log"
"unicode/utf8"
)
const (
blockSize = 64
blockOffset = 2 // Subtract two blocks to compensate for the 0x80 added to continuation bytes.
maxSparseEntries = 16
)
// Intermediate trie structure
type trieNode struct {
table [256]*trieNode
value int
b byte
leaf bool
}
func newNode() *trieNode {
return new(trieNode)
}
func (n trieNode) String() string {
s := fmt.Sprint("trieNode{table: { non-nil at index: ")
for i, v := range n.table {
if v != nil {
s += fmt.Sprintf("%d, ", i)
}
}
s += fmt.Sprintf("}, value:%#x, b:%#x leaf:%v}", n.value, n.b, n.leaf)
return s
}
func (n trieNode) isInternal() bool {
internal := true
for i := 0; i < 256; i++ {
if nn := n.table[i]; nn != nil {
if !internal && !nn.leaf {
log.Fatalf("triegen: isInternal: node contains both leaf and non-leaf children (%v)", n)
}
internal = internal && !nn.leaf
}
}
return internal
}
func (n trieNode) mostFrequentStride() int {
counts := make(map[int]int)
v := 0
for _, t := range n.table[0x80 : 0x80+blockSize] {
if t != nil {
if stride := t.value - v; v != 0 && stride >= 0 {
counts[stride]++
}
v = t.value
} else {
v = 0
}
}
var maxs, maxc int
for stride, cnt := range counts {
if cnt > maxc || (cnt == maxc && stride < maxs) {
maxs, maxc = stride, cnt
}
}
return maxs
}
func (n trieNode) countSparseEntries() int {
stride := n.mostFrequentStride()
var count, v int
for _, t := range n.table[0x80 : 0x80+blockSize] {
tv := 0
if t != nil {
tv = t.value
}
if tv-v != stride {
if tv != 0 {
count++
}
}
v = tv
}
return count
}
func (n *trieNode) insert(r rune, value uint16) {
var p [utf8.UTFMax]byte
sz := utf8.EncodeRune(p[:], r)
for i := 0; i < sz; i++ {
if n.leaf {
log.Fatalf("triegen: insert: node (%#v) should not be a leaf", n)
}
nn := n.table[p[i]]
if nn == nil {
nn = newNode()
nn.b = p[i]
n.table[p[i]] = nn
}
n = nn
}
n.value = int(value)
n.leaf = true
}
type nodeIndex struct {
lookupBlocks []*trieNode
valueBlocks []*trieNode
sparseBlocks []*trieNode
sparseOffset []uint16
sparseCount int
lookupBlockIdx map[uint32]int
valueBlockIdx map[uint32]int
}
func newIndex() *nodeIndex {
index := &nodeIndex{}
index.lookupBlocks = make([]*trieNode, 0)
index.valueBlocks = make([]*trieNode, 0)
index.sparseBlocks = make([]*trieNode, 0)
index.sparseOffset = make([]uint16, 1)
index.lookupBlockIdx = make(map[uint32]int)
index.valueBlockIdx = make(map[uint32]int)
return index
}
func computeOffsets(index *nodeIndex, n *trieNode) int {
if n.leaf {
return n.value
}
hasher := crc32.New(crc32.MakeTable(crc32.IEEE))
// We only index continuation bytes.
for i := 0; i < blockSize; i++ {
v := 0
if nn := n.table[0x80+i]; nn != nil {
v = computeOffsets(index, nn)
}
hasher.Write([]byte{uint8(v >> 8), uint8(v)})
}
h := hasher.Sum32()
if n.isInternal() {
v, ok := index.lookupBlockIdx[h]
if !ok {
v = len(index.lookupBlocks) - blockOffset
index.lookupBlocks = append(index.lookupBlocks, n)
index.lookupBlockIdx[h] = v
}
n.value = v
} else {
v, ok := index.valueBlockIdx[h]
if !ok {
if c := n.countSparseEntries(); c > maxSparseEntries {
v = len(index.valueBlocks) - blockOffset
index.valueBlocks = append(index.valueBlocks, n)
index.valueBlockIdx[h] = v
} else {
v = -len(index.sparseOffset)
index.sparseBlocks = append(index.sparseBlocks, n)
index.sparseOffset = append(index.sparseOffset, uint16(index.sparseCount))
index.sparseCount += c + 1
index.valueBlockIdx[h] = v
}
}
n.value = v
}
return n.value
}
func printValueBlock(nr int, n *trieNode, offset int) {
boff := nr * blockSize
fmt.Printf("\n// Block %#x, offset %#x", nr, boff)
var printnewline bool
for i := 0; i < blockSize; i++ {
if i%6 == 0 {
printnewline = true
}
v := 0
if nn := n.table[i+offset]; nn != nil {
v = nn.value
}
if v != 0 {
if printnewline {
fmt.Printf("\n")
printnewline = false
}
fmt.Printf("%#04x:%#04x, ", boff+i, v)
}
}
}
func printSparseBlock(nr int, n *trieNode) {
boff := -n.value
fmt.Printf("\n// Block %#x, offset %#x", nr, boff)
v := 0
//stride := f(n)
stride := n.mostFrequentStride()
c := n.countSparseEntries()
fmt.Printf("\n{value:%#04x,lo:%#02x},", stride, uint8(c))
for i, nn := range n.table[0x80 : 0x80+blockSize] {
nv := 0
if nn != nil {
nv = nn.value
}
if nv-v != stride {
if v != 0 {
fmt.Printf(",hi:%#02x},", 0x80+i-1)
}
if nv != 0 {
fmt.Printf("\n{value:%#04x,lo:%#02x", nv, nn.b)
}
}
v = nv
}
if v != 0 {
fmt.Printf(",hi:%#02x},", 0x80+blockSize-1)
}
}
func printLookupBlock(nr int, n *trieNode, offset, cutoff int) {
boff := nr * blockSize
fmt.Printf("\n// Block %#x, offset %#x", nr, boff)
var printnewline bool
for i := 0; i < blockSize; i++ {
if i%8 == 0 {
printnewline = true
}
v := 0
if nn := n.table[i+offset]; nn != nil {
v = nn.value
}
if v != 0 {
if v < 0 {
v = -v - 1 + cutoff
}
if printnewline {
fmt.Printf("\n")
printnewline = false
}
fmt.Printf("%#03x:%#02x, ", boff+i, v)
}
}
}
// printTables returns the size in bytes of the generated tables.
func (t *trieNode) printTables(name string) int {
index := newIndex()
// Values for 7-bit ASCII are stored in the first two blocks, followed by a nil block.
index.valueBlocks = append(index.valueBlocks, nil, nil, nil)
// The first byte of multi-byte UTF-8 code points is indexed in the 4th block.
index.lookupBlocks = append(index.lookupBlocks, nil, nil, nil, nil)
// Index starter bytes of multi-byte UTF-8.
for i := 0xC0; i < 0x100; i++ {
if t.table[i] != nil {
computeOffsets(index, t.table[i])
}
}
nv := len(index.valueBlocks) * blockSize
fmt.Printf("// %sValues: %d entries, %d bytes\n", name, nv, nv*2)
fmt.Printf("// Block 2 is the null block.\n")
fmt.Printf("var %sValues = [%d]uint16 {", name, nv)
printValueBlock(0, t, 0)
printValueBlock(1, t, 64)
printValueBlock(2, newNode(), 0)
for i := 3; i < len(index.valueBlocks); i++ {
printValueBlock(i, index.valueBlocks[i], 0x80)
}
fmt.Print("\n}\n\n")
ls := len(index.sparseBlocks)
fmt.Printf("// %sSparseOffset: %d entries, %d bytes\n", name, ls, ls*2)
fmt.Printf("var %sSparseOffset = %#v\n\n", name, index.sparseOffset[1:])
ns := index.sparseCount
fmt.Printf("// %sSparseValues: %d entries, %d bytes\n", name, ns, ns*4)
fmt.Printf("var %sSparseValues = [%d]valueRange {", name, ns)
for i, n := range index.sparseBlocks {
printSparseBlock(i, n)
}
fmt.Print("\n}\n\n")
cutoff := len(index.valueBlocks) - blockOffset
ni := len(index.lookupBlocks) * blockSize
fmt.Printf("// %sLookup: %d bytes\n", name, ni)
fmt.Printf("// Block 0 is the null block.\n")
fmt.Printf("var %sLookup = [%d]uint8 {", name, ni)
printLookupBlock(0, newNode(), 0, cutoff)
printLookupBlock(1, newNode(), 0, cutoff)
printLookupBlock(2, newNode(), 0, cutoff)
printLookupBlock(3, t, 0xC0, cutoff)
for i := 4; i < len(index.lookupBlocks); i++ {
printLookupBlock(i, index.lookupBlocks[i], 0x80, cutoff)
}
fmt.Print("\n}\n\n")
fmt.Printf("var %sTrie = trie{ %sLookup[:], %sValues[:], %sSparseValues[:], %sSparseOffset[:], %d}\n\n",
name, name, name, name, name, cutoff)
return nv*2 + ns*4 + ni + ls*2
}
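
The dedup trick in computeOffsets above is worth isolating: each 64-entry block is identified by a CRC-32 of its contents, and a block whose hash was seen before reuses the existing index rather than being emitted again. A standalone sketch of that idea, with the same deliberate shortcut as the generator (a hash hit is treated as block equality):

package main

import (
	"fmt"
	"hash/crc32"
)

// dedupBlocks assigns each block an index, reusing the index of any
// previously seen block with identical contents, keyed by CRC-32.
func dedupBlocks(blocks [][]byte) []int {
	seen := make(map[uint32]int)
	idx := make([]int, len(blocks))
	next := 0
	for i, b := range blocks {
		h := crc32.ChecksumIEEE(b)
		n, ok := seen[h]
		if !ok {
			n = next
			next++
			seen[h] = n
		}
		idx[i] = n
	}
	return idx
}

func main() {
	fmt.Println(dedupBlocks([][]byte{{1, 2}, {3, 4}, {1, 2}})) // [0 1 0]
}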

View File

@@ -11,6 +11,8 @@ import (
"go/format"
"go/parser"
"go/token"
"io"
"log"
"os"
"regexp"
"strconv"
@@ -269,7 +271,7 @@ func handleStruct(t *ast.StructType) []fieldInfo {
return fs
}
func generateCode(s structInfo) {
func generateCode(output io.Writer, s structInfo) {
name := s.Name
fs := s.Fields
@@ -286,7 +288,7 @@ func generateCode(s structInfo) {
if err != nil {
panic(err)
}
fmt.Println(string(bs))
fmt.Fprintln(output, string(bs))
}
func uncamelize(s string) string {
@@ -295,16 +297,16 @@ func uncamelize(s string) string {
})
}
func generateDiagram(s structInfo) {
func generateDiagram(output io.Writer, s structInfo) {
sn := s.Name
fs := s.Fields
fmt.Println(sn + " Structure:")
fmt.Println()
fmt.Println(" 0 1 2 3")
fmt.Println(" 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1")
fmt.Fprintln(output, sn+" Structure:")
fmt.Fprintln(output)
fmt.Fprintln(output, " 0 1 2 3")
fmt.Fprintln(output, " 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1")
line := "+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+"
fmt.Println(line)
fmt.Fprintln(output, line)
for _, f := range fs {
tn := f.FieldType
@@ -312,52 +314,52 @@ func generateDiagram(s structInfo) {
name := uncamelize(f.Name)
if sl {
fmt.Printf("| %s |\n", center("Number of "+name, 61))
fmt.Println(line)
fmt.Fprintf(output, "| %s |\n", center("Number of "+name, 61))
fmt.Fprintln(output, line)
}
switch tn {
case "bool":
fmt.Printf("| %s |V|\n", center(name+" (V=0 or 1)", 59))
fmt.Println(line)
fmt.Fprintf(output, "| %s |V|\n", center(name+" (V=0 or 1)", 59))
fmt.Fprintln(output, line)
case "uint16":
fmt.Printf("| %s | %s |\n", center("0x0000", 29), center(name, 29))
fmt.Println(line)
fmt.Fprintf(output, "| %s | %s |\n", center("0x0000", 29), center(name, 29))
fmt.Fprintln(output, line)
case "uint32":
fmt.Printf("| %s |\n", center(name, 61))
fmt.Println(line)
fmt.Fprintf(output, "| %s |\n", center(name, 61))
fmt.Fprintln(output, line)
case "int64", "uint64":
fmt.Printf("| %-61s |\n", "")
fmt.Printf("+ %s +\n", center(name+" (64 bits)", 61))
fmt.Printf("| %-61s |\n", "")
fmt.Println(line)
fmt.Fprintf(output, "| %-61s |\n", "")
fmt.Fprintf(output, "+ %s +\n", center(name+" (64 bits)", 61))
fmt.Fprintf(output, "| %-61s |\n", "")
fmt.Fprintln(output, line)
case "string", "byte": // XXX We assume slice of byte!
fmt.Printf("| %s |\n", center("Length of "+name, 61))
fmt.Println(line)
fmt.Printf("/ %61s /\n", "")
fmt.Printf("\\ %s \\\n", center(name+" (variable length)", 61))
fmt.Printf("/ %61s /\n", "")
fmt.Println(line)
fmt.Fprintf(output, "| %s |\n", center("Length of "+name, 61))
fmt.Fprintln(output, line)
fmt.Fprintf(output, "/ %61s /\n", "")
fmt.Fprintf(output, "\\ %s \\\n", center(name+" (variable length)", 61))
fmt.Fprintf(output, "/ %61s /\n", "")
fmt.Fprintln(output, line)
default:
if sl {
tn = "Zero or more " + tn + " Structures"
fmt.Printf("/ %s /\n", center("", 61))
fmt.Printf("\\ %s \\\n", center(tn, 61))
fmt.Printf("/ %s /\n", center("", 61))
fmt.Fprintf(output, "/ %s /\n", center("", 61))
fmt.Fprintf(output, "\\ %s \\\n", center(tn, 61))
fmt.Fprintf(output, "/ %s /\n", center("", 61))
} else {
fmt.Printf("| %s |\n", center(tn, 61))
fmt.Fprintf(output, "| %s |\n", center(tn, 61))
}
fmt.Println(line)
fmt.Fprintln(output, line)
}
}
fmt.Println()
fmt.Println()
fmt.Fprintln(output)
fmt.Fprintln(output)
}
func generateXdr(s structInfo) {
func generateXdr(output io.Writer, s structInfo) {
sn := s.Name
fs := s.Fields
fmt.Printf("struct %s {\n", sn)
fmt.Fprintf(output, "struct %s {\n", sn)
for _, f := range fs {
tn := f.FieldType
@@ -373,21 +375,21 @@ func generateXdr(s structInfo) {
switch tn {
case "uint16", "uint32":
fmt.Printf("\tunsigned int %s%s;\n", fn, suf)
fmt.Fprintf(output, "\tunsigned int %s%s;\n", fn, suf)
case "int64":
fmt.Printf("\thyper %s%s;\n", fn, suf)
fmt.Fprintf(output, "\thyper %s%s;\n", fn, suf)
case "uint64":
fmt.Printf("\tunsigned hyper %s%s;\n", fn, suf)
fmt.Fprintf(output, "\tunsigned hyper %s%s;\n", fn, suf)
case "string":
fmt.Printf("\tstring %s<%s>;\n", fn, l)
fmt.Fprintf(output, "\tstring %s<%s>;\n", fn, l)
case "byte":
fmt.Printf("\topaque %s<%s>;\n", fn, l)
fmt.Fprintf(output, "\topaque %s<%s>;\n", fn, l)
default:
fmt.Printf("\t%s %s%s;\n", tn, fn, suf)
fmt.Fprintf(output, "\t%s %s%s;\n", tn, fn, suf)
}
}
fmt.Println("}")
fmt.Println()
fmt.Fprintln(output, "}")
fmt.Fprintln(output)
}
func center(s string, w int) string {
@@ -418,25 +420,35 @@ func inspector(structs *[]structInfo) func(ast.Node) bool {
}
func main() {
outputFile := flag.String("o", "", "Output file, blank for stdout")
flag.Parse()
fname := flag.Arg(0)
fset := token.NewFileSet()
f, err := parser.ParseFile(fset, fname, nil, parser.ParseComments)
if err != nil {
panic(err)
log.Fatal(err)
}
var structs []structInfo
i := inspector(&structs)
ast.Inspect(f, i)
headerTpl.Execute(os.Stdout, map[string]string{"Package": f.Name.Name})
var output io.Writer = os.Stdout
if *outputFile != "" {
fd, err := os.Create(*outputFile)
if err != nil {
log.Fatal(err)
}
output = fd
}
headerTpl.Execute(output, map[string]string{"Package": f.Name.Name})
for _, s := range structs {
fmt.Printf("\n/*\n\n")
generateDiagram(s)
generateXdr(s)
fmt.Printf("*/\n")
generateCode(s)
fmt.Fprintf(output, "\n/*\n\n")
generateDiagram(output, s)
generateXdr(output, s)
fmt.Fprintf(output, "*/\n")
generateCode(output, s)
}
}
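
The hunk above is a mechanical but useful refactor: every generator now takes an io.Writer, and main selects stdout or a file via the new -o flag. A standalone sketch of the same pattern, where emit is a hypothetical stand-in for generateDiagram/generateXdr/generateCode (the defer fd.Close() is an addition of this sketch, not something the hunk does):

package main

import (
	"flag"
	"fmt"
	"io"
	"log"
	"os"
)

// emit writes through the supplied io.Writer instead of printing directly,
// which makes the destination the caller's choice.
func emit(output io.Writer, name string) {
	fmt.Fprintf(output, "// generated for %s\n", name)
}

func main() {
	outputFile := flag.String("o", "", "Output file, blank for stdout")
	flag.Parse()
	var output io.Writer = os.Stdout
	if *outputFile != "" {
		fd, err := os.Create(*outputFile)
		if err != nil {
			log.Fatal(err)
		}
		defer fd.Close() // sketch addition, not in the hunk above
		output = fd
	}
	emit(output, "Example")
}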

View File

@@ -8,152 +8,669 @@
package cache
import (
"sync"
"sync/atomic"
"unsafe"
"github.com/syndtr/goleveldb/leveldb/util"
)
// SetFunc is the function that will be called by Namespace.Get to create
// a cache object. If charge is less than one, the cache object will
// not be registered in the cache tree; if value is nil, the cache object
// will not be created.
type SetFunc func() (charge int, value interface{})
// DelFin is the function that will be called as the result of a delete operation.
// exist == true indicates that the object exists, and pending == true
// indicates that the deletion has already happened but hasn't completed yet
// (it waits for all handles to be released). exist == false means the object
// doesn't exist.
type DelFin func(exist, pending bool)
// PurgeFin is the function that will be called as the result of a purge operation.
type PurgeFin func(ns, key uint64)
// Cache is a cache tree. A cache instance must be goroutine-safe.
type Cache interface {
// SetCapacity sets cache tree capacity.
SetCapacity(capacity int)
// Capacity returns cache tree capacity.
// Cacher provides an interface for implementing caching functionality.
// An implementation must be goroutine-safe.
type Cacher interface {
// Capacity returns cache capacity.
Capacity() int
// Used returns used cache tree capacity.
Used() int
// SetCapacity sets cache capacity.
SetCapacity(capacity int)
// Size returns the total size of alive cache objects.
Size() int
// Promote promotes the 'cache node'.
Promote(n *Node)
// NumObjects returns the number of alive objects.
NumObjects() int
// Ban evicts the 'cache node' and prevents subsequent 'promote'.
Ban(n *Node)
// GetNamespace gets cache namespace with the given id.
// GetNamespace never returns nil.
GetNamespace(id uint64) Namespace
// Evict evicts the 'cache node'.
Evict(n *Node)
// PurgeNamespace purges cache namespace with the given id from this cache tree.
// Also read Namespace.Purge.
PurgeNamespace(id uint64, fin PurgeFin)
// EvictNS evicts 'cache nodes' with the given namespace.
EvictNS(ns uint64)
// ZapNamespace detaches the cache namespace with the given id from this cache tree.
// Also read Namespace.Zap.
ZapNamespace(id uint64)
// EvictAll evicts all 'cache nodes'.
EvictAll()
// Purge purges all cache namespaces from this cache tree.
// This behaves the same as calling the Namespace.Purge method on every cache namespace.
Purge(fin PurgeFin)
// Zap detaches all cache namespaces from this cache tree.
// This behaves the same as calling the Namespace.Zap method on every cache namespace.
Zap()
// Close closes the 'cache tree'.
Close() error
}
// Namespace is a cache namespace. A namespace instance must be goroutine-safe.
type Namespace interface {
// Get gets cache object with the given key.
// If the cache object is not found and setf is not nil, Get will atomically
// create the cache object by calling setf. Otherwise Get returns nil.
//
// The returned cache handle should be released after use by calling Release
// method.
Get(key uint64, setf SetFunc) Handle
// Value is a 'cacheable object'. It may implement util.Releaser; if
// so, the Release method will be called once the object is released.
type Value interface{}
// Delete removes the cache object with the given key from the cache tree.
// A deleted cache object will be released as soon as all of its handles have
// been released.
// Delete only happens once; a subsequent delete will consider the cache object
// nonexistent, even if the cache object isn't released yet.
//
// If not nil, fin will be called if the cache object doesn't exist or when
// it is finally released.
//
// Delete returns true if such a cache object exists and has never been deleted.
Delete(key uint64, fin DelFin) bool
// Purge removes all cache objects within this namespace from the cache tree.
// This is the same as doing a delete on all cache objects.
//
// If not nil, fin will be called on each cache object when it is finally
// released.
Purge(fin PurgeFin)
// Zap detaches the namespace from the cache tree and releases all its cache objects.
// A zapped namespace can never be filled again.
// Calling Get on a zapped namespace will always return nil.
Zap()
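// CacheGetter binds a Cache to a fixed namespace, so per-key lookups don't
// have to thread the ns argument through every call site.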
type CacheGetter struct {
Cache *Cache
NS uint64
}
// Handle is a cache handle.
type Handle interface {
// Release releases this cache handle. This method can be safely called multiple
// times.
Release()
// Value returns the value of this cache handle.
// Value returns nil after this cache handle has been released.
Value() interface{}
func (g *CacheGetter) Get(key uint64, setFunc func() (size int, value Value)) *Handle {
return g.Cache.Get(g.NS, key, setFunc)
}
// The hash table implementation is based on:
// "Dynamic-Sized Nonblocking Hash Tables", by Yujie Liu, Kunlong Zhang, and Michael Spear. ACM Symposium on Principles of Distributed Computing, Jul 2014.
const (
DelNotExist = iota
DelExist
DelPendig
mInitialSize = 1 << 4
mOverflowThreshold = 1 << 5
mOverflowGrowThreshold = 1 << 7
)
// Namespace state.
type nsState int
const (
nsEffective nsState = iota
nsZapped
)
// Node state.
type nodeState int
const (
nodeZero nodeState = iota
nodeEffective
nodeEvicted
nodeDeleted
)
// Fake handle.
type fakeHandle struct {
value interface{}
fin func()
once uint32
type mBucket struct {
mu sync.Mutex
node []*Node
frozen bool
}
func (h *fakeHandle) Value() interface{} {
if atomic.LoadUint32(&h.once) == 0 {
return h.value
func (b *mBucket) freeze() []*Node {
b.mu.Lock()
defer b.mu.Unlock()
if !b.frozen {
b.frozen = true
}
return b.node
}
func (b *mBucket) get(r *Cache, h *mNode, hash uint32, ns, key uint64, noset bool) (done, added bool, n *Node) {
b.mu.Lock()
if b.frozen {
b.mu.Unlock()
return
}
// Scan the node.
for _, n := range b.node {
if n.hash == hash && n.ns == ns && n.key == key {
atomic.AddInt32(&n.ref, 1)
b.mu.Unlock()
return true, false, n
}
}
// Get only.
if noset {
b.mu.Unlock()
return true, false, nil
}
// Create node.
n = &Node{
r: r,
hash: hash,
ns: ns,
key: key,
ref: 1,
}
// Add node to bucket.
b.node = append(b.node, n)
bLen := len(b.node)
b.mu.Unlock()
// Update counter.
grow := atomic.AddInt32(&r.nodes, 1) >= h.growThreshold
if bLen > mOverflowThreshold {
grow = grow || atomic.AddInt32(&h.overflow, 1) >= mOverflowGrowThreshold
}
// Grow.
if grow && atomic.CompareAndSwapInt32(&h.resizeInProgess, 0, 1) {
nhLen := len(h.buckets) << 1
nh := &mNode{
buckets: make([]unsafe.Pointer, nhLen),
mask: uint32(nhLen) - 1,
pred: unsafe.Pointer(h),
growThreshold: int32(nhLen * mOverflowThreshold),
shrinkThreshold: int32(nhLen >> 1),
}
ok := atomic.CompareAndSwapPointer(&r.mHead, unsafe.Pointer(h), unsafe.Pointer(nh))
if !ok {
panic("BUG: failed swapping head")
}
go nh.initBuckets()
}
return true, true, n
}
func (b *mBucket) delete(r *Cache, h *mNode, hash uint32, ns, key uint64) (done, deleted bool) {
b.mu.Lock()
if b.frozen {
b.mu.Unlock()
return
}
// Scan the node.
var (
n *Node
bLen int
)
for i := range b.node {
n = b.node[i]
if n.ns == ns && n.key == key {
if atomic.LoadInt32(&n.ref) == 0 {
deleted = true
// Call releaser.
if n.value != nil {
if r, ok := n.value.(util.Releaser); ok {
r.Release()
}
n.value = nil
}
// Remove node from bucket.
b.node = append(b.node[:i], b.node[i+1:]...)
bLen = len(b.node)
}
break
}
}
b.mu.Unlock()
if deleted {
// Call OnDel.
for _, f := range n.onDel {
f()
}
// Update counter.
atomic.AddInt32(&r.size, int32(n.size)*-1)
shrink := atomic.AddInt32(&r.nodes, -1) < h.shrinkThreshold
if bLen >= mOverflowThreshold {
atomic.AddInt32(&h.overflow, -1)
}
// Shrink.
if shrink && len(h.buckets) > mInitialSize && atomic.CompareAndSwapInt32(&h.resizeInProgess, 0, 1) {
nhLen := len(h.buckets) >> 1
nh := &mNode{
buckets: make([]unsafe.Pointer, nhLen),
mask: uint32(nhLen) - 1,
pred: unsafe.Pointer(h),
growThreshold: int32(nhLen * mOverflowThreshold),
shrinkThreshold: int32(nhLen >> 1),
}
ok := atomic.CompareAndSwapPointer(&r.mHead, unsafe.Pointer(h), unsafe.Pointer(nh))
if !ok {
panic("BUG: failed swapping head")
}
go nh.initBuckets()
}
}
return true, deleted
}
type mNode struct {
buckets []unsafe.Pointer // []*mBucket
mask uint32
pred unsafe.Pointer // *mNode
resizeInProgess int32
overflow int32
growThreshold int32
shrinkThreshold int32
}
func (n *mNode) initBucket(i uint32) *mBucket {
if b := (*mBucket)(atomic.LoadPointer(&n.buckets[i])); b != nil {
return b
}
p := (*mNode)(atomic.LoadPointer(&n.pred))
if p != nil {
var node []*Node
if n.mask > p.mask {
// Grow.
pb := (*mBucket)(atomic.LoadPointer(&p.buckets[i&p.mask]))
if pb == nil {
pb = p.initBucket(i & p.mask)
}
m := pb.freeze()
// Split nodes.
for _, x := range m {
if x.hash&n.mask == i {
node = append(node, x)
}
}
} else {
// Shrink.
pb0 := (*mBucket)(atomic.LoadPointer(&p.buckets[i]))
if pb0 == nil {
pb0 = p.initBucket(i)
}
pb1 := (*mBucket)(atomic.LoadPointer(&p.buckets[i+uint32(len(n.buckets))]))
if pb1 == nil {
pb1 = p.initBucket(i + uint32(len(n.buckets)))
}
m0 := pb0.freeze()
m1 := pb1.freeze()
// Merge nodes.
node = make([]*Node, 0, len(m0)+len(m1))
node = append(node, m0...)
node = append(node, m1...)
}
b := &mBucket{node: node}
if atomic.CompareAndSwapPointer(&n.buckets[i], nil, unsafe.Pointer(b)) {
if len(node) > mOverflowThreshold {
atomic.AddInt32(&n.overflow, int32(len(node)-mOverflowThreshold))
}
return b
}
}
return (*mBucket)(atomic.LoadPointer(&n.buckets[i]))
}
func (n *mNode) initBuckets() {
for i := range n.buckets {
n.initBucket(uint32(i))
}
atomic.StorePointer(&n.pred, nil)
}
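// Taken together, the resize protocol works like this: a writer that sees
// growth (or shrink) pressure CAS-installs a fresh mNode as mHead with pred
// pointing at the old table, then fills the new buckets in a background
// goroutine via initBuckets. Until a bucket is initialized, any get or
// delete that lands on it calls initBucket, which freezes the corresponding
// predecessor bucket(s) and either splits them by hash&mask (grow) or
// merges two of them (shrink). Migration therefore completes incrementally,
// without blocking unrelated buckets, per the Liu/Zhang/Spear paper cited
// above.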
// Cache is a 'cache map'.
type Cache struct {
mu sync.RWMutex
mHead unsafe.Pointer // *mNode
nodes int32
size int32
cacher Cacher
closed bool
}
// NewCache creates a new 'cache map'. The cacher is optional and
// may be nil.
func NewCache(cacher Cacher) *Cache {
h := &mNode{
buckets: make([]unsafe.Pointer, mInitialSize),
mask: mInitialSize - 1,
growThreshold: int32(mInitialSize * mOverflowThreshold),
shrinkThreshold: 0,
}
for i := range h.buckets {
h.buckets[i] = unsafe.Pointer(&mBucket{})
}
r := &Cache{
mHead: unsafe.Pointer(h),
cacher: cacher,
}
return r
}
func (r *Cache) getBucket(hash uint32) (*mNode, *mBucket) {
h := (*mNode)(atomic.LoadPointer(&r.mHead))
i := hash & h.mask
b := (*mBucket)(atomic.LoadPointer(&h.buckets[i]))
if b == nil {
b = h.initBucket(i)
}
return h, b
}
func (r *Cache) delete(n *Node) bool {
for {
h, b := r.getBucket(n.hash)
done, deleted := b.delete(r, h, n.hash, n.ns, n.key)
if done {
return deleted
}
}
return false
}
// Nodes returns the number of 'cache nodes' in the map.
func (r *Cache) Nodes() int {
return int(atomic.LoadInt32(&r.nodes))
}
// Size returns the sum of 'cache node' sizes in the map.
func (r *Cache) Size() int {
return int(atomic.LoadInt32(&r.size))
}
// Capacity returns cache capacity.
func (r *Cache) Capacity() int {
if r.cacher == nil {
return 0
}
return r.cacher.Capacity()
}
// SetCapacity sets cache capacity.
func (r *Cache) SetCapacity(capacity int) {
if r.cacher != nil {
r.cacher.SetCapacity(capacity)
}
}
// Get gets the 'cache node' with the given namespace and key.
// If the cache node is not found and setFunc is not nil, Get will atomically
// create the 'cache node' by calling setFunc. Otherwise Get returns nil.
//
// The returned 'cache handle' should be released after use by calling the
// Release method.
func (r *Cache) Get(ns, key uint64, setFunc func() (size int, value Value)) *Handle {
r.mu.RLock()
defer r.mu.RUnlock()
if r.closed {
return nil
}
hash := murmur32(ns, key, 0xf00)
for {
h, b := r.getBucket(hash)
done, _, n := b.get(r, h, hash, ns, key, setFunc == nil)
if done {
if n != nil {
n.mu.Lock()
if n.value == nil {
if setFunc == nil {
n.mu.Unlock()
n.unref()
return nil
}
n.size, n.value = setFunc()
if n.value == nil {
n.size = 0
n.mu.Unlock()
n.unref()
return nil
}
atomic.AddInt32(&r.size, int32(n.size))
}
n.mu.Unlock()
if r.cacher != nil {
r.cacher.Promote(n)
}
return &Handle{unsafe.Pointer(n)}
}
break
}
}
return nil
}
func (h *fakeHandle) Release() {
if !atomic.CompareAndSwapUint32(&h.once, 0, 1) {
// Delete removes and bans the 'cache node' with the given namespace and key.
// A banned 'cache node' will never be inserted into the 'cache tree'. The ban
// applies only to that particular 'cache node', so when a 'cache node'
// is recreated it will not be banned.
//
// If onDel is not nil, then it will be executed if such a 'cache node'
// doesn't exist, or once the 'cache node' is released.
//
// Delete returns true if such a 'cache node' exists.
func (r *Cache) Delete(ns, key uint64, onDel func()) bool {
r.mu.RLock()
defer r.mu.RUnlock()
if r.closed {
return false
}
hash := murmur32(ns, key, 0xf00)
for {
h, b := r.getBucket(hash)
done, _, n := b.get(r, h, hash, ns, key, true)
if done {
if n != nil {
if onDel != nil {
n.mu.Lock()
n.onDel = append(n.onDel, onDel)
n.mu.Unlock()
}
if r.cacher != nil {
r.cacher.Ban(n)
}
n.unref()
return true
}
break
}
}
if onDel != nil {
onDel()
}
return false
}
// Evict evicts the 'cache node' with the given namespace and key. This will
// simply call Cacher.Evict.
//
// Evict returns true if such a 'cache node' exists.
func (r *Cache) Evict(ns, key uint64) bool {
r.mu.RLock()
defer r.mu.RUnlock()
if r.closed {
return false
}
hash := murmur32(ns, key, 0xf00)
for {
h, b := r.getBucket(hash)
done, _, n := b.get(r, h, hash, ns, key, true)
if done {
if n != nil {
if r.cacher != nil {
r.cacher.Evict(n)
}
n.unref()
return true
}
break
}
}
return false
}
// EvictNS evicts 'cache nodes' with the given namespace. This will
// simply call Cacher.EvictNS.
func (r *Cache) EvictNS(ns uint64) {
r.mu.RLock()
defer r.mu.RUnlock()
if r.closed {
return
}
if h.fin != nil {
h.fin()
h.fin = nil
if r.cacher != nil {
r.cacher.EvictNS(ns)
}
}
// EvictAll evicts all 'cache nodes'. This will simply call Cacher.EvictAll.
func (r *Cache) EvictAll() {
r.mu.RLock()
defer r.mu.RUnlock()
if r.closed {
return
}
if r.cacher != nil {
r.cacher.EvictAll()
}
}
// Close closes the 'cache map' and releases all 'cache nodes'.
func (r *Cache) Close() error {
r.mu.Lock()
if !r.closed {
r.closed = true
if r.cacher != nil {
if err := r.cacher.Close(); err != nil {
return err
}
}
h := (*mNode)(r.mHead)
h.initBuckets()
for i := range h.buckets {
b := (*mBucket)(h.buckets[i])
for _, n := range b.node {
// Call releaser.
if n.value != nil {
if r, ok := n.value.(util.Releaser); ok {
r.Release()
}
n.value = nil
}
// Call OnDel.
for _, f := range n.onDel {
f()
}
}
}
}
r.mu.Unlock()
return nil
}
// Node is a 'cache node'.
type Node struct {
r *Cache
hash uint32
ns, key uint64
mu sync.Mutex
size int
value Value
ref int32
onDel []func()
CacheData unsafe.Pointer
}
// NS returns this 'cache node' namespace.
func (n *Node) NS() uint64 {
return n.ns
}
// Key returns this 'cache node' key.
func (n *Node) Key() uint64 {
return n.key
}
// Size returns this 'cache node' size.
func (n *Node) Size() int {
return n.size
}
// Value returns this 'cache node' value.
func (n *Node) Value() Value {
return n.value
}
// Ref returns this 'cache node' ref counter.
func (n *Node) Ref() int32 {
return atomic.LoadInt32(&n.ref)
}
// GetHandle returns a handle for this 'cache node'.
func (n *Node) GetHandle() *Handle {
if atomic.AddInt32(&n.ref, 1) <= 1 {
panic("BUG: Node.GetHandle on zero ref")
}
return &Handle{unsafe.Pointer(n)}
}
func (n *Node) unref() {
if atomic.AddInt32(&n.ref, -1) == 0 {
n.r.delete(n)
}
}
func (n *Node) unrefLocked() {
if atomic.AddInt32(&n.ref, -1) == 0 {
n.r.mu.RLock()
if !n.r.closed {
n.r.delete(n)
}
n.r.mu.RUnlock()
}
}
type Handle struct {
n unsafe.Pointer // *Node
}
func (h *Handle) Value() Value {
n := (*Node)(atomic.LoadPointer(&h.n))
if n != nil {
return n.value
}
return nil
}
func (h *Handle) Release() {
nPtr := atomic.LoadPointer(&h.n)
if nPtr != nil && atomic.CompareAndSwapPointer(&h.n, nPtr, nil) {
n := (*Node)(nPtr)
n.unrefLocked()
}
}
func murmur32(ns, key uint64, seed uint32) uint32 {
const (
m = uint32(0x5bd1e995)
r = 24
)
k1 := uint32(ns >> 32)
k2 := uint32(ns)
k3 := uint32(key >> 32)
k4 := uint32(key)
k1 *= m
k1 ^= k1 >> r
k1 *= m
k2 *= m
k2 ^= k2 >> r
k2 *= m
k3 *= m
k3 ^= k3 >> r
k3 *= m
k4 *= m
k4 ^= k4 >> r
k4 *= m
h := seed
h *= m
h ^= k1
h *= m
h ^= k2
h *= m
h ^= k3
h *= m
h ^= k4
h ^= h >> 13
h *= m
h ^= h >> 15
return h
}
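
For contrast with the deleted Namespace-based API, here is a minimal usage sketch of the new one as it appears in this diff (NewLRU comes from the lru.go file added below; the import path matches the vendored package):

package main

import (
	"fmt"

	"github.com/syndtr/goleveldb/leveldb/cache"
)

func main() {
	// An LRU cacher with capacity 100 drives eviction; the Cache itself is
	// the lock-free map of namespaced nodes.
	c := cache.NewCache(cache.NewLRU(100))

	// On a miss, Get atomically creates the node via the set function,
	// which returns the node's size (charge) and value.
	h := c.Get(0, 1, func() (size int, value cache.Value) {
		return 1, "hello"
	})
	fmt.Println(h.Value().(string)) // "hello"

	// Handles must be released after use; the refcount reaching zero is
	// what lets banned or deleted nodes actually go away.
	h.Release()

	c.Close()
}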

View File

@@ -13,11 +13,26 @@ import (
"sync/atomic"
"testing"
"time"
"unsafe"
)
type int32o int32
func (o *int32o) acquire() {
if atomic.AddInt32((*int32)(o), 1) != 1 {
panic("BUG: invalid ref")
}
}
func (o *int32o) Release() {
if atomic.AddInt32((*int32)(o), -1) != 0 {
panic("BUG: invalid ref")
}
}
type releaserFunc struct {
fn func()
value interface{}
value Value
}
func (r releaserFunc) Release() {
@@ -26,8 +41,8 @@ func (r releaserFunc) Release() {
}
}
func set(ns Namespace, key uint64, value interface{}, charge int, relf func()) Handle {
return ns.Get(key, func() (int, interface{}) {
func set(c *Cache, ns, key uint64, value Value, charge int, relf func()) *Handle {
return c.Get(ns, key, func() (int, Value) {
if relf != nil {
return charge, releaserFunc{relf, value}
} else {
@@ -36,7 +51,246 @@ func set(ns Namespace, key uint64, value interface{}, charge int, relf func()) H
})
}
func TestCache_HitMiss(t *testing.T) {
func TestCacheMap(t *testing.T) {
runtime.GOMAXPROCS(runtime.NumCPU())
nsx := []struct {
nobjects, nhandles, concurrent, repeat int
}{
{10000, 400, 50, 3},
{100000, 1000, 100, 10},
}
var (
objects [][]int32o
handles [][]unsafe.Pointer
)
for _, x := range nsx {
objects = append(objects, make([]int32o, x.nobjects))
handles = append(handles, make([]unsafe.Pointer, x.nhandles))
}
c := NewCache(nil)
wg := new(sync.WaitGroup)
var done int32
for ns, x := range nsx {
for i := 0; i < x.concurrent; i++ {
wg.Add(1)
go func(ns, i, repeat int, objects []int32o, handles []unsafe.Pointer) {
defer wg.Done()
r := rand.New(rand.NewSource(time.Now().UnixNano()))
for j := len(objects) * repeat; j >= 0; j-- {
key := uint64(r.Intn(len(objects)))
h := c.Get(uint64(ns), key, func() (int, Value) {
o := &objects[key]
o.acquire()
return 1, o
})
if v := h.Value().(*int32o); v != &objects[key] {
t.Fatalf("#%d invalid value: want=%p got=%p", ns, &objects[key], v)
}
if objects[key] != 1 {
t.Fatalf("#%d invalid object %d: %d", ns, key, objects[key])
}
if !atomic.CompareAndSwapPointer(&handles[r.Intn(len(handles))], nil, unsafe.Pointer(h)) {
h.Release()
}
}
}(ns, i, x.repeat, objects[ns], handles[ns])
}
go func(handles []unsafe.Pointer) {
r := rand.New(rand.NewSource(time.Now().UnixNano()))
for atomic.LoadInt32(&done) == 0 {
i := r.Intn(len(handles))
h := (*Handle)(atomic.LoadPointer(&handles[i]))
if h != nil && atomic.CompareAndSwapPointer(&handles[i], unsafe.Pointer(h), nil) {
h.Release()
}
time.Sleep(time.Millisecond)
}
}(handles[ns])
}
go func() {
handles := make([]*Handle, 100000)
for atomic.LoadInt32(&done) == 0 {
for i := range handles {
handles[i] = c.Get(999999999, uint64(i), func() (int, Value) {
return 1, 1
})
}
for _, h := range handles {
h.Release()
}
}
}()
wg.Wait()
atomic.StoreInt32(&done, 1)
for _, handles0 := range handles {
for i := range handles0 {
h := (*Handle)(atomic.LoadPointer(&handles0[i]))
if h != nil && atomic.CompareAndSwapPointer(&handles0[i], unsafe.Pointer(h), nil) {
h.Release()
}
}
}
for ns, objects0 := range objects {
for i, o := range objects0 {
if o != 0 {
t.Fatalf("invalid object #%d.%d: ref=%d", ns, i, o)
}
}
}
}
func TestCacheMap_NodesAndSize(t *testing.T) {
c := NewCache(nil)
if c.Nodes() != 0 {
t.Errorf("invalid nodes counter: want=%d got=%d", 0, c.Nodes())
}
if c.Size() != 0 {
t.Errorf("invalid size counter: want=%d got=%d", 0, c.Size())
}
set(c, 0, 1, 1, 1, nil)
set(c, 0, 2, 2, 2, nil)
set(c, 1, 1, 3, 3, nil)
set(c, 2, 1, 4, 1, nil)
if c.Nodes() != 4 {
t.Errorf("invalid nodes counter: want=%d got=%d", 4, c.Nodes())
}
if c.Size() != 7 {
t.Errorf("invalid size counter: want=%d got=%d", 4, c.Size())
}
}
func TestLRUCache_Capacity(t *testing.T) {
c := NewCache(NewLRU(10))
if c.Capacity() != 10 {
t.Errorf("invalid capacity: want=%d got=%d", 10, c.Capacity())
}
set(c, 0, 1, 1, 1, nil).Release()
set(c, 0, 2, 2, 2, nil).Release()
set(c, 1, 1, 3, 3, nil).Release()
set(c, 2, 1, 4, 1, nil).Release()
set(c, 2, 2, 5, 1, nil).Release()
set(c, 2, 3, 6, 1, nil).Release()
set(c, 2, 4, 7, 1, nil).Release()
set(c, 2, 5, 8, 1, nil).Release()
if c.Nodes() != 7 {
t.Errorf("invalid nodes counter: want=%d got=%d", 7, c.Nodes())
}
if c.Size() != 10 {
t.Errorf("invalid size counter: want=%d got=%d", 10, c.Size())
}
c.SetCapacity(9)
if c.Capacity() != 9 {
t.Errorf("invalid capacity: want=%d got=%d", 9, c.Capacity())
}
if c.Nodes() != 6 {
t.Errorf("invalid nodes counter: want=%d got=%d", 6, c.Nodes())
}
if c.Size() != 8 {
t.Errorf("invalid size counter: want=%d got=%d", 8, c.Size())
}
}
func TestCacheMap_NilValue(t *testing.T) {
c := NewCache(NewLRU(10))
h := c.Get(0, 0, func() (size int, value Value) {
return 1, nil
})
if h != nil {
t.Error("cache handle is non-nil")
}
if c.Nodes() != 0 {
t.Errorf("invalid nodes counter: want=%d got=%d", 0, c.Nodes())
}
if c.Size() != 0 {
t.Errorf("invalid size counter: want=%d got=%d", 0, c.Size())
}
}
func TestLRUCache_GetLatency(t *testing.T) {
runtime.GOMAXPROCS(runtime.NumCPU())
const (
concurrentSet = 30
concurrentGet = 3
duration = 3 * time.Second
delay = 3 * time.Millisecond
maxkey = 100000
)
var (
set, getHit, getAll int32
getMaxLatency, getDuration int64
)
c := NewCache(NewLRU(5000))
wg := &sync.WaitGroup{}
until := time.Now().Add(duration)
for i := 0; i < concurrentSet; i++ {
wg.Add(1)
go func(i int) {
defer wg.Done()
r := rand.New(rand.NewSource(time.Now().UnixNano()))
for time.Now().Before(until) {
c.Get(0, uint64(r.Intn(maxkey)), func() (int, Value) {
time.Sleep(delay)
atomic.AddInt32(&set, 1)
return 1, 1
}).Release()
}
}(i)
}
for i := 0; i < concurrentGet; i++ {
wg.Add(1)
go func(i int) {
defer wg.Done()
r := rand.New(rand.NewSource(time.Now().UnixNano()))
for {
mark := time.Now()
if mark.Before(until) {
h := c.Get(0, uint64(r.Intn(maxkey)), nil)
latency := int64(time.Now().Sub(mark))
m := atomic.LoadInt64(&getMaxLatency)
if latency > m {
atomic.CompareAndSwapInt64(&getMaxLatency, m, latency)
}
atomic.AddInt64(&getDuration, latency)
if h != nil {
atomic.AddInt32(&getHit, 1)
h.Release()
}
atomic.AddInt32(&getAll, 1)
} else {
break
}
}
}(i)
}
wg.Wait()
getAvglatency := time.Duration(getDuration) / time.Duration(getAll)
t.Logf("set=%d getHit=%d getAll=%d getMaxLatency=%v getAvgLatency=%v",
set, getHit, getAll, time.Duration(getMaxLatency), getAvglatency)
if getAvglatency > delay/3 {
t.Errorf("get avg latency > %v: got=%v", delay/3, getAvglatency)
}
}
func TestLRUCache_HitMiss(t *testing.T) {
cases := []struct {
key uint64
value string
@@ -54,14 +308,13 @@ func TestCache_HitMiss(t *testing.T) {
}
setfin := 0
c := NewLRUCache(1000)
ns := c.GetNamespace(0)
c := NewCache(NewLRU(1000))
for i, x := range cases {
set(ns, x.key, x.value, len(x.value), func() {
set(c, 0, x.key, x.value, len(x.value), func() {
setfin++
}).Release()
for j, y := range cases {
h := ns.Get(y.key, nil)
h := c.Get(0, y.key, nil)
if j <= i {
// should hit
if h == nil {
@@ -85,7 +338,7 @@ func TestCache_HitMiss(t *testing.T) {
for i, x := range cases {
finalizerOk := false
ns.Delete(x.key, func(exist, pending bool) {
c.Delete(0, x.key, func() {
finalizerOk = true
})
@@ -94,7 +347,7 @@ func TestCache_HitMiss(t *testing.T) {
}
for j, y := range cases {
h := ns.Get(y.key, nil)
h := c.Get(0, y.key, nil)
if j > i {
// should hit
if h == nil {
@@ -122,20 +375,19 @@ func TestCache_HitMiss(t *testing.T) {
}
func TestLRUCache_Eviction(t *testing.T) {
c := NewLRUCache(12)
ns := c.GetNamespace(0)
o1 := set(ns, 1, 1, 1, nil)
set(ns, 2, 2, 1, nil).Release()
set(ns, 3, 3, 1, nil).Release()
set(ns, 4, 4, 1, nil).Release()
set(ns, 5, 5, 1, nil).Release()
if h := ns.Get(2, nil); h != nil { // 1,3,4,5,2
c := NewCache(NewLRU(12))
o1 := set(c, 0, 1, 1, 1, nil)
set(c, 0, 2, 2, 1, nil).Release()
set(c, 0, 3, 3, 1, nil).Release()
set(c, 0, 4, 4, 1, nil).Release()
set(c, 0, 5, 5, 1, nil).Release()
if h := c.Get(0, 2, nil); h != nil { // 1,3,4,5,2
h.Release()
}
set(ns, 9, 9, 10, nil).Release() // 5,2,9
set(c, 0, 9, 9, 10, nil).Release() // 5,2,9
for _, key := range []uint64{9, 2, 5, 1} {
h := ns.Get(key, nil)
h := c.Get(0, key, nil)
if h == nil {
t.Errorf("miss for key '%d'", key)
} else {
@@ -147,7 +399,7 @@ func TestLRUCache_Eviction(t *testing.T) {
}
o1.Release()
for _, key := range []uint64{1, 2, 5} {
h := ns.Get(key, nil)
h := c.Get(0, key, nil)
if h == nil {
t.Errorf("miss for key '%d'", key)
} else {
@@ -158,7 +410,7 @@ func TestLRUCache_Eviction(t *testing.T) {
}
}
for _, key := range []uint64{3, 4, 9} {
h := ns.Get(key, nil)
h := c.Get(0, key, nil)
if h != nil {
t.Errorf("hit for key '%d'", key)
if x := h.Value().(int); x != int(key) {
@@ -169,487 +421,150 @@ func TestLRUCache_Eviction(t *testing.T) {
}
}
func TestLRUCache_SetGet(t *testing.T) {
c := NewLRUCache(13)
ns := c.GetNamespace(0)
for i := 0; i < 200; i++ {
n := uint64(rand.Intn(99999) % 20)
set(ns, n, n, 1, nil).Release()
if h := ns.Get(n, nil); h != nil {
if h.Value() == nil {
t.Errorf("key '%d' contains nil value", n)
func TestLRUCache_Evict(t *testing.T) {
c := NewCache(NewLRU(6))
set(c, 0, 1, 1, 1, nil).Release()
set(c, 0, 2, 2, 1, nil).Release()
set(c, 1, 1, 4, 1, nil).Release()
set(c, 1, 2, 5, 1, nil).Release()
set(c, 2, 1, 6, 1, nil).Release()
set(c, 2, 2, 7, 1, nil).Release()
for ns := 0; ns < 3; ns++ {
for key := 1; key < 3; key++ {
if h := c.Get(uint64(ns), uint64(key), nil); h != nil {
h.Release()
} else {
if x := h.Value().(uint64); x != n {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", n, n, x)
}
t.Errorf("Cache.Get on #%d.%d return nil", ns, key)
}
}
}
if ok := c.Evict(0, 1); !ok {
t.Error("first Cache.Evict on #0.1 return false")
}
if ok := c.Evict(0, 1); ok {
t.Error("second Cache.Evict on #0.1 return true")
}
if h := c.Get(0, 1, nil); h != nil {
t.Errorf("Cache.Get on #0.1 return non-nil: %v", h.Value())
}
c.EvictNS(1)
if h := c.Get(1, 1, nil); h != nil {
t.Errorf("Cache.Get on #1.1 return non-nil: %v", h.Value())
}
if h := c.Get(1, 2, nil); h != nil {
t.Errorf("Cache.Get on #1.2 return non-nil: %v", h.Value())
}
c.EvictAll()
for ns := 0; ns < 3; ns++ {
for key := 1; key < 3; key++ {
if h := c.Get(uint64(ns), uint64(key), nil); h != nil {
t.Errorf("Cache.Get on #%d.%d return non-nil: %v", ns, key, h.Value())
}
}
}
}
func TestLRUCache_Delete(t *testing.T) {
delFuncCalled := 0
delFunc := func() {
delFuncCalled++
}
c := NewCache(NewLRU(2))
set(c, 0, 1, 1, 1, nil).Release()
set(c, 0, 2, 2, 1, nil).Release()
if ok := c.Delete(0, 1, delFunc); !ok {
t.Error("Cache.Delete on #1 return false")
}
if h := c.Get(0, 1, nil); h != nil {
t.Errorf("Cache.Get on #1 return non-nil: %v", h.Value())
}
if ok := c.Delete(0, 1, delFunc); ok {
t.Error("Cache.Delete on #1 return true")
}
h2 := c.Get(0, 2, nil)
if h2 == nil {
t.Error("Cache.Get on #2 return nil")
}
if ok := c.Delete(0, 2, delFunc); !ok {
t.Error("(1) Cache.Delete on #2 return false")
}
if ok := c.Delete(0, 2, delFunc); !ok {
t.Error("(2) Cache.Delete on #2 return false")
}
set(c, 0, 3, 3, 1, nil).Release()
set(c, 0, 4, 4, 1, nil).Release()
c.Get(0, 2, nil).Release()
for key := 2; key <= 4; key++ {
if h := c.Get(0, uint64(key), nil); h != nil {
h.Release()
} else {
t.Errorf("key '%d' doesn't exist", n)
}
}
}
func TestLRUCache_Purge(t *testing.T) {
c := NewLRUCache(3)
ns1 := c.GetNamespace(0)
o1 := set(ns1, 1, 1, 1, nil)
o2 := set(ns1, 2, 2, 1, nil)
ns1.Purge(nil)
set(ns1, 3, 3, 1, nil).Release()
for _, key := range []uint64{1, 2, 3} {
h := ns1.Get(key, nil)
if h == nil {
t.Errorf("miss for key '%d'", key)
} else {
if x := h.Value().(int); x != int(key) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", key, key, x)
}
h.Release()
}
}
o1.Release()
o2.Release()
for _, key := range []uint64{1, 2} {
h := ns1.Get(key, nil)
if h != nil {
t.Errorf("hit for key '%d'", key)
if x := h.Value().(int); x != int(key) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", key, key, x)
}
h.Release()
}
}
}
type testingCacheObjectCounter struct {
created uint
released uint
}
func (c *testingCacheObjectCounter) createOne() {
c.created++
}
func (c *testingCacheObjectCounter) releaseOne() {
c.released++
}
type testingCacheObject struct {
t *testing.T
cnt *testingCacheObjectCounter
ns, key uint64
releaseCalled bool
}
func (x *testingCacheObject) Release() {
if !x.releaseCalled {
x.releaseCalled = true
x.cnt.releaseOne()
} else {
x.t.Errorf("duplicate setfin NS#%d KEY#%d", x.ns, x.key)
}
}
func TestLRUCache_ConcurrentSetGet(t *testing.T) {
runtime.GOMAXPROCS(runtime.NumCPU())
seed := time.Now().UnixNano()
t.Logf("seed=%d", seed)
const (
N = 2000000
M = 4000
C = 3
)
var set, get uint32
wg := &sync.WaitGroup{}
c := NewLRUCache(M / 4)
for ni := uint64(0); ni < C; ni++ {
r0 := rand.New(rand.NewSource(seed + int64(ni)))
r1 := rand.New(rand.NewSource(seed + int64(ni) + 1))
ns := c.GetNamespace(ni)
wg.Add(2)
go func(ns Namespace, r *rand.Rand) {
for i := 0; i < N; i++ {
x := uint64(r.Int63n(M))
o := ns.Get(x, func() (int, interface{}) {
atomic.AddUint32(&set, 1)
return 1, x
})
if v := o.Value().(uint64); v != x {
t.Errorf("#%d invalid value, got=%d", x, v)
}
o.Release()
}
wg.Done()
}(ns, r0)
go func(ns Namespace, r *rand.Rand) {
for i := 0; i < N; i++ {
x := uint64(r.Int63n(M))
o := ns.Get(x, nil)
if o != nil {
atomic.AddUint32(&get, 1)
if v := o.Value().(uint64); v != x {
t.Errorf("#%d invalid value, got=%d", x, v)
}
o.Release()
}
}
wg.Done()
}(ns, r1)
}
wg.Wait()
t.Logf("set=%d get=%d", set, get)
}
func TestLRUCache_Finalizer(t *testing.T) {
const (
capacity = 100
goroutines = 100
iterations = 10000
keymax = 8000
)
cnt := &testingCacheObjectCounter{}
c := NewLRUCache(capacity)
type instance struct {
seed int64
rnd *rand.Rand
nsid uint64
ns Namespace
effective int
handles []Handle
handlesMap map[uint64]int
delete bool
purge bool
zap bool
wantDel int
delfinCalled int
delfinCalledAll int
delfinCalledEff int
purgefinCalled int
}
instanceGet := func(p *instance, key uint64) {
h := p.ns.Get(key, func() (charge int, value interface{}) {
to := &testingCacheObject{
t: t, cnt: cnt,
ns: p.nsid,
key: key,
}
p.effective++
cnt.createOne()
return 1, releaserFunc{func() {
to.Release()
p.effective--
}, to}
})
p.handles = append(p.handles, h)
p.handlesMap[key] = p.handlesMap[key] + 1
}
instanceRelease := func(p *instance, i int) {
h := p.handles[i]
key := h.Value().(releaserFunc).value.(*testingCacheObject).key
if n := p.handlesMap[key]; n == 0 {
t.Fatal("key ref == 0")
} else if n > 1 {
p.handlesMap[key] = n - 1
} else {
delete(p.handlesMap, key)
}
h.Release()
p.handles = append(p.handles[:i], p.handles[i+1:]...)
p.handles[len(p.handles) : len(p.handles)+1][0] = nil
}
seed := time.Now().UnixNano()
t.Logf("seed=%d", seed)
instances := make([]*instance, goroutines)
for i := range instances {
p := &instance{}
p.handlesMap = make(map[uint64]int)
p.seed = seed + int64(i)
p.rnd = rand.New(rand.NewSource(p.seed))
p.nsid = uint64(i)
p.ns = c.GetNamespace(p.nsid)
p.delete = i%6 == 0
p.purge = i%8 == 0
p.zap = i%12 == 0 || i%3 == 0
instances[i] = p
}
runr := rand.New(rand.NewSource(seed - 1))
run := func(rnd *rand.Rand, x []*instance, init func(p *instance) bool, fn func(p *instance, i int) bool) {
var (
rx []*instance
rn []int
)
if init == nil {
rx = append([]*instance{}, x...)
rn = make([]int, len(x))
} else {
for _, p := range x {
if init(p) {
rx = append(rx, p)
rn = append(rn, 0)
}
}
}
for len(rx) > 0 {
i := rand.Intn(len(rx))
if fn(rx[i], rn[i]) {
rn[i]++
} else {
rx = append(rx[:i], rx[i+1:]...)
rn = append(rn[:i], rn[i+1:]...)
}
t.Errorf("Cache.Get on #%d return nil", key)
}
}
// Get and release.
run(runr, instances, nil, func(p *instance, i int) bool {
if i < iterations {
if len(p.handles) == 0 || p.rnd.Int()%2 == 0 {
instanceGet(p, uint64(p.rnd.Intn(keymax)))
} else {
instanceRelease(p, p.rnd.Intn(len(p.handles)))
}
return true
} else {
return false
h2.Release()
if h := c.Get(0, 2, nil); h != nil {
t.Errorf("Cache.Get on #2 return non-nil: %v", h.Value())
}
if delFuncCalled != 4 {
t.Errorf("delFunc isn't called 4 times: got=%d", delFuncCalled)
}
}
func TestLRUCache_Close(t *testing.T) {
relFuncCalled := 0
relFunc := func() {
relFuncCalled++
}
delFuncCalled := 0
delFunc := func() {
delFuncCalled++
}
c := NewCache(NewLRU(2))
set(c, 0, 1, 1, 1, relFunc).Release()
set(c, 0, 2, 2, 1, relFunc).Release()
h3 := set(c, 0, 3, 3, 1, relFunc)
if h3 == nil {
t.Error("Cache.Get on #3 return nil")
}
if ok := c.Delete(0, 3, delFunc); !ok {
t.Error("Cache.Delete on #3 return false")
}
c.Close()
if relFuncCalled != 3 {
t.Errorf("relFunc isn't called 3 times: got=%d", relFuncCalled)
}
if delFuncCalled != 1 {
t.Errorf("delFunc isn't called 1 times: got=%d", delFuncCalled)
}
}
func BenchmarkLRUCache(b *testing.B) {
c := NewCache(NewLRU(10000))
b.SetParallelism(10)
b.RunParallel(func(pb *testing.PB) {
r := rand.New(rand.NewSource(time.Now().UnixNano()))
for pb.Next() {
key := uint64(r.Intn(1000000))
c.Get(0, key, func() (int, Value) {
return 1, key
}).Release()
}
})
if used, cap := c.Used(), c.Capacity(); used > cap {
t.Errorf("Used > capacity, used=%d cap=%d", used, cap)
}
// Check effective objects.
for i, p := range instances {
if int(p.effective) < len(p.handlesMap) {
t.Errorf("#%d effective objects < acquired handle, eo=%d ah=%d", i, p.effective, len(p.handlesMap))
}
}
if want := int(cnt.created - cnt.released); c.Size() != want {
t.Errorf("Invalid cache size, want=%d got=%d", want, c.Size())
}
// First delete.
run(runr, instances, func(p *instance) bool {
p.wantDel = p.effective
return p.delete
}, func(p *instance, i int) bool {
key := uint64(i)
if key < keymax {
_, wantExist := p.handlesMap[key]
gotExist := p.ns.Delete(key, func(exist, pending bool) {
p.delfinCalledAll++
if exist {
p.delfinCalledEff++
}
})
if !gotExist && wantExist {
t.Errorf("delete on NS#%d KEY#%d not found", p.nsid, key)
}
return true
} else {
return false
}
})
// Second delete.
run(runr, instances, func(p *instance) bool {
p.delfinCalled = 0
return p.delete
}, func(p *instance, i int) bool {
key := uint64(i)
if key < keymax {
gotExist := p.ns.Delete(key, func(exist, pending bool) {
if exist && !pending {
t.Errorf("delete fin on NS#%d KEY#%d exist and not pending for deletion", p.nsid, key)
}
p.delfinCalled++
})
if gotExist {
t.Errorf("delete on NS#%d KEY#%d found", p.nsid, key)
}
return true
} else {
if p.delfinCalled != keymax {
t.Errorf("(2) NS#%d not all delete fin called, diff=%d", p.nsid, keymax-p.delfinCalled)
}
return false
}
})
// Purge.
run(runr, instances, func(p *instance) bool {
return p.purge
}, func(p *instance, i int) bool {
p.ns.Purge(func(ns, key uint64) {
p.purgefinCalled++
})
return false
})
if want := int(cnt.created - cnt.released); c.Size() != want {
t.Errorf("Invalid cache size, want=%d got=%d", want, c.Size())
}
// Release.
run(runr, instances, func(p *instance) bool {
return !p.zap
}, func(p *instance, i int) bool {
if len(p.handles) > 0 {
instanceRelease(p, len(p.handles)-1)
return true
} else {
return false
}
})
if want := int(cnt.created - cnt.released); c.Size() != want {
t.Errorf("Invalid cache size, want=%d got=%d", want, c.Size())
}
// Zap.
run(runr, instances, func(p *instance) bool {
return p.zap
}, func(p *instance, i int) bool {
p.ns.Zap()
p.handles = nil
p.handlesMap = nil
return false
})
if want := int(cnt.created - cnt.released); c.Size() != want {
t.Errorf("Invalid cache size, want=%d got=%d", want, c.Size())
}
if notrel, used := int(cnt.created-cnt.released), c.Used(); notrel != used {
t.Errorf("Invalid used value, want=%d got=%d", notrel, used)
}
c.Purge(nil)
for _, p := range instances {
if p.delete {
if p.delfinCalledAll != keymax {
t.Errorf("#%d not all delete fin called, purge=%v zap=%v diff=%d", p.nsid, p.purge, p.zap, keymax-p.delfinCalledAll)
}
if p.delfinCalledEff != p.wantDel {
t.Errorf("#%d not all effective delete fin called, diff=%d", p.nsid, p.wantDel-p.delfinCalledEff)
}
if p.purge && p.purgefinCalled > 0 {
t.Errorf("#%d some purge fin called, delete=%v zap=%v n=%d", p.nsid, p.delete, p.zap, p.purgefinCalled)
}
} else {
if p.purge {
if p.purgefinCalled != p.wantDel {
t.Errorf("#%d not all purge fin called, delete=%v zap=%v diff=%d", p.nsid, p.delete, p.zap, p.wantDel-p.purgefinCalled)
}
}
}
}
if cnt.created != cnt.released {
t.Errorf("Some cache object weren't released, created=%d released=%d", cnt.created, cnt.released)
}
}
func BenchmarkLRUCache_Set(b *testing.B) {
c := NewLRUCache(0)
ns := c.GetNamespace(0)
b.ResetTimer()
for i := uint64(0); i < uint64(b.N); i++ {
set(ns, i, "", 1, nil)
}
}
func BenchmarkLRUCache_Get(b *testing.B) {
c := NewLRUCache(0)
ns := c.GetNamespace(0)
b.ResetTimer()
for i := uint64(0); i < uint64(b.N); i++ {
set(ns, i, "", 1, nil)
}
b.ResetTimer()
for i := uint64(0); i < uint64(b.N); i++ {
ns.Get(i, nil)
}
}
func BenchmarkLRUCache_Get2(b *testing.B) {
c := NewLRUCache(0)
ns := c.GetNamespace(0)
b.ResetTimer()
for i := uint64(0); i < uint64(b.N); i++ {
set(ns, i, "", 1, nil)
}
b.ResetTimer()
for i := uint64(0); i < uint64(b.N); i++ {
ns.Get(i, func() (charge int, value interface{}) {
return 0, nil
})
}
}
func BenchmarkLRUCache_Release(b *testing.B) {
c := NewLRUCache(0)
ns := c.GetNamespace(0)
handles := make([]Handle, b.N)
for i := uint64(0); i < uint64(b.N); i++ {
handles[i] = set(ns, i, "", 1, nil)
}
b.ResetTimer()
for _, h := range handles {
h.Release()
}
}
func BenchmarkLRUCache_SetRelease(b *testing.B) {
capacity := b.N / 100
if capacity <= 0 {
capacity = 10
}
c := NewLRUCache(capacity)
ns := c.GetNamespace(0)
b.ResetTimer()
for i := uint64(0); i < uint64(b.N); i++ {
set(ns, i, "", 1, nil).Release()
}
}
func BenchmarkLRUCache_SetReleaseTwice(b *testing.B) {
capacity := b.N / 100
if capacity <= 0 {
capacity = 10
}
c := NewLRUCache(capacity)
ns := c.GetNamespace(0)
b.ResetTimer()
na := b.N / 2
nb := b.N - na
for i := uint64(0); i < uint64(na); i++ {
set(ns, i, "", 1, nil).Release()
}
for i := uint64(0); i < uint64(nb); i++ {
set(ns, i, "", 1, nil).Release()
}
}
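
These tests and benchmarks run under the standard toolchain; a typical invocation, with the package path assumed from the vendored tree, would be:

go test -run NONE -bench . github.com/syndtr/goleveldb/leveldb/cache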

View File

@@ -0,0 +1,195 @@
// Copyright (c) 2012, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package cache
import (
"sync"
"unsafe"
)
type lruNode struct {
n *Node
h *Handle
ban bool
next, prev *lruNode
}
func (n *lruNode) insert(at *lruNode) {
x := at.next
at.next = n
n.prev = at
n.next = x
x.prev = n
}
func (n *lruNode) remove() {
if n.prev != nil {
n.prev.next = n.next
n.next.prev = n.prev
n.prev = nil
n.next = nil
} else {
panic("BUG: removing removed node")
}
}
type lru struct {
mu sync.Mutex
capacity int
used int
recent lruNode
}
func (r *lru) reset() {
r.recent.next = &r.recent
r.recent.prev = &r.recent
r.used = 0
}
func (r *lru) Capacity() int {
r.mu.Lock()
defer r.mu.Unlock()
return r.capacity
}
func (r *lru) SetCapacity(capacity int) {
var evicted []*lruNode
r.mu.Lock()
r.capacity = capacity
for r.used > r.capacity {
rn := r.recent.prev
if rn == nil {
panic("BUG: invalid LRU used or capacity counter")
}
rn.remove()
rn.n.CacheData = nil
r.used -= rn.n.Size()
evicted = append(evicted, rn)
}
r.mu.Unlock()
for _, rn := range evicted {
rn.h.Release()
}
}
func (r *lru) Promote(n *Node) {
var evicted []*lruNode
r.mu.Lock()
if n.CacheData == nil {
if n.Size() <= r.capacity {
rn := &lruNode{n: n, h: n.GetHandle()}
rn.insert(&r.recent)
n.CacheData = unsafe.Pointer(rn)
r.used += n.Size()
for r.used > r.capacity {
rn := r.recent.prev
if rn == nil {
panic("BUG: invalid LRU used or capacity counter")
}
rn.remove()
rn.n.CacheData = nil
r.used -= rn.n.Size()
evicted = append(evicted, rn)
}
}
} else {
rn := (*lruNode)(n.CacheData)
if !rn.ban {
rn.remove()
rn.insert(&r.recent)
}
}
r.mu.Unlock()
for _, rn := range evicted {
rn.h.Release()
}
}
func (r *lru) Ban(n *Node) {
r.mu.Lock()
if n.CacheData == nil {
n.CacheData = unsafe.Pointer(&lruNode{n: n, ban: true})
} else {
rn := (*lruNode)(n.CacheData)
if !rn.ban {
rn.remove()
rn.ban = true
r.used -= rn.n.Size()
r.mu.Unlock()
rn.h.Release()
rn.h = nil
return
}
}
r.mu.Unlock()
}
func (r *lru) Evict(n *Node) {
r.mu.Lock()
rn := (*lruNode)(n.CacheData)
if rn == nil || rn.ban {
r.mu.Unlock()
return
}
n.CacheData = nil
r.mu.Unlock()
rn.h.Release()
}
func (r *lru) EvictNS(ns uint64) {
var evicted []*lruNode
r.mu.Lock()
for e := r.recent.prev; e != &r.recent; {
rn := e
e = e.prev
if rn.n.NS() == ns {
rn.remove()
rn.n.CacheData = nil
r.used -= rn.n.Size()
evicted = append(evicted, rn)
}
}
r.mu.Unlock()
for _, rn := range evicted {
rn.h.Release()
}
}
func (r *lru) EvictAll() {
r.mu.Lock()
back := r.recent.prev
for rn := back; rn != &r.recent; rn = rn.prev {
rn.n.CacheData = nil
}
r.reset()
r.mu.Unlock()
for rn := back; rn != &r.recent; rn = rn.prev {
rn.h.Release()
}
}
func (r *lru) Close() error {
return nil
}
// NewLRU creates a new LRU cache.
func NewLRU(capacity int) Cacher {
r := &lru{capacity: capacity}
r.reset()
return r
}
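
The lru type above keeps recency in a circular doubly-linked list anchored by a sentinel: recent.next is the most recently promoted node and recent.prev the next eviction candidate. A tiny standalone sketch of that structure:

package main

import "fmt"

// node is a stripped-down lruNode: intrusive links, no cache payload.
type node struct {
	val        int
	next, prev *node
}

// insert places n immediately after at (the sentinel for "promote to front").
func (n *node) insert(at *node) {
	x := at.next
	at.next = n
	n.prev = at
	n.next = x
	x.prev = n
}

// remove unlinks n from the ring.
func (n *node) remove() {
	n.prev.next = n.next
	n.next.prev = n.prev
	n.prev, n.next = nil, nil
}

func main() {
	var recent node
	recent.next, recent.prev = &recent, &recent // empty ring, like lru.reset
	for i := 1; i <= 3; i++ {
		(&node{val: i}).insert(&recent) // promote to front
	}
	fmt.Println(recent.prev.val) // 1: least recently used, evicted first
	recent.prev.remove()         // evict it
	fmt.Println(recent.prev.val) // 2: next eviction candidate
}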

View File

@@ -1,622 +0,0 @@
// Copyright (c) 2012, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package cache
import (
"sync"
"sync/atomic"
"github.com/syndtr/goleveldb/leveldb/util"
)
// The LLRB implementation was taken from https://github.com/petar/GoLLRB,
// which contains the following header:
//
// Copyright 2010 Petar Maymounkov. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// lruCache represents an LRU cache state.
type lruCache struct {
mu sync.Mutex
recent lruNode
table map[uint64]*lruNs
capacity int
used, size, alive int
}
// NewLRUCache creates a new initialized LRU cache with the given capacity.
func NewLRUCache(capacity int) Cache {
c := &lruCache{
table: make(map[uint64]*lruNs),
capacity: capacity,
}
c.recent.rNext = &c.recent
c.recent.rPrev = &c.recent
return c
}
func (c *lruCache) Capacity() int {
c.mu.Lock()
defer c.mu.Unlock()
return c.capacity
}
func (c *lruCache) Used() int {
c.mu.Lock()
defer c.mu.Unlock()
return c.used
}
func (c *lruCache) Size() int {
c.mu.Lock()
defer c.mu.Unlock()
return c.size
}
func (c *lruCache) NumObjects() int {
c.mu.Lock()
defer c.mu.Unlock()
return c.alive
}
// SetCapacity sets the cache capacity.
func (c *lruCache) SetCapacity(capacity int) {
c.mu.Lock()
c.capacity = capacity
c.evict()
c.mu.Unlock()
}
// GetNamespace returns the namespace object for the given id.
func (c *lruCache) GetNamespace(id uint64) Namespace {
c.mu.Lock()
defer c.mu.Unlock()
if ns, ok := c.table[id]; ok {
return ns
}
ns := &lruNs{lru: c, id: id}
c.table[id] = ns
return ns
}
func (c *lruCache) ZapNamespace(id uint64) {
c.mu.Lock()
if ns, exist := c.table[id]; exist {
ns.zapNB()
delete(c.table, id)
}
c.mu.Unlock()
}
func (c *lruCache) PurgeNamespace(id uint64, fin PurgeFin) {
c.mu.Lock()
if ns, exist := c.table[id]; exist {
ns.purgeNB(fin)
}
c.mu.Unlock()
}
// Purge purges the entire cache.
func (c *lruCache) Purge(fin PurgeFin) {
c.mu.Lock()
for _, ns := range c.table {
ns.purgeNB(fin)
}
c.mu.Unlock()
}
func (c *lruCache) Zap() {
c.mu.Lock()
for _, ns := range c.table {
ns.zapNB()
}
c.table = make(map[uint64]*lruNs)
c.mu.Unlock()
}
func (c *lruCache) evict() {
top := &c.recent
for n := c.recent.rPrev; c.used > c.capacity && n != top; {
if n.state != nodeEffective {
panic("evicting non effective node")
}
n.state = nodeEvicted
n.rRemove()
n.derefNB()
c.used -= n.charge
n = c.recent.rPrev
}
}
type lruNs struct {
lru *lruCache
id uint64
rbRoot *lruNode
state nsState
}
func (ns *lruNs) rbGetOrCreateNode(h *lruNode, key uint64) (hn, n *lruNode) {
if h == nil {
n = &lruNode{ns: ns, key: key}
return n, n
}
if key < h.key {
hn, n = ns.rbGetOrCreateNode(h.rbLeft, key)
if hn != nil {
h.rbLeft = hn
} else {
return nil, n
}
} else if key > h.key {
hn, n = ns.rbGetOrCreateNode(h.rbRight, key)
if hn != nil {
h.rbRight = hn
} else {
return nil, n
}
} else {
return nil, h
}
if rbIsRed(h.rbRight) && !rbIsRed(h.rbLeft) {
h = rbRotLeft(h)
}
if rbIsRed(h.rbLeft) && rbIsRed(h.rbLeft.rbLeft) {
h = rbRotRight(h)
}
if rbIsRed(h.rbLeft) && rbIsRed(h.rbRight) {
rbFlip(h)
}
return h, n
}
func (ns *lruNs) getOrCreateNode(key uint64) *lruNode {
hn, n := ns.rbGetOrCreateNode(ns.rbRoot, key)
if hn != nil {
ns.rbRoot = hn
ns.rbRoot.rbBlack = true
}
return n
}
func (ns *lruNs) rbGetNode(key uint64) *lruNode {
h := ns.rbRoot
for h != nil {
switch {
case key < h.key:
h = h.rbLeft
case key > h.key:
h = h.rbRight
default:
return h
}
}
return nil
}
func (ns *lruNs) getNode(key uint64) *lruNode {
return ns.rbGetNode(key)
}
func (ns *lruNs) rbDeleteNode(h *lruNode, key uint64) *lruNode {
if h == nil {
return nil
}
if key < h.key {
if h.rbLeft == nil { // key not present. Nothing to delete
return h
}
if !rbIsRed(h.rbLeft) && !rbIsRed(h.rbLeft.rbLeft) {
h = rbMoveLeft(h)
}
h.rbLeft = ns.rbDeleteNode(h.rbLeft, key)
} else {
if rbIsRed(h.rbLeft) {
h = rbRotRight(h)
}
// If @key equals @h.key and no right children at @h
if h.key == key && h.rbRight == nil {
return nil
}
if h.rbRight != nil && !rbIsRed(h.rbRight) && !rbIsRed(h.rbRight.rbLeft) {
h = rbMoveRight(h)
}
// If @key equals @h.key, and (from above) 'h.Right != nil'
if h.key == key {
var x *lruNode
h.rbRight, x = rbDeleteMin(h.rbRight)
if x == nil {
panic("logic")
}
x.rbLeft, h.rbLeft = h.rbLeft, nil
x.rbRight, h.rbRight = h.rbRight, nil
x.rbBlack = h.rbBlack
h = x
} else { // Else, @key is bigger than @h.key
h.rbRight = ns.rbDeleteNode(h.rbRight, key)
}
}
return rbFixup(h)
}
func (ns *lruNs) deleteNode(key uint64) {
ns.rbRoot = ns.rbDeleteNode(ns.rbRoot, key)
if ns.rbRoot != nil {
ns.rbRoot.rbBlack = true
}
}
func (ns *lruNs) rbIterateNodes(h *lruNode, pivot uint64, iter func(n *lruNode) bool) bool {
if h == nil {
return true
}
if h.key >= pivot {
if !ns.rbIterateNodes(h.rbLeft, pivot, iter) {
return false
}
if !iter(h) {
return false
}
}
return ns.rbIterateNodes(h.rbRight, pivot, iter)
}
func (ns *lruNs) iterateNodes(iter func(n *lruNode) bool) {
ns.rbIterateNodes(ns.rbRoot, 0, iter)
}
func (ns *lruNs) Get(key uint64, setf SetFunc) Handle {
ns.lru.mu.Lock()
defer ns.lru.mu.Unlock()
if ns.state != nsEffective {
return nil
}
var n *lruNode
if setf == nil {
n = ns.getNode(key)
if n == nil {
return nil
}
} else {
n = ns.getOrCreateNode(key)
}
switch n.state {
case nodeZero:
charge, value := setf()
if value == nil {
ns.deleteNode(key)
return nil
}
if charge < 0 {
charge = 0
}
n.value = value
n.charge = charge
n.state = nodeEvicted
ns.lru.size += charge
ns.lru.alive++
fallthrough
case nodeEvicted:
if n.charge == 0 {
break
}
// Insert to recent list.
n.state = nodeEffective
n.ref++
ns.lru.used += n.charge
ns.lru.evict()
fallthrough
case nodeEffective:
// Bump to front.
n.rRemove()
n.rInsert(&ns.lru.recent)
case nodeDeleted:
// Do nothing.
default:
panic("invalid state")
}
n.ref++
return &lruHandle{node: n}
}
func (ns *lruNs) Delete(key uint64, fin DelFin) bool {
ns.lru.mu.Lock()
defer ns.lru.mu.Unlock()
if ns.state != nsEffective {
if fin != nil {
fin(false, false)
}
return false
}
n := ns.getNode(key)
if n == nil {
if fin != nil {
fin(false, false)
}
return false
}
switch n.state {
case nodeEffective:
ns.lru.used -= n.charge
n.state = nodeDeleted
n.delfin = fin
n.rRemove()
n.derefNB()
case nodeEvicted:
n.state = nodeDeleted
n.delfin = fin
case nodeDeleted:
if fin != nil {
fin(true, true)
}
return false
default:
panic("invalid state")
}
return true
}
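// A hedged aside, not part of this file: DelFin's two booleans are unnamed
// in this listing, so the reading below is inferred from the call sites —
// fin(false, false) when the key is absent, fin(true, true) when the entry
// is already marked deleted, and delfin(true, false) once fin() finalizes it.
// The sketch assumes a "log" import.
func exampleDelete(ns Namespace) {
	ns.Delete(42, func(exist, pending bool) {
		// exist: the key was present; pending: deletion was already underway.
		log.Printf("delete: exist=%v pending=%v", exist, pending)
	})
}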
func (ns *lruNs) purgeNB(fin PurgeFin) {
if ns.state == nsEffective {
var nodes []*lruNode
ns.iterateNodes(func(n *lruNode) bool {
nodes = append(nodes, n)
return true
})
for _, n := range nodes {
switch n.state {
case nodeEffective:
ns.lru.used -= n.charge
n.state = nodeDeleted
n.purgefin = fin
n.rRemove()
n.derefNB()
case nodeEvicted:
n.state = nodeDeleted
n.purgefin = fin
case nodeDeleted:
default:
panic("invalid state")
}
}
}
}
func (ns *lruNs) Purge(fin PurgeFin) {
ns.lru.mu.Lock()
ns.purgeNB(fin)
ns.lru.mu.Unlock()
}
func (ns *lruNs) zapNB() {
if ns.state == nsEffective {
ns.state = nsZapped
ns.iterateNodes(func(n *lruNode) bool {
if n.state == nodeEffective {
ns.lru.used -= n.charge
n.rRemove()
}
ns.lru.size -= n.charge
n.state = nodeDeleted
n.fin()
return true
})
ns.rbRoot = nil
}
}
func (ns *lruNs) Zap() {
ns.lru.mu.Lock()
ns.zapNB()
delete(ns.lru.table, ns.id)
ns.lru.mu.Unlock()
}
type lruNode struct {
ns *lruNs
rNext, rPrev *lruNode
rbLeft, rbRight *lruNode
rbBlack bool
key uint64
value interface{}
charge int
ref int
state nodeState
delfin DelFin
purgefin PurgeFin
}
func (n *lruNode) rInsert(at *lruNode) {
x := at.rNext
at.rNext = n
n.rPrev = at
n.rNext = x
x.rPrev = n
}
func (n *lruNode) rRemove() bool {
if n.rPrev == nil {
return false
}
n.rPrev.rNext = n.rNext
n.rNext.rPrev = n.rPrev
n.rPrev = nil
n.rNext = nil
return true
}
func (n *lruNode) fin() {
if r, ok := n.value.(util.Releaser); ok {
r.Release()
}
if n.purgefin != nil {
if n.delfin != nil {
panic("conflicting delete and purge fin")
}
n.purgefin(n.ns.id, n.key)
n.purgefin = nil
} else if n.delfin != nil {
n.delfin(true, false)
n.delfin = nil
}
}
func (n *lruNode) derefNB() {
n.ref--
if n.ref == 0 {
if n.ns.state == nsEffective {
// Remove element.
n.ns.deleteNode(n.key)
n.ns.lru.size -= n.charge
n.ns.lru.alive--
n.fin()
}
n.value = nil
} else if n.ref < 0 {
panic("leveldb/cache: lruCache: negative node reference")
}
}
func (n *lruNode) deref() {
n.ns.lru.mu.Lock()
n.derefNB()
n.ns.lru.mu.Unlock()
}
type lruHandle struct {
node *lruNode
once uint32
}
func (h *lruHandle) Value() interface{} {
if atomic.LoadUint32(&h.once) == 0 {
return h.node.value
}
return nil
}
func (h *lruHandle) Release() {
if !atomic.CompareAndSwapUint32(&h.once, 0, 1) {
return
}
h.node.deref()
h.node = nil
}
func rbIsRed(h *lruNode) bool {
if h == nil {
return false
}
return !h.rbBlack
}
func rbRotLeft(h *lruNode) *lruNode {
x := h.rbRight
if x.rbBlack {
panic("rotating a black link")
}
h.rbRight = x.rbLeft
x.rbLeft = h
x.rbBlack = h.rbBlack
h.rbBlack = false
return x
}
func rbRotRight(h *lruNode) *lruNode {
x := h.rbLeft
if x.rbBlack {
panic("rotating a black link")
}
h.rbLeft = x.rbRight
x.rbRight = h
x.rbBlack = h.rbBlack
h.rbBlack = false
return x
}
func rbFlip(h *lruNode) {
h.rbBlack = !h.rbBlack
h.rbLeft.rbBlack = !h.rbLeft.rbBlack
h.rbRight.rbBlack = !h.rbRight.rbBlack
}
func rbMoveLeft(h *lruNode) *lruNode {
rbFlip(h)
if rbIsRed(h.rbRight.rbLeft) {
h.rbRight = rbRotRight(h.rbRight)
h = rbRotLeft(h)
rbFlip(h)
}
return h
}
func rbMoveRight(h *lruNode) *lruNode {
rbFlip(h)
if rbIsRed(h.rbLeft.rbLeft) {
h = rbRotRight(h)
rbFlip(h)
}
return h
}
func rbFixup(h *lruNode) *lruNode {
if rbIsRed(h.rbRight) {
h = rbRotLeft(h)
}
if rbIsRed(h.rbLeft) && rbIsRed(h.rbLeft.rbLeft) {
h = rbRotRight(h)
}
if rbIsRed(h.rbLeft) && rbIsRed(h.rbRight) {
rbFlip(h)
}
return h
}
func rbDeleteMin(h *lruNode) (hn, n *lruNode) {
if h == nil {
return nil, nil
}
if h.rbLeft == nil {
return nil, h
}
if !rbIsRed(h.rbLeft) && !rbIsRed(h.rbLeft.rbLeft) {
h = rbMoveLeft(h)
}
h.rbLeft, n = rbDeleteMin(h.rbLeft)
return rbFixup(h), n
}
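For orientation, a sketch of how this LRU namespace is driven from the outside — NewLRUCache, GetNamespace, Get, Value and Release are the package's own API as exercised elsewhere in this changeset; loadTable and use are illustrative placeholders:

func exampleNamespace() {
	c := cache.NewLRUCache(8 * 1024 * 1024) // capacity, in charge units
	ns := c.GetNamespace(0)

	// Get returns a live handle, invoking the SetFunc on a miss to load and
	// charge the entry; the handle pins the node until it is released.
	h := ns.Get(42, func() (charge int, value interface{}) {
		return 1, loadTable(42) // loadTable: hypothetical expensive load
	})
	if h != nil {
		use(h.Value()) // use: hypothetical consumer
		h.Release()    // unpin; the node becomes evictable again
	}
}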


@@ -9,13 +9,12 @@ package leveldb
import (
"bytes"
"fmt"
"github.com/syndtr/goleveldb/leveldb/filter"
"github.com/syndtr/goleveldb/leveldb/opt"
"github.com/syndtr/goleveldb/leveldb/storage"
"io"
"math/rand"
"testing"
"github.com/syndtr/goleveldb/leveldb/cache"
"github.com/syndtr/goleveldb/leveldb/opt"
"github.com/syndtr/goleveldb/leveldb/storage"
)
const ctValSize = 1000
@@ -32,8 +31,8 @@ func newDbCorruptHarnessWopt(t *testing.T, o *opt.Options) *dbCorruptHarness {
func newDbCorruptHarness(t *testing.T) *dbCorruptHarness {
return newDbCorruptHarnessWopt(t, &opt.Options{
BlockCache: cache.NewLRUCache(100),
Strict: opt.StrictJournalChecksum,
BlockCacheCapacity: 100,
Strict: opt.StrictJournalChecksum,
})
}
@@ -96,21 +95,22 @@ func (h *dbCorruptHarness) deleteRand(n, max int, rnd *rand.Rand) {
}
}
func (h *dbCorruptHarness) corrupt(ft storage.FileType, offset, n int) {
func (h *dbCorruptHarness) corrupt(ft storage.FileType, fi, offset, n int) {
p := &h.dbHarness
t := p.t
var file storage.File
ff, _ := p.stor.GetFiles(ft)
for _, f := range ff {
if file == nil || f.Num() > file.Num() {
file = f
}
sff := files(ff)
sff.sort()
if fi < 0 {
fi = len(sff) - 1
}
if file == nil {
t.Fatalf("no such file with type %q", ft)
if fi >= len(sff) {
t.Fatalf("no such file with type %q with index %d", ft, fi)
}
file := sff[fi]
r, err := file.Open()
if err != nil {
t.Fatal("cannot open file: ", err)
@@ -225,8 +225,8 @@ func TestCorruptDB_Journal(t *testing.T) {
h.build(100)
h.check(100, 100)
h.closeDB()
h.corrupt(storage.TypeJournal, 19, 1)
h.corrupt(storage.TypeJournal, 32*1024+1000, 1)
h.corrupt(storage.TypeJournal, -1, 19, 1)
h.corrupt(storage.TypeJournal, -1, 32*1024+1000, 1)
h.openDB()
h.check(36, 36)
@@ -242,7 +242,7 @@ func TestCorruptDB_Table(t *testing.T) {
h.compactRangeAt(0, "", "")
h.compactRangeAt(1, "", "")
h.closeDB()
h.corrupt(storage.TypeTable, 100, 1)
h.corrupt(storage.TypeTable, -1, 100, 1)
h.openDB()
h.check(99, 99)
@@ -256,7 +256,7 @@ func TestCorruptDB_TableIndex(t *testing.T) {
h.build(10000)
h.compactMem()
h.closeDB()
h.corrupt(storage.TypeTable, -2000, 500)
h.corrupt(storage.TypeTable, -1, -2000, 500)
h.openDB()
h.check(5000, 9999)
@@ -267,9 +267,9 @@ func TestCorruptDB_TableIndex(t *testing.T) {
func TestCorruptDB_MissingManifest(t *testing.T) {
rnd := rand.New(rand.NewSource(0x0badda7a))
h := newDbCorruptHarnessWopt(t, &opt.Options{
BlockCache: cache.NewLRUCache(100),
Strict: opt.StrictJournalChecksum,
WriteBuffer: 1000 * 60,
BlockCacheCapacity: 100,
Strict: opt.StrictJournalChecksum,
WriteBuffer: 1000 * 60,
})
h.build(1000)
@@ -355,7 +355,7 @@ func TestCorruptDB_CorruptedManifest(t *testing.T) {
h.compactMem()
h.compactRange("", "")
h.closeDB()
h.corrupt(storage.TypeManifest, 0, 1000)
h.corrupt(storage.TypeManifest, -1, 0, 1000)
h.openAssert(false)
h.recover()
@@ -370,7 +370,7 @@ func TestCorruptDB_CompactionInputError(t *testing.T) {
h.build(10)
h.compactMem()
h.closeDB()
h.corrupt(storage.TypeTable, 100, 1)
h.corrupt(storage.TypeTable, -1, 100, 1)
h.openDB()
h.check(9, 9)
@@ -387,7 +387,7 @@ func TestCorruptDB_UnrelatedKeys(t *testing.T) {
h.build(10)
h.compactMem()
h.closeDB()
h.corrupt(storage.TypeTable, 100, 1)
h.corrupt(storage.TypeTable, -1, 100, 1)
h.openDB()
h.put(string(tkey(1000)), string(tval(1000, ctValSize)))
@@ -470,3 +470,31 @@ func TestCorruptDB_MissingTableFiles(t *testing.T) {
h.close()
}
func TestCorruptDB_RecoverTable(t *testing.T) {
h := newDbCorruptHarnessWopt(t, &opt.Options{
WriteBuffer: 112 * opt.KiB,
CompactionTableSize: 90 * opt.KiB,
Filter: filter.NewBloomFilter(10),
})
h.build(1000)
h.compactMem()
h.compactRangeAt(0, "", "")
h.compactRangeAt(1, "", "")
seq := h.db.seq
h.closeDB()
h.corrupt(storage.TypeTable, 0, 1000, 1)
h.corrupt(storage.TypeTable, 3, 10000, 1)
// Corrupted filter shouldn't affect recovery.
h.corrupt(storage.TypeTable, 3, 113888, 10)
h.corrupt(storage.TypeTable, -1, 20000, 1)
h.recover()
if h.db.seq != seq {
t.Errorf("invalid seq, want=%d got=%d", seq, h.db.seq)
}
h.check(985, 985)
h.close()
}


@@ -269,7 +269,7 @@ func recoverTable(s *session, o *opt.Options) error {
tableFiles.sort()
var (
mSeq uint64
maxSeq uint64
recoveredKey, goodKey, corruptedKey, corruptedBlock, droppedTable int
// We will drop corrupted table.
@@ -324,7 +324,12 @@ func recoverTable(s *session, o *opt.Options) error {
if err != nil {
return err
}
defer reader.Close()
var closed bool
defer func() {
if !closed {
reader.Close()
}
}()
// Get file size.
size, err := reader.Seek(0, 2)
@@ -392,14 +397,15 @@ func recoverTable(s *session, o *opt.Options) error {
if err != nil {
return err
}
closed = true
reader.Close()
if err := file.Replace(tmp); err != nil {
return err
}
size = newSize
}
if tSeq > mSeq {
mSeq = tSeq
if tSeq > maxSeq {
maxSeq = tSeq
}
recoveredKey += tgoodKey
// Add table to level 0.
@@ -426,11 +432,11 @@ func recoverTable(s *session, o *opt.Options) error {
}
}
s.logf("table@recovery recovered F·%d N·%d Gk·%d Ck·%d Q·%d", len(tableFiles), recoveredKey, goodKey, corruptedKey, mSeq)
s.logf("table@recovery recovered F·%d N·%d Gk·%d Ck·%d Q·%d", len(tableFiles), recoveredKey, goodKey, corruptedKey, maxSeq)
}
// Set sequence number.
rec.setSeqNum(mSeq + 1)
rec.setSeqNum(maxSeq)
// Create new manifest.
if err := s.create(); err != nil {
@@ -625,7 +631,7 @@ func (db *DB) get(key []byte, seq uint64, ro *opt.ReadOptions) (value []byte, er
}
v := db.s.version()
value, cSched, err := v.get(ikey, ro)
value, cSched, err := v.get(ikey, ro, false)
v.release()
if cSched {
// Trigger table compaction.
@@ -634,8 +640,51 @@ func (db *DB) get(key []byte, seq uint64, ro *opt.ReadOptions) (value []byte, er
return
}
func (db *DB) has(key []byte, seq uint64, ro *opt.ReadOptions) (ret bool, err error) {
ikey := newIkey(key, seq, ktSeek)
em, fm := db.getMems()
for _, m := range [...]*memDB{em, fm} {
if m == nil {
continue
}
defer m.decref()
mk, _, me := m.mdb.Find(ikey)
if me == nil {
ukey, _, kt, kerr := parseIkey(mk)
if kerr != nil {
// Shouldn't have happened.
panic(kerr)
}
if db.s.icmp.uCompare(ukey, key) == 0 {
if kt == ktDel {
return false, nil
}
return true, nil
}
} else if me != ErrNotFound {
return false, me
}
}
v := db.s.version()
_, cSched, err := v.get(ikey, ro, true)
v.release()
if cSched {
// Trigger table compaction.
db.compSendTrigger(db.tcompCmdC)
}
if err == nil {
ret = true
} else if err == ErrNotFound {
err = nil
}
return
}
// Get gets the value for the given key. It returns ErrNotFound if the
// DB does not contain the key.
//
// The returned slice is its own copy; it is safe to modify the contents
// of the returned slice.
@@ -651,6 +700,20 @@ func (db *DB) Get(key []byte, ro *opt.ReadOptions) (value []byte, err error) {
return db.get(key, se.seq, ro)
}
// Has returns true if the DB contains the given key.
//
// It is safe to modify the contents of the argument after Has returns.
func (db *DB) Has(key []byte, ro *opt.ReadOptions) (ret bool, err error) {
err = db.ok()
if err != nil {
return
}
se := db.acquireSnapshot()
defer db.releaseSnapshot(se)
return db.has(key, se.seq, ro)
}
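// A hedged usage sketch, not part of this file (the path is a placeholder):
// Has answers existence without copying the value out of the table, which is
// what the noValue plumbing above buys.
func ExampleDB_Has() {
	db, err := leveldb.OpenFile("/tmp/has-demo", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := db.Put([]byte("k"), []byte("v"), nil); err != nil {
		log.Fatal(err)
	}
	ok, err := db.Has([]byte("k"), nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(ok)
	// Output: true
}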
// NewIterator returns an iterator for the latest snapshot of the
// underlying DB.
// The returned iterator is not goroutine-safe, but it is safe to use
@@ -760,8 +823,8 @@ func (db *DB) GetProperty(name string) (value string, err error) {
case p == "blockpool":
value = fmt.Sprintf("%v", db.s.tops.bpool)
case p == "cachedblock":
if bc := db.s.o.GetBlockCache(); bc != nil {
value = fmt.Sprintf("%d", bc.Size())
if db.s.tops.bcache != nil {
value = fmt.Sprintf("%d", db.s.tops.bcache.Size())
} else {
value = "<nil>"
}


@@ -8,6 +8,7 @@ package leveldb
import (
"container/list"
"fmt"
"runtime"
"sync"
"sync/atomic"
@@ -89,8 +90,12 @@ func (db *DB) newSnapshot() *Snapshot {
return snap
}
func (snap *Snapshot) String() string {
return fmt.Sprintf("leveldb.Snapshot{%d}", snap.elem.seq)
}
// Get gets the value for the given key. It returns ErrNotFound if
// the DB does not contain the key.
//
// The caller should not modify the contents of the returned slice, but
// it is safe to modify the contents of the argument after Get returns.
@@ -108,6 +113,23 @@ func (snap *Snapshot) Get(key []byte, ro *opt.ReadOptions) (value []byte, err er
return snap.db.get(key, snap.elem.seq, ro)
}
// Has returns true if the DB contains the given key.
//
// It is safe to modify the contents of the argument after Has returns.
func (snap *Snapshot) Has(key []byte, ro *opt.ReadOptions) (ret bool, err error) {
err = snap.db.ok()
if err != nil {
return
}
snap.mu.RLock()
defer snap.mu.RUnlock()
if snap.released {
err = ErrSnapshotReleased
return
}
return snap.db.has(key, snap.elem.seq, ro)
}
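// A matching sketch for the snapshot variant, under the same assumptions:
// existence is answered as of the snapshot's sequence number, so writes made
// after GetSnapshot stay invisible to it.
func ExampleSnapshot_Has() {
	db, err := leveldb.OpenFile("/tmp/snap-demo", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	snap, err := db.GetSnapshot()
	if err != nil {
		log.Fatal(err)
	}
	defer snap.Release()

	db.Put([]byte("later"), []byte("v"), nil) // written after the snapshot
	ok, _ := snap.Has([]byte("later"), nil)
	fmt.Println(ok)
	// Output: false
}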
// NewIterator returns an iterator for the snapshot of the underlying DB.
// The returned iterator is not goroutine-safe, but it is safe to use
// multiple iterators concurrently, with each in a dedicated goroutine.


@@ -530,7 +530,7 @@ func Test_FieldsAligned(t *testing.T) {
testAligned(t, "session.stSeqNum", unsafe.Offsetof(p2.stSeqNum))
}
func TestDb_Locking(t *testing.T) {
func TestDB_Locking(t *testing.T) {
h := newDbHarness(t)
defer h.stor.Close()
h.openAssert(false)
@@ -538,7 +538,7 @@ func TestDb_Locking(t *testing.T) {
h.openAssert(true)
}
func TestDb_Empty(t *testing.T) {
func TestDB_Empty(t *testing.T) {
trun(t, func(h *dbHarness) {
h.get("foo", false)
@@ -547,7 +547,7 @@ func TestDb_Empty(t *testing.T) {
})
}
func TestDb_ReadWrite(t *testing.T) {
func TestDB_ReadWrite(t *testing.T) {
trun(t, func(h *dbHarness) {
h.put("foo", "v1")
h.getVal("foo", "v1")
@@ -562,7 +562,7 @@ func TestDb_ReadWrite(t *testing.T) {
})
}
func TestDb_PutDeleteGet(t *testing.T) {
func TestDB_PutDeleteGet(t *testing.T) {
trun(t, func(h *dbHarness) {
h.put("foo", "v1")
h.getVal("foo", "v1")
@@ -576,7 +576,7 @@ func TestDb_PutDeleteGet(t *testing.T) {
})
}
func TestDb_EmptyBatch(t *testing.T) {
func TestDB_EmptyBatch(t *testing.T) {
h := newDbHarness(t)
defer h.close()
@@ -588,7 +588,7 @@ func TestDb_EmptyBatch(t *testing.T) {
h.get("foo", false)
}
func TestDb_GetFromFrozen(t *testing.T) {
func TestDB_GetFromFrozen(t *testing.T) {
h := newDbHarnessWopt(t, &opt.Options{WriteBuffer: 100100})
defer h.close()
@@ -614,7 +614,7 @@ func TestDb_GetFromFrozen(t *testing.T) {
h.get("k2", true)
}
func TestDb_GetFromTable(t *testing.T) {
func TestDB_GetFromTable(t *testing.T) {
trun(t, func(h *dbHarness) {
h.put("foo", "v1")
h.compactMem()
@@ -622,7 +622,7 @@ func TestDb_GetFromTable(t *testing.T) {
})
}
func TestDb_GetSnapshot(t *testing.T) {
func TestDB_GetSnapshot(t *testing.T) {
trun(t, func(h *dbHarness) {
bar := strings.Repeat("b", 200)
h.put("foo", "v1")
@@ -656,7 +656,7 @@ func TestDb_GetSnapshot(t *testing.T) {
})
}
func TestDb_GetLevel0Ordering(t *testing.T) {
func TestDB_GetLevel0Ordering(t *testing.T) {
trun(t, func(h *dbHarness) {
for i := 0; i < 4; i++ {
h.put("bar", fmt.Sprintf("b%d", i))
@@ -679,7 +679,7 @@ func TestDb_GetLevel0Ordering(t *testing.T) {
})
}
func TestDb_GetOrderedByLevels(t *testing.T) {
func TestDB_GetOrderedByLevels(t *testing.T) {
trun(t, func(h *dbHarness) {
h.put("foo", "v1")
h.compactMem()
@@ -691,7 +691,7 @@ func TestDb_GetOrderedByLevels(t *testing.T) {
})
}
func TestDb_GetPicksCorrectFile(t *testing.T) {
func TestDB_GetPicksCorrectFile(t *testing.T) {
trun(t, func(h *dbHarness) {
// Arrange to have multiple files in a non-level-0 level.
h.put("a", "va")
@@ -715,7 +715,7 @@ func TestDb_GetPicksCorrectFile(t *testing.T) {
})
}
func TestDb_GetEncountersEmptyLevel(t *testing.T) {
func TestDB_GetEncountersEmptyLevel(t *testing.T) {
trun(t, func(h *dbHarness) {
// Arrange for the following to happen:
// * sstable A in level 0
@@ -770,7 +770,7 @@ func TestDb_GetEncountersEmptyLevel(t *testing.T) {
})
}
func TestDb_IterMultiWithDelete(t *testing.T) {
func TestDB_IterMultiWithDelete(t *testing.T) {
trun(t, func(h *dbHarness) {
h.put("a", "va")
h.put("b", "vb")
@@ -796,7 +796,7 @@ func TestDb_IterMultiWithDelete(t *testing.T) {
})
}
func TestDb_IteratorPinsRef(t *testing.T) {
func TestDB_IteratorPinsRef(t *testing.T) {
h := newDbHarness(t)
defer h.close()
@@ -820,7 +820,7 @@ func TestDb_IteratorPinsRef(t *testing.T) {
iter.Release()
}
func TestDb_Recover(t *testing.T) {
func TestDB_Recover(t *testing.T) {
trun(t, func(h *dbHarness) {
h.put("foo", "v1")
h.put("baz", "v5")
@@ -842,7 +842,7 @@ func TestDb_Recover(t *testing.T) {
})
}
func TestDb_RecoverWithEmptyJournal(t *testing.T) {
func TestDB_RecoverWithEmptyJournal(t *testing.T) {
trun(t, func(h *dbHarness) {
h.put("foo", "v1")
h.put("foo", "v2")
@@ -856,7 +856,7 @@ func TestDb_RecoverWithEmptyJournal(t *testing.T) {
})
}
func TestDb_RecoverDuringMemtableCompaction(t *testing.T) {
func TestDB_RecoverDuringMemtableCompaction(t *testing.T) {
truno(t, &opt.Options{WriteBuffer: 1000000}, func(h *dbHarness) {
h.stor.DelaySync(storage.TypeTable)
@@ -872,7 +872,7 @@ func TestDb_RecoverDuringMemtableCompaction(t *testing.T) {
})
}
func TestDb_MinorCompactionsHappen(t *testing.T) {
func TestDB_MinorCompactionsHappen(t *testing.T) {
h := newDbHarnessWopt(t, &opt.Options{WriteBuffer: 10000})
defer h.close()
@@ -896,7 +896,7 @@ func TestDb_MinorCompactionsHappen(t *testing.T) {
}
}
func TestDb_RecoverWithLargeJournal(t *testing.T) {
func TestDB_RecoverWithLargeJournal(t *testing.T) {
h := newDbHarness(t)
defer h.close()
@@ -921,7 +921,7 @@ func TestDb_RecoverWithLargeJournal(t *testing.T) {
v.release()
}
func TestDb_CompactionsGenerateMultipleFiles(t *testing.T) {
func TestDB_CompactionsGenerateMultipleFiles(t *testing.T) {
h := newDbHarnessWopt(t, &opt.Options{
WriteBuffer: 10000000,
Compression: opt.NoCompression,
@@ -959,7 +959,7 @@ func TestDb_CompactionsGenerateMultipleFiles(t *testing.T) {
}
}
func TestDb_RepeatedWritesToSameKey(t *testing.T) {
func TestDB_RepeatedWritesToSameKey(t *testing.T) {
h := newDbHarnessWopt(t, &opt.Options{WriteBuffer: 100000})
defer h.close()
@@ -975,7 +975,7 @@ func TestDb_RepeatedWritesToSameKey(t *testing.T) {
}
}
func TestDb_RepeatedWritesToSameKeyAfterReopen(t *testing.T) {
func TestDB_RepeatedWritesToSameKeyAfterReopen(t *testing.T) {
h := newDbHarnessWopt(t, &opt.Options{WriteBuffer: 100000})
defer h.close()
@@ -993,7 +993,7 @@ func TestDb_RepeatedWritesToSameKeyAfterReopen(t *testing.T) {
}
}
func TestDb_SparseMerge(t *testing.T) {
func TestDB_SparseMerge(t *testing.T) {
h := newDbHarnessWopt(t, &opt.Options{Compression: opt.NoCompression})
defer h.close()
@@ -1031,7 +1031,7 @@ func TestDb_SparseMerge(t *testing.T) {
h.maxNextLevelOverlappingBytes(20 * 1048576)
}
func TestDb_SizeOf(t *testing.T) {
func TestDB_SizeOf(t *testing.T) {
h := newDbHarnessWopt(t, &opt.Options{
Compression: opt.NoCompression,
WriteBuffer: 10000000,
@@ -1081,7 +1081,7 @@ func TestDb_SizeOf(t *testing.T) {
}
}
func TestDb_SizeOf_MixOfSmallAndLarge(t *testing.T) {
func TestDB_SizeOf_MixOfSmallAndLarge(t *testing.T) {
h := newDbHarnessWopt(t, &opt.Options{Compression: opt.NoCompression})
defer h.close()
@@ -1119,7 +1119,7 @@ func TestDb_SizeOf_MixOfSmallAndLarge(t *testing.T) {
}
}
func TestDb_Snapshot(t *testing.T) {
func TestDB_Snapshot(t *testing.T) {
trun(t, func(h *dbHarness) {
h.put("foo", "v1")
s1 := h.getSnapshot()
@@ -1148,7 +1148,7 @@ func TestDb_Snapshot(t *testing.T) {
})
}
func TestDb_SnapshotList(t *testing.T) {
func TestDB_SnapshotList(t *testing.T) {
db := &DB{snapsList: list.New()}
e0a := db.acquireSnapshot()
e0b := db.acquireSnapshot()
@@ -1186,7 +1186,7 @@ func TestDb_SnapshotList(t *testing.T) {
}
}
func TestDb_HiddenValuesAreRemoved(t *testing.T) {
func TestDB_HiddenValuesAreRemoved(t *testing.T) {
trun(t, func(h *dbHarness) {
s := h.db.s
@@ -1229,7 +1229,7 @@ func TestDb_HiddenValuesAreRemoved(t *testing.T) {
})
}
func TestDb_DeletionMarkers2(t *testing.T) {
func TestDB_DeletionMarkers2(t *testing.T) {
h := newDbHarness(t)
defer h.close()
s := h.db.s
@@ -1270,8 +1270,8 @@ func TestDb_DeletionMarkers2(t *testing.T) {
h.allEntriesFor("foo", "[ ]")
}
func TestDb_CompactionTableOpenError(t *testing.T) {
h := newDbHarnessWopt(t, &opt.Options{CachedOpenFiles: -1})
func TestDB_CompactionTableOpenError(t *testing.T) {
h := newDbHarnessWopt(t, &opt.Options{OpenFilesCacheCapacity: -1})
defer h.close()
im := 10
@@ -1305,7 +1305,7 @@ func TestDb_CompactionTableOpenError(t *testing.T) {
}
}
func TestDb_OverlapInLevel0(t *testing.T) {
func TestDB_OverlapInLevel0(t *testing.T) {
trun(t, func(h *dbHarness) {
if h.o.GetMaxMemCompationLevel() != 2 {
t.Fatal("fix test to reflect the config")
@@ -1348,7 +1348,7 @@ func TestDb_OverlapInLevel0(t *testing.T) {
})
}
func TestDb_L0_CompactionBug_Issue44_a(t *testing.T) {
func TestDB_L0_CompactionBug_Issue44_a(t *testing.T) {
h := newDbHarness(t)
defer h.close()
@@ -1368,7 +1368,7 @@ func TestDb_L0_CompactionBug_Issue44_a(t *testing.T) {
h.getKeyVal("(a->v)")
}
func TestDb_L0_CompactionBug_Issue44_b(t *testing.T) {
func TestDB_L0_CompactionBug_Issue44_b(t *testing.T) {
h := newDbHarness(t)
defer h.close()
@@ -1397,7 +1397,7 @@ func TestDb_L0_CompactionBug_Issue44_b(t *testing.T) {
h.getKeyVal("(->)(c->cv)")
}
func TestDb_SingleEntryMemCompaction(t *testing.T) {
func TestDB_SingleEntryMemCompaction(t *testing.T) {
trun(t, func(h *dbHarness) {
for i := 0; i < 10; i++ {
h.put("big", strings.Repeat("v", opt.DefaultWriteBuffer))
@@ -1414,7 +1414,7 @@ func TestDb_SingleEntryMemCompaction(t *testing.T) {
})
}
func TestDb_ManifestWriteError(t *testing.T) {
func TestDB_ManifestWriteError(t *testing.T) {
for i := 0; i < 2; i++ {
func() {
h := newDbHarness(t)
@@ -1464,7 +1464,7 @@ func assertErr(t *testing.T, err error, wanterr bool) {
}
}
func TestDb_ClosedIsClosed(t *testing.T) {
func TestDB_ClosedIsClosed(t *testing.T) {
h := newDbHarness(t)
db := h.db
@@ -1559,7 +1559,7 @@ func (p numberComparer) Compare(a, b []byte) int {
func (numberComparer) Separator(dst, a, b []byte) []byte { return nil }
func (numberComparer) Successor(dst, b []byte) []byte { return nil }
func TestDb_CustomComparer(t *testing.T) {
func TestDB_CustomComparer(t *testing.T) {
h := newDbHarnessWopt(t, &opt.Options{
Comparer: numberComparer{},
WriteBuffer: 1000,
@@ -1589,7 +1589,7 @@ func TestDb_CustomComparer(t *testing.T) {
}
}
func TestDb_ManualCompaction(t *testing.T) {
func TestDB_ManualCompaction(t *testing.T) {
h := newDbHarness(t)
defer h.close()
@@ -1627,10 +1627,10 @@ func TestDb_ManualCompaction(t *testing.T) {
h.tablesPerLevel("0,0,1")
}
func TestDb_BloomFilter(t *testing.T) {
func TestDB_BloomFilter(t *testing.T) {
h := newDbHarnessWopt(t, &opt.Options{
BlockCache: opt.NoCache,
Filter: filter.NewBloomFilter(10),
DisableBlockCache: true,
Filter: filter.NewBloomFilter(10),
})
defer h.close()
@@ -1680,7 +1680,7 @@ func TestDb_BloomFilter(t *testing.T) {
h.stor.ReleaseSync(storage.TypeTable)
}
func TestDb_Concurrent(t *testing.T) {
func TestDB_Concurrent(t *testing.T) {
const n, secs, maxkey = 4, 2, 1000
runtime.GOMAXPROCS(n)
@@ -1745,7 +1745,7 @@ func TestDb_Concurrent(t *testing.T) {
runtime.GOMAXPROCS(1)
}
func TestDb_Concurrent2(t *testing.T) {
func TestDB_Concurrent2(t *testing.T) {
const n, n2 = 4, 4000
runtime.GOMAXPROCS(n*2 + 2)
@@ -1816,7 +1816,7 @@ func TestDb_Concurrent2(t *testing.T) {
runtime.GOMAXPROCS(1)
}
func TestDb_CreateReopenDbOnFile(t *testing.T) {
func TestDB_CreateReopenDbOnFile(t *testing.T) {
dbpath := filepath.Join(os.TempDir(), fmt.Sprintf("goleveldbtestCreateReopenDbOnFile-%d", os.Getuid()))
if err := os.RemoveAll(dbpath); err != nil {
t.Fatal("cannot remove old db: ", err)
@@ -1844,7 +1844,7 @@ func TestDb_CreateReopenDbOnFile(t *testing.T) {
}
}
func TestDb_CreateReopenDbOnFile2(t *testing.T) {
func TestDB_CreateReopenDbOnFile2(t *testing.T) {
dbpath := filepath.Join(os.TempDir(), fmt.Sprintf("goleveldbtestCreateReopenDbOnFile2-%d", os.Getuid()))
if err := os.RemoveAll(dbpath); err != nil {
t.Fatal("cannot remove old db: ", err)
@@ -1865,7 +1865,7 @@ func TestDb_CreateReopenDbOnFile2(t *testing.T) {
}
}
func TestDb_DeletionMarkersOnMemdb(t *testing.T) {
func TestDB_DeletionMarkersOnMemdb(t *testing.T) {
h := newDbHarness(t)
defer h.close()
@@ -1876,7 +1876,7 @@ func TestDb_DeletionMarkersOnMemdb(t *testing.T) {
h.getKeyVal("")
}
func TestDb_LeveldbIssue178(t *testing.T) {
func TestDB_LeveldbIssue178(t *testing.T) {
nKeys := (opt.DefaultCompactionTableSize / 30) * 5
key1 := func(i int) string {
return fmt.Sprintf("my_key_%d", i)
@@ -1919,7 +1919,7 @@ func TestDb_LeveldbIssue178(t *testing.T) {
h.assertNumKeys(nKeys)
}
func TestDb_LeveldbIssue200(t *testing.T) {
func TestDB_LeveldbIssue200(t *testing.T) {
h := newDbHarness(t)
defer h.close()
@@ -1946,7 +1946,7 @@ func TestDb_LeveldbIssue200(t *testing.T) {
assertBytes(t, []byte("5"), iter.Key())
}
func TestDb_GoleveldbIssue74(t *testing.T) {
func TestDB_GoleveldbIssue74(t *testing.T) {
h := newDbHarnessWopt(t, &opt.Options{
WriteBuffer: 1 * opt.MiB,
})
@@ -2044,7 +2044,7 @@ func TestDb_GoleveldbIssue74(t *testing.T) {
wg.Wait()
}
func TestDb_GetProperties(t *testing.T) {
func TestDB_GetProperties(t *testing.T) {
h := newDbHarness(t)
defer h.close()
@@ -2064,10 +2064,10 @@ func TestDb_GetProperties(t *testing.T) {
}
}
func TestDb_GoleveldbIssue72and83(t *testing.T) {
func TestDB_GoleveldbIssue72and83(t *testing.T) {
h := newDbHarnessWopt(t, &opt.Options{
WriteBuffer: 1 * opt.MiB,
CachedOpenFiles: 3,
WriteBuffer: 1 * opt.MiB,
OpenFilesCacheCapacity: 3,
})
defer h.close()
@@ -2077,12 +2077,13 @@ func TestDb_GoleveldbIssue72and83(t *testing.T) {
randomData := func(prefix byte, i int) []byte {
data := make([]byte, 1+4+32+64+32)
_, err := crand.Reader.Read(data[1 : len(data)-4])
_, err := crand.Reader.Read(data[1 : len(data)-8])
if err != nil {
panic(err)
}
data[0] = prefix
binary.LittleEndian.PutUint32(data[len(data)-4:], uint32(i))
binary.LittleEndian.PutUint32(data[len(data)-8:], uint32(i))
binary.LittleEndian.PutUint32(data[len(data)-4:], util.NewCRC(data[:len(data)-4]).Value())
return data
}
@@ -2131,12 +2132,22 @@ func TestDb_GoleveldbIssue72and83(t *testing.T) {
continue
}
iter := snap.NewIterator(util.BytesPrefix([]byte{1}), nil)
writei := int(snap.elem.seq/(n*2) - 1)
writei := int(seq/(n*2) - 1)
var k int
for ; iter.Next(); k++ {
k1 := iter.Key()
k2 := iter.Value()
kwritei := int(binary.LittleEndian.Uint32(k2[len(k2)-4:]))
k1checksum0 := binary.LittleEndian.Uint32(k1[len(k1)-4:])
k1checksum1 := util.NewCRC(k1[:len(k1)-4]).Value()
if k1checksum0 != k1checksum1 {
t.Fatalf("READER0 #%d.%d W#%d invalid K1 checksum: %#x != %#x", i, k, k1checksum0, k1checksum0)
}
k2checksum0 := binary.LittleEndian.Uint32(k2[len(k2)-4:])
k2checksum1 := util.NewCRC(k2[:len(k2)-4]).Value()
if k2checksum0 != k2checksum1 {
t.Fatalf("READER0 #%d.%d W#%d invalid K2 checksum: %#x != %#x", i, k, k2checksum0, k2checksum1)
}
kwritei := int(binary.LittleEndian.Uint32(k2[len(k2)-8:]))
if writei != kwritei {
t.Fatalf("READER0 #%d.%d W#%d invalid write iteration num: %d", i, k, writei, kwritei)
}
@@ -2186,10 +2197,10 @@ func TestDb_GoleveldbIssue72and83(t *testing.T) {
wg.Wait()
}
func TestDb_TransientError(t *testing.T) {
func TestDB_TransientError(t *testing.T) {
h := newDbHarnessWopt(t, &opt.Options{
WriteBuffer: 128 * opt.KiB,
CachedOpenFiles: 3,
OpenFilesCacheCapacity: 3,
DisableCompactionBackoff: true,
})
defer h.close()
@@ -2299,7 +2310,7 @@ func TestDb_TransientError(t *testing.T) {
wg.Wait()
}
func TestDb_UkeyShouldntHopAcrossTable(t *testing.T) {
func TestDB_UkeyShouldntHopAcrossTable(t *testing.T) {
h := newDbHarnessWopt(t, &opt.Options{
WriteBuffer: 112 * opt.KiB,
CompactionTableSize: 90 * opt.KiB,
@@ -2388,7 +2399,7 @@ func TestDb_UkeyShouldntHopAcrossTable(t *testing.T) {
wg.Wait()
}
func TestDb_TableCompactionBuilder(t *testing.T) {
func TestDB_TableCompactionBuilder(t *testing.T) {
stor := newTestStorage(t)
defer stor.Close()
@@ -2399,7 +2410,7 @@ func TestDb_TableCompactionBuilder(t *testing.T) {
CompactionTableSize: 43 * opt.KiB,
CompactionExpandLimitFactor: 1,
CompactionGPOverlapsFactor: 1,
BlockCache: opt.NoCache,
DisableBlockCache: true,
}
s, err := newSession(stor, o)
if err != nil {


@@ -112,9 +112,9 @@ func (db *DB) flush(n int) (mem *memDB, nn int, err error) {
db.writeDelay += time.Since(start)
db.writeDelayN++
} else if db.writeDelayN > 0 {
db.logf("db@write was delayed N·%d T·%v", db.writeDelayN, db.writeDelay)
db.writeDelay = 0
db.writeDelayN = 0
db.logf("db@write was delayed N·%d T·%v", db.writeDelayN, db.writeDelay)
}
return
}


@@ -17,13 +17,14 @@ import (
var _ = testutil.Defer(func() {
Describe("Leveldb external", func() {
o := &opt.Options{
BlockCache: opt.NoCache,
BlockRestartInterval: 5,
BlockSize: 50,
Compression: opt.NoCompression,
CachedOpenFiles: -1,
Strict: opt.StrictAll,
WriteBuffer: 1000,
DisableBlockCache: true,
BlockRestartInterval: 5,
BlockSize: 80,
Compression: opt.NoCompression,
OpenFilesCacheCapacity: -1,
Strict: opt.StrictAll,
WriteBuffer: 1000,
CompactionTableSize: 2000,
}
Describe("write test", func() {


@@ -3,15 +3,9 @@ package iterator_test
import (
"testing"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
"github.com/syndtr/goleveldb/leveldb/testutil"
)
func TestIterator(t *testing.T) {
testutil.RunDefer()
RegisterFailHandler(Fail)
RunSpecs(t, "Iterator Suite")
testutil.RunSuite(t, "Iterator Suite")
}


@@ -106,7 +106,7 @@ func (ik iKey) assert() {
panic("leveldb: nil iKey")
}
if len(ik) < 8 {
panic(fmt.Sprintf("leveldb: iKey %q, len=%d: invalid length", ik, len(ik)))
panic(fmt.Sprintf("leveldb: iKey %q, len=%d: invalid length", []byte(ik), len(ik)))
}
}
@@ -124,7 +124,7 @@ func (ik iKey) parseNum() (seq uint64, kt kType) {
num := ik.num()
seq, kt = uint64(num>>8), kType(num&0xff)
if kt > ktVal {
panic(fmt.Sprintf("leveldb: iKey %q, len=%d: invalid type %#x", ik, len(ik), kt))
panic(fmt.Sprintf("leveldb: iKey %q, len=%d: invalid type %#x", []byte(ik), len(ik), kt))
}
return
}


@@ -3,18 +3,9 @@ package leveldb
import (
"testing"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
"github.com/syndtr/goleveldb/leveldb/testutil"
)
func TestLeveldb(t *testing.T) {
testutil.RunDefer()
RegisterFailHandler(Fail)
RunSpecs(t, "Leveldb Suite")
RegisterTestingT(t)
testutil.RunDefer("teardown")
func TestLevelDB(t *testing.T) {
testutil.RunSuite(t, "LevelDB Suite")
}


@@ -3,15 +3,9 @@ package memdb
import (
"testing"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
"github.com/syndtr/goleveldb/leveldb/testutil"
)
func TestMemdb(t *testing.T) {
testutil.RunDefer()
RegisterFailHandler(Fail)
RunSpecs(t, "Memdb Suite")
func TestMemDB(t *testing.T) {
testutil.RunSuite(t, "MemDB Suite")
}


@@ -20,8 +20,9 @@ const (
GiB = MiB * 1024
)
const (
DefaultBlockCacheSize = 8 * MiB
var (
DefaultBlockCacher = LRUCacher
DefaultBlockCacheCapacity = 8 * MiB
DefaultBlockRestartInterval = 16
DefaultBlockSize = 4 * KiB
DefaultCompactionExpandLimitFactor = 25
@@ -33,7 +34,8 @@ const (
DefaultCompactionTotalSize = 10 * MiB
DefaultCompactionTotalSizeMultiplier = 10.0
DefaultCompressionType = SnappyCompression
DefaultCachedOpenFiles = 500
DefaultOpenFilesCacher = LRUCacher
DefaultOpenFilesCacheCapacity = 500
DefaultMaxMemCompationLevel = 2
DefaultNumLevel = 7
DefaultWriteBuffer = 4 * MiB
@@ -41,22 +43,33 @@ const (
DefaultWriteL0SlowdownTrigger = 8
)
type noCache struct{}
// Cacher is a caching algorithm.
type Cacher interface {
New(capacity int) cache.Cacher
}
func (noCache) SetCapacity(capacity int) {}
func (noCache) Capacity() int { return 0 }
func (noCache) Used() int { return 0 }
func (noCache) Size() int { return 0 }
func (noCache) NumObjects() int { return 0 }
func (noCache) GetNamespace(id uint64) cache.Namespace { return nil }
func (noCache) PurgeNamespace(id uint64, fin cache.PurgeFin) {}
func (noCache) ZapNamespace(id uint64) {}
func (noCache) Purge(fin cache.PurgeFin) {}
func (noCache) Zap() {}
type CacherFunc struct {
NewFunc func(capacity int) cache.Cacher
}
var NoCache cache.Cache = noCache{}
func (f *CacherFunc) New(capacity int) cache.Cacher {
if f.NewFunc != nil {
return f.NewFunc(capacity)
}
return nil
}
// Compression is the per-block compression algorithm to use.
func noCacher(int) cache.Cacher { return nil }
var (
// LRUCacher is the LRU-cache algorithm.
LRUCacher = &CacherFunc{cache.NewLRU}
// NoCacher is the value to disable caching algorithm.
NoCacher = &CacherFunc{}
)
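// A hedged sketch of supplying a custom algorithm through CacherFunc;
// doubleLRU is an illustrative name, and the policy simply wraps the
// package's own cache.NewLRU. It would be used as
// &Options{BlockCacher: doubleLRU, BlockCacheCapacity: 16 * MiB}.
var doubleLRU = &CacherFunc{
	NewFunc: func(capacity int) cache.Cacher {
		return cache.NewLRU(capacity * 2) // toy policy: double the capacity
	},
}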
// Compression is the 'sorted table' block compression algorithm to use.
type Compression uint
func (c Compression) String() string {
@@ -114,7 +127,7 @@ const (
StrictOverride
// StrictAll enables all strict flags.
StrictAll = StrictManifest | StrictJournalChecksum | StrictJournal | StrictBlockChecksum | StrictCompaction | StrictReader
StrictAll = StrictManifest | StrictJournalChecksum | StrictJournal | StrictBlockChecksum | StrictCompaction | StrictReader | StrictRecovery
// DefaultStrict is the default strict flags. Specifying any strict flags
// will override the default strict flags as a whole (i.e. not OR'ed).
@@ -133,11 +146,17 @@ type Options struct {
// The default value is nil
AltFilters []filter.Filter
// BlockCache provides per-block caching for LevelDB. Specify NoCache to
// disable block caching.
// BlockCacher provides cache algorithm for LevelDB 'sorted table' block caching.
// Specify NoCacher to disable caching algorithm.
//
// By default LevelDB will create LRU-cache with capacity of 8MiB.
BlockCache cache.Cache
// The default value is LRUCacher.
BlockCacher Cacher
// BlockCacheCapacity defines the capacity of the 'sorted table' block caching.
// Use -1 for zero; this has the same effect as specifying NoCacher for BlockCacher.
//
// The default value is 8MiB.
BlockCacheCapacity int
// BlockRestartInterval is the number of keys between restart points for
// delta encoding of keys.
@@ -151,13 +170,6 @@ type Options struct {
// The default value is 4KiB.
BlockSize int
// CachedOpenFiles defines the number of open files to keep around when not
// in use; the count includes files still in use.
// Set this to a negative value to disable caching.
//
// The default value is 500.
CachedOpenFiles int
// CompactionExpandLimitFactor limits compaction size after expanded.
// This will be multiplied by table size limit at compaction target level.
//
@@ -232,11 +244,17 @@ type Options struct {
// The default value uses the same ordering as bytes.Compare.
Comparer comparer.Comparer
// Compression defines the per-block compression to use.
// Compression defines the 'sorted table' block compression to use.
//
// The default value (DefaultCompression) uses snappy compression.
Compression Compression
// DisableBlockCache disables the use of cache.Cache functionality on
// 'sorted table' blocks.
//
// The default value is false.
DisableBlockCache bool
// DisableCompactionBackoff disables compaction retry backoff.
//
// The default value is false.
@@ -283,6 +301,18 @@ type Options struct {
// The default is 7.
NumLevel int
// OpenFilesCacher provides cache algorithm for open files caching.
// Specify NoCacher to disable caching algorithm.
//
// The default value is LRUCacher.
OpenFilesCacher Cacher
// OpenFilesCacheCapacity defines the capacity of the open files caching.
// Use -1 for zero; this has the same effect as specifying NoCacher for OpenFilesCacher.
//
// The default value is 500.
OpenFilesCacheCapacity int
// Strict defines the DB strict level.
Strict Strict
@@ -315,11 +345,22 @@ func (o *Options) GetAltFilters() []filter.Filter {
return o.AltFilters
}
func (o *Options) GetBlockCache() cache.Cache {
if o == nil {
func (o *Options) GetBlockCacher() Cacher {
if o == nil || o.BlockCacher == nil {
return DefaultBlockCacher
} else if o.BlockCacher == NoCacher {
return nil
}
return o.BlockCache
return o.BlockCacher
}
func (o *Options) GetBlockCacheCapacity() int {
if o == nil || o.BlockCacheCapacity <= 0 {
return DefaultBlockCacheCapacity
} else if o.BlockCacheCapacity == -1 {
return 0
}
return o.BlockCacheCapacity
}
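// A hedged illustration of the resolution rules above: the zero value falls
// back to the default, while -1 (like NoCacher on BlockCacher) disables the
// block cache outright.
//
//	(&Options{}).GetBlockCacheCapacity()                            // 8 MiB default
//	(&Options{BlockCacheCapacity: -1}).GetBlockCacheCapacity()      // 0: disabled
//	(&Options{BlockCacheCapacity: 4 * MiB}).GetBlockCacheCapacity() // 4 MiB, as given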
func (o *Options) GetBlockRestartInterval() int {
@@ -336,15 +377,6 @@ func (o *Options) GetBlockSize() int {
return o.BlockSize
}
func (o *Options) GetCachedOpenFiles() int {
if o == nil || o.CachedOpenFiles == 0 {
return DefaultCachedOpenFiles
} else if o.CachedOpenFiles < 0 {
return 0
}
return o.CachedOpenFiles
}
func (o *Options) GetCompactionExpandLimit(level int) int {
factor := DefaultCompactionExpandLimitFactor
if o != nil && o.CompactionExpandLimitFactor > 0 {
@@ -482,6 +514,25 @@ func (o *Options) GetNumLevel() int {
return o.NumLevel
}
func (o *Options) GetOpenFilesCacher() Cacher {
if o == nil || o.OpenFilesCacher == nil {
return DefaultOpenFilesCacher
}
if o.OpenFilesCacher == NoCacher {
return nil
}
return o.OpenFilesCacher
}
func (o *Options) GetOpenFilesCacheCapacity() int {
if o == nil || o.OpenFilesCacheCapacity <= 0 {
return DefaultOpenFilesCacheCapacity
} else if o.OpenFilesCacheCapacity == -1 {
return 0
}
return o.OpenFilesCacheCapacity
}
func (o *Options) GetStrict(strict Strict) bool {
if o == nil || o.Strict == 0 {
return DefaultStrict&strict != 0


@@ -7,7 +7,6 @@
package leveldb
import (
"github.com/syndtr/goleveldb/leveldb/cache"
"github.com/syndtr/goleveldb/leveldb/filter"
"github.com/syndtr/goleveldb/leveldb/opt"
)
@@ -17,6 +16,9 @@ func dupOptions(o *opt.Options) *opt.Options {
if o != nil {
*newo = *o
}
if newo.Strict == 0 {
newo.Strict = opt.DefaultStrict
}
return newo
}
@@ -29,13 +31,6 @@ func (s *session) setOptions(o *opt.Options) {
no.AltFilters[i] = &iFilter{filter}
}
}
// Block cache.
switch o.GetBlockCache() {
case nil:
no.BlockCache = cache.NewLRUCache(opt.DefaultBlockCacheSize)
case opt.NoCache:
no.BlockCache = nil
}
// Comparer.
s.icmp = &iComparer{o.GetComparer()}
no.Comparer = s.icmp


@@ -73,7 +73,7 @@ func newSession(stor storage.Storage, o *opt.Options) (s *session, err error) {
stCompPtrs: make([]iKey, o.GetNumLevel()),
}
s.setOptions(o)
s.tops = newTableOps(s, s.o.GetCachedOpenFiles())
s.tops = newTableOps(s)
s.setVersion(newVersion(s))
s.log("log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed")
return
@@ -82,9 +82,6 @@ func newSession(stor storage.Storage, o *opt.Options) (s *session, err error) {
// Close session.
func (s *session) close() {
s.tops.close()
if bc := s.o.GetBlockCache(); bc != nil {
bc.Purge(nil)
}
if s.manifest != nil {
s.manifest.Close()
}


@@ -221,7 +221,7 @@ func (fs *fileStorage) GetManifest() (f File, err error) {
fs.log(fmt.Sprintf("skipping %s: invalid file name", fn))
continue
}
if _, e1 := strconv.ParseUint(fn[7:], 10, 0); e1 != nil {
if _, e1 := strconv.ParseUint(fn[8:], 10, 0); e1 != nil {
fs.log(fmt.Sprintf("skipping %s: invalid file num: %v", fn, e1))
continue
}


@@ -233,6 +233,23 @@ func (tf tsFile) Create() (w storage.Writer, err error) {
return
}
func (tf tsFile) Replace(newfile storage.File) (err error) {
ts := tf.ts
ts.mu.Lock()
defer ts.mu.Unlock()
err = tf.checkOpen("replace")
if err != nil {
return
}
err = tf.File.Replace(newfile.(tsFile).File)
if err != nil {
ts.t.Errorf("E: cannot replace file, num=%d type=%v: %v", tf.Num(), tf.Type(), err)
} else {
ts.t.Logf("I: file replace, num=%d type=%v", tf.Num(), tf.Type())
}
return
}
func (tf tsFile) Remove() (err error) {
ts := tf.ts
ts.mu.Lock()
@@ -492,6 +509,10 @@ func newTestStorage(t *testing.T) *testStorage {
}
f.Close()
}
if t.Failed() {
t.Logf("testing failed, test DB preserved at %s", path)
return nil
}
if tsKeepFS {
return nil
}


@@ -286,10 +286,10 @@ func (x *tFilesSortByNum) Less(i, j int) bool {
// Table operations.
type tOps struct {
s *session
cache cache.Cache
cacheNS cache.Namespace
bpool *util.BufferPool
s *session
cache *cache.Cache
bcache *cache.Cache
bpool *util.BufferPool
}
// Creates an empty table and returns table writer.
@@ -338,26 +338,28 @@ func (t *tOps) createFrom(src iterator.Iterator) (f *tFile, n int, err error) {
// Opens table. It returns a cache handle, which should
// be released after use.
func (t *tOps) open(f *tFile) (ch cache.Handle, err error) {
func (t *tOps) open(f *tFile) (ch *cache.Handle, err error) {
num := f.file.Num()
ch = t.cacheNS.Get(num, func() (charge int, value interface{}) {
ch = t.cache.Get(0, num, func() (size int, value cache.Value) {
var r storage.Reader
r, err = f.file.Open()
if err != nil {
return 0, nil
}
var bcacheNS cache.Namespace
if bc := t.s.o.GetBlockCache(); bc != nil {
bcacheNS = bc.GetNamespace(num)
var bcache *cache.CacheGetter
if t.bcache != nil {
bcache = &cache.CacheGetter{Cache: t.bcache, NS: num}
}
var tr *table.Reader
tr, err = table.NewReader(r, int64(f.size), storage.NewFileInfo(f.file), bcacheNS, t.bpool, t.s.o.Options)
tr, err = table.NewReader(r, int64(f.size), storage.NewFileInfo(f.file), bcache, t.bpool, t.s.o.Options)
if err != nil {
r.Close()
return 0, nil
}
return 1, tr
})
if ch == nil && err == nil {
err = ErrClosed
@@ -373,7 +375,17 @@ func (t *tOps) find(f *tFile, key []byte, ro *opt.ReadOptions) (rkey, rvalue []b
return nil, nil, err
}
defer ch.Release()
return ch.Value().(*table.Reader).Find(key, ro)
return ch.Value().(*table.Reader).Find(key, true, ro)
}
// Finds key that is greater than or equal to the given key.
func (t *tOps) findKey(f *tFile, key []byte, ro *opt.ReadOptions) (rkey []byte, err error) {
ch, err := t.open(f)
if err != nil {
return nil, err
}
defer ch.Release()
return ch.Value().(*table.Reader).FindKey(key, true, ro)
}
// Returns approximate offset of the given key.
@@ -402,16 +414,14 @@ func (t *tOps) newIterator(f *tFile, slice *util.Range, ro *opt.ReadOptions) ite
// no one uses the table.
func (t *tOps) remove(f *tFile) {
num := f.file.Num()
t.cacheNS.Delete(num, func(exist, pending bool) {
if !pending {
if err := f.file.Remove(); err != nil {
t.s.logf("table@remove removing @%d %q", num, err)
} else {
t.s.logf("table@remove removed @%d", num)
}
if bc := t.s.o.GetBlockCache(); bc != nil {
bc.ZapNamespace(num)
}
t.cache.Delete(0, num, func() {
if err := f.file.Remove(); err != nil {
t.s.logf("table@remove removing @%d %q", num, err)
} else {
t.s.logf("table@remove removed @%d", num)
}
if t.bcache != nil {
t.bcache.EvictNS(num)
}
})
}
@@ -419,18 +429,34 @@ func (t *tOps) remove(f *tFile) {
// Closes the table ops instance. It will close all tables,
// regardless of whether they are still in use.
func (t *tOps) close() {
t.cache.Zap()
t.bpool.Close()
t.cache.Close()
if t.bcache != nil {
t.bcache.Close()
}
}
// Creates new initialized table ops instance.
func newTableOps(s *session, cacheCap int) *tOps {
c := cache.NewLRUCache(cacheCap)
func newTableOps(s *session) *tOps {
var (
cacher cache.Cacher
bcache *cache.Cache
)
if s.o.GetOpenFilesCacheCapacity() > 0 {
cacher = cache.NewLRU(s.o.GetOpenFilesCacheCapacity())
}
if !s.o.DisableBlockCache {
var bcacher cache.Cacher
if s.o.GetBlockCacheCapacity() > 0 {
bcacher = cache.NewLRU(s.o.GetBlockCacheCapacity())
}
bcache = cache.NewCache(bcacher)
}
return &tOps{
s: s,
cache: c,
cacheNS: c.GetNamespace(0),
bpool: util.NewBufferPool(s.o.GetBlockSize() + 5),
s: s,
cache: cache.NewCache(cacher),
bcache: bcache,
bpool: util.NewBufferPool(s.o.GetBlockSize() + 5),
}
}


@@ -122,6 +122,7 @@ var _ = testutil.Defer(func() {
}
testutil.DoIteratorTesting(&t)
iter.Release()
done <- true
}
}


@@ -50,13 +50,6 @@ func max(x, y int) int {
return y
}
func verifyBlockChecksum(data []byte) bool {
n := len(data) - 4
checksum0 := binary.LittleEndian.Uint32(data[n:])
checksum1 := util.NewCRC(data[:n]).Value()
return checksum0 == checksum1
}
type block struct {
bpool *util.BufferPool
bh blockHandle
@@ -516,7 +509,7 @@ type Reader struct {
mu sync.RWMutex
fi *storage.FileInfo
reader io.ReaderAt
cache cache.Namespace
cache *cache.CacheGetter
err error
bpool *util.BufferPool
// Options
@@ -525,21 +518,24 @@ type Reader struct {
filter filter.Filter
verifyChecksum bool
dataEnd int64
indexBH, filterBH blockHandle
indexBlock *block
filterBlock *filterBlock
dataEnd int64
metaBH, indexBH, filterBH blockHandle
indexBlock *block
filterBlock *filterBlock
}
func (r *Reader) blockKind(bh blockHandle) string {
switch bh.offset {
case r.metaBH.offset:
return "meta-block"
case r.indexBH.offset:
return "index-block"
case r.filterBH.offset:
return "filter-block"
default:
return "data-block"
if r.filterBH.length > 0 {
return "filter-block"
}
}
return "data-block"
}
func (r *Reader) newErrCorrupted(pos, size int64, kind, reason string) error {
@@ -565,10 +561,17 @@ func (r *Reader) readRawBlock(bh blockHandle, verifyChecksum bool) ([]byte, erro
if _, err := r.reader.ReadAt(data, int64(bh.offset)); err != nil && err != io.EOF {
return nil, err
}
if verifyChecksum && !verifyBlockChecksum(data) {
r.bpool.Put(data)
return nil, r.newErrCorruptedBH(bh, "checksum mismatch")
if verifyChecksum {
n := bh.length + 1
checksum0 := binary.LittleEndian.Uint32(data[n:])
checksum1 := util.NewCRC(data[:n]).Value()
if checksum0 != checksum1 {
r.bpool.Put(data)
return nil, r.newErrCorruptedBH(bh, fmt.Sprintf("checksum mismatch, want=%#x got=%#x", checksum0, checksum1))
}
}
switch data[bh.length] {
case blockTypeNoCompression:
data = data[:bh.length]
@@ -610,18 +613,22 @@ func (r *Reader) readBlock(bh blockHandle, verifyChecksum bool) (*block, error)
func (r *Reader) readBlockCached(bh blockHandle, verifyChecksum, fillCache bool) (*block, util.Releaser, error) {
if r.cache != nil {
var err error
ch := r.cache.Get(bh.offset, func() (charge int, value interface{}) {
if !fillCache {
return 0, nil
}
var b *block
b, err = r.readBlock(bh, verifyChecksum)
if err != nil {
return 0, nil
}
return cap(b.data), b
})
var (
err error
ch *cache.Handle
)
if fillCache {
ch = r.cache.Get(bh.offset, func() (size int, value cache.Value) {
var b *block
b, err = r.readBlock(bh, verifyChecksum)
if err != nil {
return 0, nil
}
return cap(b.data), b
})
} else {
ch = r.cache.Get(bh.offset, nil)
}
if ch != nil {
b, ok := ch.Value().(*block)
if !ok {
@@ -664,18 +671,22 @@ func (r *Reader) readFilterBlock(bh blockHandle) (*filterBlock, error) {
func (r *Reader) readFilterBlockCached(bh blockHandle, fillCache bool) (*filterBlock, util.Releaser, error) {
if r.cache != nil {
var err error
ch := r.cache.Get(bh.offset, func() (charge int, value interface{}) {
if !fillCache {
return 0, nil
}
var b *filterBlock
b, err = r.readFilterBlock(bh)
if err != nil {
return 0, nil
}
return cap(b.data), b
})
var (
err error
ch *cache.Handle
)
if fillCache {
ch = r.cache.Get(bh.offset, func() (size int, value cache.Value) {
var b *filterBlock
b, err = r.readFilterBlock(bh)
if err != nil {
return 0, nil
}
return cap(b.data), b
})
} else {
ch = r.cache.Get(bh.offset, nil)
}
if ch != nil {
b, ok := ch.Value().(*filterBlock)
if !ok {
@@ -798,13 +809,7 @@ func (r *Reader) NewIterator(slice *util.Range, ro *opt.ReadOptions) iterator.It
return iterator.NewIndexedIterator(index, opt.GetStrict(r.o, ro, opt.StrictReader))
}
// Find finds key/value pair whose key is greater than or equal to the
// given key. It returns ErrNotFound if the table doesn't contain
// such pair.
//
// The caller should not modify the contents of the returned slice, but
// it is safe to modify the contents of the argument after Find returns.
func (r *Reader) Find(key []byte, ro *opt.ReadOptions) (rkey, value []byte, err error) {
func (r *Reader) find(key []byte, filtered bool, ro *opt.ReadOptions, noValue bool) (rkey, value []byte, err error) {
r.mu.RLock()
defer r.mu.RUnlock()
@@ -833,14 +838,17 @@ func (r *Reader) Find(key []byte, ro *opt.ReadOptions) (rkey, value []byte, err
r.err = r.newErrCorruptedBH(r.indexBH, "bad data block handle")
return
}
if r.filter != nil {
filterBlock, rel, ferr := r.getFilterBlock(true)
if filtered && r.filter != nil {
filterBlock, frel, ferr := r.getFilterBlock(true)
if ferr == nil {
if !filterBlock.contains(r.filter, dataBH.offset, key) {
rel.Release()
frel.Release()
return nil, nil, ErrNotFound
}
rel.Release()
frel.Release()
} else if !errors.IsCorrupted(ferr) {
err = ferr
return
}
}
data := r.getDataIter(dataBH, nil, r.verifyChecksum, !ro.GetDontFillCache())
@@ -854,21 +862,52 @@ func (r *Reader) Find(key []byte, ro *opt.ReadOptions) (rkey, value []byte, err
}
// Don't use block buffer, no need to copy the buffer.
rkey = data.Key()
if r.bpool == nil {
value = data.Value()
} else {
// Use block buffer, and since the buffer will be recycled, the buffer
// needs to be copied.
value = append([]byte{}, data.Value()...)
if !noValue {
if r.bpool == nil {
value = data.Value()
} else {
// Use block buffer, and since the buffer will be recycled, the buffer
// needs to be copied.
value = append([]byte{}, data.Value()...)
}
}
return
}
// Find finds key/value pair whose key is greater than or equal to the
// given key. It returns ErrNotFound if the table doesn't contain
// such pair.
// If filtered is true then the nearest 'block' will be checked against
// 'filter data' (if present) and will immediately return ErrNotFound if
// 'filter data' indicates that such pair doesn't exist.
//
// The caller may modify the contents of the returned slice as it is its
// own copy.
// It is safe to modify the contents of the argument after Find returns.
func (r *Reader) Find(key []byte, filtered bool, ro *opt.ReadOptions) (rkey, value []byte, err error) {
return r.find(key, filtered, ro, false)
}
// FindKey finds the key that is greater than or equal to the given key.
// It returns ErrNotFound if the table doesn't contain such key.
// If filtered is true then the nearest 'block' will be checked against
// 'filter data' (if present) and will immediately return ErrNotFound if
// 'filter data' indicates that such key doesn't exist.
//
// The caller may modify the contents of the returned slice as it is its
// own copy.
// It is safe to modify the contents of the argument after FindKey returns.
func (r *Reader) FindKey(key []byte, filtered bool, ro *opt.ReadOptions) (rkey []byte, err error) {
rkey, _, err = r.find(key, filtered, ro, true)
return
}
// Get gets the value for the given key. It returns errors.ErrNotFound
// if the table does not contain the key.
//
// The caller should not modify the contents of the returned slice, but
// it is safe to modify the contents of the argument after Get returns.
// The caller may modify the contents of the returned slice as it is its
// own copy.
// It is safe to modify the contents of the argument after Get returns.
func (r *Reader) Get(key []byte, ro *opt.ReadOptions) (value []byte, err error) {
r.mu.RLock()
defer r.mu.RUnlock()
@@ -878,7 +917,7 @@ func (r *Reader) Get(key []byte, ro *opt.ReadOptions) (value []byte, err error)
return
}
rkey, value, err := r.Find(key, ro)
rkey, value, err := r.find(key, false, ro, false)
if err == nil && r.cmp.Compare(rkey, key) != 0 {
value = nil
err = ErrNotFound
@@ -949,7 +988,11 @@ func (r *Reader) Release() {
// The fi, cache and bpool are optional and can be nil.
//
// The returned table reader instance is goroutine-safe.
func NewReader(f io.ReaderAt, size int64, fi *storage.FileInfo, cache cache.Namespace, bpool *util.BufferPool, o *opt.Options) (*Reader, error) {
func NewReader(f io.ReaderAt, size int64, fi *storage.FileInfo, cache *cache.CacheGetter, bpool *util.BufferPool, o *opt.Options) (*Reader, error) {
if f == nil {
return nil, errors.New("leveldb/table: nil file")
}
r := &Reader{
fi: fi,
reader: f,
@@ -959,13 +1002,12 @@ func NewReader(f io.ReaderAt, size int64, fi *storage.FileInfo, cache cache.Name
cmp: o.GetComparer(),
verifyChecksum: o.GetStrict(opt.StrictBlockChecksum),
}
if f == nil {
return nil, errors.New("leveldb/table: nil file")
}
if size < footerLen {
r.err = r.newErrCorrupted(0, size, "table", "too small")
return r, nil
}
footerPos := size - footerLen
var footer [footerLen]byte
if _, err := r.reader.ReadAt(footer[:], footerPos); err != nil && err != io.EOF {
@@ -975,20 +1017,24 @@ func NewReader(f io.ReaderAt, size int64, fi *storage.FileInfo, cache cache.Name
r.err = r.newErrCorrupted(footerPos, footerLen, "table-footer", "bad magic number")
return r, nil
}
var n int
// Decode the metaindex block handle.
metaBH, n := decodeBlockHandle(footer[:])
r.metaBH, n = decodeBlockHandle(footer[:])
if n == 0 {
r.err = r.newErrCorrupted(footerPos, footerLen, "table-footer", "bad metaindex block handle")
return r, nil
}
// Decode the index block handle.
r.indexBH, n = decodeBlockHandle(footer[n:])
if n == 0 {
r.err = r.newErrCorrupted(footerPos, footerLen, "table-footer", "bad index block handle")
return r, nil
}
// Read metaindex block.
metaBlock, err := r.readBlock(metaBH, true)
metaBlock, err := r.readBlock(r.metaBH, true)
if err != nil {
if errors.IsCorrupted(err) {
r.err = err
@@ -997,9 +1043,12 @@ func NewReader(f io.ReaderAt, size int64, fi *storage.FileInfo, cache cache.Name
return nil, err
}
}
// Set data end.
r.dataEnd = int64(metaBH.offset)
metaIter := r.newBlockIter(metaBlock, nil, nil, false)
r.dataEnd = int64(r.metaBH.offset)
// Read metaindex.
metaIter := r.newBlockIter(metaBlock, nil, nil, true)
for metaIter.Next() {
key := string(metaIter.Key())
if !strings.HasPrefix(key, "filter.") {
@@ -1044,13 +1093,12 @@ func NewReader(f io.ReaderAt, size int64, fi *storage.FileInfo, cache cache.Name
if r.filter != nil {
r.filterBlock, err = r.readFilterBlock(r.filterBH)
if err != nil {
if !errors.IsCorrupted(r.err) {
if !errors.IsCorrupted(err) {
return nil, err
}
// Don't use filter then.
r.filter = nil
r.filterBH = blockHandle{}
}
}
}


@@ -3,15 +3,9 @@ package table
import (
"testing"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
"github.com/syndtr/goleveldb/leveldb/testutil"
)
func TestTable(t *testing.T) {
testutil.RunDefer()
RegisterFailHandler(Fail)
RunSpecs(t, "Table Suite")
testutil.RunSuite(t, "Table Suite")
}


@@ -23,7 +23,7 @@ type tableWrapper struct {
}
func (t tableWrapper) TestFind(key []byte) (rkey, rvalue []byte, err error) {
return t.Reader.Find(key, nil)
return t.Reader.Find(key, false, nil)
}
func (t tableWrapper) TestGet(key []byte) (value []byte, err error) {


@@ -35,6 +35,10 @@ type Get interface {
TestGet(key []byte) (value []byte, err error)
}
type Has interface {
TestHas(key []byte) (ret bool, err error)
}
type NewIterator interface {
TestNewIterator(slice *util.Range) iterator.Iterator
}
@@ -213,5 +217,6 @@ func DoDBTesting(t *DBTesting) {
}
DoIteratorTesting(&it)
iter.Release()
}
}


@@ -0,0 +1,21 @@
package testutil
import (
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
)
func RunSuite(t GinkgoTestingT, name string) {
RunDefer()
SynchronizedBeforeSuite(func() []byte {
RunDefer("setup")
return nil
}, func(data []byte) {})
SynchronizedAfterSuite(func() {
RunDefer("teardown")
}, func() {})
RegisterFailHandler(Fail)
RunSpecs(t, name)
}


@@ -26,9 +26,11 @@ func KeyValueTesting(rnd *rand.Rand, kv KeyValue, p DB, setup func(KeyValue) DB,
BeforeEach(func() {
p = setup(kv)
})
AfterEach(func() {
teardown(p)
})
if teardown != nil {
AfterEach(func() {
teardown(p)
})
}
}
It("Should find all keys with Find", func() {
@@ -84,6 +86,26 @@ func KeyValueTesting(rnd *rand.Rand, kv KeyValue, p DB, setup func(KeyValue) DB,
}
})
It("Should only find present key with Has", func() {
if db, ok := p.(Has); ok {
ShuffledIndex(nil, kv.Len(), 1, func(i int) {
key_, key, _ := kv.IndexInexact(i)
// Using exact key.
ret, err := db.TestHas(key)
Expect(err).ShouldNot(HaveOccurred(), "Error for key %q", key)
Expect(ret).Should(BeTrue(), "False for key %q", key)
// Using inexact key.
if len(key_) > 0 {
ret, err = db.TestHas(key_)
Expect(err).ShouldNot(HaveOccurred(), "Error for key %q", key_)
Expect(ret).ShouldNot(BeTrue(), "True for key %q", key)
}
})
}
})
TestIter := func(r *util.Range, _kv KeyValue) {
if db, ok := p.(NewIterator); ok {
iter := db.TestNewIterator(r)
@@ -95,6 +117,7 @@ func KeyValueTesting(rnd *rand.Rand, kv KeyValue, p DB, setup func(KeyValue) DB,
}
DoIteratorTesting(&t)
iter.Release()
}
}
@@ -103,7 +126,7 @@ func KeyValueTesting(rnd *rand.Rand, kv KeyValue, p DB, setup func(KeyValue) DB,
done <- true
}, 3.0)
RandomIndex(rnd, kv.Len(), kv.Len(), func(i int) {
RandomIndex(rnd, kv.Len(), Min(kv.Len(), 50), func(i int) {
type slice struct {
r *util.Range
start, limit int
@@ -121,7 +144,7 @@ func KeyValueTesting(rnd *rand.Rand, kv KeyValue, p DB, setup func(KeyValue) DB,
}
})
RandomRange(rnd, kv.Len(), kv.Len(), func(start, limit int) {
RandomRange(rnd, kv.Len(), Min(kv.Len(), 50), func(start, limit int) {
It(fmt.Sprintf("Should iterates and seeks correctly of a slice %d .. %d", start, limit), func(done Done) {
r := kv.Range(start, limit)
TestIter(&r, kv.Slice(start, limit))
@@ -134,10 +157,22 @@ func AllKeyValueTesting(rnd *rand.Rand, body, setup func(KeyValue) DB, teardown
Test := func(kv *KeyValue) func() {
return func() {
var p DB
if setup != nil {
Defer("setup", func() {
p = setup(*kv)
})
}
if teardown != nil {
Defer("teardown", func() {
teardown(p)
})
}
if body != nil {
p = body(*kv)
}
KeyValueTesting(rnd, *kv, p, setup, teardown)
KeyValueTesting(rnd, *kv, p, func(KeyValue) DB {
return p
}, nil)
}
}
@@ -148,4 +183,5 @@ func AllKeyValueTesting(rnd *rand.Rand, body, setup func(KeyValue) DB, teardown
Describe("with big value", Test(KeyValue_BigValue()))
Describe("with special key", Test(KeyValue_SpecialKey()))
Describe("with multiple key/value", Test(KeyValue_MultipleKeyValue()))
Describe("with generated key/value", Test(KeyValue_Generate(nil, 120, 1, 50, 10, 120)))
}


@@ -397,6 +397,7 @@ func (s *Storage) logI(format string, args ...interface{}) {
func (s *Storage) Log(str string) {
s.log(1, "Log: "+str)
s.Storage.Log(str)
}
func (s *Storage) Lock() (r util.Releaser, err error) {


@@ -155,3 +155,17 @@ func RandomRange(rnd *rand.Rand, n, round int, fn func(start, limit int)) {
}
return
}
func Max(x, y int) int {
if x > y {
return x
}
return y
}
func Min(x, y int) int {
if x < y {
return x
}
return y
}


@@ -34,6 +34,10 @@ func (t *testingDB) TestGet(key []byte) (value []byte, err error) {
return t.Get(key, t.ro)
}
func (t *testingDB) TestHas(key []byte) (ret bool, err error) {
return t.Has(key, t.ro)
}
func (t *testingDB) TestNewIterator(slice *util.Range) iterator.Iterator {
return t.NewIterator(slice, t.ro)
}


@@ -114,7 +114,7 @@ func (v *version) walkOverlapping(ikey iKey, f func(level int, t *tFile) bool, l
}
}
func (v *version) get(ikey iKey, ro *opt.ReadOptions) (value []byte, tcomp bool, err error) {
func (v *version) get(ikey iKey, ro *opt.ReadOptions, noValue bool) (value []byte, tcomp bool, err error) {
ukey := ikey.ukey()
var (
@@ -142,7 +142,15 @@ func (v *version) get(ikey iKey, ro *opt.ReadOptions) (value []byte, tcomp bool,
}
}
fikey, fval, ferr := v.s.tops.find(t, ikey, ro)
var (
fikey, fval []byte
ferr error
)
if noValue {
fikey, ferr = v.s.tops.findKey(t, ikey, ro)
} else {
fikey, fval, ferr = v.s.tops.find(t, ikey, ro)
}
switch ferr {
case nil:
case ErrNotFound:
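
The new noValue path lets Has-style queries skip fetching the value from the table, calling findKey instead of find. A minimal usage sketch of the resulting goleveldb API (the database path and keys are made up for illustration):

    package main

    import (
        "fmt"
        "log"

        "github.com/syndtr/goleveldb/leveldb"
    )

    func main() {
        // Open (or create) a database in a scratch directory.
        db, err := leveldb.OpenFile("/tmp/example-db", nil)
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        if err := db.Put([]byte("alpha"), []byte("1"), nil); err != nil {
            log.Fatal(err)
        }

        // Has only needs the key lookup; with the change above it avoids
        // reading the value block at all.
        ok, err := db.Has([]byte("alpha"), nil)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("alpha present:", ok) // true
    }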


@@ -8,11 +8,11 @@ package bcrypt
// The code is a port of Provos and Mazières's C implementation.
import (
"code.google.com/p/go.crypto/blowfish"
"crypto/rand"
"crypto/subtle"
"errors"
"fmt"
"golang.org/x/crypto/blowfish"
"io"
"strconv"
)


@@ -8,8 +8,8 @@ import (
"fmt"
"unicode"
"code.google.com/p/go.text/transform"
"code.google.com/p/go.text/unicode/norm"
"golang.org/x/text/transform"
"golang.org/x/text/unicode/norm"
)
func ExampleRemoveFunc() {


@@ -55,10 +55,17 @@ type Transformer interface {
// either error may be returned. Other than the error conditions listed
// here, implementations are free to report other errors that arise.
Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error)
// Reset resets the state and allows a Transformer to be reused.
Reset()
}
// TODO: Do we require that a Transformer be reusable if it returns a nil error
// or do we always require a reset after use? Is Reset mandatory or optional?
// NopResetter can be embedded by implementations of Transformer to add a nop
// Reset method.
type NopResetter struct{}
// Reset implements the Reset method of the Transformer interface.
func (NopResetter) Reset() {}
// Reader wraps another io.Reader by transforming the bytes read.
type Reader struct {
@@ -84,8 +91,9 @@ type Reader struct {
const defaultBufSize = 4096
// NewReader returns a new Reader that wraps r by transforming the bytes read
// via t.
// via t. It calls Reset on t.
func NewReader(r io.Reader, t Transformer) *Reader {
t.Reset()
return &Reader{
r: r,
t: t,
@@ -170,8 +178,9 @@ type Writer struct {
}
// NewWriter returns a new Writer that wraps w by transforming the bytes written
// via t.
// via t. It calls Reset on t.
func NewWriter(w io.Writer, t Transformer) *Writer {
t.Reset()
return &Writer{
w: w,
t: t,
@@ -247,7 +256,7 @@ func (w *Writer) Close() error {
return nil
}
type nop struct{}
type nop struct{ NopResetter }
func (nop) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
n := copy(dst, src)
@@ -257,7 +266,7 @@ func (nop) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
return n, n, err
}
type discard struct{}
type discard struct{ NopResetter }
func (discard) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
return 0, len(src), nil
@@ -327,6 +336,16 @@ func Chain(t ...Transformer) Transformer {
return c
}
// Reset resets the state of Chain. It calls Reset on all the Transformers.
func (c *chain) Reset() {
for i, l := range c.link {
if l.t != nil {
l.t.Reset()
}
c.link[i].p, c.link[i].n = 0, 0
}
}
// Transform applies the transformers of c in sequence.
func (c *chain) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
// Set up src and dst in the chain.
@@ -425,6 +444,8 @@ func RemoveFunc(f func(r rune) bool) Transformer {
type removeF func(r rune) bool
func (removeF) Reset() {}
// Transform implements the Transformer interface.
func (t removeF) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
for r, sz := rune(0), 0; len(src) > 0; src = src[sz:] {
@@ -436,7 +457,7 @@ func (t removeF) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err err
if sz == 1 {
// Invalid rune.
if !atEOF && !utf8.FullRune(src[nSrc:]) {
if !atEOF && !utf8.FullRune(src) {
err = ErrShortSrc
break
}
@@ -485,12 +506,14 @@ func grow(b []byte, n int) []byte {
const initialBufSize = 128
// String returns a string with the result of converting s[:n] using t, where
// n <= len(s). If err == nil, n will be len(s).
// n <= len(s). If err == nil, n will be len(s). It calls Reset on t.
func String(t Transformer, s string) (result string, n int, err error) {
if s == "" {
return "", 0, nil
}
t.Reset()
// Allocate only once. Note that both dst and src escape when passed to
// Transform.
buf := [2 * initialBufSize]byte{}
@@ -571,8 +594,9 @@ func String(t Transformer, s string) (result string, n int, err error) {
}
// Bytes returns a new byte slice with the result of converting b[:n] using t,
// where n <= len(b). If err == nil, n will be len(b).
// where n <= len(b). If err == nil, n will be len(b). It calls Reset on t.
func Bytes(t Transformer, b []byte) (result []byte, n int, err error) {
t.Reset()
dst := make([]byte, len(b))
pDst, pSrc := 0, 0
for {
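
Since Reset is now part of the Transformer contract, stateless transformers can embed NopResetter instead of writing their own no-op. A small illustrative sketch (the upperASCII type is invented here, not part of the package):

    package main

    import (
        "fmt"

        "golang.org/x/text/transform"
    )

    // upperASCII upper-cases ASCII letters. It holds no state, so the
    // embedded NopResetter supplies the required Reset method.
    type upperASCII struct{ transform.NopResetter }

    func (upperASCII) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
        n := copy(dst, src)
        if n < len(src) {
            err = transform.ErrShortDst
        }
        for i := 0; i < n; i++ {
            if 'a' <= dst[i] && dst[i] <= 'z' {
                dst[i] -= 'a' - 'A'
            }
        }
        return n, n, err
    }

    func main() {
        // String calls Reset on the transformer before use, per the new contract.
        s, _, err := transform.String(upperASCII{}, "hello")
        fmt.Println(s, err) // HELLO <nil>
    }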


@@ -16,7 +16,7 @@ import (
"unicode/utf8"
)
type lowerCaseASCII struct{}
type lowerCaseASCII struct{ NopResetter }
func (lowerCaseASCII) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
n := len(src)
@@ -34,7 +34,7 @@ func (lowerCaseASCII) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, er
var errYouMentionedX = errors.New("you mentioned X")
type dontMentionX struct{}
type dontMentionX struct{ NopResetter }
func (dontMentionX) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
n := len(src)
@@ -52,7 +52,7 @@ func (dontMentionX) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err
// doublerAtEOF is a strange Transformer that transforms "this" to "tthhiiss",
// but only if atEOF is true.
type doublerAtEOF struct{}
type doublerAtEOF struct{ NopResetter }
func (doublerAtEOF) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
if !atEOF {
@@ -71,7 +71,7 @@ func (doublerAtEOF) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err
// rleDecode and rleEncode implement a toy run-length encoding: "aabbbbbbbbbb"
// is encoded as "2a10b". The decoding is assumed to not contain any numbers.
type rleDecode struct{}
type rleDecode struct{ NopResetter }
func (rleDecode) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
loop:
@@ -104,6 +104,8 @@ loop:
}
type rleEncode struct {
NopResetter
// allowStutter means that "xxxxxxxx" can be encoded as "5x3x"
// instead of always as "8x".
allowStutter bool
@@ -136,6 +138,10 @@ func (e rleEncode) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err e
// trickler consumes all input bytes, but writes a single byte at a time to dst.
type trickler []byte
func (t *trickler) Reset() {
*t = nil
}
func (t *trickler) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
*t = append(*t, src...)
if len(*t) == 0 {
@@ -157,6 +163,9 @@ func (t *trickler) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err e
// to have some tolerance as long as progress can be detected.
type delayedTrickler []byte
func (t *delayedTrickler) Reset() {
*t = nil
}
func (t *delayedTrickler) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
if len(*t) > 0 && len(dst) > 0 {
dst[0] = (*t)[0]
@@ -447,7 +456,6 @@ var testCases = []testCase{
func TestReader(t *testing.T) {
for _, tc := range testCases {
reset(tc.t)
r := NewReader(strings.NewReader(tc.src), tc.t)
// Differently sized dst and src buffers are not part of the
// exported API. We override them manually.
@@ -461,13 +469,6 @@ func TestReader(t *testing.T) {
}
}
func reset(t Transformer) {
var dst [128]byte
for err := ErrShortDst; err != nil; {
_, _, err = t.Transform(dst[:], nil, true)
}
}
func TestWriter(t *testing.T) {
tests := append(testCases, chainTests()...)
for _, tc := range tests {
@@ -477,7 +478,6 @@ func TestWriter(t *testing.T) {
}
for _, sz := range sizes {
bb := &bytes.Buffer{}
reset(tc.t)
w := NewWriter(bb, tc.t)
// Differently sized dst and src buffers are not part of the
// exported API. We override them manually.
@@ -735,7 +735,7 @@ func chainTests() []testCase {
}
func doTransform(tc testCase) (res string, iter int, err error) {
reset(tc.t)
tc.t.Reset()
dst := make([]byte, tc.dstSize)
out, in := make([]byte, 0, 2*len(tc.src)), []byte(tc.src)
for {
@@ -884,6 +884,14 @@ func TestRemoveFunc(t *testing.T) {
wantIter: 2,
},
{
// Test a long buffer greater than the internal buffer size
src: "hello\xcc\xcc\xccworld",
srcSize: 13,
wantStr: "hello\uFFFD\uFFFD\uFFFDworld",
wantIter: 1,
},
{
src: "\u2345",
dstSize: 2,
@@ -947,7 +955,6 @@ func testString(t *testing.T, f func(Transformer, string) (string, int, error))
// that depend on a specific buffer size being set.
continue
}
reset(tt.t)
if tt.wantErr == ErrShortDst || tt.wantErr == ErrShortSrc {
// The result string will be different.
continue


@@ -5,9 +5,6 @@
maketables: maketables.go triegen.go
go build $^
maketesttables: maketesttables.go triegen.go
go build $^
normregtest: normregtest.go
go build $^
@@ -15,10 +12,6 @@ tables: maketables
./maketables > tables.go
gofmt -w tables.go
trietesttables: maketesttables
./maketesttables > triedata_test.go
gofmt -w triedata_test.go
# Downloads from www.unicode.org, so not part
# of standard test scripts.
test: testtables regtest


@@ -9,7 +9,7 @@ import (
"fmt"
"unicode/utf8"
"code.google.com/p/go.text/unicode/norm"
"golang.org/x/text/unicode/norm"
)
// EqualSimple uses a norm.Iter to compare two non-normalized


@@ -201,17 +201,17 @@ func lookupInfoNFKC(b input, i int) Properties {
// Properties returns properties for the first rune in s.
func (f Form) Properties(s []byte) Properties {
if f == NFC || f == NFD {
return compInfo(nfcTrie.lookup(s))
return compInfo(nfcData.lookup(s))
}
return compInfo(nfkcTrie.lookup(s))
return compInfo(nfkcData.lookup(s))
}
// PropertiesString returns properties for the first rune in s.
func (f Form) PropertiesString(s string) Properties {
if f == NFC || f == NFD {
return compInfo(nfcTrie.lookupString(s))
return compInfo(nfcData.lookupString(s))
}
return compInfo(nfkcTrie.lookupString(s))
return compInfo(nfkcData.lookupString(s))
}
// compInfo converts the information contained in v and sz


@@ -77,16 +77,16 @@ func (in *input) copySlice(buf []byte, b, e int) int {
func (in *input) charinfoNFC(p int) (uint16, int) {
if in.bytes == nil {
return nfcTrie.lookupString(in.str[p:])
return nfcData.lookupString(in.str[p:])
}
return nfcTrie.lookup(in.bytes[p:])
return nfcData.lookup(in.bytes[p:])
}
func (in *input) charinfoNFKC(p int) (uint16, int) {
if in.bytes == nil {
return nfkcTrie.lookupString(in.str[p:])
return nfkcData.lookupString(in.str[p:])
}
return nfkcTrie.lookup(in.bytes[p:])
return nfkcData.lookup(in.bytes[p:])
}
func (in *input) hangul(p int) (r rune) {


@@ -9,6 +9,8 @@ import (
"unicode/utf8"
)
// MaxSegmentSize is the maximum size of a byte buffer needed to consider any
// sequence of starter and non-starter runes for the purpose of normalization.
const MaxSegmentSize = maxByteBufferSize
// An Iter iterates over a string or byte slice, while normalizing it


@@ -24,7 +24,8 @@ import (
"strings"
"unicode"
"code.google.com/p/go.text/internal/ucd"
"golang.org/x/text/internal/triegen"
"golang.org/x/text/internal/ucd"
)
func main() {
@@ -714,13 +715,14 @@ func printCharInfoTables() int {
varnames := []string{"nfc", "nfkc"}
for i := 0; i < FNumberOfFormTypes; i++ {
trie := newNode()
trie := triegen.NewTrie(varnames[i])
for r, c := range chars {
f := c.forms[i]
d := f.expandedDecomp
if len(d) != 0 {
_, key := mkstr(c.codePoint, &f)
trie.insert(rune(r), positionMap[key])
trie.Insert(rune(r), uint64(positionMap[key]))
if c.ccc != ccc(d[0]) {
// We assume the lead ccc of a decomposition !=0 in this case.
if ccc(d[0]) == 0 {
@@ -730,12 +732,16 @@ func printCharInfoTables() int {
} else if c.nLeadingNonStarters > 0 && len(f.expandedDecomp) == 0 && c.ccc == 0 && !f.combinesBackward {
// Handle cases where it can't be detected that the nLead should be equal
// to nTrail.
trie.insert(c.codePoint, positionMap[nLeadStr])
trie.Insert(c.codePoint, uint64(positionMap[nLeadStr]))
} else if v := makeEntry(&f, &c)<<8 | uint16(c.ccc); v != 0 {
trie.insert(c.codePoint, 0x8000|v)
trie.Insert(c.codePoint, uint64(0x8000|v))
}
}
size += trie.printTables(varnames[i])
sz, err := trie.Gen(os.Stdout, triegen.Compact(&normCompacter{name: varnames[i]}))
if err != nil {
logger.Fatal(err)
}
size += sz
}
return size
}


@@ -23,7 +23,7 @@ import (
"unicode"
"unicode/utf8"
"code.google.com/p/go.text/unicode/norm"
"golang.org/x/text/unicode/norm"
)
func main() {


File diff suppressed because it is too large.


@@ -7,13 +7,16 @@ package norm
import (
"unicode/utf8"
"code.google.com/p/go.text/transform"
"golang.org/x/text/transform"
)
// Transform implements the transform.Transformer interface. It may need to
// write segments of up to MaxSegmentSize at once. Users should either catch
// ErrShortDst and allow dst to grow or have dst be at least of size
// MaxTransformChunkSize to be guaranteed of progress.
// Reset implements the Reset method of the transform.Transformer interface.
func (Form) Reset() {}
// Transform implements the Transform method of the transform.Transformer
// interface. It may need to write segments of up to MaxSegmentSize at once.
// Users should either catch ErrShortDst and allow dst to grow or have dst be at
// least of size MaxTransformChunkSize to be guaranteed of progress.
func (f Form) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
n := 0
// Cap the maximum number of src bytes to check.


@@ -8,7 +8,7 @@ import (
"fmt"
"testing"
"code.google.com/p/go.text/transform"
"golang.org/x/text/transform"
)
func TestTransform(t *testing.T) {


@@ -0,0 +1,54 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package norm
type valueRange struct {
value uint16 // header: value:stride
lo, hi byte // header: lo:n
}
type sparseBlocks struct {
values []valueRange
offset []uint16
}
var nfcSparse = sparseBlocks{
values: nfcSparseValues[:],
offset: nfcSparseOffset[:],
}
var nfkcSparse = sparseBlocks{
values: nfkcSparseValues[:],
offset: nfkcSparseOffset[:],
}
var (
nfcData = newNfcTrie(0)
nfkcData = newNfkcTrie(0)
)
// lookupValue determines the type of block n and looks up the value for b.
// For n < t.cutoff, the block is a simple lookup table. Otherwise, the block
// is a list of ranges with an accompanying value. Given a matching range r,
// the value for b is given by r.value + (b - r.lo) * stride.
func (t *sparseBlocks) lookup(n uint32, b byte) uint16 {
offset := t.offset[n]
header := t.values[offset]
lo := offset + 1
hi := lo + uint16(header.lo)
for lo < hi {
m := lo + (hi-lo)/2
r := t.values[m]
if r.lo <= b && b <= r.hi {
return r.value + uint16(b-r.lo)*header.value
}
if b < r.lo {
hi = m
} else {
lo = m + 1
}
}
return 0
}
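
To make the range encoding concrete, here is a hand-built table run through the same lookup logic; the values are invented for the example, with the header's value field acting as the stride and its lo field as the range count:

    package main

    import "fmt"

    type valueRange struct {
        value  uint16 // header: value is the stride
        lo, hi byte   // header: lo is the number of ranges
    }

    type sparseBlocks struct {
        values []valueRange
        offset []uint16
    }

    func (t *sparseBlocks) lookup(n uint32, b byte) uint16 {
        offset := t.offset[n]
        header := t.values[offset]
        lo := offset + 1
        hi := lo + uint16(header.lo)
        for lo < hi {
            m := lo + (hi-lo)/2
            r := t.values[m]
            if r.lo <= b && b <= r.hi {
                return r.value + uint16(b-r.lo)*header.value
            }
            if b < r.lo {
                hi = m
            } else {
                lo = m + 1
            }
        }
        return 0
    }

    func main() {
        // One block with stride 1 and two ranges:
        // 0x80-0x8f maps to 0x100+, 0x90-0x9f maps to 0x200+.
        t := sparseBlocks{
            values: []valueRange{
                {value: 1, lo: 2},                  // header
                {value: 0x100, lo: 0x80, hi: 0x8f}, // range 1
                {value: 0x200, lo: 0x90, hi: 0x9f}, // range 2
            },
            offset: []uint16{0},
        }
        fmt.Printf("%#x\n", t.lookup(0, 0x85)) // 0x105 = 0x100 + (0x85-0x80)*1
        fmt.Printf("%#x\n", t.lookup(0, 0x7f)) // 0 (no matching range)
    }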


@@ -0,0 +1,117 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
// Trie table generator.
// Used by make*tables tools to generate a go file with trie data structures
// for mapping UTF-8 to a 16-bit value. All but the last byte in a UTF-8 byte
// sequence are used to lookup offsets in the index table to be used for the
// next byte. The last byte is used to index into a table with 16-bit values.
package main
import (
"fmt"
"io"
)
const maxSparseEntries = 16
type normCompacter struct {
sparseBlocks [][]uint64
sparseOffset []uint16
sparseCount int
name string
}
func mostFrequentStride(a []uint64) int {
counts := make(map[int]int)
var v int
for _, x := range a {
if stride := int(x) - v; v != 0 && stride >= 0 {
counts[stride]++
}
v = int(x)
}
var maxs, maxc int
for stride, cnt := range counts {
if cnt > maxc || (cnt == maxc && stride < maxs) {
maxs, maxc = stride, cnt
}
}
return maxs
}
func countSparseEntries(a []uint64) int {
stride := mostFrequentStride(a)
var v, count int
for _, tv := range a {
if int(tv)-v != stride {
if tv != 0 {
count++
}
}
v = int(tv)
}
return count
}
func (c *normCompacter) Size(v []uint64) (sz int, ok bool) {
if n := countSparseEntries(v); n <= maxSparseEntries {
return (n+1)*4 + 2, true
}
return 0, false
}
func (c *normCompacter) Store(v []uint64) uint32 {
h := uint32(len(c.sparseOffset))
c.sparseBlocks = append(c.sparseBlocks, v)
c.sparseOffset = append(c.sparseOffset, uint16(c.sparseCount))
c.sparseCount += countSparseEntries(v) + 1
return h
}
func (c *normCompacter) Handler() string {
return c.name + "Sparse.lookup"
}
func (c *normCompacter) Print(w io.Writer) (retErr error) {
p := func(f string, x ...interface{}) {
if _, err := fmt.Fprintf(w, f, x...); retErr == nil && err != nil {
retErr = err
}
}
ls := len(c.sparseBlocks)
p("// %sSparseOffset: %d entries, %d bytes\n", c.name, ls, ls*2)
p("var %sSparseOffset = %#v\n\n", c.name, c.sparseOffset)
ns := c.sparseCount
p("// %sSparseValues: %d entries, %d bytes\n", c.name, ns, ns*4)
p("var %sSparseValues = [%d]valueRange {", c.name, ns)
for i, b := range c.sparseBlocks {
p("\n// Block %#x, offset %#x", i, c.sparseOffset[i])
var v int
stride := mostFrequentStride(b)
n := countSparseEntries(b)
p("\n{value:%#04x,lo:%#02x},", stride, uint8(n))
for i, nv := range b {
if int(nv)-v != stride {
if v != 0 {
p(",hi:%#02x},", 0x80+i-1)
}
if nv != 0 {
p("\n{value:%#04x,lo:%#02x", nv, 0x80+i)
}
}
v = int(nv)
}
if v != 0 {
p(",hi:%#02x},", 0x80+len(b)-1)
}
}
p("\n}\n\n")
return
}
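
For intuition, mostFrequentStride just tallies the differences between consecutive non-zero entries and returns the most common one. A quick standalone check, with invented input values:

    package main

    import "fmt"

    func mostFrequentStride(a []uint64) int {
        counts := make(map[int]int)
        var v int
        for _, x := range a {
            if stride := int(x) - v; v != 0 && stride >= 0 {
                counts[stride]++
            }
            v = int(x)
        }
        var maxs, maxc int
        for stride, cnt := range counts {
            if cnt > maxc || (cnt == maxc && stride < maxs) {
                maxs, maxc = stride, cnt
            }
        }
        return maxs
    }

    func main() {
        // Strides between consecutive entries are 2, 2, 2, 5, so 2 wins.
        fmt.Println(mostFrequentStride([]uint64{10, 12, 14, 16, 21})) // 2
    }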

NICKS Normal file

@@ -0,0 +1,35 @@
# This file maps email addresses used in commits to nicks used in the changelog.
AudriusButkevicius <audrius.butkevicius@gmail.com>
Cathryne <cathryne.linenweaver@gmail.com> <Cathryne@users.noreply.github.com>
KayoticSully <kayoticsully@gmail.com>
Nutomic <me@nutomic.com>
Vilbrekin <vilbrekin@gmail.com>
Zillode <zillode@zillode.be>
alex2108 <register-github@alex-graf.de>
andrew-d <andrew@du.nham.ca>
asdil12 <dominik@heidler.eu>
bigbear2nd <bigbear2nd@gmail.com>
bsidhom <bsidhom@gmail.com>
calmh <jakob@nym.se>
cdata <chris@scriptolo.gy>
ceh <emil@hessman.se>
cqcallaw <enlightened.despot@gmail.com>
filoozoom <philippe@schommers.be>
frioux <frew@afoolishmanifesto.com> <frioux@gmail.com>
gillisig <gilli@vx.is>
jedie <github.com@jensdiemer.de> <git@jensdiemer.de>
jpjp <jamespatterson@operamail.com> <jpjp@users.noreply.github.com>
kozec <kozec@kozec.com>
marcindziadus <dziadus.marcin@gmail.com>
mvdan <mvdan@mvdan.cc>
philips <brandon@ifup.org>
piobpl <piotrb10@gmail.com>
pluby <phill.luby@newredo.com>
pyfisch <pyfisch@gmail.com>
qbit <qbit@deftly.net>
seehuhn <voss@seehuhn.de>
snnd <dw@risu.io>
tojrobinson <tully@tojr.org>
uok <ueomkail@gmail.com> <uok@users.noreply.github.com>
veeti <veeti.paananen@rojekti.fi>


@@ -11,7 +11,7 @@ This is the `syncthing` project. The following are the project goals:
collaborating devices. The protocol should be well defined, unambiguous,
easily understood, free to use, efficient, secure and language neutral.
This is the [Block Exchange
Protocol](https://github.com/syncthing/syncthing/blob/master/protocol/PROTOCOL.md).
Protocol](https://github.com/syncthing/protocol/blob/master/BEPv1.md).
2. Provide the reference implementation to demonstrate the usability of
said protocol. This is the `syncthing` utility. It is the hope that
@@ -25,13 +25,20 @@ for incompatible changes.
Getting Started
---------------
Take a look at the [getting started guide](http://discourse.syncthing.net/t/46).
Take a look at the [getting started
guide](https://github.com/syncthing/syncthing/wiki/Getting-Started).
There are a few examples for keeping syncthing running in the background
on your system in [the etc directory](https://github.com/syncthing/syncthing/blob/master/etc).
There is an IRC channel, `#syncthing` on Freenode, for talking directly
to developers and users (when awake and present, etc.).
Building
--------
Building Syncthing from source is easy, and there's a
[guide](http://discourse.syncthing.net/t/44)
[guide](https://github.com/syncthing/syncthing/wiki/Building).
that describes it for both Unix and Windows.
Signed Releases
@@ -46,15 +53,8 @@ Documentation
=============
The [syncthing
documentation](http://discourse.syncthing.net/category/documentation) is
on the discourse site.
License
=======
All documentation and protocol specifications are licensed
under the [Creative Commons Attribution 4.0 International
License](http://creativecommons.org/licenses/by/4.0/).
documentation](https://github.com/syncthing/syncthing/wiki/) is on the
Github wiki.
All code is licensed under the
[GPL](https://github.com/syncthing/syncthing/blob/master/LICENSE), v3 or

build.go

@@ -22,6 +22,7 @@ import (
"archive/zip"
"bytes"
"compress/gzip"
"crypto/md5"
"flag"
"fmt"
"io"
@@ -44,6 +45,7 @@ var (
goos string
noupgrade bool
version string
race bool
)
const minGoVersion = 1.3
@@ -65,16 +67,14 @@ func main() {
flag.StringVar(&goarch, "goarch", runtime.GOARCH, "GOARCH")
flag.StringVar(&goos, "goos", runtime.GOOS, "GOOS")
flag.BoolVar(&noupgrade, "no-upgrade", false, "Disable upgrade functionality")
flag.BoolVar(&noupgrade, "no-upgrade", noupgrade, "Disable upgrade functionality")
flag.StringVar(&version, "version", getVersion(), "Set compiled in version string")
flag.BoolVar(&race, "race", race, "Use race detector")
flag.Parse()
switch goarch {
case "386", "amd64", "armv5", "armv6", "armv7":
case "386", "amd64", "arm", "armv5", "armv6", "armv7":
break
case "arm":
log.Println("Invalid goarch \"arm\". Use one of \"armv5\", \"armv6\", \"armv7\".")
log.Fatalln("Note that producing a correct \"armv5\" binary requires a rebuilt stdlib.")
default:
log.Printf("Unknown goarch %q; proceed with caution!", goarch)
}
@@ -153,7 +153,7 @@ func checkRequiredGoVersion() {
// This is a standard go build. Verify that it's new enough.
f, err := strconv.ParseFloat(vs, 64)
if err != nil {
log.Printf("*** Could parse Go version out of %q.\n*** This isn't known to work, proceed on your own risk.", vs)
log.Printf("*** Couldn't parse Go version out of %q.\n*** This isn't known to work, proceed on your own risk.", vs)
return
}
if f < minGoVersion {
@@ -165,15 +165,15 @@ func checkRequiredGoVersion() {
}
func setup() {
runPrint("go", "get", "-v", "code.google.com/p/go.tools/cmd/cover")
runPrint("go", "get", "-v", "code.google.com/p/go.tools/cmd/vet")
runPrint("go", "get", "-v", "code.google.com/p/go.net/html")
runPrint("go", "get", "-v", "golang.org/x/tools/cmd/cover")
runPrint("go", "get", "-v", "golang.org/x/tools/cmd/vet")
runPrint("go", "get", "-v", "golang.org/x/net/html")
runPrint("go", "get", "-v", "github.com/tools/godep")
}
func test(pkg string) {
setBuildEnv()
runPrint("go", "test", "-short", "-timeout", "10s", pkg)
runPrint("go", "test", "-short", "-timeout", "60s", pkg)
}
func install(pkg string, tags []string) {
@@ -182,20 +182,38 @@ func install(pkg string, tags []string) {
if len(tags) > 0 {
args = append(args, "-tags", strings.Join(tags, ","))
}
if race {
args = append(args, "-race")
}
args = append(args, pkg)
setBuildEnv()
runPrint("go", args...)
}
func build(pkg string, tags []string) {
rmr("syncthing", "syncthing.exe")
binary := "syncthing"
if goos == "windows" {
binary += ".exe"
}
rmr(binary, binary+".md5")
args := []string{"build", "-ldflags", ldflags()}
if len(tags) > 0 {
args = append(args, "-tags", strings.Join(tags, ","))
}
if race {
args = append(args, "-race")
}
args = append(args, pkg)
setBuildEnv()
runPrint("go", args...)
// Create an md5 checksum of the binary, to be included in the archive for
// automatic upgrades.
err := md5File(binary)
if err != nil {
log.Fatal(err)
}
}
func buildTar() {
@@ -207,12 +225,17 @@ func buildTar() {
}
build("./cmd/syncthing", tags)
filename := name + ".tar.gz"
tarGz(filename, []archiveFile{
files := []archiveFile{
{"README.md", name + "/README.txt"},
{"LICENSE", name + "/LICENSE.txt"},
{"AUTHORS", name + "/AUTHORS.txt"},
{"syncthing", name + "/syncthing"},
})
{"syncthing.md5", name + "/syncthing.md5"},
}
for _, file := range listFiles("etc") {
files = append(files, archiveFile{file, name + "/" + file})
}
tarGz(filename, files)
log.Println(filename)
}
@@ -225,18 +248,34 @@ func buildZip() {
}
build("./cmd/syncthing", tags)
filename := name + ".zip"
zipFile(filename, []archiveFile{
files := []archiveFile{
{"README.md", name + "/README.txt"},
{"LICENSE", name + "/LICENSE.txt"},
{"AUTHORS", name + "/AUTHORS.txt"},
{"syncthing.exe", name + "/syncthing.exe"},
})
{"syncthing.exe.md5", name + "/syncthing.exe.md5"},
}
zipFile(filename, files)
log.Println(filename)
}
func listFiles(dir string) []string {
var res []string
filepath.Walk(dir, func(path string, fi os.FileInfo, err error) error {
if err != nil {
return err
}
if fi.Mode().IsRegular() {
res = append(res, path)
}
return nil
})
return res
}
func setBuildEnv() {
os.Setenv("GOOS", goos)
if strings.HasPrefix(goarch, "arm") {
if strings.HasPrefix(goarch, "armv") {
os.Setenv("GOARCH", "arm")
os.Setenv("GOARM", goarch[4:])
} else {
@@ -260,26 +299,24 @@ func assets() {
}
func xdr() {
for _, f := range []string{"internal/discover/packets", "internal/files/leveldb", "internal/protocol/message"} {
runPipe(f+"_xdr.go", "go", "run", "./Godeps/_workspace/src/github.com/calmh/xdr/cmd/genxdr/main.go", "--", f+".go")
}
runPrint("go", "generate", "./internal/discover", "./internal/files", "./internal/protocol")
}
func translate() {
os.Chdir("gui/lang")
runPipe("lang-en-new.json", "go", "run", "../../cmd/translate/main.go", "lang-en.json", "../index.html")
os.Chdir("gui/assets/lang")
runPipe("lang-en-new.json", "go", "run", "../../../cmd/translate/main.go", "lang-en.json", "../../index.html")
os.Remove("lang-en.json")
err := os.Rename("lang-en-new.json", "lang-en.json")
if err != nil {
log.Fatal(err)
}
os.Chdir("../..")
os.Chdir("../../..")
}
func transifex() {
os.Chdir("gui/lang")
runPrint("go", "run", "../../cmd/transifexdl/main.go")
os.Chdir("../..")
os.Chdir("gui/assets/lang")
runPrint("go", "run", "../../../cmd/transifexdl/main.go")
os.Chdir("../../..")
assets()
}
@@ -301,9 +338,6 @@ func ldflags() string {
b.WriteString(fmt.Sprintf(" -X main.BuildUser %s", buildUser()))
b.WriteString(fmt.Sprintf(" -X main.BuildHost %s", buildHost()))
b.WriteString(fmt.Sprintf(" -X main.BuildEnv %s", buildEnvironment()))
if strings.HasPrefix(goarch, "arm") {
b.WriteString(fmt.Sprintf(" -X main.GoArchExtra %s", goarch[3:]))
}
return b.String()
}
@@ -535,3 +569,29 @@ func zipFile(out string, files []archiveFile) {
log.Fatal(err)
}
}
func md5File(file string) error {
fd, err := os.Open(file)
if err != nil {
return err
}
defer fd.Close()
h := md5.New()
_, err = io.Copy(h, fd)
if err != nil {
return err
}
out, err := os.Create(file + ".md5")
if err != nil {
return err
}
_, err = fmt.Fprintf(out, "%x\n", h.Sum(nil))
if err != nil {
return err
}
return out.Close()
}
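
The archive now ships a .md5 sidecar next to the binary for the automatic upgrade check. As a rough illustration (not part of the commit), verifying such a sidecar could look like this:

    package main

    import (
        "bytes"
        "crypto/md5"
        "fmt"
        "io"
        "io/ioutil"
        "log"
        "os"
    )

    func verifyMD5(file string) error {
        // The sidecar holds the hex digest plus a trailing newline.
        want, err := ioutil.ReadFile(file + ".md5")
        if err != nil {
            return err
        }
        fd, err := os.Open(file)
        if err != nil {
            return err
        }
        defer fd.Close()
        h := md5.New()
        if _, err := io.Copy(h, fd); err != nil {
            return err
        }
        got := fmt.Sprintf("%x", h.Sum(nil))
        if got != string(bytes.TrimSpace(want)) {
            return fmt.Errorf("checksum mismatch for %s", file)
        }
        return nil
    }

    func main() {
        if err := verifyMD5("syncthing"); err != nil {
            log.Fatal(err)
        }
        log.Println("checksum OK")
    }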


@@ -2,6 +2,8 @@
set -euo pipefail
IFS=$'\n\t'
DOCKERIMGV=1.4-4
case "${1:-default}" in
default)
go run build.go
@@ -71,7 +73,7 @@ case "${1:-default}" in
;;
test-cov)
ulimit -t 60 &>/dev/null || true
ulimit -t 600 &>/dev/null || true
ulimit -d 512000 &>/dev/null || true
ulimit -m 512000 &>/dev/null || true
@@ -102,6 +104,35 @@ case "${1:-default}" in
fi
;;
docker-init)
docker build -q -t syncthing/build:$DOCKERIMGV docker
;;
docker-all)
docker run --rm -h syncthing-builder -u $(id -u) -t \
-v $(pwd):/go/src/github.com/syncthing/syncthing \
-w /go/src/github.com/syncthing/syncthing \
syncthing/build:$DOCKERIMGV \
sh -c './build.sh clean \
&& go vet ./cmd/... ./internal/... \
&& ( golint ./cmd/... ; golint ./internal/... ) | egrep -v "comment on exported|should have comment" \
&& ./build.sh all \
&& STTRACE=all ./build.sh test-cov'
;;
docker-test)
docker run --rm -h syncthing-builder -u $(id -u) -t \
-v $(pwd):/go/src/github.com/syncthing/syncthing \
-w /go/src/github.com/syncthing/syncthing \
syncthing/build:$DOCKERIMGV \
sh -euxc './build.sh clean \
&& go run build.go -race \
&& export GOPATH=$(pwd)/Godeps/_workspace:$GOPATH \
&& cd test \
&& go test -tags integration -v -timeout 60m -short \
&& git clean -fxd .'
;;
*)
echo "Unknown build command $1"
;;

changelog.sh Executable file

@@ -0,0 +1,9 @@
#!/bin/bash
since="$1"
if [[ -z $since ]] ; then
since="$(git describe --abbrev=0 HEAD^).."
fi
git log --pretty=format:'* %s (%h, @%aN)' "$since" | egrep 'fixes #\d|ref #\d'


@@ -1,7 +1,7 @@
#!/bin/bash
missing-contribs() {
for email in $(git log --format=%ae master | sort | uniq) ; do
missing-authors() {
for email in $(git log --format=%ae HEAD | sort | uniq) ; do
grep -q "$email" AUTHORS || echo $email
done
}
@@ -13,32 +13,37 @@ no-docs-typos() {
grep -v f2459ef3319b2f060dbcdacd0c35a1788a94b8bd |\
grep -v b61f418bf2d1f7d5a9d7088a20a2a448e5e66801 |\
grep -v f0621207e3953711f9ab86d99724f1d0faac45b1 |\
grep -v f1120d7aa936c0658429edef0037792520b46334
grep -v f1120d7aa936c0658429edef0037792520b46334 |\
grep -v a9339d0627fff439879d157c75077f02c9fac61b |\
grep -v 254c63763a3ad42fd82259f1767db526cff94a14 |\
grep -v 4b76ec40c07078beaa2c5e250ed7d9bd6276a718
}
print-missing-contribs() {
for email in $(missing-contribs) ; do
print-missing-authors() {
for email in $(missing-authors) ; do
git log --author="$email" --format="%H %ae %s" | no-docs-typos
done
}
print-missing-copyright() {
find . -name \*.go | xargs grep -L 'Copyright (C)' | grep -v Godeps
find . -name \*.go | xargs egrep -L 'Copyright \(C\)|automatically generated' | grep -v Godeps | grep -v internal/auto/
}
print-line-blame() {
for f in $(find . -name \*.go | grep -v Godep) gui/app.js gui/index.html ; do
git blame --line-porcelain $f | grep author-mail
done | sort | uniq -c | sort -n
}
echo Author emails missing in CONTRIBUTORS:
print-missing-contribs
echo
authors=$(print-missing-authors)
if [[ ! -z $authors ]] ; then
echo '***'
echo Author emails not in AUTHORS:
echo $authors
echo '***'
exit 1
fi
echo Files missing copyright notice:
print-missing-copyright
echo
echo Blame lines per author:
print-line-blame
copy=$(print-missing-copyright)
if [[ ! -z $copy ]] ; then
echo ***
echo Files missing copyright notice:
echo $copy
echo ***
exit 1
fi


@@ -26,6 +26,7 @@ import (
"io"
"os"
"path/filepath"
"strings"
"text/template"
)
@@ -65,6 +66,11 @@ func walkerFor(basePath string) filepath.WalkFunc {
return err
}
if strings.HasPrefix(filepath.Base(name), ".") {
// Skip dotfiles
return nil
}
if info.Mode().IsRegular() {
fd, err := os.Open(name)
if err != nil {

cmd/stcompdirs/main.go Normal file

@@ -0,0 +1,177 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This program is free software: you can redistribute it and/or modify it
// under the terms of the GNU General Public License as published by the Free
// Software Foundation, either version 3 of the License, or (at your option)
// any later version.
//
// This program is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
// FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
// more details.
//
// You should have received a copy of the GNU General Public License along
// with this program. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"crypto/md5"
"errors"
"flag"
"fmt"
"io"
"log"
"os"
"path/filepath"
"github.com/syncthing/syncthing/internal/symlinks"
)
func main() {
flag.Parse()
log.Println(compareDirectories(flag.Args()...))
}
// Compare a number of directories. Returns nil if the contents are identical,
// otherwise an error describing the first found difference.
func compareDirectories(dirs ...string) error {
chans := make([]chan fileInfo, len(dirs))
for i := range chans {
chans[i] = make(chan fileInfo)
}
errcs := make([]chan error, len(dirs))
abort := make(chan struct{})
for i := range dirs {
errcs[i] = startWalker(dirs[i], chans[i], abort)
}
res := make([]fileInfo, len(dirs))
for {
numDone := 0
for i := range chans {
fi, ok := <-chans[i]
if !ok {
err, hasError := <-errcs[i]
if hasError {
close(abort)
return err
}
numDone++
}
res[i] = fi
}
for i := 1; i < len(res); i++ {
if res[i] != res[0] {
close(abort)
if res[i].name < res[0].name {
return fmt.Errorf("%s missing %v (present in %s)", dirs[0], res[i], dirs[i])
} else if res[i].name > res[0].name {
return fmt.Errorf("%s missing %v (present in %s)", dirs[i], res[0], dirs[0])
}
return fmt.Errorf("Mismatch; %v (%s) != %v (%s)", res[i], dirs[i], res[0], dirs[0])
}
}
if numDone == len(dirs) {
return nil
}
}
}
type fileInfo struct {
name string
mode os.FileMode
mod int64
hash [16]byte
}
func (f fileInfo) String() string {
return fmt.Sprintf("%s %04o %d %x", f.name, f.mode, f.mod, f.hash)
}
func startWalker(dir string, res chan<- fileInfo, abort <-chan struct{}) chan error {
walker := func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}
rn, _ := filepath.Rel(dir, path)
if rn == "." || rn == ".stfolder" {
return nil
}
if rn == ".stversions" {
return filepath.SkipDir
}
var f fileInfo
if info.Mode()&os.ModeSymlink != 0 {
f = fileInfo{
name: rn,
mode: os.ModeSymlink,
}
tgt, _, err := symlinks.Read(path)
if err != nil {
return err
}
h := md5.New()
h.Write([]byte(tgt))
hash := h.Sum(nil)
copy(f.hash[:], hash)
} else if info.IsDir() {
f = fileInfo{
name: rn,
mode: info.Mode(),
// hash and modtime zero for directories
}
} else {
f = fileInfo{
name: rn,
mode: info.Mode(),
mod: info.ModTime().Unix(),
}
sum, err := md5file(path)
if err != nil {
return err
}
f.hash = sum
}
select {
case res <- f:
return nil
case <-abort:
return errors.New("abort")
}
}
errc := make(chan error)
go func() {
err := filepath.Walk(dir, walker)
close(res)
if err != nil {
errc <- err
}
close(errc)
}()
return errc
}
func md5file(fname string) (hash [16]byte, err error) {
f, err := os.Open(fname)
if err != nil {
return
}
defer f.Close()
h := md5.New()
io.Copy(h, f)
hb := h.Sum(nil)
copy(hash[:], hb)
return
}
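
The comparison walks all directories in lock step: each walker feeds a channel, one entry is read from every channel per iteration, and the first divergence aborts the whole walk via the shared abort channel. The same fan-in pattern in miniature, for two pre-sorted streams (the item names are invented):

    package main

    import "fmt"

    func stream(items []string) <-chan string {
        c := make(chan string)
        go func() {
            for _, it := range items {
                c <- it
            }
            close(c)
        }()
        return c
    }

    func main() {
        a := stream([]string{"bar", "foo"})
        b := stream([]string{"bar", "frob"})
        for {
            x, okA := <-a
            y, okB := <-b
            if !okA && !okB {
                fmt.Println("identical")
                return
            }
            // A missing entry shows up as the zero value and mismatches too.
            if x != y {
                fmt.Printf("mismatch: %q != %q\n", x, y)
                return
            }
        }
    }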

cmd/stfileinfo/main.go Normal file

@@ -0,0 +1,89 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This program is free software: you can redistribute it and/or modify it
// under the terms of the GNU General Public License as published by the Free
// Software Foundation, either version 3 of the License, or (at your option)
// any later version.
//
// This program is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
// FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
// more details.
//
// You should have received a copy of the GNU General Public License along
// with this program. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"flag"
"log"
"os"
"path/filepath"
"github.com/syncthing/syncthing/internal/protocol"
"github.com/syncthing/syncthing/internal/scanner"
)
func main() {
log.SetFlags(0)
log.SetOutput(os.Stdout)
standardBlocks := flag.Bool("s", false, "Use standard block size")
flag.Parse()
path := flag.Arg(0)
if path == "" {
log.Fatal("Need one argument: path to check")
}
log.Println("File:")
log.Println(" ", filepath.Clean(path))
log.Println()
fi, err := os.Lstat(path)
if err != nil {
log.Fatal(err)
}
log.Println("Lstat:")
log.Printf(" Size: %d bytes", fi.Size())
log.Printf(" Mode: 0%o", fi.Mode())
log.Printf(" Time: %v (%d)", fi.ModTime(), fi.ModTime().Unix())
log.Println()
if !fi.Mode().IsDir() && !fi.Mode().IsRegular() {
fi, err = os.Stat(path)
if err != nil {
log.Fatal(err)
}
log.Println("Stat:")
log.Printf(" Size: %d bytes", fi.Size())
log.Printf(" Mode: 0%o", fi.Mode())
log.Printf(" Time: %v (%d)", fi.ModTime(), fi.ModTime().Unix())
log.Println()
}
if fi.Mode().IsRegular() {
log.Println("Blocks:")
fd, err := os.Open(path)
if err != nil {
log.Fatal(err)
}
blockSize := int(fi.Size())
if *standardBlocks || blockSize < protocol.BlockSize {
blockSize = protocol.BlockSize
}
bs, err := scanner.Blocks(fd, blockSize, fi.Size())
if err != nil {
log.Fatal(err)
}
for _, b := range bs {
log.Println(" ", b)
}
}
}


@@ -31,7 +31,6 @@ import (
"sync"
"time"
"code.google.com/p/go.crypto/bcrypt"
"github.com/calmh/logger"
"github.com/syncthing/syncthing/internal/auto"
"github.com/syncthing/syncthing/internal/config"
@@ -42,6 +41,7 @@ import (
"github.com/syncthing/syncthing/internal/protocol"
"github.com/syncthing/syncthing/internal/upgrade"
"github.com/vitrun/qart/qr"
"golang.org/x/crypto/bcrypt"
)
type guiError struct {
@@ -70,7 +70,16 @@ func startGUI(cfg config.GUIConfiguration, assetDir string, m *model.Model) erro
if err != nil {
l.Infoln("Loading HTTPS certificate:", err)
l.Infoln("Creating new HTTPS certificate")
newCertificate(confDir, "https-")
// When generating the HTTPS certificate, use the system host name by
// default. If that isn't available, use the "syncthing" default.
var name string
name, err = os.Hostname()
if err != nil {
name = tlsDefaultCommonName
}
newCertificate(confDir, "https-", name)
cert, err = loadCert(confDir, "https-")
}
if err != nil {
@@ -78,7 +87,20 @@ func startGUI(cfg config.GUIConfiguration, assetDir string, m *model.Model) erro
}
tlsCfg := &tls.Config{
Certificates: []tls.Certificate{cert},
ServerName: "syncthing",
MinVersion: tls.VersionTLS10, // No SSLv3
CipherSuites: []uint16{
// No RC4
tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
tls.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,
tls.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
tls.TLS_RSA_WITH_AES_128_CBC_SHA,
tls.TLS_RSA_WITH_AES_256_CBC_SHA,
tls.TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
tls.TLS_RSA_WITH_3DES_EDE_CBC_SHA,
},
}
rawListener, err := net.Listen("tcp", cfg.Address)
@@ -108,6 +130,7 @@ func startGUI(cfg config.GUIConfiguration, assetDir string, m *model.Model) erro
getRestMux.HandleFunc("/rest/upgrade", restGetUpgrade)
getRestMux.HandleFunc("/rest/version", restGetVersion)
getRestMux.HandleFunc("/rest/stats/device", withModel(m, restGetDeviceStats))
getRestMux.HandleFunc("/rest/stats/folder", withModel(m, restGetFolderStats))
// Debug endpoints, not for general use
getRestMux.HandleFunc("/rest/debug/peerCompletion", withModel(m, restGetPeerCompletion))
@@ -126,6 +149,7 @@ func startGUI(cfg config.GUIConfiguration, assetDir string, m *model.Model) erro
postRestMux.HandleFunc("/rest/shutdown", restPostShutdown)
postRestMux.HandleFunc("/rest/upgrade", restPostUpgrade)
postRestMux.HandleFunc("/rest/scan", withModel(m, restPostScan))
postRestMux.HandleFunc("/rest/bump", withModel(m, restPostBump))
// A handler that splits requests between the two above and disables
// caching
@@ -158,7 +182,7 @@ func startGUI(cfg config.GUIConfiguration, assetDir string, m *model.Model) erro
srv := http.Server{
Handler: handler,
ReadTimeout: 2 * time.Second,
ReadTimeout: 10 * time.Second,
}
go func() {
@@ -291,10 +315,16 @@ func restGetNeed(m *model.Model, w http.ResponseWriter, r *http.Request) {
var qs = r.URL.Query()
var folder = qs.Get("folder")
files := m.NeedFolderFilesLimited(folder, 100, 2500) // max 100 files or 2500 blocks
progress, queued, rest := m.NeedFolderFiles(folder, 100)
// Convert the struct to a more loose structure, and inject the size.
output := map[string][]map[string]interface{}{
"progress": toNeedSlice(progress),
"queued": toNeedSlice(queued),
"rest": toNeedSlice(rest),
}
w.Header().Set("Content-Type", "application/json; charset=utf-8")
json.NewEncoder(w).Encode(files)
json.NewEncoder(w).Encode(output)
}
func restGetConnections(m *model.Model, w http.ResponseWriter, r *http.Request) {
@@ -309,6 +339,12 @@ func restGetDeviceStats(m *model.Model, w http.ResponseWriter, r *http.Request)
json.NewEncoder(w).Encode(res)
}
func restGetFolderStats(m *model.Model, w http.ResponseWriter, r *http.Request) {
var res = m.FolderStatistics()
w.Header().Set("Content-Type", "application/json; charset=utf-8")
json.NewEncoder(w).Encode(res)
}
func restGetConfig(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json; charset=utf-8")
json.NewEncoder(w).Encode(cfg.Raw())
@@ -321,42 +357,44 @@ func restPostConfig(m *model.Model, w http.ResponseWriter, r *http.Request) {
l.Warnln("decoding posted config:", err)
http.Error(w, err.Error(), 500)
return
} else {
if newCfg.GUI.Password != cfg.GUI().Password {
if newCfg.GUI.Password != "" {
hash, err := bcrypt.GenerateFromPassword([]byte(newCfg.GUI.Password), 0)
if err != nil {
l.Warnln("bcrypting password:", err)
http.Error(w, err.Error(), 500)
return
} else {
newCfg.GUI.Password = string(hash)
}
}
}
// Start or stop usage reporting as appropriate
if curAcc := cfg.Options().URAccepted; newCfg.Options.URAccepted > curAcc {
// UR was enabled
newCfg.Options.URAccepted = usageReportVersion
err := sendUsageReport(m)
if err != nil {
l.Infoln("Usage report:", err)
}
go usageReportingLoop(m)
} else if newCfg.Options.URAccepted < curAcc {
// UR was disabled
newCfg.Options.URAccepted = -1
stopUsageReporting()
}
// Activate and save
configInSync = !config.ChangeRequiresRestart(cfg.Raw(), newCfg)
cfg.Replace(newCfg)
cfg.Save()
}
if newCfg.GUI.Password != cfg.GUI().Password {
if newCfg.GUI.Password != "" {
hash, err := bcrypt.GenerateFromPassword([]byte(newCfg.GUI.Password), 0)
if err != nil {
l.Warnln("bcrypting password:", err)
http.Error(w, err.Error(), 500)
return
}
newCfg.GUI.Password = string(hash)
}
}
// Start or stop usage reporting as appropriate
if curAcc := cfg.Options().URAccepted; newCfg.Options.URAccepted > curAcc {
// UR was enabled
newCfg.Options.URAccepted = usageReportVersion
newCfg.Options.URUniqueID = randomString(8)
err := sendUsageReport(m)
if err != nil {
l.Infoln("Usage report:", err)
}
go usageReportingLoop(m)
} else if newCfg.Options.URAccepted < curAcc {
// UR was disabled
newCfg.Options.URAccepted = -1
newCfg.Options.URUniqueID = ""
stopUsageReporting()
}
// Activate and save
configInSync = !config.ChangeRequiresRestart(cfg.Raw(), newCfg)
cfg.Replace(newCfg)
cfg.Save()
}
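
The import move aside, the hashing logic is unchanged: a non-empty new GUI password is bcrypt-hashed before it is stored. For reference, a minimal round trip with the relocated package:

    package main

    import (
        "fmt"

        "golang.org/x/crypto/bcrypt"
    )

    func main() {
        // Cost 0 falls back to bcrypt.DefaultCost, as in the handler above.
        hash, err := bcrypt.GenerateFromPassword([]byte("hunter2"), 0)
        if err != nil {
            panic(err)
        }
        // A nil result means the password matched the stored hash.
        fmt.Println(bcrypt.CompareHashAndPassword(hash, []byte("hunter2")))
    }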
func restGetConfigInSync(w http.ResponseWriter, r *http.Request) {
@@ -583,7 +621,7 @@ func restPostUpgrade(w http.ResponseWriter, r *http.Request) {
}
if upgrade.CompareVersions(rel.Tag, Version) == 1 {
err = upgrade.UpgradeTo(rel, GoArchExtra)
err = upgrade.To(rel)
if err != nil {
l.Warnln("upgrading:", err)
http.Error(w, err.Error(), 500)
@@ -606,6 +644,14 @@ func restPostScan(m *model.Model, w http.ResponseWriter, r *http.Request) {
}
}
func restPostBump(m *model.Model, w http.ResponseWriter, r *http.Request) {
qs := r.URL.Query()
folder := qs.Get("folder")
file := qs.Get("file")
m.BringToFront(folder, file)
restGetNeed(m, w, r)
}
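
Hitting the new endpoint takes an ordinary POST; a hedged client-side sketch, assuming API-key authentication is configured (the address, key, folder and file below are placeholders):

    package main

    import (
        "fmt"
        "io/ioutil"
        "log"
        "net/http"
    )

    func main() {
        req, err := http.NewRequest("POST",
            "http://localhost:8080/rest/bump?folder=default&file=some/file.txt", nil)
        if err != nil {
            log.Fatal(err)
        }
        req.Header.Set("X-API-Key", "abc123") // placeholder API key
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, _ := ioutil.ReadAll(resp.Body)
        fmt.Println(resp.Status)
        fmt.Println(string(body)) // the updated need listing, as in restGetNeed
    }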
func getQR(w http.ResponseWriter, r *http.Request) {
var qs = r.URL.Query()
var text = qs.Get("text")
@@ -658,7 +704,7 @@ func restGetAutocompleteDirectory(w http.ResponseWriter, r *http.Request) {
for _, subdirectory := range subdirectories {
info, err := os.Stat(subdirectory)
if err == nil && info.IsDir() {
ret = append(ret, subdirectory)
ret = append(ret, subdirectory+pathSeparator)
if len(ret) > 9 {
break
}
@@ -731,3 +777,19 @@ func mimeTypeForFile(file string) string {
return mime.TypeByExtension(ext)
}
}
func toNeedSlice(files []protocol.FileInfoTruncated) []map[string]interface{} {
output := make([]map[string]interface{}, len(files))
for i, file := range files {
output[i] = map[string]interface{}{
"Name": file.Name,
"Flags": file.Flags,
"Modified": file.Modified,
"Version": file.Version,
"LocalVersion": file.LocalVersion,
"NumBlocks": file.NumBlocks,
"Size": protocol.BlocksToSize(file.NumBlocks),
}
}
return output
}

cmd/syncthing/gui_auth.go Executable file → Normal file

@@ -24,8 +24,8 @@ import (
"sync"
"time"
"code.google.com/p/go.crypto/bcrypt"
"github.com/syncthing/syncthing/internal/config"
"golang.org/x/crypto/bcrypt"
)
var (


@@ -17,8 +17,6 @@ package main
import (
"bufio"
"crypto/rand"
"encoding/base64"
"fmt"
"net/http"
"os"
@@ -88,7 +86,7 @@ func validCsrfToken(token string) bool {
}
func newCsrfToken() string {
token := randomString(30)
token := randomString(32)
csrfMut.Lock()
csrfTokens = append(csrfTokens, token)
@@ -140,13 +138,3 @@ func loadCsrfTokens() {
csrfTokens = append(csrfTokens, s.Text())
}
}
func randomString(len int) string {
bs := make([]byte, len)
_, err := rand.Reader.Read(bs)
if err != nil {
l.Fatalln(err)
}
return base64.StdEncoding.EncodeToString(bs)
}

cmd/syncthing/gui_windows.go Executable file → Normal file

@@ -21,7 +21,6 @@ import (
"fmt"
"io"
"log"
"math/rand"
"net"
"net/http"
_ "net/http/pprof"
@@ -36,7 +35,6 @@ import (
"strings"
"time"
"code.google.com/p/go.crypto/bcrypt"
"github.com/calmh/logger"
"github.com/juju/ratelimit"
"github.com/syncthing/syncthing/internal/config"
@@ -46,10 +44,12 @@ import (
"github.com/syncthing/syncthing/internal/model"
"github.com/syncthing/syncthing/internal/osutil"
"github.com/syncthing/syncthing/internal/protocol"
"github.com/syncthing/syncthing/internal/symlinks"
"github.com/syncthing/syncthing/internal/upgrade"
"github.com/syncthing/syncthing/internal/upnp"
"github.com/syndtr/goleveldb/leveldb"
"github.com/syndtr/goleveldb/leveldb/opt"
"golang.org/x/crypto/bcrypt"
)
var (
@@ -62,7 +62,6 @@ var (
IsRelease bool
IsBeta bool
LongVersion string
GoArchExtra string // "", "v5", "v6", "v7"
)
const (
@@ -103,10 +102,10 @@ func init() {
}
var (
cfg *config.ConfigWrapper
cfg *config.Wrapper
myID protocol.DeviceID
confDir string
logFlags int = log.Ltime
logFlags = log.Ltime
writeRateLimit *ratelimit.Bucket
readRateLimit *ratelimit.Bucket
stop = make(chan int)
@@ -171,24 +170,25 @@ are mostly useful for developers. Use with care.
STPERFSTATS Write running performance statistics to perf-$pid.csv. Not
supported on Windows.
STNOUPGRADE Disable automatic upgrades.
GOMAXPROCS Set the maximum number of CPU cores to use. Defaults to all
available CPU cores.`
)
func init() {
rand.Seed(time.Now().UnixNano())
}
// Command line and environment options
var (
reset bool
showVersion bool
doUpgrade bool
doUpgradeCheck bool
upgradeTo string
noBrowser bool
noConsole bool
generateDir string
logFile string
noRestart = os.Getenv("STNORESTART") != ""
noUpgrade = os.Getenv("STNOUPGRADE") != ""
guiAddress = os.Getenv("STGUIADDRESS") // legacy
guiAuthentication = os.Getenv("STGUIAUTH") // legacy
guiAPIKey = os.Getenv("STGUIAPIKEY") // legacy
@@ -211,6 +211,9 @@ func main() {
logFile = filepath.Join(defConfDir, "syncthing.log")
flag.StringVar(&logFile, "logfile", logFile, "Log file name (blank for stdout)")
// We also add an option to hide the console window
flag.BoolVar(&noConsole, "no-console", false, "Hide console window")
}
flag.StringVar(&generateDir, "generate", "", "Generate key and config in specified dir, then exit")
@@ -225,10 +228,15 @@ func main() {
flag.BoolVar(&doUpgrade, "upgrade", false, "Perform upgrade")
flag.BoolVar(&doUpgradeCheck, "upgrade-check", false, "Check for available upgrade")
flag.BoolVar(&showVersion, "version", false, "Show version")
flag.StringVar(&upgradeTo, "upgrade-to", upgradeTo, "Force upgrade directly from specified URL")
flag.Usage = usageFor(flag.CommandLine, usage, fmt.Sprintf(extraUsage, defConfDir))
flag.Parse()
if noConsole {
osutil.HideConsole()
}
if confDir == "" {
// Not set as default above because the string can be really long.
confDir = defConfDir
@@ -255,19 +263,22 @@ func main() {
}
info, err := os.Stat(dir)
if err != nil {
l.Fatalln("generate:", err)
}
if !info.IsDir() {
if err == nil && !info.IsDir() {
l.Fatalln(dir, "is not a directory")
}
if err != nil && os.IsNotExist(err) {
err = os.MkdirAll(dir, 0700)
if err != nil {
l.Fatalln("generate:", err)
}
}
cert, err := loadCert(dir, "")
if err == nil {
l.Warnln("Key exists; will not overwrite.")
l.Infoln("Device ID:", protocol.NewDeviceID(cert.Certificate[0]))
} else {
newCertificate(dir, "")
newCertificate(dir, "", tlsDefaultCommonName)
cert, err = loadCert(dir, "")
myID = protocol.NewDeviceID(cert.Certificate[0])
if err != nil {
@@ -306,6 +317,15 @@ func main() {
// Ensure that our home directory exists.
ensureDir(confDir, 0700)
if upgradeTo != "" {
err := upgrade.ToURL(upgradeTo)
if err != nil {
l.Fatalln("Upgrade:", err) // exits 1
}
l.Okln("Upgraded from", upgradeTo)
return
}
if doUpgrade || doUpgradeCheck {
rel, err := upgrade.LatestRelease(IsBeta)
if err != nil {
@@ -321,20 +341,19 @@ func main() {
if doUpgrade {
// Use leveldb database locks to protect against concurrent upgrades
_, err = leveldb.OpenFile(filepath.Join(confDir, "index"), &opt.Options{CachedOpenFiles: 100})
_, err = leveldb.OpenFile(filepath.Join(confDir, "index"), &opt.Options{OpenFilesCacheCapacity: 100})
if err != nil {
l.Fatalln("Cannot upgrade, database seems to be locked. Is another copy of Syncthing already running?")
}
err = upgrade.UpgradeTo(rel, GoArchExtra)
err = upgrade.To(rel)
if err != nil {
l.Fatalln("Upgrade:", err) // exits 1
}
l.Okf("Upgraded to %q", rel.Tag)
return
} else {
return
}
return
}
if reset {
@@ -365,13 +384,17 @@ func syncthingMain() {
// Ensure that we have a certificate and key.
cert, err = loadCert(confDir, "")
if err != nil {
newCertificate(confDir, "")
newCertificate(confDir, "", tlsDefaultCommonName)
cert, err = loadCert(confDir, "")
if err != nil {
l.Fatalln("load cert:", err)
}
}
// We reinitialize the predictable RNG with our device ID, to get a
// sequence that is always the same but unique to this syncthing instance.
predictableRandom.Seed(seedFromBytes(cert.Certificate[0]))
myID = protocol.NewDeviceID(cert.Certificate[0])
l.SetPrefix(fmt.Sprintf("[%s] ", myID.String()[:5]))
@@ -436,7 +459,6 @@ func syncthingMain() {
tlsCfg := &tls.Config{
Certificates: []tls.Certificate{cert},
NextProtos: []string{"bep/1.0"},
ServerName: myID.String(),
ClientAuth: tls.RequestClientCert,
SessionTicketsDisabled: true,
InsecureSkipVerify: true,
@@ -456,6 +478,10 @@ func syncthingMain() {
opts := cfg.Options()
if !opts.SymlinksEnabled {
symlinks.Supported = false
}
if opts.MaxSendKbps > 0 {
writeRateLimit = ratelimit.NewBucketWithRate(float64(1000*opts.MaxSendKbps), int64(5*1000*opts.MaxSendKbps))
}
@@ -463,7 +489,7 @@ func syncthingMain() {
readRateLimit = ratelimit.NewBucketWithRate(float64(1000*opts.MaxRecvKbps), int64(5*1000*opts.MaxRecvKbps))
}
db, err := leveldb.OpenFile(filepath.Join(confDir, "index"), &opt.Options{CachedOpenFiles: 100})
db, err := leveldb.OpenFile(filepath.Join(confDir, "index"), &opt.Options{OpenFilesCacheCapacity: 100})
if err != nil {
l.Fatalln("Cannot open database:", err, "- Is another copy of Syncthing already running?")
}
@@ -553,9 +579,17 @@ func syncthingMain() {
if opts.URAccepted > 0 && opts.URAccepted < usageReportVersion {
l.Infoln("Anonymous usage report has changed; revoking acceptance")
opts.URAccepted = 0
opts.URUniqueID = ""
cfg.SetOptions(opts)
}
if opts.URAccepted >= usageReportVersion {
if opts.URUniqueID == "" {
// Previously the ID was generated from the node ID. We now need
// to generate a new one.
opts.URUniqueID = randomString(8)
cfg.SetOptions(opts)
cfg.Save()
}
go usageReportingLoop(m)
go func() {
time.Sleep(10 * time.Minute)
@@ -571,7 +605,9 @@ func syncthingMain() {
}
if opts.AutoUpgradeIntervalH > 0 {
if IsRelease {
if noUpgrade {
l.Infof("No automatic upgrades; STNOUPGRADE environment variable defined.")
} else if IsRelease {
go autoUpgrade()
} else {
l.Infof("No automatic upgrades; %s is not a relase version.", Version)
@@ -587,7 +623,7 @@ func syncthingMain() {
os.Exit(code)
}
func setupGUI(cfg *config.ConfigWrapper, m *model.Model) {
func setupGUI(cfg *config.Wrapper, m *model.Model) {
opts := cfg.Options()
guiCfg := overrideGUIConfig(cfg.GUI(), guiAddress, guiAuthentication, guiAPIKey)
@@ -628,7 +664,7 @@ func setupGUI(cfg *config.ConfigWrapper, m *model.Model) {
}
}
func sanityCheckFolders(cfg *config.ConfigWrapper, m *model.Model) {
func sanityCheckFolders(cfg *config.Wrapper, m *model.Model) {
nextFolder:
for id, folder := range cfg.Folders() {
if folder.Invalid != "" {
@@ -733,7 +769,7 @@ func setupUPnP() {
if len(igds) > 0 {
// Configure the first discovered IGD only. This is a work-around until we have a better mechanism
// for handling multiple IGDs, which will require changes to the global discovery service
igd = igds[0]
igd = &igds[0]
externalPort = setupExternalPort(igd, port)
if externalPort == 0 {
@@ -757,12 +793,9 @@ func setupExternalPort(igd *upnp.IGD, port int) int {
return 0
}
// We seed the random number generator with the node ID to get a
// repeatable sequence of random external ports.
rnd := rand.NewSource(certSeed(cert.Certificate[0]))
for i := 0; i < 10; i++ {
r := 1024 + int(rnd.Int63()%(65535-1024))
err := igd.AddPortMapping(upnp.TCP, r, port, "syncthing", cfg.Options().UPnPLease*60)
r := 1024 + predictableRandom.Intn(65535-1024)
err := igd.AddPortMapping(upnp.TCP, r, port, fmt.Sprintf("syncthing-%d", r), cfg.Options().UPnPLease*60)
if err == nil {
return r
}
@@ -784,7 +817,7 @@ func renewUPnP(port int) {
if len(igds) > 0 {
// Configure the first discovered IGD only. This is a work-around until we have a better mechanism
// for handling multiple IGDs, which will require changes to the global discovery service
igd = igds[0]
igd = &igds[0]
} else {
if debugNet {
l.Debugln("Failed to discover IGD during UPnP port mapping renewal.")
@@ -816,7 +849,7 @@ func renewUPnP(port int) {
if forwardedPort != 0 {
externalPort = forwardedPort
discoverer.StopGlobal()
discoverer.StartGlobal(opts.GlobalAnnServer, uint16(forwardedPort))
discoverer.StartGlobal(opts.GlobalAnnServers, uint16(forwardedPort))
if debugNet {
l.Debugf("Updated UPnP port mapping for external port %d on device %s.", forwardedPort, igd.FriendlyIdentifier())
}
@@ -827,6 +860,17 @@ func renewUPnP(port int) {
}
func resetFolders() {
confDir, err := osutil.ExpandTilde(confDir)
if err != nil {
log.Fatal(err)
}
cfgFile := filepath.Join(confDir, "config.xml")
cfg, err := config.Load(cfgFile, myID)
if err != nil {
log.Fatal(err)
}
suffix := fmt.Sprintf(".syncthing-reset-%d", time.Now().UnixNano())
for _, folder := range cfg.Folders() {
if _, err := os.Stat(folder.Path); err == nil {
@@ -890,7 +934,7 @@ next:
// the certificate and used another name.
certName := deviceCfg.CertName
if certName == "" {
certName = "syncthing"
certName = tlsDefaultCommonName
}
err := remoteCert.VerifyHostname(certName)
if err != nil {
@@ -904,12 +948,12 @@ next:
// If rate limiting is set, we wrap the connection in a
// limiter.
var wr io.Writer = conn
wr := io.Writer(conn)
if writeRateLimit != nil {
wr = &limitedWriter{conn, writeRateLimit}
}
var rd io.Reader = conn
rd := io.Reader(conn)
if readRateLimit != nil {
rd = &limitedReader{conn, readRateLimit}
}
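
limitedWriter and limitedReader wrap the connection in the token bucket; their bodies are not part of this diff, but the idea is roughly the following (a sketch, not the actual implementation):

    package main

    import (
        "io"
        "os"

        "github.com/juju/ratelimit"
    )

    // limitedWriter blocks until the bucket grants enough tokens, then writes.
    type limitedWriter struct {
        w      io.Writer
        bucket *ratelimit.Bucket
    }

    func (w *limitedWriter) Write(buf []byte) (int, error) {
        w.bucket.Wait(int64(len(buf)))
        return w.w.Write(buf)
    }

    func main() {
        lw := &limitedWriter{
            w:      os.Stdout,
            bucket: ratelimit.NewBucketWithRate(1024, 1024), // ~1 KiB/s
        }
        io.WriteString(lw, "throttled hello\n")
    }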
@@ -931,11 +975,16 @@ next:
}
}
events.Default.Log(events.DeviceRejected, map[string]string{
"device": remoteID.String(),
"address": conn.RemoteAddr().String(),
})
l.Infof("Connection from %s with unknown device ID %s; ignoring", conn.RemoteAddr(), remoteID)
if !cfg.IgnoredDevice(remoteID) {
events.Default.Log(events.DeviceRejected, map[string]string{
"device": remoteID.String(),
"address": conn.RemoteAddr().String(),
})
l.Infof("Connection from %s with unknown device ID %s", conn.RemoteAddr(), remoteID)
} else {
l.Infof("Connection from %s with ignored device ID %s", conn.RemoteAddr(), remoteID)
}
conn.Close()
}
}
@@ -982,7 +1031,7 @@ func listenTLS(conns chan *tls.Conn, addr string, tlsCfg *tls.Config) {
}
func dialTLS(m *model.Model, conns chan *tls.Conn, tlsCfg *tls.Config) {
var delay time.Duration = 1 * time.Second
delay := time.Second
for {
nextDevice:
for deviceID, deviceCfg := range cfg.Devices() {
@@ -1088,7 +1137,7 @@ func discovery(extPort int) *discover.Discoverer {
if opts.GlobalAnnEnabled {
l.Infoln("Starting global discovery announcements")
disc.StartGlobal(opts.GlobalAnnServer, uint16(extPort))
disc.StartGlobal(opts.GlobalAnnServers, uint16(extPort))
}
return disc
@@ -1121,9 +1170,8 @@ func getDefaultConfDir() (string, error) {
default:
if xdgCfg := os.Getenv("XDG_CONFIG_HOME"); xdgCfg != "" {
return filepath.Join(xdgCfg, "syncthing"), nil
} else {
return osutil.ExpandTilde("~/.config/syncthing")
}
return osutil.ExpandTilde("~/.config/syncthing")
}
}
@@ -1231,12 +1279,13 @@ func autoUpgrade() {
continue
}
if upgrade.CompareVersions(rel.Tag, Version) <= 0 {
if upgrade.CompareVersions(rel.Tag, Version) != upgrade.Newer {
// Skip equal, older or majorly newer (incompatible) versions
continue
}
l.Infof("Automatic upgrade (current %q < latest %q)", Version, rel.Tag)
err = upgrade.UpgradeTo(rel, GoArchExtra)
err = upgrade.To(rel)
if err != nil {
l.Warnln("Automatic upgrade:", err)
continue

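> The comparison change above is the crux of the hunk: `!= upgrade.Newer` skips not only equal and older releases but also releases that are newer across a major version boundary and therefore incompatible. A hedged sketch of such a four-way comparison; the constants and parsing are illustrative, not the upgrade package's internals:

```
package main

import (
	"fmt"
	"strconv"
	"strings"
)

const (
	Older      = -1
	Equal      = 0
	Newer      = 1
	MajorNewer = 2 // newer across a major version boundary: incompatible
)

// compareVersions classifies a against b. Only plain "vX.Y.Z" tags are
// handled; pre-release suffixes are ignored for brevity.
func compareVersions(a, b string) int {
	pa, pb := parts(a), parts(b)
	if pa[0] > pb[0] {
		return MajorNewer
	}
	for i := 0; i < 3; i++ {
		if pa[i] > pb[i] {
			return Newer
		}
		if pa[i] < pb[i] {
			return Older
		}
	}
	return Equal
}

func parts(v string) [3]int {
	var p [3]int
	for i, f := range strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3) {
		p[i], _ = strconv.Atoi(f)
	}
	return p
}

func main() {
	fmt.Println(compareVersions("v0.10.18", "v0.10.17") == Newer)    // true: eligible
	fmt.Println(compareVersions("v1.0.0", "v0.10.17") == MajorNewer) // true: skipped
	fmt.Println(compareVersions("v0.10.17", "v0.10.17") == Equal)    // true: skipped
}
```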

@@ -38,8 +38,8 @@ var (
)
const (
countRestarts = 5
loopThreshold = 15 * time.Second
countRestarts = 4
loopThreshold = 60 * time.Second
)
func monitorMain() {

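> The retuned constants above drive the monitor's crash-loop detection: fewer restarts counted over a longer window. A minimal sketch of a guard these constants could drive; the bookkeeping in monitorMain itself is elided in this hunk and may differ:

```
package main

import (
	"fmt"
	"time"
)

const (
	countRestarts = 4
	loopThreshold = 60 * time.Second
)

type loopGuard struct {
	restarts []time.Time
}

// note records a restart and reports whether the process appears to be
// crash-looping: the oldest of the last countRestarts restarts happened
// less than loopThreshold ago.
func (g *loopGuard) note(now time.Time) bool {
	g.restarts = append(g.restarts, now)
	if len(g.restarts) > countRestarts {
		g.restarts = g.restarts[1:]
	}
	return len(g.restarts) == countRestarts &&
		now.Sub(g.restarts[0]) < loopThreshold
}

func main() {
	var g loopGuard
	t := time.Now()
	for i := 0; i < countRestarts; i++ {
		// One restart per second: the fourth trips the guard.
		fmt.Println("looping:", g.note(t.Add(time.Duration(i)*time.Second)))
	}
}
```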
cmd/syncthing/random.go (new file, 68 lines)

@@ -0,0 +1,68 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This program is free software: you can redistribute it and/or modify it
// under the terms of the GNU General Public License as published by the Free
// Software Foundation, either version 3 of the License, or (at your option)
// any later version.
//
// This program is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
// FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
// more details.
//
// You should have received a copy of the GNU General Public License along
// with this program. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"crypto/md5"
cryptoRand "crypto/rand"
"encoding/binary"
"io"
mathRand "math/rand"
)
// randomCharset contains the characters that can make up a randomString().
const randomCharset = "01234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ-"
// predictableRandom is an RNG that will always have the same sequence. It
// will be seeded with the device ID during startup, so that the sequence is
// predictable but varies between instances.
var predictableRandom = mathRand.New(mathRand.NewSource(42))
func init() {
// The default RNG should be seeded with something good.
mathRand.Seed(randomInt64())
}
// randomString returns a string of random characters (taken from
// randomCharset) of the specified length.
func randomString(l int) string {
bs := make([]byte, l)
for i := range bs {
bs[i] = randomCharset[mathRand.Intn(len(randomCharset))]
}
return string(bs)
}
// randomInt64 returns a strongly random int64, slowly
func randomInt64() int64 {
var bs [8]byte
_, err := io.ReadFull(cryptoRand.Reader, bs[:])
if err != nil {
panic("randomness failure: " + err.Error())
}
return seedFromBytes(bs[:])
}
// seedFromBytes calculates a weak 64 bit hash from the given byte slice,
// suitable for use as a predictable random seed.
func seedFromBytes(bs []byte) int64 {
h := md5.New()
h.Write(bs)
s := h.Sum(nil)
// The MD5 hash of the byte slice is 16 bytes long. We interpret it as two
// uint64s and XOR them together.
return int64(binary.BigEndian.Uint64(s[0:]) ^ binary.BigEndian.Uint64(s[8:]))
}


@@ -0,0 +1,80 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This program is free software: you can redistribute it and/or modify it
// under the terms of the GNU General Public License as published by the Free
// Software Foundation, either version 3 of the License, or (at your option)
// any later version.
//
// This program is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
// FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
// more details.
//
// You should have received a copy of the GNU General Public License along
// with this program. If not, see <http://www.gnu.org/licenses/>.
package main
import "testing"
func TestPredictableRandom(t *testing.T) {
// predictable random sequence is predictable
e := 3440579354231278675
if v := predictableRandom.Int(); v != e {
t.Errorf("Unexpected random value %d != %d", v, e)
}
}
func TestSeedFromBytes(t *testing.T) {
// should always return the same seed for the same bytes
tcs := []struct {
bs []byte
v int64
}{
{[]byte("hello world"), -3639725434188061933},
{[]byte("hello worlx"), -2539100776074091088},
}
for _, tc := range tcs {
if v := seedFromBytes(tc.bs); v != tc.v {
t.Errorf("Unexpected seed value %d != %d", v, tc.v)
}
}
}
func TestRandomString(t *testing.T) {
for _, l := range []int{0, 1, 2, 3, 4, 8, 42} {
s := randomString(l)
if len(s) != l {
t.Errorf("Incorrect length %d != %d", len(s), l)
}
}
strings := make([]string, 1000)
for i := range strings {
strings[i] = randomString(8)
for j := range strings {
if i == j {
continue
}
if strings[i] == strings[j] {
t.Errorf("Repeated random string %q", strings[i])
}
}
}
}
func TestRandomInt64(t *testing.T) {
ints := make([]int64, 1000)
for i := range ints {
ints[i] = randomInt64()
for j := range ints {
if i == j {
continue
}
if ints[i] == ints[j] {
t.Errorf("Repeated random int64 %d", ints[i])
}
}
}
}


@@ -19,11 +19,9 @@ import (
"bufio"
"crypto/rand"
"crypto/rsa"
"crypto/sha256"
"crypto/tls"
"crypto/x509"
"crypto/x509/pkix"
"encoding/binary"
"encoding/pem"
"io"
"math/big"
@@ -35,8 +33,8 @@ import (
)
const (
tlsRSABits = 3072
tlsName = "syncthing"
tlsRSABits = 3072
tlsDefaultCommonName = "syncthing"
)
func loadCert(dir string, prefix string) (tls.Certificate, error) {
@@ -45,15 +43,8 @@ func loadCert(dir string, prefix string) (tls.Certificate, error) {
return tls.LoadX509KeyPair(cf, kf)
}
func certSeed(bs []byte) int64 {
hf := sha256.New()
hf.Write(bs)
id := hf.Sum(nil)
return int64(binary.BigEndian.Uint64(id))
}
func newCertificate(dir string, prefix string) {
l.Infoln("Generating RSA key and certificate...")
func newCertificate(dir, prefix, name string) {
l.Infof("Generating RSA key and certificate for %s...", name)
priv, err := rsa.GenerateKey(rand.Reader, tlsRSABits)
if err != nil {
@@ -66,7 +57,7 @@ func newCertificate(dir string, prefix string) {
template := x509.Certificate{
SerialNumber: new(big.Int).SetInt64(mr.Int63()),
Subject: pkix.Name{
CommonName: tlsName,
CommonName: name,
},
NotBefore: notBefore,
NotAfter: notAfter,

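> With tlsName generalised into a name parameter alongside tlsDefaultCommonName, the same helper can mint certificates with different common names for different listeners. A compact, self-contained sketch of such self-signed generation; the serial choice, one-year lifetime, and stdout output are illustrative, not what tls.go does:

```
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"os"
	"time"
)

// newSelfSigned generates an RSA key and a self-signed certificate
// whose CommonName is the given name, echoing newCertificate above.
func newSelfSigned(name string) {
	priv, err := rsa.GenerateKey(rand.Reader, 3072)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: name},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &priv.PublicKey, priv)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}

func main() {
	newSelfSigned("syncthing") // tlsDefaultCommonName in the hunk above
}
```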

@@ -23,7 +23,6 @@ import (
"net"
"net/http"
"runtime"
"strings"
"time"
"github.com/syncthing/syncthing/internal/model"
@@ -38,7 +37,7 @@ var stopUsageReportingCh = make(chan struct{})
func reportData(m *model.Model) map[string]interface{} {
res := make(map[string]interface{})
res["uniqueID"] = strings.ToLower(myID.String()[:6])
res["uniqueID"] = cfg.Options().URUniqueID
res["version"] = Version
res["longVersion"] = LongVersion
res["platform"] = runtime.GOOS + "-" + runtime.GOARCH

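> The report now carries cfg.Options().URUniqueID, an opaque random identifier, instead of a prefix of the device ID (which leaked part of it). A hedged sketch of how such an ID could be minted once with the randomString helper added in cmd/syncthing/random.go; the eight-character length and the lazy-minting policy are assumptions:

```
package main

import (
	"fmt"
	"math/rand"
)

const charset = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ-"

// randomString mirrors the helper in cmd/syncthing/random.go in
// spirit: l characters drawn from a fixed charset.
func randomString(l int) string {
	bs := make([]byte, l)
	for i := range bs {
		bs[i] = charset[rand.Intn(len(charset))]
	}
	return string(bs)
}

func main() {
	urUniqueID := "" // persisted in the config; empty until first minted
	if urUniqueID == "" {
		urUniqueID = randomString(8) // length is an assumption
	}
	fmt.Println("uniqueID:", urUniqueID)
}
```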

@@ -24,7 +24,7 @@ import (
"regexp"
"strings"
"code.google.com/p/go.net/html"
"golang.org/x/net/html"
)
var trans = make(map[string]string)

docker/Dockerfile (new file, 74 lines)

@@ -0,0 +1,74 @@
FROM debian:squeeze
MAINTAINER Jakob Borg <jakob@nym.se>
ENV GOLANG_VERSION 1.4
# SCMs for "go get", gcc for cgo
RUN apt-get update && apt-get install -y \
ca-certificates curl gcc libc6-dev make \
bzr git mercurial unzip patch \
--no-install-recommends \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Get the binary dist of Go to be able to bootstrap gonative.
RUN curl -sSL https://golang.org/dl/go${GOLANG_VERSION}.linux-amd64.tar.gz \
| tar -v -C /usr/local -xz
ENV PATH /usr/local/go/bin:$PATH
RUN mkdir /go
ENV GOPATH /go
ENV PATH /go/bin:$PATH
WORKDIR /go
# Use gonative to install native Go for most arch/OS combos
RUN go get github.com/calmh/gonative \
&& cd /usr/local \
&& rm -rf go \
&& gonative -version $GOLANG_VERSION
# Rebuild the special and missing versions, using patches as appropriate
RUN mkdir /tmp/patches
ADD *.patch /tmp/patches/
RUN bash -xec '\
cd /usr/local/go ; \
for patch in /tmp/patches/*.patch ; do \
patch -p0 < "$patch" ; \
done \
'
RUN bash -xec '\
cd /usr/local/go/src; \
for platform in linux/386 freebsd/386 windows/386 linux/arm openbsd/amd64 openbsd/386; do \
GOOS=${platform%/*} \
GOARCH=${platform##*/} \
GOARM=5 \
GO386=387 \
CGO_ENABLED=0 \
./make.bash --no-clean 2>&1; \
done \
&& ./make.bash --no-clean \
'
# Install packages needed for test coverage
RUN go get github.com/tools/godep \
&& go get golang.org/x/tools/cmd/cover \
&& go get github.com/axw/gocov/gocov \
&& go get github.com/AlekSi/gocov-xml
# Install tools "go vet" and "golint"
RUN go get golang.org/x/tools/cmd/vet \
&& go get github.com/golang/lint/golint
# Build standard library for race
RUN go install -race std
# Random build users need to be able to create stuff in /go
RUN chmod -R 777 /go/bin /go/pkg /go/src

docker/README.md (new file, 29 lines)

@@ -0,0 +1,29 @@
Docker Build
============
Official builds are produced using a Docker image specified by the
Dockerfile in this directory. The following commands exactly reproduce
the official build process.
Create an image called `syncthing/build` with the build environment.
```
./build.sh docker-init
```
> This is a Debian-based image containing the latest stable version of
> Go set up for cross compilation. The cross compilation uses the
> dynamically linked standard libraries and SSE instructions for amd64
> builds, but static linking and minimal instruction set for the 386 and
> arm builds. The command should be run in the main repo directory, as a
> user with permission to perform Docker operations.
Build the full set of supported binaries.
```
./build.sh docker-all
```
> This uses a temporary container with the image from above and a volume
> mapped to the directory containing the source. Tests are run and
> binary packages created.

docker/go-9102.patch (new file, 22 lines)

@@ -0,0 +1,22 @@
--- src/syscall/route_openbsd.go.orig Fri Jul 25 23:38:47 2014
+++ src/syscall/route_openbsd.go Fri Jul 25 23:39:20 2014
@@ -12,16 +12,16 @@ func (any *anyMessage) toRoutingMessage(b []byte) Rout
switch any.Type {
case RTM_ADD, RTM_DELETE, RTM_CHANGE, RTM_GET, RTM_LOSING, RTM_REDIRECT, RTM_MISS, RTM_LOCK, RTM_RESOLVE:
p := (*RouteMessage)(unsafe.Pointer(any))
- return &RouteMessage{Header: p.Header, Data: b[SizeofRtMsghdr:any.Msglen]}
+ return &RouteMessage{Header: p.Header, Data: b[p.Header.Hdrlen:any.Msglen]}
case RTM_IFINFO:
p := (*InterfaceMessage)(unsafe.Pointer(any))
- return &InterfaceMessage{Header: p.Header, Data: b[SizeofIfMsghdr:any.Msglen]}
+ return &InterfaceMessage{Header: p.Header, Data: b[p.Header.Hdrlen:any.Msglen]}
case RTM_IFANNOUNCE:
p := (*InterfaceAnnounceMessage)(unsafe.Pointer(any))
return &InterfaceAnnounceMessage{Header: p.Header}
case RTM_NEWADDR, RTM_DELADDR:
p := (*InterfaceAddrMessage)(unsafe.Pointer(any))
- return &InterfaceAddrMessage{Header: p.Header, Data: b[SizeofIfaMsghdr:any.Msglen]}
+ return &InterfaceAddrMessage{Header: p.Header, Data: b[p.Header.Hdrlen:any.Msglen]}
}
return nil
}

etc/README.md (new file, 1 line)

@@ -0,0 +1 @@
This directory contains contributed setup examples.

etc/linux-runit/README.md (new file, 15 lines)

@@ -0,0 +1,15 @@
This directory contains a configuration for running syncthing under the
"runit" service manager on Linux. It should work equally well on other
platforms that use runit.
1. Install runit.
2. Edit the `run` file to set the username to run as, the user's home
directory, and the location of the syncthing binary. Placing the
binary in a directory writable by the running user is recommended so
that automatic upgrades work.
3. Copy the edited `run` file to `/etc/service/syncthing/run`.
Log output is sent to syslogd.

Some files were not shown because too many files have changed in this diff.