Compare commits

...

286 Commits

Author SHA1 Message Date
Jakob Borg
1068eaa0b9 Translation update 2014-08-31 21:52:29 +02:00
Jakob Borg
faac3e7d7c Don't clobber staggeredMaxAge = 0 (fixes #604) 2014-08-31 21:44:06 +02:00
Jakob Borg
dab4340207 Merge pull request #603 from AudriusButkevicius/restart
Fix GUI breaking during restarts (fixes #577)
2014-08-31 21:30:51 +02:00
Audrius Butkevicius
fd2567748f Fix GUI breaking during restarts (fixes #577) 2014-08-31 15:49:08 +01:00
Jakob Borg
cf1bfdfb61 Hold rmut read lock when looking at nodeStatRefs 2014-08-31 13:48:43 +02:00
Jakob Borg
75b26513e1 Don't crash under suspicious circumstances... (fixes #602) 2014-08-31 13:48:16 +02:00
Jakob Borg
6c09a77a97 Clean out index for nonexistent repositories (fixes #549) 2014-08-31 13:34:17 +02:00
Jakob Borg
67389c39fb For now, don't allow changing repo path (ref #549) 2014-08-31 13:05:08 +02:00
Jakob Borg
c326103e6e Add X-Syncthing-Version header to HTTP responses 2014-08-31 12:59:20 +02:00
Jakob Borg
c2120a16da Try to set some reasonable resource limits when running tests 2014-08-30 10:02:10 +02:00
Jakob Borg
258ad4352e Fix connecting to discovered IPv6 address 2014-08-29 17:18:25 +02:00
Jakob Borg
435d3958f4 Update goleveldb 2014-08-29 12:36:45 +02:00
Jakob Borg
b0408ef5c6 Info line formatting (ref #583) 2014-08-28 21:35:55 +02:00
Jakob Borg
1c41b0bc2f Document GOMAXPROCS instead of (useless) STDEADLOCKTIMEOUT 2014-08-28 15:29:49 +02:00
Jakob Borg
aa827f3042 Fix language detection, never show untranslated strings (fixes #543) 2014-08-28 13:23:23 +02:00
Audrius Butkevicius
f44f5964bb Set rescan interval on default repository (fixes #579) 2014-08-27 23:45:09 +01:00
Audrius Butkevicius
91ba93bd7a Merge pull request #571 from syncthing/recheck
Add routine for checking possible standby (fixes #565)
2014-08-27 22:44:36 +01:00
Audrius Butkevicius
0abe4cefb4 Add routine for checking possible standby (fixes #565) 2014-08-27 22:42:59 +01:00
Jakob Borg
bccd460f3b Translation update 2014-08-27 10:20:44 +02:00
Jakob Borg
d1023004e1 Saner error/debug messages for permission issues 2014-08-27 07:00:15 +02:00
Jakob Borg
04a5f9cb04 Fix fnmatch tests for Windows 2014-08-26 13:26:52 +02:00
Jakob Borg
9818e2b550 Use more fnmatch-like matcher in .stignore (fixes #426) 2014-08-26 11:12:20 +02:00
Jakob Borg
fe43e3b89d Try not to leave directories behind with incorrect permissions 2014-08-26 11:12:20 +02:00
Jakob Borg
e1f1ae041f Don't leak request slots (fixes #483) 2014-08-25 17:48:18 +02:00
Jakob Borg
5bcf26e324 Fix table layout for wide elements, at the price of ellipsis (fixes #326, fixes #309) 2014-08-25 16:37:15 +02:00
Jakob Borg
5f47a8149f Use ISO date format because I'm opinionated 2014-08-25 15:53:32 +02:00
Jakob Borg
00b662b53a Merge branch 'pr/556'
* pr/556:
  Add translation strings
  Display Last Seen value in the UI
  Add /rest/stats/node endpoint
  Add stats package and node related statistics model
2014-08-25 15:52:59 +02:00
Jakob Borg
faf519ab1b Warn about incorrect -goarch values 2014-08-25 14:55:19 +02:00
Jakob Borg
fce73f6f17 Verify CONTRIBUTORS file 2014-08-25 14:55:19 +02:00
Audrius Butkevicius
887890baf5 Add translation strings 2014-08-25 12:57:44 +01:00
Audrius Butkevicius
c66b24feeb Display Last Seen value in the UI 2014-08-25 12:54:50 +01:00
Audrius Butkevicius
84c6f147ad Add /rest/stats/node endpoint 2014-08-25 12:49:22 +01:00
Audrius Butkevicius
0cdb0daa8c Add stats package and node related statistics model 2014-08-25 12:49:21 +01:00
Jakob Borg
eee702f299 Don't run tests in build.sh all 2014-08-25 08:50:13 +02:00
Jakob Borg
df65247325 Increase max path length 1024 -> 8192 bytes (fixes #551)
PATH_MAX seems to be 4096 most of the time; Windows limit is much lower.
2014-08-25 08:48:49 +02:00
Jakob Borg
1a174e75d3 Merge pull request #562 from AudriusButkevicius/restart
Fix race condition while restarting (fixes #560)
2014-08-25 08:03:18 +02:00
Audrius Butkevicius
9e1fd3454f Fix race condition while restarting (fixes #560) 2014-08-25 00:15:28 +01:00
Audrius Butkevicius
3b1603cadf Merge pull request #557 from AudriusButkevicius/opts
Allow configuring GUI options from command line and environment (fixes #505, closes #507)
2014-08-24 16:56:15 +01:00
Audrius Butkevicius
8803bac708 Allow configuring GUI options from command line and environment (fixes #505, closes #507) 2014-08-24 16:55:35 +01:00
Audrius Butkevicius
3a01eaa4a6 Fix build.go on Windows 2014-08-23 21:19:29 +01:00
Jakob Borg
9f84c1c448 New repos must have a default rescan interval (fixes #555) 2014-08-23 19:40:39 +02:00
Jakob Borg
dda0390156 Correctly set GOARM on ARM builds 2014-08-23 10:52:12 +02:00
Jakob Borg
c74509dd5f Add forgotten lang-*.json files 2014-08-23 10:44:08 +02:00
Jakob Borg
f61bbb2ff4 Tweaks and optimizations 2014-08-23 10:43:48 +02:00
Jakob Borg
e7f60161a3 Don't leak fd 2014-08-23 10:37:58 +02:00
Jakob Borg
ebec4fbc24 Translation update (add Bulgarian, Lithuanian) 2014-08-22 18:18:13 +02:00
Jakob Borg
1d4105ae3d UI tweaks for staggered versioner 2014-08-22 18:16:05 +02:00
Jakob Borg
586d49f0c3 Merge pull request #541 from alex2108/master 2014-08-22 17:58:01 +02:00
Jakob Borg
5b0fab0697 Add alex2108 2014-08-22 17:57:43 +02:00
Alexander Graf
2b3359dff3 add staggered versioner 2014-08-22 00:41:17 +02:00
Jakob Borg
63203aa14c Merge pull request #548 from AudriusButkevicius/warning
Do not warn about failed IPv6 discovery, warn about no discovery
2014-08-21 18:54:33 +02:00
Audrius Butkevicius
716a8329c2 Do not warn about failed IPv6 discovery 2014-08-20 22:06:58 +01:00
Jakob Borg
dab0aec85e Latest build badge should link to latest build 2014-08-20 12:23:04 +02:00
Jakob Borg
1f1ab017c0 Show rescan interval per repo 2014-08-20 01:44:05 +02:00
Audrius Butkevicius
b6912ef95e Merge pull request #544 from marcindziadus/rescan-interval
Per repository scan intervals
2014-08-20 00:02:34 +01:00
Audrius Butkevicius
db54dca694 Do not fire UIOffline when navigating away
Fixes #487
2014-08-19 23:44:40 +01:00
Marcin
0e751b983c Allow configuring the scan interval for each repository independently
Fix broken tests

Bugfix

Clean up

Refactor variable name

Adjust tests

Minor fixes

Fix typo. Remove indent.
2014-08-20 00:36:36 +02:00
Audrius Butkevicius
997b20a975 Set Content-Type before sending out headers 2014-08-19 23:30:32 +01:00
Jakob Borg
386f9c42c2 Merge pull request #545 from AudriusButkevicius/flush
Flush headers before potentially blocking
2014-08-20 00:21:49 +02:00
Audrius Butkevicius
cfae06db65 Flush headers before potentially blocking 2014-08-19 23:18:28 +01:00
Jakob Borg
44260b7b5c Add marcindziadus 2014-08-20 00:05:43 +02:00
Jakob Borg
13063b957f Use drained legacy pool in goleveldb 2014-08-19 23:49:03 +02:00
Jakob Borg
ee05e12480 Windows nodes should ignore deleted impossible files 2014-08-19 15:36:57 +02:00
Jakob Borg
5538545fb0 README links to build guide 2014-08-19 15:33:20 +02:00
Jakob Borg
bc1167c2c5 README links to build, not only artefacts 2014-08-19 15:20:53 +02:00
Jakob Borg
c57656e4c3 Do honest test coverage analysis in Jenkins 2014-08-19 12:43:50 +02:00
Jakob Borg
264400a984 Check for supported go version build.go 2014-08-19 11:04:20 +02:00
Jakob Borg
408db4eb1d rm -rf travis 2014-08-19 10:05:40 +02:00
Jakob Borg
9347f223ef Note about review of pull requests 2014-08-19 09:55:50 +02:00
Jakob Borg
518aa30c9c Don't consider empty language codes when selecting language (fixes #540) 2014-08-18 23:43:58 +02:00
Jakob Borg
6bbf1f9355 Emit Node/Repo Rejected events on unknown nodes / repos. 2014-08-18 23:34:03 +02:00
Jakob Borg
b221e4d445 build.sh is a shim 2014-08-18 22:05:26 +02:00
Jakob Borg
580fccbfca Don't build build.go on go get 2014-08-18 21:57:10 +02:00
Jakob Borg
045916efcc ARM builds in build.go 2014-08-18 21:53:08 +02:00
Jakob Borg
4f92482294 build.sh -> build.go for better cross platform support 2014-08-18 21:39:35 +02:00
Jakob Borg
2f055a75a0 Merge pull request #537 from marclaporte/patch-2
Fix some typos
2014-08-18 10:43:29 +02:00
Marc Laporte
f0621207e3 Fix some typos 2014-08-17 23:27:04 -04:00
Jakob Borg
d657bc4e3d Implement IPv6 multicast again (fixes #346) 2014-08-17 15:14:44 +02:00
Jakob Borg
a1fd07b27c beacon.Beacon -> beacon.Broadcast 2014-08-17 15:14:44 +02:00
Audrius Butkevicius
52219c5f3f Merge pull request #532 from AudriusButkevicius/config
Replace NodeConfiguration with RepositoryNodeConfiguration (Fixes #522)
2014-08-17 12:47:12 +01:00
Jakob Borg
1a66461e07 All printed warnings should have some context 2014-08-17 10:28:36 +02:00
Jakob Borg
d20df12168 Add repoPath and repoID as parameters to versioner factory (fixes #531) 2014-08-17 07:52:49 +02:00
Audrius Butkevicius
668b429615 Better error message
Closes #526
2014-08-17 00:03:41 +01:00
Audrius Butkevicius
7db528be39 Replace NodeConfiguration with RepositoryNodeConfiguration 2014-08-16 23:20:21 +01:00
Jakob Borg
60f760ee49 Translation update 2014-08-16 23:05:57 +02:00
Jakob Borg
884aaab751 Always print hostname on connect (even if something is set in config) 2014-08-16 22:55:05 +02:00
Jakob Borg
e968560ea4 Spelling 2014-08-16 22:35:15 +02:00
Jakob Borg
07caaa96e4 New translation strings 2014-08-16 22:29:21 +02:00
Audrius Butkevicius
e8a679c280 Advertise and update node names on cluster config exchange
Closes #244
2014-08-16 21:26:30 +01:00
Jakob Borg
bc885f1d08 Don't attempt to create default repo before config (fixes #530)
We'll create it anyway a little later during startup, as part of the
general "check all repos for viability" step.
2014-08-16 22:22:33 +02:00
Jakob Borg
f2f051d6de Merge pull request #529 from syncthing/windows-build
Fix tests on Windows
2014-08-16 21:37:00 +02:00
Jakob Borg
49a0bfccba Cache discovery results up to five minutes (fixes #358) 2014-08-16 21:27:00 +02:00
Audrius Butkevicius
0c1e60894f Fix tests on Windows 2014-08-16 17:33:01 +01:00
Jakob Borg
ace87ad7bb Normalize file name format in on disk db (fixes #479) 2014-08-15 12:52:16 +02:00
Jakob Borg
50f0097843 Add Rescan button to repositories 2014-08-15 12:48:36 +02:00
Jakob Borg
32a9466277 Update goleveldb 2014-08-15 09:18:38 +02:00
Jakob Borg
1ee3407946 Merge pull request #524 from marclaporte/patch-1
Fix typo
2014-08-15 08:35:25 +02:00
Marc Laporte
f1120d7aa9 Fix typo 2014-08-14 19:58:25 -04:00
Jakob Borg
2e7d6b2f99 Translation update, zh-CN 2014-08-14 17:09:29 +02:00
Jakob Borg
dfef929187 Translation update, handle locales precisely 2014-08-14 17:04:17 +02:00
Jakob Borg
e78d9ad592 Translation update (add Hungarian) 2014-08-14 14:00:33 +02:00
Jakob Borg
9f2948f595 Fix tests for UPnP options 2014-08-14 12:59:09 +02:00
Jakob Borg
198da910ed Use new StopGlobal on the discovery when external port changes 2014-08-14 12:49:41 +02:00
Jakob Borg
5f1bf9d9d6 Merge branch 'master' into pr/511
* master: (21 commits)
  Mechanism to stop external announcement routine
  Update goleveldb
  Perfstats are not supported on Windows
  Build should fail if a platform does not build
  Include perfstats and heap profiles in standard build
  Actually no, let's not do uploads at all from the build script.
  ./build.sh upload build server artifacts
  Sign checksums, not files.
  Badges, add build server
  Remove Solaris build again, for now
  Travis should build with 1.3 + tip
  Translation update
  Indicate approximativeness of repo sizes...
  Slightly more conservative guess on file size
  Fix set tests
  Small goleveldb hack to reduce allocations somewhat
  Don't load block lists from db unless necessary
  Rip out the Suppressor (maybe to be reintroduced)
  Reduce allocations while hash scanning
  Add heap profiling support
  ...

Conflicts:
	discover/discover.go
2014-08-14 12:48:33 +02:00
Jakob Borg
798c4aef9a Mechanism to stop external announcement routine 2014-08-14 12:44:49 +02:00
Jakob Borg
f80f5b3bda Update goleveldb 2014-08-14 12:14:48 +02:00
Audrius Butkevicius
cbb07b0d67 Set default UPnP renewal to 30 minutes 2014-08-13 22:45:44 +01:00
Audrius Butkevicius
7cc9921615 Restart port sequence when UPnP renewal fails 2014-08-13 22:42:58 +01:00
Jakob Borg
7555fe065e Perfstats are not supported on Windows 2014-08-13 22:31:56 +02:00
Jakob Borg
d977f4278e Build should fail if a platform does not build 2014-08-13 22:27:16 +02:00
Audrius Butkevicius
870e3ca893 Rediscover gateway on UPnP renewal 2014-08-13 21:15:20 +01:00
Jakob Borg
213acaee3b Include perfstats and heap profiles in standard build 2014-08-13 14:39:47 +02:00
Jakob Borg
58381496a2 Actually no, let's not do uploads at all from the build script. 2014-08-13 13:11:41 +02:00
Jakob Borg
5981e42aed ./build.sh upload build server artifacts 2014-08-13 12:58:59 +02:00
Jakob Borg
3c9165d295 Sign checksums, not files. 2014-08-13 12:52:04 +02:00
Jakob Borg
60d0ef93ac Badges, add build server 2014-08-13 10:15:22 +02:00
Jakob Borg
f45d5b0066 Remove Solaris build again, for now 2014-08-13 09:42:21 +02:00
Jakob Borg
b71306480f Travis should build with 1.3 + tip 2014-08-13 09:01:17 +02:00
Jakob Borg
0c7771ccc5 Translation update 2014-08-13 00:35:37 +02:00
Audrius Butkevicius
dc9df0a79a Reannounce renewed UPnP mapping 2014-08-12 23:29:29 +01:00
Jakob Borg
17cd49fbdc Indicate approximativeness of repo sizes... 2014-08-12 23:59:20 +02:00
Jakob Borg
ad273adb78 Slightly more conservative guess on file size 2014-08-12 16:36:24 +02:00
Jakob Borg
150e7daf2d Fix set tests 2014-08-12 16:17:32 +02:00
Jakob Borg
b004155e8f Small goleveldb hack to reduce allocations somewhat 2014-08-12 15:39:24 +02:00
Jakob Borg
92eed3b33b Don't load block lists from db unless necessary 2014-08-12 15:04:32 +02:00
Jakob Borg
fe7b77198c Rip out the Suppressor (maybe to be reintroduced) 2014-08-12 15:04:02 +02:00
Jakob Borg
f51b775698 Reduce allocations while hash scanning 2014-08-12 15:04:02 +02:00
Jakob Borg
939dd5cb31 Add heap profiling support 2014-08-12 15:04:01 +02:00
Jakob Borg
adcbe13ecd Update goleveldb 2014-08-12 09:24:36 +02:00
Audrius Butkevicius
8976e53998 Add UPnP renewal 2014-08-11 23:10:24 +01:00
Jakob Borg
97dda6a4bb Correct the memory stats in perfstats-*.csv 2014-08-11 22:10:15 +02:00
Jakob Borg
9e395eb883 Use a slightly heavier Raleway for headings (fixes #493) 2014-08-11 21:50:15 +02:00
Jakob Borg
60da59623e Limit size of sent indexes a bit, taking number of blocks into account 2014-08-11 20:54:59 +02:00
Jakob Borg
9752ea9ac3 Implement external scan request (fixes #9) 2014-08-11 20:20:01 +02:00
Jakob Borg
279693078a Update deps 2014-08-11 14:24:20 +02:00
Jakob Borg
19b93045a4 Merge pull request #508 from AudriusButkevicius/modals
Fix and refactor modals
2014-08-11 12:12:28 +02:00
Jakob Borg
5231a09820 Add ./build.sh noupgrade and all-noupgrade 2014-08-11 11:59:33 +02:00
Jakob Borg
ab952e6103 Add ./build.sh clean 2014-08-11 11:54:48 +02:00
Jakob Borg
a418771c04 Puller entrance warning 2014-08-11 07:52:03 +02:00
Audrius Butkevicius
b41590ce38 Fix and refactor modals 2014-08-10 23:28:04 +01:00
Jakob Borg
c7dde9499f Verify locking and correct update order for global 2014-08-10 07:27:24 +02:00
Jakob Borg
528cbf62ec POST to /config should return an error when something bad happens (fixes #489) 2014-08-08 14:09:27 +02:00
Jakob Borg
1be4b8bb5d Merge pull request #486 from AudriusButkevicius/windows
Add Windows upgrade support
2014-08-07 23:20:26 +02:00
Jakob Borg
c832fc9917 Merge pull request #485 from tojrobinson/world-writable-root
World writable root
2014-08-07 23:17:42 +02:00
Jakob Borg
4797a94689 Add explicit GC calls after expensive db ops (ref #468) 2014-08-07 23:09:50 +02:00
Audrius Butkevicius
6948903084 Add Windows upgrade support 2014-08-07 21:07:21 +01:00
treefingers
94164611ae Fix root being left world writable 2014-08-08 05:45:50 +10:00
treefingers
ae298e8902 Merge branch 'master' of https://github.com/syncthing/syncthing 2014-08-08 05:06:42 +10:00
Jakob Borg
3d8771ecb0 Woops, broke the build 2014-08-07 15:58:48 +02:00
Jakob Borg
28db264e90 Upgrade debugging, fix upgrade on ARM (fixes #482) 2014-08-07 15:57:20 +02:00
Jakob Borg
6af9fa4b81 Localize Close button in standard modals (fixes #481) 2014-08-07 12:35:38 +02:00
Jakob Borg
60b4d05860 Translation update, add Danish & Dutch 2014-08-07 10:49:29 +02:00
Jakob Borg
7b93839ed1 Woops, broke the build 2014-08-07 10:26:26 +02:00
Jakob Borg
fdb11d7c06 Correctly handle file updates in read only directories (fixes #470) 2014-08-07 08:31:22 +02:00
Jakob Borg
5651847877 Merge commit 'bc2bb22'
* commit 'bc2bb22':
  Add no-browser flag
2014-08-07 07:20:39 +02:00
Jakob Borg
e1442290b6 Add tojrobinson 2014-08-07 07:20:21 +02:00
Tully Robinson
c45b18cc75 Merge branch 'master' into browser-flag 2014-08-06 23:01:35 +10:00
Jakob Borg
bb2ad77987 Never remove currently valid languages when updating translations 2014-08-06 14:56:32 +02:00
Jakob Borg
68b1ffec19 Fix translation in upgrading/restarting dialogs 2014-08-06 14:41:46 +02:00
Tully Robinson
bc2bb22673 Add no-browser flag 2014-08-06 22:30:18 +10:00
Jakob Borg
83d707fc4b Add Transifex info to contribution guidelines 2014-08-06 11:03:39 +02:00
Jakob Borg
175b32e56c Forgot the favicon 2014-08-06 09:12:11 +02:00
Jakob Borg
97b4a6553b Logo update 2014-08-06 09:07:13 +02:00
Jakob Borg
4ade30e681 Merge branch 'pr/477'
* pr/477:
  Logo changed
2014-08-05 23:21:30 +02:00
Gilli Sigurdsson
4e03b4f191 Logo changed 2014-08-05 23:20:33 +02:00
Jakob Borg
bfe1d1d4ca Add Gilli 2014-08-05 23:19:11 +02:00
Jakob Borg
8918de85fd Correct memory usage in anonymous report 2014-08-05 23:13:55 +02:00
Jakob Borg
5e237aecae Reflect memory returned to OS in RAM Utilization 2014-08-05 22:14:11 +02:00
Jakob Borg
13291ad481 Tweak contribution guide 2014-08-05 20:54:53 +02:00
Jakob Borg
a47ee86bee Don't show 100 warnings for unknown repo at connect when once is enough 2014-08-05 20:26:05 +02:00
Jakob Borg
62d703f967 Show 100% complete status for nodes without any files to sync (fixes #453) 2014-08-05 20:16:25 +02:00
Jakob Borg
b2c196e5c7 Don't overwrite Node ID field with 'corrected' format 2014-08-05 19:47:29 +02:00
Jakob Borg
4be6a54bc0 Hide build version behind plus character (fixes #473) 2014-08-05 19:38:31 +02:00
Jakob Borg
8ce8476547 Exclude integration tests from normal go test 2014-08-05 15:50:05 +02:00
Jakob Borg
d82caf6bd4 Don't depend on a pretty printer just for testing 2014-08-05 15:43:29 +02:00
Jakob Borg
8ea1e302c3 Also expose ItemStarted events 2014-08-05 13:14:04 +02:00
Jakob Borg
a8799efa94 Don't reuse existing indexes, yet (fixes #463) 2014-08-05 12:20:50 +02:00
Jakob Borg
0cfac4e021 Start rewriting integration tests in Go instead of bash 2014-08-05 12:20:07 +02:00
Jakob Borg
f6c9642d72 Pull files in random-ish order again 2014-08-05 09:46:21 +02:00
Jakob Borg
5a07f9ddee Woops: don't consider all close()s to be failures... 2014-08-05 09:44:35 +02:00
Jakob Borg
9db75e91ac HTTP testing corrections 2014-08-05 09:38:38 +02:00
Jakob Borg
f288e00c37 Actually show Node ID in QR (fixes #471) 2014-08-04 22:53:37 +02:00
Jakob Borg
c9edd31993 Show pull errors, stop repo when not making progress (fixes #302) 2014-08-04 22:46:35 +02:00
Jakob Borg
5a7780ab5f Use Raleway font for headings 2014-08-04 22:46:29 +02:00
Jakob Borg
ac0fba99ad "52 or 56 characters" (fixes #466) 2014-08-04 22:11:44 +02:00
Jakob Borg
6f724a113c Use repo ID rather than path in header (fixes #425) 2014-08-03 21:58:36 +02:00
Jakob Borg
327cd4cb87 Fix statistics report preview (fixes #460) 2014-08-03 21:47:02 +02:00
Jakob Borg
25de3a2590 Also build for freebsd-386 (fixes #458) 2014-08-03 10:42:39 +02:00
Jakob Borg
06208a703a Implement -generate (fixes #459) 2014-08-03 09:41:08 +02:00
Jakob Borg
56afba6606 Only change the announce server when upgrading config version 2014-08-02 08:37:10 +02:00
Jakob Borg
d65bbf2113 Allow GET requests without CSRF 2014-08-02 08:19:10 +02:00
Jakob Borg
b8bfc9b732 Coveralls syncthing/syncthing 2014-08-02 08:18:55 +02:00
Jakob Borg
cec3bad373 Move calmh/syncthing -> syncthing/syncthing 2014-08-01 16:48:46 +02:00
Jakob Borg
9312e3c7de Config version 3: default to compression=true on nodes 2014-08-01 16:48:46 +02:00
Jakob Borg
43e7435c41 Call the darwin releases macosx instead 2014-08-01 16:30:28 +02:00
Jakob Borg
f34f5e41a4 Don't always run the tedious protocol tests 2014-08-01 16:30:13 +02:00
Jakob Borg
47a70a536b Translation update 2014-08-01 14:30:57 +02:00
Jakob Borg
bbeddfe522 Extract github.com/calmh/xdr 2014-08-01 13:12:54 +02:00
Jakob Borg
28220310a5 Use a lock port to ensure parent has exited (fixes #450) 2014-07-31 21:29:44 +02:00
Jakob Borg
3e82a0a259 Again, the poor unsupporteds 2014-07-31 17:11:53 +02:00
Jakob Borg
c860ad23a0 Docstrings 2014-07-31 17:01:11 +02:00
Jakob Borg
4e36dd2943 Refactor out upgrade package 2014-07-31 16:51:58 +02:00
Jakob Borg
13d77f1557 Remove dead code 2014-07-31 15:43:29 +02:00
Jakob Borg
cc619f6b53 Don't get packages that are already in Godeps 2014-07-31 15:37:34 +02:00
Jakob Borg
d425794665 Setup should download packages for test 2014-07-31 15:25:44 +02:00
Jakob Borg
32da1c8d58 Handle ElementSizeExceeded on nested structs 2014-07-31 15:21:33 +02:00
Jakob Borg
830be1035b Remove pointless CompareClusterConfig 2014-07-31 14:17:46 +02:00
Jakob Borg
e9e45d0e29 Test clock ticks 2014-07-31 14:14:40 +02:00
Jakob Borg
d3ca265a25 Test logging handlers 2014-07-31 14:14:19 +02:00
Jakob Borg
244f0ffaf1 Test maps and versioning config 2014-07-31 14:13:55 +02:00
Jakob Borg
73f5c47fe2 Fix broadcast addrs for nets smaller than /8 2014-07-31 13:39:49 +02:00
Jakob Borg
e8b9600ddb Shiny badges are shiny 2014-07-31 13:31:24 +02:00
Jakob Borg
d2c813ffac Revert "Use drone.io instead of Travis"
This reverts commit 8e699f8243.
2014-07-31 13:18:27 +02:00
Jakob Borg
8e699f8243 Use drone.io instead of Travis 2014-07-31 12:56:05 +02:00
Jakob Borg
3f6cdc829b Get cover and goveralls in ./build.sh setup 2014-07-31 12:51:50 +02:00
Jakob Borg
c5c9ee92ac Rename pidx utility to stindex 2014-07-31 12:30:53 +02:00
Jakob Borg
7f1fcc9cfc Don't build all utility scripts as part of ./build.sh 2014-07-31 12:30:19 +02:00
Jakob Borg
9de45c3be4 No need to keep entire Bootstrap source 2014-07-31 12:16:26 +02:00
Jakob Borg
144a881ae5 Fix build for upgrade-unsupported platforms 2014-07-31 11:47:00 +02:00
Jakob Borg
4566690617 Enabling compression for self does not make sense 2014-07-31 11:01:39 +02:00
Jakob Borg
e8fe1590b6 Scanning status should have same color as syncing (ref #449) 2014-07-31 10:53:54 +02:00
Jakob Borg
25f4fd5a19 Woops! Use our logger, not log 2014-07-31 10:33:47 +02:00
Jakob Borg
7b8c126aa1 Exit codes for -upgrade and -upgrade-check (fixes #194) 2014-07-31 10:32:19 +02:00
Jakob Borg
86b3ff3099 Better lang-en updates 2014-07-31 09:08:31 +02:00
Jakob Borg
fa9df4dc5e Don't log a panic when there are no releases 2014-07-31 09:08:31 +02:00
Jakob Borg
fbd22e7b94 Rearrange settings slightly 2014-07-31 09:08:31 +02:00
Jakob Borg
e35411d90f Translation update 2014-07-31 08:07:40 +02:00
Jakob Borg
be15e48074 Remove discosrv (see https://github.com/syncthing/discosrv) 2014-07-30 22:18:02 +02:00
Jakob Borg
2be1218aa3 Fast parallel file hasher (fixes #293) 2014-07-30 20:10:46 +02:00
Jakob Borg
c47aebdd2a Don't hold memory used for sending indexes forever 2014-07-30 20:08:04 +02:00
Jakob Borg
f4d1632506 Better automatic translation update 2014-07-30 11:52:16 +02:00
Jakob Borg
8bfe4374de Archive indexes and config from v0.8 on upgrade 2014-07-30 11:45:55 +02:00
Jakob Borg
4afe02cb21 Implement almost full semver comparison (fixes #436) 2014-07-30 08:57:27 +02:00
Jakob Borg
115b967e5b Provide context in warnings, reduce severity of TLS handshake error (fixes #437) 2014-07-30 08:23:48 +02:00
Jakob Borg
ea4524024a Verify certificate name 2014-07-30 07:59:22 +02:00
Jakob Borg
4ff6cd9105 Asset update 2014-07-29 13:29:19 +02:00
Jakob Borg
96c17d8292 Translation update 2014-07-29 13:26:49 +02:00
Jakob Borg
bc6faaffc4 Add debug hook for completion, for integration tests 2014-07-29 13:01:27 +02:00
Jakob Borg
51e9839237 Handle UI in restart/shutdown 2014-07-29 11:59:11 +02:00
Jakob Borg
6115631746 Fix status updates for remote nodes 2014-07-29 11:54:00 +02:00
Jakob Borg
ee005fbc8e Generate events on scanning updates 2014-07-29 11:53:45 +02:00
Jakob Borg
e27d42935c Use event interface for GUI (fixes #383) 2014-07-29 11:06:52 +02:00
Jakob Borg
9c99d65716 Build on 32 bit archs (ref #446) 2014-07-28 15:25:34 +02:00
Jakob Borg
5b9469eed3 Might want to keep English as a valid language... 2014-07-28 15:17:43 +02:00
Jakob Borg
6805ac915b Ugly hack to automatically update translations. 2014-07-28 15:14:02 +02:00
Jakob Borg
7148cf99f7 Fix tests, again 2014-07-28 13:11:09 +02:00
Jakob Borg
67a3fb8bf2 Compression as a user option (fixes #446) 2014-07-28 12:44:46 +02:00
Jakob Borg
933b61f99f Fix protocol tests 2014-07-28 12:16:15 +02:00
Jakob Borg
6c5c14f35f Refactor compression support, now at message level. 2014-07-28 11:31:22 +02:00
Jakob Borg
6a441d5013 Merge pull request #445 from AudriusButkevicius/dupes
Fixes and improvements
2014-07-28 10:55:38 +02:00
Audrius Butkevicius
6b46465c77 Avoid resorting multiple times 2014-07-28 00:21:22 +01:00
Audrius Butkevicius
75388caeed Prevent duplicate nodes in repos 2014-07-28 00:15:16 +01:00
Audrius Butkevicius
2546930a1a Fix in-place removal 2014-07-28 00:08:15 +01:00
Jakob Borg
135e29a3bb Don't FATAL if a repo dir cannot be created (fixes #443) 2014-07-27 14:31:15 +02:00
Jakob Borg
3b65a58f59 Translation, language detection 2014-07-26 22:56:12 +02:00
Jakob Borg
49cb931572 Minor refactoring: extract variable... 2014-07-26 21:28:32 +02:00
Jakob Borg
b7176d2204 Implement reception of Close message 2014-07-26 21:27:55 +02:00
Jakob Borg
5bf7d372f6 Genfiles use actual source data 2014-07-26 13:06:57 +02:00
Jakob Borg
073775e461 Build Solaris again 2014-07-25 15:26:23 +02:00
Jakob Borg
fbf8f3dc68 Add LZ4 compression 2014-07-25 15:16:23 +02:00
Jakob Borg
e8c8cc550b Don't use 100% doing nothing 2014-07-25 14:59:56 +02:00
Jakob Borg
87c3790fa8 Debug events module 2014-07-25 14:50:14 +02:00
Jakob Borg
0d9dcb2f4f Remove file count and size limits in protocol 2014-07-25 09:01:54 +02:00
Jakob Borg
6188185b22 Beta versions *should* upgrade to other beta version (ref #436) 2014-07-24 14:23:25 +02:00
Jakob Borg
f762bd5e25 Always use correct format Node IDs in GUI 2014-07-24 13:23:26 +02:00
Jakob Borg
b676264fca Don't consider prereleases for -upgrade (fixes #436) 2014-07-24 12:55:41 +02:00
Jakob Borg
3640c3b66a Install all cmds when running build.sh without options 2014-07-24 10:00:57 +02:00
Jakob Borg
5087d02fba Faster puller loop 2014-07-24 09:56:54 +02:00
Jakob Borg
2aa4340551 Add performance stats collection 2014-07-24 09:56:53 +02:00
Jakob Borg
3b34895ae6 LocalVersion can move backwards as well as forwards 2014-07-23 13:03:52 +02:00
Jakob Borg
91cc84c4e6 Handle incoming indexes on the main goroutine (this should be fine now) 2014-07-23 13:03:36 +02:00
Jakob Borg
797e53c5ba Merge branch 'v0.8'
* v0.8:
  Handle WANPPPConnection devices (fixes #431)
  Revert "Add temporary debug logging for #344 (revert later)"
  incomingIndexes should not be a package variable (fixes #344)
  Continue discovery on connect errors (fixes #324)

Conflicts:
	files/set.go
	model/model.go
	protocol/protocol.go
2014-07-23 12:00:54 +02:00
Jakob Borg
c714a12ad7 Improve protocol & leveldb debugging 2014-07-23 11:55:55 +02:00
Jakob Borg
08ce9b09ec Test and fix reconnects during pull 2014-07-23 10:52:07 +02:00
Jakob Borg
3152152ed9 Always build discosrv by default 2014-07-23 08:42:49 +02:00
Jakob Borg
544fea51b0 Update all deps to latest version 2014-07-23 08:31:36 +02:00
Jakob Borg
08ca9f9378 Consolidate cmds in cmd/ 2014-07-23 08:31:13 +02:00
Jakob Borg
978f68b744 Update deps to unfail tests 2014-07-23 07:59:45 +02:00
Jakob Borg
680896e4c4 Merge pull request #433 from AudriusButkevicius/dup
Remove non-existing nodes from repositories
2014-07-23 07:58:03 +02:00
Jakob Borg
975627af2e Add AudriusButkevicius 2014-07-23 07:57:37 +02:00
Audrius Butkevicius
b208102b98 Remove non-existing nodes from repositories 2014-07-22 22:29:44 +01:00
Jakob Borg
88a063434c Handle WANPPPConnection devices (fixes #431) 2014-07-22 22:47:54 +02:00
Jakob Borg
58cc108c0c Handle WANPPPConnection devices (fixes #431) 2014-07-22 19:23:43 +02:00
Jakob Borg
50b37f1366 Revert "Add temporary debug logging for #344 (revert later)"
This reverts commit 5353659f9f.
2014-07-08 11:49:28 +02:00
Jakob Borg
a7b6e35467 incomingIndexes should not be a package variable (fixes #344) 2014-07-08 11:49:11 +02:00
Ben Sidhom
37d83a4e2e Continue discovery on connect errors (fixes #324)
Continues trying to connect to the discovery server at regular intervals despite
failure. Whether or not to retry and retry interval should be specified in
configuration (not currently in this fix).
2014-07-05 23:10:11 +02:00
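
For orientation (an editor's sketch, not part of the changeset): the retry behavior described in the commit message above boils down to a loop like the following minimal Go sketch. The connectToDiscovery stub and the fixed retryInterval are hypothetical stand-ins; as the commit notes, the real interval is not yet configurable.

package main

import (
	"errors"
	"log"
	"time"
)

var attempts int

// connectToDiscovery is a hypothetical stand-in for the real discovery
// client; here it fails twice before succeeding, to exercise the retry path.
func connectToDiscovery() error {
	attempts++
	if attempts < 3 {
		return errors.New("discovery server unreachable")
	}
	return nil
}

func main() {
	const retryInterval = 1 * time.Second // fixed for the sketch; not configurable, per the commit note

	// Keep trying at regular intervals despite failure, instead of
	// giving up after the first connect error.
	for {
		err := connectToDiscovery()
		if err == nil {
			break
		}
		log.Println("discovery:", err, "- retrying in", retryInterval)
		time.Sleep(retryInterval)
	}
	log.Println("connected to discovery server")
}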
463 changed files with 28419 additions and 49748 deletions

.gitignore

@@ -4,7 +4,9 @@ syncthing.exe
*.zip
*.asc
*.sublime*
discosrv
.jshintrc
coverage.out
files/pidx
bin
perfstats*.csv
coverage.xml

.travis.yml

@@ -1,20 +0,0 @@
language: go
go:
- tip
install:
- export PATH=$PATH:$HOME/gopath/bin
- ./build.sh setup
- go get code.google.com/p/go.tools/cmd/cover
- go get github.com/mattn/goveralls
script:
- ./build.sh test-cov
after_success:
- goveralls -coverprofile=coverage.out -service=travis-ci -package=calmh/syncthing -repotoken="$COVERALS_TOKEN"
env:
global:
secure: "zEV2h2XtKHNLVdXJjM4LA/VjMfLVydm6goF+ARit+nOSGxGoH7f7jIdzJzhxgh7shKG93q61eLO1Tug+WBMYB2EpBuYnTB5AIMYhCDwNI8C4uBV6c3brHfcrie7MASNao8TID2QScASKNFFWvjv/i1Ccn5ztxdcQuhSsNjGZp8A="

CONTRIBUTING.md

@@ -1,7 +1,43 @@
## Reporting Bugs
Please file bugs in the [Github Issue
Tracker](https://github.com/syncthing/syncthing/issues). Include at
least the following:
- What happened
- What did you expect to happen instead of what *did* happen, if it's
not crazy obvious
- What operating system, operating system version and version of
Syncthing you are running
- The same for other connected nodes, where relevant
- Screenshot if the issue concerns something visible in the GUI
- Console log entries, where possible and relevant
If you're not sure whether something is relevant, erring on the side of
too much information will never get you yelled at. :)
## Contributing Translations
All translations are done via
[Transifex](https://www.transifex.com/projects/p/syncthing/). If you
wish to contribute to a translation, just head over there and sign up.
Before every release, the language resources are updated from the
latest info on Transifex.
## Contributing Code
Please do contribute! If you want to contribute but are unsure where to
start, the [Contributions Needed
topic](http://discourse.syncthing.net/t/contributions-needed/49)
lists areas in need of attention.
topic](http://discourse.syncthing.net/t/49) lists areas in need of
attention. In general, any open issues are fair game! Be prepared for a
[certain amount of
review](https://discourse.syncthing.net/t/733); it's all in the name of
quality. :)
## Licensing
@@ -16,8 +52,8 @@ to add yourself as a separate commit in your first pull request.
## Building
[See the
documentation](http://discourse.syncthing.net/t/building-syncthing/44)
[See the documentation](http://discourse.syncthing.net/t/44) on how to
get started with a build environment.
## Branches
@@ -44,7 +80,9 @@ Yes please!
## Style
`go fmt`
- `go fmt`
- Unix line breaks
## Documentation
@@ -53,4 +91,3 @@ Yes please!
## License
MIT

CONTRIBUTORS

@@ -1,10 +1,15 @@
Aaron Bieber <qbit@deftly.net>
Alexander Graf <register-github@alex-graf.de>
Andrew Dunham <andrew@du.nham.ca>
Arthur Axel fREW Schmidt <frew@afoolishmanifesto.com>
Audrius Butkevicius <audrius.butkevicius@gmail.com>
Arthur Axel fREW Schmidt <frew@afoolishmanifesto.com> <frioux@gmail.com>
Ben Sidhom <bsidhom@gmail.com>
Brandon Philips <brandon@ifup.org>
James Patterson <jamespatterson@operamail.com>
Jens Diemer <github.com@jensdiemer.de>
Gilli Sigurdsson <gilli@vx.is>
James Patterson <jamespatterson@operamail.com> <jpjp@users.noreply.github.com>
Jens Diemer <github.com@jensdiemer.de> <git@jensdiemer.de>
Marcin Dziadus <dziadus.marcin@gmail.com>
Philippe Schommers <philippe@schommers.be>
Ryan Sullivan <kayoticsully@gmail.com>
Tully Robinson <tully@tojr.org>
Veeti Paananen <veeti.paananen@rojekti.fi>

Godeps/Godeps.json

@@ -1,10 +1,8 @@
{
"ImportPath": "github.com/calmh/syncthing",
"GoVersion": "go1.3",
"ImportPath": "github.com/syncthing/syncthing",
"GoVersion": "go1.3.1",
"Packages": [
"./cmd/syncthing",
"./cmd/assets",
"./discover/cmd/discosrv"
"./cmd/..."
],
"Deps": [
{
@@ -14,23 +12,23 @@
},
{
"ImportPath": "code.google.com/p/go.crypto/bcrypt",
"Comment": "null-212",
"Rev": "1064b89a6fb591df0dd65422295b8498916b092f"
"Comment": "null-216",
"Rev": "41cd4647fccc72b0b79ef1bd1fe6735e718257cd"
},
{
"ImportPath": "code.google.com/p/go.crypto/blowfish",
"Comment": "null-212",
"Rev": "1064b89a6fb591df0dd65422295b8498916b092f"
"Comment": "null-216",
"Rev": "41cd4647fccc72b0b79ef1bd1fe6735e718257cd"
},
{
"ImportPath": "code.google.com/p/go.text/transform",
"Comment": "null-87",
"Rev": "c59e4f2f93824f81213799e64c3eead7be24660a"
"Comment": "null-90",
"Rev": "d65bffbc88a153d23a6d2a864531e6e7c2cde59b"
},
{
"ImportPath": "code.google.com/p/go.text/unicode/norm",
"Comment": "null-87",
"Rev": "c59e4f2f93824f81213799e64c3eead7be24660a"
"Comment": "null-90",
"Rev": "d65bffbc88a153d23a6d2a864531e6e7c2cde59b"
},
{
"ImportPath": "code.google.com/p/snappy-go/snappy",
@@ -38,8 +36,12 @@
"Rev": "12e4b4183793ac4b061921e7980845e750679fd0"
},
{
"ImportPath": "github.com/golang/groupcache/lru",
"Rev": "a531d51b7f9f3dd13c1c2b50d42d739b70442dbb"
"ImportPath": "github.com/bkaradzic/go-lz4",
"Rev": "77e2ba877bde9da31213bec75dbbe197fa507c21"
},
{
"ImportPath": "github.com/calmh/xdr",
"Rev": "a597b63b87d6140f79084c8aab214b4d533833a1"
},
{
"ImportPath": "github.com/juju/ratelimit",
@@ -47,7 +49,7 @@
},
{
"ImportPath": "github.com/syndtr/goleveldb/leveldb",
"Rev": "e1f2d2bdccd7c62f4d4a29aaf081bf1fc4404f91"
"Rev": "59d87758aeaab5ab6ed289c773349500228a1557"
},
{
"ImportPath": "github.com/vitrun/qart/coding",

Godeps/_workspace/src/code.google.com/p/go.crypto/blowfish/block.go

@@ -4,6 +4,22 @@
package blowfish
// getNextWord returns the next big-endian uint32 value from the byte slice
// at the given position in a circular manner, updating the position.
func getNextWord(b []byte, pos *int) uint32 {
var w uint32
j := *pos
for i := 0; i < 4; i++ {
w = w<<8 | uint32(b[j])
j++
if j >= len(b) {
j = 0
}
}
*pos = j
return w
}
// ExpandKey performs a key expansion on the given *Cipher. Specifically, it
// performs the Blowfish algorithm's key schedule which sets up the *Cipher's
// pi and substitution tables for calls to Encrypt. This is used, primarily,
@@ -12,6 +28,7 @@ package blowfish
func ExpandKey(key []byte, c *Cipher) {
j := 0
for i := 0; i < 18; i++ {
// Using inlined getNextWord for performance.
var d uint32
for k := 0; k < 4; k++ {
d = d<<8 | uint32(key[j])
@@ -54,86 +71,44 @@ func ExpandKey(key []byte, c *Cipher) {
func expandKeyWithSalt(key []byte, salt []byte, c *Cipher) {
j := 0
for i := 0; i < 18; i++ {
var d uint32
for k := 0; k < 4; k++ {
d = d<<8 | uint32(key[j])
j++
if j >= len(key) {
j = 0
}
}
c.p[i] ^= d
c.p[i] ^= getNextWord(key, &j)
}
j = 0
var expandedSalt [4]uint32
for i := range expandedSalt {
var d uint32
for k := 0; k < 4; k++ {
d = d<<8 | uint32(salt[j])
j++
if j >= len(salt) {
j = 0
}
}
expandedSalt[i] = d
}
var l, r uint32
for i := 0; i < 18; i += 2 {
l ^= expandedSalt[i&2]
r ^= expandedSalt[(i&2)+1]
l ^= getNextWord(salt, &j)
r ^= getNextWord(salt, &j)
l, r = encryptBlock(l, r, c)
c.p[i], c.p[i+1] = l, r
}
for i := 0; i < 256; i += 4 {
l ^= expandedSalt[2]
r ^= expandedSalt[3]
for i := 0; i < 256; i += 2 {
l ^= getNextWord(salt, &j)
r ^= getNextWord(salt, &j)
l, r = encryptBlock(l, r, c)
c.s0[i], c.s0[i+1] = l, r
l ^= expandedSalt[0]
r ^= expandedSalt[1]
l, r = encryptBlock(l, r, c)
c.s0[i+2], c.s0[i+3] = l, r
}
for i := 0; i < 256; i += 4 {
l ^= expandedSalt[2]
r ^= expandedSalt[3]
for i := 0; i < 256; i += 2 {
l ^= getNextWord(salt, &j)
r ^= getNextWord(salt, &j)
l, r = encryptBlock(l, r, c)
c.s1[i], c.s1[i+1] = l, r
l ^= expandedSalt[0]
r ^= expandedSalt[1]
l, r = encryptBlock(l, r, c)
c.s1[i+2], c.s1[i+3] = l, r
}
for i := 0; i < 256; i += 4 {
l ^= expandedSalt[2]
r ^= expandedSalt[3]
for i := 0; i < 256; i += 2 {
l ^= getNextWord(salt, &j)
r ^= getNextWord(salt, &j)
l, r = encryptBlock(l, r, c)
c.s2[i], c.s2[i+1] = l, r
l ^= expandedSalt[0]
r ^= expandedSalt[1]
l, r = encryptBlock(l, r, c)
c.s2[i+2], c.s2[i+3] = l, r
}
for i := 0; i < 256; i += 4 {
l ^= expandedSalt[2]
r ^= expandedSalt[3]
for i := 0; i < 256; i += 2 {
l ^= getNextWord(salt, &j)
r ^= getNextWord(salt, &j)
l, r = encryptBlock(l, r, c)
c.s3[i], c.s3[i+1] = l, r
l ^= expandedSalt[0]
r ^= expandedSalt[1]
l, r = encryptBlock(l, r, c)
c.s3[i+2], c.s3[i+3] = l, r
}
}
@@ -182,9 +157,3 @@ func decryptBlock(l, r uint32, c *Cipher) (uint32, uint32) {
xr ^= c.p[0]
return xr, xl
}
func zero(x []uint32) {
for i := range x {
x[i] = 0
}
}

Godeps/_workspace/src/code.google.com/p/go.crypto/blowfish/blowfish_test.go

@@ -4,9 +4,7 @@
package blowfish
import (
"testing"
)
import "testing"
type CryptTest struct {
key []byte
@@ -202,3 +200,75 @@ func TestSaltedCipherKeyLength(t *testing.T) {
t.Errorf("NewSaltedCipher with long key, gave error %#v", err)
}
}
// Test vectors generated with Blowfish from OpenSSH.
var saltedVectors = [][8]byte{
{0x0c, 0x82, 0x3b, 0x7b, 0x8d, 0x01, 0x4b, 0x7e},
{0xd1, 0xe1, 0x93, 0xf0, 0x70, 0xa6, 0xdb, 0x12},
{0xfc, 0x5e, 0xba, 0xde, 0xcb, 0xf8, 0x59, 0xad},
{0x8a, 0x0c, 0x76, 0xe7, 0xdd, 0x2c, 0xd3, 0xa8},
{0x2c, 0xcb, 0x7b, 0xee, 0xac, 0x7b, 0x7f, 0xf8},
{0xbb, 0xf6, 0x30, 0x6f, 0xe1, 0x5d, 0x62, 0xbf},
{0x97, 0x1e, 0xc1, 0x3d, 0x3d, 0xe0, 0x11, 0xe9},
{0x06, 0xd7, 0x4d, 0xb1, 0x80, 0xa3, 0xb1, 0x38},
{0x67, 0xa1, 0xa9, 0x75, 0x0e, 0x5b, 0xc6, 0xb4},
{0x51, 0x0f, 0x33, 0x0e, 0x4f, 0x67, 0xd2, 0x0c},
{0xf1, 0x73, 0x7e, 0xd8, 0x44, 0xea, 0xdb, 0xe5},
{0x14, 0x0e, 0x16, 0xce, 0x7f, 0x4a, 0x9c, 0x7b},
{0x4b, 0xfe, 0x43, 0xfd, 0xbf, 0x36, 0x04, 0x47},
{0xb1, 0xeb, 0x3e, 0x15, 0x36, 0xa7, 0xbb, 0xe2},
{0x6d, 0x0b, 0x41, 0xdd, 0x00, 0x98, 0x0b, 0x19},
{0xd3, 0xce, 0x45, 0xce, 0x1d, 0x56, 0xb7, 0xfc},
{0xd9, 0xf0, 0xfd, 0xda, 0xc0, 0x23, 0xb7, 0x93},
{0x4c, 0x6f, 0xa1, 0xe4, 0x0c, 0xa8, 0xca, 0x57},
{0xe6, 0x2f, 0x28, 0xa7, 0x0c, 0x94, 0x0d, 0x08},
{0x8f, 0xe3, 0xf0, 0xb6, 0x29, 0xe3, 0x44, 0x03},
{0xff, 0x98, 0xdd, 0x04, 0x45, 0xb4, 0x6d, 0x1f},
{0x9e, 0x45, 0x4d, 0x18, 0x40, 0x53, 0xdb, 0xef},
{0xb7, 0x3b, 0xef, 0x29, 0xbe, 0xa8, 0x13, 0x71},
{0x02, 0x54, 0x55, 0x41, 0x8e, 0x04, 0xfc, 0xad},
{0x6a, 0x0a, 0xee, 0x7c, 0x10, 0xd9, 0x19, 0xfe},
{0x0a, 0x22, 0xd9, 0x41, 0xcc, 0x23, 0x87, 0x13},
{0x6e, 0xff, 0x1f, 0xff, 0x36, 0x17, 0x9c, 0xbe},
{0x79, 0xad, 0xb7, 0x40, 0xf4, 0x9f, 0x51, 0xa6},
{0x97, 0x81, 0x99, 0xa4, 0xde, 0x9e, 0x9f, 0xb6},
{0x12, 0x19, 0x7a, 0x28, 0xd0, 0xdc, 0xcc, 0x92},
{0x81, 0xda, 0x60, 0x1e, 0x0e, 0xdd, 0x65, 0x56},
{0x7d, 0x76, 0x20, 0xb2, 0x73, 0xc9, 0x9e, 0xee},
}
func TestSaltedCipher(t *testing.T) {
var key, salt [32]byte
for i := range key {
key[i] = byte(i)
salt[i] = byte(i + 32)
}
for i, v := range saltedVectors {
c, err := NewSaltedCipher(key[:], salt[:i])
if err != nil {
t.Fatal(err)
}
var buf [8]byte
c.Encrypt(buf[:], buf[:])
if v != buf {
t.Errorf("%d: expected %x, got %x", i, v, buf)
}
}
}
func BenchmarkExpandKeyWithSalt(b *testing.B) {
key := make([]byte, 32)
salt := make([]byte, 16)
c, _ := NewCipher(key)
for i := 0; i < b.N; i++ {
expandKeyWithSalt(key, salt, c)
}
}
func BenchmarkExpandKey(b *testing.B) {
key := make([]byte, 32)
c, _ := NewCipher(key)
for i := 0; i < b.N; i++ {
ExpandKey(key, c)
}
}

Godeps/_workspace/src/code.google.com/p/go.crypto/blowfish/cipher.go

@@ -40,8 +40,11 @@ func NewCipher(key []byte) (*Cipher, error) {
// NewSaltedCipher creates a returns a Cipher that folds a salt into its key
// schedule. For most purposes, NewCipher, instead of NewSaltedCipher, is
// sufficient and desirable. For bcrypt compatiblity, the key can be over 56
// bytes. Only the first 16 bytes of salt are used.
// bytes.
func NewSaltedCipher(key, salt []byte) (*Cipher, error) {
if len(salt) == 0 {
return NewCipher(key)
}
var result Cipher
if k := len(key); k < 1 {
return nil, KeySizeError(k)

Godeps/_workspace/src/code.google.com/p/go.text/transform/transform.go

@@ -9,6 +9,7 @@
package transform
import (
"bytes"
"errors"
"io"
"unicode/utf8"
@@ -127,7 +128,7 @@ func (r *Reader) Read(p []byte) (int, error) {
// cannot read more bytes into src.
r.transformComplete = r.err != nil
continue
case err == ErrShortDst && r.dst1 != 0:
case err == ErrShortDst && (r.dst1 != 0 || n != 0):
// Make room in dst by copying out, and try again.
continue
case err == ErrShortSrc && r.src1-r.src0 != len(r.src) && r.err == nil:
@@ -210,7 +211,7 @@ func (w *Writer) Write(data []byte) (n int, err error) {
n += nSrc
}
switch {
case err == ErrShortDst && nDst > 0:
case err == ErrShortDst && (nDst > 0 || nSrc > 0):
case err == ErrShortSrc && len(src) < len(w.src):
m := copy(w.src, src)
// If w.n > 0, bytes from data were already copied to w.src and n
@@ -467,30 +468,125 @@ func (t removeF) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err err
return
}
// Bytes returns a new byte slice with the result of converting b using t.
// If any unrecoverable error occurs it returns nil.
func Bytes(t Transformer, b []byte) []byte {
out := make([]byte, len(b))
n := 0
for {
nDst, nSrc, err := t.Transform(out[n:], b, true)
n += nDst
if err == nil {
return out[:n]
} else if err != ErrShortDst {
return nil
}
b = b[nSrc:]
// grow returns a new []byte that is longer than b, and copies the first n bytes
// of b to the start of the new slice.
func grow(b []byte, n int) []byte {
m := len(b)
if m <= 256 {
m *= 2
} else {
m += m >> 1
}
buf := make([]byte, m)
copy(buf, b[:n])
return buf
}
// Grow the destination buffer.
sz := len(out)
if sz <= 256 {
sz *= 2
} else {
sz += sz >> 1
const initialBufSize = 128
// String returns a string with the result of converting s[:n] using t, where
// n <= len(s). If err == nil, n will be len(s).
func String(t Transformer, s string) (result string, n int, err error) {
if s == "" {
return "", 0, nil
}
// Allocate only once. Note that both dst and src escape when passed to
// Transform.
buf := [2 * initialBufSize]byte{}
dst := buf[:initialBufSize:initialBufSize]
src := buf[initialBufSize : 2*initialBufSize]
// Avoid allocation if the transformed string is identical to the original.
// After this loop, pDst will point to the furthest point in s for which it
// could be detected that t gives equal results, src[:nSrc] will
// indicated the last processed chunk of s for which the output is not equal
// and dst[:nDst] will be the transform of this chunk.
var nDst, nSrc int
pDst := 0 // Used as index in both src and dst in this loop.
for {
n := copy(src, s[pDst:])
nDst, nSrc, err = t.Transform(dst, src[:n], pDst+n == len(s))
// Note 1: we will not enter the loop with pDst == len(s) and we will
// not end the loop with it either. So if nSrc is 0, this means there is
// some kind of error from which we cannot recover given the current
// buffer sizes. We will give up in this case.
// Note 2: it is not entirely correct to simply do a bytes.Equal as
// a Transformer may buffer internally. It will work in most cases,
// though, and no harm is done if it doesn't work.
// TODO: let transformers implement an optional Spanner interface, akin
// to norm's QuickSpan. This would even allow us to avoid any allocation.
if nSrc == 0 || !bytes.Equal(dst[:nDst], src[:nSrc]) {
break
}
if pDst += nDst; pDst == len(s) {
return s, pDst, nil
}
}
// Move the bytes seen so far to dst.
pSrc := pDst + nSrc
if pDst+nDst <= initialBufSize {
copy(dst[pDst:], dst[:nDst])
} else {
b := make([]byte, len(s)+nDst-nSrc)
copy(b[pDst:], dst[:nDst])
dst = b
}
copy(dst, s[:pDst])
pDst += nDst
if err != nil && err != ErrShortDst && err != ErrShortSrc {
return string(dst[:pDst]), pSrc, err
}
// Complete the string with the remainder.
for {
n := copy(src, s[pSrc:])
nDst, nSrc, err = t.Transform(dst[pDst:], src[:n], pSrc+n == len(s))
pDst += nDst
pSrc += nSrc
switch err {
case nil:
if pSrc == len(s) {
return string(dst[:pDst]), pSrc, nil
}
case ErrShortDst:
// Do not grow as long as we can make progress. This may avoid
// excessive allocations.
if nDst == 0 {
dst = grow(dst, pDst)
}
case ErrShortSrc:
if nSrc == 0 {
src = grow(src, 0)
}
default:
return string(dst[:pDst]), pSrc, err
}
}
}
// Bytes returns a new byte slice with the result of converting b[:n] using t,
// where n <= len(b). If err == nil, n will be len(b).
func Bytes(t Transformer, b []byte) (result []byte, n int, err error) {
dst := make([]byte, len(b))
pDst, pSrc := 0, 0
for {
nDst, nSrc, err := t.Transform(dst[pDst:], b[pSrc:], true)
pDst += nDst
pSrc += nSrc
if err != ErrShortDst {
return dst[:pDst], pSrc, err
}
// Grow the destination buffer, but do not grow as long as we can make
// progress. This may avoid excessive allocations.
if nDst == 0 {
dst = grow(dst, pDst)
}
out2 := make([]byte, sz)
copy(out2, out[:n])
out = out2
}
}

Godeps/_workspace/src/code.google.com/p/go.text/transform/transform_test.go

@@ -12,6 +12,7 @@ import (
"strconv"
"strings"
"testing"
"time"
"unicode/utf8"
)
@@ -132,6 +133,43 @@ func (e rleEncode) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err e
return nDst, nSrc, nil
}
// trickler consumes all input bytes, but writes a single byte at a time to dst.
type trickler []byte
func (t *trickler) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
*t = append(*t, src...)
if len(*t) == 0 {
return 0, 0, nil
}
if len(dst) == 0 {
return 0, len(src), ErrShortDst
}
dst[0] = (*t)[0]
*t = (*t)[1:]
if len(*t) > 0 {
err = ErrShortDst
}
return 1, len(src), err
}
// delayedTrickler is like trickler, but delays writing output to dst. This is
// highly unlikely to be relevant in practice, but it seems like a good idea
// to have some tolerance as long as progress can be detected.
type delayedTrickler []byte
func (t *delayedTrickler) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
if len(*t) > 0 && len(dst) > 0 {
dst[0] = (*t)[0]
*t = (*t)[1:]
nDst = 1
}
*t = append(*t, src...)
if len(*t) > 0 {
err = ErrShortDst
}
return nDst, len(src), err
}
type testCase struct {
desc string
t Transformer
@@ -170,6 +208,15 @@ func (c chain) String() string {
}
var testCases = []testCase{
{
desc: "empty",
t: lowerCaseASCII{},
src: "",
dstSize: 100,
srcSize: 100,
wantStr: "",
},
{
desc: "basic",
t: lowerCaseASCII{},
@@ -378,6 +425,24 @@ var testCases = []testCase{
ioSize: 10,
wantStr: "4a6b2b4c4d1d",
},
{
desc: "trickler",
t: &trickler{},
src: "abcdefghijklm",
dstSize: 3,
srcSize: 15,
wantStr: "abcdefghijklm",
},
{
desc: "delayedTrickler",
t: &delayedTrickler{},
src: "abcdefghijklm",
dstSize: 3,
srcSize: 15,
wantStr: "abcdefghijklm",
},
}
func TestReader(t *testing.T) {
@@ -685,7 +750,7 @@ func doTransform(tc testCase) (res string, iter int, err error) {
switch {
case err == nil && len(in) != 0:
case err == ErrShortSrc && nSrc > 0:
case err == ErrShortDst && nDst > 0:
case err == ErrShortDst && (nDst > 0 || nSrc > 0):
default:
return string(out), iter, err
}
@@ -875,27 +940,136 @@ func TestRemoveFunc(t *testing.T) {
}
}
func TestBytes(t *testing.T) {
func testString(t *testing.T, f func(Transformer, string) (string, int, error)) {
for _, tt := range append(testCases, chainTests()...) {
if tt.desc == "allowStutter = true" {
// We don't have control over the buffer size, so we eliminate tests
// that depend on a specific buffer size being set.
continue
}
got := Bytes(tt.t, []byte(tt.src))
if tt.wantErr != nil {
if tt.wantErr != ErrShortDst && tt.wantErr != ErrShortSrc {
// Bytes should return nil for non-recoverable errors.
if g, w := (got == nil), (tt.wantErr != nil); g != w {
t.Errorf("%s:error: got %v; want %v", tt.desc, g, w)
}
}
// The output strings in the tests that expect an error will
// almost certainly not be the same as the result of Bytes.
reset(tt.t)
if tt.wantErr == ErrShortDst || tt.wantErr == ErrShortSrc {
// The result string will be different.
continue
}
if string(got) != tt.wantStr {
got, n, err := f(tt.t, tt.src)
if tt.wantErr != err {
t.Errorf("%s:error: got %v; want %v", tt.desc, err, tt.wantErr)
}
if got, want := err == nil, n == len(tt.src); got != want {
t.Errorf("%s:n: got %v; want %v", tt.desc, got, want)
}
if got != tt.wantStr {
t.Errorf("%s:string: got %q; want %q", tt.desc, got, tt.wantStr)
}
}
}
func TestBytes(t *testing.T) {
testString(t, func(z Transformer, s string) (string, int, error) {
b, n, err := Bytes(z, []byte(s))
return string(b), n, err
})
}
func TestString(t *testing.T) {
testString(t, String)
// Overrun the internal destination buffer.
for i, s := range []string{
strings.Repeat("a", initialBufSize-1),
strings.Repeat("a", initialBufSize+0),
strings.Repeat("a", initialBufSize+1),
strings.Repeat("A", initialBufSize-1),
strings.Repeat("A", initialBufSize+0),
strings.Repeat("A", initialBufSize+1),
strings.Repeat("A", 2*initialBufSize-1),
strings.Repeat("A", 2*initialBufSize+0),
strings.Repeat("A", 2*initialBufSize+1),
strings.Repeat("a", initialBufSize-2) + "A",
strings.Repeat("a", initialBufSize-1) + "A",
strings.Repeat("a", initialBufSize+0) + "A",
strings.Repeat("a", initialBufSize+1) + "A",
} {
got, _, _ := String(lowerCaseASCII{}, s)
if want := strings.ToLower(s); got != want {
t.Errorf("%d:dst buffer test: got %s (%d); want %s (%d)", i, got, len(got), want, len(want))
}
}
// Overrun the internal source buffer.
for i, s := range []string{
strings.Repeat("a", initialBufSize-1),
strings.Repeat("a", initialBufSize+0),
strings.Repeat("a", initialBufSize+1),
strings.Repeat("a", 2*initialBufSize+1),
strings.Repeat("a", 2*initialBufSize+0),
strings.Repeat("a", 2*initialBufSize+1),
} {
got, _, _ := String(rleEncode{}, s)
if want := fmt.Sprintf("%da", len(s)); got != want {
t.Errorf("%d:src buffer test: got %s (%d); want %s (%d)", i, got, len(got), want, len(want))
}
}
// Test allocations for non-changing strings.
// Note we still need to allocate a single buffer.
for i, s := range []string{
"",
"123",
"123456789",
strings.Repeat("a", initialBufSize),
strings.Repeat("a", 10*initialBufSize),
} {
if n := testing.AllocsPerRun(5, func() { String(&lowerCaseASCII{}, s) }); n > 1 {
t.Errorf("%d: #allocs was %f; want 1", i, n)
}
}
}
// TestBytesAllocation tests that buffer growth stays limited with the trickler
// transformer, which behaves oddly but within spec. In case buffer growth is
// not correctly handled, the test will either panic with a failed allocation or
// thrash. To ensure the tests terminate under the last condition, we time out
// after some sufficiently long period of time.
func TestBytesAllocation(t *testing.T) {
done := make(chan bool)
go func() {
in := bytes.Repeat([]byte{'a'}, 1000)
tr := trickler(make([]byte, 1))
Bytes(&tr, in)
done <- true
}()
select {
case <-done:
case <-time.After(3 * time.Second):
t.Error("time out, likely due to excessive allocation")
}
}
// TestStringAllocation tests that buffer growth stays limited with the trickler
// transformer, which behaves oddly but within spec. In case buffer growth is
// not correctly handled, the test will either panic with a failed allocation or
// thrash. To ensure the tests terminate under the last condition, we time out
// after some sufficiently long period of time.
func TestStringAllocation(t *testing.T) {
done := make(chan bool)
go func() {
in := strings.Repeat("a", 1000)
tr := trickler(make([]byte, 1))
String(&tr, in)
done <- true
}()
select {
case <-done:
case <-time.After(3 * time.Second):
t.Error("time out, likely due to excessive allocation")
}
}
func BenchmarkStringLower(b *testing.B) {
in := strings.Repeat("a", 4096)
for i := 0; i < b.N; i++ {
String(&lowerCaseASCII{}, in)
}
}

Godeps/_workspace/src/code.google.com/p/go.text/unicode/norm/maketables.go

@@ -11,7 +11,6 @@
package main
import (
"bufio"
"bytes"
"flag"
"fmt"
@@ -24,6 +23,8 @@ import (
"strconv"
"strings"
"unicode"
"code.google.com/p/go.text/internal/ucd"
)
func main() {
@@ -63,31 +64,7 @@ var localFiles = flag.Bool("local",
var logger = log.New(os.Stderr, "", log.Lshortfile)
// UnicodeData.txt has form:
// 0037;DIGIT SEVEN;Nd;0;EN;;7;7;7;N;;;;;
// 007A;LATIN SMALL LETTER Z;Ll;0;L;;;;;N;;;005A;;005A
// See http://unicode.org/reports/tr44/ for full explanation
// The fields:
const (
FCodePoint = iota
FName
FGeneralCategory
FCanonicalCombiningClass
FBidiClass
FDecompMapping
FDecimalValue
FDigitValue
FNumericValue
FBidiMirrored
FUnicode1Name
FISOComment
FSimpleUppercaseMapping
FSimpleLowercaseMapping
FSimpleTitlecaseMapping
NumField
MaxChar = 0x10FFFF // anything above this shouldn't exist
)
const MaxChar = 0x10FFFF // anything above this shouldn't exist
// Quick Check properties of runes allow us to quickly
// determine whether a rune may occur in a normal form.
@@ -232,7 +209,7 @@ func openReader(file string) (input io.ReadCloser) {
return
}
func parseDecomposition(s string, skipfirst bool) (a []rune, e error) {
func parseDecomposition(s string, skipfirst bool) (a []rune, err error) {
decomp := strings.Split(s, " ")
if len(decomp) > 0 && skipfirst {
decomp = decomp[1:]
@@ -247,56 +224,31 @@ func parseDecomposition(s string, skipfirst bool) (a []rune, e error) {
return a, nil
}
func parseCharacter(line string) {
field := strings.Split(line, ";")
if len(field) != NumField {
logger.Fatalf("%5s: %d fields (expected %d)\n", line, len(field), NumField)
}
x, err := strconv.ParseUint(field[FCodePoint], 16, 64)
point := int(x)
if err != nil {
logger.Fatalf("%.5s...: %s", line, err)
}
if point == 0 {
return // not interesting and we use 0 as unset
}
if point > MaxChar {
logger.Fatalf("%5s: Rune %X > MaxChar (%X)", line, point, MaxChar)
return
}
state := SNormal
switch {
case strings.Index(field[FName], ", First>") > 0:
state = SFirst
case strings.Index(field[FName], ", Last>") > 0:
state = SLast
}
firstChar := lastChar + 1
lastChar = rune(point)
if state != SLast {
firstChar = lastChar
}
x, err = strconv.ParseUint(field[FCanonicalCombiningClass], 10, 64)
if err != nil {
logger.Fatalf("%U: bad ccc field: %s", int(x), err)
}
ccc := uint8(x)
decmap := field[FDecompMapping]
exp, e := parseDecomposition(decmap, false)
isCompat := false
if e != nil {
if len(decmap) > 0 {
exp, e = parseDecomposition(decmap, true)
if e != nil {
logger.Fatalf(`%U: bad decomp |%v|: "%s"`, int(x), decmap, e)
func loadUnicodeData() {
f := openReader("UnicodeData.txt")
defer f.Close()
p := ucd.New(f)
for p.Next() {
r := p.Rune(ucd.CodePoint)
char := &chars[r]
char.ccc = uint8(p.Uint(ucd.CanonicalCombiningClass))
decmap := p.String(ucd.DecompMapping)
exp, err := parseDecomposition(decmap, false)
isCompat := false
if err != nil {
if len(decmap) > 0 {
exp, err = parseDecomposition(decmap, true)
if err != nil {
logger.Fatalf(`%U: bad decomp |%v|: "%s"`, r, decmap, err)
}
isCompat = true
}
isCompat = true
}
}
for i := firstChar; i <= lastChar; i++ {
char := &chars[i]
char.name = field[FName]
char.codePoint = i
char.name = p.String(ucd.Name)
char.codePoint = r
char.forms[FCompatibility].decomp = exp
if !isCompat {
char.forms[FCanonical].decomp = exp
@@ -306,24 +258,9 @@ func parseCharacter(line string) {
if len(decmap) > 0 {
char.forms[FCompatibility].decomp = exp
}
char.ccc = ccc
char.state = SMissing
if i == lastChar {
char.state = state
}
}
return
}
func loadUnicodeData() {
f := openReader("UnicodeData.txt")
defer f.Close()
scanner := bufio.NewScanner(f)
for scanner.Scan() {
parseCharacter(scanner.Text())
}
if scanner.Err() != nil {
logger.Fatal(scanner.Err())
if err := p.Err(); err != nil {
logger.Fatal(err)
}
}
@@ -354,47 +291,22 @@ func compactCCC() {
}
}
var singlePointRe = regexp.MustCompile(`^([0-9A-F]+) *$`)
// CompositionExclusions.txt has form:
// 0958 # ...
// See http://unicode.org/reports/tr44/ for full explanation
func parseExclusion(line string) int {
comment := strings.Index(line, "#")
if comment >= 0 {
line = line[0:comment]
}
if len(line) == 0 {
return 0
}
matches := singlePointRe.FindStringSubmatch(line)
if len(matches) != 2 {
logger.Fatalf("%s: %d matches (expected 1)\n", line, len(matches))
}
point, err := strconv.ParseUint(matches[1], 16, 64)
if err != nil {
logger.Fatalf("%.5s...: %s", line, err)
}
return int(point)
}
func loadCompositionExclusions() {
f := openReader("CompositionExclusions.txt")
defer f.Close()
scanner := bufio.NewScanner(f)
for scanner.Scan() {
point := parseExclusion(scanner.Text())
if point == 0 {
continue
}
c := &chars[point]
p := ucd.New(f)
for p.Next() {
c := &chars[p.Rune(0)]
if c.excludeInComp {
logger.Fatalf("%U: Duplicate entry in exclusions.", c.codePoint)
}
c.excludeInComp = true
}
if scanner.Err() != nil {
log.Fatal(scanner.Err())
if e := p.Err(); e != nil {
logger.Fatal(e)
}
}
@@ -988,8 +900,6 @@ func verifyComputed() {
}
}
var qcRe = regexp.MustCompile(`([0-9A-F\.]+) *; (NF.*_QC); ([YNM]) #.*`)
// Use values in DerivedNormalizationProps.txt to compare against the
// values we computed.
// DerivedNormalizationProps.txt has form:
@@ -999,27 +909,13 @@ var qcRe = regexp.MustCompile(`([0-9A-F\.]+) *; (NF.*_QC); ([YNM]) #.*`)
func testDerived() {
f := openReader("DerivedNormalizationProps.txt")
defer f.Close()
scanner := bufio.NewScanner(f)
for scanner.Scan() {
line := scanner.Text()
qc := qcRe.FindStringSubmatch(line)
if qc == nil {
continue
}
rng := strings.Split(qc[1], "..")
i, err := strconv.ParseUint(rng[0], 16, 64)
if err != nil {
log.Fatal(err)
}
j := i
if len(rng) > 1 {
j, err = strconv.ParseUint(rng[1], 16, 64)
if err != nil {
log.Fatal(err)
}
}
p := ucd.New(f)
for p.Next() {
r := p.Rune(0)
c := &chars[r]
var ftype, mode int
qt := strings.TrimSpace(qc[2])
qt := p.String(1)
switch qt {
case "NFC_QC":
ftype, mode = FCanonical, MComposed
@@ -1030,10 +926,10 @@ func testDerived() {
case "NFKD_QC":
ftype, mode = FCompatibility, MDecomposed
default:
log.Fatalf(`Unexpected quick check type "%s"`, qt)
continue
}
var qr QCResult
switch qc[3] {
switch p.String(2) {
case "Y":
qr = QCYes
case "N":
@@ -1041,27 +937,15 @@ func testDerived() {
case "M":
qr = QCMaybe
default:
log.Fatalf(`Unexpected quick check value "%s"`, qc[3])
log.Fatalf(`Unexpected quick check value "%s"`, p.String(2))
}
var lastFailed bool
// Verify current
for ; i <= j; i++ {
c := &chars[int(i)]
c.forms[ftype].verified[mode] = true
curqr := c.forms[ftype].quickCheck[mode]
if curqr != qr {
if !lastFailed {
logger.Printf("%s: %.4X..%.4X -- %s\n",
qt, int(i), int(j), line[0:50])
}
logger.Printf("%U: FAILED %s (was %v need %v)\n",
int(i), qt, curqr, qr)
lastFailed = true
}
if got := c.forms[ftype].quickCheck[mode]; got != qr {
logger.Printf("%U: FAILED %s (was %v need %v)\n", r, qt, got, qr)
}
c.forms[ftype].verified[mode] = true
}
if scanner.Err() != nil {
logger.Fatal(scanner.Err())
if err := p.Err(); err != nil {
logger.Fatal(err)
}
// Any unspecified value must be QCYes. Verify this.
for i, c := range chars {

View File

@@ -0,0 +1 @@
/lz4-example/lz4-example

View File

@@ -0,0 +1,7 @@
language: go
go:
- 1.1
- 1.2
- 1.3
- tip

View File

@@ -0,0 +1,24 @@
Copyright 2011-2012 Branimir Karadzic. All rights reserved.
Copyright 2013 Damian Gryski. All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY COPYRIGHT HOLDER ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
SHALL COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
THE POSSIBILITY OF SUCH DAMAGE.

View File

@@ -0,0 +1,71 @@
go-lz4
======
go-lz4 is a port of the LZ4 lossless compression algorithm to Go. The original C code
is located at:
https://code.google.com/p/lz4/
Status
------
[![Build Status](https://secure.travis-ci.org/bkaradzic/go-lz4.png)](http://travis-ci.org/bkaradzic/go-lz4)
[![GoDoc](https://godoc.org/github.com/bkaradzic/go-lz4?status.png)](https://godoc.org/github.com/bkaradzic/go-lz4)
Usage
-----
go get github.com/bkaradzic/go-lz4
import "github.com/bkaradzic/go-lz4"
The package name is `lz4`
Notes
-----
* go-lz4 saves a uint32 with the original uncompressed length at the beginning
of the encoded buffer. This may get in the way of interoperability with
other implementations.
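For example, feeding go-lz4 output to a decoder that expects a raw LZ4 block means stripping that prefix first; a hedged sketch (the helper name is hypothetical):

// stripHeader splits go-lz4 output into its 4-byte little-endian length
// prefix and the raw LZ4 block that follows it.
import (
	"encoding/binary"
	"errors"
)

func stripHeader(buf []byte) (origLen uint32, rawBlock []byte, err error) {
	if len(buf) < 4 {
		return 0, nil, errors.New("buffer too short for go-lz4 header")
	}
	return binary.LittleEndian.Uint32(buf), buf[4:], nil
}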
Contributors
------------
Damian Gryski ([@dgryski](https://github.com/dgryski))
Dustin Sallings ([@dustin](https://github.com/dustin))
Contact
-------
[@bkaradzic](https://twitter.com/bkaradzic)
http://www.stuckingeometry.com
Project page
https://github.com/bkaradzic/go-lz4
License
-------
Copyright 2011-2012 Branimir Karadzic. All rights reserved.
Copyright 2013 Damian Gryski. All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY COPYRIGHT HOLDER ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
SHALL COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
THE POSSIBILITY OF SUCH DAMAGE.

View File

@@ -0,0 +1,74 @@
package main
import (
"math/rand"
"github.com/bkaradzic/go-lz4"
// lz4's API matches snappy's, so we can easily see how it performs
// lz4 "code.google.com/p/snappy-go/snappy"
)
var input = `
ADVENTURE I. A SCANDAL IN BOHEMIA
I.
To Sherlock Holmes she is always THE woman. I have seldom heard
him mention her under any other name. In his eyes she eclipses
and predominates the whole of her sex. It was not that he felt
any emotion akin to love for Irene Adler. All emotions, and that
one particularly, were abhorrent to his cold, precise but
admirably balanced mind. He was, I take it, the most perfect
reasoning and observing machine that the world has seen, but as a
lover he would have placed himself in a false position. He never
spoke of the softer passions, save with a gibe and a sneer. They
were admirable things for the observer--excellent for drawing the
veil from men's motives and actions. But for the trained reasoner
to admit such intrusions into his own delicate and finely
adjusted temperament was to introduce a distracting factor which
might throw a doubt upon all his mental results. Grit in a
sensitive instrument, or a crack in one of his own high-power
lenses, would not be more disturbing than a strong emotion in a
nature such as his. And yet there was but one woman to him, and
that woman was the late Irene Adler, of dubious and questionable
memory.
I had seen little of Holmes lately. My marriage had drifted us
away from each other. My own complete happiness, and the
home-centred interests which rise up around the man who first
finds himself master of his own establishment, were sufficient to
absorb all my attention, while Holmes, who loathed every form of
society with his whole Bohemian soul, remained in our lodgings in
Baker Street, buried among his old books, and alternating from
week to week between cocaine and ambition, the drowsiness of the
drug, and the fierce energy of his own keen nature. He was still,
as ever, deeply attracted by the study of crime, and occupied his
immense faculties and extraordinary powers of observation in
following out those clues, and clearing up those mysteries which
had been abandoned as hopeless by the official police. From time
to time I heard some vague account of his doings: of his summons
to Odessa in the case of the Trepoff murder, of his clearing up
of the singular tragedy of the Atkinson brothers at Trincomalee,
and finally of the mission which he had accomplished so
delicately and successfully for the reigning family of Holland.
Beyond these signs of his activity, however, which I merely
shared with all the readers of the daily press, I knew little of
my former friend and companion.
`
func main() {
compressed, _ := lz4.Encode(nil, []byte(input))
modified := make([]byte, len(compressed))
for {
copy(modified, compressed)
for i := 0; i < 100; i++ {
modified[rand.Intn(len(compressed)-4)+4] = byte(rand.Intn(256))
}
lz4.Decode(nil, modified)
}
}

View File

@@ -0,0 +1,86 @@
/*
* Copyright 2011 Branimir Karadzic. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without modification,
* are permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice, this
* list of conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY COPYRIGHT HOLDER ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
* SHALL COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
* WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
* OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
* THE POSSIBILITY OF SUCH DAMAGE.
*/
package main
import (
"flag"
"fmt"
"io/ioutil"
"log"
"os"
"runtime/pprof"
lz4 "github.com/bkaradzic/go-lz4"
)
var (
decompress = flag.Bool("d", false, "decompress")
)
func main() {
var optCPUProfile = flag.String("cpuprofile", "", "profile")
flag.Parse()
if *optCPUProfile != "" {
f, err := os.Create(*optCPUProfile)
if err != nil {
log.Fatal(err)
}
pprof.StartCPUProfile(f)
defer pprof.StopCPUProfile()
}
args := flag.Args()
var data []byte
if len(args) < 2 {
fmt.Print("Usage: lz4 [-d] <input> <output>\n")
os.Exit(1)
}
input, err := os.OpenFile(args[0], os.O_RDONLY, 0644)
if err != nil {
fmt.Printf("Failed to open input file %s\n", args[0])
os.Exit(1)
}
defer input.Close()
if *decompress {
data, _ = ioutil.ReadAll(input)
data, _ = lz4.Decode(nil, data)
} else {
data, _ = ioutil.ReadAll(input)
data, _ = lz4.Encode(nil, data)
}
err = ioutil.WriteFile(args[1], data, 0644)
if err != nil {
fmt.Printf("Failed to open output file %s\n", args[1])
os.Exit(1)
}
}

View File

@@ -0,0 +1,63 @@
package lz4
import (
"bytes"
"io/ioutil"
"testing"
)
var testfile, _ = ioutil.ReadFile("testdata/pg1661.txt")
func roundtrip(t *testing.T, input []byte) {
dst, err := Encode(nil, input)
if err != nil {
t.Errorf("got error during compression: %s", err)
}
output, err := Decode(nil, dst)
if err != nil {
t.Errorf("got error during decompress: %s", err)
}
if !bytes.Equal(output, input) {
t.Errorf("roundtrip failed")
}
}
func TestEmpty(t *testing.T) {
roundtrip(t, nil)
}
func TestLengths(t *testing.T) {
for i := 0; i < 1024; i++ {
roundtrip(t, testfile[:i])
}
for i := 1024; i < 4096; i += 23 {
roundtrip(t, testfile[:i])
}
}
func TestWords(t *testing.T) {
roundtrip(t, testfile)
}
func BenchmarkLZ4Encode(b *testing.B) {
for i := 0; i < b.N; i++ {
Encode(nil, testfile)
}
}
func BenchmarkLZ4Decode(b *testing.B) {
var compressed, _ = Encode(nil, testfile)
b.ResetTimer()
for i := 0; i < b.N; i++ {
Decode(nil, compressed)
}
}

View File

@@ -0,0 +1,194 @@
/*
* Copyright 2011-2012 Branimir Karadzic. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without modification,
* are permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice, this
* list of conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY COPYRIGHT HOLDER ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
* SHALL COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
* WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
* OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
* THE POSSIBILITY OF SUCH DAMAGE.
*/
package lz4
import (
"encoding/binary"
"errors"
"io"
)
var (
// ErrCorrupt indicates the input was corrupt
ErrCorrupt = errors.New("corrupt input")
)
const (
mlBits = 4
mlMask = (1 << mlBits) - 1
runBits = 8 - mlBits
runMask = (1 << runBits) - 1
)
type decoder struct {
src []byte
dst []byte
spos uint32
dpos uint32
ref uint32
}
func (d *decoder) readByte() (uint8, error) {
if int(d.spos) == len(d.src) {
return 0, io.EOF
}
b := d.src[d.spos]
d.spos++
return b, nil
}
func (d *decoder) getLen() (uint32, error) {
length := uint32(0)
ln, err := d.readByte()
if err != nil {
return 0, ErrCorrupt
}
for ln == 255 {
length += 255
ln, err = d.readByte()
if err != nil {
return 0, ErrCorrupt
}
}
length += uint32(ln)
return length, nil
}
func (d *decoder) cp(length, decr uint32) {
if int(d.ref+length) < int(d.dpos) {
copy(d.dst[d.dpos:], d.dst[d.ref:d.ref+length])
} else {
for ii := uint32(0); ii < length; ii++ {
d.dst[d.dpos+ii] = d.dst[d.ref+ii]
}
}
d.dpos += length
d.ref += length - decr
}
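The byte-by-byte branch is what makes overlapping matches work: when the match source overlaps the write position, each iteration may read a byte written by an earlier one, giving an RLE-style expansion that a single copy() (which behaves as if the source were snapshotted) would not produce. A standalone illustration, not part of the library:

dst := []byte{'a', 'b', 0, 0, 0, 0}
ref, dpos := uint32(0), uint32(2)
for i := uint32(0); i < 4; i++ {
	dst[dpos+i] = dst[ref+i] // may read a byte written two iterations earlier
}
// dst is now "ababab": a 2-byte pattern expanded to fill 6 bytes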
func (d *decoder) finish(err error) error {
if err == io.EOF {
return nil
}
return err
}
// Decode returns the decoded form of src. The returned slice may be a
// subslice of dst if it was large enough to hold the entire decoded block.
func Decode(dst, src []byte) ([]byte, error) {
if len(src) < 4 {
return nil, ErrCorrupt
}
uncompressedLen := binary.LittleEndian.Uint32(src)
if uncompressedLen == 0 {
return nil, nil
}
if uncompressedLen > MaxInputSize {
return nil, ErrTooLarge
}
if dst == nil || len(dst) < int(uncompressedLen) {
dst = make([]byte, uncompressedLen)
}
d := decoder{src: src, dst: dst[:uncompressedLen], spos: 4}
decr := []uint32{0, 3, 2, 3}
for {
code, err := d.readByte()
if err != nil {
return d.dst, d.finish(err)
}
length := uint32(code >> mlBits)
if length == runMask {
ln, err := d.getLen()
if err != nil {
return nil, ErrCorrupt
}
length += ln
}
if int(d.spos+length) > len(d.src) {
return nil, ErrCorrupt
}
for ii := uint32(0); ii < length; ii++ {
d.dst[d.dpos+ii] = d.src[d.spos+ii]
}
d.spos += length
d.dpos += length
if int(d.spos) == len(d.src) {
return d.dst, nil
}
if int(d.spos+2) >= len(d.src) {
return nil, ErrCorrupt
}
back := uint32(d.src[d.spos]) | uint32(d.src[d.spos+1])<<8
if back > d.dpos {
return nil, ErrCorrupt
}
d.spos += 2
d.ref = d.dpos - back
length = uint32(code & mlMask)
if length == mlMask {
ln, err := d.getLen()
if err != nil {
return nil, ErrCorrupt
}
length += ln
}
literal := d.dpos - d.ref
if literal < 4 {
d.cp(4, decr[literal])
} else {
length += 4
}
if d.dpos+length > uncompressedLen {
return nil, ErrCorrupt
}
d.cp(length, 0)
}
}
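Since Decode only allocates when the supplied dst is too small, a caller decoding a stream of blocks can recycle a single buffer; a hedged sketch (blocks and process are hypothetical):

var buf []byte
for _, block := range blocks {
	out, err := Decode(buf, block)
	if err != nil {
		return err
	}
	process(out)          // out aliases buf once buf has grown large enough
	buf = out[:cap(out)]  // reuse the backing array for the next block
}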

View File

File diff suppressed because it is too large

View File

@@ -0,0 +1,188 @@
/*
* Copyright 2011-2012 Branimir Karadzic. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without modification,
* are permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice, this
* list of conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY COPYRIGHT HOLDER ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
* SHALL COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
* WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
* OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
* THE POSSIBILITY OF SUCH DAMAGE.
*/
package lz4
import "encoding/binary"
import "errors"
const (
minMatch = 4
hashLog = 17
hashTableSize = 1 << hashLog
hashShift = (minMatch * 8) - hashLog
incompressible uint32 = 128
uninitHash = 0x88888888
// MaxInputSize is the largest buffer that can be compressed in a single block
MaxInputSize = 0x7E000000
)
var (
// ErrTooLarge indicates the input buffer was too large
ErrTooLarge = errors.New("input too large")
)
type encoder struct {
src []byte
dst []byte
hashTable []uint32
pos uint32
anchor uint32
dpos uint32
}
// CompressBound returns the maximum length of an LZ4 block, given its uncompressed length
func CompressBound(isize int) int {
if isize > MaxInputSize {
return 0
}
return isize + ((isize) / 255) + 16 + 4
}
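Concretely, for a 1000-byte input the bound works out to 1000 + 1000/255 + 16 + 4 = 1000 + 3 + 20 = 1023 bytes, so sizing dst with CompressBound up front lets Encode skip its internal allocation:

dst := make([]byte, CompressBound(len(src))) // 1023 bytes for a 1000-byte src
out, err := Encode(dst, src)                 // out aliases dst; no reallocation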
func (e *encoder) writeLiterals(length, mlLen, pos uint32) {
ln := length
var code byte
if ln > runMask-1 {
code = runMask
} else {
code = byte(ln)
}
if mlLen > mlMask-1 {
e.dst[e.dpos] = (code << mlBits) + byte(mlMask)
} else {
e.dst[e.dpos] = (code << mlBits) + byte(mlLen)
}
e.dpos++
if code == runMask {
ln -= runMask
for ; ln > 254; ln -= 255 {
e.dst[e.dpos] = 255
e.dpos++
}
e.dst[e.dpos] = byte(ln)
e.dpos++
}
for ii := uint32(0); ii < length; ii++ {
e.dst[e.dpos+ii] = e.src[pos+ii]
}
e.dpos += length
}
// Encode returns the encoded form of src. The returned slice may be a
// sub-slice of dst if it was large enough to hold the entire output.
func Encode(dst, src []byte) ([]byte, error) {
if len(src) >= MaxInputSize {
return nil, ErrTooLarge
}
if n := CompressBound(len(src)); len(dst) < n {
dst = make([]byte, n)
}
e := encoder{src: src, dst: dst, hashTable: make([]uint32, hashTableSize)}
binary.LittleEndian.PutUint32(dst, uint32(len(src)))
e.dpos = 4
var (
step uint32 = 1
limit = incompressible
)
for {
if int(e.pos)+4 >= len(e.src) {
e.writeLiterals(uint32(len(e.src))-e.anchor, 0, e.anchor)
return e.dst[:e.dpos], nil
}
sequence := uint32(e.src[e.pos+3])<<24 | uint32(e.src[e.pos+2])<<16 | uint32(e.src[e.pos+1])<<8 | uint32(e.src[e.pos+0])
hash := (sequence * 2654435761) >> hashShift
ref := e.hashTable[hash] + uninitHash
e.hashTable[hash] = e.pos - uninitHash
if ((e.pos-ref)>>16) != 0 || uint32(e.src[ref+3])<<24|uint32(e.src[ref+2])<<16|uint32(e.src[ref+1])<<8|uint32(e.src[ref+0]) != sequence {
if e.pos-e.anchor > limit {
limit <<= 1
step += 1 + (step >> 2)
}
e.pos += step
continue
}
if step > 1 {
e.hashTable[hash] = ref - uninitHash
e.pos -= step - 1
step = 1
continue
}
limit = incompressible
ln := e.pos - e.anchor
back := e.pos - ref
anchor := e.anchor
e.pos += minMatch
ref += minMatch
e.anchor = e.pos
for int(e.pos) < len(e.src) && e.src[e.pos] == e.src[ref] {
e.pos++
ref++
}
mlLen := e.pos - e.anchor
e.writeLiterals(ln, mlLen, anchor)
e.dst[e.dpos] = uint8(back)
e.dst[e.dpos+1] = uint8(back >> 8)
e.dpos += 2
if mlLen > mlMask-1 {
mlLen -= mlMask
for mlLen > 254 {
mlLen -= 255
e.dst[e.dpos] = 255
e.dpos++
}
e.dst[e.dpos] = byte(mlLen)
e.dpos++
}
e.anchor = e.pos
}
}
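The match finder above uses multiplicative hashing: 2654435761 is a prime close to 2^32/φ (Fibonacci-style hashing), so multiplying the 4-byte sequence by it mixes the low bits into the high ones, and the right shift keeps the top hashLog = 17 bits as the table index:

// hashShift = (minMatch * 8) - hashLog = 32 - 17 = 15, so the product's
// top 17 bits index the 2^17-entry hash table.
hash := (sequence * 2654435761) >> hashShift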

View File

@@ -0,0 +1 @@
coverage.out

Godeps/_workspace/src/github.com/calmh/xdr/.travis.yml generated vendored Normal file
View File

@@ -0,0 +1,19 @@
language: go
go:
- tip
install:
- export PATH=$PATH:$HOME/gopath/bin
- go get code.google.com/p/go.tools/cmd/cover
- go get github.com/mattn/goveralls
script:
- ./generate.sh
- go test -coverprofile=coverage.out
after_success:
- goveralls -coverprofile=coverage.out -service=travis-ci -package=calmh/xdr -repotoken="$COVERALLS_TOKEN"
env:
global:
secure: SmgnrGfp2zLrA44ChRMpjPeujubt9veZ8Fx/OseMWECmacyV5N/TuDhzIbwo6QwV4xB0sBacoPzvxQbJRVjNKsPiSu72UbcQmQ7flN4Tf7nW09tSh1iW8NgrpBCq/3UYLoBu2iPBEBKm93IK0aGNAKs6oEkB0fU27iTVBwiTXOY=

View File

@@ -1,21 +1,19 @@
The MIT License (MIT)
Copyright (c) 2011-2014 Twitter, Inc
Copyright (C) 2014 Jakob Borg.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
of this software and associated documentation files (the "Software"), to
deal in the Software without restriction, including without limitation the
rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
sell copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
- The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
IN THE SOFTWARE.

Godeps/_workspace/src/github.com/calmh/xdr/README.md generated vendored Normal file
View File

@@ -0,0 +1,12 @@
xdr
===
[![Build Status](https://img.shields.io/travis/calmh/xdr.svg?style=flat)](https://travis-ci.org/calmh/xdr)
[![Coverage Status](https://img.shields.io/coveralls/calmh/xdr.svg?style=flat)](https://coveralls.io/r/calmh/xdr?branch=master)
[![API Documentation](http://img.shields.io/badge/api-Godoc-blue.svg?style=flat)](http://godoc.org/github.com/calmh/xdr)
[![MIT License](http://img.shields.io/badge/license-MIT-blue.svg?style=flat)](http://opensource.org/licenses/MIT)
This is an XDR encoding/decoding library. It uses code generation and
not reflection. It supports the IPDR bastardized XDR format when built
with `-tags ipdr`.
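A hedged sketch of the intended workflow, pieced together from the generated files elsewhere in this diff (the struct here is hypothetical; the // max: annotation and the MarshalXDR/UnmarshalXDR methods match those visible in bench_xdr_test.go):

// Source struct, annotated for genxdr:
type Person struct {
	Name string // max:64 -- bounded, enforced at encode and decode time
	Age  uint32
}

// After running genxdr (see generate.sh in this diff), the generated
// methods round-trip the struct:
bs := Person{Name: "ada", Age: 36}.MarshalXDR()
var p Person
if err := p.UnmarshalXDR(bs); err != nil {
	// handle decode error
}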

View File

@@ -1,6 +1,5 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// Copyright (C) 2014 Jakob Borg. All rights reserved. Use of this source code
// is governed by an MIT-style license that can be found in the LICENSE file.
package xdr_test
@@ -8,12 +7,15 @@ import (
"io"
"io/ioutil"
"testing"
"github.com/calmh/xdr"
)
type XDRBenchStruct struct {
I1 uint64
I2 uint32
I3 uint16
I4 uint8
Bs0 []byte // max:128
Bs1 []byte
S0 string // max:128
@@ -25,6 +27,7 @@ var s = XDRBenchStruct{
I1: 42,
I2: 43,
I3: 44,
I4: 45,
Bs0: []byte{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18},
Bs1: []byte{11, 12, 13, 14, 15, 16, 17, 18, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10},
S0: "Hello World! String one.",
@@ -57,6 +60,16 @@ func BenchmarkThisEncode(b *testing.B) {
}
}
func BenchmarkThisEncoder(b *testing.B) {
w := xdr.NewWriter(ioutil.Discard)
for i := 0; i < b.N; i++ {
_, err := s.encodeXDR(w)
if err != nil {
b.Fatal(err)
}
}
}
type repeatReader struct {
data []byte
}
@@ -85,3 +98,16 @@ func BenchmarkThisDecode(b *testing.B) {
rr.Reset(e)
}
}
func BenchmarkThisDecoder(b *testing.B) {
rr := &repeatReader{e}
r := xdr.NewReader(rr)
var t XDRBenchStruct
for i := 0; i < b.N; i++ {
err := t.decodeXDR(r)
if err != nil {
b.Fatal(err)
}
rr.Reset(e)
}
}

View File

@@ -0,0 +1,183 @@
// ************************************************************
// This file is automatically generated by genxdr. Do not edit.
// ************************************************************
package xdr_test
import (
"bytes"
"io"
"github.com/calmh/xdr"
)
/*
XDRBenchStruct Structure:
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| |
+ I1 (64 bits) +
| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| I2 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| 0x0000 | I3 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| uint8 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Length of Bs0 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ Bs0 (variable length) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Length of Bs1 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ Bs1 (variable length) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Length of S0 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ S0 (variable length) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Length of S1 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ S1 (variable length) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
struct XDRBenchStruct {
unsigned hyper I1;
unsigned int I2;
unsigned int I3;
uint8 I4;
opaque Bs0<128>;
opaque Bs1<>;
string S0<128>;
string S1<>;
}
*/
func (o XDRBenchStruct) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.encodeXDR(xw)
}
func (o XDRBenchStruct) MarshalXDR() []byte {
return o.AppendXDR(make([]byte, 0, 128))
}
func (o XDRBenchStruct) AppendXDR(bs []byte) []byte {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
o.encodeXDR(xw)
return []byte(aw)
}
func (o XDRBenchStruct) encodeXDR(xw *xdr.Writer) (int, error) {
xw.WriteUint64(o.I1)
xw.WriteUint32(o.I2)
xw.WriteUint16(o.I3)
xw.WriteUint8(o.I4)
if len(o.Bs0) > 128 {
return xw.Tot(), xdr.ErrElementSizeExceeded
}
xw.WriteBytes(o.Bs0)
xw.WriteBytes(o.Bs1)
if len(o.S0) > 128 {
return xw.Tot(), xdr.ErrElementSizeExceeded
}
xw.WriteString(o.S0)
xw.WriteString(o.S1)
return xw.Tot(), xw.Error()
}
func (o *XDRBenchStruct) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.decodeXDR(xr)
}
func (o *XDRBenchStruct) UnmarshalXDR(bs []byte) error {
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.decodeXDR(xr)
}
func (o *XDRBenchStruct) decodeXDR(xr *xdr.Reader) error {
o.I1 = xr.ReadUint64()
o.I2 = xr.ReadUint32()
o.I3 = xr.ReadUint16()
o.I4 = xr.ReadUint8()
o.Bs0 = xr.ReadBytesMax(128)
o.Bs1 = xr.ReadBytes()
o.S0 = xr.ReadStringMax(128)
o.S1 = xr.ReadString()
return xr.Error()
}
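The max:128 annotations become hard limits in both directions: the generated encoder returns xdr.ErrElementSizeExceeded for oversized fields, and the decoder caps its reads with ReadBytesMax/ReadStringMax. A small sketch (assuming ioutil is imported):

s := XDRBenchStruct{Bs0: make([]byte, 129)} // one byte over the max:128 bound
if _, err := s.EncodeXDR(ioutil.Discard); err != nil {
	// err == xdr.ErrElementSizeExceeded: the generated size check rejected Bs0
}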
/*
repeatReader Structure:
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Length of data |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ data (variable length) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
struct repeatReader {
opaque data<>;
}
*/
func (o repeatReader) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.encodeXDR(xw)
}
func (o repeatReader) MarshalXDR() []byte {
return o.AppendXDR(make([]byte, 0, 128))
}
func (o repeatReader) AppendXDR(bs []byte) []byte {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
o.encodeXDR(xw)
return []byte(aw)
}
func (o repeatReader) encodeXDR(xw *xdr.Writer) (int, error) {
xw.WriteBytes(o.data)
return xw.Tot(), xw.Error()
}
func (o *repeatReader) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.decodeXDR(xr)
}
func (o *repeatReader) UnmarshalXDR(bs []byte) error {
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.decodeXDR(xr)
}
func (o *repeatReader) decodeXDR(xr *xdr.Reader) error {
o.data = xr.ReadBytes()
return xr.Error()
}

View File

@@ -1,6 +1,5 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// Copyright (C) 2014 Jakob Borg. All rights reserved. Use of this source code
// is governed by an MIT-style license that can be found in the LICENSE file.
package main
@@ -34,11 +33,7 @@ type structInfo struct {
Fields []fieldInfo
}
var headerTpl = template.Must(template.New("header").Parse(`// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// ************************************************************
var headerTpl = template.Must(template.New("header").Parse(`// ************************************************************
// This file is automatically generated by genxdr. Do not edit.
// ************************************************************
@@ -48,7 +43,7 @@ import (
"bytes"
"io"
"github.com/calmh/syncthing/xdr"
"github.com/calmh/xdr"
)
`))
@@ -82,7 +77,10 @@ func (o {{.TypeName}}) encodeXDR(xw *xdr.Writer) (int, error) {
{{end}}
xw.Write{{$fieldInfo.Encoder}}(o.{{$fieldInfo.Name}})
{{else}}
o.{{$fieldInfo.Name}}.encodeXDR(xw)
_, err := o.{{$fieldInfo.Name}}.encodeXDR(xw)
if err != nil {
return xw.Tot(), err
}
{{end}}
{{else}}
{{if ge $fieldInfo.Max 1}}
@@ -97,7 +95,10 @@ func (o {{.TypeName}}) encodeXDR(xw *xdr.Writer) (int, error) {
{{else if $fieldInfo.IsBasic}}
xw.Write{{$fieldInfo.Encoder}}(o.{{$fieldInfo.Name}}[i])
{{else}}
o.{{$fieldInfo.Name}}[i].encodeXDR(xw)
_, err := o.{{$fieldInfo.Name}}[i].encodeXDR(xw)
if err != nil {
return xw.Tot(), err
}
{{end}}
}
{{end}}
@@ -160,6 +161,8 @@ type typeSet struct {
}
var xdrEncoders = map[string]typeSet{
"int8": typeSet{"uint8", "Uint8"},
"uint8": typeSet{"", "Uint8"},
"int16": typeSet{"uint16", "Uint16"},
"uint16": typeSet{"", "Uint16"},
"int32": typeSet{"uint32", "Uint32"},

Godeps/_workspace/src/github.com/calmh/xdr/debug.go generated vendored Normal file
View File

@@ -0,0 +1,16 @@
// Copyright (C) 2014 Jakob Borg. All rights reserved. Use of this source code
// is governed by an MIT-style license that can be found in the LICENSE file.
package xdr
import (
"log"
"os"
)
var (
debug = len(os.Getenv("XDRTRACE")) > 0
dl = log.New(os.Stdout, "xdr: ", log.Lshortfile|log.Ltime|log.Lmicroseconds)
)
const maxDebugBytes = 32

Godeps/_workspace/src/github.com/calmh/xdr/doc.go generated vendored Normal file
View File

@@ -0,0 +1,5 @@
// Copyright (C) 2014 Jakob Borg. All rights reserved. Use of this source code
// is governed by an MIT-style license that can be found in the LICENSE file.
// Package xdr implements an XDR (RFC 4506) encoder/decoder.
package xdr

View File

@@ -1,18 +1,23 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// Copyright (C) 2014 Jakob Borg. All rights reserved. Use of this source code
// is governed by an MIT-style license that can be found in the LICENSE file.
package xdr_test
import (
"bytes"
"math/rand"
"reflect"
"testing"
"testing/quick"
"github.com/calmh/xdr"
)
// Contains all supported types
type TestStruct struct {
I int
I8 int8
UI8 uint8
I16 int16
UI16 uint16
I32 int32
@@ -21,6 +26,25 @@ type TestStruct struct {
UI64 uint64
BS []byte
S string
C Opaque
}
type Opaque [32]byte
func (u *Opaque) encodeXDR(w *xdr.Writer) (int, error) {
return w.WriteRaw(u[:])
}
func (u *Opaque) decodeXDR(r *xdr.Reader) (int, error) {
return r.ReadRaw(u[:])
}
func (Opaque) Generate(rand *rand.Rand, size int) reflect.Value {
var u Opaque
for i := range u[:] {
u[i] = byte(rand.Int())
}
return reflect.ValueOf(u)
}
func TestEncDec(t *testing.T) {
@@ -38,7 +62,7 @@ func TestEncDec(t *testing.T) {
t0.I32 != t1.I32 || t0.UI32 != t1.UI32 ||
t0.I64 != t1.I64 || t0.UI64 != t1.UI64 ||
bytes.Compare(t0.BS, t1.BS) != 0 ||
t0.S != t1.S {
t0.S != t1.S || t0.C != t1.C {
t.Logf("%#v", t0)
t.Logf("%#v", t1)
return false

View File

@@ -0,0 +1,136 @@
// ************************************************************
// This file is automatically generated by genxdr. Do not edit.
// ************************************************************
package xdr_test
import (
"bytes"
"io"
"github.com/calmh/xdr"
)
/*
TestStruct Structure:
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| int |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| int8 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| uint8 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| int16 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| 0x0000 | UI16 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| int32 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| UI32 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| |
+ I64 (64 bits) +
| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| |
+ UI64 (64 bits) +
| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Length of BS |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ BS (variable length) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Length of S |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ S (variable length) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Opaque |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
struct TestStruct {
int I;
int8 I8;
uint8 UI8;
int16 I16;
unsigned int UI16;
int32 I32;
unsigned int UI32;
hyper I64;
unsigned hyper UI64;
opaque BS<>;
string S<>;
Opaque C;
}
*/
func (o TestStruct) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.encodeXDR(xw)
}
func (o TestStruct) MarshalXDR() []byte {
return o.AppendXDR(make([]byte, 0, 128))
}
func (o TestStruct) AppendXDR(bs []byte) []byte {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
o.encodeXDR(xw)
return []byte(aw)
}
func (o TestStruct) encodeXDR(xw *xdr.Writer) (int, error) {
xw.WriteUint64(uint64(o.I))
xw.WriteUint8(uint8(o.I8))
xw.WriteUint8(o.UI8)
xw.WriteUint16(uint16(o.I16))
xw.WriteUint16(o.UI16)
xw.WriteUint32(uint32(o.I32))
xw.WriteUint32(o.UI32)
xw.WriteUint64(uint64(o.I64))
xw.WriteUint64(o.UI64)
xw.WriteBytes(o.BS)
xw.WriteString(o.S)
_, err := o.C.encodeXDR(xw)
if err != nil {
return xw.Tot(), err
}
return xw.Tot(), xw.Error()
}
func (o *TestStruct) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.decodeXDR(xr)
}
func (o *TestStruct) UnmarshalXDR(bs []byte) error {
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.decodeXDR(xr)
}
func (o *TestStruct) decodeXDR(xr *xdr.Reader) error {
o.I = int(xr.ReadUint64())
o.I8 = int8(xr.ReadUint8())
o.UI8 = xr.ReadUint8()
o.I16 = int16(xr.ReadUint16())
o.UI16 = xr.ReadUint16()
o.I32 = int32(xr.ReadUint32())
o.UI32 = xr.ReadUint32()
o.I64 = int64(xr.ReadUint64())
o.UI64 = xr.ReadUint64()
o.BS = xr.ReadBytes()
o.S = xr.ReadString()
(&o.C).decodeXDR(xr)
return xr.Error()
}

View File

@@ -0,0 +1,4 @@
#!/bin/sh
go run cmd/genxdr/main.go -- bench_test.go > bench_xdr_test.go
go run cmd/genxdr/main.go -- encdec_test.go > encdec_xdr_test.go

Godeps/_workspace/src/github.com/calmh/xdr/pad_ipdr.go generated vendored Normal file
View File

@@ -0,0 +1,10 @@
// Copyright (C) 2014 Jakob Borg. All rights reserved. Use of this source code
// is governed by an MIT-style license that can be found in the LICENSE file.
// +build ipdr
package xdr
func pad(l int) int {
return 0
}

Godeps/_workspace/src/github.com/calmh/xdr/pad_xdr.go generated vendored Normal file
View File

@@ -0,0 +1,14 @@
// Copyright (C) 2014 Jakob Borg. All rights reserved. Use of this source code
// is governed by an MIT-style license that can be found in the LICENSE file.
// +build !ipdr
package xdr
func pad(l int) int {
d := l % 4
if d == 0 {
return 0
}
return 4 - d
}
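So the XDR build rounds every opaque/string payload up to a 4-byte boundary, while the IPDR build packs them tightly; for reference:

// pad(0)=0, pad(1)=3, pad(2)=2, pad(3)=1, pad(4)=0, pad(5)=3, ...
padded := l + pad(l) // always a multiple of 4 under the !ipdr build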

View File

@@ -7,18 +7,16 @@ package xdr
import (
"errors"
"io"
"time"
"reflect"
"unsafe"
)
var ErrElementSizeExceeded = errors.New("element size exceeded")
type Reader struct {
r io.Reader
tot int
err error
b [8]byte
sb []byte
last time.Time
r io.Reader
err error
b [8]byte
}
func NewReader(r io.Reader) *Reader {
@@ -27,24 +25,28 @@ func NewReader(r io.Reader) *Reader {
}
}
func (r *Reader) ReadString() string {
if r.sb == nil {
r.sb = make([]byte, 64)
} else {
r.sb = r.sb[:cap(r.sb)]
func (r *Reader) ReadRaw(bs []byte) (int, error) {
if r.err != nil {
return 0, r.err
}
r.sb = r.ReadBytesInto(r.sb)
return string(r.sb)
var n int
n, r.err = io.ReadFull(r.r, bs)
return n, r.err
}
func (r *Reader) ReadString() string {
return r.ReadStringMax(0)
}
func (r *Reader) ReadStringMax(max int) string {
if r.sb == nil {
r.sb = make([]byte, 64)
} else {
r.sb = r.sb[:cap(r.sb)]
buf := r.ReadBytesMaxInto(max, nil)
bh := (*reflect.SliceHeader)(unsafe.Pointer(&buf))
sh := reflect.StringHeader{
Data: bh.Data,
Len: bh.Len,
}
r.sb = r.ReadBytesMaxInto(max, r.sb)
return string(r.sb)
return *((*string)(unsafe.Pointer(&sh)))
}
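The header juggling above is a zero-copy []byte-to-string conversion: instead of the copying string(buf), it aliases buf's backing array as a string. That is safe here only because ReadBytesMaxInto(max, nil) returns a freshly allocated slice that is never written to again. The same pattern in isolation, as a hedged sketch:

// bytesToString aliases b as a string without copying; b must never be
// mutated afterwards (hypothetical helper, same pattern as ReadStringMax).
func bytesToString(b []byte) string {
	bh := (*reflect.SliceHeader)(unsafe.Pointer(&b))
	sh := reflect.StringHeader{Data: bh.Data, Len: bh.Len}
	return *(*string)(unsafe.Pointer(&sh))
}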
func (r *Reader) ReadBytes() []byte {
@@ -63,8 +65,6 @@ func (r *Reader) ReadBytesMaxInto(max int, dst []byte) []byte {
if r.err != nil {
return nil
}
r.last = time.Now()
s := r.tot
l := int(r.ReadUint32())
if r.err != nil {
@@ -75,53 +75,44 @@ func (r *Reader) ReadBytesMaxInto(max int, dst []byte) []byte {
return nil
}
if l+pad(l) > len(dst) {
dst = make([]byte, l+pad(l))
if fullLen := l + pad(l); fullLen > len(dst) {
dst = make([]byte, fullLen)
} else {
dst = dst[:l+pad(l)]
dst = dst[:fullLen]
}
var n int
n, r.err = io.ReadFull(r.r, dst)
if r.err != nil {
if debug {
dl.Debugf("@0x%x: rd bytes (%d): %v", s, len(dst), r.err)
dl.Printf("rd bytes (%d): %v", len(dst), r.err)
}
return nil
}
r.tot += n
if debug {
if n > maxDebugBytes {
dl.Debugf("@0x%x: rd bytes (%d): %x...", s, len(dst), dst[:maxDebugBytes])
dl.Printf("rd bytes (%d): %x...", len(dst), dst[:maxDebugBytes])
} else {
dl.Debugf("@0x%x: rd bytes (%d): %x", s, len(dst), dst)
dl.Printf("rd bytes (%d): %x", len(dst), dst)
}
}
return dst[:l]
}
func (r *Reader) ReadBool() bool {
return r.ReadUint32() != 0
}
func (r *Reader) ReadUint16() uint16 {
return uint16(r.ReadUint32())
return r.ReadUint8() != 0
}
func (r *Reader) ReadUint32() uint32 {
if r.err != nil {
return 0
}
r.last = time.Now()
s := r.tot
var n int
n, r.err = io.ReadFull(r.r, r.b[:4])
r.tot += n
_, r.err = io.ReadFull(r.r, r.b[:4])
if r.err != nil {
if debug {
dl.Debugf("@0x%x: rd uint32: %v", r.tot, r.err)
dl.Printf("rd uint32: %v", r.err)
}
return 0
}
@@ -129,7 +120,7 @@ func (r *Reader) ReadUint32() uint32 {
v := uint32(r.b[3]) | uint32(r.b[2])<<8 | uint32(r.b[1])<<16 | uint32(r.b[0])<<24
if debug {
dl.Debugf("@0x%x: rd uint32=%d (0x%08x)", s, v, v)
dl.Printf("rd uint32=%d (0x%08x)", v, v)
}
return v
}
@@ -138,15 +129,11 @@ func (r *Reader) ReadUint64() uint64 {
if r.err != nil {
return 0
}
r.last = time.Now()
s := r.tot
var n int
n, r.err = io.ReadFull(r.r, r.b[:8])
r.tot += n
_, r.err = io.ReadFull(r.r, r.b[:8])
if r.err != nil {
if debug {
dl.Debugf("@0x%x: rd uint64: %v", r.tot, r.err)
dl.Printf("rd uint64: %v", r.err)
}
return 0
}
@@ -155,19 +142,23 @@ func (r *Reader) ReadUint64() uint64 {
uint64(r.b[3])<<32 | uint64(r.b[2])<<40 | uint64(r.b[1])<<48 | uint64(r.b[0])<<56
if debug {
dl.Debugf("@0x%x: rd uint64=%d (0x%016x)", s, v, v)
dl.Printf("rd uint64=%d (0x%016x)", v, v)
}
return v
}
func (r *Reader) Tot() int {
return r.tot
type XDRError struct {
op string
err error
}
func (e XDRError) Error() string {
return "xdr " + e.op + ": " + e.err.Error()
}
func (r *Reader) Error() error {
return r.err
}
func (r *Reader) LastRead() time.Time {
return r.last
if r.err == nil {
return nil
}
return XDRError{"read", r.err}
}

View File

@@ -0,0 +1,49 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build ipdr
package xdr
import "io"
func (r *Reader) ReadUint8() uint8 {
if r.err != nil {
return 0
}
_, r.err = io.ReadFull(r.r, r.b[:1])
if r.err != nil {
if debug {
dl.Printf("rd uint8: %v", r.err)
}
return 0
}
if debug {
dl.Printf("rd uint8=%d (0x%02x)", r.b[0], r.b[0])
}
return r.b[0]
}
func (r *Reader) ReadUint16() uint16 {
if r.err != nil {
return 0
}
_, r.err = io.ReadFull(r.r, r.b[:2])
if r.err != nil {
if debug {
dl.Printf("rd uint16: %v", r.err)
}
return 0
}
v := uint16(r.b[1]) | uint16(r.b[0])<<8
if debug {
dl.Printf("rd uint16=%d (0x%04x)", v, v)
}
return v
}

View File

@@ -0,0 +1,15 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build !ipdr
package xdr
func (r *Reader) ReadUint8() uint8 {
return uint8(r.ReadUint32())
}
func (r *Reader) ReadUint16() uint16 {
return uint16(r.ReadUint32())
}

View File

@@ -1,6 +1,5 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// Copyright (C) 2014 Jakob Borg. All rights reserved. Use of this source code
// is governed by an MIT-style license that can be found in the LICENSE file.
// +build refl

View File

@@ -1,30 +1,21 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// Copyright (C) 2014 Jakob Borg. All rights reserved. Use of this source code
// is governed by an MIT-style license that can be found in the LICENSE file.
package xdr
import (
"io"
"time"
"reflect"
"unsafe"
)
func pad(l int) int {
d := l % 4
if d == 0 {
return 0
}
return 4 - d
}
var padBytes = []byte{0, 0, 0}
type Writer struct {
w io.Writer
tot int
err error
b [8]byte
last time.Time
w io.Writer
tot int
err error
b [8]byte
}
type AppendWriter []byte
@@ -40,8 +31,24 @@ func NewWriter(w io.Writer) *Writer {
}
}
func (w *Writer) WriteRaw(bs []byte) (int, error) {
if w.err != nil {
return 0, w.err
}
var n int
n, w.err = w.w.Write(bs)
return n, w.err
}
func (w *Writer) WriteString(s string) (int, error) {
return w.WriteBytes([]byte(s))
sh := *((*reflect.StringHeader)(unsafe.Pointer(&s)))
bh := reflect.SliceHeader{
Data: sh.Data,
Len: sh.Len,
Cap: sh.Len,
}
return w.WriteBytes(*(*[]byte)(unsafe.Pointer(&bh)))
}
func (w *Writer) WriteBytes(bs []byte) (int, error) {
@@ -49,7 +56,6 @@ func (w *Writer) WriteBytes(bs []byte) (int, error) {
return 0, w.err
}
w.last = time.Now()
w.WriteUint32(uint32(len(bs)))
if w.err != nil {
return 0, w.err
@@ -57,9 +63,9 @@ func (w *Writer) WriteBytes(bs []byte) (int, error) {
if debug {
if len(bs) > maxDebugBytes {
dl.Debugf("wr bytes (%d): %x...", len(bs), bs[:maxDebugBytes])
dl.Printf("wr bytes (%d): %x...", len(bs), bs[:maxDebugBytes])
} else {
dl.Debugf("wr bytes (%d): %x", len(bs), bs)
dl.Printf("wr bytes (%d): %x", len(bs), bs)
}
}
@@ -78,24 +84,19 @@ func (w *Writer) WriteBytes(bs []byte) (int, error) {
func (w *Writer) WriteBool(v bool) (int, error) {
if v {
return w.WriteUint32(1)
return w.WriteUint8(1)
} else {
return w.WriteUint32(0)
return w.WriteUint8(0)
}
}
func (w *Writer) WriteUint16(v uint16) (int, error) {
return w.WriteUint32(uint32(v))
}
func (w *Writer) WriteUint32(v uint32) (int, error) {
if w.err != nil {
return 0, w.err
}
w.last = time.Now()
if debug {
dl.Debugf("wr uint32=%d", v)
dl.Printf("wr uint32=%d", v)
}
w.b[0] = byte(v >> 24)
@@ -114,9 +115,8 @@ func (w *Writer) WriteUint64(v uint64) (int, error) {
return 0, w.err
}
w.last = time.Now()
if debug {
dl.Debugf("wr uint64=%d", v)
dl.Printf("wr uint64=%d", v)
}
w.b[0] = byte(v >> 56)
@@ -139,9 +139,8 @@ func (w *Writer) Tot() int {
}
func (w *Writer) Error() error {
return w.err
}
func (w *Writer) LastWrite() time.Time {
return w.last
if w.err == nil {
return nil
}
return XDRError{"write", w.err}
}

View File

@@ -0,0 +1,41 @@
// Copyright (C) 2014 Jakob Borg. All rights reserved. Use of this source code
// is governed by an MIT-style license that can be found in the LICENSE file.
// +build ipdr
package xdr
func (w *Writer) WriteUint8(v uint8) (int, error) {
if w.err != nil {
return 0, w.err
}
if debug {
dl.Printf("wr uint8=%d", v)
}
w.b[0] = byte(v)
var l int
l, w.err = w.w.Write(w.b[:1])
w.tot += l
return l, w.err
}
func (w *Writer) WriteUint16(v uint16) (int, error) {
if w.err != nil {
return 0, w.err
}
if debug {
dl.Printf("wr uint8=%d", v)
}
w.b[0] = byte(v >> 8)
w.b[1] = byte(v)
var l int
l, w.err = w.w.Write(w.b[:2])
w.tot += l
return l, w.err
}

View File

@@ -0,0 +1,14 @@
// Copyright (C) 2014 Jakob Borg. All rights reserved. Use of this source code
// is governed by an MIT-style license that can be found in the LICENSE file.
// +build !ipdr
package xdr
func (w *Writer) WriteUint8(v uint8) (int, error) {
return w.WriteUint32(uint32(v))
}
func (w *Writer) WriteUint16(v uint16) (int, error) {
return w.WriteUint32(uint32(v))
}

View File

@@ -1,6 +1,5 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// Copyright (C) 2014 Jakob Borg. All rights reserved. Use of this source code
// is governed by an MIT-style license that can be found in the LICENSE file.
package xdr
@@ -10,23 +9,6 @@ import (
"testing/quick"
)
func TestPad(t *testing.T) {
tests := [][]int{
{0, 0},
{1, 3},
{2, 2},
{3, 1},
{4, 0},
{32, 0},
{33, 3},
}
for _, tc := range tests {
if p := pad(tc[0]); p != tc[1] {
t.Errorf("Incorrect padding for %d bytes, %d != %d", tc[0], p, tc[1])
}
}
}
func TestBytesNil(t *testing.T) {
fn := func(bs []byte) bool {
var b = new(bytes.Buffer)
@@ -85,7 +67,7 @@ func TestReadBytesMaxInto(t *testing.T) {
}
}
func TestReadBytesMaxIntoNil(t *testing.T) {
func TestReadStringMax(t *testing.T) {
for tot := 42; tot < 72; tot++ {
for max := 0; max < 128; max++ {
var b = new(bytes.Buffer)
@@ -95,8 +77,8 @@ func TestReadBytesMaxIntoNil(t *testing.T) {
var toWrite = make([]byte, tot)
w.WriteBytes(toWrite)
var bs = r.ReadBytesMaxInto(max, nil)
var read = len(bs)
var str = r.ReadStringMax(max)
var read = len(str)
if max == 0 || tot <= max {
if read != tot {

View File

@@ -1,121 +0,0 @@
/*
Copyright 2013 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package lru implements an LRU cache.
package lru
import "container/list"
// Cache is an LRU cache. It is not safe for concurrent access.
type Cache struct {
// MaxEntries is the maximum number of cache entries before
// an item is evicted. Zero means no limit.
MaxEntries int
// OnEvicted optionally specifies a callback function to be
// executed when an entry is purged from the cache.
OnEvicted func(key Key, value interface{})
ll *list.List
cache map[interface{}]*list.Element
}
// A Key may be any value that is comparable. See http://golang.org/ref/spec#Comparison_operators
type Key interface{}
type entry struct {
key Key
value interface{}
}
// New creates a new Cache.
// If maxEntries is zero, the cache has no limit and it's assumed
// that eviction is done by the caller.
func New(maxEntries int) *Cache {
return &Cache{
MaxEntries: maxEntries,
ll: list.New(),
cache: make(map[interface{}]*list.Element),
}
}
// Add adds a value to the cache.
func (c *Cache) Add(key Key, value interface{}) {
if c.cache == nil {
c.cache = make(map[interface{}]*list.Element)
c.ll = list.New()
}
if ee, ok := c.cache[key]; ok {
c.ll.MoveToFront(ee)
ee.Value.(*entry).value = value
return
}
ele := c.ll.PushFront(&entry{key, value})
c.cache[key] = ele
if c.MaxEntries != 0 && c.ll.Len() > c.MaxEntries {
c.RemoveOldest()
}
}
// Get looks up a key's value from the cache.
func (c *Cache) Get(key Key) (value interface{}, ok bool) {
if c.cache == nil {
return
}
if ele, hit := c.cache[key]; hit {
c.ll.MoveToFront(ele)
return ele.Value.(*entry).value, true
}
return
}
// Remove removes the provided key from the cache.
func (c *Cache) Remove(key Key) {
if c.cache == nil {
return
}
if ele, hit := c.cache[key]; hit {
c.removeElement(ele)
}
}
// RemoveOldest removes the oldest item from the cache.
func (c *Cache) RemoveOldest() {
if c.cache == nil {
return
}
ele := c.ll.Back()
if ele != nil {
c.removeElement(ele)
}
}
func (c *Cache) removeElement(e *list.Element) {
c.ll.Remove(e)
kv := e.Value.(*entry)
delete(c.cache, kv.key)
if c.OnEvicted != nil {
c.OnEvicted(kv.key, kv.value)
}
}
// Len returns the number of items in the cache.
func (c *Cache) Len() int {
if c.cache == nil {
return 0
}
return c.ll.Len()
}
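For reference, the removed cache was used in the classic groupcache style, with eviction driven purely by entry count; a brief sketch:

c := lru.New(2) // at most two entries
c.Add("a", 1)
c.Add("b", 2)
c.Add("c", 3) // evicts "a", the least recently used entry
if _, ok := c.Get("a"); !ok {
	// "a" is gone; "b" and "c" remain
}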

View File

@@ -1,73 +0,0 @@
/*
Copyright 2013 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package lru
import (
"testing"
)
type simpleStruct struct {
int
string
}
type complexStruct struct {
int
simpleStruct
}
var getTests = []struct {
name string
keyToAdd interface{}
keyToGet interface{}
expectedOk bool
}{
{"string_hit", "myKey", "myKey", true},
{"string_miss", "myKey", "nonsense", false},
{"simple_struct_hit", simpleStruct{1, "two"}, simpleStruct{1, "two"}, true},
{"simeple_struct_miss", simpleStruct{1, "two"}, simpleStruct{0, "noway"}, false},
{"complex_struct_hit", complexStruct{1, simpleStruct{2, "three"}},
complexStruct{1, simpleStruct{2, "three"}}, true},
}
func TestGet(t *testing.T) {
for _, tt := range getTests {
lru := New(0)
lru.Add(tt.keyToAdd, 1234)
val, ok := lru.Get(tt.keyToGet)
if ok != tt.expectedOk {
t.Fatalf("%s: cache hit = %v; want %v", tt.name, ok, !ok)
} else if ok && val != 1234 {
t.Fatalf("%s expected get to return 1234 but got %v", tt.name, val)
}
}
}
func TestRemove(t *testing.T) {
lru := New(0)
lru.Add("myKey", 1234)
if val, ok := lru.Get("myKey"); !ok {
t.Fatal("TestRemove returned no match")
} else if val != 1234 {
t.Fatalf("TestRemove failed. Expected %d, got %v", 1234, val)
}
lru.Remove("myKey")
if _, ok := lru.Get("myKey"); ok {
t.Fatal("TestRemove returned a removed entry")
}
}

View File

@@ -170,7 +170,7 @@ func (p *dbBench) writes(perBatch int) {
b.SetBytes(116)
}
func (p *dbBench) drop() {
func (p *dbBench) gc() {
p.keys, p.values = nil, nil
runtime.GC()
}
@@ -249,6 +249,9 @@ func (p *dbBench) newIter() iterator.Iterator {
}
func (p *dbBench) close() {
if bp, err := p.db.GetProperty("leveldb.blockpool"); err == nil {
p.b.Log("Block pool stats: ", bp)
}
p.db.Close()
p.stor.Close()
os.RemoveAll(benchDB)
@@ -331,7 +334,7 @@ func BenchmarkDBRead(b *testing.B) {
p := openDBBench(b, false)
p.populate(b.N)
p.fill()
p.drop()
p.gc()
iter := p.newIter()
b.ResetTimer()
@@ -362,7 +365,7 @@ func BenchmarkDBReadUncompressed(b *testing.B) {
p := openDBBench(b, true)
p.populate(b.N)
p.fill()
p.drop()
p.gc()
iter := p.newIter()
b.ResetTimer()
@@ -379,7 +382,7 @@ func BenchmarkDBReadTable(b *testing.B) {
p.populate(b.N)
p.fill()
p.reopen()
p.drop()
p.gc()
iter := p.newIter()
b.ResetTimer()
@@ -395,7 +398,7 @@ func BenchmarkDBReadReverse(b *testing.B) {
p := openDBBench(b, false)
p.populate(b.N)
p.fill()
p.drop()
p.gc()
iter := p.newIter()
b.ResetTimer()
@@ -413,7 +416,7 @@ func BenchmarkDBReadReverseTable(b *testing.B) {
p.populate(b.N)
p.fill()
p.reopen()
p.drop()
p.gc()
iter := p.newIter()
b.ResetTimer()

View File

@@ -11,84 +11,106 @@ import (
"sync/atomic"
)
// SetFunc is used by the Namespace.Get method to create a cache object. SetFunc
// may return ok == false, in which case the cache object will not be created.
type SetFunc func() (ok bool, value interface{}, charge int, fin SetFin)
// SetFunc is the function that will be called by Namespace.Get to create
// a cache object. If charge is less than one, the cache object will not be
// registered in the cache tree; if value is nil, the cache object will not
// be created.
type SetFunc func() (charge int, value interface{})
// SetFin will be called when corresponding cache object are released.
type SetFin func()
// DelFin is the function that will be called as the result of a delete operation.
// exist == true indicates that the object exists; pending == true indicates
// that deletion has been initiated but hasn't completed yet (it waits for all
// handles to be released). exist == false means the object doesn't exist.
type DelFin func(exist, pending bool)
// DelFin will be called when the corresponding cache object is released.
// DelFin will be called after SetFin. exist is true if the corresponding
// cache object actually exists in the cache tree.
type DelFin func(exist bool)
// PurgeFin is the function that will be called as the result of a purge operation.
type PurgeFin func(ns, key uint64)
// PurgeFin will be called when the corresponding cache object is released.
// PurgeFin will be called after SetFin. If PurgeFin is present, DelFin will
// not be executed but will be passed to PurgeFin; it is up to the caller
// to call it or not.
type PurgeFin func(ns, key uint64, delfin DelFin)
// Cache is a cache tree.
// Cache is a cache tree. A cache instance must be goroutine-safe.
type Cache interface {
// SetCapacity sets cache capacity.
// SetCapacity sets cache tree capacity.
SetCapacity(capacity int)
// GetNamespace gets or creates a cache namespace for the given id.
// Capacity returns cache tree capacity.
Capacity() int
// Used returns used cache tree capacity.
Used() int
// Size returns entire alive cache objects size.
Size() int
// GetNamespace gets cache namespace with the given id.
// GetNamespace is never return nil.
GetNamespace(id uint64) Namespace
// Purge purges all cache namespaces, read Namespace.Purge method documentation.
// Purge purges all cache namespaces from this cache tree.
// This behaves the same as calling the Namespace.Purge method on every cache namespace.
Purge(fin PurgeFin)
// Zap zaps all cache namespaces, read Namespace.Zap method documentation.
Zap(closed bool)
// Zap detaches all cache namespaces from this cache tree.
// This behaves the same as calling the Namespace.Zap method on every cache namespace.
Zap()
}
// Namespace is a cache namespace.
// Namespace is a cache namespace. A namespace instance must be goroutine-safe.
type Namespace interface {
// Get gets cache object for the given key. The given SetFunc (if not nil) will
// be called if the given key does not exist.
// If the given key does not exist, SetFunc is nil, or SetFunc returns ok == false,
// Get will return ok == false.
Get(key uint64, setf SetFunc) (obj Object, ok bool)
// Delete deletes the cache object for the given key. If it exists, the cache
// object will be deleted later, when all of its handles have been released (i.e.
// no one uses it anymore), and the given DelFin (if not nil) will finally be
// executed. If no such cache object exists, the given DelFin will be executed anyway.
// Get gets the cache object with the given key.
// If the cache object is not found and setf is not nil, Get will atomically create
// the cache object by calling setf. Otherwise Get returns nil.
//
// Delete returns true if such a cache object exists.
// The returned cache handle should be released after use by calling Release
// method.
Get(key uint64, setf SetFunc) Handle
// Delete removes the cache object with the given key from the cache tree.
// A deleted cache object will be released as soon as all of its handles have
// been released.
// Delete only happens once; a subsequent delete will consider that the cache
// object doesn't exist, even if it isn't released yet.
//
// If not nil, fin will be called if the cache object doesn't exist or when it
// is finally released.
//
// Delete returns true if such a cache object exists and has never been deleted.
Delete(key uint64, fin DelFin) bool
// Purge deletes all cache objects, read Delete method documentation.
// Purge removes all cache objects within this namespace from the cache tree.
// This is the same as calling Delete on every cache object.
//
// If not nil, fin will be called on each cache object when it is finally
// released.
Purge(fin PurgeFin)
// Zap detaches the namespace from the cache tree and deletes all its cache
// objects. The cache object deletion and finalizer execution happen
// immediately, even if existing handles haven't yet been released.
// A zapped namespace can never be filled again.
// If closed is false, the Get function will always call the given SetFunc
// if it is not nil, but the result of the SetFunc will not be cached.
Zap(closed bool)
// Zap detaches the namespace from the cache tree and releases all its cache objects.
// A zapped namespace can never be filled again.
// Calling Get on a zapped namespace will always return nil.
Zap()
}
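As a hedged illustration of the Get/Delete contract above (key and payload invented): a DelFin registered while a handle is still outstanding only fires once that handle is released.
func exampleNamespace(ns Namespace) {
	h := ns.Get(7, func() (charge int, value interface{}) {
		return 1, []byte("payload")
	})
	// Delete while the handle is still held: deletion stays pending until
	// the last handle goes away.
	ns.Delete(7, func(exist, pending bool) {
		// Fires on the final release; exist reports whether the object
		// was actually in the cache tree.
	})
	h.Release() // last handle released; the DelFin above runs now
}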
// Object is a cache object.
type Object interface {
// Release releases the cache object. Other methods should not be called
// after the cache object has been released.
// Handle is a cache handle.
type Handle interface {
// Release releases this cache handle. This method can be safely called multiple
// times.
Release()
// Value returns the value of the cache object.
// Value returns the value of this cache handle.
// Value returns nil after this cache handle has been released.
Value() interface{}
}
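A small sketch of the Handle lifecycle implied by the comments above:
func exampleHandle(h Handle) {
	v := h.Value() // non-nil while the handle is live
	_ = v
	h.Release()   // releases the underlying cache object reference
	_ = h.Value() // nil from here on
	h.Release()   // no-op: Release is safe to call multiple times
}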
const (
DelNotExist = iota
DelExist
DelPending
)
// Namespace state.
type nsState int
const (
nsEffective nsState = iota
nsZapped
nsClosed
)
// Node state.
@@ -97,29 +119,29 @@ type nodeState int
const (
nodeEffective nodeState = iota
nodeEvicted
nodeRemoved
nodeDeleted
)
// Fake object.
type fakeObject struct {
// Fake handle.
type fakeHandle struct {
value interface{}
fin func()
once uint32
}
func (o *fakeObject) Value() interface{} {
if atomic.LoadUint32(&o.once) == 0 {
return o.value
func (h *fakeHandle) Value() interface{} {
if atomic.LoadUint32(&h.once) == 0 {
return h.value
}
return nil
}
func (o *fakeObject) Release() {
if !atomic.CompareAndSwapUint32(&o.once, 0, 1) {
func (h *fakeHandle) Release() {
if !atomic.CompareAndSwapUint32(&h.once, 0, 1) {
return
}
if o.fin != nil {
o.fin()
o.fin = nil
if h.fin != nil {
h.fin()
h.fin = nil
}
}

View File

@@ -7,15 +7,35 @@
package cache
import (
"fmt"
"math/rand"
"runtime"
"strings"
"sync"
"sync/atomic"
"testing"
"time"
)
func set(ns Namespace, key uint64, value interface{}, charge int, fin func()) Object {
obj, _ := ns.Get(key, func() (bool, interface{}, int, SetFin) {
return true, value, charge, fin
type releaserFunc struct {
fn func()
value interface{}
}
func (r releaserFunc) Release() {
if r.fn != nil {
r.fn()
}
}
func set(ns Namespace, key uint64, value interface{}, charge int, relf func()) Handle {
return ns.Get(key, func() (int, interface{}) {
if relf != nil {
return charge, releaserFunc{relf, value}
} else {
return charge, value
}
})
return obj
}
func TestCache_HitMiss(t *testing.T) {
@@ -43,29 +63,31 @@ func TestCache_HitMiss(t *testing.T) {
setfin++
}).Release()
for j, y := range cases {
r, ok := ns.Get(y.key, nil)
h := ns.Get(y.key, nil)
if j <= i {
// should hit
if !ok {
if h == nil {
t.Errorf("case '%d' iteration '%d' is miss", i, j)
} else if r.Value().(string) != y.value {
t.Errorf("case '%d' iteration '%d' has invalid value got '%s', want '%s'", i, j, r.Value().(string), y.value)
} else {
if x := h.Value().(releaserFunc).value.(string); x != y.value {
t.Errorf("case '%d' iteration '%d' has invalid value got '%s', want '%s'", i, j, x, y.value)
}
}
} else {
// should miss
if ok {
t.Errorf("case '%d' iteration '%d' is hit , value '%s'", i, j, r.Value().(string))
if h != nil {
t.Errorf("case '%d' iteration '%d' is hit , value '%s'", i, j, h.Value().(releaserFunc).value.(string))
}
}
if ok {
r.Release()
if h != nil {
h.Release()
}
}
}
for i, x := range cases {
finalizerOk := false
ns.Delete(x.key, func(exist bool) {
ns.Delete(x.key, func(exist, pending bool) {
finalizerOk = true
})
@@ -74,22 +96,24 @@ func TestCache_HitMiss(t *testing.T) {
}
for j, y := range cases {
r, ok := ns.Get(y.key, nil)
h := ns.Get(y.key, nil)
if j > i {
// should hit
if !ok {
if h == nil {
t.Errorf("case '%d' iteration '%d' is miss", i, j)
} else if r.Value().(string) != y.value {
t.Errorf("case '%d' iteration '%d' has invalid value got '%s', want '%s'", i, j, r.Value().(string), y.value)
} else {
if x := h.Value().(releaserFunc).value.(string); x != y.value {
t.Errorf("case '%d' iteration '%d' has invalid value got '%s', want '%s'", i, j, x, y.value)
}
}
} else {
// should miss
if ok {
t.Errorf("case '%d' iteration '%d' is hit, value '%s'", i, j, r.Value().(string))
if h != nil {
t.Errorf("case '%d' iteration '%d' is hit, value '%s'", i, j, h.Value().(releaserFunc).value.(string))
}
}
if ok {
r.Release()
if h != nil {
h.Release()
}
}
}
@@ -107,42 +131,42 @@ func TestLRUCache_Eviction(t *testing.T) {
set(ns, 3, 3, 1, nil).Release()
set(ns, 4, 4, 1, nil).Release()
set(ns, 5, 5, 1, nil).Release()
if r, ok := ns.Get(2, nil); ok { // 1,3,4,5,2
r.Release()
if h := ns.Get(2, nil); h != nil { // 1,3,4,5,2
h.Release()
}
set(ns, 9, 9, 10, nil).Release() // 5,2,9
for _, x := range []uint64{9, 2, 5, 1} {
r, ok := ns.Get(x, nil)
if !ok {
t.Errorf("miss for key '%d'", x)
for _, key := range []uint64{9, 2, 5, 1} {
h := ns.Get(key, nil)
if h == nil {
t.Errorf("miss for key '%d'", key)
} else {
if r.Value().(int) != int(x) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", x, x, r.Value().(int))
if x := h.Value().(int); x != int(key) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", key, key, x)
}
r.Release()
h.Release()
}
}
o1.Release()
for _, x := range []uint64{1, 2, 5} {
r, ok := ns.Get(x, nil)
if !ok {
t.Errorf("miss for key '%d'", x)
for _, key := range []uint64{1, 2, 5} {
h := ns.Get(key, nil)
if h == nil {
t.Errorf("miss for key '%d'", key)
} else {
if r.Value().(int) != int(x) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", x, x, r.Value().(int))
if x := h.Value().(int); x != int(key) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", key, key, x)
}
r.Release()
h.Release()
}
}
for _, x := range []uint64{3, 4, 9} {
r, ok := ns.Get(x, nil)
if ok {
t.Errorf("hit for key '%d'", x)
if r.Value().(int) != int(x) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", x, x, r.Value().(int))
for _, key := range []uint64{3, 4, 9} {
h := ns.Get(key, nil)
if h != nil {
t.Errorf("hit for key '%d'", key)
if x := h.Value().(int); x != int(key) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", key, key, x)
}
r.Release()
h.Release()
}
}
}
@@ -153,16 +177,15 @@ func TestLRUCache_SetGet(t *testing.T) {
for i := 0; i < 200; i++ {
n := uint64(rand.Intn(99999) % 20)
set(ns, n, n, 1, nil).Release()
if p, ok := ns.Get(n, nil); ok {
if p.Value() == nil {
if h := ns.Get(n, nil); h != nil {
if h.Value() == nil {
t.Errorf("key '%d' contains nil value", n)
} else {
got := p.Value().(uint64)
if got != n {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", n, n, got)
if x := h.Value().(uint64); x != n {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", n, n, x)
}
}
p.Release()
h.Release()
} else {
t.Errorf("key '%d' doesn't exist", n)
}
@@ -176,31 +199,319 @@ func TestLRUCache_Purge(t *testing.T) {
o2 := set(ns1, 2, 2, 1, nil)
ns1.Purge(nil)
set(ns1, 3, 3, 1, nil).Release()
for _, x := range []uint64{1, 2, 3} {
r, ok := ns1.Get(x, nil)
if !ok {
t.Errorf("miss for key '%d'", x)
for _, key := range []uint64{1, 2, 3} {
h := ns1.Get(key, nil)
if h == nil {
t.Errorf("miss for key '%d'", key)
} else {
if r.Value().(int) != int(x) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", x, x, r.Value().(int))
if x := h.Value().(int); x != int(key) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", key, key, x)
}
r.Release()
h.Release()
}
}
o1.Release()
o2.Release()
for _, x := range []uint64{1, 2} {
r, ok := ns1.Get(x, nil)
if ok {
t.Errorf("hit for key '%d'", x)
if r.Value().(int) != int(x) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", x, x, r.Value().(int))
for _, key := range []uint64{1, 2} {
h := ns1.Get(key, nil)
if h != nil {
t.Errorf("hit for key '%d'", key)
if x := h.Value().(int); x != int(key) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", key, key, x)
}
r.Release()
h.Release()
}
}
}
type testingCacheObjectCounter struct {
created uint32
released uint32
}
func (c *testingCacheObjectCounter) createOne() {
atomic.AddUint32(&c.created, 1)
}
func (c *testingCacheObjectCounter) releaseOne() {
atomic.AddUint32(&c.released, 1)
}
type testingCacheObject struct {
t *testing.T
cnt *testingCacheObjectCounter
ns, key uint64
releaseCalled uint32
}
func (x *testingCacheObject) Release() {
if atomic.CompareAndSwapUint32(&x.releaseCalled, 0, 1) {
x.cnt.releaseOne()
} else {
x.t.Errorf("duplicate setfin NS#%d KEY#%s", x.ns, x.key)
}
}
func TestLRUCache_Finalizer(t *testing.T) {
const (
capacity = 100
goroutines = 100
iterations = 10000
keymax = 8000
)
runtime.GOMAXPROCS(runtime.NumCPU())
defer runtime.GOMAXPROCS(1)
wg := &sync.WaitGroup{}
cnt := &testingCacheObjectCounter{}
c := NewLRUCache(capacity)
type instance struct {
seed int64
rnd *rand.Rand
ns uint64
effective int32
handles []Handle
handlesMap map[uint64]int
delete bool
purge bool
zap bool
wantDel int32
delfinCalledAll int32
delfinCalledEff int32
purgefinCalled int32
}
instanceGet := func(p *instance, ns Namespace, key uint64) {
h := ns.Get(key, func() (charge int, value interface{}) {
to := &testingCacheObject{
t: t, cnt: cnt,
ns: p.ns,
key: key,
}
atomic.AddInt32(&p.effective, 1)
cnt.createOne()
return 1, releaserFunc{func() {
to.Release()
atomic.AddInt32(&p.effective, -1)
}, to}
})
p.handles = append(p.handles, h)
p.handlesMap[key] = p.handlesMap[key] + 1
}
instanceRelease := func(p *instance, ns Namespace, i int) {
h := p.handles[i]
key := h.Value().(releaserFunc).value.(*testingCacheObject).key
if n := p.handlesMap[key]; n == 0 {
t.Fatal("key ref == 0")
} else if n > 1 {
p.handlesMap[key] = n - 1
} else {
delete(p.handlesMap, key)
}
h.Release()
p.handles = append(p.handles[:i], p.handles[i+1:]...)
p.handles[len(p.handles) : len(p.handles)+1][0] = nil
}
seeds := make([]int64, goroutines)
instances := make([]instance, goroutines)
for i := range instances {
p := &instances[i]
p.handlesMap = make(map[uint64]int)
if seeds[i] == 0 {
seeds[i] = time.Now().UnixNano()
}
p.seed = seeds[i]
p.rnd = rand.New(rand.NewSource(p.seed))
p.ns = uint64(i)
p.delete = i%6 == 0
p.purge = i%8 == 0
p.zap = i%12 == 0 || i%3 == 0
}
seedsStr := make([]string, len(seeds))
for i, seed := range seeds {
seedsStr[i] = fmt.Sprint(seed)
}
t.Logf("seeds := []int64{%s}", strings.Join(seedsStr, ", "))
// Get and release.
for i := range instances {
p := &instances[i]
wg.Add(1)
go func(p *instance) {
defer wg.Done()
ns := c.GetNamespace(p.ns)
for i := 0; i < iterations; i++ {
if len(p.handles) == 0 || p.rnd.Int()%2 == 0 {
instanceGet(p, ns, uint64(p.rnd.Intn(keymax)))
} else {
instanceRelease(p, ns, p.rnd.Intn(len(p.handles)))
}
}
}(p)
}
wg.Wait()
if used, cap := c.Used(), c.Capacity(); used > cap {
t.Errorf("Used > capacity, used=%d cap=%d", used, cap)
}
// Check effective objects.
for i := range instances {
p := &instances[i]
if int(p.effective) < len(p.handlesMap) {
t.Errorf("#%d effective objects < acquired handle, eo=%d ah=%d", i, p.effective, len(p.handlesMap))
}
}
if want := int(cnt.created - cnt.released); c.Size() != want {
t.Errorf("Invalid cache size, want=%d got=%d", want, c.Size())
}
// Delete and purge.
for i := range instances {
p := &instances[i]
p.wantDel = p.effective
wg.Add(1)
go func(p *instance) {
defer wg.Done()
ns := c.GetNamespace(p.ns)
if p.delete {
for key := uint64(0); key < keymax; key++ {
_, wantExist := p.handlesMap[key]
gotExist := ns.Delete(key, func(exist, pending bool) {
atomic.AddInt32(&p.delfinCalledAll, 1)
if exist {
atomic.AddInt32(&p.delfinCalledEff, 1)
}
})
if !gotExist && wantExist {
t.Errorf("delete on NS#%d KEY#%d not found", p.ns, key)
}
}
var delfinCalled int
for key := uint64(0); key < keymax; key++ {
func(key uint64) {
gotExist := ns.Delete(key, func(exist, pending bool) {
if exist && !pending {
t.Errorf("delete fin on NS#%d KEY#%d exist and not pending for deletion", p.ns, key)
}
delfinCalled++
})
if gotExist {
t.Errorf("delete on NS#%d KEY#%d found", p.ns, key)
}
}(key)
}
if delfinCalled != keymax {
t.Errorf("(2) #%d not all delete fin called, diff=%d", p.ns, keymax-delfinCalled)
}
}
if p.purge {
ns.Purge(func(ns, key uint64) {
atomic.AddInt32(&p.purgefinCalled, 1)
})
}
}(p)
}
wg.Wait()
if want := int(cnt.created - cnt.released); c.Size() != want {
t.Errorf("Invalid cache size, want=%d got=%d", want, c.Size())
}
// Release.
for i := range instances {
p := &instances[i]
if !p.zap {
wg.Add(1)
go func(p *instance) {
defer wg.Done()
ns := c.GetNamespace(p.ns)
for i := len(p.handles) - 1; i >= 0; i-- {
instanceRelease(p, ns, i)
}
}(p)
}
}
wg.Wait()
if want := int(cnt.created - cnt.released); c.Size() != want {
t.Errorf("Invalid cache size, want=%d got=%d", want, c.Size())
}
// Zap.
for i := range instances {
p := &instances[i]
if p.zap {
wg.Add(1)
go func(p *instance) {
defer wg.Done()
ns := c.GetNamespace(p.ns)
ns.Zap()
p.handles = nil
p.handlesMap = nil
}(p)
}
}
wg.Wait()
if want := int(cnt.created - cnt.released); c.Size() != want {
t.Errorf("Invalid cache size, want=%d got=%d", want, c.Size())
}
if notrel, used := int(cnt.created-cnt.released), c.Used(); notrel != used {
t.Errorf("Invalid used value, want=%d got=%d", notrel, used)
}
c.Purge(nil)
for i := range instances {
p := &instances[i]
if p.delete {
if p.delfinCalledAll != keymax {
t.Errorf("#%d not all delete fin called, purge=%v zap=%v diff=%d", p.ns, p.purge, p.zap, keymax-p.delfinCalledAll)
}
if p.delfinCalledEff != p.wantDel {
t.Errorf("#%d not all effective delete fin called, diff=%d", p.ns, p.wantDel-p.delfinCalledEff)
}
if p.purge && p.purgefinCalled > 0 {
t.Errorf("#%d some purge fin called, delete=%v zap=%v n=%d", p.ns, p.delete, p.zap, p.purgefinCalled)
}
} else {
if p.purge {
if p.purgefinCalled != p.wantDel {
t.Errorf("#%d not all purge fin called, delete=%v zap=%v diff=%d", p.ns, p.delete, p.zap, p.wantDel-p.purgefinCalled)
}
}
}
}
if cnt.created != cnt.released {
t.Errorf("Some cache object weren't released, created=%d released=%d", cnt.created, cnt.released)
}
}
func BenchmarkLRUCache_SetRelease(b *testing.B) {
capacity := b.N / 100
if capacity <= 0 {

View File

@@ -1,246 +0,0 @@
// Copyright (c) 2013, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package cache
import (
"sync"
"sync/atomic"
)
type emptyCache struct {
sync.Mutex
table map[uint64]*emptyNS
}
// NewEmptyCache creates a new initialized empty cache.
func NewEmptyCache() Cache {
return &emptyCache{
table: make(map[uint64]*emptyNS),
}
}
func (c *emptyCache) GetNamespace(id uint64) Namespace {
c.Lock()
defer c.Unlock()
if ns, ok := c.table[id]; ok {
return ns
}
ns := &emptyNS{
cache: c,
id: id,
table: make(map[uint64]*emptyNode),
}
c.table[id] = ns
return ns
}
func (c *emptyCache) Purge(fin PurgeFin) {
c.Lock()
for _, ns := range c.table {
ns.purgeNB(fin)
}
c.Unlock()
}
func (c *emptyCache) Zap(closed bool) {
c.Lock()
for _, ns := range c.table {
ns.zapNB(closed)
}
c.table = make(map[uint64]*emptyNS)
c.Unlock()
}
func (*emptyCache) SetCapacity(capacity int) {}
type emptyNS struct {
cache *emptyCache
id uint64
table map[uint64]*emptyNode
state nsState
}
func (ns *emptyNS) Get(key uint64, setf SetFunc) (o Object, ok bool) {
ns.cache.Lock()
switch ns.state {
case nsZapped:
ns.cache.Unlock()
if setf == nil {
return
}
var value interface{}
var fin func()
ok, value, _, fin = setf()
if ok {
o = &fakeObject{
value: value,
fin: fin,
}
}
return
case nsClosed:
ns.cache.Unlock()
return
}
n, ok := ns.table[key]
if ok {
n.ref++
} else {
if setf == nil {
ns.cache.Unlock()
return
}
var value interface{}
var fin func()
ok, value, _, fin = setf()
if !ok {
ns.cache.Unlock()
return
}
n = &emptyNode{
ns: ns,
key: key,
value: value,
setfin: fin,
ref: 1,
}
ns.table[key] = n
}
ns.cache.Unlock()
o = &emptyObject{node: n}
return
}
func (ns *emptyNS) Delete(key uint64, fin DelFin) bool {
ns.cache.Lock()
if ns.state != nsEffective {
ns.cache.Unlock()
if fin != nil {
fin(false)
}
return false
}
n, ok := ns.table[key]
if !ok {
ns.cache.Unlock()
if fin != nil {
fin(false)
}
return false
}
n.delfin = fin
ns.cache.Unlock()
return true
}
func (ns *emptyNS) purgeNB(fin PurgeFin) {
if ns.state != nsEffective {
return
}
for _, n := range ns.table {
n.purgefin = fin
}
}
func (ns *emptyNS) Purge(fin PurgeFin) {
ns.cache.Lock()
ns.purgeNB(fin)
ns.cache.Unlock()
}
func (ns *emptyNS) zapNB(closed bool) {
if ns.state != nsEffective {
return
}
for _, n := range ns.table {
n.execFin()
}
if closed {
ns.state = nsClosed
} else {
ns.state = nsZapped
}
ns.table = nil
}
func (ns *emptyNS) Zap(closed bool) {
ns.cache.Lock()
ns.zapNB(closed)
delete(ns.cache.table, ns.id)
ns.cache.Unlock()
}
type emptyNode struct {
ns *emptyNS
key uint64
value interface{}
ref int
setfin SetFin
delfin DelFin
purgefin PurgeFin
}
func (n *emptyNode) execFin() {
if n.setfin != nil {
n.setfin()
n.setfin = nil
}
if n.purgefin != nil {
n.purgefin(n.ns.id, n.key, n.delfin)
n.delfin = nil
n.purgefin = nil
} else if n.delfin != nil {
n.delfin(true)
n.delfin = nil
}
}
func (n *emptyNode) evict() {
n.ns.cache.Lock()
n.ref--
if n.ref == 0 {
if n.ns.state == nsEffective {
// Remove elem.
delete(n.ns.table, n.key)
// Execute finalizer.
n.execFin()
}
} else if n.ref < 0 {
panic("leveldb/cache: emptyNode: negative node reference")
}
n.ns.cache.Unlock()
}
type emptyObject struct {
node *emptyNode
once uint32
}
func (o *emptyObject) Value() interface{} {
if atomic.LoadUint32(&o.once) == 0 {
return o.node.value
}
return nil
}
func (o *emptyObject) Release() {
if !atomic.CompareAndSwapUint32(&o.once, 0, 1) {
return
}
o.node.evict()
o.node = nil
}

View File

@@ -9,16 +9,17 @@ package cache
import (
"sync"
"sync/atomic"
"github.com/syndtr/goleveldb/leveldb/util"
)
// lruCache represent a LRU cache state.
type lruCache struct {
sync.Mutex
recent lruNode
table map[uint64]*lruNs
capacity int
size int
mu sync.Mutex
recent lruNode
table map[uint64]*lruNs
capacity int
used, size int
}
// NewLRUCache creates a new initialized LRU cache with the given capacity.
@@ -32,57 +33,75 @@ func NewLRUCache(capacity int) Cache {
return c
}
func (c *lruCache) Capacity() int {
c.mu.Lock()
defer c.mu.Unlock()
return c.capacity
}
func (c *lruCache) Used() int {
c.mu.Lock()
defer c.mu.Unlock()
return c.used
}
func (c *lruCache) Size() int {
c.mu.Lock()
defer c.mu.Unlock()
return c.size
}
// SetCapacity set cache capacity.
func (c *lruCache) SetCapacity(capacity int) {
c.Lock()
c.mu.Lock()
c.capacity = capacity
c.evict()
c.Unlock()
c.mu.Unlock()
}
// GetNamespace return namespace object for given id.
func (c *lruCache) GetNamespace(id uint64) Namespace {
c.Lock()
defer c.Unlock()
c.mu.Lock()
defer c.mu.Unlock()
if p, ok := c.table[id]; ok {
return p
if ns, ok := c.table[id]; ok {
return ns
}
p := &lruNs{
ns := &lruNs{
lru: c,
id: id,
table: make(map[uint64]*lruNode),
}
c.table[id] = p
return p
c.table[id] = ns
return ns
}
// Purge purge entire cache.
func (c *lruCache) Purge(fin PurgeFin) {
c.Lock()
c.mu.Lock()
for _, ns := range c.table {
ns.purgeNB(fin)
}
c.Unlock()
c.mu.Unlock()
}
func (c *lruCache) Zap(closed bool) {
c.Lock()
func (c *lruCache) Zap() {
c.mu.Lock()
for _, ns := range c.table {
ns.zapNB(closed)
ns.zapNB()
}
c.table = make(map[uint64]*lruNs)
c.Unlock()
c.mu.Unlock()
}
func (c *lruCache) evict() {
top := &c.recent
for n := c.recent.rPrev; c.size > c.capacity && n != top; {
for n := c.recent.rPrev; c.used > c.capacity && n != top; {
n.state = nodeEvicted
n.rRemove()
n.evictNB()
c.size -= n.charge
n.derefNB()
c.used -= n.charge
n = c.recent.rPrev
}
}
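To make the used-versus-capacity accounting concrete, a hedged sketch (capacity and keys invented): once used exceeds capacity, evict walks the recent list from its least recently used end.
func exampleEviction() {
	c := NewLRUCache(2)
	ns := c.GetNamespace(0)
	for key := uint64(1); key <= 3; key++ {
		k := key
		h := ns.Get(k, func() (charge int, value interface{}) {
			return 1, k
		})
		h.Release()
	}
	// Inserting the third entry pushed used above capacity, so the least
	// recently used entry (key 1) was evicted; with no handles outstanding
	// it is released immediately and Used and Size are both back to 2.
	_, _ = c.Used(), c.Size()
}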
@@ -94,170 +113,157 @@ type lruNs struct {
state nsState
}
func (ns *lruNs) Get(key uint64, setf SetFunc) (o Object, ok bool) {
lru := ns.lru
lru.Lock()
func (ns *lruNs) Get(key uint64, setf SetFunc) Handle {
ns.lru.mu.Lock()
switch ns.state {
case nsZapped:
lru.Unlock()
if setf == nil {
return
}
var value interface{}
var fin func()
ok, value, _, fin = setf()
if ok {
o = &fakeObject{
value: value,
fin: fin,
}
}
return
case nsClosed:
lru.Unlock()
return
if ns.state != nsEffective {
ns.lru.mu.Unlock()
return nil
}
n, ok := ns.table[key]
node, ok := ns.table[key]
if ok {
switch n.state {
switch node.state {
case nodeEvicted:
// Insert to recent list.
n.state = nodeEffective
n.ref++
lru.size += n.charge
lru.evict()
node.state = nodeEffective
node.ref++
ns.lru.used += node.charge
ns.lru.evict()
fallthrough
case nodeEffective:
// Bump to front
n.rRemove()
n.rInsert(&lru.recent)
// Bump to front.
node.rRemove()
node.rInsert(&ns.lru.recent)
}
n.ref++
node.ref++
} else {
if setf == nil {
lru.Unlock()
return
ns.lru.mu.Unlock()
return nil
}
var value interface{}
var charge int
var fin func()
ok, value, charge, fin = setf()
if !ok {
lru.Unlock()
return
charge, value := setf()
if value == nil {
ns.lru.mu.Unlock()
return nil
}
n = &lruNode{
node = &lruNode{
ns: ns,
key: key,
value: value,
charge: charge,
setfin: fin,
ref: 2,
ref: 1,
}
ns.table[key] = n
n.rInsert(&lru.recent)
ns.table[key] = node
lru.size += charge
lru.evict()
if charge > 0 {
node.ref++
node.rInsert(&ns.lru.recent)
ns.lru.used += charge
ns.lru.size += charge
ns.lru.evict()
}
}
lru.Unlock()
o = &lruObject{node: n}
return
ns.lru.mu.Unlock()
return &lruHandle{node: node}
}
func (ns *lruNs) Delete(key uint64, fin DelFin) bool {
lru := ns.lru
lru.Lock()
ns.lru.mu.Lock()
if ns.state != nsEffective {
lru.Unlock()
if fin != nil {
fin(false)
fin(false, false)
}
ns.lru.mu.Unlock()
return false
}
n, ok := ns.table[key]
if !ok {
lru.Unlock()
node, exist := ns.table[key]
if !exist {
if fin != nil {
fin(false)
fin(false, false)
}
ns.lru.mu.Unlock()
return false
}
n.delfin = fin
switch n.state {
case nodeRemoved:
lru.Unlock()
switch node.state {
case nodeDeleted:
if fin != nil {
fin(true, true)
}
ns.lru.mu.Unlock()
return false
case nodeEffective:
lru.size -= n.charge
n.rRemove()
n.evictNB()
ns.lru.used -= node.charge
node.state = nodeDeleted
node.delfin = fin
node.rRemove()
node.derefNB()
default:
node.state = nodeDeleted
node.delfin = fin
}
n.state = nodeRemoved
lru.Unlock()
ns.lru.mu.Unlock()
return true
}
func (ns *lruNs) purgeNB(fin PurgeFin) {
lru := ns.lru
if ns.state != nsEffective {
return
}
for _, n := range ns.table {
n.purgefin = fin
if n.state == nodeEffective {
lru.size -= n.charge
n.rRemove()
n.evictNB()
for _, node := range ns.table {
switch node.state {
case nodeDeleted:
case nodeEffective:
ns.lru.used -= node.charge
node.state = nodeDeleted
node.purgefin = fin
node.rRemove()
node.derefNB()
default:
node.state = nodeDeleted
node.purgefin = fin
}
n.state = nodeRemoved
}
}
func (ns *lruNs) Purge(fin PurgeFin) {
ns.lru.Lock()
ns.lru.mu.Lock()
ns.purgeNB(fin)
ns.lru.Unlock()
ns.lru.mu.Unlock()
}
func (ns *lruNs) zapNB(closed bool) {
lru := ns.lru
func (ns *lruNs) zapNB() {
if ns.state != nsEffective {
return
}
if closed {
ns.state = nsClosed
} else {
ns.state = nsZapped
}
for _, n := range ns.table {
if n.state == nodeEffective {
lru.size -= n.charge
n.rRemove()
ns.state = nsZapped
for _, node := range ns.table {
if node.state == nodeEffective {
ns.lru.used -= node.charge
node.rRemove()
}
n.state = nodeRemoved
n.execFin()
ns.lru.size -= node.charge
node.state = nodeDeleted
node.fin()
}
ns.table = nil
}
func (ns *lruNs) Zap(closed bool) {
ns.lru.Lock()
ns.zapNB(closed)
func (ns *lruNs) Zap() {
ns.lru.mu.Lock()
ns.zapNB()
delete(ns.lru.table, ns.id)
ns.lru.Unlock()
ns.lru.mu.Unlock()
}
type lruNode struct {
@@ -270,7 +276,6 @@ type lruNode struct {
charge int
ref int
state nodeState
setfin SetFin
delfin DelFin
purgefin PurgeFin
}
@@ -284,7 +289,6 @@ func (n *lruNode) rInsert(at *lruNode) {
}
func (n *lruNode) rRemove() bool {
// only remove if not already removed
if n.rPrev == nil {
return false
}
@@ -297,58 +301,56 @@ func (n *lruNode) rRemove() bool {
return true
}
func (n *lruNode) execFin() {
if n.setfin != nil {
n.setfin()
n.setfin = nil
func (n *lruNode) fin() {
if r, ok := n.value.(util.Releaser); ok {
r.Release()
}
if n.purgefin != nil {
n.purgefin(n.ns.id, n.key, n.delfin)
n.purgefin(n.ns.id, n.key)
n.delfin = nil
n.purgefin = nil
} else if n.delfin != nil {
n.delfin(true)
n.delfin(true, false)
n.delfin = nil
}
}
func (n *lruNode) evictNB() {
func (n *lruNode) derefNB() {
n.ref--
if n.ref == 0 {
if n.ns.state == nsEffective {
// remove elem
// Remove element.
delete(n.ns.table, n.key)
// execute finalizer
n.execFin()
n.ns.lru.size -= n.charge
n.fin()
}
} else if n.ref < 0 {
panic("leveldb/cache: lruCache: negative node reference")
}
}
func (n *lruNode) evict() {
n.ns.lru.Lock()
n.evictNB()
n.ns.lru.Unlock()
func (n *lruNode) deref() {
n.ns.lru.mu.Lock()
n.derefNB()
n.ns.lru.mu.Unlock()
}
type lruObject struct {
type lruHandle struct {
node *lruNode
once uint32
}
func (o *lruObject) Value() interface{} {
if atomic.LoadUint32(&o.once) == 0 {
return o.node.value
func (h *lruHandle) Value() interface{} {
if atomic.LoadUint32(&h.once) == 0 {
return h.node.value
}
return nil
}
func (o *lruObject) Release() {
if !atomic.CompareAndSwapUint32(&o.once, 0, 1) {
func (h *lruHandle) Release() {
if !atomic.CompareAndSwapUint32(&h.once, 0, 1) {
return
}
o.node.evict()
o.node = nil
h.node.deref()
h.node = nil
}

View File

@@ -30,23 +30,24 @@ type DB struct {
// Need 64-bit alignment.
seq uint64
// Session.
s *session
// MemDB
// MemDB.
memMu sync.RWMutex
mem *memdb.DB
frozenMem *memdb.DB
memPool *util.Pool
mem, frozenMem *memDB
journal *journal.Writer
journalWriter storage.Writer
journalFile storage.File
frozenJournalFile storage.File
frozenSeq uint64
// Snapshot
// Snapshot.
snapsMu sync.Mutex
snapsRoot snapshotElement
// Write
// Write.
writeC chan *Batch
writeMergedC chan bool
writeLockC chan struct{}
@@ -54,7 +55,7 @@ type DB struct {
journalC chan *Batch
journalAckC chan error
// Compaction
// Compaction.
tcompCmdC chan cCmd
tcompPauseC chan chan<- struct{}
tcompTriggerC chan struct{}
@@ -64,7 +65,7 @@ type DB struct {
compErrSetC chan error
compStats [kNumLevels]cStats
// Close
// Close.
closeW sync.WaitGroup
closeC chan struct{}
closed uint32
@@ -78,6 +79,8 @@ func openDB(s *session) (*DB, error) {
s: s,
// Initial sequence
seq: s.stSeq,
// MemDB
memPool: util.NewPool(1),
// Write
writeC: make(chan *Batch),
writeMergedC: make(chan bool),
@@ -135,9 +138,10 @@ func openDB(s *session) (*DB, error) {
// detected in the DB. Corrupted DB can be recovered with Recover
// function.
//
// The returned DB instance is goroutine-safe.
// The DB must be closed after use, by calling Close method.
func Open(p storage.Storage, o *opt.Options) (db *DB, err error) {
s, err := newSession(p, o)
func Open(stor storage.Storage, o *opt.Options) (db *DB, err error) {
s, err := newSession(stor, o)
if err != nil {
return
}
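A minimal caller-side sketch of the documented lifecycle, assuming the usual import path github.com/syndtr/goleveldb/leveldb and an invented database path:
func exampleOpen() error {
	db, err := leveldb.OpenFile("/tmp/example.db", nil) // nil selects default options
	if err != nil {
		return err
	}
	defer db.Close() // the DB must be closed after use
	// db is goroutine-safe and may be shared freely from here on.
	return nil
}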
@@ -177,6 +181,7 @@ func Open(p storage.Storage, o *opt.Options) (db *DB, err error) {
// detected in the DB. Corrupted DB can be recovered with Recover
// function.
//
// The returned DB instance is goroutine-safe.
// The DB must be closed after use, by calling Close method.
func OpenFile(path string, o *opt.Options) (db *DB, err error) {
stor, err := storage.OpenFile(path)
@@ -197,9 +202,10 @@ func OpenFile(path string, o *opt.Options) (db *DB, err error) {
// The DB must already exist or it will return an error.
// Also, Recover will ignore ErrorIfMissing and ErrorIfExist options.
//
// The returned DB instance is goroutine-safe.
// The DB must be closed after use, by calling Close method.
func Recover(p storage.Storage, o *opt.Options) (db *DB, err error) {
s, err := newSession(p, o)
func Recover(stor storage.Storage, o *opt.Options) (db *DB, err error) {
s, err := newSession(stor, o)
if err != nil {
return
}
@@ -225,6 +231,7 @@ func Recover(p storage.Storage, o *opt.Options) (db *DB, err error) {
// RecoverFile uses standard file-system backed storage implementation as described
// in the leveldb/storage package.
//
// The returned DB instance is goroutine-safe.
// The DB must be closed after use, by calling Close method.
func RecoverFile(path string, o *opt.Options) (db *DB, err error) {
stor, err := storage.OpenFile(path)
@@ -241,16 +248,18 @@ func RecoverFile(path string, o *opt.Options) (db *DB, err error) {
}
func recoverTable(s *session, o *opt.Options) error {
ff0, err := s.getFiles(storage.TypeTable)
// Get all table files and sort them by file number.
tableFiles_, err := s.getFiles(storage.TypeTable)
if err != nil {
return err
}
ff1 := files(ff0)
ff1.sort()
tableFiles := files(tableFiles_)
tableFiles.sort()
var mSeq uint64
var good, corrupted int
rec := new(sessionRecord)
bpool := util.NewBufferPool(o.GetBlockSize() + 5)
buildTable := func(iter iterator.Iterator) (tmp storage.File, size int64, err error) {
tmp = s.newTemp()
writer, err := tmp.Create()
@@ -264,8 +273,9 @@ func recoverTable(s *session, o *opt.Options) error {
tmp = nil
}
}()
// Copy entries.
tw := table.NewWriter(writer, o)
// Copy records.
for iter.Next() {
key := iter.Key()
if validIkey(key) {
@@ -297,20 +307,23 @@ func recoverTable(s *session, o *opt.Options) error {
return err
}
defer reader.Close()
// Get file size.
size, err := reader.Seek(0, 2)
if err != nil {
return err
}
var tSeq uint64
var tgood, tcorrupted, blockerr int
var min, max []byte
tr := table.NewReader(reader, size, nil, o)
var imin, imax []byte
tr := table.NewReader(reader, size, nil, bpool, o)
iter := tr.NewIterator(nil, nil)
iter.(iterator.ErrorCallbackSetter).SetErrorCallback(func(err error) {
s.logf("table@recovery found error @%d %q", file.Num(), err)
blockerr++
})
// Scan the table.
for iter.Next() {
key := iter.Key()
@@ -323,16 +336,17 @@ func recoverTable(s *session, o *opt.Options) error {
if seq > tSeq {
tSeq = seq
}
if min == nil {
min = append([]byte{}, key...)
if imin == nil {
imin = append([]byte{}, key...)
}
max = append(max[:0], key...)
imax = append(imax[:0], key...)
}
if err := iter.Error(); err != nil {
iter.Release()
return err
}
iter.Release()
if tgood > 0 {
if tcorrupted > 0 || blockerr > 0 {
// Rebuild the table.
@@ -353,7 +367,7 @@ func recoverTable(s *session, o *opt.Options) error {
mSeq = tSeq
}
// Add table to level 0.
rec.addTable(0, file.Num(), uint64(size), min, max)
rec.addTable(0, file.Num(), uint64(size), imin, imax)
s.logf("table@recovery recovered @%d N·%d C·%d B·%d S·%d Q·%d", file.Num(), tgood, tcorrupted, blockerr, size, tSeq)
} else {
s.logf("table@recovery unrecoverable @%d C·%d B·%d S·%d", file.Num(), tcorrupted, blockerr, size)
@@ -364,41 +378,56 @@ func recoverTable(s *session, o *opt.Options) error {
return nil
}
// Recover all tables.
if len(ff1) > 0 {
s.logf("table@recovery F·%d", len(ff1))
s.markFileNum(ff1[len(ff1)-1].Num())
for _, file := range ff1 {
if len(tableFiles) > 0 {
s.logf("table@recovery F·%d", len(tableFiles))
// Mark file number as used.
s.markFileNum(tableFiles[len(tableFiles)-1].Num())
for _, file := range tableFiles {
if err := recoverTable(file); err != nil {
return err
}
}
s.logf("table@recovery recovered F·%d N·%d C·%d Q·%d", len(ff1), good, corrupted, mSeq)
s.logf("table@recovery recovered F·%d N·%d C·%d Q·%d", len(tableFiles), good, corrupted, mSeq)
}
// Set sequence number.
rec.setSeq(mSeq + 1)
// Create new manifest.
if err := s.create(); err != nil {
return err
}
// Commit.
return s.commit(rec)
}
func (d *DB) recoverJournal() error {
s := d.s
ff0, err := s.getFiles(storage.TypeJournal)
func (db *DB) recoverJournal() error {
// Get all journal files and sort them by file number.
journalFiles_, err := db.s.getFiles(storage.TypeJournal)
if err != nil {
return err
}
ff1 := files(ff0)
ff1.sort()
ff2 := make([]storage.File, 0, len(ff1))
for _, file := range ff1 {
if file.Num() >= s.stJournalNum || file.Num() == s.stPrevJournalNum {
s.markFileNum(file.Num())
ff2 = append(ff2, file)
journalFiles := files(journalFiles_)
journalFiles.sort()
// Discard older journal.
prev := -1
for i, file := range journalFiles {
if file.Num() >= db.s.stJournalNum {
if prev >= 0 {
i--
journalFiles[i] = journalFiles[prev]
}
journalFiles = journalFiles[i:]
break
} else if file.Num() == db.s.stPrevJournalNum {
prev = i
}
}
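A self-contained replay of the journal-pruning rule above on plain integers (file numbers invented), showing which files survive:
func examplePruneJournals() []int {
	nums := []int{3, 5, 8, 9} // sorted journal file numbers
	stJournalNum, stPrevJournalNum := 8, 5
	prev := -1
	for i, n := range nums {
		if n >= stJournalNum {
			if prev >= 0 {
				i--
				nums[i] = nums[prev] // keep the previous journal ahead of the current ones
			}
			nums = nums[i:]
			break
		} else if n == stPrevJournalNum {
			prev = i
		}
	}
	return nums // [5 8 9]: journal 3 is older than both markers and is dropped
}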
@@ -406,38 +435,43 @@ func (d *DB) recoverJournal() error {
var of storage.File
var mem *memdb.DB
batch := new(Batch)
cm := newCMem(s)
cm := newCMem(db.s)
buf := new(util.Buffer)
// Options.
strict := s.o.GetStrict(opt.StrictJournal)
checksum := s.o.GetStrict(opt.StrictJournalChecksum)
writeBuffer := s.o.GetWriteBuffer()
strict := db.s.o.GetStrict(opt.StrictJournal)
checksum := db.s.o.GetStrict(opt.StrictJournalChecksum)
writeBuffer := db.s.o.GetWriteBuffer()
recoverJournal := func(file storage.File) error {
s.logf("journal@recovery recovering @%d", file.Num())
db.logf("journal@recovery recovering @%d", file.Num())
reader, err := file.Open()
if err != nil {
return err
}
defer reader.Close()
// Create/reset journal reader instance.
if jr == nil {
jr = journal.NewReader(reader, dropper{s, file}, strict, checksum)
jr = journal.NewReader(reader, dropper{db.s, file}, strict, checksum)
} else {
jr.Reset(reader, dropper{s, file}, strict, checksum)
jr.Reset(reader, dropper{db.s, file}, strict, checksum)
}
// Flush memdb and remove obsolete journal file.
if of != nil {
if mem.Len() > 0 {
if err := cm.flush(mem, 0); err != nil {
return err
}
}
if err := cm.commit(file.Num(), d.seq); err != nil {
if err := cm.commit(file.Num(), db.seq); err != nil {
return err
}
cm.reset()
of.Remove()
of = nil
}
// Reset memdb.
// Replay journal to memdb.
mem.Reset()
for {
r, err := jr.Next()
@@ -447,12 +481,14 @@ func (d *DB) recoverJournal() error {
}
return err
}
buf.Reset()
if _, err := buf.ReadFrom(r); err != nil {
if strict {
if err == io.ErrUnexpectedEOF {
continue
} else {
return err
}
continue
}
if err := batch.decode(buf.Bytes()); err != nil {
return err
@@ -460,28 +496,37 @@ func (d *DB) recoverJournal() error {
if err := batch.memReplay(mem); err != nil {
return err
}
d.seq = batch.seq + uint64(batch.len())
// Save sequence number.
db.seq = batch.seq + uint64(batch.len())
// Flush it if large enough.
if mem.Size() >= writeBuffer {
// Large enough, flush it.
if err := cm.flush(mem, 0); err != nil {
return err
}
// Reset memdb.
mem.Reset()
}
}
of = file
return nil
}
// Recover all journals.
if len(ff2) > 0 {
s.logf("journal@recovery F·%d", len(ff2))
mem = memdb.New(s.icmp, writeBuffer)
for _, file := range ff2 {
if len(journalFiles) > 0 {
db.logf("journal@recovery F·%d", len(journalFiles))
// Mark file number as used.
db.s.markFileNum(journalFiles[len(journalFiles)-1].Num())
mem = memdb.New(db.s.icmp, writeBuffer)
for _, file := range journalFiles {
if err := recoverJournal(file); err != nil {
return err
}
}
// Flush the last journal.
if mem.Len() > 0 {
if err := cm.flush(mem, 0); err != nil {
@@ -489,51 +534,60 @@ func (d *DB) recoverJournal() error {
}
}
}
// Create a new journal.
if _, err := d.newMem(0); err != nil {
if _, err := db.newMem(0); err != nil {
return err
}
// Commit.
if err := cm.commit(d.journalFile.Num(), d.seq); err != nil {
if err := cm.commit(db.journalFile.Num(), db.seq); err != nil {
// Close journal.
if db.journal != nil {
db.journal.Close()
db.journalWriter.Close()
}
return err
}
// Remove the last journal.
// Remove the last obsolete journal file.
if of != nil {
of.Remove()
}
return nil
}
func (d *DB) get(key []byte, seq uint64, ro *opt.ReadOptions) (value []byte, err error) {
s := d.s
func (db *DB) get(key []byte, seq uint64, ro *opt.ReadOptions) (value []byte, err error) {
ikey := newIKey(key, seq, tSeek)
em, fm := d.getMems()
for _, m := range [...]*memdb.DB{em, fm} {
em, fm := db.getMems()
for _, m := range [...]*memDB{em, fm} {
if m == nil {
continue
}
mk, mv, me := m.Find(ikey)
defer m.decref()
mk, mv, me := m.db.Find(ikey)
if me == nil {
ukey, _, t, ok := parseIkey(mk)
if ok && s.icmp.uCompare(ukey, key) == 0 {
if ok && db.s.icmp.uCompare(ukey, key) == 0 {
if t == tDel {
return nil, ErrNotFound
}
return mv, nil
return append([]byte{}, mv...), nil
}
} else if me != ErrNotFound {
return nil, me
}
}
v := s.version()
v := db.s.version()
value, cSched, err := v.get(ikey, ro)
v.release()
if cSched {
// Trigger table compaction.
d.compTrigger(d.tcompTriggerC)
db.compTrigger(db.tcompTriggerC)
}
return
}
@@ -541,15 +595,16 @@ func (d *DB) get(key []byte, seq uint64, ro *opt.ReadOptions) (value []byte, err
// Get gets the value for the given key. It returns ErrNotFound if the
// DB does not contain the key.
//
// The caller should not modify the contents of the returned slice, but
// it is safe to modify the contents of the argument after Get returns.
func (d *DB) Get(key []byte, ro *opt.ReadOptions) (value []byte, err error) {
err = d.ok()
// The returned slice is its own copy; it is safe to modify the contents
// of the returned slice.
// It is safe to modify the contents of the argument after Get returns.
func (db *DB) Get(key []byte, ro *opt.ReadOptions) (value []byte, err error) {
err = db.ok()
if err != nil {
return
}
return d.get(key, d.getSeq(), ro)
return db.get(key, db.getSeq(), ro)
}
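A caller-side sketch of the new copy semantics (key invented): since the slice is now the caller's own copy, mutating it is safe.
func exampleGet(db *leveldb.DB) {
	value, err := db.Get([]byte("some-key"), nil)
	switch err {
	case nil:
		value = append(value, '!') // safe: value is the caller's own copy
		_ = value
	case leveldb.ErrNotFound:
		// key is absent; not a failure
	default:
		// real I/O or corruption error
	}
}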
// NewIterator returns an iterator for the latest snapshot of the
@@ -568,14 +623,14 @@ func (d *DB) Get(key []byte, ro *opt.ReadOptions) (value []byte, err error) {
// The iterator must be released after use, by calling Release method.
//
// Also read Iterator documentation of the leveldb/iterator package.
func (d *DB) NewIterator(slice *util.Range, ro *opt.ReadOptions) iterator.Iterator {
if err := d.ok(); err != nil {
func (db *DB) NewIterator(slice *util.Range, ro *opt.ReadOptions) iterator.Iterator {
if err := db.ok(); err != nil {
return iterator.NewEmptyIterator(err)
}
p := d.newSnapshot()
defer p.Release()
return p.NewIterator(slice, ro)
snap := db.newSnapshot()
defer snap.Release()
return snap.NewIterator(slice, ro)
}
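And the matching iterator sketch, per the Release contract mentioned above (range and read options left as defaults):
func exampleIterate(db *leveldb.DB) error {
	iter := db.NewIterator(nil, nil) // nil Range and ReadOptions: full scan, defaults
	defer iter.Release()
	for iter.Next() {
		_, _ = iter.Key(), iter.Value()
	}
	return iter.Error() // check after the loop completes
}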
// GetSnapshot returns a latest snapshot of the underlying DB. A snapshot
@@ -583,12 +638,12 @@ func (d *DB) NewIterator(slice *util.Range, ro *opt.ReadOptions) iterator.Iterat
// content of snapshot are guaranteed to be consistent.
//
// The snapshot must be released after use, by calling Release method.
func (d *DB) GetSnapshot() (*Snapshot, error) {
if err := d.ok(); err != nil {
func (db *DB) GetSnapshot() (*Snapshot, error) {
if err := db.ok(); err != nil {
return nil, err
}
return d.newSnapshot(), nil
return db.newSnapshot(), nil
}
// GetProperty returns value of the given property name.
@@ -600,8 +655,10 @@ func (d *DB) GetSnapshot() (*Snapshot, error) {
// Returns statistics of the underlying DB.
// leveldb.sstables
// Returns sstables list for each level.
func (d *DB) GetProperty(name string) (value string, err error) {
err = d.ok()
// leveldb.blockpool
// Returns block pool stats.
func (db *DB) GetProperty(name string) (value string, err error) {
err = db.ok()
if err != nil {
return
}
@@ -610,11 +667,9 @@ func (d *DB) GetProperty(name string) (value string, err error) {
if !strings.HasPrefix(name, prefix) {
return "", errors.New("leveldb: GetProperty: unknown property: " + name)
}
p := name[len(prefix):]
s := d.s
v := s.version()
v := db.s.version()
defer v.release()
switch {
@@ -631,22 +686,32 @@ func (d *DB) GetProperty(name string) (value string, err error) {
value = "Compactions\n" +
" Level | Tables | Size(MB) | Time(sec) | Read(MB) | Write(MB)\n" +
"-------+------------+---------------+---------------+---------------+---------------\n"
for level, tt := range v.tables {
duration, read, write := d.compStats[level].get()
if len(tt) == 0 && duration == 0 {
for level, tables := range v.tables {
duration, read, write := db.compStats[level].get()
if len(tables) == 0 && duration == 0 {
continue
}
value += fmt.Sprintf(" %3d | %10d | %13.5f | %13.5f | %13.5f | %13.5f\n",
level, len(tt), float64(tt.size())/1048576.0, duration.Seconds(),
level, len(tables), float64(tables.size())/1048576.0, duration.Seconds(),
float64(read)/1048576.0, float64(write)/1048576.0)
}
case p == "sstables":
for level, tt := range v.tables {
for level, tables := range v.tables {
value += fmt.Sprintf("--- level %d ---\n", level)
for _, t := range tt {
value += fmt.Sprintf("%d:%d[%q .. %q]\n", t.file.Num(), t.size, t.min, t.max)
for _, t := range tables {
value += fmt.Sprintf("%d:%d[%q .. %q]\n", t.file.Num(), t.size, t.imin, t.imax)
}
}
case p == "blockpool":
value = fmt.Sprintf("%v", db.s.tops.bpool)
case p == "cachedblock":
if bc := db.s.o.GetBlockCache(); bc != nil {
value = fmt.Sprintf("%d", bc.Size())
} else {
value = "<nil>"
}
case p == "openedtables":
value = fmt.Sprintf("%d", db.s.tops.cache.Size())
default:
err = errors.New("leveldb: GetProperty: unknown property: " + name)
}
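A hedged sketch of querying the properties handled by the switch above, including the ones added in this change (the "leveldb." prefix is required):
func exampleProperties(db *leveldb.DB) {
	if stats, err := db.GetProperty("leveldb.stats"); err == nil {
		_ = stats // compaction table, one row per level
	}
	if pool, err := db.GetProperty("leveldb.blockpool"); err == nil {
		_ = pool // block pool stats, new in this change
	}
}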
@@ -660,23 +725,23 @@ func (d *DB) GetProperty(name string) (value string, err error) {
// data compresses by a factor of ten, the returned sizes will be one-tenth
// the size of the corresponding user data size.
// The results may not include the sizes of recently written data.
func (d *DB) SizeOf(ranges []util.Range) (Sizes, error) {
if err := d.ok(); err != nil {
func (db *DB) SizeOf(ranges []util.Range) (Sizes, error) {
if err := db.ok(); err != nil {
return nil, err
}
v := d.s.version()
v := db.s.version()
defer v.release()
sizes := make(Sizes, 0, len(ranges))
for _, r := range ranges {
min := newIKey(r.Start, kMaxSeq, tSeek)
max := newIKey(r.Limit, kMaxSeq, tSeek)
start, err := v.offsetOf(min)
imin := newIKey(r.Start, kMaxSeq, tSeek)
imax := newIKey(r.Limit, kMaxSeq, tSeek)
start, err := v.offsetOf(imin)
if err != nil {
return nil, err
}
limit, err := v.offsetOf(max)
limit, err := v.offsetOf(imax)
if err != nil {
return nil, err
}
@@ -690,61 +755,63 @@ func (d *DB) SizeOf(ranges []util.Range) (Sizes, error) {
return sizes, nil
}
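A caller-side sketch of SizeOf (range bounds invented); note the results are approximate on-disk sizes, not user-data sizes:
func exampleSizeOf(db *leveldb.DB) {
	sizes, err := db.SizeOf([]util.Range{
		{Start: []byte("a"), Limit: []byte("m")},
	})
	if err == nil {
		_ = sizes[0] // approximate compressed size of the "a".."m" range
	}
}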
// Close closes the DB. This will also release any outstanding snapshot.
// Close closes the DB. This will also release any outstanding snapshot and
// abort any in-flight compaction.
//
// It is not safe to close a DB until all outstanding iterators are released.
// It is valid to call Close multiple times. Other methods should not be
// called after the DB has been closed.
func (d *DB) Close() error {
if !d.setClosed() {
func (db *DB) Close() error {
if !db.setClosed() {
return ErrClosed
}
s := d.s
start := time.Now()
s.log("db@close closing")
db.log("db@close closing")
// Clear the finalizer.
runtime.SetFinalizer(d, nil)
runtime.SetFinalizer(db, nil)
// Get compaction error.
var err error
select {
case err = <-d.compErrC:
case err = <-db.compErrC:
default:
}
close(d.closeC)
close(db.closeC)
// Wait for the close WaitGroup.
d.closeW.Wait()
db.closeW.Wait()
// Close journal.
if d.journal != nil {
d.journal.Close()
d.journalWriter.Close()
db.writeLockC <- struct{}{}
if db.journal != nil {
db.journal.Close()
db.journalWriter.Close()
}
// Close session.
s.close()
s.logf("db@close done T·%v", time.Since(start))
s.release()
db.s.close()
db.logf("db@close done T·%v", time.Since(start))
db.s.release()
if d.closer != nil {
if err1 := d.closer.Close(); err == nil {
if db.closer != nil {
if err1 := db.closer.Close(); err == nil {
err = err1
}
}
d.s = nil
d.mem = nil
d.frozenMem = nil
d.journal = nil
d.journalWriter = nil
d.journalFile = nil
d.frozenJournalFile = nil
d.snapsRoot = snapshotElement{}
d.closer = nil
// NIL'ing pointers.
db.s = nil
db.mem = nil
db.frozenMem = nil
db.journal = nil
db.journalWriter = nil
db.journalFile = nil
db.frozenJournalFile = nil
db.snapsRoot = snapshotElement{}
db.closer = nil
return err
}

View File

@@ -74,7 +74,7 @@ func newCMem(s *session) *cMem {
func (c *cMem) flush(mem *memdb.DB, level int) error {
s := c.s
// Write memdb to table
// Write memdb to table.
iter := mem.NewIterator(nil)
defer iter.Release()
t, n, err := s.tops.createFrom(iter)
@@ -82,12 +82,13 @@ func (c *cMem) flush(mem *memdb.DB, level int) error {
return err
}
// Pick level.
if level < 0 {
level = s.version_NB().pickLevel(t.min.ukey(), t.max.ukey())
level = s.version_NB().pickLevel(t.imin.ukey(), t.imax.ukey())
}
c.rec.addTableFile(level, t)
s.logf("mem@flush created L%d@%d N·%d S·%s %q:%q", level, t.file.Num(), n, shortenb(int(t.size)), t.min, t.max)
s.logf("mem@flush created L%d@%d N·%d S·%s %q:%q", level, t.file.Num(), n, shortenb(int(t.size)), t.imin, t.imax)
c.level = level
return nil
@@ -100,33 +101,34 @@ func (c *cMem) reset() {
func (c *cMem) commit(journal, seq uint64) error {
c.rec.setJournalNum(journal)
c.rec.setSeq(seq)
// Commit changes
// Commit changes.
return c.s.commit(c.rec)
}
func (d *DB) compactionError() {
func (db *DB) compactionError() {
var err error
noerr:
for {
select {
case _, _ = <-d.closeC:
return
case err = <-d.compErrSetC:
case err = <-db.compErrSetC:
if err != nil {
goto haserr
}
case _, _ = <-db.closeC:
return
}
}
haserr:
for {
select {
case _, _ = <-d.closeC:
return
case err = <-d.compErrSetC:
case db.compErrC <- err:
case err = <-db.compErrSetC:
if err == nil {
goto noerr
}
case d.compErrC <- err:
case _, _ = <-db.closeC:
return
}
}
}
@@ -137,18 +139,18 @@ func (cnt *compactionTransactCounter) incr() {
*cnt++
}
func (d *DB) compactionTransact(name string, exec func(cnt *compactionTransactCounter) error, rollback func() error) {
s := d.s
func (db *DB) compactionTransact(name string, exec func(cnt *compactionTransactCounter) error, rollback func() error) {
defer func() {
if x := recover(); x != nil {
if x == errCompactionTransactExiting && rollback != nil {
if err := rollback(); err != nil {
s.logf("%s rollback error %q", name, err)
db.logf("%s rollback error %q", name, err)
}
}
panic(x)
}
}()
const (
backoffMin = 1 * time.Second
backoffMax = 8 * time.Second
@@ -159,11 +161,11 @@ func (d *DB) compactionTransact(name string, exec func(cnt *compactionTransactCo
lastCnt := compactionTransactCounter(0)
for n := 0; ; n++ {
// Check whether the DB is closed.
if d.isClosed() {
s.logf("%s exiting", name)
d.compactionExitTransact()
if db.isClosed() {
db.logf("%s exiting", name)
db.compactionExitTransact()
} else if n > 0 {
s.logf("%s retrying N·%d", name, n)
db.logf("%s retrying N·%d", name, n)
}
// Execute.
@@ -172,15 +174,15 @@ func (d *DB) compactionTransact(name string, exec func(cnt *compactionTransactCo
// Set compaction error status.
select {
case d.compErrSetC <- err:
case _, _ = <-d.closeC:
s.logf("%s exiting", name)
d.compactionExitTransact()
case db.compErrSetC <- err:
case _, _ = <-db.closeC:
db.logf("%s exiting", name)
db.compactionExitTransact()
}
if err == nil {
return
}
s.logf("%s error I·%d %q", name, cnt, err)
db.logf("%s error I·%d %q", name, cnt, err)
// Reset backoff duration if counter is advancing.
if cnt > lastCnt {
@@ -198,53 +200,53 @@ func (d *DB) compactionTransact(name string, exec func(cnt *compactionTransactCo
}
select {
case <-backoffT.C:
case _, _ = <-d.closeC:
s.logf("%s exiting", name)
d.compactionExitTransact()
case _, _ = <-db.closeC:
db.logf("%s exiting", name)
db.compactionExitTransact()
}
}
}
func (d *DB) compactionExitTransact() {
func (db *DB) compactionExitTransact() {
panic(errCompactionTransactExiting)
}
func (d *DB) memCompaction() {
mem := d.getFrozenMem()
func (db *DB) memCompaction() {
mem := db.getFrozenMem()
if mem == nil {
return
}
defer mem.decref()
s := d.s
c := newCMem(s)
c := newCMem(db.s)
stats := new(cStatsStaging)
s.logf("mem@flush N·%d S·%s", mem.Len(), shortenb(mem.Size()))
db.logf("mem@flush N·%d S·%s", mem.db.Len(), shortenb(mem.db.Size()))
// Don't compact empty memdb.
if mem.Len() == 0 {
s.logf("mem@flush skipping")
if mem.db.Len() == 0 {
db.logf("mem@flush skipping")
// drop frozen mem
d.dropFrozenMem()
db.dropFrozenMem()
return
}
// Pause table compaction.
ch := make(chan struct{})
select {
case d.tcompPauseC <- (chan<- struct{})(ch):
case _, _ = <-d.closeC:
case db.tcompPauseC <- (chan<- struct{})(ch):
case _, _ = <-db.closeC:
return
}
d.compactionTransact("mem@flush", func(cnt *compactionTransactCounter) (err error) {
db.compactionTransact("mem@flush", func(cnt *compactionTransactCounter) (err error) {
stats.startTimer()
defer stats.stopTimer()
return c.flush(mem, -1)
return c.flush(mem.db, -1)
}, func() error {
for _, r := range c.rec.addedTables {
s.logf("mem@flush rollback @%d", r.num)
f := s.getTableFile(r.num)
db.logf("mem@flush rollback @%d", r.num)
f := db.s.getTableFile(r.num)
if err := f.Remove(); err != nil {
return err
}
@@ -252,61 +254,59 @@ func (d *DB) memCompaction() {
return nil
})
d.compactionTransact("mem@commit", func(cnt *compactionTransactCounter) (err error) {
db.compactionTransact("mem@commit", func(cnt *compactionTransactCounter) (err error) {
stats.startTimer()
defer stats.stopTimer()
return c.commit(d.journalFile.Num(), d.frozenSeq)
return c.commit(db.journalFile.Num(), db.frozenSeq)
}, nil)
s.logf("mem@flush commited F·%d T·%v", len(c.rec.addedTables), stats.duration)
db.logf("mem@flush commited F·%d T·%v", len(c.rec.addedTables), stats.duration)
for _, r := range c.rec.addedTables {
stats.write += r.size
}
d.compStats[c.level].add(stats)
db.compStats[c.level].add(stats)
// Drop frozen mem.
d.dropFrozenMem()
db.dropFrozenMem()
// Resume table compaction.
select {
case <-ch:
case _, _ = <-d.closeC:
case _, _ = <-db.closeC:
return
}
// Trigger table compaction.
d.compTrigger(d.mcompTriggerC)
db.compTrigger(db.mcompTriggerC)
}
func (d *DB) tableCompaction(c *compaction, noTrivial bool) {
s := d.s
func (db *DB) tableCompaction(c *compaction, noTrivial bool) {
rec := new(sessionRecord)
rec.addCompactionPointer(c.level, c.max)
rec.addCompactionPointer(c.level, c.imax)
if !noTrivial && c.trivial() {
t := c.tables[0][0]
s.logf("table@move L%d@%d -> L%d", c.level, t.file.Num(), c.level+1)
db.logf("table@move L%d@%d -> L%d", c.level, t.file.Num(), c.level+1)
rec.deleteTable(c.level, t.file.Num())
rec.addTableFile(c.level+1, t)
d.compactionTransact("table@move", func(cnt *compactionTransactCounter) (err error) {
return s.commit(rec)
db.compactionTransact("table@move", func(cnt *compactionTransactCounter) (err error) {
return db.s.commit(rec)
}, nil)
return
}
var stats [2]cStatsStaging
for i, tt := range c.tables {
for _, t := range tt {
for i, tables := range c.tables {
for _, t := range tables {
stats[i].read += t.size
// Insert deleted tables into record
rec.deleteTable(c.level+i, t.file.Num())
}
}
sourceSize := int(stats[0].read + stats[1].read)
minSeq := d.minSeq()
s.logf("table@compaction L%d·%d -> L%d·%d S·%s Q·%d", c.level, len(c.tables[0]), c.level+1, len(c.tables[1]), shortenb(sourceSize), minSeq)
minSeq := db.minSeq()
db.logf("table@compaction L%d·%d -> L%d·%d S·%s Q·%d", c.level, len(c.tables[0]), c.level+1, len(c.tables[1]), shortenb(sourceSize), minSeq)
var snapUkey []byte
var snapHasUkey bool
@@ -314,7 +314,7 @@ func (d *DB) tableCompaction(c *compaction, noTrivial bool) {
var snapIter int
var snapDropCnt int
var dropCnt int
d.compactionTransact("table@build", func(cnt *compactionTransactCounter) (err error) {
db.compactionTransact("table@build", func(cnt *compactionTransactCounter) (err error) {
ukey := append([]byte{}, snapUkey...)
hasUkey := snapHasUkey
lseq := snapSeq
@@ -329,7 +329,7 @@ func (d *DB) tableCompaction(c *compaction, noTrivial bool) {
}
rec.addTableFile(c.level+1, t)
stats[1].write += t.size
s.logf("table@build created L%d@%d N·%d S·%s %q:%q", c.level+1, t.file.Num(), tw.tw.EntriesLen(), shortenb(int(t.size)), t.min, t.max)
db.logf("table@build created L%d@%d N·%d S·%s %q:%q", c.level+1, t.file.Num(), tw.tw.EntriesLen(), shortenb(int(t.size)), t.imin, t.imax)
return nil
}
@@ -353,9 +353,9 @@ func (d *DB) tableCompaction(c *compaction, noTrivial bool) {
continue
}
key := iKey(iter.Key())
ikey := iKey(iter.Key())
if c.shouldStopBefore(key) && tw != nil {
if c.shouldStopBefore(ikey) && tw != nil {
err = finish()
if err != nil {
return
@@ -375,15 +375,15 @@ func (d *DB) tableCompaction(c *compaction, noTrivial bool) {
snapSched = false
}
if seq, t, ok := key.parseNum(); !ok {
if seq, vt, ok := ikey.parseNum(); !ok {
// Don't drop error keys
ukey = ukey[:0]
hasUkey = false
lseq = kMaxSeq
} else {
if !hasUkey || s.icmp.uCompare(key.ukey(), ukey) != 0 {
if !hasUkey || db.s.icmp.uCompare(ikey.ukey(), ukey) != 0 {
// First occurrence of this user key
ukey = append(ukey[:0], key.ukey()...)
ukey = append(ukey[:0], ikey.ukey()...)
hasUkey = true
lseq = kMaxSeq
}
@@ -392,7 +392,7 @@ func (d *DB) tableCompaction(c *compaction, noTrivial bool) {
if lseq <= minSeq {
// Dropped because a newer entry for the same user key exists
drop = true // (A)
} else if t == tDel && seq <= minSeq && c.isBaseLevelForKey(ukey) {
} else if vt == tDel && seq <= minSeq && c.baseLevelForKey(ukey) {
// For this user key:
// (1) there is no data in higher levels
// (2) data in lower levels will have larger seq numbers
@@ -414,22 +414,22 @@ func (d *DB) tableCompaction(c *compaction, noTrivial bool) {
if tw == nil {
// Check for pause event.
select {
case ch := <-d.tcompPauseC:
d.pauseCompaction(ch)
case _, _ = <-d.closeC:
d.compactionExitTransact()
case ch := <-db.tcompPauseC:
db.pauseCompaction(ch)
case _, _ = <-db.closeC:
db.compactionExitTransact()
default:
}
// Create new table.
tw, err = s.tops.create()
tw, err = db.s.tops.create()
if err != nil {
return
}
}
// Write key/value into table
err = tw.add(key, iter.Value())
err = tw.append(ikey, iter.Value())
if err != nil {
return
}
@@ -461,8 +461,8 @@ func (d *DB) tableCompaction(c *compaction, noTrivial bool) {
return
}, func() error {
for _, r := range rec.addedTables {
s.logf("table@build rollback @%d", r.num)
f := s.getTableFile(r.num)
db.logf("table@build rollback @%d", r.num)
f := db.s.getTableFile(r.num)
if err := f.Remove(); err != nil {
return err
}
@@ -471,60 +471,61 @@ func (d *DB) tableCompaction(c *compaction, noTrivial bool) {
})
// Commit changes
d.compactionTransact("table@commit", func(cnt *compactionTransactCounter) (err error) {
db.compactionTransact("table@commit", func(cnt *compactionTransactCounter) (err error) {
stats[1].startTimer()
defer stats[1].stopTimer()
return s.commit(rec)
return db.s.commit(rec)
}, nil)
resultSize := int(int(stats[1].write))
s.logf("table@compaction commited F%s S%s D·%d T·%v", sint(len(rec.addedTables)-len(rec.deletedTables)), sshortenb(resultSize-sourceSize), dropCnt, stats[1].duration)
resultSize := int(stats[1].write)
db.logf("table@compaction commited F%s S%s D·%d T·%v", sint(len(rec.addedTables)-len(rec.deletedTables)), sshortenb(resultSize-sourceSize), dropCnt, stats[1].duration)
// Save compaction stats
for i := range stats {
d.compStats[c.level+1].add(&stats[i])
db.compStats[c.level+1].add(&stats[i])
}
}
func (d *DB) tableRangeCompaction(level int, min, max []byte) {
s := d.s
s.logf("table@compaction range L%d %q:%q", level, min, max)
func (db *DB) tableRangeCompaction(level int, umin, umax []byte) {
db.logf("table@compaction range L%d %q:%q", level, umin, umax)
if level >= 0 {
if c := s.getCompactionRange(level, min, max); c != nil {
d.tableCompaction(c, true)
if c := db.s.getCompactionRange(level, umin, umax); c != nil {
db.tableCompaction(c, true)
}
} else {
v := s.version_NB()
v := db.s.version_NB()
m := 1
for i, t := range v.tables[1:] {
if t.isOverlaps(min, max, true, s.icmp) {
if t.overlaps(db.s.icmp, umin, umax, false) {
m = i + 1
}
}
for level := 0; level < m; level++ {
if c := s.getCompactionRange(level, min, max); c != nil {
d.tableCompaction(c, true)
if c := db.s.getCompactionRange(level, umin, umax); c != nil {
db.tableCompaction(c, true)
}
}
}
}
func (d *DB) tableAutoCompaction() {
if c := d.s.pickCompaction(); c != nil {
d.tableCompaction(c, false)
func (db *DB) tableAutoCompaction() {
if c := db.s.pickCompaction(); c != nil {
db.tableCompaction(c, false)
}
}
func (d *DB) tableNeedCompaction() bool {
return d.s.version_NB().needCompaction()
func (db *DB) tableNeedCompaction() bool {
return db.s.version_NB().needCompaction()
}
func (d *DB) pauseCompaction(ch chan<- struct{}) {
func (db *DB) pauseCompaction(ch chan<- struct{}) {
select {
case ch <- struct{}{}:
case _, _ = <-d.closeC:
d.compactionExitTransact()
case _, _ = <-db.closeC:
db.compactionExitTransact()
}
}
@@ -555,48 +556,48 @@ func (r cRange) ack(err error) {
}
}
func (d *DB) compSendIdle(compC chan<- cCmd) error {
func (db *DB) compSendIdle(compC chan<- cCmd) error {
ch := make(chan error)
defer close(ch)
// Send cmd.
select {
case compC <- cIdle{ch}:
case err := <-d.compErrC:
case err := <-db.compErrC:
return err
case _, _ = <-d.closeC:
case _, _ = <-db.closeC:
return ErrClosed
}
// Wait cmd.
return <-ch
}
func (d *DB) compSendRange(compC chan<- cCmd, level int, min, max []byte) (err error) {
func (db *DB) compSendRange(compC chan<- cCmd, level int, min, max []byte) (err error) {
ch := make(chan error)
defer close(ch)
// Send cmd.
select {
case compC <- cRange{level, min, max, ch}:
case err := <-d.compErrC:
case err := <-db.compErrC:
return err
case _, _ = <-d.closeC:
case _, _ = <-db.closeC:
return ErrClosed
}
// Wait cmd.
select {
case err = <-d.compErrC:
case err = <-db.compErrC:
case err = <-ch:
}
return err
}
func (d *DB) compTrigger(compTriggerC chan struct{}) {
func (db *DB) compTrigger(compTriggerC chan struct{}) {
select {
case compTriggerC <- struct{}{}:
default:
}
}
func (d *DB) mCompaction() {
func (db *DB) mCompaction() {
var x cCmd
defer func() {
@@ -608,24 +609,24 @@ func (d *DB) mCompaction() {
if x != nil {
x.ack(ErrClosed)
}
d.closeW.Done()
db.closeW.Done()
}()
for {
select {
case _, _ = <-d.closeC:
return
case x = <-d.mcompCmdC:
d.memCompaction()
case x = <-db.mcompCmdC:
db.memCompaction()
x.ack(nil)
x = nil
case <-d.mcompTriggerC:
d.memCompaction()
case <-db.mcompTriggerC:
db.memCompaction()
case _, _ = <-db.closeC:
return
}
}
}
func (d *DB) tCompaction() {
func (db *DB) tCompaction() {
var x cCmd
var ackQ []cCmd
@@ -642,19 +643,19 @@ func (d *DB) tCompaction() {
if x != nil {
x.ack(ErrClosed)
}
d.closeW.Done()
db.closeW.Done()
}()
for {
if d.tableNeedCompaction() {
if db.tableNeedCompaction() {
select {
case x = <-d.tcompCmdC:
case <-d.tcompTriggerC:
case _, _ = <-d.closeC:
return
case ch := <-d.tcompPauseC:
d.pauseCompaction(ch)
case x = <-db.tcompCmdC:
case <-db.tcompTriggerC:
case ch := <-db.tcompPauseC:
db.pauseCompaction(ch)
continue
case _, _ = <-db.closeC:
return
default:
}
} else {
@@ -664,12 +665,12 @@ func (d *DB) tCompaction() {
}
ackQ = ackQ[:0]
select {
case x = <-d.tcompCmdC:
case <-d.tcompTriggerC:
case ch := <-d.tcompPauseC:
d.pauseCompaction(ch)
case x = <-db.tcompCmdC:
case <-db.tcompTriggerC:
case ch := <-db.tcompPauseC:
db.pauseCompaction(ch)
continue
case _, _ = <-d.closeC:
case _, _ = <-db.closeC:
return
}
}
@@ -678,11 +679,11 @@ func (d *DB) tCompaction() {
case cIdle:
ackQ = append(ackQ, x)
case cRange:
d.tableRangeCompaction(cmd.level, cmd.min, cmd.max)
db.tableRangeCompaction(cmd.level, cmd.min, cmd.max)
x.ack(nil)
}
x = nil
}
d.tableAutoCompaction()
db.tableAutoCompaction()
}
}
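The pause handshake above (tcompPauseC feeding pauseCompaction) is a small channel protocol: a requester hands the compaction goroutine an ack channel, the goroutine sends on it to signal it is quiescent, and both sides also watch closeC so shutdown can interrupt the handshake. A minimal standalone sketch of the same idea, with all names hypothetical rather than taken from goleveldb:

package main

import (
	"fmt"
	"time"
)

// worker processes jobs until closeC is closed. A client pauses it by
// sending an ack channel on pauseC; the worker acks on that channel and
// then parks until the client closes it.
func worker(jobs <-chan int, pauseC <-chan chan struct{}, closeC <-chan struct{}) {
	for {
		select {
		case j := <-jobs:
			fmt.Println("job", j)
		case ch := <-pauseC:
			ch <- struct{}{} // ack: the worker is now quiescent
			<-ch             // park here until the client closes ch
		case <-closeC:
			return
		}
	}
}

func main() {
	jobs := make(chan int)
	pauseC := make(chan chan struct{})
	closeC := make(chan struct{})
	go worker(jobs, pauseC, closeC)

	jobs <- 1
	ch := make(chan struct{})
	pauseC <- ch // request a pause
	<-ch         // worker acked; safe to do exclusive work here
	close(ch)    // resume the worker
	jobs <- 2
	time.Sleep(10 * time.Millisecond)
	close(closeC)
}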


@@ -9,6 +9,7 @@ package leveldb
import (
"errors"
"runtime"
"sync"
"github.com/syndtr/goleveldb/leveldb/iterator"
"github.com/syndtr/goleveldb/leveldb/opt"
@@ -19,38 +20,51 @@ var (
errInvalidIkey = errors.New("leveldb: Iterator: invalid internal key")
)
func (db *DB) newRawIterator(slice *util.Range, ro *opt.ReadOptions) iterator.Iterator {
s := db.s
type memdbReleaser struct {
once sync.Once
m *memDB
}
func (mr *memdbReleaser) Release() {
mr.once.Do(func() {
mr.m.decref()
})
}
func (db *DB) newRawIterator(slice *util.Range, ro *opt.ReadOptions) iterator.Iterator {
em, fm := db.getMems()
v := s.version()
v := db.s.version()
ti := v.getIterators(slice, ro)
n := len(ti) + 2
i := make([]iterator.Iterator, 0, n)
i = append(i, em.NewIterator(slice))
emi := em.db.NewIterator(slice)
emi.SetReleaser(&memdbReleaser{m: em})
i = append(i, emi)
if fm != nil {
i = append(i, fm.NewIterator(slice))
fmi := fm.db.NewIterator(slice)
fmi.SetReleaser(&memdbReleaser{m: fm})
i = append(i, fmi)
}
i = append(i, ti...)
strict := s.o.GetStrict(opt.StrictIterator) || ro.GetStrict(opt.StrictIterator)
mi := iterator.NewMergedIterator(i, s.icmp, strict)
strict := db.s.o.GetStrict(opt.StrictIterator) || ro.GetStrict(opt.StrictIterator)
mi := iterator.NewMergedIterator(i, db.s.icmp, strict)
mi.SetReleaser(&versionReleaser{v: v})
return mi
}
func (db *DB) newIterator(seq uint64, slice *util.Range, ro *opt.ReadOptions) *dbIter {
var slice_ *util.Range
var islice *util.Range
if slice != nil {
slice_ = &util.Range{}
islice = &util.Range{}
if slice.Start != nil {
slice_.Start = newIKey(slice.Start, kMaxSeq, tSeek)
islice.Start = newIKey(slice.Start, kMaxSeq, tSeek)
}
if slice.Limit != nil {
slice_.Limit = newIKey(slice.Limit, kMaxSeq, tSeek)
islice.Limit = newIKey(slice.Limit, kMaxSeq, tSeek)
}
}
rawIter := db.newRawIterator(slice_, ro)
rawIter := db.newRawIterator(islice, ro)
iter := &dbIter{
icmp: db.s.icmp,
iter: rawIter,
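The memdbReleaser type added above ties a refcount decrement to an iterator: each iterator takes one reference on the memdb and gives it back through its own releaser, and sync.Once makes a duplicate Release call (say, an explicit call plus a deferred one) harmless. A generic sketch of the pattern, with hypothetical names:

package main

import (
	"fmt"
	"sync"
)

type refCounted interface {
	decref()
}

// onceReleaser runs decref at most once, no matter how many times
// Release is invoked.
type onceReleaser struct {
	once sync.Once
	r    refCounted
}

func (o *onceReleaser) Release() {
	o.once.Do(o.r.decref)
}

type resource struct{ refs int }

func (r *resource) decref() { r.refs--; fmt.Println("refs:", r.refs) }

func main() {
	res := &resource{refs: 1}
	rel := &onceReleaser{r: res}
	rel.Release()
	rel.Release() // no-op: the reference is not given back twice
}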


@@ -87,12 +87,12 @@ type Snapshot struct {
// Creates new snapshot object.
func (db *DB) newSnapshot() *Snapshot {
p := &Snapshot{
snap := &Snapshot{
db: db,
elem: db.acquireSnapshot(),
}
runtime.SetFinalizer(p, (*Snapshot).Release)
return p
runtime.SetFinalizer(snap, (*Snapshot).Release)
return snap
}
// Get gets the value for the given key. It returns ErrNotFound if
@@ -100,19 +100,18 @@ func (db *DB) newSnapshot() *Snapshot {
//
// The caller should not modify the contents of the returned slice, but
// it is safe to modify the contents of the argument after Get returns.
func (p *Snapshot) Get(key []byte, ro *opt.ReadOptions) (value []byte, err error) {
db := p.db
err = db.ok()
func (snap *Snapshot) Get(key []byte, ro *opt.ReadOptions) (value []byte, err error) {
err = snap.db.ok()
if err != nil {
return
}
p.mu.Lock()
defer p.mu.Unlock()
if p.released {
snap.mu.Lock()
defer snap.mu.Unlock()
if snap.released {
err = ErrSnapshotReleased
return
}
return db.get(key, p.elem.seq, ro)
return snap.db.get(key, snap.elem.seq, ro)
}
// NewIterator returns an iterator for the snapshot of the underlying DB.
@@ -132,17 +131,18 @@ func (p *Snapshot) Get(key []byte, ro *opt.ReadOptions) (value []byte, err error
// iterator would still be valid until released.
//
// Also read Iterator documentation of the leveldb/iterator package.
func (p *Snapshot) NewIterator(slice *util.Range, ro *opt.ReadOptions) iterator.Iterator {
db := p.db
if err := db.ok(); err != nil {
func (snap *Snapshot) NewIterator(slice *util.Range, ro *opt.ReadOptions) iterator.Iterator {
if err := snap.db.ok(); err != nil {
return iterator.NewEmptyIterator(err)
}
p.mu.Lock()
defer p.mu.Unlock()
if p.released {
snap.mu.Lock()
defer snap.mu.Unlock()
if snap.released {
return iterator.NewEmptyIterator(ErrSnapshotReleased)
}
return db.newIterator(p.elem.seq, slice, ro)
// Since the iterator already holds a version ref, it doesn't need to
// hold a snapshot ref.
return snap.db.newIterator(snap.elem.seq, slice, ro)
}
// Release releases the snapshot. This will not release any returned
@@ -150,16 +150,17 @@ func (p *Snapshot) NewIterator(slice *util.Range, ro *opt.ReadOptions) iterator.
// underlying DB is closed.
//
// Other methods should not be called after the snapshot has been released.
func (p *Snapshot) Release() {
p.mu.Lock()
if !p.released {
// Clear the finalizer.
runtime.SetFinalizer(p, nil)
func (snap *Snapshot) Release() {
snap.mu.Lock()
defer snap.mu.Unlock()
p.released = true
p.db.releaseSnapshot(p.elem)
p.db = nil
p.elem = nil
if !snap.released {
// Clear the finalizer.
runtime.SetFinalizer(snap, nil)
snap.released = true
snap.db.releaseSnapshot(snap.elem)
snap.db = nil
snap.elem = nil
}
p.mu.Unlock()
}
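The finalizer dance in newSnapshot and Release is a leak guard: if a caller forgets to call Release, the GC-run finalizer does it once the snapshot becomes unreachable, and an explicit Release clears the finalizer so the cleanup never runs twice. A hedged standalone sketch of that pattern (the real code additionally holds a mutex, omitted here):

package main

import (
	"fmt"
	"runtime"
)

type handle struct{ released bool }

func (h *handle) Release() {
	if !h.released {
		runtime.SetFinalizer(h, nil) // explicit release: drop the finalizer
		h.released = true
		fmt.Println("released")
	}
}

func newHandle() *handle {
	h := &handle{}
	// If the caller leaks h, the GC eventually calls Release for us.
	runtime.SetFinalizer(h, (*handle).Release)
	return h
}

func main() {
	h := newHandle()
	h.Release() // normal path; the finalizer is cleared
	runtime.GC()
}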


@@ -11,103 +11,153 @@ import (
"github.com/syndtr/goleveldb/leveldb/journal"
"github.com/syndtr/goleveldb/leveldb/memdb"
"github.com/syndtr/goleveldb/leveldb/util"
)
type memDB struct {
pool *util.Pool
db *memdb.DB
ref int32
}
func (m *memDB) incref() {
atomic.AddInt32(&m.ref, 1)
}
func (m *memDB) decref() {
if ref := atomic.AddInt32(&m.ref, -1); ref == 0 {
m.pool.Put(m)
} else if ref < 0 {
panic("negative memdb ref")
}
}
// Get latest sequence number.
func (d *DB) getSeq() uint64 {
return atomic.LoadUint64(&d.seq)
func (db *DB) getSeq() uint64 {
return atomic.LoadUint64(&db.seq)
}
// Atomically adds delta to seq.
func (d *DB) addSeq(delta uint64) {
atomic.AddUint64(&d.seq, delta)
func (db *DB) addSeq(delta uint64) {
atomic.AddUint64(&db.seq, delta)
}
// Create a new memdb and freeze the old one; needs external synchronization.
// newMem is only called synchronously by the writer.
func (d *DB) newMem(n int) (mem *memdb.DB, err error) {
s := d.s
num := s.allocFileNum()
file := s.getJournalFile(num)
func (db *DB) newMem(n int) (mem *memDB, err error) {
num := db.s.allocFileNum()
file := db.s.getJournalFile(num)
w, err := file.Create()
if err != nil {
s.reuseFileNum(num)
db.s.reuseFileNum(num)
return
}
d.memMu.Lock()
if d.journal == nil {
d.journal = journal.NewWriter(w)
} else {
d.journal.Reset(w)
d.journalWriter.Close()
d.frozenJournalFile = d.journalFile
db.memMu.Lock()
defer db.memMu.Unlock()
if db.frozenMem != nil {
panic("still has frozen mem")
}
d.journalWriter = w
d.journalFile = file
d.frozenMem = d.mem
d.mem = memdb.New(s.icmp, maxInt(d.s.o.GetWriteBuffer(), n))
mem = d.mem
// The seq only incremented by the writer.
d.frozenSeq = d.seq
d.memMu.Unlock()
if db.journal == nil {
db.journal = journal.NewWriter(w)
} else {
db.journal.Reset(w)
db.journalWriter.Close()
db.frozenJournalFile = db.journalFile
}
db.journalWriter = w
db.journalFile = file
db.frozenMem = db.mem
mem, ok := db.memPool.Get().(*memDB)
if ok && mem.db.Capacity() >= n {
mem.db.Reset()
mem.incref()
} else {
mem = &memDB{
pool: db.memPool,
db: memdb.New(db.s.icmp, maxInt(db.s.o.GetWriteBuffer(), n)),
ref: 1,
}
}
mem.incref()
db.mem = mem
// The seq is only incremented by the writer. Whoever calls newMem should
// hold the write lock, so no additional synchronization is needed here.
db.frozenSeq = db.seq
return
}
// Get all memdbs.
func (d *DB) getMems() (e *memdb.DB, f *memdb.DB) {
d.memMu.RLock()
defer d.memMu.RUnlock()
return d.mem, d.frozenMem
func (db *DB) getMems() (e, f *memDB) {
db.memMu.RLock()
defer db.memMu.RUnlock()
if db.mem == nil {
panic("nil effective mem")
}
db.mem.incref()
if db.frozenMem != nil {
db.frozenMem.incref()
}
return db.mem, db.frozenMem
}
// Get effective memdb.
func (d *DB) getEffectiveMem() *memdb.DB {
d.memMu.RLock()
defer d.memMu.RUnlock()
return d.mem
func (db *DB) getEffectiveMem() *memDB {
db.memMu.RLock()
defer db.memMu.RUnlock()
if db.mem == nil {
panic("nil effective mem")
}
db.mem.incref()
return db.mem
}
// Check whether we have a frozen memdb.
func (d *DB) hasFrozenMem() bool {
d.memMu.RLock()
defer d.memMu.RUnlock()
return d.frozenMem != nil
func (db *DB) hasFrozenMem() bool {
db.memMu.RLock()
defer db.memMu.RUnlock()
return db.frozenMem != nil
}
// Get frozen memdb.
func (d *DB) getFrozenMem() *memdb.DB {
d.memMu.RLock()
defer d.memMu.RUnlock()
return d.frozenMem
func (db *DB) getFrozenMem() *memDB {
db.memMu.RLock()
defer db.memMu.RUnlock()
if db.frozenMem != nil {
db.frozenMem.incref()
}
return db.frozenMem
}
// Drop frozen memdb; assume that frozen memdb isn't nil.
func (d *DB) dropFrozenMem() {
d.memMu.Lock()
if err := d.frozenJournalFile.Remove(); err != nil {
d.s.logf("journal@remove removing @%d %q", d.frozenJournalFile.Num(), err)
func (db *DB) dropFrozenMem() {
db.memMu.Lock()
if err := db.frozenJournalFile.Remove(); err != nil {
db.logf("journal@remove removing @%d %q", db.frozenJournalFile.Num(), err)
} else {
d.s.logf("journal@remove removed @%d", d.frozenJournalFile.Num())
db.logf("journal@remove removed @%d", db.frozenJournalFile.Num())
}
d.frozenJournalFile = nil
d.frozenMem = nil
d.memMu.Unlock()
db.frozenJournalFile = nil
db.frozenMem.decref()
db.frozenMem = nil
db.memMu.Unlock()
}
// Set closed flag; return true if not already closed.
func (d *DB) setClosed() bool {
return atomic.CompareAndSwapUint32(&d.closed, 0, 1)
func (db *DB) setClosed() bool {
return atomic.CompareAndSwapUint32(&db.closed, 0, 1)
}
// Check whether DB was closed.
func (d *DB) isClosed() bool {
return atomic.LoadUint32(&d.closed) != 0
func (db *DB) isClosed() bool {
return atomic.LoadUint32(&db.closed) != 0
}
// Check read ok status.
func (d *DB) ok() error {
if d.isClosed() {
func (db *DB) ok() error {
if db.isClosed() {
return ErrClosed
}
return nil
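The memDB wrapper at the top of this file pairs an atomic reference count with a pool: getMems, getEffectiveMem and getFrozenMem take a reference on behalf of the caller, and the final decref recycles the buffer for the next newMem. A minimal sketch of the same lifecycle using sync.Pool in place of the internal util.Pool (all names hypothetical):

package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

type buffer struct {
	pool *sync.Pool
	data []byte
	ref  int32
}

func (b *buffer) incref() { atomic.AddInt32(&b.ref, 1) }

func (b *buffer) decref() {
	if ref := atomic.AddInt32(&b.ref, -1); ref == 0 {
		b.pool.Put(b) // last reference: recycle the buffer
	} else if ref < 0 {
		panic("negative buffer ref")
	}
}

func main() {
	pool := &sync.Pool{}
	b := &buffer{pool: pool, data: make([]byte, 1<<20), ref: 1}
	b.incref() // a reader takes a reference
	b.decref() // the reader is done
	b.decref() // the owner is done; the buffer goes back to the pool
	// sync.Pool gives no hard guarantee, but in straight-line code the
	// recycled buffer comes right back.
	if r, ok := pool.Get().(*buffer); ok {
		fmt.Println("reused buffer of", len(r.data), "bytes")
	}
}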


@@ -154,9 +154,7 @@ func (h *dbHarness) maxNextLevelOverlappingBytes(want uint64) {
level := i + 1
next := v.tables[level+1]
for _, t := range tt {
var r tFiles
min, max := t.min.ukey(), t.max.ukey()
next.getOverlaps(min, max, &r, true, db.s.icmp.ucmp)
r := next.getOverlaps(nil, db.s.icmp, t.imin.ukey(), t.imax.ukey(), false)
sum := r.size()
if sum > res {
res = sum


@@ -32,40 +32,42 @@ func (p Sizes) Sum() (n uint64) {
return n
}
// Check and clean files.
func (d *DB) checkAndCleanFiles() error {
s := d.s
// Logging.
func (db *DB) log(v ...interface{}) { db.s.log(v...) }
func (db *DB) logf(format string, v ...interface{}) { db.s.logf(format, v...) }
v := s.version_NB()
tables := make(map[uint64]bool)
for _, tt := range v.tables {
for _, t := range tt {
tables[t.file.Num()] = false
// Check and clean files.
func (db *DB) checkAndCleanFiles() error {
v := db.s.version_NB()
tablesMap := make(map[uint64]bool)
for _, tables := range v.tables {
for _, t := range tables {
tablesMap[t.file.Num()] = false
}
}
ff, err := s.getFiles(storage.TypeAll)
files, err := db.s.getFiles(storage.TypeAll)
if err != nil {
return err
}
var nTables int
var rem []storage.File
for _, f := range ff {
for _, f := range files {
keep := true
switch f.Type() {
case storage.TypeManifest:
keep = f.Num() >= s.manifestFile.Num()
keep = f.Num() >= db.s.manifestFile.Num()
case storage.TypeJournal:
if d.frozenJournalFile != nil {
keep = f.Num() >= d.frozenJournalFile.Num()
if db.frozenJournalFile != nil {
keep = f.Num() >= db.frozenJournalFile.Num()
} else {
keep = f.Num() >= d.journalFile.Num()
keep = f.Num() >= db.journalFile.Num()
}
case storage.TypeTable:
_, keep = tables[f.Num()]
_, keep = tablesMap[f.Num()]
if keep {
tables[f.Num()] = true
tablesMap[f.Num()] = true
nTables++
}
}
@@ -75,18 +77,18 @@ func (d *DB) checkAndCleanFiles() error {
}
}
if nTables != len(tables) {
for num, present := range tables {
if nTables != len(tablesMap) {
for num, present := range tablesMap {
if !present {
s.logf("db@janitor table missing @%d", num)
db.logf("db@janitor table missing @%d", num)
}
}
return ErrCorrupted{Type: MissingFiles, Err: errors.New("leveldb: table files missing")}
}
s.logf("db@janitor F·%d G·%d", len(ff), len(rem))
db.logf("db@janitor F·%d G·%d", len(files), len(rem))
for _, f := range rem {
s.logf("db@janitor removing %s-%d", f.Type(), f.Num())
db.logf("db@janitor removing %s-%d", f.Type(), f.Num())
if err := f.Remove(); err != nil {
return err
}
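checkAndCleanFiles is a small mark-and-sweep: the live table numbers from the current version go into a map, every file on disk is checked against it, unknown files are queued for removal, and any expected-but-absent table is reported as corruption. The idiom reduced to plain Go, with made-up file numbers:

package main

import "fmt"

func main() {
	live := map[uint64]bool{3: false, 7: false} // tables the version expects
	onDisk := []uint64{3, 5, 7}

	var remove []uint64
	for _, num := range onDisk {
		if _, ok := live[num]; ok {
			live[num] = true // mark: expected file is present
		} else {
			remove = append(remove, num) // sweep: stray file
		}
	}
	for num, present := range live {
		if !present {
			fmt.Println("table missing:", num) // would be ErrCorrupted
		}
	}
	fmt.Println("to remove:", remove)
}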


@@ -14,64 +14,68 @@ import (
"github.com/syndtr/goleveldb/leveldb/util"
)
func (d *DB) writeJournal(b *Batch) error {
w, err := d.journal.Next()
func (db *DB) writeJournal(b *Batch) error {
w, err := db.journal.Next()
if err != nil {
return err
}
if _, err := w.Write(b.encode()); err != nil {
return err
}
if err := d.journal.Flush(); err != nil {
if err := db.journal.Flush(); err != nil {
return err
}
if b.sync {
return d.journalWriter.Sync()
return db.journalWriter.Sync()
}
return nil
}
func (d *DB) jWriter() {
defer d.closeW.Done()
func (db *DB) jWriter() {
defer db.closeW.Done()
for {
select {
case b := <-d.journalC:
case b := <-db.journalC:
if b != nil {
d.journalAckC <- d.writeJournal(b)
db.journalAckC <- db.writeJournal(b)
}
case _, _ = <-d.closeC:
case _, _ = <-db.closeC:
return
}
}
}
func (d *DB) rotateMem(n int) (mem *memdb.DB, err error) {
func (db *DB) rotateMem(n int) (mem *memDB, err error) {
// Wait for pending memdb compaction.
err = d.compSendIdle(d.mcompCmdC)
err = db.compSendIdle(db.mcompCmdC)
if err != nil {
return
}
// Create new memdb and journal.
mem, err = d.newMem(n)
mem, err = db.newMem(n)
if err != nil {
return
}
// Schedule memdb compaction.
d.compTrigger(d.mcompTriggerC)
db.compTrigger(db.mcompTriggerC)
return
}
func (d *DB) flush(n int) (mem *memdb.DB, nn int, err error) {
s := d.s
func (db *DB) flush(n int) (mem *memDB, nn int, err error) {
delayed := false
flush := func() bool {
v := s.version()
flush := func() (retry bool) {
v := db.s.version()
defer v.release()
mem = d.getEffectiveMem()
nn = mem.Free()
mem = db.getEffectiveMem()
defer func() {
if retry {
mem.decref()
mem = nil
}
}()
nn = mem.db.Free()
switch {
case v.tLen(0) >= kL0_SlowdownWritesTrigger && !delayed:
delayed = true
@@ -80,18 +84,23 @@ func (d *DB) flush(n int) (mem *memdb.DB, nn int, err error) {
return false
case v.tLen(0) >= kL0_StopWritesTrigger:
delayed = true
err = d.compSendIdle(d.tcompCmdC)
err = db.compSendIdle(db.tcompCmdC)
if err != nil {
return false
}
default:
// Allow memdb to grow if it has no entry.
if mem.Len() == 0 {
if mem.db.Len() == 0 {
nn = n
return false
} else {
mem.decref()
mem, err = db.rotateMem(n)
if err == nil {
nn = mem.db.Free()
} else {
nn = 0
}
}
mem, err = d.rotateMem(n)
nn = mem.Free()
return false
}
return true
@@ -100,7 +109,7 @@ func (d *DB) flush(n int) (mem *memdb.DB, nn int, err error) {
for flush() {
}
if delayed {
s.logf("db@write delayed T·%v", time.Since(start))
db.logf("db@write delayed T·%v", time.Since(start))
}
return
}
@@ -109,8 +118,8 @@ func (d *DB) flush(n int) (mem *memdb.DB, nn int, err error) {
// sequentially.
//
// It is safe to modify the contents of the arguments after Write returns.
func (d *DB) Write(b *Batch, wo *opt.WriteOptions) (err error) {
err = d.ok()
func (db *DB) Write(b *Batch, wo *opt.WriteOptions) (err error) {
err = db.ok()
if err != nil || b == nil || b.len() == 0 {
return
}
@@ -120,28 +129,29 @@ func (d *DB) Write(b *Batch, wo *opt.WriteOptions) (err error) {
// The write happens synchronously.
retry:
select {
case d.writeC <- b:
if <-d.writeMergedC {
return <-d.writeAckC
case db.writeC <- b:
if <-db.writeMergedC {
return <-db.writeAckC
}
goto retry
case d.writeLockC <- struct{}{}:
case _, _ = <-d.closeC:
case db.writeLockC <- struct{}{}:
case _, _ = <-db.closeC:
return ErrClosed
}
merged := 0
defer func() {
<-d.writeLockC
<-db.writeLockC
for i := 0; i < merged; i++ {
d.writeAckC <- err
db.writeAckC <- err
}
}()
mem, memFree, err := d.flush(b.size())
mem, memFree, err := db.flush(b.size())
if err != nil {
return
}
defer mem.decref()
// Calculate maximum size of the batch.
m := 1 << 20
@@ -154,13 +164,13 @@ retry:
drain:
for b.size() < m && !b.sync {
select {
case nb := <-d.writeC:
case nb := <-db.writeC:
if b.size()+nb.size() <= m {
b.append(nb)
d.writeMergedC <- true
db.writeMergedC <- true
merged++
} else {
d.writeMergedC <- false
db.writeMergedC <- false
break drain
}
default:
@@ -169,44 +179,44 @@ drain:
}
// Set the batch's first seq number relative to the last seq.
b.seq = d.seq + 1
b.seq = db.seq + 1
// Write journal concurrently if it is large enough.
if b.size() >= (128 << 10) {
// Push the write batch to the journal writer
select {
case _, _ = <-d.closeC:
case _, _ = <-db.closeC:
err = ErrClosed
return
case d.journalC <- b:
case db.journalC <- b:
// Write into memdb
b.memReplay(mem)
b.memReplay(mem.db)
}
// Wait for journal writer
select {
case _, _ = <-d.closeC:
case _, _ = <-db.closeC:
err = ErrClosed
return
case err = <-d.journalAckC:
case err = <-db.journalAckC:
if err != nil {
// Revert memdb if error detected
b.revertMemReplay(mem)
b.revertMemReplay(mem.db)
return
}
}
} else {
err = d.writeJournal(b)
err = db.writeJournal(b)
if err != nil {
return
}
b.memReplay(mem)
b.memReplay(mem.db)
}
// Set last seq number.
d.addSeq(uint64(b.len()))
db.addSeq(uint64(b.len()))
if b.size() >= memFree {
d.rotateMem(0)
db.rotateMem(0)
}
return
}
@@ -215,20 +225,20 @@ drain:
// for that key; a DB is not a multi-map.
//
// It is safe to modify the contents of the arguments after Put returns.
func (d *DB) Put(key, value []byte, wo *opt.WriteOptions) error {
func (db *DB) Put(key, value []byte, wo *opt.WriteOptions) error {
b := new(Batch)
b.Put(key, value)
return d.Write(b, wo)
return db.Write(b, wo)
}
// Delete deletes the value for the given key. It returns ErrNotFound if
// the DB does not contain the key.
//
// It is safe to modify the contents of the arguments after Delete returns.
func (d *DB) Delete(key []byte, wo *opt.WriteOptions) error {
func (db *DB) Delete(key []byte, wo *opt.WriteOptions) error {
b := new(Batch)
b.Delete(key)
return d.Write(b, wo)
return db.Write(b, wo)
}
func isMemOverlaps(icmp *iComparer, mem *memdb.DB, min, max []byte) bool {
@@ -247,33 +257,34 @@ func isMemOverlaps(icmp *iComparer, mem *memdb.DB, min, max []byte) bool {
// A nil Range.Start is treated as a key before all keys in the DB.
// And a nil Range.Limit is treated as a key after all keys in the DB.
// Therefore if both are nil then it will compact the entire DB.
func (d *DB) CompactRange(r util.Range) error {
if err := d.ok(); err != nil {
func (db *DB) CompactRange(r util.Range) error {
if err := db.ok(); err != nil {
return err
}
select {
case d.writeLockC <- struct{}{}:
case _, _ = <-d.closeC:
case db.writeLockC <- struct{}{}:
case _, _ = <-db.closeC:
return ErrClosed
}
// Check for overlaps in memdb.
mem := d.getEffectiveMem()
if isMemOverlaps(d.s.icmp, mem, r.Start, r.Limit) {
mem := db.getEffectiveMem()
defer mem.decref()
if isMemOverlaps(db.s.icmp, mem.db, r.Start, r.Limit) {
// Memdb compaction.
if _, err := d.rotateMem(0); err != nil {
<-d.writeLockC
if _, err := db.rotateMem(0); err != nil {
<-db.writeLockC
return err
}
<-d.writeLockC
if err := d.compSendIdle(d.mcompCmdC); err != nil {
<-db.writeLockC
if err := db.compSendIdle(db.mcompCmdC); err != nil {
return err
}
} else {
<-d.writeLockC
<-db.writeLockC
}
// Table compaction.
return d.compSendRange(d.tcompCmdC, -1, r.Start, r.Limit)
return db.compSendRange(db.tcompCmdC, -1, r.Start, r.Limit)
}
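The Write path above implements write merging: one goroutine wins writeLockC and becomes the leader, drains batches that other writers have queued on writeC, commits them as a single journal write, and acks every merged writer. A condensed, lossy sketch of that handshake (hypothetical types; the real code also uses writeMergedC to refuse batches that would make the merge too large):

package main

import (
	"fmt"
	"sync"
)

type batch struct{ data string }

var (
	writeC = make(chan *batch)
	ackC   = make(chan error)
	lockC  = make(chan struct{}, 1)
)

// write either hands its batch to the current leader, or becomes the
// leader itself: take the lock, merge whatever other writers have
// queued, commit once, ack each merged writer.
func write(b *batch) error {
	select {
	case writeC <- b: // a leader picked us up; wait for its ack
		return <-ackC
	case lockC <- struct{}{}: // no leader; we take the role
	}
	merged := 0
drain:
	for {
		select {
		case nb := <-writeC:
			b.data += nb.data
			merged++
		default:
			break drain
		}
	}
	fmt.Println("committing:", b.data) // one journal write for all batches
	var err error
	for i := 0; i < merged; i++ {
		ackC <- err
	}
	<-lockC
	return err
}

func main() {
	var wg sync.WaitGroup
	for _, s := range []string{"a", "b", "c"} {
		s := s
		wg.Add(1)
		go func() {
			defer wg.Done()
			write(&batch{data: s})
		}()
	}
	wg.Wait()
}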


@@ -37,6 +37,16 @@
// err = iter.Error()
// ...
//
// Iterate over subset of database content with a particular prefix:
// iter := db.NewIterator(util.BytesPrefix([]byte("foo-")), nil)
// for iter.Next() {
// // Use key/value.
// ...
// }
// iter.Release()
// err = iter.Error()
// ...
//
// Seek-then-Iterate:
//
// iter := db.NewIterator(nil, nil)
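The new doc text leans on util.BytesPrefix to turn a key prefix into a Range. Expanded into a runnable program against the public API that appears in this diff (the database path and keys are made up, and error handling is abbreviated):

package main

import (
	"fmt"

	"github.com/syndtr/goleveldb/leveldb"
	"github.com/syndtr/goleveldb/leveldb/util"
)

func main() {
	db, err := leveldb.OpenFile("/tmp/exampledb", nil)
	if err != nil {
		panic(err)
	}
	defer db.Close()

	db.Put([]byte("foo-1"), []byte("a"), nil)
	db.Put([]byte("foo-2"), []byte("b"), nil)
	db.Put([]byte("bar-1"), []byte("c"), nil) // outside the prefix

	// BytesPrefix builds the Range covering exactly the "foo-" keys.
	iter := db.NewIterator(util.BytesPrefix([]byte("foo-")), nil)
	for iter.Next() {
		fmt.Printf("%s = %s\n", iter.Key(), iter.Value())
	}
	iter.Release()
	if err := iter.Error(); err != nil {
		panic(err)
	}
}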


@@ -36,7 +36,7 @@ var _ = testutil.Defer(func() {
testutil.DoDBTesting(&t)
db.TestClose()
done <- true
}, 9.0)
}, 20.0)
})
Describe("read test", func() {


@@ -0,0 +1,58 @@
// Copyright (c) 2012, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
// +build go1.3
package leveldb
import (
"sync/atomic"
"testing"
)
func BenchmarkDBReadConcurrent(b *testing.B) {
p := openDBBench(b, false)
p.populate(b.N)
p.fill()
p.gc()
defer p.close()
b.ResetTimer()
b.SetBytes(116)
b.RunParallel(func(pb *testing.PB) {
iter := p.newIter()
defer iter.Release()
for pb.Next() && iter.Next() {
}
})
}
func BenchmarkDBReadConcurrent2(b *testing.B) {
p := openDBBench(b, false)
p.populate(b.N)
p.fill()
p.gc()
defer p.close()
b.ResetTimer()
b.SetBytes(116)
var dir uint32
b.RunParallel(func(pb *testing.PB) {
iter := p.newIter()
defer iter.Release()
if atomic.AddUint32(&dir, 1)%2 == 0 {
for pb.Next() && iter.Next() {
}
} else {
if pb.Next() && iter.Last() {
for pb.Next() && iter.Prev() {
}
}
}
})
}


@@ -103,18 +103,18 @@ type flusher interface {
Flush() error
}
// DroppedError is the error type that is passed to the Dropper.Drop method.
type DroppedError struct {
// ErrCorrupted is the error type generated by a corrupted block or chunk.
type ErrCorrupted struct {
Size int
Reason string
}
func (e DroppedError) Error() string {
return fmt.Sprintf("leveldb/journal: dropped %d bytes: %s", e.Size, e.Reason)
func (e ErrCorrupted) Error() string {
return fmt.Sprintf("leveldb/journal: block/chunk corrupted: %s (%d bytes)", e.Reason, e.Size)
}
// Dropper is the interface that wraps the simple Drop method. The Drop
// method will be called when the journal reader drops a chunk.
// method will be called when the journal reader drops a block or chunk.
type Dropper interface {
Drop(err error)
}
@@ -158,76 +158,78 @@ func NewReader(r io.Reader, dropper Dropper, strict, checksum bool) *Reader {
}
}
var errSkip = errors.New("leveldb/journal: skipped")
func (r *Reader) corrupt(n int, reason string, skip bool) error {
if r.dropper != nil {
r.dropper.Drop(ErrCorrupted{n, reason})
}
if r.strict && !skip {
r.err = ErrCorrupted{n, reason}
return r.err
}
return errSkip
}
// nextChunk sets r.buf[r.i:r.j] to hold the next chunk's payload, reading the
// next block into the buffer if necessary.
func (r *Reader) nextChunk(wantFirst, skip bool) error {
func (r *Reader) nextChunk(first bool) error {
for {
if r.j+headerSize <= r.n {
checksum := binary.LittleEndian.Uint32(r.buf[r.j+0 : r.j+4])
length := binary.LittleEndian.Uint16(r.buf[r.j+4 : r.j+6])
chunkType := r.buf[r.j+6]
var err error
if checksum == 0 && length == 0 && chunkType == 0 {
// Drop entire block.
err = DroppedError{r.n - r.j, "zero header"}
m := r.n - r.j
r.i = r.n
r.j = r.n
return r.corrupt(m, "zero header", false)
} else {
m := r.n - r.j
r.i = r.j + headerSize
r.j = r.j + headerSize + int(length)
if r.j > r.n {
// Drop entire block.
err = DroppedError{m, "chunk length overflows block"}
r.i = r.n
r.j = r.n
return r.corrupt(m, "chunk length overflows block", false)
} else if r.checksum && checksum != util.NewCRC(r.buf[r.i-1:r.j]).Value() {
// Drop entire block.
err = DroppedError{m, "checksum mismatch"}
r.i = r.n
r.j = r.n
return r.corrupt(m, "checksum mismatch", false)
}
}
if wantFirst && err == nil && chunkType != fullChunkType && chunkType != firstChunkType {
if skip {
// The chunk is intentionally skipped.
if chunkType == lastChunkType {
skip = false
}
continue
} else {
// Drop the chunk.
err = DroppedError{r.j - r.i + headerSize, "orphan chunk"}
}
if first && chunkType != fullChunkType && chunkType != firstChunkType {
m := r.j - r.i
r.i = r.j
// Report the error, but skip it.
return r.corrupt(m+headerSize, "orphan chunk", true)
}
if err == nil {
r.last = chunkType == fullChunkType || chunkType == lastChunkType
} else {
if r.dropper != nil {
r.dropper.Drop(err)
}
if r.strict {
r.err = err
}
r.last = chunkType == fullChunkType || chunkType == lastChunkType
return nil
}
// The last block.
if r.n < blockSize && r.n > 0 {
if !first {
return r.corrupt(0, "missing chunk part", false)
}
r.err = io.EOF
return r.err
}
// Read block.
n, err := io.ReadFull(r.r, r.buf[:])
if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF {
return err
}
if r.n < blockSize && r.n > 0 {
// This is the last block.
if r.j != r.n {
r.err = io.ErrUnexpectedEOF
} else {
r.err = io.EOF
}
return r.err
}
n, err := io.ReadFull(r.r, r.buf[:])
if err != nil && err != io.ErrUnexpectedEOF {
r.err = err
return r.err
}
if n == 0 {
if !first {
return r.corrupt(0, "missing chunk part", false)
}
r.err = io.EOF
return r.err
}
@@ -237,29 +239,26 @@ func (r *Reader) nextChunk(wantFirst, skip bool) error {
// Next returns a reader for the next journal. It returns io.EOF if there are no
// more journals. The reader returned becomes stale after the next Next call,
// and should no longer be used.
// and should no longer be used. If strict is false, the reader will return
// io.ErrUnexpectedEOF when it finds a corrupted journal.
func (r *Reader) Next() (io.Reader, error) {
r.seq++
if r.err != nil {
return nil, r.err
}
skip := !r.last
r.i = r.j
for {
r.i = r.j
if r.nextChunk(true, skip) != nil {
// So that 'orphan chunk' drop will be reported.
skip = false
} else {
if err := r.nextChunk(true); err == nil {
break
}
if r.err != nil {
return nil, r.err
} else if err != errSkip {
return nil, err
}
}
return &singleReader{r, r.seq, nil}, nil
}
// Reset resets the journal reader, allows reuse of the journal reader.
// Reset resets the journal reader, allowing reuse of the journal reader. Reset
// returns the last accumulated error.
func (r *Reader) Reset(reader io.Reader, dropper Dropper, strict, checksum bool) error {
r.seq++
err := r.err
@@ -296,7 +295,11 @@ func (x *singleReader) Read(p []byte) (int, error) {
if r.last {
return 0, io.EOF
}
if x.err = r.nextChunk(false, false); x.err != nil {
x.err = r.nextChunk(false)
if x.err != nil {
if x.err == errSkip {
x.err = io.ErrUnexpectedEOF
}
return 0, x.err
}
}
@@ -320,7 +323,11 @@ func (x *singleReader) ReadByte() (byte, error) {
if r.last {
return 0, io.EOF
}
if x.err = r.nextChunk(false, false); x.err != nil {
x.err = r.nextChunk(false)
if x.err != nil {
if x.err == errSkip {
x.err = io.ErrUnexpectedEOF
}
return 0, x.err
}
}
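All corruption now funnels through corrupt(), which notifies the Dropper and only turns into a hard error in strict mode. Hooking a Dropper up looks roughly like this, using only the journal API visible in this diff (the payload is invented):

package main

import (
	"bytes"
	"fmt"
	"io"
	"io/ioutil"

	"github.com/syndtr/goleveldb/leveldb/journal"
)

// logDropper just reports what the reader skipped; with strict=false the
// reader keeps going past corruption instead of failing the whole journal.
type logDropper struct{}

func (logDropper) Drop(err error) {
	if e, ok := err.(journal.ErrCorrupted); ok {
		fmt.Printf("dropped %d bytes: %s\n", e.Size, e.Reason)
	} else {
		fmt.Println("dropped:", err)
	}
}

func main() {
	var buf bytes.Buffer
	w := journal.NewWriter(&buf)
	ww, _ := w.Next()
	ww.Write([]byte("hello journal"))
	w.Close()

	r := journal.NewReader(&buf, logDropper{}, false, true)
	for {
		rr, err := r.Next()
		if err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		rec, _ := ioutil.ReadAll(rr)
		fmt.Printf("record: %q\n", rec)
	}
}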


@@ -12,6 +12,7 @@ package journal
import (
"bytes"
"encoding/binary"
"fmt"
"io"
"io/ioutil"
@@ -326,3 +327,492 @@ func TestStaleWriter(t *testing.T) {
t.Fatalf("stale write #1: unexpected error: %v", err)
}
}
func TestCorrupt_MissingLastBlock(t *testing.T) {
buf := new(bytes.Buffer)
w := NewWriter(buf)
// First record.
ww, err := w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), blockSize-1024)); err != nil {
t.Fatalf("write #0: unexpected error: %v", err)
}
// Second record.
ww, err = w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), blockSize-headerSize)); err != nil {
t.Fatalf("write #1: unexpected error: %v", err)
}
if err := w.Close(); err != nil {
t.Fatal(err)
}
// Cut the last block.
b := buf.Bytes()[:blockSize]
r := NewReader(bytes.NewReader(b), dropper{t}, false, true)
// First read.
rr, err := r.Next()
if err != nil {
t.Fatal(err)
}
n, err := io.Copy(ioutil.Discard, rr)
if err != nil {
t.Fatalf("read #0: %v", err)
}
if n != blockSize-1024 {
t.Fatalf("read #0: got %d bytes want %d", n, blockSize-1024)
}
// Second read.
rr, err = r.Next()
if err != nil {
t.Fatal(err)
}
n, err = io.Copy(ioutil.Discard, rr)
if err != io.ErrUnexpectedEOF {
t.Fatalf("read #1: unexpected error: %v", err)
}
if _, err := r.Next(); err != io.EOF {
t.Fatalf("last next: unexpected error: %v", err)
}
}
func TestCorrupt_CorruptedFirstBlock(t *testing.T) {
buf := new(bytes.Buffer)
w := NewWriter(buf)
// First record.
ww, err := w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), blockSize/2)); err != nil {
t.Fatalf("write #0: unexpected error: %v", err)
}
// Second record.
ww, err = w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), blockSize-headerSize)); err != nil {
t.Fatalf("write #1: unexpected error: %v", err)
}
// Third record.
ww, err = w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), (blockSize-headerSize)+1)); err != nil {
t.Fatalf("write #2: unexpected error: %v", err)
}
// Fourth record.
ww, err = w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), (blockSize-headerSize)+2)); err != nil {
t.Fatalf("write #3: unexpected error: %v", err)
}
if err := w.Close(); err != nil {
t.Fatal(err)
}
b := buf.Bytes()
// Corrupting block #0.
for i := 0; i < 1024; i++ {
b[i] = '1'
}
r := NewReader(bytes.NewReader(b), dropper{t}, false, true)
// First read (third record).
rr, err := r.Next()
if err != nil {
t.Fatal(err)
}
n, err := io.Copy(ioutil.Discard, rr)
if err != nil {
t.Fatalf("read #0: %v", err)
}
if want := int64(blockSize-headerSize) + 1; n != want {
t.Fatalf("read #0: got %d bytes want %d", n, want)
}
// Second read (fourth record).
rr, err = r.Next()
if err != nil {
t.Fatal(err)
}
n, err = io.Copy(ioutil.Discard, rr)
if err != nil {
t.Fatalf("read #1: %v", err)
}
if want := int64(blockSize-headerSize) + 2; n != want {
t.Fatalf("read #1: got %d bytes want %d", n, want)
}
if _, err := r.Next(); err != io.EOF {
t.Fatalf("last next: unexpected error: %v", err)
}
}
func TestCorrupt_CorruptedMiddleBlock(t *testing.T) {
buf := new(bytes.Buffer)
w := NewWriter(buf)
// First record.
ww, err := w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), blockSize/2)); err != nil {
t.Fatalf("write #0: unexpected error: %v", err)
}
// Second record.
ww, err = w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), blockSize-headerSize)); err != nil {
t.Fatalf("write #1: unexpected error: %v", err)
}
// Third record.
ww, err = w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), (blockSize-headerSize)+1)); err != nil {
t.Fatalf("write #2: unexpected error: %v", err)
}
// Fourth record.
ww, err = w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), (blockSize-headerSize)+2)); err != nil {
t.Fatalf("write #3: unexpected error: %v", err)
}
if err := w.Close(); err != nil {
t.Fatal(err)
}
b := buf.Bytes()
// Corrupting block #1.
for i := 0; i < 1024; i++ {
b[blockSize+i] = '1'
}
r := NewReader(bytes.NewReader(b), dropper{t}, false, true)
// First read (first record).
rr, err := r.Next()
if err != nil {
t.Fatal(err)
}
n, err := io.Copy(ioutil.Discard, rr)
if err != nil {
t.Fatalf("read #0: %v", err)
}
if want := int64(blockSize / 2); n != want {
t.Fatalf("read #0: got %d bytes want %d", n, want)
}
// Second read (second record).
rr, err = r.Next()
if err != nil {
t.Fatal(err)
}
n, err = io.Copy(ioutil.Discard, rr)
if err != io.ErrUnexpectedEOF {
t.Fatalf("read #1: unexpected error: %v", err)
}
// Third read (fourth record).
rr, err = r.Next()
if err != nil {
t.Fatal(err)
}
n, err = io.Copy(ioutil.Discard, rr)
if err != nil {
t.Fatalf("read #2: %v", err)
}
if want := int64(blockSize-headerSize) + 2; n != want {
t.Fatalf("read #2: got %d bytes want %d", n, want)
}
if _, err := r.Next(); err != io.EOF {
t.Fatalf("last next: unexpected error: %v", err)
}
}
func TestCorrupt_CorruptedLastBlock(t *testing.T) {
buf := new(bytes.Buffer)
w := NewWriter(buf)
// First record.
ww, err := w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), blockSize/2)); err != nil {
t.Fatalf("write #0: unexpected error: %v", err)
}
// Second record.
ww, err = w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), blockSize-headerSize)); err != nil {
t.Fatalf("write #1: unexpected error: %v", err)
}
// Third record.
ww, err = w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), (blockSize-headerSize)+1)); err != nil {
t.Fatalf("write #2: unexpected error: %v", err)
}
// Fourth record.
ww, err = w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), (blockSize-headerSize)+2)); err != nil {
t.Fatalf("write #3: unexpected error: %v", err)
}
if err := w.Close(); err != nil {
t.Fatal(err)
}
b := buf.Bytes()
// Corrupting block #3.
for i := len(b) - 1; i > len(b)-1024; i-- {
b[i] = '1'
}
r := NewReader(bytes.NewReader(b), dropper{t}, false, true)
// First read (first record).
rr, err := r.Next()
if err != nil {
t.Fatal(err)
}
n, err := io.Copy(ioutil.Discard, rr)
if err != nil {
t.Fatalf("read #0: %v", err)
}
if want := int64(blockSize / 2); n != want {
t.Fatalf("read #0: got %d bytes want %d", n, want)
}
// Second read (second record).
rr, err = r.Next()
if err != nil {
t.Fatal(err)
}
n, err = io.Copy(ioutil.Discard, rr)
if err != nil {
t.Fatalf("read #1: %v", err)
}
if want := int64(blockSize - headerSize); n != want {
t.Fatalf("read #1: got %d bytes want %d", n, want)
}
// Third read (third record).
rr, err = r.Next()
if err != nil {
t.Fatal(err)
}
n, err = io.Copy(ioutil.Discard, rr)
if err != nil {
t.Fatalf("read #2: %v", err)
}
if want := int64(blockSize-headerSize) + 1; n != want {
t.Fatalf("read #2: got %d bytes want %d", n, want)
}
// Fourth read (fourth record).
rr, err = r.Next()
if err != nil {
t.Fatal(err)
}
n, err = io.Copy(ioutil.Discard, rr)
if err != io.ErrUnexpectedEOF {
t.Fatalf("read #3: unexpected error: %v", err)
}
if _, err := r.Next(); err != io.EOF {
t.Fatalf("last next: unexpected error: %v", err)
}
}
func TestCorrupt_FirstChuckLengthOverflow(t *testing.T) {
buf := new(bytes.Buffer)
w := NewWriter(buf)
// First record.
ww, err := w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), blockSize/2)); err != nil {
t.Fatalf("write #0: unexpected error: %v", err)
}
// Second record.
ww, err = w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), blockSize-headerSize)); err != nil {
t.Fatalf("write #1: unexpected error: %v", err)
}
// Third record.
ww, err = w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), (blockSize-headerSize)+1)); err != nil {
t.Fatalf("write #2: unexpected error: %v", err)
}
if err := w.Close(); err != nil {
t.Fatal(err)
}
b := buf.Bytes()
// Corrupting record #1.
x := blockSize
binary.LittleEndian.PutUint16(b[x+4:], 0xffff)
r := NewReader(bytes.NewReader(b), dropper{t}, false, true)
// First read (first record).
rr, err := r.Next()
if err != nil {
t.Fatal(err)
}
n, err := io.Copy(ioutil.Discard, rr)
if err != nil {
t.Fatalf("read #0: %v", err)
}
if want := int64(blockSize / 2); n != want {
t.Fatalf("read #0: got %d bytes want %d", n, want)
}
// Second read (second record).
rr, err = r.Next()
if err != nil {
t.Fatal(err)
}
n, err = io.Copy(ioutil.Discard, rr)
if err != io.ErrUnexpectedEOF {
t.Fatalf("read #1: unexpected error: %v", err)
}
if _, err := r.Next(); err != io.EOF {
t.Fatalf("last next: unexpected error: %v", err)
}
}
func TestCorrupt_MiddleChuckLengthOverflow(t *testing.T) {
buf := new(bytes.Buffer)
w := NewWriter(buf)
// First record.
ww, err := w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), blockSize/2)); err != nil {
t.Fatalf("write #0: unexpected error: %v", err)
}
// Second record.
ww, err = w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), blockSize-headerSize)); err != nil {
t.Fatalf("write #1: unexpected error: %v", err)
}
// Third record.
ww, err = w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), (blockSize-headerSize)+1)); err != nil {
t.Fatalf("write #2: unexpected error: %v", err)
}
if err := w.Close(); err != nil {
t.Fatal(err)
}
b := buf.Bytes()
// Corrupting record #1.
x := blockSize/2 + headerSize
binary.LittleEndian.PutUint16(b[x+4:], 0xffff)
r := NewReader(bytes.NewReader(b), dropper{t}, false, true)
// First read (first record).
rr, err := r.Next()
if err != nil {
t.Fatal(err)
}
n, err := io.Copy(ioutil.Discard, rr)
if err != nil {
t.Fatalf("read #0: %v", err)
}
if want := int64(blockSize / 2); n != want {
t.Fatalf("read #0: got %d bytes want %d", n, want)
}
// Second read (third record).
rr, err = r.Next()
if err != nil {
t.Fatal(err)
}
n, err = io.Copy(ioutil.Discard, rr)
if err != nil {
t.Fatalf("read #1: %v", err)
}
if want := int64(blockSize-headerSize) + 1; n != want {
t.Fatalf("read #1: got %d bytes want %d", n, want)
}
if _, err := r.Next(); err != io.EOF {
t.Fatalf("last next: unexpected error: %v", err)
}
}


@@ -437,6 +437,8 @@ func (p *DB) Reset() {
// New creates a new initialized in-memory key/value DB. The capacity
// is the initial key/value buffer capacity. The capacity is advisory,
// not enforced.
//
// The returned DB instance is goroutine-safe.
func New(cmp comparer.BasicComparer, capacity int) *DB {
p := &DB{
cmp: cmp,
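Given the goroutine-safety promise added above, concurrent writers need no locking of their own. A small sketch against this package (the keys are made up; New, Put and Len are the methods shown or referenced in this diff):

package main

import (
	"fmt"
	"sync"

	"github.com/syndtr/goleveldb/leveldb/comparer"
	"github.com/syndtr/goleveldb/leveldb/memdb"
)

func main() {
	p := memdb.New(comparer.DefaultComparer, 4096)

	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		i := i
		wg.Add(1)
		go func() {
			defer wg.Done()
			key := []byte(fmt.Sprintf("key-%d", i))
			if err := p.Put(key, []byte("v")); err != nil {
				panic(err)
			}
		}()
	}
	wg.Wait()
	fmt.Println("entries:", p.Len())
}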


@@ -31,9 +31,12 @@ const (
type noCache struct{}
func (noCache) SetCapacity(capacity int) {}
func (noCache) Capacity() int { return 0 }
func (noCache) Used() int { return 0 }
func (noCache) Size() int { return 0 }
func (noCache) GetNamespace(id uint64) cache.Namespace { return nil }
func (noCache) Purge(fin cache.PurgeFin) {}
func (noCache) Zap(closed bool) {}
func (noCache) Zap() {}
var NoCache cache.Cache = noCache{}


@@ -39,11 +39,12 @@ type session struct {
manifestWriter storage.Writer
manifestFile storage.File
stCPtrs [kNumLevels]iKey // compact pointers; need external synchronization
stCptrs [kNumLevels]iKey // compact pointers; need external synchronization
stVersion *version // current version
vmu sync.Mutex
}
// Creates new initialized session instance.
func newSession(stor storage.Storage, o *opt.Options) (s *session, err error) {
if stor == nil {
return nil, os.ErrInvalid
@@ -81,6 +82,7 @@ func (s *session) close() {
s.stVersion = nil
}
// Release session lock.
func (s *session) release() {
s.storLock.Release()
}
@@ -132,8 +134,8 @@ func (s *session) recover() (err error) {
err = rec.decode(r)
if err == nil {
// save compact pointers
for _, rp := range rec.compactionPointers {
s.stCPtrs[rp.level] = iKey(rp.key)
for _, r := range rec.compactionPointers {
s.stCptrs[r.level] = iKey(r.ikey)
}
// commit record to version staging
staging.commit(rec)
@@ -195,16 +197,16 @@ func (s *session) pickCompaction() *compaction {
var t0 tFiles
if v.cScore >= 1 {
level = v.cLevel
cp := s.stCPtrs[level]
tt := v.tables[level]
for _, t := range tt {
if cp == nil || s.icmp.Compare(t.max, cp) > 0 {
cptr := s.stCptrs[level]
tables := v.tables[level]
for _, t := range tables {
if cptr == nil || s.icmp.Compare(t.imax, cptr) > 0 {
t0 = append(t0, t)
break
}
}
if len(t0) == 0 {
t0 = append(t0, tt[0])
t0 = append(t0, tables[0])
}
} else {
if p := atomic.LoadPointer(&v.cSeek); p != nil {
@@ -216,11 +218,10 @@ func (s *session) pickCompaction() *compaction {
}
}
c := &compaction{s: s, version: v, level: level}
c := &compaction{s: s, v: v, level: level}
if level == 0 {
min, max := t0.getRange(s.icmp)
t0 = nil
v.tables[0].getOverlaps(min.ukey(), max.ukey(), &t0, false, s.icmp.ucmp)
imin, imax := t0.getRange(s.icmp)
t0 = v.tables[0].getOverlaps(t0[:0], s.icmp, imin.ukey(), imax.ukey(), true)
}
c.tables[0] = t0
@@ -229,11 +230,10 @@ func (s *session) pickCompaction() *compaction {
}
// Create a compaction from the given level and range; needs external synchronization.
func (s *session) getCompactionRange(level int, min, max []byte) *compaction {
func (s *session) getCompactionRange(level int, umin, umax []byte) *compaction {
v := s.version_NB()
var t0 tFiles
v.tables[level].getOverlaps(min, max, &t0, level != 0, s.icmp.ucmp)
t0 := v.tables[level].getOverlaps(nil, s.icmp, umin, umax, level == 0)
if len(t0) == 0 {
return nil
}
@@ -255,16 +255,16 @@ func (s *session) getCompactionRange(level int, min, max []byte) *compaction {
}
}
c := &compaction{s: s, version: v, level: level}
c := &compaction{s: s, v: v, level: level}
c.tables[0] = t0
c.expand()
return c
}
// compaction represents a compaction state
// compaction represents a compaction state.
type compaction struct {
s *session
version *version
s *session
v *version
level int
tables [2]tFiles
@@ -273,42 +273,36 @@ type compaction struct {
gpidx int
seenKey bool
overlappedBytes uint64
min, max iKey
imin, imax iKey
tPtrs [kNumLevels]int
}
// Expand compacted tables; need external synchronization.
func (c *compaction) expand() {
s := c.s
v := c.version
level := c.level
vt0, vt1 := v.tables[level], v.tables[level+1]
vt0, vt1 := c.v.tables[level], c.v.tables[level+1]
t0, t1 := c.tables[0], c.tables[1]
min, max := t0.getRange(s.icmp)
vt1.getOverlaps(min.ukey(), max.ukey(), &t1, true, s.icmp.ucmp)
// Get entire range covered by compaction
amin, amax := append(t0, t1...).getRange(s.icmp)
imin, imax := t0.getRange(c.s.icmp)
t1 = vt1.getOverlaps(t1, c.s.icmp, imin.ukey(), imax.ukey(), false)
// Get entire range covered by compaction.
amin, amax := append(t0, t1...).getRange(c.s.icmp)
// See if we can grow the number of inputs in "level" without
// changing the number of "level+1" files we pick up.
if len(t1) > 0 {
var exp0 tFiles
vt0.getOverlaps(amin.ukey(), amax.ukey(), &exp0, level != 0, s.icmp.ucmp)
exp0 := vt0.getOverlaps(nil, c.s.icmp, amin.ukey(), amax.ukey(), level == 0)
if len(exp0) > len(t0) && t1.size()+exp0.size() < kExpCompactionMaxBytes {
var exp1 tFiles
xmin, xmax := exp0.getRange(s.icmp)
vt1.getOverlaps(xmin.ukey(), xmax.ukey(), &exp1, true, s.icmp.ucmp)
xmin, xmax := exp0.getRange(c.s.icmp)
exp1 := vt1.getOverlaps(nil, c.s.icmp, xmin.ukey(), xmax.ukey(), false)
if len(exp1) == len(t1) {
s.logf("table@compaction expanding L%d+L%d (F·%d S·%s)+(F·%d S·%s) -> (F·%d S·%s)+(F·%d S·%s)",
c.s.logf("table@compaction expanding L%d+L%d (F·%d S·%s)+(F·%d S·%s) -> (F·%d S·%s)+(F·%d S·%s)",
level, level+1, len(t0), shortenb(int(t0.size())), len(t1), shortenb(int(t1.size())),
len(exp0), shortenb(int(exp0.size())), len(exp1), shortenb(int(exp1.size())))
min, max = xmin, xmax
imin, imax = xmin, xmax
t0, t1 = exp0, exp1
amin, amax = append(t0, t1...).getRange(s.icmp)
amin, amax = append(t0, t1...).getRange(c.s.icmp)
}
}
}
@@ -316,11 +310,11 @@ func (c *compaction) expand() {
// Compute the set of grandparent files that overlap this compaction
// (parent == level+1; grandparent == level+2)
if level+2 < kNumLevels {
v.tables[level+2].getOverlaps(amin.ukey(), amax.ukey(), &c.gp, true, s.icmp.ucmp)
c.gp = c.v.tables[level+2].getOverlaps(c.gp, c.s.icmp, amin.ukey(), amax.ukey(), false)
}
c.tables[0], c.tables[1] = t0, t1
c.min, c.max = min, max
c.imin, c.imax = imin, imax
}
// Check whether compaction is trivial.
@@ -328,17 +322,14 @@ func (c *compaction) trivial() bool {
return len(c.tables[0]) == 1 && len(c.tables[1]) == 0 && c.gp.size() <= kMaxGrandParentOverlapBytes
}
func (c *compaction) isBaseLevelForKey(key []byte) bool {
s := c.s
v := c.version
for level, tt := range v.tables[c.level+2:] {
for c.tPtrs[level] < len(tt) {
t := tt[c.tPtrs[level]]
if s.icmp.uCompare(key, t.max.ukey()) <= 0 {
// We've advanced far enough
if s.icmp.uCompare(key, t.min.ukey()) >= 0 {
// Key falls in this file's range, so definitely not base level
func (c *compaction) baseLevelForKey(ukey []byte) bool {
for level, tables := range c.v.tables[c.level+2:] {
for c.tPtrs[level] < len(tables) {
t := tables[c.tPtrs[level]]
if c.s.icmp.uCompare(ukey, t.imax.ukey()) <= 0 {
// We've advanced far enough.
if c.s.icmp.uCompare(ukey, t.imin.ukey()) >= 0 {
// Key falls in this file's range, so definitely not base level.
return false
}
break
@@ -349,10 +340,10 @@ func (c *compaction) isBaseLevelForKey(key []byte) bool {
return true
}
func (c *compaction) shouldStopBefore(key iKey) bool {
func (c *compaction) shouldStopBefore(ikey iKey) bool {
for ; c.gpidx < len(c.gp); c.gpidx++ {
gp := c.gp[c.gpidx]
if c.s.icmp.Compare(key, gp.max) <= 0 {
if c.s.icmp.Compare(ikey, gp.imax) <= 0 {
break
}
if c.seenKey {
@@ -362,42 +353,44 @@ func (c *compaction) shouldStopBefore(key iKey) bool {
c.seenKey = true
if c.overlappedBytes > kMaxGrandParentOverlapBytes {
// Too much overlap for current output; start new output
// Too much overlap for current output; start new output.
c.overlappedBytes = 0
return true
}
return false
}
// Creates an iterator.
func (c *compaction) newIterator() iterator.Iterator {
s := c.s
level := c.level
icap := 2
// Creates iterator slice.
icap := len(c.tables)
if c.level == 0 {
// Special case for level-0
icap = len(c.tables[0]) + 1
}
its := make([]iterator.Iterator, 0, icap)
// Options.
ro := &opt.ReadOptions{
DontFillCache: true,
}
strict := s.o.GetStrict(opt.StrictIterator)
strict := c.s.o.GetStrict(opt.StrictIterator)
for i, tt := range c.tables {
if len(tt) == 0 {
for i, tables := range c.tables {
if len(tables) == 0 {
continue
}
if level+i == 0 {
for _, t := range tt {
its = append(its, s.tops.newIterator(t, nil, ro))
// Level-0 tables are not sorted and may overlap each other.
if c.level+i == 0 {
for _, t := range tables {
its = append(its, c.s.tops.newIterator(t, nil, ro))
}
} else {
it := iterator.NewIndexedIterator(tt.newIndexIterator(s.tops, s.icmp, nil, ro), strict, true)
it := iterator.NewIndexedIterator(tables.newIndexIterator(c.s.tops, c.s.icmp, nil, ro), strict, true)
its = append(its, it)
}
}
return iterator.NewMergedIterator(its, s.icmp, true)
return iterator.NewMergedIterator(its, c.s.icmp, true)
}


@@ -35,19 +35,19 @@ const (
type cpRecord struct {
level int
key iKey
ikey iKey
}
type ntRecord struct {
level int
num uint64
size uint64
min iKey
max iKey
imin iKey
imax iKey
}
func (r ntRecord) makeFile(s *session) *tFile {
return newTFile(s.getTableFile(r.num), r.size, r.min, r.max)
return newTableFile(s.getTableFile(r.num), r.size, r.imin, r.imax)
}
type dtRecord struct {
@@ -98,9 +98,9 @@ func (p *sessionRecord) setSeq(seq uint64) {
p.seq = seq
}
func (p *sessionRecord) addCompactionPointer(level int, key iKey) {
func (p *sessionRecord) addCompactionPointer(level int, ikey iKey) {
p.hasRec |= 1 << recCompactionPointer
p.compactionPointers = append(p.compactionPointers, cpRecord{level, key})
p.compactionPointers = append(p.compactionPointers, cpRecord{level, ikey})
}
func (p *sessionRecord) resetCompactionPointers() {
@@ -108,13 +108,13 @@ func (p *sessionRecord) resetCompactionPointers() {
p.compactionPointers = p.compactionPointers[:0]
}
func (p *sessionRecord) addTable(level int, num, size uint64, min, max iKey) {
func (p *sessionRecord) addTable(level int, num, size uint64, imin, imax iKey) {
p.hasRec |= 1 << recNewTable
p.addedTables = append(p.addedTables, ntRecord{level, num, size, min, max})
p.addedTables = append(p.addedTables, ntRecord{level, num, size, imin, imax})
}
func (p *sessionRecord) addTableFile(level int, t *tFile) {
p.addTable(level, t.file.Num(), t.size, t.min, t.max)
p.addTable(level, t.file.Num(), t.size, t.imin, t.imax)
}
func (p *sessionRecord) resetAddedTables() {
@@ -169,23 +169,23 @@ func (p *sessionRecord) encode(w io.Writer) error {
p.putUvarint(w, recSeq)
p.putUvarint(w, p.seq)
}
for _, cp := range p.compactionPointers {
for _, r := range p.compactionPointers {
p.putUvarint(w, recCompactionPointer)
p.putUvarint(w, uint64(cp.level))
p.putBytes(w, cp.key)
p.putUvarint(w, uint64(r.level))
p.putBytes(w, r.ikey)
}
for _, t := range p.deletedTables {
for _, r := range p.deletedTables {
p.putUvarint(w, recDeletedTable)
p.putUvarint(w, uint64(t.level))
p.putUvarint(w, t.num)
p.putUvarint(w, uint64(r.level))
p.putUvarint(w, r.num)
}
for _, t := range p.addedTables {
for _, r := range p.addedTables {
p.putUvarint(w, recNewTable)
p.putUvarint(w, uint64(t.level))
p.putUvarint(w, t.num)
p.putUvarint(w, t.size)
p.putBytes(w, t.min)
p.putBytes(w, t.max)
p.putUvarint(w, uint64(r.level))
p.putUvarint(w, r.num)
p.putUvarint(w, r.size)
p.putBytes(w, r.imin)
p.putBytes(w, r.imax)
}
return p.err
}
@@ -282,18 +282,18 @@ func (p *sessionRecord) decode(r io.Reader) error {
}
case recCompactionPointer:
level := p.readLevel(br)
key := p.readBytes(br)
ikey := p.readBytes(br)
if p.err == nil {
p.addCompactionPointer(level, iKey(key))
p.addCompactionPointer(level, iKey(ikey))
}
case recNewTable:
level := p.readLevel(br)
num := p.readUvarint(br)
size := p.readUvarint(br)
min := p.readBytes(br)
max := p.readBytes(br)
imin := p.readBytes(br)
imax := p.readBytes(br)
if p.err == nil {
p.addTable(level, num, size, min, max)
p.addTable(level, num, size, imin, imax)
}
case recDeletedTable:
level := p.readLevel(br)
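A session record is just a stream of uvarint tags and fields, with keys written as length-prefixed byte strings. A toy round-trip showing the same encoding with encoding/binary (the tag value and field layout here are illustrative, not the real manifest format):

package main

import (
	"bufio"
	"bytes"
	"encoding/binary"
	"fmt"
	"io"
)

const recNewTable = 4 // illustrative tag value

func putUvarint(w *bytes.Buffer, x uint64) {
	var tmp [binary.MaxVarintLen64]byte
	n := binary.PutUvarint(tmp[:], x)
	w.Write(tmp[:n])
}

func putBytes(w *bytes.Buffer, p []byte) {
	putUvarint(w, uint64(len(p)))
	w.Write(p)
}

func main() {
	// Encode: tag, level, num, size, imin, imax.
	var buf bytes.Buffer
	putUvarint(&buf, recNewTable)
	putUvarint(&buf, 1)    // level
	putUvarint(&buf, 42)   // table number
	putUvarint(&buf, 4096) // size
	putBytes(&buf, []byte("a")) // imin
	putBytes(&buf, []byte("z")) // imax

	// Decode in the same order.
	r := bufio.NewReader(&buf)
	readBytes := func() []byte {
		n, _ := binary.ReadUvarint(r)
		p := make([]byte, n)
		io.ReadFull(r, p)
		return p
	}
	tag, _ := binary.ReadUvarint(r)
	level, _ := binary.ReadUvarint(r)
	num, _ := binary.ReadUvarint(r)
	size, _ := binary.ReadUvarint(r)
	imin, imax := readBytes(), readBytes()
	fmt.Println(tag, level, num, size, string(imin), string(imax))
}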


@@ -14,7 +14,7 @@ import (
"github.com/syndtr/goleveldb/leveldb/storage"
)
// logging
// Logging.
type dropper struct {
s *session
@@ -22,22 +22,17 @@ type dropper struct {
}
func (d dropper) Drop(err error) {
if e, ok := err.(journal.DroppedError); ok {
if e, ok := err.(journal.ErrCorrupted); ok {
d.s.logf("journal@drop %s-%d S·%s %q", d.file.Type(), d.file.Num(), shortenb(e.Size), e.Reason)
} else {
d.s.logf("journal@drop %s-%d %q", d.file.Type(), d.file.Num(), err)
}
}
func (s *session) log(v ...interface{}) {
s.stor.Log(fmt.Sprint(v...))
}
func (s *session) log(v ...interface{}) { s.stor.Log(fmt.Sprint(v...)) }
func (s *session) logf(format string, v ...interface{}) { s.stor.Log(fmt.Sprintf(format, v...)) }
func (s *session) logf(format string, v ...interface{}) {
s.stor.Log(fmt.Sprintf(format, v...))
}
// file utils
// File utils.
func (s *session) getJournalFile(num uint64) storage.File {
return s.stor.GetFile(num, storage.TypeJournal)
@@ -56,7 +51,7 @@ func (s *session) newTemp() storage.File {
return s.stor.GetFile(num, storage.TypeTemp)
}
// session state
// Session state.
// Get current version.
func (s *session) version() *version {
@@ -126,7 +121,7 @@ func (s *session) reuseFileNum(num uint64) {
}
}
// manifest related utils
// Manifest related utils.
// Fill the given session record object with the current state; needs
// external synchronization.
@@ -142,7 +137,7 @@ func (s *session) fillRecord(r *sessionRecord, snapshot bool) {
r.setSeq(s.stSeq)
}
for level, ik := range s.stCPtrs {
for level, ik := range s.stCptrs {
if ik != nil {
r.addCompactionPointer(level, ik)
}
@@ -168,7 +163,7 @@ func (s *session) recordCommited(r *sessionRecord) {
}
for _, p := range r.compactionPointers {
s.stCPtrs[p.level] = iKey(p.key)
s.stCptrs[p.level] = iKey(p.ikey)
}
}


@@ -344,19 +344,17 @@ type fileWrap struct {
}
func (fw fileWrap) Sync() error {
if err := fw.File.Sync(); err != nil {
return err
}
if fw.f.Type() == TypeManifest {
// Also sync parent directory if file type is manifest.
// See: https://code.google.com/p/leveldb/issues/detail?id=190.
f, err := os.Open(fw.f.fs.path)
if err != nil {
return err
}
defer f.Close()
if err := f.Sync(); err != nil {
if err := syncDir(fw.f.fs.path); err != nil {
return err
}
}
return fw.File.Sync()
return nil
}
func (fw fileWrap) Close() error {


@@ -38,3 +38,15 @@ func rename(oldpath, newpath string) error {
_, fname := filepath.Split(newpath)
return os.Rename(oldpath, fname)
}
func syncDir(name string) error {
f, err := os.Open(name)
if err != nil {
return err
}
defer f.Close()
if err := f.Sync(); err != nil {
return err
}
return nil
}
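syncDir closes the durability gap behind the manifest fix above: a newly created or renamed file is only crash-safe once both the file and the directory entry pointing at it have been fsynced (a later hunk makes the directory sync a no-op on Windows, where it is not needed). The full sequence, sketched with made-up file names:

package main

import (
	"os"
	"path/filepath"
)

func syncDir(name string) error {
	f, err := os.Open(name)
	if err != nil {
		return err
	}
	defer f.Close()
	return f.Sync()
}

// writeDurable writes data to dir/name so it survives a crash:
// fsync the file itself, then fsync the directory that names it.
func writeDurable(dir, name string, data []byte) error {
	path := filepath.Join(dir, name)
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	if _, err := f.Write(data); err != nil {
		f.Close()
		return err
	}
	if err := f.Sync(); err != nil {
		f.Close()
		return err
	}
	if err := f.Close(); err != nil {
		return err
	}
	return syncDir(dir)
}

func main() {
	if err := writeDurable(os.TempDir(), "manifest-example", []byte("hello")); err != nil {
		panic(err)
	}
}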


@@ -0,0 +1,68 @@
// Copyright (c) 2012, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
// +build solaris
package storage
import (
"os"
"syscall"
)
type unixFileLock struct {
f *os.File
}
func (fl *unixFileLock) release() error {
if err := setFileLock(fl.f, false); err != nil {
return err
}
return fl.f.Close()
}
func newFileLock(path string) (fl fileLock, err error) {
f, err := os.OpenFile(path, os.O_RDWR|os.O_CREATE, 0644)
if err != nil {
return
}
err = setFileLock(f, true)
if err != nil {
f.Close()
return
}
fl = &unixFileLock{f: f}
return
}
func setFileLock(f *os.File, lock bool) error {
flock := syscall.Flock_t{
Type: syscall.F_UNLCK,
Start: 0,
Len: 0,
Whence: 1,
}
if lock {
flock.Type = syscall.F_WRLCK
}
return syscall.FcntlFlock(f.Fd(), syscall.F_SETLK, &flock)
}
func rename(oldpath, newpath string) error {
return os.Rename(oldpath, newpath)
}
func syncDir(name string) error {
f, err := os.Open(name)
if err != nil {
return err
}
defer f.Close()
if err := f.Sync(); err != nil {
return err
}
return nil
}


@@ -4,7 +4,7 @@
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
// +build darwin freebsd linux netbsd openbsd
// +build darwin dragonfly freebsd linux netbsd openbsd
package storage
@@ -49,3 +49,15 @@ func setFileLock(f *os.File, lock bool) error {
func rename(oldpath, newpath string) error {
return os.Rename(oldpath, newpath)
}
func syncDir(name string) error {
f, err := os.Open(name)
if err != nil {
return err
}
defer f.Close()
if err := f.Sync(); err != nil {
return err
}
return nil
}


@@ -65,3 +65,5 @@ func rename(oldpath, newpath string) error {
}
return moveFileEx(from, to, _MOVEFILE_REPLACE_EXISTING)
}
func syncDir(name string) error { return nil }


@@ -67,7 +67,7 @@ type Writer interface {
Syncer
}
// File is the file.
// File is the file. A file instance must be goroutine-safe.
type File interface {
// Open opens the file for read. Returns os.ErrNotExist error
// if the file does not exist.
@@ -94,7 +94,7 @@ type File interface {
Remove() error
}
// Storage is the storage.
// Storage is the storage. A storage instance must be goroutine-safe.
type Storage interface {
// Lock locks the storage. Any subsequent attempt to call Lock will fail
// until the last lock is released.


@@ -7,11 +7,11 @@
package leveldb
import (
"io"
"sort"
"sync/atomic"
"github.com/syndtr/goleveldb/leveldb/cache"
"github.com/syndtr/goleveldb/leveldb/comparer"
"github.com/syndtr/goleveldb/leveldb/iterator"
"github.com/syndtr/goleveldb/leveldb/opt"
"github.com/syndtr/goleveldb/leveldb/storage"
@@ -19,34 +19,41 @@ import (
"github.com/syndtr/goleveldb/leveldb/util"
)
// table file
// tFile holds basic information about a table.
type tFile struct {
file storage.File
seekLeft int32
size uint64
min, max iKey
file storage.File
seekLeft int32
size uint64
imin, imax iKey
}
// test if key is after t
func (t *tFile) isAfter(key []byte, ucmp comparer.BasicComparer) bool {
return key != nil && ucmp.Compare(key, t.max.ukey()) > 0
// Returns true if the given key is after the largest key of this table.
func (t *tFile) after(icmp *iComparer, ukey []byte) bool {
return ukey != nil && icmp.uCompare(ukey, t.imax.ukey()) > 0
}
// test if key is before t
func (t *tFile) isBefore(key []byte, ucmp comparer.BasicComparer) bool {
return key != nil && ucmp.Compare(key, t.min.ukey()) < 0
// Returns true if the given key is before the smallest key of this table.
func (t *tFile) before(icmp *iComparer, ukey []byte) bool {
return ukey != nil && icmp.uCompare(ukey, t.imin.ukey()) < 0
}
func (t *tFile) incrSeek() int32 {
// Returns true if the given key range overlaps this table's key range.
func (t *tFile) overlaps(icmp *iComparer, umin, umax []byte) bool {
return !t.after(icmp, umin) && !t.before(icmp, umax)
}
// Consumes one seek and returns the number of seeks left.
func (t *tFile) consumeSeek() int32 {
return atomic.AddInt32(&t.seekLeft, -1)
}
func newTFile(file storage.File, size uint64, min, max iKey) *tFile {
// Creates new tFile.
func newTableFile(file storage.File, size uint64, imin, imax iKey) *tFile {
f := &tFile{
file: file,
size: size,
min: min,
max: max,
imin: imin,
imax: imax,
}
// We arrange to automatically compact this file after
@@ -70,33 +77,40 @@ func newTFile(file storage.File, size uint64, min, max iKey) *tFile {
return f
}
// table files
// tFiles holds multiple tFile instances.
type tFiles []*tFile
func (tf tFiles) Len() int { return len(tf) }
func (tf tFiles) Swap(i, j int) { tf[i], tf[j] = tf[j], tf[i] }
// Returns true if table i's smallest key is less than table j's.
// This is used for sorting tables by key in ascending order.
func (tf tFiles) lessByKey(icmp *iComparer, i, j int) bool {
a, b := tf[i], tf[j]
n := icmp.Compare(a.min, b.min)
n := icmp.Compare(a.imin, b.imin)
if n == 0 {
return a.file.Num() < b.file.Num()
}
return n < 0
}
// Returns true if table i's file number is greater than table j's.
// This is used for sorting tables by file number in descending order.
func (tf tFiles) lessByNum(i, j int) bool {
return tf[i].file.Num() > tf[j].file.Num()
}
// Sorts tables by key in ascending order.
func (tf tFiles) sortByKey(icmp *iComparer) {
sort.Sort(&tFilesSortByKey{tFiles: tf, icmp: icmp})
}
// Sorts tables by file number in descending order.
func (tf tFiles) sortByNum() {
sort.Sort(&tFilesSortByNum{tFiles: tf})
}
// Returns the sum of all table sizes.
func (tf tFiles) size() (sum uint64) {
for _, t := range tf {
sum += t.size
@@ -104,94 +118,106 @@ func (tf tFiles) size() (sum uint64) {
return sum
}
func (tf tFiles) searchMin(key iKey, icmp *iComparer) int {
// Returns the smallest index of the tables whose smallest
// key is greater than or equal to the given key.
func (tf tFiles) searchMin(icmp *iComparer, ikey iKey) int {
return sort.Search(len(tf), func(i int) bool {
return icmp.Compare(tf[i].min, key) >= 0
return icmp.Compare(tf[i].imin, ikey) >= 0
})
}
func (tf tFiles) searchMax(key iKey, icmp *iComparer) int {
// Returns the smallest index of the tables whose largest
// key is greater than or equal to the given key.
func (tf tFiles) searchMax(icmp *iComparer, ikey iKey) int {
return sort.Search(len(tf), func(i int) bool {
return icmp.Compare(tf[i].max, key) >= 0
return icmp.Compare(tf[i].imax, ikey) >= 0
})
}
func (tf tFiles) isOverlaps(min, max []byte, disjSorted bool, icmp *iComparer) bool {
if !disjSorted {
// Need to check against all files
// Returns true if the given key range overlaps with one or more
// tables' key ranges. If unsorted is true then binary search will not be used.
func (tf tFiles) overlaps(icmp *iComparer, umin, umax []byte, unsorted bool) bool {
if unsorted {
// Check against all files.
for _, t := range tf {
if !t.isAfter(min, icmp.ucmp) && !t.isBefore(max, icmp.ucmp) {
if t.overlaps(icmp, umin, umax) {
return true
}
}
return false
}
var idx int
if len(min) > 0 {
// Find the earliest possible internal key for min
idx = tf.searchMax(newIKey(min, kMaxSeq, tSeek), icmp)
i := 0
if len(umin) > 0 {
// Find the earliest possible internal key for min.
i = tf.searchMax(icmp, newIKey(umin, kMaxSeq, tSeek))
}
if idx >= len(tf) {
// beginning of range is after all files, so no overlap
if i >= len(tf) {
// Beginning of range is after all files, so no overlap.
return false
}
return !tf[idx].isBefore(max, icmp.ucmp)
return !tf[i].before(icmp, umax)
}
func (tf tFiles) getOverlaps(min, max []byte, r *tFiles, disjSorted bool, ucmp comparer.BasicComparer) {
// Returns tables whose key range overlaps with the given key range.
// If overlapped is true then the search will be expanded to tables that
// overlap with each other.
func (tf tFiles) getOverlaps(dst tFiles, icmp *iComparer, umin, umax []byte, overlapped bool) tFiles {
x := len(dst)
for i := 0; i < len(tf); {
t := tf[i]
i++
if t.isAfter(min, ucmp) || t.isBefore(max, ucmp) {
continue
}
*r = append(*r, t)
if !disjSorted {
// Level-0 files may overlap each other. So check if the newly
// added file has expanded the range. If so, restart search.
if min != nil && ucmp.Compare(t.min.ukey(), min) < 0 {
min = t.min.ukey()
*r = nil
i = 0
} else if max != nil && ucmp.Compare(t.max.ukey(), max) > 0 {
max = t.max.ukey()
*r = nil
i = 0
if t.overlaps(icmp, umin, umax) {
if overlapped {
// For overlapped files, check if the newly added file has
// expanded the range. If so, restart search.
if umin != nil && icmp.uCompare(t.imin.ukey(), umin) < 0 {
umin = t.imin.ukey()
dst = dst[:x]
i = 0
continue
} else if umax != nil && icmp.uCompare(t.imax.ukey(), umax) > 0 {
umax = t.imax.ukey()
dst = dst[:x]
i = 0
continue
}
}
dst = append(dst, t)
}
i++
}
return
return dst
}
func (tf tFiles) getRange(icmp *iComparer) (min, max iKey) {
// Returns the tables' combined key range.
func (tf tFiles) getRange(icmp *iComparer) (imin, imax iKey) {
for i, t := range tf {
if i == 0 {
min, max = t.min, t.max
imin, imax = t.imin, t.imax
continue
}
if icmp.Compare(t.min, min) < 0 {
min = t.min
if icmp.Compare(t.imin, imin) < 0 {
imin = t.imin
}
if icmp.Compare(t.max, max) > 0 {
max = t.max
if icmp.Compare(t.imax, imax) > 0 {
imax = t.imax
}
}
return
}
// Creates iterator index from tables.
func (tf tFiles) newIndexIterator(tops *tOps, icmp *iComparer, slice *util.Range, ro *opt.ReadOptions) iterator.IteratorIndexer {
if slice != nil {
var start, limit int
if slice.Start != nil {
start = tf.searchMax(iKey(slice.Start), icmp)
start = tf.searchMax(icmp, iKey(slice.Start))
}
if slice.Limit != nil {
limit = tf.searchMin(iKey(slice.Limit), icmp)
limit = tf.searchMin(icmp, iKey(slice.Limit))
} else {
limit = tf.Len()
}
@@ -206,6 +232,7 @@ func (tf tFiles) newIndexIterator(tops *tOps, icmp *iComparer, slice *util.Range
})
}
// Tables iterator index.
type tFilesArrayIndexer struct {
tFiles
tops *tOps
@@ -215,7 +242,7 @@ type tFilesArrayIndexer struct {
}
func (a *tFilesArrayIndexer) Search(key []byte) int {
return a.searchMax(iKey(key), a.icmp)
return a.searchMax(a.icmp, iKey(key))
}
func (a *tFilesArrayIndexer) Get(i int) iterator.Iterator {
@@ -225,6 +252,7 @@ func (a *tFilesArrayIndexer) Get(i int) iterator.Iterator {
return a.tops.newIterator(a.tFiles[i], nil, a.ro)
}
// Helper type for sortByKey.
type tFilesSortByKey struct {
tFiles
icmp *iComparer
@@ -234,6 +262,7 @@ func (x *tFilesSortByKey) Less(i, j int) bool {
return x.lessByKey(x.icmp, i, j)
}
// Helper type for sortByNum.
type tFilesSortByNum struct {
tFiles
}
@@ -242,19 +271,15 @@ func (x *tFilesSortByNum) Less(i, j int) bool {
return x.lessByNum(i, j)
}
// table operations
// Table operations.
type tOps struct {
s *session
cache cache.Cache
cacheNS cache.Namespace
bpool *util.BufferPool
}
func newTableOps(s *session, cacheCap int) *tOps {
c := cache.NewLRUCache(cacheCap)
ns := c.GetNamespace(0)
return &tOps{s, c, ns}
}
// Creates an empty table and returns table writer.
func (t *tOps) create() (*tWriter, error) {
file := t.s.getTableFile(t.s.allocFileNum())
fw, err := file.Create()
@@ -269,10 +294,11 @@ func (t *tOps) create() (*tWriter, error) {
}, nil
}
// Builds table from src iterator.
func (t *tOps) createFrom(src iterator.Iterator) (f *tFile, n int, err error) {
w, err := t.create()
if err != nil {
return f, n, err
return
}
defer func() {
@@ -282,7 +308,7 @@ func (t *tOps) createFrom(src iterator.Iterator) (f *tFile, n int, err error) {
}()
for src.Next() {
err = w.add(src.Key(), src.Value())
err = w.append(src.Key(), src.Value())
if err != nil {
return
}
@@ -297,84 +323,109 @@ func (t *tOps) createFrom(src iterator.Iterator) (f *tFile, n int, err error) {
return
}
func (t *tOps) lookup(f *tFile) (c cache.Object, err error) {
type tableWrapper struct {
*table.Reader
closer io.Closer
}
func (tr tableWrapper) Release() {
tr.closer.Close()
}
// Opens table. It returns a cache handle, which should
// be released after use.
func (t *tOps) open(f *tFile) (ch cache.Handle, err error) {
num := f.file.Num()
c, ok := t.cacheNS.Get(num, func() (ok bool, value interface{}, charge int, fin cache.SetFin) {
ch = t.cacheNS.Get(num, func() (charge int, value interface{}) {
var r storage.Reader
r, err = f.file.Open()
if err != nil {
return
return 0, nil
}
o := t.s.o
var cacheNS cache.Namespace
if bc := o.GetBlockCache(); bc != nil {
cacheNS = bc.GetNamespace(num)
var bcacheNS cache.Namespace
if bc := t.s.o.GetBlockCache(); bc != nil {
bcacheNS = bc.GetNamespace(num)
}
ok = true
value = table.NewReader(r, int64(f.size), cacheNS, o)
charge = 1
fin = func() {
r.Close()
}
return
return 1, tableWrapper{table.NewReader(r, int64(f.size), bcacheNS, t.bpool, t.s.o), r}
})
if !ok && err == nil {
if ch == nil && err == nil {
err = ErrClosed
}
return
}
func (t *tOps) get(f *tFile, key []byte, ro *opt.ReadOptions) (rkey, rvalue []byte, err error) {
c, err := t.lookup(f)
// Finds key/value pair whose key is greater than or equal to the
// given key.
func (t *tOps) find(f *tFile, key []byte, ro *opt.ReadOptions) (rkey, rvalue []byte, err error) {
ch, err := t.open(f)
if err != nil {
return nil, nil, err
}
defer c.Release()
return c.Value().(*table.Reader).Find(key, ro)
defer ch.Release()
return ch.Value().(tableWrapper).Find(key, ro)
}
// Returns approximate offset of the given key.
func (t *tOps) offsetOf(f *tFile, key []byte) (offset uint64, err error) {
c, err := t.lookup(f)
ch, err := t.open(f)
if err != nil {
return
}
_offset, err := c.Value().(*table.Reader).OffsetOf(key)
_offset, err := ch.Value().(tableWrapper).OffsetOf(key)
offset = uint64(_offset)
c.Release()
ch.Release()
return
}
// Creates an iterator from the given table.
func (t *tOps) newIterator(f *tFile, slice *util.Range, ro *opt.ReadOptions) iterator.Iterator {
c, err := t.lookup(f)
ch, err := t.open(f)
if err != nil {
return iterator.NewEmptyIterator(err)
}
iter := c.Value().(*table.Reader).NewIterator(slice, ro)
iter.SetReleaser(c)
iter := ch.Value().(tableWrapper).NewIterator(slice, ro)
iter.SetReleaser(ch)
return iter
}
// Removes table from persistent storage. It waits until
// no one uses the table.
func (t *tOps) remove(f *tFile) {
num := f.file.Num()
t.cacheNS.Delete(num, func(exist bool) {
if err := f.file.Remove(); err != nil {
t.s.logf("table@remove removing @%d %q", num, err)
} else {
t.s.logf("table@remove removed @%d", num)
}
if bc := t.s.o.GetBlockCache(); bc != nil {
bc.GetNamespace(num).Zap(false)
t.cacheNS.Delete(num, func(exist, pending bool) {
if !pending {
if err := f.file.Remove(); err != nil {
t.s.logf("table@remove removing @%d %q", num, err)
} else {
t.s.logf("table@remove removed @%d", num)
}
if bc := t.s.o.GetBlockCache(); bc != nil {
bc.GetNamespace(num).Zap()
}
}
})
}
// Closes the table ops instance. It will close all tables,
// regardless of whether they are still in use.
func (t *tOps) close() {
t.cache.Zap(true)
t.cache.Zap()
}
// Creates new initialized table ops instance.
func newTableOps(s *session, cacheCap int) *tOps {
c := cache.NewLRUCache(cacheCap)
return &tOps{
s: s,
cache: c,
cacheNS: c.GetNamespace(0),
bpool: util.NewBufferPool(s.o.GetBlockSize() + 5),
}
}
// tWriter wraps the table writer. It keeps track of the file descriptor
// and the added key range.
type tWriter struct {
t *tOps
@@ -385,7 +436,8 @@ type tWriter struct {
first, last []byte
}
func (w *tWriter) add(key, value []byte) error {
// Appends a key/value pair to the table.
func (w *tWriter) append(key, value []byte) error {
if w.first == nil {
w.first = append([]byte{}, key...)
}
@@ -393,10 +445,12 @@ func (w *tWriter) add(key, value []byte) error {
return w.tw.Append(key, value)
}
// Returns true if the table is empty.
func (w *tWriter) empty() bool {
return w.first == nil
}
// Finalizes the table and returns table file.
func (w *tWriter) finish() (f *tFile, err error) {
err = w.tw.Close()
if err != nil {
@@ -408,10 +462,11 @@ func (w *tWriter) finish() (f *tFile, err error) {
return
}
w.w.Close()
f = newTFile(w.file, uint64(w.tw.BytesLen()), iKey(w.first), iKey(w.last))
f = newTableFile(w.file, uint64(w.tw.BytesLen()), iKey(w.first), iKey(w.last))
return
}
// Drops the table.
func (w *tWriter) drop() {
w.w.Close()
w.file.Remove()
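The renamed tWriter methods form a small create/append/finish lifecycle. A hedged sketch of how a table might be assembled with this API (buildTable is an illustrative helper, not a call site from the diff):

```go
// Sketch: building a table with the tWriter API described above.
func buildTable(t *tOps, keys, values [][]byte) (*tFile, error) {
	w, err := t.create()
	if err != nil {
		return nil, err
	}
	for i := range keys {
		if err := w.append(keys[i], values[i]); err != nil {
			w.drop() // abandon and remove the half-written file
			return nil, err
		}
	}
	return w.finish() // closes the writer and returns the finished tFile
}
```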

View File

@@ -37,6 +37,7 @@ func max(x, y int) int {
}
type block struct {
bpool *util.BufferPool
cmp comparer.BasicComparer
data []byte
restartsLen int
@@ -139,6 +140,14 @@ func (b *block) newIterator(slice *util.Range, inclLimit bool, cache util.Releas
return bi
}
func (b *block) Release() {
if b.bpool != nil {
b.bpool.Put(b.data)
b.bpool = nil
}
b.data = nil
}
type dir int
const (
@@ -437,18 +446,21 @@ func (i *blockIter) Value() []byte {
}
func (i *blockIter) Release() {
i.prevNode = nil
i.prevKeys = nil
i.key = nil
i.value = nil
i.dir = dirReleased
if i.cache != nil {
i.cache.Release()
i.cache = nil
}
if i.releaser != nil {
i.releaser.Release()
i.releaser = nil
if i.dir > dirReleased {
i.block = nil
i.prevNode = nil
i.prevKeys = nil
i.key = nil
i.value = nil
i.dir = dirReleased
if i.cache != nil {
i.cache.Release()
i.cache = nil
}
if i.releaser != nil {
i.releaser.Release()
i.releaser = nil
}
}
}
@@ -519,6 +531,7 @@ type Reader struct {
reader io.ReaderAt
cache cache.Namespace
err error
bpool *util.BufferPool
// Options
cmp comparer.Comparer
filter filter.Filter
@@ -538,7 +551,7 @@ func verifyChecksum(data []byte) bool {
}
func (r *Reader) readRawBlock(bh blockHandle, checksum bool) ([]byte, error) {
data := make([]byte, bh.length+blockTrailerLen)
data := r.bpool.Get(int(bh.length + blockTrailerLen))
if _, err := r.reader.ReadAt(data, int64(bh.offset)); err != nil && err != io.EOF {
return nil, err
}
@@ -551,8 +564,13 @@ func (r *Reader) readRawBlock(bh blockHandle, checksum bool) ([]byte, error) {
case blockTypeNoCompression:
data = data[:bh.length]
case blockTypeSnappyCompression:
var err error
data, err = snappy.Decode(nil, data[:bh.length])
decLen, err := snappy.DecodedLen(data[:bh.length])
if err != nil {
return nil, err
}
tmp := data
data, err = snappy.Decode(r.bpool.Get(decLen), tmp[:bh.length])
r.bpool.Put(tmp)
if err != nil {
return nil, err
}
@@ -606,23 +624,21 @@ func (r *Reader) getDataIter(dataBH blockHandle, slice *util.Range, checksum, fi
if r.cache != nil {
// Get/set block cache.
var err error
cache, ok := r.cache.Get(dataBH.offset, func() (ok bool, value interface{}, charge int, fin cache.SetFin) {
cache := r.cache.Get(dataBH.offset, func() (charge int, value interface{}) {
if !fillCache {
return
return 0, nil
}
var dataBlock *block
dataBlock, err = r.readBlock(dataBH, checksum)
if err == nil {
ok = true
value = dataBlock
charge = int(dataBH.length)
if err != nil {
return 0, nil
}
return
return int(dataBH.length), dataBlock
})
if err != nil {
return iterator.NewEmptyIterator(err)
}
if ok {
if cache != nil {
dataBlock := cache.Value().(*block)
if !dataBlock.checksum && (r.checksum || checksum) {
if !verifyChecksum(dataBlock.data) {
@@ -638,7 +654,7 @@ func (r *Reader) getDataIter(dataBH blockHandle, slice *util.Range, checksum, fi
if err != nil {
return iterator.NewEmptyIterator(err)
}
iter := dataBlock.newIterator(slice, false, nil)
iter := dataBlock.newIterator(slice, false, dataBlock)
return iter
}
@@ -708,8 +724,11 @@ func (r *Reader) Find(key []byte, ro *opt.ReadOptions) (rkey, value []byte, err
}
return
}
// Don't use block buffer, no need to copy the buffer.
rkey = data.Key()
value = data.Value()
// Use block buffer, and since the buffer will be recycled, the buffer
// needs to be copied.
value = append([]byte{}, data.Value()...)
return
}
@@ -760,11 +779,17 @@ func (r *Reader) OffsetOf(key []byte) (offset int64, err error) {
}
// NewReader creates a new initialized table reader for the file.
// The cache is optional and can be nil.
func NewReader(f io.ReaderAt, size int64, cache cache.Namespace, o *opt.Options) *Reader {
// The cache and bpool are optional and can be nil.
//
// The returned table reader instance is goroutine-safe.
func NewReader(f io.ReaderAt, size int64, cache cache.Namespace, bpool *util.BufferPool, o *opt.Options) *Reader {
if bpool == nil {
bpool = util.NewBufferPool(o.GetBlockSize() + blockTrailerLen)
}
r := &Reader{
reader: f,
cache: cache,
bpool: bpool,
cmp: o.GetComparer(),
checksum: o.GetStrict(opt.StrictBlockChecksum),
strictIter: o.GetStrict(opt.StrictIterator),

View File

@@ -59,7 +59,7 @@ var _ = testutil.Defer(func() {
It("Should be able to approximate offset of a key correctly", func() {
Expect(err).ShouldNot(HaveOccurred())
tr := NewReader(bytes.NewReader(buf.Bytes()), int64(buf.Len()), nil, o)
tr := NewReader(bytes.NewReader(buf.Bytes()), int64(buf.Len()), nil, nil, o)
CheckOffset := func(key string, expect, threshold int) {
offset, err := tr.OffsetOf([]byte(key))
Expect(err).ShouldNot(HaveOccurred())
@@ -95,7 +95,7 @@ var _ = testutil.Defer(func() {
tw.Close()
// Opening the table.
tr := NewReader(bytes.NewReader(buf.Bytes()), int64(buf.Len()), nil, o)
tr := NewReader(bytes.NewReader(buf.Bytes()), int64(buf.Len()), nil, nil, o)
return tableWrapper{tr}
}
Test := func(kv *testutil.KeyValue, body func(r *Reader)) func() {

View File

@@ -48,6 +48,7 @@ func (t *testingDB) TestClose() {
func newTestingDB(o *opt.Options, ro *opt.ReadOptions, wo *opt.WriteOptions) *testingDB {
stor := testutil.NewStorage()
db, err := Open(stor, o)
// FIXME: This may be called from outside It, which may cause panic.
Expect(err).NotTo(HaveOccurred())
return &testingDB{
DB: db,

View File

@@ -0,0 +1,162 @@
// Copyright (c) 2014, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package util
import (
"fmt"
"sync/atomic"
"time"
)
type buffer struct {
b []byte
miss int
}
// BufferPool is a 'buffer pool'.
type BufferPool struct {
pool [4]chan []byte
size [3]uint32
sizeMiss [3]uint32
baseline0 int
baseline1 int
baseline2 int
get uint32
put uint32
less uint32
equal uint32
greater uint32
miss uint32
}
func (p *BufferPool) poolNum(n int) int {
switch {
case n <= p.baseline0:
return 0
case n <= p.baseline1:
return 1
case n <= p.baseline2:
return 2
default:
return 3
}
}
// Get returns a buffer of length n.
func (p *BufferPool) Get(n int) []byte {
atomic.AddUint32(&p.get, 1)
poolNum := p.poolNum(n)
pool := p.pool[poolNum]
if poolNum == 0 {
// Fast path.
select {
case b := <-pool:
switch {
case cap(b) > n:
atomic.AddUint32(&p.less, 1)
return b[:n]
case cap(b) == n:
atomic.AddUint32(&p.equal, 1)
return b[:n]
default:
panic("not reached")
}
default:
atomic.AddUint32(&p.miss, 1)
}
return make([]byte, n, p.baseline0)
} else {
sizePtr := &p.size[poolNum-1]
select {
case b := <-pool:
switch {
case cap(b) > n:
atomic.AddUint32(&p.less, 1)
return b[:n]
case cap(b) == n:
atomic.AddUint32(&p.equal, 1)
return b[:n]
default:
atomic.AddUint32(&p.greater, 1)
if uint32(cap(b)) >= atomic.LoadUint32(sizePtr) {
select {
case pool <- b:
default:
}
}
}
default:
atomic.AddUint32(&p.miss, 1)
}
if size := atomic.LoadUint32(sizePtr); uint32(n) > size {
if size == 0 {
atomic.CompareAndSwapUint32(sizePtr, 0, uint32(n))
} else {
sizeMissPtr := &p.sizeMiss[poolNum-1]
if atomic.AddUint32(sizeMissPtr, 1) == 20 {
atomic.StoreUint32(sizePtr, uint32(n))
atomic.StoreUint32(sizeMissPtr, 0)
}
}
return make([]byte, n)
} else {
return make([]byte, n, size)
}
}
}
// Put adds given buffer to the pool.
func (p *BufferPool) Put(b []byte) {
atomic.AddUint32(&p.put, 1)
pool := p.pool[p.poolNum(cap(b))]
select {
case pool <- b:
default:
}
}
func (p *BufferPool) String() string {
return fmt.Sprintf("BufferPool{B·%d Z·%v Zm·%v G·%d P·%d <·%d =·%d >·%d M·%d}",
p.baseline0, p.size, p.sizeMiss, p.get, p.put, p.less, p.equal, p.greater, p.miss)
}
func (p *BufferPool) drain() {
for {
time.Sleep(1 * time.Second)
select {
case <-p.pool[0]:
case <-p.pool[1]:
case <-p.pool[2]:
case <-p.pool[3]:
default:
}
}
}
// NewBufferPool creates a new initialized 'buffer pool'.
func NewBufferPool(baseline int) *BufferPool {
if baseline <= 0 {
panic("baseline can't be <= 0")
}
p := &BufferPool{
baseline0: baseline,
baseline1: baseline * 2,
baseline2: baseline * 4,
}
for i, cap := range []int{6, 6, 3, 1} {
p.pool[i] = make(chan []byte, cap)
}
go p.drain()
return p
}
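The pool keeps four buckets selected by poolNum: requests up to baseline0 take the fast path, larger ones fall into the 2x and 4x baselines or the catch-all bucket. A minimal usage sketch, assuming the util import path shown in the diff:

```go
package main

import "github.com/syndtr/goleveldb/leveldb/util"

func main() {
	// With baseline 4096: Get(n) for n <= 4096 hits bucket 0 (fast path),
	// n <= 8192 bucket 1, n <= 16384 bucket 2, anything larger bucket 3.
	pool := util.NewBufferPool(4096)
	buf := pool.Get(4096) // len(buf) == 4096, possibly a recycled buffer
	copy(buf, "payload")  // ... fill and use the buffer ...
	pool.Put(buf) // hand it back; dropped silently if the bucket channel is full
}
```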

View File

@@ -0,0 +1,21 @@
// Copyright (c) 2014, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
// +build go1.3
package util
import (
"sync"
)
type Pool struct {
sync.Pool
}
func NewPool(cap int) *Pool {
return &Pool{}
}

View File

@@ -0,0 +1,33 @@
// Copyright (c) 2014, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
// +build !go1.3
package util
type Pool struct {
pool chan interface{}
}
func (p *Pool) Get() interface{} {
select {
case x := <-p.pool:
return x
default:
return nil
}
}
func (p *Pool) Put(x interface{}) {
select {
case p.pool <- x:
default:
}
}
func NewPool(cap int) *Pool {
return &Pool{pool: make(chan interface{}, cap)}
}
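Both Pool variants share the same loose contract: Get may return nil when the pool is empty, and Put may silently drop the value. A hedged usage sketch (the []byte payload is an assumption for illustration):

```go
package main

import "github.com/syndtr/goleveldb/leveldb/util"

func main() {
	p := util.NewPool(4) // cap is only honored by the pre-go1.3 channel variant
	v := p.Get()
	if v == nil {
		v = make([]byte, 1024) // pool empty: allocate fresh
	}
	p.Put(v) // may be dropped when the channel-based pool is full
}
```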

View File

@@ -14,3 +14,18 @@ type Range struct {
// Limit of the key range, not include in the range.
Limit []byte
}
// BytesPrefix returns a key range that satisfies the given prefix.
// This is only applicable to the standard 'bytes comparer'.
func BytesPrefix(prefix []byte) *Range {
var limit []byte
for i := len(prefix) - 1; i >= 0; i-- {
c := prefix[i]
if c < 0xff {
limit = make([]byte, i+1)
copy(limit, prefix)
limit[i] = c + 1
break
}
}
return &Range{prefix, limit}
}
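A worked example: BytesPrefix([]byte("app")) yields &Range{Start: "app", Limit: "apq"}, since 'p'+1 is 'q', so an iterator over that range visits exactly the keys prefixed "app". A sketch assuming the DB.NewIterator(slice, ro) signature of this goleveldb vintage:

```go
package main

import (
	"github.com/syndtr/goleveldb/leveldb"
	"github.com/syndtr/goleveldb/leveldb/util"
)

// Sketch: collect every key with prefix "app" ("apple" matches, "apq" does not).
func keysWithPrefix(db *leveldb.DB) (keys [][]byte) {
	iter := db.NewIterator(util.BytesPrefix([]byte("app")), nil)
	defer iter.Release()
	for iter.Next() {
		keys = append(keys, append([]byte{}, iter.Key()...)) // copy: Key is only valid until Next
	}
	return keys
}
```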

View File

@@ -40,8 +40,8 @@ type version struct {
tables [kNumLevels]tFiles
// Level that should be compacted next and its compaction score.
// Score < 1 means compaction is not strictly needed. These fields
// are initialized by ComputeCompaction()
// Score < 1 means compaction is not strictly needed. These fields
// are initialized by computeCompaction()
cLevel int
cScore float64
@@ -60,8 +60,6 @@ func (v *version) release_NB() {
panic("negative version ref")
}
s := v.s
tables := make(map[uint64]bool)
for _, tt := range v.next.tables {
for _, t := range tt {
@@ -74,7 +72,7 @@ func (v *version) release_NB() {
for _, t := range tt {
num := t.file.Num()
if _, ok := tables[num]; !ok {
s.tops.remove(t)
v.s.tops.remove(t)
}
}
}
@@ -89,130 +87,142 @@ func (v *version) release() {
v.s.vmu.Unlock()
}
func (v *version) get(key iKey, ro *opt.ReadOptions) (value []byte, cstate bool, err error) {
s := v.s
func (v *version) walkOverlapping(ikey iKey, f func(level int, t *tFile) bool, lf func(level int) bool) {
ukey := ikey.ukey()
ukey := key.ukey()
var tset *tSet
tseek := true
// We can search level-by-level since entries never hop across
// levels. Therefore we are guaranteed that if we find data
in a smaller level, later levels are irrelevant.
for level, ts := range v.tables {
if len(ts) == 0 {
// Walk tables level-by-level.
for level, tables := range v.tables {
if len(tables) == 0 {
continue
}
if level == 0 {
// Level-0 files may overlap each other. Find all files that
// overlap user_key and process them in order from newest to
var tmp tFiles
for _, t := range ts {
if s.icmp.uCompare(ukey, t.min.ukey()) >= 0 &&
s.icmp.uCompare(ukey, t.max.ukey()) <= 0 {
tmp = append(tmp, t)
}
}
if len(tmp) == 0 {
continue
}
tmp.sortByNum()
ts = tmp
} else {
i := ts.searchMax(key, s.icmp)
if i >= len(ts) || s.icmp.uCompare(ukey, ts[i].min.ukey()) < 0 {
continue
}
ts = ts[i : i+1]
}
var l0found bool
var l0seq uint64
var l0type vType
var l0value []byte
for _, t := range ts {
if tseek {
if tset == nil {
tset = &tSet{level, t}
} else if tset.table.incrSeek() <= 0 {
cstate = atomic.CompareAndSwapPointer(&v.cSeek, nil, unsafe.Pointer(tset))
tseek = false
}
}
var _rkey, rval []byte
_rkey, rval, err = s.tops.get(t, key, ro)
if err == ErrNotFound {
continue
} else if err != nil {
return
}
rkey := iKey(_rkey)
if seq, t, ok := rkey.parseNum(); ok {
if s.icmp.uCompare(ukey, rkey.ukey()) == 0 {
if level == 0 {
if seq >= l0seq {
l0found = true
l0seq = seq
l0type = t
l0value = rval
}
} else {
switch t {
case tVal:
value = rval
case tDel:
err = ErrNotFound
default:
panic("invalid type")
}
// overlap ukey.
for _, t := range tables {
if t.overlaps(v.s.icmp, ukey, ukey) {
if !f(level, t) {
return
}
}
}
} else {
if i := tables.searchMax(v.s.icmp, ikey); i < len(tables) {
t := tables[i]
if v.s.icmp.uCompare(ukey, t.imin.ukey()) >= 0 {
if !f(level, t) {
return
}
}
} else {
err = errors.New("leveldb: internal key corrupted")
return
}
}
if level == 0 && l0found {
switch l0type {
case tVal:
value = l0value
case tDel:
err = ErrNotFound
default:
panic("invalid type")
}
if lf != nil && !lf(level) {
return
}
}
}
func (v *version) get(ikey iKey, ro *opt.ReadOptions) (value []byte, tcomp bool, err error) {
ukey := ikey.ukey()
var (
tset *tSet
tseek bool
l0found bool
l0seq uint64
l0vt vType
l0val []byte
)
err = ErrNotFound
// Since entries never hop across levels, finding a key/value
// in a smaller level makes later levels irrelevant.
v.walkOverlapping(ikey, func(level int, t *tFile) bool {
if !tseek {
if tset == nil {
tset = &tSet{level, t}
} else if tset.table.consumeSeek() <= 0 {
tseek = true
tcomp = atomic.CompareAndSwapPointer(&v.cSeek, nil, unsafe.Pointer(tset))
}
}
ikey__, val_, err_ := v.s.tops.find(t, ikey, ro)
switch err_ {
case nil:
case ErrNotFound:
return true
default:
err = err_
return false
}
ikey_ := iKey(ikey__)
if seq, vt, ok := ikey_.parseNum(); ok {
if v.s.icmp.uCompare(ukey, ikey_.ukey()) != 0 {
return true
}
if level == 0 {
if seq >= l0seq {
l0found = true
l0seq = seq
l0vt = vt
l0val = val_
}
} else {
switch vt {
case tVal:
value = val_
err = nil
case tDel:
default:
panic("leveldb: invalid internal key type")
}
return false
}
} else {
err = errors.New("leveldb: internal key corrupted")
return false
}
return true
}, func(level int) bool {
if l0found {
switch l0vt {
case tVal:
value = l0val
err = nil
case tDel:
default:
panic("leveldb: invalid internal key type")
}
return false
}
return true
})
return
}
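The refactor splits table traversal from value resolution: walkOverlapping visits each table that may contain the key, level by level, stopping when the visitor returns false, while the per-level callback lets level-0 results (which must be resolved by sequence number) be settled before deeper levels are touched. A stripped-down sketch of the same visitor shape, with plain ints standing in for the leveldb types:

```go
// Sketch: the callback-walk pattern used by walkOverlapping/get above.
func walk(levels [][]int, visit func(level, table int) bool, levelDone func(level int) bool) {
	for l, tables := range levels {
		for _, t := range tables {
			if !visit(l, t) {
				return // visitor asked to stop, e.g. the value was found
			}
		}
		if levelDone != nil && !levelDone(l) {
			return // e.g. a level-0 winner was already resolved
		}
	}
}
```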
func (v *version) getIterators(slice *util.Range, ro *opt.ReadOptions) (its []iterator.Iterator) {
s := v.s
// Merge all level zero files together since they may overlap
for _, t := range v.tables[0] {
it := s.tops.newIterator(t, slice, ro)
it := v.s.tops.newIterator(t, slice, ro)
its = append(its, it)
}
strict := s.o.GetStrict(opt.StrictIterator) || ro.GetStrict(opt.StrictIterator)
for _, tt := range v.tables[1:] {
if len(tt) == 0 {
strict := v.s.o.GetStrict(opt.StrictIterator) || ro.GetStrict(opt.StrictIterator)
for _, tables := range v.tables[1:] {
if len(tables) == 0 {
continue
}
it := iterator.NewIndexedIterator(tt.newIndexIterator(s.tops, s.icmp, slice, ro), strict, true)
it := iterator.NewIndexedIterator(tables.newIndexIterator(v.s.tops, v.s.icmp, slice, ro), strict, true)
its = append(its, it)
}
@@ -242,25 +252,25 @@ func (v *version) tLen(level int) int {
return len(v.tables[level])
}
func (v *version) offsetOf(key iKey) (n uint64, err error) {
for level, tt := range v.tables {
for _, t := range tt {
if v.s.icmp.Compare(t.max, key) <= 0 {
// Entire file is before "key", so just add the file size
func (v *version) offsetOf(ikey iKey) (n uint64, err error) {
for level, tables := range v.tables {
for _, t := range tables {
if v.s.icmp.Compare(t.imax, ikey) <= 0 {
// Entire file is before "ikey", so just add the file size
n += t.size
} else if v.s.icmp.Compare(t.min, key) > 0 {
// Entire file is after "key", so ignore
} else if v.s.icmp.Compare(t.imin, ikey) > 0 {
// Entire file is after "ikey", so ignore
if level > 0 {
// Files other than level 0 are sorted by meta->min, so
// no further files in this level will contain data for
// "key".
// "ikey".
break
}
} else {
// "key" falls in the range for this table. Add the
// approximate offset of "key" within the table.
// "ikey" falls in the range for this table. Add the
// approximate offset of "ikey" within the table.
var nn uint64
nn, err = v.s.tops.offsetOf(t, key)
nn, err = v.s.tops.offsetOf(t, ikey)
if err != nil {
return 0, err
}
@@ -272,15 +282,15 @@ func (v *version) offsetOf(key iKey) (n uint64, err error) {
return
}
func (v *version) pickLevel(min, max []byte) (level int) {
if !v.tables[0].isOverlaps(min, max, false, v.s.icmp) {
var r tFiles
func (v *version) pickLevel(umin, umax []byte) (level int) {
if !v.tables[0].overlaps(v.s.icmp, umin, umax, true) {
var overlaps tFiles
for ; level < kMaxMemCompactLevel; level++ {
if v.tables[level+1].isOverlaps(min, max, true, v.s.icmp) {
if v.tables[level+1].overlaps(v.s.icmp, umin, umax, false) {
break
}
v.tables[level+2].getOverlaps(min, max, &r, true, v.s.icmp.ucmp)
if r.size() > kMaxGrandParentOverlapBytes {
overlaps = v.tables[level+2].getOverlaps(overlaps, v.s.icmp, umin, umax, false)
if overlaps.size() > kMaxGrandParentOverlapBytes {
break
}
}
@@ -294,7 +304,7 @@ func (v *version) computeCompaction() {
var bestLevel int = -1
var bestScore float64 = -1
for level, ff := range v.tables {
for level, tables := range v.tables {
var score float64
if level == 0 {
// We treat level-0 specially by bounding the number of files
@@ -308,9 +318,9 @@ func (v *version) computeCompaction() {
// file size is small (perhaps because of a small write-buffer
// setting, or very high compression ratios, or lots of
// overwrites/deletions).
score = float64(len(ff)) / kL0_CompactionTrigger
score = float64(len(tables)) / kL0_CompactionTrigger
} else {
score = float64(ff.size()) / levelMaxSize[level]
score = float64(tables.size()) / levelMaxSize[level]
}
if score > bestScore {
@@ -336,57 +346,51 @@ type versionStaging struct {
}
func (p *versionStaging) commit(r *sessionRecord) {
btt := p.base.tables
// Deleted tables.
for _, r := range r.deletedTables {
tm := &(p.tables[r.level])
// deleted tables
for _, tr := range r.deletedTables {
tm := &(p.tables[tr.level])
bt := btt[tr.level]
if len(bt) > 0 {
if len(p.base.tables[r.level]) > 0 {
if tm.deleted == nil {
tm.deleted = make(map[uint64]struct{})
}
tm.deleted[tr.num] = struct{}{}
tm.deleted[r.num] = struct{}{}
}
if tm.added != nil {
delete(tm.added, tr.num)
delete(tm.added, r.num)
}
}
// new tables
for _, tr := range r.addedTables {
tm := &(p.tables[tr.level])
// New tables.
for _, r := range r.addedTables {
tm := &(p.tables[r.level])
if tm.added == nil {
tm.added = make(map[uint64]ntRecord)
}
tm.added[tr.num] = tr
tm.added[r.num] = r
if tm.deleted != nil {
delete(tm.deleted, tr.num)
delete(tm.deleted, r.num)
}
}
}
func (p *versionStaging) finish() *version {
s := p.base.s
btt := p.base.tables
// build new version
nv := &version{s: s}
// Build new version.
nv := &version{s: p.base.s}
for level, tm := range p.tables {
bt := btt[level]
btables := p.base.tables[level]
n := len(bt) + len(tm.added) - len(tm.deleted)
n := len(btables) + len(tm.added) - len(tm.deleted)
if n < 0 {
n = 0
}
nt := make(tFiles, 0, n)
// base tables
for _, t := range bt {
// Base tables.
for _, t := range btables {
if _, ok := tm.deleted[t.file.Num()]; ok {
continue
}
@@ -396,17 +400,21 @@ func (p *versionStaging) finish() *version {
nt = append(nt, t)
}
// new tables
for _, tr := range tm.added {
nt = append(nt, tr.makeFile(s))
// New tables.
for _, r := range tm.added {
nt = append(nt, r.makeFile(p.base.s))
}
// sort tables
nt.sortByKey(s.icmp)
// Sort tables.
if level == 0 {
nt.sortByNum()
} else {
nt.sortByKey(p.base.s.icmp)
}
nv.tables[level] = nt
}
// compute compaction score for new version
// Compute compaction score for new version.
nv.computeCompaction()
return nv

View File

@@ -1,13 +1,17 @@
syncthing [![Build Status](https://travis-ci.org/calmh/syncthing.svg?branch=master)](https://travis-ci.org/calmh/syncthing) [![Coverage Status](https://img.shields.io/coveralls/calmh/syncthing.svg)](https://coveralls.io/r/calmh/syncthing?branch=master)
syncthing
=========
[![Latest Build](http://img.shields.io/jenkins/s/http/build.syncthing.net/syncthing.svg?style=flat-square)](http://build.syncthing.net/job/syncthing/lastBuild/)
[![API Documentation](http://img.shields.io/badge/api-Godoc-blue.svg?style=flat-square)](http://godoc.org/github.com/syncthing/syncthing)
[![MIT License](http://img.shields.io/badge/license-MIT-blue.svg?style=flat-square)](http://opensource.org/licenses/MIT)
This is the `syncthing` project. The following are the project goals:
1. Define a protocol for synchronization of a file repository between a
number of collaborating nodes. The protocol should be well defined,
unambiguous, easily understood, free to use, efficient, secure and
language neutral. This is the [Block Exchange
Protocol](https://github.com/calmh/syncthing/blob/master/protocol/PROTOCOL.md).
Protocol](https://github.com/syncthing/syncthing/blob/master/protocol/PROTOCOL.md).
2. Provide the reference implementation to demonstrate the usability of
said protocol. This is the `syncthing` utility. It is the hope that
@@ -21,14 +25,22 @@ for incompatible changes.
Getting Started
---------------
Take a look at the [getting started guide](http://discourse.syncthing.net/t/getting-started/46).
Take a look at the [getting started guide](http://discourse.syncthing.net/t/46).
Building
--------
Building Syncthing from source is easy, and there's a
[guide](http://discourse.syncthing.net/t/44)
that describes it for both Unix and Windows.
Signed Releases
---------------
As of v0.7.0 and onwards, git tags and release binaries are GPG signed with
the key BCE524C7 (http://nym.se/gpg.txt). The signature is included in the
normal release bundle as `syncthing.asc` or `syncthing.exe.asc`.
the key BCE524C7 (http://nym.se/gpg.txt). For release binaries, MD5 and
SHA1 checksums are calculated and signed, available in the
md5sum.txt.asc and sha1sum.txt.asc files.
Documentation
=============
@@ -45,4 +57,4 @@ under the [Creative Commons Attribution 4.0 International
License](http://creativecommons.org/licenses/by/4.0/).
All code is licensed under the [MIT
License](https://github.com/calmh/syncthing/blob/master/LICENSE).
License](https://github.com/syncthing/syncthing/blob/master/LICENSE).

View File

@@ -1,14 +0,0 @@
# editorconfig.org
root = true
[*]
indent_style = space
indent_size = 2
end_of_line = lf
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true
[*.py]
indent_size = 4

View File

@@ -1,8 +0,0 @@
# Enforce Unix newlines
*.css text eol=lf
*.html text eol=lf
*.js text eol=lf
*.json text eol=lf
*.less text eol=lf
*.md text eol=lf
*.yml text eol=lf

View File

@@ -1,42 +0,0 @@
# Ignore docs files
_gh_pages
_site
.ruby-version
# Numerous always-ignore extensions
*.diff
*.err
*.orig
*.log
*.rej
*.swo
*.swp
*.zip
*.vi
*~
# OS or Editor folders
.DS_Store
._*
Thumbs.db
.cache
.project
.settings
.tmproj
*.esproj
nbproject
*.sublime-project
*.sublime-workspace
.idea
# Komodo
*.komodoproject
.komodotools
# grunt-html-validation
validation-status.json
validation-report.json
# Folders to ignore
node_modules
bower_components

View File

@@ -1,28 +0,0 @@
language: node_js
node_js:
- "0.10"
before_install:
- time sudo pip install --use-mirrors -r test-infra/requirements.txt
- rvm use 1.9.3 --fuzzy
- if [ "$TWBS_TEST" = validate-html ]; then echo "ruby=$(basename $(rvm gemdir)) jekyll=$JEKYLL_VERSION" > pseudo_Gemfile.lock; fi
install:
- time npm install -g grunt-cli
- time ./test-infra/s3_cache.py download 'npm packages' test-infra/npm-shrinkwrap.canonical.json ./node_modules || time ./test-infra/uncached-npm-install.sh
- if [ "$TWBS_TEST" = validate-html ]; then time ./test-infra/s3_cache.py download rubygems pseudo_Gemfile.lock $(rvm gemdir) || gem install -N jekyll -v $JEKYLL_VERSION; fi
after_script:
- if [ "$TWBS_TEST" = core ]; then time ./test-infra/s3_cache.py upload 'npm packages' test-infra/npm-shrinkwrap.canonical.json ./node_modules; fi
- if [ "$TWBS_TEST" = validate-html ]; then time ./test-infra/s3_cache.py upload rubygems pseudo_Gemfile.lock $(rvm gemdir); fi
env:
global:
- JEKYLL_VERSION: 1.4.1
- SAUCE_USERNAME: bootstrap
- secure: "pJkBwnuae9dKU5tEcCqccfS1QQw7/meEcfz63fM7ba7QJNjoA6BaXj08L5Z3Vb5vBmVPwBawxo5Hp0jC0r/Z/O0hGnAmz/Cz09L+cy7dSAZ9x4hvZePSja/UAusaB5ogMoO8l2b773MzgQeSmrLbExr9BWLeqEfjC2hFgdgHLaQ="
- secure: "gqjqISbxBJK6byFbsmr1AyP1qoWH+rap06A2gI7v72+Tn2PU2nYkIMUkCvhZw6K889jv+LhQ/ybcBxDOXHpNCExCnSgB4dcnmYp+9oeNZb37jSP0rQ+Ib4OTLjzc3/FawE/fUq5kukZTC7porzc/k0qJNLAZRx3YLALmK1GIdUY="
- secure: "Gghh/e3Gsbj1+4RR9Lh2aR/xJl35HWiHqlPIeSUqE9D7uDCVTAwNce/dGL3Ew7uJPfJ6Pgr70wD3zgu3stw0Zmzayax0hiDtGwcQCxVIER08wqGANK9C2Q7PYJkNTNtiTo6ehKWbdV4Z+/U+TEYyQfpQTDbAFYk/vVpsdjp0Lmc="
- secure: "RTbRdx4G/2OTLfrZtP1VbRljxEmd6A1F3GqXboeQTldsnAlwpsES65es5CE3ub/rmixLApOY9ot7OPmNixFgC2Y8xOsV7lNCC62QVpmqQEDyGFFQKb3yO6/dmwQxdsCqGfzf9Np6Wh5V22QFvr50ZLKLd7Uhd9oXMDIk/z1MJ3o="
matrix:
- TWBS_TEST=core
- TWBS_TEST=validate-html
- TWBS_TEST=sauce-js-unit
matrix:
fast_finish: true

View File

@@ -1 +0,0 @@
getbootstrap.com

View File

@@ -1,196 +0,0 @@
# Contributing to Bootstrap
Looking to contribute something to Bootstrap? **Here's how you can help.**
Please take a moment to review this document in order to make the contribution
process easy and effective for everyone involved.
Following these guidelines helps to communicate that you respect the time of
the developers managing and developing this open source project. In return,
they should reciprocate that respect in addressing your issue or assessing
patches and features.
## Using the issue tracker
The [issue tracker](https://github.com/twbs/bootstrap/issues) is
the preferred channel for [bug reports](#bug-reports), [feature requests](#feature-requests)
and [submitting pull requests](#pull-requests), but please respect the following
restrictions:
* Please **do not** use the issue tracker for personal support requests. Stack
Overflow ([`twitter-bootstrap-3`](http://stackoverflow.com/questions/tagged/twitter-bootstrap-3) tag) or [IRC](https://github.com/twbs/bootstrap/blob/master/README.md#community) are better places to get help.
* Please **do not** derail or troll issues. Keep the discussion on topic and
respect the opinions of others.
* Please **do not** open issues or pull requests regarding the code in
[`Normalize`](https://github.com/necolas/normalize.css) (open them in
their respective repositories).
## Bug reports
A bug is a _demonstrable problem_ that is caused by the code in the repository.
Good bug reports are extremely helpful, so thanks!
Guidelines for bug reports:
1. **Use the GitHub issue search** &mdash; check if the issue has already been
reported.
2. **Check if the issue has been fixed** &mdash; try to reproduce it using the
latest `master` or development branch in the repository.
3. **Isolate the problem** &mdash; ideally create a [reduced test
case](http://css-tricks.com/6263-reduced-test-cases/) and a live example.
[This JS Bin](http://jsbin.com/EBAwOkOK/1) is a helpful template.
A good bug report shouldn't leave others needing to chase you up for more
information. Please try to be as detailed as possible in your report. What is
your environment? What steps will reproduce the issue? What browser(s) and OS
experience the problem? Do other browsers show the bug differently? What
would you expect to be the outcome? All these details will help people to fix
any potential bugs.
Example:
> Short and descriptive example bug report title
>
> A summary of the issue and the browser/OS environment in which it occurs. If
> suitable, include the steps required to reproduce the bug.
>
> 1. This is the first step
> 2. This is the second step
> 3. Further steps, etc.
>
> `<url>` - a link to the reduced test case
>
> Any other information you want to share that is relevant to the issue being
> reported. This might include the lines of code that you have identified as
> causing the bug, and potential solutions (and your opinions on their
> merits).
## Feature requests
Feature requests are welcome. But take a moment to find out whether your idea
fits with the scope and aims of the project. It's up to *you* to make a strong
case to convince the project's developers of the merits of this feature. Please
provide as much detail and context as possible.
## Pull requests
Good pull requests—patches, improvements, new features—are a fantastic
help. They should remain focused in scope and avoid containing unrelated
commits.
**Please ask first** before embarking on any significant pull request (e.g.
implementing features, refactoring code, porting to a different language),
otherwise you risk spending a lot of time working on something that the
project's developers might not want to merge into the project.
Please adhere to the [coding guidelines](#code-guidelines) used throughout the
project (indentation, accurate comments, etc.) and any other requirements
(such as test coverage).
Adhering to the following process is the best way to get your work
included in the project:
1. [Fork](http://help.github.com/fork-a-repo/) the project, clone your fork,
and configure the remotes:
```bash
# Clone your fork of the repo into the current directory
git clone https://github.com/<your-username>/bootstrap.git
# Navigate to the newly cloned directory
cd bootstrap
# Assign the original repo to a remote called "upstream"
git remote add upstream https://github.com/twbs/bootstrap.git
```
2. If you cloned a while ago, get the latest changes from upstream:
```bash
git checkout master
git pull upstream master
```
3. Create a new topic branch (off the main project development branch) to
contain your feature, change, or fix:
```bash
git checkout -b <topic-branch-name>
```
4. Commit your changes in logical chunks. Please adhere to these [git commit
message guidelines](http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html)
or your code is unlikely to be merged into the main project. Use Git's
[interactive rebase](https://help.github.com/articles/interactive-rebase)
feature to tidy up your commits before making them public.
5. Locally merge (or rebase) the upstream development branch into your topic branch:
```bash
git pull [--rebase] upstream master
```
6. Push your topic branch up to your fork:
```bash
git push origin <topic-branch-name>
```
7. [Open a Pull Request](https://help.github.com/articles/using-pull-requests/)
with a clear title and description against the `master` branch.
**IMPORTANT**: By submitting a patch, you agree to allow the project owners to
license your work under the terms of the [MIT License](LICENSE.md).
## Code guidelines
### HTML
- Two spaces for indentation, never tabs.
- Double quotes only, never single quotes.
- Always use proper indentation.
- Use tags and elements appropriate for an HTML5 doctype (e.g., self-closing tags).
- Use CDNs and HTTPS for third-party JS when possible. We don't use protocol-relative URLs in this case because they break when viewing the page locally via `file://`.
- Use [WAI-ARIA](https://developer.mozilla.org/en-US/docs/Web/Accessibility/ARIA) attributes in documentation examples to promote accessibility.
### CSS
- CSS changes must be done in `.less` files first, never just in the compiled `.css` files.
- Adhere to the [CSS property order](http://markdotto.com/2011/11/29/css-property-order/).
- Multiple-line approach (one property and value per line).
- Always a space after a property's colon (e.g., `display: block;` and not `display:block;`).
- End all lines with a semi-colon.
- For multiple, comma-separated selectors, place each selector on its own line.
- Attribute selectors, like `input[type="text"]` should always wrap the attribute's value in double quotes, for consistency and safety (see this [blog post on unquoted attribute values](http://mathiasbynens.be/notes/unquoted-attribute-values) that can lead to XSS attacks).
- Attribute selectors should only be used where absolutely necessary (e.g., form controls) and should be avoided on custom components for performance and explicitness.
- Series of classes for a component should include a base class (e.g., `.component`) and use the base class as a prefix for modifier and sub-components (e.g., `.component-lg`).
- Avoid inheritance and over nesting—use single, explicit classes whenever possible.
- When feasible, default color palettes should comply with [WCAG color contrast guidelines](http://www.w3.org/TR/WCAG20/#visual-audio-contrast).
- Except in rare cases, don't remove default `:focus` styles (via e.g. `outline: none;`) without providing alternative styles. See [this A11Y Project post](http://a11yproject.com/posts/never-remove-css-outlines/) for more details.
### JS
- No semicolons (in client-side JS)
- 2 spaces (no tabs)
- strict mode
- "Attractive"
### Checking coding style
Run `grunt test` before committing to ensure your changes follow our coding standards.
## License
By contributing your code, you agree to license your contribution under the [MIT license](https://github.com/twbs/bootstrap/blob/master/LICENSE).
Prior to v3.1.0, Bootstrap was released under the Apache License v2.0.

View File

@@ -1,421 +0,0 @@
/*!
* Bootstrap's Gruntfile
* http://getbootstrap.com
* Copyright 2013-2014 Twitter, Inc.
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
*/
module.exports = function (grunt) {
'use strict';
// Force use of Unix newlines
grunt.util.linefeed = '\n';
RegExp.quote = function (string) {
return string.replace(/[-\\^$*+?.()|[\]{}]/g, '\\$&');
};
var fs = require('fs');
var path = require('path');
var generateGlyphiconsData = require('./grunt/bs-glyphicons-data-generator.js');
var BsLessdocParser = require('./grunt/bs-lessdoc-parser.js');
var generateRawFilesJs = require('./grunt/bs-raw-files-generator.js');
var updateShrinkwrap = require('./grunt/shrinkwrap.js');
// Project configuration.
grunt.initConfig({
// Metadata.
pkg: grunt.file.readJSON('package.json'),
banner: '/*!\n' +
' * Bootstrap v<%= pkg.version %> (<%= pkg.homepage %>)\n' +
' * Copyright 2011-<%= grunt.template.today("yyyy") %> <%= pkg.author %>\n' +
' * Licensed under <%= pkg.license.type %> (<%= pkg.license.url %>)\n' +
' */\n',
jqueryCheck: 'if (typeof jQuery === \'undefined\') { throw new Error(\'Bootstrap\\\'s JavaScript requires jQuery\') }\n\n',
// Task configuration.
clean: {
dist: ['dist', 'docs/dist']
},
jshint: {
options: {
jshintrc: 'js/.jshintrc'
},
grunt: {
options: {
jshintrc: 'grunt/.jshintrc'
},
src: ['Gruntfile.js', 'grunt/*.js']
},
src: {
src: 'js/*.js'
},
test: {
src: 'js/tests/unit/*.js'
},
assets: {
src: ['docs/assets/js/application.js', 'docs/assets/js/customizer.js']
}
},
jscs: {
options: {
config: 'js/.jscs.json',
},
grunt: {
src: ['Gruntfile.js', 'grunt/*.js']
},
src: {
src: 'js/*.js'
},
test: {
src: 'js/tests/unit/*.js'
},
assets: {
src: ['docs/assets/js/application.js', 'docs/assets/js/customizer.js']
}
},
csslint: {
options: {
csslintrc: 'less/.csslintrc'
},
src: [
'dist/css/bootstrap.css',
'dist/css/bootstrap-theme.css',
'docs/assets/css/docs.css',
'docs/examples/**/*.css'
]
},
concat: {
options: {
banner: '<%= banner %>\n<%= jqueryCheck %>',
stripBanners: false
},
bootstrap: {
src: [
'js/transition.js',
'js/alert.js',
'js/button.js',
'js/carousel.js',
'js/collapse.js',
'js/dropdown.js',
'js/modal.js',
'js/tooltip.js',
'js/popover.js',
'js/scrollspy.js',
'js/tab.js',
'js/affix.js'
],
dest: 'dist/js/<%= pkg.name %>.js'
}
},
uglify: {
options: {
report: 'min'
},
bootstrap: {
options: {
banner: '<%= banner %>'
},
src: '<%= concat.bootstrap.dest %>',
dest: 'dist/js/<%= pkg.name %>.min.js'
},
customize: {
options: {
preserveComments: 'some'
},
src: [
'docs/assets/js/vendor/less.min.js',
'docs/assets/js/vendor/jszip.min.js',
'docs/assets/js/vendor/uglify.min.js',
'docs/assets/js/vendor/blob.js',
'docs/assets/js/vendor/filesaver.js',
'docs/assets/js/raw-files.min.js',
'docs/assets/js/customizer.js'
],
dest: 'docs/assets/js/customize.min.js'
},
docsJs: {
options: {
preserveComments: 'some'
},
src: [
'docs/assets/js/vendor/holder.js',
'docs/assets/js/application.js'
],
dest: 'docs/assets/js/docs.min.js'
}
},
less: {
compileCore: {
options: {
strictMath: true,
sourceMap: true,
outputSourceFiles: true,
sourceMapURL: '<%= pkg.name %>.css.map',
sourceMapFilename: 'dist/css/<%= pkg.name %>.css.map'
},
files: {
'dist/css/<%= pkg.name %>.css': 'less/bootstrap.less'
}
},
compileTheme: {
options: {
strictMath: true,
sourceMap: true,
outputSourceFiles: true,
sourceMapURL: '<%= pkg.name %>-theme.css.map',
sourceMapFilename: 'dist/css/<%= pkg.name %>-theme.css.map'
},
files: {
'dist/css/<%= pkg.name %>-theme.css': 'less/theme.less'
}
},
minify: {
options: {
cleancss: true,
report: 'min'
},
files: {
'dist/css/<%= pkg.name %>.min.css': 'dist/css/<%= pkg.name %>.css',
'dist/css/<%= pkg.name %>-theme.min.css': 'dist/css/<%= pkg.name %>-theme.css'
}
}
},
cssmin: {
compress: {
options: {
keepSpecialComments: '*',
noAdvanced: true, // turn advanced optimizations off until the issue is fixed in clean-css
report: 'min',
selectorsMergeMode: 'ie8'
},
src: [
'docs/assets/css/docs.css',
'docs/assets/css/pygments-manni.css'
],
dest: 'docs/assets/css/docs.min.css'
}
},
usebanner: {
dist: {
options: {
position: 'top',
banner: '<%= banner %>'
},
files: {
src: [
'dist/css/<%= pkg.name %>.css',
'dist/css/<%= pkg.name %>.min.css',
'dist/css/<%= pkg.name %>-theme.css',
'dist/css/<%= pkg.name %>-theme.min.css'
]
}
}
},
csscomb: {
options: {
config: 'less/.csscomb.json'
},
dist: {
files: {
'dist/css/<%= pkg.name %>.css': 'dist/css/<%= pkg.name %>.css',
'dist/css/<%= pkg.name %>-theme.css': 'dist/css/<%= pkg.name %>-theme.css'
}
},
examples: {
expand: true,
cwd: 'docs/examples/',
src: ['**/*.css'],
dest: 'docs/examples/'
}
},
copy: {
fonts: {
expand: true,
src: 'fonts/*',
dest: 'dist/'
},
docs: {
expand: true,
cwd: './dist',
src: [
'{css,js}/*.min.*',
'css/*.map',
'fonts/*'
],
dest: 'docs/dist'
}
},
qunit: {
options: {
inject: 'js/tests/unit/phantom.js'
},
files: 'js/tests/index.html'
},
connect: {
server: {
options: {
port: 3000,
base: '.'
}
}
},
jekyll: {
docs: {}
},
jade: {
compile: {
options: {
pretty: true,
data: function () {
var filePath = path.join(__dirname, 'less/variables.less');
var fileContent = fs.readFileSync(filePath, {encoding: 'utf8'});
var parser = new BsLessdocParser(fileContent);
return {sections: parser.parseFile()};
}
},
files: {
'docs/_includes/customizer-variables.html': 'docs/jade/customizer-variables.jade',
'docs/_includes/nav-customize.html': 'docs/jade/customizer-nav.jade'
}
}
},
validation: {
options: {
charset: 'utf-8',
doctype: 'HTML5',
failHard: true,
reset: true,
relaxerror: [
'Bad value X-UA-Compatible for attribute http-equiv on element meta.',
'Element img is missing required attribute src.'
]
},
files: {
src: '_gh_pages/**/*.html'
}
},
watch: {
src: {
files: '<%= jshint.src.src %>',
tasks: ['jshint:src', 'qunit']
},
test: {
files: '<%= jshint.test.src %>',
tasks: ['jshint:test', 'qunit']
},
less: {
files: 'less/*.less',
tasks: 'less'
}
},
sed: {
versionNumber: {
pattern: (function () {
var old = grunt.option('oldver');
return old ? RegExp.quote(old) : old;
})(),
replacement: grunt.option('newver'),
recursive: true
}
},
'saucelabs-qunit': {
all: {
options: {
build: process.env.TRAVIS_JOB_ID,
concurrency: 10,
urls: ['http://127.0.0.1:3000/js/tests/index.html'],
browsers: grunt.file.readYAML('test-infra/sauce_browsers.yml')
}
}
},
exec: {
npmUpdate: {
command: 'npm update'
},
npmShrinkWrap: {
command: 'npm shrinkwrap --dev'
}
}
});
// These plugins provide necessary tasks.
require('load-grunt-tasks')(grunt, {scope: 'devDependencies'});
// Docs HTML validation task
grunt.registerTask('validate-html', ['jekyll', 'validation']);
// Test task.
var testSubtasks = [];
// Skip core tests if running a different subset of the test suite
if (!process.env.TWBS_TEST || process.env.TWBS_TEST === 'core') {
testSubtasks = testSubtasks.concat(['dist-css', 'csslint', 'jshint', 'jscs', 'qunit', 'build-customizer-html']);
}
// Skip HTML validation if running a different subset of the test suite
if (!process.env.TWBS_TEST || process.env.TWBS_TEST === 'validate-html') {
testSubtasks.push('validate-html');
}
// Only run Sauce Labs tests if there's a Sauce access key
if (typeof process.env.SAUCE_ACCESS_KEY !== 'undefined' &&
// Skip Sauce if running a different subset of the test suite
(!process.env.TWBS_TEST || process.env.TWBS_TEST === 'sauce-js-unit')) {
testSubtasks.push('connect');
testSubtasks.push('saucelabs-qunit');
}
grunt.registerTask('test', testSubtasks);
// JS distribution task.
grunt.registerTask('dist-js', ['concat', 'uglify']);
// CSS distribution task.
grunt.registerTask('dist-css', ['less', 'cssmin', 'csscomb', 'usebanner']);
// Docs distribution task.
grunt.registerTask('dist-docs', 'copy:docs');
// Full distribution task.
grunt.registerTask('dist', ['clean', 'dist-css', 'copy:fonts', 'dist-js', 'dist-docs']);
// Default task.
grunt.registerTask('default', ['test', 'dist', 'build-glyphicons-data', 'build-customizer', 'update-shrinkwrap']);
// Version numbering task.
// grunt change-version-number --oldver=A.B.C --newver=X.Y.Z
// This can be overzealous, so its changes should always be manually reviewed!
grunt.registerTask('change-version-number', 'sed');
grunt.registerTask('build-glyphicons-data', generateGlyphiconsData);
// task for building customizer
grunt.registerTask('build-customizer', ['build-customizer-html', 'build-raw-files']);
grunt.registerTask('build-customizer-html', 'jade');
grunt.registerTask('build-raw-files', 'Add scripts/less files to customizer.', function () {
var banner = grunt.template.process('<%= banner %>');
generateRawFilesJs(banner);
});
// Task for updating the npm packages used by the Travis build.
grunt.registerTask('update-shrinkwrap', ['exec:npmUpdate', 'exec:npmShrinkWrap', '_update-shrinkwrap']);
grunt.registerTask('_update-shrinkwrap', function () { updateShrinkwrap.call(this, grunt); });
};

View File

@@ -1,173 +0,0 @@
# [Bootstrap](http://getbootstrap.com) [![Bower version](https://badge.fury.io/bo/bootstrap.png)](http://badge.fury.io/bo/bootstrap) [![Build Status](https://secure.travis-ci.org/twbs/bootstrap.png)](http://travis-ci.org/twbs/bootstrap) [![devDependency Status](https://david-dm.org/twbs/bootstrap/dev-status.png?theme=shields.io)](https://david-dm.org/twbs/bootstrap#info=devDependencies)
[![Selenium Test Status](https://saucelabs.com/browser-matrix/bootstrap.svg)](https://saucelabs.com/u/bootstrap)
Bootstrap is a sleek, intuitive, and powerful front-end framework for faster and easier web development, created by [Mark Otto](http://twitter.com/mdo) and [Jacob Thornton](http://twitter.com/fat), and maintained by the [core team](https://github.com/twbs?tab=members) with the massive support and involvement of the community.
To get started, check out <http://getbootstrap.com>!
## Table of contents
- [Quick start](#quick-start)
- [Bugs and feature requests](#bugs-and-feature-requests)
- [Documentation](#documentation)
- [Compiling CSS and JavaScript](#compiling-css-and-javascript)
- [Contributing](#contributing)
- [Community](#community)
- [Versioning](#versioning)
- [Authors](#authors)
- [Copyright and license](#copyright-and-license)
## Quick start
Three quick start options are available:
- [Download the latest release](https://github.com/twbs/bootstrap/archive/v3.1.1.zip).
- Clone the repo: `git clone https://github.com/twbs/bootstrap.git`.
- Install with [Bower](http://bower.io): `bower install bootstrap`.
Read the [Getting Started page](http://getbootstrap.com/getting-started/) for information on the framework contents, templates and examples, and more.
### What's included
Within the download you'll find the following directories and files, logically grouping common assets and providing both compiled and minified variations. You'll see something like this:
```
bootstrap/
├── css/
│ ├── bootstrap.css
│ ├── bootstrap.min.css
│ ├── bootstrap-theme.css
│ └── bootstrap-theme.min.css
├── js/
│ ├── bootstrap.js
│ └── bootstrap.min.js
└── fonts/
├── glyphicons-halflings-regular.eot
├── glyphicons-halflings-regular.svg
├── glyphicons-halflings-regular.ttf
└── glyphicons-halflings-regular.woff
```
We provide compiled CSS and JS (`bootstrap.*`), as well as compiled and minified CSS and JS (`bootstrap.min.*`). Fonts from Glyphicons are included, as is the optional Bootstrap theme.
## Bugs and feature requests
Have a bug or a feature request? Please first read the [issue guidelines](https://github.com/twbs/bootstrap/blob/master/CONTRIBUTING.md#using-the-issue-tracker) and search for existing and closed issues. If your problem or idea is not addressed yet, [please open a new issue](https://github.com/twbs/bootstrap/issues/new).
## Documentation
Bootstrap's documentation, included in this repo in the root directory, is built with [Jekyll](http://jekyllrb.com) and publicly hosted on GitHub Pages at <http://getbootstrap.com>. The docs may also be run locally.
### Running documentation locally
1. If necessary, [install Jekyll](http://jekyllrb.com/docs/installation) (requires v1.x).
   - **Windows users:** Read [this unofficial guide](https://github.com/juthilo/run-jekyll-on-windows/) to get Jekyll up and running without problems. We use Pygments for syntax highlighting, so make sure to read the sections on installing Python and Pygments.
2. From the root `/bootstrap` directory, run `jekyll serve` in the command line.
   - **Windows users:** While we use Jekyll's `encoding` setting, you might still need to change the command prompt's character encoding ([code page](http://en.wikipedia.org/wiki/Windows_code_page)) to UTF-8 so Jekyll runs without errors. For Ruby 2.0.0, run `chcp 65001` first. For Ruby 1.9.3, you can alternatively do `SET LANG=en_EN.UTF-8`.
3. Open <http://localhost:9001> in your browser, and voilà.
Learn more about using Jekyll by reading its [documentation](http://jekyllrb.com/docs/home/).
### Documentation for previous releases
Documentation for v2.3.2 has been made available for the time being at <http://getbootstrap.com/2.3.2/> while folks transition to Bootstrap 3.
[Previous releases](https://github.com/twbs/bootstrap/releases) and their documentation are also available for download.
## Compiling CSS and JavaScript
Bootstrap uses [Grunt](http://gruntjs.com/) with convenient methods for working with the framework. It's how we compile our code, run tests, and more. To use it, install the required dependencies as directed and then run some Grunt commands.
### Install Grunt
From the command line:
1. Install `grunt-cli` globally with `npm install -g grunt-cli`.
2. Navigate to the root `/bootstrap` directory, then run `npm install`. npm will look at [package.json](https://github.com/twbs/bootstrap/blob/master/package.json) and automatically install the necessary local dependencies listed there.
When completed, you'll be able to run the various Grunt commands provided from the command line.
**Unfamiliar with `npm`? Don't have node installed?** That's a-okay. npm stands for [node packaged modules](http://npmjs.org/) and is a way to manage development dependencies through node.js. [Download and install node.js](http://nodejs.org/download/) before proceeding.
### Available Grunt commands
#### Build - `grunt`
Run `grunt` to run tests locally and compile the CSS and JavaScript into `/dist`. **Uses [Less](http://lesscss.org/) and [UglifyJS](http://lisperator.net/uglifyjs/).**
#### Only compile CSS and JavaScript - `grunt dist`
`grunt dist` creates the `/dist` directory with compiled files. **Uses [Less](http://lesscss.org/) and [UglifyJS](http://lisperator.net/uglifyjs/).**
#### Tests - `grunt test`
Runs [JSHint](http://jshint.com) and [QUnit](http://qunitjs.com/) tests headlessly in [PhantomJS](http://phantomjs.org/) (used for CI).
#### Watch - `grunt watch`
This is a convenience method for watching just Less files and automatically building them whenever you save.
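As a rough sketch of how these commands hang together (illustrative only; the real Gruntfile configures many more targets, such as Less compilation options, banners, and QUnit in PhantomJS), the aliases map onto plugin tasks roughly like this:

```
// Illustrative sketch only; not the actual Bootstrap Gruntfile.
module.exports = function (grunt) {
  grunt.initConfig({
    watch: {
      less: {
        files: 'less/**/*.less', // watch the Less sources...
        tasks: ['less']          // ...and rebuild the CSS on every save
      }
    }
  });

  // Plugin tasks (jshint, qunit, less, uglify, watch) are provided by
  // grunt-contrib-* packages declared in package.json.
  require('load-grunt-tasks')(grunt);

  grunt.registerTask('test', ['jshint', 'qunit']);  // `grunt test`
  grunt.registerTask('dist', ['less', 'uglify']);   // `grunt dist`
  grunt.registerTask('default', ['test', 'dist']);  // plain `grunt`
};
```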
### Troubleshooting dependencies
Should you encounter problems with installing dependencies or running Grunt commands, uninstall all previous dependency versions (global and local). Then, rerun `npm install`.
## Contributing
Please read through our [contributing guidelines](https://github.com/twbs/bootstrap/blob/master/CONTRIBUTING.md). Included are directions for opening issues, coding standards, and notes on development.
Moreover, if your pull request contains JavaScript patches or features, you must include relevant unit tests. All HTML and CSS should conform to the [Code Guide](http://github.com/mdo/code-guide), maintained by [Mark Otto](http://github.com/mdo).
Editor preferences are available in the [editor config](https://github.com/twbs/bootstrap/blob/master/.editorconfig) for easy use in common text editors. Read more and download plugins at <http://editorconfig.org>.
## Community
Keep track of development and community news.
- Follow [@twbootstrap on Twitter](http://twitter.com/twbootstrap).
- Read and subscribe to [The Official Bootstrap Blog](http://blog.getbootstrap.com).
- Chat with fellow Bootstrappers in IRC, on the `irc.freenode.net` server in the `##twitter-bootstrap` channel.
- Implementation help may be found at Stack Overflow (tagged [`twitter-bootstrap-3`](http://stackoverflow.com/questions/tagged/twitter-bootstrap-3)).
## Versioning
For transparency into our release cycle and in striving to maintain backward compatibility, Bootstrap follows the Semantic Versioning guidelines. Sometimes we screw up, but we'll adhere to these rules whenever possible.
Releases will be numbered with the following format:
`<major>.<minor>.<patch>`
And constructed with the following guidelines:
- Breaking backward compatibility **bumps the major** while resetting minor and patch
- New additions without breaking backward compatibility **bump the minor** while resetting the patch
- Bug fixes and misc changes **bump only the patch**
For more information on SemVer, please visit <http://semver.org/>.
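To make the three rules concrete, here is a small hypothetical helper (not part of Bootstrap) showing how each kind of change maps to a version number:

```
// Hypothetical helper illustrating the SemVer rules above.
function bump(version, change) {
  var v = version.split('.').map(Number); // [major, minor, patch]
  if (change === 'major') return (v[0] + 1) + '.0.0';            // breaking change
  if (change === 'minor') return v[0] + '.' + (v[1] + 1) + '.0'; // new addition
  return v[0] + '.' + v[1] + '.' + (v[2] + 1);                   // bug fix
}

console.log(bump('3.1.1', 'patch')); // 3.1.2
console.log(bump('3.1.1', 'minor')); // 3.2.0
console.log(bump('3.1.1', 'major')); // 4.0.0
```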
## Authors
**Mark Otto**
- <http://twitter.com/mdo>
- <http://github.com/mdo>
**Jacob Thornton**
- <http://twitter.com/fat>
- <http://github.com/fat>
## Copyright and license
Code and documentation copyright 2011-2014 Twitter, Inc. Code released under [the MIT license](LICENSE). Docs released under [Creative Commons](docs/LICENSE).

_config.yml

@@ -1,37 +0,0 @@
# Dependencies
markdown: rdiscount
pygments: true

# Permalinks
permalink: pretty

# Server
source: ./docs
destination: ./_gh_pages
host: 0.0.0.0
port: 9001
baseurl: /
url: http://localhost:9001
encoding: UTF-8

exclude:
  - "jade"
  - "vendor"

# Custom vars
current_version: 3.1.1
repo: https://github.com/twbs/bootstrap
sass_repo: https://github.com/twbs/bootstrap-sass

download:
  source: https://github.com/twbs/bootstrap/archive/v3.1.1.zip
  dist: https://github.com/twbs/bootstrap/releases/download/v3.1.1/bootstrap-3.1.1-dist.zip
  sass: https://github.com/twbs/bootstrap-sass/archive/v3.1.1.tar.gz

blog: http://blog.getbootstrap.com
expo: http://expo.getbootstrap.com

cdn:
  css: //netdna.bootstrapcdn.com/bootstrap/3.1.1/css/bootstrap.min.css
  css_theme: //netdna.bootstrapcdn.com/bootstrap/3.1.1/css/bootstrap-theme.min.css
  js: //netdna.bootstrapcdn.com/bootstrap/3.1.1/js/bootstrap.min.js

bower.json

@@ -1,24 +0,0 @@
{
  "name": "bootstrap",
  "version": "3.1.1",
  "main": [
    "./dist/css/bootstrap.css",
    "./dist/js/bootstrap.js",
    "./dist/fonts/glyphicons-halflings-regular.eot",
    "./dist/fonts/glyphicons-halflings-regular.svg",
    "./dist/fonts/glyphicons-halflings-regular.ttf",
    "./dist/fonts/glyphicons-halflings-regular.woff"
  ],
  "ignore": [
    "**/.*",
    "_config.yml",
    "CNAME",
    "composer.json",
    "CONTRIBUTING.md",
    "docs",
    "js/tests"
  ],
  "dependencies": {
    "jquery": ">= 1.9.0"
  }
}

composer.json

@@ -1,25 +0,0 @@
{
  "name": "twbs/bootstrap",
  "description": "Sleek, intuitive, and powerful mobile first front-end framework for faster and easier web development.",
  "keywords": ["bootstrap", "css"],
  "homepage": "http://getbootstrap.com",
  "authors": [
    {
      "name": "Mark Otto",
      "email": "markdotto@gmail.com"
    },
    {
      "name": "Jacob Thornton",
      "email": "jacobthornton@gmail.com"
    }
  ],
  "support": {
    "issues": "https://github.com/twbs/bootstrap/issues"
  },
  "license": "MIT",
  "extra": {
    "branch-alias": {
      "dev-master": "3.0.x-dev"
    }
  }
}

bootstrap-theme.css

@@ -1,347 +0,0 @@
/*!
 * Bootstrap v3.1.1 (http://getbootstrap.com)
 * Copyright 2011-2014 Twitter, Inc.
 * Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
 */
.btn-default,
.btn-primary,
.btn-success,
.btn-info,
.btn-warning,
.btn-danger {
  text-shadow: 0 -1px 0 rgba(0, 0, 0, .2);
  -webkit-box-shadow: inset 0 1px 0 rgba(255, 255, 255, .15), 0 1px 1px rgba(0, 0, 0, .075);
  box-shadow: inset 0 1px 0 rgba(255, 255, 255, .15), 0 1px 1px rgba(0, 0, 0, .075);
}
.btn-default:active,
.btn-primary:active,
.btn-success:active,
.btn-info:active,
.btn-warning:active,
.btn-danger:active,
.btn-default.active,
.btn-primary.active,
.btn-success.active,
.btn-info.active,
.btn-warning.active,
.btn-danger.active {
  -webkit-box-shadow: inset 0 3px 5px rgba(0, 0, 0, .125);
  box-shadow: inset 0 3px 5px rgba(0, 0, 0, .125);
}
.btn:active,
.btn.active {
  background-image: none;
}
.btn-default {
  text-shadow: 0 1px 0 #fff;
  background-image: -webkit-linear-gradient(top, #fff 0%, #e0e0e0 100%);
  background-image: linear-gradient(to bottom, #fff 0%, #e0e0e0 100%);
  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ffffffff', endColorstr='#ffe0e0e0', GradientType=0);
  filter: progid:DXImageTransform.Microsoft.gradient(enabled = false);
  background-repeat: repeat-x;
  border-color: #dbdbdb;
  border-color: #ccc;
}
.btn-default:hover,
.btn-default:focus {
  background-color: #e0e0e0;
  background-position: 0 -15px;
}
.btn-default:active,
.btn-default.active {
  background-color: #e0e0e0;
  border-color: #dbdbdb;
}
.btn-primary {
  background-image: -webkit-linear-gradient(top, #3498db 0%, #2077b2 100%);
  background-image: linear-gradient(to bottom, #3498db 0%, #2077b2 100%);
  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ff3498db', endColorstr='#ff2077b2', GradientType=0);
  filter: progid:DXImageTransform.Microsoft.gradient(enabled = false);
  background-repeat: repeat-x;
  border-color: #1e72aa;
}
.btn-primary:hover,
.btn-primary:focus {
  background-color: #2077b2;
  background-position: 0 -15px;
}
.btn-primary:active,
.btn-primary.active {
  background-color: #2077b2;
  border-color: #1e72aa;
}
.btn-success {
  background-image: -webkit-linear-gradient(top, #2ecc71 0%, #239a55 100%);
  background-image: linear-gradient(to bottom, #2ecc71 0%, #239a55 100%);
  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ff2ecc71', endColorstr='#ff239a55', GradientType=0);
  filter: progid:DXImageTransform.Microsoft.gradient(enabled = false);
  background-repeat: repeat-x;
  border-color: #219251;
}
.btn-success:hover,
.btn-success:focus {
  background-color: #239a55;
  background-position: 0 -15px;
}
.btn-success:active,
.btn-success.active {
  background-color: #239a55;
  border-color: #219251;
}
.btn-info {
  background-image: -webkit-linear-gradient(top, #9b59b6 0%, #7a4092 100%);
  background-image: linear-gradient(to bottom, #9b59b6 0%, #7a4092 100%);
  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ff9b59b6', endColorstr='#ff7a4092', GradientType=0);
  filter: progid:DXImageTransform.Microsoft.gradient(enabled = false);
  background-repeat: repeat-x;
  border-color: #743d8b;
}
.btn-info:hover,
.btn-info:focus {
  background-color: #7a4092;
  background-position: 0 -15px;
}
.btn-info:active,
.btn-info.active {
  background-color: #7a4092;
  border-color: #743d8b;
}
.btn-warning {
  background-image: -webkit-linear-gradient(top, #f1c40f 0%, #b8960b 100%);
  background-image: linear-gradient(to bottom, #f1c40f 0%, #b8960b 100%);
  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#fff1c40f', endColorstr='#ffb8960b', GradientType=0);
  filter: progid:DXImageTransform.Microsoft.gradient(enabled = false);
  background-repeat: repeat-x;
  border-color: #ae8e0a;
}
.btn-warning:hover,
.btn-warning:focus {
  background-color: #b8960b;
  background-position: 0 -15px;
}
.btn-warning:active,
.btn-warning.active {
  background-color: #b8960b;
  border-color: #ae8e0a;
}
.btn-danger {
  background-image: -webkit-linear-gradient(top, #e74c3c 0%, #cd2a19 100%);
  background-image: linear-gradient(to bottom, #e74c3c 0%, #cd2a19 100%);
  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ffe74c3c', endColorstr='#ffcd2a19', GradientType=0);
  filter: progid:DXImageTransform.Microsoft.gradient(enabled = false);
  background-repeat: repeat-x;
  border-color: #c42818;
}
.btn-danger:hover,
.btn-danger:focus {
  background-color: #cd2a19;
  background-position: 0 -15px;
}
.btn-danger:active,
.btn-danger.active {
  background-color: #cd2a19;
  border-color: #c42818;
}
.thumbnail,
.img-thumbnail {
  -webkit-box-shadow: 0 1px 2px rgba(0, 0, 0, .075);
  box-shadow: 0 1px 2px rgba(0, 0, 0, .075);
}
.dropdown-menu > li > a:hover,
.dropdown-menu > li > a:focus {
  background-color: #e8e8e8;
  background-image: -webkit-linear-gradient(top, #f5f5f5 0%, #e8e8e8 100%);
  background-image: linear-gradient(to bottom, #f5f5f5 0%, #e8e8e8 100%);
  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#fff5f5f5', endColorstr='#ffe8e8e8', GradientType=0);
  background-repeat: repeat-x;
}
.dropdown-menu > .active > a,
.dropdown-menu > .active > a:hover,
.dropdown-menu > .active > a:focus {
  background-color: #258cd1;
  background-image: -webkit-linear-gradient(top, #3498db 0%, #258cd1 100%);
  background-image: linear-gradient(to bottom, #3498db 0%, #258cd1 100%);
  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ff3498db', endColorstr='#ff258cd1', GradientType=0);
  background-repeat: repeat-x;
}
.navbar-default {
  background-image: -webkit-linear-gradient(top, #fff 0%, #f8f8f8 100%);
  background-image: linear-gradient(to bottom, #fff 0%, #f8f8f8 100%);
  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ffffffff', endColorstr='#fff8f8f8', GradientType=0);
  filter: progid:DXImageTransform.Microsoft.gradient(enabled = false);
  background-repeat: repeat-x;
  border-radius: 3px;
  -webkit-box-shadow: inset 0 1px 0 rgba(255, 255, 255, .15), 0 1px 5px rgba(0, 0, 0, .075);
  box-shadow: inset 0 1px 0 rgba(255, 255, 255, .15), 0 1px 5px rgba(0, 0, 0, .075);
}
.navbar-default .navbar-nav > .active > a {
  background-image: -webkit-linear-gradient(top, #ebebeb 0%, #f3f3f3 100%);
  background-image: linear-gradient(to bottom, #ebebeb 0%, #f3f3f3 100%);
  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ffebebeb', endColorstr='#fff3f3f3', GradientType=0);
  background-repeat: repeat-x;
  -webkit-box-shadow: inset 0 3px 9px rgba(0, 0, 0, .075);
  box-shadow: inset 0 3px 9px rgba(0, 0, 0, .075);
}
.navbar-brand,
.navbar-nav > li > a {
  text-shadow: 0 1px 0 rgba(255, 255, 255, .25);
}
.navbar-inverse {
  background-image: -webkit-linear-gradient(top, #3c3c3c 0%, #222 100%);
  background-image: linear-gradient(to bottom, #3c3c3c 0%, #222 100%);
  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ff3c3c3c', endColorstr='#ff222222', GradientType=0);
  filter: progid:DXImageTransform.Microsoft.gradient(enabled = false);
  background-repeat: repeat-x;
}
.navbar-inverse .navbar-nav > .active > a {
  background-image: -webkit-linear-gradient(top, #222 0%, #282828 100%);
  background-image: linear-gradient(to bottom, #222 0%, #282828 100%);
  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ff222222', endColorstr='#ff282828', GradientType=0);
  background-repeat: repeat-x;
  -webkit-box-shadow: inset 0 3px 9px rgba(0, 0, 0, .25);
  box-shadow: inset 0 3px 9px rgba(0, 0, 0, .25);
}
.navbar-inverse .navbar-brand,
.navbar-inverse .navbar-nav > li > a {
  text-shadow: 0 -1px 0 rgba(0, 0, 0, .25);
}
.navbar-static-top,
.navbar-fixed-top,
.navbar-fixed-bottom {
  border-radius: 0;
}
.alert {
  text-shadow: 0 1px 0 rgba(255, 255, 255, .2);
  -webkit-box-shadow: inset 0 1px 0 rgba(255, 255, 255, .25), 0 1px 2px rgba(0, 0, 0, .05);
  box-shadow: inset 0 1px 0 rgba(255, 255, 255, .25), 0 1px 2px rgba(0, 0, 0, .05);
}
.alert-success {
  background-image: -webkit-linear-gradient(top, #2ecc71 0%, #27ad60 100%);
  background-image: linear-gradient(to bottom, #2ecc71 0%, #27ad60 100%);
  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ff2ecc71', endColorstr='#ff27ad60', GradientType=0);
  background-repeat: repeat-x;
  border-color: #208e4e;
}
.alert-info {
  background-image: -webkit-linear-gradient(top, #9b59b6 0%, #8747a2 100%);
  background-image: linear-gradient(to bottom, #9b59b6 0%, #8747a2 100%);
  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ff9b59b6', endColorstr='#ff8747a2', GradientType=0);
  background-repeat: repeat-x;
  border-color: #713b87;
}
.alert-warning {
  background-image: -webkit-linear-gradient(top, #f1c40f 0%, #cea70c 100%);
  background-image: linear-gradient(to bottom, #f1c40f 0%, #cea70c 100%);
  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#fff1c40f', endColorstr='#ffcea70c', GradientType=0);
  background-repeat: repeat-x;
  border-color: #aa8a0a;
}
.alert-danger {
  background-image: -webkit-linear-gradient(top, #e74c3c 0%, #e12e1c 100%);
  background-image: linear-gradient(to bottom, #e74c3c 0%, #e12e1c 100%);
  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ffe74c3c', endColorstr='#ffe12e1c', GradientType=0);
  background-repeat: repeat-x;
  border-color: #bf2718;
}
.progress {
  background-image: -webkit-linear-gradient(top, #ebebeb 0%, #f5f5f5 100%);
  background-image: linear-gradient(to bottom, #ebebeb 0%, #f5f5f5 100%);
  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ffebebeb', endColorstr='#fff5f5f5', GradientType=0);
  background-repeat: repeat-x;
}
.progress-bar {
  background-image: -webkit-linear-gradient(top, #3498db 0%, #217dbb 100%);
  background-image: linear-gradient(to bottom, #3498db 0%, #217dbb 100%);
  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ff3498db', endColorstr='#ff217dbb', GradientType=0);
  background-repeat: repeat-x;
}
.progress-bar-success {
  background-image: -webkit-linear-gradient(top, #2ecc71 0%, #25a25a 100%);
  background-image: linear-gradient(to bottom, #2ecc71 0%, #25a25a 100%);
  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ff2ecc71', endColorstr='#ff25a25a', GradientType=0);
  background-repeat: repeat-x;
}
.progress-bar-info {
  background-image: -webkit-linear-gradient(top, #9b59b6 0%, #804399 100%);
  background-image: linear-gradient(to bottom, #9b59b6 0%, #804399 100%);
  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ff9b59b6', endColorstr='#ff804399', GradientType=0);
  background-repeat: repeat-x;
}
.progress-bar-warning {
  background-image: -webkit-linear-gradient(top, #f1c40f 0%, #c29d0b 100%);
  background-image: linear-gradient(to bottom, #f1c40f 0%, #c29d0b 100%);
  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#fff1c40f', endColorstr='#ffc29d0b', GradientType=0);
  background-repeat: repeat-x;
}
.progress-bar-danger {
  background-image: -webkit-linear-gradient(top, #e74c3c 0%, #d62c1a 100%);
  background-image: linear-gradient(to bottom, #e74c3c 0%, #d62c1a 100%);
  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ffe74c3c', endColorstr='#ffd62c1a', GradientType=0);
  background-repeat: repeat-x;
}
.list-group {
  border-radius: 3px;
  -webkit-box-shadow: 0 1px 2px rgba(0, 0, 0, .075);
  box-shadow: 0 1px 2px rgba(0, 0, 0, .075);
}
.list-group-item.active,
.list-group-item.active:hover,
.list-group-item.active:focus {
  text-shadow: 0 -1px 0 #217dbb;
  background-image: -webkit-linear-gradient(top, #3498db 0%, #2384c6 100%);
  background-image: linear-gradient(to bottom, #3498db 0%, #2384c6 100%);
  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ff3498db', endColorstr='#ff2384c6', GradientType=0);
  background-repeat: repeat-x;
  border-color: #2384c6;
}
.panel {
  -webkit-box-shadow: 0 1px 2px rgba(0, 0, 0, .05);
  box-shadow: 0 1px 2px rgba(0, 0, 0, .05);
}
.panel-default > .panel-heading {
  background-image: -webkit-linear-gradient(top, #f5f5f5 0%, #e8e8e8 100%);
  background-image: linear-gradient(to bottom, #f5f5f5 0%, #e8e8e8 100%);
  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#fff5f5f5', endColorstr='#ffe8e8e8', GradientType=0);
  background-repeat: repeat-x;
}
.panel-primary > .panel-heading {
  background-image: -webkit-linear-gradient(top, #3498db 0%, #258cd1 100%);
  background-image: linear-gradient(to bottom, #3498db 0%, #258cd1 100%);
  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ff3498db', endColorstr='#ff258cd1', GradientType=0);
  background-repeat: repeat-x;
}
.panel-success > .panel-heading {
  background-image: -webkit-linear-gradient(top, #2ecc71 0%, #29b765 100%);
  background-image: linear-gradient(to bottom, #2ecc71 0%, #29b765 100%);
  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ff2ecc71', endColorstr='#ff29b765', GradientType=0);
  background-repeat: repeat-x;
}
.panel-info > .panel-heading {
  background-image: -webkit-linear-gradient(top, #9b59b6 0%, #8f4bab 100%);
  background-image: linear-gradient(to bottom, #9b59b6 0%, #8f4bab 100%);
  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ff9b59b6', endColorstr='#ff8f4bab', GradientType=0);
  background-repeat: repeat-x;
}
.panel-warning > .panel-heading {
  background-image: -webkit-linear-gradient(top, #f1c40f 0%, #dab10d 100%);
  background-image: linear-gradient(to bottom, #f1c40f 0%, #dab10d 100%);
  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#fff1c40f', endColorstr='#ffdab10d', GradientType=0);
  background-repeat: repeat-x;
}
.panel-danger > .panel-heading {
  background-image: -webkit-linear-gradient(top, #e74c3c 0%, #e43725 100%);
  background-image: linear-gradient(to bottom, #e74c3c 0%, #e43725 100%);
  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ffe74c3c', endColorstr='#ffe43725', GradientType=0);
  background-repeat: repeat-x;
}
.well {
  background-image: -webkit-linear-gradient(top, #e8e8e8 0%, #f5f5f5 100%);
  background-image: linear-gradient(to bottom, #e8e8e8 0%, #f5f5f5 100%);
  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ffe8e8e8', endColorstr='#fff5f5f5', GradientType=0);
  background-repeat: repeat-x;
  border-color: #dcdcdc;
  -webkit-box-shadow: inset 0 1px 3px rgba(0, 0, 0, .05), 0 1px 0 rgba(255, 255, 255, .1);
  box-shadow: inset 0 1px 3px rgba(0, 0, 0, .05), 0 1px 0 rgba(255, 255, 255, .1);
}
/*# sourceMappingURL=bootstrap-theme.css.map */


File diff suppressed because one or more lines are too long


File diff suppressed because one or more lines are too long


File diff suppressed because it is too large


File diff suppressed because one or more lines are too long

Some files were not shown because too many files have changed in this diff.