Mirror of https://github.com/syncthing/syncthing.git, synced 2026-01-10 06:49:31 -05:00.
Compare commits
532 Commits
The listing spans commits b48d9a3a82 through 7db528be39.
.gitignore (vendored, 1 line changed)

```diff
@@ -9,3 +9,4 @@ coverage.out
 files/pidx
 bin
 perfstats*.csv
+coverage.xml
```
.travis.yml (deleted, 19 lines)

```diff
@@ -1,19 +0,0 @@
-language: go
-
-go:
-  - 1.3
-  - tip
-
-install:
-  - export PATH=$PATH:$HOME/gopath/bin
-  - ./build.sh setup
-
-script:
-  - ./build.sh test-cov
-
-after_success:
-  - goveralls -coverprofile=coverage.out -service=travis-ci -package=syncthing/syncthing -repotoken="$COVERALLS_TOKEN"
-
-env:
-  global:
-    secure: "TSPJDsokGCQhKLjgG3c58qHn8Qxhh4zEkWFf0XIOOY2nlDVzdgXDsC+Nq0YaP4106Ee4FgkSefsUTQV5lq/IyYW8elgqlgghjOtOi6RJa14eIS9Yy5Bkx6MXn0QfZX/lG+sy42pKSNk43y9GWx/qrt4nkfTtTvI5cXgwDGYdmX8="
```
AUTHORS (new file, 29 lines)

```diff
@@ -0,0 +1,29 @@
+# This is the official list of Syncthing authors for copyright purposes.
+
+Aaron Bieber <qbit@deftly.net>
+Alexander Graf <register-github@alex-graf.de>
+Andrew Dunham <andrew@du.nham.ca>
+Audrius Butkevicius <audrius.butkevicius@gmail.com>
+Arthur Axel fREW Schmidt <frew@afoolishmanifesto.com> <frioux@gmail.com>
+Ben Sidhom <bsidhom@gmail.com>
+Brandon Philips <brandon@ifup.org>
+Caleb Callaway <enlightened.despot@gmail.com>
+Chris Joel <chris@scriptolo.gy>
+Daniel Martí <mvdan@mvdan.cc>
+Emil Hessman <emil@hessman.se>
+Felix Ableitner <me@nutomic.com>
+Felix Unterpaintner <bigbear2nd@gmail.com>
+Gilli Sigurdsson <gilli@vx.is>
+Jakob Borg <jakob@nym.se>
+James Patterson <jamespatterson@operamail.com> <jpjp@users.noreply.github.com>
+Jens Diemer <github.com@jensdiemer.de> <git@jensdiemer.de>
+Jochen Voss <voss@seehuhn.de>
+Lode Hoste <zillode@zillode.be>
+Marcin Dziadus <dziadus.marcin@gmail.com>
+Michael Tilli <pyfisch@gmail.com>
+Philippe Schommers <philippe@schommers.be>
+Phill Luby <phill.luby@newredo.com>
+Ryan Sullivan <kayoticsully@gmail.com>
+Tully Robinson <tully@tojr.org>
+Veeti Paananen <veeti.paananen@rojekti.fi>
+Vil Brekin <vilbrekin@gmail.com>
```
CONTRIBUTING.md

```diff
@@ -12,7 +12,7 @@ least the following:
 - What operating system, operating system version and version of
   Syncthing you are running
 
-- The same for other connected nodes, where relevant
+- The same for other connected devices, where relevant
 
 - Screenshot if the issue concerns something visible in the GUI
 
@@ -31,21 +31,47 @@ latest info on Transifex.
 
 ## Contributing Code
 
-Please do contribute! If you want to contribute but are unsure where to
-start, the [Contributions Needed
-topic](http://discourse.syncthing.net/t/49) lists areas in need of
-attention. In general, any open issues are fair game!
+Every contribution is welcome. If you want to contribute but are unsure
+where to start, any open issues are fair game! Be prepared for a
+[certain amount of review](https://discourse.syncthing.net/t/733); it's
+all in the name of quality. :) Following the points below will make this
+a smoother process.
+
+## Coding Style
+
+- Follow the conventions laid out in [Effective Go](https://golang.org/doc/effective_go.html)
+  as much as makes sense.
+
+- All text files use Unix line endings.
+
+- Each commit should be `go fmt` clean.
+
+- The commit message subject should be a single short sentence
+  describing the change, starting with a capital letter.
+
+- Commits that resolve an existing issue must include the issue number
+  as `(fixes #123)` at the end of the commit message subject.
+
+- Imports are grouped per `goimports` standard; that is, standard
+  library first, then third party libraries after a blank line.
+
+- A contribution solving a single issue or introducing a single new
+  feature should probably be a single commit based on the current
+  `master` branch. You may be asked to "rebase" or "squash" your pull
+  request to make sure this is the case, especially if there have been
+  amendments during review.
 
 ## Licensing
 
-All contributions are made under the same MIT License as the rest of the
-project, except documentation which is licensed under the Creative
-Commons Attribution 4.0 International License. You retain the copyright
-to code you have written.
+All contributions are made under the same GPL license as the rest of the
+project, except documentation, user interface text and translation
+strings which are licensed under the Creative Commons Attribution 4.0
+International License. You retain the copyright to code you have
+written.
 
 When accepting your first contribution, the maintainer of the project
-will ensure that you are added to the CONTRIBUTORS file. You are welcome
-to add yourself as a separate commit in your first pull request.
+will ensure that you are added to the AUTHORS file. You are welcome to
+add yourself as a separate commit in your first pull request.
 
 ## Building
 
@@ -75,16 +101,10 @@ signed by GPG key BCE524C7.
 
 Yes please!
 
-## Style
-
-- `go fmt`
-
-- Unix line breaks
-
 ## Documentation
 
 [Over here!](http://discourse.syncthing.net/category/documentation)
 
 ## License
 
-MIT
+GPLv3
```
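The `goimports` grouping rule added in the CONTRIBUTING diff above is easiest to see in a concrete file. A minimal sketch (the package and the chosen third-party import are for illustration only; `calmh/logger` happens to be vendored later in this same diff):

```go
package main

import (
	// Standard library imports come first.
	"fmt"
	"os"

	// Third-party imports follow after a blank line, per goimports.
	"github.com/calmh/logger"
)

func main() {
	l := logger.DefaultLogger
	l.Infoln("started as", os.Args[0])
	fmt.Println("done")
}
```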
CONTRIBUTORS (deleted, 13 lines)

```diff
@@ -1,13 +0,0 @@
-Aaron Bieber <qbit@deftly.net>
-Andrew Dunham <andrew@du.nham.ca>
-Audrius Butkevicius <audrius.butkevicius@gmail.com>
-Arthur Axel fREW Schmidt <frew@afoolishmanifesto.com>
-Ben Sidhom <bsidhom@gmail.com>
-Brandon Philips <brandon@ifup.org>
-Gilli Sigurdsson <gilli@vx.is>
-James Patterson <jamespatterson@operamail.com>
-Jens Diemer <github.com@jensdiemer.de>
-Philippe Schommers <philippe@schommers.be>
-Ryan Sullivan <kayoticsully@gmail.com>
-Tully Robinson <tully@tojr.org>
-Veeti Paananen <veeti.paananen@rojekti.fi>
```
Godeps/Godeps.json (generated, 30 lines changed)

```diff
@@ -1,15 +1,10 @@
 {
 	"ImportPath": "github.com/syncthing/syncthing",
-	"GoVersion": "go1.3.1",
+	"GoVersion": "go1.4rc1",
 	"Packages": [
 		"./cmd/..."
 	],
 	"Deps": [
-		{
-			"ImportPath": "bitbucket.org/kardianos/osext",
-			"Comment": "null-13",
-			"Rev": "5d3ddcf53a508cc2f7404eaebf546ef2cb5cdb6e"
-		},
 		{
 			"ImportPath": "code.google.com/p/go.crypto/bcrypt",
 			"Comment": "null-216",
@@ -31,17 +26,24 @@
 			"Rev": "d65bffbc88a153d23a6d2a864531e6e7c2cde59b"
 		},
 		{
-			"ImportPath": "code.google.com/p/snappy-go/snappy",
-			"Comment": "null-15",
-			"Rev": "12e4b4183793ac4b061921e7980845e750679fd0"
+			"ImportPath": "github.com/AudriusButkevicius/lfu-go",
+			"Rev": "164bcecceb92fd6037f4d18a8d97b495ec6ef669"
 		},
 		{
 			"ImportPath": "github.com/bkaradzic/go-lz4",
-			"Rev": "77e2ba877bde9da31213bec75dbbe197fa507c21"
+			"Rev": "93a831dcee242be64a9cc9803dda84af25932de7"
 		},
 		{
+			"ImportPath": "github.com/calmh/logger",
+			"Rev": "f50d32b313bec2933a3e1049f7416a29f3413d29"
+		},
+		{
+			"ImportPath": "github.com/calmh/osext",
+			"Rev": "9bf61584e5f1f172e8766ddc9022d9c401faaa5e"
+		},
+		{
 			"ImportPath": "github.com/calmh/xdr",
-			"Rev": "e1714bbe4764b15490fcc8ebd25d4bd9ea50a4b9"
+			"Rev": "ec3d404f43731551258977b38dd72cf557d00398"
 		},
 		{
 			"ImportPath": "github.com/juju/ratelimit",
@@ -49,7 +51,11 @@
 		},
 		{
 			"ImportPath": "github.com/syndtr/goleveldb/leveldb",
-			"Rev": "a44c00531ccc005546f20c6e00ab7bb9a8f6b2e0"
+			"Rev": "97e257099d2ab9578151ba85e2641e2cd14d3ca8"
 		},
+		{
+			"ImportPath": "github.com/syndtr/gosnappy/snappy",
+			"Rev": "ce8acff4829e0c2458a67ead32390ac0a381c862"
+		},
 		{
 			"ImportPath": "github.com/vitrun/qart/coding",
```
Godeps/_workspace/src/github.com/AudriusButkevicius/lfu-go/LICENSE (generated, vendored, new file, 19 lines)

```diff
@@ -0,0 +1,19 @@
+Copyright (C) 2012 Dave Grijalva
+
+Permission is hereby granted, free of charge, to any person obtaining a
+copy of this software and associated documentation files (the "Software"),
+to deal in the Software without restriction, including without limitation
+the rights to use, copy, modify, merge, publish, distribute, sublicense,
+and/or sell copies of the Software, and to permit persons to whom the
+Software is furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included
+in all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.
```
Godeps/_workspace/src/github.com/AudriusButkevicius/lfu-go/README.md (generated, vendored, new file, 19 lines)

````diff
@@ -0,0 +1,19 @@
+A simple LFU cache for golang. Based on the paper [An O(1) algorithm for implementing the LFU cache eviction scheme](http://dhruvbird.com/lfu.pdf).
+
+Usage:
+
+```go
+import "github.com/dgrijalva/lfu-go"
+
+// Make a new thing
+c := lfu.New()
+
+// Set some values
+c.Set("myKey", myValue)
+
+// Retrieve some values
+myValue = c.Get("myKey")
+
+// Evict some values
+c.Evict(1)
+```
````
Godeps/_workspace/src/github.com/AudriusButkevicius/lfu-go/lfu.go (generated, vendored, new file, 156 lines)

```diff
@@ -0,0 +1,156 @@
+package lfu
+
+import (
+	"container/list"
+	"sync"
+)
+
+type Eviction struct {
+	Key   string
+	Value interface{}
+}
+
+type Cache struct {
+	// If len > UpperBound, cache will automatically evict
+	// down to LowerBound. If either value is 0, this behavior
+	// is disabled.
+	UpperBound      int
+	LowerBound      int
+	values          map[string]*cacheEntry
+	freqs           *list.List
+	len             int
+	lock            *sync.Mutex
+	EvictionChannel chan<- Eviction
+}
+
+type cacheEntry struct {
+	key      string
+	value    interface{}
+	freqNode *list.Element
+}
+
+type listEntry struct {
+	entries map[*cacheEntry]byte
+	freq    int
+}
+
+func New() *Cache {
+	c := new(Cache)
+	c.values = make(map[string]*cacheEntry)
+	c.freqs = list.New()
+	c.lock = new(sync.Mutex)
+	return c
+}
+
+func (c *Cache) Get(key string) interface{} {
+	c.lock.Lock()
+	defer c.lock.Unlock()
+	if e, ok := c.values[key]; ok {
+		c.increment(e)
+		return e.value
+	}
+	return nil
+}
+
+func (c *Cache) Set(key string, value interface{}) {
+	c.lock.Lock()
+	defer c.lock.Unlock()
+	if e, ok := c.values[key]; ok {
+		// value already exists for key. overwrite
+		e.value = value
+		c.increment(e)
+	} else {
+		// value doesn't exist. insert
+		e := new(cacheEntry)
+		e.key = key
+		e.value = value
+		c.values[key] = e
+		c.increment(e)
+		c.len++
+		// bounds mgmt
+		if c.UpperBound > 0 && c.LowerBound > 0 {
+			if c.len > c.UpperBound {
+				c.evict(c.len - c.LowerBound)
+			}
+		}
+	}
+}
+
+func (c *Cache) Len() int {
+	c.lock.Lock()
+	defer c.lock.Unlock()
+	return c.len
+}
+
+func (c *Cache) Evict(count int) int {
+	c.lock.Lock()
+	defer c.lock.Unlock()
+	return c.evict(count)
+}
+
+func (c *Cache) evict(count int) int {
+	// No lock here so it can be called
+	// from within the lock (during Set)
+	var evicted int
+	for i := 0; i < count; {
+		if place := c.freqs.Front(); place != nil {
+			for entry, _ := range place.Value.(*listEntry).entries {
+				if i < count {
+					if c.EvictionChannel != nil {
+						c.EvictionChannel <- Eviction{
+							Key:   entry.key,
+							Value: entry.value,
+						}
+					}
+					delete(c.values, entry.key)
+					c.remEntry(place, entry)
+					evicted++
+					c.len--
+					i++
+				}
+			}
+		}
+	}
+	return evicted
+}
+
+func (c *Cache) increment(e *cacheEntry) {
+	currentPlace := e.freqNode
+	var nextFreq int
+	var nextPlace *list.Element
+	if currentPlace == nil {
+		// new entry
+		nextFreq = 1
+		nextPlace = c.freqs.Front()
+	} else {
+		// move up
+		nextFreq = currentPlace.Value.(*listEntry).freq + 1
+		nextPlace = currentPlace.Next()
+	}
+
+	if nextPlace == nil || nextPlace.Value.(*listEntry).freq != nextFreq {
+		// create a new list entry
+		li := new(listEntry)
+		li.freq = nextFreq
+		li.entries = make(map[*cacheEntry]byte)
+		if currentPlace != nil {
+			nextPlace = c.freqs.InsertAfter(li, currentPlace)
+		} else {
+			nextPlace = c.freqs.PushFront(li)
+		}
+	}
+	e.freqNode = nextPlace
+	nextPlace.Value.(*listEntry).entries[e] = 1
+	if currentPlace != nil {
+		// remove from current position
+		c.remEntry(currentPlace, e)
+	}
+}
+
+func (c *Cache) remEntry(place *list.Element, entry *cacheEntry) {
+	entries := place.Value.(*listEntry).entries
+	delete(entries, entry)
+	if len(entries) == 0 {
+		c.freqs.Remove(place)
+	}
+}
```
Godeps/_workspace/src/github.com/AudriusButkevicius/lfu-go/lfu_test.go (generated, vendored, new file, 68 lines)

```diff
@@ -0,0 +1,68 @@
+package lfu
+
+import (
+	"fmt"
+	"testing"
+)
+
+func TestLFU(t *testing.T) {
+	c := New()
+	c.Set("a", "a")
+	if v := c.Get("a"); v != "a" {
+		t.Errorf("Value was not saved: %v != 'a'", v)
+	}
+	if l := c.Len(); l != 1 {
+		t.Errorf("Length was not updated: %v != 1", l)
+	}
+
+	c.Set("b", "b")
+	if v := c.Get("b"); v != "b" {
+		t.Errorf("Value was not saved: %v != 'b'", v)
+	}
+	if l := c.Len(); l != 2 {
+		t.Errorf("Length was not updated: %v != 2", l)
+	}
+
+	c.Get("a")
+	evicted := c.Evict(1)
+	if v := c.Get("a"); v != "a" {
+		t.Errorf("Value was improperly evicted: %v != 'a'", v)
+	}
+	if v := c.Get("b"); v != nil {
+		t.Errorf("Value was not evicted: %v", v)
+	}
+	if l := c.Len(); l != 1 {
+		t.Errorf("Length was not updated: %v != 1", l)
+	}
+	if evicted != 1 {
+		t.Errorf("Number of evicted items is wrong: %v != 1", evicted)
+	}
+}
+
+func TestBoundsMgmt(t *testing.T) {
+	c := New()
+	c.UpperBound = 10
+	c.LowerBound = 5
+
+	for i := 0; i < 100; i++ {
+		c.Set(fmt.Sprintf("%v", i), i)
+	}
+	if c.Len() > 10 {
+		t.Errorf("Bounds management failed to evict properly: %v", c.Len())
+	}
+}
+
+func TestEviction(t *testing.T) {
+	ch := make(chan Eviction, 1)
+
+	c := New()
+	c.EvictionChannel = ch
+	c.Set("a", "b")
+	c.Evict(1)
+
+	ev := <-ch
+
+	if ev.Key != "a" || ev.Value.(string) != "b" {
+		t.Error("Incorrect item")
+	}
+}
```
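The vendored lfu-go above goes beyond the README's basic Set/Get usage: `UpperBound`/`LowerBound` give automatic eviction and `EvictionChannel` reports what was dropped. A sketch of wiring those together (key names and sizes are invented for the example):

```go
package main

import (
	"fmt"

	lfu "github.com/AudriusButkevicius/lfu-go"
)

func main() {
	// The cache sends evictions synchronously from Set, so the channel is
	// buffered generously to keep this single-goroutine sketch from
	// blocking itself.
	evicted := make(chan lfu.Eviction, 128)

	c := lfu.New()
	c.UpperBound = 10 // evict once more than 10 entries are live...
	c.LowerBound = 5  // ...dropping least-frequently-used entries down to 5
	c.EvictionChannel = evicted

	for i := 0; i < 100; i++ {
		c.Set(fmt.Sprintf("key-%d", i), i)
	}

	fmt.Println("live entries:", c.Len()) // stays at or below 10
	ev := <-evicted
	fmt.Printf("first eviction: %s = %v\n", ev.Key, ev.Value)
}
```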
Godeps/_workspace/src/github.com/bkaradzic/go-lz4/lz4-example/main.go (generated, vendored, 12 lines changed)

```diff
@@ -72,10 +72,18 @@ func main() {
 
 	if *decompress {
 		data, _ = ioutil.ReadAll(input)
-		data, _ = lz4.Decode(nil, data)
+		data, err = lz4.Decode(nil, data)
+		if err != nil {
+			fmt.Println("Failed to decode:", err)
+			return
+		}
 	} else {
 		data, _ = ioutil.ReadAll(input)
-		data, _ = lz4.Encode(nil, data)
+		data, err = lz4.Encode(nil, data)
+		if err != nil {
+			fmt.Println("Failed to encode:", err)
+			return
+		}
 	}
 
 	err = ioutil.WriteFile(args[1], data, 0644)
```
Godeps/_workspace/src/github.com/bkaradzic/go-lz4/writer.go (generated, vendored, 4 lines changed)

```diff
@@ -121,7 +121,7 @@ func Encode(dst, src []byte) ([]byte, error) {
 	)
 
 	for {
-		if int(e.pos)+4 >= len(e.src) {
+		if int(e.pos)+12 >= len(e.src) {
 			e.writeLiterals(uint32(len(e.src))-e.anchor, 0, e.anchor)
 			return e.dst[:e.dpos], nil
 		}
@@ -158,7 +158,7 @@ func Encode(dst, src []byte) ([]byte, error) {
 		ref += minMatch
 		e.anchor = e.pos
 
-		for int(e.pos) < len(e.src) && e.src[e.pos] == e.src[ref] {
+		for int(e.pos) < len(e.src)-5 && e.src[e.pos] == e.src[ref] {
			e.pos++
 			ref++
 		}
```
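Both bounds changes above keep the encoder's match scanning away from the last few bytes of the input, which reads like a guard against overrunning the source buffer near the end of a block; the public API is unchanged. For reference, a round-trip sketch against the `Encode`/`Decode` signatures shown in this diff:

```go
package main

import (
	"bytes"
	"fmt"

	lz4 "github.com/bkaradzic/go-lz4"
)

func main() {
	src := bytes.Repeat([]byte("syncthing "), 100)

	// Passing nil as dst lets the library allocate the output buffer.
	compressed, err := lz4.Encode(nil, src)
	if err != nil {
		panic(err)
	}

	restored, err := lz4.Decode(nil, compressed)
	if err != nil {
		panic(err)
	}

	fmt.Println(len(src), "->", len(compressed), "bytes; round-trip ok:",
		bytes.Equal(src, restored))
}
```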
Godeps/_workspace/src/github.com/calmh/logger/LICENSE (generated, vendored, new file, 19 lines)

```diff
@@ -0,0 +1,19 @@
+Copyright (C) 2013 Jakob Borg
+
+Permission is hereby granted, free of charge, to any person obtaining a copy of
+this software and associated documentation files (the "Software"), to deal in
+the Software without restriction, including without limitation the rights to
+use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+of the Software, and to permit persons to whom the Software is furnished to do
+so, subject to the following conditions:
+
+- The above copyright notice and this permission notice shall be included in
+  all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
```
Godeps/_workspace/src/github.com/calmh/logger/README.md (generated, vendored, new file, 15 lines)

```diff
@@ -0,0 +1,15 @@
+logger
+======
+
+A small wrapper around `log` to provide log levels.
+
+Documentation
+-------------
+
+http://godoc.org/github.com/calmh/logger
+
+License
+-------
+
+MIT
+
```
logger/logger.go → Godeps/_workspace/src/github.com/calmh/logger/logger.go (generated, vendored, 35 lines changed)

```diff
@@ -1,6 +1,5 @@
-// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
-// All rights reserved. Use of this source code is governed by an MIT-style
-// license that can be found in the LICENSE file.
+// Copyright (C) 2014 Jakob Borg. All rights reserved. Use of this source code
+// is governed by an MIT-style license that can be found in the LICENSE file.
 
 // Package logger implements a standardized logger with callback functionality
 package logger
@@ -24,6 +23,7 @@ const (
 	NumLevels
 )
 
+// A MessageHandler is called with the log level and message text.
 type MessageHandler func(l LogLevel, msg string)
 
 type Logger struct {
@@ -32,6 +32,7 @@ type Logger struct {
 	mut sync.Mutex
 }
 
+// The default logger logs to standard output with a time prefix.
 var DefaultLogger = New()
 
 func New() *Logger {
@@ -40,16 +41,20 @@ func New() *Logger {
 	}
 }
 
+// AddHandler registers a new MessageHandler to receive messages with the
+// specified log level or above.
 func (l *Logger) AddHandler(level LogLevel, h MessageHandler) {
 	l.mut.Lock()
 	defer l.mut.Unlock()
 	l.handlers[level] = append(l.handlers[level], h)
 }
 
+// See log.SetFlags
 func (l *Logger) SetFlags(flag int) {
 	l.logger.SetFlags(flag)
 }
 
+// See log.SetPrefix
 func (l *Logger) SetPrefix(prefix string) {
 	l.logger.SetPrefix(prefix)
 }
@@ -60,6 +65,7 @@ func (l *Logger) callHandlers(level LogLevel, s string) {
 	}
 }
 
+// Debugln logs a line with a DEBUG prefix.
 func (l *Logger) Debugln(vals ...interface{}) {
 	l.mut.Lock()
 	defer l.mut.Unlock()
@@ -68,6 +74,7 @@ func (l *Logger) Debugln(vals ...interface{}) {
 	l.callHandlers(LevelDebug, s)
 }
 
+// Debugf logs a formatted line with a DEBUG prefix.
 func (l *Logger) Debugf(format string, vals ...interface{}) {
 	l.mut.Lock()
 	defer l.mut.Unlock()
@@ -75,6 +82,8 @@ func (l *Logger) Debugf(format string, vals ...interface{}) {
 	l.logger.Output(2, "DEBUG: "+s)
 	l.callHandlers(LevelDebug, s)
 }
 
+// Infoln logs a line with an INFO prefix.
 func (l *Logger) Infoln(vals ...interface{}) {
 	l.mut.Lock()
 	defer l.mut.Unlock()
@@ -83,6 +92,7 @@ func (l *Logger) Infoln(vals ...interface{}) {
 	l.callHandlers(LevelInfo, s)
 }
 
+// Infof logs a formatted line with an INFO prefix.
 func (l *Logger) Infof(format string, vals ...interface{}) {
 	l.mut.Lock()
 	defer l.mut.Unlock()
@@ -91,6 +101,7 @@ func (l *Logger) Infof(format string, vals ...interface{}) {
 	l.callHandlers(LevelInfo, s)
 }
 
+// Okln logs a line with an OK prefix.
 func (l *Logger) Okln(vals ...interface{}) {
 	l.mut.Lock()
 	defer l.mut.Unlock()
@@ -99,6 +110,7 @@ func (l *Logger) Okln(vals ...interface{}) {
 	l.callHandlers(LevelOK, s)
 }
 
+// Okf logs a formatted line with an OK prefix.
 func (l *Logger) Okf(format string, vals ...interface{}) {
 	l.mut.Lock()
 	defer l.mut.Unlock()
@@ -107,6 +119,7 @@ func (l *Logger) Okf(format string, vals ...interface{}) {
 	l.callHandlers(LevelOK, s)
 }
 
+// Warnln logs a formatted line with a WARNING prefix.
 func (l *Logger) Warnln(vals ...interface{}) {
 	l.mut.Lock()
 	defer l.mut.Unlock()
@@ -115,6 +128,7 @@ func (l *Logger) Warnln(vals ...interface{}) {
 	l.callHandlers(LevelWarn, s)
 }
 
+// Warnf logs a formatted line with a WARNING prefix.
 func (l *Logger) Warnf(format string, vals ...interface{}) {
 	l.mut.Lock()
 	defer l.mut.Unlock()
@@ -123,6 +137,8 @@ func (l *Logger) Warnf(format string, vals ...interface{}) {
 	l.callHandlers(LevelWarn, s)
 }
 
+// Fatalln logs a line with a FATAL prefix and exits the process with exit
+// code 1.
 func (l *Logger) Fatalln(vals ...interface{}) {
 	l.mut.Lock()
 	defer l.mut.Unlock()
@@ -132,6 +148,8 @@ func (l *Logger) Fatalln(vals ...interface{}) {
 	os.Exit(1)
 }
 
+// Fatalf logs a formatted line with a FATAL prefix and exits the process with
+// exit code 1.
 func (l *Logger) Fatalf(format string, vals ...interface{}) {
 	l.mut.Lock()
 	defer l.mut.Unlock()
@@ -140,14 +158,3 @@ func (l *Logger) Fatalf(format string, vals ...interface{}) {
 	l.callHandlers(LevelFatal, s)
 	os.Exit(1)
 }
-
-func (l *Logger) FatalErr(err error) {
-	if err != nil {
-		l.mut.Lock()
-		defer l.mut.Unlock()
-		l.logger.SetFlags(l.logger.Flags() | log.Lshortfile)
-		l.logger.Output(2, "FATAL: "+err.Error())
-		l.callHandlers(LevelFatal, err.Error())
-		os.Exit(1)
-	}
-}
```
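A short sketch of the callback mechanism this package provides, using only names visible in the diff above (`DefaultLogger`, `AddHandler`, `LogLevel`, `LevelWarn`):

```go
package main

import "github.com/calmh/logger"

func main() {
	l := logger.DefaultLogger

	// Per the AddHandler doc comment, the handler receives messages at the
	// registered level or above; here warnings get mirrored elsewhere.
	l.AddHandler(logger.LevelWarn, func(lvl logger.LogLevel, msg string) {
		// forward msg to a GUI, a file, a metrics counter, ...
		_ = msg
	})

	l.Infoln("starting up")
	l.Warnf("%d nodes unreachable", 2)
}
```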
logger/logger_test.go → Godeps/_workspace/src/github.com/calmh/logger/logger_test.go (generated, vendored, 5 lines changed)

```diff
@@ -1,6 +1,5 @@
-// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
-// All rights reserved. Use of this source code is governed by an MIT-style
-// license that can be found in the LICENSE file.
+// Copyright (C) 2014 Jakob Borg. All rights reserved. Use of this source code
+// is governed by an MIT-style license that can be found in the LICENSE file.
 
 package logger
 
```
Godeps/_workspace/src/github.com/calmh/osext/osext_procfs.go (generated, vendored)

```diff
@@ -2,12 +2,13 @@
 // Use of this source code is governed by a BSD-style
 // license that can be found in the LICENSE file.
 
-// +build linux netbsd openbsd
+// +build linux netbsd openbsd solaris
 
 package osext
 
 import (
 	"errors"
+	"fmt"
 	"os"
 	"runtime"
 )
@@ -20,6 +21,8 @@ func executable() (string, error) {
 		return os.Readlink("/proc/curproc/exe")
 	case "openbsd":
 		return os.Readlink("/proc/curproc/file")
+	case "solaris":
+		return os.Readlink(fmt.Sprintf("/proc/%d/path/a.out", os.Getpid()))
 	}
 	return "", errors.New("ExecPath not implemented for " + runtime.GOOS)
 }
```
Godeps/_workspace/src/github.com/calmh/xdr/bench_test.go (generated, vendored, 8 lines changed)

```diff
@@ -33,11 +33,15 @@ var s = XDRBenchStruct{
 	S0: "Hello World! String one.",
 	S1: "Hello World! String two.",
 }
-var e = s.MarshalXDR()
+var e []byte
+
+func init() {
+	e, _ = s.MarshalXDR()
+}
 
 func BenchmarkThisMarshal(b *testing.B) {
 	for i := 0; i < b.N; i++ {
-		res = s.MarshalXDR()
+		res, _ = s.MarshalXDR()
 	}
 }
 
```
Godeps/_workspace/src/github.com/calmh/xdr/bench_xdr_test.go (generated, vendored, 40 lines changed)

```diff
@@ -72,15 +72,23 @@ func (o XDRBenchStruct) EncodeXDR(w io.Writer) (int, error) {
 	return o.encodeXDR(xw)
 }
 
-func (o XDRBenchStruct) MarshalXDR() []byte {
+func (o XDRBenchStruct) MarshalXDR() ([]byte, error) {
 	return o.AppendXDR(make([]byte, 0, 128))
 }
 
-func (o XDRBenchStruct) AppendXDR(bs []byte) []byte {
+func (o XDRBenchStruct) MustMarshalXDR() []byte {
+	bs, err := o.MarshalXDR()
+	if err != nil {
+		panic(err)
+	}
+	return bs
+}
+
+func (o XDRBenchStruct) AppendXDR(bs []byte) ([]byte, error) {
 	var aw = xdr.AppendWriter(bs)
 	var xw = xdr.NewWriter(&aw)
-	o.encodeXDR(xw)
-	return []byte(aw)
+	_, err := o.encodeXDR(xw)
+	return []byte(aw), err
 }
 
 func (o XDRBenchStruct) encodeXDR(xw *xdr.Writer) (int, error) {
@@ -88,13 +96,13 @@ func (o XDRBenchStruct) encodeXDR(xw *xdr.Writer) (int, error) {
 	xw.WriteUint32(o.I2)
 	xw.WriteUint16(o.I3)
 	xw.WriteUint8(o.I4)
-	if len(o.Bs0) > 128 {
-		return xw.Tot(), xdr.ErrElementSizeExceeded
+	if l := len(o.Bs0); l > 128 {
+		return xw.Tot(), xdr.ElementSizeExceeded("Bs0", l, 128)
 	}
 	xw.WriteBytes(o.Bs0)
 	xw.WriteBytes(o.Bs1)
-	if len(o.S0) > 128 {
-		return xw.Tot(), xdr.ErrElementSizeExceeded
+	if l := len(o.S0); l > 128 {
+		return xw.Tot(), xdr.ElementSizeExceeded("S0", l, 128)
 	}
 	xw.WriteString(o.S0)
 	xw.WriteString(o.S1)
@@ -150,15 +158,23 @@ func (o repeatReader) EncodeXDR(w io.Writer) (int, error) {
 	return o.encodeXDR(xw)
 }
 
-func (o repeatReader) MarshalXDR() []byte {
+func (o repeatReader) MarshalXDR() ([]byte, error) {
 	return o.AppendXDR(make([]byte, 0, 128))
 }
 
-func (o repeatReader) AppendXDR(bs []byte) []byte {
+func (o repeatReader) MustMarshalXDR() []byte {
+	bs, err := o.MarshalXDR()
+	if err != nil {
+		panic(err)
+	}
+	return bs
+}
+
+func (o repeatReader) AppendXDR(bs []byte) ([]byte, error) {
 	var aw = xdr.AppendWriter(bs)
 	var xw = xdr.NewWriter(&aw)
-	o.encodeXDR(xw)
-	return []byte(aw)
+	_, err := o.encodeXDR(xw)
+	return []byte(aw), err
 }
 
 func (o repeatReader) encodeXDR(xw *xdr.Writer) (int, error) {
```
Godeps/_workspace/src/github.com/calmh/xdr/cmd/genxdr/main.go (generated, vendored, 26 lines changed)

```diff
@@ -53,15 +53,23 @@ func (o {{.TypeName}}) EncodeXDR(w io.Writer) (int, error) {
 	return o.encodeXDR(xw)
 }//+n
 
-func (o {{.TypeName}}) MarshalXDR() []byte {
+func (o {{.TypeName}}) MarshalXDR() ([]byte, error) {
 	return o.AppendXDR(make([]byte, 0, 128))
 }//+n
 
-func (o {{.TypeName}}) AppendXDR(bs []byte) []byte {
+func (o {{.TypeName}}) MustMarshalXDR() []byte {
+	bs, err := o.MarshalXDR()
+	if err != nil {
+		panic(err)
+	}
+	return bs
+}//+n
+
+func (o {{.TypeName}}) AppendXDR(bs []byte) ([]byte, error) {
 	var aw = xdr.AppendWriter(bs)
 	var xw = xdr.NewWriter(&aw)
-	o.encodeXDR(xw)
-	return []byte(aw)
+	_, err := o.encodeXDR(xw)
+	return []byte(aw), err
 }//+n
 
 func (o {{.TypeName}}) encodeXDR(xw *xdr.Writer) (int, error) {
@@ -71,8 +79,8 @@ func (o {{.TypeName}}) encodeXDR(xw *xdr.Writer) (int, error) {
 	xw.Write{{$fieldInfo.Encoder}}({{$fieldInfo.Convert}}(o.{{$fieldInfo.Name}}))
 	{{else if $fieldInfo.IsBasic}}
 	{{if ge $fieldInfo.Max 1}}
-	if len(o.{{$fieldInfo.Name}}) > {{$fieldInfo.Max}} {
-		return xw.Tot(), xdr.ErrElementSizeExceeded
+	if l := len(o.{{$fieldInfo.Name}}); l > {{$fieldInfo.Max}} {
+		return xw.Tot(), xdr.ElementSizeExceeded("{{$fieldInfo.Name}}", l, {{$fieldInfo.Max}})
 	}
 	{{end}}
 	xw.Write{{$fieldInfo.Encoder}}(o.{{$fieldInfo.Name}})
@@ -84,8 +92,8 @@ func (o {{.TypeName}}) encodeXDR(xw *xdr.Writer) (int, error) {
 	{{end}}
 	{{else}}
 	{{if ge $fieldInfo.Max 1}}
-	if len(o.{{$fieldInfo.Name}}) > {{$fieldInfo.Max}} {
-		return xw.Tot(), xdr.ErrElementSizeExceeded
+	if l := len(o.{{$fieldInfo.Name}}); l > {{$fieldInfo.Max}} {
+		return xw.Tot(), xdr.ElementSizeExceeded("{{$fieldInfo.Name}}", l, {{$fieldInfo.Max}})
 	}
 	{{end}}
 	xw.WriteUint32(uint32(len(o.{{$fieldInfo.Name}})))
@@ -135,7 +143,7 @@ func (o *{{.TypeName}}) decodeXDR(xr *xdr.Reader) error {
 	_{{$fieldInfo.Name}}Size := int(xr.ReadUint32())
 	{{if ge $fieldInfo.Max 1}}
 	if _{{$fieldInfo.Name}}Size > {{$fieldInfo.Max}} {
-		return xdr.ErrElementSizeExceeded
+		return xdr.ElementSizeExceeded("{{$fieldInfo.Name}}", _{{$fieldInfo.Name}}Size, {{$fieldInfo.Max}})
 	}
 	{{end}}
 	o.{{$fieldInfo.Name}} = make([]{{$fieldInfo.FieldType}}, _{{$fieldInfo.Name}}Size)
```
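The generator change above turns `MarshalXDR` into a two-value call and adds a panicking `MustMarshalXDR` convenience, mirrored across the hand-written test types. A sketch of how call sites migrate; the `message` type here is a stand-in for any genxdr-generated struct, with only the two relevant methods reproduced:

```go
package main

import "fmt"

// message stands in for a genxdr-generated type.
type message struct{ payload string }

// MarshalXDR now reports size-limit violations instead of hiding them.
func (m message) MarshalXDR() ([]byte, error) {
	if l := len(m.payload); l > 128 {
		return nil, fmt.Errorf("payload exceeds size limit; %d > %d", l, 128)
	}
	return []byte(m.payload), nil
}

// MustMarshalXDR matches the generated template: panic on error.
func (m message) MustMarshalXDR() []byte {
	bs, err := m.MarshalXDR()
	if err != nil {
		panic(err)
	}
	return bs
}

func main() {
	msg := message{payload: "hello"}

	// New two-value form: handle the error explicitly...
	bs, err := msg.MarshalXDR()
	if err != nil {
		fmt.Println("marshal failed:", err)
		return
	}
	fmt.Printf("marshalled %d bytes\n", len(bs))

	// ...or panic where a limit violation is a programming error.
	_ = msg.MustMarshalXDR()
}
```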
Godeps/_workspace/src/github.com/calmh/xdr/encdec_test.go (generated, vendored, 12 lines changed)

```diff
@@ -24,9 +24,10 @@ type TestStruct struct {
 	UI32 uint32
 	I64  int64
 	UI64 uint64
-	BS   []byte
-	S    string
+	BS   []byte // max:1024
+	S    string // max:1024
 	C    Opaque
+	SS   []string // max:1024
 }
 
 type Opaque [32]byte
@@ -49,9 +50,12 @@ func (Opaque) Generate(rand *rand.Rand, size int) reflect.Value {
 
 func TestEncDec(t *testing.T) {
 	fn := func(t0 TestStruct) bool {
-		bs := t0.MarshalXDR()
+		bs, err := t0.MarshalXDR()
+		if err != nil {
+			t.Fatal(err)
+		}
 		var t1 TestStruct
-		err := t1.UnmarshalXDR(bs)
+		err = t1.UnmarshalXDR(bs)
 		if err != nil {
 			t.Fatal(err)
 		}
```
Godeps/_workspace/src/github.com/calmh/xdr/encdec_xdr_test.go (generated, vendored, 54 lines changed)

```diff
@@ -54,6 +54,14 @@ TestStruct Structure:
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 |                             Opaque                             |
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+|                          Number of SS                          |
++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+|                          Length of SS                          |
++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+/                                                               /
+\                      SS (variable length)                     \
+/                                                               /
++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 
 
 struct TestStruct {
@@ -66,9 +74,10 @@ struct TestStruct {
 	unsigned int UI32;
 	hyper I64;
 	unsigned hyper UI64;
-	opaque BS<>;
-	string S<>;
+	opaque BS<1024>;
+	string S<1024>;
 	Opaque C;
+	string SS<1024>;
 }
 
 */
@@ -78,15 +87,23 @@ func (o TestStruct) EncodeXDR(w io.Writer) (int, error) {
 	return o.encodeXDR(xw)
 }
 
-func (o TestStruct) MarshalXDR() []byte {
+func (o TestStruct) MarshalXDR() ([]byte, error) {
 	return o.AppendXDR(make([]byte, 0, 128))
 }
 
-func (o TestStruct) AppendXDR(bs []byte) []byte {
+func (o TestStruct) MustMarshalXDR() []byte {
+	bs, err := o.MarshalXDR()
+	if err != nil {
+		panic(err)
+	}
+	return bs
+}
+
+func (o TestStruct) AppendXDR(bs []byte) ([]byte, error) {
 	var aw = xdr.AppendWriter(bs)
 	var xw = xdr.NewWriter(&aw)
-	o.encodeXDR(xw)
-	return []byte(aw)
+	_, err := o.encodeXDR(xw)
+	return []byte(aw), err
 }
 
 func (o TestStruct) encodeXDR(xw *xdr.Writer) (int, error) {
@@ -99,12 +116,25 @@ func (o TestStruct) encodeXDR(xw *xdr.Writer) (int, error) {
 	xw.WriteUint32(o.UI32)
 	xw.WriteUint64(uint64(o.I64))
 	xw.WriteUint64(o.UI64)
+	if l := len(o.BS); l > 1024 {
+		return xw.Tot(), xdr.ElementSizeExceeded("BS", l, 1024)
+	}
 	xw.WriteBytes(o.BS)
+	if l := len(o.S); l > 1024 {
+		return xw.Tot(), xdr.ElementSizeExceeded("S", l, 1024)
+	}
 	xw.WriteString(o.S)
 	_, err := o.C.encodeXDR(xw)
 	if err != nil {
 		return xw.Tot(), err
 	}
+	if l := len(o.SS); l > 1024 {
+		return xw.Tot(), xdr.ElementSizeExceeded("SS", l, 1024)
+	}
+	xw.WriteUint32(uint32(len(o.SS)))
+	for i := range o.SS {
+		xw.WriteString(o.SS[i])
+	}
 	return xw.Tot(), xw.Error()
 }
@@ -129,8 +159,16 @@ func (o *TestStruct) decodeXDR(xr *xdr.Reader) error {
 	o.UI32 = xr.ReadUint32()
 	o.I64 = int64(xr.ReadUint64())
 	o.UI64 = xr.ReadUint64()
-	o.BS = xr.ReadBytes()
-	o.S = xr.ReadString()
+	o.BS = xr.ReadBytesMax(1024)
+	o.S = xr.ReadStringMax(1024)
 	(&o.C).decodeXDR(xr)
+	_SSSize := int(xr.ReadUint32())
+	if _SSSize > 1024 {
+		return xdr.ElementSizeExceeded("SS", _SSSize, 1024)
+	}
+	o.SS = make([]string, _SSSize)
+	for i := range o.SS {
+		o.SS[i] = xr.ReadString()
+	}
 	return xr.Error()
 }
```
Godeps/_workspace/src/github.com/calmh/xdr/reader.go (generated, vendored, 10 lines changed)

```diff
@@ -5,14 +5,12 @@
 package xdr
 
 import (
-	"errors"
+	"fmt"
 	"io"
 	"reflect"
 	"unsafe"
 )
 
-var ErrElementSizeExceeded = errors.New("element size exceeded")
-
 type Reader struct {
 	r   io.Reader
 	err error
@@ -71,7 +69,7 @@ func (r *Reader) ReadBytesMaxInto(max int, dst []byte) []byte {
 		return nil
 	}
 	if max > 0 && l > max {
-		r.err = ErrElementSizeExceeded
+		r.err = ElementSizeExceeded("bytes field", l, max)
 		return nil
 	}
 
@@ -162,3 +160,7 @@ func (r *Reader) Error() error {
 	}
 	return XDRError{"read", r.err}
 }
+
+func ElementSizeExceeded(field string, size, limit int) error {
+	return fmt.Errorf("%s exceeds size limit; %d > %d", field, size, limit)
+}
```
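The sentinel `ErrElementSizeExceeded` is replaced by a constructor that names the offending field and sizes, which is also why the tests further down switch to substring matching. A quick sketch of the difference, reproducing the constructor shown in reader.go above:

```go
package main

import (
	"fmt"
	"strings"
)

// ElementSizeExceeded mirrors the function added in reader.go.
func ElementSizeExceeded(field string, size, limit int) error {
	return fmt.Errorf("%s exceeds size limit; %d > %d", field, size, limit)
}

func main() {
	err := ElementSizeExceeded("S0", 2048, 1024)
	fmt.Println(err) // S0 exceeds size limit; 2048 > 1024

	// Each call builds a fresh error value, so identity comparison
	// (err == ErrElementSizeExceeded) no longer works; callers match
	// on the message instead, as the updated tests do.
	fmt.Println(strings.Contains(err.Error(), "exceeds size")) // true
}
```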
Godeps/_workspace/src/github.com/calmh/xdr/reader_ipdr.go (generated, vendored, 4 lines changed)

```diff
@@ -22,7 +22,7 @@ func (r *Reader) ReadUint8() uint8 {
 	}
 
 	if debug {
-		dl.Printf("rd uint8=%d (0x%08x)", r.b[0], r.b[0])
+		dl.Printf("rd uint8=%d (0x%02x)", r.b[0], r.b[0])
 	}
 	return r.b[0]
 }
@@ -43,7 +43,7 @@ func (r *Reader) ReadUint16() uint16 {
 	v := uint16(r.b[1]) | uint16(r.b[0])<<8
 
 	if debug {
-		dl.Printf("rd uint16=%d (0x%08x)", v, v)
+		dl.Printf("rd uint16=%d (0x%04x)", v, v)
 	}
 	return v
 }
```
Godeps/_workspace/src/github.com/calmh/xdr/xdr_test.go (generated, vendored, 5 lines changed)

```diff
@@ -5,6 +5,7 @@ package xdr
 
 import (
 	"bytes"
+	"strings"
 	"testing"
 	"testing/quick"
 )
@@ -60,7 +61,7 @@ func TestReadBytesMaxInto(t *testing.T) {
 			if read := len(bs); read != tot {
 				t.Errorf("Incorrect read bytes, wrote=%d, buf=%d, max=%d, read=%d", tot, tot+diff, max, read)
 			}
-		} else if r.err != ErrElementSizeExceeded {
+		} else if !strings.Contains(r.err.Error(), "exceeds size") {
 			t.Errorf("Unexpected non-ErrElementSizeExceeded error for wrote=%d, max=%d: %v", tot, max, r.err)
 		}
 	}
@@ -84,7 +85,7 @@ func TestReadStringMax(t *testing.T) {
 		if read != tot {
 			t.Errorf("Incorrect read bytes, wrote=%d, max=%d, read=%d", tot, max, read)
 		}
-	} else if r.err != ErrElementSizeExceeded {
+	} else if !strings.Contains(r.err.Error(), "exceeds size") {
 		t.Errorf("Unexpected non-ErrElementSizeExceeded error for wrote=%d, max=%d, read=%d: %v", tot, max, read, r.err)
 	}
 }
```
228
Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/batch.go
generated
vendored
228
Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/batch.go
generated
vendored
@@ -8,65 +8,84 @@ package leveldb
|
||||
|
||||
import (
|
||||
"encoding/binary"
|
||||
"errors"
|
||||
"fmt"
|
||||
|
||||
"github.com/syndtr/goleveldb/leveldb/errors"
|
||||
"github.com/syndtr/goleveldb/leveldb/memdb"
|
||||
)
|
||||
|
||||
var (
|
||||
errBatchTooShort = errors.New("leveldb: batch is too short")
|
||||
errBatchBadRecord = errors.New("leveldb: bad record in batch")
|
||||
type ErrBatchCorrupted struct {
|
||||
Reason string
|
||||
}
|
||||
|
||||
func (e *ErrBatchCorrupted) Error() string {
|
||||
return fmt.Sprintf("leveldb: batch corrupted: %s", e.Reason)
|
||||
}
|
||||
|
||||
func newErrBatchCorrupted(reason string) error {
|
||||
return errors.NewErrCorrupted(nil, &ErrBatchCorrupted{reason})
|
||||
}
|
||||
|
||||
const (
|
||||
batchHdrLen = 8 + 4
|
||||
batchGrowRec = 3000
|
||||
)
|
||||
|
||||
const kBatchHdrLen = 8 + 4
|
||||
|
||||
type batchReplay interface {
|
||||
put(key, value []byte, seq uint64)
|
||||
delete(key []byte, seq uint64)
|
||||
type BatchReplay interface {
|
||||
Put(key, value []byte)
|
||||
Delete(key []byte)
|
||||
}
|
||||
|
||||
// Batch is a write batch.
|
||||
type Batch struct {
|
||||
buf []byte
|
||||
data []byte
|
||||
rLen, bLen int
|
||||
seq uint64
|
||||
sync bool
|
||||
}
|
||||
|
||||
func (b *Batch) grow(n int) {
|
||||
off := len(b.buf)
|
||||
off := len(b.data)
|
||||
if off == 0 {
|
||||
// include headers
|
||||
off = kBatchHdrLen
|
||||
n += off
|
||||
off = batchHdrLen
|
||||
if b.data != nil {
|
||||
b.data = b.data[:off]
|
||||
}
|
||||
}
|
||||
if cap(b.buf)-off >= n {
|
||||
return
|
||||
if cap(b.data)-off < n {
|
||||
if b.data == nil {
|
||||
b.data = make([]byte, off, off+n)
|
||||
} else {
|
||||
odata := b.data
|
||||
div := 1
|
||||
if b.rLen > batchGrowRec {
|
||||
div = b.rLen / batchGrowRec
|
||||
}
|
||||
b.data = make([]byte, off, off+n+(off-batchHdrLen)/div)
|
||||
copy(b.data, odata)
|
||||
}
|
||||
}
|
||||
buf := make([]byte, 2*cap(b.buf)+n)
|
||||
copy(buf, b.buf)
|
||||
b.buf = buf[:off]
|
||||
}
|
||||
|
||||
func (b *Batch) appendRec(t vType, key, value []byte) {
|
||||
func (b *Batch) appendRec(kt kType, key, value []byte) {
|
||||
n := 1 + binary.MaxVarintLen32 + len(key)
|
||||
if t == tVal {
|
||||
if kt == ktVal {
|
||||
n += binary.MaxVarintLen32 + len(value)
|
||||
}
|
||||
b.grow(n)
|
||||
off := len(b.buf)
|
||||
buf := b.buf[:off+n]
|
||||
buf[off] = byte(t)
|
||||
off := len(b.data)
|
||||
data := b.data[:off+n]
|
||||
data[off] = byte(kt)
|
||||
off += 1
|
||||
off += binary.PutUvarint(buf[off:], uint64(len(key)))
|
||||
copy(buf[off:], key)
|
||||
off += binary.PutUvarint(data[off:], uint64(len(key)))
|
||||
copy(data[off:], key)
|
||||
off += len(key)
|
||||
if t == tVal {
|
||||
off += binary.PutUvarint(buf[off:], uint64(len(value)))
|
||||
copy(buf[off:], value)
|
||||
if kt == ktVal {
|
||||
off += binary.PutUvarint(data[off:], uint64(len(value)))
|
||||
copy(data[off:], value)
|
||||
off += len(value)
|
||||
}
|
||||
b.buf = buf[:off]
|
||||
b.data = data[:off]
|
||||
b.rLen++
|
||||
// Include 8-byte ikey header
|
||||
b.bLen += len(key) + len(value) + 8
|
||||
@@ -75,18 +94,51 @@ func (b *Batch) appendRec(t vType, key, value []byte) {
|
||||
// Put appends 'put operation' of the given key/value pair to the batch.
|
||||
// It is safe to modify the contents of the argument after Put returns.
|
||||
func (b *Batch) Put(key, value []byte) {
|
||||
b.appendRec(tVal, key, value)
|
||||
b.appendRec(ktVal, key, value)
|
||||
}
|
||||
|
||||
// Delete appends 'delete operation' of the given key to the batch.
|
||||
// It is safe to modify the contents of the argument after Delete returns.
|
||||
func (b *Batch) Delete(key []byte) {
|
||||
b.appendRec(tDel, key, nil)
|
||||
b.appendRec(ktDel, key, nil)
|
||||
}
|
||||
|
||||
// Dump dumps batch contents. The returned slice can be loaded into the
|
||||
// batch using Load method.
|
||||
// The returned slice is not its own copy, so the contents should not be
|
||||
// modified.
|
||||
func (b *Batch) Dump() []byte {
|
||||
return b.encode()
|
||||
}
|
||||
|
||||
// Load loads given slice into the batch. Previous contents of the batch
|
||||
// will be discarded.
|
||||
// The given slice will not be copied and will be used as batch buffer, so
|
||||
// it is not safe to modify the contents of the slice.
|
||||
func (b *Batch) Load(data []byte) error {
|
||||
return b.decode(0, data)
|
||||
}
|
||||
|
||||
// Replay replays batch contents.
|
||||
func (b *Batch) Replay(r BatchReplay) error {
|
||||
return b.decodeRec(func(i int, kt kType, key, value []byte) {
|
||||
switch kt {
|
||||
case ktVal:
|
||||
r.Put(key, value)
|
||||
case ktDel:
|
||||
r.Delete(key)
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
// Len returns number of records in the batch.
|
||||
func (b *Batch) Len() int {
|
||||
return b.rLen
|
||||
}
|
||||
|
||||
// Reset resets the batch.
|
||||
func (b *Batch) Reset() {
|
||||
b.buf = nil
|
||||
b.data = b.data[:0]
|
||||
b.seq = 0
|
||||
b.rLen = 0
|
||||
b.bLen = 0
|
||||
@@ -97,24 +149,10 @@ func (b *Batch) init(sync bool) {
|
||||
b.sync = sync
|
||||
}
|
||||
|
||||
func (b *Batch) put(key, value []byte, seq uint64) {
|
||||
if b.rLen == 0 {
|
||||
b.seq = seq
|
||||
}
|
||||
b.Put(key, value)
|
||||
}
|
||||
|
||||
func (b *Batch) delete(key []byte, seq uint64) {
|
||||
if b.rLen == 0 {
|
||||
b.seq = seq
|
||||
}
|
||||
b.Delete(key)
|
||||
}
|
||||
|
||||
func (b *Batch) append(p *Batch) {
|
||||
if p.rLen > 0 {
|
||||
b.grow(len(p.buf) - kBatchHdrLen)
|
||||
b.buf = append(b.buf, p.buf[kBatchHdrLen:]...)
|
||||
b.grow(len(p.data) - batchHdrLen)
|
||||
b.data = append(b.data, p.data[batchHdrLen:]...)
|
||||
b.rLen += p.rLen
|
||||
}
|
||||
if p.sync {
|
||||
@@ -122,95 +160,93 @@ func (b *Batch) append(p *Batch) {
}
}

func (b *Batch) len() int {
return b.rLen
}

// size returns sums of key/value pair length plus 8-bytes ikey.
func (b *Batch) size() int {
return b.bLen
}

func (b *Batch) encode() []byte {
b.grow(0)
binary.LittleEndian.PutUint64(b.buf, b.seq)
binary.LittleEndian.PutUint32(b.buf[8:], uint32(b.rLen))
binary.LittleEndian.PutUint64(b.data, b.seq)
binary.LittleEndian.PutUint32(b.data[8:], uint32(b.rLen))

return b.buf
return b.data
}

func (b *Batch) decode(buf []byte) error {
if len(buf) < kBatchHdrLen {
return errBatchTooShort
func (b *Batch) decode(prevSeq uint64, data []byte) error {
if len(data) < batchHdrLen {
return newErrBatchCorrupted("too short")
}

b.seq = binary.LittleEndian.Uint64(buf)
b.rLen = int(binary.LittleEndian.Uint32(buf[8:]))
b.seq = binary.LittleEndian.Uint64(data)
if b.seq < prevSeq {
return newErrBatchCorrupted("invalid sequence number")
}
b.rLen = int(binary.LittleEndian.Uint32(data[8:]))
if b.rLen < 0 {
return newErrBatchCorrupted("invalid records length")
}
// No need to be precise at this point, it won't be used anyway
b.bLen = len(buf) - kBatchHdrLen
b.buf = buf
b.bLen = len(data) - batchHdrLen
b.data = data

return nil
}

func (b *Batch) decodeRec(f func(i int, t vType, key, value []byte)) error {
off := kBatchHdrLen
func (b *Batch) decodeRec(f func(i int, kt kType, key, value []byte)) (err error) {
off := batchHdrLen
for i := 0; i < b.rLen; i++ {
if off >= len(b.buf) {
return errors.New("leveldb: invalid batch record length")
if off >= len(b.data) {
return newErrBatchCorrupted("invalid records length")
}

t := vType(b.buf[off])
if t > tVal {
return errors.New("leveldb: invalid batch record type in batch")
kt := kType(b.data[off])
if kt > ktVal {
return newErrBatchCorrupted("bad record: invalid type")
}
off += 1

x, n := binary.Uvarint(b.buf[off:])
x, n := binary.Uvarint(b.data[off:])
off += n
if n <= 0 || off+int(x) > len(b.buf) {
return errBatchBadRecord
if n <= 0 || off+int(x) > len(b.data) {
return newErrBatchCorrupted("bad record: invalid key length")
}
key := b.buf[off : off+int(x)]
key := b.data[off : off+int(x)]
off += int(x)

var value []byte
if t == tVal {
x, n := binary.Uvarint(b.buf[off:])
if kt == ktVal {
x, n := binary.Uvarint(b.data[off:])
off += n
if n <= 0 || off+int(x) > len(b.buf) {
return errBatchBadRecord
if n <= 0 || off+int(x) > len(b.data) {
return newErrBatchCorrupted("bad record: invalid value length")
}
value = b.buf[off : off+int(x)]
value = b.data[off : off+int(x)]
off += int(x)
}

f(i, t, key, value)
f(i, kt, key, value)
}

return nil
}

func (b *Batch) replay(to batchReplay) error {
return b.decodeRec(func(i int, t vType, key, value []byte) {
switch t {
case tVal:
to.put(key, value, b.seq+uint64(i))
case tDel:
to.delete(key, b.seq+uint64(i))
}
})
}

func (b *Batch) memReplay(to *memdb.DB) error {
return b.decodeRec(func(i int, t vType, key, value []byte) {
ikey := newIKey(key, b.seq+uint64(i), t)
return b.decodeRec(func(i int, kt kType, key, value []byte) {
ikey := newIkey(key, b.seq+uint64(i), kt)
to.Put(ikey, value)
})
}

func (b *Batch) memDecodeAndReplay(prevSeq uint64, data []byte, to *memdb.DB) error {
if err := b.decode(prevSeq, data); err != nil {
return err
}
return b.memReplay(to)
}

func (b *Batch) revertMemReplay(to *memdb.DB) error {
return b.decodeRec(func(i int, t vType, key, value []byte) {
ikey := newIKey(key, b.seq+uint64(i), t)
return b.decodeRec(func(i int, kt kType, key, value []byte) {
ikey := newIkey(key, b.seq+uint64(i), kt)
to.Delete(ikey)
})
}
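The two header reads in decode imply a fixed batch header: an 8-byte little-endian sequence number followed by a 4-byte little-endian record count. A sketch of just that header check; the 12-byte size is inferred from those reads, since the batchHdrLen constant itself is not shown in this hunk:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// readBatchHeader validates and splits the batch header the way
// decode above does: seq (uint64 LE) then record count (uint32 LE).
func readBatchHeader(data []byte) (seq uint64, rLen int, ok bool) {
	if len(data) < 12 { // assumed batchHdrLen: 8 + 4 bytes
		return 0, 0, false
	}
	seq = binary.LittleEndian.Uint64(data)
	rLen = int(binary.LittleEndian.Uint32(data[8:]))
	return seq, rLen, true
}

func main() {
	hdr := make([]byte, 12)
	binary.LittleEndian.PutUint64(hdr, 7)
	binary.LittleEndian.PutUint32(hdr[8:], 2)
	seq, n, ok := readBatchHeader(hdr)
	fmt.Println(seq, n, ok) // 7 2 true
}
```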
26 Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/batch_test.go (generated, vendored)
@@ -15,7 +15,7 @@ import (
)

type tbRec struct {
t vType
kt kType
key, value []byte
}

@@ -23,39 +23,39 @@ type testBatch struct {
rec []*tbRec
}

func (p *testBatch) put(key, value []byte, seq uint64) {
p.rec = append(p.rec, &tbRec{tVal, key, value})
func (p *testBatch) Put(key, value []byte) {
p.rec = append(p.rec, &tbRec{ktVal, key, value})
}

func (p *testBatch) delete(key []byte, seq uint64) {
p.rec = append(p.rec, &tbRec{tDel, key, nil})
func (p *testBatch) Delete(key []byte) {
p.rec = append(p.rec, &tbRec{ktDel, key, nil})
}

func compareBatch(t *testing.T, b1, b2 *Batch) {
if b1.seq != b2.seq {
t.Errorf("invalid seq number want %d, got %d", b1.seq, b2.seq)
}
if b1.len() != b2.len() {
t.Fatalf("invalid record length want %d, got %d", b1.len(), b2.len())
if b1.Len() != b2.Len() {
t.Fatalf("invalid record length want %d, got %d", b1.Len(), b2.Len())
}
p1, p2 := new(testBatch), new(testBatch)
err := b1.replay(p1)
err := b1.Replay(p1)
if err != nil {
t.Fatal("error when replaying batch 1: ", err)
}
err = b2.replay(p2)
err = b2.Replay(p2)
if err != nil {
t.Fatal("error when replaying batch 2: ", err)
}
for i := range p1.rec {
r1, r2 := p1.rec[i], p2.rec[i]
if r1.t != r2.t {
t.Errorf("invalid type on record '%d' want %d, got %d", i, r1.t, r2.t)
if r1.kt != r2.kt {
t.Errorf("invalid type on record '%d' want %d, got %d", i, r1.kt, r2.kt)
}
if !bytes.Equal(r1.key, r2.key) {
t.Errorf("invalid key on record '%d' want %s, got %s", i, string(r1.key), string(r2.key))
}
if r1.t == tVal {
if r1.kt == ktVal {
if !bytes.Equal(r1.value, r2.value) {
t.Errorf("invalid value on record '%d' want %s, got %s", i, string(r1.value), string(r2.value))
}
@@ -75,7 +75,7 @@ func TestBatch_EncodeDecode(t *testing.T) {
b1.Delete([]byte("k"))
buf := b1.encode()
b2 := new(Batch)
err := b2.decode(buf)
err := b2.decode(0, buf)
if err != nil {
t.Error("error when decoding batch: ", err)
}
4 Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/bench_test.go (generated, vendored)
@@ -249,7 +249,9 @@ func (p *dbBench) newIter() iterator.Iterator {
}

func (p *dbBench) close() {
p.b.Log(p.db.s.tops.bpool)
if bp, err := p.db.GetProperty("leveldb.blockpool"); err == nil {
p.b.Log("Block pool stats: ", bp)
}
p.db.Close()
p.stor.Close()
os.RemoveAll(benchDB)
148 Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/cache/cache.go (generated, vendored)
@@ -11,115 +11,149 @@ import (
"sync/atomic"
)

// SetFunc used by Namespace.Get method to create a cache object. SetFunc
// may return ok false, in that case the cache object will not be created.
type SetFunc func() (ok bool, value interface{}, charge int, fin SetFin)
// SetFunc is the function that will be called by Namespace.Get to create
// a cache object, if charge is less than one than the cache object will
// not be registered to cache tree, if value is nil then the cache object
// will not be created.
type SetFunc func() (charge int, value interface{})

// SetFin will be called when corresponding cache object are released.
type SetFin func()
// DelFin is the function that will be called as the result of a delete operation.
// Exist == true is indication that the object is exist, and pending == true is
// indication of deletion already happen but haven't done yet (wait for all handles
// to be released). And exist == false means the object doesn't exist.
type DelFin func(exist, pending bool)

// DelFin will be called when corresponding cache object are released.
// DelFin will be called after SetFin. The exist is true if the corresponding
// cache object is actually exist in the cache tree.
type DelFin func(exist bool)

// PurgeFin will be called when corresponding cache object are released.
// PurgeFin will be called after SetFin. If PurgeFin present DelFin will
// not be executed but passed to the PurgeFin, it is up to the caller
// to call it or not.
type PurgeFin func(ns, key uint64, delfin DelFin)
// PurgeFin is the function that will be called as the result of a purge operation.
type PurgeFin func(ns, key uint64)

// Cache is a cache tree. A cache instance must be goroutine-safe.
type Cache interface {
// SetCapacity sets cache capacity.
// SetCapacity sets cache tree capacity.
SetCapacity(capacity int)

// GetNamespace gets or creates a cache namespace for the given id.
// Capacity returns cache tree capacity.
Capacity() int

// Used returns used cache tree capacity.
Used() int

// Size returns entire alive cache objects size.
Size() int

// NumObjects returns number of alive objects.
NumObjects() int

// GetNamespace gets cache namespace with the given id.
// GetNamespace is never return nil.
GetNamespace(id uint64) Namespace

// Purge purges all cache namespaces, read Namespace.Purge method documentation.
// PurgeNamespace purges cache namespace with the given id from this cache tree.
// Also read Namespace.Purge.
PurgeNamespace(id uint64, fin PurgeFin)

// ZapNamespace detaches cache namespace with the given id from this cache tree.
// Also read Namespace.Zap.
ZapNamespace(id uint64)

// Purge purges all cache namespace from this cache tree.
// This is behave the same as calling Namespace.Purge method on all cache namespace.
Purge(fin PurgeFin)

// Zap zaps all cache namespaces, read Namespace.Zap method documentation.
Zap(closed bool)
// Zap detaches all cache namespace from this cache tree.
// This is behave the same as calling Namespace.Zap method on all cache namespace.
Zap()
}

// Namespace is a cache namespace. A namespace instance must be goroutine-safe.
type Namespace interface {
// Get gets cache object for the given key. The given SetFunc (if not nil) will
// be called if the given key does not exist.
// If the given key does not exist, SetFunc is nil or SetFunc return ok false, Get
// will return ok false.
Get(key uint64, setf SetFunc) (obj Object, ok bool)

// Get deletes cache object for the given key. If exist the cache object will
// be deleted later when all of its handles have been released (i.e. no one use
// it anymore) and the given DelFin (if not nil) will finally be executed. If
// such cache object does not exist the given DelFin will be executed anyway.
// Get gets cache object with the given key.
// If cache object is not found and setf is not nil, Get will atomically creates
// the cache object by calling setf. Otherwise Get will returns nil.
//
// Delete returns true if such cache object exist.
// The returned cache handle should be released after use by calling Release
// method.
Get(key uint64, setf SetFunc) Handle

// Delete removes cache object with the given key from cache tree.
// A deleted cache object will be released as soon as all of its handles have
// been released.
// Delete only happen once, subsequent delete will consider cache object doesn't
// exist, even if the cache object ins't released yet.
//
// If not nil, fin will be called if the cache object doesn't exist or when
// finally be released.
//
// Delete returns true if such cache object exist and never been deleted.
Delete(key uint64, fin DelFin) bool

// Purge deletes all cache objects, read Delete method documentation.
// Purge removes all cache objects within this namespace from cache tree.
// This is the same as doing delete on all cache objects.
//
// If not nil, fin will be called on all cache objects when its finally be
// released.
Purge(fin PurgeFin)

// Zap detaches the namespace from the cache tree and delete all its cache
// objects. The cache objects deletion and finalizers execution are happen
// immediately, even if its existing handles haven't yet been released.
// A zapped namespace can't never be filled again.
// If closed is false then the Get function will always call the given SetFunc
// if it is not nil, but resultant of the SetFunc will not be cached.
Zap(closed bool)
// Zap detaches namespace from cache tree and release all its cache objects.
// A zapped namespace can never be filled again.
// Calling Get on zapped namespace will always return nil.
Zap()
}

// Object is a cache object.
type Object interface {
// Release releases the cache object. Other methods should not be called
// after the cache object has been released.
// Handle is a cache handle.
type Handle interface {
// Release releases this cache handle. This method can be safely called mutiple
// times.
Release()

// Value returns value of the cache object.
// Value returns value of this cache handle.
// Value will returns nil after this cache handle have be released.
Value() interface{}
}

const (
DelNotExist = iota
DelExist
DelPendig
)

// Namespace state.
type nsState int

const (
nsEffective nsState = iota
nsZapped
nsClosed
)

// Node state.
type nodeState int

const (
nodeEffective nodeState = iota
nodeZero nodeState = iota
nodeEffective
nodeEvicted
nodeRemoved
nodeDeleted
)

// Fake object.
type fakeObject struct {
// Fake handle.
type fakeHandle struct {
value interface{}
fin func()
once uint32
}

func (o *fakeObject) Value() interface{} {
if atomic.LoadUint32(&o.once) == 0 {
return o.value
func (h *fakeHandle) Value() interface{} {
if atomic.LoadUint32(&h.once) == 0 {
return h.value
}
return nil
}

func (o *fakeObject) Release() {
if !atomic.CompareAndSwapUint32(&o.once, 0, 1) {
func (h *fakeHandle) Release() {
if !atomic.CompareAndSwapUint32(&h.once, 0, 1) {
return
}
if o.fin != nil {
o.fin()
o.fin = nil
if h.fin != nil {
h.fin()
h.fin = nil
}
}
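The reworked API replaces the (Object, ok) pair with a nil-able Handle and gives DelFin its exist/pending semantics. A short usage sketch of the new surface, built only from the signatures shown in this diff:

```go
package main

import (
	"fmt"

	"github.com/syndtr/goleveldb/leveldb/cache"
)

func main() {
	c := cache.NewLRUCache(100)
	ns := c.GetNamespace(1)

	// Get returns an existing handle or calls the SetFunc to create
	// the object; a nil value from the SetFunc (or a zapped
	// namespace) yields a nil handle.
	h := ns.Get(42, func() (charge int, value interface{}) {
		return 1, "hello"
	})
	if h != nil {
		fmt.Println(h.Value()) // "hello"
		h.Release()            // safe to call multiple times
	}

	// Delete defers the actual release until all handles are gone;
	// the DelFin reports whether the object existed and whether
	// deletion is still pending on outstanding handles.
	ns.Delete(42, func(exist, pending bool) {
		fmt.Println("deleted:", exist, "pending:", pending)
	})
}
```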
553 Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/cache/cache_test.go (generated, vendored)
@@ -8,14 +8,32 @@ package cache

import (
"math/rand"
"runtime"
"sync"
"sync/atomic"
"testing"
"time"
)

func set(ns Namespace, key uint64, value interface{}, charge int, fin func()) Object {
obj, _ := ns.Get(key, func() (bool, interface{}, int, SetFin) {
return true, value, charge, fin
type releaserFunc struct {
fn func()
value interface{}
}

func (r releaserFunc) Release() {
if r.fn != nil {
r.fn()
}
}

func set(ns Namespace, key uint64, value interface{}, charge int, relf func()) Handle {
return ns.Get(key, func() (int, interface{}) {
if relf != nil {
return charge, releaserFunc{relf, value}
} else {
return charge, value
}
})
return obj
}

func TestCache_HitMiss(t *testing.T) {
@@ -43,29 +61,31 @@ func TestCache_HitMiss(t *testing.T) {
setfin++
}).Release()
for j, y := range cases {
r, ok := ns.Get(y.key, nil)
h := ns.Get(y.key, nil)
if j <= i {
// should hit
if !ok {
if h == nil {
t.Errorf("case '%d' iteration '%d' is miss", i, j)
} else if r.Value().(string) != y.value {
t.Errorf("case '%d' iteration '%d' has invalid value got '%s', want '%s'", i, j, r.Value().(string), y.value)
} else {
if x := h.Value().(releaserFunc).value.(string); x != y.value {
t.Errorf("case '%d' iteration '%d' has invalid value got '%s', want '%s'", i, j, x, y.value)
}
}
} else {
// should miss
if ok {
t.Errorf("case '%d' iteration '%d' is hit , value '%s'", i, j, r.Value().(string))
if h != nil {
t.Errorf("case '%d' iteration '%d' is hit , value '%s'", i, j, h.Value().(releaserFunc).value.(string))
}
}
if ok {
r.Release()
if h != nil {
h.Release()
}
}
}

for i, x := range cases {
finalizerOk := false
ns.Delete(x.key, func(exist bool) {
ns.Delete(x.key, func(exist, pending bool) {
finalizerOk = true
})

@@ -74,22 +94,24 @@ func TestCache_HitMiss(t *testing.T) {
}

for j, y := range cases {
r, ok := ns.Get(y.key, nil)
h := ns.Get(y.key, nil)
if j > i {
// should hit
if !ok {
if h == nil {
t.Errorf("case '%d' iteration '%d' is miss", i, j)
} else if r.Value().(string) != y.value {
t.Errorf("case '%d' iteration '%d' has invalid value got '%s', want '%s'", i, j, r.Value().(string), y.value)
} else {
if x := h.Value().(releaserFunc).value.(string); x != y.value {
t.Errorf("case '%d' iteration '%d' has invalid value got '%s', want '%s'", i, j, x, y.value)
}
}
} else {
// should miss
if ok {
t.Errorf("case '%d' iteration '%d' is hit, value '%s'", i, j, r.Value().(string))
if h != nil {
t.Errorf("case '%d' iteration '%d' is hit, value '%s'", i, j, h.Value().(releaserFunc).value.(string))
}
}
if ok {
r.Release()
if h != nil {
h.Release()
}
}
}
@@ -107,42 +129,42 @@ func TestLRUCache_Eviction(t *testing.T) {
set(ns, 3, 3, 1, nil).Release()
set(ns, 4, 4, 1, nil).Release()
set(ns, 5, 5, 1, nil).Release()
if r, ok := ns.Get(2, nil); ok { // 1,3,4,5,2
r.Release()
if h := ns.Get(2, nil); h != nil { // 1,3,4,5,2
h.Release()
}
set(ns, 9, 9, 10, nil).Release() // 5,2,9

for _, x := range []uint64{9, 2, 5, 1} {
r, ok := ns.Get(x, nil)
if !ok {
t.Errorf("miss for key '%d'", x)
for _, key := range []uint64{9, 2, 5, 1} {
h := ns.Get(key, nil)
if h == nil {
t.Errorf("miss for key '%d'", key)
} else {
if r.Value().(int) != int(x) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", x, x, r.Value().(int))
if x := h.Value().(int); x != int(key) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", key, key, x)
}
r.Release()
h.Release()
}
}
o1.Release()
for _, x := range []uint64{1, 2, 5} {
r, ok := ns.Get(x, nil)
if !ok {
t.Errorf("miss for key '%d'", x)
for _, key := range []uint64{1, 2, 5} {
h := ns.Get(key, nil)
if h == nil {
t.Errorf("miss for key '%d'", key)
} else {
if r.Value().(int) != int(x) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", x, x, r.Value().(int))
if x := h.Value().(int); x != int(key) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", key, key, x)
}
r.Release()
h.Release()
}
}
for _, x := range []uint64{3, 4, 9} {
r, ok := ns.Get(x, nil)
if ok {
t.Errorf("hit for key '%d'", x)
if r.Value().(int) != int(x) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", x, x, r.Value().(int))
for _, key := range []uint64{3, 4, 9} {
h := ns.Get(key, nil)
if h != nil {
t.Errorf("hit for key '%d'", key)
if x := h.Value().(int); x != int(key) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", key, key, x)
}
r.Release()
h.Release()
}
}
}
@@ -153,16 +175,15 @@ func TestLRUCache_SetGet(t *testing.T) {
for i := 0; i < 200; i++ {
n := uint64(rand.Intn(99999) % 20)
set(ns, n, n, 1, nil).Release()
if p, ok := ns.Get(n, nil); ok {
if p.Value() == nil {
if h := ns.Get(n, nil); h != nil {
if h.Value() == nil {
t.Errorf("key '%d' contains nil value", n)
} else {
got := p.Value().(uint64)
if got != n {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", n, n, got)
if x := h.Value().(uint64); x != n {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", n, n, x)
}
}
p.Release()
h.Release()
} else {
t.Errorf("key '%d' doesn't exist", n)
}
@@ -176,31 +197,429 @@ func TestLRUCache_Purge(t *testing.T) {
o2 := set(ns1, 2, 2, 1, nil)
ns1.Purge(nil)
set(ns1, 3, 3, 1, nil).Release()
for _, x := range []uint64{1, 2, 3} {
r, ok := ns1.Get(x, nil)
if !ok {
t.Errorf("miss for key '%d'", x)
for _, key := range []uint64{1, 2, 3} {
h := ns1.Get(key, nil)
if h == nil {
t.Errorf("miss for key '%d'", key)
} else {
if r.Value().(int) != int(x) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", x, x, r.Value().(int))
if x := h.Value().(int); x != int(key) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", key, key, x)
}
r.Release()
h.Release()
}
}
o1.Release()
o2.Release()
for _, x := range []uint64{1, 2} {
r, ok := ns1.Get(x, nil)
if ok {
t.Errorf("hit for key '%d'", x)
if r.Value().(int) != int(x) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", x, x, r.Value().(int))
for _, key := range []uint64{1, 2} {
h := ns1.Get(key, nil)
if h != nil {
t.Errorf("hit for key '%d'", key)
if x := h.Value().(int); x != int(key) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", key, key, x)
}
r.Release()
h.Release()
}
}
}

type testingCacheObjectCounter struct {
created uint
released uint
}

func (c *testingCacheObjectCounter) createOne() {
c.created++
}

func (c *testingCacheObjectCounter) releaseOne() {
c.released++
}

type testingCacheObject struct {
t *testing.T
cnt *testingCacheObjectCounter

ns, key uint64

releaseCalled bool
}

func (x *testingCacheObject) Release() {
if !x.releaseCalled {
x.releaseCalled = true
x.cnt.releaseOne()
} else {
x.t.Errorf("duplicate setfin NS#%d KEY#%d", x.ns, x.key)
}
}

func TestLRUCache_ConcurrentSetGet(t *testing.T) {
runtime.GOMAXPROCS(runtime.NumCPU())

seed := time.Now().UnixNano()
t.Logf("seed=%d", seed)

const (
N = 2000000
M = 4000
C = 3
)

var set, get uint32

wg := &sync.WaitGroup{}
c := NewLRUCache(M / 4)
for ni := uint64(0); ni < C; ni++ {
r0 := rand.New(rand.NewSource(seed + int64(ni)))
r1 := rand.New(rand.NewSource(seed + int64(ni) + 1))
ns := c.GetNamespace(ni)

wg.Add(2)
go func(ns Namespace, r *rand.Rand) {
for i := 0; i < N; i++ {
x := uint64(r.Int63n(M))
o := ns.Get(x, func() (int, interface{}) {
atomic.AddUint32(&set, 1)
return 1, x
})
if v := o.Value().(uint64); v != x {
t.Errorf("#%d invalid value, got=%d", x, v)
}
o.Release()
}
wg.Done()
}(ns, r0)
go func(ns Namespace, r *rand.Rand) {
for i := 0; i < N; i++ {
x := uint64(r.Int63n(M))
o := ns.Get(x, nil)
if o != nil {
atomic.AddUint32(&get, 1)
if v := o.Value().(uint64); v != x {
t.Errorf("#%d invalid value, got=%d", x, v)
}
o.Release()
}
}
wg.Done()
}(ns, r1)
}

wg.Wait()

t.Logf("set=%d get=%d", set, get)
}

func TestLRUCache_Finalizer(t *testing.T) {
const (
capacity = 100
goroutines = 100
iterations = 10000
keymax = 8000
)

cnt := &testingCacheObjectCounter{}

c := NewLRUCache(capacity)

type instance struct {
seed int64
rnd *rand.Rand
nsid uint64
ns Namespace
effective int
handles []Handle
handlesMap map[uint64]int

delete bool
purge bool
zap bool
wantDel int
delfinCalled int
delfinCalledAll int
delfinCalledEff int
purgefinCalled int
}

instanceGet := func(p *instance, key uint64) {
h := p.ns.Get(key, func() (charge int, value interface{}) {
to := &testingCacheObject{
t: t, cnt: cnt,
ns: p.nsid,
key: key,
}
p.effective++
cnt.createOne()
return 1, releaserFunc{func() {
to.Release()
p.effective--
}, to}
})
p.handles = append(p.handles, h)
p.handlesMap[key] = p.handlesMap[key] + 1
}
instanceRelease := func(p *instance, i int) {
h := p.handles[i]
key := h.Value().(releaserFunc).value.(*testingCacheObject).key
if n := p.handlesMap[key]; n == 0 {
t.Fatal("key ref == 0")
} else if n > 1 {
p.handlesMap[key] = n - 1
} else {
delete(p.handlesMap, key)
}
h.Release()
p.handles = append(p.handles[:i], p.handles[i+1:]...)
p.handles[len(p.handles) : len(p.handles)+1][0] = nil
}

seed := time.Now().UnixNano()
t.Logf("seed=%d", seed)

instances := make([]*instance, goroutines)
for i := range instances {
p := &instance{}
p.handlesMap = make(map[uint64]int)
p.seed = seed + int64(i)
p.rnd = rand.New(rand.NewSource(p.seed))
p.nsid = uint64(i)
p.ns = c.GetNamespace(p.nsid)
p.delete = i%6 == 0
p.purge = i%8 == 0
p.zap = i%12 == 0 || i%3 == 0
instances[i] = p
}

runr := rand.New(rand.NewSource(seed - 1))
run := func(rnd *rand.Rand, x []*instance, init func(p *instance) bool, fn func(p *instance, i int) bool) {
var (
rx []*instance
rn []int
)
if init == nil {
rx = append([]*instance{}, x...)
rn = make([]int, len(x))
} else {
for _, p := range x {
if init(p) {
rx = append(rx, p)
rn = append(rn, 0)
}
}
}
for len(rx) > 0 {
i := rand.Intn(len(rx))
if fn(rx[i], rn[i]) {
rn[i]++
} else {
rx = append(rx[:i], rx[i+1:]...)
rn = append(rn[:i], rn[i+1:]...)
}
}
}

// Get and release.
run(runr, instances, nil, func(p *instance, i int) bool {
if i < iterations {
if len(p.handles) == 0 || p.rnd.Int()%2 == 0 {
instanceGet(p, uint64(p.rnd.Intn(keymax)))
} else {
instanceRelease(p, p.rnd.Intn(len(p.handles)))
}
return true
} else {
return false
}
})

if used, cap := c.Used(), c.Capacity(); used > cap {
t.Errorf("Used > capacity, used=%d cap=%d", used, cap)
}

// Check effective objects.
for i, p := range instances {
if int(p.effective) < len(p.handlesMap) {
t.Errorf("#%d effective objects < acquired handle, eo=%d ah=%d", i, p.effective, len(p.handlesMap))
}
}

if want := int(cnt.created - cnt.released); c.Size() != want {
t.Errorf("Invalid cache size, want=%d got=%d", want, c.Size())
}

// First delete.
run(runr, instances, func(p *instance) bool {
p.wantDel = p.effective
return p.delete
}, func(p *instance, i int) bool {
key := uint64(i)
if key < keymax {
_, wantExist := p.handlesMap[key]
gotExist := p.ns.Delete(key, func(exist, pending bool) {
p.delfinCalledAll++
if exist {
p.delfinCalledEff++
}
})
if !gotExist && wantExist {
t.Errorf("delete on NS#%d KEY#%d not found", p.nsid, key)
}
return true
} else {
return false
}
})

// Second delete.
run(runr, instances, func(p *instance) bool {
p.delfinCalled = 0
return p.delete
}, func(p *instance, i int) bool {
key := uint64(i)
if key < keymax {
gotExist := p.ns.Delete(key, func(exist, pending bool) {
if exist && !pending {
t.Errorf("delete fin on NS#%d KEY#%d exist and not pending for deletion", p.nsid, key)
}
p.delfinCalled++
})
if gotExist {
t.Errorf("delete on NS#%d KEY#%d found", p.nsid, key)
}
return true
} else {
if p.delfinCalled != keymax {
t.Errorf("(2) NS#%d not all delete fin called, diff=%d", p.nsid, keymax-p.delfinCalled)
}
return false
}
})

// Purge.
run(runr, instances, func(p *instance) bool {
return p.purge
}, func(p *instance, i int) bool {
p.ns.Purge(func(ns, key uint64) {
p.purgefinCalled++
})
return false
})

if want := int(cnt.created - cnt.released); c.Size() != want {
t.Errorf("Invalid cache size, want=%d got=%d", want, c.Size())
}

// Release.
run(runr, instances, func(p *instance) bool {
return !p.zap
}, func(p *instance, i int) bool {
if len(p.handles) > 0 {
instanceRelease(p, len(p.handles)-1)
return true
} else {
return false
}
})

if want := int(cnt.created - cnt.released); c.Size() != want {
t.Errorf("Invalid cache size, want=%d got=%d", want, c.Size())
}

// Zap.
run(runr, instances, func(p *instance) bool {
return p.zap
}, func(p *instance, i int) bool {
p.ns.Zap()
p.handles = nil
p.handlesMap = nil
return false
})

if want := int(cnt.created - cnt.released); c.Size() != want {
t.Errorf("Invalid cache size, want=%d got=%d", want, c.Size())
}

if notrel, used := int(cnt.created-cnt.released), c.Used(); notrel != used {
t.Errorf("Invalid used value, want=%d got=%d", notrel, used)
}

c.Purge(nil)

for _, p := range instances {
if p.delete {
if p.delfinCalledAll != keymax {
t.Errorf("#%d not all delete fin called, purge=%v zap=%v diff=%d", p.nsid, p.purge, p.zap, keymax-p.delfinCalledAll)
}
if p.delfinCalledEff != p.wantDel {
t.Errorf("#%d not all effective delete fin called, diff=%d", p.nsid, p.wantDel-p.delfinCalledEff)
}
if p.purge && p.purgefinCalled > 0 {
t.Errorf("#%d some purge fin called, delete=%v zap=%v n=%d", p.nsid, p.delete, p.zap, p.purgefinCalled)
}
} else {
if p.purge {
if p.purgefinCalled != p.wantDel {
t.Errorf("#%d not all purge fin called, delete=%v zap=%v diff=%d", p.nsid, p.delete, p.zap, p.wantDel-p.purgefinCalled)
}
}
}
}

if cnt.created != cnt.released {
t.Errorf("Some cache object weren't released, created=%d released=%d", cnt.created, cnt.released)
}
}

func BenchmarkLRUCache_Set(b *testing.B) {
c := NewLRUCache(0)
ns := c.GetNamespace(0)
b.ResetTimer()
for i := uint64(0); i < uint64(b.N); i++ {
set(ns, i, "", 1, nil)
}
}

func BenchmarkLRUCache_Get(b *testing.B) {
c := NewLRUCache(0)
ns := c.GetNamespace(0)
b.ResetTimer()
for i := uint64(0); i < uint64(b.N); i++ {
set(ns, i, "", 1, nil)
}
b.ResetTimer()
for i := uint64(0); i < uint64(b.N); i++ {
ns.Get(i, nil)
}
}

func BenchmarkLRUCache_Get2(b *testing.B) {
c := NewLRUCache(0)
ns := c.GetNamespace(0)
b.ResetTimer()
for i := uint64(0); i < uint64(b.N); i++ {
set(ns, i, "", 1, nil)
}
b.ResetTimer()
for i := uint64(0); i < uint64(b.N); i++ {
ns.Get(i, func() (charge int, value interface{}) {
return 0, nil
})
}
}

func BenchmarkLRUCache_Release(b *testing.B) {
c := NewLRUCache(0)
ns := c.GetNamespace(0)
handles := make([]Handle, b.N)
for i := uint64(0); i < uint64(b.N); i++ {
handles[i] = set(ns, i, "", 1, nil)
}
b.ResetTimer()
for _, h := range handles {
h.Release()
}
}

func BenchmarkLRUCache_SetRelease(b *testing.B) {
capacity := b.N / 100
if capacity <= 0 {
@@ -210,7 +629,7 @@ func BenchmarkLRUCache_SetRelease(b *testing.B) {
ns := c.GetNamespace(0)
b.ResetTimer()
for i := uint64(0); i < uint64(b.N); i++ {
set(ns, i, nil, 1, nil).Release()
set(ns, i, "", 1, nil).Release()
}
}

@@ -227,10 +646,10 @@ func BenchmarkLRUCache_SetReleaseTwice(b *testing.B) {
nb := b.N - na

for i := uint64(0); i < uint64(na); i++ {
set(ns, i, nil, 1, nil).Release()
set(ns, i, "", 1, nil).Release()
}

for i := uint64(0); i < uint64(nb); i++ {
set(ns, i, nil, 1, nil).Release()
set(ns, i, "", 1, nil).Release()
}
}
246 Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/cache/empty_cache.go (generated, vendored)
@@ -1,246 +0,0 @@
// Copyright (c) 2013, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

package cache

import (
"sync"
"sync/atomic"
)

type emptyCache struct {
sync.Mutex
table map[uint64]*emptyNS
}

// NewEmptyCache creates a new initialized empty cache.
func NewEmptyCache() Cache {
return &emptyCache{
table: make(map[uint64]*emptyNS),
}
}

func (c *emptyCache) GetNamespace(id uint64) Namespace {
c.Lock()
defer c.Unlock()

if ns, ok := c.table[id]; ok {
return ns
}

ns := &emptyNS{
cache: c,
id: id,
table: make(map[uint64]*emptyNode),
}
c.table[id] = ns
return ns
}

func (c *emptyCache) Purge(fin PurgeFin) {
c.Lock()
for _, ns := range c.table {
ns.purgeNB(fin)
}
c.Unlock()
}

func (c *emptyCache) Zap(closed bool) {
c.Lock()
for _, ns := range c.table {
ns.zapNB(closed)
}
c.table = make(map[uint64]*emptyNS)
c.Unlock()
}

func (*emptyCache) SetCapacity(capacity int) {}

type emptyNS struct {
cache *emptyCache
id uint64
table map[uint64]*emptyNode
state nsState
}

func (ns *emptyNS) Get(key uint64, setf SetFunc) (o Object, ok bool) {
ns.cache.Lock()

switch ns.state {
case nsZapped:
ns.cache.Unlock()
if setf == nil {
return
}

var value interface{}
var fin func()
ok, value, _, fin = setf()
if ok {
o = &fakeObject{
value: value,
fin: fin,
}
}
return
case nsClosed:
ns.cache.Unlock()
return
}

n, ok := ns.table[key]
if ok {
n.ref++
} else {
if setf == nil {
ns.cache.Unlock()
return
}

var value interface{}
var fin func()
ok, value, _, fin = setf()
if !ok {
ns.cache.Unlock()
return
}

n = &emptyNode{
ns: ns,
key: key,
value: value,
setfin: fin,
ref: 1,
}
ns.table[key] = n
}

ns.cache.Unlock()
o = &emptyObject{node: n}
return
}

func (ns *emptyNS) Delete(key uint64, fin DelFin) bool {
ns.cache.Lock()

if ns.state != nsEffective {
ns.cache.Unlock()
if fin != nil {
fin(false)
}
return false
}

n, ok := ns.table[key]
if !ok {
ns.cache.Unlock()
if fin != nil {
fin(false)
}
return false
}
n.delfin = fin
ns.cache.Unlock()
return true
}

func (ns *emptyNS) purgeNB(fin PurgeFin) {
if ns.state != nsEffective {
return
}
for _, n := range ns.table {
n.purgefin = fin
}
}

func (ns *emptyNS) Purge(fin PurgeFin) {
ns.cache.Lock()
ns.purgeNB(fin)
ns.cache.Unlock()
}

func (ns *emptyNS) zapNB(closed bool) {
if ns.state != nsEffective {
return
}
for _, n := range ns.table {
n.execFin()
}
if closed {
ns.state = nsClosed
} else {
ns.state = nsZapped
}
ns.table = nil
}

func (ns *emptyNS) Zap(closed bool) {
ns.cache.Lock()
ns.zapNB(closed)
delete(ns.cache.table, ns.id)
ns.cache.Unlock()
}

type emptyNode struct {
ns *emptyNS
key uint64
value interface{}
ref int
setfin SetFin
delfin DelFin
purgefin PurgeFin
}

func (n *emptyNode) execFin() {
if n.setfin != nil {
n.setfin()
n.setfin = nil
}
if n.purgefin != nil {
n.purgefin(n.ns.id, n.key, n.delfin)
n.delfin = nil
n.purgefin = nil
} else if n.delfin != nil {
n.delfin(true)
n.delfin = nil
}
}

func (n *emptyNode) evict() {
n.ns.cache.Lock()
n.ref--
if n.ref == 0 {
if n.ns.state == nsEffective {
// Remove elem.
delete(n.ns.table, n.key)
// Execute finalizer.
n.execFin()
}
} else if n.ref < 0 {
panic("leveldb/cache: emptyNode: negative node reference")
}
n.ns.cache.Unlock()
}

type emptyObject struct {
node *emptyNode
once uint32
}

func (o *emptyObject) Value() interface{} {
if atomic.LoadUint32(&o.once) == 0 {
return o.node.value
}
return nil
}

func (o *emptyObject) Release() {
if !atomic.CompareAndSwapUint32(&o.once, 0, 1) {
return
}
o.node.evict()
o.node = nil
}
628 Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/cache/lru_cache.go (generated, vendored)
@@ -9,16 +9,24 @@ package cache
|
||||
import (
|
||||
"sync"
|
||||
"sync/atomic"
|
||||
|
||||
"github.com/syndtr/goleveldb/leveldb/util"
|
||||
)
|
||||
|
||||
// The LLRB implementation were taken from https://github.com/petar/GoLLRB.
|
||||
// Which contains the following header:
|
||||
//
|
||||
// Copyright 2010 Petar Maymounkov. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// lruCache represent a LRU cache state.
|
||||
type lruCache struct {
|
||||
sync.Mutex
|
||||
|
||||
recent lruNode
|
||||
table map[uint64]*lruNs
|
||||
capacity int
|
||||
size int
|
||||
mu sync.Mutex
|
||||
recent lruNode
|
||||
table map[uint64]*lruNs
|
||||
capacity int
|
||||
used, size, alive int
|
||||
}
|
||||
|
||||
// NewLRUCache creates a new initialized LRU cache with the given capacity.
|
||||
@@ -32,245 +40,415 @@ func NewLRUCache(capacity int) Cache {
|
||||
return c
|
||||
}
|
||||
|
||||
func (c *lruCache) Capacity() int {
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
return c.capacity
|
||||
}
|
||||
|
||||
func (c *lruCache) Used() int {
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
return c.used
|
||||
}
|
||||
|
||||
func (c *lruCache) Size() int {
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
return c.size
|
||||
}
|
||||
|
||||
func (c *lruCache) NumObjects() int {
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
return c.alive
|
||||
}
|
||||
|
||||
// SetCapacity set cache capacity.
|
||||
func (c *lruCache) SetCapacity(capacity int) {
|
||||
c.Lock()
|
||||
c.mu.Lock()
|
||||
c.capacity = capacity
|
||||
c.evict()
|
||||
c.Unlock()
|
||||
c.mu.Unlock()
|
||||
}
|
||||
|
||||
// GetNamespace return namespace object for given id.
|
||||
func (c *lruCache) GetNamespace(id uint64) Namespace {
|
||||
c.Lock()
|
||||
defer c.Unlock()
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
|
||||
if p, ok := c.table[id]; ok {
|
||||
return p
|
||||
if ns, ok := c.table[id]; ok {
|
||||
return ns
|
||||
}
|
||||
|
||||
p := &lruNs{
|
||||
lru: c,
|
||||
id: id,
|
||||
table: make(map[uint64]*lruNode),
|
||||
ns := &lruNs{lru: c, id: id}
|
||||
c.table[id] = ns
|
||||
return ns
|
||||
}
|
||||
|
||||
func (c *lruCache) ZapNamespace(id uint64) {
|
||||
c.mu.Lock()
|
||||
if ns, exist := c.table[id]; exist {
|
||||
ns.zapNB()
|
||||
delete(c.table, id)
|
||||
}
|
||||
c.table[id] = p
|
||||
return p
|
||||
c.mu.Unlock()
|
||||
}
|
||||
|
||||
func (c *lruCache) PurgeNamespace(id uint64, fin PurgeFin) {
|
||||
c.mu.Lock()
|
||||
if ns, exist := c.table[id]; exist {
|
||||
ns.purgeNB(fin)
|
||||
}
|
||||
c.mu.Unlock()
|
||||
}
|
||||
|
||||
// Purge purge entire cache.
|
||||
func (c *lruCache) Purge(fin PurgeFin) {
|
||||
c.Lock()
|
||||
c.mu.Lock()
|
||||
for _, ns := range c.table {
|
||||
ns.purgeNB(fin)
|
||||
}
|
||||
c.Unlock()
|
||||
c.mu.Unlock()
|
||||
}
|
||||
|
||||
func (c *lruCache) Zap(closed bool) {
|
||||
c.Lock()
|
||||
func (c *lruCache) Zap() {
|
||||
c.mu.Lock()
|
||||
for _, ns := range c.table {
|
||||
ns.zapNB(closed)
|
||||
ns.zapNB()
|
||||
}
|
||||
c.table = make(map[uint64]*lruNs)
|
||||
c.Unlock()
|
||||
c.mu.Unlock()
|
||||
}
|
||||
|
||||
func (c *lruCache) evict() {
|
||||
top := &c.recent
|
||||
for n := c.recent.rPrev; c.size > c.capacity && n != top; {
|
||||
for n := c.recent.rPrev; c.used > c.capacity && n != top; {
|
||||
if n.state != nodeEffective {
|
||||
panic("evicting non effective node")
|
||||
}
|
||||
n.state = nodeEvicted
|
||||
n.rRemove()
|
||||
n.evictNB()
|
||||
c.size -= n.charge
|
||||
n.derefNB()
|
||||
c.used -= n.charge
|
||||
n = c.recent.rPrev
|
||||
}
|
||||
}
|
||||
|
||||
type lruNs struct {
|
||||
lru *lruCache
|
||||
id uint64
|
||||
table map[uint64]*lruNode
|
||||
state nsState
|
||||
lru *lruCache
|
||||
id uint64
|
||||
rbRoot *lruNode
|
||||
state nsState
|
||||
}
|
||||
|
||||
func (ns *lruNs) Get(key uint64, setf SetFunc) (o Object, ok bool) {
|
||||
lru := ns.lru
|
||||
lru.Lock()
|
||||
|
||||
switch ns.state {
|
||||
case nsZapped:
|
||||
lru.Unlock()
|
||||
if setf == nil {
|
||||
return
|
||||
}
|
||||
|
||||
var value interface{}
|
||||
var fin func()
|
||||
ok, value, _, fin = setf()
|
||||
if ok {
|
||||
o = &fakeObject{
|
||||
value: value,
|
||||
fin: fin,
|
||||
}
|
||||
}
|
||||
return
|
||||
case nsClosed:
|
||||
lru.Unlock()
|
||||
return
|
||||
func (ns *lruNs) rbGetOrCreateNode(h *lruNode, key uint64) (hn, n *lruNode) {
|
||||
if h == nil {
|
||||
n = &lruNode{ns: ns, key: key}
|
||||
return n, n
|
||||
}
|
||||
|
||||
n, ok := ns.table[key]
|
||||
if ok {
|
||||
switch n.state {
|
||||
case nodeEvicted:
|
||||
// Insert to recent list.
|
||||
n.state = nodeEffective
|
||||
n.ref++
|
||||
lru.size += n.charge
|
||||
lru.evict()
|
||||
fallthrough
|
||||
case nodeEffective:
|
||||
// Bump to front
|
||||
n.rRemove()
|
||||
n.rInsert(&lru.recent)
|
||||
if key < h.key {
|
||||
hn, n = ns.rbGetOrCreateNode(h.rbLeft, key)
|
||||
if hn != nil {
|
||||
h.rbLeft = hn
|
||||
} else {
|
||||
return nil, n
|
||||
}
|
||||
} else if key > h.key {
|
||||
hn, n = ns.rbGetOrCreateNode(h.rbRight, key)
|
||||
if hn != nil {
|
||||
h.rbRight = hn
|
||||
} else {
|
||||
return nil, n
|
||||
}
|
||||
n.ref++
|
||||
} else {
|
||||
if setf == nil {
|
||||
lru.Unlock()
|
||||
return
|
||||
}
|
||||
|
||||
var value interface{}
|
||||
var charge int
|
||||
var fin func()
|
||||
ok, value, charge, fin = setf()
|
||||
if !ok {
|
||||
lru.Unlock()
|
||||
return
|
||||
}
|
||||
|
||||
n = &lruNode{
|
||||
ns: ns,
|
||||
key: key,
|
||||
value: value,
|
||||
charge: charge,
|
||||
setfin: fin,
|
||||
ref: 2,
|
||||
}
|
||||
ns.table[key] = n
|
||||
n.rInsert(&lru.recent)
|
||||
|
||||
lru.size += charge
|
||||
lru.evict()
|
||||
return nil, h
|
||||
}
|
||||
|
||||
lru.Unlock()
|
||||
o = &lruObject{node: n}
|
||||
return
|
||||
if rbIsRed(h.rbRight) && !rbIsRed(h.rbLeft) {
|
||||
h = rbRotLeft(h)
|
||||
}
|
||||
if rbIsRed(h.rbLeft) && rbIsRed(h.rbLeft.rbLeft) {
|
||||
h = rbRotRight(h)
|
||||
}
|
||||
if rbIsRed(h.rbLeft) && rbIsRed(h.rbRight) {
|
||||
rbFlip(h)
|
||||
}
|
||||
return h, n
|
||||
}
|
||||
|
||||
func (ns *lruNs) getOrCreateNode(key uint64) *lruNode {
|
||||
hn, n := ns.rbGetOrCreateNode(ns.rbRoot, key)
|
||||
if hn != nil {
|
||||
ns.rbRoot = hn
|
||||
ns.rbRoot.rbBlack = true
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
func (ns *lruNs) rbGetNode(key uint64) *lruNode {
|
||||
h := ns.rbRoot
|
||||
for h != nil {
|
||||
switch {
|
||||
case key < h.key:
|
||||
h = h.rbLeft
|
||||
case key > h.key:
|
||||
h = h.rbRight
|
||||
default:
|
||||
return h
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (ns *lruNs) getNode(key uint64) *lruNode {
|
||||
return ns.rbGetNode(key)
|
||||
}
|
||||
|
||||
func (ns *lruNs) rbDeleteNode(h *lruNode, key uint64) *lruNode {
|
||||
if h == nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
if key < h.key {
|
||||
if h.rbLeft == nil { // key not present. Nothing to delete
|
||||
return h
|
||||
}
|
||||
if !rbIsRed(h.rbLeft) && !rbIsRed(h.rbLeft.rbLeft) {
|
||||
h = rbMoveLeft(h)
|
||||
}
|
||||
h.rbLeft = ns.rbDeleteNode(h.rbLeft, key)
|
||||
} else {
|
||||
if rbIsRed(h.rbLeft) {
|
||||
h = rbRotRight(h)
|
||||
}
|
||||
// If @key equals @h.key and no right children at @h
|
||||
if h.key == key && h.rbRight == nil {
|
||||
return nil
|
||||
}
|
||||
if h.rbRight != nil && !rbIsRed(h.rbRight) && !rbIsRed(h.rbRight.rbLeft) {
|
||||
h = rbMoveRight(h)
|
||||
}
|
||||
// If @key equals @h.key, and (from above) 'h.Right != nil'
|
||||
if h.key == key {
|
||||
var x *lruNode
|
||||
h.rbRight, x = rbDeleteMin(h.rbRight)
|
||||
if x == nil {
|
||||
panic("logic")
|
||||
}
|
||||
x.rbLeft, h.rbLeft = h.rbLeft, nil
|
||||
x.rbRight, h.rbRight = h.rbRight, nil
|
||||
x.rbBlack = h.rbBlack
|
||||
h = x
|
||||
} else { // Else, @key is bigger than @h.key
|
||||
h.rbRight = ns.rbDeleteNode(h.rbRight, key)
|
||||
}
|
||||
}
|
||||
|
||||
return rbFixup(h)
|
||||
}
|
||||
|
||||
func (ns *lruNs) deleteNode(key uint64) {
|
||||
ns.rbRoot = ns.rbDeleteNode(ns.rbRoot, key)
|
||||
if ns.rbRoot != nil {
|
||||
ns.rbRoot.rbBlack = true
|
||||
}
|
||||
}
|
||||
|
||||
func (ns *lruNs) rbIterateNodes(h *lruNode, pivot uint64, iter func(n *lruNode) bool) bool {
|
||||
if h == nil {
|
||||
return true
|
||||
}
|
||||
if h.key >= pivot {
|
||||
if !ns.rbIterateNodes(h.rbLeft, pivot, iter) {
|
||||
return false
|
||||
}
|
||||
if !iter(h) {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return ns.rbIterateNodes(h.rbRight, pivot, iter)
|
||||
}
|
||||
|
||||
func (ns *lruNs) iterateNodes(iter func(n *lruNode) bool) {
|
||||
ns.rbIterateNodes(ns.rbRoot, 0, iter)
|
||||
}
|
||||
|
||||
func (ns *lruNs) Get(key uint64, setf SetFunc) Handle {
|
||||
ns.lru.mu.Lock()
|
||||
defer ns.lru.mu.Unlock()
|
||||
|
||||
if ns.state != nsEffective {
|
||||
return nil
|
||||
}
|
||||
|
||||
var n *lruNode
|
||||
if setf == nil {
|
||||
n = ns.getNode(key)
|
||||
if n == nil {
|
||||
return nil
|
||||
}
|
||||
} else {
|
||||
n = ns.getOrCreateNode(key)
|
||||
}
|
||||
switch n.state {
|
||||
case nodeZero:
|
||||
charge, value := setf()
|
||||
if value == nil {
|
||||
ns.deleteNode(key)
|
||||
return nil
|
||||
}
|
||||
if charge < 0 {
|
||||
charge = 0
|
||||
}
|
||||
|
||||
n.value = value
|
||||
n.charge = charge
|
||||
n.state = nodeEvicted
|
||||
|
||||
ns.lru.size += charge
|
||||
ns.lru.alive++
|
||||
|
||||
fallthrough
|
||||
case nodeEvicted:
|
||||
if n.charge == 0 {
|
||||
break
|
||||
}
|
||||
|
||||
// Insert to recent list.
|
||||
n.state = nodeEffective
|
||||
n.ref++
|
||||
ns.lru.used += n.charge
|
||||
ns.lru.evict()
|
||||
|
||||
fallthrough
|
||||
case nodeEffective:
|
||||
// Bump to front.
|
||||
n.rRemove()
|
||||
n.rInsert(&ns.lru.recent)
|
||||
case nodeDeleted:
|
||||
// Do nothing.
|
||||
default:
|
||||
panic("invalid state")
|
||||
}
|
||||
n.ref++
|
||||
|
||||
return &lruHandle{node: n}
|
||||
}
|
||||
|
||||
func (ns *lruNs) Delete(key uint64, fin DelFin) bool {
|
||||
lru := ns.lru
|
||||
lru.Lock()
|
||||
ns.lru.mu.Lock()
|
||||
defer ns.lru.mu.Unlock()
|
||||
|
||||
if ns.state != nsEffective {
|
||||
lru.Unlock()
|
||||
if fin != nil {
|
||||
fin(false)
|
||||
fin(false, false)
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
n, ok := ns.table[key]
|
||||
if !ok {
|
||||
lru.Unlock()
|
||||
n := ns.getNode(key)
|
||||
if n == nil {
|
||||
if fin != nil {
|
||||
fin(false)
|
||||
fin(false, false)
|
||||
}
|
||||
return false
|
||||
|
||||
}
|
||||
|
||||
n.delfin = fin
|
||||
switch n.state {
|
||||
case nodeRemoved:
|
||||
lru.Unlock()
|
||||
return false
|
||||
case nodeEffective:
|
||||
lru.size -= n.charge
|
||||
ns.lru.used -= n.charge
|
||||
n.state = nodeDeleted
|
||||
n.delfin = fin
|
||||
n.rRemove()
|
||||
n.evictNB()
|
||||
n.derefNB()
|
||||
case nodeEvicted:
|
||||
n.state = nodeDeleted
|
||||
n.delfin = fin
|
||||
case nodeDeleted:
|
||||
if fin != nil {
|
||||
fin(true, true)
|
||||
}
|
||||
return false
|
||||
default:
|
||||
panic("invalid state")
|
||||
}
|
||||
n.state = nodeRemoved
|
||||
|
||||
lru.Unlock()
|
||||
return true
|
||||
}
|
||||
|
||||
func (ns *lruNs) purgeNB(fin PurgeFin) {
|
||||
lru := ns.lru
|
||||
if ns.state != nsEffective {
|
||||
return
|
||||
}
|
||||
|
||||
for _, n := range ns.table {
|
||||
n.purgefin = fin
|
||||
if n.state == nodeEffective {
|
||||
lru.size -= n.charge
|
||||
n.rRemove()
|
||||
n.evictNB()
|
||||
if ns.state == nsEffective {
|
||||
var nodes []*lruNode
|
||||
ns.iterateNodes(func(n *lruNode) bool {
|
||||
nodes = append(nodes, n)
|
||||
return true
|
||||
})
|
||||
for _, n := range nodes {
|
||||
switch n.state {
|
||||
case nodeEffective:
|
||||
ns.lru.used -= n.charge
|
||||
n.state = nodeDeleted
|
||||
n.purgefin = fin
|
||||
n.rRemove()
|
||||
n.derefNB()
|
||||
case nodeEvicted:
|
||||
n.state = nodeDeleted
|
||||
n.purgefin = fin
|
||||
case nodeDeleted:
|
||||
default:
|
||||
panic("invalid state")
|
||||
}
|
||||
}
|
||||
n.state = nodeRemoved
|
||||
}
|
||||
}
|
||||
|
||||
func (ns *lruNs) Purge(fin PurgeFin) {
|
||||
ns.lru.Lock()
|
||||
ns.lru.mu.Lock()
|
||||
ns.purgeNB(fin)
|
||||
ns.lru.Unlock()
|
||||
ns.lru.mu.Unlock()
|
||||
}
|
||||
|
||||
func (ns *lruNs) zapNB(closed bool) {
|
||||
lru := ns.lru
|
||||
if ns.state != nsEffective {
|
||||
return
|
||||
}
|
||||
|
||||
if closed {
|
||||
ns.state = nsClosed
|
||||
} else {
|
||||
func (ns *lruNs) zapNB() {
|
||||
if ns.state == nsEffective {
|
||||
ns.state = nsZapped
|
||||
|
||||
ns.iterateNodes(func(n *lruNode) bool {
|
||||
if n.state == nodeEffective {
|
||||
ns.lru.used -= n.charge
|
||||
n.rRemove()
|
||||
}
|
||||
ns.lru.size -= n.charge
|
||||
n.state = nodeDeleted
|
||||
n.fin()
|
||||
|
||||
return true
|
||||
})
|
||||
ns.rbRoot = nil
|
||||
}
|
||||
for _, n := range ns.table {
|
||||
if n.state == nodeEffective {
|
||||
lru.size -= n.charge
|
||||
n.rRemove()
|
||||
}
|
||||
n.state = nodeRemoved
|
||||
n.execFin()
|
||||
}
|
||||
ns.table = nil
|
||||
}
|
||||
|
||||
func (ns *lruNs) Zap(closed bool) {
|
||||
ns.lru.Lock()
|
||||
ns.zapNB(closed)
|
||||
func (ns *lruNs) Zap() {
|
||||
ns.lru.mu.Lock()
|
||||
ns.zapNB()
|
||||
delete(ns.lru.table, ns.id)
|
||||
ns.lru.Unlock()
|
||||
ns.lru.mu.Unlock()
|
||||
}
|
||||
|
||||
type lruNode struct {
|
||||
ns *lruNs
|
||||
|
||||
rNext, rPrev *lruNode
|
||||
rNext, rPrev *lruNode
|
||||
rbLeft, rbRight *lruNode
|
||||
rbBlack bool
|
||||
|
||||
key uint64
|
||||
value interface{}
|
||||
charge int
|
||||
ref int
|
||||
state nodeState
|
||||
setfin SetFin
|
||||
delfin DelFin
|
||||
purgefin PurgeFin
|
||||
}
|
||||
@@ -284,7 +462,6 @@ func (n *lruNode) rInsert(at *lruNode) {
}

func (n *lruNode) rRemove() bool {
// only remove if not already removed
if n.rPrev == nil {
return false
}
@@ -297,58 +474,149 @@ func (n *lruNode) rRemove() bool {
return true
}

func (n *lruNode) execFin() {
if n.setfin != nil {
n.setfin()
n.setfin = nil
func (n *lruNode) fin() {
if r, ok := n.value.(util.Releaser); ok {
r.Release()
}
if n.purgefin != nil {
n.purgefin(n.ns.id, n.key, n.delfin)
n.delfin = nil
if n.delfin != nil {
panic("conflicting delete and purge fin")
}
n.purgefin(n.ns.id, n.key)
n.purgefin = nil
} else if n.delfin != nil {
n.delfin(true)
n.delfin(true, false)
n.delfin = nil
}
}

func (n *lruNode) evictNB() {
func (n *lruNode) derefNB() {
n.ref--
if n.ref == 0 {
if n.ns.state == nsEffective {
// remove elem
delete(n.ns.table, n.key)
// execute finalizer
n.execFin()
// Remove elemement.
n.ns.deleteNode(n.key)
n.ns.lru.size -= n.charge
n.ns.lru.alive--
n.fin()
}
n.value = nil
} else if n.ref < 0 {
panic("leveldb/cache: lruCache: negative node reference")
}
}

func (n *lruNode) evict() {
n.ns.lru.Lock()
n.evictNB()
n.ns.lru.Unlock()
func (n *lruNode) deref() {
n.ns.lru.mu.Lock()
n.derefNB()
n.ns.lru.mu.Unlock()
}

type lruObject struct {
type lruHandle struct {
node *lruNode
once uint32
}

func (o *lruObject) Value() interface{} {
if atomic.LoadUint32(&o.once) == 0 {
return o.node.value
func (h *lruHandle) Value() interface{} {
if atomic.LoadUint32(&h.once) == 0 {
return h.node.value
}
return nil
}

func (o *lruObject) Release() {
if !atomic.CompareAndSwapUint32(&o.once, 0, 1) {
func (h *lruHandle) Release() {
if !atomic.CompareAndSwapUint32(&h.once, 0, 1) {
return
}

o.node.evict()
o.node = nil
h.node.deref()
h.node = nil
}
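Editor's note: the handle Release path above makes release idempotent with an atomic compare-and-swap on a once flag, so a double Release (or a Release racing a finalizer) dereferences the node only once. A minimal self-contained sketch of that pattern, with a hypothetical release() standing in for node.deref():

package main

import (
	"fmt"
	"sync/atomic"
)

type handle struct {
	once uint32
}

// release runs its cleanup at most once, no matter how many
// goroutines call it concurrently.
func (h *handle) release(cleanup func()) {
	if !atomic.CompareAndSwapUint32(&h.once, 0, 1) {
		return // another caller already won the CAS
	}
	cleanup()
}

func main() {
	h := &handle{}
	n := 0
	h.release(func() { n++ })
	h.release(func() { n++ }) // no-op: flag already set
	fmt.Println(n)            // 1
}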
func rbIsRed(h *lruNode) bool {
if h == nil {
return false
}
return !h.rbBlack
}

func rbRotLeft(h *lruNode) *lruNode {
x := h.rbRight
if x.rbBlack {
panic("rotating a black link")
}
h.rbRight = x.rbLeft
x.rbLeft = h
x.rbBlack = h.rbBlack
h.rbBlack = false
return x
}

func rbRotRight(h *lruNode) *lruNode {
x := h.rbLeft
if x.rbBlack {
panic("rotating a black link")
}
h.rbLeft = x.rbRight
x.rbRight = h
x.rbBlack = h.rbBlack
h.rbBlack = false
return x
}

func rbFlip(h *lruNode) {
h.rbBlack = !h.rbBlack
h.rbLeft.rbBlack = !h.rbLeft.rbBlack
h.rbRight.rbBlack = !h.rbRight.rbBlack
}

func rbMoveLeft(h *lruNode) *lruNode {
rbFlip(h)
if rbIsRed(h.rbRight.rbLeft) {
h.rbRight = rbRotRight(h.rbRight)
h = rbRotLeft(h)
rbFlip(h)
}
return h
}

func rbMoveRight(h *lruNode) *lruNode {
rbFlip(h)
if rbIsRed(h.rbLeft.rbLeft) {
h = rbRotRight(h)
rbFlip(h)
}
return h
}

func rbFixup(h *lruNode) *lruNode {
if rbIsRed(h.rbRight) {
h = rbRotLeft(h)
}

if rbIsRed(h.rbLeft) && rbIsRed(h.rbLeft.rbLeft) {
h = rbRotRight(h)
}

if rbIsRed(h.rbLeft) && rbIsRed(h.rbRight) {
rbFlip(h)
}

return h
}

func rbDeleteMin(h *lruNode) (hn, n *lruNode) {
if h == nil {
return nil, nil
}
if h.rbLeft == nil {
return nil, h
}

if !rbIsRed(h.rbLeft) && !rbIsRed(h.rbLeft.rbLeft) {
h = rbMoveLeft(h)
}

h.rbLeft, n = rbDeleteMin(h.rbLeft)

return rbFixup(h), n
}
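Editor's note: the rb* helpers above are the classic left-leaning red-black (LLRB) tree primitives: red links lean left, rbFixup restores the invariant after each recursive step, and rbDeleteMin walks the left spine, borrowing redness with rbMoveLeft so it never removes from a 2-node. A self-contained sketch under those assumptions — the node type and the textbook LLRB insert below are mine, not part of this diff:

package main

import "fmt"

type node struct {
	left, right *node
	black       bool
	key         int
}

func isRed(h *node) bool { return h != nil && !h.black }

func rotLeft(h *node) *node {
	x := h.right
	h.right = x.left
	x.left = h
	x.black = h.black
	h.black = false
	return x
}

func rotRight(h *node) *node {
	x := h.left
	h.left = x.right
	x.right = h
	x.black = h.black
	h.black = false
	return x
}

func flip(h *node) {
	h.black = !h.black
	h.left.black = !h.left.black
	h.right.black = !h.right.black
}

func fixup(h *node) *node {
	if isRed(h.right) {
		h = rotLeft(h)
	}
	if isRed(h.left) && isRed(h.left.left) {
		h = rotRight(h)
	}
	if isRed(h.left) && isRed(h.right) {
		flip(h)
	}
	return h
}

func moveLeft(h *node) *node {
	flip(h)
	if isRed(h.right.left) {
		h.right = rotRight(h.right)
		h = rotLeft(h)
		flip(h)
	}
	return h
}

func insert(h *node, key int) *node {
	if h == nil {
		return &node{key: key} // new links are red (black == false)
	}
	if key < h.key {
		h.left = insert(h.left, key)
	} else {
		h.right = insert(h.right, key)
	}
	return fixup(h)
}

func deleteMin(h *node) (*node, *node) {
	if h.left == nil {
		return nil, h // left-leaning invariant: right is nil too
	}
	if !isRed(h.left) && !isRed(h.left.left) {
		h = moveLeft(h)
	}
	var min *node
	h.left, min = deleteMin(h.left)
	return fixup(h), min
}

func main() {
	var root *node
	for _, k := range []int{5, 1, 9, 3, 7, 2, 8} {
		root = insert(root, k)
		root.black = true // the root is always black
	}
	for root != nil {
		if !isRed(root.left) && !isRed(root.right) {
			root.black = false // standard trick: color the root red before descending
		}
		var min *node
		root, min = deleteMin(root)
		if root != nil {
			root.black = true
		}
		fmt.Print(min.key, " ") // prints keys in ascending order
	}
	fmt.Println()
}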
40  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/config.go  (generated, vendored)
@@ -1,40 +0,0 @@
// Copyright (c) 2012, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

package leveldb

const (
kNumLevels = 7

// Level-0 compaction is started when we hit this many files.
kL0_CompactionTrigger float64 = 4

// Soft limit on number of level-0 files. We slow down writes at this point.
kL0_SlowdownWritesTrigger = 8

// Maximum number of level-0 files. We stop writes at this point.
kL0_StopWritesTrigger = 12

// Maximum level to which a new compacted memdb is pushed if it
// does not create overlap. We try to push to level 2 to avoid the
// relatively expensive level 0=>1 compactions and to avoid some
// expensive manifest file operations. We do not push all the way to
// the largest level since that can generate a lot of wasted disk
// space if the same key space is being repeatedly overwritten.
kMaxMemCompactLevel = 2

// Maximum size of a table.
kMaxTableSize = 2 * 1048576

// Maximum bytes of overlaps in grandparent (i.e., level+2) before we
// stop building a single file in a level->level+1 compaction.
kMaxGrandParentOverlapBytes = 10 * kMaxTableSize

// Maximum number of bytes in all compacted files. We avoid expanding
// the lower level file set of a compaction if it would make the
// total compaction cover more than this many bytes.
kExpCompactionMaxBytes = 25 * kMaxTableSize
)
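Editor's note: the deleted constants above encode a write-throttling policy: the level-0 file count triggers a soft slowdown at 8 files and a hard stop at 12 (this commit removes the hard-coded file in favor of configurable options). A minimal sketch, assuming only those two thresholds, of how such a policy is typically applied:

package main

import "fmt"

const (
	l0SlowdownTrigger = 8  // soft limit: delay each write
	l0StopTrigger     = 12 // hard limit: block writes
)

func writeAction(level0Files int) string {
	switch {
	case level0Files >= l0StopTrigger:
		return "stop" // wait for compaction to catch up
	case level0Files >= l0SlowdownTrigger:
		return "slowdown" // insert a small delay per write
	default:
		return "proceed"
	}
}

func main() {
	for _, n := range []int{4, 8, 12} {
		fmt.Println(n, writeAction(n)) // 4 proceed, 8 slowdown, 12 stop
	}
}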
60  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/corrupt_test.go  (generated, vendored)
@@ -14,6 +14,7 @@ import (
"testing"

"github.com/syndtr/goleveldb/leveldb/cache"
"github.com/syndtr/goleveldb/leveldb/filter"
"github.com/syndtr/goleveldb/leveldb/opt"
"github.com/syndtr/goleveldb/leveldb/storage"
)
@@ -96,21 +97,22 @@ func (h *dbCorruptHarness) deleteRand(n, max int, rnd *rand.Rand) {
}
}

func (h *dbCorruptHarness) corrupt(ft storage.FileType, offset, n int) {
func (h *dbCorruptHarness) corrupt(ft storage.FileType, fi, offset, n int) {
p := &h.dbHarness
t := p.t

var file storage.File
ff, _ := p.stor.GetFiles(ft)
for _, f := range ff {
if file == nil || f.Num() > file.Num() {
file = f
}
sff := files(ff)
sff.sort()
if fi < 0 {
fi = len(sff) - 1
}
if file == nil {
t.Fatalf("no such file with type %q", ft)
if fi >= len(sff) {
t.Fatalf("no such file with type %q with index %d", ft, fi)
}

file := sff[fi]

r, err := file.Open()
if err != nil {
t.Fatal("cannot open file: ", err)
@@ -225,8 +227,8 @@ func TestCorruptDB_Journal(t *testing.T) {
h.build(100)
h.check(100, 100)
h.closeDB()
h.corrupt(storage.TypeJournal, 19, 1)
h.corrupt(storage.TypeJournal, 32*1024+1000, 1)
h.corrupt(storage.TypeJournal, -1, 19, 1)
h.corrupt(storage.TypeJournal, -1, 32*1024+1000, 1)

h.openDB()
h.check(36, 36)
@@ -242,7 +244,7 @@ func TestCorruptDB_Table(t *testing.T) {
h.compactRangeAt(0, "", "")
h.compactRangeAt(1, "", "")
h.closeDB()
h.corrupt(storage.TypeTable, 100, 1)
h.corrupt(storage.TypeTable, -1, 100, 1)

h.openDB()
h.check(99, 99)
@@ -256,7 +258,7 @@ func TestCorruptDB_TableIndex(t *testing.T) {
h.build(10000)
h.compactMem()
h.closeDB()
h.corrupt(storage.TypeTable, -2000, 500)
h.corrupt(storage.TypeTable, -1, -2000, 500)

h.openDB()
h.check(5000, 9999)
@@ -355,7 +357,7 @@ func TestCorruptDB_CorruptedManifest(t *testing.T) {
h.compactMem()
h.compactRange("", "")
h.closeDB()
h.corrupt(storage.TypeManifest, 0, 1000)
h.corrupt(storage.TypeManifest, -1, 0, 1000)
h.openAssert(false)

h.recover()
@@ -370,7 +372,7 @@ func TestCorruptDB_CompactionInputError(t *testing.T) {
h.build(10)
h.compactMem()
h.closeDB()
h.corrupt(storage.TypeTable, 100, 1)
h.corrupt(storage.TypeTable, -1, 100, 1)

h.openDB()
h.check(9, 9)
@@ -387,7 +389,7 @@ func TestCorruptDB_UnrelatedKeys(t *testing.T) {
h.build(10)
h.compactMem()
h.closeDB()
h.corrupt(storage.TypeTable, 100, 1)
h.corrupt(storage.TypeTable, -1, 100, 1)

h.openDB()
h.put(string(tkey(1000)), string(tval(1000, ctValSize)))
@@ -470,3 +472,31 @@ func TestCorruptDB_MissingTableFiles(t *testing.T) {

h.close()
}

func TestCorruptDB_RecoverTable(t *testing.T) {
h := newDbCorruptHarnessWopt(t, &opt.Options{
WriteBuffer: 112 * opt.KiB,
CompactionTableSize: 90 * opt.KiB,
Filter: filter.NewBloomFilter(10),
})

h.build(1000)
h.compactMem()
h.compactRangeAt(0, "", "")
h.compactRangeAt(1, "", "")
seq := h.db.seq
h.closeDB()
h.corrupt(storage.TypeTable, 0, 1000, 1)
h.corrupt(storage.TypeTable, 3, 10000, 1)
// Corrupted filter shouldn't affect recovery.
h.corrupt(storage.TypeTable, 3, 113888, 10)
h.corrupt(storage.TypeTable, -1, 20000, 1)

h.recover()
if h.db.seq != seq {
t.Errorf("invalid seq, want=%d got=%d", seq, h.db.seq)
}
h.check(985, 985)

h.close()
}
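Editor's note: the corrupt() refactor above replaces "always pick the highest-numbered file" with an explicit index into the list sorted by file number, where a negative index means the last (newest) file. A standalone sketch of that selection convention (names are mine):

package main

import (
	"fmt"
	"sort"
)

// pick returns the fi-th file number in ascending order;
// fi < 0 selects the last (i.e. newest) file.
func pick(nums []int, fi int) (int, bool) {
	sort.Ints(nums)
	if fi < 0 {
		fi = len(nums) - 1
	}
	if fi < 0 || fi >= len(nums) {
		return 0, false
	}
	return nums[fi], true
}

func main() {
	nums := []int{7, 3, 12, 9}
	fmt.Println(pick(nums, 0))  // 3 true
	fmt.Println(pick(nums, -1)) // 12 true
	fmt.Println(pick(nums, 9))  // 0 false
}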
292  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/db.go  (generated, vendored)
@@ -7,15 +7,17 @@
package leveldb

import (
"errors"
"container/list"
"fmt"
"io"
"os"
"runtime"
"strings"
"sync"
"sync/atomic"
"time"

"github.com/syndtr/goleveldb/leveldb/errors"
"github.com/syndtr/goleveldb/leveldb/iterator"
"github.com/syndtr/goleveldb/leveldb/journal"
"github.com/syndtr/goleveldb/leveldb/memdb"
@@ -35,7 +37,7 @@ type DB struct {

// MemDB.
memMu sync.RWMutex
memPool *util.Pool
memPool chan *memdb.DB
mem, frozenMem *memDB
journal *journal.Writer
journalWriter storage.Writer
@@ -45,25 +47,29 @@ type DB struct {

// Snapshot.
snapsMu sync.Mutex
snapsRoot snapshotElement
snapsList *list.List

// Stats.
aliveSnaps, aliveIters int32

// Write.
writeC chan *Batch
writeMergedC chan bool
writeLockC chan struct{}
writeAckC chan error
writeDelay time.Duration
writeDelayN int
journalC chan *Batch
journalAckC chan error

// Compaction.
tcompCmdC chan cCmd
tcompPauseC chan chan<- struct{}
tcompTriggerC chan struct{}
mcompCmdC chan cCmd
mcompTriggerC chan struct{}
compErrC chan error
compErrSetC chan error
compStats [kNumLevels]cStats
tcompCmdC chan cCmd
tcompPauseC chan chan<- struct{}
mcompCmdC chan cCmd
compErrC chan error
compPerErrC chan error
compErrSetC chan error
compStats []cStats

// Close.
closeW sync.WaitGroup
@@ -78,9 +84,11 @@ func openDB(s *session) (*DB, error) {
db := &DB{
s: s,
// Initial sequence
seq: s.stSeq,
seq: s.stSeqNum,
// MemDB
memPool: util.NewPool(1),
memPool: make(chan *memdb.DB, 1),
// Snapshot
snapsList: list.New(),
// Write
writeC: make(chan *Batch),
writeMergedC: make(chan bool),
@@ -89,17 +97,16 @@ func openDB(s *session) (*DB, error) {
journalC: make(chan *Batch),
journalAckC: make(chan error),
// Compaction
tcompCmdC: make(chan cCmd),
tcompPauseC: make(chan chan<- struct{}),
tcompTriggerC: make(chan struct{}, 1),
mcompCmdC: make(chan cCmd),
mcompTriggerC: make(chan struct{}, 1),
compErrC: make(chan error),
compErrSetC: make(chan error),
tcompCmdC: make(chan cCmd),
tcompPauseC: make(chan chan<- struct{}),
mcompCmdC: make(chan cCmd),
compErrC: make(chan error),
compPerErrC: make(chan error),
compErrSetC: make(chan error),
compStats: make([]cStats, s.o.GetNumLevel()),
// Close
closeC: make(chan struct{}),
}
db.initSnapshot()

if err := db.recoverJournal(); err != nil {
return nil, err
@@ -115,8 +122,9 @@ func openDB(s *session) (*DB, error) {
return nil, err
}

// Don't include compaction error goroutine into wait group.
// Doesn't need to be included in the wait group.
go db.compactionError()
go db.mpoolDrain()

db.closeW.Add(3)
go db.tCompaction()
@@ -248,6 +256,10 @@ func RecoverFile(path string, o *opt.Options) (db *DB, err error) {
}

func recoverTable(s *session, o *opt.Options) error {
o = dupOptions(o)
// Mask StrictReader, lets StrictRecovery doing its job.
o.Strict &= ^opt.StrictReader

// Get all tables and sort it by file number.
tableFiles_, err := s.getFiles(storage.TypeTable)
if err != nil {
@@ -256,10 +268,16 @@ func recoverTable(s *session, o *opt.Options) error {
tableFiles := files(tableFiles_)
tableFiles.sort()

var mSeq uint64
var good, corrupted int
rec := new(sessionRecord)
bpool := util.NewBufferPool(o.GetBlockSize() + 5)
var (
maxSeq uint64
recoveredKey, goodKey, corruptedKey, corruptedBlock, droppedTable int

// We will drop corrupted table.
strict = o.GetStrict(opt.StrictRecovery)

rec = &sessionRecord{numLevel: o.GetNumLevel()}
bpool = util.NewBufferPool(o.GetBlockSize() + 5)
)
buildTable := func(iter iterator.Iterator) (tmp storage.File, size int64, err error) {
tmp = s.newTemp()
writer, err := tmp.Create()
@@ -306,7 +324,12 @@ func recoverTable(s *session, o *opt.Options) error {
if err != nil {
return err
}
defer reader.Close()
var closed bool
defer func() {
if !closed {
reader.Close()
}
}()

// Get file size.
size, err := reader.Seek(0, 2)
@@ -314,25 +337,32 @@ func recoverTable(s *session, o *opt.Options) error {
return err
}

var tSeq uint64
var tgood, tcorrupted, blockerr int
var imin, imax []byte
tr := table.NewReader(reader, size, nil, bpool, o)
var (
tSeq uint64
tgoodKey, tcorruptedKey, tcorruptedBlock int
imin, imax []byte
)
tr, err := table.NewReader(reader, size, storage.NewFileInfo(file), nil, bpool, o)
if err != nil {
return err
}
iter := tr.NewIterator(nil, nil)
iter.(iterator.ErrorCallbackSetter).SetErrorCallback(func(err error) {
s.logf("table@recovery found error @%d %q", file.Num(), err)
blockerr++
if errors.IsCorrupted(err) {
s.logf("table@recovery block corruption @%d %q", file.Num(), err)
tcorruptedBlock++
}
})

// Scan the table.
for iter.Next() {
key := iter.Key()
_, seq, _, ok := parseIkey(key)
if !ok {
tcorrupted++
_, seq, _, kerr := parseIkey(key)
if kerr != nil {
tcorruptedKey++
continue
}
tgood++
tgoodKey++
if seq > tSeq {
tSeq = seq
}
@@ -347,8 +377,18 @@ func recoverTable(s *session, o *opt.Options) error {
}
iter.Release()

if tgood > 0 {
if tcorrupted > 0 || blockerr > 0 {
goodKey += tgoodKey
corruptedKey += tcorruptedKey
corruptedBlock += tcorruptedBlock

if strict && (tcorruptedKey > 0 || tcorruptedBlock > 0) {
droppedTable++
s.logf("table@recovery dropped @%d Gk·%d Ck·%d Cb·%d S·%d Q·%d", file.Num(), tgoodKey, tcorruptedKey, tcorruptedBlock, size, tSeq)
return nil
}

if tgoodKey > 0 {
if tcorruptedKey > 0 || tcorruptedBlock > 0 {
// Rebuild the table.
s.logf("table@recovery rebuilding @%d", file.Num())
iter := tr.NewIterator(nil, nil)
@@ -357,25 +397,25 @@ func recoverTable(s *session, o *opt.Options) error {
if err != nil {
return err
}
closed = true
reader.Close()
if err := file.Replace(tmp); err != nil {
return err
}
size = newSize
}
if tSeq > mSeq {
mSeq = tSeq
if tSeq > maxSeq {
maxSeq = tSeq
}
recoveredKey += tgoodKey
// Add table to level 0.
rec.addTable(0, file.Num(), uint64(size), imin, imax)
s.logf("table@recovery recovered @%d N·%d C·%d B·%d S·%d Q·%d", file.Num(), tgood, tcorrupted, blockerr, size, tSeq)
s.logf("table@recovery recovered @%d Gk·%d Ck·%d Cb·%d S·%d Q·%d", file.Num(), tgoodKey, tcorruptedKey, tcorruptedBlock, size, tSeq)
} else {
s.logf("table@recovery unrecoverable @%d C·%d B·%d S·%d", file.Num(), tcorrupted, blockerr, size)
droppedTable++
s.logf("table@recovery unrecoverable @%d Ck·%d Cb·%d S·%d", file.Num(), tcorruptedKey, tcorruptedBlock, size)
}

good += tgood
corrupted += tcorrupted

return nil
}

@@ -392,11 +432,11 @@ func recoverTable(s *session, o *opt.Options) error {
}
}

s.logf("table@recovery recovered F·%d N·%d C·%d Q·%d", len(tableFiles), good, corrupted, mSeq)
s.logf("table@recovery recovered F·%d N·%d Gk·%d Ck·%d Q·%d", len(tableFiles), recoveredKey, goodKey, corruptedKey, maxSeq)
}

// Set sequence number.
rec.setSeq(mSeq + 1)
rec.setSeqNum(maxSeq)

// Create new manifest.
if err := s.create(); err != nil {
@@ -479,26 +519,30 @@ func (db *DB) recoverJournal() error {
if err == io.EOF {
break
}
return err
return errors.SetFile(err, file)
}

buf.Reset()
if _, err := buf.ReadFrom(r); err != nil {
if err == io.ErrUnexpectedEOF {
// This is error returned due to corruption, with strict == false.
continue
} else {
return err
return errors.SetFile(err, file)
}
}
if err := batch.decode(buf.Bytes()); err != nil {
return err
}
if err := batch.memReplay(mem); err != nil {
return err
if err := batch.memDecodeAndReplay(db.seq, buf.Bytes(), mem); err != nil {
if strict || !errors.IsCorrupted(err) {
return errors.SetFile(err, file)
} else {
db.s.logf("journal error: %v (skipped)", err)
// We won't apply sequence number as it might be corrupted.
continue
}
}

// Save sequence number.
db.seq = batch.seq + uint64(batch.len())
db.seq = batch.seq + uint64(batch.Len())

// Flush it if large enough.
if mem.Size() >= writeBuffer {
@@ -559,7 +603,7 @@ func (db *DB) recoverJournal() error {
}

func (db *DB) get(key []byte, seq uint64, ro *opt.ReadOptions) (value []byte, err error) {
ikey := newIKey(key, seq, tSeek)
ikey := newIkey(key, seq, ktSeek)

em, fm := db.getMems()
for _, m := range [...]*memDB{em, fm} {
@@ -568,11 +612,15 @@ func (db *DB) get(key []byte, seq uint64, ro *opt.ReadOptions) (value []byte, er
}
defer m.decref()

mk, mv, me := m.db.Find(ikey)
mk, mv, me := m.mdb.Find(ikey)
if me == nil {
ukey, _, t, ok := parseIkey(mk)
if ok && db.s.icmp.uCompare(ukey, key) == 0 {
if t == tDel {
ukey, _, kt, kerr := parseIkey(mk)
if kerr != nil {
// Shouldn't have had happen.
panic(kerr)
}
if db.s.icmp.uCompare(ukey, key) == 0 {
if kt == ktDel {
return nil, ErrNotFound
}
return append([]byte{}, mv...), nil
@@ -583,17 +631,60 @@ func (db *DB) get(key []byte, seq uint64, ro *opt.ReadOptions) (value []byte, er
}

v := db.s.version()
value, cSched, err := v.get(ikey, ro)
value, cSched, err := v.get(ikey, ro, false)
v.release()
if cSched {
// Trigger table compaction.
db.compTrigger(db.tcompTriggerC)
db.compSendTrigger(db.tcompCmdC)
}
return
}

func (db *DB) has(key []byte, seq uint64, ro *opt.ReadOptions) (ret bool, err error) {
ikey := newIkey(key, seq, ktSeek)

em, fm := db.getMems()
for _, m := range [...]*memDB{em, fm} {
if m == nil {
continue
}
defer m.decref()

mk, _, me := m.mdb.Find(ikey)
if me == nil {
ukey, _, kt, kerr := parseIkey(mk)
if kerr != nil {
// Shouldn't have had happen.
panic(kerr)
}
if db.s.icmp.uCompare(ukey, key) == 0 {
if kt == ktDel {
return false, nil
}
return true, nil
}
} else if me != ErrNotFound {
return false, me
}
}

v := db.s.version()
_, cSched, err := v.get(ikey, ro, true)
v.release()
if cSched {
// Trigger table compaction.
db.compSendTrigger(db.tcompCmdC)
}
if err == nil {
ret = true
} else if err == ErrNotFound {
err = nil
}
return
}

// Get gets the value for the given key. It returns ErrNotFound if the
// DB does not contain the key.
// DB does not contains the key.
//
// The returned slice is its own copy, it is safe to modify the contents
// of the returned slice.
@@ -604,7 +695,23 @@ func (db *DB) Get(key []byte, ro *opt.ReadOptions) (value []byte, err error) {
return
}

return db.get(key, db.getSeq(), ro)
se := db.acquireSnapshot()
defer db.releaseSnapshot(se)
return db.get(key, se.seq, ro)
}

// Has returns true if the DB does contains the given key.
//
// It is safe to modify the contents of the argument after Get returns.
func (db *DB) Has(key []byte, ro *opt.ReadOptions) (ret bool, err error) {
err = db.ok()
if err != nil {
return
}

se := db.acquireSnapshot()
defer db.releaseSnapshot(se)
return db.has(key, se.seq, ro)
}

// NewIterator returns an iterator for the latest snapshot of the
@@ -628,9 +735,11 @@ func (db *DB) NewIterator(slice *util.Range, ro *opt.ReadOptions) iterator.Itera
return iterator.NewEmptyIterator(err)
}

snap := db.newSnapshot()
defer snap.Release()
return snap.NewIterator(slice, ro)
se := db.acquireSnapshot()
defer db.releaseSnapshot(se)
// Iterator holds 'version' lock, 'version' is immutable so snapshot
// can be released after iterator created.
return db.newIterator(se.seq, slice, ro)
}

// GetSnapshot returns a latest snapshot of the underlying DB. A snapshot
@@ -650,11 +759,21 @@ func (db *DB) GetSnapshot() (*Snapshot, error) {
//
// Property names:
// leveldb.num-files-at-level{n}
// Returns the number of filer at level 'n'.
// Returns the number of files at level 'n'.
// leveldb.stats
// Returns statistics of the underlying DB.
// leveldb.sstables
// Returns sstables list for each level.
// leveldb.blockpool
// Returns block pool stats.
// leveldb.cachedblock
// Returns size of cached block.
// leveldb.openedtables
// Returns number of opened tables.
// leveldb.alivesnaps
// Returns number of alive snapshots.
// leveldb.aliveiters
// Returns number of alive iterators.
func (db *DB) GetProperty(name string) (value string, err error) {
err = db.ok()
if err != nil {
@@ -670,12 +789,13 @@ func (db *DB) GetProperty(name string) (value string, err error) {
v := db.s.version()
defer v.release()

numFilesPrefix := "num-files-at-level"
switch {
case strings.HasPrefix(p, "num-files-at-level"):
case strings.HasPrefix(p, numFilesPrefix):
var level uint
var rest string
n, _ := fmt.Scanf("%d%s", &level, &rest)
if n != 1 || level >= kNumLevels {
n, _ := fmt.Sscanf(p[len(numFilesPrefix):], "%d%s", &level, &rest)
if n != 1 || int(level) >= db.s.o.GetNumLevel() {
err = errors.New("leveldb: GetProperty: invalid property: " + name)
} else {
value = fmt.Sprint(v.tLen(int(level)))
@@ -700,6 +820,20 @@ func (db *DB) GetProperty(name string) (value string, err error) {
value += fmt.Sprintf("%d:%d[%q .. %q]\n", t.file.Num(), t.size, t.imin, t.imax)
}
}
case p == "blockpool":
value = fmt.Sprintf("%v", db.s.tops.bpool)
case p == "cachedblock":
if bc := db.s.o.GetBlockCache(); bc != nil {
value = fmt.Sprintf("%d", bc.Size())
} else {
value = "<nil>"
}
case p == "openedtables":
value = fmt.Sprintf("%d", db.s.tops.cache.Size())
case p == "alivesnaps":
value = fmt.Sprintf("%d", atomic.LoadInt32(&db.aliveSnaps))
case p == "aliveiters":
value = fmt.Sprintf("%d", atomic.LoadInt32(&db.aliveIters))
default:
err = errors.New("leveldb: GetProperty: unknown property: " + name)
}
@@ -723,8 +857,8 @@ func (db *DB) SizeOf(ranges []util.Range) (Sizes, error) {

sizes := make(Sizes, 0, len(ranges))
for _, r := range ranges {
imin := newIKey(r.Start, kMaxSeq, tSeek)
imax := newIKey(r.Limit, kMaxSeq, tSeek)
imin := newIkey(r.Start, kMaxSeq, ktSeek)
imax := newIkey(r.Limit, kMaxSeq, ktSeek)
start, err := v.offsetOf(imin)
if err != nil {
return nil, err
@@ -767,18 +901,23 @@ func (db *DB) Close() error {
default:
}

// Signal all goroutines.
close(db.closeC)

// Wait for the close WaitGroup.
// Wait for all gorotines to exit.
db.closeW.Wait()

// Close journal.
// Lock writer and closes journal.
db.writeLockC <- struct{}{}
if db.journal != nil {
db.journal.Close()
db.journalWriter.Close()
}

if db.writeDelayN > 0 {
db.logf("db@write was delayed N·%d T·%v", db.writeDelayN, db.writeDelay)
}

// Close session.
db.s.close()
db.logf("db@close done T·%v", time.Since(start))
@@ -798,7 +937,6 @@ func (db *DB) Close() error {
db.journalWriter = nil
db.journalFile = nil
db.frozenJournalFile = nil
db.snapsRoot = snapshotElement{}
db.closer = nil

return err
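Editor's note: one real bug fixed in the GetProperty hunk above — the old code called fmt.Scanf, which parses standard input rather than the property name; the replacement parses the suffix of the property string with fmt.Sscanf. A small runnable illustration of the new parse:

package main

import "fmt"

func main() {
	p := "num-files-at-level3"
	prefix := "num-files-at-level"
	var level uint
	var rest string
	// Sscanf parses the string suffix; Scanf would have read stdin.
	n, _ := fmt.Sscanf(p[len(prefix):], "%d%s", &level, &rest)
	fmt.Println(n, level) // 1 3: exactly one item matched, no trailing junk
}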
634  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/db_compaction.go  (generated, vendored)
@@ -7,11 +7,12 @@
package leveldb

import (
"errors"
"sync"
"time"

"github.com/syndtr/goleveldb/leveldb/errors"
"github.com/syndtr/goleveldb/leveldb/memdb"
"github.com/syndtr/goleveldb/leveldb/opt"
)

var (
@@ -68,7 +69,7 @@ type cMem struct {
}

func newCMem(s *session) *cMem {
return &cMem{s: s, rec: new(sessionRecord)}
return &cMem{s: s, rec: &sessionRecord{numLevel: s.o.GetNumLevel()}}
}

func (c *cMem) flush(mem *memdb.DB, level int) error {
@@ -84,7 +85,9 @@ func (c *cMem) flush(mem *memdb.DB, level int) error {

// Pick level.
if level < 0 {
level = s.version_NB().pickLevel(t.imin.ukey(), t.imax.ukey())
v := s.version()
level = v.pickLevel(t.imin.ukey(), t.imax.ukey())
v.release()
}
c.rec.addTableFile(level, t)

@@ -95,24 +98,32 @@ func (c *cMem) flush(mem *memdb.DB, level int) error {
}

func (c *cMem) reset() {
c.rec = new(sessionRecord)
c.rec = &sessionRecord{numLevel: c.s.o.GetNumLevel()}
}

func (c *cMem) commit(journal, seq uint64) error {
c.rec.setJournalNum(journal)
c.rec.setSeq(seq)
c.rec.setSeqNum(seq)

// Commit changes.
return c.s.commit(c.rec)
}

func (db *DB) compactionError() {
var err error
var (
err error
wlocked bool
)
noerr:
// No error.
for {
select {
case err = <-db.compErrSetC:
if err != nil {
switch {
case err == nil:
case errors.IsCorrupted(err):
goto hasperr
default:
goto haserr
}
case _, _ = <-db.closeC:
@@ -120,17 +131,39 @@ noerr:
}
}
haserr:
// Transient error.
for {
select {
case db.compErrC <- err:
case err = <-db.compErrSetC:
if err == nil {
switch {
case err == nil:
goto noerr
case errors.IsCorrupted(err):
goto hasperr
default:
}
case _, _ = <-db.closeC:
return
}
}
hasperr:
// Persistent error.
for {
select {
case db.compErrC <- err:
case db.compPerErrC <- err:
case db.writeLockC <- struct{}{}:
// Hold write lock, so that write won't pass-through.
wlocked = true
case _, _ = <-db.closeC:
if wlocked {
// We should release the lock or Close will hang.
<-db.writeLockC
}
return
}
}
}

type compactionTransactCounter int
@@ -139,12 +172,17 @@ func (cnt *compactionTransactCounter) incr() {
*cnt++
}

func (db *DB) compactionTransact(name string, exec func(cnt *compactionTransactCounter) error, rollback func() error) {
type compactionTransactInterface interface {
run(cnt *compactionTransactCounter) error
revert() error
}

func (db *DB) compactionTransact(name string, t compactionTransactInterface) {
defer func() {
if x := recover(); x != nil {
if x == errCompactionTransactExiting && rollback != nil {
if err := rollback(); err != nil {
db.logf("%s rollback error %q", name, err)
if x == errCompactionTransactExiting {
if err := t.revert(); err != nil {
db.logf("%s revert error %q", name, err)
}
}
panic(x)
@@ -156,9 +194,13 @@ func (db *DB) compactionTransact(name string, exec func(cnt *compactionTransactC
backoffMax = 8 * time.Second
backoffMul = 2 * time.Second
)
backoff := backoffMin
backoffT := time.NewTimer(backoff)
lastCnt := compactionTransactCounter(0)
var (
backoff = backoffMin
backoffT = time.NewTimer(backoff)
lastCnt = compactionTransactCounter(0)

disableBackoff = db.s.o.GetDisableCompactionBackoff()
)
for n := 0; ; n++ {
// Check wether the DB is closed.
if db.isClosed() {
@@ -170,11 +212,19 @@ func (db *DB) compactionTransact(name string, exec func(cnt *compactionTransactC

// Execute.
cnt := compactionTransactCounter(0)
err := exec(&cnt)
err := t.run(&cnt)
if err != nil {
db.logf("%s error I·%d %q", name, cnt, err)
}

// Set compaction error status.
select {
case db.compErrSetC <- err:
case perr := <-db.compPerErrC:
if err != nil {
db.logf("%s exiting (persistent error %q)", name, perr)
db.compactionExitTransact()
}
case _, _ = <-db.closeC:
db.logf("%s exiting", name)
db.compactionExitTransact()
@@ -182,31 +232,56 @@
if err == nil {
return
}
db.logf("%s error I·%d %q", name, cnt, err)

// Reset backoff duration if counter is advancing.
if cnt > lastCnt {
backoff = backoffMin
lastCnt = cnt
}

// Backoff.
backoffT.Reset(backoff)
if backoff < backoffMax {
backoff *= backoffMul
if backoff > backoffMax {
backoff = backoffMax
}
}
select {
case <-backoffT.C:
case _, _ = <-db.closeC:
db.logf("%s exiting", name)
if errors.IsCorrupted(err) {
db.logf("%s exiting (corruption detected)", name)
db.compactionExitTransact()
}

if !disableBackoff {
// Reset backoff duration if counter is advancing.
if cnt > lastCnt {
backoff = backoffMin
lastCnt = cnt
}

// Backoff.
backoffT.Reset(backoff)
if backoff < backoffMax {
backoff *= backoffMul
if backoff > backoffMax {
backoff = backoffMax
}
}
select {
case <-backoffT.C:
case _, _ = <-db.closeC:
db.logf("%s exiting", name)
db.compactionExitTransact()
}
}
}
}

type compactionTransactFunc struct {
runFunc func(cnt *compactionTransactCounter) error
revertFunc func() error
}

func (t *compactionTransactFunc) run(cnt *compactionTransactCounter) error {
return t.runFunc(cnt)
}

func (t *compactionTransactFunc) revert() error {
if t.revertFunc != nil {
return t.revertFunc()
}
return nil
}

func (db *DB) compactionTransactFunc(name string, run func(cnt *compactionTransactCounter) error, revert func() error) {
db.compactionTransact(name, &compactionTransactFunc{run, revert})
}

func (db *DB) compactionExitTransact() {
panic(errCompactionTransactExiting)
}
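Editor's note: compactionTransact retries failed compactions with a capped exponential backoff (1s minimum, 8s maximum), resetting whenever the transaction makes forward progress, and skipping the wait entirely when DisableCompactionBackoff is set. A simplified, runnable sketch of the capped doubling — I use a plain *2 here, and the surrounding select-on-timer and progress-reset logic is elided:

package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		backoffMin = 1 * time.Second
		backoffMax = 8 * time.Second
	)
	backoff := backoffMin
	for i := 0; i < 6; i++ {
		fmt.Println(backoff) // 1s 2s 4s 8s 8s 8s
		if backoff < backoffMax {
			backoff *= 2
			if backoff > backoffMax {
				backoff = backoffMax // cap the wait
			}
		}
	}
}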
@@ -221,10 +296,10 @@ func (db *DB) memCompaction() {
c := newCMem(db.s)
stats := new(cStatsStaging)

db.logf("mem@flush N·%d S·%s", mem.db.Len(), shortenb(mem.db.Size()))
db.logf("mem@flush N·%d S·%s", mem.mdb.Len(), shortenb(mem.mdb.Size()))

// Don't compact empty memdb.
if mem.db.Len() == 0 {
if mem.mdb.Len() == 0 {
db.logf("mem@flush skipping")
// drop frozen mem
db.dropFrozenMem()
@@ -232,20 +307,23 @@ func (db *DB) memCompaction() {
}

// Pause table compaction.
ch := make(chan struct{})
resumeC := make(chan struct{})
select {
case db.tcompPauseC <- (chan<- struct{})(ch):
case db.tcompPauseC <- (chan<- struct{})(resumeC):
case <-db.compPerErrC:
close(resumeC)
resumeC = nil
case _, _ = <-db.closeC:
return
}

db.compactionTransact("mem@flush", func(cnt *compactionTransactCounter) (err error) {
db.compactionTransactFunc("mem@flush", func(cnt *compactionTransactCounter) (err error) {
stats.startTimer()
defer stats.stopTimer()
return c.flush(mem.db, -1)
return c.flush(mem.mdb, -1)
}, func() error {
for _, r := range c.rec.addedTables {
db.logf("mem@flush rollback @%d", r.num)
db.logf("mem@flush revert @%d", r.num)
f := db.s.getTableFile(r.num)
if err := f.Remove(); err != nil {
return err
@@ -254,13 +332,13 @@ func (db *DB) memCompaction() {
return nil
})

db.compactionTransact("mem@commit", func(cnt *compactionTransactCounter) (err error) {
db.compactionTransactFunc("mem@commit", func(cnt *compactionTransactCounter) (err error) {
stats.startTimer()
defer stats.stopTimer()
return c.commit(db.journalFile.Num(), db.frozenSeq)
}, nil)

db.logf("mem@flush commited F·%d T·%v", len(c.rec.addedTables), stats.duration)
db.logf("mem@flush committed F·%d T·%v", len(c.rec.addedTables), stats.duration)

for _, r := range c.rec.addedTables {
stats.write += r.size
@@ -271,26 +349,223 @@ func (db *DB) memCompaction() {
db.dropFrozenMem()

// Resume table compaction.
select {
case <-ch:
case _, _ = <-db.closeC:
return
if resumeC != nil {
select {
case <-resumeC:
close(resumeC)
case _, _ = <-db.closeC:
return
}
}

// Trigger table compaction.
db.compTrigger(db.mcompTriggerC)
db.compSendTrigger(db.tcompCmdC)
}

type tableCompactionBuilder struct {
db *DB
s *session
c *compaction
rec *sessionRecord
stat0, stat1 *cStatsStaging

snapHasLastUkey bool
snapLastUkey []byte
snapLastSeq uint64
snapIter int
snapKerrCnt int
snapDropCnt int

kerrCnt int
dropCnt int

minSeq uint64
strict bool
tableSize int

tw *tWriter
}

func (b *tableCompactionBuilder) appendKV(key, value []byte) error {
// Create new table if not already.
if b.tw == nil {
// Check for pause event.
if b.db != nil {
select {
case ch := <-b.db.tcompPauseC:
b.db.pauseCompaction(ch)
case _, _ = <-b.db.closeC:
b.db.compactionExitTransact()
default:
}
}

// Create new table.
var err error
b.tw, err = b.s.tops.create()
if err != nil {
return err
}
}

// Write key/value into table.
return b.tw.append(key, value)
}

func (b *tableCompactionBuilder) needFlush() bool {
return b.tw.tw.BytesLen() >= b.tableSize
}

func (b *tableCompactionBuilder) flush() error {
t, err := b.tw.finish()
if err != nil {
return err
}
b.rec.addTableFile(b.c.level+1, t)
b.stat1.write += t.size
b.s.logf("table@build created L%d@%d N·%d S·%s %q:%q", b.c.level+1, t.file.Num(), b.tw.tw.EntriesLen(), shortenb(int(t.size)), t.imin, t.imax)
b.tw = nil
return nil
}

func (b *tableCompactionBuilder) cleanup() {
if b.tw != nil {
b.tw.drop()
b.tw = nil
}
}

func (b *tableCompactionBuilder) run(cnt *compactionTransactCounter) error {
snapResumed := b.snapIter > 0
hasLastUkey := b.snapHasLastUkey // The key might has zero length, so this is necessary.
lastUkey := append([]byte{}, b.snapLastUkey...)
lastSeq := b.snapLastSeq
b.kerrCnt = b.snapKerrCnt
b.dropCnt = b.snapDropCnt
// Restore compaction state.
b.c.restore()

defer b.cleanup()

b.stat1.startTimer()
defer b.stat1.stopTimer()

iter := b.c.newIterator()
defer iter.Release()
for i := 0; iter.Next(); i++ {
// Incr transact counter.
cnt.incr()

// Skip until last state.
if i < b.snapIter {
continue
}

resumed := false
if snapResumed {
resumed = true
snapResumed = false
}

ikey := iter.Key()
ukey, seq, kt, kerr := parseIkey(ikey)

if kerr == nil {
shouldStop := !resumed && b.c.shouldStopBefore(ikey)

if !hasLastUkey || b.s.icmp.uCompare(lastUkey, ukey) != 0 {
// First occurrence of this user key.

// Only rotate tables if ukey doesn't hop across.
if b.tw != nil && (shouldStop || b.needFlush()) {
if err := b.flush(); err != nil {
return err
}

// Creates snapshot of the state.
b.c.save()
b.snapHasLastUkey = hasLastUkey
b.snapLastUkey = append(b.snapLastUkey[:0], lastUkey...)
b.snapLastSeq = lastSeq
b.snapIter = i
b.snapKerrCnt = b.kerrCnt
b.snapDropCnt = b.dropCnt
}

hasLastUkey = true
lastUkey = append(lastUkey[:0], ukey...)
lastSeq = kMaxSeq
}

switch {
case lastSeq <= b.minSeq:
// Dropped because newer entry for same user key exist
fallthrough // (A)
case kt == ktDel && seq <= b.minSeq && b.c.baseLevelForKey(lastUkey):
// For this user key:
// (1) there is no data in higher levels
// (2) data in lower levels will have larger seq numbers
// (3) data in layers that are being compacted here and have
// smaller seq numbers will be dropped in the next
// few iterations of this loop (by rule (A) above).
// Therefore this deletion marker is obsolete and can be dropped.
lastSeq = seq
b.dropCnt++
continue
default:
lastSeq = seq
}
} else {
if b.strict {
return kerr
}

// Don't drop corrupted keys.
hasLastUkey = false
lastUkey = lastUkey[:0]
lastSeq = kMaxSeq
b.kerrCnt++
}

if err := b.appendKV(ikey, iter.Value()); err != nil {
return err
}
}

if err := iter.Error(); err != nil {
return err
}

// Finish last table.
if b.tw != nil && !b.tw.empty() {
return b.flush()
}
return nil
}

func (b *tableCompactionBuilder) revert() error {
for _, at := range b.rec.addedTables {
b.s.logf("table@build revert @%d", at.num)
f := b.s.getTableFile(at.num)
if err := f.Remove(); err != nil {
return err
}
}
return nil
}

func (db *DB) tableCompaction(c *compaction, noTrivial bool) {
rec := new(sessionRecord)
rec.addCompactionPointer(c.level, c.imax)
defer c.release()

rec := &sessionRecord{numLevel: db.s.o.GetNumLevel()}
rec.addCompPtr(c.level, c.imax)

if !noTrivial && c.trivial() {
t := c.tables[0][0]
db.logf("table@move L%d@%d -> L%d", c.level, t.file.Num(), c.level+1)
rec.deleteTable(c.level, t.file.Num())
rec.delTable(c.level, t.file.Num())
rec.addTableFile(c.level+1, t)
db.compactionTransact("table@move", func(cnt *compactionTransactCounter) (err error) {
db.compactionTransactFunc("table@move", func(cnt *compactionTransactCounter) (err error) {
return db.s.commit(rec)
}, nil)
return
@@ -301,184 +576,34 @@ func (db *DB) tableCompaction(c *compaction, noTrivial bool) {
for _, t := range tables {
stats[i].read += t.size
// Insert deleted tables into record
rec.deleteTable(c.level+i, t.file.Num())
rec.delTable(c.level+i, t.file.Num())
}
}
sourceSize := int(stats[0].read + stats[1].read)
minSeq := db.minSeq()
db.logf("table@compaction L%d·%d -> L%d·%d S·%s Q·%d", c.level, len(c.tables[0]), c.level+1, len(c.tables[1]), shortenb(sourceSize), minSeq)

var snapUkey []byte
var snapHasUkey bool
var snapSeq uint64
var snapIter int
var snapDropCnt int
var dropCnt int
db.compactionTransact("table@build", func(cnt *compactionTransactCounter) (err error) {
ukey := append([]byte{}, snapUkey...)
hasUkey := snapHasUkey
lseq := snapSeq
dropCnt = snapDropCnt
snapSched := snapIter == 0

var tw *tWriter
finish := func() error {
t, err := tw.finish()
if err != nil {
return err
}
rec.addTableFile(c.level+1, t)
stats[1].write += t.size
db.logf("table@build created L%d@%d N·%d S·%s %q:%q", c.level+1, t.file.Num(), tw.tw.EntriesLen(), shortenb(int(t.size)), t.imin, t.imax)
return nil
}

defer func() {
stats[1].stopTimer()
if tw != nil {
tw.drop()
tw = nil
}
}()

stats[1].startTimer()
iter := c.newIterator()
defer iter.Release()
for i := 0; iter.Next(); i++ {
// Incr transact counter.
cnt.incr()

// Skip until last state.
if i < snapIter {
continue
}

ikey := iKey(iter.Key())

if c.shouldStopBefore(ikey) && tw != nil {
err = finish()
if err != nil {
return
}
snapSched = true
tw = nil
}

// Scheduled for snapshot, snapshot will used to retry compaction
// if error occured.
if snapSched {
snapUkey = append(snapUkey[:0], ukey...)
snapHasUkey = hasUkey
snapSeq = lseq
snapIter = i
snapDropCnt = dropCnt
snapSched = false
}

if seq, vt, ok := ikey.parseNum(); !ok {
// Don't drop error keys
ukey = ukey[:0]
hasUkey = false
lseq = kMaxSeq
} else {
if !hasUkey || db.s.icmp.uCompare(ikey.ukey(), ukey) != 0 {
// First occurrence of this user key
ukey = append(ukey[:0], ikey.ukey()...)
hasUkey = true
lseq = kMaxSeq
}

drop := false
if lseq <= minSeq {
// Dropped because newer entry for same user key exist
drop = true // (A)
} else if vt == tDel && seq <= minSeq && c.baseLevelForKey(ukey) {
// For this user key:
// (1) there is no data in higher levels
// (2) data in lower levels will have larger seq numbers
// (3) data in layers that are being compacted here and have
// smaller seq numbers will be dropped in the next
// few iterations of this loop (by rule (A) above).
// Therefore this deletion marker is obsolete and can be dropped.
drop = true
}

lseq = seq
if drop {
dropCnt++
continue
}
}

// Create new table if not already
if tw == nil {
// Check for pause event.
select {
case ch := <-db.tcompPauseC:
db.pauseCompaction(ch)
case _, _ = <-db.closeC:
db.compactionExitTransact()
default:
}

// Create new table.
tw, err = db.s.tops.create()
if err != nil {
return
}
}

// Write key/value into table
err = tw.append(ikey, iter.Value())
if err != nil {
return
}

// Finish table if it is big enough
if tw.tw.BytesLen() >= kMaxTableSize {
err = finish()
if err != nil {
return
}
snapSched = true
tw = nil
}
}

err = iter.Error()
if err != nil {
return
}

// Finish last table
if tw != nil && !tw.empty() {
err = finish()
if err != nil {
return
}
tw = nil
}
return
}, func() error {
for _, r := range rec.addedTables {
db.logf("table@build rollback @%d", r.num)
f := db.s.getTableFile(r.num)
if err := f.Remove(); err != nil {
return err
}
}
return nil
})
b := &tableCompactionBuilder{
db: db,
s: db.s,
c: c,
rec: rec,
stat1: &stats[1],
minSeq: minSeq,
strict: db.s.o.GetStrict(opt.StrictCompaction),
tableSize: db.s.o.GetCompactionTableSize(c.level + 1),
}
db.compactionTransact("table@build", b)

// Commit changes
db.compactionTransact("table@commit", func(cnt *compactionTransactCounter) (err error) {
db.compactionTransactFunc("table@commit", func(cnt *compactionTransactCounter) (err error) {
stats[1].startTimer()
defer stats[1].stopTimer()
return db.s.commit(rec)
}, nil)

resultSize := int(stats[1].write)
db.logf("table@compaction commited F%s S%s D·%d T·%v", sint(len(rec.addedTables)-len(rec.deletedTables)), sshortenb(resultSize-sourceSize), dropCnt, stats[1].duration)
db.logf("table@compaction committed F%s S%s Ke·%d D·%d T·%v", sint(len(rec.addedTables)-len(rec.deletedTables)), sshortenb(resultSize-sourceSize), b.kerrCnt, b.dropCnt, stats[1].duration)

// Save compaction stats
for i := range stats {
@@ -494,14 +619,14 @@ func (db *DB) tableRangeCompaction(level int, umin, umax []byte) {
db.tableCompaction(c, true)
}
} else {
v := db.s.version_NB()

v := db.s.version()
m := 1
for i, t := range v.tables[1:] {
if t.overlaps(db.s.icmp, umin, umax, false) {
m = i + 1
}
}
v.release()

for level := 0; level < m; level++ {
if c := db.s.getCompactionRange(level, umin, umax); c != nil {
@@ -518,7 +643,9 @@ func (db *DB) tableAutoCompaction() {
}

func (db *DB) tableNeedCompaction() bool {
return db.s.version_NB().needCompaction()
v := db.s.version()
defer v.release()
return v.needCompaction()
}

func (db *DB) pauseCompaction(ch chan<- struct{}) {
@@ -538,7 +665,12 @@ type cIdle struct {
}

func (r cIdle) ack(err error) {
r.ackC <- err
if r.ackC != nil {
defer func() {
recover()
}()
r.ackC <- err
}
}

type cRange struct {
@@ -548,29 +680,45 @@ type cRange struct {
}

func (r cRange) ack(err error) {
defer func() {
recover()
}()
if r.ackC != nil {
defer func() {
recover()
}()
r.ackC <- err
}
}

func (db *DB) compSendIdle(compC chan<- cCmd) error {
// This will trigger auto compation and/or wait for all compaction to be done.
func (db *DB) compSendIdle(compC chan<- cCmd) (err error) {
ch := make(chan error)
defer close(ch)
// Send cmd.
select {
case compC <- cIdle{ch}:
case err := <-db.compErrC:
return err
case err = <-db.compErrC:
return
case _, _ = <-db.closeC:
return ErrClosed
}
// Wait cmd.
return <-ch
select {
case err = <-ch:
case err = <-db.compErrC:
case _, _ = <-db.closeC:
return ErrClosed
}
return err
}

// This will trigger auto compaction but will not wait for it.
func (db *DB) compSendTrigger(compC chan<- cCmd) {
select {
case compC <- cIdle{}:
default:
}
}

// Send range compaction request.
func (db *DB) compSendRange(compC chan<- cCmd, level int, min, max []byte) (err error) {
ch := make(chan error)
defer close(ch)
@@ -584,19 +732,14 @@ func (db *DB) compSendRange(compC chan<- cCmd, level int, min, max []byte) (err
}
// Wait cmd.
select {
case err = <-db.compErrC:
case err = <-ch:
case err = <-db.compErrC:
case _, _ = <-db.closeC:
return ErrClosed
}
return err
}

func (db *DB) compTrigger(compTriggerC chan struct{}) {
select {
case compTriggerC <- struct{}{}:
default:
}
}

func (db *DB) mCompaction() {
var x cCmd

@@ -615,11 +758,14 @@ func (db *DB) mCompaction() {
for {
select {
case x = <-db.mcompCmdC:
db.memCompaction()
x.ack(nil)
x = nil
case <-db.mcompTriggerC:
db.memCompaction()
switch x.(type) {
case cIdle:
db.memCompaction()
x.ack(nil)
x = nil
default:
panic("leveldb: unknown command")
}
case _, _ = <-db.closeC:
return
}
@@ -650,7 +796,6 @@ func (db *DB) tCompaction() {
if db.tableNeedCompaction() {
select {
case x = <-db.tcompCmdC:
case <-db.tcompTriggerC:
case ch := <-db.tcompPauseC:
db.pauseCompaction(ch)
continue
@@ -666,7 +811,6 @@ func (db *DB) tCompaction() {
ackQ = ackQ[:0]
select {
case x = <-db.tcompCmdC:
case <-db.tcompTriggerC:
case ch := <-db.tcompPauseC:
db.pauseCompaction(ch)
continue
@@ -681,6 +825,8 @@ func (db *DB) tCompaction() {
case cRange:
db.tableRangeCompaction(cmd.level, cmd.min, cmd.max)
x.ack(nil)
default:
panic("leveldb: unknown command")
}
x = nil
}
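Editor's note: compSendTrigger above relies on a non-blocking send: if a compaction command is already pending, the duplicate trigger is simply dropped rather than queued. The coalescing pattern in isolation (I use a 1-buffered channel to make the coalescing visible):

package main

import "fmt"

// trigger attempts a non-blocking send; duplicates coalesce while
// one trigger is still pending.
func trigger(c chan struct{}) bool {
	select {
	case c <- struct{}{}:
		return true
	default:
		return false
	}
}

func main() {
	c := make(chan struct{}, 1)
	fmt.Println(trigger(c)) // true: slot free
	fmt.Println(trigger(c)) // false: coalesced with the pending trigger
	<-c                     // consumer drains the trigger
	fmt.Println(trigger(c)) // true again
}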
52  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/db_iter.go  (generated, vendored)
@@ -10,6 +10,7 @@ import (
|
||||
"errors"
|
||||
"runtime"
|
||||
"sync"
|
||||
"sync/atomic"
|
||||
|
||||
"github.com/syndtr/goleveldb/leveldb/iterator"
|
||||
"github.com/syndtr/goleveldb/leveldb/opt"
|
||||
@@ -38,16 +39,16 @@ func (db *DB) newRawIterator(slice *util.Range, ro *opt.ReadOptions) iterator.It
|
||||
ti := v.getIterators(slice, ro)
|
||||
n := len(ti) + 2
|
||||
i := make([]iterator.Iterator, 0, n)
|
||||
emi := em.db.NewIterator(slice)
|
||||
emi := em.mdb.NewIterator(slice)
|
||||
emi.SetReleaser(&memdbReleaser{m: em})
|
||||
i = append(i, emi)
|
||||
if fm != nil {
|
||||
fmi := fm.db.NewIterator(slice)
|
||||
fmi := fm.mdb.NewIterator(slice)
|
||||
fmi.SetReleaser(&memdbReleaser{m: fm})
|
||||
i = append(i, fmi)
|
||||
}
|
||||
i = append(i, ti...)
|
||||
strict := db.s.o.GetStrict(opt.StrictIterator) || ro.GetStrict(opt.StrictIterator)
strict := opt.GetStrict(db.s.o.Options, ro, opt.StrictReader)
mi := iterator.NewMergedIterator(i, db.s.icmp, strict)
mi.SetReleaser(&versionReleaser{v: v})
return mi
@@ -58,21 +59,23 @@ func (db *DB) newIterator(seq uint64, slice *util.Range, ro *opt.ReadOptions) *d
if slice != nil {
islice = &util.Range{}
if slice.Start != nil {
islice.Start = newIKey(slice.Start, kMaxSeq, tSeek)
islice.Start = newIkey(slice.Start, kMaxSeq, ktSeek)
}
if slice.Limit != nil {
islice.Limit = newIKey(slice.Limit, kMaxSeq, tSeek)
islice.Limit = newIkey(slice.Limit, kMaxSeq, ktSeek)
}
}
rawIter := db.newRawIterator(islice, ro)
iter := &dbIter{
db: db,
icmp: db.s.icmp,
iter: rawIter,
seq: seq,
strict: db.s.o.GetStrict(opt.StrictIterator) || ro.GetStrict(opt.StrictIterator),
strict: opt.GetStrict(db.s.o.Options, ro, opt.StrictReader),
key: make([]byte, 0),
value: make([]byte, 0),
}
atomic.AddInt32(&db.aliveIters, 1)
runtime.SetFinalizer(iter, (*dbIter).Release)
return iter
}
@@ -89,6 +92,7 @@ const (

// dbIter represent an interator states over a database session.
type dbIter struct {
db *DB
icmp *iComparer
iter iterator.Iterator
seq uint64
@@ -158,7 +162,7 @@ func (i *dbIter) Seek(key []byte) bool {
return false
}

ikey := newIKey(key, i.seq, tSeek)
ikey := newIkey(key, i.seq, ktSeek)
if i.iter.Seek(ikey) {
i.dir = dirSOI
return i.next()
@@ -170,15 +174,14 @@ func (i *dbIter) Seek(key []byte) bool {

func (i *dbIter) next() bool {
for {
ukey, seq, t, ok := parseIkey(i.iter.Key())
if ok {
if ukey, seq, kt, kerr := parseIkey(i.iter.Key()); kerr == nil {
if seq <= i.seq {
switch t {
case tDel:
switch kt {
case ktDel:
// Skip deleted key.
i.key = append(i.key[:0], ukey...)
i.dir = dirForward
case tVal:
case ktVal:
if i.dir == dirSOI || i.icmp.uCompare(ukey, i.key) > 0 {
i.key = append(i.key[:0], ukey...)
i.value = append(i.value[:0], i.iter.Value()...)
@@ -188,7 +191,7 @@ func (i *dbIter) next() bool {
}
}
} else if i.strict {
i.setErr(errInvalidIkey)
i.setErr(kerr)
break
}
if !i.iter.Next() {
@@ -221,20 +224,19 @@ func (i *dbIter) prev() bool {
del := true
if i.iter.Valid() {
for {
ukey, seq, t, ok := parseIkey(i.iter.Key())
if ok {
if ukey, seq, kt, kerr := parseIkey(i.iter.Key()); kerr == nil {
if seq <= i.seq {
if !del && i.icmp.uCompare(ukey, i.key) < 0 {
return true
}
del = (t == tDel)
del = (kt == ktDel)
if !del {
i.key = append(i.key[:0], ukey...)
i.value = append(i.value[:0], i.iter.Value()...)
}
}
} else if i.strict {
i.setErr(errInvalidIkey)
i.setErr(kerr)
return false
}
if !i.iter.Prev() {
@@ -263,13 +265,12 @@ func (i *dbIter) Prev() bool {
return i.Last()
case dirForward:
for i.iter.Prev() {
ukey, _, _, ok := parseIkey(i.iter.Key())
if ok {
if ukey, _, _, kerr := parseIkey(i.iter.Key()); kerr == nil {
if i.icmp.uCompare(ukey, i.key) < 0 {
goto cont
}
} else if i.strict {
i.setErr(errInvalidIkey)
i.setErr(kerr)
return false
}
}
@@ -303,6 +304,7 @@ func (i *dbIter) Release() {

if i.releaser != nil {
i.releaser.Release()
i.releaser = nil
}

i.dir = dirReleased
@@ -310,13 +312,19 @@ func (i *dbIter) Release() {
i.value = nil
i.iter.Release()
i.iter = nil
atomic.AddInt32(&i.db.aliveIters, -1)
i.db = nil
}
}

func (i *dbIter) SetReleaser(releaser util.Releaser) {
if i.dir != dirReleased {
i.releaser = releaser
if i.dir == dirReleased {
panic(util.ErrReleased)
}
if i.releaser != nil && releaser != nil {
panic(util.ErrHasReleaser)
}
i.releaser = releaser
}

func (i *dbIter) Error() error {
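For orientation, a minimal editor-added usage sketch (not part of the diff) of the iterator lifecycle these hunks touch: alive-iterator accounting, the Release contract, and error reporting via kerr. The path demo.db is illustrative; the API calls are goleveldb's public ones.

```go
package main

import (
	"fmt"
	"log"

	"github.com/syndtr/goleveldb/leveldb"
)

func main() {
	db, err := leveldb.OpenFile("demo.db", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// NewIterator increments db.aliveIters; Release (also wired to a
	// finalizer in the hunk above) decrements it again.
	iter := db.NewIterator(nil, nil)
	defer iter.Release()
	for iter.Next() {
		fmt.Printf("%s = %s\n", iter.Key(), iter.Value())
	}
	// With the change above, corrupted internal keys surface here via
	// the parse error (kerr) instead of a generic errInvalidIkey.
	if err := iter.Error(); err != nil {
		log.Fatal(err)
	}
}
```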
94  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/db_snapshot.go  generated  vendored
@@ -7,8 +7,10 @@
package leveldb

import (
"container/list"
"runtime"
"sync"
"sync/atomic"

"github.com/syndtr/goleveldb/leveldb/iterator"
"github.com/syndtr/goleveldb/leveldb/opt"
@@ -18,51 +20,41 @@ import (
type snapshotElement struct {
seq uint64
ref int
// Next and previous pointers in the doubly-linked list of elements.
next, prev *snapshotElement
}

// Initialize the snapshot.
func (db *DB) initSnapshot() {
db.snapsRoot.next = &db.snapsRoot
db.snapsRoot.prev = &db.snapsRoot
e *list.Element
}

// Acquires a snapshot, based on latest sequence.
func (db *DB) acquireSnapshot() *snapshotElement {
db.snapsMu.Lock()
defer db.snapsMu.Unlock()

seq := db.getSeq()
elem := db.snapsRoot.prev
if elem == &db.snapsRoot || elem.seq != seq {
at := db.snapsRoot.prev
next := at.next
elem = &snapshotElement{
seq: seq,
prev: at,
next: next,

if e := db.snapsList.Back(); e != nil {
se := e.Value.(*snapshotElement)
if se.seq == seq {
se.ref++
return se
} else if seq < se.seq {
panic("leveldb: sequence number is not increasing")
}
at.next = elem
next.prev = elem
}
elem.ref++
db.snapsMu.Unlock()
return elem
se := &snapshotElement{seq: seq, ref: 1}
se.e = db.snapsList.PushBack(se)
return se
}

// Releases given snapshot element.
func (db *DB) releaseSnapshot(elem *snapshotElement) {
if !db.isClosed() {
db.snapsMu.Lock()
elem.ref--
if elem.ref == 0 {
elem.prev.next = elem.next
elem.next.prev = elem.prev
elem.next = nil
elem.prev = nil
} else if elem.ref < 0 {
panic("leveldb: Snapshot: negative element reference")
}
db.snapsMu.Unlock()
func (db *DB) releaseSnapshot(se *snapshotElement) {
db.snapsMu.Lock()
defer db.snapsMu.Unlock()

se.ref--
if se.ref == 0 {
db.snapsList.Remove(se.e)
se.e = nil
} else if se.ref < 0 {
panic("leveldb: Snapshot: negative element reference")
}
}

@@ -70,10 +62,11 @@ func (db *DB) releaseSnapshot(elem *snapshotElement) {
func (db *DB) minSeq() uint64 {
db.snapsMu.Lock()
defer db.snapsMu.Unlock()
elem := db.snapsRoot.prev
if elem != &db.snapsRoot {
return elem.seq

if e := db.snapsList.Front(); e != nil {
return e.Value.(*snapshotElement).seq
}

return db.getSeq()
}

@@ -81,7 +74,7 @@ func (db *DB) minSeq() uint64 {
type Snapshot struct {
db *DB
elem *snapshotElement
mu sync.Mutex
mu sync.RWMutex
released bool
}

@@ -91,12 +84,13 @@ func (db *DB) newSnapshot() *Snapshot {
db: db,
elem: db.acquireSnapshot(),
}
atomic.AddInt32(&db.aliveSnaps, 1)
runtime.SetFinalizer(snap, (*Snapshot).Release)
return snap
}

// Get gets the value for the given key. It returns ErrNotFound if
// the DB does not contain the key.
// the DB does not contains the key.
//
// The caller should not modify the contents of the returned slice, but
// it is safe to modify the contents of the argument after Get returns.
@@ -105,8 +99,8 @@ func (snap *Snapshot) Get(key []byte, ro *opt.ReadOptions) (value []byte, err er
if err != nil {
return
}
snap.mu.Lock()
defer snap.mu.Unlock()
snap.mu.RLock()
defer snap.mu.RUnlock()
if snap.released {
err = ErrSnapshotReleased
return
@@ -114,6 +108,23 @@ func (snap *Snapshot) Get(key []byte, ro *opt.ReadOptions) (value []byte, err er
return snap.db.get(key, snap.elem.seq, ro)
}

// Has returns true if the DB does contains the given key.
//
// It is safe to modify the contents of the argument after Get returns.
func (snap *Snapshot) Has(key []byte, ro *opt.ReadOptions) (ret bool, err error) {
err = snap.db.ok()
if err != nil {
return
}
snap.mu.RLock()
defer snap.mu.RUnlock()
if snap.released {
err = ErrSnapshotReleased
return
}
return snap.db.has(key, snap.elem.seq, ro)
}

// NewIterator returns an iterator for the snapshot of the uderlying DB.
// The returned iterator is not goroutine-safe, but it is safe to use
// multiple iterators concurrently, with each in a dedicated goroutine.
@@ -160,6 +171,7 @@ func (snap *Snapshot) Release() {

snap.released = true
snap.db.releaseSnapshot(snap.elem)
atomic.AddInt32(&snap.db.aliveSnaps, -1)
snap.db = nil
snap.elem = nil
}
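A small editor-added sketch of the Snapshot lifecycle implemented above (ref-counted elements held on a container/list, dropped on Release). The function name frozenRead and the imports noted in the comment are illustrative; GetSnapshot, Has, Get and Release are the library's public API.

```go
// Assumes: import ("fmt"; "github.com/syndtr/goleveldb/leveldb")
func frozenRead(db *leveldb.DB, key []byte) error {
	// Reads through the snapshot see the sequence number captured at
	// GetSnapshot time, even if the key is overwritten afterwards.
	snap, err := db.GetSnapshot()
	if err != nil {
		return err
	}
	defer snap.Release() // drops the ref on the shared snapshotElement

	if ok, err := snap.Has(key, nil); err != nil || !ok {
		return err
	}
	val, err := snap.Get(key, nil)
	if err != nil {
		return err
	}
	fmt.Printf("frozen view: %s\n", val)
	return nil
}
```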
70  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/db_state.go  generated  vendored
@@ -8,16 +8,16 @@ package leveldb

import (
"sync/atomic"
"time"

"github.com/syndtr/goleveldb/leveldb/journal"
"github.com/syndtr/goleveldb/leveldb/memdb"
"github.com/syndtr/goleveldb/leveldb/util"
)

type memDB struct {
pool *util.Pool
db *memdb.DB
ref int32
db *DB
mdb *memdb.DB
ref int32
}

func (m *memDB) incref() {
@@ -26,7 +26,13 @@ func (m *memDB) incref() {

func (m *memDB) decref() {
if ref := atomic.AddInt32(&m.ref, -1); ref == 0 {
m.pool.Put(m)
// Only put back memdb with std capacity.
if m.mdb.Capacity() == m.db.s.o.GetWriteBuffer() {
m.mdb.Reset()
m.db.mpoolPut(m.mdb)
}
m.db = nil
m.mdb = nil
} else if ref < 0 {
panic("negative memdb ref")
}
@@ -42,6 +48,41 @@ func (db *DB) addSeq(delta uint64) {
atomic.AddUint64(&db.seq, delta)
}

func (db *DB) mpoolPut(mem *memdb.DB) {
defer func() {
recover()
}()
select {
case db.memPool <- mem:
default:
}
}

func (db *DB) mpoolGet() *memdb.DB {
select {
case mem := <-db.memPool:
return mem
default:
return nil
}
}

func (db *DB) mpoolDrain() {
ticker := time.NewTicker(30 * time.Second)
for {
select {
case <-ticker.C:
select {
case <-db.memPool:
default:
}
case _, _ = <-db.closeC:
close(db.memPool)
return
}
}
}

// Create new memdb and froze the old one; need external synchronization.
// newMem only called synchronously by the writer.
func (db *DB) newMem(n int) (mem *memDB, err error) {
@@ -70,18 +111,15 @@ func (db *DB) newMem(n int) (mem *memDB, err error) {
db.journalWriter = w
db.journalFile = file
db.frozenMem = db.mem
mem, ok := db.memPool.Get().(*memDB)
if ok && mem.db.Capacity() >= n {
mem.db.Reset()
mem.incref()
} else {
mem = &memDB{
pool: db.memPool,
db: memdb.New(db.s.icmp, maxInt(db.s.o.GetWriteBuffer(), n)),
ref: 1,
}
mdb := db.mpoolGet()
if mdb == nil || mdb.Capacity() < n {
mdb = memdb.New(db.s.icmp, maxInt(db.s.o.GetWriteBuffer(), n))
}
mem = &memDB{
db: db,
mdb: mdb,
ref: 2,
}
mem.incref()
db.mem = mem
// The seq only incremented by the writer. And whoever called newMem
// should hold write lock, so no need additional synchronization here.
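The mpoolPut/mpoolGet/mpoolDrain trio above replaces util.Pool with a buffered channel used as a non-blocking free list. An editor-added, standalone sketch of the same pattern (names are illustrative; only the standard library is used):

```go
// A buffered channel used as a non-blocking object pool, mirroring
// mpoolPut/mpoolGet above.
type bufPool struct {
	ch chan []byte
}

func newBufPool(size int) *bufPool {
	return &bufPool{ch: make(chan []byte, size)}
}

// put never blocks: if the pool is full, the buffer is simply left to
// the garbage collector, just as mpoolPut drops surplus memdbs.
func (p *bufPool) put(b []byte) {
	select {
	case p.ch <- b:
	default:
	}
}

// get never blocks either: an empty pool yields nil and the caller
// allocates fresh, as newMem does when mpoolGet returns nil.
func (p *bufPool) get() []byte {
	select {
	case b := <-p.ch:
		return b
	default:
		return nil
	}
}
```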
847  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/db_test.go  generated  vendored
File diff suppressed because it is too large
11  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/db_util.go  generated  vendored
@@ -7,8 +7,7 @@
package leveldb

import (
"errors"

"github.com/syndtr/goleveldb/leveldb/errors"
"github.com/syndtr/goleveldb/leveldb/iterator"
"github.com/syndtr/goleveldb/leveldb/opt"
"github.com/syndtr/goleveldb/leveldb/storage"
@@ -38,7 +37,9 @@ func (db *DB) logf(format string, v ...interface{}) { db.s.logf(format, v...) }

// Check and clean files.
func (db *DB) checkAndCleanFiles() error {
v := db.s.version_NB()
v := db.s.version()
defer v.release()

tablesMap := make(map[uint64]bool)
for _, tables := range v.tables {
for _, t := range tables {
@@ -78,12 +79,14 @@ func (db *DB) checkAndCleanFiles() error {
}

if nTables != len(tablesMap) {
var missing []*storage.FileInfo
for num, present := range tablesMap {
if !present {
missing = append(missing, &storage.FileInfo{Type: storage.TypeTable, Num: num})
db.logf("db@janitor table missing @%d", num)
}
}
return ErrCorrupted{Type: MissingFiles, Err: errors.New("leveldb: table files missing")}
return errors.NewErrCorrupted(nil, &errors.ErrMissingFiles{Files: missing})
}

db.logf("db@janitor F·%d G·%d", len(files), len(rem))
65  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/db_write.go  generated  vendored
@@ -59,7 +59,7 @@ func (db *DB) rotateMem(n int) (mem *memDB, err error) {
}

// Schedule memdb compaction.
db.compTrigger(db.mcompTriggerC)
db.compSendTrigger(db.mcompCmdC)
return
}

@@ -75,14 +75,14 @@ func (db *DB) flush(n int) (mem *memDB, nn int, err error) {
mem = nil
}
}()
nn = mem.db.Free()
nn = mem.mdb.Free()
switch {
case v.tLen(0) >= kL0_SlowdownWritesTrigger && !delayed:
case v.tLen(0) >= db.s.o.GetWriteL0SlowdownTrigger() && !delayed:
delayed = true
time.Sleep(time.Millisecond)
case nn >= n:
return false
case v.tLen(0) >= kL0_StopWritesTrigger:
case v.tLen(0) >= db.s.o.GetWriteL0PauseTrigger():
delayed = true
err = db.compSendIdle(db.tcompCmdC)
if err != nil {
@@ -90,13 +90,13 @@ func (db *DB) flush(n int) (mem *memDB, nn int, err error) {
}
default:
// Allow memdb to grow if it has no entry.
if mem.db.Len() == 0 {
if mem.mdb.Len() == 0 {
nn = n
} else {
mem.decref()
mem, err = db.rotateMem(n)
if err == nil {
nn = mem.db.Free()
nn = mem.mdb.Free()
} else {
nn = 0
}
@@ -109,7 +109,12 @@ func (db *DB) flush(n int) (mem *memDB, nn int, err error) {
for flush() {
}
if delayed {
db.logf("db@write delayed T·%v", time.Since(start))
db.writeDelay += time.Since(start)
db.writeDelayN++
} else if db.writeDelayN > 0 {
db.writeDelay = 0
db.writeDelayN = 0
db.logf("db@write was delayed N·%d T·%v", db.writeDelayN, db.writeDelay)
}
return
}
@@ -120,28 +125,33 @@ func (db *DB) flush(n int) (mem *memDB, nn int, err error) {
// It is safe to modify the contents of the arguments after Write returns.
func (db *DB) Write(b *Batch, wo *opt.WriteOptions) (err error) {
err = db.ok()
if err != nil || b == nil || b.len() == 0 {
if err != nil || b == nil || b.Len() == 0 {
return
}

b.init(wo.GetSync())

// The write happen synchronously.
retry:
select {
case db.writeC <- b:
if <-db.writeMergedC {
return <-db.writeAckC
}
goto retry
case db.writeLockC <- struct{}{}:
case err = <-db.compPerErrC:
return
case _, _ = <-db.closeC:
return ErrClosed
}

merged := 0
danglingMerge := false
defer func() {
<-db.writeLockC
if danglingMerge {
db.writeMergedC <- false
} else {
<-db.writeLockC
}
for i := 0; i < merged; i++ {
db.writeAckC <- err
}
@@ -170,7 +180,7 @@ drain:
db.writeMergedC <- true
merged++
} else {
db.writeMergedC <- false
danglingMerge = true
break drain
}
default:
@@ -185,35 +195,43 @@ drain:
if b.size() >= (128 << 10) {
// Push the write batch to the journal writer
select {
case db.journalC <- b:
// Write into memdb
if berr := b.memReplay(mem.mdb); berr != nil {
panic(berr)
}
case err = <-db.compPerErrC:
return
case _, _ = <-db.closeC:
err = ErrClosed
return
case db.journalC <- b:
// Write into memdb
b.memReplay(mem.db)
}
// Wait for journal writer
select {
case _, _ = <-db.closeC:
err = ErrClosed
return
case err = <-db.journalAckC:
if err != nil {
// Revert memdb if error detected
b.revertMemReplay(mem.db)
if berr := b.revertMemReplay(mem.mdb); berr != nil {
panic(berr)
}
return
}
case _, _ = <-db.closeC:
err = ErrClosed
return
}
} else {
err = db.writeJournal(b)
if err != nil {
return
}
b.memReplay(mem.db)
if berr := b.memReplay(mem.mdb); berr != nil {
panic(berr)
}
}

// Set last seq number.
db.addSeq(uint64(b.len()))
db.addSeq(uint64(b.Len()))

if b.size() >= memFree {
db.rotateMem(0)
@@ -262,8 +280,11 @@ func (db *DB) CompactRange(r util.Range) error {
return err
}

// Lock writer.
select {
case db.writeLockC <- struct{}{}:
case err := <-db.compPerErrC:
return err
case _, _ = <-db.closeC:
return ErrClosed
}
@@ -271,7 +292,7 @@ func (db *DB) CompactRange(r util.Range) error {
// Check for overlaps in memdb.
mem := db.getEffectiveMem()
defer mem.decref()
if isMemOverlaps(db.s.icmp, mem.db, r.Start, r.Limit) {
if isMemOverlaps(db.s.icmp, mem.mdb, r.Start, r.Limit) {
// Memdb compaction.
if _, err := db.rotateMem(0); err != nil {
<-db.writeLockC
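Caller-side view of the Write path above, as an editor-added sketch: the batch is journaled, replayed into the memdb via memReplay, and the sequence number advanced by Len(). The function name atomicUpdate and the keys are illustrative; Batch and WriteOptions are the public API.

```go
// Assumes: import ("github.com/syndtr/goleveldb/leveldb"; "github.com/syndtr/goleveldb/leveldb/opt")
func atomicUpdate(db *leveldb.DB) error {
	batch := new(leveldb.Batch)
	batch.Put([]byte("alpha"), []byte("1"))
	batch.Put([]byte("beta"), []byte("2"))
	batch.Delete([]byte("gamma"))
	// Sync:true makes b.init(wo.GetSync()) request a synced journal
	// write before the batch is replayed into the memdb.
	return db.Write(batch, &opt.WriteOptions{Sync: true})
}
```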
10  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/doc.go  generated  vendored
@@ -37,6 +37,16 @@
// err = iter.Error()
// ...
//
// Iterate over subset of database content with a particular prefix:
// iter := db.NewIterator(util.BytesPrefix([]byte("foo-")), nil)
// for iter.Next() {
// // Use key/value.
// ...
// }
// iter.Release()
// err = iter.Error()
// ...
//
// Seek-then-Iterate:
//
// iter := db.NewIterator(nil, nil)
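A compilable, editor-added version of the prefix-iteration example that this doc.go hunk documents; the function name dumpPrefix is illustrative, util.BytesPrefix is the real helper.

```go
// Assumes: import ("fmt"; "github.com/syndtr/goleveldb/leveldb"; "github.com/syndtr/goleveldb/leveldb/util")
func dumpPrefix(db *leveldb.DB, prefix []byte) error {
	// util.BytesPrefix builds the Range covering every key that
	// starts with prefix, used here as the iterator's key slice.
	iter := db.NewIterator(util.BytesPrefix(prefix), nil)
	defer iter.Release()
	for iter.Next() {
		fmt.Printf("%s = %s\n", iter.Key(), iter.Value())
	}
	return iter.Error()
}
```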
@@ -7,32 +7,12 @@
package leveldb

import (
"errors"

"github.com/syndtr/goleveldb/leveldb/util"
"github.com/syndtr/goleveldb/leveldb/errors"
)

var (
ErrNotFound = util.ErrNotFound
ErrNotFound = errors.ErrNotFound
ErrSnapshotReleased = errors.New("leveldb: snapshot released")
ErrIterReleased = errors.New("leveldb: iterator released")
ErrClosed = errors.New("leveldb: closed")
)

type CorruptionType int

const (
CorruptedManifest CorruptionType = iota
MissingFiles
)

// ErrCorrupted is the type that wraps errors that indicate corruption in
// the database.
type ErrCorrupted struct {
Type CorruptionType
Err error
}

func (e ErrCorrupted) Error() string {
return e.Err.Error()
}
76  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/errors/errors.go  generated  vendored  Normal file
@@ -0,0 +1,76 @@
// Copyright (c) 2014, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

// Package errors provides common error types used throughout leveldb.
package errors

import (
"errors"
"fmt"

"github.com/syndtr/goleveldb/leveldb/storage"
"github.com/syndtr/goleveldb/leveldb/util"
)

var (
ErrNotFound = New("leveldb: not found")
ErrReleased = util.ErrReleased
ErrHasReleaser = util.ErrHasReleaser
)

// New returns an error that formats as the given text.
func New(text string) error {
return errors.New(text)
}

// ErrCorrupted is the type that wraps errors that indicate corruption in
// the database.
type ErrCorrupted struct {
File *storage.FileInfo
Err error
}

func (e *ErrCorrupted) Error() string {
if e.File != nil {
return fmt.Sprintf("%v [file=%v]", e.Err, e.File)
} else {
return e.Err.Error()
}
}

// NewErrCorrupted creates new ErrCorrupted error.
func NewErrCorrupted(f storage.File, err error) error {
return &ErrCorrupted{storage.NewFileInfo(f), err}
}

// IsCorrupted returns a boolean indicating whether the error is indicating
// a corruption.
func IsCorrupted(err error) bool {
switch err.(type) {
case *ErrCorrupted:
return true
}
return false
}

// ErrMissingFiles is the type that indicating a corruption due to missing
// files.
type ErrMissingFiles struct {
Files []*storage.FileInfo
}

func (e *ErrMissingFiles) Error() string { return "file missing" }

// SetFile sets 'file info' of the given error with the given file.
// Currently only ErrCorrupted is supported, otherwise will do nothing.
func SetFile(err error, f storage.File) error {
switch x := err.(type) {
case *ErrCorrupted:
x.File = storage.NewFileInfo(f)
return x
}
return err
}
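With the new package, callers can tell corruption apart from ordinary failures by type. An editor-added sketch; classify and the import alias ldberrors are illustrative, IsCorrupted and ErrNotFound come from the file above.

```go
package main

import (
	"fmt"

	"github.com/syndtr/goleveldb/leveldb"
	ldberrors "github.com/syndtr/goleveldb/leveldb/errors"
)

// classify distinguishes corruption from other failures via the new
// errors package.
func classify(err error) string {
	switch {
	case err == nil:
		return "ok"
	case err == leveldb.ErrNotFound:
		return "not found"
	case ldberrors.IsCorrupted(err):
		// *ErrCorrupted carries the offending file info, when known.
		return "corrupted: " + err.Error()
	default:
		return "other: " + err.Error()
	}
}

func main() {
	fmt.Println(classify(leveldb.ErrNotFound)) // not found
}
```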
12  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/external_test.go  generated  vendored
@@ -19,11 +19,12 @@ var _ = testutil.Defer(func() {
o := &opt.Options{
BlockCache: opt.NoCache,
BlockRestartInterval: 5,
BlockSize: 50,
BlockSize: 80,
Compression: opt.NoCompression,
MaxOpenFiles: 0,
CachedOpenFiles: -1,
Strict: opt.StrictAll,
WriteBuffer: 1000,
CompactionTableSize: 2000,
}

Describe("write test", func() {
@@ -40,18 +41,17 @@ var _ = testutil.Defer(func() {
})

Describe("read test", func() {
testutil.AllKeyValueTesting(nil, func(kv testutil.KeyValue) testutil.DB {
testutil.AllKeyValueTesting(nil, nil, func(kv testutil.KeyValue) testutil.DB {
// Building the DB.
db := newTestingDB(o, nil, nil)
kv.IterateShuffled(nil, func(i int, key, value []byte) {
err := db.TestPut(key, value)
Expect(err).NotTo(HaveOccurred())
})
testutil.Defer("teardown", func() {
db.TestClose()
})

return db
}, func(db testutil.DB) {
db.(*testingDB).TestClose()
})
})
})
30  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/iterator/array_iter.go  generated  vendored
@@ -40,13 +40,19 @@ type basicArrayIterator struct {
util.BasicReleaser
array BasicArray
pos int
err error
}

func (i *basicArrayIterator) Valid() bool {
return i.pos >= 0 && i.pos < i.array.Len()
return i.pos >= 0 && i.pos < i.array.Len() && !i.Released()
}

func (i *basicArrayIterator) First() bool {
if i.Released() {
i.err = ErrIterReleased
return false
}

if i.array.Len() == 0 {
i.pos = -1
return false
@@ -56,6 +62,11 @@ func (i *basicArrayIterator) First() bool {
}

func (i *basicArrayIterator) Last() bool {
if i.Released() {
i.err = ErrIterReleased
return false
}

n := i.array.Len()
if n == 0 {
i.pos = 0
@@ -66,6 +77,11 @@ func (i *basicArrayIterator) Last() bool {
}

func (i *basicArrayIterator) Seek(key []byte) bool {
if i.Released() {
i.err = ErrIterReleased
return false
}

n := i.array.Len()
if n == 0 {
i.pos = 0
@@ -79,6 +95,11 @@ func (i *basicArrayIterator) Seek(key []byte) bool {
}

func (i *basicArrayIterator) Next() bool {
if i.Released() {
i.err = ErrIterReleased
return false
}

i.pos++
if n := i.array.Len(); i.pos >= n {
i.pos = n
@@ -88,6 +109,11 @@ func (i *basicArrayIterator) Next() bool {
}

func (i *basicArrayIterator) Prev() bool {
if i.Released() {
i.err = ErrIterReleased
return false
}

i.pos--
if i.pos < 0 {
i.pos = -1
@@ -96,7 +122,7 @@ func (i *basicArrayIterator) Prev() bool {
return true
}

func (i *basicArrayIterator) Error() error { return nil }
func (i *basicArrayIterator) Error() error { return i.err }

type arrayIterator struct {
basicArrayIterator
73  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/iterator/indexed_iter.go  generated  vendored
@@ -7,6 +7,7 @@
package iterator

import (
"github.com/syndtr/goleveldb/leveldb/errors"
"github.com/syndtr/goleveldb/leveldb/util"
)

@@ -22,13 +23,13 @@ type IteratorIndexer interface {

type indexedIterator struct {
util.BasicReleaser
index IteratorIndexer
strict bool
strictGet bool
index IteratorIndexer
strict bool

data Iterator
err error
errf func(err error)
data Iterator
err error
errf func(err error)
closed bool
}

func (i *indexedIterator) setData() {
@@ -36,11 +37,6 @@ func (i *indexedIterator) setData() {
i.data.Release()
}
i.data = i.index.Get()
if i.strictGet {
if err := i.data.Error(); err != nil {
i.err = err
}
}
}

func (i *indexedIterator) clearData() {
@@ -50,14 +46,21 @@ func (i *indexedIterator) clearData() {
i.data = nil
}

func (i *indexedIterator) dataErr() bool {
if i.errf != nil {
if err := i.data.Error(); err != nil {
func (i *indexedIterator) indexErr() {
if err := i.index.Error(); err != nil {
if i.errf != nil {
i.errf(err)
}
i.err = err
}
if i.strict {
if err := i.data.Error(); err != nil {
}

func (i *indexedIterator) dataErr() bool {
if err := i.data.Error(); err != nil {
if i.errf != nil {
i.errf(err)
}
if i.strict || !errors.IsCorrupted(err) {
i.err = err
return true
}
@@ -72,9 +75,13 @@ func (i *indexedIterator) Valid() bool {
func (i *indexedIterator) First() bool {
if i.err != nil {
return false
} else if i.Released() {
i.err = ErrIterReleased
return false
}

if !i.index.First() {
i.indexErr()
i.clearData()
return false
}
@@ -85,9 +92,13 @@ func (i *indexedIterator) First() bool {
func (i *indexedIterator) Last() bool {
if i.err != nil {
return false
} else if i.Released() {
i.err = ErrIterReleased
return false
}

if !i.index.Last() {
i.indexErr()
i.clearData()
return false
}
@@ -105,9 +116,13 @@ func (i *indexedIterator) Last() bool {
func (i *indexedIterator) Seek(key []byte) bool {
if i.err != nil {
return false
} else if i.Released() {
i.err = ErrIterReleased
return false
}

if !i.index.Seek(key) {
i.indexErr()
i.clearData()
return false
}
@@ -125,6 +140,9 @@ func (i *indexedIterator) Seek(key []byte) bool {
func (i *indexedIterator) Next() bool {
if i.err != nil {
return false
} else if i.Released() {
i.err = ErrIterReleased
return false
}

switch {
@@ -136,6 +154,7 @@ func (i *indexedIterator) Next() bool {
fallthrough
case i.data == nil:
if !i.index.Next() {
i.indexErr()
return false
}
i.setData()
@@ -147,6 +166,9 @@ func (i *indexedIterator) Next() bool {
func (i *indexedIterator) Prev() bool {
if i.err != nil {
return false
} else if i.Released() {
i.err = ErrIterReleased
return false
}

switch {
@@ -158,6 +180,7 @@ func (i *indexedIterator) Prev() bool {
fallthrough
case i.data == nil:
if !i.index.Prev() {
i.indexErr()
return false
}
i.setData()
@@ -206,16 +229,14 @@ func (i *indexedIterator) SetErrorCallback(f func(err error)) {
i.errf = f
}

// NewIndexedIterator returns an indexed iterator. An index is iterator
// that returns another iterator, a data iterator. A data iterator is the
// NewIndexedIterator returns an 'indexed iterator'. An index is iterator
// that returns another iterator, a 'data iterator'. A 'data iterator' is the
// iterator that contains actual key/value pairs.
//
// If strict is true then error yield by data iterator will halt the indexed
// iterator, on contrary if strict is false then the indexed iterator will
// ignore those error and move on to the next index. If strictGet is true and
// index.Get() yield an 'error iterator' then the indexed iterator will be halted.
// An 'error iterator' is iterator which its Error() method always return non-nil
// even before any 'seeks method' is called.
func NewIndexedIterator(index IteratorIndexer, strict, strictGet bool) Iterator {
return &indexedIterator{index: index, strict: strict, strictGet: strictGet}
// If strict is true the any 'corruption errors' (i.e errors.IsCorrupted(err) == true)
// won't be ignored and will halt 'indexed iterator', otherwise the iterator will
// continue to the next 'data iterator'. Corruption on 'index iterator' will not be
// ignored and will halt the iterator.
func NewIndexedIterator(index IteratorIndexer, strict bool) Iterator {
return &indexedIterator{index: index, strict: strict}
}

@@ -65,7 +65,7 @@ var _ = testutil.Defer(func() {
// Test the iterator.
t := testutil.IteratorTesting{
KeyValue: kv.Clone(),
Iter: NewIndexedIterator(NewArrayIndexer(index), true, true),
Iter: NewIndexedIterator(NewArrayIndexer(index), true),
}
testutil.DoIteratorTesting(&t)
done <- true
27  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/iterator/iter.go  generated  vendored
@@ -14,6 +14,10 @@ import (
"github.com/syndtr/goleveldb/leveldb/util"
)

var (
ErrIterReleased = errors.New("leveldb/iterator: iterator released")
)

// IteratorSeeker is the interface that wraps the 'seeks method'.
type IteratorSeeker interface {
// First moves the iterator to the first key/value pair. If the iterator
@@ -100,28 +104,13 @@ type ErrorCallbackSetter interface {
}

type emptyIterator struct {
releaser util.Releaser
released bool
err error
util.BasicReleaser
err error
}

func (i *emptyIterator) rErr() {
if i.err == nil && i.released {
i.err = errors.New("leveldb/iterator: iterator released")
}
}

func (i *emptyIterator) Release() {
if i.releaser != nil {
i.releaser.Release()
i.releaser = nil
}
i.released = true
}

func (i *emptyIterator) SetReleaser(releaser util.Releaser) {
if !i.released {
i.releaser = releaser
if i.err == nil && i.Released() {
i.err = ErrIterReleased
}
}


@@ -3,15 +3,9 @@ package iterator_test
import (
"testing"

. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"

"github.com/syndtr/goleveldb/leveldb/testutil"
)

func TestIterator(t *testing.T) {
testutil.RunDefer()

RegisterFailHandler(Fail)
RunSpecs(t, "Iterator Suite")
testutil.RunSuite(t, "Iterator Suite")
}
29  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/iterator/merged_iter.go  generated  vendored
@@ -7,16 +7,11 @@
package iterator

import (
"errors"

"github.com/syndtr/goleveldb/leveldb/comparer"
"github.com/syndtr/goleveldb/leveldb/errors"
"github.com/syndtr/goleveldb/leveldb/util"
)

var (
ErrIterReleased = errors.New("leveldb/iterator: iterator released")
)

type dir int

const (
@@ -48,13 +43,11 @@ func assertKey(key []byte) []byte {
}

func (i *mergedIterator) iterErr(iter Iterator) bool {
if i.errf != nil {
if err := iter.Error(); err != nil {
if err := iter.Error(); err != nil {
if i.errf != nil {
i.errf(err)
}
}
if i.strict {
if err := iter.Error(); err != nil {
if i.strict || !errors.IsCorrupted(err) {
i.err = err
return true
}
@@ -274,9 +267,13 @@ func (i *mergedIterator) Release() {
}

func (i *mergedIterator) SetReleaser(releaser util.Releaser) {
if i.dir != dirReleased {
i.releaser = releaser
if i.dir == dirReleased {
panic(util.ErrReleased)
}
if i.releaser != nil && releaser != nil {
panic(util.ErrHasReleaser)
}
i.releaser = releaser
}

func (i *mergedIterator) Error() error {
@@ -294,9 +291,9 @@ func (i *mergedIterator) SetErrorCallback(f func(err error)) {
// keys: if iters[i] contains a key k then iters[j] will not contain that key k.
// None of the iters may be nil.
//
// If strict is true then error yield by any iterators will halt the merged
// iterator, on contrary if strict is false then the merged iterator will
// ignore those error and move on to the next iterator.
// If strict is true the any 'corruption errors' (i.e errors.IsCorrupted(err) == true)
// won't be ignored and will halt 'merged iterator', otherwise the iterator will
// continue to the next 'input iterator'.
func NewMergedIterator(iters []Iterator, cmp comparer.Comparer, strict bool) Iterator {
return &mergedIterator{
iters: iters,
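An editor-added sketch of NewMergedIterator, whose doc comment above requires sorted inputs with disjoint keys. Two memdbs stand in as the input iterators; all calls are to the package's public API, though the exact constructor signatures are as of this vendored snapshot.

```go
package main

import (
	"fmt"

	"github.com/syndtr/goleveldb/leveldb/comparer"
	"github.com/syndtr/goleveldb/leveldb/iterator"
	"github.com/syndtr/goleveldb/leveldb/memdb"
)

func main() {
	// Two already-sorted sources with disjoint keys.
	m1 := memdb.New(comparer.DefaultComparer, 1<<20)
	m2 := memdb.New(comparer.DefaultComparer, 1<<20)
	m1.Put([]byte("a"), []byte("1"))
	m2.Put([]byte("b"), []byte("2"))

	// strict=true: corruption errors (errors.IsCorrupted) halt the
	// merged iterator instead of being skipped.
	mi := iterator.NewMergedIterator(
		[]iterator.Iterator{m1.NewIterator(nil), m2.NewIterator(nil)},
		comparer.DefaultComparer, true)
	defer mi.Release()
	for mi.Next() {
		fmt.Printf("%s = %s\n", mi.Key(), mi.Value())
	}
}
```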
8  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/journal/journal.go  generated  vendored
@@ -79,10 +79,10 @@ package journal

import (
"encoding/binary"
"errors"
"fmt"
"io"

"github.com/syndtr/goleveldb/leveldb/errors"
"github.com/syndtr/goleveldb/leveldb/util"
)

@@ -109,7 +109,7 @@ type ErrCorrupted struct {
Reason string
}

func (e ErrCorrupted) Error() string {
func (e *ErrCorrupted) Error() string {
return fmt.Sprintf("leveldb/journal: block/chunk corrupted: %s (%d bytes)", e.Reason, e.Size)
}

@@ -162,10 +162,10 @@ var errSkip = errors.New("leveldb/journal: skipped")

func (r *Reader) corrupt(n int, reason string, skip bool) error {
if r.dropper != nil {
r.dropper.Drop(ErrCorrupted{n, reason})
r.dropper.Drop(&ErrCorrupted{n, reason})
}
if r.strict && !skip {
r.err = ErrCorrupted{n, reason}
r.err = errors.NewErrCorrupted(nil, &ErrCorrupted{n, reason})
return r.err
}
return errSkip
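The corrupt() path above hands each damaged chunk to a Dropper. An editor-added sketch of a caller-side dropper; logDropper and readJournal are illustrative names, while journal.NewReader and Reader.Next are the package's API as of this snapshot.

```go
// Assumes: import ("io"; "io/ioutil"; "log"; "github.com/syndtr/goleveldb/leveldb/journal")
type logDropper struct{}

// Drop receives the *ErrCorrupted values passed above when a damaged
// chunk is skipped in non-strict mode.
func (logDropper) Drop(err error) {
	log.Printf("journal: dropped corrupt chunk: %v", err)
}

func readJournal(r io.Reader) error {
	// strict=false: corrupt chunks go to the dropper and are skipped;
	// strict=true surfaces them via errors.NewErrCorrupted instead.
	jr := journal.NewReader(r, logDropper{}, false, true)
	for {
		chunk, err := jr.Next()
		if err == io.EOF {
			return nil
		} else if err != nil {
			return err
		}
		if _, err := io.Copy(ioutil.Discard, chunk); err != nil {
			return err
		}
	}
}
```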
133  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/key.go  generated  vendored
@@ -9,15 +9,30 @@ package leveldb
import (
"encoding/binary"
"fmt"

"github.com/syndtr/goleveldb/leveldb/errors"
)

type vType int
type ErrIkeyCorrupted struct {
Ikey []byte
Reason string
}

func (t vType) String() string {
switch t {
case tDel:
func (e *ErrIkeyCorrupted) Error() string {
return fmt.Sprintf("leveldb: iKey %q corrupted: %s", e.Ikey, e.Reason)
}

func newErrIkeyCorrupted(ikey []byte, reason string) error {
return errors.NewErrCorrupted(nil, &ErrIkeyCorrupted{append([]byte{}, ikey...), reason})
}

type kType int

func (kt kType) String() string {
switch kt {
case ktDel:
return "d"
case tVal:
case ktVal:
return "v"
}
return "x"
@@ -26,16 +41,16 @@ func (t vType) String() string {
// Value types encoded as the last component of internal keys.
// Don't modify; this value are saved to disk.
const (
tDel vType = iota
tVal
ktDel kType = iota
ktVal
)

// tSeek defines the vType that should be passed when constructing an
// ktSeek defines the kType that should be passed when constructing an
// internal key for seeking to a particular sequence number (since we
// sort sequence numbers in decreasing order and the value type is
// embedded as the low 8 bits in the sequence number in internal keys,
// we need to use the highest-numbered ValueType, not the lowest).
const tSeek = tVal
const ktSeek = ktVal

const (
// Maximum value possible for sequence number; the 8-bits are
@@ -43,7 +58,7 @@ const (
// 64-bit integer.
kMaxSeq uint64 = (uint64(1) << 56) - 1
// Maximum value possible for packed sequence number and type.
kMaxNum uint64 = (kMaxSeq << 8) | uint64(tSeek)
kMaxNum uint64 = (kMaxSeq << 8) | uint64(ktSeek)
)

// Maximum number encoded in bytes.
@@ -55,85 +70,73 @@ func init() {

type iKey []byte

func newIKey(ukey []byte, seq uint64, t vType) iKey {
if seq > kMaxSeq || t > tVal {
panic("invalid seq number or value type")
func newIkey(ukey []byte, seq uint64, kt kType) iKey {
if seq > kMaxSeq {
panic("leveldb: invalid sequence number")
} else if kt > ktVal {
panic("leveldb: invalid type")
}

b := make(iKey, len(ukey)+8)
copy(b, ukey)
binary.LittleEndian.PutUint64(b[len(ukey):], (seq<<8)|uint64(t))
return b
ik := make(iKey, len(ukey)+8)
copy(ik, ukey)
binary.LittleEndian.PutUint64(ik[len(ukey):], (seq<<8)|uint64(kt))
return ik
}

func parseIkey(p []byte) (ukey []byte, seq uint64, t vType, ok bool) {
if len(p) < 8 {
return
func parseIkey(ik []byte) (ukey []byte, seq uint64, kt kType, err error) {
if len(ik) < 8 {
return nil, 0, 0, newErrIkeyCorrupted(ik, "invalid length")
}
num := binary.LittleEndian.Uint64(p[len(p)-8:])
seq, t = uint64(num>>8), vType(num&0xff)
if t > tVal {
return
num := binary.LittleEndian.Uint64(ik[len(ik)-8:])
seq, kt = uint64(num>>8), kType(num&0xff)
if kt > ktVal {
return nil, 0, 0, newErrIkeyCorrupted(ik, "invalid type")
}
ukey = p[:len(p)-8]
ok = true
ukey = ik[:len(ik)-8]
return
}

func validIkey(p []byte) bool {
_, _, _, ok := parseIkey(p)
return ok
func validIkey(ik []byte) bool {
_, _, _, err := parseIkey(ik)
return err == nil
}

func (p iKey) assert() {
if p == nil {
panic("nil iKey")
func (ik iKey) assert() {
if ik == nil {
panic("leveldb: nil iKey")
}
if len(p) < 8 {
panic(fmt.Sprintf("invalid iKey %q, len=%d", []byte(p), len(p)))
if len(ik) < 8 {
panic(fmt.Sprintf("leveldb: iKey %q, len=%d: invalid length", ik, len(ik)))
}
}

func (p iKey) ok() bool {
if len(p) < 8 {
return false
}
_, _, ok := p.parseNum()
return ok
func (ik iKey) ukey() []byte {
ik.assert()
return ik[:len(ik)-8]
}

func (p iKey) ukey() []byte {
p.assert()
return p[:len(p)-8]
func (ik iKey) num() uint64 {
ik.assert()
return binary.LittleEndian.Uint64(ik[len(ik)-8:])
}

func (p iKey) num() uint64 {
p.assert()
return binary.LittleEndian.Uint64(p[len(p)-8:])
}

func (p iKey) parseNum() (seq uint64, t vType, ok bool) {
if p == nil {
panic("nil iKey")
func (ik iKey) parseNum() (seq uint64, kt kType) {
num := ik.num()
seq, kt = uint64(num>>8), kType(num&0xff)
if kt > ktVal {
panic(fmt.Sprintf("leveldb: iKey %q, len=%d: invalid type %#x", ik, len(ik), kt))
}
if len(p) < 8 {
return
}
num := p.num()
seq, t = uint64(num>>8), vType(num&0xff)
if t > tVal {
return 0, 0, false
}
ok = true
return
}

func (p iKey) String() string {
if len(p) == 0 {
func (ik iKey) String() string {
if ik == nil {
return "<nil>"
}
if seq, t, ok := p.parseNum(); ok {
return fmt.Sprintf("%s,%s%d", shorten(string(p.ukey())), t, seq)

if ukey, seq, kt, err := parseIkey(ik); err == nil {
return fmt.Sprintf("%s,%s%d", shorten(string(ukey)), kt, seq)
} else {
return "<invalid>"
}
return "<invalid>"
}
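The internal key layout these hunks rename (vType to kType) stays the same: the user key bytes followed by a little-endian uint64 packing (seq<<8)|type. An editor-added, standalone round-trip demonstration of that layout; makeIkey is an illustrative name.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

const (
	ktDel = 0
	ktVal = 1
)

// makeIkey builds an internal key: ukey bytes followed by a
// little-endian uint64 packing (seq<<8)|kt.
func makeIkey(ukey []byte, seq uint64, kt uint64) []byte {
	ik := make([]byte, len(ukey)+8)
	copy(ik, ukey)
	binary.LittleEndian.PutUint64(ik[len(ukey):], (seq<<8)|kt)
	return ik
}

func main() {
	ik := makeIkey([]byte("foo"), 100, ktVal)
	num := binary.LittleEndian.Uint64(ik[len(ik)-8:])
	fmt.Printf("ukey=%s seq=%d type=%d\n", ik[:len(ik)-8], num>>8, num&0xff)
	// Output: ukey=foo seq=100 type=1
}
```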
94  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/key_test.go  generated  vendored
@@ -15,8 +15,8 @@ import (

var defaultIComparer = &iComparer{comparer.DefaultComparer}

func ikey(key string, seq uint64, t vType) iKey {
return newIKey([]byte(key), uint64(seq), t)
func ikey(key string, seq uint64, kt kType) iKey {
return newIkey([]byte(key), uint64(seq), kt)
}

func shortSep(a, b []byte) []byte {
@@ -37,27 +37,37 @@ func shortSuccessor(b []byte) []byte {
return dst
}

func testSingleKey(t *testing.T, key string, seq uint64, vt vType) {
ik := ikey(key, seq, vt)
func testSingleKey(t *testing.T, key string, seq uint64, kt kType) {
ik := ikey(key, seq, kt)

if !bytes.Equal(ik.ukey(), []byte(key)) {
t.Errorf("user key does not equal, got %v, want %v", string(ik.ukey()), key)
}

if rseq, rt, ok := ik.parseNum(); ok {
rseq, rt := ik.parseNum()
if rseq != seq {
t.Errorf("seq number does not equal, got %v, want %v", rseq, seq)
}
if rt != kt {
t.Errorf("type does not equal, got %v, want %v", rt, kt)
}

if rukey, rseq, rt, kerr := parseIkey(ik); kerr == nil {
if !bytes.Equal(rukey, []byte(key)) {
t.Errorf("user key does not equal, got %v, want %v", string(ik.ukey()), key)
}
if rseq != seq {
t.Errorf("seq number does not equal, got %v, want %v", rseq, seq)
}

if rt != vt {
t.Errorf("type does not equal, got %v, want %v", rt, vt)
if rt != kt {
t.Errorf("type does not equal, got %v, want %v", rt, kt)
}
} else {
t.Error("cannot parse seq and type")
t.Errorf("key error: %v", kerr)
}
}

func TestIKey_EncodeDecode(t *testing.T) {
func TestIkey_EncodeDecode(t *testing.T) {
keys := []string{"", "k", "hello", "longggggggggggggggggggggg"}
seqs := []uint64{
1, 2, 3,
@@ -67,8 +77,8 @@ func TestIKey_EncodeDecode(t *testing.T) {
}
for _, key := range keys {
for _, seq := range seqs {
testSingleKey(t, key, seq, tVal)
testSingleKey(t, "hello", 1, tDel)
testSingleKey(t, key, seq, ktVal)
testSingleKey(t, "hello", 1, ktDel)
}
}
}
@@ -79,45 +89,45 @@ func assertBytes(t *testing.T, want, got []byte) {
}
}

func TestIKeyShortSeparator(t *testing.T) {
func TestIkeyShortSeparator(t *testing.T) {
// When user keys are same
assertBytes(t, ikey("foo", 100, tVal),
shortSep(ikey("foo", 100, tVal),
ikey("foo", 99, tVal)))
assertBytes(t, ikey("foo", 100, tVal),
shortSep(ikey("foo", 100, tVal),
ikey("foo", 101, tVal)))
assertBytes(t, ikey("foo", 100, tVal),
shortSep(ikey("foo", 100, tVal),
ikey("foo", 100, tVal)))
assertBytes(t, ikey("foo", 100, tVal),
shortSep(ikey("foo", 100, tVal),
ikey("foo", 100, tDel)))
assertBytes(t, ikey("foo", 100, ktVal),
shortSep(ikey("foo", 100, ktVal),
ikey("foo", 99, ktVal)))
assertBytes(t, ikey("foo", 100, ktVal),
shortSep(ikey("foo", 100, ktVal),
ikey("foo", 101, ktVal)))
assertBytes(t, ikey("foo", 100, ktVal),
shortSep(ikey("foo", 100, ktVal),
ikey("foo", 100, ktVal)))
assertBytes(t, ikey("foo", 100, ktVal),
shortSep(ikey("foo", 100, ktVal),
ikey("foo", 100, ktDel)))

// When user keys are misordered
assertBytes(t, ikey("foo", 100, tVal),
shortSep(ikey("foo", 100, tVal),
ikey("bar", 99, tVal)))
assertBytes(t, ikey("foo", 100, ktVal),
shortSep(ikey("foo", 100, ktVal),
ikey("bar", 99, ktVal)))

// When user keys are different, but correctly ordered
assertBytes(t, ikey("g", uint64(kMaxSeq), tSeek),
shortSep(ikey("foo", 100, tVal),
ikey("hello", 200, tVal)))
assertBytes(t, ikey("g", uint64(kMaxSeq), ktSeek),
shortSep(ikey("foo", 100, ktVal),
ikey("hello", 200, ktVal)))

// When start user key is prefix of limit user key
assertBytes(t, ikey("foo", 100, tVal),
shortSep(ikey("foo", 100, tVal),
ikey("foobar", 200, tVal)))
assertBytes(t, ikey("foo", 100, ktVal),
shortSep(ikey("foo", 100, ktVal),
ikey("foobar", 200, ktVal)))

// When limit user key is prefix of start user key
assertBytes(t, ikey("foobar", 100, tVal),
shortSep(ikey("foobar", 100, tVal),
ikey("foo", 200, tVal)))
assertBytes(t, ikey("foobar", 100, ktVal),
shortSep(ikey("foobar", 100, ktVal),
ikey("foo", 200, ktVal)))
}

func TestIKeyShortestSuccessor(t *testing.T) {
assertBytes(t, ikey("g", uint64(kMaxSeq), tSeek),
shortSuccessor(ikey("foo", 100, tVal)))
assertBytes(t, ikey("\xff\xff", 100, tVal),
shortSuccessor(ikey("\xff\xff", 100, tVal)))
func TestIkeyShortestSuccessor(t *testing.T) {
assertBytes(t, ikey("g", uint64(kMaxSeq), ktSeek),
shortSuccessor(ikey("foo", 100, ktVal)))
assertBytes(t, ikey("\xff\xff", 100, ktVal),
shortSuccessor(ikey("\xff\xff", 100, ktVal)))
}
13  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/leveldb_suite_test.go  generated  vendored
@@ -3,18 +3,9 @@ package leveldb
import (
"testing"

. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"

"github.com/syndtr/goleveldb/leveldb/testutil"
)

func TestLeveldb(t *testing.T) {
testutil.RunDefer()

RegisterFailHandler(Fail)
RunSpecs(t, "Leveldb Suite")

RegisterTestingT(t)
testutil.RunDefer("teardown")
func TestLevelDB(t *testing.T) {
testutil.RunSuite(t, "LevelDB Suite")
}
30  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/memdb/memdb.go  generated  vendored
@@ -12,12 +12,14 @@ import (
"sync"

"github.com/syndtr/goleveldb/leveldb/comparer"
"github.com/syndtr/goleveldb/leveldb/errors"
"github.com/syndtr/goleveldb/leveldb/iterator"
"github.com/syndtr/goleveldb/leveldb/util"
)

var (
ErrNotFound = util.ErrNotFound
ErrNotFound = errors.ErrNotFound
ErrIterReleased = errors.New("leveldb/memdb: iterator released")
)

const tMaxHeight = 12
@@ -29,6 +31,7 @@ type dbIter struct {
node int
forward bool
key, value []byte
err error
}

func (i *dbIter) fill(checkStart, checkLimit bool) bool {
@@ -59,6 +62,11 @@ func (i *dbIter) Valid() bool {
}

func (i *dbIter) First() bool {
if i.Released() {
i.err = ErrIterReleased
return false
}

i.forward = true
i.p.mu.RLock()
defer i.p.mu.RUnlock()
@@ -71,9 +79,11 @@ func (i *dbIter) First() bool {
}

func (i *dbIter) Last() bool {
if i.p == nil {
if i.Released() {
i.err = ErrIterReleased
return false
}

i.forward = false
i.p.mu.RLock()
defer i.p.mu.RUnlock()
@@ -86,9 +96,11 @@ func (i *dbIter) Last() bool {
}

func (i *dbIter) Seek(key []byte) bool {
if i.p == nil {
if i.Released() {
i.err = ErrIterReleased
return false
}

i.forward = true
i.p.mu.RLock()
defer i.p.mu.RUnlock()
@@ -100,9 +112,11 @@ func (i *dbIter) Seek(key []byte) bool {
}

func (i *dbIter) Next() bool {
if i.p == nil {
if i.Released() {
i.err = ErrIterReleased
return false
}

if i.node == 0 {
if !i.forward {
return i.First()
@@ -117,9 +131,11 @@ func (i *dbIter) Next() bool {
}

func (i *dbIter) Prev() bool {
if i.p == nil {
if i.Released() {
i.err = ErrIterReleased
return false
}

if i.node == 0 {
if i.forward {
return i.Last()
@@ -141,10 +157,10 @@ func (i *dbIter) Value() []byte {
return i.value
}

func (i *dbIter) Error() error { return nil }
func (i *dbIter) Error() error { return i.err }

func (i *dbIter) Release() {
if i.p != nil {
if !i.Released() {
i.p = nil
i.node = 0
i.key = nil
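The memdb is a standalone sorted in-memory store usable independently of the DB wrapper; an editor-added sketch of direct use, reflecting the change above where seeking a released iterator now sets ErrIterReleased instead of dereferencing a nil pointer.

```go
package main

import (
	"fmt"
	"log"

	"github.com/syndtr/goleveldb/leveldb/comparer"
	"github.com/syndtr/goleveldb/leveldb/memdb"
)

func main() {
	mdb := memdb.New(comparer.DefaultComparer, 4096)
	if err := mdb.Put([]byte("k1"), []byte("v1")); err != nil {
		log.Fatal(err)
	}
	it := mdb.NewIterator(nil)
	// After the change above, using a released iterator reports
	// ErrIterReleased through Error() rather than panicking.
	defer it.Release()
	for it.Next() {
		fmt.Printf("%s = %s\n", it.Key(), it.Value())
	}
	if err := it.Error(); err != nil {
		log.Fatal(err)
	}
}
```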
10  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/memdb/memdb_suite_test.go  generated  vendored
@@ -3,15 +3,9 @@ package memdb
import (
"testing"

. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"

"github.com/syndtr/goleveldb/leveldb/testutil"
)

func TestMemdb(t *testing.T) {
testutil.RunDefer()

RegisterFailHandler(Fail)
RunSpecs(t, "Memdb Suite")
func TestMemDB(t *testing.T) {
testutil.RunSuite(t, "MemDB Suite")
}
2  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/memdb/memdb_test.go  generated  vendored
@@ -129,7 +129,7 @@ var _ = testutil.Defer(func() {
}

return db
})
}, nil, nil)
})
})
})
333  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/opt/options.go  generated  vendored
@@ -11,6 +11,7 @@ import (
"github.com/syndtr/goleveldb/leveldb/cache"
"github.com/syndtr/goleveldb/leveldb/comparer"
"github.com/syndtr/goleveldb/leveldb/filter"
"math"
)

const (
@@ -20,20 +21,38 @@ const (
)

const (
DefaultBlockCacheSize = 8 * MiB
DefaultBlockRestartInterval = 16
DefaultBlockSize = 4 * KiB
DefaultCompressionType = SnappyCompression
DefaultMaxOpenFiles = 1000
DefaultWriteBuffer = 4 * MiB
DefaultBlockCacheSize = 8 * MiB
DefaultBlockRestartInterval = 16
DefaultBlockSize = 4 * KiB
DefaultCompactionExpandLimitFactor = 25
DefaultCompactionGPOverlapsFactor = 10
DefaultCompactionL0Trigger = 4
DefaultCompactionSourceLimitFactor = 1
DefaultCompactionTableSize = 2 * MiB
DefaultCompactionTableSizeMultiplier = 1.0
DefaultCompactionTotalSize = 10 * MiB
DefaultCompactionTotalSizeMultiplier = 10.0
DefaultCompressionType = SnappyCompression
DefaultCachedOpenFiles = 500
DefaultMaxMemCompationLevel = 2
DefaultNumLevel = 7
DefaultWriteBuffer = 4 * MiB
DefaultWriteL0PauseTrigger = 12
DefaultWriteL0SlowdownTrigger = 8
)

type noCache struct{}

func (noCache) SetCapacity(capacity int) {}
func (noCache) GetNamespace(id uint64) cache.Namespace { return nil }
func (noCache) Purge(fin cache.PurgeFin) {}
func (noCache) Zap(closed bool) {}
func (noCache) SetCapacity(capacity int) {}
func (noCache) Capacity() int { return 0 }
func (noCache) Used() int { return 0 }
func (noCache) Size() int { return 0 }
func (noCache) NumObjects() int { return 0 }
func (noCache) GetNamespace(id uint64) cache.Namespace { return nil }
func (noCache) PurgeNamespace(id uint64, fin cache.PurgeFin) {}
func (noCache) ZapNamespace(id uint64) {}
func (noCache) Purge(fin cache.PurgeFin) {}
func (noCache) Zap() {}

var NoCache cache.Cache = noCache{}

@@ -59,34 +78,47 @@ const (
nCompression
)

// Strict is the DB strict level.
// Strict is the DB 'strict level'.
type Strict uint

const (
// If present then a corrupted or invalid chunk or block in manifest
// journal will cause an error istead of being dropped.
// journal will cause an error instead of being dropped.
// This will prevent database with corrupted manifest to be opened.
StrictManifest Strict = 1 << iota

// If present then a corrupted or invalid chunk or block in journal
// will cause an error istead of being dropped.
StrictJournal

// If present then journal chunk checksum will be verified.
StrictJournalChecksum

// If present then an invalid key/value pair will cause an error
// instead of being skipped.
StrictIterator
// If present then a corrupted or invalid chunk or block in journal
// will cause an error instead of being dropped.
// This will prevent database with corrupted journal to be opened.
StrictJournal

// If present then 'sorted table' block checksum will be verified.
// This has effect on both 'read operation' and compaction.
StrictBlockChecksum

// If present then a corrupted 'sorted table' will fails compaction.
// The database will enter read-only mode.
StrictCompaction

// If present then a corrupted 'sorted table' will halts 'read operation'.
StrictReader

// If present then leveldb.Recover will drop corrupted 'sorted table'.
StrictRecovery

// This only applicable for ReadOptions, if present then this ReadOptions
// 'strict level' will override global ones.
StrictOverride

// StrictAll enables all strict flags.
StrictAll = StrictManifest | StrictJournal | StrictJournalChecksum | StrictIterator | StrictBlockChecksum
StrictAll = StrictManifest | StrictJournalChecksum | StrictJournal | StrictBlockChecksum | StrictCompaction | StrictReader | StrictRecovery

// DefaultStrict is the default strict flags. Specify any strict flags
// will override default strict flags as whole (i.e. not OR'ed).
DefaultStrict = StrictJournalChecksum | StrictBlockChecksum
DefaultStrict = StrictJournalChecksum | StrictBlockChecksum | StrictCompaction | StrictReader

// NoStrict disables all strict flags. Override default strict flags.
NoStrict = ^StrictAll
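Since Strict is a bit set, flags combine with OR, and the new StrictOverride flag lets a per-read setting replace the DB-wide one. An editor-added sketch of composing them; the variable names are illustrative, the flags are those defined above.

```go
// Assumes: import "github.com/syndtr/goleveldb/leveldb/opt"
var dbOpts = &opt.Options{
	// DB-wide: defaults plus refusal to open a corrupted manifest.
	Strict: opt.DefaultStrict | opt.StrictManifest,
}

var readOpts = &opt.ReadOptions{
	// Per-read override: halt this read on any corrupted 'sorted
	// table', regardless of the DB-wide strict level.
	Strict: opt.StrictOverride | opt.StrictReader,
}
```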
@@ -104,9 +136,14 @@ type Options struct {
// BlockCache provides per-block caching for LevelDB. Specify NoCache to
// disable block caching.
//
// By default LevelDB will create LRU-cache with capacity of 8MiB.
// By default LevelDB will create LRU-cache with capacity of BlockCacheSize.
BlockCache cache.Cache

// BlockCacheSize defines the capacity of the default 'block cache'.
//
// The default value is 8MiB.
BlockCacheSize int

// BlockRestartInterval is the number of keys between restart points for
// delta encoding of keys.
//
@@ -119,6 +156,80 @@ type Options struct {
// The default value is 4KiB.
BlockSize int

// CachedOpenFiles defines number of open files to kept around when not
// in-use, the counting includes still in-use files.
// Set this to negative value to disable caching.
//
// The default value is 500.
CachedOpenFiles int

// CompactionExpandLimitFactor limits compaction size after expanded.
// This will be multiplied by table size limit at compaction target level.
//
// The default value is 25.
CompactionExpandLimitFactor int

// CompactionGPOverlapsFactor limits overlaps in grandparent (Level + 2) that a
// single 'sorted table' generates.
// This will be multiplied by table size limit at grandparent level.
//
// The default value is 10.
CompactionGPOverlapsFactor int

// CompactionL0Trigger defines number of 'sorted table' at level-0 that will
// trigger compaction.
//
// The default value is 4.
CompactionL0Trigger int

// CompactionSourceLimitFactor limits compaction source size. This doesn't apply to
// level-0.
// This will be multiplied by table size limit at compaction target level.
//
// The default value is 1.
CompactionSourceLimitFactor int

// CompactionTableSize limits size of 'sorted table' that compaction generates.
// The limits for each level will be calculated as:
// CompactionTableSize * (CompactionTableSizeMultiplier ^ Level)
// The multiplier for each level can also fine-tuned using CompactionTableSizeMultiplierPerLevel.
//
// The default value is 2MiB.
CompactionTableSize int

// CompactionTableSizeMultiplier defines multiplier for CompactionTableSize.
//
// The default value is 1.
CompactionTableSizeMultiplier float64

// CompactionTableSizeMultiplierPerLevel defines per-level multiplier for
// CompactionTableSize.
// Use zero to skip a level.
//
// The default value is nil.
CompactionTableSizeMultiplierPerLevel []float64

// CompactionTotalSize limits total size of 'sorted table' for each level.
// The limits for each level will be calculated as:
// CompactionTotalSize * (CompactionTotalSizeMultiplier ^ Level)
// The multiplier for each level can also fine-tuned using
// CompactionTotalSizeMultiplierPerLevel.
//
// The default value is 10MiB.
CompactionTotalSize int

// CompactionTotalSizeMultiplier defines multiplier for CompactionTotalSize.
//
// The default value is 10.
CompactionTotalSizeMultiplier float64

// CompactionTotalSizeMultiplierPerLevel defines per-level multiplier for
// CompactionTotalSize.
// Use zero to skip a level.
//
// The default value is nil.
CompactionTotalSizeMultiplierPerLevel []float64

// Comparer defines a total ordering over the space of []byte keys: a 'less
// than' relationship. The same comparison algorithm must be used for reads
// and writes over the lifetime of the DB.
@@ -131,6 +242,11 @@ type Options struct {
// The default value (DefaultCompression) uses snappy compression.
Compression Compression

// DisableCompactionBackoff allows disable compaction retry backoff.
//
// The default value is false.
DisableCompactionBackoff bool

// ErrorIfExist defines whether an error should returned if the DB already
// exist.
//
@@ -159,12 +275,18 @@ type Options struct {
// The default value is nil.
Filter filter.Filter

// MaxOpenFiles defines maximum number of open files to kept around
// (cached). This is not an hard limit, actual open files may exceed
// the defined value.
// MaxMemCompationLevel defines maximum level a newly compacted 'memdb'
// will be pushed into if doesn't creates overlap. This should less than
// NumLevel. Use -1 for level-0.
//
// The default value is 1000.
MaxOpenFiles int
// The default is 2.
MaxMemCompationLevel int

// NumLevel defines number of database level. The level shouldn't changed
// between opens, or the database will panic.
//
// The default is 7.
NumLevel int

// Strict defines the DB strict level.
Strict Strict
@@ -177,6 +299,18 @@ type Options struct {
//
// The default value is 4MiB.
WriteBuffer int

// WriteL0StopTrigger defines number of 'sorted table' at level-0 that will
// pause write.
//
// The default value is 12.
WriteL0PauseTrigger int

// WriteL0SlowdownTrigger defines number of 'sorted table' at level-0 that
// will trigger write slowdown.
//
// The default value is 8.
WriteL0SlowdownTrigger int
}

func (o *Options) GetAltFilters() []filter.Filter {
@@ -193,6 +327,13 @@ func (o *Options) GetBlockCache() cache.Cache {
return o.BlockCache
}

func (o *Options) GetBlockCacheSize() int {
if o == nil || o.BlockCacheSize <= 0 {
return DefaultBlockCacheSize
}
return o.BlockCacheSize
}

func (o *Options) GetBlockRestartInterval() int {
if o == nil || o.BlockRestartInterval <= 0 {
return DefaultBlockRestartInterval
@@ -207,6 +348,88 @@ func (o *Options) GetBlockSize() int {
return o.BlockSize
}

func (o *Options) GetCachedOpenFiles() int {
if o == nil || o.CachedOpenFiles == 0 {
return DefaultCachedOpenFiles
} else if o.CachedOpenFiles < 0 {
return 0
}
return o.CachedOpenFiles
}

func (o *Options) GetCompactionExpandLimit(level int) int {
factor := DefaultCompactionExpandLimitFactor
if o != nil && o.CompactionExpandLimitFactor > 0 {
factor = o.CompactionExpandLimitFactor
}
return o.GetCompactionTableSize(level+1) * factor
}

func (o *Options) GetCompactionGPOverlaps(level int) int {
factor := DefaultCompactionGPOverlapsFactor
if o != nil && o.CompactionGPOverlapsFactor > 0 {
factor = o.CompactionGPOverlapsFactor
}
return o.GetCompactionTableSize(level+2) * factor
}

func (o *Options) GetCompactionL0Trigger() int {
if o == nil || o.CompactionL0Trigger == 0 {
return DefaultCompactionL0Trigger
}
return o.CompactionL0Trigger
}

func (o *Options) GetCompactionSourceLimit(level int) int {
factor := DefaultCompactionSourceLimitFactor
if o != nil && o.CompactionSourceLimitFactor > 0 {
factor = o.CompactionSourceLimitFactor
}
return o.GetCompactionTableSize(level+1) * factor
}

func (o *Options) GetCompactionTableSize(level int) int {
var (
base = DefaultCompactionTableSize
mult float64
)
if o != nil {
if o.CompactionTableSize > 0 {
base = o.CompactionTableSize
}
if len(o.CompactionTableSizeMultiplierPerLevel) > level && o.CompactionTableSizeMultiplierPerLevel[level] > 0 {
mult = o.CompactionTableSizeMultiplierPerLevel[level]
} else if o.CompactionTableSizeMultiplier > 0 {
mult = math.Pow(o.CompactionTableSizeMultiplier, float64(level))
}
}
if mult == 0 {
mult = math.Pow(DefaultCompactionTableSizeMultiplier, float64(level))
}
return int(float64(base) * mult)
}

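A worked example of the per-level sizing math implemented above (illustrative values only):

// Sketch only: how GetCompactionTableSize resolves per-level limits,
// assuming the defaults documented above (2 MiB base, multiplier 1),
// plus a per-level override for level 1.
o := &opt.Options{
	CompactionTableSize:                   2 * 1024 * 1024,
	CompactionTableSizeMultiplierPerLevel: []float64{0, 1.5}, // 0 = keep default for level 0
}
_ = o.GetCompactionTableSize(0) // 2 MiB: default multiplier 1^0
_ = o.GetCompactionTableSize(1) // 3 MiB: 2 MiB * 1.5 per-level override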
func (o *Options) GetCompactionTotalSize(level int) int64 {
var (
base = DefaultCompactionTotalSize
mult float64
)
if o != nil {
if o.CompactionTotalSize > 0 {
base = o.CompactionTotalSize
}
if len(o.CompactionTotalSizeMultiplierPerLevel) > level && o.CompactionTotalSizeMultiplierPerLevel[level] > 0 {
mult = o.CompactionTotalSizeMultiplierPerLevel[level]
} else if o.CompactionTotalSizeMultiplier > 0 {
mult = math.Pow(o.CompactionTotalSizeMultiplier, float64(level))
}
}
if mult == 0 {
mult = math.Pow(DefaultCompactionTotalSizeMultiplier, float64(level))
}
return int64(float64(base) * mult)
}

func (o *Options) GetComparer() comparer.Comparer {
if o == nil || o.Comparer == nil {
return comparer.DefaultComparer
@@ -221,6 +444,13 @@ func (o *Options) GetCompression() Compression {
return o.Compression
}

func (o *Options) GetDisableCompactionBackoff() bool {
if o == nil {
return false
}
return o.DisableCompactionBackoff
}

func (o *Options) GetErrorIfExist() bool {
if o == nil {
return false
@@ -242,11 +472,26 @@ func (o *Options) GetFilter() filter.Filter {
return o.Filter
}

func (o *Options) GetMaxOpenFiles() int {
if o == nil || o.MaxOpenFiles <= 0 {
return DefaultMaxOpenFiles
func (o *Options) GetMaxMemCompationLevel() int {
level := DefaultMaxMemCompationLevel
if o != nil {
if o.MaxMemCompationLevel > 0 {
level = o.MaxMemCompationLevel
} else if o.MaxMemCompationLevel == -1 {
level = 0
}
}
return o.MaxOpenFiles
if level >= o.GetNumLevel() {
return o.GetNumLevel() - 1
}
return level
}

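A quick sketch of the clamping behaviour implemented above (illustrative; it assumes the defaults documented in this diff, DefaultMaxMemCompationLevel = 2 and DefaultNumLevel = 7):

// Sketch only: GetMaxMemCompationLevel clamps into [0, NumLevel-1].
_ = (&opt.Options{}).GetMaxMemCompationLevel()                          // 2 (default)
_ = (&opt.Options{MaxMemCompationLevel: -1}).GetMaxMemCompationLevel() // 0 (level-0)
_ = (&opt.Options{MaxMemCompationLevel: 9}).GetMaxMemCompationLevel()  // 6 (clamped to NumLevel-1)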
func (o *Options) GetNumLevel() int {
if o == nil || o.NumLevel <= 0 {
return DefaultNumLevel
}
return o.NumLevel
}

func (o *Options) GetStrict(strict Strict) bool {
@@ -263,6 +508,20 @@ func (o *Options) GetWriteBuffer() int {
return o.WriteBuffer
}

func (o *Options) GetWriteL0PauseTrigger() int {
if o == nil || o.WriteL0PauseTrigger == 0 {
return DefaultWriteL0PauseTrigger
}
return o.WriteL0PauseTrigger
}

func (o *Options) GetWriteL0SlowdownTrigger() int {
if o == nil || o.WriteL0SlowdownTrigger == 0 {
return DefaultWriteL0SlowdownTrigger
}
return o.WriteL0SlowdownTrigger
}

// ReadOptions holds the optional parameters for 'read operation'. The
// 'read operation' includes Get, Find and NewIterator.
type ReadOptions struct {
@@ -273,8 +532,8 @@ type ReadOptions struct {
// The default value is false.
DontFillCache bool

// Strict overrides global DB strict level. Only StrictIterator and
// StrictBlockChecksum that does have effects here.
// Strict will be OR'ed with global DB 'strict level' unless StrictOverride
// is present. Currently only StrictReader that has effect here.
Strict Strict
}

@@ -316,3 +575,11 @@ func (wo *WriteOptions) GetSync() bool {
}
return wo.Sync
}

func GetStrict(o *Options, ro *ReadOptions, strict Strict) bool {
if ro.GetStrict(StrictOverride) {
return ro.GetStrict(strict)
} else {
return o.GetStrict(strict) || ro.GetStrict(strict)
}
}

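The package-level GetStrict above implements the StrictOverride semantics documented earlier in this diff. A short sketch (illustrative only):

// Sketch only: per-read strict flags are OR'ed with the DB-level flags
// unless StrictOverride is set, in which case they replace them.
o := &opt.Options{Strict: opt.StrictReader}
ro := &opt.ReadOptions{}
_ = opt.GetStrict(o, ro, opt.StrictReader) // true: comes from the DB options

ro.Strict = opt.StrictOverride // override with an empty per-read mask
_ = opt.GetStrict(o, ro, opt.StrictReader) // false: DB flags are ignored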
77  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/options.go  (generated, vendored)
@@ -12,30 +12,89 @@ import (
"github.com/syndtr/goleveldb/leveldb/opt"
)

func (s *session) setOptions(o *opt.Options) {
s.o = &opt.Options{}
func dupOptions(o *opt.Options) *opt.Options {
newo := &opt.Options{}
if o != nil {
*s.o = *o
*newo = *o
}
if newo.Strict == 0 {
newo.Strict = opt.DefaultStrict
}
return newo
}

func (s *session) setOptions(o *opt.Options) {
no := dupOptions(o)
// Alternative filters.
if filters := o.GetAltFilters(); len(filters) > 0 {
s.o.AltFilters = make([]filter.Filter, len(filters))
no.AltFilters = make([]filter.Filter, len(filters))
for i, filter := range filters {
s.o.AltFilters[i] = &iFilter{filter}
no.AltFilters[i] = &iFilter{filter}
}
}
// Block cache.
switch o.GetBlockCache() {
case nil:
s.o.BlockCache = cache.NewLRUCache(opt.DefaultBlockCacheSize)
no.BlockCache = cache.NewLRUCache(o.GetBlockCacheSize())
case opt.NoCache:
s.o.BlockCache = nil
no.BlockCache = nil
}
// Comparer.
s.icmp = &iComparer{o.GetComparer()}
s.o.Comparer = s.icmp
no.Comparer = s.icmp
// Filter.
if filter := o.GetFilter(); filter != nil {
s.o.Filter = &iFilter{filter}
no.Filter = &iFilter{filter}
}

s.o = &cachedOptions{Options: no}
s.o.cache()
}

type cachedOptions struct {
*opt.Options

compactionExpandLimit []int
compactionGPOverlaps []int
compactionSourceLimit []int
compactionTableSize []int
compactionTotalSize []int64
}

func (co *cachedOptions) cache() {
numLevel := co.Options.GetNumLevel()

co.compactionExpandLimit = make([]int, numLevel)
co.compactionGPOverlaps = make([]int, numLevel)
co.compactionSourceLimit = make([]int, numLevel)
co.compactionTableSize = make([]int, numLevel)
co.compactionTotalSize = make([]int64, numLevel)

for level := 0; level < numLevel; level++ {
co.compactionExpandLimit[level] = co.Options.GetCompactionExpandLimit(level)
co.compactionGPOverlaps[level] = co.Options.GetCompactionGPOverlaps(level)
co.compactionSourceLimit[level] = co.Options.GetCompactionSourceLimit(level)
co.compactionTableSize[level] = co.Options.GetCompactionTableSize(level)
co.compactionTotalSize[level] = co.Options.GetCompactionTotalSize(level)
}
}

func (co *cachedOptions) GetCompactionExpandLimit(level int) int {
return co.compactionExpandLimit[level]
}

func (co *cachedOptions) GetCompactionGPOverlaps(level int) int {
return co.compactionGPOverlaps[level]
}

func (co *cachedOptions) GetCompactionSourceLimit(level int) int {
return co.compactionSourceLimit[level]
}

func (co *cachedOptions) GetCompactionTableSize(level int) int {
return co.compactionTableSize[level]
}

func (co *cachedOptions) GetCompactionTotalSize(level int) int64 {
return co.compactionTotalSize[level]
}

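cachedOptions above precomputes the five per-level limits once at session setup, so the hot compaction paths do array lookups instead of repeated math.Pow calls; because it embeds *opt.Options, every other getter falls through unchanged. A sketch of the shape (illustrative, not from this changeset):

// Sketch only: the embedded *opt.Options answers everything except the
// five precomputed per-level getters overridden above.
co := &cachedOptions{Options: dupOptions(nil)}
co.cache()
_ = co.GetCompactionTableSize(2) // precomputed array lookup
_ = co.GetWriteBuffer()          // falls through to opt.Options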
210  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/session.go  (generated, vendored)
@@ -7,12 +7,13 @@
package leveldb

import (
"errors"
"fmt"
"io"
"os"
"sync"
"sync/atomic"

"github.com/syndtr/goleveldb/leveldb/errors"
"github.com/syndtr/goleveldb/leveldb/iterator"
"github.com/syndtr/goleveldb/leveldb/journal"
"github.com/syndtr/goleveldb/leveldb/opt"
@@ -20,18 +21,31 @@ import (
"github.com/syndtr/goleveldb/leveldb/util"
)

type ErrManifestCorrupted struct {
Field string
Reason string
}

func (e *ErrManifestCorrupted) Error() string {
return fmt.Sprintf("leveldb: manifest corrupted (field '%s'): %s", e.Field, e.Reason)
}

func newErrManifestCorrupted(f storage.File, field, reason string) error {
return errors.NewErrCorrupted(f, &ErrManifestCorrupted{field, reason})
}

// session represent a persistent database session.
type session struct {
// Need 64-bit alignment.
stFileNum uint64 // current unused file number
stNextFileNum uint64 // current unused file number
stJournalNum uint64 // current journal file number; need external synchronization
stPrevJournalNum uint64 // prev journal file number; no longer used; for compatibility with older version of leveldb
stSeq uint64 // last mem compacted seq; need external synchronization
stSeqNum uint64 // last mem compacted seq; need external synchronization
stTempFileNum uint64

stor storage.Storage
storLock util.Releaser
o *opt.Options
o *cachedOptions
icmp *iComparer
tops *tOps

@@ -39,9 +53,9 @@ type session struct {
manifestWriter storage.Writer
manifestFile storage.File

stCptrs [kNumLevels]iKey // compact pointers; need external synchronization
stVersion *version // current version
vmu sync.Mutex
stCompPtrs []iKey // compaction pointers; need external synchronization
stVersion *version // current version
vmu sync.Mutex
}

// Creates new initialized session instance.
@@ -54,13 +68,14 @@ func newSession(stor storage.Storage, o *opt.Options) (s *session, err error) {
return
}
s = &session{
stor: stor,
storLock: storLock,
stor: stor,
storLock: storLock,
stCompPtrs: make([]iKey, o.GetNumLevel()),
}
s.setOptions(o)
s.tops = newTableOps(s, s.o.GetMaxOpenFiles())
s.setVersion(&version{s: s})
s.log("log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock D·DeletedEntry L·Level Q·SeqNum T·TimeElapsed")
s.tops = newTableOps(s, s.o.GetCachedOpenFiles())
s.setVersion(newVersion(s))
s.log("log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed")
return
}

@@ -100,26 +115,26 @@ func (s *session) recover() (err error) {
// Don't return os.ErrNotExist if the underlying storage contains
// other files that belong to LevelDB. So the DB won't get trashed.
if files, _ := s.stor.GetFiles(storage.TypeAll); len(files) > 0 {
err = ErrCorrupted{Type: CorruptedManifest, Err: errors.New("leveldb: manifest file missing")}
err = &errors.ErrCorrupted{File: &storage.FileInfo{Type: storage.TypeManifest}, Err: &errors.ErrMissingFiles{}}
}
}
}()

file, err := s.stor.GetManifest()
m, err := s.stor.GetManifest()
if err != nil {
return
}

reader, err := file.Open()
reader, err := m.Open()
if err != nil {
return
}
defer reader.Close()
strict := s.o.GetStrict(opt.StrictManifest)
jr := journal.NewReader(reader, dropper{s, file}, strict, true)
jr := journal.NewReader(reader, dropper{s, m}, strict, true)

staging := s.version_NB().newStaging()
rec := &sessionRecord{}
staging := s.stVersion.newStaging()
rec := &sessionRecord{numLevel: s.o.GetNumLevel()}
for {
var r io.Reader
r, err = jr.Next()
@@ -128,51 +143,57 @@ func (s *session) recover() (err error) {
err = nil
break
}
return
return errors.SetFile(err, m)
}

err = rec.decode(r)
if err == nil {
// save compact pointers
for _, r := range rec.compactionPointers {
s.stCptrs[r.level] = iKey(r.ikey)
for _, r := range rec.compPtrs {
s.stCompPtrs[r.level] = iKey(r.ikey)
}
// commit record to version staging
staging.commit(rec)
} else if strict {
return ErrCorrupted{Type: CorruptedManifest, Err: err}
} else {
s.logf("manifest error: %v (skipped)", err)
err = errors.SetFile(err, m)
if strict || !errors.IsCorrupted(err) {
return
} else {
s.logf("manifest error: %v (skipped)", errors.SetFile(err, m))
}
}
rec.resetCompactionPointers()
rec.resetCompPtrs()
rec.resetAddedTables()
rec.resetDeletedTables()
}

switch {
case !rec.has(recComparer):
return ErrCorrupted{Type: CorruptedManifest, Err: errors.New("leveldb: manifest missing comparer name")}
return newErrManifestCorrupted(m, "comparer", "missing")
case rec.comparer != s.icmp.uName():
return ErrCorrupted{Type: CorruptedManifest, Err: errors.New("leveldb: comparer mismatch, " + "want '" + s.icmp.uName() + "', " + "got '" + rec.comparer + "'")}
case !rec.has(recNextNum):
return ErrCorrupted{Type: CorruptedManifest, Err: errors.New("leveldb: manifest missing next file number")}
return newErrManifestCorrupted(m, "comparer", fmt.Sprintf("mismatch: want '%s', got '%s'", s.icmp.uName(), rec.comparer))
case !rec.has(recNextFileNum):
return newErrManifestCorrupted(m, "next-file-num", "missing")
case !rec.has(recJournalNum):
return ErrCorrupted{Type: CorruptedManifest, Err: errors.New("leveldb: manifest missing journal file number")}
case !rec.has(recSeq):
return ErrCorrupted{Type: CorruptedManifest, Err: errors.New("leveldb: manifest missing seq number")}
return newErrManifestCorrupted(m, "journal-file-num", "missing")
case !rec.has(recSeqNum):
return newErrManifestCorrupted(m, "seq-num", "missing")
}

s.manifestFile = file
s.manifestFile = m
s.setVersion(staging.finish())
s.setFileNum(rec.nextNum)
s.setNextFileNum(rec.nextFileNum)
s.recordCommited(rec)
return nil
}

// Commit session; need external synchronization.
func (s *session) commit(r *sessionRecord) (err error) {
v := s.version()
defer v.release()

// spawn new version based on current version
nv := s.version_NB().spawn(r)
nv := v.spawn(r)

if s.manifest == nil {
// manifest journal writer not yet created, create one
@@ -191,13 +212,13 @@ func (s *session) commit(r *sessionRecord) (err error) {

// Pick a compaction based on current state; need external synchronization.
func (s *session) pickCompaction() *compaction {
v := s.version_NB()
v := s.version()

var level int
var t0 tFiles
if v.cScore >= 1 {
level = v.cLevel
cptr := s.stCptrs[level]
cptr := s.stCompPtrs[level]
tables := v.tables[level]
for _, t := range tables {
if cptr == nil || s.icmp.Compare(t.imax, cptr) > 0 {
@@ -214,27 +235,21 @@ func (s *session) pickCompaction() *compaction {
level = ts.level
t0 = append(t0, ts.table)
} else {
v.release()
return nil
}
}

c := &compaction{s: s, v: v, level: level}
if level == 0 {
imin, imax := t0.getRange(s.icmp)
t0 = v.tables[0].getOverlaps(t0[:0], s.icmp, imin.ukey(), imax.ukey(), true)
}

c.tables[0] = t0
c.expand()
return c
return newCompaction(s, v, level, t0)
}

// Create compaction from given level and range; need external synchronization.
func (s *session) getCompactionRange(level int, umin, umax []byte) *compaction {
v := s.version_NB()
v := s.version()

t0 := v.tables[level].getOverlaps(nil, s.icmp, umin, umax, level == 0)
if len(t0) == 0 {
v.release()
return nil
}

@@ -243,7 +258,7 @@ func (s *session) getCompactionRange(level int, umin, umax []byte) *compaction {
// and we must not pick one file and drop another older file if the
// two files overlap.
if level > 0 {
limit := uint64(kMaxTableSize)
limit := uint64(v.s.o.GetCompactionSourceLimit(level))
total := uint64(0)
for i, t := range t0 {
total += t.size
@@ -255,9 +270,20 @@ func (s *session) getCompactionRange(level int, umin, umax []byte) *compaction {
}
}

c := &compaction{s: s, v: v, level: level}
c.tables[0] = t0
return newCompaction(s, v, level, t0)
}

func newCompaction(s *session, v *version, level int, t0 tFiles) *compaction {
c := &compaction{
s: s,
v: v,
level: level,
tables: [2]tFiles{t0, nil},
maxGPOverlaps: uint64(s.o.GetCompactionGPOverlaps(level)),
tPtrs: make([]int, s.o.GetNumLevel()),
}
c.expand()
c.save()
return c
}

@@ -266,25 +292,57 @@ type compaction struct {
s *session
v *version

level int
tables [2]tFiles
level int
tables [2]tFiles
maxGPOverlaps uint64

gp tFiles
gpidx int
seenKey bool
overlappedBytes uint64
imin, imax iKey
gp tFiles
gpi int
seenKey bool
gpOverlappedBytes uint64
imin, imax iKey
tPtrs []int
released bool

tPtrs [kNumLevels]int
snapGPI int
snapSeenKey bool
snapGPOverlappedBytes uint64
snapTPtrs []int
}

func (c *compaction) save() {
c.snapGPI = c.gpi
c.snapSeenKey = c.seenKey
c.snapGPOverlappedBytes = c.gpOverlappedBytes
c.snapTPtrs = append(c.snapTPtrs[:0], c.tPtrs...)
}

func (c *compaction) restore() {
c.gpi = c.snapGPI
c.seenKey = c.snapSeenKey
c.gpOverlappedBytes = c.snapGPOverlappedBytes
c.tPtrs = append(c.tPtrs[:0], c.snapTPtrs...)
}

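The save/restore pair above snapshots the mutable cursor state (grandparent index, seen-key flag, overlap counter, table pointers) so a compaction that fails mid-run can be rewound to a consistent point and retried. A usage sketch; doCompactionWork is a hypothetical stand-in for the real caller:

// Sketch only: snapshot-then-rollback pattern around compaction work.
c.save()
if err := doCompactionWork(c); err != nil {
	c.restore() // rewind gpi/seenKey/gpOverlappedBytes/tPtrs before retrying
}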
func (c *compaction) release() {
if !c.released {
c.released = true
c.v.release()
}
}

// Expand compacted tables; need external synchronization.
func (c *compaction) expand() {
level := c.level
vt0, vt1 := c.v.tables[level], c.v.tables[level+1]
limit := uint64(c.s.o.GetCompactionExpandLimit(c.level))
vt0, vt1 := c.v.tables[c.level], c.v.tables[c.level+1]

t0, t1 := c.tables[0], c.tables[1]
imin, imax := t0.getRange(c.s.icmp)
// We expand t0 here just incase ukey hop across tables.
t0 = vt0.getOverlaps(t0, c.s.icmp, imin.ukey(), imax.ukey(), c.level == 0)
if len(t0) != len(c.tables[0]) {
imin, imax = t0.getRange(c.s.icmp)
}
t1 = vt1.getOverlaps(t1, c.s.icmp, imin.ukey(), imax.ukey(), false)
// Get entire range covered by compaction.
amin, amax := append(t0, t1...).getRange(c.s.icmp)
@@ -292,13 +350,13 @@ func (c *compaction) expand() {
// See if we can grow the number of inputs in "level" without
// changing the number of "level+1" files we pick up.
if len(t1) > 0 {
exp0 := vt0.getOverlaps(nil, c.s.icmp, amin.ukey(), amax.ukey(), level == 0)
if len(exp0) > len(t0) && t1.size()+exp0.size() < kExpCompactionMaxBytes {
exp0 := vt0.getOverlaps(nil, c.s.icmp, amin.ukey(), amax.ukey(), c.level == 0)
if len(exp0) > len(t0) && t1.size()+exp0.size() < limit {
xmin, xmax := exp0.getRange(c.s.icmp)
exp1 := vt1.getOverlaps(nil, c.s.icmp, xmin.ukey(), xmax.ukey(), false)
if len(exp1) == len(t1) {
c.s.logf("table@compaction expanding L%d+L%d (F·%d S·%s)+(F·%d S·%s) -> (F·%d S·%s)+(F·%d S·%s)",
level, level+1, len(t0), shortenb(int(t0.size())), len(t1), shortenb(int(t1.size())),
c.level, c.level+1, len(t0), shortenb(int(t0.size())), len(t1), shortenb(int(t1.size())),
len(exp0), shortenb(int(exp0.size())), len(exp1), shortenb(int(exp1.size())))
imin, imax = xmin, xmax
t0, t1 = exp0, exp1
@@ -309,8 +367,8 @@ func (c *compaction) expand() {

// Compute the set of grandparent files that overlap this compaction
// (parent == level+1; grandparent == level+2)
if level+2 < kNumLevels {
c.gp = c.v.tables[level+2].getOverlaps(c.gp, c.s.icmp, amin.ukey(), amax.ukey(), false)
if c.level+2 < c.s.o.GetNumLevel() {
c.gp = c.v.tables[c.level+2].getOverlaps(c.gp, c.s.icmp, amin.ukey(), amax.ukey(), false)
}

c.tables[0], c.tables[1] = t0, t1
@@ -319,7 +377,7 @@ func (c *compaction) expand() {

// Check whether compaction is trivial.
func (c *compaction) trivial() bool {
return len(c.tables[0]) == 1 && len(c.tables[1]) == 0 && c.gp.size() <= kMaxGrandParentOverlapBytes
return len(c.tables[0]) == 1 && len(c.tables[1]) == 0 && c.gp.size() <= c.maxGPOverlaps
}

func (c *compaction) baseLevelForKey(ukey []byte) bool {
@@ -341,20 +399,20 @@ func (c *compaction) baseLevelForKey(ukey []byte) bool {
}

func (c *compaction) shouldStopBefore(ikey iKey) bool {
for ; c.gpidx < len(c.gp); c.gpidx++ {
gp := c.gp[c.gpidx]
for ; c.gpi < len(c.gp); c.gpi++ {
gp := c.gp[c.gpi]
if c.s.icmp.Compare(ikey, gp.imax) <= 0 {
break
}
if c.seenKey {
c.overlappedBytes += gp.size
c.gpOverlappedBytes += gp.size
}
}
c.seenKey = true

if c.overlappedBytes > kMaxGrandParentOverlapBytes {
if c.gpOverlappedBytes > c.maxGPOverlaps {
// Too much overlap for current output; start new output.
c.overlappedBytes = 0
c.gpOverlappedBytes = 0
return true
}
return false
@@ -373,8 +431,12 @@ func (c *compaction) newIterator() iterator.Iterator {
// Options.
ro := &opt.ReadOptions{
DontFillCache: true,
Strict: opt.StrictOverride,
}
strict := c.s.o.GetStrict(opt.StrictCompaction)
if strict {
ro.Strict |= opt.StrictReader
}
strict := c.s.o.GetStrict(opt.StrictIterator)

for i, tables := range c.tables {
if len(tables) == 0 {
@@ -387,10 +449,10 @@ func (c *compaction) newIterator() iterator.Iterator {
its = append(its, c.s.tops.newIterator(t, nil, ro))
}
} else {
it := iterator.NewIndexedIterator(tables.newIndexIterator(c.s.tops, c.s.icmp, nil, ro), strict, true)
it := iterator.NewIndexedIterator(tables.newIndexIterator(c.s.tops, c.s.icmp, nil, ro), strict)
its = append(its, it)
}
}

return iterator.NewMergedIterator(its, c.s.icmp, true)
return iterator.NewMergedIterator(its, c.s.icmp, strict)
}

191  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/session_record.go  (generated, vendored)
@@ -9,11 +9,11 @@ package leveldb
import (
"bufio"
"encoding/binary"
"errors"
"io"
)
"strings"

var errCorruptManifest = errors.New("leveldb: corrupt manifest")
"github.com/syndtr/goleveldb/leveldb/errors"
)

type byteReader interface {
io.Reader
@@ -22,13 +22,13 @@ type byteReader interface {

// These numbers are written to disk and should not be changed.
const (
recComparer = 1
recJournalNum = 2
recNextNum = 3
recSeq = 4
recCompactionPointer = 5
recDeletedTable = 6
recNewTable = 7
recComparer = 1
recJournalNum = 2
recNextFileNum = 3
recSeqNum = 4
recCompPtr = 5
recDelTable = 6
recAddTable = 7
// 8 was used for large value refs
recPrevJournalNum = 9
)
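The record tags above are written to the manifest as uvarints, each followed by its field payload (see the encode method further down in this diff). A minimal sketch of that wire format (illustrative only):

// Sketch only: a manifest entry is a sequence of (tag, payload) uvarints,
// e.g. a seq-num record is the recSeqNum tag (4) followed by the value.
buf := make([]byte, 0, 2*binary.MaxVarintLen64)
var scratch [binary.MaxVarintLen64]byte
n := binary.PutUvarint(scratch[:], 4) // recSeqNum tag
buf = append(buf, scratch[:n]...)
n = binary.PutUvarint(scratch[:], 1000) // the sequence number itself
buf = append(buf, scratch[:n]...)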
@@ -38,7 +38,7 @@ type cpRecord struct {
ikey iKey
}

type ntRecord struct {
type atRecord struct {
level int
num uint64
size uint64
@@ -46,27 +46,26 @@ type ntRecord struct {
imax iKey
}

func (r ntRecord) makeFile(s *session) *tFile {
return newTableFile(s.getTableFile(r.num), r.size, r.imin, r.imax)
}

type dtRecord struct {
level int
num uint64
}

type sessionRecord struct {
hasRec int
comparer string
journalNum uint64
prevJournalNum uint64
nextNum uint64
seq uint64
compactionPointers []cpRecord
addedTables []ntRecord
deletedTables []dtRecord
scratch [binary.MaxVarintLen64]byte
err error
numLevel int

hasRec int
comparer string
journalNum uint64
prevJournalNum uint64
nextFileNum uint64
seqNum uint64
compPtrs []cpRecord
addedTables []atRecord
deletedTables []dtRecord

scratch [binary.MaxVarintLen64]byte
err error
}

func (p *sessionRecord) has(rec int) bool {
@@ -88,29 +87,29 @@ func (p *sessionRecord) setPrevJournalNum(num uint64) {
p.prevJournalNum = num
}

func (p *sessionRecord) setNextNum(num uint64) {
p.hasRec |= 1 << recNextNum
p.nextNum = num
func (p *sessionRecord) setNextFileNum(num uint64) {
p.hasRec |= 1 << recNextFileNum
p.nextFileNum = num
}

func (p *sessionRecord) setSeq(seq uint64) {
p.hasRec |= 1 << recSeq
p.seq = seq
func (p *sessionRecord) setSeqNum(num uint64) {
p.hasRec |= 1 << recSeqNum
p.seqNum = num
}

func (p *sessionRecord) addCompactionPointer(level int, ikey iKey) {
p.hasRec |= 1 << recCompactionPointer
p.compactionPointers = append(p.compactionPointers, cpRecord{level, ikey})
func (p *sessionRecord) addCompPtr(level int, ikey iKey) {
p.hasRec |= 1 << recCompPtr
p.compPtrs = append(p.compPtrs, cpRecord{level, ikey})
}

func (p *sessionRecord) resetCompactionPointers() {
p.hasRec &= ^(1 << recCompactionPointer)
p.compactionPointers = p.compactionPointers[:0]
func (p *sessionRecord) resetCompPtrs() {
p.hasRec &= ^(1 << recCompPtr)
p.compPtrs = p.compPtrs[:0]
}

func (p *sessionRecord) addTable(level int, num, size uint64, imin, imax iKey) {
p.hasRec |= 1 << recNewTable
p.addedTables = append(p.addedTables, ntRecord{level, num, size, imin, imax})
p.hasRec |= 1 << recAddTable
p.addedTables = append(p.addedTables, atRecord{level, num, size, imin, imax})
}

func (p *sessionRecord) addTableFile(level int, t *tFile) {
@@ -118,17 +117,17 @@ func (p *sessionRecord) addTableFile(level int, t *tFile) {
}

func (p *sessionRecord) resetAddedTables() {
p.hasRec &= ^(1 << recNewTable)
p.hasRec &= ^(1 << recAddTable)
p.addedTables = p.addedTables[:0]
}

func (p *sessionRecord) deleteTable(level int, num uint64) {
p.hasRec |= 1 << recDeletedTable
func (p *sessionRecord) delTable(level int, num uint64) {
p.hasRec |= 1 << recDelTable
p.deletedTables = append(p.deletedTables, dtRecord{level, num})
}

func (p *sessionRecord) resetDeletedTables() {
p.hasRec &= ^(1 << recDeletedTable)
p.hasRec &= ^(1 << recDelTable)
p.deletedTables = p.deletedTables[:0]
}

@@ -161,26 +160,26 @@ func (p *sessionRecord) encode(w io.Writer) error {
p.putUvarint(w, recJournalNum)
p.putUvarint(w, p.journalNum)
}
if p.has(recNextNum) {
p.putUvarint(w, recNextNum)
p.putUvarint(w, p.nextNum)
if p.has(recNextFileNum) {
p.putUvarint(w, recNextFileNum)
p.putUvarint(w, p.nextFileNum)
}
if p.has(recSeq) {
p.putUvarint(w, recSeq)
p.putUvarint(w, p.seq)
if p.has(recSeqNum) {
p.putUvarint(w, recSeqNum)
p.putUvarint(w, p.seqNum)
}
for _, r := range p.compactionPointers {
p.putUvarint(w, recCompactionPointer)
for _, r := range p.compPtrs {
p.putUvarint(w, recCompPtr)
p.putUvarint(w, uint64(r.level))
p.putBytes(w, r.ikey)
}
for _, r := range p.deletedTables {
p.putUvarint(w, recDeletedTable)
p.putUvarint(w, recDelTable)
p.putUvarint(w, uint64(r.level))
p.putUvarint(w, r.num)
}
for _, r := range p.addedTables {
p.putUvarint(w, recNewTable)
p.putUvarint(w, recAddTable)
p.putUvarint(w, uint64(r.level))
p.putUvarint(w, r.num)
p.putUvarint(w, r.size)
@@ -190,14 +189,16 @@ func (p *sessionRecord) encode(w io.Writer) error {
return p.err
}

func (p *sessionRecord) readUvarint(r io.ByteReader) uint64 {
func (p *sessionRecord) readUvarintMayEOF(field string, r io.ByteReader, mayEOF bool) uint64 {
if p.err != nil {
return 0
}
x, err := binary.ReadUvarint(r)
if err != nil {
if err == io.EOF {
p.err = errCorruptManifest
if err == io.ErrUnexpectedEOF || (mayEOF == false && err == io.EOF) {
p.err = errors.NewErrCorrupted(nil, &ErrManifestCorrupted{field, "short read"})
} else if strings.HasPrefix(err.Error(), "binary:") {
p.err = errors.NewErrCorrupted(nil, &ErrManifestCorrupted{field, err.Error()})
} else {
p.err = err
}
@@ -206,35 +207,39 @@ func (p *sessionRecord) readUvarint(r io.ByteReader) uint64 {
return x
}

func (p *sessionRecord) readBytes(r byteReader) []byte {
func (p *sessionRecord) readUvarint(field string, r io.ByteReader) uint64 {
return p.readUvarintMayEOF(field, r, false)
}

func (p *sessionRecord) readBytes(field string, r byteReader) []byte {
if p.err != nil {
return nil
}
n := p.readUvarint(r)
n := p.readUvarint(field, r)
if p.err != nil {
return nil
}
x := make([]byte, n)
_, p.err = io.ReadFull(r, x)
if p.err != nil {
if p.err == io.EOF {
p.err = errCorruptManifest
if p.err == io.ErrUnexpectedEOF {
p.err = errors.NewErrCorrupted(nil, &ErrManifestCorrupted{field, "short read"})
}
return nil
}
return x
}

func (p *sessionRecord) readLevel(r io.ByteReader) int {
func (p *sessionRecord) readLevel(field string, r io.ByteReader) int {
if p.err != nil {
return 0
}
x := p.readUvarint(r)
x := p.readUvarint(field, r)
if p.err != nil {
return 0
}
if x >= kNumLevels {
p.err = errCorruptManifest
if x >= uint64(p.numLevel) {
p.err = errors.NewErrCorrupted(nil, &ErrManifestCorrupted{field, "invalid level number"})
return 0
}
return int(x)
@@ -247,59 +252,59 @@ func (p *sessionRecord) decode(r io.Reader) error {
}
p.err = nil
for p.err == nil {
rec, err := binary.ReadUvarint(br)
if err != nil {
if err == io.EOF {
err = nil
rec := p.readUvarintMayEOF("field-header", br, true)
if p.err != nil {
if p.err == io.EOF {
return nil
}
return err
return p.err
}
switch rec {
case recComparer:
x := p.readBytes(br)
x := p.readBytes("comparer", br)
if p.err == nil {
p.setComparer(string(x))
}
case recJournalNum:
x := p.readUvarint(br)
x := p.readUvarint("journal-num", br)
if p.err == nil {
p.setJournalNum(x)
}
case recPrevJournalNum:
x := p.readUvarint(br)
x := p.readUvarint("prev-journal-num", br)
if p.err == nil {
p.setPrevJournalNum(x)
}
case recNextNum:
x := p.readUvarint(br)
case recNextFileNum:
x := p.readUvarint("next-file-num", br)
if p.err == nil {
p.setNextNum(x)
p.setNextFileNum(x)
}
case recSeq:
x := p.readUvarint(br)
case recSeqNum:
x := p.readUvarint("seq-num", br)
if p.err == nil {
p.setSeq(x)
p.setSeqNum(x)
}
case recCompactionPointer:
level := p.readLevel(br)
ikey := p.readBytes(br)
case recCompPtr:
level := p.readLevel("comp-ptr.level", br)
ikey := p.readBytes("comp-ptr.ikey", br)
if p.err == nil {
p.addCompactionPointer(level, iKey(ikey))
p.addCompPtr(level, iKey(ikey))
}
case recNewTable:
level := p.readLevel(br)
num := p.readUvarint(br)
size := p.readUvarint(br)
imin := p.readBytes(br)
imax := p.readBytes(br)
case recAddTable:
level := p.readLevel("add-table.level", br)
num := p.readUvarint("add-table.num", br)
size := p.readUvarint("add-table.size", br)
imin := p.readBytes("add-table.imin", br)
imax := p.readBytes("add-table.imax", br)
if p.err == nil {
p.addTable(level, num, size, imin, imax)
}
case recDeletedTable:
level := p.readLevel(br)
num := p.readUvarint(br)
case recDelTable:
level := p.readLevel("del-table.level", br)
num := p.readUvarint("del-table.num", br)
if p.err == nil {
p.deleteTable(level, num)
p.delTable(level, num)
}
}
}

18  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/session_record_test.go  (generated, vendored)
@@ -9,6 +9,8 @@ package leveldb
import (
"bytes"
"testing"

"github.com/syndtr/goleveldb/leveldb/opt"
)

func decodeEncode(v *sessionRecord) (res bool, err error) {
@@ -17,7 +19,7 @@ func decodeEncode(v *sessionRecord) (res bool, err error) {
if err != nil {
return
}
v2 := new(sessionRecord)
v2 := &sessionRecord{numLevel: opt.DefaultNumLevel}
err = v.decode(b)
if err != nil {
return
@@ -32,7 +34,7 @@ func decodeEncode(v *sessionRecord) (res bool, err error) {

func TestSessionRecord_EncodeDecode(t *testing.T) {
big := uint64(1) << 50
v := new(sessionRecord)
v := &sessionRecord{numLevel: opt.DefaultNumLevel}
i := uint64(0)
test := func() {
res, err := decodeEncode(v)
@@ -47,16 +49,16 @@ func TestSessionRecord_EncodeDecode(t *testing.T) {
for ; i < 4; i++ {
test()
v.addTable(3, big+300+i, big+400+i,
newIKey([]byte("foo"), big+500+1, tVal),
newIKey([]byte("zoo"), big+600+1, tDel))
v.deleteTable(4, big+700+i)
v.addCompactionPointer(int(i), newIKey([]byte("x"), big+900+1, tVal))
newIkey([]byte("foo"), big+500+1, ktVal),
newIkey([]byte("zoo"), big+600+1, ktDel))
v.delTable(4, big+700+i)
v.addCompPtr(int(i), newIkey([]byte("x"), big+900+1, ktVal))
}

v.setComparer("foo")
v.setJournalNum(big + 100)
v.setPrevJournalNum(big + 99)
v.setNextNum(big + 200)
v.setSeq(big + 1000)
v.setNextFileNum(big + 200)
v.setSeqNum(big + 1000)
test()
}

69  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/session_util.go  (generated, vendored)
@@ -22,7 +22,7 @@ type dropper struct {
}

func (d dropper) Drop(err error) {
if e, ok := err.(journal.ErrCorrupted); ok {
if e, ok := err.(*journal.ErrCorrupted); ok {
d.s.logf("journal@drop %s-%d S·%s %q", d.file.Type(), d.file.Num(), shortenb(e.Size), e.Reason)
} else {
d.s.logf("journal@drop %s-%d %q", d.file.Type(), d.file.Num(), err)
@@ -51,9 +51,14 @@ func (s *session) newTemp() storage.File {
return s.stor.GetFile(num, storage.TypeTemp)
}

func (s *session) tableFileFromRecord(r atRecord) *tFile {
return newTableFile(s.getTableFile(r.num), r.size, r.imin, r.imax)
}

// Session state.

// Get current version.
// Get current version. This will incr version ref, must call
// version.release (exactly once) after use.
func (s *session) version() *version {
s.vmu.Lock()
defer s.vmu.Unlock()
@@ -61,61 +66,56 @@ func (s *session) version() *version {
return s.stVersion
}

// Get current version; no barrier.
func (s *session) version_NB() *version {
return s.stVersion
}

// Set current version to v.
func (s *session) setVersion(v *version) {
s.vmu.Lock()
v.ref = 1
v.ref = 1 // Holds by session.
if old := s.stVersion; old != nil {
v.ref++
v.ref++ // Holds by old version.
old.next = v
old.release_NB()
old.releaseNB()
}
s.stVersion = v
s.vmu.Unlock()
}

// Get current unused file number.
func (s *session) fileNum() uint64 {
return atomic.LoadUint64(&s.stFileNum)
func (s *session) nextFileNum() uint64 {
return atomic.LoadUint64(&s.stNextFileNum)
}

// Get current unused file number to num.
func (s *session) setFileNum(num uint64) {
atomic.StoreUint64(&s.stFileNum, num)
// Set current unused file number to num.
func (s *session) setNextFileNum(num uint64) {
atomic.StoreUint64(&s.stNextFileNum, num)
}

// Mark file number as used.
func (s *session) markFileNum(num uint64) {
num += 1
nextFileNum := num + 1
for {
old, x := s.stFileNum, num
old, x := s.stNextFileNum, nextFileNum
if old > x {
x = old
}
if atomic.CompareAndSwapUint64(&s.stFileNum, old, x) {
if atomic.CompareAndSwapUint64(&s.stNextFileNum, old, x) {
break
}
}
}

// Allocate a file number.
func (s *session) allocFileNum() (num uint64) {
return atomic.AddUint64(&s.stFileNum, 1) - 1
func (s *session) allocFileNum() uint64 {
return atomic.AddUint64(&s.stNextFileNum, 1) - 1
}

// Reuse given file number.
func (s *session) reuseFileNum(num uint64) {
for {
old, x := s.stFileNum, num
old, x := s.stNextFileNum, num
if old != x+1 {
x = old
}
if atomic.CompareAndSwapUint64(&s.stFileNum, old, x) {
if atomic.CompareAndSwapUint64(&s.stNextFileNum, old, x) {
break
}
}
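markFileNum above is a lock-free maximum: it retries a compare-and-swap until the counter is at least num+1. A standalone sketch of the same pattern (illustrative; not from this changeset, and it uses an explicit atomic load where the original re-reads the field directly):

// Sketch only: lock-free "raise to at least x" via CompareAndSwap,
// the same shape as markFileNum above (requires "sync/atomic").
func raiseToAtLeast(addr *uint64, x uint64) {
	for {
		old := atomic.LoadUint64(addr)
		if old >= x {
			return // already high enough
		}
		if atomic.CompareAndSwapUint64(addr, old, x) {
			return // won the race
		}
		// lost the race; reload and retry
	}
}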
@@ -126,20 +126,20 @@ func (s *session) reuseFileNum(num uint64) {
// Fill given session record obj with current states; need external
// synchronization.
func (s *session) fillRecord(r *sessionRecord, snapshot bool) {
r.setNextNum(s.fileNum())
r.setNextFileNum(s.nextFileNum())

if snapshot {
if !r.has(recJournalNum) {
r.setJournalNum(s.stJournalNum)
}

if !r.has(recSeq) {
r.setSeq(s.stSeq)
if !r.has(recSeqNum) {
r.setSeqNum(s.stSeqNum)
}

for level, ik := range s.stCptrs {
for level, ik := range s.stCompPtrs {
if ik != nil {
r.addCompactionPointer(level, ik)
r.addCompPtr(level, ik)
}
}

@@ -147,7 +147,7 @@ func (s *session) fillRecord(r *sessionRecord, snapshot bool) {
}
}

// Mark if record has been commited, this will update session state;
// Mark if record has been committed, this will update session state;
// need external synchronization.
func (s *session) recordCommited(r *sessionRecord) {
if r.has(recJournalNum) {
@@ -158,12 +158,12 @@ func (s *session) recordCommited(r *sessionRecord) {
s.stPrevJournalNum = r.prevJournalNum
}

if r.has(recSeq) {
s.stSeq = r.seq
if r.has(recSeqNum) {
s.stSeqNum = r.seqNum
}

for _, p := range r.compactionPointers {
s.stCptrs[p.level] = iKey(p.ikey)
for _, p := range r.compPtrs {
s.stCompPtrs[p.level] = iKey(p.ikey)
}
}

@@ -178,10 +178,11 @@ func (s *session) newManifest(rec *sessionRecord, v *version) (err error) {
jw := journal.NewWriter(writer)

if v == nil {
v = s.version_NB()
v = s.version()
defer v.release()
}
if rec == nil {
rec = new(sessionRecord)
rec = &sessionRecord{numLevel: s.o.GetNumLevel()}
}
s.fillRecord(rec, true)
v.fillRecord(rec)
Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/storage/storage.go
generated
vendored
30
Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/storage/storage.go
generated
vendored
@@ -125,3 +125,33 @@ type Storage interface {
|
||||
// Other methods should not be called after the storage has been closed.
|
||||
Close() error
|
||||
}
|
||||
|
||||
// FileInfo wraps basic file info.
|
||||
type FileInfo struct {
|
||||
Type FileType
|
||||
Num uint64
|
||||
}
|
||||
|
||||
func (fi FileInfo) String() string {
|
||||
switch fi.Type {
|
||||
case TypeManifest:
|
||||
return fmt.Sprintf("MANIFEST-%06d", fi.Num)
|
||||
case TypeJournal:
|
||||
return fmt.Sprintf("%06d.log", fi.Num)
|
||||
case TypeTable:
|
||||
return fmt.Sprintf("%06d.ldb", fi.Num)
|
||||
case TypeTemp:
|
||||
return fmt.Sprintf("%06d.tmp", fi.Num)
|
||||
default:
|
||||
return fmt.Sprintf("%#x-%d", fi.Type, fi.Num)
|
||||
}
|
||||
}
|
||||
|
||||
// NewFileInfo creates new FileInfo from the given File. It will returns nil
|
||||
// if File is nil.
|
||||
func NewFileInfo(f File) *FileInfo {
|
||||
if f == nil {
|
||||
return nil
|
||||
}
|
||||
return &FileInfo{f.Type(), f.Num()}
|
||||
}
|
||||
|
||||
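A quick sketch of the naming scheme implemented by String above (illustrative only):

// Sketch only: FileInfo renders the standard LevelDB file names.
fi := storage.FileInfo{Type: storage.TypeTable, Num: 42}
_ = fi.String() // "000042.ldb"
fi = storage.FileInfo{Type: storage.TypeManifest, Num: 3}
_ = fi.String() // "MANIFEST-000003"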
162  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/storage_test.go  (generated, vendored)
@@ -11,6 +11,7 @@ import (
"fmt"
"io"
"io/ioutil"
"math/rand"
"os"
"path/filepath"
"sync"
@@ -28,11 +29,25 @@ (
)

var (
tsFSEnv = os.Getenv("GOLEVELDB_USEFS")
tsKeepFS = tsFSEnv == "2"
tsFS = tsKeepFS || tsFSEnv == "" || tsFSEnv == "1"
tsMU = &sync.Mutex{}
tsNum = 0
tsFSEnv = os.Getenv("GOLEVELDB_USEFS")
tsTempdir = os.Getenv("GOLEVELDB_TEMPDIR")
tsKeepFS = tsFSEnv == "2"
tsFS = tsKeepFS || tsFSEnv == "" || tsFSEnv == "1"
tsMU = &sync.Mutex{}
tsNum = 0
)

type tsOp uint

const (
tsOpOpen tsOp = iota
tsOpCreate
tsOpRead
tsOpReadAt
tsOpWrite
tsOpSync

tsOpNum
)

type tsLock struct {
@@ -53,6 +68,9 @@ type tsReader struct {
func (tr tsReader) Read(b []byte) (n int, err error) {
ts := tr.tf.ts
ts.countRead(tr.tf.Type())
if tr.tf.shouldErrLocked(tsOpRead) {
return 0, errors.New("leveldb.testStorage: emulated read error")
}
n, err = tr.Reader.Read(b)
if err != nil && err != io.EOF {
ts.t.Errorf("E: read error, num=%d type=%v n=%d: %v", tr.tf.Num(), tr.tf.Type(), n, err)
@@ -63,6 +81,9 @@ func (tr tsReader) Read(b []byte) (n int, err error) {
func (tr tsReader) ReadAt(b []byte, off int64) (n int, err error) {
ts := tr.tf.ts
ts.countRead(tr.tf.Type())
if tr.tf.shouldErrLocked(tsOpReadAt) {
return 0, errors.New("leveldb.testStorage: emulated readAt error")
}
n, err = tr.Reader.ReadAt(b, off)
if err != nil && err != io.EOF {
ts.t.Errorf("E: readAt error, num=%d type=%v off=%d n=%d: %v", tr.tf.Num(), tr.tf.Type(), off, n, err)
@@ -82,15 +103,12 @@ type tsWriter struct {
}

func (tw tsWriter) Write(b []byte) (n int, err error) {
ts := tw.tf.ts
ts.mu.Lock()
defer ts.mu.Unlock()
if ts.emuWriteErr&tw.tf.Type() != 0 {
if tw.tf.shouldErrLocked(tsOpWrite) {
return 0, errors.New("leveldb.testStorage: emulated write error")
}
n, err = tw.Writer.Write(b)
if err != nil {
ts.t.Errorf("E: write error, num=%d type=%v n=%d: %v", tw.tf.Num(), tw.tf.Type(), n, err)
tw.tf.ts.t.Errorf("E: write error, num=%d type=%v n=%d: %v", tw.tf.Num(), tw.tf.Type(), n, err)
}
return
}
@@ -98,23 +116,23 @@ func (tw tsWriter) Write(b []byte) (n int, err error) {
func (tw tsWriter) Sync() (err error) {
ts := tw.tf.ts
ts.mu.Lock()
defer ts.mu.Unlock()
for ts.emuDelaySync&tw.tf.Type() != 0 {
ts.cond.Wait()
}
if ts.emuSyncErr&tw.tf.Type() != 0 {
ts.mu.Unlock()
if tw.tf.shouldErrLocked(tsOpSync) {
return errors.New("leveldb.testStorage: emulated sync error")
}
err = tw.Writer.Sync()
if err != nil {
ts.t.Errorf("E: sync error, num=%d type=%v: %v", tw.tf.Num(), tw.tf.Type(), err)
tw.tf.ts.t.Errorf("E: sync error, num=%d type=%v: %v", tw.tf.Num(), tw.tf.Type(), err)
}
return
}

func (tw tsWriter) Close() (err error) {
err = tw.Writer.Close()
tw.tf.close("reader", err)
tw.tf.close("writer", err)
return
}

@@ -127,6 +145,16 @@ func (tf tsFile) x() uint64 {
return tf.Num()<<typeShift | uint64(tf.Type())
}

func (tf tsFile) shouldErr(op tsOp) bool {
return tf.ts.shouldErr(tf, op)
}

func (tf tsFile) shouldErrLocked(op tsOp) bool {
tf.ts.mu.Lock()
defer tf.ts.mu.Unlock()
return tf.shouldErr(op)
}

func (tf tsFile) checkOpen(m string) error {
ts := tf.ts
if writer, ok := ts.opens[tf.x()]; ok {
@@ -163,7 +191,7 @@ func (tf tsFile) Open() (r storage.Reader, err error) {
if err != nil {
return
}
if ts.emuOpenErr&tf.Type() != 0 {
if tf.shouldErr(tsOpOpen) {
err = errors.New("leveldb.testStorage: emulated open error")
return
}
@@ -190,7 +218,7 @@ func (tf tsFile) Create() (w storage.Writer, err error) {
if err != nil {
return
}
if ts.emuCreateErr&tf.Type() != 0 {
if tf.shouldErr(tsOpCreate) {
err = errors.New("leveldb.testStorage: emulated create error")
return
}
@@ -205,6 +233,23 @@ func (tf tsFile) Create() (w storage.Writer, err error) {
return
}

func (tf tsFile) Replace(newfile storage.File) (err error) {
ts := tf.ts
ts.mu.Lock()
defer ts.mu.Unlock()
err = tf.checkOpen("replace")
if err != nil {
return
}
err = tf.File.Replace(newfile.(tsFile).File)
if err != nil {
ts.t.Errorf("E: cannot replace file, num=%d type=%v: %v", tf.Num(), tf.Type(), err)
} else {
ts.t.Logf("I: file replace, num=%d type=%v", tf.Num(), tf.Type())
}
return
}

func (tf tsFile) Remove() (err error) {
ts := tf.ts
ts.mu.Lock()
@@ -231,25 +276,61 @@ type testStorage struct {
cond sync.Cond
// Open files, true=writer, false=reader
opens map[uint64]bool
emuOpenErr storage.FileType
emuCreateErr storage.FileType
emuDelaySync storage.FileType
emuWriteErr storage.FileType
emuSyncErr storage.FileType
ignoreOpenErr storage.FileType
readCnt uint64
readCntEn storage.FileType

emuErr [tsOpNum]storage.FileType
emuErrOnce [tsOpNum]storage.FileType
emuRandErr [tsOpNum]storage.FileType
emuRandErrProb int
emuErrOnceMap map[uint64]uint
emuRandRand *rand.Rand
}

func (ts *testStorage) SetOpenErr(t storage.FileType) {
func (ts *testStorage) shouldErr(tf tsFile, op tsOp) bool {
if ts.emuErr[op]&tf.Type() != 0 {
return true
} else if ts.emuRandErr[op]&tf.Type() != 0 || ts.emuErrOnce[op]&tf.Type() != 0 {
sop := uint(1) << op
eop := ts.emuErrOnceMap[tf.x()]
if eop&sop == 0 && (ts.emuRandRand.Int()%ts.emuRandErrProb == 0 || ts.emuErrOnce[op]&tf.Type() != 0) {
ts.emuErrOnceMap[tf.x()] = eop | sop
ts.t.Logf("I: emulated error: file=%d type=%v op=%v", tf.Num(), tf.Type(), op)
return true
}
}
return false
}

func (ts *testStorage) SetEmuErr(t storage.FileType, ops ...tsOp) {
ts.mu.Lock()
ts.emuOpenErr = t
for _, op := range ops {
ts.emuErr[op] = t
}
ts.mu.Unlock()
}

func (ts *testStorage) SetCreateErr(t storage.FileType) {
func (ts *testStorage) SetEmuErrOnce(t storage.FileType, ops ...tsOp) {
ts.mu.Lock()
ts.emuCreateErr = t
for _, op := range ops {
ts.emuErrOnce[op] = t
}
ts.mu.Unlock()
}

func (ts *testStorage) SetEmuRandErr(t storage.FileType, ops ...tsOp) {
ts.mu.Lock()
for _, op := range ops {
ts.emuRandErr[op] = t
}
ts.mu.Unlock()
}

func (ts *testStorage) SetEmuRandErrProb(prob int) {
ts.mu.Lock()
ts.emuRandErrProb = prob
ts.mu.Unlock()
}

@@ -267,18 +348,6 @@ func (ts *testStorage) ReleaseSync(t storage.FileType) {
|
||||
ts.mu.Unlock()
|
||||
}
|
||||
|
||||
func (ts *testStorage) SetWriteErr(t storage.FileType) {
|
||||
ts.mu.Lock()
|
||||
ts.emuWriteErr = t
|
||||
ts.mu.Unlock()
|
||||
}
|
||||
|
||||
func (ts *testStorage) SetSyncErr(t storage.FileType) {
|
||||
ts.mu.Lock()
|
||||
ts.emuSyncErr = t
|
||||
ts.mu.Unlock()
|
||||
}
|
||||
|
||||
func (ts *testStorage) ReadCounter() uint64 {
|
||||
ts.mu.Lock()
|
||||
defer ts.mu.Unlock()
|
||||
@@ -413,7 +482,11 @@ func newTestStorage(t *testing.T) *testStorage {
|
||||
num := tsNum
|
||||
tsNum++
|
||||
tsMU.Unlock()
|
||||
path := filepath.Join(os.TempDir(), fmt.Sprintf("goleveldb-test%d0%d0%d", os.Getuid(), os.Getpid(), num))
|
||||
tempdir := tsTempdir
|
||||
if tempdir == "" {
|
||||
tempdir = os.TempDir()
|
||||
}
|
||||
path := filepath.Join(tempdir, fmt.Sprintf("goleveldb-test%d0%d0%d", os.Getuid(), os.Getpid(), num))
|
||||
if _, err := os.Stat(path); err != nil {
|
||||
stor, err = storage.OpenFile(path)
|
||||
if err != nil {
|
||||
@@ -436,6 +509,10 @@ func newTestStorage(t *testing.T) *testStorage {
|
||||
}
|
||||
f.Close()
|
||||
}
|
||||
if t.Failed() {
|
||||
t.Logf("testing failed, test DB preserved at %s", path)
|
||||
return nil
|
||||
}
|
||||
if tsKeepFS {
|
||||
return nil
|
||||
}
|
||||
@@ -449,10 +526,13 @@ func newTestStorage(t *testing.T) *testStorage {
|
||||
stor = storage.NewMemStorage()
|
||||
}
|
||||
ts := &testStorage{
|
||||
t: t,
|
||||
Storage: stor,
|
||||
closeFn: closeFn,
|
||||
opens: make(map[uint64]bool),
|
||||
t: t,
|
||||
Storage: stor,
|
||||
closeFn: closeFn,
|
||||
opens: make(map[uint64]bool),
|
||||
emuErrOnceMap: make(map[uint64]uint),
|
||||
emuRandErrProb: 0x999,
|
||||
emuRandRand: rand.New(rand.NewSource(0xfacedead)),
|
||||
}
|
||||
ts.cond.L = &ts.mu
|
||||
return ts
|
||||
|
||||
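The reworked test storage replaces the old per-operation setters (SetOpenErr, SetCreateErr, SetWriteErr, SetSyncErr) with a single error-emulation table indexed by operation. A minimal usage sketch, assuming goleveldb's storage.TypeTable and storage.TypeManifest constants and the tsOp values from the hunks above (the surrounding test is hypothetical):

// Inside some test: inject a one-shot emulated sync error on table
// files and a persistent create error on manifest files.
ts := newTestStorage(t)
ts.SetEmuErrOnce(storage.TypeTable, tsOpSync)
ts.SetEmuErr(storage.TypeManifest, tsOpCreate)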
143 Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/table.go generated vendored
@@ -7,6 +7,7 @@
package leveldb

import (
"fmt"
"sort"
"sync/atomic"

@@ -82,6 +83,18 @@ type tFiles []*tFile
func (tf tFiles) Len() int { return len(tf) }
func (tf tFiles) Swap(i, j int) { tf[i], tf[j] = tf[j], tf[i] }

func (tf tFiles) nums() string {
x := "[ "
for i, f := range tf {
if i != 0 {
x += ", "
}
x += fmt.Sprint(f.file.Num())
}
x += " ]"
return x
}

// Returns true if i smallest key is less than j.
// This used for sort by key in ascending order.
func (tf tFiles) lessByKey(icmp *iComparer, i, j int) bool {
@@ -149,7 +162,7 @@ func (tf tFiles) overlaps(icmp *iComparer, umin, umax []byte, unsorted bool) boo
i := 0
if len(umin) > 0 {
// Find the earliest possible internal key for min.
i = tf.searchMax(icmp, newIKey(umin, kMaxSeq, tSeek))
i = tf.searchMax(icmp, newIkey(umin, kMaxSeq, ktSeek))
}
if i >= len(tf) {
// Beginning of range is after all files, so no overlap.
@@ -159,24 +172,25 @@ func (tf tFiles) overlaps(icmp *iComparer, umin, umax []byte, unsorted bool) boo
}

// Returns tables whose its key range overlaps with given key range.
// If overlapped is true then the search will be expanded to tables that
// overlaps with each other.
// Range will be expanded if ukey found hop across tables.
// If overlapped is true then the search will be restarted if umax
// expanded.
// The dst content will be overwritten.
func (tf tFiles) getOverlaps(dst tFiles, icmp *iComparer, umin, umax []byte, overlapped bool) tFiles {
x := len(dst)
dst = dst[:0]
for i := 0; i < len(tf); {
t := tf[i]
if t.overlaps(icmp, umin, umax) {
if overlapped {
// For overlapped files, check if the newly added file has
// expanded the range. If so, restart search.
if umin != nil && icmp.uCompare(t.imin.ukey(), umin) < 0 {
umin = t.imin.ukey()
dst = dst[:x]
i = 0
continue
} else if umax != nil && icmp.uCompare(t.imax.ukey(), umax) > 0 {
umax = t.imax.ukey()
dst = dst[:x]
if umin != nil && icmp.uCompare(t.imin.ukey(), umin) < 0 {
umin = t.imin.ukey()
dst = dst[:0]
i = 0
continue
} else if umax != nil && icmp.uCompare(t.imax.ukey(), umax) > 0 {
umax = t.imax.ukey()
// Restart search if it is overlapped.
if overlapped {
dst = dst[:0]
i = 0
continue
}
@@ -289,7 +303,7 @@ func (t *tOps) create() (*tWriter, error) {
t: t,
file: file,
w: fw,
tw: table.NewWriter(fw, t.s.o),
tw: table.NewWriter(fw, t.s.o.Options),
}, nil
}

@@ -297,7 +311,7 @@ func (t *tOps) create() (*tWriter, error) {
func (t *tOps) createFrom(src iterator.Iterator) (f *tFile, n int, err error) {
w, err := t.create()
if err != nil {
return f, n, err
return
}

defer func() {
@@ -322,33 +336,30 @@ func (t *tOps) createFrom(src iterator.Iterator) (f *tFile, n int, err error) {
return
}

// Opens table. It returns a cache object, which should
// Opens table. It returns a cache handle, which should
// be released after use.
func (t *tOps) open(f *tFile) (c cache.Object, err error) {
func (t *tOps) open(f *tFile) (ch cache.Handle, err error) {
num := f.file.Num()
c, ok := t.cacheNS.Get(num, func() (ok bool, value interface{}, charge int, fin cache.SetFin) {
ch = t.cacheNS.Get(num, func() (charge int, value interface{}) {
var r storage.Reader
r, err = f.file.Open()
if err != nil {
return
return 0, nil
}

o := t.s.o

var cacheNS cache.Namespace
if bc := o.GetBlockCache(); bc != nil {
cacheNS = bc.GetNamespace(num)
var bcacheNS cache.Namespace
if bc := t.s.o.GetBlockCache(); bc != nil {
bcacheNS = bc.GetNamespace(num)
}

ok = true
value = table.NewReader(r, int64(f.size), cacheNS, t.bpool, o)
charge = 1
fin = func() {
var tr *table.Reader
tr, err = table.NewReader(r, int64(f.size), storage.NewFileInfo(f.file), bcacheNS, t.bpool, t.s.o.Options)
if err != nil {
r.Close()
return 0, nil
}
return
return 1, tr
})
if !ok && err == nil {
if ch == nil && err == nil {
err = ErrClosed
}
return
@@ -357,34 +368,43 @@ func (t *tOps) open(f *tFile) (c cache.Object, err error) {
// Finds key/value pair whose key is greater than or equal to the
// given key.
func (t *tOps) find(f *tFile, key []byte, ro *opt.ReadOptions) (rkey, rvalue []byte, err error) {
c, err := t.open(f)
ch, err := t.open(f)
if err != nil {
return nil, nil, err
}
defer c.Release()
return c.Value().(*table.Reader).Find(key, ro)
defer ch.Release()
return ch.Value().(*table.Reader).Find(key, true, ro)
}

// Finds key that is greater than or equal to the given key.
func (t *tOps) findKey(f *tFile, key []byte, ro *opt.ReadOptions) (rkey []byte, err error) {
ch, err := t.open(f)
if err != nil {
return nil, err
}
defer ch.Release()
return ch.Value().(*table.Reader).FindKey(key, true, ro)
}

// Returns approximate offset of the given key.
func (t *tOps) offsetOf(f *tFile, key []byte) (offset uint64, err error) {
c, err := t.open(f)
ch, err := t.open(f)
if err != nil {
return
}
_offset, err := c.Value().(*table.Reader).OffsetOf(key)
offset = uint64(_offset)
c.Release()
return
defer ch.Release()
offset_, err := ch.Value().(*table.Reader).OffsetOf(key)
return uint64(offset_), err
}

// Creates an iterator from the given table.
func (t *tOps) newIterator(f *tFile, slice *util.Range, ro *opt.ReadOptions) iterator.Iterator {
c, err := t.open(f)
ch, err := t.open(f)
if err != nil {
return iterator.NewEmptyIterator(err)
}
iter := c.Value().(*table.Reader).NewIterator(slice, ro)
iter.SetReleaser(c)
iter := ch.Value().(*table.Reader).NewIterator(slice, ro)
iter.SetReleaser(ch)
return iter
}

@@ -392,14 +412,16 @@ func (t *tOps) newIterator(f *tFile, slice *util.Range, ro *opt.ReadOptions) ite
// no one use the the table.
func (t *tOps) remove(f *tFile) {
num := f.file.Num()
t.cacheNS.Delete(num, func(exist bool) {
if err := f.file.Remove(); err != nil {
t.s.logf("table@remove removing @%d %q", num, err)
} else {
t.s.logf("table@remove removed @%d", num)
}
if bc := t.s.o.GetBlockCache(); bc != nil {
bc.GetNamespace(num).Zap(false)
t.cacheNS.Delete(num, func(exist, pending bool) {
if !pending {
if err := f.file.Remove(); err != nil {
t.s.logf("table@remove removing @%d %q", num, err)
} else {
t.s.logf("table@remove removed @%d", num)
}
if bc := t.s.o.GetBlockCache(); bc != nil {
bc.ZapNamespace(num)
}
}
})
}
@@ -407,7 +429,8 @@ func (t *tOps) remove(f *tFile) {
// Closes the table ops instance. It will close all tables,
// regadless still used or not.
func (t *tOps) close() {
t.cache.Zap(true)
t.cache.Zap()
t.bpool.Close()
}

// Creates new initialized table ops instance.
@@ -447,28 +470,34 @@ func (w *tWriter) empty() bool {
return w.first == nil
}

// Closes the storage.Writer.
func (w *tWriter) close() {
if w.w != nil {
w.w.Close()
w.w = nil
}
}

// Finalizes the table and returns table file.
func (w *tWriter) finish() (f *tFile, err error) {
defer w.close()
err = w.tw.Close()
if err != nil {
return
}
err = w.w.Sync()
if err != nil {
w.w.Close()
return
}
w.w.Close()
f = newTableFile(w.file, uint64(w.tw.BytesLen()), iKey(w.first), iKey(w.last))
return
}

// Drops the table.
func (w *tWriter) drop() {
w.w.Close()
w.close()
w.file.Remove()
w.t.s.reuseFileNum(w.file.Num())
w.w = nil
w.file = nil
w.tw = nil
w.first = nil
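Every accessor in table.go now follows the same discipline: open returns a cache.Handle, the *table.Reader comes out of ch.Value(), and the handle is released when the caller is done. A hedged sketch of the pattern, using only names that appear in the hunks above (the enclosing function is omitted):

// Sketch (mirrors find and offsetOf): borrow the reader via a handle.
ch, err := t.open(f)
if err != nil {
	return err
}
defer ch.Release() // drops the cache reference, not the table itself
tr := ch.Value().(*table.Reader)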
30 Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/table/block_test.go generated vendored
@@ -19,13 +19,18 @@ import (
"github.com/syndtr/goleveldb/leveldb/util"
)

func (b *block) TestNewIterator(slice *util.Range) iterator.Iterator {
return b.newIterator(slice, false, nil)
type blockTesting struct {
tr *Reader
b *block
}

func (t *blockTesting) TestNewIterator(slice *util.Range) iterator.Iterator {
return t.tr.newBlockIter(t.b, nil, slice, false)
}

var _ = testutil.Defer(func() {
Describe("Block", func() {
Build := func(kv *testutil.KeyValue, restartInterval int) *block {
Build := func(kv *testutil.KeyValue, restartInterval int) *blockTesting {
// Building the block.
bw := &blockWriter{
restartInterval: restartInterval,
@@ -39,11 +44,13 @@ var _ = testutil.Defer(func() {
// Opening the block.
data := bw.buf.Bytes()
restartsLen := int(binary.LittleEndian.Uint32(data[len(data)-4:]))
return &block{
cmp: comparer.DefaultComparer,
data: data,
restartsLen: restartsLen,
restartsOffset: len(data) - (restartsLen+1)*4,
return &blockTesting{
tr: &Reader{cmp: comparer.DefaultComparer},
b: &block{
data: data,
restartsLen: restartsLen,
restartsOffset: len(data) - (restartsLen+1)*4,
},
}
}

@@ -59,7 +66,7 @@ var _ = testutil.Defer(func() {
// Make block.
br := Build(kv, restartInterval)
// Do testing.
testutil.KeyValueTesting(nil, br, kv.Clone())
testutil.KeyValueTesting(nil, kv.Clone(), br, nil, nil)
}

Describe(Text(), Test)
@@ -102,11 +109,11 @@ var _ = testutil.Defer(func() {
for restartInterval := 1; restartInterval <= 5; restartInterval++ {
Describe(fmt.Sprintf("with restart interval of %d", restartInterval), func() {
// Make block.
br := Build(kv, restartInterval)
bt := Build(kv, restartInterval)

Test := func(r *util.Range) func(done Done) {
return func(done Done) {
iter := br.newIterator(r, false, nil)
iter := bt.TestNewIterator(r)
Expect(iter.Error()).ShouldNot(HaveOccurred())

t := testutil.IteratorTesting{
@@ -115,6 +122,7 @@ var _ = testutil.Defer(func() {
}

testutil.DoIteratorTesting(&t)
iter.Release()
done <- true
}
}
686 Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/table/reader.go generated vendored
File diff suppressed because it is too large
6 Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/table/table.go generated vendored
@@ -133,9 +133,9 @@ Filter block trailer:

+- 4-bytes -+
/ \
+---------------+---------------+---------------+-------------------------+------------------+
| offset 1 | .... | offset n | filter offset (4-bytes) | base Lg (1-byte) |
+-------------- +---------------+---------------+-------------------------+------------------+
+---------------+---------------+---------------+-------------------------------+------------------+
| data 1 offset | .... | data n offset | data-offsets offset (4-bytes) | base Lg (1-byte) |
+-------------- +---------------+---------------+-------------------------------+------------------+


NOTE: All fixed-length integer are little-endian.
8 Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/table/table_suite_test.go generated vendored
@@ -3,15 +3,9 @@ package table
import (
"testing"

. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"

"github.com/syndtr/goleveldb/leveldb/testutil"
)

func TestTable(t *testing.T) {
testutil.RunDefer()

RegisterFailHandler(Fail)
RunSpecs(t, "Table Suite")
testutil.RunSuite(t, "Table Suite")
}
15 Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/table/table_test.go generated vendored
@@ -23,7 +23,7 @@ type tableWrapper struct {
}

func (t tableWrapper) TestFind(key []byte) (rkey, rvalue []byte, err error) {
return t.Reader.Find(key, nil)
return t.Reader.Find(key, false, nil)
}

func (t tableWrapper) TestGet(key []byte) (value []byte, err error) {
@@ -59,7 +59,8 @@ var _ = testutil.Defer(func() {
It("Should be able to approximate offset of a key correctly", func() {
Expect(err).ShouldNot(HaveOccurred())

tr := NewReader(bytes.NewReader(buf.Bytes()), int64(buf.Len()), nil, nil, o)
tr, err := NewReader(bytes.NewReader(buf.Bytes()), int64(buf.Len()), nil, nil, nil, o)
Expect(err).ShouldNot(HaveOccurred())
CheckOffset := func(key string, expect, threshold int) {
offset, err := tr.OffsetOf([]byte(key))
Expect(err).ShouldNot(HaveOccurred())
@@ -95,7 +96,7 @@ var _ = testutil.Defer(func() {
tw.Close()

// Opening the table.
tr := NewReader(bytes.NewReader(buf.Bytes()), int64(buf.Len()), nil, nil, o)
tr, _ := NewReader(bytes.NewReader(buf.Bytes()), int64(buf.Len()), nil, nil, nil, o)
return tableWrapper{tr}
}
Test := func(kv *testutil.KeyValue, body func(r *Reader)) func() {
@@ -104,14 +105,16 @@ var _ = testutil.Defer(func() {
if body != nil {
body(db.(tableWrapper).Reader)
}
testutil.KeyValueTesting(nil, db, *kv)
testutil.KeyValueTesting(nil, *kv, db, nil, nil)
}
}

testutil.AllKeyValueTesting(nil, Build)
testutil.AllKeyValueTesting(nil, Build, nil, nil)
Describe("with one key per block", Test(testutil.KeyValue_Generate(nil, 9, 1, 10, 512, 512), func(r *Reader) {
It("should have correct blocks number", func() {
Expect(r.indexBlock.restartsLen).Should(Equal(9))
indexBlock, err := r.readBlock(r.indexBH, true)
Expect(err).To(BeNil())
Expect(indexBlock.restartsLen).Should(Equal(9))
})
}))
})
2 Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/table/writer.go generated vendored
@@ -12,7 +12,7 @@ import (
"fmt"
"io"

"code.google.com/p/snappy-go/snappy"
"github.com/syndtr/gosnappy/snappy"

"github.com/syndtr/goleveldb/leveldb/comparer"
"github.com/syndtr/goleveldb/leveldb/filter"
8 Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/testutil/db.go generated vendored
@@ -12,6 +12,7 @@ import (

. "github.com/onsi/gomega"

"github.com/syndtr/goleveldb/leveldb/errors"
"github.com/syndtr/goleveldb/leveldb/iterator"
"github.com/syndtr/goleveldb/leveldb/util"
)
@@ -34,6 +35,10 @@ type Get interface {
TestGet(key []byte) (value []byte, err error)
}

type Has interface {
TestHas(key []byte) (ret bool, err error)
}

type NewIterator interface {
TestNewIterator(slice *util.Range) iterator.Iterator
}
@@ -110,7 +115,7 @@ func (t *DBTesting) TestAllPresent() {

func (t *DBTesting) TestDeletedKey(key []byte) {
_, err := t.DB.TestGet(key)
Expect(err).Should(Equal(util.ErrNotFound), "Get on deleted key %q, %s", key, t.text())
Expect(err).Should(Equal(errors.ErrNotFound), "Get on deleted key %q, %s", key, t.text())
}

func (t *DBTesting) TestAllDeleted() {
@@ -212,5 +217,6 @@ func DoDBTesting(t *DBTesting) {
}

DoIteratorTesting(&it)
iter.Release()
}
}
21 Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/testutil/ginkgo.go generated vendored Normal file
@@ -0,0 +1,21 @@
package testutil

import (
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
)

func RunSuite(t GinkgoTestingT, name string) {
RunDefer()

SynchronizedBeforeSuite(func() []byte {
RunDefer("setup")
return nil
}, func(data []byte) {})
SynchronizedAfterSuite(func() {
RunDefer("teardown")
}, func() {})

RegisterFailHandler(Fail)
RunSpecs(t, name)
}
145 Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/testutil/kvtest.go generated vendored
@@ -13,16 +13,28 @@ import (
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"

"github.com/syndtr/goleveldb/leveldb/errors"
"github.com/syndtr/goleveldb/leveldb/util"
)

func KeyValueTesting(rnd *rand.Rand, p DB, kv KeyValue) {
func KeyValueTesting(rnd *rand.Rand, kv KeyValue, p DB, setup func(KeyValue) DB, teardown func(DB)) {
if rnd == nil {
rnd = NewRand()
}

if db, ok := p.(Find); ok {
It("Should find all keys with Find", func() {
if p == nil {
BeforeEach(func() {
p = setup(kv)
})
if teardown != nil {
AfterEach(func() {
teardown(p)
})
}
}

It("Should find all keys with Find", func() {
if db, ok := p.(Find); ok {
ShuffledIndex(nil, kv.Len(), 1, func(i int) {
key_, key, value := kv.IndexInexact(i)

@@ -38,9 +50,11 @@ func KeyValueTesting(rnd *rand.Rand, p DB, kv KeyValue) {
Expect(rkey).Should(Equal(key))
Expect(rvalue).Should(Equal(value), "Value for key %q (%q)", key_, key)
})
})
}
})

It("Should return error if the key is not present", func() {
It("Should return error if the key is not present", func() {
if db, ok := p.(Find); ok {
var key []byte
if kv.Len() > 0 {
key_, _ := kv.Index(kv.Len() - 1)
@@ -48,12 +62,12 @@ func KeyValueTesting(rnd *rand.Rand, p DB, kv KeyValue) {
}
rkey, _, err := db.TestFind(key)
Expect(err).Should(HaveOccurred(), "Find for key %q yield key %q", key, rkey)
Expect(err).Should(Equal(util.ErrNotFound))
})
}
Expect(err).Should(Equal(errors.ErrNotFound))
}
})

if db, ok := p.(Get); ok {
It("Should only find exact key with Get", func() {
It("Should only find exact key with Get", func() {
if db, ok := p.(Get); ok {
ShuffledIndex(nil, kv.Len(), 1, func(i int) {
key_, key, value := kv.IndexInexact(i)

@@ -66,14 +80,34 @@ func KeyValueTesting(rnd *rand.Rand, p DB, kv KeyValue) {
if len(key_) > 0 {
_, err = db.TestGet(key_)
Expect(err).Should(HaveOccurred(), "Error for key %q", key_)
Expect(err).Should(Equal(util.ErrNotFound))
Expect(err).Should(Equal(errors.ErrNotFound))
}
})
})
}
}
})

if db, ok := p.(NewIterator); ok {
TestIter := func(r *util.Range, _kv KeyValue) {
It("Should only find present key with Has", func() {
if db, ok := p.(Has); ok {
ShuffledIndex(nil, kv.Len(), 1, func(i int) {
key_, key, _ := kv.IndexInexact(i)

// Using exact key.
ret, err := db.TestHas(key)
Expect(err).ShouldNot(HaveOccurred(), "Error for key %q", key)
Expect(ret).Should(BeTrue(), "False for key %q", key)

// Using inexact key.
if len(key_) > 0 {
ret, err = db.TestHas(key_)
Expect(err).ShouldNot(HaveOccurred(), "Error for key %q", key_)
Expect(ret).ShouldNot(BeTrue(), "True for key %q", key)
}
})
}
})

TestIter := func(r *util.Range, _kv KeyValue) {
if db, ok := p.(NewIterator); ok {
iter := db.TestNewIterator(r)
Expect(iter.Error()).ShouldNot(HaveOccurred())

@@ -83,46 +117,62 @@ func KeyValueTesting(rnd *rand.Rand, p DB, kv KeyValue) {
}

DoIteratorTesting(&t)
iter.Release()
}
}

It("Should iterates and seeks correctly", func(done Done) {
TestIter(nil, kv.Clone())
done <- true
}, 3.0)

RandomIndex(rnd, kv.Len(), Min(kv.Len(), 50), func(i int) {
type slice struct {
r *util.Range
start, limit int
}

It("Should iterates and seeks correctly", func(done Done) {
TestIter(nil, kv.Clone())
done <- true
}, 3.0)

RandomIndex(rnd, kv.Len(), kv.Len(), func(i int) {
type slice struct {
r *util.Range
start, limit int
}

key_, _, _ := kv.IndexInexact(i)
for _, x := range []slice{
{&util.Range{Start: key_, Limit: nil}, i, kv.Len()},
{&util.Range{Start: nil, Limit: key_}, 0, i},
} {
It(fmt.Sprintf("Should iterates and seeks correctly of a slice %d .. %d", x.start, x.limit), func(done Done) {
TestIter(x.r, kv.Slice(x.start, x.limit))
done <- true
}, 3.0)
}
})

RandomRange(rnd, kv.Len(), kv.Len(), func(start, limit int) {
It(fmt.Sprintf("Should iterates and seeks correctly of a slice %d .. %d", start, limit), func(done Done) {
r := kv.Range(start, limit)
TestIter(&r, kv.Slice(start, limit))
key_, _, _ := kv.IndexInexact(i)
for _, x := range []slice{
{&util.Range{Start: key_, Limit: nil}, i, kv.Len()},
{&util.Range{Start: nil, Limit: key_}, 0, i},
} {
It(fmt.Sprintf("Should iterates and seeks correctly of a slice %d .. %d", x.start, x.limit), func(done Done) {
TestIter(x.r, kv.Slice(x.start, x.limit))
done <- true
}, 3.0)
})
}
}
})

RandomRange(rnd, kv.Len(), Min(kv.Len(), 50), func(start, limit int) {
It(fmt.Sprintf("Should iterates and seeks correctly of a slice %d .. %d", start, limit), func(done Done) {
r := kv.Range(start, limit)
TestIter(&r, kv.Slice(start, limit))
done <- true
}, 3.0)
})
}

func AllKeyValueTesting(rnd *rand.Rand, body func(kv KeyValue) DB) {
func AllKeyValueTesting(rnd *rand.Rand, body, setup func(KeyValue) DB, teardown func(DB)) {
Test := func(kv *KeyValue) func() {
return func() {
db := body(*kv)
KeyValueTesting(rnd, db, *kv)
var p DB
if setup != nil {
Defer("setup", func() {
p = setup(*kv)
})
}
if teardown != nil {
Defer("teardown", func() {
teardown(p)
})
}
if body != nil {
p = body(*kv)
}
KeyValueTesting(rnd, *kv, p, func(KeyValue) DB {
return p
}, nil)
}
}

@@ -133,4 +183,5 @@ func AllKeyValueTesting(rnd *rand.Rand, body func(kv KeyValue) DB) {
Describe("with big value", Test(KeyValue_BigValue()))
Describe("with special key", Test(KeyValue_SpecialKey()))
Describe("with multiple key/value", Test(KeyValue_MultipleKeyValue()))
Describe("with generated key/value", Test(KeyValue_Generate(nil, 120, 1, 50, 10, 120)))
}
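Note the argument order change: KeyValueTesting now takes the key/value set before the DB and accepts optional setup and teardown callbacks, so suites can build the store per spec. A minimal call under the new signature (nil callbacks mean "use p as given"), matching the updated call sites elsewhere in this commit:

// Old: testutil.KeyValueTesting(nil, db, kv)
testutil.KeyValueTesting(nil, kv, db, nil, nil)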
1 Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/testutil/storage.go generated vendored
@@ -397,6 +397,7 @@ func (s *Storage) logI(format string, args ...interface{}) {

func (s *Storage) Log(str string) {
s.log(1, "Log: "+str)
s.Storage.Log(str)
}

func (s *Storage) Lock() (r util.Releaser, err error) {
14 Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/testutil/util.go generated vendored
@@ -155,3 +155,17 @@ func RandomRange(rnd *rand.Rand, n, round int, fn func(start, limit int)) {
}
return
}

func Max(x, y int) int {
if x > y {
return x
}
return y
}

func Min(x, y int) int {
if x < y {
return x
}
return y
}
4 Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/testutil_test.go generated vendored
@@ -34,6 +34,10 @@ func (t *testingDB) TestGet(key []byte) (value []byte, err error) {
return t.Get(key, t.ro)
}

func (t *testingDB) TestHas(key []byte) (ret bool, err error) {
return t.Has(key, t.ro)
}

func (t *testingDB) TestNewIterator(slice *util.Range) iterator.Iterator {
return t.NewIterator(slice, t.ro)
}
4 Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/util.go generated vendored
@@ -14,10 +14,10 @@ import (
)

func shorten(str string) string {
if len(str) <= 4 {
if len(str) <= 8 {
return str
}
return str[:1] + ".." + str[len(str)-1:]
return str[:3] + ".." + str[len(str)-3:]
}

var bunits = [...]string{"", "Ki", "Mi", "Gi"}
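The widened shorten keeps strings of up to eight characters intact and otherwise shows three characters from each end instead of one. For example:

shorten("goleveldb") // "gol..ldb" (9 > 8: first 3 + ".." + last 3)
shorten("leveldb")   // "leveldb"  (7 <= 8: returned unchanged)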
174 Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/util/buffer_pool.go generated vendored
@@ -4,14 +4,13 @@
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

// +build go1.3

package util

import (
"fmt"
"sync"
"sync/atomic"
"time"
)

type buffer struct {
@@ -21,13 +20,20 @@ type buffer struct {

// BufferPool is a 'buffer pool'.
type BufferPool struct {
pool [4]sync.Pool
size [3]uint32
sizeMiss [3]uint32
pool [6]chan []byte
size [5]uint32
sizeMiss [5]uint32
sizeHalf [5]uint32
baseline [4]int
baseline0 int
baseline1 int
baseline2 int

mu sync.RWMutex
closed bool
closeC chan struct{}

get uint32
put uint32
half uint32
less uint32
equal uint32
greater uint32
@@ -35,34 +41,58 @@ type BufferPool struct {
}

func (p *BufferPool) poolNum(n int) int {
switch {
case n <= p.baseline0:
if n <= p.baseline0 && n > p.baseline0/2 {
return 0
case n <= p.baseline1:
return 1
case n <= p.baseline2:
return 2
default:
return 3
}
for i, x := range p.baseline {
if n <= x {
return i + 1
}
}
return len(p.baseline) + 1
}

// Get returns buffer with length of n.
func (p *BufferPool) Get(n int) []byte {
if poolNum := p.poolNum(n); poolNum == 0 {
if p == nil {
return make([]byte, n)
}

p.mu.RLock()
defer p.mu.RUnlock()

if p.closed {
return make([]byte, n)
}

atomic.AddUint32(&p.get, 1)

poolNum := p.poolNum(n)
pool := p.pool[poolNum]
if poolNum == 0 {
// Fast path.
if b, ok := p.pool[0].Get().([]byte); ok {
select {
case b := <-pool:
switch {
case cap(b) > n:
atomic.AddUint32(&p.less, 1)
return b[:n]
if cap(b)-n >= n {
atomic.AddUint32(&p.half, 1)
select {
case pool <- b:
default:
}
return make([]byte, n)
} else {
atomic.AddUint32(&p.less, 1)
return b[:n]
}
case cap(b) == n:
atomic.AddUint32(&p.equal, 1)
return b[:n]
default:
panic("not reached")
atomic.AddUint32(&p.greater, 1)
}
} else {
default:
atomic.AddUint32(&p.miss, 1)
}

@@ -70,21 +100,40 @@ func (p *BufferPool) Get(n int) []byte {
} else {
sizePtr := &p.size[poolNum-1]

if b, ok := p.pool[poolNum].Get().([]byte); ok {
select {
case b := <-pool:
switch {
case cap(b) > n:
atomic.AddUint32(&p.less, 1)
return b[:n]
if cap(b)-n >= n {
atomic.AddUint32(&p.half, 1)
sizeHalfPtr := &p.sizeHalf[poolNum-1]
if atomic.AddUint32(sizeHalfPtr, 1) == 20 {
atomic.StoreUint32(sizePtr, uint32(cap(b)/2))
atomic.StoreUint32(sizeHalfPtr, 0)
} else {
select {
case pool <- b:
default:
}
}
return make([]byte, n)
} else {
atomic.AddUint32(&p.less, 1)
return b[:n]
}
case cap(b) == n:
atomic.AddUint32(&p.equal, 1)
return b[:n]
default:
atomic.AddUint32(&p.greater, 1)
if uint32(cap(b)) >= atomic.LoadUint32(sizePtr) {
p.pool[poolNum].Put(b)
select {
case pool <- b:
default:
}
}
}
} else {
default:
atomic.AddUint32(&p.miss, 1)
}

@@ -107,12 +156,68 @@ func (p *BufferPool) Get(n int) []byte {

// Put adds given buffer to the pool.
func (p *BufferPool) Put(b []byte) {
p.pool[p.poolNum(cap(b))].Put(b)
if p == nil {
return
}

p.mu.RLock()
defer p.mu.RUnlock()

if p.closed {
return
}

atomic.AddUint32(&p.put, 1)

pool := p.pool[p.poolNum(cap(b))]
select {
case pool <- b:
default:
}

}

func (p *BufferPool) Close() {
if p == nil {
return
}

p.mu.Lock()
if !p.closed {
p.closed = true
p.closeC <- struct{}{}
}
p.mu.Unlock()
}

func (p *BufferPool) String() string {
return fmt.Sprintf("BufferPool{B·%d Z·%v Zm·%v L·%d E·%d G·%d M·%d}",
p.baseline0, p.size, p.sizeMiss, p.less, p.equal, p.greater, p.miss)
if p == nil {
return "<nil>"
}

return fmt.Sprintf("BufferPool{B·%d Z·%v Zm·%v Zh·%v G·%d P·%d H·%d <·%d =·%d >·%d M·%d}",
p.baseline0, p.size, p.sizeMiss, p.sizeHalf, p.get, p.put, p.half, p.less, p.equal, p.greater, p.miss)
}

func (p *BufferPool) drain() {
ticker := time.NewTicker(2 * time.Second)
for {
select {
case <-ticker.C:
for _, ch := range p.pool {
select {
case <-ch:
default:
}
}
case <-p.closeC:
close(p.closeC)
for _, ch := range p.pool {
close(ch)
}
return
}
}
}

// NewBufferPool creates a new initialized 'buffer pool'.
@@ -120,9 +225,14 @@ func NewBufferPool(baseline int) *BufferPool {
if baseline <= 0 {
panic("baseline can't be <= 0")
}
return &BufferPool{
p := &BufferPool{
baseline0: baseline,
baseline1: baseline * 2,
baseline2: baseline * 4,
baseline: [...]int{baseline / 4, baseline / 2, baseline * 2, baseline * 4},
closeC: make(chan struct{}, 1),
}
for i, cap := range []int{2, 2, 4, 4, 2, 1} {
p.pool[i] = make(chan []byte, cap)
}
go p.drain()
return p
}
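The rewritten pool buckets buffers into six bounded channels around the configured baseline, with pool 0 reserved for the common case just at or below the baseline. Roughly, for NewBufferPool(baseline), poolNum maps sizes as follows:

// n <= baseline/4               -> pool 1
// baseline/4 < n <= baseline/2  -> pool 2
// baseline/2 < n <= baseline    -> pool 0 (fast path)
// baseline   < n <= 2*baseline  -> pool 3
// 2*baseline < n <= 4*baseline  -> pool 4
// n > 4*baseline                -> pool 5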
143 Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/util/buffer_pool_legacy.go generated vendored
@@ -1,143 +0,0 @@
// Copyright (c) 2014, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

// +build !go1.3

package util

import (
"fmt"
"sync/atomic"
)

type buffer struct {
b []byte
miss int
}

// BufferPool is a 'buffer pool'.
type BufferPool struct {
pool [4]chan []byte
size [3]uint32
sizeMiss [3]uint32
baseline0 int
baseline1 int
baseline2 int

less uint32
equal uint32
greater uint32
miss uint32
}

func (p *BufferPool) poolNum(n int) int {
switch {
case n <= p.baseline0:
return 0
case n <= p.baseline1:
return 1
case n <= p.baseline2:
return 2
default:
return 3
}
}

// Get returns buffer with length of n.
func (p *BufferPool) Get(n int) []byte {
poolNum := p.poolNum(n)
pool := p.pool[poolNum]
if poolNum == 0 {
// Fast path.
select {
case b := <-pool:
switch {
case cap(b) > n:
atomic.AddUint32(&p.less, 1)
return b[:n]
case cap(b) == n:
atomic.AddUint32(&p.equal, 1)
return b[:n]
default:
panic("not reached")
}
default:
atomic.AddUint32(&p.miss, 1)
}

return make([]byte, n, p.baseline0)
} else {
sizePtr := &p.size[poolNum-1]

select {
case b := <-pool:
switch {
case cap(b) > n:
atomic.AddUint32(&p.less, 1)
return b[:n]
case cap(b) == n:
atomic.AddUint32(&p.equal, 1)
return b[:n]
default:
atomic.AddUint32(&p.greater, 1)
if uint32(cap(b)) >= atomic.LoadUint32(sizePtr) {
select {
case pool <- b:
default:
}
}
}
default:
atomic.AddUint32(&p.miss, 1)
}

if size := atomic.LoadUint32(sizePtr); uint32(n) > size {
if size == 0 {
atomic.CompareAndSwapUint32(sizePtr, 0, uint32(n))
} else {
sizeMissPtr := &p.sizeMiss[poolNum-1]
if atomic.AddUint32(sizeMissPtr, 1) == 20 {
atomic.StoreUint32(sizePtr, uint32(n))
atomic.StoreUint32(sizeMissPtr, 0)
}
}
return make([]byte, n)
} else {
return make([]byte, n, size)
}
}
}

// Put adds given buffer to the pool.
func (p *BufferPool) Put(b []byte) {
pool := p.pool[p.poolNum(cap(b))]
select {
case pool <- b:
default:
}

}

func (p *BufferPool) String() string {
return fmt.Sprintf("BufferPool{B·%d Z·%v Zm·%v L·%d E·%d G·%d M·%d}",
p.baseline0, p.size, p.sizeMiss, p.less, p.equal, p.greater, p.miss)
}

// NewBufferPool creates a new initialized 'buffer pool'.
func NewBufferPool(baseline int) *BufferPool {
if baseline <= 0 {
panic("baseline can't be <= 0")
}
p := &BufferPool{
baseline0: baseline,
baseline1: baseline * 2,
baseline2: baseline * 4,
}
for i, cap := range []int{6, 6, 3, 1} {
p.pool[i] = make(chan []byte, cap)
}
return p
}
16 Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/util/range.go generated vendored
@@ -14,3 +14,19 @@ type Range struct {
// Limit of the key range, not include in the range.
Limit []byte
}

// BytesPrefix returns key range that satisfy the given prefix.
// This only applicable for the standard 'bytes comparer'.
func BytesPrefix(prefix []byte) *Range {
var limit []byte
for i := len(prefix) - 1; i >= 0; i-- {
c := prefix[i]
if c < 0xff {
limit = make([]byte, i+1)
copy(limit, prefix)
limit[i] = c + 1
break
}
}
return &Range{prefix, limit}
}
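BytesPrefix builds the exclusive upper bound by incrementing the last byte of the prefix that is below 0xff, so the range covers exactly the keys carrying the prefix. A small usage sketch (assuming db is an open *leveldb.DB):

r := util.BytesPrefix([]byte("foo"))
// r.Start == []byte("foo"), r.Limit == []byte("fop")
iter := db.NewIterator(r, nil) // visits only keys prefixed with "foo"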
32 Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/util/util.go generated vendored
@@ -12,7 +12,8 @@ import (
)

var (
ErrNotFound = errors.New("leveldb: not found")
ErrReleased = errors.New("leveldb: resource already relesed")
ErrHasReleaser = errors.New("leveldb: releaser already defined")
)

// Releaser is the interface that wraps the basic Release method.
@@ -27,23 +28,46 @@ type ReleaseSetter interface {
// SetReleaser associates the given releaser to the resources. The
// releaser will be called once coresponding resources released.
// Calling SetReleaser with nil will clear the releaser.
//
// This will panic if a releaser already present or coresponding
// resource is already released. Releaser should be cleared first
// before assigned a new one.
SetReleaser(releaser Releaser)
}

// BasicReleaser provides basic implementation of Releaser and ReleaseSetter.
type BasicReleaser struct {
releaser Releaser
released bool
}

// Released returns whether Release method already called.
func (r *BasicReleaser) Released() bool {
return r.released
}

// Release implements Releaser.Release.
func (r *BasicReleaser) Release() {
if r.releaser != nil {
r.releaser.Release()
r.releaser = nil
if !r.released {
if r.releaser != nil {
r.releaser.Release()
r.releaser = nil
}
r.released = true
}
}

// SetReleaser implements ReleaseSetter.SetReleaser.
func (r *BasicReleaser) SetReleaser(releaser Releaser) {
if r.released {
panic(ErrReleased)
}
if r.releaser != nil && releaser != nil {
panic(ErrHasReleaser)
}
r.releaser = releaser
}

type NoopReleaser struct{}

func (NoopReleaser) Release() {}
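BasicReleaser now tracks a released flag, so Release is idempotent and SetReleaser panics (ErrReleased, ErrHasReleaser) on misuse instead of silently rebinding. A small sketch of the new contract:

var br util.BasicReleaser
br.SetReleaser(util.NoopReleaser{})
br.Release() // runs the releaser once and sets the released flag
br.Release() // no-op from here on
// a further br.SetReleaser(...) would now panic with ErrReleased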
142 Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/version.go generated vendored
@@ -7,7 +7,6 @@
package leveldb

import (
"errors"
"sync/atomic"
"unsafe"

@@ -16,19 +15,6 @@ import (
"github.com/syndtr/goleveldb/leveldb/util"
)

var levelMaxSize [kNumLevels]float64

func init() {
// Precompute max size of each level
for level := range levelMaxSize {
res := float64(10 * 1048576)
for n := level; n > 1; n-- {
res *= 10
}
levelMaxSize[level] = res
}
}

type tSet struct {
level int
table *tFile
@@ -37,7 +23,7 @@ type tSet struct {
type version struct {
s *session

tables [kNumLevels]tFiles
tables []tFiles

// Level that should be compacted next and its compaction score.
// Score < 1 means compaction is not strictly needed. These fields
@@ -47,11 +33,16 @@ type version struct {

cSeek unsafe.Pointer

ref int
ref int
// Succeeding version.
next *version
}

func (v *version) release_NB() {
func newVersion(s *session) *version {
return &version{s: s, tables: make([]tFiles, s.o.GetNumLevel())}
}

func (v *version) releaseNB() {
v.ref--
if v.ref > 0 {
return
@@ -77,13 +68,13 @@ func (v *version) release_NB() {
}
}

v.next.release_NB()
v.next.releaseNB()
v.next = nil
}

func (v *version) release() {
v.s.vmu.Lock()
v.release_NB()
v.releaseNB()
v.s.vmu.Unlock()
}

@@ -123,17 +114,18 @@ func (v *version) walkOverlapping(ikey iKey, f func(level int, t *tFile) bool, l
}
}

func (v *version) get(ikey iKey, ro *opt.ReadOptions) (value []byte, tcomp bool, err error) {
func (v *version) get(ikey iKey, ro *opt.ReadOptions, noValue bool) (value []byte, tcomp bool, err error) {
ukey := ikey.ukey()

var (
tset *tSet
tseek bool

l0found bool
l0seq uint64
l0vt vType
l0val []byte
// Level-0.
zfound bool
zseq uint64
zkt kType
zval []byte
)

err = ErrNotFound
@@ -150,55 +142,60 @@ func (v *version) get(ikey iKey, ro *opt.ReadOptions) (value []byte, tcomp bool,
}
}

ikey__, val_, err_ := v.s.tops.find(t, ikey, ro)
switch err_ {
var (
fikey, fval []byte
ferr error
)
if noValue {
fikey, ferr = v.s.tops.findKey(t, ikey, ro)
} else {
fikey, fval, ferr = v.s.tops.find(t, ikey, ro)
}
switch ferr {
case nil:
case ErrNotFound:
return true
default:
err = err_
err = ferr
return false
}

ikey_ := iKey(ikey__)
if seq, vt, ok := ikey_.parseNum(); ok {
if v.s.icmp.uCompare(ukey, ikey_.ukey()) != 0 {
return true
}

if level == 0 {
if seq >= l0seq {
l0found = true
l0seq = seq
l0vt = vt
l0val = val_
if fukey, fseq, fkt, fkerr := parseIkey(fikey); fkerr == nil {
if v.s.icmp.uCompare(ukey, fukey) == 0 {
if level == 0 {
if fseq >= zseq {
zfound = true
zseq = fseq
zkt = fkt
zval = fval
}
} else {
switch fkt {
case ktVal:
value = fval
err = nil
case ktDel:
default:
panic("leveldb: invalid iKey type")
}
return false
}
} else {
switch vt {
case tVal:
value = val_
err = nil
case tDel:
default:
panic("leveldb: invalid internal key type")
}
return false
}
} else {
err = errors.New("leveldb: internal key corrupted")
err = fkerr
return false
}

return true
}, func(level int) bool {
if l0found {
switch l0vt {
case tVal:
value = l0val
if zfound {
switch zkt {
case ktVal:
value = zval
err = nil
case tDel:
case ktDel:
default:
panic("leveldb: invalid internal key type")
panic("leveldb: invalid iKey type")
}
return false
}
@@ -216,13 +213,13 @@ func (v *version) getIterators(slice *util.Range, ro *opt.ReadOptions) (its []it
its = append(its, it)
}

strict := v.s.o.GetStrict(opt.StrictIterator) || ro.GetStrict(opt.StrictIterator)
strict := opt.GetStrict(v.s.o.Options, ro, opt.StrictReader)
for _, tables := range v.tables[1:] {
if len(tables) == 0 {
continue
}

it := iterator.NewIndexedIterator(tables.newIndexIterator(v.s.tops, v.s.icmp, slice, ro), strict, true)
it := iterator.NewIndexedIterator(tables.newIndexIterator(v.s.tops, v.s.icmp, slice, ro), strict)
its = append(its, it)
}

@@ -230,7 +227,7 @@ func (v *version) getIterators(slice *util.Range, ro *opt.ReadOptions) (its []it
}

func (v *version) newStaging() *versionStaging {
return &versionStaging{base: v}
return &versionStaging{base: v, tables: make([]tablesScratch, v.s.o.GetNumLevel())}
}

// Spawn a new version based on this version.
@@ -285,12 +282,13 @@ func (v *version) offsetOf(ikey iKey) (n uint64, err error) {
func (v *version) pickLevel(umin, umax []byte) (level int) {
if !v.tables[0].overlaps(v.s.icmp, umin, umax, true) {
var overlaps tFiles
for ; level < kMaxMemCompactLevel; level++ {
maxLevel := v.s.o.GetMaxMemCompationLevel()
for ; level < maxLevel; level++ {
if v.tables[level+1].overlaps(v.s.icmp, umin, umax, false) {
break
}
overlaps = v.tables[level+2].getOverlaps(overlaps, v.s.icmp, umin, umax, false)
if overlaps.size() > kMaxGrandParentOverlapBytes {
if overlaps.size() > uint64(v.s.o.GetCompactionGPOverlaps(level)) {
break
}
}
@@ -318,9 +316,9 @@ func (v *version) computeCompaction() {
// file size is small (perhaps because of a small write-buffer
// setting, or very high compression ratios, or lots of
// overwrites/deletions).
score = float64(len(tables)) / kL0_CompactionTrigger
score = float64(len(tables)) / float64(v.s.o.GetCompactionL0Trigger())
} else {
score = float64(tables.size()) / levelMaxSize[level]
score = float64(tables.size()) / float64(v.s.o.GetCompactionTotalSize(level))
}

if score > bestScore {
@@ -337,12 +335,14 @@ func (v *version) needCompaction() bool {
return v.cScore >= 1 || atomic.LoadPointer(&v.cSeek) != nil
}

type tablesScratch struct {
added map[uint64]atRecord
deleted map[uint64]struct{}
}

type versionStaging struct {
base *version
tables [kNumLevels]struct {
added map[uint64]ntRecord
deleted map[uint64]struct{}
}
tables []tablesScratch
}

func (p *versionStaging) commit(r *sessionRecord) {
@@ -367,7 +367,7 @@ func (p *versionStaging) commit(r *sessionRecord) {
tm := &(p.tables[r.level])

if tm.added == nil {
tm.added = make(map[uint64]ntRecord)
tm.added = make(map[uint64]atRecord)
}
tm.added[r.num] = r

@@ -379,7 +379,7 @@ func (p *versionStaging) commit(r *sessionRecord) {

func (p *versionStaging) finish() *version {
// Build new version.
nv := &version{s: p.base.s}
nv := newVersion(p.base.s)
for level, tm := range p.tables {
btables := p.base.tables[level]

@@ -402,7 +402,7 @@ func (p *versionStaging) finish() *version {

// New tables.
for _, r := range tm.added {
nt = append(nt, r.makeFile(p.base.s))
nt = append(nt, p.base.s.tableFileFromRecord(r))
}

// Sort tables.
@@ -429,7 +429,7 @@ func (vr *versionReleaser) Release() {
v := vr.v
v.s.vmu.Lock()
if !vr.once {
v.release_NB()
v.releaseNB()
vr.once = true
}
v.s.vmu.Unlock()
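With the level constants gone, compaction scores come straight from Options. As a worked example (the specific defaults are an assumption here, not read from this diff): with an L0 trigger of 4, six level-0 tables score 6/4 = 1.5; with a level-1 total size of 10 MiB, 25 MiB of level-1 data scores 2.5 and would win as the best compaction level.

// Scoring rule after this change, per level:
// level == 0: score = len(tables) / GetCompactionL0Trigger()
// level  > 0: score = tables.size() / GetCompactionTotalSize(level)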
687
LICENSE
687
LICENSE
@@ -1,19 +1,674 @@
|
||||
Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
|
||||
GNU GENERAL PUBLIC LICENSE
|
||||
Version 3, 29 June 2007
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
of this software and associated documentation files (the "Software"), to
|
||||
deal in the Software without restriction, including without limitation the
|
||||
rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
|
||||
sell copies of the Software, and to permit persons to whom the Software is
|
||||
furnished to do so, subject to the following conditions:
|
||||
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
|
||||
Everyone is permitted to copy and distribute verbatim copies
|
||||
of this license document, but changing it is not allowed.
|
||||
|
||||
- The above copyright notice and this permission notice shall be included in
|
||||
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
IN THE SOFTWARE.

Preamble

The GNU General Public License is a free, copyleft license for
software and other kinds of works.

The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.

When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.

To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.

For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.

Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.

For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.

Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.

Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.

The precise terms and conditions for copying, distribution and
modification follow.

TERMS AND CONDITIONS

0. Definitions.

"This License" refers to version 3 of the GNU General Public License.

"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.

"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.

To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.

A "covered work" means either the unmodified Program or a work based
on the Program.

To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.

To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.

An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.

1. Source Code.

The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.

A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.

The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.

The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.

The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.

The Corresponding Source for a work in source code form is that
same work.

2. Basic Permissions.

All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.

You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.

Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.

3. Protecting Users' Legal Rights From Anti-Circumvention Law.

No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.

When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.

4. Conveying Verbatim Copies.

You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.

You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.

5. Conveying Modified Source Versions.

You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:

    a) The work must carry prominent notices stating that you modified
    it, and giving a relevant date.

    b) The work must carry prominent notices stating that it is
    released under this License and any conditions added under section
    7. This requirement modifies the requirement in section 4 to
    "keep intact all notices".

    c) You must license the entire work, as a whole, under this
    License to anyone who comes into possession of a copy. This
    License will therefore apply, along with any applicable section 7
    additional terms, to the whole of the work, and all its parts,
    regardless of how they are packaged. This License gives no
    permission to license the work in any other way, but it does not
    invalidate such permission if you have separately received it.

    d) If the work has interactive user interfaces, each must display
    Appropriate Legal Notices; however, if the Program has interactive
    interfaces that do not display Appropriate Legal Notices, your
    work need not make them do so.

A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.

6. Conveying Non-Source Forms.

You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:

    a) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by the
    Corresponding Source fixed on a durable physical medium
    customarily used for software interchange.

    b) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by a
    written offer, valid for at least three years and valid for as
    long as you offer spare parts or customer support for that product
    model, to give anyone who possesses the object code either (1) a
    copy of the Corresponding Source for all the software in the
    product that is covered by this License, on a durable physical
    medium customarily used for software interchange, for a price no
    more than your reasonable cost of physically performing this
    conveying of source, or (2) access to copy the
    Corresponding Source from a network server at no charge.

    c) Convey individual copies of the object code with a copy of the
    written offer to provide the Corresponding Source. This
    alternative is allowed only occasionally and noncommercially, and
    only if you received the object code with such an offer, in accord
    with subsection 6b.

    d) Convey the object code by offering access from a designated
    place (gratis or for a charge), and offer equivalent access to the
    Corresponding Source in the same way through the same place at no
    further charge. You need not require recipients to copy the
    Corresponding Source along with the object code. If the place to
    copy the object code is a network server, the Corresponding Source
    may be on a different server (operated by you or a third party)
    that supports equivalent copying facilities, provided you maintain
    clear directions next to the object code saying where to find the
    Corresponding Source. Regardless of what server hosts the
    Corresponding Source, you remain obligated to ensure that it is
    available for as long as needed to satisfy these requirements.

    e) Convey the object code using peer-to-peer transmission, provided
    you inform other peers where the object code and Corresponding
    Source of the work are being offered to the general public at no
    charge under subsection 6d.

A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.

A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.

"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.

If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).

The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.

Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.

7. Additional Terms.

"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.

When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.

Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:

    a) Disclaiming warranty or limiting liability differently from the
    terms of sections 15 and 16 of this License; or

    b) Requiring preservation of specified reasonable legal notices or
    author attributions in that material or in the Appropriate Legal
    Notices displayed by works containing it; or

    c) Prohibiting misrepresentation of the origin of that material, or
    requiring that modified versions of such material be marked in
    reasonable ways as different from the original version; or

    d) Limiting the use for publicity purposes of names of licensors or
    authors of the material; or

    e) Declining to grant rights under trademark law for use of some
    trade names, trademarks, or service marks; or

    f) Requiring indemnification of licensors and authors of that
    material by anyone who conveys the material (or modified versions of
    it) with contractual assumptions of liability to the recipient, for
    any liability that these contractual assumptions directly impose on
    those licensors and authors.

All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.

If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.

Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.

8. Termination.

You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).

However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.

Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.

Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.

9. Acceptance Not Required for Having Copies.

You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.

10. Automatic Licensing of Downstream Recipients.

Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.

An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.

You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.

11. Patents.

A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".

A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.

Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.

In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.

If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.

If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.

A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.

Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.

12. No Surrender of Others' Freedom.

If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.

13. Use with the GNU Affero General Public License.

Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.

14. Revised Versions of this License.

The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.

Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.

If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.

Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.

15. Disclaimer of Warranty.

THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

16. Limitation of Liability.

IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.

17. Interpretation of Sections 15 and 16.

If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.

END OF TERMS AND CONDITIONS

How to Apply These Terms to Your New Programs

If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.

To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.

    <one line to give the program's name and a brief idea of what it does.>
    Copyright (C) <year> <name of author>

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 3 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program. If not, see <http://www.gnu.org/licenses/>.

Also add information on how to contact you by electronic and paper mail.
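
For instance (illustrative only, not part of the license text), in a Go
source file the per-file notice above can be carried as a leading comment;
the program name, year, and author here are placeholders:

    // frobnicate: a hypothetical tool that frobnicates widgets.
    // Copyright (C) 2014 Jane Doe
    //
    // This program is free software: you can redistribute it and/or modify
    // it under the terms of the GNU General Public License as published by
    // the Free Software Foundation, either version 3 of the License, or
    // (at your option) any later version.
    //
    // This program is distributed in the hope that it will be useful,
    // but WITHOUT ANY WARRANTY; without even the implied warranty of
    // MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    // GNU General Public License for more details.
    //
    // You should have received a copy of the GNU General Public License
    // along with this program. If not, see <http://www.gnu.org/licenses/>.

    package main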

If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:

    <program> Copyright (C) <year> <name of author>
    This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
    This is free software, and you are welcome to redistribute it
    under certain conditions; type `show c' for details.

The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
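
A minimal sketch of such an interactive notice in Go (the program name,
author, and the shortened license excerpts are placeholders; a real program
would embed the relevant parts of sections 15 and 16 and of the full terms):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // Placeholder excerpts; a real program would embed the appropriate
    // parts of the License (sections 15 and 16 for `show w', and the
    // redistribution terms for `show c').
    const (
        warranty   = "THERE IS NO WARRANTY FOR THE PROGRAM [...] (see sections 15 and 16)"
        conditions = "You may redistribute this program under the GNU GPL v3 [...] (see the full terms)"
    )

    func main() {
        // The short startup notice recommended above.
        fmt.Println("frobnicate Copyright (C) 2014 Jane Doe")
        fmt.Println("This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.")
        fmt.Println("This is free software, and you are welcome to redistribute it")
        fmt.Println("under certain conditions; type `show c' for details.")

        // Respond to the hypothetical `show w' and `show c' commands.
        in := bufio.NewScanner(os.Stdin)
        for in.Scan() {
            switch strings.TrimSpace(in.Text()) {
            case "show w":
                fmt.Println(warranty)
            case "show c":
                fmt.Println(conditions)
            }
        }
    }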

You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<http://www.gnu.org/licenses/>.

The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<http://www.gnu.org/philosophy/why-not-lgpl.html>.

README.md

@@ -1,18 +1,16 @@
syncthing
=========

[](http://build.syncthing.net/job/syncthing/lastSuccessfulBuild/artifact/)
[](https://travis-ci.org/syncthing/syncthing)
[](https://coveralls.io/r/syncthing/syncthing?branch=master)
[](http://build.syncthing.net/job/syncthing/lastBuild/)
[](http://godoc.org/github.com/syncthing/syncthing)
[](http://opensource.org/licenses/MIT)
[](http://opensource.org/licenses/GPL-3.0)

This is the `syncthing` project. The following are the project goals:

1. Define a protocol for synchronization of a file repository between a
number of collaborating nodes. The protocol should be well defined,
unambiguous, easily understood, free to use, efficient, secure and
language neutral. This is the [Block Exchange
1. Define a protocol for synchronization of a folder between a number of
collaborating devices. The protocol should be well defined, unambiguous,
easily understood, free to use, efficient, secure and language neutral.
This is the [Block Exchange
Protocol](https://github.com/syncthing/syncthing/blob/master/protocol/PROTOCOL.md).

2. Provide the reference implementation to demonstrate the usability of

@@ -27,7 +25,20 @@ for incompatible changes.

Getting Started
---------------

Take a look at the [getting started guide](http://discourse.syncthing.net/t/getting-started/46).
Take a look at the [getting started guide](http://discourse.syncthing.net/t/46).

There are a few examples for keeping syncthing running in the background
on your system in [the etc directory](https://github.com/syncthing/syncthing/blob/master/etc).

There is an IRC channel, `#syncthing` on Freenode, for talking directly
to developers and users (when awake and present, etc.).

Building
--------

Building Syncthing from source is easy, and there's a
[guide](http://discourse.syncthing.net/t/44)
that describes it for both Unix and Windows.

Signed Releases
---------------

@@ -51,5 +62,6 @@ All documentation and protocol specifications are licensed
under the [Creative Commons Attribution 4.0 International
License](http://creativecommons.org/licenses/by/4.0/).

All code is licensed under the [MIT
License](https://github.com/syncthing/syncthing/blob/master/LICENSE).
All code is licensed under the
[GPL](https://github.com/syncthing/syncthing/blob/master/LICENSE), v3 or
later.

@@ -1,2 +0,0 @@
// Package auto contains auto generated files for web assets.
package auto