Compare commits

220 Commits

Author SHA1 Message Date
Jakob Borg
ab1962934d Docs & translations update 2015-12-01 11:24:30 +01:00
Jakob Borg
192455702d Repair AUTHORS 2015-12-01 11:23:35 +01:00
Jakob Borg
cb0d739daf New key for discovery-*-3 2015-12-01 11:20:16 +01:00
Jakob Borg
a937fcc477 Lol shadowing :( 2015-12-01 11:20:16 +01:00
Audrius Butkevicius
9503e60444 Merge pull request #2544 from calmh/negcache2
Accept Retry-After header on discovery lookup failures
2015-12-01 10:14:32 +00:00
Jakob Borg
9f2dc4554d Accept Retry-After header on discovery lookup failures 2015-12-01 11:10:03 +01:00
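Honoring Retry-After amounts to a header read on the failure path. A minimal sketch, assuming a plain net/http response and a default negative-cache duration; the names are illustrative, not the actual discovery client code:

    package discover

    import (
        "net/http"
        "strconv"
        "time"
    )

    // negCacheFor returns how long to negatively cache a failed lookup,
    // honoring a Retry-After header given in seconds (the HTTP-date form
    // is ignored in this sketch) and falling back to a default otherwise.
    func negCacheFor(resp *http.Response, def time.Duration) time.Duration {
        if secs, err := strconv.Atoi(resp.Header.Get("Retry-After")); err == nil && secs > 0 {
            return time.Duration(secs) * time.Second
        }
        return def
    }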
Jakob Borg
3008dc76d3 Add buinsky 2015-11-30 18:30:36 +01:00
Audrius Butkevicius
596c4b77e8 Merge pull request #2539 from buinsky/master
Fix deleting folders on WinXP (fixes #2522)
2015-11-30 16:21:46 +00:00
buinsky
05a31c2686 Fix deleting folders on WinXP (fixes #2522) 2015-11-30 19:09:36 +03:00
Audrius Butkevicius
45535c0f5a Merge pull request #2538 from canton7/feature/issue-2537
Allow #urPreview to scroll in the browser (fixes #2537)
2015-11-30 13:36:28 +00:00
Antony Male
4bd0dd2123 Allow #urPreview to scroll in the browser (fixes #2537)
This is the same issue as #2014/#2062. Bootstrap doesn't like having two dialogs
open at once: it marks the body as having no dialogs open when the first dialog
is closed, regardless of whether the second dialog is still open.

This means that scrolling doesn't happen properly, and the user cannot
scroll to the dialog's 'close' button.

Work around this by making sure the first dialog (the settings page) is fully closed
before the second dialog (usage preview) is opened.
2015-11-30 13:27:07 +00:00
Jakob Borg
9b2a643626 Update docs and translations 2015-11-29 08:55:38 +01:00
Jakob Borg
1ebc9a9a88 Merge pull request #2525 from kluppy/master
Don't chmod in Atomic on android (fixes #2472)
2015-11-28 22:57:00 +01:00
Jakob Borg
f0c8b7ce40 Add kluppy 2015-11-28 22:17:05 +01:00
Jakob Borg
ec54550f21 Merge pull request #2529 from AudriusButkevicius/fixlinks
Fix symlinks (fixes #2524)
2015-11-28 21:29:33 +01:00
Audrius Butkevicius
4474d200b0 Fix symlinks (fixes #2524) 2015-11-28 20:23:08 +00:00
kluppy
f062e35641 Don't chmod in Atomic on android (fixes #2472) 2015-11-28 02:46:06 +10:00
Jakob Borg
321ef9816c Merge pull request #2521 from AudriusButkevicius/dialtimeout
Add dialer.DialTimeout, use that when connecting to relays
2015-11-27 10:01:21 +01:00
Audrius Butkevicius
be01e925c7 Merge pull request #2523 from calmh/ecdsa
Generate ECDSA keys instead of RSA
2015-11-27 08:28:51 +00:00
Jakob Borg
6d11006b54 Generate ECDSA keys instead of RSA
This replaces the current 3072 bit RSA certificates with 384 bit ECDSA
certificates. The advantage is these certificates are smaller and
essentially instantaneous to generate. According to RFC4492 (ECC Cipher
Suites for TLS), Table 1: Comparable Key Sizes, ECC has comparable
strength to 3072 bit RSA at 283 bits - so we exceed that.

There is no compatibility issue with existing Syncthing code - this is
verified by the integration test ("h2" instance has the new
certificate).

There are browsers out there that don't understand ECC certificates yet,
although I think they're dying out. In the meantime, I've retained the
RSA code for the HTTPS certificate, but pulled it down to 2048 bits. I
don't think a higher security level there is motivated, as this matches
current industry standard for HTTPS certificates.
2015-11-27 09:15:12 +01:00
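A minimal sketch of generating a P-384 ECDSA key and a self-signed certificate with the Go standard library; the subject and validity period here are illustrative, not the actual Syncthing code:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "time"
    )

    func main() {
        // A 384-bit ECDSA key: comparable strength to >3072-bit RSA,
        // and essentially instantaneous to generate.
        key, err := ecdsa.GenerateKey(elliptic.P384(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "syncthing"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(20, 0, 0),
        }
        // Self-signed: the template acts as both subject and issuer.
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        fmt.Printf("generated %d-byte DER certificate\n", len(der))
    }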
Audrius Butkevicius
ed792b97c0 Take timeout into account when dialing 2015-11-26 23:41:11 +00:00
Audrius Butkevicius
a4b8c2298a Add dialer.DialTimeout support 2015-11-26 23:31:37 +00:00
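Since proxy dialers offer no deadline of their own, a timeout has to be imposed from outside. A hedged sketch of the shape such a DialTimeout can take, assuming a package-level golang.org/x/net/proxy dialer; the actual lib/dialer code may differ:

    package dialer

    import (
        "fmt"
        "net"
        "time"

        "golang.org/x/net/proxy"
    )

    var proxyDialer proxy.Dialer // nil when no proxy is configured (assumed)

    func DialTimeout(network, addr string, timeout time.Duration) (net.Conn, error) {
        if proxyDialer == nil {
            return net.DialTimeout(network, addr, timeout) // no proxy: stdlib handles it
        }
        type result struct {
            conn net.Conn
            err  error
        }
        res := make(chan result, 1)
        go func() {
            conn, err := proxyDialer.Dial(network, addr) // proxy.Dialer has no timeout
            res <- result{conn, err}
        }()
        select {
        case r := <-res:
            return r.conn, r.err
        case <-time.After(timeout):
            go func() { // close a late-arriving connection rather than leak it
                if r := <-res; r.conn != nil {
                    r.conn.Close()
                }
            }()
            return nil, fmt.Errorf("dial %s: timeout after %v", addr, timeout)
        }
    }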
Jakob Borg
e5b33ce9f6 Regenerate XDR for empty struct types 2015-11-24 20:54:49 +01:00
Jakob Borg
30374d46c9 Merge pull request #2510 from plouj/master
Log problems fetching releases
2015-11-24 20:39:29 +01:00
Michael Ploujnikov
9edf8233f7 Improve upgrade error messages 2015-11-24 08:29:42 -05:00
Michael Ploujnikov
bd4a14519c FetchLatestReleases: just log the error here
Since the return value is being ignored by the caller.
2015-11-24 08:29:42 -05:00
Michael Ploujnikov
f12bf8c09a Rename LatestGithubReleases -> FetchLatestReleases 2015-11-24 08:29:42 -05:00
Audrius Butkevicius
53cc45a0d5 Merge pull request #2512 from calmh/compact
Compact database on startup (ref #2400)
2015-11-24 12:43:38 +00:00
Jakob Borg
fa4b4dece1 Compact database on startup (ref #2400) 2015-11-24 13:17:30 +01:00
Jakob Borg
02f044a2a1 Add plouj 2015-11-24 08:35:25 +01:00
Audrius Butkevicius
431d51f5c4 Add timeouts to relay methods 2015-11-23 21:14:46 +00:00
Jakob Borg
45c1357bab Update osext dependency (fixes #1272) 2015-11-23 13:09:42 +01:00
Jakob Borg
5136675fae Docs & translation update 2015-11-22 16:05:20 +01:00
Jakob Borg
db4f23f377 Refactor: extract function generate 2015-11-22 07:35:24 +01:00
Audrius Butkevicius
6ff02761a9 Merge pull request #2500 from calmh/sparse
Handle sparse files (fixes #245)
2015-11-21 17:21:11 +00:00
Jakob Borg
d46f267663 Handle sparse files (fixes #245) 2015-11-21 17:58:09 +01:00
Jakob Borg
6a98534d5d Merge pull request #2498 from AudriusButkevicius/relaystuff
Change the way relays are chosen, add RelayFull message.
2015-11-21 16:51:03 +01:00
Audrius Butkevicius
eeb5d99942 Sort relays in 50ms latency increments, shuffle relays within the same increment 2015-11-21 13:23:49 +00:00
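The bucketing itself is compact: shuffle first, then stable-sort by latency divided into 50ms buckets, so relays within one bucket keep their random relative order. A sketch under assumed names, not the actual relay client code:

    package relays

    import (
        "math/rand"
        "sort"
        "time"
    )

    type relay struct {
        uri     string
        latency time.Duration
    }

    func sortRelays(relays []relay) {
        // Fisher-Yates shuffle for random order overall.
        for i := len(relays) - 1; i > 0; i-- {
            j := rand.Intn(i + 1)
            relays[i], relays[j] = relays[j], relays[i]
        }
        // A stable sort by 50ms bucket preserves the shuffle within each bucket.
        bucket := func(i int) time.Duration { return relays[i].latency / (50 * time.Millisecond) }
        sort.SliceStable(relays, func(a, b int) bool { return bucket(a) < bucket(b) })
    }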
Audrius Butkevicius
dcc5f333c1 Merge pull request #2499 from calmh/metrics
Add metrics for HTTP calls
2015-11-21 13:08:26 +00:00
Jakob Borg
ff8a66d22f Add metrics for HTTP calls 2015-11-21 09:48:57 +01:00
Jakob Borg
f7ad97918a HTTP debug middleware should be behind ShouldDebug() 2015-11-21 09:39:40 +01:00
Audrius Butkevicius
9047d56aa0 Add RelayFull message 2015-11-20 23:42:49 +00:00
Jakob Borg
945ddc2403 Moved to syncthing-nsi repo 2015-11-20 19:56:01 +01:00
Audrius Butkevicius
92158b0611 Merge pull request #2494 from calmh/refactorwalk
Refactor the walk stuff a bit
2015-11-20 11:15:01 +00:00
Jakob Borg
bed6155c79 Refactor: multiple-if to switch 2015-11-20 11:24:50 +01:00
Jakob Borg
f2459e61dd Refactor: break out walkRegular method 2015-11-20 10:32:16 +01:00
Jakob Borg
69dcdafb3d Refactor: break out walkDir method 2015-11-20 09:54:12 +01:00
Jakob Borg
6dbb072d11 Refactor: break out walkSymlink method 2015-11-20 09:50:46 +01:00
Jakob Borg
dc96849718 Refactor: break out normalizePath method 2015-11-20 09:41:44 +01:00
Jakob Borg
d7a3cc505c Refactor: rename p->absPath, rn->relPath 2015-11-20 09:38:45 +01:00
Audrius Butkevicius
c6936ced6c Merge pull request #2486 from uok/patch-1
Add brackets to scanning percentage
2015-11-18 14:19:28 +00:00
Ben S
91048f655c Add brackets to scanning percentage 2015-11-18 15:15:35 +01:00
Audrius Butkevicius
66bf524394 Merge pull request #2485 from calmh/scantime
Add remaining scanning time (fixes #2484)
2015-11-18 11:17:36 +00:00
Jakob Borg
ba9448bdd7 Add remaining scanning time (fixes #2484) 2015-11-18 12:09:10 +01:00
Audrius Butkevicius
c5cb5cba18 Merge pull request #2481 from calmh/scanrate
Add scan rate in web GUI
2015-11-17 20:25:37 +00:00
Jakob Borg
aa853ac833 Fix tests 2015-11-17 21:23:17 +01:00
Jakob Borg
8b759d0e1e Update dependencies 2015-11-17 21:23:17 +01:00
Jakob Borg
a8a2192cf9 Show scan rate in web GUI 2015-11-17 21:23:17 +01:00
Jakob Borg
37f866b47f Use pause/resume device to ensure TestConflictsDefault can run 2015-11-17 13:32:57 +01:00
Jakob Borg
9cf653d673 Rename Model.clusterConfig to Model.generateClusterConfig 2015-11-17 12:10:08 +01:00
Jakob Borg
0167b4b4a3 Don't cause rare spurious event timeout
Correctly resetting timers is surprisingly tricky.
2015-11-17 12:05:22 +01:00
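The trickiness is the classic time.Timer pitfall: calling Reset on a timer that already fired leaves a stale value in its channel, which a later receive reads as a spurious timeout. The safe pattern, sketched:

    package events

    import "time"

    // resetTimer restarts t for d without leaving a stale expiry in t.C.
    // If Stop reports the timer already fired, drain the channel first;
    // the non-blocking receive covers the case where the value was
    // already consumed elsewhere.
    func resetTimer(t *time.Timer, d time.Duration) {
        if !t.Stop() {
            select {
            case <-t.C:
            default:
            }
        }
        t.Reset(d)
    }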
Audrius Butkevicius
dcb773e446 Merge pull request #2478 from calmh/opengui
Warn the user if they're running with an insecure looking setup (fixes #2139)
2015-11-16 22:05:55 +00:00
Jakob Borg
b1a86fbc98 Warn the user if they're running with an insecure looking setup (fixes #2139) 2015-11-16 21:58:08 +01:00
Jakob Borg
cdd0cf7f18 Merge pull request #2477 from MoOx/patch-1
Update CI badge to specify environment (unix/windows)
2015-11-16 20:49:27 +01:00
Jakob Borg
9ae419201d White list commit e37cef 2015-11-16 20:48:40 +01:00
Maxime Thirouin
e37cefdbee Update CI badge to specify environment (unix/windows) 2015-11-16 20:31:16 +01:00
Jakob Borg
ad8c266f76 Update docs & translation 2015-11-15 18:15:10 +01:00
Jakob Borg
807c3bdcc7 Merge pull request #2473 from syncthing/revert-2470-nostoppedrescan
Revert "Don't show "Rescan" button for stopped folders"
2015-11-14 21:10:52 +01:00
Audrius Butkevicius
62efbd17de Revert "Don't show "Rescan" button for stopped folders" 2015-11-14 14:01:22 -05:00
Audrius Butkevicius
d99350fd61 Merge pull request #2470 from calmh/nostoppedrescan
Don't show "Rescan" button for stopped folders
2015-11-14 09:09:28 -05:00
Jakob Borg
40c70a9a2a Don't show "Rescan" button for stopped folders 2015-11-14 12:14:09 +01:00
Audrius Butkevicius
8fb7f40a6a Merge pull request #2468 from calmh/removefolder
Remove folder without restart (fixes #2262)
2015-11-13 09:51:58 -05:00
Jakob Borg
2f12d41d9d Don't dirty blockmap key between lookups (fixes #2455) 2015-11-13 15:44:30 +01:00
Jakob Borg
9fbdb6b305 Cancel a running scan 2015-11-13 15:30:21 +01:00
Jakob Borg
73285cadb6 Remove folder without restart (fixes #2262) 2015-11-13 13:32:52 +01:00
Jakob Borg
56f1c295b6 Merge pull request #2462 from AudriusButkevicius/mtimes
Use virtualMtime when deciding if a file is up to date
2015-11-12 08:48:45 +01:00
Jakob Borg
7f76ed8413 Fix build 2015-11-12 08:46:13 +01:00
AudriusButkevicius
f7edd36931 Use virtualMtime when deciding if a file is up to date 2015-11-12 03:40:06 +00:00
Audrius Butkevicius
c39f2b7c05 Merge pull request #2463 from boone/fix_typos
Fix typos.
2015-11-11 21:29:34 -05:00
Mike Boone
342036408e Fix typos. 2015-11-11 21:20:34 -05:00
Jakob Borg
f4904fce17 Merge pull request #2449 from Stefan-Code/upgrade-system
made upgrade-system smarter (fixes #2446)
2015-11-10 21:15:02 +01:00
Stefan Kuntz
2abb2de753 Made upgrade-system smarter (fixes #2446) 2015-11-10 17:41:50 +01:00
Jakob Borg
ef0a0db07e More local discovery URL debugging (ref #2444) 2015-11-10 10:16:55 +01:00
Jakob Borg
a45795efec Add tylerbrazier 2015-11-10 08:20:16 +01:00
Jakob Borg
88ae353aef Merge pull request #2443 from tylerbrazier/master
Audit logins with new Login event (fixes #2377)
2015-11-10 08:19:03 +01:00
Tyler Brazier
97b9690711 Audit logins with new LoginAttempt event (fixes #2377) 2015-11-10 00:49:51 -05:00
Audrius Butkevicius
48ce356d5c Merge pull request #2448 from calmh/connhandling
Refactor out methods resolveAddresses, connectDirect, connectViaRelay
2015-11-09 22:01:06 -05:00
Audrius Butkevicius
92451f94bc Merge pull request #2450 from syncthing/Zillode-patch-1
Correct commentary for ConnectionStats
2015-11-09 18:32:19 -05:00
Zillode
593632045d Correct commentary for ConnectionStats
Related to https://github.com/syncthing/syncthing-android/issues/473 and 944d9c84a0
ConnectionStats returns connection information for all devices, even when disconnected.
2015-11-09 23:58:06 +01:00
Jakob Borg
e3c55ef307 Fix address list in DeviceDiscovered, add debug prints (ref #2444) 2015-11-09 21:36:17 +01:00
Jakob Borg
edf1730fd2 Merge pull request #2447 from alex2108/master
Add default-v4 and default-v6 as options for discovery
2015-11-09 16:02:37 +01:00
Alexander Graf
34cd8e3f95 Add default-v4 and default-v6 as options for discovery 2015-11-09 15:56:46 +01:00
Jakob Borg
242cce022a Refactor out methods resolveAddresses, connectDirect, connectViaRelay 2015-11-09 15:35:32 +01:00
Jakob Borg
19bf51cefb Revert "Merge pull request #2440 from Stefan-Code/master"
This reverts commit 81bc6bf34b, reversing
changes made to 7de736e8d0.

Unfortunately this tricks the upgrade system into picking the wrong
binary. We need to fix the upgrade system before merging this.
2015-11-09 14:23:26 +01:00
Jakob Borg
74a2e80142 Update docs & translations 2015-11-09 14:00:10 +01:00
Jakob Borg
b9b630e3b6 Change certificate on discovery-2 2015-11-09 13:58:44 +01:00
Jakob Borg
f0bdf833d1 Oh come *on* 2015-11-09 12:39:09 +01:00
Jakob Borg
81bc6bf34b Merge pull request #2440 from Stefan-Code/master
Added ufw firewall application preset
2015-11-09 12:19:42 +01:00
Jakob Borg
7de736e8d0 Ok then, lower case 2015-11-09 12:17:23 +01:00
Jakob Borg
cd097e0fce Add Stefan-Code 2015-11-09 12:14:32 +01:00
Stefan-Code
cc81a7ccfe added ufw firewall application preset (fixes #2435) 2015-11-09 11:58:59 +01:00
Jakob Borg
58d320c270 String slice formatting 2015-11-08 18:06:06 +01:00
Jakob Borg
59565fd1d1 Woops 2015-11-07 11:25:00 +01:00
Jakob Borg
55592137a2 Use constructor functions for FolderConfiguration and DeviceConfiguration 2015-11-07 09:50:04 +01:00
Jakob Borg
58523060f0 Actually do negative caching on failed discovery lookups (fixes #2434) 2015-11-06 17:14:20 +01:00
Audrius Butkevicius
07d53be9fc Merge pull request #2432 from calmh/cachepath
Cache the folderconfig Path() call
2015-11-06 08:37:39 +00:00
Jakob Borg
d4b0235a8b Correctly report the default relay server in usage stats 2015-11-06 07:16:15 +00:00
Jakob Borg
34aa41e17b Cache the Path() call, as it's quite expensive and called a lot 2015-11-06 07:11:22 +00:00
Jakob Borg
36f6a9347c Benchmark must use *db.Instance 2015-11-05 17:46:53 +00:00
Jakob Borg
d49d386ef2 Docs and translation update 2015-11-05 15:47:06 +00:00
Jakob Borg
00c363829c Refactor: move folder prepare to its own function 2015-11-05 08:01:47 +00:00
Audrius Butkevicius
a9691dbdf4 Merge pull request #2430 from calmh/jsondecode
Run JSON decoding through the usual setting of defaults and fixing up
2015-11-04 20:44:56 +00:00
Jakob Borg
9df701906f Run JSON decoding through the usual setting of defaults and fixing up
I see no reason not to do this, and it gives a unified place (the prepare()
call) to initialize cached attributes and so on.
2015-11-04 20:33:10 +00:00
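In outline, the change funnels every decoding format through the same fixup step; a sketch with hypothetical type and method names:

    package config

    import (
        "encoding/json"
        "io"
    )

    type Configuration struct {
        // fields elided; prepare() fills in defaults and cached attributes
    }

    func (c *Configuration) prepare() { /* set defaults, fix up values */ }

    func loadJSON(r io.Reader) (Configuration, error) {
        var cfg Configuration
        if err := json.NewDecoder(r).Decode(&cfg); err != nil {
            return Configuration{}, err
        }
        cfg.prepare() // same defaults/fixup path the XML loader already uses
        return cfg, nil
    }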
Jakob Borg
283671fa9d Remove old dead code 2015-11-04 20:15:36 +00:00
Jakob Borg
435c29755d We haven't had cleartext passwords in the config for ages 2015-11-04 20:15:11 +00:00
Jakob Borg
686f91777c Don't force rescan dirs and symlinks
We can't look for changed modtime on these as we don't track the modtime
to start with.
2015-11-04 19:53:07 +00:00
Audrius Butkevicius
2aa028facb Add user-agent header, capitalize headers as others seems to do it (fixes #2422) 2015-10-31 15:36:08 +00:00
Audrius Butkevicius
b4bbd050c2 Merge pull request #2424 from calmh/dbinstance
We should pass around db.Instance instead of leveldb.DB
2015-10-31 12:51:23 +00:00
Jakob Borg
2a4fc28318 We should pass around db.Instance instead of leveldb.DB
We're going to need the db.Instance to keep some state, and for that to
work we need the same one passed around everywhere. Hence this moves the
leveldb-specific file opening stuff into the db package and exports the
dbInstance type.
2015-10-31 12:35:30 +01:00
Jakob Borg
313485e406 Remove file that snuck in by mistake 2015-10-31 11:38:59 +01:00
Jakob Borg
faf4267c73 Refactor: the various db key functions should be instance methods 2015-10-31 11:27:04 +01:00
Jakob Borg
e6277d799f Undo incorrect revert of folder ID in test config 2015-10-31 11:27:04 +01:00
Jakob Borg
cdbc8004fb Comment pedantry 2015-10-31 11:16:07 +01:00
Audrius Butkevicius
1fac2f686d Merge pull request #2423 from calmh/urls
Creating a correct URL is more difficult than just slapping on a scheme (fixes #2316)
2015-10-30 20:50:32 +00:00
Jakob Borg
08c8d679ac Creating a correct URL is more difficult than just slapping on a scheme (fixes #2316) 2015-10-30 21:22:40 +01:00
Jakob Borg
48c34b7234 Translation update 2015-10-30 10:23:09 +01:00
Audrius Butkevicius
28603f0d2c Merge pull request #2420 from calmh/closelog
Enable log rotation by automatically closing log file (fixes #2251)
2015-10-29 15:25:51 +00:00
Jakob Borg
b2855f02fe Enable log rotation by automatically closing log file (fixes #2251) 2015-10-29 16:04:07 +01:00
Audrius Butkevicius
bef3d88076 Merge pull request #2418 from calmh/fix2416
Rescan changed files before pulling on top of them (fixes #2416)
2015-10-29 08:15:36 +00:00
Jakob Borg
e1a8ea7dec Rescan changed files before pulling on top of them (fixes #2416) 2015-10-29 09:12:37 +01:00
Jakob Borg
c4ad97136f Move leveldb instance and transactions into separate files 2015-10-29 08:07:51 +01:00
Audrius Butkevicius
eab1d6782b Merge pull request #2415 from calmh/dbkeys
Add database and transaction instances
2015-10-28 21:50:29 +00:00
Jakob Borg
fd7b8ec77e Neater transaction handling 2015-10-28 22:04:00 +01:00
Jakob Borg
e28c991331 Create an instance type to tie database methods to 2015-10-28 21:03:05 +01:00
Jakob Borg
a52811dfa3 Don't use godep to run tests 2015-10-28 09:22:07 +01:00
Jakob Borg
9e210d705d The PublicKey() method is an addition in Go 1.4 2015-10-27 16:03:14 +01:00
Jakob Borg
c42f1b53ab pulorder.go -> pullorder.go 2015-10-27 12:14:14 +01:00
Jakob Borg
d171173e90 AlwaysLocalNets should not default to null 2015-10-27 12:04:51 +01:00
Jakob Borg
679f0f9363 Fix some config Copy() things we had forgotten 2015-10-27 11:53:42 +01:00
Jakob Borg
724c1e297f Remove handling of config versions < 10 (v0.11.0) 2015-10-27 11:46:33 +01:00
Jakob Borg
83154569b1 Refactor config types into separate files 2015-10-27 11:37:03 +01:00
Jakob Borg
e3c0fba34b Must not call hex.Dump in non-debug mode... 2015-10-27 10:27:18 +01:00
Jakob Borg
2b6a6b91f3 Remove unused struct field 2015-10-27 09:55:05 +01:00
Audrius Butkevicius
09a555fdd2 Merge pull request #2410 from calmh/hashalloc
Reduce allocations in HashFile
2015-10-27 08:45:38 +00:00
Jakob Borg
dc32f7f0a3 Reduce allocations in HashFile
By using copyBuffer we avoid a buffer allocation for each block we hash,
and by allocating space for the hashes up front we get one large backing
array instead of a small one for each block. For a 17 MiB file this
makes quite a difference in the amount of memory allocated:

	benchmark               old ns/op     new ns/op     delta
	BenchmarkHashFile-8     102045110     100459158     -1.55%

	benchmark               old allocs     new allocs     delta
	BenchmarkHashFile-8     415            144            -65.30%

	benchmark               old bytes     new bytes     delta
	BenchmarkHashFile-8     4504296       48104         -98.93%
2015-10-27 09:37:27 +01:00
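Both tricks in one sketch: a single reused copy buffer for io.CopyBuffer, and one backing array receiving every block hash via Sum's append semantics. The block size and hash are assumptions (128 KiB, SHA-256), not necessarily the scanner's actual values:

    package scanner

    import (
        "crypto/sha256"
        "io"
        "os"
    )

    const blockSize = 128 << 10 // assumed block size

    func hashBlocks(path string, size int64) ([][]byte, error) {
        fd, err := os.Open(path)
        if err != nil {
            return nil, err
        }
        defer fd.Close()

        numBlocks := int((size + blockSize - 1) / blockSize)
        hashes := make([][]byte, 0, numBlocks)
        backing := make([]byte, 0, numBlocks*sha256.Size) // one large backing array
        buf := make([]byte, blockSize)                    // reused for every block
        h := sha256.New()

        for i := 0; i < numBlocks; i++ {
            h.Reset()
            if _, err := io.CopyBuffer(h, io.LimitReader(fd, blockSize), buf); err != nil {
                return nil, err
            }
            backing = h.Sum(backing) // appends into the shared backing array
            hashes = append(hashes, backing[len(backing)-sha256.Size:])
        }
        return hashes, nil
    }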
Jakob Borg
1efd8d6c75 Add benchmark of HashFile 2015-10-27 09:30:34 +01:00
Jakob Borg
898fc72313 Fixup NICKS/authors 2015-10-27 08:38:25 +01:00
Jakob Borg
21c5806cbf Merge pull request #2405 from acogdev/master
Documentation and examples for autostarting with Upstart
2015-10-27 08:37:14 +01:00
Jakob Borg
464e6bec95 Log lines in REST should have lower case keys 2015-10-27 08:22:35 +01:00
Audrius Butkevicius
2ae832d919 Fix typo introduced 2015-10-25 21:10:55 +00:00
Audrius Butkevicius
5b03c2d949 Remove dead code 2015-10-25 20:46:09 +00:00
Audrius Butkevicius
f629a998a0 Change errNoDevice message to something more human 2015-10-25 13:27:26 +00:00
Jake Peterson
fe88781bc8 Changed system conf file to use $USER 2015-10-24 14:53:08 -06:00
Audrius Butkevicius
e725c97967 Merge pull request #2406 from syncthing/fix-non-local-local-networks
Consider 'AlwaysLocalNets' in bandwidth limiters
2015-10-24 13:07:04 +01:00
Matt Burke
63caf22671 Consider 'AlwaysLocalNets' in bandwidth limiters
'AlwaysLocalNets' was getting printed, but wasn't getting used
when setting up connections. Now, the nets that should be
considered local are printed and used.
2015-10-24 01:14:25 -04:00
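The check itself is a CIDR containment test on the remote address before the limiter is applied; a sketch with assumed names, not the actual connection code:

    package connections

    import "net"

    // isAlwaysLocal reports whether addr falls inside one of the configured
    // AlwaysLocalNets CIDRs, in which case the bandwidth limiter is bypassed.
    func isAlwaysLocal(addr net.Addr, alwaysLocalNets []string) bool {
        host, _, err := net.SplitHostPort(addr.String())
        if err != nil {
            return false
        }
        ip := net.ParseIP(host)
        if ip == nil {
            return false
        }
        for _, cidr := range alwaysLocalNets {
            if _, ipnet, err := net.ParseCIDR(cidr); err == nil && ipnet.Contains(ip) {
                return true
            }
        }
        return false
    }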
Jake Peterson
44790b1333 Added Jake Peterson to AUTHORS 2015-10-23 22:49:25 -06:00
Jake Peterson
b40bb64612 Documentation and examples for Ubuntu-like linux systems using
Upstart as the init system.
2015-10-23 22:37:35 -06:00
Audrius Butkevicius
7b5ab29a6d Because I am a muppet 2015-10-23 20:21:21 +01:00
Audrius Butkevicius
4fd614be09 Add a different mode to stindex 2015-10-23 20:02:38 +01:00
Audrius Butkevicius
73236e58c5 Close channel after the client is stopped 2015-10-22 23:09:02 +01:00
Jakob Borg
32414853c6 Fix Raleway font 2015-10-22 21:08:24 +02:00
Jakob Borg
f3dc78d457 Don't deadlock after checking relay client status (fixes #2404) 2015-10-22 20:32:15 +02:00
Jakob Borg
a32ac62208 Don't expect ending slash on Windows 2015-10-22 13:49:41 +02:00
Jakob Borg
d7a934cf0e Paths must not end with slash on Windows 2015-10-22 11:39:34 +02:00
Jakob Borg
503491392d Correct amount of stack unwinding for debug prints 2015-10-22 11:38:45 +02:00
Jakob Borg
b3a2bf367b Tweak new folder defaults 2015-10-22 09:01:10 +02:00
Jakob Borg
c19eff4872 Revive remote client version in the GUI 2015-10-22 08:53:28 +02:00
Jakob Borg
2941a813c2 Fix upgrade tests 2015-10-22 08:35:48 +02:00
Jakob Borg
0a022d38fa Upgrade lib should use same criteria for beta check as main 2015-10-22 08:28:35 +02:00
Jakob Borg
7ed3b3dd3a Docs update 2015-10-22 08:14:43 +02:00
Jakob Borg
ce52963d2b Update test configs to modern v0.12 defaults 2015-10-22 08:06:17 +02:00
Jakob Borg
9a1922fdc6 Merge pull request #2385 from AudriusButkevicius/you-are-next
Add separate client for dynamic relays (fixes #2368)
2015-10-22 08:01:22 +02:00
Audrius Butkevicius
967424a538 Merge pull request #2402 from calmh/truncfaster
Don't load block list in ...Truncated methods
2015-10-21 23:03:48 +01:00
Jakob Borg
83131103cf Don't load block list in ...Truncated methods
Speeds up and reduces allocations on those operations, at the price of
having a manually tweaked XDR decoder for FileInfoTruncated.

benchmark                         old ns/op      new ns/op      delta
BenchmarkReplaceAll-8             1868198122     1880206886     +0.64%
BenchmarkUpdateOneChanged-8       231852         172695         -25.51%
BenchmarkUpdateOneUnchanged-8     230624         179341         -22.24%
BenchmarkNeedHalf-8               104601744      109461427      +4.65%
BenchmarkHave-8                   29102480       34105026       +17.19%
BenchmarkGlobal-8                 150547687      172778045      +14.77%
BenchmarkNeedHalfTruncated-8      102471355      76564986       -25.28%
BenchmarkHaveTruncated-8          28758368       14277481       -50.35%
BenchmarkGlobalTruncated-8        151192913      106070136      -29.84%

benchmark                         old allocs     new allocs     delta
BenchmarkReplaceAll-8             555577         557554         +0.36%
BenchmarkUpdateOneChanged-8       1135           587            -48.28%
BenchmarkUpdateOneUnchanged-8     1135           587            -48.28%
BenchmarkNeedHalf-8               374780         374775         -0.00%
BenchmarkHave-8                   151992         152085         +0.06%
BenchmarkGlobal-8                 530033         530135         +0.02%
BenchmarkNeedHalfTruncated-8      374699         22160          -94.09%
BenchmarkHaveTruncated-8          151834         4904           -96.77%
BenchmarkGlobalTruncated-8        530037         30536          -94.24%

benchmark                         old bytes      new bytes      delta
BenchmarkReplaceAll-8             1765116216     1765305376     +0.01%
BenchmarkUpdateOneChanged-8       135085         93043          -31.12%
BenchmarkUpdateOneUnchanged-8     134976         92928          -31.15%
BenchmarkNeedHalf-8               44758752       44751791       -0.02%
BenchmarkHave-8                   11845052       11967172       +1.03%
BenchmarkGlobal-8                 80431136       80431065       -0.00%
BenchmarkNeedHalfTruncated-8      46526459       18243543       -60.79%
BenchmarkHaveTruncated-8          11348357       418998         -96.31%
BenchmarkGlobalTruncated-8        80977672       43116991       -46.75%
2015-10-21 23:49:10 +02:00
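The "manually tweaked" decoder boils down to reading block data into one reused scratch buffer instead of materializing a block slice. The field layout below is hypothetical; only the skipping pattern is the point:

    package protocol

    import "github.com/calmh/xdr"

    type FileInfoTruncated struct {
        Name     string
        Flags    uint32
        Modified int64
    }

    func (o *FileInfoTruncated) DecodeXDRFrom(xr *xdr.Reader) error {
        o.Name = xr.ReadStringMax(8192)
        o.Flags = xr.ReadUint32()
        o.Modified = int64(xr.ReadUint64())
        numBlocks := int(xr.ReadUint32())
        scratch := make([]byte, 64)
        for i := 0; i < numBlocks; i++ {
            xr.ReadUint32()                     // block size, discarded
            scratch = xr.ReadBytesInto(scratch) // block hash, read into reused scratch
        }
        return xr.Error()
    }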
Audrius Butkevicius
9f4a0d3216 Merge pull request #2401 from calmh/blockmap2
Performance tweaks on leveldb code and blockmap
2015-10-21 22:32:10 +01:00
Jakob Borg
c1591a5efd Only run benchmarks with -tags benchmark
Avoids creating temp database and stuff on a normal test run
2015-10-21 23:19:26 +02:00
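Gating benchmarks behind a tag is a one-line build constraint; a sketch (package name assumed):

    // +build benchmark

    package db_test

    // Benchmarks in this file are only compiled and run via:
    //
    //     go test -tags benchmark -bench .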
Jakob Borg
918ef4dff8 Use batches in blockmap, speeds up and reduces memory usage on large Replace and Update ops
benchmark                         old ns/op      new ns/op      delta
BenchmarkReplaceAll-8             2880834572     1868198122     -35.15%
BenchmarkUpdateOneChanged-8       236596         231852         -2.01%
BenchmarkUpdateOneUnchanged-8     227326         230624         +1.45%
BenchmarkNeedHalf-8               105151538      104601744      -0.52%
BenchmarkHave-8                   28827492       29102480       +0.95%
BenchmarkGlobal-8                 150768724      150547687      -0.15%
BenchmarkNeedHalfTruncated-8      104434216      102471355      -1.88%
BenchmarkHaveTruncated-8          27860093       28758368       +3.22%
BenchmarkGlobalTruncated-8        149972888      151192913      +0.81%

benchmark                         old allocs     new allocs     delta
BenchmarkReplaceAll-8             555451         555577         +0.02%
BenchmarkUpdateOneChanged-8       1135           1135           +0.00%
BenchmarkUpdateOneUnchanged-8     1135           1135           +0.00%
BenchmarkNeedHalf-8               374779         374780         +0.00%
BenchmarkHave-8                   151996         151992         -0.00%
BenchmarkGlobal-8                 530066         530033         -0.01%
BenchmarkNeedHalfTruncated-8      374702         374699         -0.00%
BenchmarkHaveTruncated-8          151834         151834         +0.00%
BenchmarkGlobalTruncated-8        530049         530037         -0.00%

benchmark                         old bytes      new bytes      delta
BenchmarkReplaceAll-8             5018351912     1765116216     -64.83%
BenchmarkUpdateOneChanged-8       135085         135085         +0.00%
BenchmarkUpdateOneUnchanged-8     134976         134976         +0.00%
BenchmarkNeedHalf-8               44769400       44758752       -0.02%
BenchmarkHave-8                   11930612       11845052       -0.72%
BenchmarkGlobal-8                 81523668       80431136       -1.34%
BenchmarkNeedHalfTruncated-8      46692342       46526459       -0.36%
BenchmarkHaveTruncated-8          11348357       11348357       +0.00%
BenchmarkGlobalTruncated-8        81843956       80977672       -1.06%
2015-10-21 23:05:23 +02:00
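The batching pattern with goleveldb, in miniature: accumulate Puts in a leveldb.Batch and commit with a single Write instead of one write per key. Function and argument names are illustrative:

    package blockmap

    import "github.com/syndtr/goleveldb/leveldb"

    func replaceAll(db *leveldb.DB, kvs map[string][]byte) error {
        batch := new(leveldb.Batch)
        for k, v := range kvs {
            batch.Put([]byte(k), v) // buffered in memory, not yet written
        }
        return db.Write(batch, nil) // one atomic write for the whole set
    }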
Jakob Borg
0d9a04c713 Reuse blockkey, speeds up large Update and Replace calls
benchmark                         old ns/op      new ns/op      delta
BenchmarkReplaceAll-8             2866418930     2880834572     +0.50%
BenchmarkUpdateOneChanged-8       226635         236596         +4.40%
BenchmarkUpdateOneUnchanged-8     229090         227326         -0.77%
BenchmarkNeedHalf-8               104483393      105151538      +0.64%
BenchmarkHave-8                   29288220       28827492       -1.57%
BenchmarkGlobal-8                 159269126      150768724      -5.34%
BenchmarkNeedHalfTruncated-8      108235000      104434216      -3.51%
BenchmarkHaveTruncated-8          28945489       27860093       -3.75%
BenchmarkGlobalTruncated-8        149355833      149972888      +0.41%

benchmark                         old allocs     new allocs     delta
BenchmarkReplaceAll-8             1054944        555451         -47.35%
BenchmarkUpdateOneChanged-8       1135           1135           +0.00%
BenchmarkUpdateOneUnchanged-8     1135           1135           +0.00%
BenchmarkNeedHalf-8               374777         374779         +0.00%
BenchmarkHave-8                   151995         151996         +0.00%
BenchmarkGlobal-8                 530063         530066         +0.00%
BenchmarkNeedHalfTruncated-8      374699         374702         +0.00%
BenchmarkHaveTruncated-8          151834         151834         +0.00%
BenchmarkGlobalTruncated-8        530021         530049         +0.01%

benchmark                         old bytes      new bytes      delta
BenchmarkReplaceAll-8             5074297112     5018351912     -1.10%
BenchmarkUpdateOneChanged-8       135097         135085         -0.01%
BenchmarkUpdateOneUnchanged-8     134976         134976         +0.00%
BenchmarkNeedHalf-8               44759436       44769400       +0.02%
BenchmarkHave-8                   11911138       11930612       +0.16%
BenchmarkGlobal-8                 81609867       81523668       -0.11%
BenchmarkNeedHalfTruncated-8      46588024       46692342       +0.22%
BenchmarkHaveTruncated-8          11348354       11348357       +0.00%
BenchmarkGlobalTruncated-8        79485168       81843956       +2.97%
2015-10-21 23:05:23 +02:00
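The reuse trick is to let the caller own the key buffer and append into it, so the allocation happens once and the backing array is recycled on every call. The key layout here is illustrative only:

    package blockmap

    const keyTypeBlock byte = 1 // assumed key-type prefix

    // blockKey builds a database key into buf, reusing its backing array.
    // Callers keep the returned slice and pass it back on the next call.
    func blockKey(buf, hash, folder, name []byte) []byte {
        buf = buf[:0]
        buf = append(buf, keyTypeBlock)
        buf = append(buf, hash...)
        buf = append(buf, folder...)
        buf = append(buf, name...)
        return buf
    }

In a loop, key = blockKey(key, ...) keeps hitting the same backing array once it has grown to the maximum key size, which is where the allocation savings come from.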
Jakob Borg
0c0c69f0cf The GC runs are legacy and slow things down quite a bit
benchmark                         old ns/op      new ns/op      delta
BenchmarkReplaceAll-8             2942370526     2866418930     -2.58%
BenchmarkUpdateOneChanged-8       7402489        226635         -96.94%
BenchmarkUpdateOneUnchanged-8     7298777        229090         -96.86%
BenchmarkNeedHalf-8               113608416      104483393      -8.03%
BenchmarkHave-8                   29834263       29288220       -1.83%
BenchmarkGlobal-8                 162773699      159269126      -2.15%
BenchmarkNeedHalfTruncated-8      111943400      108235000      -3.31%
BenchmarkHaveTruncated-8          29490369       28945489       -1.85%
BenchmarkGlobalTruncated-8        165841081      149355833      -9.94%

benchmark                         old allocs     new allocs     delta
BenchmarkReplaceAll-8             1054942        1054944        +0.00%
BenchmarkUpdateOneChanged-8       1149           1135           -1.22%
BenchmarkUpdateOneUnchanged-8     1135           1135           +0.00%
BenchmarkNeedHalf-8               374774         374777         +0.00%
BenchmarkHave-8                   151995         151995         +0.00%
BenchmarkGlobal-8                 530042         530063         +0.00%
BenchmarkNeedHalfTruncated-8      374697         374699         +0.00%
BenchmarkHaveTruncated-8          151834         151834         +0.00%
BenchmarkGlobalTruncated-8        530050         530021         -0.01%

benchmark                         old bytes      new bytes      delta
BenchmarkReplaceAll-8             5074294728     5074297112     +0.00%
BenchmarkUpdateOneChanged-8       141048         135097         -4.22%
BenchmarkUpdateOneUnchanged-8     134976         134976         +0.00%
BenchmarkNeedHalf-8               44734813       44759436       +0.06%
BenchmarkHave-8                   11911634       11911138       -0.00%
BenchmarkGlobal-8                 80436854       81609867       +1.46%
BenchmarkNeedHalfTruncated-8      46514673       46588024       +0.16%
BenchmarkHaveTruncated-8          11348357       11348354       -0.00%
BenchmarkGlobalTruncated-8        81730740       79485168       -2.75%
2015-10-21 23:05:22 +02:00
Jakob Borg
943e80e26c Make benchmarks more realistic 2015-10-21 23:04:29 +02:00
Audrius Butkevicius
058a327584 Merge pull request #2397 from calmh/localsize
Keep LocalSize & GlobalSize data in RAM
2015-10-21 21:05:18 +01:00
Jakob Borg
1eca4170f7 Add test for LocalSize/GlobalSize results 2015-10-21 21:58:48 +02:00
Jakob Borg
c268e4ad1b Also keep GlobalSize in RAM 2015-10-21 21:58:48 +02:00
Jakob Borg
d4f81e8791 Keep LocalSize data in RAM 2015-10-21 21:58:48 +02:00
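The shape of the idea, hedged: a small mutex-guarded counter struct updated on every file add/remove, so size queries never need to walk the database. A sketch, not the actual implementation:

    package db

    import "sync"

    type sizeTracker struct {
        mut     sync.Mutex
        files   int
        deleted int
        bytes   int64
    }

    func (s *sizeTracker) addFile(size int64, isDeleted bool) {
        s.mut.Lock()
        defer s.mut.Unlock()
        if isDeleted {
            s.deleted++
        } else {
            s.files++
            s.bytes += size
        }
    }

    func (s *sizeTracker) removeFile(size int64, isDeleted bool) {
        s.mut.Lock()
        defer s.mut.Unlock()
        if isDeleted {
            s.deleted--
        } else {
            s.files--
            s.bytes -= size
        }
    }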
Audrius Butkevicius
4f0680c3c8 Add separate client for dynamic relays (fixes #2368)
Did some manual tests in the playground, such as kicking off two clients in parallel: the first one connects,
the second one gets a message about already being connected and falls back to the second address.
2015-10-21 20:08:14 +01:00
Jakob Borg
8c26fe44c3 Actually run protocol tests faster with -short (on Go 1.5...) 2015-10-21 14:45:18 +02:00
Jakob Borg
8c7d9f3dd2 Protocol tests should run faster with -short 2015-10-21 14:35:59 +02:00
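The -short pattern in a test is just a branch on testing.Short(); the iteration counts below are made up:

    package protocol_test

    import "testing"

    func TestRequestResponseLoop(t *testing.T) {
        n := 10000
        if testing.Short() { // go test -short
            n = 100
        }
        for i := 0; i < n; i++ {
            // exercise one request/response round trip
        }
    }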
Jakob Borg
f241b7e79a Global discovery should time out (fixes #2389) 2015-10-21 14:24:55 +02:00
Audrius Butkevicius
dc1f3503be Merge pull request #2391 from burkemw3/warn-overwrite-config-files
Emit warning when sync could overwrite configuration
2015-10-20 21:57:16 +01:00
Matt Burke
c2a5e180b8 Emit warning when sync could overwrite configuration
Overwriting configuration files is likely to happen if a
user syncs their home directories across computers. In this
case, the biggest risk is that all nodes will end up with
the same certificate and thus Device ID.

When the model prepares a folder for syncing, it checks to
see if the configuration files this instance is using are
getting synced. If they are getting synced, and they aren't
getting ignored, a warning is emitted. The model is used
so that when a new folder is added dynamically, a warning
is also emitted.

This will not prevent a user from shooting themselves in
the foot, and will not cover all cases (e.g. symlinks).
It should provide _something_ for many users in this
situation to go on, though.
2015-10-20 12:22:27 -04:00
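The core of the check is a path containment test between the folder root and the running instance's config directory (the ignore-pattern lookup is elided); a sketch with assumed names:

    package model

    import (
        "path/filepath"
        "strings"
    )

    // configInsideFolder reports whether configDir lies within folderPath.
    // A warning would be emitted when this is true and the path is not
    // covered by an ignore pattern.
    func configInsideFolder(folderPath, configDir string) bool {
        rel, err := filepath.Rel(folderPath, configDir)
        if err != nil {
            return false
        }
        return rel != ".." && !strings.HasPrefix(rel, ".."+string(filepath.Separator))
    }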
Jakob Borg
7351217489 Relative GOBIN not allowed in Go 1.5.2+ 2015-10-20 15:59:38 +02:00
Jakob Borg
aa42aafe33 Don't panic on clean shutdown 2015-10-20 15:59:37 +02:00
Stefan Tatschner
b2da0120d6 Add syncthing-localdisco.7 2015-10-20 14:06:14 +02:00
Jakob Borg
9e84e09c26 Update specs link 2015-10-20 13:41:24 +02:00
Jakob Borg
32c1a9bc45 Docs & translation update 2015-10-20 09:59:50 +02:00
Audrius Butkevicius
a7e95922c1 Merge pull request #2395 from calmh/printhashrate
Print the single thread hash performance at startup
2015-10-20 08:10:52 +01:00
Jakob Borg
1392d0bc14 Print the single thread hash performance at startup 2015-10-20 08:51:59 +02:00
Jakob Borg
0f9fa9507e Tests must use locking to avoid race (fixes #2394) 2015-10-20 08:51:31 +02:00
Jakob Borg
1087535d8f Add address for burkemw3 2015-10-20 08:38:12 +02:00
Audrius Butkevicius
0e51f51979 Merge pull request #2379 from calmh/nodbvalidate
Don't validate requests against the database
2015-10-19 16:57:58 +01:00
Jakob Borg
90e0141ac5 Request() should return protocol errors 2015-10-19 15:14:41 +02:00
Jakob Borg
8435a8678e Don't validate requests against the database 2015-10-19 15:14:41 +02:00
Jakob Borg
bd2888fc3b Include maxConflicts -1 in test configs 2015-10-19 15:14:06 +02:00
Jakob Borg
6578ffe2c9 Merge pull request #2320 from AudriusButkevicius/proto
More proto changes
2015-10-18 08:47:30 +02:00
Jakob Borg
175340522f Merge pull request #2375 from AudriusButkevicius/proxy
Add proxy support (fixes #271)
2015-10-18 08:45:17 +02:00
Audrius Butkevicius
afc917b582 Fix tests 2015-10-17 09:48:41 +01:00
Audrius Butkevicius
9f4cd7716e Add more information about the folders to ClusterConfig 2015-10-17 09:46:46 +01:00
Jakob Borg
29b0017445 Merge pull request #2386 from AudriusButkevicius/epoint
Change relaypoolsrv endpoint
2015-10-17 09:14:35 +09:00
Jakob Borg
910a7c619a Merge pull request #2381 from AudriusButkevicius/maxcon
Allow limiting max conflicts (fixes #2282)
2015-10-17 09:10:31 +09:00
Audrius Butkevicius
273fac2028 Change relaypoolsrv endpoint
Just in case we want to show some stats in the future, such as a Geo-IP based map of where relays are, their dot size being proportional to global rate limits,
together with potentially how much data in total has been transferred, and how many sessions there have been, by crawling relay status pages etc ;)
2015-10-17 00:10:01 +01:00
Audrius Butkevicius
a323d85d32 Add more information about the device to ClusterConfig 2015-10-16 19:40:12 +01:00
Audrius Butkevicius
491a33de0b Move device name into the protocol messages 2015-10-16 19:40:12 +01:00
Audrius Butkevicius
d6a0a44432 Update xdr 2015-10-16 19:40:11 +01:00
Audrius Butkevicius
752533489a Allow limiting max conflicts (fixes #2282) 2015-10-16 19:26:38 +01:00
Audrius Butkevicius
e4e3c19e96 Our dialer sets up TCP options 2015-10-16 19:18:22 +01:00
Jakob Borg
4ddb066728 Update lang-en 2015-10-16 18:54:41 +09:00
Jakob Borg
958bbbc8cb Fix mateon1 2015-10-16 18:54:07 +09:00
Audrius Butkevicius
abbcd1f436 Patch up HTTP clients 2015-10-15 21:02:17 +01:00
Audrius Butkevicius
db494f2afc God damn godeps 2015-10-15 21:01:48 +01:00
Audrius Butkevicius
985ea29940 Add proxy support (fixes #271) 2015-10-15 21:01:42 +01:00
278 changed files with 12674 additions and 32323 deletions

AUTHORS

@@ -36,6 +36,7 @@ Frank Isemann <frank@isemann.name>
Gilli Sigurdsson <gilli@vx.is>
Jacek Szafarkiewicz <szafar@linux.pl>
Jakob Borg <jakob@nym.se>
Jake Peterson <jake@acogdev.com>
James Patterson <jamespatterson@operamail.com> <jpjp@users.noreply.github.com>
Jaroslav Malec <dzardacz@gmail.com>
Jens Diemer <github.com@jensdiemer.de> <git@jensdiemer.de>
@@ -48,9 +49,10 @@ Lord Landon Agahnim <lordlandon@gmail.com>
Marc Laporte <marc@marclaporte.com> <marc@laporte.name>
Marc Pujol <kilburn@la3.org>
Marcin Dziadus <dziadus.marcin@gmail.com>
Mateon1 <matin1111@wp.pl>
Matt Burke <mburke@amplify.com>
Mateusz Naściszewski <matin1111@wp.pl>
Matt Burke <mburke@amplify.com> <burkemw3@gmail.com>
Michael Jephcote <rewt0r@gmx.com> <Rewt0r@users.noreply.github.com>
Michael Ploujnikov <ploujj@gmail.com>
Michael Tilli <pyfisch@gmail.com>
Pascal Jungblut <github@pascalj.com> <mail@pascal-jungblut.com>
Peter Hoeg <peter@speartail.com>
@@ -58,12 +60,16 @@ Philippe Schommers <philippe@schommers.be>
Phill Luby <phill.luby@newredo.com>
Piotr Bejda <piotrb10@gmail.com>
Ryan Sullivan <kayoticsully@gmail.com>
Scott Klupfel <kluppy@going2blue.com>
Sergey Mishin <ralder@yandex.ru>
Stefan Tatschner <stefan@sevenbyte.org> <rumpelsepp@sevenbyte.org>
Stefan Kuntz <stefan.github@gmail.com> <Stefan.github@gmail.com>
Tim Abell <tim@timwise.co.uk>
Tobias Nygren <tnn@nygren.pp.se>
Tomas Cerveny <kozec@kozec.com>
Tully Robinson <tully@tojr.org>
Tyler Brazier <tyler@tylerbrazier.com>
Veeti Paananen <veeti.paananen@rojekti.fi>
Vil Brekin <vilbrekin@gmail.com>
Victor Buinsky <vix_booja@tut.by>
Yannic A. <eipiminusone+github@gmail.com> <eipiminus1@users.noreply.github.com>

Godeps/Godeps.json (generated)

@@ -19,7 +19,7 @@
},
{
"ImportPath": "github.com/calmh/xdr",
"Rev": "5f7208e86762911861c94f1849eddbfc0a60cbf0"
"Rev": "9eb3e1a622d9364deb39c831f7e5f164393d7e37"
},
{
"ImportPath": "github.com/golang/snappy",
@@ -31,7 +31,11 @@
},
{
"ImportPath": "github.com/kardianos/osext",
"Rev": "6e7f843663477789fac7c02def0d0909e969b4e5"
"Rev": "345163ffe35aa66560a4cd7dddf00f3ae21c9fda"
},
{
"ImportPath": "github.com/rcrowley/go-metrics",
"Rev": "1ce93efbc8f9c568886b2ef85ce305b2217b3de3"
},
{
"ImportPath": "github.com/syndtr/goleveldb/leveldb",
@@ -56,27 +60,31 @@
},
{
"ImportPath": "golang.org/x/crypto/bcrypt",
"Rev": "81bf7719a6b7ce9b665598222362b50122dfc13b"
"Rev": "575fdbe86e5dd89229707ebec0575ce7d088a4a6"
},
{
"ImportPath": "golang.org/x/crypto/blowfish",
"Rev": "81bf7719a6b7ce9b665598222362b50122dfc13b"
"Rev": "575fdbe86e5dd89229707ebec0575ce7d088a4a6"
},
{
"ImportPath": "golang.org/x/net/internal/iana",
"Rev": "db8e4de5b2d6653f66aea53094624468caad15d2"
"Rev": "042ba42fa6633b34205efc66ba5719cd3afd8d38"
},
{
"ImportPath": "golang.org/x/net/ipv6",
"Rev": "db8e4de5b2d6653f66aea53094624468caad15d2"
"Rev": "042ba42fa6633b34205efc66ba5719cd3afd8d38"
},
{
"ImportPath": "golang.org/x/net/proxy",
"Rev": "042ba42fa6633b34205efc66ba5719cd3afd8d38"
},
{
"ImportPath": "golang.org/x/text/transform",
"Rev": "723492b65e225eafcba054e76ba18bb9c5ac1ea2"
"Rev": "5eb8d4684c4796dd36c74f6452f2c0fa6c79597e"
},
{
"ImportPath": "golang.org/x/text/unicode/norm",
"Rev": "723492b65e225eafcba054e76ba18bb9c5ac1ea2"
"Rev": "5eb8d4684c4796dd36c74f6452f2c0fa6c79597e"
}
]
}


@@ -1,63 +0,0 @@
package lz4
import (
"bytes"
"io/ioutil"
"testing"
)
var testfile, _ = ioutil.ReadFile("testdata/pg1661.txt")
func roundtrip(t *testing.T, input []byte) {
dst, err := Encode(nil, input)
if err != nil {
t.Errorf("got error during compression: %s", err)
}
output, err := Decode(nil, dst)
if err != nil {
t.Errorf("got error during decompress: %s", err)
}
if !bytes.Equal(output, input) {
t.Errorf("roundtrip failed")
}
}
func TestEmpty(t *testing.T) {
roundtrip(t, nil)
}
func TestLengths(t *testing.T) {
for i := 0; i < 1024; i++ {
roundtrip(t, testfile[:i])
}
for i := 1024; i < 4096; i += 23 {
roundtrip(t, testfile[:i])
}
}
func TestWords(t *testing.T) {
roundtrip(t, testfile)
}
func BenchmarkLZ4Encode(b *testing.B) {
for i := 0; i < b.N; i++ {
Encode(nil, testfile)
}
}
func BenchmarkLZ4Decode(b *testing.B) {
var compressed, _ = Encode(nil, testfile)
b.ResetTimer()
for i := 0; i < b.N; i++ {
Decode(nil, compressed)
}
}

File diff suppressed because it is too large


@@ -1,59 +0,0 @@
// Copyright (C) 2014 Jakob Borg
package luhn_test
import (
"testing"
"github.com/calmh/luhn"
)
func TestGenerate(t *testing.T) {
// Base 6 Luhn
a := luhn.Alphabet("abcdef")
c, err := a.Generate("abcdef")
if err != nil {
t.Fatal(err)
}
if c != 'e' {
t.Errorf("Incorrect check digit %c != e", c)
}
// Base 10 Luhn
a = luhn.Alphabet("0123456789")
c, err = a.Generate("7992739871")
if err != nil {
t.Fatal(err)
}
if c != '3' {
t.Errorf("Incorrect check digit %c != 3", c)
}
}
func TestInvalidString(t *testing.T) {
a := luhn.Alphabet("ABC")
_, err := a.Generate("7992739871")
t.Log(err)
if err == nil {
t.Error("Unexpected nil error")
}
}
func TestBadAlphabet(t *testing.T) {
a := luhn.Alphabet("01234566789")
_, err := a.Generate("7992739871")
t.Log(err)
if err == nil {
t.Error("Unexpected nil error")
}
}
func TestValidate(t *testing.T) {
a := luhn.Alphabet("abcdef")
if !a.Validate("abcdefe") {
t.Errorf("Incorrect validation response for abcdefe")
}
if a.Validate("abcdefd") {
t.Errorf("Incorrect validation response for abcdefd")
}
}


@@ -4,7 +4,7 @@ go:
install:
- export PATH=$PATH:$HOME/gopath/bin
- go get code.google.com/p/go.tools/cmd/cover
- go get golang.org/x/tools/cover
- go get github.com/mattn/goveralls
script:


@@ -1,7 +1,7 @@
xdr
===
[![Build Status](https://img.shields.io/travis/calmh/xdr.svg?style=flat)](https://travis-ci.org/calmh/xdr)
[![Build Status](https://img.shields.io/circleci/project/calmh/xdr.svg?style=flat-square)](https://circleci.com/gh/calmh/xdr)
[![Coverage Status](https://img.shields.io/coveralls/calmh/xdr.svg?style=flat)](https://coveralls.io/r/calmh/xdr?branch=master)
[![API Documentation](http://img.shields.io/badge/api-Godoc-blue.svg?style=flat)](http://godoc.org/github.com/calmh/xdr)
[![MIT License](http://img.shields.io/badge/license-MIT-blue.svg?style=flat)](http://opensource.org/licenses/MIT)


@@ -1,117 +0,0 @@
// Copyright (C) 2014 Jakob Borg. All rights reserved. Use of this source code
// is governed by an MIT-style license that can be found in the LICENSE file.
package xdr_test
import (
"io"
"io/ioutil"
"testing"
"github.com/calmh/xdr"
)
type XDRBenchStruct struct {
I1 uint64
I2 uint32
I3 uint16
I4 uint8
Bs0 []byte // max:128
Bs1 []byte
S0 string // max:128
S1 string
}
var res []byte // no to be optimized away
var s = XDRBenchStruct{
I1: 42,
I2: 43,
I3: 44,
I4: 45,
Bs0: []byte{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18},
Bs1: []byte{11, 12, 13, 14, 15, 16, 17, 18, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10},
S0: "Hello World! String one.",
S1: "Hello World! String two.",
}
var e []byte
func init() {
e, _ = s.MarshalXDR()
}
func BenchmarkThisMarshal(b *testing.B) {
for i := 0; i < b.N; i++ {
res, _ = s.MarshalXDR()
}
}
func BenchmarkThisUnmarshal(b *testing.B) {
var t XDRBenchStruct
for i := 0; i < b.N; i++ {
err := t.UnmarshalXDR(e)
if err != nil {
b.Fatal(err)
}
}
}
func BenchmarkThisEncode(b *testing.B) {
for i := 0; i < b.N; i++ {
_, err := s.EncodeXDR(ioutil.Discard)
if err != nil {
b.Fatal(err)
}
}
}
func BenchmarkThisEncoder(b *testing.B) {
w := xdr.NewWriter(ioutil.Discard)
for i := 0; i < b.N; i++ {
_, err := s.EncodeXDRInto(w)
if err != nil {
b.Fatal(err)
}
}
}
type repeatReader struct {
data []byte
}
func (r *repeatReader) Read(bs []byte) (n int, err error) {
if len(bs) > len(r.data) {
err = io.EOF
}
n = copy(bs, r.data)
r.data = r.data[n:]
return n, err
}
func (r *repeatReader) Reset(bs []byte) {
r.data = bs
}
func BenchmarkThisDecode(b *testing.B) {
rr := &repeatReader{e}
var t XDRBenchStruct
for i := 0; i < b.N; i++ {
err := t.DecodeXDR(rr)
if err != nil {
b.Fatal(err)
}
rr.Reset(e)
}
}
func BenchmarkThisDecoder(b *testing.B) {
rr := &repeatReader{e}
r := xdr.NewReader(rr)
var t XDRBenchStruct
for i := 0; i < b.N; i++ {
err := t.DecodeXDRFrom(r)
if err != nil {
b.Fatal(err)
}
rr.Reset(e)
}
}


@@ -1,201 +0,0 @@
// ************************************************************
// This file is automatically generated by genxdr. Do not edit.
// ************************************************************
package xdr_test
import (
"bytes"
"io"
"github.com/calmh/xdr"
)
/*
XDRBenchStruct Structure:
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| |
+ I1 (64 bits) +
| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| I2 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| 0x0000 | I3 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ uint8 Structure \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Length of Bs0 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ Bs0 (variable length) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Length of Bs1 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ Bs1 (variable length) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Length of S0 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ S0 (variable length) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Length of S1 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ S1 (variable length) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
struct XDRBenchStruct {
unsigned hyper I1;
unsigned int I2;
unsigned int I3;
uint8 I4;
opaque Bs0<128>;
opaque Bs1<>;
string S0<128>;
string S1<>;
}
*/
func (o XDRBenchStruct) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.EncodeXDRInto(xw)
}
func (o XDRBenchStruct) MarshalXDR() ([]byte, error) {
return o.AppendXDR(make([]byte, 0, 128))
}
func (o XDRBenchStruct) MustMarshalXDR() []byte {
bs, err := o.MarshalXDR()
if err != nil {
panic(err)
}
return bs
}
func (o XDRBenchStruct) AppendXDR(bs []byte) ([]byte, error) {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
_, err := o.EncodeXDRInto(xw)
return []byte(aw), err
}
func (o XDRBenchStruct) EncodeXDRInto(xw *xdr.Writer) (int, error) {
xw.WriteUint64(o.I1)
xw.WriteUint32(o.I2)
xw.WriteUint16(o.I3)
xw.WriteUint8(o.I4)
if l := len(o.Bs0); l > 128 {
return xw.Tot(), xdr.ElementSizeExceeded("Bs0", l, 128)
}
xw.WriteBytes(o.Bs0)
xw.WriteBytes(o.Bs1)
if l := len(o.S0); l > 128 {
return xw.Tot(), xdr.ElementSizeExceeded("S0", l, 128)
}
xw.WriteString(o.S0)
xw.WriteString(o.S1)
return xw.Tot(), xw.Error()
}
func (o *XDRBenchStruct) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.DecodeXDRFrom(xr)
}
func (o *XDRBenchStruct) UnmarshalXDR(bs []byte) error {
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.DecodeXDRFrom(xr)
}
func (o *XDRBenchStruct) DecodeXDRFrom(xr *xdr.Reader) error {
o.I1 = xr.ReadUint64()
o.I2 = xr.ReadUint32()
o.I3 = xr.ReadUint16()
o.I4 = xr.ReadUint8()
o.Bs0 = xr.ReadBytesMax(128)
o.Bs1 = xr.ReadBytes()
o.S0 = xr.ReadStringMax(128)
o.S1 = xr.ReadString()
return xr.Error()
}
/*
repeatReader Structure:
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Length of data |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ data (variable length) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
struct repeatReader {
opaque data<>;
}
*/
func (o repeatReader) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.EncodeXDRInto(xw)
}
func (o repeatReader) MarshalXDR() ([]byte, error) {
return o.AppendXDR(make([]byte, 0, 128))
}
func (o repeatReader) MustMarshalXDR() []byte {
bs, err := o.MarshalXDR()
if err != nil {
panic(err)
}
return bs
}
func (o repeatReader) AppendXDR(bs []byte) ([]byte, error) {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
_, err := o.EncodeXDRInto(xw)
return []byte(aw), err
}
func (o repeatReader) EncodeXDRInto(xw *xdr.Writer) (int, error) {
xw.WriteBytes(o.data)
return xw.Tot(), xw.Error()
}
func (o *repeatReader) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.DecodeXDRFrom(xr)
}
func (o *repeatReader) UnmarshalXDR(bs []byte) error {
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.DecodeXDRFrom(xr)
}
func (o *repeatReader) DecodeXDRFrom(xr *xdr.Reader) error {
o.data = xr.ReadBytes()
return xr.Error()
}


@@ -0,0 +1,3 @@
dependencies:
post:
- ./generate.sh


@@ -28,6 +28,7 @@ type fieldInfo struct {
Encoder string // the encoder name, i.e. "Uint64" for Read/WriteUint64
Convert string // what to convert to when encoding, i.e. "uint64"
Max int // max size for slices and strings
Submax int // max size for strings inside slices
}
type structInfo struct {
@@ -156,7 +157,11 @@ func (o *{{.TypeName}}) DecodeXDRFrom(xr *xdr.Reader) error {
{{if ne $fieldInfo.Convert ""}}
o.{{$fieldInfo.Name}}[i] = {{$fieldInfo.FieldType}}(xr.Read{{$fieldInfo.Encoder}}())
{{else if $fieldInfo.IsBasic}}
o.{{$fieldInfo.Name}}[i] = xr.Read{{$fieldInfo.Encoder}}()
{{if ge $fieldInfo.Submax 1}}
o.{{$fieldInfo.Name}}[i] = xr.Read{{$fieldInfo.Encoder}}Max({{$fieldInfo.Submax}})
{{else}}
o.{{$fieldInfo.Name}}[i] = xr.Read{{$fieldInfo.Encoder}}()
{{end}}
{{else}}
(&o.{{$fieldInfo.Name}}[i]).DecodeXDRFrom(xr)
{{end}}
@@ -166,7 +171,40 @@ func (o *{{.TypeName}}) DecodeXDRFrom(xr *xdr.Reader) error {
return xr.Error()
}`))
var maxRe = regexp.MustCompile(`\Wmax:(\d+)`)
var emptyTypeTpl = template.Must(template.New("encoder").Parse(`
func (o {{.TypeName}}) EncodeXDR(w io.Writer) (int, error) {
return 0, nil
}//+n
func (o {{.TypeName}}) MarshalXDR() ([]byte, error) {
return nil, nil
}//+n
func (o {{.TypeName}}) MustMarshalXDR() []byte {
return nil
}//+n
func (o {{.TypeName}}) AppendXDR(bs []byte) ([]byte, error) {
return bs, nil
}//+n
func (o {{.TypeName}}) EncodeXDRInto(xw *xdr.Writer) (int, error) {
return xw.Tot(), xw.Error()
}//+n
func (o *{{.TypeName}}) DecodeXDR(r io.Reader) error {
return nil
}//+n
func (o *{{.TypeName}}) UnmarshalXDR(bs []byte) error {
return nil
}//+n
func (o *{{.TypeName}}) DecodeXDRFrom(xr *xdr.Reader) error {
return xr.Error()
}`))
var maxRe = regexp.MustCompile(`(?:\Wmax:)(\d+)(?:\s*,\s*(\d+))?`)
type typeSet struct {
Type string
@@ -198,11 +236,15 @@ func handleStruct(t *ast.StructType) []fieldInfo {
}
fn := sf.Names[0].Name
var max = 0
var max1, max2 int
if sf.Comment != nil {
c := sf.Comment.List[0].Text
if m := maxRe.FindStringSubmatch(c); m != nil {
max, _ = strconv.Atoi(m[1])
m := maxRe.FindStringSubmatch(c)
if len(m) >= 2 {
max1, _ = strconv.Atoi(m[1])
}
if len(m) >= 3 {
max2, _ = strconv.Atoi(m[2])
}
if strings.Contains(c, "noencode") {
continue
@@ -220,14 +262,16 @@ func handleStruct(t *ast.StructType) []fieldInfo {
FieldType: tn,
Encoder: enc.Encoder,
Convert: enc.Type,
Max: max,
Max: max1,
Submax: max2,
}
} else {
f = fieldInfo{
Name: fn,
IsBasic: false,
FieldType: tn,
Max: max,
Max: max1,
Submax: max2,
}
}
@@ -245,7 +289,8 @@ func handleStruct(t *ast.StructType) []fieldInfo {
FieldType: tn,
Encoder: enc.Encoder,
Convert: enc.Type,
Max: max,
Max: max1,
Submax: max2,
}
} else if enc, ok := xdrEncoders[tn]; ok {
f = fieldInfo{
@@ -255,14 +300,16 @@ func handleStruct(t *ast.StructType) []fieldInfo {
FieldType: tn,
Encoder: enc.Encoder,
Convert: enc.Type,
Max: max,
Max: max1,
Submax: max2,
}
} else {
f = fieldInfo{
Name: fn,
IsSlice: true,
FieldType: tn,
Max: max,
Max: max1,
Submax: max2,
}
}
@@ -270,7 +317,8 @@ func handleStruct(t *ast.StructType) []fieldInfo {
f = fieldInfo{
Name: fn,
FieldType: ft.Sel.Name,
Max: max,
Max: max1,
Submax: max2,
}
}
@@ -285,7 +333,14 @@ func generateCode(output io.Writer, s structInfo) {
fs := s.Fields
var buf bytes.Buffer
err := encodeTpl.Execute(&buf, map[string]interface{}{"TypeName": name, "Fields": fs})
var err error
if len(fs) == 0 {
// This is an empty type. We can create a quite simple codec for it.
err = emptyTypeTpl.Execute(&buf, map[string]interface{}{"TypeName": name})
} else {
// Generate with the default template.
err = encodeTpl.Execute(&buf, map[string]interface{}{"TypeName": name, "Fields": fs})
}
if err != nil {
panic(err)
}
@@ -311,6 +366,14 @@ func generateDiagram(output io.Writer, s structInfo) {
fs := s.Fields
fmt.Fprintln(output, sn+" Structure:")
if len(fs) == 0 {
fmt.Fprintln(output, "(contains no fields)")
fmt.Fprintln(output)
fmt.Fprintln(output)
return
}
fmt.Fprintln(output)
fmt.Fprintln(output, " 0 1 2 3")
fmt.Fprintln(output, " 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1")


@@ -1,79 +0,0 @@
// Copyright (C) 2014 Jakob Borg. All rights reserved. Use of this source code
// is governed by an MIT-style license that can be found in the LICENSE file.
package xdr_test
import (
"bytes"
"math/rand"
"reflect"
"testing"
"testing/quick"
"github.com/calmh/xdr"
)
// Contains all supported types
type TestStruct struct {
I int
I8 int8
UI8 uint8
I16 int16
UI16 uint16
I32 int32
UI32 uint32
I64 int64
UI64 uint64
BS []byte // max:1024
S string // max:1024
C Opaque
SS []string // max:1024
}
type Opaque [32]byte
func (u *Opaque) EncodeXDRInto(w *xdr.Writer) (int, error) {
return w.WriteRaw(u[:])
}
func (u *Opaque) DecodeXDRFrom(r *xdr.Reader) (int, error) {
return r.ReadRaw(u[:])
}
func (Opaque) Generate(rand *rand.Rand, size int) reflect.Value {
var u Opaque
for i := range u[:] {
u[i] = byte(rand.Int())
}
return reflect.ValueOf(u)
}
func TestEncDec(t *testing.T) {
fn := func(t0 TestStruct) bool {
bs, err := t0.MarshalXDR()
if err != nil {
t.Fatal(err)
}
var t1 TestStruct
err = t1.UnmarshalXDR(bs)
if err != nil {
t.Fatal(err)
}
// Not comparing with DeepEqual since we'll unmarshal nil slices as empty
if t0.I != t1.I ||
t0.I16 != t1.I16 || t0.UI16 != t1.UI16 ||
t0.I32 != t1.I32 || t0.UI32 != t1.UI32 ||
t0.I64 != t1.I64 || t0.UI64 != t1.UI64 ||
bytes.Compare(t0.BS, t1.BS) != 0 ||
t0.S != t1.S || t0.C != t1.C {
t.Logf("%#v", t0)
t.Logf("%#v", t1)
return false
}
return true
}
if err := quick.Check(fn, nil); err != nil {
t.Error(err)
}
}


@@ -1,185 +0,0 @@
// ************************************************************
// This file is automatically generated by genxdr. Do not edit.
// ************************************************************
package xdr_test
import (
"bytes"
"io"
"github.com/calmh/xdr"
)
/*
TestStruct Structure:
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ int Structure \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ int8 Structure \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ uint8 Structure \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| 0x0000 | I16 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| 0x0000 | UI16 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| I32 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| UI32 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| |
+ I64 (64 bits) +
| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| |
+ UI64 (64 bits) +
| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Length of BS |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ BS (variable length) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Length of S |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ S (variable length) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ Opaque Structure \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Number of SS |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Length of SS |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ SS (variable length) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
struct TestStruct {
int I;
int8 I8;
uint8 UI8;
int I16;
unsigned int UI16;
int I32;
unsigned int UI32;
hyper I64;
unsigned hyper UI64;
opaque BS<1024>;
string S<1024>;
Opaque C;
string SS<1024>;
}
*/
func (o TestStruct) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.EncodeXDRInto(xw)
}
func (o TestStruct) MarshalXDR() ([]byte, error) {
return o.AppendXDR(make([]byte, 0, 128))
}
func (o TestStruct) MustMarshalXDR() []byte {
bs, err := o.MarshalXDR()
if err != nil {
panic(err)
}
return bs
}
func (o TestStruct) AppendXDR(bs []byte) ([]byte, error) {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
_, err := o.EncodeXDRInto(xw)
return []byte(aw), err
}
func (o TestStruct) EncodeXDRInto(xw *xdr.Writer) (int, error) {
xw.WriteUint64(uint64(o.I))
xw.WriteUint8(uint8(o.I8))
xw.WriteUint8(o.UI8)
xw.WriteUint16(uint16(o.I16))
xw.WriteUint16(o.UI16)
xw.WriteUint32(uint32(o.I32))
xw.WriteUint32(o.UI32)
xw.WriteUint64(uint64(o.I64))
xw.WriteUint64(o.UI64)
if l := len(o.BS); l > 1024 {
return xw.Tot(), xdr.ElementSizeExceeded("BS", l, 1024)
}
xw.WriteBytes(o.BS)
if l := len(o.S); l > 1024 {
return xw.Tot(), xdr.ElementSizeExceeded("S", l, 1024)
}
xw.WriteString(o.S)
_, err := o.C.EncodeXDRInto(xw)
if err != nil {
return xw.Tot(), err
}
if l := len(o.SS); l > 1024 {
return xw.Tot(), xdr.ElementSizeExceeded("SS", l, 1024)
}
xw.WriteUint32(uint32(len(o.SS)))
for i := range o.SS {
xw.WriteString(o.SS[i])
}
return xw.Tot(), xw.Error()
}
func (o *TestStruct) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.DecodeXDRFrom(xr)
}
func (o *TestStruct) UnmarshalXDR(bs []byte) error {
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.DecodeXDRFrom(xr)
}
func (o *TestStruct) DecodeXDRFrom(xr *xdr.Reader) error {
o.I = int(xr.ReadUint64())
o.I8 = int8(xr.ReadUint8())
o.UI8 = xr.ReadUint8()
o.I16 = int16(xr.ReadUint16())
o.UI16 = xr.ReadUint16()
o.I32 = int32(xr.ReadUint32())
o.UI32 = xr.ReadUint32()
o.I64 = int64(xr.ReadUint64())
o.UI64 = xr.ReadUint64()
o.BS = xr.ReadBytesMax(1024)
o.S = xr.ReadStringMax(1024)
(&o.C).DecodeXDRFrom(xr)
_SSSize := int(xr.ReadUint32())
if _SSSize < 0 {
return xdr.ElementSizeExceeded("SS", _SSSize, 1024)
}
if _SSSize > 1024 {
return xdr.ElementSizeExceeded("SS", _SSSize, 1024)
}
o.SS = make([]string, _SSSize)
for i := range o.SS {
o.SS[i] = xr.ReadString()
}
return xr.Error()
}
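A minimal round-trip sketch of the generated API above (not part of the vendored file; roundTripTestStruct is a hypothetical helper): EncodeXDR writes to any io.Writer and DecodeXDR reads it back, while MarshalXDR/UnmarshalXDR are the in-memory equivalents exercised by the tests.
func roundTripTestStruct(t0 TestStruct) (TestStruct, error) {
	var buf bytes.Buffer
	// Stream-encode into the buffer; the int return is the byte count written.
	if _, err := t0.EncodeXDR(&buf); err != nil {
		return TestStruct{}, err
	}
	// Stream-decode a fresh value from the same bytes.
	var t1 TestStruct
	err := t1.DecodeXDR(&buf)
	return t1, err
}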

View File

@@ -1,44 +0,0 @@
// Copyright (C) 2014 Jakob Borg. All rights reserved. Use of this source code
// is governed by an MIT-style license that can be found in the LICENSE file.
// +build refl
package xdr_test
import (
"bytes"
"testing"
refl "github.com/davecgh/go-xdr/xdr"
)
func TestCompareMarshals(t *testing.T) {
e0 := s.MustMarshalXDR()
e1, err := refl.Marshal(s)
if err != nil {
t.Fatal(err)
}
if bytes.Compare(e0, e1) != 0 {
t.Fatalf("Encoding mismatch;\n\t%x (this)\n\t%x (refl)", e0, e1)
}
}
func BenchmarkReflMarshal(b *testing.B) {
var err error
for i := 0; i < b.N; i++ {
res, err = refl.Marshal(s)
if err != nil {
b.Fatal(err)
}
}
}
func BenchmarkReflUnmarshal(b *testing.B) {
var t XDRBenchStruct
for i := 0; i < b.N; i++ {
_, err := refl.Unmarshal(e, &t)
if err != nil {
b.Fatal(err)
}
}
}

View File

@@ -1,93 +0,0 @@
// Copyright (C) 2014 Jakob Borg. All rights reserved. Use of this source code
// is governed by an MIT-style license that can be found in the LICENSE file.
package xdr
import (
"bytes"
"strings"
"testing"
"testing/quick"
)
func TestBytesNil(t *testing.T) {
fn := func(bs []byte) bool {
var b = new(bytes.Buffer)
var w = NewWriter(b)
var r = NewReader(b)
w.WriteBytes(bs)
w.WriteBytes(bs)
r.ReadBytes()
res := r.ReadBytes()
return bytes.Compare(bs, res) == 0
}
if err := quick.Check(fn, nil); err != nil {
t.Error(err)
}
}
func TestBytesGiven(t *testing.T) {
fn := func(bs []byte) bool {
var b = new(bytes.Buffer)
var w = NewWriter(b)
var r = NewReader(b)
w.WriteBytes(bs)
w.WriteBytes(bs)
res := make([]byte, 12)
res = r.ReadBytesInto(res)
res = r.ReadBytesInto(res)
return bytes.Compare(bs, res) == 0
}
if err := quick.Check(fn, nil); err != nil {
t.Error(err)
}
}
func TestReadBytesMaxInto(t *testing.T) {
var max = 64
for tot := 32; tot < 128; tot++ {
for diff := -32; diff <= 32; diff++ {
var b = new(bytes.Buffer)
var r = NewReader(b)
var w = NewWriter(b)
var toWrite = make([]byte, tot)
w.WriteBytes(toWrite)
var buf = make([]byte, tot+diff)
var bs = r.ReadBytesMaxInto(max, buf)
if tot <= max {
if read := len(bs); read != tot {
t.Errorf("Incorrect read bytes, wrote=%d, buf=%d, max=%d, read=%d", tot, tot+diff, max, read)
}
} else if !strings.Contains(r.err.Error(), "exceeds size") {
t.Errorf("Unexpected non-ErrElementSizeExceeded error for wrote=%d, max=%d: %v", tot, max, r.err)
}
}
}
}
func TestReadStringMax(t *testing.T) {
for tot := 42; tot < 72; tot++ {
for max := 0; max < 128; max++ {
var b = new(bytes.Buffer)
var r = NewReader(b)
var w = NewWriter(b)
var toWrite = make([]byte, tot)
w.WriteBytes(toWrite)
var str = r.ReadStringMax(max)
var read = len(str)
if max == 0 || tot <= max {
if read != tot {
t.Errorf("Incorrect read bytes, wrote=%d, max=%d, read=%d", tot, max, read)
}
} else if !strings.Contains(r.err.Error(), "exceeds size") {
t.Errorf("Unexpected non-ErrElementSizeExceeded error for wrote=%d, max=%d, read=%d: %v", tot, max, read, r.err)
}
}
}
}

View File

@@ -1,377 +0,0 @@
// Copyright 2011 The Snappy-Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package snappy
import (
"bytes"
"flag"
"fmt"
"io"
"io/ioutil"
"math/rand"
"net/http"
"os"
"path/filepath"
"strings"
"testing"
)
var (
download = flag.Bool("download", false, "If true, download any missing files before running benchmarks")
testdata = flag.String("testdata", "testdata", "Directory containing the test data")
)
func roundtrip(b, ebuf, dbuf []byte) error {
d, err := Decode(dbuf, Encode(ebuf, b))
if err != nil {
return fmt.Errorf("decoding error: %v", err)
}
if !bytes.Equal(b, d) {
return fmt.Errorf("roundtrip mismatch:\n\twant %v\n\tgot %v", b, d)
}
return nil
}
func TestEmpty(t *testing.T) {
if err := roundtrip(nil, nil, nil); err != nil {
t.Fatal(err)
}
}
func TestSmallCopy(t *testing.T) {
for _, ebuf := range [][]byte{nil, make([]byte, 20), make([]byte, 64)} {
for _, dbuf := range [][]byte{nil, make([]byte, 20), make([]byte, 64)} {
for i := 0; i < 32; i++ {
s := "aaaa" + strings.Repeat("b", i) + "aaaabbbb"
if err := roundtrip([]byte(s), ebuf, dbuf); err != nil {
t.Errorf("len(ebuf)=%d, len(dbuf)=%d, i=%d: %v", len(ebuf), len(dbuf), i, err)
}
}
}
}
}
func TestSmallRand(t *testing.T) {
rng := rand.New(rand.NewSource(27354294))
for n := 1; n < 20000; n += 23 {
b := make([]byte, n)
for i := range b {
b[i] = uint8(rng.Uint32())
}
if err := roundtrip(b, nil, nil); err != nil {
t.Fatal(err)
}
}
}
func TestSmallRegular(t *testing.T) {
for n := 1; n < 20000; n += 23 {
b := make([]byte, n)
for i := range b {
b[i] = uint8(i%10 + 'a')
}
if err := roundtrip(b, nil, nil); err != nil {
t.Fatal(err)
}
}
}
func TestInvalidVarint(t *testing.T) {
data := []byte("\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x00")
if _, err := DecodedLen(data); err != ErrCorrupt {
t.Errorf("DecodedLen: got %v, want ErrCorrupt", err)
}
if _, err := Decode(nil, data); err != ErrCorrupt {
t.Errorf("Decode: got %v, want ErrCorrupt", err)
}
// The encoded varint overflows 32 bits
data = []byte("\xff\xff\xff\xff\xff\x00")
if _, err := DecodedLen(data); err != ErrCorrupt {
t.Errorf("DecodedLen: got %v, want ErrCorrupt", err)
}
if _, err := Decode(nil, data); err != ErrCorrupt {
t.Errorf("Decode: got %v, want ErrCorrupt", err)
}
}
func cmp(a, b []byte) error {
if len(a) != len(b) {
return fmt.Errorf("got %d bytes, want %d", len(a), len(b))
}
for i := range a {
if a[i] != b[i] {
return fmt.Errorf("byte #%d: got 0x%02x, want 0x%02x", i, a[i], b[i])
}
}
return nil
}
func TestFramingFormat(t *testing.T) {
// src consists of alternating 1e5-sized sequences of random
// (incompressible) bytes and repeated (compressible) bytes. 1e5 was chosen
// because it is larger than maxUncompressedChunkLen (64k).
src := make([]byte, 1e6)
rng := rand.New(rand.NewSource(1))
for i := 0; i < 10; i++ {
if i%2 == 0 {
for j := 0; j < 1e5; j++ {
src[1e5*i+j] = uint8(rng.Intn(256))
}
} else {
for j := 0; j < 1e5; j++ {
src[1e5*i+j] = uint8(i)
}
}
}
buf := new(bytes.Buffer)
if _, err := NewWriter(buf).Write(src); err != nil {
t.Fatalf("Write: encoding: %v", err)
}
dst, err := ioutil.ReadAll(NewReader(buf))
if err != nil {
t.Fatalf("ReadAll: decoding: %v", err)
}
if err := cmp(dst, src); err != nil {
t.Fatal(err)
}
}
func TestReaderReset(t *testing.T) {
gold := bytes.Repeat([]byte("All that is gold does not glitter,\n"), 10000)
buf := new(bytes.Buffer)
if _, err := NewWriter(buf).Write(gold); err != nil {
t.Fatalf("Write: %v", err)
}
encoded, invalid, partial := buf.String(), "invalid", "partial"
r := NewReader(nil)
for i, s := range []string{encoded, invalid, partial, encoded, partial, invalid, encoded, encoded} {
if s == partial {
r.Reset(strings.NewReader(encoded))
if _, err := r.Read(make([]byte, 101)); err != nil {
t.Errorf("#%d: %v", i, err)
continue
}
continue
}
r.Reset(strings.NewReader(s))
got, err := ioutil.ReadAll(r)
switch s {
case encoded:
if err != nil {
t.Errorf("#%d: %v", i, err)
continue
}
if err := cmp(got, gold); err != nil {
t.Errorf("#%d: %v", i, err)
continue
}
case invalid:
if err == nil {
t.Errorf("#%d: got nil error, want non-nil", i)
continue
}
}
}
}
func TestWriterReset(t *testing.T) {
gold := bytes.Repeat([]byte("Not all those who wander are lost;\n"), 10000)
var gots, wants [][]byte
const n = 20
w, failed := NewWriter(nil), false
for i := 0; i <= n; i++ {
buf := new(bytes.Buffer)
w.Reset(buf)
want := gold[:len(gold)*i/n]
if _, err := w.Write(want); err != nil {
t.Errorf("#%d: Write: %v", i, err)
failed = true
continue
}
got, err := ioutil.ReadAll(NewReader(buf))
if err != nil {
t.Errorf("#%d: ReadAll: %v", i, err)
failed = true
continue
}
gots = append(gots, got)
wants = append(wants, want)
}
if failed {
return
}
for i := range gots {
if err := cmp(gots[i], wants[i]); err != nil {
t.Errorf("#%d: %v", i, err)
}
}
}
func benchDecode(b *testing.B, src []byte) {
encoded := Encode(nil, src)
// Bandwidth is in amount of uncompressed data.
b.SetBytes(int64(len(src)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
Decode(src, encoded)
}
}
func benchEncode(b *testing.B, src []byte) {
// Bandwidth is in amount of uncompressed data.
b.SetBytes(int64(len(src)))
dst := make([]byte, MaxEncodedLen(len(src)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
Encode(dst, src)
}
}
func readFile(b testing.TB, filename string) []byte {
src, err := ioutil.ReadFile(filename)
if err != nil {
b.Skipf("skipping benchmark: %v", err)
}
if len(src) == 0 {
b.Fatalf("%s has zero length", filename)
}
return src
}
// expand returns a slice of length n containing repeated copies of src.
func expand(src []byte, n int) []byte {
dst := make([]byte, n)
for x := dst; len(x) > 0; {
i := copy(x, src)
x = x[i:]
}
return dst
}
func benchWords(b *testing.B, n int, decode bool) {
// Note: the file is OS-language dependent so the resulting values are not
// directly comparable for non-US-English OS installations.
data := expand(readFile(b, "/usr/share/dict/words"), n)
if decode {
benchDecode(b, data)
} else {
benchEncode(b, data)
}
}
func BenchmarkWordsDecode1e3(b *testing.B) { benchWords(b, 1e3, true) }
func BenchmarkWordsDecode1e4(b *testing.B) { benchWords(b, 1e4, true) }
func BenchmarkWordsDecode1e5(b *testing.B) { benchWords(b, 1e5, true) }
func BenchmarkWordsDecode1e6(b *testing.B) { benchWords(b, 1e6, true) }
func BenchmarkWordsEncode1e3(b *testing.B) { benchWords(b, 1e3, false) }
func BenchmarkWordsEncode1e4(b *testing.B) { benchWords(b, 1e4, false) }
func BenchmarkWordsEncode1e5(b *testing.B) { benchWords(b, 1e5, false) }
func BenchmarkWordsEncode1e6(b *testing.B) { benchWords(b, 1e6, false) }
// testFiles' values are copied directly from
// https://raw.githubusercontent.com/google/snappy/master/snappy_unittest.cc
// The label field is unused in snappy-go.
var testFiles = []struct {
label string
filename string
}{
{"html", "html"},
{"urls", "urls.10K"},
{"jpg", "fireworks.jpeg"},
{"jpg_200", "fireworks.jpeg"},
{"pdf", "paper-100k.pdf"},
{"html4", "html_x_4"},
{"txt1", "alice29.txt"},
{"txt2", "asyoulik.txt"},
{"txt3", "lcet10.txt"},
{"txt4", "plrabn12.txt"},
{"pb", "geo.protodata"},
{"gaviota", "kppkn.gtb"},
}
// The test data files are present at this canonical URL.
const baseURL = "https://raw.githubusercontent.com/google/snappy/master/testdata/"
func downloadTestdata(b *testing.B, basename string) (errRet error) {
filename := filepath.Join(*testdata, basename)
if stat, err := os.Stat(filename); err == nil && stat.Size() != 0 {
return nil
}
if !*download {
b.Skipf("test data not found; skipping benchmark without the -download flag")
}
// Download the official snappy C++ implementation reference test data
// files for benchmarking.
if err := os.Mkdir(*testdata, 0777); err != nil && !os.IsExist(err) {
return fmt.Errorf("failed to create testdata: %s", err)
}
f, err := os.Create(filename)
if err != nil {
return fmt.Errorf("failed to create %s: %s", filename, err)
}
defer f.Close()
defer func() {
if errRet != nil {
os.Remove(filename)
}
}()
url := baseURL + basename
resp, err := http.Get(url)
if err != nil {
return fmt.Errorf("failed to download %s: %s", url, err)
}
defer resp.Body.Close()
if s := resp.StatusCode; s != http.StatusOK {
return fmt.Errorf("downloading %s: HTTP status code %d (%s)", url, s, http.StatusText(s))
}
_, err = io.Copy(f, resp.Body)
if err != nil {
return fmt.Errorf("failed to download %s to %s: %s", url, filename, err)
}
return nil
}
func benchFile(b *testing.B, n int, decode bool) {
if err := downloadTestdata(b, testFiles[n].filename); err != nil {
b.Fatalf("failed to download testdata: %s", err)
}
data := readFile(b, filepath.Join(*testdata, testFiles[n].filename))
if decode {
benchDecode(b, data)
} else {
benchEncode(b, data)
}
}
// Naming convention is kept similar to what snappy's C++ implementation uses.
func Benchmark_UFlat0(b *testing.B) { benchFile(b, 0, true) }
func Benchmark_UFlat1(b *testing.B) { benchFile(b, 1, true) }
func Benchmark_UFlat2(b *testing.B) { benchFile(b, 2, true) }
func Benchmark_UFlat3(b *testing.B) { benchFile(b, 3, true) }
func Benchmark_UFlat4(b *testing.B) { benchFile(b, 4, true) }
func Benchmark_UFlat5(b *testing.B) { benchFile(b, 5, true) }
func Benchmark_UFlat6(b *testing.B) { benchFile(b, 6, true) }
func Benchmark_UFlat7(b *testing.B) { benchFile(b, 7, true) }
func Benchmark_UFlat8(b *testing.B) { benchFile(b, 8, true) }
func Benchmark_UFlat9(b *testing.B) { benchFile(b, 9, true) }
func Benchmark_UFlat10(b *testing.B) { benchFile(b, 10, true) }
func Benchmark_UFlat11(b *testing.B) { benchFile(b, 11, true) }
func Benchmark_ZFlat0(b *testing.B) { benchFile(b, 0, false) }
func Benchmark_ZFlat1(b *testing.B) { benchFile(b, 1, false) }
func Benchmark_ZFlat2(b *testing.B) { benchFile(b, 2, false) }
func Benchmark_ZFlat3(b *testing.B) { benchFile(b, 3, false) }
func Benchmark_ZFlat4(b *testing.B) { benchFile(b, 4, false) }
func Benchmark_ZFlat5(b *testing.B) { benchFile(b, 5, false) }
func Benchmark_ZFlat6(b *testing.B) { benchFile(b, 6, false) }
func Benchmark_ZFlat7(b *testing.B) { benchFile(b, 7, false) }
func Benchmark_ZFlat8(b *testing.B) { benchFile(b, 8, false) }
func Benchmark_ZFlat9(b *testing.B) { benchFile(b, 9, false) }
func Benchmark_ZFlat10(b *testing.B) { benchFile(b, 10, false) }
func Benchmark_ZFlat11(b *testing.B) { benchFile(b, 11, false) }
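A minimal sketch of the block API benchmarked above (not part of the vendored file; snappyRoundTrip is a hypothetical helper): Encode appends to its dst argument (nil allocates a fresh buffer) and Decode returns the original bytes or ErrCorrupt. The framed NewWriter/NewReader pair shown in TestFramingFormat wraps the same codec for streams.
func snappyRoundTrip(src []byte) ([]byte, error) {
	compressed := Encode(nil, src) // block format, self-describing length
	return Decode(nil, compressed) // recovers src, or fails with ErrCorrupt
}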

View File

@@ -1,328 +0,0 @@
// Copyright 2014 Canonical Ltd.
// Licensed under the LGPLv3 with static-linking exception.
// See LICENCE file for details.
package ratelimit
import (
gc "launchpad.net/gocheck"
"testing"
"time"
)
func TestPackage(t *testing.T) {
gc.TestingT(t)
}
type rateLimitSuite struct{}
var _ = gc.Suite(rateLimitSuite{})
type takeReq struct {
time time.Duration
count int64
expectWait time.Duration
}
var takeTests = []struct {
about string
fillInterval time.Duration
capacity int64
reqs []takeReq
}{{
about: "serial requests",
fillInterval: 250 * time.Millisecond,
capacity: 10,
reqs: []takeReq{{
time: 0,
count: 0,
expectWait: 0,
}, {
time: 0,
count: 10,
expectWait: 0,
}, {
time: 0,
count: 1,
expectWait: 250 * time.Millisecond,
}, {
time: 250 * time.Millisecond,
count: 1,
expectWait: 250 * time.Millisecond,
}},
}, {
about: "concurrent requests",
fillInterval: 250 * time.Millisecond,
capacity: 10,
reqs: []takeReq{{
time: 0,
count: 10,
expectWait: 0,
}, {
time: 0,
count: 2,
expectWait: 500 * time.Millisecond,
}, {
time: 0,
count: 2,
expectWait: 1000 * time.Millisecond,
}, {
time: 0,
count: 1,
expectWait: 1250 * time.Millisecond,
}},
}, {
about: "more than capacity",
fillInterval: 1 * time.Millisecond,
capacity: 10,
reqs: []takeReq{{
time: 0,
count: 10,
expectWait: 0,
}, {
time: 20 * time.Millisecond,
count: 15,
expectWait: 5 * time.Millisecond,
}},
}, {
about: "sub-quantum time",
fillInterval: 10 * time.Millisecond,
capacity: 10,
reqs: []takeReq{{
time: 0,
count: 10,
expectWait: 0,
}, {
time: 7 * time.Millisecond,
count: 1,
expectWait: 3 * time.Millisecond,
}, {
time: 8 * time.Millisecond,
count: 1,
expectWait: 12 * time.Millisecond,
}},
}, {
about: "within capacity",
fillInterval: 10 * time.Millisecond,
capacity: 5,
reqs: []takeReq{{
time: 0,
count: 5,
expectWait: 0,
}, {
time: 60 * time.Millisecond,
count: 5,
expectWait: 0,
}, {
time: 60 * time.Millisecond,
count: 1,
expectWait: 10 * time.Millisecond,
}, {
time: 80 * time.Millisecond,
count: 2,
expectWait: 10 * time.Millisecond,
}},
}}
func (rateLimitSuite) TestTake(c *gc.C) {
for i, test := range takeTests {
tb := NewBucket(test.fillInterval, test.capacity)
for j, req := range test.reqs {
d, ok := tb.take(tb.startTime.Add(req.time), req.count, infinityDuration)
c.Assert(ok, gc.Equals, true)
if d != req.expectWait {
c.Fatalf("test %d.%d, %s, got %v want %v", i, j, test.about, d, req.expectWait)
}
}
}
}
func (rateLimitSuite) TestTakeMaxDuration(c *gc.C) {
for i, test := range takeTests {
tb := NewBucket(test.fillInterval, test.capacity)
for j, req := range test.reqs {
if req.expectWait > 0 {
d, ok := tb.take(tb.startTime.Add(req.time), req.count, req.expectWait-1)
c.Assert(ok, gc.Equals, false)
c.Assert(d, gc.Equals, time.Duration(0))
}
d, ok := tb.take(tb.startTime.Add(req.time), req.count, req.expectWait)
c.Assert(ok, gc.Equals, true)
if d != req.expectWait {
c.Fatalf("test %d.%d, %s, got %v want %v", i, j, test.about, d, req.expectWait)
}
}
}
}
type takeAvailableReq struct {
time time.Duration
count int64
expect int64
}
var takeAvailableTests = []struct {
about string
fillInterval time.Duration
capacity int64
reqs []takeAvailableReq
}{{
about: "serial requests",
fillInterval: 250 * time.Millisecond,
capacity: 10,
reqs: []takeAvailableReq{{
time: 0,
count: 0,
expect: 0,
}, {
time: 0,
count: 10,
expect: 10,
}, {
time: 0,
count: 1,
expect: 0,
}, {
time: 250 * time.Millisecond,
count: 1,
expect: 1,
}},
}, {
about: "concurrent requests",
fillInterval: 250 * time.Millisecond,
capacity: 10,
reqs: []takeAvailableReq{{
time: 0,
count: 5,
expect: 5,
}, {
time: 0,
count: 2,
expect: 2,
}, {
time: 0,
count: 5,
expect: 3,
}, {
time: 0,
count: 1,
expect: 0,
}},
}, {
about: "more than capacity",
fillInterval: 1 * time.Millisecond,
capacity: 10,
reqs: []takeAvailableReq{{
time: 0,
count: 10,
expect: 10,
}, {
time: 20 * time.Millisecond,
count: 15,
expect: 10,
}},
}, {
about: "within capacity",
fillInterval: 10 * time.Millisecond,
capacity: 5,
reqs: []takeAvailableReq{{
time: 0,
count: 5,
expect: 5,
}, {
time: 60 * time.Millisecond,
count: 5,
expect: 5,
}, {
time: 70 * time.Millisecond,
count: 1,
expect: 1,
}},
}}
func (rateLimitSuite) TestTakeAvailable(c *gc.C) {
for i, test := range takeAvailableTests {
tb := NewBucket(test.fillInterval, test.capacity)
for j, req := range test.reqs {
d := tb.takeAvailable(tb.startTime.Add(req.time), req.count)
if d != req.expect {
c.Fatalf("test %d.%d, %s, got %v want %v", i, j, test.about, d, req.expect)
}
}
}
}
func (rateLimitSuite) TestPanics(c *gc.C) {
c.Assert(func() { NewBucket(0, 1) }, gc.PanicMatches, "token bucket fill interval is not > 0")
c.Assert(func() { NewBucket(-2, 1) }, gc.PanicMatches, "token bucket fill interval is not > 0")
c.Assert(func() { NewBucket(1, 0) }, gc.PanicMatches, "token bucket capacity is not > 0")
c.Assert(func() { NewBucket(1, -2) }, gc.PanicMatches, "token bucket capacity is not > 0")
}
func isCloseTo(x, y, tolerance float64) bool {
return abs(x-y)/y < tolerance
}
func (rateLimitSuite) TestRate(c *gc.C) {
tb := NewBucket(1, 1)
if !isCloseTo(tb.Rate(), 1e9, 0.00001) {
c.Fatalf("got %v want 1e9", tb.Rate())
}
tb = NewBucket(2*time.Second, 1)
if !isCloseTo(tb.Rate(), 0.5, 0.00001) {
c.Fatalf("got %v want 0.5", tb.Rate())
}
tb = NewBucketWithQuantum(100*time.Millisecond, 1, 5)
if !isCloseTo(tb.Rate(), 50, 0.00001) {
c.Fatalf("got %v want 50", tb.Rate())
}
}
func checkRate(c *gc.C, rate float64) {
tb := NewBucketWithRate(rate, 1<<62)
if !isCloseTo(tb.Rate(), rate, rateMargin) {
c.Fatalf("got %g want %v", tb.Rate(), rate)
}
d, ok := tb.take(tb.startTime, 1<<62, infinityDuration)
c.Assert(ok, gc.Equals, true)
c.Assert(d, gc.Equals, time.Duration(0))
// Check that the actual rate is as expected by
// asking for a not-quite multiple of the bucket's
// quantum and checking that the wait time
// is correct.
d, ok = tb.take(tb.startTime, tb.quantum*2-tb.quantum/2, infinityDuration)
c.Assert(ok, gc.Equals, true)
expectTime := 1e9 * float64(tb.quantum) * 2 / rate
if !isCloseTo(float64(d), expectTime, rateMargin) {
c.Fatalf("rate %g: got %g want %v", rate, float64(d), expectTime)
}
}
func (rateLimitSuite) TestNewWithRate(c *gc.C) {
for rate := float64(1); rate < 1e6; rate += 7 {
checkRate(c, rate)
}
for _, rate := range []float64{
1024 * 1024 * 1024,
1e-5,
0.9e-5,
0.5,
0.9,
0.9e8,
3e12,
4e18,
} {
checkRate(c, rate)
checkRate(c, rate/3)
checkRate(c, rate*1.3)
}
}
func BenchmarkWait(b *testing.B) {
tb := NewBucket(1, 16*1024)
for i := b.N - 1; i >= 0; i-- {
tb.Wait(1)
}
}
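A minimal usage sketch of the exported API these tests exercise (not part of the vendored file; doWork is a hypothetical placeholder): a bucket refills one token per fillInterval up to capacity, Wait blocks until enough tokens are available, and TakeAvailable never blocks.
func rateLimitedLoop() {
	tb := NewBucket(100*time.Millisecond, 10) // ~10 tokens/s, burst of 10
	for i := 0; i < 100; i++ {
		tb.Wait(1) // blocks until one token is available
		doWork()   // hypothetical rate-limited unit of work
	}
}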

View File

@@ -2,7 +2,7 @@
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build linux netbsd openbsd solaris dragonfly
// +build linux netbsd solaris dragonfly
package osext
@@ -27,7 +27,7 @@ func executable() (string, error) {
return execpath, nil
case "netbsd":
return os.Readlink("/proc/curproc/exe")
case "openbsd", "dragonfly":
case "dragonfly":
return os.Readlink("/proc/curproc/file")
case "solaris":
return os.Readlink(fmt.Sprintf("/proc/%d/path/a.out", os.Getpid()))

View File

@@ -2,12 +2,13 @@
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build darwin freebsd
// +build darwin freebsd openbsd
package osext
import (
"os"
"os/exec"
"path/filepath"
"runtime"
"syscall"
@@ -23,6 +24,8 @@ func executable() (string, error) {
mib = [4]int32{1 /* CTL_KERN */, 14 /* KERN_PROC */, 12 /* KERN_PROC_PATHNAME */, -1}
case "darwin":
mib = [4]int32{1 /* CTL_KERN */, 38 /* KERN_PROCARGS */, int32(os.Getpid()), -1}
case "openbsd":
mib = [4]int32{1 /* CTL_KERN */, 55 /* KERN_PROC_ARGS */, int32(os.Getpid()), 1 /* KERN_PROC_ARGV */}
}
n := uintptr(0)
@@ -42,14 +45,58 @@ func executable() (string, error) {
if n == 0 { // This shouldn't happen.
return "", nil
}
for i, v := range buf {
if v == 0 {
buf = buf[:i]
break
var execPath string
switch runtime.GOOS {
case "openbsd":
// buf now contains **argv, with pointers to each of the C-style
// NULL terminated arguments.
var args []string
argv := uintptr(unsafe.Pointer(&buf[0]))
Loop:
for {
argp := *(**[1<<20]byte)(unsafe.Pointer(argv))
if argp == nil {
break
}
for i := 0; uintptr(i) < n; i++ {
// we don't want the full arguments list
if string(argp[i]) == " " {
break Loop
}
if argp[i] != 0 {
continue
}
args = append(args, string(argp[:i]))
n -= uintptr(i)
break
}
if n < unsafe.Sizeof(argv) {
break
}
argv += unsafe.Sizeof(argv)
n -= unsafe.Sizeof(argv)
}
execPath = args[0]
// There is no canonical way to get an executable path on
// OpenBSD, so check PATH in case we are called directly
if execPath[0] != '/' && execPath[0] != '.' {
execIsInPath, err := exec.LookPath(execPath)
if err == nil {
execPath = execIsInPath
}
}
default:
for i, v := range buf {
if v == 0 {
buf = buf[:i]
break
}
}
execPath = string(buf)
}
var err error
execPath := string(buf)
// execPath will not be empty due to above checks.
// Try to get the absolute path if the execPath is not rooted.
if execPath[0] != '/' {

View File

@@ -1,203 +0,0 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build darwin linux freebsd netbsd windows
package osext
import (
"bytes"
"fmt"
"io"
"os"
"os/exec"
"path/filepath"
"runtime"
"testing"
)
const (
executableEnvVar = "OSTEST_OUTPUT_EXECUTABLE"
executableEnvValueMatch = "match"
executableEnvValueDelete = "delete"
)
func TestPrintExecutable(t *testing.T) {
ef, err := Executable()
if err != nil {
t.Fatalf("Executable failed: %v", err)
}
t.Log("Executable:", ef)
}
func TestPrintExecutableFolder(t *testing.T) {
ef, err := ExecutableFolder()
if err != nil {
t.Fatalf("ExecutableFolder failed: %v", err)
}
t.Log("Executable Folder:", ef)
}
func TestExecutableFolder(t *testing.T) {
ef, err := ExecutableFolder()
if err != nil {
t.Fatalf("ExecutableFolder failed: %v", err)
}
if ef[len(ef)-1] == filepath.Separator {
t.Fatal("ExecutableFolder ends with a trailing slash.")
}
}
func TestExecutableMatch(t *testing.T) {
ep, err := Executable()
if err != nil {
t.Fatalf("Executable failed: %v", err)
}
// We want fullpath to be of the form "dir/prog".
dir := filepath.Dir(filepath.Dir(ep))
fullpath, err := filepath.Rel(dir, ep)
if err != nil {
t.Fatalf("filepath.Rel: %v", err)
}
// Make child start with a relative program path.
// Alter argv[0] for child to verify getting real path without argv[0].
cmd := &exec.Cmd{
Dir: dir,
Path: fullpath,
Env: []string{fmt.Sprintf("%s=%s", executableEnvVar, executableEnvValueMatch)},
}
out, err := cmd.CombinedOutput()
if err != nil {
t.Fatalf("exec(self) failed: %v", err)
}
outs := string(out)
if !filepath.IsAbs(outs) {
t.Fatalf("Child returned %q, want an absolute path", out)
}
if !sameFile(outs, ep) {
t.Fatalf("Child returned %q, not the same file as %q", out, ep)
}
}
func TestExecutableDelete(t *testing.T) {
if runtime.GOOS != "linux" {
t.Skip()
}
fpath, err := Executable()
if err != nil {
t.Fatalf("Executable failed: %v", err)
}
r, w := io.Pipe()
stderrBuff := &bytes.Buffer{}
stdoutBuff := &bytes.Buffer{}
cmd := &exec.Cmd{
Path: fpath,
Env: []string{fmt.Sprintf("%s=%s", executableEnvVar, executableEnvValueDelete)},
Stdin: r,
Stderr: stderrBuff,
Stdout: stdoutBuff,
}
err = cmd.Start()
if err != nil {
t.Fatalf("exec(self) start failed: %v", err)
}
tempPath := fpath + "_copy"
_ = os.Remove(tempPath)
err = copyFile(tempPath, fpath)
if err != nil {
t.Fatalf("copy file failed: %v", err)
}
err = os.Remove(fpath)
if err != nil {
t.Fatalf("remove running test file failed: %v", err)
}
err = os.Rename(tempPath, fpath)
if err != nil {
t.Fatalf("rename copy to previous name failed: %v", err)
}
w.Write([]byte{0})
w.Close()
err = cmd.Wait()
if err != nil {
t.Fatalf("exec wait failed: %v", err)
}
childPath := stderrBuff.String()
if !filepath.IsAbs(childPath) {
t.Fatalf("Child returned %q, want an absolute path", childPath)
}
if !sameFile(childPath, fpath) {
t.Fatalf("Child returned %q, not the same file as %q", childPath, fpath)
}
}
func sameFile(fn1, fn2 string) bool {
fi1, err := os.Stat(fn1)
if err != nil {
return false
}
fi2, err := os.Stat(fn2)
if err != nil {
return false
}
return os.SameFile(fi1, fi2)
}
func copyFile(dest, src string) error {
df, err := os.Create(dest)
if err != nil {
return err
}
defer df.Close()
sf, err := os.Open(src)
if err != nil {
return err
}
defer sf.Close()
_, err = io.Copy(df, sf)
return err
}
func TestMain(m *testing.M) {
env := os.Getenv(executableEnvVar)
switch env {
case "":
os.Exit(m.Run())
case executableEnvValueMatch:
// First chdir to another path.
dir := "/"
if runtime.GOOS == "windows" {
dir = filepath.VolumeName(".")
}
os.Chdir(dir)
if ep, err := Executable(); err != nil {
fmt.Fprint(os.Stderr, "ERROR: ", err)
} else {
fmt.Fprint(os.Stderr, ep)
}
case executableEnvValueDelete:
bb := make([]byte, 1)
var err error
n, err := os.Stdin.Read(bb)
if err != nil {
fmt.Fprint(os.Stderr, "ERROR: ", err)
os.Exit(2)
}
if n != 1 {
fmt.Fprint(os.Stderr, "ERROR: n != 1, n == ", n)
os.Exit(2)
}
if ep, err := Executable(); err != nil {
fmt.Fprint(os.Stderr, "ERROR: ", err)
} else {
fmt.Fprint(os.Stderr, ep)
}
}
os.Exit(0)
}
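A minimal usage sketch of the API under test (not part of the vendored file; printSelf is a hypothetical helper): Executable resolves an absolute path to the running binary even with a relative argv[0] or, on Linux, after the file has been deleted, as the tests above verify.
func printSelf() {
	ep, err := Executable()
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot locate executable:", err)
		return
	}
	dir, _ := ExecutableFolder() // directory containing the binary
	fmt.Println("binary:", ep, "in", dir)
}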

View File

@@ -0,0 +1,9 @@
*.[68]
*.a
*.out
*.swp
_obj
_testmain.go
cmd/metrics-bench/metrics-bench
cmd/metrics-example/metrics-example
cmd/never-read/never-read

View File

@@ -0,0 +1,13 @@
language: go
go:
- 1.2
- 1.3
- 1.4
script:
- ./validate.sh
# this should give us faster builds according to
# http://docs.travis-ci.com/user/migrating-from-legacy/
sudo: false

View File

@@ -0,0 +1,29 @@
Copyright 2012 Richard Crowley. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials provided
with the distribution.
THIS SOFTWARE IS PROVIDED BY RICHARD CROWLEY ``AS IS'' AND ANY EXPRESS
OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL RICHARD CROWLEY OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
THE POSSIBILITY OF SUCH DAMAGE.
The views and conclusions contained in the software and documentation
are those of the authors and should not be interpreted as representing
official policies, either expressed or implied, of Richard Crowley.

View File

@@ -0,0 +1,126 @@
go-metrics
==========
![travis build status](https://travis-ci.org/rcrowley/go-metrics.svg?branch=master)
Go port of Coda Hale's Metrics library: <https://github.com/dropwizard/metrics>.
Documentation: <http://godoc.org/github.com/rcrowley/go-metrics>.
Usage
-----
Create and update metrics:
```go
c := metrics.NewCounter()
metrics.Register("foo", c)
c.Inc(47)
g := metrics.NewGauge()
metrics.Register("bar", g)
g.Update(47)
s := metrics.NewExpDecaySample(1028, 0.015) // or metrics.NewUniformSample(1028)
h := metrics.NewHistogram(s)
metrics.Register("baz", h)
h.Update(47)
m := metrics.NewMeter()
metrics.Register("quux", m)
m.Mark(47)
t := metrics.NewTimer()
metrics.Register("bang", t)
t.Time(func() {})
t.Update(47)
```
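Registered metrics can be read back at any point; a minimal sketch, reusing the `"foo"` counter registered above (passing `nil` selects `metrics.DefaultRegistry`):
```go
// Fetch (or lazily create) the counter and read its current value.
c := metrics.GetOrRegisterCounter("foo", nil)
fmt.Println("foo so far:", c.Count())
```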
Periodically log every metric in human-readable form to standard error:
```go
go metrics.Log(metrics.DefaultRegistry, 60e9, log.New(os.Stderr, "metrics: ", log.Lmicroseconds))
```
Periodically log every metric in slightly-more-parseable form to syslog:
```go
w, _ := syslog.Dial("unixgram", "/dev/log", syslog.LOG_INFO, "metrics")
go metrics.Syslog(metrics.DefaultRegistry, 60e9, w)
```
Periodically emit every metric to Graphite using the [Graphite client](https://github.com/cyberdelia/go-metrics-graphite):
```go
import "github.com/cyberdelia/go-metrics-graphite"
addr, _ := net.ResolveTCPAddr("tcp", "127.0.0.1:2003")
go graphite.Graphite(metrics.DefaultRegistry, 10e9, "metrics", addr)
```
Periodically emit every metric into InfluxDB:
**NOTE:** this has been pulled out of the library due to constant fluctuations
in the InfluxDB API. In fact, all client libraries are on their way out; see
issues [#121](https://github.com/rcrowley/go-metrics/issues/121) and
[#124](https://github.com/rcrowley/go-metrics/issues/124) for progress and details.
```go
import "github.com/rcrowley/go-metrics/influxdb"
go influxdb.Influxdb(metrics.DefaultRegistry, 10e9, &influxdb.Config{
Host: "127.0.0.1:8086",
Database: "metrics",
Username: "test",
Password: "test",
})
```
Periodically upload every metric to Librato using the [Librato client](https://github.com/mihasya/go-metrics-librato):
**Note**: the client included with this repository under the `librato` package
has been deprecated and moved to the repository linked above.
```go
import "github.com/mihasya/go-metrics-librato"
go librato.Librato(metrics.DefaultRegistry,
10e9, // interval
"example@example.com", // account owner email address
"token", // Librato API token
"hostname", // source
[]float64{0.95}, // percentiles to send
time.Millisecond, // time unit
)
```
Periodically emit every metric to StatHat:
```go
import "github.com/rcrowley/go-metrics/stathat"
go stathat.Stathat(metrics.DefaultRegistry, 10e9, "example@example.com")
```
Installation
------------
```sh
go get github.com/rcrowley/go-metrics
```
StatHat support additionally requires their Go client:
```sh
go get github.com/stathat/go
```
Publishing Metrics
------------------
Clients are available for the following destinations:
* Librato - [https://github.com/mihasya/go-metrics-librato](https://github.com/mihasya/go-metrics-librato)
* Graphite - [https://github.com/cyberdelia/go-metrics-graphite](https://github.com/cyberdelia/go-metrics-graphite)
* InfluxDB - [https://github.com/vrischmann/go-metrics-influxdb](https://github.com/vrischmann/go-metrics-influxdb)

View File

@@ -0,0 +1,20 @@
package main
import (
"fmt"
"github.com/rcrowley/go-metrics"
"time"
)
func main() {
r := metrics.NewRegistry()
for i := 0; i < 10000; i++ {
r.Register(fmt.Sprintf("counter-%d", i), metrics.NewCounter())
r.Register(fmt.Sprintf("gauge-%d", i), metrics.NewGauge())
r.Register(fmt.Sprintf("gaugefloat64-%d", i), metrics.NewGaugeFloat64())
r.Register(fmt.Sprintf("histogram-uniform-%d", i), metrics.NewHistogram(metrics.NewUniformSample(1028)))
r.Register(fmt.Sprintf("histogram-exp-%d", i), metrics.NewHistogram(metrics.NewExpDecaySample(1028, 0.015)))
r.Register(fmt.Sprintf("meter-%d", i), metrics.NewMeter())
}
time.Sleep(600e9)
}

View File

@@ -0,0 +1,154 @@
package main
import (
"errors"
"github.com/rcrowley/go-metrics"
// "github.com/rcrowley/go-metrics/stathat"
"log"
"math/rand"
"os"
// "syslog"
"time"
)
const fanout = 10
func main() {
r := metrics.NewRegistry()
c := metrics.NewCounter()
r.Register("foo", c)
for i := 0; i < fanout; i++ {
go func() {
for {
c.Dec(19)
time.Sleep(300e6)
}
}()
go func() {
for {
c.Inc(47)
time.Sleep(400e6)
}
}()
}
g := metrics.NewGauge()
r.Register("bar", g)
for i := 0; i < fanout; i++ {
go func() {
for {
g.Update(19)
time.Sleep(300e6)
}
}()
go func() {
for {
g.Update(47)
time.Sleep(400e6)
}
}()
}
gf := metrics.NewGaugeFloat64()
r.Register("barfloat64", gf)
for i := 0; i < fanout; i++ {
go func() {
for {
gf.Update(19.0)
time.Sleep(300e6)
}
}()
go func() {
for {
gf.Update(47.0)
time.Sleep(400e6)
}
}()
}
hc := metrics.NewHealthcheck(func(h metrics.Healthcheck) {
if 0 < rand.Intn(2) {
h.Healthy()
} else {
h.Unhealthy(errors.New("baz"))
}
})
r.Register("baz", hc)
s := metrics.NewExpDecaySample(1028, 0.015)
//s := metrics.NewUniformSample(1028)
h := metrics.NewHistogram(s)
r.Register("bang", h)
for i := 0; i < fanout; i++ {
go func() {
for {
h.Update(19)
time.Sleep(300e6)
}
}()
go func() {
for {
h.Update(47)
time.Sleep(400e6)
}
}()
}
m := metrics.NewMeter()
r.Register("quux", m)
for i := 0; i < fanout; i++ {
go func() {
for {
m.Mark(19)
time.Sleep(300e6)
}
}()
go func() {
for {
m.Mark(47)
time.Sleep(400e6)
}
}()
}
t := metrics.NewTimer()
r.Register("hooah", t)
for i := 0; i < fanout; i++ {
go func() {
for {
t.Time(func() { time.Sleep(300e6) })
}
}()
go func() {
for {
t.Time(func() { time.Sleep(400e6) })
}
}()
}
metrics.RegisterDebugGCStats(r)
go metrics.CaptureDebugGCStats(r, 5e9)
metrics.RegisterRuntimeMemStats(r)
go metrics.CaptureRuntimeMemStats(r, 5e9)
metrics.Log(r, 60e9, log.New(os.Stderr, "metrics: ", log.Lmicroseconds))
/*
w, err := syslog.Dial("unixgram", "/dev/log", syslog.LOG_INFO, "metrics")
if nil != err { log.Fatalln(err) }
metrics.Syslog(r, 60e9, w)
*/
/*
addr, _ := net.ResolveTCPAddr("tcp", "127.0.0.1:2003")
metrics.Graphite(r, 10e9, "metrics", addr)
*/
/*
stathat.Stathat(r, 10e9, "example@example.com")
*/
}

View File

@@ -0,0 +1,22 @@
package main
import (
"log"
"net"
)
func main() {
addr, _ := net.ResolveTCPAddr("tcp", "127.0.0.1:2003")
l, err := net.ListenTCP("tcp", addr)
if nil != err {
log.Fatalln(err)
}
log.Println("listening", l.Addr())
for {
c, err := l.AcceptTCP()
if nil != err {
log.Fatalln(err)
}
log.Println("accepted", c.RemoteAddr())
}
}

View File

@@ -0,0 +1,112 @@
package metrics
import "sync/atomic"
// Counters hold an int64 value that can be incremented and decremented.
type Counter interface {
Clear()
Count() int64
Dec(int64)
Inc(int64)
Snapshot() Counter
}
// GetOrRegisterCounter returns an existing Counter or constructs and registers
// a new StandardCounter.
func GetOrRegisterCounter(name string, r Registry) Counter {
if nil == r {
r = DefaultRegistry
}
return r.GetOrRegister(name, NewCounter).(Counter)
}
// NewCounter constructs a new StandardCounter.
func NewCounter() Counter {
if UseNilMetrics {
return NilCounter{}
}
return &StandardCounter{0}
}
// NewRegisteredCounter constructs and registers a new StandardCounter.
func NewRegisteredCounter(name string, r Registry) Counter {
c := NewCounter()
if nil == r {
r = DefaultRegistry
}
r.Register(name, c)
return c
}
// CounterSnapshot is a read-only copy of another Counter.
type CounterSnapshot int64
// Clear panics.
func (CounterSnapshot) Clear() {
panic("Clear called on a CounterSnapshot")
}
// Count returns the count at the time the snapshot was taken.
func (c CounterSnapshot) Count() int64 { return int64(c) }
// Dec panics.
func (CounterSnapshot) Dec(int64) {
panic("Dec called on a CounterSnapshot")
}
// Inc panics.
func (CounterSnapshot) Inc(int64) {
panic("Inc called on a CounterSnapshot")
}
// Snapshot returns the snapshot.
func (c CounterSnapshot) Snapshot() Counter { return c }
// NilCounter is a no-op Counter.
type NilCounter struct{}
// Clear is a no-op.
func (NilCounter) Clear() {}
// Count is a no-op.
func (NilCounter) Count() int64 { return 0 }
// Dec is a no-op.
func (NilCounter) Dec(i int64) {}
// Inc is a no-op.
func (NilCounter) Inc(i int64) {}
// Snapshot is a no-op.
func (NilCounter) Snapshot() Counter { return NilCounter{} }
// StandardCounter is the standard implementation of a Counter and uses the
// sync/atomic package to manage a single int64 value.
type StandardCounter struct {
count int64
}
// Clear sets the counter to zero.
func (c *StandardCounter) Clear() {
atomic.StoreInt64(&c.count, 0)
}
// Count returns the current count.
func (c *StandardCounter) Count() int64 {
return atomic.LoadInt64(&c.count)
}
// Dec decrements the counter by the given amount.
func (c *StandardCounter) Dec(i int64) {
atomic.AddInt64(&c.count, -i)
}
// Inc increments the counter by the given amount.
func (c *StandardCounter) Inc(i int64) {
atomic.AddInt64(&c.count, i)
}
// Snapshot returns a read-only copy of the counter.
func (c *StandardCounter) Snapshot() Counter {
return CounterSnapshot(c.Count())
}
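A minimal usage sketch (not part of the vendored file; counterExample is hypothetical): StandardCounter is safe for concurrent use via sync/atomic, and Snapshot returns a frozen copy whose mutating methods panic, so it can be handed to reporting code.
func counterExample() {
	c := NewCounter()
	c.Inc(47)
	c.Dec(5)
	snap := c.Snapshot() // CounterSnapshot, fixed at 42
	c.Inc(1)             // does not affect snap
	_ = snap.Count()     // still 42
}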

View File

@@ -0,0 +1,76 @@
package metrics
import (
"runtime/debug"
"time"
)
var (
debugMetrics struct {
GCStats struct {
LastGC Gauge
NumGC Gauge
Pause Histogram
//PauseQuantiles Histogram
PauseTotal Gauge
}
ReadGCStats Timer
}
gcStats debug.GCStats
)
// Capture new values for the Go garbage collector statistics exported in
// debug.GCStats. This is designed to be called as a goroutine.
func CaptureDebugGCStats(r Registry, d time.Duration) {
for _ = range time.Tick(d) {
CaptureDebugGCStatsOnce(r)
}
}
// Capture new values for the Go garbage collector statistics exported in
// debug.GCStats. This is designed to be called in a background goroutine.
// Giving a registry which has not been given to RegisterDebugGCStats will
// panic.
//
// Be careful (but much less so) with this because debug.ReadGCStats calls
// the C function runtime·lock(runtime·mheap) which, while not a stop-the-world
// operation, isn't something you want to be doing all the time.
func CaptureDebugGCStatsOnce(r Registry) {
lastGC := gcStats.LastGC
t := time.Now()
debug.ReadGCStats(&gcStats)
debugMetrics.ReadGCStats.UpdateSince(t)
debugMetrics.GCStats.LastGC.Update(int64(gcStats.LastGC.UnixNano()))
debugMetrics.GCStats.NumGC.Update(int64(gcStats.NumGC))
if lastGC != gcStats.LastGC && 0 < len(gcStats.Pause) {
debugMetrics.GCStats.Pause.Update(int64(gcStats.Pause[0]))
}
//debugMetrics.GCStats.PauseQuantiles.Update(gcStats.PauseQuantiles)
debugMetrics.GCStats.PauseTotal.Update(int64(gcStats.PauseTotal))
}
// Register metrics for the Go garbage collector statistics exported in
// debug.GCStats. The metrics are named by their fully-qualified Go symbols,
// i.e. debug.GCStats.PauseTotal.
func RegisterDebugGCStats(r Registry) {
debugMetrics.GCStats.LastGC = NewGauge()
debugMetrics.GCStats.NumGC = NewGauge()
debugMetrics.GCStats.Pause = NewHistogram(NewExpDecaySample(1028, 0.015))
//debugMetrics.GCStats.PauseQuantiles = NewHistogram(NewExpDecaySample(1028, 0.015))
debugMetrics.GCStats.PauseTotal = NewGauge()
debugMetrics.ReadGCStats = NewTimer()
r.Register("debug.GCStats.LastGC", debugMetrics.GCStats.LastGC)
r.Register("debug.GCStats.NumGC", debugMetrics.GCStats.NumGC)
r.Register("debug.GCStats.Pause", debugMetrics.GCStats.Pause)
//r.Register("debug.GCStats.PauseQuantiles", debugMetrics.GCStats.PauseQuantiles)
r.Register("debug.GCStats.PauseTotal", debugMetrics.GCStats.PauseTotal)
r.Register("debug.ReadGCStats", debugMetrics.ReadGCStats)
}
// Allocate an initial slice for gcStats.Pause to avoid allocations during
// normal operation.
func init() {
gcStats.Pause = make([]time.Duration, 11)
}
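A minimal usage sketch (not part of the vendored file; captureGC is hypothetical): register first, then capture periodically in a goroutine, since capturing against a registry that was never passed to RegisterDebugGCStats panics.
func captureGC(r Registry) {
	RegisterDebugGCStats(r)                  // creates and registers the gauges
	go CaptureDebugGCStats(r, 5*time.Second) // samples debug.GCStats forever
}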

View File

@@ -0,0 +1,118 @@
package metrics
import (
"math"
"sync"
"sync/atomic"
)
// EWMAs continuously calculate an exponentially-weighted moving average
// based on an outside source of clock ticks.
type EWMA interface {
Rate() float64
Snapshot() EWMA
Tick()
Update(int64)
}
// NewEWMA constructs a new EWMA with the given alpha.
func NewEWMA(alpha float64) EWMA {
if UseNilMetrics {
return NilEWMA{}
}
return &StandardEWMA{alpha: alpha}
}
// NewEWMA1 constructs a new EWMA for a one-minute moving average.
func NewEWMA1() EWMA {
return NewEWMA(1 - math.Exp(-5.0/60.0/1))
}
// NewEWMA5 constructs a new EWMA for a five-minute moving average.
func NewEWMA5() EWMA {
return NewEWMA(1 - math.Exp(-5.0/60.0/5))
}
// NewEWMA15 constructs a new EWMA for a fifteen-minute moving average.
func NewEWMA15() EWMA {
return NewEWMA(1 - math.Exp(-5.0/60.0/15))
}
// EWMASnapshot is a read-only copy of another EWMA.
type EWMASnapshot float64
// Rate returns the rate of events per second at the time the snapshot was
// taken.
func (a EWMASnapshot) Rate() float64 { return float64(a) }
// Snapshot returns the snapshot.
func (a EWMASnapshot) Snapshot() EWMA { return a }
// Tick panics.
func (EWMASnapshot) Tick() {
panic("Tick called on an EWMASnapshot")
}
// Update panics.
func (EWMASnapshot) Update(int64) {
panic("Update called on an EWMASnapshot")
}
// NilEWMA is a no-op EWMA.
type NilEWMA struct{}
// Rate is a no-op.
func (NilEWMA) Rate() float64 { return 0.0 }
// Snapshot is a no-op.
func (NilEWMA) Snapshot() EWMA { return NilEWMA{} }
// Tick is a no-op.
func (NilEWMA) Tick() {}
// Update is a no-op.
func (NilEWMA) Update(n int64) {}
// StandardEWMA is the standard implementation of an EWMA and tracks the number
// of uncounted events and processes them on each tick. It uses the
// sync/atomic package to manage uncounted events.
type StandardEWMA struct {
uncounted int64 // /!\ this should be the first member to ensure 64-bit alignment
alpha float64
rate float64
init bool
mutex sync.Mutex
}
// Rate returns the moving average rate of events per second.
func (a *StandardEWMA) Rate() float64 {
a.mutex.Lock()
defer a.mutex.Unlock()
return a.rate * float64(1e9)
}
// Snapshot returns a read-only copy of the EWMA.
func (a *StandardEWMA) Snapshot() EWMA {
return EWMASnapshot(a.Rate())
}
// Tick ticks the clock to update the moving average. It assumes it is called
// every five seconds.
func (a *StandardEWMA) Tick() {
count := atomic.LoadInt64(&a.uncounted)
atomic.AddInt64(&a.uncounted, -count)
instantRate := float64(count) / float64(5e9)
a.mutex.Lock()
defer a.mutex.Unlock()
if a.init {
a.rate += a.alpha * (instantRate - a.rate)
} else {
a.init = true
a.rate = instantRate
}
}
// Update adds n uncounted events.
func (a *StandardEWMA) Update(n int64) {
atomic.AddInt64(&a.uncounted, n)
}
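A minimal usage sketch (not part of the vendored file; events and report are hypothetical, and the time package is assumed to be imported): StandardEWMA assumes a five-second tick, so the caller drives Tick from its own clock while producers call Update.
func ewmaExample(events <-chan int64, report func(float64)) {
	a := NewEWMA1() // one-minute moving average with a preset alpha
	tick := time.Tick(5 * time.Second)
	for {
		select {
		case n := <-events:
			a.Update(n) // accumulate uncounted events
		case <-tick:
			a.Tick()         // fold uncounted events into the average
			report(a.Rate()) // events per second
		}
	}
}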

View File

@@ -0,0 +1,84 @@
package metrics
import "sync/atomic"
// Gauges hold an int64 value that can be set arbitrarily.
type Gauge interface {
Snapshot() Gauge
Update(int64)
Value() int64
}
// GetOrRegisterGauge returns an existing Gauge or constructs and registers a
// new StandardGauge.
func GetOrRegisterGauge(name string, r Registry) Gauge {
if nil == r {
r = DefaultRegistry
}
return r.GetOrRegister(name, NewGauge).(Gauge)
}
// NewGauge constructs a new StandardGauge.
func NewGauge() Gauge {
if UseNilMetrics {
return NilGauge{}
}
return &StandardGauge{0}
}
// NewRegisteredGauge constructs and registers a new StandardGauge.
func NewRegisteredGauge(name string, r Registry) Gauge {
c := NewGauge()
if nil == r {
r = DefaultRegistry
}
r.Register(name, c)
return c
}
// GaugeSnapshot is a read-only copy of another Gauge.
type GaugeSnapshot int64
// Snapshot returns the snapshot.
func (g GaugeSnapshot) Snapshot() Gauge { return g }
// Update panics.
func (GaugeSnapshot) Update(int64) {
panic("Update called on a GaugeSnapshot")
}
// Value returns the value at the time the snapshot was taken.
func (g GaugeSnapshot) Value() int64 { return int64(g) }
// NilGauge is a no-op Gauge.
type NilGauge struct{}
// Snapshot is a no-op.
func (NilGauge) Snapshot() Gauge { return NilGauge{} }
// Update is a no-op.
func (NilGauge) Update(v int64) {}
// Value is a no-op.
func (NilGauge) Value() int64 { return 0 }
// StandardGauge is the standard implementation of a Gauge and uses the
// sync/atomic package to manage a single int64 value.
type StandardGauge struct {
value int64
}
// Snapshot returns a read-only copy of the gauge.
func (g *StandardGauge) Snapshot() Gauge {
return GaugeSnapshot(g.Value())
}
// Update updates the gauge's value.
func (g *StandardGauge) Update(v int64) {
atomic.StoreInt64(&g.value, v)
}
// Value returns the gauge's current value.
func (g *StandardGauge) Value() int64 {
return atomic.LoadInt64(&g.value)
}
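A minimal usage sketch (not part of the vendored file; gaugeExample is hypothetical): a gauge is set rather than incremented, so the latest Update wins; Value and Snapshot read atomically.
func gaugeExample() {
	g := NewGauge()
	g.Update(19)
	g.Update(47)             // overwrites, does not add
	_ = g.Value()            // 47
	_ = g.Snapshot().Value() // read-only copy; Update on it panics
}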

View File

@@ -0,0 +1,91 @@
package metrics
import "sync"
// GaugeFloat64s hold a float64 value that can be set arbitrarily.
type GaugeFloat64 interface {
Snapshot() GaugeFloat64
Update(float64)
Value() float64
}
// GetOrRegisterGaugeFloat64 returns an existing GaugeFloat64 or constructs and registers a
// new StandardGaugeFloat64.
func GetOrRegisterGaugeFloat64(name string, r Registry) GaugeFloat64 {
if nil == r {
r = DefaultRegistry
}
return r.GetOrRegister(name, NewGaugeFloat64()).(GaugeFloat64)
}
// NewGaugeFloat64 constructs a new StandardGaugeFloat64.
func NewGaugeFloat64() GaugeFloat64 {
if UseNilMetrics {
return NilGaugeFloat64{}
}
return &StandardGaugeFloat64{
value: 0.0,
}
}
// NewRegisteredGaugeFloat64 constructs and registers a new StandardGaugeFloat64.
func NewRegisteredGaugeFloat64(name string, r Registry) GaugeFloat64 {
c := NewGaugeFloat64()
if nil == r {
r = DefaultRegistry
}
r.Register(name, c)
return c
}
// GaugeFloat64Snapshot is a read-only copy of another GaugeFloat64.
type GaugeFloat64Snapshot float64
// Snapshot returns the snapshot.
func (g GaugeFloat64Snapshot) Snapshot() GaugeFloat64 { return g }
// Update panics.
func (GaugeFloat64Snapshot) Update(float64) {
panic("Update called on a GaugeFloat64Snapshot")
}
// Value returns the value at the time the snapshot was taken.
func (g GaugeFloat64Snapshot) Value() float64 { return float64(g) }
// NilGaugeFloat64 is a no-op GaugeFloat64.
type NilGaugeFloat64 struct{}
// Snapshot is a no-op.
func (NilGaugeFloat64) Snapshot() GaugeFloat64 { return NilGaugeFloat64{} }
// Update is a no-op.
func (NilGaugeFloat64) Update(v float64) {}
// Value is a no-op.
func (NilGaugeFloat64) Value() float64 { return 0.0 }
// StandardGaugeFloat64 is the standard implementation of a GaugeFloat64 and uses
// sync.Mutex to manage a single float64 value.
type StandardGaugeFloat64 struct {
mutex sync.Mutex
value float64
}
// Snapshot returns a read-only copy of the gauge.
func (g *StandardGaugeFloat64) Snapshot() GaugeFloat64 {
return GaugeFloat64Snapshot(g.Value())
}
// Update updates the gauge's value.
func (g *StandardGaugeFloat64) Update(v float64) {
g.mutex.Lock()
defer g.mutex.Unlock()
g.value = v
}
// Value returns the gauge's current value.
func (g *StandardGaugeFloat64) Value() float64 {
g.mutex.Lock()
defer g.mutex.Unlock()
return g.value
}

View File

@@ -0,0 +1,113 @@
package metrics
import (
"bufio"
"fmt"
"log"
"net"
"strconv"
"strings"
"time"
)
// GraphiteConfig provides a container with configuration parameters for
// the Graphite exporter
type GraphiteConfig struct {
Addr *net.TCPAddr // Network address to connect to
Registry Registry // Registry to be exported
FlushInterval time.Duration // Flush interval
DurationUnit time.Duration // Time conversion unit for durations
Prefix string // Prefix to be prepended to metric names
Percentiles []float64 // Percentiles to export from timers and histograms
}
// Graphite is a blocking exporter function which reports metrics in r
// to a graphite server located at addr, flushing them every d duration
// and prepending metric names with prefix.
func Graphite(r Registry, d time.Duration, prefix string, addr *net.TCPAddr) {
GraphiteWithConfig(GraphiteConfig{
Addr: addr,
Registry: r,
FlushInterval: d,
DurationUnit: time.Nanosecond,
Prefix: prefix,
Percentiles: []float64{0.5, 0.75, 0.95, 0.99, 0.999},
})
}
// GraphiteWithConfig is a blocking exporter function just like Graphite,
// but it takes a GraphiteConfig instead.
func GraphiteWithConfig(c GraphiteConfig) {
log.Printf("WARNING: This go-metrics client has been DEPRECATED! It has been moved to https://github.com/cyberdelia/go-metrics-graphite and will be removed from rcrowley/go-metrics on August 12th 2015")
for _ = range time.Tick(c.FlushInterval) {
if err := graphite(&c); nil != err {
log.Println(err)
}
}
}
// GraphiteOnce performs a single submission to Graphite, returning a
// non-nil error on failed connections. This can be used in a loop
// similar to GraphiteWithConfig for custom error handling.
func GraphiteOnce(c GraphiteConfig) error {
log.Printf("WARNING: This go-metrics client has been DEPRECATED! It has been moved to https://github.com/cyberdelia/go-metrics-graphite and will be removed from rcrowley/go-metrics on August 12th 2015")
return graphite(&c)
}
func graphite(c *GraphiteConfig) error {
now := time.Now().Unix()
du := float64(c.DurationUnit)
conn, err := net.DialTCP("tcp", nil, c.Addr)
if nil != err {
return err
}
defer conn.Close()
w := bufio.NewWriter(conn)
c.Registry.Each(func(name string, i interface{}) {
switch metric := i.(type) {
case Counter:
fmt.Fprintf(w, "%s.%s.count %d %d\n", c.Prefix, name, metric.Count(), now)
case Gauge:
fmt.Fprintf(w, "%s.%s.value %d %d\n", c.Prefix, name, metric.Value(), now)
case GaugeFloat64:
fmt.Fprintf(w, "%s.%s.value %f %d\n", c.Prefix, name, metric.Value(), now)
case Histogram:
h := metric.Snapshot()
ps := h.Percentiles(c.Percentiles)
fmt.Fprintf(w, "%s.%s.count %d %d\n", c.Prefix, name, h.Count(), now)
fmt.Fprintf(w, "%s.%s.min %d %d\n", c.Prefix, name, h.Min(), now)
fmt.Fprintf(w, "%s.%s.max %d %d\n", c.Prefix, name, h.Max(), now)
fmt.Fprintf(w, "%s.%s.mean %.2f %d\n", c.Prefix, name, h.Mean(), now)
fmt.Fprintf(w, "%s.%s.std-dev %.2f %d\n", c.Prefix, name, h.StdDev(), now)
for psIdx, psKey := range c.Percentiles {
key := strings.Replace(strconv.FormatFloat(psKey*100.0, 'f', -1, 64), ".", "", 1)
fmt.Fprintf(w, "%s.%s.%s-percentile %.2f %d\n", c.Prefix, name, key, ps[psIdx], now)
}
case Meter:
m := metric.Snapshot()
fmt.Fprintf(w, "%s.%s.count %d %d\n", c.Prefix, name, m.Count(), now)
fmt.Fprintf(w, "%s.%s.one-minute %.2f %d\n", c.Prefix, name, m.Rate1(), now)
fmt.Fprintf(w, "%s.%s.five-minute %.2f %d\n", c.Prefix, name, m.Rate5(), now)
fmt.Fprintf(w, "%s.%s.fifteen-minute %.2f %d\n", c.Prefix, name, m.Rate15(), now)
fmt.Fprintf(w, "%s.%s.mean %.2f %d\n", c.Prefix, name, m.RateMean(), now)
case Timer:
t := metric.Snapshot()
ps := t.Percentiles(c.Percentiles)
fmt.Fprintf(w, "%s.%s.count %d %d\n", c.Prefix, name, t.Count(), now)
fmt.Fprintf(w, "%s.%s.min %d %d\n", c.Prefix, name, t.Min()/int64(du), now)
fmt.Fprintf(w, "%s.%s.max %d %d\n", c.Prefix, name, t.Max()/int64(du), now)
fmt.Fprintf(w, "%s.%s.mean %.2f %d\n", c.Prefix, name, t.Mean()/du, now)
fmt.Fprintf(w, "%s.%s.std-dev %.2f %d\n", c.Prefix, name, t.StdDev()/du, now)
for psIdx, psKey := range c.Percentiles {
key := strings.Replace(strconv.FormatFloat(psKey*100.0, 'f', -1, 64), ".", "", 1)
fmt.Fprintf(w, "%s.%s.%s-percentile %.2f %d\n", c.Prefix, name, key, ps[psIdx], now)
}
fmt.Fprintf(w, "%s.%s.one-minute %.2f %d\n", c.Prefix, name, t.Rate1(), now)
fmt.Fprintf(w, "%s.%s.five-minute %.2f %d\n", c.Prefix, name, t.Rate5(), now)
fmt.Fprintf(w, "%s.%s.fifteen-minute %.2f %d\n", c.Prefix, name, t.Rate15(), now)
fmt.Fprintf(w, "%s.%s.mean-rate %.2f %d\n", c.Prefix, name, t.RateMean(), now)
}
w.Flush()
})
return nil
}
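A minimal sketch of the custom error-handling loop that the GraphiteOnce comment describes (not part of the vendored file; graphiteLoop is hypothetical), assuming cfg carries a resolved *net.TCPAddr and a registry:
func graphiteLoop(cfg GraphiteConfig) {
	for _ = range time.Tick(cfg.FlushInterval) {
		if err := GraphiteOnce(cfg); err != nil {
			log.Println("graphite flush failed:", err) // log and keep going
		}
	}
}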

View File

@@ -0,0 +1,61 @@
package metrics
// Healthchecks hold an error value describing an arbitrary up/down status.
type Healthcheck interface {
Check()
Error() error
Healthy()
Unhealthy(error)
}
// NewHealthcheck constructs a new Healthcheck which will use the given
// function to update its status.
func NewHealthcheck(f func(Healthcheck)) Healthcheck {
if UseNilMetrics {
return NilHealthcheck{}
}
return &StandardHealthcheck{nil, f}
}
// NilHealthcheck is a no-op.
type NilHealthcheck struct{}
// Check is a no-op.
func (NilHealthcheck) Check() {}
// Error is a no-op.
func (NilHealthcheck) Error() error { return nil }
// Healthy is a no-op.
func (NilHealthcheck) Healthy() {}
// Unhealthy is a no-op.
func (NilHealthcheck) Unhealthy(error) {}
// StandardHealthcheck is the standard implementation of a Healthcheck and
// stores the status and a function to call to update the status.
type StandardHealthcheck struct {
err error
f func(Healthcheck)
}
// Check runs the healthcheck function to update the healthcheck's status.
func (h *StandardHealthcheck) Check() {
h.f(h)
}
// Error returns the healthcheck's status, which will be nil if it is healthy.
func (h *StandardHealthcheck) Error() error {
return h.err
}
// Healthy marks the healthcheck as healthy.
func (h *StandardHealthcheck) Healthy() {
h.err = nil
}
// Unhealthy marks the healthcheck as unhealthy. The error is stored and
// may be retrieved by the Error method.
func (h *StandardHealthcheck) Unhealthy(err error) {
h.err = err
}
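A minimal usage sketch (not part of the vendored file; ping is a hypothetical probe): the supplied function decides health on each Check, and Error then reports nil or the stored failure.
func healthcheckExample(ping func() error) Healthcheck {
	hc := NewHealthcheck(func(h Healthcheck) {
		if err := ping(); err != nil {
			h.Unhealthy(err) // store the failure for Error
		} else {
			h.Healthy() // clear any previous failure
		}
	})
	hc.Check()     // run the probe once
	_ = hc.Error() // nil when healthy
	return hc
}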

View File

@@ -0,0 +1,202 @@
package metrics
// Histograms calculate distribution statistics from a series of int64 values.
type Histogram interface {
Clear()
Count() int64
Max() int64
Mean() float64
Min() int64
Percentile(float64) float64
Percentiles([]float64) []float64
Sample() Sample
Snapshot() Histogram
StdDev() float64
Sum() int64
Update(int64)
Variance() float64
}
// GetOrRegisterHistogram returns an existing Histogram or constructs and
// registers a new StandardHistogram.
func GetOrRegisterHistogram(name string, r Registry, s Sample) Histogram {
if nil == r {
r = DefaultRegistry
}
return r.GetOrRegister(name, func() Histogram { return NewHistogram(s) }).(Histogram)
}
// NewHistogram constructs a new StandardHistogram from a Sample.
func NewHistogram(s Sample) Histogram {
if UseNilMetrics {
return NilHistogram{}
}
return &StandardHistogram{sample: s}
}
// NewRegisteredHistogram constructs and registers a new StandardHistogram from
// a Sample.
func NewRegisteredHistogram(name string, r Registry, s Sample) Histogram {
c := NewHistogram(s)
if nil == r {
r = DefaultRegistry
}
r.Register(name, c)
return c
}
// HistogramSnapshot is a read-only copy of another Histogram.
type HistogramSnapshot struct {
sample *SampleSnapshot
}
// Clear panics.
func (*HistogramSnapshot) Clear() {
panic("Clear called on a HistogramSnapshot")
}
// Count returns the number of samples recorded at the time the snapshot was
// taken.
func (h *HistogramSnapshot) Count() int64 { return h.sample.Count() }
// Max returns the maximum value in the sample at the time the snapshot was
// taken.
func (h *HistogramSnapshot) Max() int64 { return h.sample.Max() }
// Mean returns the mean of the values in the sample at the time the snapshot
// was taken.
func (h *HistogramSnapshot) Mean() float64 { return h.sample.Mean() }
// Min returns the minimum value in the sample at the time the snapshot was
// taken.
func (h *HistogramSnapshot) Min() int64 { return h.sample.Min() }
// Percentile returns an arbitrary percentile of values in the sample at the
// time the snapshot was taken.
func (h *HistogramSnapshot) Percentile(p float64) float64 {
return h.sample.Percentile(p)
}
// Percentiles returns a slice of arbitrary percentiles of values in the sample
// at the time the snapshot was taken.
func (h *HistogramSnapshot) Percentiles(ps []float64) []float64 {
return h.sample.Percentiles(ps)
}
// Sample returns the Sample underlying the histogram.
func (h *HistogramSnapshot) Sample() Sample { return h.sample }
// Snapshot returns the snapshot.
func (h *HistogramSnapshot) Snapshot() Histogram { return h }
// StdDev returns the standard deviation of the values in the sample at the
// time the snapshot was taken.
func (h *HistogramSnapshot) StdDev() float64 { return h.sample.StdDev() }
// Sum returns the sum in the sample at the time the snapshot was taken.
func (h *HistogramSnapshot) Sum() int64 { return h.sample.Sum() }
// Update panics.
func (*HistogramSnapshot) Update(int64) {
panic("Update called on a HistogramSnapshot")
}
// Variance returns the variance of inputs at the time the snapshot was taken.
func (h *HistogramSnapshot) Variance() float64 { return h.sample.Variance() }
// NilHistogram is a no-op Histogram.
type NilHistogram struct{}
// Clear is a no-op.
func (NilHistogram) Clear() {}
// Count is a no-op.
func (NilHistogram) Count() int64 { return 0 }
// Max is a no-op.
func (NilHistogram) Max() int64 { return 0 }
// Mean is a no-op.
func (NilHistogram) Mean() float64 { return 0.0 }
// Min is a no-op.
func (NilHistogram) Min() int64 { return 0 }
// Percentile is a no-op.
func (NilHistogram) Percentile(p float64) float64 { return 0.0 }
// Percentiles is a no-op.
func (NilHistogram) Percentiles(ps []float64) []float64 {
return make([]float64, len(ps))
}
// Sample is a no-op.
func (NilHistogram) Sample() Sample { return NilSample{} }
// Snapshot is a no-op.
func (NilHistogram) Snapshot() Histogram { return NilHistogram{} }
// StdDev is a no-op.
func (NilHistogram) StdDev() float64 { return 0.0 }
// Sum is a no-op.
func (NilHistogram) Sum() int64 { return 0 }
// Update is a no-op.
func (NilHistogram) Update(v int64) {}
// Variance is a no-op.
func (NilHistogram) Variance() float64 { return 0.0 }
// StandardHistogram is the standard implementation of a Histogram and uses a
// Sample to bound its memory use.
type StandardHistogram struct {
sample Sample
}
// Clear clears the histogram and its sample.
func (h *StandardHistogram) Clear() { h.sample.Clear() }
// Count returns the number of samples recorded since the histogram was last
// cleared.
func (h *StandardHistogram) Count() int64 { return h.sample.Count() }
// Max returns the maximum value in the sample.
func (h *StandardHistogram) Max() int64 { return h.sample.Max() }
// Mean returns the mean of the values in the sample.
func (h *StandardHistogram) Mean() float64 { return h.sample.Mean() }
// Min returns the minimum value in the sample.
func (h *StandardHistogram) Min() int64 { return h.sample.Min() }
// Percentile returns an arbitrary percentile of the values in the sample.
func (h *StandardHistogram) Percentile(p float64) float64 {
return h.sample.Percentile(p)
}
// Percentiles returns a slice of arbitrary percentiles of the values in the
// sample.
func (h *StandardHistogram) Percentiles(ps []float64) []float64 {
return h.sample.Percentiles(ps)
}
// Sample returns the Sample underlying the histogram.
func (h *StandardHistogram) Sample() Sample { return h.sample }
// Snapshot returns a read-only copy of the histogram.
func (h *StandardHistogram) Snapshot() Histogram {
return &HistogramSnapshot{sample: h.sample.Snapshot().(*SampleSnapshot)}
}
// StdDev returns the standard deviation of the values in the sample.
func (h *StandardHistogram) StdDev() float64 { return h.sample.StdDev() }
// Sum returns the sum in the sample.
func (h *StandardHistogram) Sum() int64 { return h.sample.Sum() }
// Update samples a new value.
func (h *StandardHistogram) Update(v int64) { h.sample.Update(v) }
// Variance returns the variance of the values in the sample.
func (h *StandardHistogram) Variance() float64 { return h.sample.Variance() }
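
Editor's note: a hedged sketch of the Histogram API above, registering a histogram backed by the exponentially-decaying sample defined later in this change. The metric name `request.size` is a placeholder.

```go
package main

import (
    "fmt"

    "github.com/rcrowley/go-metrics"
)

func main() {
    // 1028 and 0.015 are the reservoir size and alpha used elsewhere in this change.
    s := metrics.NewExpDecaySample(1028, 0.015)
    h := metrics.GetOrRegisterHistogram("request.size", nil, s) // nil registry -> DefaultRegistry

    for i := int64(1); i <= 100; i++ {
        h.Update(i)
    }

    snap := h.Snapshot() // read-only copy; safe to query repeatedly
    fmt.Println(snap.Count(), snap.Min(), snap.Max(), snap.Percentile(0.95))
}
```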

View File

@@ -0,0 +1,83 @@
package metrics
import (
"encoding/json"
"io"
"time"
)
// MarshalJSON returns a byte slice containing a JSON representation of all
// the metrics in the Registry.
func (r *StandardRegistry) MarshalJSON() ([]byte, error) {
data := make(map[string]map[string]interface{})
r.Each(func(name string, i interface{}) {
values := make(map[string]interface{})
switch metric := i.(type) {
case Counter:
values["count"] = metric.Count()
case Gauge:
values["value"] = metric.Value()
case GaugeFloat64:
values["value"] = metric.Value()
case Healthcheck:
values["error"] = nil
metric.Check()
if err := metric.Error(); nil != err {
values["error"] = metric.Error().Error()
}
case Histogram:
h := metric.Snapshot()
ps := h.Percentiles([]float64{0.5, 0.75, 0.95, 0.99, 0.999})
values["count"] = h.Count()
values["min"] = h.Min()
values["max"] = h.Max()
values["mean"] = h.Mean()
values["stddev"] = h.StdDev()
values["median"] = ps[0]
values["75%"] = ps[1]
values["95%"] = ps[2]
values["99%"] = ps[3]
values["99.9%"] = ps[4]
case Meter:
m := metric.Snapshot()
values["count"] = m.Count()
values["1m.rate"] = m.Rate1()
values["5m.rate"] = m.Rate5()
values["15m.rate"] = m.Rate15()
values["mean.rate"] = m.RateMean()
case Timer:
t := metric.Snapshot()
ps := t.Percentiles([]float64{0.5, 0.75, 0.95, 0.99, 0.999})
values["count"] = t.Count()
values["min"] = t.Min()
values["max"] = t.Max()
values["mean"] = t.Mean()
values["stddev"] = t.StdDev()
values["median"] = ps[0]
values["75%"] = ps[1]
values["95%"] = ps[2]
values["99%"] = ps[3]
values["99.9%"] = ps[4]
values["1m.rate"] = t.Rate1()
values["5m.rate"] = t.Rate5()
values["15m.rate"] = t.Rate15()
values["mean.rate"] = t.RateMean()
}
data[name] = values
})
return json.Marshal(data)
}
// WriteJSON writes metrics from the given registry periodically to the
// specified io.Writer as JSON.
func WriteJSON(r Registry, d time.Duration, w io.Writer) {
for range time.Tick(d) {
WriteJSONOnce(r, w)
}
}
// WriteJSONOnce writes metrics from the given registry to the specified
// io.Writer as JSON.
func WriteJSONOnce(r Registry, w io.Writer) {
json.NewEncoder(w).Encode(r)
}
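
Editor's note: a short sketch of the JSON exporter above. WriteJSONOnce encodes the registry immediately, while WriteJSON blocks on its ticker, so it is typically launched in its own goroutine (an assumption about intended use, consistent with the loop above).

```go
package main

import (
    "os"
    "time"

    "github.com/rcrowley/go-metrics"
)

func main() {
    c := metrics.GetOrRegisterCounter("jobs.done", nil)
    c.Inc(3)

    // One-shot dump to stdout.
    metrics.WriteJSONOnce(metrics.DefaultRegistry, os.Stdout)

    // Periodic dump; WriteJSON never returns, so run it in a goroutine.
    go metrics.WriteJSON(metrics.DefaultRegistry, 5*time.Second, os.Stdout)
    time.Sleep(6 * time.Second)
}
```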

View File

@@ -0,0 +1,102 @@
package librato
import (
"bytes"
"encoding/json"
"fmt"
"io/ioutil"
"net/http"
)
const Operations = "operations"
const OperationsShort = "ops"
type LibratoClient struct {
Email, Token string
}
// property strings
const (
// display attributes
Color = "color"
DisplayMax = "display_max"
DisplayMin = "display_min"
DisplayUnitsLong = "display_units_long"
DisplayUnitsShort = "display_units_short"
DisplayStacked = "display_stacked"
DisplayTransform = "display_transform"
// special gauge display attributes
SummarizeFunction = "summarize_function"
Aggregate = "aggregate"
// metric keys
Name = "name"
Period = "period"
Description = "description"
DisplayName = "display_name"
Attributes = "attributes"
// measurement keys
MeasureTime = "measure_time"
Source = "source"
Value = "value"
// special gauge keys
Count = "count"
Sum = "sum"
Max = "max"
Min = "min"
SumSquares = "sum_squares"
// batch keys
Counters = "counters"
Gauges = "gauges"
MetricsPostUrl = "https://metrics-api.librato.com/v1/metrics"
)
type Measurement map[string]interface{}
type Metric map[string]interface{}
type Batch struct {
Gauges []Measurement `json:"gauges,omitempty"`
Counters []Measurement `json:"counters,omitempty"`
MeasureTime int64 `json:"measure_time"`
Source string `json:"source"`
}
func (self *LibratoClient) PostMetrics(batch Batch) (err error) {
var (
js []byte
req *http.Request
resp *http.Response
)
if len(batch.Counters) == 0 && len(batch.Gauges) == 0 {
return nil
}
if js, err = json.Marshal(batch); err != nil {
return
}
if req, err = http.NewRequest("POST", MetricsPostUrl, bytes.NewBuffer(js)); err != nil {
return
}
req.Header.Set("Content-Type", "application/json")
req.SetBasicAuth(self.Email, self.Token)
if resp, err = http.DefaultClient.Do(req); err != nil {
return
}
if resp.StatusCode != http.StatusOK {
var body []byte
if body, err = ioutil.ReadAll(resp.Body); err != nil {
body = []byte(fmt.Sprintf("(could not fetch response body for error: %s)", err))
}
err = fmt.Errorf("Unable to post to Librato: %d %s %s", resp.StatusCode, resp.Status, string(body))
}
return
}
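
Editor's note: a hedged sketch of driving LibratoClient directly. The email, token, source, and metric name are placeholders, and PostMetrics performs a real HTTP POST against the Librato API.

```go
package main

import (
    "log"
    "time"

    "github.com/rcrowley/go-metrics/librato"
)

func main() {
    client := librato.LibratoClient{Email: "ops@example.com", Token: "REPLACE_ME"}
    batch := librato.Batch{
        MeasureTime: time.Now().Unix(),
        Source:      "host-1",
        Gauges: []librato.Measurement{
            {librato.Name: "queue.depth", librato.Value: 42.0},
        },
    }
    if err := client.PostMetrics(batch); err != nil {
        log.Println(err) // e.g. an auth failure against the real API
    }
}
```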

View File

@@ -0,0 +1,231 @@
package librato
import (
"fmt"
"log"
"math"
"regexp"
"time"
"github.com/rcrowley/go-metrics"
)
// a regexp for extracting the unit from time.Duration.String
var unitRegexp = regexp.MustCompile("[^\\d]+$")
// a helper that turns a time.Duration into librato display attributes for timer metrics
func translateTimerAttributes(d time.Duration) (attrs map[string]interface{}) {
attrs = make(map[string]interface{})
attrs[DisplayTransform] = fmt.Sprintf("x/%d", int64(d))
attrs[DisplayUnitsShort] = string(unitRegexp.Find([]byte(d.String())))
return
}
type Reporter struct {
Email, Token string
Source string
Interval time.Duration
Registry metrics.Registry
Percentiles []float64 // percentiles to report on histogram metrics
TimerAttributes map[string]interface{} // units in which timers will be displayed
intervalSec int64
}
func NewReporter(r metrics.Registry, d time.Duration, e string, t string, s string, p []float64, u time.Duration) *Reporter {
return &Reporter{e, t, s, d, r, p, translateTimerAttributes(u), int64(d / time.Second)}
}
func Librato(r metrics.Registry, d time.Duration, e string, t string, s string, p []float64, u time.Duration) {
NewReporter(r, d, e, t, s, p, u).Run()
}
func (self *Reporter) Run() {
log.Printf("WARNING: This client has been DEPRECATED! It has been moved to https://github.com/mihasya/go-metrics-librato and will be removed from rcrowley/go-metrics on August 5th 2015")
ticker := time.Tick(self.Interval)
metricsApi := &LibratoClient{self.Email, self.Token}
for now := range ticker {
var batch Batch
var err error
if batch, err = self.BuildRequest(now, self.Registry); err != nil {
log.Printf("ERROR constructing librato request body %s", err)
continue
}
if err := metricsApi.PostMetrics(batch); err != nil {
log.Printf("ERROR sending metrics to librato %s", err)
continue
}
}
}
// calculate sum of squares from data provided by metrics.Histogram
// see http://en.wikipedia.org/wiki/Standard_deviation#Rapid_calculation_methods
func sumSquares(s metrics.Sample) float64 {
count := float64(s.Count())
sumSquared := math.Pow(count*s.Mean(), 2)
sumSquares := math.Pow(count*s.StdDev(), 2) + sumSquared/count
if math.IsNaN(sumSquares) {
return 0.0
}
return sumSquares
}
func sumSquaresTimer(t metrics.Timer) float64 {
count := float64(t.Count())
sumSquared := math.Pow(count*t.Mean(), 2)
sumSquares := math.Pow(count*t.StdDev(), 2) + sumSquared/count
if math.IsNaN(sumSquares) {
return 0.0
}
return sumSquares
}
func (self *Reporter) BuildRequest(now time.Time, r metrics.Registry) (snapshot Batch, err error) {
snapshot = Batch{
// coerce timestamps to a stepping fn so that they line up in Librato graphs
MeasureTime: (now.Unix() / self.intervalSec) * self.intervalSec,
Source: self.Source,
}
snapshot.Gauges = make([]Measurement, 0)
snapshot.Counters = make([]Measurement, 0)
histogramGaugeCount := 1 + len(self.Percentiles)
r.Each(func(name string, metric interface{}) {
measurement := Measurement{}
measurement[Period] = self.Interval.Seconds()
switch m := metric.(type) {
case metrics.Counter:
if m.Count() > 0 {
measurement[Name] = fmt.Sprintf("%s.%s", name, "count")
measurement[Value] = float64(m.Count())
measurement[Attributes] = map[string]interface{}{
DisplayUnitsLong: Operations,
DisplayUnitsShort: OperationsShort,
DisplayMin: "0",
}
snapshot.Counters = append(snapshot.Counters, measurement)
}
case metrics.Gauge:
measurement[Name] = name
measurement[Value] = float64(m.Value())
snapshot.Gauges = append(snapshot.Gauges, measurement)
case metrics.GaugeFloat64:
measurement[Name] = name
measurement[Value] = float64(m.Value())
snapshot.Gauges = append(snapshot.Gauges, measurement)
case metrics.Histogram:
if m.Count() > 0 {
gauges := make([]Measurement, histogramGaugeCount)
s := m.Sample()
measurement[Name] = fmt.Sprintf("%s.%s", name, "hist")
measurement[Count] = uint64(s.Count())
measurement[Max] = float64(s.Max())
measurement[Min] = float64(s.Min())
measurement[Sum] = float64(s.Sum())
measurement[SumSquares] = sumSquares(s)
gauges[0] = measurement
for i, p := range self.Percentiles {
gauges[i+1] = Measurement{
Name: fmt.Sprintf("%s.%.2f", measurement[Name], p),
Value: s.Percentile(p),
Period: measurement[Period],
}
}
snapshot.Gauges = append(snapshot.Gauges, gauges...)
}
case metrics.Meter:
measurement[Name] = name
measurement[Value] = float64(m.Count())
snapshot.Counters = append(snapshot.Counters, measurement)
snapshot.Gauges = append(snapshot.Gauges,
Measurement{
Name: fmt.Sprintf("%s.%s", name, "1min"),
Value: m.Rate1(),
Period: int64(self.Interval.Seconds()),
Attributes: map[string]interface{}{
DisplayUnitsLong: Operations,
DisplayUnitsShort: OperationsShort,
DisplayMin: "0",
},
},
Measurement{
Name: fmt.Sprintf("%s.%s", name, "5min"),
Value: m.Rate5(),
Period: int64(self.Interval.Seconds()),
Attributes: map[string]interface{}{
DisplayUnitsLong: Operations,
DisplayUnitsShort: OperationsShort,
DisplayMin: "0",
},
},
Measurement{
Name: fmt.Sprintf("%s.%s", name, "15min"),
Value: m.Rate15(),
Period: int64(self.Interval.Seconds()),
Attributes: map[string]interface{}{
DisplayUnitsLong: Operations,
DisplayUnitsShort: OperationsShort,
DisplayMin: "0",
},
},
)
case metrics.Timer:
measurement[Name] = name
measurement[Value] = float64(m.Count())
snapshot.Counters = append(snapshot.Counters, measurement)
if m.Count() > 0 {
libratoName := fmt.Sprintf("%s.%s", name, "timer.mean")
gauges := make([]Measurement, histogramGaugeCount)
gauges[0] = Measurement{
Name: libratoName,
Count: uint64(m.Count()),
Sum: m.Mean() * float64(m.Count()),
Max: float64(m.Max()),
Min: float64(m.Min()),
SumSquares: sumSquaresTimer(m),
Period: int64(self.Interval.Seconds()),
Attributes: self.TimerAttributes,
}
for i, p := range self.Percentiles {
gauges[i+1] = Measurement{
Name: fmt.Sprintf("%s.timer.%2.0f", name, p*100),
Value: m.Percentile(p),
Period: int64(self.Interval.Seconds()),
Attributes: self.TimerAttributes,
}
}
snapshot.Gauges = append(snapshot.Gauges, gauges...)
snapshot.Gauges = append(snapshot.Gauges,
Measurement{
Name: fmt.Sprintf("%s.%s", name, "rate.1min"),
Value: m.Rate1(),
Period: int64(self.Interval.Seconds()),
Attributes: map[string]interface{}{
DisplayUnitsLong: Operations,
DisplayUnitsShort: OperationsShort,
DisplayMin: "0",
},
},
Measurement{
Name: fmt.Sprintf("%s.%s", name, "rate.5min"),
Value: m.Rate5(),
Period: int64(self.Interval.Seconds()),
Attributes: map[string]interface{}{
DisplayUnitsLong: Operations,
DisplayUnitsShort: OperationsShort,
DisplayMin: "0",
},
},
Measurement{
Name: fmt.Sprintf("%s.%s", name, "rate.15min"),
Value: m.Rate15(),
Period: int64(self.Interval.Seconds()),
Attributes: map[string]interface{}{
DisplayUnitsLong: Operations,
DisplayUnitsShort: OperationsShort,
DisplayMin: "0",
},
},
)
}
}
})
return
}
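
Editor's note: given the deprecation warning in Run above, this is only a sketch of how the reporter was wired up. Librato blocks on its ticker, so callers start it in a goroutine; credentials and source are placeholders.

```go
package main

import (
    "time"

    "github.com/rcrowley/go-metrics"
    "github.com/rcrowley/go-metrics/librato"
)

func main() {
    go librato.Librato(
        metrics.DefaultRegistry,
        30*time.Second,             // reporting interval
        "ops@example.com", "TOKEN", // placeholder credentials
        "host-1",                   // source
        []float64{0.95, 0.99},      // percentiles for histograms/timers
        time.Millisecond,           // display unit for timers
    )
    select {} // block forever in this sketch
}
```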

View File

@@ -0,0 +1,70 @@
package metrics
import (
"log"
"time"
)
// Output each metric in the given registry periodically using the given
// logger.
func Log(r Registry, d time.Duration, l *log.Logger) {
for range time.Tick(d) {
r.Each(func(name string, i interface{}) {
switch metric := i.(type) {
case Counter:
l.Printf("counter %s\n", name)
l.Printf(" count: %9d\n", metric.Count())
case Gauge:
l.Printf("gauge %s\n", name)
l.Printf(" value: %9d\n", metric.Value())
case GaugeFloat64:
l.Printf("gauge %s\n", name)
l.Printf(" value: %f\n", metric.Value())
case Healthcheck:
metric.Check()
l.Printf("healthcheck %s\n", name)
l.Printf(" error: %v\n", metric.Error())
case Histogram:
h := metric.Snapshot()
ps := h.Percentiles([]float64{0.5, 0.75, 0.95, 0.99, 0.999})
l.Printf("histogram %s\n", name)
l.Printf(" count: %9d\n", h.Count())
l.Printf(" min: %9d\n", h.Min())
l.Printf(" max: %9d\n", h.Max())
l.Printf(" mean: %12.2f\n", h.Mean())
l.Printf(" stddev: %12.2f\n", h.StdDev())
l.Printf(" median: %12.2f\n", ps[0])
l.Printf(" 75%%: %12.2f\n", ps[1])
l.Printf(" 95%%: %12.2f\n", ps[2])
l.Printf(" 99%%: %12.2f\n", ps[3])
l.Printf(" 99.9%%: %12.2f\n", ps[4])
case Meter:
m := metric.Snapshot()
l.Printf("meter %s\n", name)
l.Printf(" count: %9d\n", m.Count())
l.Printf(" 1-min rate: %12.2f\n", m.Rate1())
l.Printf(" 5-min rate: %12.2f\n", m.Rate5())
l.Printf(" 15-min rate: %12.2f\n", m.Rate15())
l.Printf(" mean rate: %12.2f\n", m.RateMean())
case Timer:
t := metric.Snapshot()
ps := t.Percentiles([]float64{0.5, 0.75, 0.95, 0.99, 0.999})
l.Printf("timer %s\n", name)
l.Printf(" count: %9d\n", t.Count())
l.Printf(" min: %9d\n", t.Min())
l.Printf(" max: %9d\n", t.Max())
l.Printf(" mean: %12.2f\n", t.Mean())
l.Printf(" stddev: %12.2f\n", t.StdDev())
l.Printf(" median: %12.2f\n", ps[0])
l.Printf(" 75%%: %12.2f\n", ps[1])
l.Printf(" 95%%: %12.2f\n", ps[2])
l.Printf(" 99%%: %12.2f\n", ps[3])
l.Printf(" 99.9%%: %12.2f\n", ps[4])
l.Printf(" 1-min rate: %12.2f\n", t.Rate1())
l.Printf(" 5-min rate: %12.2f\n", t.Rate5())
l.Printf(" 15-min rate: %12.2f\n", t.Rate15())
l.Printf(" mean rate: %12.2f\n", t.RateMean())
}
})
}
}
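
Editor's note: a brief sketch of the log exporter above. Log blocks on its ticker, so it is typically started as a goroutine; the timer name is a placeholder.

```go
package main

import (
    "log"
    "os"
    "time"

    "github.com/rcrowley/go-metrics"
)

func main() {
    t := metrics.GetOrRegisterTimer("handler.latency", nil)
    t.Time(func() { time.Sleep(10 * time.Millisecond) })

    logger := log.New(os.Stderr, "metrics: ", log.Lmicroseconds)
    go metrics.Log(metrics.DefaultRegistry, 5*time.Second, logger)
    time.Sleep(6 * time.Second)
}
```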

View File

@@ -0,0 +1,285 @@
Memory usage
============
(Highly unscientific.)
Command used to gather static memory usage:
```sh
grep ^Vm "/proc/$(ps fax | grep [m]etrics-bench | awk '{print $1}')/status"
```
Program used to gather baseline memory usage:
```go
package main
import "time"
func main() {
time.Sleep(600e9)
}
```
Baseline
--------
```
VmPeak: 42604 kB
VmSize: 42604 kB
VmLck: 0 kB
VmHWM: 1120 kB
VmRSS: 1120 kB
VmData: 35460 kB
VmStk: 136 kB
VmExe: 1020 kB
VmLib: 1848 kB
VmPTE: 36 kB
VmSwap: 0 kB
```
Program used to gather metric memory usage (with other metrics being similar):
```go
package main
import (
"fmt"
"metrics"
"time"
)
func main() {
fmt.Sprintf("foo")
metrics.NewRegistry()
time.Sleep(600e9)
}
```
1000 counters registered
------------------------
```
VmPeak: 44016 kB
VmSize: 44016 kB
VmLck: 0 kB
VmHWM: 1928 kB
VmRSS: 1928 kB
VmData: 36868 kB
VmStk: 136 kB
VmExe: 1024 kB
VmLib: 1848 kB
VmPTE: 40 kB
VmSwap: 0 kB
```
**1.412 kB virtual, TODO 0.808 kB resident per counter.**
100000 counters registered
--------------------------
```
VmPeak: 55024 kB
VmSize: 55024 kB
VmLck: 0 kB
VmHWM: 12440 kB
VmRSS: 12440 kB
VmData: 47876 kB
VmStk: 136 kB
VmExe: 1024 kB
VmLib: 1848 kB
VmPTE: 64 kB
VmSwap: 0 kB
```
**0.1242 kB virtual, 0.1132 kB resident per counter.**
1000 gauges registered
----------------------
```
VmPeak: 44012 kB
VmSize: 44012 kB
VmLck: 0 kB
VmHWM: 1928 kB
VmRSS: 1928 kB
VmData: 36868 kB
VmStk: 136 kB
VmExe: 1020 kB
VmLib: 1848 kB
VmPTE: 40 kB
VmSwap: 0 kB
```
**1.408 kB virtual, 0.808 kB resident per gauge.**
100000 gauges registered
------------------------
```
VmPeak: 55020 kB
VmSize: 55020 kB
VmLck: 0 kB
VmHWM: 12432 kB
VmRSS: 12432 kB
VmData: 47876 kB
VmStk: 136 kB
VmExe: 1020 kB
VmLib: 1848 kB
VmPTE: 60 kB
VmSwap: 0 kB
```
**0.12416 kB virtual, 0.11312 kB resident per gauge.**
1000 histograms with a uniform sample size of 1028
--------------------------------------------------
```
VmPeak: 72272 kB
VmSize: 72272 kB
VmLck: 0 kB
VmHWM: 16204 kB
VmRSS: 16204 kB
VmData: 65100 kB
VmStk: 136 kB
VmExe: 1048 kB
VmLib: 1848 kB
VmPTE: 80 kB
VmSwap: 0 kB
```
**29.668 kB virtual, TODO 15.084 kB resident per histogram.**
10000 histograms with a uniform sample size of 1028
---------------------------------------------------
```
VmPeak: 256912 kB
VmSize: 256912 kB
VmLck: 0 kB
VmHWM: 146204 kB
VmRSS: 146204 kB
VmData: 249740 kB
VmStk: 136 kB
VmExe: 1048 kB
VmLib: 1848 kB
VmPTE: 448 kB
VmSwap: 0 kB
```
**21.4308 kB virtual, 14.5084 kB resident per histogram.**
50000 histograms with a uniform sample size of 1028
---------------------------------------------------
```
VmPeak: 908112 kB
VmSize: 908112 kB
VmLck: 0 kB
VmHWM: 645832 kB
VmRSS: 645588 kB
VmData: 900940 kB
VmStk: 136 kB
VmExe: 1048 kB
VmLib: 1848 kB
VmPTE: 1716 kB
VmSwap: 1544 kB
```
**17.31016 kB virtual, 12.88936 kB resident per histogram.**
1000 histograms with an exponentially-decaying sample size of 1028 and alpha of 0.015
-------------------------------------------------------------------------------------
```
VmPeak: 62480 kB
VmSize: 62480 kB
VmLck: 0 kB
VmHWM: 11572 kB
VmRSS: 11572 kB
VmData: 55308 kB
VmStk: 136 kB
VmExe: 1048 kB
VmLib: 1848 kB
VmPTE: 64 kB
VmSwap: 0 kB
```
**19.876 kB virtual, 10.452 kB resident per histogram.**
10000 histograms with an exponentially-decaying sample size of 1028 and alpha of 0.015
--------------------------------------------------------------------------------------
```
VmPeak: 153296 kB
VmSize: 153296 kB
VmLck: 0 kB
VmHWM: 101176 kB
VmRSS: 101176 kB
VmData: 146124 kB
VmStk: 136 kB
VmExe: 1048 kB
VmLib: 1848 kB
VmPTE: 240 kB
VmSwap: 0 kB
```
**11.0692 kB virtual, 10.0056 kB resident per histogram.**
50000 histograms with an exponentially-decaying sample size of 1028 and alpha of 0.015
--------------------------------------------------------------------------------------
```
VmPeak: 557264 kB
VmSize: 557264 kB
VmLck: 0 kB
VmHWM: 501056 kB
VmRSS: 501056 kB
VmData: 550092 kB
VmStk: 136 kB
VmExe: 1048 kB
VmLib: 1848 kB
VmPTE: 1032 kB
VmSwap: 0 kB
```
**10.2932 kB virtual, 9.99872 kB resident per histogram.**
1000 meters
-----------
```
VmPeak: 74504 kB
VmSize: 74504 kB
VmLck: 0 kB
VmHWM: 24124 kB
VmRSS: 24124 kB
VmData: 67340 kB
VmStk: 136 kB
VmExe: 1040 kB
VmLib: 1848 kB
VmPTE: 92 kB
VmSwap: 0 kB
```
**31.9 kB virtual, 23.004 kB resident per meter.**
10000 meters
------------
```
VmPeak: 278920 kB
VmSize: 278920 kB
VmLck: 0 kB
VmHWM: 227300 kB
VmRSS: 227300 kB
VmData: 271756 kB
VmStk: 136 kB
VmExe: 1040 kB
VmLib: 1848 kB
VmPTE: 488 kB
VmSwap: 0 kB
```
**23.6316 kB virtual, 22.618 kB resident per meter.**

View File

@@ -0,0 +1,233 @@
package metrics
import (
"sync"
"time"
)
// Meters count events to produce exponentially-weighted moving average rates
// at one-, five-, and fifteen-minutes and a mean rate.
type Meter interface {
Count() int64
Mark(int64)
Rate1() float64
Rate5() float64
Rate15() float64
RateMean() float64
Snapshot() Meter
}
// GetOrRegisterMeter returns an existing Meter or constructs and registers a
// new StandardMeter.
func GetOrRegisterMeter(name string, r Registry) Meter {
if nil == r {
r = DefaultRegistry
}
return r.GetOrRegister(name, NewMeter).(Meter)
}
// NewMeter constructs a new StandardMeter and launches a goroutine.
func NewMeter() Meter {
if UseNilMetrics {
return NilMeter{}
}
m := newStandardMeter()
arbiter.Lock()
defer arbiter.Unlock()
arbiter.meters = append(arbiter.meters, m)
if !arbiter.started {
arbiter.started = true
go arbiter.tick()
}
return m
}
// NewRegisteredMeter constructs and registers a new StandardMeter and
// launches a goroutine.
func NewRegisteredMeter(name string, r Registry) Meter {
c := NewMeter()
if nil == r {
r = DefaultRegistry
}
r.Register(name, c)
return c
}
// MeterSnapshot is a read-only copy of another Meter.
type MeterSnapshot struct {
count int64
rate1, rate5, rate15, rateMean float64
}
// Count returns the count of events at the time the snapshot was taken.
func (m *MeterSnapshot) Count() int64 { return m.count }
// Mark panics.
func (*MeterSnapshot) Mark(n int64) {
panic("Mark called on a MeterSnapshot")
}
// Rate1 returns the one-minute moving average rate of events per second at the
// time the snapshot was taken.
func (m *MeterSnapshot) Rate1() float64 { return m.rate1 }
// Rate5 returns the five-minute moving average rate of events per second at
// the time the snapshot was taken.
func (m *MeterSnapshot) Rate5() float64 { return m.rate5 }
// Rate15 returns the fifteen-minute moving average rate of events per second
// at the time the snapshot was taken.
func (m *MeterSnapshot) Rate15() float64 { return m.rate15 }
// RateMean returns the meter's mean rate of events per second at the time the
// snapshot was taken.
func (m *MeterSnapshot) RateMean() float64 { return m.rateMean }
// Snapshot returns the snapshot.
func (m *MeterSnapshot) Snapshot() Meter { return m }
// NilMeter is a no-op Meter.
type NilMeter struct{}
// Count is a no-op.
func (NilMeter) Count() int64 { return 0 }
// Mark is a no-op.
func (NilMeter) Mark(n int64) {}
// Rate1 is a no-op.
func (NilMeter) Rate1() float64 { return 0.0 }
// Rate5 is a no-op.
func (NilMeter) Rate5() float64 { return 0.0 }
// Rate15 is a no-op.
func (NilMeter) Rate15() float64 { return 0.0 }
// RateMean is a no-op.
func (NilMeter) RateMean() float64 { return 0.0 }
// Snapshot is a no-op.
func (NilMeter) Snapshot() Meter { return NilMeter{} }
// StandardMeter is the standard implementation of a Meter.
type StandardMeter struct {
lock sync.RWMutex
snapshot *MeterSnapshot
a1, a5, a15 EWMA
startTime time.Time
}
func newStandardMeter() *StandardMeter {
return &StandardMeter{
snapshot: &MeterSnapshot{},
a1: NewEWMA1(),
a5: NewEWMA5(),
a15: NewEWMA15(),
startTime: time.Now(),
}
}
// Count returns the number of events recorded.
func (m *StandardMeter) Count() int64 {
m.lock.RLock()
count := m.snapshot.count
m.lock.RUnlock()
return count
}
// Mark records the occurrence of n events.
func (m *StandardMeter) Mark(n int64) {
m.lock.Lock()
defer m.lock.Unlock()
m.snapshot.count += n
m.a1.Update(n)
m.a5.Update(n)
m.a15.Update(n)
m.updateSnapshot()
}
// Rate1 returns the one-minute moving average rate of events per second.
func (m *StandardMeter) Rate1() float64 {
m.lock.RLock()
rate1 := m.snapshot.rate1
m.lock.RUnlock()
return rate1
}
// Rate5 returns the five-minute moving average rate of events per second.
func (m *StandardMeter) Rate5() float64 {
m.lock.RLock()
rate5 := m.snapshot.rate5
m.lock.RUnlock()
return rate5
}
// Rate15 returns the fifteen-minute moving average rate of events per second.
func (m *StandardMeter) Rate15() float64 {
m.lock.RLock()
rate15 := m.snapshot.rate15
m.lock.RUnlock()
return rate15
}
// RateMean returns the meter's mean rate of events per second.
func (m *StandardMeter) RateMean() float64 {
m.lock.RLock()
rateMean := m.snapshot.rateMean
m.lock.RUnlock()
return rateMean
}
// Snapshot returns a read-only copy of the meter.
func (m *StandardMeter) Snapshot() Meter {
m.lock.RLock()
snapshot := *m.snapshot
m.lock.RUnlock()
return &snapshot
}
func (m *StandardMeter) updateSnapshot() {
// should run with write lock held on m.lock
snapshot := m.snapshot
snapshot.rate1 = m.a1.Rate()
snapshot.rate5 = m.a5.Rate()
snapshot.rate15 = m.a15.Rate()
snapshot.rateMean = float64(snapshot.count) / time.Since(m.startTime).Seconds()
}
func (m *StandardMeter) tick() {
m.lock.Lock()
defer m.lock.Unlock()
m.a1.Tick()
m.a5.Tick()
m.a15.Tick()
m.updateSnapshot()
}
type meterArbiter struct {
sync.RWMutex
started bool
meters []*StandardMeter
ticker *time.Ticker
}
var arbiter = meterArbiter{ticker: time.NewTicker(5 * time.Second)}
// Ticks meters on the scheduled interval
func (ma *meterArbiter) tick() {
for range ma.ticker.C {
ma.tickMeters()
}
}
func (ma *meterArbiter) tickMeters() {
ma.RLock()
defer ma.RUnlock()
for _, meter := range ma.meters {
meter.tick()
}
}
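
Editor's note: a minimal sketch of the Meter API above. Constructing the first meter starts the shared five-second arbiter goroutine that ticks the EWMAs.

```go
package main

import (
    "fmt"
    "time"

    "github.com/rcrowley/go-metrics"
)

func main() {
    m := metrics.GetOrRegisterMeter("requests", nil) // nil registry -> DefaultRegistry
    for i := 0; i < 100; i++ {
        m.Mark(1) // record one event
    }
    time.Sleep(6 * time.Second) // let the arbiter tick the EWMAs once
    fmt.Println(m.Count(), m.Rate1(), m.RateMean())
}
```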

View File

@@ -0,0 +1,13 @@
// Go port of Coda Hale's Metrics library
//
// <https://github.com/rcrowley/go-metrics>
//
// Coda Hale's original work: <https://github.com/codahale/metrics>
package metrics
// UseNilMetrics is checked by the constructor functions for all of the
// standard metrics. If it is true, the metric returned is a stub.
//
// This global kill-switch helps quantify the observer effect and makes
// for less cluttered pprof profiles.
var UseNilMetrics bool = false
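
Editor's note: a sketch of the kill-switch in action. Set UseNilMetrics before constructing any metrics and every constructor hands back a no-op stub.

```go
package main

import (
    "fmt"

    "github.com/rcrowley/go-metrics"
)

func main() {
    metrics.UseNilMetrics = true // must be set before constructors run
    h := metrics.NewHistogram(metrics.NewUniformSample(1028))
    h.Update(42)           // silently discarded
    fmt.Println(h.Count()) // 0: h is a NilHistogram
}
```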

View File

@@ -0,0 +1,119 @@
package metrics
import (
"bufio"
"fmt"
"log"
"net"
"os"
"strings"
"time"
)
var shortHostName string
// OpenTSDBConfig provides a container with configuration parameters for
// the OpenTSDB exporter
type OpenTSDBConfig struct {
Addr *net.TCPAddr // Network address to connect to
Registry Registry // Registry to be exported
FlushInterval time.Duration // Flush interval
DurationUnit time.Duration // Time conversion unit for durations
Prefix string // Prefix to be prepended to metric names
}
// OpenTSDB is a blocking exporter function which reports metrics in r
// to a TSDB server located at addr, flushing them every d duration
// and prepending metric names with prefix.
func OpenTSDB(r Registry, d time.Duration, prefix string, addr *net.TCPAddr) {
OpenTSDBWithConfig(OpenTSDBConfig{
Addr: addr,
Registry: r,
FlushInterval: d,
DurationUnit: time.Nanosecond,
Prefix: prefix,
})
}
// OpenTSDBWithConfig is a blocking exporter function just like OpenTSDB,
// but it takes an OpenTSDBConfig instead.
func OpenTSDBWithConfig(c OpenTSDBConfig) {
for range time.Tick(c.FlushInterval) {
if err := openTSDB(&c); nil != err {
log.Println(err)
}
}
}
func getShortHostname() string {
if shortHostName == "" {
host, _ := os.Hostname()
if index := strings.Index(host, "."); index > 0 {
shortHostName = host[:index]
} else {
shortHostName = host
}
}
return shortHostName
}
func openTSDB(c *OpenTSDBConfig) error {
shortHostname := getShortHostname()
now := time.Now().Unix()
du := float64(c.DurationUnit)
conn, err := net.DialTCP("tcp", nil, c.Addr)
if nil != err {
return err
}
defer conn.Close()
w := bufio.NewWriter(conn)
c.Registry.Each(func(name string, i interface{}) {
switch metric := i.(type) {
case Counter:
fmt.Fprintf(w, "put %s.%s.count %d %d host=%s\n", c.Prefix, name, now, metric.Count(), shortHostname)
case Gauge:
fmt.Fprintf(w, "put %s.%s.value %d %d host=%s\n", c.Prefix, name, now, metric.Value(), shortHostname)
case GaugeFloat64:
fmt.Fprintf(w, "put %s.%s.value %d %f host=%s\n", c.Prefix, name, now, metric.Value(), shortHostname)
case Histogram:
h := metric.Snapshot()
ps := h.Percentiles([]float64{0.5, 0.75, 0.95, 0.99, 0.999})
fmt.Fprintf(w, "put %s.%s.count %d %d host=%s\n", c.Prefix, name, now, h.Count(), shortHostname)
fmt.Fprintf(w, "put %s.%s.min %d %d host=%s\n", c.Prefix, name, now, h.Min(), shortHostname)
fmt.Fprintf(w, "put %s.%s.max %d %d host=%s\n", c.Prefix, name, now, h.Max(), shortHostname)
fmt.Fprintf(w, "put %s.%s.mean %d %.2f host=%s\n", c.Prefix, name, now, h.Mean(), shortHostname)
fmt.Fprintf(w, "put %s.%s.std-dev %d %.2f host=%s\n", c.Prefix, name, now, h.StdDev(), shortHostname)
fmt.Fprintf(w, "put %s.%s.50-percentile %d %.2f host=%s\n", c.Prefix, name, now, ps[0], shortHostname)
fmt.Fprintf(w, "put %s.%s.75-percentile %d %.2f host=%s\n", c.Prefix, name, now, ps[1], shortHostname)
fmt.Fprintf(w, "put %s.%s.95-percentile %d %.2f host=%s\n", c.Prefix, name, now, ps[2], shortHostname)
fmt.Fprintf(w, "put %s.%s.99-percentile %d %.2f host=%s\n", c.Prefix, name, now, ps[3], shortHostname)
fmt.Fprintf(w, "put %s.%s.999-percentile %d %.2f host=%s\n", c.Prefix, name, now, ps[4], shortHostname)
case Meter:
m := metric.Snapshot()
fmt.Fprintf(w, "put %s.%s.count %d %d host=%s\n", c.Prefix, name, now, m.Count(), shortHostname)
fmt.Fprintf(w, "put %s.%s.one-minute %d %.2f host=%s\n", c.Prefix, name, now, m.Rate1(), shortHostname)
fmt.Fprintf(w, "put %s.%s.five-minute %d %.2f host=%s\n", c.Prefix, name, now, m.Rate5(), shortHostname)
fmt.Fprintf(w, "put %s.%s.fifteen-minute %d %.2f host=%s\n", c.Prefix, name, now, m.Rate15(), shortHostname)
fmt.Fprintf(w, "put %s.%s.mean %d %.2f host=%s\n", c.Prefix, name, now, m.RateMean(), shortHostname)
case Timer:
t := metric.Snapshot()
ps := t.Percentiles([]float64{0.5, 0.75, 0.95, 0.99, 0.999})
fmt.Fprintf(w, "put %s.%s.count %d %d host=%s\n", c.Prefix, name, now, t.Count(), shortHostname)
fmt.Fprintf(w, "put %s.%s.min %d %d host=%s\n", c.Prefix, name, now, t.Min()/int64(du), shortHostname)
fmt.Fprintf(w, "put %s.%s.max %d %d host=%s\n", c.Prefix, name, now, t.Max()/int64(du), shortHostname)
fmt.Fprintf(w, "put %s.%s.mean %d %.2f host=%s\n", c.Prefix, name, now, t.Mean()/du, shortHostname)
fmt.Fprintf(w, "put %s.%s.std-dev %d %.2f host=%s\n", c.Prefix, name, now, t.StdDev()/du, shortHostname)
fmt.Fprintf(w, "put %s.%s.50-percentile %d %.2f host=%s\n", c.Prefix, name, now, ps[0]/du, shortHostname)
fmt.Fprintf(w, "put %s.%s.75-percentile %d %.2f host=%s\n", c.Prefix, name, now, ps[1]/du, shortHostname)
fmt.Fprintf(w, "put %s.%s.95-percentile %d %.2f host=%s\n", c.Prefix, name, now, ps[2]/du, shortHostname)
fmt.Fprintf(w, "put %s.%s.99-percentile %d %.2f host=%s\n", c.Prefix, name, now, ps[3]/du, shortHostname)
fmt.Fprintf(w, "put %s.%s.999-percentile %d %.2f host=%s\n", c.Prefix, name, now, ps[4]/du, shortHostname)
fmt.Fprintf(w, "put %s.%s.one-minute %d %.2f host=%s\n", c.Prefix, name, now, t.Rate1(), shortHostname)
fmt.Fprintf(w, "put %s.%s.five-minute %d %.2f host=%s\n", c.Prefix, name, now, t.Rate5(), shortHostname)
fmt.Fprintf(w, "put %s.%s.fifteen-minute %d %.2f host=%s\n", c.Prefix, name, now, t.Rate15(), shortHostname)
fmt.Fprintf(w, "put %s.%s.mean-rate %d %.2f host=%s\n", c.Prefix, name, now, t.RateMean(), shortHostname)
}
w.Flush()
})
return nil
}
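
Editor's note: a hedged sketch of the OpenTSDB exporter above. The address is a placeholder, and OpenTSDB blocks on its flush ticker, so it is started in a goroutine.

```go
package main

import (
    "log"
    "net"
    "time"

    "github.com/rcrowley/go-metrics"
)

func main() {
    addr, err := net.ResolveTCPAddr("tcp", "tsdb.example.com:4242") // placeholder host
    if err != nil {
        log.Fatal(err)
    }
    go metrics.OpenTSDB(metrics.DefaultRegistry, 10*time.Second, "myapp", addr)
    select {}
}
```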

View File

@@ -0,0 +1,240 @@
package metrics
import (
"fmt"
"reflect"
"sync"
)
// DuplicateMetric is the error returned by Registry.Register when a metric
// already exists. If you mean to Register that metric you must first
// Unregister the existing metric.
type DuplicateMetric string
func (err DuplicateMetric) Error() string {
return fmt.Sprintf("duplicate metric: %s", string(err))
}
// A Registry holds references to a set of metrics by name and can iterate
// over them, calling callback functions provided by the user.
//
// This is an interface so as to encourage other structs to implement
// the Registry API as appropriate.
type Registry interface {
// Call the given function for each registered metric.
Each(func(string, interface{}))
// Get the metric by the given name or nil if none is registered.
Get(string) interface{}
// Gets an existing metric or registers the given one.
// The interface can be the metric to register if not found in registry,
// or a function returning the metric for lazy instantiation.
GetOrRegister(string, interface{}) interface{}
// Register the given metric under the given name.
Register(string, interface{}) error
// Run all registered healthchecks.
RunHealthchecks()
// Unregister the metric with the given name.
Unregister(string)
// Unregister all metrics. (Mostly for testing.)
UnregisterAll()
}
// The standard implementation of a Registry is a mutex-protected map
// of names to metrics.
type StandardRegistry struct {
metrics map[string]interface{}
mutex sync.Mutex
}
// Create a new registry.
func NewRegistry() Registry {
return &StandardRegistry{metrics: make(map[string]interface{})}
}
// Call the given function for each registered metric.
func (r *StandardRegistry) Each(f func(string, interface{})) {
for name, i := range r.registered() {
f(name, i)
}
}
// Get the metric by the given name or nil if none is registered.
func (r *StandardRegistry) Get(name string) interface{} {
r.mutex.Lock()
defer r.mutex.Unlock()
return r.metrics[name]
}
// Gets an existing metric or creates and registers a new one. Threadsafe
// alternative to calling Get and Register on failure.
// The interface can be the metric to register if not found in registry,
// or a function returning the metric for lazy instantiation.
func (r *StandardRegistry) GetOrRegister(name string, i interface{}) interface{} {
r.mutex.Lock()
defer r.mutex.Unlock()
if metric, ok := r.metrics[name]; ok {
return metric
}
if v := reflect.ValueOf(i); v.Kind() == reflect.Func {
i = v.Call(nil)[0].Interface()
}
r.register(name, i)
return i
}
// Register the given metric under the given name. Returns a DuplicateMetric
// if a metric by the given name is already registered.
func (r *StandardRegistry) Register(name string, i interface{}) error {
r.mutex.Lock()
defer r.mutex.Unlock()
return r.register(name, i)
}
// Run all registered healthchecks.
func (r *StandardRegistry) RunHealthchecks() {
r.mutex.Lock()
defer r.mutex.Unlock()
for _, i := range r.metrics {
if h, ok := i.(Healthcheck); ok {
h.Check()
}
}
}
// Unregister the metric with the given name.
func (r *StandardRegistry) Unregister(name string) {
r.mutex.Lock()
defer r.mutex.Unlock()
delete(r.metrics, name)
}
// Unregister all metrics. (Mostly for testing.)
func (r *StandardRegistry) UnregisterAll() {
r.mutex.Lock()
defer r.mutex.Unlock()
for name := range r.metrics {
delete(r.metrics, name)
}
}
func (r *StandardRegistry) register(name string, i interface{}) error {
if _, ok := r.metrics[name]; ok {
return DuplicateMetric(name)
}
switch i.(type) {
case Counter, Gauge, GaugeFloat64, Healthcheck, Histogram, Meter, Timer:
r.metrics[name] = i
}
return nil
}
func (r *StandardRegistry) registered() map[string]interface{} {
r.mutex.Lock()
defer r.mutex.Unlock()
metrics := make(map[string]interface{}, len(r.metrics))
for name, i := range r.metrics {
metrics[name] = i
}
return metrics
}
type PrefixedRegistry struct {
underlying Registry
prefix string
}
func NewPrefixedRegistry(prefix string) Registry {
return &PrefixedRegistry{
underlying: NewRegistry(),
prefix: prefix,
}
}
// Call the given function for each registered metric.
func (r *PrefixedRegistry) Each(fn func(string, interface{})) {
r.underlying.Each(fn)
}
// Get the metric by the given name or nil if none is registered.
func (r *PrefixedRegistry) Get(name string) interface{} {
return r.underlying.Get(name)
}
// Gets an existing metric or registers the given one.
// The interface can be the metric to register if not found in registry,
// or a function returning the metric for lazy instantiation.
func (r *PrefixedRegistry) GetOrRegister(name string, metric interface{}) interface{} {
realName := r.prefix + name
return r.underlying.GetOrRegister(realName, metric)
}
// Register the given metric under the given name. The name will be prefixed.
func (r *PrefixedRegistry) Register(name string, metric interface{}) error {
realName := r.prefix + name
return r.underlying.Register(realName, metric)
}
// Run all registered healthchecks.
func (r *PrefixedRegistry) RunHealthchecks() {
r.underlying.RunHealthchecks()
}
// Unregister the metric with the given name. The name will be prefixed.
func (r *PrefixedRegistry) Unregister(name string) {
realName := r.prefix + name
r.underlying.Unregister(realName)
}
// Unregister all metrics. (Mostly for testing.)
func (r *PrefixedRegistry) UnregisterAll() {
r.underlying.UnregisterAll()
}
var DefaultRegistry Registry = NewRegistry()
// Call the given function for each registered metric.
func Each(f func(string, interface{})) {
DefaultRegistry.Each(f)
}
// Get the metric by the given name or nil if none is registered.
func Get(name string) interface{} {
return DefaultRegistry.Get(name)
}
// Gets an existing metric or creates and registers a new one. Threadsafe
// alternative to calling Get and Register on failure.
func GetOrRegister(name string, i interface{}) interface{} {
return DefaultRegistry.GetOrRegister(name, i)
}
// Register the given metric under the given name. Returns a DuplicateMetric
// if a metric by the given name is already registered.
func Register(name string, i interface{}) error {
return DefaultRegistry.Register(name, i)
}
// Register the given metric under the given name. Panics if a metric by the
// given name is already registered.
func MustRegister(name string, i interface{}) {
if err := Register(name, i); err != nil {
panic(err)
}
}
// Run all registered healthchecks.
func RunHealthchecks() {
DefaultRegistry.RunHealthchecks()
}
// Unregister the metric with the given name.
func Unregister(name string) {
DefaultRegistry.Unregister(name)
}
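
Editor's note: a short sketch of the registry API above, including lazy instantiation through GetOrRegister and the prefixing wrapper. Note that in this version only GetOrRegister, Register, and Unregister apply the prefix; Get and Each pass through with the already-prefixed names.

```go
package main

import (
    "fmt"

    "github.com/rcrowley/go-metrics"
)

func main() {
    r := metrics.NewPrefixedRegistry("myapp.")

    // Passing a constructor defers allocation until the name is missing.
    c := r.GetOrRegister("jobs", metrics.NewCounter).(metrics.Counter)
    c.Inc(1)

    fmt.Println(r.Get("myapp.jobs") != nil) // true: the stored name is prefixed
    r.Each(func(name string, i interface{}) {
        fmt.Println(name) // myapp.jobs
    })
}
```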

View File

@@ -0,0 +1,200 @@
package metrics
import (
"runtime"
"time"
)
var (
memStats runtime.MemStats
runtimeMetrics struct {
MemStats struct {
Alloc Gauge
BuckHashSys Gauge
DebugGC Gauge
EnableGC Gauge
Frees Gauge
HeapAlloc Gauge
HeapIdle Gauge
HeapInuse Gauge
HeapObjects Gauge
HeapReleased Gauge
HeapSys Gauge
LastGC Gauge
Lookups Gauge
Mallocs Gauge
MCacheInuse Gauge
MCacheSys Gauge
MSpanInuse Gauge
MSpanSys Gauge
NextGC Gauge
NumGC Gauge
PauseNs Histogram
PauseTotalNs Gauge
StackInuse Gauge
StackSys Gauge
Sys Gauge
TotalAlloc Gauge
}
NumCgoCall Gauge
NumGoroutine Gauge
ReadMemStats Timer
}
frees uint64
lookups uint64
mallocs uint64
numGC uint32
numCgoCalls int64
)
// Capture new values for the Go runtime statistics exported in
// runtime.MemStats. This is designed to be called as a goroutine.
func CaptureRuntimeMemStats(r Registry, d time.Duration) {
for range time.Tick(d) {
CaptureRuntimeMemStatsOnce(r)
}
}
// Capture new values for the Go runtime statistics exported in
// runtime.MemStats. This is designed to be called in a background
// goroutine. Passing a registry that has not first been given to
// RegisterRuntimeMemStats will panic.
//
// Be very careful with this because runtime.ReadMemStats calls the C
// functions runtime·semacquire(&runtime·worldsema) and runtime·stoptheworld()
// and that last one does what it says on the tin.
func CaptureRuntimeMemStatsOnce(r Registry) {
t := time.Now()
runtime.ReadMemStats(&memStats) // This takes 50-200us.
runtimeMetrics.ReadMemStats.UpdateSince(t)
runtimeMetrics.MemStats.Alloc.Update(int64(memStats.Alloc))
runtimeMetrics.MemStats.BuckHashSys.Update(int64(memStats.BuckHashSys))
if memStats.DebugGC {
runtimeMetrics.MemStats.DebugGC.Update(1)
} else {
runtimeMetrics.MemStats.DebugGC.Update(0)
}
if memStats.EnableGC {
runtimeMetrics.MemStats.EnableGC.Update(1)
} else {
runtimeMetrics.MemStats.EnableGC.Update(0)
}
runtimeMetrics.MemStats.Frees.Update(int64(memStats.Frees - frees))
runtimeMetrics.MemStats.HeapAlloc.Update(int64(memStats.HeapAlloc))
runtimeMetrics.MemStats.HeapIdle.Update(int64(memStats.HeapIdle))
runtimeMetrics.MemStats.HeapInuse.Update(int64(memStats.HeapInuse))
runtimeMetrics.MemStats.HeapObjects.Update(int64(memStats.HeapObjects))
runtimeMetrics.MemStats.HeapReleased.Update(int64(memStats.HeapReleased))
runtimeMetrics.MemStats.HeapSys.Update(int64(memStats.HeapSys))
runtimeMetrics.MemStats.LastGC.Update(int64(memStats.LastGC))
runtimeMetrics.MemStats.Lookups.Update(int64(memStats.Lookups - lookups))
runtimeMetrics.MemStats.Mallocs.Update(int64(memStats.Mallocs - mallocs))
runtimeMetrics.MemStats.MCacheInuse.Update(int64(memStats.MCacheInuse))
runtimeMetrics.MemStats.MCacheSys.Update(int64(memStats.MCacheSys))
runtimeMetrics.MemStats.MSpanInuse.Update(int64(memStats.MSpanInuse))
runtimeMetrics.MemStats.MSpanSys.Update(int64(memStats.MSpanSys))
runtimeMetrics.MemStats.NextGC.Update(int64(memStats.NextGC))
runtimeMetrics.MemStats.NumGC.Update(int64(memStats.NumGC - numGC))
// <https://code.google.com/p/go/source/browse/src/pkg/runtime/mgc0.c>
i := numGC % uint32(len(memStats.PauseNs))
ii := memStats.NumGC % uint32(len(memStats.PauseNs))
if memStats.NumGC-numGC >= uint32(len(memStats.PauseNs)) {
for i = 0; i < uint32(len(memStats.PauseNs)); i++ {
runtimeMetrics.MemStats.PauseNs.Update(int64(memStats.PauseNs[i]))
}
} else {
if i > ii {
for ; i < uint32(len(memStats.PauseNs)); i++ {
runtimeMetrics.MemStats.PauseNs.Update(int64(memStats.PauseNs[i]))
}
i = 0
}
for ; i < ii; i++ {
runtimeMetrics.MemStats.PauseNs.Update(int64(memStats.PauseNs[i]))
}
}
frees = memStats.Frees
lookups = memStats.Lookups
mallocs = memStats.Mallocs
numGC = memStats.NumGC
runtimeMetrics.MemStats.PauseTotalNs.Update(int64(memStats.PauseTotalNs))
runtimeMetrics.MemStats.StackInuse.Update(int64(memStats.StackInuse))
runtimeMetrics.MemStats.StackSys.Update(int64(memStats.StackSys))
runtimeMetrics.MemStats.Sys.Update(int64(memStats.Sys))
runtimeMetrics.MemStats.TotalAlloc.Update(int64(memStats.TotalAlloc))
currentNumCgoCalls := numCgoCall()
runtimeMetrics.NumCgoCall.Update(currentNumCgoCalls - numCgoCalls)
numCgoCalls = currentNumCgoCalls
runtimeMetrics.NumGoroutine.Update(int64(runtime.NumGoroutine()))
}
// Register runtimeMetrics for the Go runtime statistics exported in runtime and
// specifically runtime.MemStats. The runtimeMetrics are named by their
// fully-qualified Go symbols, e.g. runtime.MemStats.Alloc.
func RegisterRuntimeMemStats(r Registry) {
runtimeMetrics.MemStats.Alloc = NewGauge()
runtimeMetrics.MemStats.BuckHashSys = NewGauge()
runtimeMetrics.MemStats.DebugGC = NewGauge()
runtimeMetrics.MemStats.EnableGC = NewGauge()
runtimeMetrics.MemStats.Frees = NewGauge()
runtimeMetrics.MemStats.HeapAlloc = NewGauge()
runtimeMetrics.MemStats.HeapIdle = NewGauge()
runtimeMetrics.MemStats.HeapInuse = NewGauge()
runtimeMetrics.MemStats.HeapObjects = NewGauge()
runtimeMetrics.MemStats.HeapReleased = NewGauge()
runtimeMetrics.MemStats.HeapSys = NewGauge()
runtimeMetrics.MemStats.LastGC = NewGauge()
runtimeMetrics.MemStats.Lookups = NewGauge()
runtimeMetrics.MemStats.Mallocs = NewGauge()
runtimeMetrics.MemStats.MCacheInuse = NewGauge()
runtimeMetrics.MemStats.MCacheSys = NewGauge()
runtimeMetrics.MemStats.MSpanInuse = NewGauge()
runtimeMetrics.MemStats.MSpanSys = NewGauge()
runtimeMetrics.MemStats.NextGC = NewGauge()
runtimeMetrics.MemStats.NumGC = NewGauge()
runtimeMetrics.MemStats.PauseNs = NewHistogram(NewExpDecaySample(1028, 0.015))
runtimeMetrics.MemStats.PauseTotalNs = NewGauge()
runtimeMetrics.MemStats.StackInuse = NewGauge()
runtimeMetrics.MemStats.StackSys = NewGauge()
runtimeMetrics.MemStats.Sys = NewGauge()
runtimeMetrics.MemStats.TotalAlloc = NewGauge()
runtimeMetrics.NumCgoCall = NewGauge()
runtimeMetrics.NumGoroutine = NewGauge()
runtimeMetrics.ReadMemStats = NewTimer()
r.Register("runtime.MemStats.Alloc", runtimeMetrics.MemStats.Alloc)
r.Register("runtime.MemStats.BuckHashSys", runtimeMetrics.MemStats.BuckHashSys)
r.Register("runtime.MemStats.DebugGC", runtimeMetrics.MemStats.DebugGC)
r.Register("runtime.MemStats.EnableGC", runtimeMetrics.MemStats.EnableGC)
r.Register("runtime.MemStats.Frees", runtimeMetrics.MemStats.Frees)
r.Register("runtime.MemStats.HeapAlloc", runtimeMetrics.MemStats.HeapAlloc)
r.Register("runtime.MemStats.HeapIdle", runtimeMetrics.MemStats.HeapIdle)
r.Register("runtime.MemStats.HeapInuse", runtimeMetrics.MemStats.HeapInuse)
r.Register("runtime.MemStats.HeapObjects", runtimeMetrics.MemStats.HeapObjects)
r.Register("runtime.MemStats.HeapReleased", runtimeMetrics.MemStats.HeapReleased)
r.Register("runtime.MemStats.HeapSys", runtimeMetrics.MemStats.HeapSys)
r.Register("runtime.MemStats.LastGC", runtimeMetrics.MemStats.LastGC)
r.Register("runtime.MemStats.Lookups", runtimeMetrics.MemStats.Lookups)
r.Register("runtime.MemStats.Mallocs", runtimeMetrics.MemStats.Mallocs)
r.Register("runtime.MemStats.MCacheInuse", runtimeMetrics.MemStats.MCacheInuse)
r.Register("runtime.MemStats.MCacheSys", runtimeMetrics.MemStats.MCacheSys)
r.Register("runtime.MemStats.MSpanInuse", runtimeMetrics.MemStats.MSpanInuse)
r.Register("runtime.MemStats.MSpanSys", runtimeMetrics.MemStats.MSpanSys)
r.Register("runtime.MemStats.NextGC", runtimeMetrics.MemStats.NextGC)
r.Register("runtime.MemStats.NumGC", runtimeMetrics.MemStats.NumGC)
r.Register("runtime.MemStats.PauseNs", runtimeMetrics.MemStats.PauseNs)
r.Register("runtime.MemStats.PauseTotalNs", runtimeMetrics.MemStats.PauseTotalNs)
r.Register("runtime.MemStats.StackInuse", runtimeMetrics.MemStats.StackInuse)
r.Register("runtime.MemStats.StackSys", runtimeMetrics.MemStats.StackSys)
r.Register("runtime.MemStats.Sys", runtimeMetrics.MemStats.Sys)
r.Register("runtime.MemStats.TotalAlloc", runtimeMetrics.MemStats.TotalAlloc)
r.Register("runtime.NumCgoCall", runtimeMetrics.NumCgoCall)
r.Register("runtime.NumGoroutine", runtimeMetrics.NumGoroutine)
r.Register("runtime.ReadMemStats", runtimeMetrics.ReadMemStats)
}
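
Editor's note: a sketch of wiring up the runtime metrics. Register first (capturing with an unregistered registry panics, per the comment above), then start the blocking capture loop in a goroutine.

```go
package main

import (
    "time"

    "github.com/rcrowley/go-metrics"
)

func main() {
    r := metrics.DefaultRegistry
    metrics.RegisterRuntimeMemStats(r)                   // must precede capture
    go metrics.CaptureRuntimeMemStats(r, 5*time.Second)  // blocking loop in a goroutine
    time.Sleep(6 * time.Second)
}
```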

View File

@@ -0,0 +1,10 @@
// +build cgo
// +build !appengine
package metrics
import "runtime"
func numCgoCall() int64 {
return runtime.NumCgoCall()
}

View File

@@ -0,0 +1,7 @@
// +build !cgo appengine
package metrics
func numCgoCall() int64 {
return 0
}

View File

@@ -0,0 +1,609 @@
package metrics
import (
"math"
"math/rand"
"sort"
"sync"
"time"
)
const rescaleThreshold = time.Hour
// Samples maintain a statistically-significant selection of values from
// a stream.
type Sample interface {
Clear()
Count() int64
Max() int64
Mean() float64
Min() int64
Percentile(float64) float64
Percentiles([]float64) []float64
Size() int
Snapshot() Sample
StdDev() float64
Sum() int64
Update(int64)
Values() []int64
Variance() float64
}
// ExpDecaySample is an exponentially-decaying sample using a forward-decaying
// priority reservoir. See Cormode et al's "Forward Decay: A Practical Time
// Decay Model for Streaming Systems".
//
// <http://www.research.att.com/people/Cormode_Graham/library/publications/CormodeShkapenyukSrivastavaXu09.pdf>
type ExpDecaySample struct {
alpha float64
count int64
mutex sync.Mutex
reservoirSize int
t0, t1 time.Time
values *expDecaySampleHeap
}
// NewExpDecaySample constructs a new exponentially-decaying sample with the
// given reservoir size and alpha.
func NewExpDecaySample(reservoirSize int, alpha float64) Sample {
if UseNilMetrics {
return NilSample{}
}
s := &ExpDecaySample{
alpha: alpha,
reservoirSize: reservoirSize,
t0: time.Now(),
values: newExpDecaySampleHeap(reservoirSize),
}
s.t1 = s.t0.Add(rescaleThreshold)
return s
}
// Clear clears all samples.
func (s *ExpDecaySample) Clear() {
s.mutex.Lock()
defer s.mutex.Unlock()
s.count = 0
s.t0 = time.Now()
s.t1 = s.t0.Add(rescaleThreshold)
s.values.Clear()
}
// Count returns the number of samples recorded, which may exceed the
// reservoir size.
func (s *ExpDecaySample) Count() int64 {
s.mutex.Lock()
defer s.mutex.Unlock()
return s.count
}
// Max returns the maximum value in the sample, which may not be the maximum
// value ever to be part of the sample.
func (s *ExpDecaySample) Max() int64 {
return SampleMax(s.Values())
}
// Mean returns the mean of the values in the sample.
func (s *ExpDecaySample) Mean() float64 {
return SampleMean(s.Values())
}
// Min returns the minimum value in the sample, which may not be the minimum
// value ever to be part of the sample.
func (s *ExpDecaySample) Min() int64 {
return SampleMin(s.Values())
}
// Percentile returns an arbitrary percentile of values in the sample.
func (s *ExpDecaySample) Percentile(p float64) float64 {
return SamplePercentile(s.Values(), p)
}
// Percentiles returns a slice of arbitrary percentiles of values in the
// sample.
func (s *ExpDecaySample) Percentiles(ps []float64) []float64 {
return SamplePercentiles(s.Values(), ps)
}
// Size returns the size of the sample, which is at most the reservoir size.
func (s *ExpDecaySample) Size() int {
s.mutex.Lock()
defer s.mutex.Unlock()
return s.values.Size()
}
// Snapshot returns a read-only copy of the sample.
func (s *ExpDecaySample) Snapshot() Sample {
s.mutex.Lock()
defer s.mutex.Unlock()
vals := s.values.Values()
values := make([]int64, len(vals))
for i, v := range vals {
values[i] = v.v
}
return &SampleSnapshot{
count: s.count,
values: values,
}
}
// StdDev returns the standard deviation of the values in the sample.
func (s *ExpDecaySample) StdDev() float64 {
return SampleStdDev(s.Values())
}
// Sum returns the sum of the values in the sample.
func (s *ExpDecaySample) Sum() int64 {
return SampleSum(s.Values())
}
// Update samples a new value.
func (s *ExpDecaySample) Update(v int64) {
s.update(time.Now(), v)
}
// Values returns a copy of the values in the sample.
func (s *ExpDecaySample) Values() []int64 {
s.mutex.Lock()
defer s.mutex.Unlock()
vals := s.values.Values()
values := make([]int64, len(vals))
for i, v := range vals {
values[i] = v.v
}
return values
}
// Variance returns the variance of the values in the sample.
func (s *ExpDecaySample) Variance() float64 {
return SampleVariance(s.Values())
}
// update samples a new value at a particular timestamp. This is a method all
// its own to facilitate testing.
func (s *ExpDecaySample) update(t time.Time, v int64) {
s.mutex.Lock()
defer s.mutex.Unlock()
s.count++
if s.values.Size() == s.reservoirSize {
s.values.Pop()
}
s.values.Push(expDecaySample{
k: math.Exp(t.Sub(s.t0).Seconds()*s.alpha) / rand.Float64(),
v: v,
})
if t.After(s.t1) {
values := s.values.Values()
t0 := s.t0
s.values.Clear()
s.t0 = t
s.t1 = s.t0.Add(rescaleThreshold)
for _, v := range values {
v.k = v.k * math.Exp(-s.alpha*s.t0.Sub(t0).Seconds())
s.values.Push(v)
}
}
}
// NilSample is a no-op Sample.
type NilSample struct{}
// Clear is a no-op.
func (NilSample) Clear() {}
// Count is a no-op.
func (NilSample) Count() int64 { return 0 }
// Max is a no-op.
func (NilSample) Max() int64 { return 0 }
// Mean is a no-op.
func (NilSample) Mean() float64 { return 0.0 }
// Min is a no-op.
func (NilSample) Min() int64 { return 0 }
// Percentile is a no-op.
func (NilSample) Percentile(p float64) float64 { return 0.0 }
// Percentiles is a no-op.
func (NilSample) Percentiles(ps []float64) []float64 {
return make([]float64, len(ps))
}
// Size is a no-op.
func (NilSample) Size() int { return 0 }
// Snapshot is a no-op.
func (NilSample) Snapshot() Sample { return NilSample{} }
// StdDev is a no-op.
func (NilSample) StdDev() float64 { return 0.0 }
// Sum is a no-op.
func (NilSample) Sum() int64 { return 0 }
// Update is a no-op.
func (NilSample) Update(v int64) {}
// Values is a no-op.
func (NilSample) Values() []int64 { return []int64{} }
// Variance is a no-op.
func (NilSample) Variance() float64 { return 0.0 }
// SampleMax returns the maximum value of the slice of int64.
func SampleMax(values []int64) int64 {
if 0 == len(values) {
return 0
}
var max int64 = math.MinInt64
for _, v := range values {
if max < v {
max = v
}
}
return max
}
// SampleMean returns the mean value of the slice of int64.
func SampleMean(values []int64) float64 {
if 0 == len(values) {
return 0.0
}
return float64(SampleSum(values)) / float64(len(values))
}
// SampleMin returns the minimum value of the slice of int64.
func SampleMin(values []int64) int64 {
if 0 == len(values) {
return 0
}
var min int64 = math.MaxInt64
for _, v := range values {
if min > v {
min = v
}
}
return min
}
// SamplePercentile returns an arbitrary percentile of the slice of int64.
func SamplePercentile(values int64Slice, p float64) float64 {
return SamplePercentiles(values, []float64{p})[0]
}
// SamplePercentiles returns a slice of arbitrary percentiles of the slice of
// int64.
func SamplePercentiles(values int64Slice, ps []float64) []float64 {
scores := make([]float64, len(ps))
size := len(values)
if size > 0 {
sort.Sort(values)
for i, p := range ps {
pos := p * float64(size+1)
if pos < 1.0 {
scores[i] = float64(values[0])
} else if pos >= float64(size) {
scores[i] = float64(values[size-1])
} else {
lower := float64(values[int(pos)-1])
upper := float64(values[int(pos)])
scores[i] = lower + (pos-math.Floor(pos))*(upper-lower)
}
}
}
return scores
}
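// Editor's worked example (hedged, not part of the original source): for
// sorted values {10, 20, 30, 40} and p = 0.5, pos = 0.5*(4+1) = 2.5, which
// falls between values[1] = 20 and values[2] = 30, so the result
// interpolates: 20 + (2.5-2.0)*(30-20) = 25.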
// SampleSnapshot is a read-only copy of another Sample.
type SampleSnapshot struct {
count int64
values []int64
}
// Clear panics.
func (*SampleSnapshot) Clear() {
panic("Clear called on a SampleSnapshot")
}
// Count returns the count of inputs at the time the snapshot was taken.
func (s *SampleSnapshot) Count() int64 { return s.count }
// Max returns the maximal value at the time the snapshot was taken.
func (s *SampleSnapshot) Max() int64 { return SampleMax(s.values) }
// Mean returns the mean value at the time the snapshot was taken.
func (s *SampleSnapshot) Mean() float64 { return SampleMean(s.values) }
// Min returns the minimal value at the time the snapshot was taken.
func (s *SampleSnapshot) Min() int64 { return SampleMin(s.values) }
// Percentile returns an arbitrary percentile of values at the time the
// snapshot was taken.
func (s *SampleSnapshot) Percentile(p float64) float64 {
return SamplePercentile(s.values, p)
}
// Percentiles returns a slice of arbitrary percentiles of values at the time
// the snapshot was taken.
func (s *SampleSnapshot) Percentiles(ps []float64) []float64 {
return SamplePercentiles(s.values, ps)
}
// Size returns the size of the sample at the time the snapshot was taken.
func (s *SampleSnapshot) Size() int { return len(s.values) }
// Snapshot returns the snapshot.
func (s *SampleSnapshot) Snapshot() Sample { return s }
// StdDev returns the standard deviation of values at the time the snapshot was
// taken.
func (s *SampleSnapshot) StdDev() float64 { return SampleStdDev(s.values) }
// Sum returns the sum of values at the time the snapshot was taken.
func (s *SampleSnapshot) Sum() int64 { return SampleSum(s.values) }
// Update panics.
func (*SampleSnapshot) Update(int64) {
panic("Update called on a SampleSnapshot")
}
// Values returns a copy of the values in the sample.
func (s *SampleSnapshot) Values() []int64 {
values := make([]int64, len(s.values))
copy(values, s.values)
return values
}
// Variance returns the variance of values at the time the snapshot was taken.
func (s *SampleSnapshot) Variance() float64 { return SampleVariance(s.values) }
// SampleStdDev returns the standard deviation of the slice of int64.
func SampleStdDev(values []int64) float64 {
return math.Sqrt(SampleVariance(values))
}
// SampleSum returns the sum of the slice of int64.
func SampleSum(values []int64) int64 {
var sum int64
for _, v := range values {
sum += v
}
return sum
}
// SampleVariance returns the variance of the slice of int64.
func SampleVariance(values []int64) float64 {
if 0 == len(values) {
return 0.0
}
m := SampleMean(values)
var sum float64
for _, v := range values {
d := float64(v) - m
sum += d * d
}
return sum / float64(len(values))
}
// UniformSample is a uniform sample using Vitter's Algorithm R.
//
// <http://www.cs.umd.edu/~samir/498/vitter.pdf>
type UniformSample struct {
count int64
mutex sync.Mutex
reservoirSize int
values []int64
}
// NewUniformSample constructs a new uniform sample with the given reservoir
// size.
func NewUniformSample(reservoirSize int) Sample {
if UseNilMetrics {
return NilSample{}
}
return &UniformSample{
reservoirSize: reservoirSize,
values: make([]int64, 0, reservoirSize),
}
}
// Clear clears all samples.
func (s *UniformSample) Clear() {
s.mutex.Lock()
defer s.mutex.Unlock()
s.count = 0
s.values = make([]int64, 0, s.reservoirSize)
}
// Count returns the number of samples recorded, which may exceed the
// reservoir size.
func (s *UniformSample) Count() int64 {
s.mutex.Lock()
defer s.mutex.Unlock()
return s.count
}
// Max returns the maximum value in the sample, which may not be the maximum
// value ever to be part of the sample.
func (s *UniformSample) Max() int64 {
s.mutex.Lock()
defer s.mutex.Unlock()
return SampleMax(s.values)
}
// Mean returns the mean of the values in the sample.
func (s *UniformSample) Mean() float64 {
s.mutex.Lock()
defer s.mutex.Unlock()
return SampleMean(s.values)
}
// Min returns the minimum value in the sample, which may not be the minimum
// value ever to be part of the sample.
func (s *UniformSample) Min() int64 {
s.mutex.Lock()
defer s.mutex.Unlock()
return SampleMin(s.values)
}
// Percentile returns an arbitrary percentile of values in the sample.
func (s *UniformSample) Percentile(p float64) float64 {
s.mutex.Lock()
defer s.mutex.Unlock()
return SamplePercentile(s.values, p)
}
// Percentiles returns a slice of arbitrary percentiles of values in the
// sample.
func (s *UniformSample) Percentiles(ps []float64) []float64 {
s.mutex.Lock()
defer s.mutex.Unlock()
return SamplePercentiles(s.values, ps)
}
// Size returns the size of the sample, which is at most the reservoir size.
func (s *UniformSample) Size() int {
s.mutex.Lock()
defer s.mutex.Unlock()
return len(s.values)
}
// Snapshot returns a read-only copy of the sample.
func (s *UniformSample) Snapshot() Sample {
s.mutex.Lock()
defer s.mutex.Unlock()
values := make([]int64, len(s.values))
copy(values, s.values)
return &SampleSnapshot{
count: s.count,
values: values,
}
}
// StdDev returns the standard deviation of the values in the sample.
func (s *UniformSample) StdDev() float64 {
s.mutex.Lock()
defer s.mutex.Unlock()
return SampleStdDev(s.values)
}
// Sum returns the sum of the values in the sample.
func (s *UniformSample) Sum() int64 {
s.mutex.Lock()
defer s.mutex.Unlock()
return SampleSum(s.values)
}
// Update samples a new value.
func (s *UniformSample) Update(v int64) {
s.mutex.Lock()
defer s.mutex.Unlock()
s.count++
if len(s.values) < s.reservoirSize {
s.values = append(s.values, v)
} else {
r := rand.Int63n(s.count)
if r < int64(len(s.values)) {
s.values[int(r)] = v
}
}
}
// Values returns a copy of the values in the sample.
func (s *UniformSample) Values() []int64 {
s.mutex.Lock()
defer s.mutex.Unlock()
values := make([]int64, len(s.values))
copy(values, s.values)
return values
}
// Variance returns the variance of the values in the sample.
func (s *UniformSample) Variance() float64 {
s.mutex.Lock()
defer s.mutex.Unlock()
return SampleVariance(s.values)
}
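To make the reservoir behaviour concrete, here is a minimal sketch (the sizes are arbitrary): Count keeps growing while Size stays capped, and Algorithm R gives every update an equal chance of surviving in the reservoir.

package main

import (
	"fmt"

	"github.com/rcrowley/go-metrics"
)

func main() {
	s := metrics.NewUniformSample(100)
	for i := int64(0); i < 10000; i++ {
		s.Update(i)
	}
	fmt.Println(s.Count()) // 10000: every update is counted
	fmt.Println(s.Size())  // 100: the reservoir never grows past its size
	// The retained values are a uniform random subset of all 10000 inputs,
	// so Mean() is an unbiased estimate of the mean of everything seen.
	fmt.Println(s.Mean())
}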
// expDecaySample represents an individual sample in a heap.
type expDecaySample struct {
k float64
v int64
}
func newExpDecaySampleHeap(reservoirSize int) *expDecaySampleHeap {
return &expDecaySampleHeap{make([]expDecaySample, 0, reservoirSize)}
}
// expDecaySampleHeap is a min-heap of expDecaySamples.
// The internal implementation is copied from the standard library's container/heap.
type expDecaySampleHeap struct {
s []expDecaySample
}
func (h *expDecaySampleHeap) Clear() {
h.s = h.s[:0]
}
func (h *expDecaySampleHeap) Push(s expDecaySample) {
n := len(h.s)
h.s = h.s[0 : n+1]
h.s[n] = s
h.up(n)
}
func (h *expDecaySampleHeap) Pop() expDecaySample {
n := len(h.s) - 1
h.s[0], h.s[n] = h.s[n], h.s[0]
h.down(0, n)
n = len(h.s)
s := h.s[n-1]
h.s = h.s[0 : n-1]
return s
}
func (h *expDecaySampleHeap) Size() int {
return len(h.s)
}
func (h *expDecaySampleHeap) Values() []expDecaySample {
return h.s
}
func (h *expDecaySampleHeap) up(j int) {
for {
i := (j - 1) / 2 // parent
if i == j || !(h.s[j].k < h.s[i].k) {
break
}
h.s[i], h.s[j] = h.s[j], h.s[i]
j = i
}
}
func (h *expDecaySampleHeap) down(i, n int) {
for {
j1 := 2*i + 1
if j1 >= n || j1 < 0 { // j1 < 0 after int overflow
break
}
j := j1 // left child
if j2 := j1 + 1; j2 < n && !(h.s[j1].k < h.s[j2].k) {
j = j2 // = 2*i + 2 // right child
}
if !(h.s[j].k < h.s[i].k) {
break
}
h.s[i], h.s[j] = h.s[j], h.s[i]
i = j
}
}
type int64Slice []int64
func (p int64Slice) Len() int { return len(p) }
func (p int64Slice) Less(i, j int) bool { return p[i] < p[j] }
func (p int64Slice) Swap(i, j int) { p[i], p[j] = p[j], p[i] }

View File

@@ -0,0 +1,69 @@
// Package stathat provides metrics output to StatHat.
package stathat
import (
"github.com/rcrowley/go-metrics"
"github.com/stathat/go"
"log"
"time"
)
func Stathat(r metrics.Registry, d time.Duration, userkey string) {
for {
if err := sh(r, userkey); nil != err {
log.Println(err)
}
time.Sleep(d)
}
}
func sh(r metrics.Registry, userkey string) error {
r.Each(func(name string, i interface{}) {
switch metric := i.(type) {
case metrics.Counter:
stathat.PostEZCount(name, userkey, int(metric.Count()))
case metrics.Gauge:
stathat.PostEZValue(name, userkey, float64(metric.Value()))
case metrics.GaugeFloat64:
stathat.PostEZValue(name, userkey, float64(metric.Value()))
case metrics.Histogram:
h := metric.Snapshot()
ps := h.Percentiles([]float64{0.5, 0.75, 0.95, 0.99, 0.999})
stathat.PostEZCount(name+".count", userkey, int(h.Count()))
stathat.PostEZValue(name+".min", userkey, float64(h.Min()))
stathat.PostEZValue(name+".max", userkey, float64(h.Max()))
stathat.PostEZValue(name+".mean", userkey, float64(h.Mean()))
stathat.PostEZValue(name+".std-dev", userkey, float64(h.StdDev()))
stathat.PostEZValue(name+".50-percentile", userkey, float64(ps[0]))
stathat.PostEZValue(name+".75-percentile", userkey, float64(ps[1]))
stathat.PostEZValue(name+".95-percentile", userkey, float64(ps[2]))
stathat.PostEZValue(name+".99-percentile", userkey, float64(ps[3]))
stathat.PostEZValue(name+".999-percentile", userkey, float64(ps[4]))
case metrics.Meter:
m := metric.Snapshot()
stathat.PostEZCount(name+".count", userkey, int(m.Count()))
stathat.PostEZValue(name+".one-minute", userkey, float64(m.Rate1()))
stathat.PostEZValue(name+".five-minute", userkey, float64(m.Rate5()))
stathat.PostEZValue(name+".fifteen-minute", userkey, float64(m.Rate15()))
stathat.PostEZValue(name+".mean", userkey, float64(m.RateMean()))
case metrics.Timer:
t := metric.Snapshot()
ps := t.Percentiles([]float64{0.5, 0.75, 0.95, 0.99, 0.999})
stathat.PostEZCount(name+".count", userkey, int(t.Count()))
stathat.PostEZValue(name+".min", userkey, float64(t.Min()))
stathat.PostEZValue(name+".max", userkey, float64(t.Max()))
stathat.PostEZValue(name+".mean", userkey, float64(t.Mean()))
stathat.PostEZValue(name+".std-dev", userkey, float64(t.StdDev()))
stathat.PostEZValue(name+".50-percentile", userkey, float64(ps[0]))
stathat.PostEZValue(name+".75-percentile", userkey, float64(ps[1]))
stathat.PostEZValue(name+".95-percentile", userkey, float64(ps[2]))
stathat.PostEZValue(name+".99-percentile", userkey, float64(ps[3]))
stathat.PostEZValue(name+".999-percentile", userkey, float64(ps[4]))
stathat.PostEZValue(name+".one-minute", userkey, float64(t.Rate1()))
stathat.PostEZValue(name+".five-minute", userkey, float64(t.Rate5()))
stathat.PostEZValue(name+".fifteen-minute", userkey, float64(t.Rate15()))
stathat.PostEZValue(name+".mean-rate", userkey, float64(t.RateMean()))
}
})
return nil
}
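Typical wiring for this reporter looks roughly like the sketch below; the EZ key and interval are placeholders. Stathat never returns, so it is launched on its own goroutine.

package main

import (
	"time"

	"github.com/rcrowley/go-metrics"
	"github.com/rcrowley/go-metrics/stathat"
)

func main() {
	c := metrics.NewRegisteredCounter("requests", metrics.DefaultRegistry)
	c.Inc(1)
	// "MY-EZ-KEY" stands in for a real StatHat EZ key.
	go stathat.Stathat(metrics.DefaultRegistry, 10*time.Second, "MY-EZ-KEY")
	select {} // demo only: keep the process alive for the reporter
}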

View File

@@ -0,0 +1,78 @@
// +build !windows
package metrics
import (
"fmt"
"log/syslog"
"time"
)
// Syslog outputs each metric in the given registry to syslog periodically
// using the given syslogger.
func Syslog(r Registry, d time.Duration, w *syslog.Writer) {
for _ = range time.Tick(d) {
r.Each(func(name string, i interface{}) {
switch metric := i.(type) {
case Counter:
w.Info(fmt.Sprintf("counter %s: count: %d", name, metric.Count()))
case Gauge:
w.Info(fmt.Sprintf("gauge %s: value: %d", name, metric.Value()))
case GaugeFloat64:
w.Info(fmt.Sprintf("gauge %s: value: %f", name, metric.Value()))
case Healthcheck:
metric.Check()
w.Info(fmt.Sprintf("healthcheck %s: error: %v", name, metric.Error()))
case Histogram:
h := metric.Snapshot()
ps := h.Percentiles([]float64{0.5, 0.75, 0.95, 0.99, 0.999})
w.Info(fmt.Sprintf(
"histogram %s: count: %d min: %d max: %d mean: %.2f stddev: %.2f median: %.2f 75%%: %.2f 95%%: %.2f 99%%: %.2f 99.9%%: %.2f",
name,
h.Count(),
h.Min(),
h.Max(),
h.Mean(),
h.StdDev(),
ps[0],
ps[1],
ps[2],
ps[3],
ps[4],
))
case Meter:
m := metric.Snapshot()
w.Info(fmt.Sprintf(
"meter %s: count: %d 1-min: %.2f 5-min: %.2f 15-min: %.2f mean: %.2f",
name,
m.Count(),
m.Rate1(),
m.Rate5(),
m.Rate15(),
m.RateMean(),
))
case Timer:
t := metric.Snapshot()
ps := t.Percentiles([]float64{0.5, 0.75, 0.95, 0.99, 0.999})
w.Info(fmt.Sprintf(
"timer %s: count: %d min: %d max: %d mean: %.2f stddev: %.2f median: %.2f 75%%: %.2f 95%%: %.2f 99%%: %.2f 99.9%%: %.2f 1-min: %.2f 5-min: %.2f 15-min: %.2f mean-rate: %.2f",
name,
t.Count(),
t.Min(),
t.Max(),
t.Mean(),
t.StdDev(),
ps[0],
ps[1],
ps[2],
ps[3],
ps[4],
t.Rate1(),
t.Rate5(),
t.Rate15(),
t.RateMean(),
))
}
})
}
}
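Usage follows the same pattern as the other reporters; a sketch, where the tag and interval are placeholders (and note the build tag above excludes this on Windows):

package main

import (
	"log/syslog"
	"time"

	"github.com/rcrowley/go-metrics"
)

func main() {
	w, err := syslog.New(syslog.LOG_INFO, "myapp")
	if err != nil {
		panic(err)
	}
	metrics.NewRegisteredCounter("requests", metrics.DefaultRegistry).Inc(1)
	// Syslog ticks forever, so run it on its own goroutine.
	go metrics.Syslog(metrics.DefaultRegistry, time.Minute, w)
	select {} // demo only
}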

View File

@@ -0,0 +1,311 @@
package metrics
import (
"sync"
"time"
)
// Timers capture the duration and rate of events.
type Timer interface {
Count() int64
Max() int64
Mean() float64
Min() int64
Percentile(float64) float64
Percentiles([]float64) []float64
Rate1() float64
Rate5() float64
Rate15() float64
RateMean() float64
Snapshot() Timer
StdDev() float64
Sum() int64
Time(func())
Update(time.Duration)
UpdateSince(time.Time)
Variance() float64
}
// GetOrRegisterTimer returns an existing Timer or constructs and registers a
// new StandardTimer.
func GetOrRegisterTimer(name string, r Registry) Timer {
if nil == r {
r = DefaultRegistry
}
return r.GetOrRegister(name, NewTimer).(Timer)
}
// NewCustomTimer constructs a new StandardTimer from a Histogram and a Meter.
func NewCustomTimer(h Histogram, m Meter) Timer {
if UseNilMetrics {
return NilTimer{}
}
return &StandardTimer{
histogram: h,
meter: m,
}
}
// NewRegisteredTimer constructs and registers a new StandardTimer.
func NewRegisteredTimer(name string, r Registry) Timer {
c := NewTimer()
if nil == r {
r = DefaultRegistry
}
r.Register(name, c)
return c
}
// NewTimer constructs a new StandardTimer using an exponentially-decaying
// sample with the same reservoir size and alpha as UNIX load averages.
func NewTimer() Timer {
if UseNilMetrics {
return NilTimer{}
}
return &StandardTimer{
histogram: NewHistogram(NewExpDecaySample(1028, 0.015)),
meter: NewMeter(),
}
}
// NilTimer is a no-op Timer.
type NilTimer struct {
h Histogram
m Meter
}
// Count is a no-op.
func (NilTimer) Count() int64 { return 0 }
// Max is a no-op.
func (NilTimer) Max() int64 { return 0 }
// Mean is a no-op.
func (NilTimer) Mean() float64 { return 0.0 }
// Min is a no-op.
func (NilTimer) Min() int64 { return 0 }
// Percentile is a no-op.
func (NilTimer) Percentile(p float64) float64 { return 0.0 }
// Percentiles is a no-op.
func (NilTimer) Percentiles(ps []float64) []float64 {
return make([]float64, len(ps))
}
// Rate1 is a no-op.
func (NilTimer) Rate1() float64 { return 0.0 }
// Rate5 is a no-op.
func (NilTimer) Rate5() float64 { return 0.0 }
// Rate15 is a no-op.
func (NilTimer) Rate15() float64 { return 0.0 }
// RateMean is a no-op.
func (NilTimer) RateMean() float64 { return 0.0 }
// Snapshot is a no-op.
func (NilTimer) Snapshot() Timer { return NilTimer{} }
// StdDev is a no-op.
func (NilTimer) StdDev() float64 { return 0.0 }
// Sum is a no-op.
func (NilTimer) Sum() int64 { return 0 }
// Time is a no-op.
func (NilTimer) Time(func()) {}
// Update is a no-op.
func (NilTimer) Update(time.Duration) {}
// UpdateSince is a no-op.
func (NilTimer) UpdateSince(time.Time) {}
// Variance is a no-op.
func (NilTimer) Variance() float64 { return 0.0 }
// StandardTimer is the standard implementation of a Timer and uses a Histogram
// and Meter.
type StandardTimer struct {
histogram Histogram
meter Meter
mutex sync.Mutex
}
// Count returns the number of events recorded.
func (t *StandardTimer) Count() int64 {
return t.histogram.Count()
}
// Max returns the maximum value in the sample.
func (t *StandardTimer) Max() int64 {
return t.histogram.Max()
}
// Mean returns the mean of the values in the sample.
func (t *StandardTimer) Mean() float64 {
return t.histogram.Mean()
}
// Min returns the minimum value in the sample.
func (t *StandardTimer) Min() int64 {
return t.histogram.Min()
}
// Percentile returns an arbitrary percentile of the values in the sample.
func (t *StandardTimer) Percentile(p float64) float64 {
return t.histogram.Percentile(p)
}
// Percentiles returns a slice of arbitrary percentiles of the values in the
// sample.
func (t *StandardTimer) Percentiles(ps []float64) []float64 {
return t.histogram.Percentiles(ps)
}
// Rate1 returns the one-minute moving average rate of events per second.
func (t *StandardTimer) Rate1() float64 {
return t.meter.Rate1()
}
// Rate5 returns the five-minute moving average rate of events per second.
func (t *StandardTimer) Rate5() float64 {
return t.meter.Rate5()
}
// Rate15 returns the fifteen-minute moving average rate of events per second.
func (t *StandardTimer) Rate15() float64 {
return t.meter.Rate15()
}
// RateMean returns the meter's mean rate of events per second.
func (t *StandardTimer) RateMean() float64 {
return t.meter.RateMean()
}
// Snapshot returns a read-only copy of the timer.
func (t *StandardTimer) Snapshot() Timer {
t.mutex.Lock()
defer t.mutex.Unlock()
return &TimerSnapshot{
histogram: t.histogram.Snapshot().(*HistogramSnapshot),
meter: t.meter.Snapshot().(*MeterSnapshot),
}
}
// StdDev returns the standard deviation of the values in the sample.
func (t *StandardTimer) StdDev() float64 {
return t.histogram.StdDev()
}
// Sum returns the sum in the sample.
func (t *StandardTimer) Sum() int64 {
return t.histogram.Sum()
}
// Time records the duration of the execution of the given function.
func (t *StandardTimer) Time(f func()) {
ts := time.Now()
f()
t.Update(time.Since(ts))
}
// Update records the duration of an event.
func (t *StandardTimer) Update(d time.Duration) {
t.mutex.Lock()
defer t.mutex.Unlock()
t.histogram.Update(int64(d))
t.meter.Mark(1)
}
// UpdateSince records the duration of an event that started at the given
// time and ends now.
func (t *StandardTimer) UpdateSince(ts time.Time) {
t.mutex.Lock()
defer t.mutex.Unlock()
t.histogram.Update(int64(time.Since(ts)))
t.meter.Mark(1)
}
// Variance returns the variance of the values in the sample.
func (t *StandardTimer) Variance() float64 {
return t.histogram.Variance()
}
// TimerSnapshot is a read-only copy of another Timer.
type TimerSnapshot struct {
histogram *HistogramSnapshot
meter *MeterSnapshot
}
// Count returns the number of events recorded at the time the snapshot was
// taken.
func (t *TimerSnapshot) Count() int64 { return t.histogram.Count() }
// Max returns the maximum value at the time the snapshot was taken.
func (t *TimerSnapshot) Max() int64 { return t.histogram.Max() }
// Mean returns the mean value at the time the snapshot was taken.
func (t *TimerSnapshot) Mean() float64 { return t.histogram.Mean() }
// Min returns the minimum value at the time the snapshot was taken.
func (t *TimerSnapshot) Min() int64 { return t.histogram.Min() }
// Percentile returns an arbitrary percentile of sampled values at the time the
// snapshot was taken.
func (t *TimerSnapshot) Percentile(p float64) float64 {
return t.histogram.Percentile(p)
}
// Percentiles returns a slice of arbitrary percentiles of sampled values at
// the time the snapshot was taken.
func (t *TimerSnapshot) Percentiles(ps []float64) []float64 {
return t.histogram.Percentiles(ps)
}
// Rate1 returns the one-minute moving average rate of events per second at the
// time the snapshot was taken.
func (t *TimerSnapshot) Rate1() float64 { return t.meter.Rate1() }
// Rate5 returns the five-minute moving average rate of events per second at
// the time the snapshot was taken.
func (t *TimerSnapshot) Rate5() float64 { return t.meter.Rate5() }
// Rate15 returns the fifteen-minute moving average rate of events per second
// at the time the snapshot was taken.
func (t *TimerSnapshot) Rate15() float64 { return t.meter.Rate15() }
// RateMean returns the meter's mean rate of events per second at the time the
// snapshot was taken.
func (t *TimerSnapshot) RateMean() float64 { return t.meter.RateMean() }
// Snapshot returns the snapshot.
func (t *TimerSnapshot) Snapshot() Timer { return t }
// StdDev returns the standard deviation of the values at the time the snapshot
// was taken.
func (t *TimerSnapshot) StdDev() float64 { return t.histogram.StdDev() }
// Sum returns the sum at the time the snapshot was taken.
func (t *TimerSnapshot) Sum() int64 { return t.histogram.Sum() }
// Time panics.
func (*TimerSnapshot) Time(func()) {
panic("Time called on a TimerSnapshot")
}
// Update panics.
func (*TimerSnapshot) Update(time.Duration) {
panic("Update called on a TimerSnapshot")
}
// UpdateSince panics.
func (*TimerSnapshot) UpdateSince(time.Time) {
panic("UpdateSince called on a TimerSnapshot")
}
// Variance returns the variance of the values at the time the snapshot was
// taken.
func (t *TimerSnapshot) Variance() float64 { return t.histogram.Variance() }
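Putting the pieces together, a minimal sketch of typical Timer usage (the metric name is illustrative): durations feed the histogram, event marks feed the meter.

package main

import (
	"fmt"
	"time"

	"github.com/rcrowley/go-metrics"
)

func main() {
	t := metrics.GetOrRegisterTimer("db.query", nil) // nil means DefaultRegistry

	// Time wraps a function and records how long it took.
	t.Time(func() { time.Sleep(10 * time.Millisecond) })

	// UpdateSince records the elapsed time from an explicit start.
	start := time.Now()
	time.Sleep(5 * time.Millisecond)
	t.UpdateSince(start)

	fmt.Println(t.Count())          // 2 events recorded
	fmt.Println(t.Percentile(0.99)) // nanoseconds, from the histogram
	fmt.Println(t.RateMean())       // events per second, from the meter
}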

View File

@@ -0,0 +1,10 @@
#!/bin/bash
set -e
# check there are no formatting issues
GOFMT_LINES=`gofmt -l . | wc -l | xargs`
test $GOFMT_LINES -eq 0 || { echo "gofmt needs to be run, ${GOFMT_LINES} files have issues"; exit 1; }
# run the tests for the root package
go test .

View File

@@ -0,0 +1,100 @@
package metrics
import (
"fmt"
"io"
"sort"
"time"
)
// Write sorts and writes each metric in the given registry periodically to
// the given io.Writer.
func Write(r Registry, d time.Duration, w io.Writer) {
for _ = range time.Tick(d) {
WriteOnce(r, w)
}
}
// WriteOnce sorts and writes metrics in the given registry to the given
// io.Writer.
func WriteOnce(r Registry, w io.Writer) {
var namedMetrics namedMetricSlice
r.Each(func(name string, i interface{}) {
namedMetrics = append(namedMetrics, namedMetric{name, i})
})
sort.Sort(namedMetrics)
for _, namedMetric := range namedMetrics {
switch metric := namedMetric.m.(type) {
case Counter:
fmt.Fprintf(w, "counter %s\n", namedMetric.name)
fmt.Fprintf(w, " count: %9d\n", metric.Count())
case Gauge:
fmt.Fprintf(w, "gauge %s\n", namedMetric.name)
fmt.Fprintf(w, " value: %9d\n", metric.Value())
case GaugeFloat64:
fmt.Fprintf(w, "gauge %s\n", namedMetric.name)
fmt.Fprintf(w, " value: %f\n", metric.Value())
case Healthcheck:
metric.Check()
fmt.Fprintf(w, "healthcheck %s\n", namedMetric.name)
fmt.Fprintf(w, " error: %v\n", metric.Error())
case Histogram:
h := metric.Snapshot()
ps := h.Percentiles([]float64{0.5, 0.75, 0.95, 0.99, 0.999})
fmt.Fprintf(w, "histogram %s\n", namedMetric.name)
fmt.Fprintf(w, " count: %9d\n", h.Count())
fmt.Fprintf(w, " min: %9d\n", h.Min())
fmt.Fprintf(w, " max: %9d\n", h.Max())
fmt.Fprintf(w, " mean: %12.2f\n", h.Mean())
fmt.Fprintf(w, " stddev: %12.2f\n", h.StdDev())
fmt.Fprintf(w, " median: %12.2f\n", ps[0])
fmt.Fprintf(w, " 75%%: %12.2f\n", ps[1])
fmt.Fprintf(w, " 95%%: %12.2f\n", ps[2])
fmt.Fprintf(w, " 99%%: %12.2f\n", ps[3])
fmt.Fprintf(w, " 99.9%%: %12.2f\n", ps[4])
case Meter:
m := metric.Snapshot()
fmt.Fprintf(w, "meter %s\n", namedMetric.name)
fmt.Fprintf(w, " count: %9d\n", m.Count())
fmt.Fprintf(w, " 1-min rate: %12.2f\n", m.Rate1())
fmt.Fprintf(w, " 5-min rate: %12.2f\n", m.Rate5())
fmt.Fprintf(w, " 15-min rate: %12.2f\n", m.Rate15())
fmt.Fprintf(w, " mean rate: %12.2f\n", m.RateMean())
case Timer:
t := metric.Snapshot()
ps := t.Percentiles([]float64{0.5, 0.75, 0.95, 0.99, 0.999})
fmt.Fprintf(w, "timer %s\n", namedMetric.name)
fmt.Fprintf(w, " count: %9d\n", t.Count())
fmt.Fprintf(w, " min: %9d\n", t.Min())
fmt.Fprintf(w, " max: %9d\n", t.Max())
fmt.Fprintf(w, " mean: %12.2f\n", t.Mean())
fmt.Fprintf(w, " stddev: %12.2f\n", t.StdDev())
fmt.Fprintf(w, " median: %12.2f\n", ps[0])
fmt.Fprintf(w, " 75%%: %12.2f\n", ps[1])
fmt.Fprintf(w, " 95%%: %12.2f\n", ps[2])
fmt.Fprintf(w, " 99%%: %12.2f\n", ps[3])
fmt.Fprintf(w, " 99.9%%: %12.2f\n", ps[4])
fmt.Fprintf(w, " 1-min rate: %12.2f\n", t.Rate1())
fmt.Fprintf(w, " 5-min rate: %12.2f\n", t.Rate5())
fmt.Fprintf(w, " 15-min rate: %12.2f\n", t.Rate15())
fmt.Fprintf(w, " mean rate: %12.2f\n", t.RateMean())
}
}
}
type namedMetric struct {
name string
m interface{}
}
// namedMetricSlice is a slice of namedMetrics that implements sort.Interface.
type namedMetricSlice []namedMetric
func (nms namedMetricSlice) Len() int { return len(nms) }
func (nms namedMetricSlice) Swap(i, j int) { nms[i], nms[j] = nms[j], nms[i] }
func (nms namedMetricSlice) Less(i, j int) bool {
return nms[i].name < nms[j].name
}
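A sketch of how this fits together: register something, then dump a sorted, human-readable report to any io.Writer (os.Stdout here).

package main

import (
	"os"

	"github.com/rcrowley/go-metrics"
)

func main() {
	metrics.NewRegisteredCounter("requests", metrics.DefaultRegistry).Inc(42)
	// WriteOnce sorts the registry by name and prints each metric once;
	// Write does the same on a fixed interval.
	metrics.WriteOnce(metrics.DefaultRegistry, os.Stdout)
}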

View File

@@ -1,120 +0,0 @@
// Copyright (c) 2012, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package leveldb
import (
"bytes"
"testing"
"github.com/syndtr/goleveldb/leveldb/comparer"
"github.com/syndtr/goleveldb/leveldb/memdb"
)
type tbRec struct {
kt kType
key, value []byte
}
type testBatch struct {
rec []*tbRec
}
func (p *testBatch) Put(key, value []byte) {
p.rec = append(p.rec, &tbRec{ktVal, key, value})
}
func (p *testBatch) Delete(key []byte) {
p.rec = append(p.rec, &tbRec{ktDel, key, nil})
}
func compareBatch(t *testing.T, b1, b2 *Batch) {
if b1.seq != b2.seq {
t.Errorf("invalid seq number want %d, got %d", b1.seq, b2.seq)
}
if b1.Len() != b2.Len() {
t.Fatalf("invalid record length want %d, got %d", b1.Len(), b2.Len())
}
p1, p2 := new(testBatch), new(testBatch)
err := b1.Replay(p1)
if err != nil {
t.Fatal("error when replaying batch 1: ", err)
}
err = b2.Replay(p2)
if err != nil {
t.Fatal("error when replaying batch 2: ", err)
}
for i := range p1.rec {
r1, r2 := p1.rec[i], p2.rec[i]
if r1.kt != r2.kt {
t.Errorf("invalid type on record '%d' want %d, got %d", i, r1.kt, r2.kt)
}
if !bytes.Equal(r1.key, r2.key) {
t.Errorf("invalid key on record '%d' want %s, got %s", i, string(r1.key), string(r2.key))
}
if r1.kt == ktVal {
if !bytes.Equal(r1.value, r2.value) {
t.Errorf("invalid value on record '%d' want %s, got %s", i, string(r1.value), string(r2.value))
}
}
}
}
func TestBatch_EncodeDecode(t *testing.T) {
b1 := new(Batch)
b1.seq = 10009
b1.Put([]byte("key1"), []byte("value1"))
b1.Put([]byte("key2"), []byte("value2"))
b1.Delete([]byte("key1"))
b1.Put([]byte("k"), []byte(""))
b1.Put([]byte("zzzzzzzzzzz"), []byte("zzzzzzzzzzzzzzzzzzzzzzzz"))
b1.Delete([]byte("key10000"))
b1.Delete([]byte("k"))
buf := b1.encode()
b2 := new(Batch)
err := b2.decode(0, buf)
if err != nil {
t.Error("error when decoding batch: ", err)
}
compareBatch(t, b1, b2)
}
func TestBatch_Append(t *testing.T) {
b1 := new(Batch)
b1.seq = 10009
b1.Put([]byte("key1"), []byte("value1"))
b1.Put([]byte("key2"), []byte("value2"))
b1.Delete([]byte("key1"))
b1.Put([]byte("foo"), []byte("foovalue"))
b1.Put([]byte("bar"), []byte("barvalue"))
b2a := new(Batch)
b2a.seq = 10009
b2a.Put([]byte("key1"), []byte("value1"))
b2a.Put([]byte("key2"), []byte("value2"))
b2a.Delete([]byte("key1"))
b2b := new(Batch)
b2b.Put([]byte("foo"), []byte("foovalue"))
b2b.Put([]byte("bar"), []byte("barvalue"))
b2a.append(b2b)
compareBatch(t, b1, b2a)
}
func TestBatch_Size(t *testing.T) {
b := new(Batch)
for i := 0; i < 2; i++ {
b.Put([]byte("key1"), []byte("value1"))
b.Put([]byte("key2"), []byte("value2"))
b.Delete([]byte("key1"))
b.Put([]byte("foo"), []byte("foovalue"))
b.Put([]byte("bar"), []byte("barvalue"))
mem := memdb.New(&iComparer{comparer.DefaultComparer}, 0)
b.memReplay(mem)
if b.size() != mem.Size() {
t.Errorf("invalid batch size calculation, want=%d got=%d", mem.Size(), b.size())
}
b.Reset()
}
}
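For context on what these (now removed) tests exercised, the exported Batch API is used roughly as follows; the database path is illustrative.

package main

import (
	"github.com/syndtr/goleveldb/leveldb"
)

func main() {
	db, err := leveldb.OpenFile("/tmp/demo.db", nil)
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// A Batch groups puts and deletes so they apply atomically.
	b := new(leveldb.Batch)
	b.Put([]byte("key1"), []byte("value1"))
	b.Put([]byte("key2"), []byte("value2"))
	b.Delete([]byte("key1"))
	if err := db.Write(b, nil); err != nil {
		panic(err)
	}
}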

View File

@@ -1,58 +0,0 @@
// Copyright (c) 2012, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
// +build !go1.2
package leveldb
import (
"sync/atomic"
"testing"
)
func BenchmarkDBReadConcurrent(b *testing.B) {
p := openDBBench(b, false)
p.populate(b.N)
p.fill()
p.gc()
defer p.close()
b.ResetTimer()
b.SetBytes(116)
b.RunParallel(func(pb *testing.PB) {
iter := p.newIter()
defer iter.Release()
for pb.Next() && iter.Next() {
}
})
}
func BenchmarkDBReadConcurrent2(b *testing.B) {
p := openDBBench(b, false)
p.populate(b.N)
p.fill()
p.gc()
defer p.close()
b.ResetTimer()
b.SetBytes(116)
var dir uint32
b.RunParallel(func(pb *testing.PB) {
iter := p.newIter()
defer iter.Release()
if atomic.AddUint32(&dir, 1)%2 == 0 {
for pb.Next() && iter.Next() {
}
} else {
if pb.Next() && iter.Last() {
for pb.Next() && iter.Prev() {
}
}
}
})
}

View File

@@ -1,464 +0,0 @@
// Copyright (c) 2012, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package leveldb
import (
"bytes"
"fmt"
"math/rand"
"os"
"path/filepath"
"runtime"
"testing"
"github.com/syndtr/goleveldb/leveldb/iterator"
"github.com/syndtr/goleveldb/leveldb/opt"
"github.com/syndtr/goleveldb/leveldb/storage"
)
func randomString(r *rand.Rand, n int) []byte {
b := new(bytes.Buffer)
for i := 0; i < n; i++ {
b.WriteByte(' ' + byte(r.Intn(95)))
}
return b.Bytes()
}
func compressibleStr(r *rand.Rand, frac float32, n int) []byte {
nn := int(float32(n) * frac)
rb := randomString(r, nn)
b := make([]byte, 0, n+nn)
for len(b) < n {
b = append(b, rb...)
}
return b[:n]
}
type valueGen struct {
src []byte
pos int
}
func newValueGen(frac float32) *valueGen {
v := new(valueGen)
r := rand.New(rand.NewSource(301))
v.src = make([]byte, 0, 1048576+100)
for len(v.src) < 1048576 {
v.src = append(v.src, compressibleStr(r, frac, 100)...)
}
return v
}
func (v *valueGen) get(n int) []byte {
if v.pos+n > len(v.src) {
v.pos = 0
}
v.pos += n
return v.src[v.pos-n : v.pos]
}
var benchDB = filepath.Join(os.TempDir(), fmt.Sprintf("goleveldbbench-%d", os.Getuid()))
type dbBench struct {
b *testing.B
stor storage.Storage
db *DB
o *opt.Options
ro *opt.ReadOptions
wo *opt.WriteOptions
keys, values [][]byte
}
func openDBBench(b *testing.B, noCompress bool) *dbBench {
_, err := os.Stat(benchDB)
if err == nil {
err = os.RemoveAll(benchDB)
if err != nil {
b.Fatal("cannot remove old db: ", err)
}
}
p := &dbBench{
b: b,
o: &opt.Options{},
ro: &opt.ReadOptions{},
wo: &opt.WriteOptions{},
}
p.stor, err = storage.OpenFile(benchDB)
if err != nil {
b.Fatal("cannot open stor: ", err)
}
if noCompress {
p.o.Compression = opt.NoCompression
}
p.db, err = Open(p.stor, p.o)
if err != nil {
b.Fatal("cannot open db: ", err)
}
runtime.GOMAXPROCS(runtime.NumCPU())
return p
}
func (p *dbBench) reopen() {
p.db.Close()
var err error
p.db, err = Open(p.stor, p.o)
if err != nil {
p.b.Fatal("Reopen: got error: ", err)
}
}
func (p *dbBench) populate(n int) {
p.keys, p.values = make([][]byte, n), make([][]byte, n)
v := newValueGen(0.5)
for i := range p.keys {
p.keys[i], p.values[i] = []byte(fmt.Sprintf("%016d", i)), v.get(100)
}
}
func (p *dbBench) randomize() {
m := len(p.keys)
times := m * 2
r1, r2 := rand.New(rand.NewSource(0xdeadbeef)), rand.New(rand.NewSource(0xbeefface))
for n := 0; n < times; n++ {
i, j := r1.Int()%m, r2.Int()%m
if i == j {
continue
}
p.keys[i], p.keys[j] = p.keys[j], p.keys[i]
p.values[i], p.values[j] = p.values[j], p.values[i]
}
}
func (p *dbBench) writes(perBatch int) {
b := p.b
db := p.db
n := len(p.keys)
m := n / perBatch
if n%perBatch > 0 {
m++
}
batches := make([]Batch, m)
j := 0
for i := range batches {
first := true
for ; j < n && ((j+1)%perBatch != 0 || first); j++ {
first = false
batches[i].Put(p.keys[j], p.values[j])
}
}
runtime.GC()
b.ResetTimer()
b.StartTimer()
for i := range batches {
err := db.Write(&(batches[i]), p.wo)
if err != nil {
b.Fatal("write failed: ", err)
}
}
b.StopTimer()
b.SetBytes(116)
}
func (p *dbBench) gc() {
p.keys, p.values = nil, nil
runtime.GC()
}
func (p *dbBench) puts() {
b := p.b
db := p.db
b.ResetTimer()
b.StartTimer()
for i := range p.keys {
err := db.Put(p.keys[i], p.values[i], p.wo)
if err != nil {
b.Fatal("put failed: ", err)
}
}
b.StopTimer()
b.SetBytes(116)
}
func (p *dbBench) fill() {
b := p.b
db := p.db
perBatch := 10000
batch := new(Batch)
for i, n := 0, len(p.keys); i < n; {
first := true
for ; i < n && ((i+1)%perBatch != 0 || first); i++ {
first = false
batch.Put(p.keys[i], p.values[i])
}
err := db.Write(batch, p.wo)
if err != nil {
b.Fatal("write failed: ", err)
}
batch.Reset()
}
}
func (p *dbBench) gets() {
b := p.b
db := p.db
b.ResetTimer()
for i := range p.keys {
_, err := db.Get(p.keys[i], p.ro)
if err != nil {
b.Error("got error: ", err)
}
}
b.StopTimer()
}
func (p *dbBench) seeks() {
b := p.b
iter := p.newIter()
defer iter.Release()
b.ResetTimer()
for i := range p.keys {
if !iter.Seek(p.keys[i]) {
b.Error("value not found for: ", string(p.keys[i]))
}
}
b.StopTimer()
}
func (p *dbBench) newIter() iterator.Iterator {
iter := p.db.NewIterator(nil, p.ro)
err := iter.Error()
if err != nil {
p.b.Fatal("cannot create iterator: ", err)
}
return iter
}
func (p *dbBench) close() {
if bp, err := p.db.GetProperty("leveldb.blockpool"); err == nil {
p.b.Log("Block pool stats: ", bp)
}
p.db.Close()
p.stor.Close()
os.RemoveAll(benchDB)
p.db = nil
p.keys = nil
p.values = nil
runtime.GC()
runtime.GOMAXPROCS(1)
}
func BenchmarkDBWrite(b *testing.B) {
p := openDBBench(b, false)
p.populate(b.N)
p.writes(1)
p.close()
}
func BenchmarkDBWriteBatch(b *testing.B) {
p := openDBBench(b, false)
p.populate(b.N)
p.writes(1000)
p.close()
}
func BenchmarkDBWriteUncompressed(b *testing.B) {
p := openDBBench(b, true)
p.populate(b.N)
p.writes(1)
p.close()
}
func BenchmarkDBWriteBatchUncompressed(b *testing.B) {
p := openDBBench(b, true)
p.populate(b.N)
p.writes(1000)
p.close()
}
func BenchmarkDBWriteRandom(b *testing.B) {
p := openDBBench(b, false)
p.populate(b.N)
p.randomize()
p.writes(1)
p.close()
}
func BenchmarkDBWriteRandomSync(b *testing.B) {
p := openDBBench(b, false)
p.wo.Sync = true
p.populate(b.N)
p.writes(1)
p.close()
}
func BenchmarkDBOverwrite(b *testing.B) {
p := openDBBench(b, false)
p.populate(b.N)
p.writes(1)
p.writes(1)
p.close()
}
func BenchmarkDBOverwriteRandom(b *testing.B) {
p := openDBBench(b, false)
p.populate(b.N)
p.writes(1)
p.randomize()
p.writes(1)
p.close()
}
func BenchmarkDBPut(b *testing.B) {
p := openDBBench(b, false)
p.populate(b.N)
p.puts()
p.close()
}
func BenchmarkDBRead(b *testing.B) {
p := openDBBench(b, false)
p.populate(b.N)
p.fill()
p.gc()
iter := p.newIter()
b.ResetTimer()
for iter.Next() {
}
iter.Release()
b.StopTimer()
b.SetBytes(116)
p.close()
}
func BenchmarkDBReadGC(b *testing.B) {
p := openDBBench(b, false)
p.populate(b.N)
p.fill()
iter := p.newIter()
b.ResetTimer()
for iter.Next() {
}
iter.Release()
b.StopTimer()
b.SetBytes(116)
p.close()
}
func BenchmarkDBReadUncompressed(b *testing.B) {
p := openDBBench(b, true)
p.populate(b.N)
p.fill()
p.gc()
iter := p.newIter()
b.ResetTimer()
for iter.Next() {
}
iter.Release()
b.StopTimer()
b.SetBytes(116)
p.close()
}
func BenchmarkDBReadTable(b *testing.B) {
p := openDBBench(b, false)
p.populate(b.N)
p.fill()
p.reopen()
p.gc()
iter := p.newIter()
b.ResetTimer()
for iter.Next() {
}
iter.Release()
b.StopTimer()
b.SetBytes(116)
p.close()
}
func BenchmarkDBReadReverse(b *testing.B) {
p := openDBBench(b, false)
p.populate(b.N)
p.fill()
p.gc()
iter := p.newIter()
b.ResetTimer()
iter.Last()
for iter.Prev() {
}
iter.Release()
b.StopTimer()
b.SetBytes(116)
p.close()
}
func BenchmarkDBReadReverseTable(b *testing.B) {
p := openDBBench(b, false)
p.populate(b.N)
p.fill()
p.reopen()
p.gc()
iter := p.newIter()
b.ResetTimer()
iter.Last()
for iter.Prev() {
}
iter.Release()
b.StopTimer()
b.SetBytes(116)
p.close()
}
func BenchmarkDBSeek(b *testing.B) {
p := openDBBench(b, false)
p.populate(b.N)
p.fill()
p.seeks()
p.close()
}
func BenchmarkDBSeekRandom(b *testing.B) {
p := openDBBench(b, false)
p.populate(b.N)
p.fill()
p.randomize()
p.seeks()
p.close()
}
func BenchmarkDBGet(b *testing.B) {
p := openDBBench(b, false)
p.populate(b.N)
p.fill()
p.gets()
p.close()
}
func BenchmarkDBGetRandom(b *testing.B) {
p := openDBBench(b, false)
p.populate(b.N)
p.fill()
p.randomize()
p.gets()
p.close()
}

View File

@@ -1,30 +0,0 @@
// Copyright (c) 2012, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
// +build !go1.2
package cache
import (
"math/rand"
"testing"
"time"
)
func BenchmarkLRUCache(b *testing.B) {
c := NewCache(NewLRU(10000))
b.SetParallelism(10)
b.RunParallel(func(pb *testing.PB) {
r := rand.New(rand.NewSource(time.Now().UnixNano()))
for pb.Next() {
key := uint64(r.Intn(1000000))
c.Get(0, key, func() (int, Value) {
return 1, key
}).Release()
}
})
}

View File

@@ -1,554 +0,0 @@
// Copyright (c) 2012, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package cache
import (
"math/rand"
"runtime"
"sync"
"sync/atomic"
"testing"
"time"
"unsafe"
)
type int32o int32
func (o *int32o) acquire() {
if atomic.AddInt32((*int32)(o), 1) != 1 {
panic("BUG: invalid ref")
}
}
func (o *int32o) Release() {
if atomic.AddInt32((*int32)(o), -1) != 0 {
panic("BUG: invalid ref")
}
}
type releaserFunc struct {
fn func()
value Value
}
func (r releaserFunc) Release() {
if r.fn != nil {
r.fn()
}
}
func set(c *Cache, ns, key uint64, value Value, charge int, relf func()) *Handle {
return c.Get(ns, key, func() (int, Value) {
if relf != nil {
return charge, releaserFunc{relf, value}
} else {
return charge, value
}
})
}
func TestCacheMap(t *testing.T) {
runtime.GOMAXPROCS(runtime.NumCPU())
nsx := []struct {
nobjects, nhandles, concurrent, repeat int
}{
{10000, 400, 50, 3},
{100000, 1000, 100, 10},
}
var (
objects [][]int32o
handles [][]unsafe.Pointer
)
for _, x := range nsx {
objects = append(objects, make([]int32o, x.nobjects))
handles = append(handles, make([]unsafe.Pointer, x.nhandles))
}
c := NewCache(nil)
wg := new(sync.WaitGroup)
var done int32
for ns, x := range nsx {
for i := 0; i < x.concurrent; i++ {
wg.Add(1)
go func(ns, i, repeat int, objects []int32o, handles []unsafe.Pointer) {
defer wg.Done()
r := rand.New(rand.NewSource(time.Now().UnixNano()))
for j := len(objects) * repeat; j >= 0; j-- {
key := uint64(r.Intn(len(objects)))
h := c.Get(uint64(ns), key, func() (int, Value) {
o := &objects[key]
o.acquire()
return 1, o
})
if v := h.Value().(*int32o); v != &objects[key] {
t.Fatalf("#%d invalid value: want=%p got=%p", ns, &objects[key], v)
}
if objects[key] != 1 {
t.Fatalf("#%d invalid object %d: %d", ns, key, objects[key])
}
if !atomic.CompareAndSwapPointer(&handles[r.Intn(len(handles))], nil, unsafe.Pointer(h)) {
h.Release()
}
}
}(ns, i, x.repeat, objects[ns], handles[ns])
}
go func(handles []unsafe.Pointer) {
r := rand.New(rand.NewSource(time.Now().UnixNano()))
for atomic.LoadInt32(&done) == 0 {
i := r.Intn(len(handles))
h := (*Handle)(atomic.LoadPointer(&handles[i]))
if h != nil && atomic.CompareAndSwapPointer(&handles[i], unsafe.Pointer(h), nil) {
h.Release()
}
time.Sleep(time.Millisecond)
}
}(handles[ns])
}
go func() {
handles := make([]*Handle, 100000)
for atomic.LoadInt32(&done) == 0 {
for i := range handles {
handles[i] = c.Get(999999999, uint64(i), func() (int, Value) {
return 1, 1
})
}
for _, h := range handles {
h.Release()
}
}
}()
wg.Wait()
atomic.StoreInt32(&done, 1)
for _, handles0 := range handles {
for i := range handles0 {
h := (*Handle)(atomic.LoadPointer(&handles0[i]))
if h != nil && atomic.CompareAndSwapPointer(&handles0[i], unsafe.Pointer(h), nil) {
h.Release()
}
}
}
for ns, objects0 := range objects {
for i, o := range objects0 {
if o != 0 {
t.Fatalf("invalid object #%d.%d: ref=%d", ns, i, o)
}
}
}
}
func TestCacheMap_NodesAndSize(t *testing.T) {
c := NewCache(nil)
if c.Nodes() != 0 {
t.Errorf("invalid nodes counter: want=%d got=%d", 0, c.Nodes())
}
if c.Size() != 0 {
t.Errorf("invalid size counter: want=%d got=%d", 0, c.Size())
}
set(c, 0, 1, 1, 1, nil)
set(c, 0, 2, 2, 2, nil)
set(c, 1, 1, 3, 3, nil)
set(c, 2, 1, 4, 1, nil)
if c.Nodes() != 4 {
t.Errorf("invalid nodes counter: want=%d got=%d", 4, c.Nodes())
}
if c.Size() != 7 {
t.Errorf("invalid size counter: want=%d got=%d", 4, c.Size())
}
}
func TestLRUCache_Capacity(t *testing.T) {
c := NewCache(NewLRU(10))
if c.Capacity() != 10 {
t.Errorf("invalid capacity: want=%d got=%d", 10, c.Capacity())
}
set(c, 0, 1, 1, 1, nil).Release()
set(c, 0, 2, 2, 2, nil).Release()
set(c, 1, 1, 3, 3, nil).Release()
set(c, 2, 1, 4, 1, nil).Release()
set(c, 2, 2, 5, 1, nil).Release()
set(c, 2, 3, 6, 1, nil).Release()
set(c, 2, 4, 7, 1, nil).Release()
set(c, 2, 5, 8, 1, nil).Release()
if c.Nodes() != 7 {
t.Errorf("invalid nodes counter: want=%d got=%d", 7, c.Nodes())
}
if c.Size() != 10 {
t.Errorf("invalid size counter: want=%d got=%d", 10, c.Size())
}
c.SetCapacity(9)
if c.Capacity() != 9 {
t.Errorf("invalid capacity: want=%d got=%d", 9, c.Capacity())
}
if c.Nodes() != 6 {
t.Errorf("invalid nodes counter: want=%d got=%d", 6, c.Nodes())
}
if c.Size() != 8 {
t.Errorf("invalid size counter: want=%d got=%d", 8, c.Size())
}
}
func TestCacheMap_NilValue(t *testing.T) {
c := NewCache(NewLRU(10))
h := c.Get(0, 0, func() (size int, value Value) {
return 1, nil
})
if h != nil {
t.Error("cache handle is non-nil")
}
if c.Nodes() != 0 {
t.Errorf("invalid nodes counter: want=%d got=%d", 0, c.Nodes())
}
if c.Size() != 0 {
t.Errorf("invalid size counter: want=%d got=%d", 0, c.Size())
}
}
func TestLRUCache_GetLatency(t *testing.T) {
runtime.GOMAXPROCS(runtime.NumCPU())
const (
concurrentSet = 30
concurrentGet = 3
duration = 3 * time.Second
delay = 3 * time.Millisecond
maxkey = 100000
)
var (
set, getHit, getAll int32
getMaxLatency, getDuration int64
)
c := NewCache(NewLRU(5000))
wg := &sync.WaitGroup{}
until := time.Now().Add(duration)
for i := 0; i < concurrentSet; i++ {
wg.Add(1)
go func(i int) {
defer wg.Done()
r := rand.New(rand.NewSource(time.Now().UnixNano()))
for time.Now().Before(until) {
c.Get(0, uint64(r.Intn(maxkey)), func() (int, Value) {
time.Sleep(delay)
atomic.AddInt32(&set, 1)
return 1, 1
}).Release()
}
}(i)
}
for i := 0; i < concurrentGet; i++ {
wg.Add(1)
go func(i int) {
defer wg.Done()
r := rand.New(rand.NewSource(time.Now().UnixNano()))
for {
mark := time.Now()
if mark.Before(until) {
h := c.Get(0, uint64(r.Intn(maxkey)), nil)
latency := int64(time.Now().Sub(mark))
m := atomic.LoadInt64(&getMaxLatency)
if latency > m {
atomic.CompareAndSwapInt64(&getMaxLatency, m, latency)
}
atomic.AddInt64(&getDuration, latency)
if h != nil {
atomic.AddInt32(&getHit, 1)
h.Release()
}
atomic.AddInt32(&getAll, 1)
} else {
break
}
}
}(i)
}
wg.Wait()
getAvglatency := time.Duration(getDuration) / time.Duration(getAll)
t.Logf("set=%d getHit=%d getAll=%d getMaxLatency=%v getAvgLatency=%v",
set, getHit, getAll, time.Duration(getMaxLatency), getAvglatency)
if getAvglatency > delay/3 {
t.Errorf("get avg latency > %v: got=%v", delay/3, getAvglatency)
}
}
func TestLRUCache_HitMiss(t *testing.T) {
cases := []struct {
key uint64
value string
}{
{1, "vvvvvvvvv"},
{100, "v1"},
{0, "v2"},
{12346, "v3"},
{777, "v4"},
{999, "v5"},
{7654, "v6"},
{2, "v7"},
{3, "v8"},
{9, "v9"},
}
setfin := 0
c := NewCache(NewLRU(1000))
for i, x := range cases {
set(c, 0, x.key, x.value, len(x.value), func() {
setfin++
}).Release()
for j, y := range cases {
h := c.Get(0, y.key, nil)
if j <= i {
// should hit
if h == nil {
t.Errorf("case '%d' iteration '%d' is miss", i, j)
} else {
if x := h.Value().(releaserFunc).value.(string); x != y.value {
t.Errorf("case '%d' iteration '%d' has invalid value got '%s', want '%s'", i, j, x, y.value)
}
}
} else {
// should miss
if h != nil {
t.Errorf("case '%d' iteration '%d' is hit , value '%s'", i, j, h.Value().(releaserFunc).value.(string))
}
}
if h != nil {
h.Release()
}
}
}
for i, x := range cases {
finalizerOk := false
c.Delete(0, x.key, func() {
finalizerOk = true
})
if !finalizerOk {
t.Errorf("case %d delete finalizer not executed", i)
}
for j, y := range cases {
h := c.Get(0, y.key, nil)
if j > i {
// should hit
if h == nil {
t.Errorf("case '%d' iteration '%d' is miss", i, j)
} else {
if x := h.Value().(releaserFunc).value.(string); x != y.value {
t.Errorf("case '%d' iteration '%d' has invalid value got '%s', want '%s'", i, j, x, y.value)
}
}
} else {
// should miss
if h != nil {
t.Errorf("case '%d' iteration '%d' is hit, value '%s'", i, j, h.Value().(releaserFunc).value.(string))
}
}
if h != nil {
h.Release()
}
}
}
if setfin != len(cases) {
t.Errorf("some set finalizer may not be executed, want=%d got=%d", len(cases), setfin)
}
}
func TestLRUCache_Eviction(t *testing.T) {
c := NewCache(NewLRU(12))
o1 := set(c, 0, 1, 1, 1, nil)
set(c, 0, 2, 2, 1, nil).Release()
set(c, 0, 3, 3, 1, nil).Release()
set(c, 0, 4, 4, 1, nil).Release()
set(c, 0, 5, 5, 1, nil).Release()
if h := c.Get(0, 2, nil); h != nil { // 1,3,4,5,2
h.Release()
}
set(c, 0, 9, 9, 10, nil).Release() // 5,2,9
for _, key := range []uint64{9, 2, 5, 1} {
h := c.Get(0, key, nil)
if h == nil {
t.Errorf("miss for key '%d'", key)
} else {
if x := h.Value().(int); x != int(key) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", key, key, x)
}
h.Release()
}
}
o1.Release()
for _, key := range []uint64{1, 2, 5} {
h := c.Get(0, key, nil)
if h == nil {
t.Errorf("miss for key '%d'", key)
} else {
if x := h.Value().(int); x != int(key) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", key, key, x)
}
h.Release()
}
}
for _, key := range []uint64{3, 4, 9} {
h := c.Get(0, key, nil)
if h != nil {
t.Errorf("hit for key '%d'", key)
if x := h.Value().(int); x != int(key) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", key, key, x)
}
h.Release()
}
}
}
func TestLRUCache_Evict(t *testing.T) {
c := NewCache(NewLRU(6))
set(c, 0, 1, 1, 1, nil).Release()
set(c, 0, 2, 2, 1, nil).Release()
set(c, 1, 1, 4, 1, nil).Release()
set(c, 1, 2, 5, 1, nil).Release()
set(c, 2, 1, 6, 1, nil).Release()
set(c, 2, 2, 7, 1, nil).Release()
for ns := 0; ns < 3; ns++ {
for key := 1; key < 3; key++ {
if h := c.Get(uint64(ns), uint64(key), nil); h != nil {
h.Release()
} else {
t.Errorf("Cache.Get on #%d.%d return nil", ns, key)
}
}
}
if ok := c.Evict(0, 1); !ok {
t.Error("first Cache.Evict on #0.1 return false")
}
if ok := c.Evict(0, 1); ok {
t.Error("second Cache.Evict on #0.1 return true")
}
if h := c.Get(0, 1, nil); h != nil {
t.Errorf("Cache.Get on #0.1 return non-nil: %v", h.Value())
}
c.EvictNS(1)
if h := c.Get(1, 1, nil); h != nil {
t.Errorf("Cache.Get on #1.1 return non-nil: %v", h.Value())
}
if h := c.Get(1, 2, nil); h != nil {
t.Errorf("Cache.Get on #1.2 return non-nil: %v", h.Value())
}
c.EvictAll()
for ns := 0; ns < 3; ns++ {
for key := 1; key < 3; key++ {
if h := c.Get(uint64(ns), uint64(key), nil); h != nil {
t.Errorf("Cache.Get on #%d.%d return non-nil: %v", ns, key, h.Value())
}
}
}
}
func TestLRUCache_Delete(t *testing.T) {
delFuncCalled := 0
delFunc := func() {
delFuncCalled++
}
c := NewCache(NewLRU(2))
set(c, 0, 1, 1, 1, nil).Release()
set(c, 0, 2, 2, 1, nil).Release()
if ok := c.Delete(0, 1, delFunc); !ok {
t.Error("Cache.Delete on #1 return false")
}
if h := c.Get(0, 1, nil); h != nil {
t.Errorf("Cache.Get on #1 return non-nil: %v", h.Value())
}
if ok := c.Delete(0, 1, delFunc); ok {
t.Error("Cache.Delete on #1 return true")
}
h2 := c.Get(0, 2, nil)
if h2 == nil {
t.Error("Cache.Get on #2 return nil")
}
if ok := c.Delete(0, 2, delFunc); !ok {
t.Error("(1) Cache.Delete on #2 return false")
}
if ok := c.Delete(0, 2, delFunc); !ok {
t.Error("(2) Cache.Delete on #2 return false")
}
set(c, 0, 3, 3, 1, nil).Release()
set(c, 0, 4, 4, 1, nil).Release()
c.Get(0, 2, nil).Release()
for key := 2; key <= 4; key++ {
if h := c.Get(0, uint64(key), nil); h != nil {
h.Release()
} else {
t.Errorf("Cache.Get on #%d return nil", key)
}
}
h2.Release()
if h := c.Get(0, 2, nil); h != nil {
t.Errorf("Cache.Get on #2 return non-nil: %v", h.Value())
}
if delFuncCalled != 4 {
t.Errorf("delFunc isn't called 4 times: got=%d", delFuncCalled)
}
}
func TestLRUCache_Close(t *testing.T) {
relFuncCalled := 0
relFunc := func() {
relFuncCalled++
}
delFuncCalled := 0
delFunc := func() {
delFuncCalled++
}
c := NewCache(NewLRU(2))
set(c, 0, 1, 1, 1, relFunc).Release()
set(c, 0, 2, 2, 1, relFunc).Release()
h3 := set(c, 0, 3, 3, 1, relFunc)
if h3 == nil {
t.Error("Cache.Get on #3 return nil")
}
if ok := c.Delete(0, 3, delFunc); !ok {
t.Error("Cache.Delete on #3 return false")
}
c.Close()
if relFuncCalled != 3 {
t.Errorf("relFunc isn't called 3 times: got=%d", relFuncCalled)
}
if delFuncCalled != 1 {
t.Errorf("delFunc isn't called 1 times: got=%d", delFuncCalled)
}
}
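For orientation, the namespaced get-or-create API these removed tests cover looks roughly like this (the capacity, namespace, and key are illustrative); every non-nil handle must be released.

package main

import (
	"fmt"

	"github.com/syndtr/goleveldb/leveldb/cache"
)

func main() {
	c := cache.NewCache(cache.NewLRU(10))
	// Get returns a cached handle, or invokes the setter to create the
	// entry; the returned charge counts against the LRU capacity.
	h := c.Get(0, 1, func() (int, cache.Value) {
		return 1, "hello"
	})
	fmt.Println(h.Value()) // "hello"
	h.Release()
	c.Close()
}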

View File

@@ -1,500 +0,0 @@
// Copyright (c) 2013, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package leveldb
import (
"bytes"
"fmt"
"github.com/syndtr/goleveldb/leveldb/filter"
"github.com/syndtr/goleveldb/leveldb/opt"
"github.com/syndtr/goleveldb/leveldb/storage"
"io"
"math/rand"
"testing"
)
const ctValSize = 1000
type dbCorruptHarness struct {
dbHarness
}
func newDbCorruptHarnessWopt(t *testing.T, o *opt.Options) *dbCorruptHarness {
h := new(dbCorruptHarness)
h.init(t, o)
return h
}
func newDbCorruptHarness(t *testing.T) *dbCorruptHarness {
return newDbCorruptHarnessWopt(t, &opt.Options{
BlockCacheCapacity: 100,
Strict: opt.StrictJournalChecksum,
})
}
func (h *dbCorruptHarness) recover() {
p := &h.dbHarness
t := p.t
var err error
p.db, err = Recover(h.stor, h.o)
if err != nil {
t.Fatal("Repair: got error: ", err)
}
}
func (h *dbCorruptHarness) build(n int) {
p := &h.dbHarness
t := p.t
db := p.db
batch := new(Batch)
for i := 0; i < n; i++ {
batch.Reset()
batch.Put(tkey(i), tval(i, ctValSize))
err := db.Write(batch, p.wo)
if err != nil {
t.Fatal("write error: ", err)
}
}
}
func (h *dbCorruptHarness) buildShuffled(n int, rnd *rand.Rand) {
p := &h.dbHarness
t := p.t
db := p.db
batch := new(Batch)
for i := range rnd.Perm(n) {
batch.Reset()
batch.Put(tkey(i), tval(i, ctValSize))
err := db.Write(batch, p.wo)
if err != nil {
t.Fatal("write error: ", err)
}
}
}
func (h *dbCorruptHarness) deleteRand(n, max int, rnd *rand.Rand) {
p := &h.dbHarness
t := p.t
db := p.db
batch := new(Batch)
for i := 0; i < n; i++ {
batch.Reset()
batch.Delete(tkey(rnd.Intn(max)))
err := db.Write(batch, p.wo)
if err != nil {
t.Fatal("write error: ", err)
}
}
}
func (h *dbCorruptHarness) corrupt(ft storage.FileType, fi, offset, n int) {
p := &h.dbHarness
t := p.t
ff, _ := p.stor.GetFiles(ft)
sff := files(ff)
sff.sort()
if fi < 0 {
fi = len(sff) - 1
}
if fi >= len(sff) {
t.Fatalf("no such file with type %q with index %d", ft, fi)
}
file := sff[fi]
r, err := file.Open()
if err != nil {
t.Fatal("cannot open file: ", err)
}
x, err := r.Seek(0, 2)
if err != nil {
t.Fatal("cannot query file size: ", err)
}
m := int(x)
if _, err := r.Seek(0, 0); err != nil {
t.Fatal(err)
}
if offset < 0 {
if -offset > m {
offset = 0
} else {
offset = m + offset
}
}
if offset > m {
offset = m
}
if offset+n > m {
n = m - offset
}
buf := make([]byte, m)
_, err = io.ReadFull(r, buf)
if err != nil {
t.Fatal("cannot read file: ", err)
}
r.Close()
for i := 0; i < n; i++ {
buf[offset+i] ^= 0x80
}
err = file.Remove()
if err != nil {
t.Fatal("cannot remove old file: ", err)
}
w, err := file.Create()
if err != nil {
t.Fatal("cannot create new file: ", err)
}
_, err = w.Write(buf)
if err != nil {
t.Fatal("cannot write new file: ", err)
}
w.Close()
}
func (h *dbCorruptHarness) removeAll(ft storage.FileType) {
ff, err := h.stor.GetFiles(ft)
if err != nil {
h.t.Fatal("get files: ", err)
}
for _, f := range ff {
if err := f.Remove(); err != nil {
h.t.Error("remove file: ", err)
}
}
}
func (h *dbCorruptHarness) removeOne(ft storage.FileType) {
ff, err := h.stor.GetFiles(ft)
if err != nil {
h.t.Fatal("get files: ", err)
}
f := ff[rand.Intn(len(ff))]
h.t.Logf("removing file @%d", f.Num())
if err := f.Remove(); err != nil {
h.t.Error("remove file: ", err)
}
}
func (h *dbCorruptHarness) check(min, max int) {
p := &h.dbHarness
t := p.t
db := p.db
var n, badk, badv, missed, good int
iter := db.NewIterator(nil, p.ro)
for iter.Next() {
k := 0
fmt.Sscanf(string(iter.Key()), "%d", &k)
if k < n {
badk++
continue
}
missed += k - n
n = k + 1
if !bytes.Equal(iter.Value(), tval(k, ctValSize)) {
badv++
} else {
good++
}
}
err := iter.Error()
iter.Release()
t.Logf("want=%d..%d got=%d badkeys=%d badvalues=%d missed=%d, err=%v",
min, max, good, badk, badv, missed, err)
if good < min || good > max {
t.Errorf("good entries number not in range")
}
}
func TestCorruptDB_Journal(t *testing.T) {
h := newDbCorruptHarness(t)
h.build(100)
h.check(100, 100)
h.closeDB()
h.corrupt(storage.TypeJournal, -1, 19, 1)
h.corrupt(storage.TypeJournal, -1, 32*1024+1000, 1)
h.openDB()
h.check(36, 36)
h.close()
}
func TestCorruptDB_Table(t *testing.T) {
h := newDbCorruptHarness(t)
h.build(100)
h.compactMem()
h.compactRangeAt(0, "", "")
h.compactRangeAt(1, "", "")
h.closeDB()
h.corrupt(storage.TypeTable, -1, 100, 1)
h.openDB()
h.check(99, 99)
h.close()
}
func TestCorruptDB_TableIndex(t *testing.T) {
h := newDbCorruptHarness(t)
h.build(10000)
h.compactMem()
h.closeDB()
h.corrupt(storage.TypeTable, -1, -2000, 500)
h.openDB()
h.check(5000, 9999)
h.close()
}
func TestCorruptDB_MissingManifest(t *testing.T) {
rnd := rand.New(rand.NewSource(0x0badda7a))
h := newDbCorruptHarnessWopt(t, &opt.Options{
BlockCacheCapacity: 100,
Strict: opt.StrictJournalChecksum,
WriteBuffer: 1000 * 60,
})
h.build(1000)
h.compactMem()
h.buildShuffled(1000, rnd)
h.compactMem()
h.deleteRand(500, 1000, rnd)
h.compactMem()
h.buildShuffled(1000, rnd)
h.compactMem()
h.deleteRand(500, 1000, rnd)
h.compactMem()
h.buildShuffled(1000, rnd)
h.compactMem()
h.closeDB()
h.stor.SetIgnoreOpenErr(storage.TypeManifest)
h.removeAll(storage.TypeManifest)
h.openAssert(false)
h.stor.SetIgnoreOpenErr(0)
h.recover()
h.check(1000, 1000)
h.build(1000)
h.compactMem()
h.compactRange("", "")
h.closeDB()
h.recover()
h.check(1000, 1000)
h.close()
}
func TestCorruptDB_SequenceNumberRecovery(t *testing.T) {
h := newDbCorruptHarness(t)
h.put("foo", "v1")
h.put("foo", "v2")
h.put("foo", "v3")
h.put("foo", "v4")
h.put("foo", "v5")
h.closeDB()
h.recover()
h.getVal("foo", "v5")
h.put("foo", "v6")
h.getVal("foo", "v6")
h.reopenDB()
h.getVal("foo", "v6")
h.close()
}
func TestCorruptDB_SequenceNumberRecoveryTable(t *testing.T) {
h := newDbCorruptHarness(t)
h.put("foo", "v1")
h.put("foo", "v2")
h.put("foo", "v3")
h.compactMem()
h.put("foo", "v4")
h.put("foo", "v5")
h.compactMem()
h.closeDB()
h.recover()
h.getVal("foo", "v5")
h.put("foo", "v6")
h.getVal("foo", "v6")
h.reopenDB()
h.getVal("foo", "v6")
h.close()
}
func TestCorruptDB_CorruptedManifest(t *testing.T) {
h := newDbCorruptHarness(t)
h.put("foo", "hello")
h.compactMem()
h.compactRange("", "")
h.closeDB()
h.corrupt(storage.TypeManifest, -1, 0, 1000)
h.openAssert(false)
h.recover()
h.getVal("foo", "hello")
h.close()
}
func TestCorruptDB_CompactionInputError(t *testing.T) {
h := newDbCorruptHarness(t)
h.build(10)
h.compactMem()
h.closeDB()
h.corrupt(storage.TypeTable, -1, 100, 1)
h.openDB()
h.check(9, 9)
h.build(10000)
h.check(10000, 10000)
h.close()
}
func TestCorruptDB_UnrelatedKeys(t *testing.T) {
h := newDbCorruptHarness(t)
h.build(10)
h.compactMem()
h.closeDB()
h.corrupt(storage.TypeTable, -1, 100, 1)
h.openDB()
h.put(string(tkey(1000)), string(tval(1000, ctValSize)))
h.getVal(string(tkey(1000)), string(tval(1000, ctValSize)))
h.compactMem()
h.getVal(string(tkey(1000)), string(tval(1000, ctValSize)))
h.close()
}
func TestCorruptDB_Level0NewerFileHasOlderSeqnum(t *testing.T) {
h := newDbCorruptHarness(t)
h.put("a", "v1")
h.put("b", "v1")
h.compactMem()
h.put("a", "v2")
h.put("b", "v2")
h.compactMem()
h.put("a", "v3")
h.put("b", "v3")
h.compactMem()
h.put("c", "v0")
h.put("d", "v0")
h.compactMem()
h.compactRangeAt(1, "", "")
h.closeDB()
h.recover()
h.getVal("a", "v3")
h.getVal("b", "v3")
h.getVal("c", "v0")
h.getVal("d", "v0")
h.close()
}
func TestCorruptDB_RecoverInvalidSeq_Issue53(t *testing.T) {
h := newDbCorruptHarness(t)
h.put("a", "v1")
h.put("b", "v1")
h.compactMem()
h.put("a", "v2")
h.put("b", "v2")
h.compactMem()
h.put("a", "v3")
h.put("b", "v3")
h.compactMem()
h.put("c", "v0")
h.put("d", "v0")
h.compactMem()
h.compactRangeAt(0, "", "")
h.closeDB()
h.recover()
h.getVal("a", "v3")
h.getVal("b", "v3")
h.getVal("c", "v0")
h.getVal("d", "v0")
h.close()
}
func TestCorruptDB_MissingTableFiles(t *testing.T) {
h := newDbCorruptHarness(t)
h.put("a", "v1")
h.put("b", "v1")
h.compactMem()
h.put("c", "v2")
h.put("d", "v2")
h.compactMem()
h.put("e", "v3")
h.put("f", "v3")
h.closeDB()
h.removeOne(storage.TypeTable)
h.openAssert(false)
h.close()
}
func TestCorruptDB_RecoverTable(t *testing.T) {
h := newDbCorruptHarnessWopt(t, &opt.Options{
WriteBuffer: 112 * opt.KiB,
CompactionTableSize: 90 * opt.KiB,
Filter: filter.NewBloomFilter(10),
})
h.build(1000)
h.compactMem()
h.compactRangeAt(0, "", "")
h.compactRangeAt(1, "", "")
seq := h.db.seq
h.closeDB()
h.corrupt(storage.TypeTable, 0, 1000, 1)
h.corrupt(storage.TypeTable, 3, 10000, 1)
// Corrupted filter shouldn't affect recovery.
h.corrupt(storage.TypeTable, 3, 113888, 10)
h.corrupt(storage.TypeTable, -1, 20000, 1)
h.recover()
if h.db.seq != seq {
t.Errorf("invalid seq, want=%d got=%d", seq, h.db.seq)
}
h.check(985, 985)
h.close()
}

View File

File diff suppressed because it is too large

View File

@@ -1,58 +0,0 @@
// Copyright (c) 2014, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package leveldb
import (
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
"github.com/syndtr/goleveldb/leveldb/opt"
"github.com/syndtr/goleveldb/leveldb/testutil"
)
var _ = testutil.Defer(func() {
Describe("Leveldb external", func() {
o := &opt.Options{
DisableBlockCache: true,
BlockRestartInterval: 5,
BlockSize: 80,
Compression: opt.NoCompression,
OpenFilesCacheCapacity: -1,
Strict: opt.StrictAll,
WriteBuffer: 1000,
CompactionTableSize: 2000,
}
Describe("write test", func() {
It("should do write correctly", func(done Done) {
db := newTestingDB(o, nil, nil)
t := testutil.DBTesting{
DB: db,
Deleted: testutil.KeyValue_Generate(nil, 500, 1, 50, 5, 5).Clone(),
}
testutil.DoDBTesting(&t)
db.TestClose()
done <- true
}, 20.0)
})
Describe("read test", func() {
testutil.AllKeyValueTesting(nil, nil, func(kv testutil.KeyValue) testutil.DB {
// Building the DB.
db := newTestingDB(o, nil, nil)
kv.IterateShuffled(nil, func(i int, key, value []byte) {
err := db.TestPut(key, value)
Expect(err).NotTo(HaveOccurred())
})
return db
}, func(db testutil.DB) {
db.(*testingDB).TestClose()
})
})
})
})

View File

@@ -1,142 +0,0 @@
// Copyright (c) 2012, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package filter
import (
"encoding/binary"
"github.com/syndtr/goleveldb/leveldb/util"
"testing"
)
type harness struct {
t *testing.T
bloom Filter
generator FilterGenerator
filter []byte
}
func newHarness(t *testing.T) *harness {
bloom := NewBloomFilter(10)
return &harness{
t: t,
bloom: bloom,
generator: bloom.NewGenerator(),
}
}
func (h *harness) add(key []byte) {
h.generator.Add(key)
}
func (h *harness) addNum(key uint32) {
var b [4]byte
binary.LittleEndian.PutUint32(b[:], key)
h.add(b[:])
}
func (h *harness) build() {
b := &util.Buffer{}
h.generator.Generate(b)
h.filter = b.Bytes()
}
func (h *harness) reset() {
h.filter = nil
}
func (h *harness) filterLen() int {
return len(h.filter)
}
func (h *harness) assert(key []byte, want, silent bool) bool {
got := h.bloom.Contains(h.filter, key)
if !silent && got != want {
h.t.Errorf("assert on '%v' failed got '%v', want '%v'", key, got, want)
}
return got
}
func (h *harness) assertNum(key uint32, want, silent bool) bool {
var b [4]byte
binary.LittleEndian.PutUint32(b[:], key)
return h.assert(b[:], want, silent)
}
func TestBloomFilter_Empty(t *testing.T) {
h := newHarness(t)
h.build()
h.assert([]byte("hello"), false, false)
h.assert([]byte("world"), false, false)
}
func TestBloomFilter_Small(t *testing.T) {
h := newHarness(t)
h.add([]byte("hello"))
h.add([]byte("world"))
h.build()
h.assert([]byte("hello"), true, false)
h.assert([]byte("world"), true, false)
h.assert([]byte("x"), false, false)
h.assert([]byte("foo"), false, false)
}
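// nextN advances n on a roughly logarithmic grid (by 1 up to 10, then by
// 10, 100 and 1000) so the varying-lengths test samples many key counts.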
func nextN(n int) int {
switch {
case n < 10:
n += 1
case n < 100:
n += 10
case n < 1000:
n += 100
default:
n += 1000
}
return n
}
func TestBloomFilter_VaryingLengths(t *testing.T) {
h := newHarness(t)
var mediocre, good int
for n := 1; n < 10000; n = nextN(n) {
h.reset()
for i := 0; i < n; i++ {
h.addNum(uint32(i))
}
h.build()
got := h.filterLen()
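// A bloom filter built at 10 bits per key should stay near n*10/8 bytes;
// the 40-byte allowance for rounding and trailing metadata is an assumption
// mirroring the equivalent C++ LevelDB test's bound.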
want := (n * 10 / 8) + 40
if got > want {
t.Errorf("filter len test failed, '%d' > '%d'", got, want)
}
for i := 0; i < n; i++ {
h.assertNum(uint32(i), true, false)
}
var rate float32
for i := 0; i < 10000; i++ {
if h.assertNum(uint32(i+1000000000), true, true) {
rate++
}
}
rate /= 10000
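// At 10 bits per key the theoretical false-positive rate is roughly 1%,
// so 2% is a hard failure and anything above 1.25% merely counts as mediocre.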
if rate > 0.02 {
t.Errorf("false positive rate is more than 2%%, got %v, at len %d", rate, n)
}
if rate > 0.0125 {
mediocre++
} else {
good++
}
}
t.Logf("false positive rate: %d good, %d mediocre", good, mediocre)
if mediocre > good/5 {
t.Error("mediocre false positive rate is more than expected")
}
}


@@ -1,30 +0,0 @@
// Copyright (c) 2014, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package iterator_test
import (
. "github.com/onsi/ginkgo"
. "github.com/syndtr/goleveldb/leveldb/iterator"
"github.com/syndtr/goleveldb/leveldb/testutil"
)
var _ = testutil.Defer(func() {
Describe("Array iterator", func() {
It("Should iterates and seeks correctly", func() {
// Build key/value.
kv := testutil.KeyValue_Generate(nil, 70, 1, 5, 3, 3)
// Test the iterator.
t := testutil.IteratorTesting{
KeyValue: kv.Clone(),
Iter: NewArrayIterator(kv),
}
testutil.DoIteratorTesting(&t)
})
})
})


@@ -1,83 +0,0 @@
// Copyright (c) 2014, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package iterator_test
import (
"sort"
. "github.com/onsi/ginkgo"
"github.com/syndtr/goleveldb/leveldb/comparer"
. "github.com/syndtr/goleveldb/leveldb/iterator"
"github.com/syndtr/goleveldb/leveldb/testutil"
)
type keyValue struct {
key []byte
testutil.KeyValue
}
type keyValueIndex []keyValue
func (x keyValueIndex) Search(key []byte) int {
return sort.Search(x.Len(), func(i int) bool {
return comparer.DefaultComparer.Compare(x[i].key, key) >= 0
})
}
func (x keyValueIndex) Len() int { return len(x) }
func (x keyValueIndex) Index(i int) (key, value []byte) { return x[i].key, nil }
func (x keyValueIndex) Get(i int) Iterator { return NewArrayIterator(x[i]) }
var _ = testutil.Defer(func() {
Describe("Indexed iterator", func() {
Test := func(n ...int) func() {
if len(n) == 0 {
rnd := testutil.NewRand()
n = make([]int, rnd.Intn(17)+3)
for i := range n {
n[i] = rnd.Intn(19) + 1
}
}
return func() {
It("Should iterates and seeks correctly", func(done Done) {
// Build key/value.
index := make(keyValueIndex, len(n))
sum := 0
for _, x := range n {
sum += x
}
kv := testutil.KeyValue_Generate(nil, sum, 1, 10, 4, 4)
for i, j := 0, 0; i < len(n); i++ {
for x := n[i]; x > 0; x-- {
key, value := kv.Index(j)
index[i].key = key
index[i].Put(key, value)
j++
}
}
// Test the iterator.
t := testutil.IteratorTesting{
KeyValue: kv.Clone(),
Iter: NewIndexedIterator(NewArrayIndexer(index), true),
}
testutil.DoIteratorTesting(&t)
done <- true
}, 1.5)
}
}
Describe("with 100 keys", Test(100))
Describe("with 50-50 keys", Test(50, 50))
Describe("with 50-1 keys", Test(50, 1))
Describe("with 50-1-50 keys", Test(50, 1, 50))
Describe("with 1-50 keys", Test(1, 50))
Describe("with random N-keys", Test())
})
})


@@ -1,11 +0,0 @@
package iterator_test
import (
"testing"
"github.com/syndtr/goleveldb/leveldb/testutil"
)
func TestIterator(t *testing.T) {
testutil.RunSuite(t, "Iterator Suite")
}


@@ -1,60 +0,0 @@
// Copyright (c) 2014, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package iterator_test
import (
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
"github.com/syndtr/goleveldb/leveldb/comparer"
. "github.com/syndtr/goleveldb/leveldb/iterator"
"github.com/syndtr/goleveldb/leveldb/testutil"
)
var _ = testutil.Defer(func() {
Describe("Merged iterator", func() {
Test := func(filled int, empty int) func() {
return func() {
It("Should iterates and seeks correctly", func(done Done) {
rnd := testutil.NewRand()
// Build key/value.
filledKV := make([]testutil.KeyValue, filled)
kv := testutil.KeyValue_Generate(nil, 100, 1, 10, 4, 4)
kv.Iterate(func(i int, key, value []byte) {
filledKV[rnd.Intn(filled)].Put(key, value)
})
// Create iterators.
iters := make([]Iterator, filled+empty)
for i := range iters {
if empty == 0 || (rnd.Int()%2 == 0 && filled > 0) {
filled--
Expect(filledKV[filled].Len()).ShouldNot(BeZero())
iters[i] = NewArrayIterator(filledKV[filled])
} else {
empty--
iters[i] = NewEmptyIterator(nil)
}
}
// Test the iterator.
t := testutil.IteratorTesting{
KeyValue: kv.Clone(),
Iter: NewMergedIterator(iters, comparer.DefaultComparer, true),
}
testutil.DoIteratorTesting(&t)
done <- true
}, 1.5)
}
}
Describe("with three, all filled iterators", Test(3, 0))
Describe("with one filled, one empty iterators", Test(1, 1))
Describe("with one filled, two empty iterators", Test(1, 2))
})
})


@@ -1,818 +0,0 @@
// Copyright 2011 The LevelDB-Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Taken from: https://code.google.com/p/leveldb-go/source/browse/leveldb/record/record_test.go?r=df1fa28f7f3be6c3935548169002309c12967135
// License, authors and contributors information can be found at the URLs below, respectively:
// https://code.google.com/p/leveldb-go/source/browse/LICENSE
// https://code.google.com/p/leveldb-go/source/browse/AUTHORS
// https://code.google.com/p/leveldb-go/source/browse/CONTRIBUTORS
package journal
import (
"bytes"
"encoding/binary"
"fmt"
"io"
"io/ioutil"
"math/rand"
"strings"
"testing"
)
type dropper struct {
t *testing.T
}
func (d dropper) Drop(err error) {
d.t.Log(err)
}
func short(s string) string {
if len(s) < 64 {
return s
}
return fmt.Sprintf("%s...(skipping %d bytes)...%s", s[:20], len(s)-40, s[len(s)-20:])
}
// big returns a string of length n, composed of repetitions of partial.
func big(partial string, n int) string {
return strings.Repeat(partial, n/len(partial)+1)[:n]
}
func TestEmpty(t *testing.T) {
buf := new(bytes.Buffer)
r := NewReader(buf, dropper{t}, true, true)
if _, err := r.Next(); err != io.EOF {
t.Fatalf("got %v, want %v", err, io.EOF)
}
}
func testGenerator(t *testing.T, reset func(), gen func() (string, bool)) {
buf := new(bytes.Buffer)
reset()
w := NewWriter(buf)
for {
s, ok := gen()
if !ok {
break
}
ww, err := w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write([]byte(s)); err != nil {
t.Fatal(err)
}
}
if err := w.Close(); err != nil {
t.Fatal(err)
}
reset()
r := NewReader(buf, dropper{t}, true, true)
for {
s, ok := gen()
if !ok {
break
}
rr, err := r.Next()
if err != nil {
t.Fatal(err)
}
x, err := ioutil.ReadAll(rr)
if err != nil {
t.Fatal(err)
}
if string(x) != s {
t.Fatalf("got %q, want %q", short(string(x)), short(s))
}
}
if _, err := r.Next(); err != io.EOF {
t.Fatalf("got %v, want %v", err, io.EOF)
}
}
func testLiterals(t *testing.T, s []string) {
var i int
reset := func() {
i = 0
}
gen := func() (string, bool) {
if i == len(s) {
return "", false
}
i++
return s[i-1], true
}
testGenerator(t, reset, gen)
}
func TestMany(t *testing.T) {
const n = 1e5
var i int
reset := func() {
i = 0
}
gen := func() (string, bool) {
if i == n {
return "", false
}
i++
return fmt.Sprintf("%d.", i-1), true
}
testGenerator(t, reset, gen)
}
func TestRandom(t *testing.T) {
const n = 1e2
var (
i int
r *rand.Rand
)
reset := func() {
i, r = 0, rand.New(rand.NewSource(0))
}
gen := func() (string, bool) {
if i == n {
return "", false
}
i++
return strings.Repeat(string(uint8(i)), r.Intn(2*blockSize+16)), true
}
testGenerator(t, reset, gen)
}
func TestBasic(t *testing.T) {
testLiterals(t, []string{
strings.Repeat("a", 1000),
strings.Repeat("b", 97270),
strings.Repeat("c", 8000),
})
}
func TestBoundary(t *testing.T) {
for i := blockSize - 16; i < blockSize+16; i++ {
s0 := big("abcd", i)
for j := blockSize - 16; j < blockSize+16; j++ {
s1 := big("ABCDE", j)
testLiterals(t, []string{s0, s1})
testLiterals(t, []string{s0, "", s1})
testLiterals(t, []string{s0, "x", s1})
}
}
}
func TestFlush(t *testing.T) {
buf := new(bytes.Buffer)
w := NewWriter(buf)
// Write a couple of records. Everything should still be held
// in the record.Writer buffer, so buf.Len should be 0.
w0, _ := w.Next()
w0.Write([]byte("0"))
w1, _ := w.Next()
w1.Write([]byte("11"))
if got, want := buf.Len(), 0; got != want {
t.Fatalf("buffer length #0: got %d want %d", got, want)
}
// Flush the record.Writer buffer, which should yield 17 bytes.
// 17 = 2*7 + 1 + 2, which is two headers and 1 + 2 payload bytes.
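// (This assumes the usual journal framing: a 7-byte chunk header holding a
// 4-byte checksum, 2-byte little-endian length and 1-byte chunk type.)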
if err := w.Flush(); err != nil {
t.Fatal(err)
}
if got, want := buf.Len(), 17; got != want {
t.Fatalf("buffer length #1: got %d want %d", got, want)
}
// Do another write, one that isn't large enough to complete the block.
// The write should not have flowed through to buf.
w2, _ := w.Next()
w2.Write(bytes.Repeat([]byte("2"), 10000))
if got, want := buf.Len(), 17; got != want {
t.Fatalf("buffer length #2: got %d want %d", got, want)
}
// Flushing should get us up to 10024 bytes written.
// 10024 = 17 + 7 + 10000.
if err := w.Flush(); err != nil {
t.Fatal(err)
}
if got, want := buf.Len(), 10024; got != want {
t.Fatalf("buffer length #3: got %d want %d", got, want)
}
// Do a bigger write, one that completes the current block.
// We should now have 32768 bytes (a complete block), without
// an explicit flush.
w3, _ := w.Next()
w3.Write(bytes.Repeat([]byte("3"), 40000))
if got, want := buf.Len(), 32768; got != want {
t.Fatalf("buffer length #4: got %d want %d", got, want)
}
// Flushing should get us up to 50038 bytes written.
// 50038 = 10024 + 2*7 + 40000. There are two headers because
// the one record was split into two chunks.
if err := w.Flush(); err != nil {
t.Fatal(err)
}
if got, want := buf.Len(), 50038; got != want {
t.Fatalf("buffer length #5: got %d want %d", got, want)
}
// Check that reading those records gives the right lengths.
r := NewReader(buf, dropper{t}, true, true)
wants := []int64{1, 2, 10000, 40000}
for i, want := range wants {
rr, _ := r.Next()
n, err := io.Copy(ioutil.Discard, rr)
if err != nil {
t.Fatalf("read #%d: %v", i, err)
}
if n != want {
t.Fatalf("read #%d: got %d bytes want %d", i, n, want)
}
}
}
func TestNonExhaustiveRead(t *testing.T) {
const n = 100
buf := new(bytes.Buffer)
p := make([]byte, 10)
rnd := rand.New(rand.NewSource(1))
w := NewWriter(buf)
for i := 0; i < n; i++ {
length := len(p) + rnd.Intn(3*blockSize)
s := string(uint8(i)) + "123456789abcdefgh"
ww, _ := w.Next()
ww.Write([]byte(big(s, length)))
}
if err := w.Close(); err != nil {
t.Fatal(err)
}
r := NewReader(buf, dropper{t}, true, true)
for i := 0; i < n; i++ {
rr, _ := r.Next()
_, err := io.ReadFull(rr, p)
if err != nil {
t.Fatal(err)
}
want := string(uint8(i)) + "123456789"
if got := string(p); got != want {
t.Fatalf("read #%d: got %q want %q", i, got, want)
}
}
}
func TestStaleReader(t *testing.T) {
buf := new(bytes.Buffer)
w := NewWriter(buf)
w0, err := w.Next()
if err != nil {
t.Fatal(err)
}
w0.Write([]byte("0"))
w1, err := w.Next()
if err != nil {
t.Fatal(err)
}
w1.Write([]byte("11"))
if err := w.Close(); err != nil {
t.Fatal(err)
}
r := NewReader(buf, dropper{t}, true, true)
r0, err := r.Next()
if err != nil {
t.Fatal(err)
}
r1, err := r.Next()
if err != nil {
t.Fatal(err)
}
p := make([]byte, 1)
if _, err := r0.Read(p); err == nil || !strings.Contains(err.Error(), "stale") {
t.Fatalf("stale read #0: unexpected error: %v", err)
}
if _, err := r1.Read(p); err != nil {
t.Fatalf("fresh read #1: got %v want nil error", err)
}
if p[0] != '1' {
t.Fatalf("fresh read #1: byte contents: got '%c' want '1'", p[0])
}
}
func TestStaleWriter(t *testing.T) {
buf := new(bytes.Buffer)
w := NewWriter(buf)
w0, err := w.Next()
if err != nil {
t.Fatal(err)
}
w1, err := w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := w0.Write([]byte("0")); err == nil || !strings.Contains(err.Error(), "stale") {
t.Fatalf("stale write #0: unexpected error: %v", err)
}
if _, err := w1.Write([]byte("11")); err != nil {
t.Fatalf("fresh write #1: got %v want nil error", err)
}
if err := w.Flush(); err != nil {
t.Fatalf("flush: %v", err)
}
if _, err := w1.Write([]byte("0")); err == nil || !strings.Contains(err.Error(), "stale") {
t.Fatalf("stale write #1: unexpected error: %v", err)
}
}
func TestCorrupt_MissingLastBlock(t *testing.T) {
buf := new(bytes.Buffer)
w := NewWriter(buf)
// First record.
ww, err := w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), blockSize-1024)); err != nil {
t.Fatalf("write #0: unexpected error: %v", err)
}
// Second record.
ww, err = w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), blockSize-headerSize)); err != nil {
t.Fatalf("write #1: unexpected error: %v", err)
}
if err := w.Close(); err != nil {
t.Fatal(err)
}
// Cut the last block.
b := buf.Bytes()[:blockSize]
r := NewReader(bytes.NewReader(b), dropper{t}, false, true)
// First read.
rr, err := r.Next()
if err != nil {
t.Fatal(err)
}
n, err := io.Copy(ioutil.Discard, rr)
if err != nil {
t.Fatalf("read #0: %v", err)
}
if n != blockSize-1024 {
t.Fatalf("read #0: got %d bytes want %d", n, blockSize-1024)
}
// Second read.
rr, err = r.Next()
if err != nil {
t.Fatal(err)
}
n, err = io.Copy(ioutil.Discard, rr)
if err != io.ErrUnexpectedEOF {
t.Fatalf("read #1: unexpected error: %v", err)
}
if _, err := r.Next(); err != io.EOF {
t.Fatalf("last next: unexpected error: %v", err)
}
}
func TestCorrupt_CorruptedFirstBlock(t *testing.T) {
buf := new(bytes.Buffer)
w := NewWriter(buf)
// First record.
ww, err := w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), blockSize/2)); err != nil {
t.Fatalf("write #0: unexpected error: %v", err)
}
// Second record.
ww, err = w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), blockSize-headerSize)); err != nil {
t.Fatalf("write #1: unexpected error: %v", err)
}
// Third record.
ww, err = w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), (blockSize-headerSize)+1)); err != nil {
t.Fatalf("write #2: unexpected error: %v", err)
}
// Fourth record.
ww, err = w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), (blockSize-headerSize)+2)); err != nil {
t.Fatalf("write #3: unexpected error: %v", err)
}
if err := w.Close(); err != nil {
t.Fatal(err)
}
b := buf.Bytes()
// Corrupting block #0.
for i := 0; i < 1024; i++ {
b[i] = '1'
}
r := NewReader(bytes.NewReader(b), dropper{t}, false, true)
// First read (third record).
rr, err := r.Next()
if err != nil {
t.Fatal(err)
}
n, err := io.Copy(ioutil.Discard, rr)
if err != nil {
t.Fatalf("read #0: %v", err)
}
if want := int64(blockSize-headerSize) + 1; n != want {
t.Fatalf("read #0: got %d bytes want %d", n, want)
}
// Second read (fourth record).
rr, err = r.Next()
if err != nil {
t.Fatal(err)
}
n, err = io.Copy(ioutil.Discard, rr)
if err != nil {
t.Fatalf("read #1: %v", err)
}
if want := int64(blockSize-headerSize) + 2; n != want {
t.Fatalf("read #1: got %d bytes want %d", n, want)
}
if _, err := r.Next(); err != io.EOF {
t.Fatalf("last next: unexpected error: %v", err)
}
}
func TestCorrupt_CorruptedMiddleBlock(t *testing.T) {
buf := new(bytes.Buffer)
w := NewWriter(buf)
// First record.
ww, err := w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), blockSize/2)); err != nil {
t.Fatalf("write #0: unexpected error: %v", err)
}
// Second record.
ww, err = w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), blockSize-headerSize)); err != nil {
t.Fatalf("write #1: unexpected error: %v", err)
}
// Third record.
ww, err = w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), (blockSize-headerSize)+1)); err != nil {
t.Fatalf("write #2: unexpected error: %v", err)
}
// Fourth record.
ww, err = w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), (blockSize-headerSize)+2)); err != nil {
t.Fatalf("write #3: unexpected error: %v", err)
}
if err := w.Close(); err != nil {
t.Fatal(err)
}
b := buf.Bytes()
// Corrupting block #1.
for i := 0; i < 1024; i++ {
b[blockSize+i] = '1'
}
r := NewReader(bytes.NewReader(b), dropper{t}, false, true)
// First read (first record).
rr, err := r.Next()
if err != nil {
t.Fatal(err)
}
n, err := io.Copy(ioutil.Discard, rr)
if err != nil {
t.Fatalf("read #0: %v", err)
}
if want := int64(blockSize / 2); n != want {
t.Fatalf("read #0: got %d bytes want %d", n, want)
}
// Second read (second record).
rr, err = r.Next()
if err != nil {
t.Fatal(err)
}
n, err = io.Copy(ioutil.Discard, rr)
if err != io.ErrUnexpectedEOF {
t.Fatalf("read #1: unexpected error: %v", err)
}
// Third read (fourth record).
rr, err = r.Next()
if err != nil {
t.Fatal(err)
}
n, err = io.Copy(ioutil.Discard, rr)
if err != nil {
t.Fatalf("read #2: %v", err)
}
if want := int64(blockSize-headerSize) + 2; n != want {
t.Fatalf("read #2: got %d bytes want %d", n, want)
}
if _, err := r.Next(); err != io.EOF {
t.Fatalf("last next: unexpected error: %v", err)
}
}
func TestCorrupt_CorruptedLastBlock(t *testing.T) {
buf := new(bytes.Buffer)
w := NewWriter(buf)
// First record.
ww, err := w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), blockSize/2)); err != nil {
t.Fatalf("write #0: unexpected error: %v", err)
}
// Second record.
ww, err = w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), blockSize-headerSize)); err != nil {
t.Fatalf("write #1: unexpected error: %v", err)
}
// Third record.
ww, err = w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), (blockSize-headerSize)+1)); err != nil {
t.Fatalf("write #2: unexpected error: %v", err)
}
// Fourth record.
ww, err = w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), (blockSize-headerSize)+2)); err != nil {
t.Fatalf("write #3: unexpected error: %v", err)
}
if err := w.Close(); err != nil {
t.Fatal(err)
}
b := buf.Bytes()
// Corrupting block #3.
for i := len(b) - 1; i > len(b)-1024; i-- {
b[i] = '1'
}
r := NewReader(bytes.NewReader(b), dropper{t}, false, true)
// First read (first record).
rr, err := r.Next()
if err != nil {
t.Fatal(err)
}
n, err := io.Copy(ioutil.Discard, rr)
if err != nil {
t.Fatalf("read #0: %v", err)
}
if want := int64(blockSize / 2); n != want {
t.Fatalf("read #0: got %d bytes want %d", n, want)
}
// Second read (second record).
rr, err = r.Next()
if err != nil {
t.Fatal(err)
}
n, err = io.Copy(ioutil.Discard, rr)
if err != nil {
t.Fatalf("read #1: %v", err)
}
if want := int64(blockSize - headerSize); n != want {
t.Fatalf("read #1: got %d bytes want %d", n, want)
}
// Third read (third record).
rr, err = r.Next()
if err != nil {
t.Fatal(err)
}
n, err = io.Copy(ioutil.Discard, rr)
if err != nil {
t.Fatalf("read #2: %v", err)
}
if want := int64(blockSize-headerSize) + 1; n != want {
t.Fatalf("read #2: got %d bytes want %d", n, want)
}
// Fourth read (fourth record).
rr, err = r.Next()
if err != nil {
t.Fatal(err)
}
n, err = io.Copy(ioutil.Discard, rr)
if err != io.ErrUnexpectedEOF {
t.Fatalf("read #3: unexpected error: %v", err)
}
if _, err := r.Next(); err != io.EOF {
t.Fatalf("last next: unexpected error: %v", err)
}
}
func TestCorrupt_FirstChunkLengthOverflow(t *testing.T) {
buf := new(bytes.Buffer)
w := NewWriter(buf)
// First record.
ww, err := w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), blockSize/2)); err != nil {
t.Fatalf("write #0: unexpected error: %v", err)
}
// Second record.
ww, err = w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), blockSize-headerSize)); err != nil {
t.Fatalf("write #1: unexpected error: %v", err)
}
// Third record.
ww, err = w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), (blockSize-headerSize)+1)); err != nil {
t.Fatalf("write #2: unexpected error: %v", err)
}
if err := w.Close(); err != nil {
t.Fatal(err)
}
b := buf.Bytes()
// Corrupting record #1.
x := blockSize
binary.LittleEndian.PutUint16(b[x+4:], 0xffff)
r := NewReader(bytes.NewReader(b), dropper{t}, false, true)
// First read (first record).
rr, err := r.Next()
if err != nil {
t.Fatal(err)
}
n, err := io.Copy(ioutil.Discard, rr)
if err != nil {
t.Fatalf("read #0: %v", err)
}
if want := int64(blockSize / 2); n != want {
t.Fatalf("read #0: got %d bytes want %d", n, want)
}
// Second read (second record).
rr, err = r.Next()
if err != nil {
t.Fatal(err)
}
n, err = io.Copy(ioutil.Discard, rr)
if err != io.ErrUnexpectedEOF {
t.Fatalf("read #1: unexpected error: %v", err)
}
if _, err := r.Next(); err != io.EOF {
t.Fatalf("last next: unexpected error: %v", err)
}
}
func TestCorrupt_MiddleChunkLengthOverflow(t *testing.T) {
buf := new(bytes.Buffer)
w := NewWriter(buf)
// First record.
ww, err := w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), blockSize/2)); err != nil {
t.Fatalf("write #0: unexpected error: %v", err)
}
// Second record.
ww, err = w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), blockSize-headerSize)); err != nil {
t.Fatalf("write #1: unexpected error: %v", err)
}
// Third record.
ww, err = w.Next()
if err != nil {
t.Fatal(err)
}
if _, err := ww.Write(bytes.Repeat([]byte("0"), (blockSize-headerSize)+1)); err != nil {
t.Fatalf("write #2: unexpected error: %v", err)
}
if err := w.Close(); err != nil {
t.Fatal(err)
}
b := buf.Bytes()
// Corrupting record #1.
x := blockSize/2 + headerSize
binary.LittleEndian.PutUint16(b[x+4:], 0xffff)
r := NewReader(bytes.NewReader(b), dropper{t}, false, true)
// First read (first record).
rr, err := r.Next()
if err != nil {
t.Fatal(err)
}
n, err := io.Copy(ioutil.Discard, rr)
if err != nil {
t.Fatalf("read #0: %v", err)
}
if want := int64(blockSize / 2); n != want {
t.Fatalf("read #0: got %d bytes want %d", n, want)
}
// Second read (third record).
rr, err = r.Next()
if err != nil {
t.Fatal(err)
}
n, err = io.Copy(ioutil.Discard, rr)
if err != nil {
t.Fatalf("read #1: %v", err)
}
if want := int64(blockSize-headerSize) + 1; n != want {
t.Fatalf("read #1: got %d bytes want %d", n, want)
}
if _, err := r.Next(); err != io.EOF {
t.Fatalf("last next: unexpected error: %v", err)
}
}
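
The byte counts asserted in TestFlush above follow mechanically from the journal's chunk framing. A minimal, hedged sketch of that arithmetic in plain Go, assuming 32 KiB blocks and a 7-byte chunk header (4-byte checksum, 2-byte length, 1-byte type), as the blockSize and headerSize constants used by these tests suggest:

package main

import "fmt"

const (
	blockSize  = 32768 // one journal block
	headerSize = 7     // 4-byte checksum + 2-byte length + 1-byte chunk type
)

// flushedLen returns the total bytes on the wire after appending one record
// of length n at offset written: each chunk costs a 7-byte header, records
// split at block boundaries, and a block tail shorter than a header is
// zero-padded.
func flushedLen(written, n int) int {
	for {
		free := blockSize - written%blockSize
		if free < headerSize {
			written += free // tail too small for a header: padded out
			continue
		}
		payload := free - headerSize
		if payload > n {
			payload = n
		}
		written += headerSize + payload
		n -= payload
		if n == 0 {
			return written
		}
	}
}

func main() {
	w := 0
	for _, n := range []int{1, 2, 10000, 40000} {
		w = flushedLen(w, n)
		fmt.Println(w) // prints 8, 17, 10024, 50038, matching TestFlush
	}
}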


@@ -1,133 +0,0 @@
// Copyright (c) 2012, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package leveldb
import (
"bytes"
"testing"
"github.com/syndtr/goleveldb/leveldb/comparer"
)
var defaultIComparer = &iComparer{comparer.DefaultComparer}
func ikey(key string, seq uint64, kt kType) iKey {
return newIkey([]byte(key), uint64(seq), kt)
}
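// shortSep and shortSuccessor exercise the internal comparer: Separator
// yields a short key k with a <= k < b, Successor a short key k >= b; both
// fall back to the input when nothing shorter exists.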
func shortSep(a, b []byte) []byte {
dst := make([]byte, len(a))
dst = defaultIComparer.Separator(dst[:0], a, b)
if dst == nil {
return a
}
return dst
}
func shortSuccessor(b []byte) []byte {
dst := make([]byte, len(b))
dst = defaultIComparer.Successor(dst[:0], b)
if dst == nil {
return b
}
return dst
}
func testSingleKey(t *testing.T, key string, seq uint64, kt kType) {
ik := ikey(key, seq, kt)
if !bytes.Equal(ik.ukey(), []byte(key)) {
t.Errorf("user key does not equal, got %v, want %v", string(ik.ukey()), key)
}
rseq, rt := ik.parseNum()
if rseq != seq {
t.Errorf("seq number does not equal, got %v, want %v", rseq, seq)
}
if rt != kt {
t.Errorf("type does not equal, got %v, want %v", rt, kt)
}
if rukey, rseq, rt, kerr := parseIkey(ik); kerr == nil {
if !bytes.Equal(rukey, []byte(key)) {
t.Errorf("user key does not equal, got %v, want %v", string(ik.ukey()), key)
}
if rseq != seq {
t.Errorf("seq number does not equal, got %v, want %v", rseq, seq)
}
if rt != kt {
t.Errorf("type does not equal, got %v, want %v", rt, kt)
}
} else {
t.Errorf("key error: %v", kerr)
}
}
func TestIkey_EncodeDecode(t *testing.T) {
keys := []string{"", "k", "hello", "longggggggggggggggggggggg"}
seqs := []uint64{
1, 2, 3,
(1 << 8) - 1, 1 << 8, (1 << 8) + 1,
(1 << 16) - 1, 1 << 16, (1 << 16) + 1,
(1 << 32) - 1, 1 << 32, (1 << 32) + 1,
}
for _, key := range keys {
for _, seq := range seqs {
testSingleKey(t, key, seq, ktVal)
testSingleKey(t, "hello", 1, ktDel)
}
}
}
func assertBytes(t *testing.T, want, got []byte) {
if !bytes.Equal(got, want) {
t.Errorf("assert failed, got %v, want %v", got, want)
}
}
func TestIkeyShortSeparator(t *testing.T) {
// When user keys are the same
assertBytes(t, ikey("foo", 100, ktVal),
shortSep(ikey("foo", 100, ktVal),
ikey("foo", 99, ktVal)))
assertBytes(t, ikey("foo", 100, ktVal),
shortSep(ikey("foo", 100, ktVal),
ikey("foo", 101, ktVal)))
assertBytes(t, ikey("foo", 100, ktVal),
shortSep(ikey("foo", 100, ktVal),
ikey("foo", 100, ktVal)))
assertBytes(t, ikey("foo", 100, ktVal),
shortSep(ikey("foo", 100, ktVal),
ikey("foo", 100, ktDel)))
// When user keys are misordered
assertBytes(t, ikey("foo", 100, ktVal),
shortSep(ikey("foo", 100, ktVal),
ikey("bar", 99, ktVal)))
// When user keys are different, but correctly ordered
assertBytes(t, ikey("g", uint64(kMaxSeq), ktSeek),
shortSep(ikey("foo", 100, ktVal),
ikey("hello", 200, ktVal)))
// When start user key is a prefix of limit user key
assertBytes(t, ikey("foo", 100, ktVal),
shortSep(ikey("foo", 100, ktVal),
ikey("foobar", 200, ktVal)))
// When limit user key is a prefix of start user key
assertBytes(t, ikey("foobar", 100, ktVal),
shortSep(ikey("foobar", 100, ktVal),
ikey("foo", 200, ktVal)))
}
func TestIkeyShortestSuccessor(t *testing.T) {
assertBytes(t, ikey("g", uint64(kMaxSeq), ktSeek),
shortSuccessor(ikey("foo", 100, ktVal)))
assertBytes(t, ikey("\xff\xff", 100, ktVal),
shortSuccessor(ikey("\xff\xff", 100, ktVal)))
}
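
For orientation, a minimal standalone sketch of the internal-key round trip these tests exercise, assuming the conventional LevelDB layout of user key plus 8 little-endian bytes packing (seq << 8 | keyType); the package's own newIkey/parseIkey remain the authority:

package main

import (
	"encoding/binary"
	"fmt"
)

// makeIkey encodes an internal key under the assumed layout:
// ukey ++ little-endian uint64(seq<<8 | kt).
func makeIkey(ukey []byte, seq, kt uint64) []byte {
	ik := make([]byte, len(ukey)+8)
	copy(ik, ukey)
	binary.LittleEndian.PutUint64(ik[len(ukey):], seq<<8|kt)
	return ik
}

// parseIkey reverses makeIkey, splitting off the trailing 8-byte number.
func parseIkey(ik []byte) (ukey []byte, seq, kt uint64) {
	num := binary.LittleEndian.Uint64(ik[len(ik)-8:])
	return ik[:len(ik)-8], num >> 8, num & 0xff
}

func main() {
	ik := makeIkey([]byte("foo"), 100, 1) // 1 standing in for ktVal here
	ukey, seq, kt := parseIkey(ik)
	fmt.Printf("%s %d %d\n", ukey, seq, kt) // foo 100 1
}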


@@ -1,11 +0,0 @@
package leveldb
import (
"testing"
"github.com/syndtr/goleveldb/leveldb/testutil"
)
func TestLevelDB(t *testing.T) {
testutil.RunSuite(t, "LevelDB Suite")
}


@@ -1,75 +0,0 @@
// Copyright (c) 2012, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package memdb
import (
"encoding/binary"
"math/rand"
"testing"
"github.com/syndtr/goleveldb/leveldb/comparer"
)
func BenchmarkPut(b *testing.B) {
buf := make([][4]byte, b.N)
for i := range buf {
binary.LittleEndian.PutUint32(buf[i][:], uint32(i))
}
b.ResetTimer()
p := New(comparer.DefaultComparer, 0)
for i := range buf {
p.Put(buf[i][:], nil)
}
}
func BenchmarkPutRandom(b *testing.B) {
buf := make([][4]byte, b.N)
for i := range buf {
binary.LittleEndian.PutUint32(buf[i][:], uint32(rand.Int()))
}
b.ResetTimer()
p := New(comparer.DefaultComparer, 0)
for i := range buf {
p.Put(buf[i][:], nil)
}
}
func BenchmarkGet(b *testing.B) {
buf := make([][4]byte, b.N)
for i := range buf {
binary.LittleEndian.PutUint32(buf[i][:], uint32(i))
}
p := New(comparer.DefaultComparer, 0)
for i := range buf {
p.Put(buf[i][:], nil)
}
b.ResetTimer()
for i := range buf {
p.Get(buf[i][:])
}
}
func BenchmarkGetRandom(b *testing.B) {
buf := make([][4]byte, b.N)
for i := range buf {
binary.LittleEndian.PutUint32(buf[i][:], uint32(i))
}
p := New(comparer.DefaultComparer, 0)
for i := range buf {
p.Put(buf[i][:], nil)
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
p.Get(buf[rand.Int()%b.N][:])
}
}


@@ -1,11 +0,0 @@
package memdb
import (
"testing"
"github.com/syndtr/goleveldb/leveldb/testutil"
)
func TestMemDB(t *testing.T) {
testutil.RunSuite(t, "MemDB Suite")
}


@@ -1,135 +0,0 @@
// Copyright (c) 2014, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package memdb
import (
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
"github.com/syndtr/goleveldb/leveldb/comparer"
"github.com/syndtr/goleveldb/leveldb/iterator"
"github.com/syndtr/goleveldb/leveldb/testutil"
"github.com/syndtr/goleveldb/leveldb/util"
)
func (p *DB) TestFindLT(key []byte) (rkey, value []byte, err error) {
p.mu.RLock()
if node := p.findLT(key); node != 0 {
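// Skiplist node layout: nodeData[node] is the key's offset into kvData,
// and the nKey/nVal slots hold the key and value lengths.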
n := p.nodeData[node]
m := n + p.nodeData[node+nKey]
rkey = p.kvData[n:m]
value = p.kvData[m : m+p.nodeData[node+nVal]]
} else {
err = ErrNotFound
}
p.mu.RUnlock()
return
}
func (p *DB) TestFindLast() (rkey, value []byte, err error) {
p.mu.RLock()
if node := p.findLast(); node != 0 {
n := p.nodeData[node]
m := n + p.nodeData[node+nKey]
rkey = p.kvData[n:m]
value = p.kvData[m : m+p.nodeData[node+nVal]]
} else {
err = ErrNotFound
}
p.mu.RUnlock()
return
}
func (p *DB) TestPut(key []byte, value []byte) error {
p.Put(key, value)
return nil
}
func (p *DB) TestDelete(key []byte) error {
p.Delete(key)
return nil
}
func (p *DB) TestFind(key []byte) (rkey, rvalue []byte, err error) {
return p.Find(key)
}
func (p *DB) TestGet(key []byte) (value []byte, err error) {
return p.Get(key)
}
func (p *DB) TestNewIterator(slice *util.Range) iterator.Iterator {
return p.NewIterator(slice)
}
var _ = testutil.Defer(func() {
Describe("Memdb", func() {
Describe("write test", func() {
It("should do write correctly", func() {
db := New(comparer.DefaultComparer, 0)
t := testutil.DBTesting{
DB: db,
Deleted: testutil.KeyValue_Generate(nil, 1000, 1, 30, 5, 5).Clone(),
PostFn: func(t *testutil.DBTesting) {
Expect(db.Len()).Should(Equal(t.Present.Len()))
Expect(db.Size()).Should(Equal(t.Present.Size()))
switch t.Act {
case testutil.DBPut, testutil.DBOverwrite:
Expect(db.Contains(t.ActKey)).Should(BeTrue())
default:
Expect(db.Contains(t.ActKey)).Should(BeFalse())
}
},
}
testutil.DoDBTesting(&t)
})
})
Describe("read test", func() {
testutil.AllKeyValueTesting(nil, func(kv testutil.KeyValue) testutil.DB {
// Building the DB.
db := New(comparer.DefaultComparer, 0)
kv.IterateShuffled(nil, func(i int, key, value []byte) {
db.Put(key, value)
})
if kv.Len() > 1 {
It("Should find correct keys with findLT", func() {
testutil.ShuffledIndex(nil, kv.Len()-1, 1, func(i int) {
key_, key, _ := kv.IndexInexact(i + 1)
expectedKey, expectedValue := kv.Index(i)
// Using a key that exists.
rkey, rvalue, err := db.TestFindLT(key)
Expect(err).ShouldNot(HaveOccurred(), "Error for key %q -> %q", key, expectedKey)
Expect(rkey).Should(Equal(expectedKey), "Key")
Expect(rvalue).Should(Equal(expectedValue), "Value for key %q -> %q", key, expectedKey)
// Using a key that doesn't exist.
rkey, rvalue, err = db.TestFindLT(key_)
Expect(err).ShouldNot(HaveOccurred(), "Error for key %q (%q) -> %q", key_, key, expectedKey)
Expect(rkey).Should(Equal(expectedKey))
Expect(rvalue).Should(Equal(expectedValue), "Value for key %q (%q) -> %q", key_, key, expectedKey)
})
})
}
if kv.Len() > 0 {
It("Should find last key with findLast", func() {
key, value := kv.Index(kv.Len() - 1)
rkey, rvalue, err := db.TestFindLast()
Expect(err).ShouldNot(HaveOccurred())
Expect(rkey).Should(Equal(key))
Expect(rvalue).Should(Equal(value))
})
}
return db
}, nil, nil)
})
})
})


@@ -1,64 +0,0 @@
// Copyright (c) 2012, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package leveldb
import (
"bytes"
"testing"
"github.com/syndtr/goleveldb/leveldb/opt"
)
func decodeEncode(v *sessionRecord) (res bool, err error) {
b := new(bytes.Buffer)
err = v.encode(b)
if err != nil {
return
}
// Snapshot the encoding: decode below consumes the buffer's contents.
encoded := append([]byte(nil), b.Bytes()...)
v2 := &sessionRecord{}
// Decode into v2 (not v) so the re-encode actually tests the round trip.
err = v2.decode(b, opt.DefaultNumLevel)
if err != nil {
return
}
b2 := new(bytes.Buffer)
err = v2.encode(b2)
if err != nil {
return
}
return bytes.Equal(encoded, b2.Bytes()), nil
}
func TestSessionRecord_EncodeDecode(t *testing.T) {
big := uint64(1) << 50
v := &sessionRecord{}
i := uint64(0)
test := func() {
res, err := decodeEncode(v)
if err != nil {
t.Fatalf("error when testing encode/decode sessionRecord: %v", err)
}
if !res {
t.Error("encode/decode test failed at iteration:", i)
}
}
for ; i < 4; i++ {
test()
v.addTable(3, big+300+i, big+400+i,
newIkey([]byte("foo"), big+500+1, ktVal),
newIkey([]byte("zoo"), big+600+1, ktDel))
v.delTable(4, big+700+i)
v.addCompPtr(int(i), newIkey([]byte("x"), big+900+1, ktVal))
}
v.setComparer("foo")
v.setJournalNum(big + 100)
v.setPrevJournalNum(big + 99)
v.setNextFileNum(big + 200)
v.setSeqNum(big + 1000)
test()
}


@@ -1,142 +0,0 @@
// Copyright (c) 2012, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package storage
import (
"fmt"
"os"
"path/filepath"
"testing"
)
var cases = []struct {
oldName []string
name string
ftype FileType
num uint64
}{
{nil, "000100.log", TypeJournal, 100},
{nil, "000000.log", TypeJournal, 0},
{[]string{"000000.sst"}, "000000.ldb", TypeTable, 0},
{nil, "MANIFEST-000002", TypeManifest, 2},
{nil, "MANIFEST-000007", TypeManifest, 7},
{nil, "18446744073709551615.log", TypeJournal, 18446744073709551615},
{nil, "000100.tmp", TypeTemp, 100},
}
var invalidCases = []string{
"",
"foo",
"foo-dx-100.log",
".log",
"",
"manifest",
"CURREN",
"CURRENTX",
"MANIFES",
"MANIFEST",
"MANIFEST-",
"XMANIFEST-3",
"MANIFEST-3x",
"LOC",
"LOCKx",
"LO",
"LOGx",
"18446744073709551616.log",
"184467440737095516150.log",
"100",
"100.",
"100.lop",
}
func TestFileStorage_CreateFileName(t *testing.T) {
for _, c := range cases {
f := &file{num: c.num, t: c.ftype}
if f.name() != c.name {
t.Errorf("invalid filename got '%s', want '%s'", f.name(), c.name)
}
}
}
func TestFileStorage_ParseFileName(t *testing.T) {
for _, c := range cases {
for _, name := range append([]string{c.name}, c.oldName...) {
f := new(file)
if !f.parse(name) {
t.Errorf("cannot parse filename '%s'", name)
continue
}
if f.Type() != c.ftype {
t.Errorf("filename '%s' invalid type got '%d', want '%d'", name, f.Type(), c.ftype)
}
if f.Num() != c.num {
t.Errorf("filename '%s' invalid number got '%d', want '%d'", name, f.Num(), c.num)
}
}
}
}
func TestFileStorage_InvalidFileName(t *testing.T) {
for _, name := range invalidCases {
f := new(file)
if f.parse(name) {
t.Errorf("filename '%s' should be invalid", name)
}
}
}
func TestFileStorage_Locking(t *testing.T) {
path := filepath.Join(os.TempDir(), fmt.Sprintf("goleveldbtestfd-%d", os.Getuid()))
_, err := os.Stat(path)
if err == nil {
err = os.RemoveAll(path)
if err != nil {
t.Fatal("RemoveAll: got error: ", err)
}
}
p1, err := OpenFile(path)
if err != nil {
t.Fatal("OpenFile(1): got error: ", err)
}
defer os.RemoveAll(path)
p2, err := OpenFile(path)
if err != nil {
t.Logf("OpenFile(2): got error: %s (expected)", err)
} else {
p2.Close()
p1.Close()
t.Fatal("OpenFile(2): expect error")
}
p1.Close()
p3, err := OpenFile(path)
if err != nil {
t.Fatal("OpenFile(3): got error: ", err)
}
defer p3.Close()
l, err := p3.Lock()
if err != nil {
t.Fatal("storage lock failed(1): ", err)
}
_, err = p3.Lock()
if err == nil {
t.Fatal("expect error for second storage lock attempt")
} else {
t.Logf("storage lock got error: %s (expected)", err)
}
l.Release()
_, err = p3.Lock()
if err != nil {
t.Fatal("storage lock failed(2): ", err)
}
}


@@ -1,66 +0,0 @@
// Copyright (c) 2013, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package storage
import (
"bytes"
"testing"
)
func TestMemStorage(t *testing.T) {
m := NewMemStorage()
l, err := m.Lock()
if err != nil {
t.Fatal("storage lock failed(1): ", err)
}
_, err = m.Lock()
if err == nil {
t.Fatal("expect error for second storage lock attempt")
} else {
t.Logf("storage lock got error: %s (expected)", err)
}
l.Release()
_, err = m.Lock()
if err != nil {
t.Fatal("storage lock failed(2): ", err)
}
f := m.GetFile(1, TypeTable)
if f.Num() != 1 || f.Type() != TypeTable {
t.Fatal("invalid file number and type")
}
w, _ := f.Create()
w.Write([]byte("abc"))
w.Close()
if ff, _ := m.GetFiles(TypeAll); len(ff) != 1 {
t.Fatal("invalid GetFiles len")
}
buf := new(bytes.Buffer)
r, err := f.Open()
if err != nil {
t.Fatal("Open: got error: ", err)
}
buf.ReadFrom(r)
r.Close()
if got := buf.String(); got != "abc" {
t.Fatalf("Read: invalid value, want=abc got=%s", got)
}
if _, err := f.Open(); err != nil {
t.Fatal("Open: got error: ", err)
}
if _, err := m.GetFile(1, TypeTable).Open(); err == nil {
t.Fatal("expecting error")
}
f.Remove()
if ff, _ := m.GetFiles(TypeAll); len(ff) != 0 {
t.Fatal("invalid GetFiles len", len(ff))
}
if _, err := f.Open(); err == nil {
t.Fatal("expecting error")
}
}


@@ -1,549 +0,0 @@
// Copyright (c) 2012, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package leveldb
import (
"errors"
"fmt"
"io"
"io/ioutil"
"math/rand"
"os"
"path/filepath"
"sync"
"testing"
"github.com/syndtr/goleveldb/leveldb/storage"
"github.com/syndtr/goleveldb/leveldb/util"
)
const typeShift = 4
var (
tsErrInvalidFile = errors.New("leveldb.testStorage: invalid file for argument")
tsErrFileOpen = errors.New("leveldb.testStorage: file still open")
)
var (
tsFSEnv = os.Getenv("GOLEVELDB_USEFS")
tsTempdir = os.Getenv("GOLEVELDB_TEMPDIR")
tsKeepFS = tsFSEnv == "2"
tsFS = tsKeepFS || tsFSEnv == "" || tsFSEnv == "1"
tsMU = &sync.Mutex{}
tsNum = 0
)
type tsOp uint
const (
tsOpOpen tsOp = iota
tsOpCreate
tsOpReplace
tsOpRemove
tsOpRead
tsOpReadAt
tsOpWrite
tsOpSync
tsOpNum
)
type tsLock struct {
ts *testStorage
r util.Releaser
}
func (l tsLock) Release() {
l.r.Release()
l.ts.t.Log("I: storage lock released")
}
type tsReader struct {
tf tsFile
storage.Reader
}
func (tr tsReader) Read(b []byte) (n int, err error) {
ts := tr.tf.ts
ts.countRead(tr.tf.Type())
if tr.tf.shouldErrLocked(tsOpRead) {
return 0, errors.New("leveldb.testStorage: emulated read error")
}
n, err = tr.Reader.Read(b)
if err != nil && err != io.EOF {
ts.t.Errorf("E: read error, num=%d type=%v n=%d: %v", tr.tf.Num(), tr.tf.Type(), n, err)
}
return
}
func (tr tsReader) ReadAt(b []byte, off int64) (n int, err error) {
ts := tr.tf.ts
ts.countRead(tr.tf.Type())
if tr.tf.shouldErrLocked(tsOpReadAt) {
return 0, errors.New("leveldb.testStorage: emulated readAt error")
}
n, err = tr.Reader.ReadAt(b, off)
if err != nil && err != io.EOF {
ts.t.Errorf("E: readAt error, num=%d type=%v off=%d n=%d: %v", tr.tf.Num(), tr.tf.Type(), off, n, err)
}
return
}
func (tr tsReader) Close() (err error) {
err = tr.Reader.Close()
tr.tf.close("reader", err)
return
}
type tsWriter struct {
tf tsFile
storage.Writer
}
func (tw tsWriter) Write(b []byte) (n int, err error) {
if tw.tf.shouldErrLocked(tsOpWrite) {
return 0, errors.New("leveldb.testStorage: emulated write error")
}
n, err = tw.Writer.Write(b)
if err != nil {
tw.tf.ts.t.Errorf("E: write error, num=%d type=%v n=%d: %v", tw.tf.Num(), tw.tf.Type(), n, err)
}
return
}
func (tw tsWriter) Sync() (err error) {
ts := tw.tf.ts
ts.mu.Lock()
for ts.emuDelaySync&tw.tf.Type() != 0 {
ts.cond.Wait()
}
ts.mu.Unlock()
if tw.tf.shouldErrLocked(tsOpSync) {
return errors.New("leveldb.testStorage: emulated sync error")
}
err = tw.Writer.Sync()
if err != nil {
tw.tf.ts.t.Errorf("E: sync error, num=%d type=%v: %v", tw.tf.Num(), tw.tf.Type(), err)
}
return
}
func (tw tsWriter) Close() (err error) {
err = tw.Writer.Close()
tw.tf.close("writer", err)
return
}
type tsFile struct {
ts *testStorage
storage.File
}
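// x packs the file number and type into one uint64 (the type occupies the
// low typeShift bits), giving each file a unique key in the opens map.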
func (tf tsFile) x() uint64 {
return tf.Num()<<typeShift | uint64(tf.Type())
}
func (tf tsFile) shouldErr(op tsOp) bool {
return tf.ts.shouldErr(tf, op)
}
func (tf tsFile) shouldErrLocked(op tsOp) bool {
tf.ts.mu.Lock()
defer tf.ts.mu.Unlock()
return tf.shouldErr(op)
}
func (tf tsFile) checkOpen(m string) error {
ts := tf.ts
if writer, ok := ts.opens[tf.x()]; ok {
if writer {
ts.t.Errorf("E: cannot %s file, num=%d type=%v: a writer still open", m, tf.Num(), tf.Type())
} else {
ts.t.Errorf("E: cannot %s file, num=%d type=%v: a reader still open", m, tf.Num(), tf.Type())
}
return tsErrFileOpen
}
return nil
}
func (tf tsFile) close(m string, err error) {
ts := tf.ts
ts.mu.Lock()
defer ts.mu.Unlock()
if _, ok := ts.opens[tf.x()]; !ok {
ts.t.Errorf("E: %s: redudant file closing, num=%d type=%v", m, tf.Num(), tf.Type())
} else if err == nil {
ts.t.Logf("I: %s: file closed, num=%d type=%v", m, tf.Num(), tf.Type())
}
delete(ts.opens, tf.x())
if err != nil {
ts.t.Errorf("E: %s: cannot close file, num=%d type=%v: %v", m, tf.Num(), tf.Type(), err)
}
}
func (tf tsFile) Open() (r storage.Reader, err error) {
ts := tf.ts
ts.mu.Lock()
defer ts.mu.Unlock()
err = tf.checkOpen("open")
if err != nil {
return
}
if tf.shouldErr(tsOpOpen) {
err = errors.New("leveldb.testStorage: emulated open error")
return
}
r, err = tf.File.Open()
if err != nil {
if ts.ignoreOpenErr&tf.Type() != 0 {
ts.t.Logf("I: cannot open file, num=%d type=%v: %v (ignored)", tf.Num(), tf.Type(), err)
} else {
ts.t.Errorf("E: cannot open file, num=%d type=%v: %v", tf.Num(), tf.Type(), err)
}
} else {
ts.t.Logf("I: file opened, num=%d type=%v", tf.Num(), tf.Type())
ts.opens[tf.x()] = false
r = tsReader{tf, r}
}
return
}
func (tf tsFile) Create() (w storage.Writer, err error) {
ts := tf.ts
ts.mu.Lock()
defer ts.mu.Unlock()
err = tf.checkOpen("create")
if err != nil {
return
}
if tf.shouldErr(tsOpCreate) {
err = errors.New("leveldb.testStorage: emulated create error")
return
}
w, err = tf.File.Create()
if err != nil {
ts.t.Errorf("E: cannot create file, num=%d type=%v: %v", tf.Num(), tf.Type(), err)
} else {
ts.t.Logf("I: file created, num=%d type=%v", tf.Num(), tf.Type())
ts.opens[tf.x()] = true
w = tsWriter{tf, w}
}
return
}
func (tf tsFile) Replace(newfile storage.File) (err error) {
ts := tf.ts
ts.mu.Lock()
defer ts.mu.Unlock()
err = tf.checkOpen("replace")
if err != nil {
return
}
if tf.shouldErr(tsOpReplace) {
err = errors.New("leveldb.testStorage: emulated replace error")
return
}
err = tf.File.Replace(newfile.(tsFile).File)
if err != nil {
ts.t.Errorf("E: cannot replace file, num=%d type=%v: %v", tf.Num(), tf.Type(), err)
} else {
ts.t.Logf("I: file replace, num=%d type=%v", tf.Num(), tf.Type())
}
return
}
func (tf tsFile) Remove() (err error) {
ts := tf.ts
ts.mu.Lock()
defer ts.mu.Unlock()
err = tf.checkOpen("remove")
if err != nil {
return
}
if tf.shouldErr(tsOpRemove) {
err = errors.New("leveldb.testStorage: emulated remove error")
return
}
err = tf.File.Remove()
if err != nil {
ts.t.Errorf("E: cannot remove file, num=%d type=%v: %v", tf.Num(), tf.Type(), err)
} else {
ts.t.Logf("I: file removed, num=%d type=%v", tf.Num(), tf.Type())
}
return
}
type testStorage struct {
t *testing.T
storage.Storage
closeFn func() error
mu sync.Mutex
cond sync.Cond
// Open files, true=writer, false=reader
opens map[uint64]bool
emuDelaySync storage.FileType
ignoreOpenErr storage.FileType
readCnt uint64
readCntEn storage.FileType
emuErr [tsOpNum]storage.FileType
emuErrOnce [tsOpNum]storage.FileType
emuRandErr [tsOpNum]storage.FileType
emuRandErrProb int
emuErrOnceMap map[uint64]uint
emuRandRand *rand.Rand
}
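// shouldErr decides whether to emulate an error for a given file and op:
// emuErr always fires; emuErrOnce fires on the first attempt only; and
// emuRandErr fires at most once per (file, op), with probability
// 1/emuRandErrProb per attempt.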
func (ts *testStorage) shouldErr(tf tsFile, op tsOp) bool {
if ts.emuErr[op]&tf.Type() != 0 {
return true
} else if ts.emuRandErr[op]&tf.Type() != 0 || ts.emuErrOnce[op]&tf.Type() != 0 {
sop := uint(1) << op
eop := ts.emuErrOnceMap[tf.x()]
if eop&sop == 0 && (ts.emuRandRand.Int()%ts.emuRandErrProb == 0 || ts.emuErrOnce[op]&tf.Type() != 0) {
ts.emuErrOnceMap[tf.x()] = eop | sop
ts.t.Logf("I: emulated error: file=%d type=%v op=%v", tf.Num(), tf.Type(), op)
return true
}
}
return false
}
func (ts *testStorage) SetEmuErr(t storage.FileType, ops ...tsOp) {
ts.mu.Lock()
for _, op := range ops {
ts.emuErr[op] = t
}
ts.mu.Unlock()
}
func (ts *testStorage) SetEmuErrOnce(t storage.FileType, ops ...tsOp) {
ts.mu.Lock()
for _, op := range ops {
ts.emuErrOnce[op] = t
}
ts.mu.Unlock()
}
func (ts *testStorage) SetEmuRandErr(t storage.FileType, ops ...tsOp) {
ts.mu.Lock()
for _, op := range ops {
ts.emuRandErr[op] = t
}
ts.mu.Unlock()
}
func (ts *testStorage) SetEmuRandErrProb(prob int) {
ts.mu.Lock()
ts.emuRandErrProb = prob
ts.mu.Unlock()
}
func (ts *testStorage) DelaySync(t storage.FileType) {
ts.mu.Lock()
ts.emuDelaySync |= t
ts.cond.Broadcast()
ts.mu.Unlock()
}
func (ts *testStorage) ReleaseSync(t storage.FileType) {
ts.mu.Lock()
ts.emuDelaySync &= ^t
ts.cond.Broadcast()
ts.mu.Unlock()
}
func (ts *testStorage) ReadCounter() uint64 {
ts.mu.Lock()
defer ts.mu.Unlock()
return ts.readCnt
}
func (ts *testStorage) ResetReadCounter() {
ts.mu.Lock()
ts.readCnt = 0
ts.mu.Unlock()
}
func (ts *testStorage) SetReadCounter(t storage.FileType) {
ts.mu.Lock()
ts.readCntEn = t
ts.mu.Unlock()
}
func (ts *testStorage) countRead(t storage.FileType) {
ts.mu.Lock()
if ts.readCntEn&t != 0 {
ts.readCnt++
}
ts.mu.Unlock()
}
func (ts *testStorage) SetIgnoreOpenErr(t storage.FileType) {
ts.ignoreOpenErr = t
}
func (ts *testStorage) Lock() (r util.Releaser, err error) {
r, err = ts.Storage.Lock()
if err != nil {
ts.t.Logf("W: storage locking failed: %v", err)
} else {
ts.t.Log("I: storage locked")
r = tsLock{ts, r}
}
return
}
func (ts *testStorage) Log(str string) {
ts.t.Log("L: " + str)
ts.Storage.Log(str)
}
func (ts *testStorage) GetFile(num uint64, t storage.FileType) storage.File {
return tsFile{ts, ts.Storage.GetFile(num, t)}
}
func (ts *testStorage) GetFiles(t storage.FileType) (ff []storage.File, err error) {
ff0, err := ts.Storage.GetFiles(t)
if err != nil {
ts.t.Errorf("E: get files failed: %v", err)
return
}
ff = make([]storage.File, len(ff0))
for i, f := range ff0 {
ff[i] = tsFile{ts, f}
}
ts.t.Logf("I: get files, type=0x%x count=%d", int(t), len(ff))
return
}
func (ts *testStorage) GetManifest() (f storage.File, err error) {
f0, err := ts.Storage.GetManifest()
if err != nil {
if !os.IsNotExist(err) {
ts.t.Errorf("E: get manifest failed: %v", err)
}
return
}
f = tsFile{ts, f0}
ts.t.Logf("I: get manifest, num=%d", f.Num())
return
}
func (ts *testStorage) SetManifest(f storage.File) error {
tf, ok := f.(tsFile)
if !ok {
ts.t.Error("E: set manifest failed: type assertion failed")
return tsErrInvalidFile
} else if tf.Type() != storage.TypeManifest {
ts.t.Errorf("E: set manifest failed: invalid file type: %s", tf.Type())
return tsErrInvalidFile
}
err := ts.Storage.SetManifest(tf.File)
if err != nil {
ts.t.Errorf("E: set manifest failed: %v", err)
} else {
ts.t.Logf("I: set manifest, num=%d", tf.Num())
}
return err
}
func (ts *testStorage) Close() error {
ts.CloseCheck()
err := ts.Storage.Close()
if err != nil {
ts.t.Errorf("E: closing storage failed: %v", err)
} else {
ts.t.Log("I: storage closed")
}
if ts.closeFn != nil {
if err := ts.closeFn(); err != nil {
ts.t.Errorf("E: close function: %v", err)
}
}
return err
}
func (ts *testStorage) CloseCheck() {
ts.mu.Lock()
if len(ts.opens) == 0 {
ts.t.Log("I: all files are closed")
} else {
ts.t.Errorf("E: %d files still open", len(ts.opens))
for x, writer := range ts.opens {
num, tt := x>>typeShift, storage.FileType(x)&storage.TypeAll
ts.t.Errorf("E: * num=%d type=%v writer=%v", num, tt, writer)
}
}
ts.mu.Unlock()
}
func newTestStorage(t *testing.T) *testStorage {
var stor storage.Storage
var closeFn func() error
if tsFS {
for {
tsMU.Lock()
num := tsNum
tsNum++
tsMU.Unlock()
tempdir := tsTempdir
if tempdir == "" {
tempdir = os.TempDir()
}
path := filepath.Join(tempdir, fmt.Sprintf("goleveldb-test%d0%d0%d", os.Getuid(), os.Getpid(), num))
if _, err := os.Stat(path); err != nil {
stor, err = storage.OpenFile(path)
if err != nil {
t.Fatalf("F: cannot create storage: %v", err)
}
t.Logf("I: storage created: %s", path)
closeFn = func() error {
for _, name := range []string{"LOG.old", "LOG"} {
f, err := os.Open(filepath.Join(path, name))
if err != nil {
continue
}
if log, err := ioutil.ReadAll(f); err != nil {
t.Logf("---------------------- %s ----------------------", name)
t.Logf("cannot read log: %v", err)
t.Logf("---------------------- %s ----------------------", name)
} else if len(log) > 0 {
t.Logf("---------------------- %s ----------------------\n%s", name, string(log))
t.Logf("---------------------- %s ----------------------", name)
}
f.Close()
}
if t.Failed() {
t.Logf("testing failed, test DB preserved at %s", path)
return nil
}
if tsKeepFS {
return nil
}
return os.RemoveAll(path)
}
break
}
}
} else {
stor = storage.NewMemStorage()
}
ts := &testStorage{
t: t,
Storage: stor,
closeFn: closeFn,
opens: make(map[uint64]bool),
emuErrOnceMap: make(map[uint64]uint),
emuRandErrProb: 0x999,
emuRandRand: rand.New(rand.NewSource(0xfacedead)),
}
ts.cond.L = &ts.mu
return ts
}


@@ -1,139 +0,0 @@
// Copyright (c) 2014, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package table
import (
"encoding/binary"
"fmt"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
"github.com/syndtr/goleveldb/leveldb/comparer"
"github.com/syndtr/goleveldb/leveldb/iterator"
"github.com/syndtr/goleveldb/leveldb/testutil"
"github.com/syndtr/goleveldb/leveldb/util"
)
type blockTesting struct {
tr *Reader
b *block
}
func (t *blockTesting) TestNewIterator(slice *util.Range) iterator.Iterator {
return t.tr.newBlockIter(t.b, nil, slice, false)
}
var _ = testutil.Defer(func() {
Describe("Block", func() {
Build := func(kv *testutil.KeyValue, restartInterval int) *blockTesting {
// Building the block.
bw := &blockWriter{
restartInterval: restartInterval,
scratch: make([]byte, 30),
}
kv.Iterate(func(i int, key, value []byte) {
bw.append(key, value)
})
bw.finish()
// Opening the block.
data := bw.buf.Bytes()
restartsLen := int(binary.LittleEndian.Uint32(data[len(data)-4:]))
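// The block trailer stores restartsLen 4-byte restart offsets followed by
// a 4-byte restart count, hence the (restartsLen+1)*4 below.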
return &blockTesting{
tr: &Reader{cmp: comparer.DefaultComparer},
b: &block{
data: data,
restartsLen: restartsLen,
restartsOffset: len(data) - (restartsLen+1)*4,
},
}
}
Describe("read test", func() {
for restartInterval := 1; restartInterval <= 5; restartInterval++ {
Describe(fmt.Sprintf("with restart interval of %d", restartInterval), func() {
kv := &testutil.KeyValue{}
Text := func() string {
return fmt.Sprintf("and %d keys", kv.Len())
}
Test := func() {
// Make block.
br := Build(kv, restartInterval)
// Do testing.
testutil.KeyValueTesting(nil, kv.Clone(), br, nil, nil)
}
Describe(Text(), Test)
kv.PutString("", "empty")
Describe(Text(), Test)
kv.PutString("a1", "foo")
Describe(Text(), Test)
kv.PutString("a2", "v")
Describe(Text(), Test)
kv.PutString("a3qqwrkks", "hello")
Describe(Text(), Test)
kv.PutString("a4", "bar")
Describe(Text(), Test)
kv.PutString("a5111111", "v5")
kv.PutString("a6", "")
kv.PutString("a7", "v7")
kv.PutString("a8", "vvvvvvvvvvvvvvvvvvvvvv8")
kv.PutString("b", "v9")
kv.PutString("c9", "v9")
kv.PutString("c91", "v9")
kv.PutString("d0", "v9")
Describe(Text(), Test)
})
}
})
Describe("out-of-bound slice test", func() {
kv := &testutil.KeyValue{}
kv.PutString("k1", "v1")
kv.PutString("k2", "v2")
kv.PutString("k3abcdefgg", "v3")
kv.PutString("k4", "v4")
kv.PutString("k5", "v5")
for restartInterval := 1; restartInterval <= 5; restartInterval++ {
Describe(fmt.Sprintf("with restart interval of %d", restartInterval), func() {
// Make block.
bt := Build(kv, restartInterval)
Test := func(r *util.Range) func(done Done) {
return func(done Done) {
iter := bt.TestNewIterator(r)
Expect(iter.Error()).ShouldNot(HaveOccurred())
t := testutil.IteratorTesting{
KeyValue: kv.Clone(),
Iter: iter,
}
testutil.DoIteratorTesting(&t)
iter.Release()
done <- true
}
}
It("Should do iterations and seeks correctly #0",
Test(&util.Range{Start: []byte("k0"), Limit: []byte("k6")}), 2.0)
It("Should do iterations and seeks correctly #1",
Test(&util.Range{Start: []byte(""), Limit: []byte("zzzzzzz")}), 2.0)
})
}
})
})
})


@@ -1,11 +0,0 @@
package table
import (
"testing"
"github.com/syndtr/goleveldb/leveldb/testutil"
)
func TestTable(t *testing.T) {
testutil.RunSuite(t, "Table Suite")
}


@@ -1,122 +0,0 @@
// Copyright (c) 2014, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package table
import (
"bytes"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
"github.com/syndtr/goleveldb/leveldb/iterator"
"github.com/syndtr/goleveldb/leveldb/opt"
"github.com/syndtr/goleveldb/leveldb/testutil"
"github.com/syndtr/goleveldb/leveldb/util"
)
type tableWrapper struct {
*Reader
}
func (t tableWrapper) TestFind(key []byte) (rkey, rvalue []byte, err error) {
return t.Reader.Find(key, false, nil)
}
func (t tableWrapper) TestGet(key []byte) (value []byte, err error) {
return t.Reader.Get(key, nil)
}
func (t tableWrapper) TestNewIterator(slice *util.Range) iterator.Iterator {
return t.Reader.NewIterator(slice, nil)
}
var _ = testutil.Defer(func() {
Describe("Table", func() {
Describe("approximate offset test", func() {
var (
buf = &bytes.Buffer{}
o = &opt.Options{
BlockSize: 1024,
Compression: opt.NoCompression,
}
)
// Building the table.
tw := NewWriter(buf, o)
tw.Append([]byte("k01"), []byte("hello"))
tw.Append([]byte("k02"), []byte("hello2"))
tw.Append([]byte("k03"), bytes.Repeat([]byte{'x'}, 10000))
tw.Append([]byte("k04"), bytes.Repeat([]byte{'x'}, 200000))
tw.Append([]byte("k05"), bytes.Repeat([]byte{'x'}, 300000))
tw.Append([]byte("k06"), []byte("hello3"))
tw.Append([]byte("k07"), bytes.Repeat([]byte{'x'}, 100000))
err := tw.Close()
It("Should be able to approximate offset of a key correctly", func() {
Expect(err).ShouldNot(HaveOccurred())
tr, err := NewReader(bytes.NewReader(buf.Bytes()), int64(buf.Len()), nil, nil, nil, o)
Expect(err).ShouldNot(HaveOccurred())
CheckOffset := func(key string, expect, threshold int) {
offset, err := tr.OffsetOf([]byte(key))
Expect(err).ShouldNot(HaveOccurred())
Expect(offset).Should(BeNumerically("~", expect, threshold), "Offset of key %q", key)
}
CheckOffset("k0", 0, 0)
CheckOffset("k01a", 0, 0)
CheckOffset("k02", 0, 0)
CheckOffset("k03", 0, 0)
CheckOffset("k04", 10000, 1000)
CheckOffset("k04a", 210000, 1000)
CheckOffset("k05", 210000, 1000)
CheckOffset("k06", 510000, 1000)
CheckOffset("k07", 510000, 1000)
CheckOffset("xyz", 610000, 2000)
})
})
Describe("read test", func() {
Build := func(kv testutil.KeyValue) testutil.DB {
o := &opt.Options{
BlockSize: 512,
BlockRestartInterval: 3,
}
buf := &bytes.Buffer{}
// Building the table.
tw := NewWriter(buf, o)
kv.Iterate(func(i int, key, value []byte) {
tw.Append(key, value)
})
tw.Close()
// Opening the table.
tr, _ := NewReader(bytes.NewReader(buf.Bytes()), int64(buf.Len()), nil, nil, nil, o)
return tableWrapper{tr}
}
Test := func(kv *testutil.KeyValue, body func(r *Reader)) func() {
return func() {
db := Build(*kv)
if body != nil {
body(db.(tableWrapper).Reader)
}
testutil.KeyValueTesting(nil, *kv, db, nil, nil)
}
}
testutil.AllKeyValueTesting(nil, Build, nil, nil)
Describe("with one key per block", Test(testutil.KeyValue_Generate(nil, 9, 1, 10, 512, 512), func(r *Reader) {
It("should have correct blocks number", func() {
indexBlock, err := r.readBlock(r.indexBH, true)
Expect(err).To(BeNil())
Expect(indexBlock.restartsLen).Should(Equal(9))
})
}))
})
})
})


@@ -1,63 +0,0 @@
// Copyright (c) 2014, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package leveldb
import (
. "github.com/onsi/gomega"
"github.com/syndtr/goleveldb/leveldb/iterator"
"github.com/syndtr/goleveldb/leveldb/opt"
"github.com/syndtr/goleveldb/leveldb/testutil"
"github.com/syndtr/goleveldb/leveldb/util"
)
type testingDB struct {
*DB
ro *opt.ReadOptions
wo *opt.WriteOptions
stor *testutil.Storage
}
func (t *testingDB) TestPut(key []byte, value []byte) error {
return t.Put(key, value, t.wo)
}
func (t *testingDB) TestDelete(key []byte) error {
return t.Delete(key, t.wo)
}
func (t *testingDB) TestGet(key []byte) (value []byte, err error) {
return t.Get(key, t.ro)
}
func (t *testingDB) TestHas(key []byte) (ret bool, err error) {
return t.Has(key, t.ro)
}
func (t *testingDB) TestNewIterator(slice *util.Range) iterator.Iterator {
return t.NewIterator(slice, t.ro)
}
func (t *testingDB) TestClose() {
err := t.Close()
ExpectWithOffset(1, err).NotTo(HaveOccurred())
err = t.stor.Close()
ExpectWithOffset(1, err).NotTo(HaveOccurred())
}
func newTestingDB(o *opt.Options, ro *opt.ReadOptions, wo *opt.WriteOptions) *testingDB {
stor := testutil.NewStorage()
db, err := Open(stor, o)
// FIXME: This may be called from outside It, which may cause panic.
Expect(err).NotTo(HaveOccurred())
return &testingDB{
DB: db,
ro: ro,
wo: wo,
stor: stor,
}
}


@@ -1,369 +0,0 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package util
import (
"bytes"
"io"
"math/rand"
"runtime"
"testing"
)
const N = 10000 // make this bigger for a larger (and slower) test
var data string // test data for write tests
var testBytes []byte // test data; same as data but as a slice.
func init() {
testBytes = make([]byte, N)
for i := 0; i < N; i++ {
testBytes[i] = 'a' + byte(i%26)
}
data = string(testBytes)
}
// Verify that contents of buf match the string s.
func check(t *testing.T, testname string, buf *Buffer, s string) {
bytes := buf.Bytes()
str := buf.String()
if buf.Len() != len(bytes) {
t.Errorf("%s: buf.Len() == %d, len(buf.Bytes()) == %d", testname, buf.Len(), len(bytes))
}
if buf.Len() != len(str) {
t.Errorf("%s: buf.Len() == %d, len(buf.String()) == %d", testname, buf.Len(), len(str))
}
if buf.Len() != len(s) {
t.Errorf("%s: buf.Len() == %d, len(s) == %d", testname, buf.Len(), len(s))
}
if string(bytes) != s {
t.Errorf("%s: string(buf.Bytes()) == %q, s == %q", testname, string(bytes), s)
}
}
// Fill buf through n writes of byte slice fub.
// The initial contents of buf corresponds to the string s;
// the result is the final contents of buf returned as a string.
func fillBytes(t *testing.T, testname string, buf *Buffer, s string, n int, fub []byte) string {
check(t, testname+" (fill 1)", buf, s)
for ; n > 0; n-- {
m, err := buf.Write(fub)
if m != len(fub) {
t.Errorf(testname+" (fill 2): m == %d, expected %d", m, len(fub))
}
if err != nil {
t.Errorf(testname+" (fill 3): err should always be nil, found err == %s", err)
}
s += string(fub)
check(t, testname+" (fill 4)", buf, s)
}
return s
}
func TestNewBuffer(t *testing.T) {
buf := NewBuffer(testBytes)
check(t, "NewBuffer", buf, data)
}
// Empty buf through repeated reads into fub.
// The initial contents of buf corresponds to the string s.
func empty(t *testing.T, testname string, buf *Buffer, s string, fub []byte) {
check(t, testname+" (empty 1)", buf, s)
for {
n, err := buf.Read(fub)
if n == 0 {
break
}
if err != nil {
t.Errorf(testname+" (empty 2): err should always be nil, found err == %s", err)
}
s = s[n:]
check(t, testname+" (empty 3)", buf, s)
}
check(t, testname+" (empty 4)", buf, "")
}
func TestBasicOperations(t *testing.T) {
var buf Buffer
for i := 0; i < 5; i++ {
check(t, "TestBasicOperations (1)", &buf, "")
buf.Reset()
check(t, "TestBasicOperations (2)", &buf, "")
buf.Truncate(0)
check(t, "TestBasicOperations (3)", &buf, "")
n, err := buf.Write([]byte(data[0:1]))
if n != 1 {
t.Errorf("wrote 1 byte, but n == %d", n)
}
if err != nil {
t.Errorf("err should always be nil, but err == %s", err)
}
check(t, "TestBasicOperations (4)", &buf, "a")
buf.WriteByte(data[1])
check(t, "TestBasicOperations (5)", &buf, "ab")
n, err = buf.Write([]byte(data[2:26]))
if n != 24 {
t.Errorf("wrote 25 bytes, but n == %d", n)
}
check(t, "TestBasicOperations (6)", &buf, string(data[0:26]))
buf.Truncate(26)
check(t, "TestBasicOperations (7)", &buf, string(data[0:26]))
buf.Truncate(20)
check(t, "TestBasicOperations (8)", &buf, string(data[0:20]))
empty(t, "TestBasicOperations (9)", &buf, string(data[0:20]), make([]byte, 5))
empty(t, "TestBasicOperations (10)", &buf, "", make([]byte, 100))
buf.WriteByte(data[1])
c, err := buf.ReadByte()
if err != nil {
t.Error("ReadByte unexpected eof")
}
if c != data[1] {
t.Errorf("ReadByte wrong value c=%v", c)
}
c, err = buf.ReadByte()
if err == nil {
t.Error("ReadByte unexpected not eof")
}
}
}
func TestLargeByteWrites(t *testing.T) {
var buf Buffer
limit := 30
if testing.Short() {
limit = 9
}
for i := 3; i < limit; i += 3 {
s := fillBytes(t, "TestLargeWrites (1)", &buf, "", 5, testBytes)
empty(t, "TestLargeByteWrites (2)", &buf, s, make([]byte, len(data)/i))
}
check(t, "TestLargeByteWrites (3)", &buf, "")
}
func TestLargeByteReads(t *testing.T) {
var buf Buffer
for i := 3; i < 30; i += 3 {
s := fillBytes(t, "TestLargeReads (1)", &buf, "", 5, testBytes[0:len(testBytes)/i])
empty(t, "TestLargeReads (2)", &buf, s, make([]byte, len(data)))
}
check(t, "TestLargeByteReads (3)", &buf, "")
}
func TestMixedReadsAndWrites(t *testing.T) {
var buf Buffer
s := ""
for i := 0; i < 50; i++ {
wlen := rand.Intn(len(data))
s = fillBytes(t, "TestMixedReadsAndWrites (1)", &buf, s, 1, testBytes[0:wlen])
rlen := rand.Intn(len(data))
fub := make([]byte, rlen)
n, _ := buf.Read(fub)
s = s[n:]
}
empty(t, "TestMixedReadsAndWrites (2)", &buf, s, make([]byte, buf.Len()))
}
func TestNil(t *testing.T) {
var b *Buffer
if b.String() != "<nil>" {
t.Errorf("expected <nil>; got %q", b.String())
}
}
func TestReadFrom(t *testing.T) {
var buf Buffer
for i := 3; i < 30; i += 3 {
s := fillBytes(t, "TestReadFrom (1)", &buf, "", 5, testBytes[0:len(testBytes)/i])
var b Buffer
b.ReadFrom(&buf)
empty(t, "TestReadFrom (2)", &b, s, make([]byte, len(data)))
}
}
func TestWriteTo(t *testing.T) {
var buf Buffer
for i := 3; i < 30; i += 3 {
s := fillBytes(t, "TestWriteTo (1)", &buf, "", 5, testBytes[0:len(testBytes)/i])
var b Buffer
buf.WriteTo(&b)
empty(t, "TestWriteTo (2)", &b, s, make([]byte, len(data)))
}
}
func TestNext(t *testing.T) {
b := []byte{0, 1, 2, 3, 4}
tmp := make([]byte, 5)
for i := 0; i <= 5; i++ {
for j := i; j <= 5; j++ {
for k := 0; k <= 6; k++ {
// 0 <= i <= j <= 5; 0 <= k <= 6
// Check that if we start with a buffer
// of length j at offset i and ask for
// Next(k), we get the right bytes.
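// For example, with i=1, j=4, k=5: bytes 1..3 remain after the
// read, so Next(5) must return exactly those three bytes.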
buf := NewBuffer(b[0:j])
n, _ := buf.Read(tmp[0:i])
if n != i {
t.Fatalf("Read %d returned %d", i, n)
}
bb := buf.Next(k)
want := k
if want > j-i {
want = j - i
}
if len(bb) != want {
t.Fatalf("in %d,%d: len(Next(%d)) == %d", i, j, k, len(bb))
}
for l, v := range bb {
if v != byte(l+i) {
t.Fatalf("in %d,%d: Next(%d)[%d] = %d, want %d", i, j, k, l, v, l+i)
}
}
}
}
}
}
var readBytesTests = []struct {
buffer string
delim byte
expected []string
err error
}{
{"", 0, []string{""}, io.EOF},
{"a\x00", 0, []string{"a\x00"}, nil},
{"abbbaaaba", 'b', []string{"ab", "b", "b", "aaab"}, nil},
{"hello\x01world", 1, []string{"hello\x01"}, nil},
{"foo\nbar", 0, []string{"foo\nbar"}, io.EOF},
{"alpha\nbeta\ngamma\n", '\n', []string{"alpha\n", "beta\n", "gamma\n"}, nil},
{"alpha\nbeta\ngamma", '\n', []string{"alpha\n", "beta\n", "gamma"}, io.EOF},
}
func TestReadBytes(t *testing.T) {
for _, test := range readBytesTests {
buf := NewBuffer([]byte(test.buffer))
var err error
for _, expected := range test.expected {
var bytes []byte
bytes, err = buf.ReadBytes(test.delim)
if string(bytes) != expected {
t.Errorf("expected %q, got %q", expected, bytes)
}
if err != nil {
break
}
}
if err != test.err {
t.Errorf("expected error %v, got %v", test.err, err)
}
}
}
func TestGrow(t *testing.T) {
x := []byte{'x'}
y := []byte{'y'}
tmp := make([]byte, 72)
for _, startLen := range []int{0, 100, 1000, 10000, 100000} {
xBytes := bytes.Repeat(x, startLen)
for _, growLen := range []int{0, 100, 1000, 10000, 100000} {
buf := NewBuffer(xBytes)
// If we read, this affects buf.off, which is good to test.
readBytes, _ := buf.Read(tmp)
buf.Grow(growLen)
yBytes := bytes.Repeat(y, growLen)
// Check no allocation occurs in write, as long as we're single-threaded.
var m1, m2 runtime.MemStats
runtime.ReadMemStats(&m1)
buf.Write(yBytes)
runtime.ReadMemStats(&m2)
if runtime.GOMAXPROCS(-1) == 1 && m1.Mallocs != m2.Mallocs {
t.Errorf("allocation occurred during write")
}
// Check that buffer has correct data.
if !bytes.Equal(buf.Bytes()[0:startLen-readBytes], xBytes[readBytes:]) {
t.Errorf("bad initial data at %d %d", startLen, growLen)
}
if !bytes.Equal(buf.Bytes()[startLen-readBytes:startLen-readBytes+growLen], yBytes) {
t.Errorf("bad written data at %d %d", startLen, growLen)
}
}
}
}
// Was a bug: used to give EOF reading empty slice at EOF.
func TestReadEmptyAtEOF(t *testing.T) {
b := new(Buffer)
slice := make([]byte, 0)
n, err := b.Read(slice)
if err != nil {
t.Errorf("read error: %v", err)
}
if n != 0 {
t.Errorf("wrong count; got %d want 0", n)
}
}
// Tests that we occasionally compact. Issue 5154.
func TestBufferGrowth(t *testing.T) {
var b Buffer
buf := make([]byte, 1024)
b.Write(buf[0:1])
var cap0 int
for i := 0; i < 5<<10; i++ {
b.Write(buf)
b.Read(buf)
if i == 0 {
cap0 = cap(b.buf)
}
}
cap1 := cap(b.buf)
// (*Buffer).grow allows for 2x capacity slop before sliding,
// so set our error threshold at 3x.
if cap1 > cap0*3 {
t.Errorf("buffer cap = %d; too big (grew from %d)", cap1, cap0)
}
}
// From Issue 5154.
func BenchmarkBufferNotEmptyWriteRead(b *testing.B) {
buf := make([]byte, 1024)
for i := 0; i < b.N; i++ {
var b Buffer
b.Write(buf[0:1])
for i := 0; i < 5<<10; i++ {
b.Write(buf)
b.Read(buf)
}
}
}
// Check that we don't compact too often. From Issue 5154.
func BenchmarkBufferFullSmallReads(b *testing.B) {
buf := make([]byte, 1024)
for i := 0; i < b.N; i++ {
var b Buffer
b.Write(buf)
for b.Len()+20 < cap(b.buf) {
b.Write(buf[:10])
}
for i := 0; i < 5<<10; i++ {
b.Read(buf[:1])
b.Write(buf[:1])
}
}
}


@@ -1,49 +0,0 @@
package suture
import "fmt"
type Incrementor struct {
current int
next chan int
stop chan bool
}
func (i *Incrementor) Stop() {
fmt.Println("Stopping the service")
i.stop <- true
}
func (i *Incrementor) Serve() {
for {
select {
case i.next <- i.current:
i.current++
case <-i.stop:
// We sync here just to guarantee the output of "Stopping the service",
// so this passes the test reliably.
// Most services would simply "return" here.
i.stop <- true
return
}
}
}
func ExampleNew_simple() {
supervisor := NewSimple("Supervisor")
service := &Incrementor{0, make(chan int), make(chan bool)}
supervisor.Add(service)
go supervisor.ServeBackground()
fmt.Println("Got:", <-service.next)
fmt.Println("Got:", <-service.next)
supervisor.Stop()
// We sync here just to guarantee the output of "Stopping the service"
<-service.stop
// Output:
// Got: 0
// Got: 1
// Stopping the service
}


@@ -1,616 +0,0 @@
package suture
import (
"errors"
"fmt"
"reflect"
"sync"
"testing"
"time"
)
const (
Happy = iota
Fail
Panic
Hang
UseStopChan
)
var everMultistarted = false
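// everMultistarted is a package-level tripwire: FailableService.Serve
// sets it if a service is ever started twice, and TestEverMultistarted
// later in this file asserts it stayed false.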
// Test that supervisors work perfectly when everything is hunky dory.
func TestTheHappyCase(t *testing.T) {
t.Parallel()
s := NewSimple("A")
if s.String() != "A" {
t.Fatal("Can't get name from a supervisor")
}
service := NewService("B")
s.Add(service)
go s.Serve()
<-service.started
// If we stop the service, it just gets restarted
service.Stop()
<-service.started
// And it is shut down when we stop the supervisor
service.take <- UseStopChan
s.Stop()
<-service.stop
}
// Test that adding to a running supervisor does indeed start the service.
func TestAddingToRunningSupervisor(t *testing.T) {
t.Parallel()
s := NewSimple("A1")
s.ServeBackground()
defer s.Stop()
service := NewService("B1")
s.Add(service)
<-service.started
services := s.Services()
if !reflect.DeepEqual([]Service{service}, services) {
t.Fatal("Can't get list of services as expected.")
}
}
// Test what happens when services fail.
func TestFailures(t *testing.T) {
t.Parallel()
s := NewSimple("A2")
s.failureThreshold = 3.5
go s.Serve()
defer func() {
// To avoid deadlocks during shutdown, we must not send on channels
// while we're shutting down (this undoes the logFailure override about
// 25 lines down).
s.logFailure = func(*Supervisor, Service, float64, float64, bool, interface{}, []byte) {}
s.Stop()
}()
s.sync()
service1 := NewService("B2")
service2 := NewService("C2")
s.Add(service1)
<-service1.started
s.Add(service2)
<-service2.started
nowFeeder := NewNowFeeder()
pastVal := time.Unix(1000000, 0)
nowFeeder.appendTimes(pastVal)
s.getNow = nowFeeder.getter
resumeChan := make(chan time.Time)
s.getResume = func(d time.Duration) <-chan time.Time {
return resumeChan
}
failNotify := make(chan bool)
// use this to synchronize on here
s.logFailure = func(supervisor *Supervisor, s Service, cf float64, ft float64, r bool, error interface{}, stacktrace []byte) {
failNotify <- r
}
// All that setup was for this: Service1, please return now.
service1.take <- Fail
restarted := <-failNotify
<-service1.started
if !restarted || s.failures != 1 || s.lastFail != pastVal {
t.Fatal("Did not fail in the expected manner")
}
// Getting past this means the service was restarted.
service1.take <- Happy
// Service2, your turn.
service2.take <- Fail
nowFeeder.appendTimes(pastVal)
restarted = <-failNotify
<-service2.started
if !restarted || s.failures != 2 || s.lastFail != pastVal {
t.Fatal("Did not fail in the expected manner")
}
// And you're back. (That is, the correct service was restarted.)
service2.take <- Happy
// Now, one failureDecay later, is everything working correctly?
oneDecayLater := time.Unix(1000030, 0)
nowFeeder.appendTimes(oneDecayLater)
service2.take <- Fail
restarted = <-failNotify
<-service2.started
// playing a bit fast and loose here with floating point, but...
// we get 2 by taking the current failure value of 2, decaying it
// by one interval, which cuts it in half to 1, then adding 1 again,
// all of which "should" be precise
if !restarted || s.failures != 2 || s.lastFail != oneDecayLater {
t.Fatal("Did not decay properly", s.lastFail, oneDecayLater)
}
// For a change of pace, service1 would you be so kind as to panic?
nowFeeder.appendTimes(oneDecayLater)
service1.take <- Panic
restarted = <-failNotify
<-service1.started
if !restarted || s.failures != 3 || s.lastFail != oneDecayLater {
t.Fatal("Did not correctly recover from a panic")
}
nowFeeder.appendTimes(oneDecayLater)
backingoff := make(chan bool)
s.logBackoff = func(s *Supervisor, backingOff bool) {
backingoff <- backingOff
}
// And with this failure, we trigger the backoff code.
service1.take <- Fail
backoff := <-backingoff
restarted = <-failNotify
if !backoff || restarted || s.failures != 4 {
t.Fatal("Broke past the threshold but did not log correctly", s.failures)
}
if service1.existing != 0 {
t.Fatal("service1 still exists according to itself?")
}
// service2 is still running, because we don't shut anything down in a
// backoff, we just stop restarting.
service2.take <- Happy
var correct bool
timer := time.NewTimer(time.Millisecond * 10)
// verify the service has not been restarted
// hard to get around race conditions here without simply using a timer...
select {
case service1.take <- Happy:
correct = false
case <-timer.C:
correct = true
}
if !correct {
t.Fatal("Restarted the service during the backoff interval")
}
// tell the supervisor the restart interval has passed
resumeChan <- time.Time{}
backoff = <-backingoff
<-service1.started
s.sync()
if s.failures != 0 {
t.Fatal("Did not reset failure count after coming back from timeout.")
}
nowFeeder.appendTimes(oneDecayLater)
service1.take <- Fail
restarted = <-failNotify
<-service1.started
if !restarted || backoff {
t.Fatal("For some reason, got that we were backing off again.", restarted, backoff)
}
}
func TestRunningAlreadyRunning(t *testing.T) {
t.Parallel()
s := NewSimple("A3")
go s.Serve()
defer s.Stop()
// ensure the supervisor has made it to its main loop
s.sync()
var errored bool
func() {
defer func() {
if r := recover(); r != nil {
errored = true
}
}()
s.Serve()
}()
if !errored {
t.Fatal("Supervisor failed to prevent itself from double-running.")
}
}
func TestFullConstruction(t *testing.T) {
t.Parallel()
s := New("Moo", Spec{
Log: func(string) {},
FailureDecay: 1,
FailureThreshold: 2,
FailureBackoff: 3,
Timeout: time.Second * 29,
})
if s.String() != "Moo" || s.failureDecay != 1 || s.failureThreshold != 2 || s.failureBackoff != 3 || s.timeout != time.Second*29 {
t.Fatal("Full construction failed somehow")
}
}
// This is mostly for coverage testing.
func TestDefaultLogging(t *testing.T) {
t.Parallel()
s := NewSimple("A4")
service := NewService("B4")
s.Add(service)
s.failureThreshold = .5
s.failureBackoff = time.Millisecond * 25
go s.Serve()
s.sync()
<-service.started
resumeChan := make(chan time.Time)
s.getResume = func(d time.Duration) <-chan time.Time {
return resumeChan
}
service.take <- UseStopChan
service.take <- Fail
<-service.stop
resumeChan <- time.Time{}
<-service.started
service.take <- Happy
serviceName(&BarelyService{})
s.logBadStop(s, service)
s.logFailure(s, service, 1, 1, true, errors.New("test error"), []byte{})
s.Stop()
}
func TestNestedSupervisors(t *testing.T) {
t.Parallel()
super1 := NewSimple("Top5")
super2 := NewSimple("Nested5")
service := NewService("Service5")
super2.logBadStop = func(*Supervisor, Service) {
panic("Failed to copy logBadStop")
}
super1.Add(super2)
super2.Add(service)
// test the functions got copied from super1; if this panics, it didn't
// get copied
super2.logBadStop(super2, service)
go super1.Serve()
super1.sync()
<-service.started
service.take <- Happy
super1.Stop()
}
func TestStoppingSupervisorStopsServices(t *testing.T) {
t.Parallel()
s := NewSimple("Top6")
service := NewService("Service 6")
s.Add(service)
go s.Serve()
s.sync()
<-service.started
service.take <- UseStopChan
s.Stop()
<-service.stop
}
func TestStoppingStillWorksWithHungServices(t *testing.T) {
t.Parallel()
s := NewSimple("Top7")
service := NewService("Service WillHang7")
s.Add(service)
go s.Serve()
<-service.started
service.take <- UseStopChan
service.take <- Hang
resumeChan := make(chan time.Time)
s.getResume = func(d time.Duration) <-chan time.Time {
return resumeChan
}
failNotify := make(chan struct{})
s.logBadStop = func(supervisor *Supervisor, s Service) {
failNotify <- struct{}{}
}
s.Stop()
resumeChan <- time.Time{}
<-failNotify
service.release <- true
<-service.stop
}
func TestRemoveService(t *testing.T) {
t.Parallel()
s := NewSimple("Top")
service := NewService("ServiceToRemove8")
id := s.Add(service)
go s.Serve()
<-service.started
service.take <- UseStopChan
err := s.Remove(id)
if err != nil {
t.Fatal("Removing service somehow failed")
}
<-service.stop
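// A ServiceToken encodes the issuing supervisor in its high bits, so
// the deliberately foreign token below must be rejected with
// ErrWrongSupervisor.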
err = s.Remove(ServiceToken{1<<36 + 1})
if err != ErrWrongSupervisor {
t.Fatal("Did not detect that the ServiceToken was wrong")
}
}
func TestFailureToConstruct(t *testing.T) {
t.Parallel()
var s *Supervisor
panics(func() {
s.Serve()
})
s = new(Supervisor)
panics(func() {
s.Serve()
})
}
func TestFailingSupervisors(t *testing.T) {
t.Parallel()
// This is a bit of a complicated test, so let me explain what
// all this is doing:
// 1. Set up a top-level supervisor with a hair-trigger backoff.
// 2. Add a supervisor to that.
// 3. To that supervisor, add a service.
// 4. Panic the supervisor in the middle, sending the top-level into
// backoff.
// 5. Kill the lower level service too.
// 6. Verify that when the top-level service comes out of backoff,
// the service ends up restarted as expected.
// Ultimately, we can't have more than a best-effort recovery here.
// A panic'ed supervisor can't really be trusted to have consistent state,
// and without *that*, we can't trust it to do anything sensible with
// the children it may have been running. So unlike Erlang, we can't
// really expect to be able to safely restart them or anything.
// Really, the "correct" answer is that the Supervisor must never panic,
// but in the event that it does, this verifies that it at least tries
// to get on with life.
// This also tests that if a Supervisor itself panics, and one of its
// monitored services goes down in the meantime, that the monitored
// service also gets correctly restarted when the supervisor does.
s1 := NewSimple("Top9")
s2 := NewSimple("Nested9")
service := NewService("Service9")
s1.Add(s2)
s2.Add(service)
go s1.Serve()
<-service.started
s1.failureThreshold = .5
// let us control precisely when s1 comes back
resumeChan := make(chan time.Time)
s1.getResume = func(d time.Duration) <-chan time.Time {
return resumeChan
}
failNotify := make(chan string)
// use this to synchronize on here
s1.logFailure = func(supervisor *Supervisor, s Service, cf float64, ft float64, r bool, error interface{}, stacktrace []byte) {
failNotify <- fmt.Sprintf("%s", s)
}
s2.panic()
failing := <-failNotify
// that's enough sync to guarantee this:
if failing != "Nested9" || s1.state != paused {
t.Fatal("Top-level supervisor did not go into backoff as expected")
}
service.take <- Fail
resumeChan <- time.Time{}
<-service.started
}
func TestNilSupervisorAdd(t *testing.T) {
t.Parallel()
var s *Supervisor
defer func() {
if r := recover(); r == nil {
t.Fatal("did not panic as expected on nil add")
}
}()
s.Add(s)
}
// https://github.com/thejerf/suture/issues/11
//
// The purpose of this test is to verify that it does not cause data races,
// so there are no obvious assertions.
func TestIssue11(t *testing.T) {
t.Parallel()
s := NewSimple("main")
s.ServeBackground()
subsuper := NewSimple("sub")
s.Add(subsuper)
subsuper.Add(NewService("may cause data race"))
}
// http://golangtutorials.blogspot.com/2011/10/gotest-unit-testing-and-benchmarking-go.html
// claims test functions are run in the same order as the source file...
// I'm not sure if this is part of the contract, though. Especially in the
// face of "t.Parallel()"...
func TestEverMultistarted(t *testing.T) {
if everMultistarted {
t.Fatal("Seem to have multistarted a service at some point, bummer.")
}
}
// A test service that can be induced to fail, panic, or hang on demand.
func NewService(name string) *FailableService {
return &FailableService{name, make(chan bool), make(chan int),
make(chan bool, 1), make(chan bool), make(chan bool), 0}
}
type FailableService struct {
name string
started chan bool
take chan int
shutdown chan bool
release chan bool
stop chan bool
existing int
}
func (s *FailableService) Serve() {
if s.existing != 0 {
everMultistarted = true
panic("Multi-started the same service! " + s.name)
}
s.existing++
s.started <- true
useStopChan := false
for {
select {
case val := <-s.take:
switch val {
case Happy:
// Do nothing on purpose. Life is good!
case Fail:
s.existing--
if useStopChan {
s.stop <- true
}
return
case Panic:
s.existing--
panic("Panic!")
case Hang:
// or more specifically, "hang until I release you"
<-s.release
case UseStopChan:
useStopChan = true
}
case <-s.shutdown:
s.existing--
if useStopChan {
s.stop <- true
}
return
}
}
}
func (s *FailableService) String() string {
return s.name
}
func (s *FailableService) Stop() {
s.shutdown <- true
}
type NowFeeder struct {
values []time.Time
getter func() time.Time
m sync.Mutex
}
// This is used to test serviceName; it's a service without a Stringer.
type BarelyService struct{}
func (bs *BarelyService) Serve() {}
func (bs *BarelyService) Stop() {}
func NewNowFeeder() (nf *NowFeeder) {
nf = new(NowFeeder)
nf.getter = func() time.Time {
nf.m.Lock()
defer nf.m.Unlock()
if len(nf.values) > 0 {
ret := nf.values[0]
nf.values = nf.values[1:]
return ret
}
panic("Ran out of values for NowFeeder")
}
return
}
func (nf *NowFeeder) appendTimes(t ...time.Time) {
nf.m.Lock()
defer nf.m.Unlock()
nf.values = append(nf.values, t...)
}
func panics(doesItPanic func()) (panics bool) {
defer func() {
if r := recover(); r != nil {
panics = true
}
}()
doesItPanic()
return
}


@@ -1,226 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package bcrypt
import (
"bytes"
"fmt"
"testing"
)
func TestBcryptingIsEasy(t *testing.T) {
pass := []byte("mypassword")
hp, err := GenerateFromPassword(pass, 0)
if err != nil {
t.Fatalf("GenerateFromPassword error: %s", err)
}
if CompareHashAndPassword(hp, pass) != nil {
t.Errorf("%v should hash %s correctly", hp, pass)
}
notPass := "notthepass"
err = CompareHashAndPassword(hp, []byte(notPass))
if err != ErrMismatchedHashAndPassword {
t.Errorf("%v and %s should be mismatched", hp, notPass)
}
}
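// For context, a minimal sketch of the caller-side flow these tests
// exercise, as seen from outside the package (assuming the usual
// golang.org/x/crypto/bcrypt import path):
//
//	hash, err := bcrypt.GenerateFromPassword([]byte("mypassword"), bcrypt.DefaultCost)
//	if err != nil {
//		// handle error
//	}
//	if err := bcrypt.CompareHashAndPassword(hash, []byte("mypassword")); err != nil {
//		// wrong password
//	}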
func TestBcryptingIsCorrect(t *testing.T) {
pass := []byte("allmine")
salt := []byte("XajjQvNhvvRt5GSeFk1xFe")
expectedHash := []byte("$2a$10$XajjQvNhvvRt5GSeFk1xFeyqRrsxkhBkUiQeg0dt.wU1qD4aFDcga")
hash, err := bcrypt(pass, 10, salt)
if err != nil {
t.Fatalf("bcrypt blew up: %v", err)
}
if !bytes.HasSuffix(expectedHash, hash) {
t.Errorf("%v should be the suffix of %v", hash, expectedHash)
}
h, err := newFromHash(expectedHash)
if err != nil {
t.Errorf("Unable to parse %s: %v", string(expectedHash), err)
}
// This is not the safe way to compare these hashes. We do this only for
// testing clarity. Use bcrypt.CompareHashAndPassword()
if err == nil && !bytes.Equal(expectedHash, h.Hash()) {
t.Errorf("Parsed hash %v should equal %v", h.Hash(), expectedHash)
}
}
func TestVeryShortPasswords(t *testing.T) {
key := []byte("k")
salt := []byte("XajjQvNhvvRt5GSeFk1xFe")
_, err := bcrypt(key, 10, salt)
if err != nil {
t.Errorf("One byte key resulted in error: %s", err)
}
}
func TestTooLongPasswordsWork(t *testing.T) {
salt := []byte("XajjQvNhvvRt5GSeFk1xFe")
// One byte over the usual 56 byte limit that blowfish has
tooLongPass := []byte("012345678901234567890123456789012345678901234567890123456")
tooLongExpected := []byte("$2a$10$XajjQvNhvvRt5GSeFk1xFe5l47dONXg781AmZtd869sO8zfsHuw7C")
hash, err := bcrypt(tooLongPass, 10, salt)
if err != nil {
t.Fatalf("bcrypt blew up on long password: %v", err)
}
if !bytes.HasSuffix(tooLongExpected, hash) {
t.Errorf("%v should be the suffix of %v", hash, tooLongExpected)
}
}
type InvalidHashTest struct {
err error
hash []byte
}
var invalidTests = []InvalidHashTest{
{ErrHashTooShort, []byte("$2a$10$fooo")},
{ErrHashTooShort, []byte("$2a")},
{HashVersionTooNewError('3'), []byte("$3a$10$sssssssssssssssssssssshhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh")},
{InvalidHashPrefixError('%'), []byte("%2a$10$sssssssssssssssssssssshhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh")},
{InvalidCostError(32), []byte("$2a$32$sssssssssssssssssssssshhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh")},
}
func TestInvalidHashErrors(t *testing.T) {
check := func(name string, expected, err error) {
if err == nil {
t.Errorf("%s: Should have returned an error", name)
}
if err != nil && err != expected {
t.Errorf("%s gave err %v but should have given %v", name, err, expected)
}
}
for _, iht := range invalidTests {
_, err := newFromHash(iht.hash)
check("newFromHash", iht.err, err)
err = CompareHashAndPassword(iht.hash, []byte("anything"))
check("CompareHashAndPassword", iht.err, err)
}
}
func TestUnpaddedBase64Encoding(t *testing.T) {
original := []byte{101, 201, 101, 75, 19, 227, 199, 20, 239, 236, 133, 32, 30, 109, 243, 30}
encodedOriginal := []byte("XajjQvNhvvRt5GSeFk1xFe")
encoded := base64Encode(original)
if !bytes.Equal(encodedOriginal, encoded) {
t.Errorf("Encoded %v should have equaled %v", encoded, encodedOriginal)
}
decoded, err := base64Decode(encodedOriginal)
if err != nil {
t.Fatalf("base64Decode blew up: %s", err)
}
if !bytes.Equal(decoded, original) {
t.Errorf("Decoded %v should have equaled %v", decoded, original)
}
}
func TestCost(t *testing.T) {
suffix := "XajjQvNhvvRt5GSeFk1xFe5l47dONXg781AmZtd869sO8zfsHuw7C"
for _, vers := range []string{"2a", "2"} {
for _, cost := range []int{4, 10} {
s := fmt.Sprintf("$%s$%02d$%s", vers, cost, suffix)
h := []byte(s)
actual, err := Cost(h)
if err != nil {
t.Errorf("Cost, error: %s", err)
continue
}
if actual != cost {
t.Errorf("Cost, expected: %d, actual: %d", cost, actual)
}
}
}
_, err := Cost([]byte("$a$a$" + suffix))
if err == nil {
t.Errorf("Cost, malformed but no error returned")
}
}
func TestCostValidationInHash(t *testing.T) {
if testing.Short() {
return
}
pass := []byte("mypassword")
for c := 0; c < MinCost; c++ {
p, _ := newFromPassword(pass, c)
if p.cost != DefaultCost {
t.Errorf("newFromPassword should default costs below %d to %d, but was %d", MinCost, DefaultCost, p.cost)
}
}
p, _ := newFromPassword(pass, 14)
if p.cost != 14 {
t.Errorf("newFromPassword should default cost to 14, but was %d", p.cost)
}
hp, _ := newFromHash(p.Hash())
if p.cost != hp.cost {
t.Errorf("newFromHash should maintain the cost at %d, but was %d", p.cost, hp.cost)
}
_, err := newFromPassword(pass, 32)
if err == nil {
t.Fatalf("newFromPassword: should return a cost error")
}
if err != InvalidCostError(32) {
t.Errorf("newFromPassword: should return cost error, got %#v", err)
}
}
func TestCostReturnsWithLeadingZeroes(t *testing.T) {
hp, _ := newFromPassword([]byte("abcdefgh"), 7)
cost := hp.Hash()[4:7]
expected := []byte("07$")
if !bytes.Equal(expected, cost) {
t.Errorf("single digit costs in hash should have leading zeros: was %v instead of %v", cost, expected)
}
}
func TestMinorNotRequired(t *testing.T) {
noMinorHash := []byte("$2$10$XajjQvNhvvRt5GSeFk1xFeyqRrsxkhBkUiQeg0dt.wU1qD4aFDcga")
h, err := newFromHash(noMinorHash)
if err != nil {
t.Fatalf("No minor hash blew up: %s", err)
}
if h.minor != 0 {
t.Errorf("Should leave minor version at 0, but was %d", h.minor)
}
if !bytes.Equal(noMinorHash, h.Hash()) {
t.Errorf("Should generate hash %v, but created %v", noMinorHash, h.Hash())
}
}
func BenchmarkEqual(b *testing.B) {
b.StopTimer()
passwd := []byte("somepasswordyoulike")
hash, _ := GenerateFromPassword(passwd, 10)
b.StartTimer()
for i := 0; i < b.N; i++ {
CompareHashAndPassword(hash, passwd)
}
}
func BenchmarkGeneration(b *testing.B) {
b.StopTimer()
passwd := []byte("mylongpassword1234")
b.StartTimer()
for i := 0; i < b.N; i++ {
GenerateFromPassword(passwd, 10)
}
}


@@ -1,274 +0,0 @@
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package blowfish
import "testing"
type CryptTest struct {
key []byte
in []byte
out []byte
}
// Test vector values are from http://www.schneier.com/code/vectors.txt.
var encryptTests = []CryptTest{
{
[]byte{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
[]byte{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
[]byte{0x4E, 0xF9, 0x97, 0x45, 0x61, 0x98, 0xDD, 0x78}},
{
[]byte{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
[]byte{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
[]byte{0x51, 0x86, 0x6F, 0xD5, 0xB8, 0x5E, 0xCB, 0x8A}},
{
[]byte{0x30, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
[]byte{0x10, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
[]byte{0x7D, 0x85, 0x6F, 0x9A, 0x61, 0x30, 0x63, 0xF2}},
{
[]byte{0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11},
[]byte{0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11},
[]byte{0x24, 0x66, 0xDD, 0x87, 0x8B, 0x96, 0x3C, 0x9D}},
{
[]byte{0x01, 0x23, 0x45, 0x67, 0x89, 0xAB, 0xCD, 0xEF},
[]byte{0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11},
[]byte{0x61, 0xF9, 0xC3, 0x80, 0x22, 0x81, 0xB0, 0x96}},
{
[]byte{0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11},
[]byte{0x01, 0x23, 0x45, 0x67, 0x89, 0xAB, 0xCD, 0xEF},
[]byte{0x7D, 0x0C, 0xC6, 0x30, 0xAF, 0xDA, 0x1E, 0xC7}},
{
[]byte{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
[]byte{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
[]byte{0x4E, 0xF9, 0x97, 0x45, 0x61, 0x98, 0xDD, 0x78}},
{
[]byte{0xFE, 0xDC, 0xBA, 0x98, 0x76, 0x54, 0x32, 0x10},
[]byte{0x01, 0x23, 0x45, 0x67, 0x89, 0xAB, 0xCD, 0xEF},
[]byte{0x0A, 0xCE, 0xAB, 0x0F, 0xC6, 0xA0, 0xA2, 0x8D}},
{
[]byte{0x7C, 0xA1, 0x10, 0x45, 0x4A, 0x1A, 0x6E, 0x57},
[]byte{0x01, 0xA1, 0xD6, 0xD0, 0x39, 0x77, 0x67, 0x42},
[]byte{0x59, 0xC6, 0x82, 0x45, 0xEB, 0x05, 0x28, 0x2B}},
{
[]byte{0x01, 0x31, 0xD9, 0x61, 0x9D, 0xC1, 0x37, 0x6E},
[]byte{0x5C, 0xD5, 0x4C, 0xA8, 0x3D, 0xEF, 0x57, 0xDA},
[]byte{0xB1, 0xB8, 0xCC, 0x0B, 0x25, 0x0F, 0x09, 0xA0}},
{
[]byte{0x07, 0xA1, 0x13, 0x3E, 0x4A, 0x0B, 0x26, 0x86},
[]byte{0x02, 0x48, 0xD4, 0x38, 0x06, 0xF6, 0x71, 0x72},
[]byte{0x17, 0x30, 0xE5, 0x77, 0x8B, 0xEA, 0x1D, 0xA4}},
{
[]byte{0x38, 0x49, 0x67, 0x4C, 0x26, 0x02, 0x31, 0x9E},
[]byte{0x51, 0x45, 0x4B, 0x58, 0x2D, 0xDF, 0x44, 0x0A},
[]byte{0xA2, 0x5E, 0x78, 0x56, 0xCF, 0x26, 0x51, 0xEB}},
{
[]byte{0x04, 0xB9, 0x15, 0xBA, 0x43, 0xFE, 0xB5, 0xB6},
[]byte{0x42, 0xFD, 0x44, 0x30, 0x59, 0x57, 0x7F, 0xA2},
[]byte{0x35, 0x38, 0x82, 0xB1, 0x09, 0xCE, 0x8F, 0x1A}},
{
[]byte{0x01, 0x13, 0xB9, 0x70, 0xFD, 0x34, 0xF2, 0xCE},
[]byte{0x05, 0x9B, 0x5E, 0x08, 0x51, 0xCF, 0x14, 0x3A},
[]byte{0x48, 0xF4, 0xD0, 0x88, 0x4C, 0x37, 0x99, 0x18}},
{
[]byte{0x01, 0x70, 0xF1, 0x75, 0x46, 0x8F, 0xB5, 0xE6},
[]byte{0x07, 0x56, 0xD8, 0xE0, 0x77, 0x47, 0x61, 0xD2},
[]byte{0x43, 0x21, 0x93, 0xB7, 0x89, 0x51, 0xFC, 0x98}},
{
[]byte{0x43, 0x29, 0x7F, 0xAD, 0x38, 0xE3, 0x73, 0xFE},
[]byte{0x76, 0x25, 0x14, 0xB8, 0x29, 0xBF, 0x48, 0x6A},
[]byte{0x13, 0xF0, 0x41, 0x54, 0xD6, 0x9D, 0x1A, 0xE5}},
{
[]byte{0x07, 0xA7, 0x13, 0x70, 0x45, 0xDA, 0x2A, 0x16},
[]byte{0x3B, 0xDD, 0x11, 0x90, 0x49, 0x37, 0x28, 0x02},
[]byte{0x2E, 0xED, 0xDA, 0x93, 0xFF, 0xD3, 0x9C, 0x79}},
{
[]byte{0x04, 0x68, 0x91, 0x04, 0xC2, 0xFD, 0x3B, 0x2F},
[]byte{0x26, 0x95, 0x5F, 0x68, 0x35, 0xAF, 0x60, 0x9A},
[]byte{0xD8, 0x87, 0xE0, 0x39, 0x3C, 0x2D, 0xA6, 0xE3}},
{
[]byte{0x37, 0xD0, 0x6B, 0xB5, 0x16, 0xCB, 0x75, 0x46},
[]byte{0x16, 0x4D, 0x5E, 0x40, 0x4F, 0x27, 0x52, 0x32},
[]byte{0x5F, 0x99, 0xD0, 0x4F, 0x5B, 0x16, 0x39, 0x69}},
{
[]byte{0x1F, 0x08, 0x26, 0x0D, 0x1A, 0xC2, 0x46, 0x5E},
[]byte{0x6B, 0x05, 0x6E, 0x18, 0x75, 0x9F, 0x5C, 0xCA},
[]byte{0x4A, 0x05, 0x7A, 0x3B, 0x24, 0xD3, 0x97, 0x7B}},
{
[]byte{0x58, 0x40, 0x23, 0x64, 0x1A, 0xBA, 0x61, 0x76},
[]byte{0x00, 0x4B, 0xD6, 0xEF, 0x09, 0x17, 0x60, 0x62},
[]byte{0x45, 0x20, 0x31, 0xC1, 0xE4, 0xFA, 0xDA, 0x8E}},
{
[]byte{0x02, 0x58, 0x16, 0x16, 0x46, 0x29, 0xB0, 0x07},
[]byte{0x48, 0x0D, 0x39, 0x00, 0x6E, 0xE7, 0x62, 0xF2},
[]byte{0x75, 0x55, 0xAE, 0x39, 0xF5, 0x9B, 0x87, 0xBD}},
{
[]byte{0x49, 0x79, 0x3E, 0xBC, 0x79, 0xB3, 0x25, 0x8F},
[]byte{0x43, 0x75, 0x40, 0xC8, 0x69, 0x8F, 0x3C, 0xFA},
[]byte{0x53, 0xC5, 0x5F, 0x9C, 0xB4, 0x9F, 0xC0, 0x19}},
{
[]byte{0x4F, 0xB0, 0x5E, 0x15, 0x15, 0xAB, 0x73, 0xA7},
[]byte{0x07, 0x2D, 0x43, 0xA0, 0x77, 0x07, 0x52, 0x92},
[]byte{0x7A, 0x8E, 0x7B, 0xFA, 0x93, 0x7E, 0x89, 0xA3}},
{
[]byte{0x49, 0xE9, 0x5D, 0x6D, 0x4C, 0xA2, 0x29, 0xBF},
[]byte{0x02, 0xFE, 0x55, 0x77, 0x81, 0x17, 0xF1, 0x2A},
[]byte{0xCF, 0x9C, 0x5D, 0x7A, 0x49, 0x86, 0xAD, 0xB5}},
{
[]byte{0x01, 0x83, 0x10, 0xDC, 0x40, 0x9B, 0x26, 0xD6},
[]byte{0x1D, 0x9D, 0x5C, 0x50, 0x18, 0xF7, 0x28, 0xC2},
[]byte{0xD1, 0xAB, 0xB2, 0x90, 0x65, 0x8B, 0xC7, 0x78}},
{
[]byte{0x1C, 0x58, 0x7F, 0x1C, 0x13, 0x92, 0x4F, 0xEF},
[]byte{0x30, 0x55, 0x32, 0x28, 0x6D, 0x6F, 0x29, 0x5A},
[]byte{0x55, 0xCB, 0x37, 0x74, 0xD1, 0x3E, 0xF2, 0x01}},
{
[]byte{0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01},
[]byte{0x01, 0x23, 0x45, 0x67, 0x89, 0xAB, 0xCD, 0xEF},
[]byte{0xFA, 0x34, 0xEC, 0x48, 0x47, 0xB2, 0x68, 0xB2}},
{
[]byte{0x1F, 0x1F, 0x1F, 0x1F, 0x0E, 0x0E, 0x0E, 0x0E},
[]byte{0x01, 0x23, 0x45, 0x67, 0x89, 0xAB, 0xCD, 0xEF},
[]byte{0xA7, 0x90, 0x79, 0x51, 0x08, 0xEA, 0x3C, 0xAE}},
{
[]byte{0xE0, 0xFE, 0xE0, 0xFE, 0xF1, 0xFE, 0xF1, 0xFE},
[]byte{0x01, 0x23, 0x45, 0x67, 0x89, 0xAB, 0xCD, 0xEF},
[]byte{0xC3, 0x9E, 0x07, 0x2D, 0x9F, 0xAC, 0x63, 0x1D}},
{
[]byte{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
[]byte{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
[]byte{0x01, 0x49, 0x33, 0xE0, 0xCD, 0xAF, 0xF6, 0xE4}},
{
[]byte{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
[]byte{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
[]byte{0xF2, 0x1E, 0x9A, 0x77, 0xB7, 0x1C, 0x49, 0xBC}},
{
[]byte{0x01, 0x23, 0x45, 0x67, 0x89, 0xAB, 0xCD, 0xEF},
[]byte{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
[]byte{0x24, 0x59, 0x46, 0x88, 0x57, 0x54, 0x36, 0x9A}},
{
[]byte{0xFE, 0xDC, 0xBA, 0x98, 0x76, 0x54, 0x32, 0x10},
[]byte{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
[]byte{0x6B, 0x5C, 0x5A, 0x9C, 0x5D, 0x9E, 0x0A, 0x5A}},
}
func TestCipherEncrypt(t *testing.T) {
for i, tt := range encryptTests {
c, err := NewCipher(tt.key)
if err != nil {
t.Errorf("NewCipher(%d bytes) = %s", len(tt.key), err)
continue
}
ct := make([]byte, len(tt.out))
c.Encrypt(ct, tt.in)
for j, v := range ct {
if v != tt.out[j] {
t.Errorf("Cipher.Encrypt, test vector #%d: cipher-text[%d] = %#x, expected %#x", i, j, v, tt.out[j])
break
}
}
}
}
func TestCipherDecrypt(t *testing.T) {
for i, tt := range encryptTests {
c, err := NewCipher(tt.key)
if err != nil {
t.Errorf("NewCipher(%d bytes) = %s", len(tt.key), err)
continue
}
pt := make([]byte, len(tt.in))
c.Decrypt(pt, tt.out)
for j, v := range pt {
if v != tt.in[j] {
t.Errorf("Cipher.Decrypt, test vector #%d: plain-text[%d] = %#x, expected %#x", i, j, v, tt.in[j])
break
}
}
}
}
func TestSaltedCipherKeyLength(t *testing.T) {
if _, err := NewSaltedCipher(nil, []byte{'a'}); err != KeySizeError(0) {
t.Errorf("NewSaltedCipher with short key, gave error %#v, expected %#v", err, KeySizeError(0))
}
// A 57-byte key. One over the typical blowfish restriction.
key := []byte("012345678901234567890123456789012345678901234567890123456")
if _, err := NewSaltedCipher(key, []byte{'a'}); err != nil {
t.Errorf("NewSaltedCipher with long key, gave error %#v", err)
}
}
// Test vectors generated with Blowfish from OpenSSH.
var saltedVectors = [][8]byte{
{0x0c, 0x82, 0x3b, 0x7b, 0x8d, 0x01, 0x4b, 0x7e},
{0xd1, 0xe1, 0x93, 0xf0, 0x70, 0xa6, 0xdb, 0x12},
{0xfc, 0x5e, 0xba, 0xde, 0xcb, 0xf8, 0x59, 0xad},
{0x8a, 0x0c, 0x76, 0xe7, 0xdd, 0x2c, 0xd3, 0xa8},
{0x2c, 0xcb, 0x7b, 0xee, 0xac, 0x7b, 0x7f, 0xf8},
{0xbb, 0xf6, 0x30, 0x6f, 0xe1, 0x5d, 0x62, 0xbf},
{0x97, 0x1e, 0xc1, 0x3d, 0x3d, 0xe0, 0x11, 0xe9},
{0x06, 0xd7, 0x4d, 0xb1, 0x80, 0xa3, 0xb1, 0x38},
{0x67, 0xa1, 0xa9, 0x75, 0x0e, 0x5b, 0xc6, 0xb4},
{0x51, 0x0f, 0x33, 0x0e, 0x4f, 0x67, 0xd2, 0x0c},
{0xf1, 0x73, 0x7e, 0xd8, 0x44, 0xea, 0xdb, 0xe5},
{0x14, 0x0e, 0x16, 0xce, 0x7f, 0x4a, 0x9c, 0x7b},
{0x4b, 0xfe, 0x43, 0xfd, 0xbf, 0x36, 0x04, 0x47},
{0xb1, 0xeb, 0x3e, 0x15, 0x36, 0xa7, 0xbb, 0xe2},
{0x6d, 0x0b, 0x41, 0xdd, 0x00, 0x98, 0x0b, 0x19},
{0xd3, 0xce, 0x45, 0xce, 0x1d, 0x56, 0xb7, 0xfc},
{0xd9, 0xf0, 0xfd, 0xda, 0xc0, 0x23, 0xb7, 0x93},
{0x4c, 0x6f, 0xa1, 0xe4, 0x0c, 0xa8, 0xca, 0x57},
{0xe6, 0x2f, 0x28, 0xa7, 0x0c, 0x94, 0x0d, 0x08},
{0x8f, 0xe3, 0xf0, 0xb6, 0x29, 0xe3, 0x44, 0x03},
{0xff, 0x98, 0xdd, 0x04, 0x45, 0xb4, 0x6d, 0x1f},
{0x9e, 0x45, 0x4d, 0x18, 0x40, 0x53, 0xdb, 0xef},
{0xb7, 0x3b, 0xef, 0x29, 0xbe, 0xa8, 0x13, 0x71},
{0x02, 0x54, 0x55, 0x41, 0x8e, 0x04, 0xfc, 0xad},
{0x6a, 0x0a, 0xee, 0x7c, 0x10, 0xd9, 0x19, 0xfe},
{0x0a, 0x22, 0xd9, 0x41, 0xcc, 0x23, 0x87, 0x13},
{0x6e, 0xff, 0x1f, 0xff, 0x36, 0x17, 0x9c, 0xbe},
{0x79, 0xad, 0xb7, 0x40, 0xf4, 0x9f, 0x51, 0xa6},
{0x97, 0x81, 0x99, 0xa4, 0xde, 0x9e, 0x9f, 0xb6},
{0x12, 0x19, 0x7a, 0x28, 0xd0, 0xdc, 0xcc, 0x92},
{0x81, 0xda, 0x60, 0x1e, 0x0e, 0xdd, 0x65, 0x56},
{0x7d, 0x76, 0x20, 0xb2, 0x73, 0xc9, 0x9e, 0xee},
}
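// Each vector i above is the encryption of an all-zero 8-byte block
// under the fixed 32-byte key and a salt prefix of length i, as
// exercised by the loop below.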
func TestSaltedCipher(t *testing.T) {
var key, salt [32]byte
for i := range key {
key[i] = byte(i)
salt[i] = byte(i + 32)
}
for i, v := range saltedVectors {
c, err := NewSaltedCipher(key[:], salt[:i])
if err != nil {
t.Fatal(err)
}
var buf [8]byte
c.Encrypt(buf[:], buf[:])
if v != buf {
t.Errorf("%d: expected %x, got %x", i, v, buf)
}
}
}
func BenchmarkExpandKeyWithSalt(b *testing.B) {
key := make([]byte, 32)
salt := make([]byte, 16)
c, _ := NewCipher(key)
for i := 0; i < b.N; i++ {
expandKeyWithSalt(key, salt, c)
}
}
func BenchmarkExpandKey(b *testing.B) {
key := make([]byte, 32)
c, _ := NewCipher(key)
for i := 0; i < b.N; i++ {
ExpandKey(key, c)
}
}


@@ -42,7 +42,7 @@
// The outgoing packets will be labeled DiffServ assured forwarding
// class 1 low drop precedence, known as AF11 packets.
//
// if err := ipv6.NewConn(c).SetTrafficClass(DiffServAF11); err != nil {
// if err := ipv6.NewConn(c).SetTrafficClass(0x28); err != nil {
// // error handling
// }
// if _, err := c.Write(data); err != nil {
@@ -124,7 +124,7 @@
//
// The application can also send both unicast and multicast packets.
//
// p.SetTrafficClass(DiffServCS0)
// p.SetTrafficClass(0x0)
// p.SetHopLimit(16)
// if _, err := p.WriteTo(data[:n], nil, src); err != nil {
// // error handling


@@ -1,214 +0,0 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package ipv6_test
import (
"fmt"
"log"
"net"
"os"
"time"
"golang.org/x/net/icmp"
"golang.org/x/net/ipv6"
)
func ExampleConn_markingTCP() {
ln, err := net.Listen("tcp6", "[::]:1024")
if err != nil {
log.Fatal(err)
}
defer ln.Close()
for {
c, err := ln.Accept()
if err != nil {
log.Fatal(err)
}
go func(c net.Conn) {
defer c.Close()
p := ipv6.NewConn(c)
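// 0x28 is DSCP AF11 (codepoint 10) shifted into the upper six bits of
// the traffic class octet: 10 << 2 == 0x28.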
if err := p.SetTrafficClass(0x28); err != nil { // DSCP AF11
log.Fatal(err)
}
if err := p.SetHopLimit(128); err != nil {
log.Fatal(err)
}
if _, err := c.Write([]byte("HELLO-R-U-THERE-ACK")); err != nil {
log.Fatal(err)
}
}(c)
}
}
func ExamplePacketConn_servingOneShotMulticastDNS() {
c, err := net.ListenPacket("udp6", "[::]:5353") // mDNS over UDP
if err != nil {
log.Fatal(err)
}
defer c.Close()
p := ipv6.NewPacketConn(c)
en0, err := net.InterfaceByName("en0")
if err != nil {
log.Fatal(err)
}
mDNSLinkLocal := net.UDPAddr{IP: net.ParseIP("ff02::fb")}
if err := p.JoinGroup(en0, &mDNSLinkLocal); err != nil {
log.Fatal(err)
}
defer p.LeaveGroup(en0, &mDNSLinkLocal)
if err := p.SetControlMessage(ipv6.FlagDst|ipv6.FlagInterface, true); err != nil {
log.Fatal(err)
}
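// FlagDst lets the loop below verify each packet was really addressed
// to the mDNS group; FlagInterface supplies the interface index that is
// echoed back via wcm.IfIndex when answering.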
var wcm ipv6.ControlMessage
b := make([]byte, 1500)
for {
_, rcm, peer, err := p.ReadFrom(b)
if err != nil {
log.Fatal(err)
}
if !rcm.Dst.IsMulticast() || !rcm.Dst.Equal(mDNSLinkLocal.IP) {
continue
}
wcm.IfIndex = rcm.IfIndex
answers := []byte("FAKE-MDNS-ANSWERS") // fake mDNS answers, you need to implement this
if _, err := p.WriteTo(answers, &wcm, peer); err != nil {
log.Fatal(err)
}
}
}
func ExamplePacketConn_tracingIPPacketRoute() {
// Tracing an IP packet route to www.google.com.
const host = "www.google.com"
ips, err := net.LookupIP(host)
if err != nil {
log.Fatal(err)
}
var dst net.IPAddr
for _, ip := range ips {
if ip.To16() != nil && ip.To4() == nil {
dst.IP = ip
fmt.Printf("using %v for tracing an IP packet route to %s\n", dst.IP, host)
break
}
}
if dst.IP == nil {
log.Fatal("no AAAA record found")
}
c, err := net.ListenPacket("ip6:58", "::") // ICMP for IPv6
if err != nil {
log.Fatal(err)
}
defer c.Close()
p := ipv6.NewPacketConn(c)
if err := p.SetControlMessage(ipv6.FlagHopLimit|ipv6.FlagSrc|ipv6.FlagDst|ipv6.FlagInterface, true); err != nil {
log.Fatal(err)
}
wm := icmp.Message{
Type: ipv6.ICMPTypeEchoRequest, Code: 0,
Body: &icmp.Echo{
ID: os.Getpid() & 0xffff,
Data: []byte("HELLO-R-U-THERE"),
},
}
var f ipv6.ICMPFilter
f.SetAll(true)
f.Accept(ipv6.ICMPTypeTimeExceeded)
f.Accept(ipv6.ICMPTypeEchoReply)
if err := p.SetICMPFilter(&f); err != nil {
log.Fatal(err)
}
var wcm ipv6.ControlMessage
rb := make([]byte, 1500)
for i := 1; i <= 64; i++ { // up to 64 hops
wm.Body.(*icmp.Echo).Seq = i
wb, err := wm.Marshal(nil)
if err != nil {
log.Fatal(err)
}
// In the real world there are usually multiple
// traffic-engineered paths for each hop.
// You may need to probe a few times to each hop.
begin := time.Now()
wcm.HopLimit = i
if _, err := p.WriteTo(wb, &wcm, &dst); err != nil {
log.Fatal(err)
}
if err := p.SetReadDeadline(time.Now().Add(3 * time.Second)); err != nil {
log.Fatal(err)
}
n, rcm, peer, err := p.ReadFrom(rb)
if err != nil {
if err, ok := err.(net.Error); ok && err.Timeout() {
fmt.Printf("%v\t*\n", i)
continue
}
log.Fatal(err)
}
rm, err := icmp.ParseMessage(58, rb[:n])
if err != nil {
log.Fatal(err)
}
rtt := time.Since(begin)
// In the real world you need to determine whether the
// received message is yours using ControlMessage.Src,
// ControlMessage.Dst, icmp.Echo.ID and icmp.Echo.Seq.
switch rm.Type {
case ipv6.ICMPTypeTimeExceeded:
names, _ := net.LookupAddr(peer.String())
fmt.Printf("%d\t%v %+v %v\n\t%+v\n", i, peer, names, rtt, rcm)
case ipv6.ICMPTypeEchoReply:
names, _ := net.LookupAddr(peer.String())
fmt.Printf("%d\t%v %+v %v\n\t%+v\n", i, peer, names, rtt, rcm)
return
}
}
}
func ExamplePacketConn_advertisingOSPFHello() {
c, err := net.ListenPacket("ip6:89", "::") // OSPF for IPv6
if err != nil {
log.Fatal(err)
}
defer c.Close()
p := ipv6.NewPacketConn(c)
en0, err := net.InterfaceByName("en0")
if err != nil {
log.Fatal(err)
}
allSPFRouters := net.IPAddr{IP: net.ParseIP("ff02::5")}
if err := p.JoinGroup(en0, &allSPFRouters); err != nil {
log.Fatal(err)
}
defer p.LeaveGroup(en0, &allSPFRouters)
hello := make([]byte, 24) // fake hello data, you need to implement this
ospf := make([]byte, 16) // fake ospf header, you need to implement this
ospf[0] = 3 // version 3
ospf[1] = 1 // hello packet
ospf = append(ospf, hello...)
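// OSPFv3 carries its checksum at byte offset 12 of the header (after
// the 1-byte version, 1-byte type, 2-byte length, 4-byte router ID and
// 4-byte area ID), hence the offset passed to SetChecksum below.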
if err := p.SetChecksum(true, 12); err != nil {
log.Fatal(err)
}
cm := ipv6.ControlMessage{
TrafficClass: 0xc0, // DSCP CS6
HopLimit: 1,
IfIndex: en0.Index,
}
if _, err := p.WriteTo(ospf, &cm, &allSPFRouters); err != nil {
log.Fatal(err)
}
}


@@ -1,50 +0,0 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package ipv6_test
import (
"net"
"reflect"
"testing"
"golang.org/x/net/internal/iana"
"golang.org/x/net/ipv6"
)
var (
wireHeaderFromKernel = [ipv6.HeaderLen]byte{
0x69, 0x8b, 0xee, 0xf1,
0xca, 0xfe, 0x2c, 0x01,
0x20, 0x01, 0x0d, 0xb8,
0x00, 0x01, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x01,
0x20, 0x01, 0x0d, 0xb8,
0x00, 0x02, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x01,
}
testHeader = &ipv6.Header{
Version: ipv6.Version,
TrafficClass: iana.DiffServAF43,
FlowLabel: 0xbeef1,
PayloadLen: 0xcafe,
NextHeader: iana.ProtocolIPv6Frag,
HopLimit: 1,
Src: net.ParseIP("2001:db8:1::1"),
Dst: net.ParseIP("2001:db8:2::1"),
}
)
func TestParseHeader(t *testing.T) {
h, err := ipv6.ParseHeader(wireHeaderFromKernel[:])
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(h, testHeader) {
t.Fatalf("got %#v; want %#v", h, testHeader)
}
}


@@ -1,96 +0,0 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package ipv6_test
import (
"net"
"reflect"
"runtime"
"testing"
"golang.org/x/net/internal/nettest"
"golang.org/x/net/ipv6"
)
var icmpStringTests = []struct {
in ipv6.ICMPType
out string
}{
{ipv6.ICMPTypeDestinationUnreachable, "destination unreachable"},
{256, "<nil>"},
}
func TestICMPString(t *testing.T) {
for _, tt := range icmpStringTests {
s := tt.in.String()
if s != tt.out {
t.Errorf("got %s; want %s", s, tt.out)
}
}
}
func TestICMPFilter(t *testing.T) {
switch runtime.GOOS {
case "nacl", "plan9", "solaris", "windows":
t.Skipf("not supported on %s", runtime.GOOS)
}
var f ipv6.ICMPFilter
for _, toggle := range []bool{false, true} {
f.SetAll(toggle)
for _, typ := range []ipv6.ICMPType{
ipv6.ICMPTypeDestinationUnreachable,
ipv6.ICMPTypeEchoReply,
ipv6.ICMPTypeNeighborSolicitation,
ipv6.ICMPTypeDuplicateAddressConfirmation,
} {
f.Accept(typ)
if f.WillBlock(typ) {
t.Errorf("ipv6.ICMPFilter.Set(%v, false) failed", typ)
}
f.Block(typ)
if !f.WillBlock(typ) {
t.Errorf("ipv6.ICMPFilter.Set(%v, true) failed", typ)
}
}
}
}
func TestSetICMPFilter(t *testing.T) {
switch runtime.GOOS {
case "nacl", "plan9", "solaris", "windows":
t.Skipf("not supported on %s", runtime.GOOS)
}
if !supportsIPv6 {
t.Skip("ipv6 is not supported")
}
if m, ok := nettest.SupportsRawIPSocket(); !ok {
t.Skip(m)
}
c, err := net.ListenPacket("ip6:ipv6-icmp", "::1")
if err != nil {
t.Fatal(err)
}
defer c.Close()
p := ipv6.NewPacketConn(c)
var f ipv6.ICMPFilter
f.SetAll(true)
f.Accept(ipv6.ICMPTypeEchoRequest)
f.Accept(ipv6.ICMPTypeEchoReply)
if err := p.SetICMPFilter(&f); err != nil {
t.Fatal(err)
}
kf, err := p.ICMPFilter()
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(kf, &f) {
t.Fatalf("got %#v; want %#v", kf, f)
}
}


@@ -1,32 +0,0 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package ipv6_test
import (
"net"
"testing"
)
func connector(t *testing.T, network, addr string, done chan<- bool) {
defer func() { done <- true }()
c, err := net.Dial(network, addr)
if err != nil {
t.Error(err)
return
}
c.Close()
}
func acceptor(t *testing.T, ln net.Listener, done chan<- bool) {
defer func() { done <- true }()
c, err := ln.Accept()
if err != nil {
t.Error(err)
return
}
c.Close()
}


@@ -1,263 +0,0 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package ipv6_test
import (
"bytes"
"net"
"os"
"runtime"
"testing"
"time"
"golang.org/x/net/icmp"
"golang.org/x/net/internal/iana"
"golang.org/x/net/internal/nettest"
"golang.org/x/net/ipv6"
)
var packetConnReadWriteMulticastUDPTests = []struct {
addr string
grp, src *net.UDPAddr
}{
{"[ff02::]:0", &net.UDPAddr{IP: net.ParseIP("ff02::114")}, nil}, // see RFC 4727
{"[ff30::8000:0]:0", &net.UDPAddr{IP: net.ParseIP("ff30::8000:1")}, &net.UDPAddr{IP: net.IPv6loopback}}, // see RFC 5771
}
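// A nil src means a plain any-source group join; a non-nil src
// exercises the source-specific (MLDv2) join path in the test below.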
func TestPacketConnReadWriteMulticastUDP(t *testing.T) {
switch runtime.GOOS {
case "freebsd": // due to a bug on loopback marking
// See http://www.freebsd.org/cgi/query-pr.cgi?pr=180065.
t.Skipf("not supported on %s", runtime.GOOS)
case "nacl", "plan9", "solaris", "windows":
t.Skipf("not supported on %s", runtime.GOOS)
}
if !supportsIPv6 {
t.Skip("ipv6 is not supported")
}
ifi := nettest.RoutedInterface("ip6", net.FlagUp|net.FlagMulticast|net.FlagLoopback)
if ifi == nil {
t.Skipf("not available on %s", runtime.GOOS)
}
for _, tt := range packetConnReadWriteMulticastUDPTests {
c, err := net.ListenPacket("udp6", tt.addr)
if err != nil {
t.Fatal(err)
}
defer c.Close()
grp := *tt.grp
grp.Port = c.LocalAddr().(*net.UDPAddr).Port
p := ipv6.NewPacketConn(c)
defer p.Close()
if tt.src == nil {
if err := p.JoinGroup(ifi, &grp); err != nil {
t.Fatal(err)
}
defer p.LeaveGroup(ifi, &grp)
} else {
if err := p.JoinSourceSpecificGroup(ifi, &grp, tt.src); err != nil {
switch runtime.GOOS {
case "freebsd", "linux":
default: // platforms that don't support MLDv2 fail here
t.Logf("not supported on %s", runtime.GOOS)
continue
}
t.Fatal(err)
}
defer p.LeaveSourceSpecificGroup(ifi, &grp, tt.src)
}
if err := p.SetMulticastInterface(ifi); err != nil {
t.Fatal(err)
}
if _, err := p.MulticastInterface(); err != nil {
t.Fatal(err)
}
if err := p.SetMulticastLoopback(true); err != nil {
t.Fatal(err)
}
if _, err := p.MulticastLoopback(); err != nil {
t.Fatal(err)
}
cm := ipv6.ControlMessage{
TrafficClass: iana.DiffServAF11 | iana.CongestionExperienced,
Src: net.IPv6loopback,
IfIndex: ifi.Index,
}
cf := ipv6.FlagTrafficClass | ipv6.FlagHopLimit | ipv6.FlagSrc | ipv6.FlagDst | ipv6.FlagInterface | ipv6.FlagPathMTU
wb := []byte("HELLO-R-U-THERE")
for i, toggle := range []bool{true, false, true} {
if err := p.SetControlMessage(cf, toggle); err != nil {
if nettest.ProtocolNotSupported(err) {
t.Logf("not supported on %s", runtime.GOOS)
continue
}
t.Fatal(err)
}
if err := p.SetDeadline(time.Now().Add(200 * time.Millisecond)); err != nil {
t.Fatal(err)
}
cm.HopLimit = i + 1
if n, err := p.WriteTo(wb, &cm, &grp); err != nil {
t.Fatal(err)
} else if n != len(wb) {
t.Fatal(err)
}
rb := make([]byte, 128)
if n, cm, _, err := p.ReadFrom(rb); err != nil {
t.Fatal(err)
} else if !bytes.Equal(rb[:n], wb) {
t.Fatalf("got %v; want %v", rb[:n], wb)
} else {
t.Logf("rcvd cmsg: %v", cm)
}
}
}
}
var packetConnReadWriteMulticastICMPTests = []struct {
grp, src *net.IPAddr
}{
{&net.IPAddr{IP: net.ParseIP("ff02::114")}, nil}, // see RFC 4727
{&net.IPAddr{IP: net.ParseIP("ff30::8000:1")}, &net.IPAddr{IP: net.IPv6loopback}}, // see RFC 5771
}
func TestPacketConnReadWriteMulticastICMP(t *testing.T) {
switch runtime.GOOS {
case "freebsd": // due to a bug on loopback marking
// See http://www.freebsd.org/cgi/query-pr.cgi?pr=180065.
t.Skipf("not supported on %s", runtime.GOOS)
case "nacl", "plan9", "solaris", "windows":
t.Skipf("not supported on %s", runtime.GOOS)
}
if !supportsIPv6 {
t.Skip("ipv6 is not supported")
}
if m, ok := nettest.SupportsRawIPSocket(); !ok {
t.Skip(m)
}
ifi := nettest.RoutedInterface("ip6", net.FlagUp|net.FlagMulticast|net.FlagLoopback)
if ifi == nil {
t.Skipf("not available on %s", runtime.GOOS)
}
for _, tt := range packetConnReadWriteMulticastICMPTests {
c, err := net.ListenPacket("ip6:ipv6-icmp", "::")
if err != nil {
t.Fatal(err)
}
defer c.Close()
pshicmp := icmp.IPv6PseudoHeader(c.LocalAddr().(*net.IPAddr).IP, tt.grp.IP)
p := ipv6.NewPacketConn(c)
defer p.Close()
if tt.src == nil {
if err := p.JoinGroup(ifi, tt.grp); err != nil {
t.Fatal(err)
}
defer p.LeaveGroup(ifi, tt.grp)
} else {
if err := p.JoinSourceSpecificGroup(ifi, tt.grp, tt.src); err != nil {
switch runtime.GOOS {
case "freebsd", "linux":
default: // platforms that don't support MLDv2 fail here
t.Logf("not supported on %s", runtime.GOOS)
continue
}
t.Fatal(err)
}
defer p.LeaveSourceSpecificGroup(ifi, tt.grp, tt.src)
}
if err := p.SetMulticastInterface(ifi); err != nil {
t.Fatal(err)
}
if _, err := p.MulticastInterface(); err != nil {
t.Fatal(err)
}
if err := p.SetMulticastLoopback(true); err != nil {
t.Fatal(err)
}
if _, err := p.MulticastLoopback(); err != nil {
t.Fatal(err)
}
cm := ipv6.ControlMessage{
TrafficClass: iana.DiffServAF11 | iana.CongestionExperienced,
Src: net.IPv6loopback,
IfIndex: ifi.Index,
}
cf := ipv6.FlagTrafficClass | ipv6.FlagHopLimit | ipv6.FlagSrc | ipv6.FlagDst | ipv6.FlagInterface | ipv6.FlagPathMTU
var f ipv6.ICMPFilter
f.SetAll(true)
f.Accept(ipv6.ICMPTypeEchoReply)
if err := p.SetICMPFilter(&f); err != nil {
t.Fatal(err)
}
var psh []byte
for i, toggle := range []bool{true, false, true} {
if toggle {
psh = nil
if err := p.SetChecksum(true, 2); err != nil {
t.Fatal(err)
}
} else {
psh = pshicmp
// Some platforms never allow the kernel
// checksum processing to be disabled.
p.SetChecksum(false, -1)
}
wb, err := (&icmp.Message{
Type: ipv6.ICMPTypeEchoRequest, Code: 0,
Body: &icmp.Echo{
ID: os.Getpid() & 0xffff, Seq: i + 1,
Data: []byte("HELLO-R-U-THERE"),
},
}).Marshal(psh)
if err != nil {
t.Fatal(err)
}
if err := p.SetControlMessage(cf, toggle); err != nil {
if nettest.ProtocolNotSupported(err) {
t.Logf("not supported on %s", runtime.GOOS)
continue
}
t.Fatal(err)
}
if err := p.SetDeadline(time.Now().Add(200 * time.Millisecond)); err != nil {
t.Fatal(err)
}
cm.HopLimit = i + 1
if n, err := p.WriteTo(wb, &cm, tt.grp); err != nil {
t.Fatal(err)
} else if n != len(wb) {
t.Fatalf("got %v; want %v", n, len(wb))
}
rb := make([]byte, 128)
if n, cm, _, err := p.ReadFrom(rb); err != nil {
switch runtime.GOOS {
case "darwin": // older darwin kernels have some limitation on receiving icmp packet through raw socket
t.Logf("not supported on %s", runtime.GOOS)
continue
}
t.Fatal(err)
} else {
t.Logf("rcvd cmsg: %v", cm)
if m, err := icmp.ParseMessage(iana.ProtocolIPv6ICMP, rb[:n]); err != nil {
t.Fatal(err)
} else if m.Type != ipv6.ICMPTypeEchoReply || m.Code != 0 {
t.Fatalf("got type=%v, code=%v; want type=%v, code=%v", m.Type, m.Code, ipv6.ICMPTypeEchoReply, 0)
}
}
}
}
}

View File

@@ -1,246 +0,0 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package ipv6_test
import (
"fmt"
"net"
"runtime"
"testing"
"golang.org/x/net/internal/nettest"
"golang.org/x/net/ipv6"
)
var udpMultipleGroupListenerTests = []net.Addr{
&net.UDPAddr{IP: net.ParseIP("ff02::114")}, // see RFC 4727
&net.UDPAddr{IP: net.ParseIP("ff02::1:114")},
&net.UDPAddr{IP: net.ParseIP("ff02::2:114")},
}
func TestUDPSinglePacketConnWithMultipleGroupListeners(t *testing.T) {
switch runtime.GOOS {
case "nacl", "plan9", "solaris", "windows":
t.Skipf("not supported on %s", runtime.GOOS)
}
if !supportsIPv6 {
t.Skip("ipv6 is not supported")
}
for _, gaddr := range udpMultipleGroupListenerTests {
c, err := net.ListenPacket("udp6", "[::]:0") // wildcard address with non-reusable port
if err != nil {
t.Fatal(err)
}
defer c.Close()
p := ipv6.NewPacketConn(c)
var mift []*net.Interface
ift, err := net.Interfaces()
if err != nil {
t.Fatal(err)
}
for i, ifi := range ift {
if _, ok := nettest.IsMulticastCapable("ip6", &ifi); !ok {
continue
}
if err := p.JoinGroup(&ifi, gaddr); err != nil {
t.Fatal(err)
}
mift = append(mift, &ift[i])
}
for _, ifi := range mift {
if err := p.LeaveGroup(ifi, gaddr); err != nil {
t.Fatal(err)
}
}
}
}
func TestUDPMultiplePacketConnWithMultipleGroupListeners(t *testing.T) {
switch runtime.GOOS {
case "nacl", "plan9", "solaris", "windows":
t.Skipf("not supported on %s", runtime.GOOS)
}
if !supportsIPv6 {
t.Skip("ipv6 is not supported")
}
for _, gaddr := range udpMultipleGroupListenerTests {
c1, err := net.ListenPacket("udp6", "[ff02::]:1024") // wildcard address with reusable port
if err != nil {
t.Fatal(err)
}
defer c1.Close()
c2, err := net.ListenPacket("udp6", "[ff02::]:1024") // wildcard address with reusable port
if err != nil {
t.Fatal(err)
}
defer c2.Close()
var ps [2]*ipv6.PacketConn
ps[0] = ipv6.NewPacketConn(c1)
ps[1] = ipv6.NewPacketConn(c2)
var mift []*net.Interface
ift, err := net.Interfaces()
if err != nil {
t.Fatal(err)
}
for i, ifi := range ift {
if _, ok := nettest.IsMulticastCapable("ip6", &ifi); !ok {
continue
}
for _, p := range ps {
if err := p.JoinGroup(&ifi, gaddr); err != nil {
t.Fatal(err)
}
}
mift = append(mift, &ift[i])
}
for _, ifi := range mift {
for _, p := range ps {
if err := p.LeaveGroup(ifi, gaddr); err != nil {
t.Fatal(err)
}
}
}
}
}
func TestUDPPerInterfaceSinglePacketConnWithSingleGroupListener(t *testing.T) {
switch runtime.GOOS {
case "nacl", "plan9", "solaris", "windows":
t.Skipf("not supported on %s", runtime.GOOS)
}
if !supportsIPv6 {
t.Skip("ipv6 is not supported")
}
gaddr := net.IPAddr{IP: net.ParseIP("ff02::114")} // see RFC 4727
type ml struct {
c *ipv6.PacketConn
ifi *net.Interface
}
var mlt []*ml
ift, err := net.Interfaces()
if err != nil {
t.Fatal(err)
}
for i, ifi := range ift {
ip, ok := nettest.IsMulticastCapable("ip6", &ifi)
if !ok {
continue
}
c, err := net.ListenPacket("udp6", fmt.Sprintf("[%s%%%s]:1024", ip.String(), ifi.Name)) // unicast address with non-reusable port
if err != nil {
t.Fatal(err)
}
defer c.Close()
p := ipv6.NewPacketConn(c)
if err := p.JoinGroup(&ifi, &gaddr); err != nil {
t.Fatal(err)
}
mlt = append(mlt, &ml{p, &ift[i]})
}
for _, m := range mlt {
if err := m.c.LeaveGroup(m.ifi, &gaddr); err != nil {
t.Fatal(err)
}
}
}
func TestIPSinglePacketConnWithSingleGroupListener(t *testing.T) {
switch runtime.GOOS {
case "nacl", "plan9", "solaris", "windows":
t.Skipf("not supported on %s", runtime.GOOS)
}
if !supportsIPv6 {
t.Skip("ipv6 is not supported")
}
if m, ok := nettest.SupportsRawIPSocket(); !ok {
t.Skip(m)
}
c, err := net.ListenPacket("ip6:ipv6-icmp", "::") // wildcard address
if err != nil {
t.Fatal(err)
}
defer c.Close()
p := ipv6.NewPacketConn(c)
gaddr := net.IPAddr{IP: net.ParseIP("ff02::114")} // see RFC 4727
var mift []*net.Interface
ift, err := net.Interfaces()
if err != nil {
t.Fatal(err)
}
for i, ifi := range ift {
if _, ok := nettest.IsMulticastCapable("ip6", &ifi); !ok {
continue
}
if err := p.JoinGroup(&ifi, &gaddr); err != nil {
t.Fatal(err)
}
mift = append(mift, &ift[i])
}
for _, ifi := range mift {
if err := p.LeaveGroup(ifi, &gaddr); err != nil {
t.Fatal(err)
}
}
}
func TestIPPerInterfaceSinglePacketConnWithSingleGroupListener(t *testing.T) {
switch runtime.GOOS {
case "darwin", "dragonfly", "openbsd": // platforms that return fe80::1%lo0: bind: can't assign requested address
t.Skipf("not supported on %s", runtime.GOOS)
case "nacl", "plan9", "solaris", "windows":
t.Skipf("not supported on %s", runtime.GOOS)
}
if !supportsIPv6 {
t.Skip("ipv6 is not supported")
}
if m, ok := nettest.SupportsRawIPSocket(); !ok {
t.Skip(m)
}
gaddr := net.IPAddr{IP: net.ParseIP("ff02::114")} // see RFC 4727
type ml struct {
c *ipv6.PacketConn
ifi *net.Interface
}
var mlt []*ml
ift, err := net.Interfaces()
if err != nil {
t.Fatal(err)
}
for i, ifi := range ift {
ip, ok := nettest.IsMulticastCapable("ip6", &ifi)
if !ok {
continue
}
c, err := net.ListenPacket("ip6:ipv6-icmp", fmt.Sprintf("%s%%%s", ip.String(), ifi.Name)) // unicast address
if err != nil {
t.Fatal(err)
}
defer c.Close()
p := ipv6.NewPacketConn(c)
if err := p.JoinGroup(&ifi, &gaddr); err != nil {
t.Fatal(err)
}
mlt = append(mlt, &ml{p, &ift[i]})
}
for _, m := range mlt {
if err := m.c.LeaveGroup(m.ifi, &gaddr); err != nil {
t.Fatal(err)
}
}
}
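The pattern these listener tests exercise is the general one for IPv6 multicast: wrap a net.PacketConn with ipv6.NewPacketConn, join the group on each capable interface, and leave it when done. A minimal sketch, assuming a placeholder interface name and the RFC 4727 test group:

package main

import (
	"log"
	"net"

	"golang.org/x/net/ipv6"
)

func main() {
	c, err := net.ListenPacket("udp6", "[::]:0")
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()
	p := ipv6.NewPacketConn(c)
	ifi, err := net.InterfaceByName("eth0") // placeholder interface name
	if err != nil {
		log.Fatal(err)
	}
	grp := &net.UDPAddr{IP: net.ParseIP("ff02::114")} // RFC 4727 test group
	if err := p.JoinGroup(ifi, grp); err != nil {
		log.Fatal(err)
	}
	defer p.LeaveGroup(ifi, grp)
	b := make([]byte, 1500)
	n, _, src, err := p.ReadFrom(b) // blocks until a datagram arrives
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("%d bytes from %v", n, src)
}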


@@ -1,157 +0,0 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package ipv6_test
import (
"net"
"runtime"
"testing"
"golang.org/x/net/internal/nettest"
"golang.org/x/net/ipv6"
)
var packetConnMulticastSocketOptionTests = []struct {
net, proto, addr string
grp, src net.Addr
}{
{"udp6", "", "[ff02::]:0", &net.UDPAddr{IP: net.ParseIP("ff02::114")}, nil}, // see RFC 4727
{"ip6", ":ipv6-icmp", "::", &net.IPAddr{IP: net.ParseIP("ff02::115")}, nil}, // see RFC 4727
{"udp6", "", "[ff30::8000:0]:0", &net.UDPAddr{IP: net.ParseIP("ff30::8000:1")}, &net.UDPAddr{IP: net.IPv6loopback}}, // see RFC 5771
{"ip6", ":ipv6-icmp", "::", &net.IPAddr{IP: net.ParseIP("ff30::8000:2")}, &net.IPAddr{IP: net.IPv6loopback}}, // see RFC 5771
}
func TestPacketConnMulticastSocketOptions(t *testing.T) {
switch runtime.GOOS {
case "nacl", "plan9", "solaris", "windows":
t.Skipf("not supported on %s", runtime.GOOS)
}
if !supportsIPv6 {
t.Skip("ipv6 is not supported")
}
ifi := nettest.RoutedInterface("ip6", net.FlagUp|net.FlagMulticast|net.FlagLoopback)
if ifi == nil {
t.Skipf("not available on %s", runtime.GOOS)
}
m, ok := nettest.SupportsRawIPSocket()
for _, tt := range packetConnMulticastSocketOptionTests {
if tt.net == "ip6" && !ok {
t.Log(m)
continue
}
c, err := net.ListenPacket(tt.net+tt.proto, tt.addr)
if err != nil {
t.Fatal(err)
}
defer c.Close()
p := ipv6.NewPacketConn(c)
defer p.Close()
if tt.src == nil {
testMulticastSocketOptions(t, p, ifi, tt.grp)
} else {
testSourceSpecificMulticastSocketOptions(t, p, ifi, tt.grp, tt.src)
}
}
}
type testIPv6MulticastConn interface {
MulticastHopLimit() (int, error)
SetMulticastHopLimit(ttl int) error
MulticastLoopback() (bool, error)
SetMulticastLoopback(bool) error
JoinGroup(*net.Interface, net.Addr) error
LeaveGroup(*net.Interface, net.Addr) error
JoinSourceSpecificGroup(*net.Interface, net.Addr, net.Addr) error
LeaveSourceSpecificGroup(*net.Interface, net.Addr, net.Addr) error
ExcludeSourceSpecificGroup(*net.Interface, net.Addr, net.Addr) error
IncludeSourceSpecificGroup(*net.Interface, net.Addr, net.Addr) error
}
func testMulticastSocketOptions(t *testing.T, c testIPv6MulticastConn, ifi *net.Interface, grp net.Addr) {
const hoplim = 255
if err := c.SetMulticastHopLimit(hoplim); err != nil {
t.Error(err)
return
}
if v, err := c.MulticastHopLimit(); err != nil {
t.Error(err)
return
} else if v != hoplim {
t.Errorf("got %v; want %v", v, hoplim)
return
}
for _, toggle := range []bool{true, false} {
if err := c.SetMulticastLoopback(toggle); err != nil {
t.Error(err)
return
}
if v, err := c.MulticastLoopback(); err != nil {
t.Error(err)
return
} else if v != toggle {
t.Errorf("got %v; want %v", v, toggle)
return
}
}
if err := c.JoinGroup(ifi, grp); err != nil {
t.Error(err)
return
}
if err := c.LeaveGroup(ifi, grp); err != nil {
t.Error(err)
return
}
}
func testSourceSpecificMulticastSocketOptions(t *testing.T, c testIPv6MulticastConn, ifi *net.Interface, grp, src net.Addr) {
// MCAST_JOIN_GROUP -> MCAST_BLOCK_SOURCE -> MCAST_UNBLOCK_SOURCE -> MCAST_LEAVE_GROUP
if err := c.JoinGroup(ifi, grp); err != nil {
t.Error(err)
return
}
if err := c.ExcludeSourceSpecificGroup(ifi, grp, src); err != nil {
switch runtime.GOOS {
case "freebsd", "linux":
default: // platforms that don't support MLDv2 fail here
t.Logf("not supported on %s", runtime.GOOS)
return
}
t.Error(err)
return
}
if err := c.IncludeSourceSpecificGroup(ifi, grp, src); err != nil {
t.Error(err)
return
}
if err := c.LeaveGroup(ifi, grp); err != nil {
t.Error(err)
return
}
// MCAST_JOIN_SOURCE_GROUP -> MCAST_LEAVE_SOURCE_GROUP
if err := c.JoinSourceSpecificGroup(ifi, grp, src); err != nil {
t.Error(err)
return
}
if err := c.LeaveSourceSpecificGroup(ifi, grp, src); err != nil {
t.Error(err)
return
}
// MCAST_JOIN_SOURCE_GROUP -> MCAST_LEAVE_GROUP
if err := c.JoinSourceSpecificGroup(ifi, grp, src); err != nil {
t.Error(err)
return
}
if err := c.LeaveGroup(ifi, grp); err != nil {
t.Error(err)
return
}
}
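The comments in testSourceSpecificMulticastSocketOptions name the underlying socket-option sequences (MCAST_JOIN_GROUP -> MCAST_BLOCK_SOURCE -> MCAST_UNBLOCK_SOURCE -> MCAST_LEAVE_GROUP). In API terms, a source-specific subscription looks like the following sketch; the group is from the RFC 5771 test range, and only MLDv2-capable platforms (e.g. linux, freebsd) accept the source-filtering calls:

package multicast

import (
	"net"

	"golang.org/x/net/ipv6"
)

// joinSSM subscribes p to datagrams that src sends to the group, then
// unsubscribes. Platforms without MLDv2 support return an error on join.
func joinSSM(p *ipv6.PacketConn, ifi *net.Interface) error {
	grp := &net.UDPAddr{IP: net.ParseIP("ff30::8000:1")} // RFC 5771 test range
	src := &net.UDPAddr{IP: net.IPv6loopback}
	if err := p.JoinSourceSpecificGroup(ifi, grp, src); err != nil {
		return err
	}
	// ... receive from the group here ...
	return p.LeaveSourceSpecificGroup(ifi, grp, src)
}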


@@ -1,185 +0,0 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package ipv6_test
import (
"bytes"
"net"
"runtime"
"sync"
"testing"
"golang.org/x/net/internal/iana"
"golang.org/x/net/internal/nettest"
"golang.org/x/net/ipv6"
)
func benchmarkUDPListener() (net.PacketConn, net.Addr, error) {
c, err := net.ListenPacket("udp6", "[::1]:0")
if err != nil {
return nil, nil, err
}
dst, err := net.ResolveUDPAddr("udp6", c.LocalAddr().String())
if err != nil {
c.Close()
return nil, nil, err
}
return c, dst, nil
}
func BenchmarkReadWriteNetUDP(b *testing.B) {
if !supportsIPv6 {
b.Skip("ipv6 is not supported")
}
c, dst, err := benchmarkUDPListener()
if err != nil {
b.Fatal(err)
}
defer c.Close()
wb, rb := []byte("HELLO-R-U-THERE"), make([]byte, 128)
b.ResetTimer()
for i := 0; i < b.N; i++ {
benchmarkReadWriteNetUDP(b, c, wb, rb, dst)
}
}
func benchmarkReadWriteNetUDP(b *testing.B, c net.PacketConn, wb, rb []byte, dst net.Addr) {
if _, err := c.WriteTo(wb, dst); err != nil {
b.Fatal(err)
}
if _, _, err := c.ReadFrom(rb); err != nil {
b.Fatal(err)
}
}
func BenchmarkReadWriteIPv6UDP(b *testing.B) {
if !supportsIPv6 {
b.Skip("ipv6 is not supported")
}
c, dst, err := benchmarkUDPListener()
if err != nil {
b.Fatal(err)
}
defer c.Close()
p := ipv6.NewPacketConn(c)
cf := ipv6.FlagTrafficClass | ipv6.FlagHopLimit | ipv6.FlagSrc | ipv6.FlagDst | ipv6.FlagInterface | ipv6.FlagPathMTU
if err := p.SetControlMessage(cf, true); err != nil {
b.Fatal(err)
}
ifi := nettest.RoutedInterface("ip6", net.FlagUp|net.FlagLoopback)
wb, rb := []byte("HELLO-R-U-THERE"), make([]byte, 128)
b.ResetTimer()
for i := 0; i < b.N; i++ {
benchmarkReadWriteIPv6UDP(b, p, wb, rb, dst, ifi)
}
}
func benchmarkReadWriteIPv6UDP(b *testing.B, p *ipv6.PacketConn, wb, rb []byte, dst net.Addr, ifi *net.Interface) {
cm := ipv6.ControlMessage{
TrafficClass: iana.DiffServAF11 | iana.CongestionExperienced,
HopLimit: 1,
}
if ifi != nil {
cm.IfIndex = ifi.Index
}
if n, err := p.WriteTo(wb, &cm, dst); err != nil {
b.Fatal(err)
} else if n != len(wb) {
b.Fatalf("got %v; want %v", n, len(wb))
}
if _, _, _, err := p.ReadFrom(rb); err != nil {
b.Fatal(err)
}
}
func TestPacketConnConcurrentReadWriteUnicastUDP(t *testing.T) {
switch runtime.GOOS {
case "nacl", "plan9", "solaris", "windows":
t.Skipf("not supported on %s", runtime.GOOS)
}
if !supportsIPv6 {
t.Skip("ipv6 is not supported")
}
c, err := net.ListenPacket("udp6", "[::1]:0")
if err != nil {
t.Fatal(err)
}
defer c.Close()
p := ipv6.NewPacketConn(c)
defer p.Close()
dst, err := net.ResolveUDPAddr("udp6", c.LocalAddr().String())
if err != nil {
t.Fatal(err)
}
ifi := nettest.RoutedInterface("ip6", net.FlagUp|net.FlagLoopback)
cf := ipv6.FlagTrafficClass | ipv6.FlagHopLimit | ipv6.FlagSrc | ipv6.FlagDst | ipv6.FlagInterface | ipv6.FlagPathMTU
wb := []byte("HELLO-R-U-THERE")
if err := p.SetControlMessage(cf, true); err != nil { // probe before test
if nettest.ProtocolNotSupported(err) {
t.Skipf("not supported on %s", runtime.GOOS)
}
t.Fatal(err)
}
var wg sync.WaitGroup
reader := func() {
defer wg.Done()
rb := make([]byte, 128)
if n, cm, _, err := p.ReadFrom(rb); err != nil {
t.Error(err)
return
} else if !bytes.Equal(rb[:n], wb) {
t.Errorf("got %v; want %v", rb[:n], wb)
return
} else {
t.Logf("rcvd cmsg: %v", cm)
}
}
writer := func(toggle bool) {
defer wg.Done()
cm := ipv6.ControlMessage{
TrafficClass: iana.DiffServAF11 | iana.CongestionExperienced,
Src: net.IPv6loopback,
}
if ifi != nil {
cm.IfIndex = ifi.Index
}
if err := p.SetControlMessage(cf, toggle); err != nil {
t.Error(err)
return
}
if n, err := p.WriteTo(wb, &cm, dst); err != nil {
t.Error(err)
return
} else if n != len(wb) {
t.Errorf("got %v; want %v", n, len(wb))
return
}
}
const N = 10
wg.Add(N)
for i := 0; i < N; i++ {
go reader()
}
wg.Add(2 * N)
for i := 0; i < 2*N; i++ {
go writer(i%2 != 0)
}
wg.Add(N)
for i := 0; i < N; i++ {
go reader()
}
wg.Wait()
}
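The writer above shows the per-packet path: ancillary data travels out in an ipv6.ControlMessage passed to WriteTo, while SetControlMessage chooses which fields come back on ReadFrom. A sketch of stamping traffic class and hop limit on a single datagram, with placeholder values:

package main

import (
	"log"
	"net"

	"golang.org/x/net/ipv6"
)

func main() {
	c, err := net.ListenPacket("udp6", "[::1]:0")
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()
	p := ipv6.NewPacketConn(c)
	dst, err := net.ResolveUDPAddr("udp6", c.LocalAddr().String())
	if err != nil {
		log.Fatal(err)
	}
	// Per-packet metadata, attached as ancillary data on this write only.
	cm := &ipv6.ControlMessage{TrafficClass: 0x28, HopLimit: 1} // placeholder values
	if _, err := p.WriteTo([]byte("HELLO"), cm, dst); err != nil {
		log.Fatal(err)
	}
}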


@@ -1,133 +0,0 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package ipv6_test
import (
"fmt"
"net"
"runtime"
"testing"
"golang.org/x/net/internal/iana"
"golang.org/x/net/internal/nettest"
"golang.org/x/net/ipv6"
)
var supportsIPv6 = nettest.SupportsIPv6()
func TestConnInitiatorPathMTU(t *testing.T) {
switch runtime.GOOS {
case "nacl", "plan9", "solaris", "windows":
t.Skipf("not supported on %s", runtime.GOOS)
}
if !supportsIPv6 {
t.Skip("ipv6 is not supported")
}
ln, err := net.Listen("tcp6", "[::1]:0")
if err != nil {
t.Fatal(err)
}
defer ln.Close()
done := make(chan bool)
go acceptor(t, ln, done)
c, err := net.Dial("tcp6", ln.Addr().String())
if err != nil {
t.Fatal(err)
}
defer c.Close()
if pmtu, err := ipv6.NewConn(c).PathMTU(); err != nil {
switch runtime.GOOS {
case "darwin": // older darwin kernels don't support IPV6_PATHMTU option
t.Logf("not supported on %s", runtime.GOOS)
default:
t.Fatal(err)
}
} else {
t.Logf("path mtu for %v: %v", c.RemoteAddr(), pmtu)
}
<-done
}
func TestConnResponderPathMTU(t *testing.T) {
switch runtime.GOOS {
case "nacl", "plan9", "solaris", "windows":
t.Skipf("not supported on %s", runtime.GOOS)
}
if !supportsIPv6 {
t.Skip("ipv6 is not supported")
}
ln, err := net.Listen("tcp6", "[::1]:0")
if err != nil {
t.Fatal(err)
}
defer ln.Close()
done := make(chan bool)
go connector(t, "tcp6", ln.Addr().String(), done)
c, err := ln.Accept()
if err != nil {
t.Fatal(err)
}
defer c.Close()
if pmtu, err := ipv6.NewConn(c).PathMTU(); err != nil {
switch runtime.GOOS {
case "darwin": // older darwin kernels don't support IPV6_PATHMTU option
t.Logf("not supported on %s", runtime.GOOS)
default:
t.Fatal(err)
}
} else {
t.Logf("path mtu for %v: %v", c.RemoteAddr(), pmtu)
}
<-done
}
func TestPacketConnChecksum(t *testing.T) {
switch runtime.GOOS {
case "nacl", "plan9", "solaris", "windows":
t.Skipf("not supported on %s", runtime.GOOS)
}
if !supportsIPv6 {
t.Skip("ipv6 is not supported")
}
if m, ok := nettest.SupportsRawIPSocket(); !ok {
t.Skip(m)
}
c, err := net.ListenPacket(fmt.Sprintf("ip6:%d", iana.ProtocolOSPFIGP), "::") // OSPF for IPv6
if err != nil {
t.Fatal(err)
}
defer c.Close()
p := ipv6.NewPacketConn(c)
offset := 12 // see RFC 5340
for _, toggle := range []bool{false, true} {
if err := p.SetChecksum(toggle, offset); err != nil {
if toggle {
t.Fatalf("ipv6.PacketConn.SetChecksum(%v, %v) failed: %v", toggle, offset, err)
} else {
// Some platforms never allow the kernel
// checksum processing to be disabled.
t.Logf("ipv6.PacketConn.SetChecksum(%v, %v) failed: %v", toggle, offset, err)
}
}
if on, offset, err := p.Checksum(); err != nil {
t.Fatal(err)
} else {
t.Logf("kernel checksum processing enabled=%v, offset=%v", on, offset)
}
}
}
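TestPacketConnChecksum exercises the knob used by protocols that carry their checksum at a fixed offset: for OSPF for IPv6 that offset is byte 12 of the header (RFC 5340), and SetChecksum(true, 12) asks the kernel to fill it on send and verify it on receive. A sketch, assuming raw-socket privileges and using OSPF's IANA protocol number 89 directly, since the iana package is internal to x/net:

package main

import (
	"log"
	"net"

	"golang.org/x/net/ipv6"
)

func main() {
	c, err := net.ListenPacket("ip6:89", "::") // 89 = OSPF; needs privileges
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()
	p := ipv6.NewPacketConn(c)
	// RFC 5340 puts the OSPFv3 checksum at byte offset 12.
	if err := p.SetChecksum(true, 12); err != nil {
		log.Fatal(err)
	}
	on, offset, err := p.Checksum()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("kernel checksum processing enabled=%v, offset=%v", on, offset)
}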


@@ -1,185 +0,0 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package ipv6_test
import (
"bytes"
"net"
"os"
"runtime"
"testing"
"time"
"golang.org/x/net/icmp"
"golang.org/x/net/internal/iana"
"golang.org/x/net/internal/nettest"
"golang.org/x/net/ipv6"
)
func TestPacketConnReadWriteUnicastUDP(t *testing.T) {
switch runtime.GOOS {
case "nacl", "plan9", "solaris", "windows":
t.Skipf("not supported on %s", runtime.GOOS)
}
if !supportsIPv6 {
t.Skip("ipv6 is not supported")
}
c, err := net.ListenPacket("udp6", "[::1]:0")
if err != nil {
t.Fatal(err)
}
defer c.Close()
p := ipv6.NewPacketConn(c)
defer p.Close()
dst, err := net.ResolveUDPAddr("udp6", c.LocalAddr().String())
if err != nil {
t.Fatal(err)
}
cm := ipv6.ControlMessage{
TrafficClass: iana.DiffServAF11 | iana.CongestionExperienced,
Src: net.IPv6loopback,
}
cf := ipv6.FlagTrafficClass | ipv6.FlagHopLimit | ipv6.FlagSrc | ipv6.FlagDst | ipv6.FlagInterface | ipv6.FlagPathMTU
ifi := nettest.RoutedInterface("ip6", net.FlagUp|net.FlagLoopback)
if ifi != nil {
cm.IfIndex = ifi.Index
}
wb := []byte("HELLO-R-U-THERE")
for i, toggle := range []bool{true, false, true} {
if err := p.SetControlMessage(cf, toggle); err != nil {
if nettest.ProtocolNotSupported(err) {
t.Skipf("not supported on %s", runtime.GOOS)
}
t.Fatal(err)
}
cm.HopLimit = i + 1
if err := p.SetWriteDeadline(time.Now().Add(100 * time.Millisecond)); err != nil {
t.Fatal(err)
}
if n, err := p.WriteTo(wb, &cm, dst); err != nil {
t.Fatal(err)
} else if n != len(wb) {
t.Fatalf("got %v; want %v", n, len(wb))
}
rb := make([]byte, 128)
if err := p.SetReadDeadline(time.Now().Add(100 * time.Millisecond)); err != nil {
t.Fatal(err)
}
if n, cm, _, err := p.ReadFrom(rb); err != nil {
t.Fatal(err)
} else if !bytes.Equal(rb[:n], wb) {
t.Fatalf("got %v; want %v", rb[:n], wb)
} else {
t.Logf("rcvd cmsg: %v", cm)
}
}
}
func TestPacketConnReadWriteUnicastICMP(t *testing.T) {
switch runtime.GOOS {
case "nacl", "plan9", "solaris", "windows":
t.Skipf("not supported on %s", runtime.GOOS)
}
if !supportsIPv6 {
t.Skip("ipv6 is not supported")
}
if m, ok := nettest.SupportsRawIPSocket(); !ok {
t.Skip(m)
}
c, err := net.ListenPacket("ip6:ipv6-icmp", "::1")
if err != nil {
t.Fatal(err)
}
defer c.Close()
p := ipv6.NewPacketConn(c)
defer p.Close()
dst, err := net.ResolveIPAddr("ip6", "::1")
if err != nil {
t.Fatal(err)
}
pshicmp := icmp.IPv6PseudoHeader(c.LocalAddr().(*net.IPAddr).IP, dst.IP)
cm := ipv6.ControlMessage{
TrafficClass: iana.DiffServAF11 | iana.CongestionExperienced,
Src: net.IPv6loopback,
}
cf := ipv6.FlagTrafficClass | ipv6.FlagHopLimit | ipv6.FlagSrc | ipv6.FlagDst | ipv6.FlagInterface | ipv6.FlagPathMTU
ifi := nettest.RoutedInterface("ip6", net.FlagUp|net.FlagLoopback)
if ifi != nil {
cm.IfIndex = ifi.Index
}
var f ipv6.ICMPFilter
f.SetAll(true)
f.Accept(ipv6.ICMPTypeEchoReply)
if err := p.SetICMPFilter(&f); err != nil {
t.Fatal(err)
}
var psh []byte
for i, toggle := range []bool{true, false, true} {
if toggle {
psh = nil
if err := p.SetChecksum(true, 2); err != nil {
t.Fatal(err)
}
} else {
psh = pshicmp
// Some platforms never allow the kernel
// checksum processing to be disabled.
p.SetChecksum(false, -1)
}
wb, err := (&icmp.Message{
Type: ipv6.ICMPTypeEchoRequest, Code: 0,
Body: &icmp.Echo{
ID: os.Getpid() & 0xffff, Seq: i + 1,
Data: []byte("HELLO-R-U-THERE"),
},
}).Marshal(psh)
if err != nil {
t.Fatal(err)
}
if err := p.SetControlMessage(cf, toggle); err != nil {
if nettest.ProtocolNotSupported(err) {
t.Skipf("not supported on %s", runtime.GOOS)
}
t.Fatal(err)
}
cm.HopLimit = i + 1
if err := p.SetWriteDeadline(time.Now().Add(100 * time.Millisecond)); err != nil {
t.Fatal(err)
}
if n, err := p.WriteTo(wb, &cm, dst); err != nil {
t.Fatal(err)
} else if n != len(wb) {
t.Fatalf("got %v; want %v", n, len(wb))
}
rb := make([]byte, 128)
if err := p.SetReadDeadline(time.Now().Add(100 * time.Millisecond)); err != nil {
t.Fatal(err)
}
if n, cm, _, err := p.ReadFrom(rb); err != nil {
switch runtime.GOOS {
case "darwin": // older darwin kernels have some limitation on receiving icmp packet through raw socket
t.Logf("not supported on %s", runtime.GOOS)
continue
}
t.Fatal(err)
} else {
t.Logf("rcvd cmsg: %v", cm)
if m, err := icmp.ParseMessage(iana.ProtocolIPv6ICMP, rb[:n]); err != nil {
t.Fatal(err)
} else if m.Type != ipv6.ICMPTypeEchoReply || m.Code != 0 {
t.Fatalf("got type=%v, code=%v; want type=%v, code=%v", m.Type, m.Code, ipv6.ICMPTypeEchoReply, 0)
}
}
}
}


@@ -1,111 +0,0 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package ipv6_test
import (
"net"
"runtime"
"testing"
"golang.org/x/net/internal/iana"
"golang.org/x/net/internal/nettest"
"golang.org/x/net/ipv6"
)
func TestConnUnicastSocketOptions(t *testing.T) {
switch runtime.GOOS {
case "nacl", "plan9", "solaris", "windows":
t.Skipf("not supported on %s", runtime.GOOS)
}
if !supportsIPv6 {
t.Skip("ipv6 is not supported")
}
ln, err := net.Listen("tcp6", "[::1]:0")
if err != nil {
t.Fatal(err)
}
defer ln.Close()
done := make(chan bool)
go acceptor(t, ln, done)
c, err := net.Dial("tcp6", ln.Addr().String())
if err != nil {
t.Fatal(err)
}
defer c.Close()
testUnicastSocketOptions(t, ipv6.NewConn(c))
<-done
}
var packetConnUnicastSocketOptionTests = []struct {
net, proto, addr string
}{
{"udp6", "", "[::1]:0"},
{"ip6", ":ipv6-icmp", "::1"},
}
func TestPacketConnUnicastSocketOptions(t *testing.T) {
switch runtime.GOOS {
case "nacl", "plan9", "solaris", "windows":
t.Skipf("not supported on %s", runtime.GOOS)
}
if !supportsIPv6 {
t.Skip("ipv6 is not supported")
}
m, ok := nettest.SupportsRawIPSocket()
for _, tt := range packetConnUnicastSocketOptionTests {
if tt.net == "ip6" && !ok {
t.Log(m)
continue
}
c, err := net.ListenPacket(tt.net+tt.proto, tt.addr)
if err != nil {
t.Fatal(err)
}
defer c.Close()
testUnicastSocketOptions(t, ipv6.NewPacketConn(c))
}
}
type testIPv6UnicastConn interface {
TrafficClass() (int, error)
SetTrafficClass(int) error
HopLimit() (int, error)
SetHopLimit(int) error
}
func testUnicastSocketOptions(t *testing.T, c testIPv6UnicastConn) {
tclass := iana.DiffServCS0 | iana.NotECNTransport
if err := c.SetTrafficClass(tclass); err != nil {
switch runtime.GOOS {
case "darwin": // older darwin kernels don't support IPV6_TCLASS option
t.Logf("not supported on %s", runtime.GOOS)
goto next
}
t.Fatal(err)
}
if v, err := c.TrafficClass(); err != nil {
t.Fatal(err)
} else if v != tclass {
t.Fatalf("got %v; want %v", v, tclass)
}
next:
hoplim := 255
if err := c.SetHopLimit(hoplim); err != nil {
t.Fatal(err)
}
if v, err := c.HopLimit(); err != nil {
t.Fatal(err)
} else if v != hoplim {
t.Fatalf("got %v; want %v", v, hoplim)
}
}
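Outside the test, the two options read back by testUnicastSocketOptions are typically set once after dialing. A sketch with a placeholder address and values:

package main

import (
	"log"
	"net"

	"golang.org/x/net/ipv6"
)

func main() {
	c, err := net.Dial("tcp6", "[::1]:8080") // placeholder address
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()
	p := ipv6.NewConn(c)
	if err := p.SetTrafficClass(0x28); err != nil { // placeholder DSCP/ECN value
		log.Println(err) // e.g. older darwin kernels lack IPV6_TCLASS
	}
	if err := p.SetHopLimit(64); err != nil {
		log.Fatal(err)
	}
}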

Godeps/_workspace/src/golang.org/x/net/proxy/direct.go generated vendored Normal file

@@ -0,0 +1,18 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package proxy
import (
"net"
)
type direct struct{}
// Direct is a direct proxy: one that makes network connections directly.
var Direct = direct{}
func (direct) Dial(network, addr string) (net.Conn, error) {
return net.Dial(network, addr)
}
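Because Direct implements the Dialer interface with a plain net.Dial, it serves both as the zero-configuration default and as the base of dialer chains such as PerHost below. Minimal use, with a placeholder target:

package main

import (
	"log"

	"golang.org/x/net/proxy"
)

func main() {
	// proxy.Direct dials the target itself instead of going through a proxy.
	conn, err := proxy.Direct.Dial("tcp", "example.com:80")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
}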


@@ -0,0 +1,140 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package proxy
import (
"net"
"strings"
)
// A PerHost directs connections to a default Dialer unless the hostname
// requested matches one of a number of exceptions.
type PerHost struct {
def, bypass Dialer
bypassNetworks []*net.IPNet
bypassIPs []net.IP
bypassZones []string
bypassHosts []string
}
// NewPerHost returns a PerHost Dialer that directs connections to either
// defaultDialer or bypass, depending on whether the connection matches one of
// the configured rules.
func NewPerHost(defaultDialer, bypass Dialer) *PerHost {
return &PerHost{
def: defaultDialer,
bypass: bypass,
}
}
// Dial connects to the address addr on the given network through either
// defaultDialer or bypass.
func (p *PerHost) Dial(network, addr string) (c net.Conn, err error) {
host, _, err := net.SplitHostPort(addr)
if err != nil {
return nil, err
}
return p.dialerForRequest(host).Dial(network, addr)
}
func (p *PerHost) dialerForRequest(host string) Dialer {
if ip := net.ParseIP(host); ip != nil {
for _, net := range p.bypassNetworks {
if net.Contains(ip) {
return p.bypass
}
}
for _, bypassIP := range p.bypassIPs {
if bypassIP.Equal(ip) {
return p.bypass
}
}
return p.def
}
for _, zone := range p.bypassZones {
if strings.HasSuffix(host, zone) {
return p.bypass
}
if host == zone[1:] {
// For a zone "example.com", we match "example.com"
// too.
return p.bypass
}
}
for _, bypassHost := range p.bypassHosts {
if bypassHost == host {
return p.bypass
}
}
return p.def
}
// AddFromString parses a string that contains comma-separated values
// specifying hosts that should use the bypass proxy. Each value is either an
// IP address, a CIDR range, a zone (*.example.com) or a hostname
// (localhost). A best effort is made to parse the string and errors are
// ignored.
func (p *PerHost) AddFromString(s string) {
hosts := strings.Split(s, ",")
for _, host := range hosts {
host = strings.TrimSpace(host)
if len(host) == 0 {
continue
}
if strings.Contains(host, "/") {
// We assume that it's a CIDR address like 127.0.0.0/8
if _, net, err := net.ParseCIDR(host); err == nil {
p.AddNetwork(net)
}
continue
}
if ip := net.ParseIP(host); ip != nil {
p.AddIP(ip)
continue
}
if strings.HasPrefix(host, "*.") {
p.AddZone(host[1:])
continue
}
p.AddHost(host)
}
}
// AddIP specifies an IP address that will use the bypass proxy. Note that
// this will only take effect if a literal IP address is dialed. A connection
// to a named host will never match an IP.
func (p *PerHost) AddIP(ip net.IP) {
p.bypassIPs = append(p.bypassIPs, ip)
}
// AddNetwork specifies an IP range that will use the bypass proxy. Note that
// this will only take effect if a literal IP address is dialed. A connection
// to a named host will never match.
func (p *PerHost) AddNetwork(net *net.IPNet) {
p.bypassNetworks = append(p.bypassNetworks, net)
}
// AddZone specifies a DNS suffix that will use the bypass proxy. A zone of
// "example.com" matches "example.com" and all of its subdomains.
func (p *PerHost) AddZone(zone string) {
if strings.HasSuffix(zone, ".") {
zone = zone[:len(zone)-1]
}
if !strings.HasPrefix(zone, ".") {
zone = "." + zone
}
p.bypassZones = append(p.bypassZones, zone)
}
// AddHost specifies a hostname that will use the bypass proxy.
func (p *PerHost) AddHost(host string) {
if strings.HasSuffix(host, ".") {
host = host[:len(host)-1]
}
p.bypassHosts = append(p.bypassHosts, host)
}
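A PerHost is typically built from a proxying dialer plus Direct for the exceptions, with the bypass list parsed from a no_proxy-style string; this is exactly the shape FromEnvironment in the next file constructs. A sketch with placeholder addresses:

package main

import (
	"log"

	"golang.org/x/net/proxy"
)

func main() {
	// Placeholder proxy address; authentication omitted.
	socks, err := proxy.SOCKS5("tcp", "127.0.0.1:1080", nil, proxy.Direct)
	if err != nil {
		log.Fatal(err)
	}
	// Everything goes through the SOCKS5 proxy except the listed hosts,
	// which are dialed directly.
	ph := proxy.NewPerHost(socks, proxy.Direct)
	ph.AddFromString("localhost,10.0.0.0/8,*.internal.example.com")
	conn, err := ph.Dial("tcp", "intranet.internal.example.com:443")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
}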

Godeps/_workspace/src/golang.org/x/net/proxy/proxy.go generated vendored Normal file

@@ -0,0 +1,94 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package proxy provides support for a variety of protocols to proxy network
// data.
package proxy
import (
"errors"
"net"
"net/url"
"os"
)
// A Dialer is a means to establish a connection.
type Dialer interface {
// Dial connects to the given address via the proxy.
Dial(network, addr string) (c net.Conn, err error)
}
// Auth contains authentication parameters that specific Dialers may require.
type Auth struct {
User, Password string
}
// FromEnvironment returns the dialer specified by the proxy-related
// variables in the environment.
func FromEnvironment() Dialer {
allProxy := os.Getenv("all_proxy")
if len(allProxy) == 0 {
return Direct
}
proxyURL, err := url.Parse(allProxy)
if err != nil {
return Direct
}
proxy, err := FromURL(proxyURL, Direct)
if err != nil {
return Direct
}
noProxy := os.Getenv("no_proxy")
if len(noProxy) == 0 {
return proxy
}
perHost := NewPerHost(proxy, Direct)
perHost.AddFromString(noProxy)
return perHost
}
// proxySchemes is a map from URL schemes to a function that creates a Dialer
// from a URL with such a scheme.
var proxySchemes map[string]func(*url.URL, Dialer) (Dialer, error)
// RegisterDialerType takes a URL scheme and a function to generate Dialers from
// a URL with that scheme and a forwarding Dialer. Registered schemes are used
// by FromURL.
func RegisterDialerType(scheme string, f func(*url.URL, Dialer) (Dialer, error)) {
if proxySchemes == nil {
proxySchemes = make(map[string]func(*url.URL, Dialer) (Dialer, error))
}
proxySchemes[scheme] = f
}
// FromURL returns a Dialer given a URL specification and an underlying
// Dialer for it to make network requests.
func FromURL(u *url.URL, forward Dialer) (Dialer, error) {
var auth *Auth
if u.User != nil {
auth = new(Auth)
auth.User = u.User.Username()
if p, ok := u.User.Password(); ok {
auth.Password = p
}
}
switch u.Scheme {
case "socks5":
return SOCKS5("tcp", u.Host, auth, forward)
}
// If the scheme doesn't match any of the built-in schemes, see if it
// was registered by another package.
if proxySchemes != nil {
if f, ok := proxySchemes[u.Scheme]; ok {
return f(u, forward)
}
}
return nil, errors.New("proxy: unknown scheme: " + u.Scheme)
}
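FromEnvironment ties the pieces together: all_proxy selects the dialer via FromURL, and no_proxy wraps it in a PerHost with Direct as the bypass. A caller that wants the environment-driven behaviour only needs something like this sketch (placeholder target):

package main

import (
	"log"

	"golang.org/x/net/proxy"
)

func main() {
	// Honors all_proxy (e.g. socks5://127.0.0.1:1080) and no_proxy;
	// falls back to proxy.Direct when neither is set or parseable.
	d := proxy.FromEnvironment()
	conn, err := d.Dial("tcp", "example.com:443")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
}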

Some files were not shown because too many files have changed in this diff.