mirror of https://github.com/syncthing/syncthing.git
synced 2026-01-05 04:19:10 -05:00

Compare commits

1 commit

| Author | SHA1 | Date |
|---|---|---|
|  | a6ed76e714 |  |
.gitattributes (vendored), 9 changed lines
@@ -1,9 +0,0 @@

# Text files use LF line endings in this repository
* text=auto

# Except the dependencies, which we leave alone
Godeps/** -text=auto

# Diffs on these files are meaningless
gui.files.go -diff
*.svg -diff
.gitignore (vendored), 13 changed lines
@@ -1,16 +1,7 @@

syncthing
!gui/syncthing
!Godeps/_workspace/src/github.com/syncthing
syncthing.exe
stcli
stcli.exe
*.tar.gz
*.zip
*.asc
.jshintrc
coverage.out
files/pidx
bin
perfstats*.csv
coverage.xml
syncthing.sig
RELEASE
deb
AUTHORS, 76 changed lines
@@ -1,76 +0,0 @@

# This is the official list of Syncthing authors for copyright purposes.

Aaron Bieber <qbit@deftly.net>
Adam Piggott <aD@simplypeachy.co.uk> <simplypeachy@users.noreply.github.com>
Alexander Graf <register-github@alex-graf.de>
Andrew Dunham <andrew@du.nham.ca>
Antony Male <antony.male@gmail.com>
Arthur Axel fREW Schmidt <frew@afoolishmanifesto.com> <frioux@gmail.com>
Audrius Butkevicius <audrius.butkevicius@gmail.com>
Bart De Vries <devriesb@gmail.com>
Ben Curthoys <ben@bencurthoys.com>
Ben Schulz <ueomkail@gmail.com> <uok@users.noreply.github.com>
Ben Sidhom <bsidhom@gmail.com>
Brandon Philips <brandon@ifup.org>
Brendan Long <self@brendanlong.com>
Brian R. Becker <brbecker@gmail.com>
Caleb Callaway <enlightened.despot@gmail.com>
Carsten Hagemann <moter8@gmail.com>
Cathryne Linenweaver <cathryne.linenweaver@gmail.com> <Cathryne@users.noreply.github.com>
Chris Howie <me@chrishowie.com>
Chris Joel <chris@scriptolo.gy>
Colin Kennedy <moshen.colin@gmail.com>
Daniel Bergmann <dan.arne.bergmann@gmail.com> <brgmnn@users.noreply.github.com>
Daniel Martí <mvdan@mvdan.cc>
Denis A. <denisva@gmail.com>
Dennis Wilson <dw@risu.io>
Dominik Heidler <dominik@heidler.eu>
Elias Jarlebring <jarlebring@gmail.com>
Emil Hessman <emil@hessman.se>
Erik Meitner <e.meitner@willystreet.coop>
Federico Castagnini <federico.castagnini@gmail.com>
Felix Ableitner <me@nutomic.com>
Felix Unterpaintner <bigbear2nd@gmail.com>
Francois-Xavier Gsell <fxgsell@gmail.com>
Frank Isemann <frank@isemann.name>
Gilli Sigurdsson <gilli@vx.is>
Jacek Szafarkiewicz <szafar@linux.pl>
Jakob Borg <jakob@nym.se>
Jake Peterson <jake@acogdev.com>
James Patterson <jamespatterson@operamail.com> <jpjp@users.noreply.github.com>
Jaroslav Malec <dzardacz@gmail.com>
Jens Diemer <github.com@jensdiemer.de> <git@jensdiemer.de>
Jochen Voss <voss@seehuhn.de>
Johan Vromans <jvromans@squirrel.nl>
Karol Różycki <rozycki.karol@gmail.com>
Ken'ichi Kamada <kamada@nanohz.org>
Lode Hoste <zillode@zillode.be>
Lord Landon Agahnim <lordlandon@gmail.com>
Marc Laporte <marc@marclaporte.com> <marc@laporte.name>
Marc Pujol <kilburn@la3.org>
Marcin Dziadus <dziadus.marcin@gmail.com>
Mateusz Naściszewski <matin1111@wp.pl>
Matt Burke <mburke@amplify.com> <burkemw3@gmail.com>
Michael Jephcote <rewt0r@gmx.com> <Rewt0r@users.noreply.github.com>
Michael Ploujnikov <ploujj@gmail.com>
Michael Tilli <pyfisch@gmail.com>
Nate Morrison <natemorrison@gmail.com>
Pascal Jungblut <github@pascalj.com> <mail@pascal-jungblut.com>
Peter Hoeg <peter@speartail.com>
Philippe Schommers <philippe@schommers.be>
Phill Luby <phill.luby@newredo.com>
Piotr Bejda <piotrb10@gmail.com>
Ryan Sullivan <kayoticsully@gmail.com>
Scott Klupfel <kluppy@going2blue.com>
Sergey Mishin <ralder@yandex.ru>
Stefan Tatschner <stefan@sevenbyte.org> <rumpelsepp@sevenbyte.org>
Stefan Kuntz <stefan.github@gmail.com> <Stefan.github@gmail.com>
Tim Abell <tim@timwise.co.uk>
Tobias Nygren <tnn@nygren.pp.se>
Tomas Cerveny <kozec@kozec.com>
Tully Robinson <tully@tojr.org>
Tyler Brazier <tyler@tylerbrazier.com>
Veeti Paananen <veeti.paananen@rojekti.fi>
Vil Brekin <vilbrekin@gmail.com>
Victor Buinsky <vix_booja@tut.by>
Yannic A. <eipiminusone+github@gmail.com> <eipiminus1@users.noreply.github.com>
CONDUCT.md, 92 changed lines
@@ -1,92 +0,0 @@

## Conduct

* We are committed to providing a friendly, safe and welcoming
  environment for all, regardless of gender, sexual orientation,
  disability, ethnicity, religion, or similar personal characteristic.

* On IRC, please avoid using overtly sexual nicknames or other nicknames
  that might detract from a friendly, safe and welcoming environment for
  all.

* Please be kind and courteous. There's no need to be mean or rude.

* Respect that people have differences of opinion and that every design
  or implementation choice carries a trade-off and numerous costs. There
  is seldom a right answer.

* Please keep unstructured critique to a minimum. If you have solid
  ideas you want to experiment with, make a fork and see how it works.

* We will exclude you from interaction if you insult, demean or harass
  anyone. That is not welcome behaviour. We interpret the term
  "harassment" as including the definition in the <a
  href="http://citizencodeofconduct.org/">Citizen Code of Conduct</a>;
  if you have any lack of clarity about what might be included in that
  concept, please read their definition. In particular, we don't
  tolerate behavior that excludes people in socially marginalized
  groups.

* Private harassment is also unacceptable. No matter who you are, if you
  feel you have been or are being harassed or made uncomfortable by a
  community member, please contact one of the channel ops or any of the
  Syncthing core team immediately. Whether you're a regular contributor
  or a newcomer, we care about making this community a safe place for
  you and we've got your back.

* Likewise any spamming, trolling, flaming, baiting or other
  attention-stealing behaviour is not welcome.

## Moderation

These are the policies for upholding our community's standards of
conduct in our communication channels, most notably in Syncthing-related
IRC channels and on the web forum.

1. Remarks that violate the Syncthing standards of conduct, including
   hateful, hurtful, oppressive, or exclusionary remarks, are not
   allowed. (Cursing is allowed, but never targeting another user, and
   never in a hateful manner.)

2. Remarks that moderators find inappropriate, whether listed in the
   code of conduct or not, are also not allowed.

3. Moderators will first respond to such remarks with a warning.

4. If the warning is unheeded, the user will be "kicked," i.e., kicked
   out of the communication channel to cool off.

5. If the user comes back and continues to make trouble, they will be
   banned, i.e., indefinitely excluded.

6. Moderators may choose at their discretion to un-ban the user if it
   was a first offense and they offer the offended party a genuine
   apology.

7. If a moderator bans someone and you think it was unjustified, please
   take it up with that moderator, or with a different moderator, **in
   private**. Complaints about bans in-channel are not allowed.

8. Moderators are held to a higher standard than other community
   members. If a moderator creates an inappropriate situation, they
   should expect less leeway than others.

In the Syncthing community we strive to go the extra step to look out
for each other. Don't just aim to be technically unimpeachable, try to
be your best self. In particular, avoid flirting with offensive or
sensitive issues, particularly if they're off-topic; this all too
often leads to unnecessary fights, hurt feelings, and damaged trust;
worse, it can drive people away from the community entirely.

And if someone takes issue with something you said or did, resist the
urge to be defensive. Just stop doing what it was they complained about
and apologize. Even if you feel you were misinterpreted or unfairly
accused, chances are good there was something you could've communicated
better — remember that it's your responsibility to make your fellow
community members comfortable. Everyone wants to get along and we are
all here first and foremost because we want to talk about cool
technology. You will find that people will be eager to assume good
intent and forgive as long as you earn their trust.

*Adapted from the [Rust Code of Conduct](https://github.com/rust-lang/rust/wiki/Note-development-policy#conduct)*

*Adapted from the [Node.js Policy on Trolling](http://blog.izs.me/post/30036893703/policy-on-trolling)*
@@ -1,52 +1,54 @@

## Reporting Bugs

Please file bugs in the [Github Issue
Tracker](https://github.com/syncthing/syncthing/issues). Include at
least the following:

- What happened

- What did you expect to happen instead of what *did* happen, if it's
  not crazy obvious

- What operating system, operating system version and version of
  Syncthing you are running

- The same for other connected devices, where relevant

- Screenshot if the issue concerns something visible in the GUI

- Console log entries, where possible and relevant

If you're not sure whether something is relevant, erring on the side of
too much information will never get you yelled at. :)

## Contributing Translations

All translations are done via
[Transifex](https://www.transifex.com/projects/p/syncthing/). If you
wish to contribute to a translation, just head over there and sign up.
Before every release, the language resources are updated from the
latest info on Transifex.

## Contributing Code

Every contribution is welcome. If you want to contribute but are unsure
where to start, any open issues are fair game! See the [Contribution
Guidelines](http://docs.syncthing.net/dev/contributing.html) for the full
story on committing code.

## Contributing Documentation

Updates to the [documentation site](http://docs.syncthing.net/) can be
made as pull requests on the [documentation
repository](https://github.com/syncthing/docs).
Please do contribute! If you want to contribute but are unsure where to
start, the [Contributions Needed
page](https://github.com/calmh/syncthing/wiki/Contributions-Needed)
lists areas in need of attention.

## Licensing

All contributions are made under the same MPLv2 license as the rest of
the project, except documentation, user interface text and translation
strings which are licensed under the Creative Commons Attribution 4.0
International License. You retain the copyright to code you have
written.
All contributions are made under the same MIT License as the rest of the
project, except documentation which is licensed under the Creative
Commons Attribution 4.0 International License. You retain the copyright
to code you have written.

When accepting your first contribution, the maintainer of the project
will ensure that you are added to the CONTRIBUTORS file.

## Building

[See the wiki](https://github.com/calmh/syncthing/wiki/Building)

## Branches

- `master` is the main branch containing good code that will end up in
  the next release. You should base your work on it. It won't ever be
  rebased or force-pushed to.

- `vx.y` branches exist to make patch releases on otherwise obsolete
  minor releases. Should only contain fixes cherry picked from master.
  Don't base any work on them.

- Other branches are probably topic branches and may be subject to
  rebasing. Don't base any work on them unless you specifically know
  otherwise.

## Tags

All releases are tagged semver style as `vx.y.z`. Release tags are
signed by GPG key BCE524C7.

## Tests

Yes please!

## Style

`go fmt`

## Documentation

[Hack it here](https://github.com/calmh/syncthing/wiki)

## License

MIT
CONTRIBUTORS (Normal file), 4 changed lines
@@ -0,0 +1,4 @@

Aaron Bieber <qbit@deftly.net>
Brandon Philips <brandon@ifup.org>
James Patterson <jamespatterson@operamail.com>
Philippe Schommers <philippe@schommers.be>
Godeps/Godeps.json (generated), 88 changed lines
@@ -1,90 +1,36 @@

{
  "ImportPath": "github.com/syncthing/syncthing",
  "GoVersion": "go1.5.1",
  "ImportPath": "github.com/calmh/syncthing",
  "GoVersion": "go1.2.1",
  "Packages": [
    "./cmd/..."
    "./cmd/syncthing"
  ],
  "Deps": [
    {
      "ImportPath": "github.com/bkaradzic/go-lz4",
      "Rev": "74ddf82598bc4745b965729e9c6a463bedd33049"
      "ImportPath": "code.google.com/p/go.text/transform",
      "Comment": "null-81",
      "Rev": "9cbe983aed9b0dfc73954433fead5e00866342ac"
    },
    {
      "ImportPath": "github.com/calmh/du",
      "Rev": "3c0690cca16228b97741327b1b6781397afbdb24"
      "ImportPath": "code.google.com/p/go.text/unicode/norm",
      "Comment": "null-81",
      "Rev": "9cbe983aed9b0dfc73954433fead5e00866342ac"
    },
    {
      "ImportPath": "github.com/calmh/luhn",
      "Rev": "0c8388ff95fa92d4094011e5a04fc99dea3d1632"
      "ImportPath": "github.com/calmh/ini",
      "Rev": "386c4240a9684d91d9ec4d93651909b49c7269e1"
    },
    {
      "ImportPath": "github.com/calmh/xdr",
      "Rev": "9eb3e1a622d9364deb39c831f7e5f164393d7e37"
      "ImportPath": "github.com/codegangsta/inject",
      "Rev": "9aea7a2fa5b79ef7fc00f63a575e72df33b4e886"
    },
    {
      "ImportPath": "github.com/golang/snappy",
      "Rev": "723cc1e459b8eea2dea4583200fd60757d40097a"
      "ImportPath": "github.com/codegangsta/martini",
      "Comment": "v0.1-142-g8659df7",
      "Rev": "8659df7a51aebe6c6120268cd5a8b4c34fa8441a"
    },
    {
      "ImportPath": "github.com/juju/ratelimit",
      "Rev": "772f5c38e468398c4511514f4f6aa9a4185bc0a0"
    },
    {
      "ImportPath": "github.com/kardianos/osext",
      "Rev": "345163ffe35aa66560a4cd7dddf00f3ae21c9fda"
    },
    {
      "ImportPath": "github.com/rcrowley/go-metrics",
      "Rev": "1ce93efbc8f9c568886b2ef85ce305b2217b3de3"
    },
    {
      "ImportPath": "github.com/syndtr/goleveldb/leveldb",
      "Rev": "1a9d62f03ea92815b46fcaab357cfd4df264b1a0"
    },
    {
      "ImportPath": "github.com/thejerf/suture",
      "Comment": "v1.0.1",
      "Rev": "99c1f2d613756768fc4299acd9dc621e11ed3fd7"
    },
    {
      "ImportPath": "github.com/vitrun/qart/coding",
      "Rev": "ccb109cf25f0cd24474da73b9fee4e7a3e8a8ce0"
    },
    {
      "ImportPath": "github.com/vitrun/qart/gf256",
      "Rev": "ccb109cf25f0cd24474da73b9fee4e7a3e8a8ce0"
    },
    {
      "ImportPath": "github.com/vitrun/qart/qr",
      "Rev": "ccb109cf25f0cd24474da73b9fee4e7a3e8a8ce0"
    },
    {
      "ImportPath": "golang.org/x/crypto/bcrypt",
      "Rev": "575fdbe86e5dd89229707ebec0575ce7d088a4a6"
    },
    {
      "ImportPath": "golang.org/x/crypto/blowfish",
      "Rev": "575fdbe86e5dd89229707ebec0575ce7d088a4a6"
    },
    {
      "ImportPath": "golang.org/x/net/internal/iana",
      "Rev": "042ba42fa6633b34205efc66ba5719cd3afd8d38"
    },
    {
      "ImportPath": "golang.org/x/net/ipv6",
      "Rev": "042ba42fa6633b34205efc66ba5719cd3afd8d38"
    },
    {
      "ImportPath": "golang.org/x/net/proxy",
      "Rev": "042ba42fa6633b34205efc66ba5719cd3afd8d38"
    },
    {
      "ImportPath": "golang.org/x/text/transform",
      "Rev": "5eb8d4684c4796dd36c74f6452f2c0fa6c79597e"
    },
    {
      "ImportPath": "golang.org/x/text/unicode/norm",
      "Rev": "5eb8d4684c4796dd36c74f6452f2c0fa6c79597e"
      "Rev": "cbaa435c80a9716e086f25d409344b26c4039358"
    }
  ]
}
Godeps/_workspace/src/code.google.com/p/go.text/transform/examples_test.go (generated, vendored, Normal file), 37 changed lines
@@ -0,0 +1,37 @@

// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package transform_test

import (
	"fmt"
	"unicode"

	"code.google.com/p/go.text/transform"
	"code.google.com/p/go.text/unicode/norm"
)

func ExampleRemoveFunc() {
	input := []byte(`tschüß; до свидания`)

	b := make([]byte, len(input))

	t := transform.RemoveFunc(unicode.IsSpace)
	n, _, _ := t.Transform(b, input, true)
	fmt.Println(string(b[:n]))

	t = transform.RemoveFunc(func(r rune) bool {
		return !unicode.Is(unicode.Latin, r)
	})
	n, _, _ = t.Transform(b, input, true)
	fmt.Println(string(b[:n]))

	n, _, _ = t.Transform(b, norm.NFD.Bytes(input), true)
	fmt.Println(string(b[:n]))

	// Output:
	// tschüß;досвидания
	// tschüß
	// tschuß
}
@@ -9,7 +9,6 @@
|
||||
package transform
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"errors"
|
||||
"io"
|
||||
"unicode/utf8"
|
||||
@@ -55,17 +54,10 @@ type Transformer interface {
|
||||
// either error may be returned. Other than the error conditions listed
|
||||
// here, implementations are free to report other errors that arise.
|
||||
Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error)
|
||||
|
||||
// Reset resets the state and allows a Transformer to be reused.
|
||||
Reset()
|
||||
}
|
||||
|
||||
// NopResetter can be embedded by implementations of Transformer to add a nop
|
||||
// Reset method.
|
||||
type NopResetter struct{}
|
||||
|
||||
// Reset implements the Reset method of the Transformer interface.
|
||||
func (NopResetter) Reset() {}
|
||||
// TODO: Do we require that a Transformer be reusable if it returns a nil error
|
||||
// or do we always require a reset after use? Is Reset mandatory or optional?
|
||||
|
||||
// Reader wraps another io.Reader by transforming the bytes read.
|
||||
type Reader struct {
|
||||
@@ -91,9 +83,8 @@ type Reader struct {
|
||||
const defaultBufSize = 4096
|
||||
|
||||
// NewReader returns a new Reader that wraps r by transforming the bytes read
|
||||
// via t. It calls Reset on t.
|
||||
// via t.
|
||||
func NewReader(r io.Reader, t Transformer) *Reader {
|
||||
t.Reset()
|
||||
return &Reader{
|
||||
r: r,
|
||||
t: t,
|
||||
@@ -136,7 +127,7 @@ func (r *Reader) Read(p []byte) (int, error) {
|
||||
// cannot read more bytes into src.
|
||||
r.transformComplete = r.err != nil
|
||||
continue
|
||||
case err == ErrShortDst && (r.dst1 != 0 || n != 0):
|
||||
case err == ErrShortDst && r.dst1 != 0:
|
||||
// Make room in dst by copying out, and try again.
|
||||
continue
|
||||
case err == ErrShortSrc && r.src1-r.src0 != len(r.src) && r.err == nil:
|
||||
@@ -178,9 +169,8 @@ type Writer struct {
|
||||
}
|
||||
|
||||
// NewWriter returns a new Writer that wraps w by transforming the bytes written
|
||||
// via t. It calls Reset on t.
|
||||
// via t.
|
||||
func NewWriter(w io.Writer, t Transformer) *Writer {
|
||||
t.Reset()
|
||||
return &Writer{
|
||||
w: w,
|
||||
t: t,
|
||||
@@ -220,7 +210,7 @@ func (w *Writer) Write(data []byte) (n int, err error) {
|
||||
n += nSrc
|
||||
}
|
||||
switch {
|
||||
case err == ErrShortDst && (nDst > 0 || nSrc > 0):
|
||||
case err == ErrShortDst && nDst > 0:
|
||||
case err == ErrShortSrc && len(src) < len(w.src):
|
||||
m := copy(w.src, src)
|
||||
// If w.n > 0, bytes from data were already copied to w.src and n
|
||||
@@ -256,7 +246,7 @@ func (w *Writer) Close() error {
|
||||
return nil
|
||||
}
|
||||
|
||||
type nop struct{ NopResetter }
|
||||
type nop struct{}
|
||||
|
||||
func (nop) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
|
||||
n := copy(dst, src)
|
||||
@@ -266,7 +256,7 @@ func (nop) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
|
||||
return n, n, err
|
||||
}
|
||||
|
||||
type discard struct{ NopResetter }
|
||||
type discard struct{}
|
||||
|
||||
func (discard) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
|
||||
return 0, len(src), nil
|
||||
@@ -336,16 +326,6 @@ func Chain(t ...Transformer) Transformer {
|
||||
return c
|
||||
}
|
||||
|
||||
// Reset resets the state of Chain. It calls Reset on all the Transformers.
|
||||
func (c *chain) Reset() {
|
||||
for i, l := range c.link {
|
||||
if l.t != nil {
|
||||
l.t.Reset()
|
||||
}
|
||||
c.link[i].p, c.link[i].n = 0, 0
|
||||
}
|
||||
}
|
||||
|
||||
// Transform applies the transformers of c in sequence.
|
||||
func (c *chain) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
|
||||
// Set up src and dst in the chain.
|
||||
@@ -444,8 +424,6 @@ func RemoveFunc(f func(r rune) bool) Transformer {
|
||||
|
||||
type removeF func(r rune) bool
|
||||
|
||||
func (removeF) Reset() {}
|
||||
|
||||
// Transform implements the Transformer interface.
|
||||
func (t removeF) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
|
||||
for r, sz := rune(0), 0; len(src) > 0; src = src[sz:] {
|
||||
@@ -457,7 +435,7 @@ func (t removeF) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err err
|
||||
|
||||
if sz == 1 {
|
||||
// Invalid rune.
|
||||
if !atEOF && !utf8.FullRune(src) {
|
||||
if !atEOF && !utf8.FullRune(src[nSrc:]) {
|
||||
err = ErrShortSrc
|
||||
break
|
||||
}
|
||||
@@ -489,128 +467,30 @@ func (t removeF) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err err
|
||||
return
|
||||
}
|
||||
|
||||
// grow returns a new []byte that is longer than b, and copies the first n bytes
|
||||
// of b to the start of the new slice.
|
||||
func grow(b []byte, n int) []byte {
|
||||
m := len(b)
|
||||
if m <= 256 {
|
||||
m *= 2
|
||||
} else {
|
||||
m += m >> 1
|
||||
}
|
||||
buf := make([]byte, m)
|
||||
copy(buf, b[:n])
|
||||
return buf
|
||||
}
|
||||
|
||||
const initialBufSize = 128
|
||||
|
||||
// String returns a string with the result of converting s[:n] using t, where
|
||||
// n <= len(s). If err == nil, n will be len(s). It calls Reset on t.
|
||||
func String(t Transformer, s string) (result string, n int, err error) {
|
||||
if s == "" {
|
||||
return "", 0, nil
|
||||
}
|
||||
|
||||
t.Reset()
|
||||
|
||||
// Allocate only once. Note that both dst and src escape when passed to
|
||||
// Transform.
|
||||
buf := [2 * initialBufSize]byte{}
|
||||
dst := buf[:initialBufSize:initialBufSize]
|
||||
src := buf[initialBufSize : 2*initialBufSize]
|
||||
|
||||
// Avoid allocation if the transformed string is identical to the original.
|
||||
// After this loop, pDst will point to the furthest point in s for which it
|
||||
// could be detected that t gives equal results, src[:nSrc] will
|
||||
// indicated the last processed chunk of s for which the output is not equal
|
||||
// and dst[:nDst] will be the transform of this chunk.
|
||||
var nDst, nSrc int
|
||||
pDst := 0 // Used as index in both src and dst in this loop.
|
||||
// Bytes returns a new byte slice with the result of converting b using t.
|
||||
// If any unrecoverable error occurs it returns nil.
|
||||
func Bytes(t Transformer, b []byte) []byte {
|
||||
out := make([]byte, len(b))
|
||||
n := 0
|
||||
for {
|
||||
n := copy(src, s[pDst:])
|
||||
nDst, nSrc, err = t.Transform(dst, src[:n], pDst+n == len(s))
|
||||
|
||||
// Note 1: we will not enter the loop with pDst == len(s) and we will
|
||||
// not end the loop with it either. So if nSrc is 0, this means there is
|
||||
// some kind of error from which we cannot recover given the current
|
||||
// buffer sizes. We will give up in this case.
|
||||
// Note 2: it is not entirely correct to simply do a bytes.Equal as
|
||||
// a Transformer may buffer internally. It will work in most cases,
|
||||
// though, and no harm is done if it doesn't work.
|
||||
// TODO: let transformers implement an optional Spanner interface, akin
|
||||
// to norm's QuickSpan. This would even allow us to avoid any allocation.
|
||||
if nSrc == 0 || !bytes.Equal(dst[:nDst], src[:nSrc]) {
|
||||
break
|
||||
nDst, nSrc, err := t.Transform(out[n:], b, true)
|
||||
n += nDst
|
||||
if err == nil {
|
||||
return out[:n]
|
||||
} else if err != ErrShortDst {
|
||||
return nil
|
||||
}
|
||||
b = b[nSrc:]
|
||||
|
||||
if pDst += nDst; pDst == len(s) {
|
||||
return s, pDst, nil
|
||||
}
|
||||
}
|
||||
|
||||
// Move the bytes seen so far to dst.
|
||||
pSrc := pDst + nSrc
|
||||
if pDst+nDst <= initialBufSize {
|
||||
copy(dst[pDst:], dst[:nDst])
|
||||
} else {
|
||||
b := make([]byte, len(s)+nDst-nSrc)
|
||||
copy(b[pDst:], dst[:nDst])
|
||||
dst = b
|
||||
}
|
||||
copy(dst, s[:pDst])
|
||||
pDst += nDst
|
||||
|
||||
if err != nil && err != ErrShortDst && err != ErrShortSrc {
|
||||
return string(dst[:pDst]), pSrc, err
|
||||
}
|
||||
|
||||
// Complete the string with the remainder.
|
||||
for {
|
||||
n := copy(src, s[pSrc:])
|
||||
nDst, nSrc, err = t.Transform(dst[pDst:], src[:n], pSrc+n == len(s))
|
||||
pDst += nDst
|
||||
pSrc += nSrc
|
||||
|
||||
switch err {
|
||||
case nil:
|
||||
if pSrc == len(s) {
|
||||
return string(dst[:pDst]), pSrc, nil
|
||||
}
|
||||
case ErrShortDst:
|
||||
// Do not grow as long as we can make progress. This may avoid
|
||||
// excessive allocations.
|
||||
if nDst == 0 {
|
||||
dst = grow(dst, pDst)
|
||||
}
|
||||
case ErrShortSrc:
|
||||
if nSrc == 0 {
|
||||
src = grow(src, 0)
|
||||
}
|
||||
default:
|
||||
return string(dst[:pDst]), pSrc, err
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Bytes returns a new byte slice with the result of converting b[:n] using t,
|
||||
// where n <= len(b). If err == nil, n will be len(b). It calls Reset on t.
|
||||
func Bytes(t Transformer, b []byte) (result []byte, n int, err error) {
|
||||
t.Reset()
|
||||
dst := make([]byte, len(b))
|
||||
pDst, pSrc := 0, 0
|
||||
for {
|
||||
nDst, nSrc, err := t.Transform(dst[pDst:], b[pSrc:], true)
|
||||
pDst += nDst
|
||||
pSrc += nSrc
|
||||
if err != ErrShortDst {
|
||||
return dst[:pDst], pSrc, err
|
||||
}
|
||||
|
||||
// Grow the destination buffer, but do not grow as long as we can make
|
||||
// progress. This may avoid excessive allocations.
|
||||
if nDst == 0 {
|
||||
dst = grow(dst, pDst)
|
||||
// Grow the destination buffer.
|
||||
sz := len(out)
|
||||
if sz <= 256 {
|
||||
sz *= 2
|
||||
} else {
|
||||
sz += sz >> 1
|
||||
}
|
||||
out2 := make([]byte, sz)
|
||||
copy(out2, out[:n])
|
||||
out = out2
|
||||
}
|
||||
}
|
||||
901
Godeps/_workspace/src/code.google.com/p/go.text/transform/transform_test.go
generated
vendored
Normal file
901
Godeps/_workspace/src/code.google.com/p/go.text/transform/transform_test.go
generated
vendored
Normal file
@@ -0,0 +1,901 @@
|
||||
// Copyright 2013 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package transform
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"strconv"
|
||||
"strings"
|
||||
"testing"
|
||||
"unicode/utf8"
|
||||
)
|
||||
|
||||
type lowerCaseASCII struct{}
|
||||
|
||||
func (lowerCaseASCII) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
|
||||
n := len(src)
|
||||
if n > len(dst) {
|
||||
n, err = len(dst), ErrShortDst
|
||||
}
|
||||
for i, c := range src[:n] {
|
||||
if 'A' <= c && c <= 'Z' {
|
||||
c += 'a' - 'A'
|
||||
}
|
||||
dst[i] = c
|
||||
}
|
||||
return n, n, err
|
||||
}
|
||||
|
||||
var errYouMentionedX = errors.New("you mentioned X")
|
||||
|
||||
type dontMentionX struct{}
|
||||
|
||||
func (dontMentionX) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
|
||||
n := len(src)
|
||||
if n > len(dst) {
|
||||
n, err = len(dst), ErrShortDst
|
||||
}
|
||||
for i, c := range src[:n] {
|
||||
if c == 'X' {
|
||||
return i, i, errYouMentionedX
|
||||
}
|
||||
dst[i] = c
|
||||
}
|
||||
return n, n, err
|
||||
}
|
||||
|
||||
// doublerAtEOF is a strange Transformer that transforms "this" to "tthhiiss",
|
||||
// but only if atEOF is true.
|
||||
type doublerAtEOF struct{}
|
||||
|
||||
func (doublerAtEOF) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
|
||||
if !atEOF {
|
||||
return 0, 0, ErrShortSrc
|
||||
}
|
||||
for i, c := range src {
|
||||
if 2*i+2 >= len(dst) {
|
||||
return 2 * i, i, ErrShortDst
|
||||
}
|
||||
dst[2*i+0] = c
|
||||
dst[2*i+1] = c
|
||||
}
|
||||
return 2 * len(src), len(src), nil
|
||||
}
|
||||
|
||||
// rleDecode and rleEncode implement a toy run-length encoding: "aabbbbbbbbbb"
|
||||
// is encoded as "2a10b". The decoding is assumed to not contain any numbers.
|
||||
|
||||
type rleDecode struct{}
|
||||
|
||||
func (rleDecode) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
|
||||
loop:
|
||||
for len(src) > 0 {
|
||||
n := 0
|
||||
for i, c := range src {
|
||||
if '0' <= c && c <= '9' {
|
||||
n = 10*n + int(c-'0')
|
||||
continue
|
||||
}
|
||||
if i == 0 {
|
||||
return nDst, nSrc, errors.New("rleDecode: bad input")
|
||||
}
|
||||
if n > len(dst) {
|
||||
return nDst, nSrc, ErrShortDst
|
||||
}
|
||||
for j := 0; j < n; j++ {
|
||||
dst[j] = c
|
||||
}
|
||||
dst, src = dst[n:], src[i+1:]
|
||||
nDst, nSrc = nDst+n, nSrc+i+1
|
||||
continue loop
|
||||
}
|
||||
if atEOF {
|
||||
return nDst, nSrc, errors.New("rleDecode: bad input")
|
||||
}
|
||||
return nDst, nSrc, ErrShortSrc
|
||||
}
|
||||
return nDst, nSrc, nil
|
||||
}
|
||||
|
||||
type rleEncode struct {
|
||||
// allowStutter means that "xxxxxxxx" can be encoded as "5x3x"
|
||||
// instead of always as "8x".
|
||||
allowStutter bool
|
||||
}
|
||||
|
||||
func (e rleEncode) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
|
||||
for len(src) > 0 {
|
||||
n, c0 := len(src), src[0]
|
||||
for i, c := range src[1:] {
|
||||
if c != c0 {
|
||||
n = i + 1
|
||||
break
|
||||
}
|
||||
}
|
||||
if n == len(src) && !atEOF && !e.allowStutter {
|
||||
return nDst, nSrc, ErrShortSrc
|
||||
}
|
||||
s := strconv.Itoa(n)
|
||||
if len(s) >= len(dst) {
|
||||
return nDst, nSrc, ErrShortDst
|
||||
}
|
||||
copy(dst, s)
|
||||
dst[len(s)] = c0
|
||||
dst, src = dst[len(s)+1:], src[n:]
|
||||
nDst, nSrc = nDst+len(s)+1, nSrc+n
|
||||
}
|
||||
return nDst, nSrc, nil
|
||||
}
|
||||
|
||||
type testCase struct {
|
||||
desc string
|
||||
t Transformer
|
||||
src string
|
||||
dstSize int
|
||||
srcSize int
|
||||
ioSize int
|
||||
wantStr string
|
||||
wantErr error
|
||||
wantIter int // number of iterations taken; 0 means we don't care.
|
||||
}
|
||||
|
||||
func (t testCase) String() string {
|
||||
return tstr(t.t) + "; " + t.desc
|
||||
}
|
||||
|
||||
func tstr(t Transformer) string {
|
||||
if stringer, ok := t.(fmt.Stringer); ok {
|
||||
return stringer.String()
|
||||
}
|
||||
s := fmt.Sprintf("%T", t)
|
||||
return s[1+strings.Index(s, "."):]
|
||||
}
|
||||
|
||||
func (c chain) String() string {
|
||||
buf := &bytes.Buffer{}
|
||||
buf.WriteString("Chain(")
|
||||
for i, l := range c.link[:len(c.link)-1] {
|
||||
if i != 0 {
|
||||
fmt.Fprint(buf, ", ")
|
||||
}
|
||||
buf.WriteString(tstr(l.t))
|
||||
}
|
||||
buf.WriteString(")")
|
||||
return buf.String()
|
||||
}
|
||||
|
||||
var testCases = []testCase{
|
||||
{
|
||||
desc: "basic",
|
||||
t: lowerCaseASCII{},
|
||||
src: "Hello WORLD.",
|
||||
dstSize: 100,
|
||||
srcSize: 100,
|
||||
wantStr: "hello world.",
|
||||
},
|
||||
|
||||
{
|
||||
desc: "small dst",
|
||||
t: lowerCaseASCII{},
|
||||
src: "Hello WORLD.",
|
||||
dstSize: 3,
|
||||
srcSize: 100,
|
||||
wantStr: "hello world.",
|
||||
},
|
||||
|
||||
{
|
||||
desc: "small src",
|
||||
t: lowerCaseASCII{},
|
||||
src: "Hello WORLD.",
|
||||
dstSize: 100,
|
||||
srcSize: 4,
|
||||
wantStr: "hello world.",
|
||||
},
|
||||
|
||||
{
|
||||
desc: "small buffers",
|
||||
t: lowerCaseASCII{},
|
||||
src: "Hello WORLD.",
|
||||
dstSize: 3,
|
||||
srcSize: 4,
|
||||
wantStr: "hello world.",
|
||||
},
|
||||
|
||||
{
|
||||
desc: "very small buffers",
|
||||
t: lowerCaseASCII{},
|
||||
src: "Hello WORLD.",
|
||||
dstSize: 1,
|
||||
srcSize: 1,
|
||||
wantStr: "hello world.",
|
||||
},
|
||||
|
||||
{
|
||||
desc: "basic",
|
||||
t: dontMentionX{},
|
||||
src: "The First Rule of Transform Club: don't mention Mister X, ever.",
|
||||
dstSize: 100,
|
||||
srcSize: 100,
|
||||
wantStr: "The First Rule of Transform Club: don't mention Mister ",
|
||||
wantErr: errYouMentionedX,
|
||||
},
|
||||
|
||||
{
|
||||
desc: "small buffers",
|
||||
t: dontMentionX{},
|
||||
src: "The First Rule of Transform Club: don't mention Mister X, ever.",
|
||||
dstSize: 10,
|
||||
srcSize: 10,
|
||||
wantStr: "The First Rule of Transform Club: don't mention Mister ",
|
||||
wantErr: errYouMentionedX,
|
||||
},
|
||||
|
||||
{
|
||||
desc: "very small buffers",
|
||||
t: dontMentionX{},
|
||||
src: "The First Rule of Transform Club: don't mention Mister X, ever.",
|
||||
dstSize: 1,
|
||||
srcSize: 1,
|
||||
wantStr: "The First Rule of Transform Club: don't mention Mister ",
|
||||
wantErr: errYouMentionedX,
|
||||
},
|
||||
|
||||
{
|
||||
desc: "only transform at EOF",
|
||||
t: doublerAtEOF{},
|
||||
src: "this",
|
||||
dstSize: 100,
|
||||
srcSize: 100,
|
||||
wantStr: "tthhiiss",
|
||||
},
|
||||
|
||||
{
|
||||
desc: "basic",
|
||||
t: rleDecode{},
|
||||
src: "1a2b3c10d11e0f1g",
|
||||
dstSize: 100,
|
||||
srcSize: 100,
|
||||
wantStr: "abbcccddddddddddeeeeeeeeeeeg",
|
||||
},
|
||||
|
||||
{
|
||||
desc: "long",
|
||||
t: rleDecode{},
|
||||
src: "12a23b34c45d56e99z",
|
||||
dstSize: 100,
|
||||
srcSize: 100,
|
||||
wantStr: strings.Repeat("a", 12) +
|
||||
strings.Repeat("b", 23) +
|
||||
strings.Repeat("c", 34) +
|
||||
strings.Repeat("d", 45) +
|
||||
strings.Repeat("e", 56) +
|
||||
strings.Repeat("z", 99),
|
||||
},
|
||||
|
||||
{
|
||||
desc: "tight buffers",
|
||||
t: rleDecode{},
|
||||
src: "1a2b3c10d11e0f1g",
|
||||
dstSize: 11,
|
||||
srcSize: 3,
|
||||
wantStr: "abbcccddddddddddeeeeeeeeeeeg",
|
||||
},
|
||||
|
||||
{
|
||||
desc: "short dst",
|
||||
t: rleDecode{},
|
||||
src: "1a2b3c10d11e0f1g",
|
||||
dstSize: 10,
|
||||
srcSize: 3,
|
||||
wantStr: "abbcccdddddddddd",
|
||||
wantErr: ErrShortDst,
|
||||
},
|
||||
|
||||
{
|
||||
desc: "short src",
|
||||
t: rleDecode{},
|
||||
src: "1a2b3c10d11e0f1g",
|
||||
dstSize: 11,
|
||||
srcSize: 2,
|
||||
ioSize: 2,
|
||||
wantStr: "abbccc",
|
||||
wantErr: ErrShortSrc,
|
||||
},
|
||||
|
||||
{
|
||||
desc: "basic",
|
||||
t: rleEncode{},
|
||||
src: "abbcccddddddddddeeeeeeeeeeeg",
|
||||
dstSize: 100,
|
||||
srcSize: 100,
|
||||
wantStr: "1a2b3c10d11e1g",
|
||||
},
|
||||
|
||||
{
|
||||
desc: "long",
|
||||
t: rleEncode{},
|
||||
src: strings.Repeat("a", 12) +
|
||||
strings.Repeat("b", 23) +
|
||||
strings.Repeat("c", 34) +
|
||||
strings.Repeat("d", 45) +
|
||||
strings.Repeat("e", 56) +
|
||||
strings.Repeat("z", 99),
|
||||
dstSize: 100,
|
||||
srcSize: 100,
|
||||
wantStr: "12a23b34c45d56e99z",
|
||||
},
|
||||
|
||||
{
|
||||
desc: "tight buffers",
|
||||
t: rleEncode{},
|
||||
src: "abbcccddddddddddeeeeeeeeeeeg",
|
||||
dstSize: 3,
|
||||
srcSize: 12,
|
||||
wantStr: "1a2b3c10d11e1g",
|
||||
},
|
||||
|
||||
{
|
||||
desc: "short dst",
|
||||
t: rleEncode{},
|
||||
src: "abbcccddddddddddeeeeeeeeeeeg",
|
||||
dstSize: 2,
|
||||
srcSize: 12,
|
||||
wantStr: "1a2b3c",
|
||||
wantErr: ErrShortDst,
|
||||
},
|
||||
|
||||
{
|
||||
desc: "short src",
|
||||
t: rleEncode{},
|
||||
src: "abbcccddddddddddeeeeeeeeeeeg",
|
||||
dstSize: 3,
|
||||
srcSize: 11,
|
||||
ioSize: 11,
|
||||
wantStr: "1a2b3c10d",
|
||||
wantErr: ErrShortSrc,
|
||||
},
|
||||
|
||||
{
|
||||
desc: "allowStutter = false",
|
||||
t: rleEncode{allowStutter: false},
|
||||
src: "aaaabbbbbbbbccccddddd",
|
||||
dstSize: 10,
|
||||
srcSize: 10,
|
||||
wantStr: "4a8b4c5d",
|
||||
},
|
||||
|
||||
{
|
||||
desc: "allowStutter = true",
|
||||
t: rleEncode{allowStutter: true},
|
||||
src: "aaaabbbbbbbbccccddddd",
|
||||
dstSize: 10,
|
||||
srcSize: 10,
|
||||
ioSize: 10,
|
||||
wantStr: "4a6b2b4c4d1d",
|
||||
},
|
||||
}
|
||||
|
||||
func TestReader(t *testing.T) {
|
||||
for _, tc := range testCases {
|
||||
reset(tc.t)
|
||||
r := NewReader(strings.NewReader(tc.src), tc.t)
|
||||
// Differently sized dst and src buffers are not part of the
|
||||
// exported API. We override them manually.
|
||||
r.dst = make([]byte, tc.dstSize)
|
||||
r.src = make([]byte, tc.srcSize)
|
||||
got, err := ioutil.ReadAll(r)
|
||||
str := string(got)
|
||||
if str != tc.wantStr || err != tc.wantErr {
|
||||
t.Errorf("%s:\ngot %q, %v\nwant %q, %v", tc, str, err, tc.wantStr, tc.wantErr)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func reset(t Transformer) {
|
||||
var dst [128]byte
|
||||
for err := ErrShortDst; err != nil; {
|
||||
_, _, err = t.Transform(dst[:], nil, true)
|
||||
}
|
||||
}
|
||||
|
||||
func TestWriter(t *testing.T) {
|
||||
tests := append(testCases, chainTests()...)
|
||||
for _, tc := range tests {
|
||||
sizes := []int{1, 2, 3, 4, 5, 10, 100, 1000}
|
||||
if tc.ioSize > 0 {
|
||||
sizes = []int{tc.ioSize}
|
||||
}
|
||||
for _, sz := range sizes {
|
||||
bb := &bytes.Buffer{}
|
||||
reset(tc.t)
|
||||
w := NewWriter(bb, tc.t)
|
||||
// Differently sized dst and src buffers are not part of the
|
||||
// exported API. We override them manually.
|
||||
w.dst = make([]byte, tc.dstSize)
|
||||
w.src = make([]byte, tc.srcSize)
|
||||
src := make([]byte, sz)
|
||||
var err error
|
||||
for b := tc.src; len(b) > 0 && err == nil; {
|
||||
n := copy(src, b)
|
||||
b = b[n:]
|
||||
m := 0
|
||||
m, err = w.Write(src[:n])
|
||||
if m != n && err == nil {
|
||||
t.Errorf("%s:%d: did not consume all bytes %d < %d", tc, sz, m, n)
|
||||
}
|
||||
}
|
||||
if err == nil {
|
||||
err = w.Close()
|
||||
}
|
||||
str := bb.String()
|
||||
if str != tc.wantStr || err != tc.wantErr {
|
||||
t.Errorf("%s:%d:\ngot %q, %v\nwant %q, %v", tc, sz, str, err, tc.wantStr, tc.wantErr)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestNop(t *testing.T) {
|
||||
testCases := []struct {
|
||||
str string
|
||||
dstSize int
|
||||
err error
|
||||
}{
|
||||
{"", 0, nil},
|
||||
{"", 10, nil},
|
||||
{"a", 0, ErrShortDst},
|
||||
{"a", 1, nil},
|
||||
{"a", 10, nil},
|
||||
}
|
||||
for i, tc := range testCases {
|
||||
dst := make([]byte, tc.dstSize)
|
||||
nDst, nSrc, err := Nop.Transform(dst, []byte(tc.str), true)
|
||||
want := tc.str
|
||||
if tc.dstSize < len(want) {
|
||||
want = want[:tc.dstSize]
|
||||
}
|
||||
if got := string(dst[:nDst]); got != want || err != tc.err || nSrc != nDst {
|
||||
t.Errorf("%d:\ngot %q, %d, %v\nwant %q, %d, %v", i, got, nSrc, err, want, nDst, tc.err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestDiscard(t *testing.T) {
|
||||
testCases := []struct {
|
||||
str string
|
||||
dstSize int
|
||||
}{
|
||||
{"", 0},
|
||||
{"", 10},
|
||||
{"a", 0},
|
||||
{"ab", 10},
|
||||
}
|
||||
for i, tc := range testCases {
|
||||
nDst, nSrc, err := Discard.Transform(make([]byte, tc.dstSize), []byte(tc.str), true)
|
||||
if nDst != 0 || nSrc != len(tc.str) || err != nil {
|
||||
t.Errorf("%d:\ngot %q, %d, %v\nwant 0, %d, nil", i, nDst, nSrc, err, len(tc.str))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// mkChain creates a Chain transformer. x must be alternating between transformer
|
||||
// and bufSize, like T, (sz, T)*
|
||||
func mkChain(x ...interface{}) *chain {
|
||||
t := []Transformer{}
|
||||
for i := 0; i < len(x); i += 2 {
|
||||
t = append(t, x[i].(Transformer))
|
||||
}
|
||||
c := Chain(t...).(*chain)
|
||||
for i, j := 1, 1; i < len(x); i, j = i+2, j+1 {
|
||||
c.link[j].b = make([]byte, x[i].(int))
|
||||
}
|
||||
return c
|
||||
}
|
||||
|
||||
func chainTests() []testCase {
|
||||
return []testCase{
|
||||
{
|
||||
desc: "nil error",
|
||||
t: mkChain(rleEncode{}, 100, lowerCaseASCII{}),
|
||||
src: "ABB",
|
||||
dstSize: 100,
|
||||
srcSize: 100,
|
||||
wantStr: "1a2b",
|
||||
wantErr: nil,
|
||||
wantIter: 1,
|
||||
},
|
||||
|
||||
{
|
||||
desc: "short dst buffer",
|
||||
t: mkChain(lowerCaseASCII{}, 3, rleDecode{}),
|
||||
src: "1a2b3c10d11e0f1g",
|
||||
dstSize: 10,
|
||||
srcSize: 3,
|
||||
wantStr: "abbcccdddddddddd",
|
||||
wantErr: ErrShortDst,
|
||||
},
|
||||
|
||||
{
|
||||
desc: "short internal dst buffer",
|
||||
t: mkChain(lowerCaseASCII{}, 3, rleDecode{}, 10, Nop),
|
||||
src: "1a2b3c10d11e0f1g",
|
||||
dstSize: 100,
|
||||
srcSize: 3,
|
||||
wantStr: "abbcccdddddddddd",
|
||||
wantErr: errShortInternal,
|
||||
},
|
||||
|
||||
{
|
||||
desc: "short internal dst buffer from input",
|
||||
t: mkChain(rleDecode{}, 10, Nop),
|
||||
src: "1a2b3c10d11e0f1g",
|
||||
dstSize: 100,
|
||||
srcSize: 3,
|
||||
wantStr: "abbcccdddddddddd",
|
||||
wantErr: errShortInternal,
|
||||
},
|
||||
|
||||
{
|
||||
desc: "empty short internal dst buffer",
|
||||
t: mkChain(lowerCaseASCII{}, 3, rleDecode{}, 10, Nop),
|
||||
src: "4a7b11e0f1g",
|
||||
dstSize: 100,
|
||||
srcSize: 3,
|
||||
wantStr: "aaaabbbbbbb",
|
||||
wantErr: errShortInternal,
|
||||
},
|
||||
|
||||
{
|
||||
desc: "empty short internal dst buffer from input",
|
||||
t: mkChain(rleDecode{}, 10, Nop),
|
||||
src: "4a7b11e0f1g",
|
||||
dstSize: 100,
|
||||
srcSize: 3,
|
||||
wantStr: "aaaabbbbbbb",
|
||||
wantErr: errShortInternal,
|
||||
},
|
||||
|
||||
{
|
||||
desc: "short internal src buffer after full dst buffer",
|
||||
t: mkChain(Nop, 5, rleEncode{}, 10, Nop),
|
||||
src: "cccccddddd",
|
||||
dstSize: 100,
|
||||
srcSize: 100,
|
||||
wantStr: "",
|
||||
wantErr: errShortInternal,
|
||||
wantIter: 1,
|
||||
},
|
||||
|
||||
{
|
||||
desc: "short internal src buffer after short dst buffer; test lastFull",
|
||||
t: mkChain(rleDecode{}, 5, rleEncode{}, 4, Nop),
|
||||
src: "2a1b4c6d",
|
||||
dstSize: 100,
|
||||
srcSize: 100,
|
||||
wantStr: "2a1b",
|
||||
wantErr: errShortInternal,
|
||||
},
|
||||
|
||||
{
|
||||
desc: "short internal src buffer after successful complete fill",
|
||||
t: mkChain(Nop, 3, rleDecode{}),
|
||||
src: "123a4b",
|
||||
dstSize: 4,
|
||||
srcSize: 3,
|
||||
wantStr: "",
|
||||
wantErr: errShortInternal,
|
||||
wantIter: 1,
|
||||
},
|
||||
|
||||
{
|
||||
desc: "short internal src buffer after short dst buffer; test lastFull",
|
||||
t: mkChain(rleDecode{}, 5, rleEncode{}),
|
||||
src: "2a1b4c6d",
|
||||
dstSize: 4,
|
||||
srcSize: 100,
|
||||
wantStr: "2a1b",
|
||||
wantErr: errShortInternal,
|
||||
},
|
||||
|
||||
{
|
||||
desc: "short src buffer",
|
||||
t: mkChain(rleEncode{}, 5, Nop),
|
||||
src: "abbcccddddeeeee",
|
||||
dstSize: 4,
|
||||
srcSize: 4,
|
||||
ioSize: 4,
|
||||
wantStr: "1a2b3c",
|
||||
wantErr: ErrShortSrc,
|
||||
},
|
||||
|
||||
{
|
||||
desc: "process all in one go",
|
||||
t: mkChain(rleEncode{}, 5, Nop),
|
||||
src: "abbcccddddeeeeeffffff",
|
||||
dstSize: 100,
|
||||
srcSize: 100,
|
||||
wantStr: "1a2b3c4d5e6f",
|
||||
wantErr: nil,
|
||||
wantIter: 1,
|
||||
},
|
||||
|
||||
{
|
||||
desc: "complete processing downstream after error",
|
||||
t: mkChain(dontMentionX{}, 2, rleDecode{}, 5, Nop),
|
||||
src: "3a4b5eX",
|
||||
dstSize: 100,
|
||||
srcSize: 100,
|
||||
ioSize: 100,
|
||||
wantStr: "aaabbbbeeeee",
|
||||
wantErr: errYouMentionedX,
|
||||
},
|
||||
|
||||
{
|
||||
desc: "return downstream fatal errors first (followed by short dst)",
|
||||
t: mkChain(dontMentionX{}, 8, rleDecode{}, 4, Nop),
|
||||
src: "3a4b5eX",
|
||||
dstSize: 100,
|
||||
srcSize: 100,
|
||||
ioSize: 100,
|
||||
wantStr: "aaabbbb",
|
||||
wantErr: errShortInternal,
|
||||
},
|
||||
|
||||
{
|
||||
desc: "return downstream fatal errors first (followed by short src)",
|
||||
t: mkChain(dontMentionX{}, 5, Nop, 1, rleDecode{}),
|
||||
src: "1a5bX",
|
||||
dstSize: 100,
|
||||
srcSize: 100,
|
||||
ioSize: 100,
|
||||
wantStr: "",
|
||||
wantErr: errShortInternal,
|
||||
},
|
||||
|
||||
{
|
||||
desc: "short internal",
|
||||
t: mkChain(Nop, 11, rleEncode{}, 3, Nop),
|
||||
src: "abbcccddddddddddeeeeeeeeeeeg",
|
||||
dstSize: 3,
|
||||
srcSize: 100,
|
||||
wantStr: "1a2b3c10d",
|
||||
wantErr: errShortInternal,
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func doTransform(tc testCase) (res string, iter int, err error) {
|
||||
reset(tc.t)
|
||||
dst := make([]byte, tc.dstSize)
|
||||
out, in := make([]byte, 0, 2*len(tc.src)), []byte(tc.src)
|
||||
for {
|
||||
iter++
|
||||
src, atEOF := in, true
|
||||
if len(src) > tc.srcSize {
|
||||
src, atEOF = src[:tc.srcSize], false
|
||||
}
|
||||
nDst, nSrc, err := tc.t.Transform(dst, src, atEOF)
|
||||
out = append(out, dst[:nDst]...)
|
||||
in = in[nSrc:]
|
||||
switch {
|
||||
case err == nil && len(in) != 0:
|
||||
case err == ErrShortSrc && nSrc > 0:
|
||||
case err == ErrShortDst && nDst > 0:
|
||||
default:
|
||||
return string(out), iter, err
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestChain(t *testing.T) {
|
||||
if c, ok := Chain().(nop); !ok {
|
||||
t.Errorf("empty chain: %v; want Nop", c)
|
||||
}
|
||||
|
||||
// Test Chain for a single Transformer.
|
||||
for _, tc := range testCases {
|
||||
tc.t = Chain(tc.t)
|
||||
str, _, err := doTransform(tc)
|
||||
if str != tc.wantStr || err != tc.wantErr {
|
||||
t.Errorf("%s:\ngot %q, %v\nwant %q, %v", tc, str, err, tc.wantStr, tc.wantErr)
|
||||
}
|
||||
}
|
||||
|
||||
tests := chainTests()
|
||||
sizes := []int{1, 2, 3, 4, 5, 7, 10, 100, 1000}
|
||||
addTest := func(tc testCase, t *chain) {
|
||||
if t.link[0].t != tc.t && tc.wantErr == ErrShortSrc {
|
||||
tc.wantErr = errShortInternal
|
||||
}
|
||||
if t.link[len(t.link)-2].t != tc.t && tc.wantErr == ErrShortDst {
|
||||
tc.wantErr = errShortInternal
|
||||
}
|
||||
tc.t = t
|
||||
tests = append(tests, tc)
|
||||
}
|
||||
for _, tc := range testCases {
|
||||
for _, sz := range sizes {
|
||||
tt := tc
|
||||
tt.dstSize = sz
|
||||
addTest(tt, mkChain(tc.t, tc.dstSize, Nop))
|
||||
addTest(tt, mkChain(tc.t, tc.dstSize, Nop, 2, Nop))
|
||||
addTest(tt, mkChain(Nop, tc.srcSize, tc.t, tc.dstSize, Nop))
|
||||
if sz >= tc.dstSize && (tc.wantErr != ErrShortDst || sz == tc.dstSize) {
|
||||
addTest(tt, mkChain(Nop, tc.srcSize, tc.t))
|
||||
addTest(tt, mkChain(Nop, 100, Nop, tc.srcSize, tc.t))
|
||||
}
|
||||
}
|
||||
}
|
||||
for _, tc := range testCases {
|
||||
tt := tc
|
||||
tt.dstSize = 1
|
||||
tt.wantStr = ""
|
||||
addTest(tt, mkChain(tc.t, tc.dstSize, Discard))
|
||||
addTest(tt, mkChain(Nop, tc.srcSize, tc.t, tc.dstSize, Discard))
|
||||
addTest(tt, mkChain(Nop, tc.srcSize, tc.t, tc.dstSize, Nop, tc.dstSize, Discard))
|
||||
}
|
||||
for _, tc := range testCases {
|
||||
tt := tc
|
||||
tt.dstSize = 100
|
||||
tt.wantStr = strings.Replace(tc.src, "0f", "", -1)
|
||||
// Chain encoders and decoders.
|
||||
if _, ok := tc.t.(rleEncode); ok && tc.wantErr == nil {
|
||||
addTest(tt, mkChain(tc.t, tc.dstSize, Nop, 1000, rleDecode{}))
|
||||
addTest(tt, mkChain(tc.t, tc.dstSize, Nop, tc.dstSize, rleDecode{}))
|
||||
addTest(tt, mkChain(Nop, tc.srcSize, tc.t, tc.dstSize, Nop, 100, rleDecode{}))
|
||||
// decoding needs larger destinations
|
||||
addTest(tt, mkChain(Nop, tc.srcSize, tc.t, tc.dstSize, rleDecode{}, 100, Nop))
|
||||
addTest(tt, mkChain(Nop, tc.srcSize, tc.t, tc.dstSize, Nop, 100, rleDecode{}, 100, Nop))
|
||||
} else if _, ok := tc.t.(rleDecode); ok && tc.wantErr == nil {
|
||||
// The internal buffer size may need to be the sum of the maximum segment
|
||||
// size of the two encoders!
|
||||
addTest(tt, mkChain(tc.t, 2*tc.dstSize, rleEncode{}))
|
||||
addTest(tt, mkChain(tc.t, tc.dstSize, Nop, 101, rleEncode{}))
|
||||
addTest(tt, mkChain(Nop, tc.srcSize, tc.t, tc.dstSize, Nop, 100, rleEncode{}))
|
||||
addTest(tt, mkChain(Nop, tc.srcSize, tc.t, tc.dstSize, Nop, 200, rleEncode{}, 100, Nop))
|
||||
}
|
||||
}
|
||||
for _, tc := range tests {
|
||||
str, iter, err := doTransform(tc)
|
||||
mi := tc.wantIter != 0 && tc.wantIter != iter
|
||||
if str != tc.wantStr || err != tc.wantErr || mi {
|
||||
t.Errorf("%s:\ngot iter:%d, %q, %v\nwant iter:%d, %q, %v", tc, iter, str, err, tc.wantIter, tc.wantStr, tc.wantErr)
|
||||
}
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
func TestRemoveFunc(t *testing.T) {
|
||||
filter := RemoveFunc(func(r rune) bool {
|
||||
return strings.IndexRune("ab\u0300\u1234,", r) != -1
|
||||
})
|
||||
tests := []testCase{
|
||||
{
|
||||
src: ",",
|
||||
wantStr: "",
|
||||
},
|
||||
|
||||
{
|
||||
src: "c",
|
||||
wantStr: "c",
|
||||
},
|
||||
|
||||
{
|
||||
src: "\u2345",
|
||||
wantStr: "\u2345",
|
||||
},
|
||||
|
||||
{
|
||||
src: "tschüß",
|
||||
wantStr: "tschüß",
|
||||
},
|
||||
|
||||
{
|
||||
src: ",до,свидания,",
|
||||
wantStr: "досвидания",
|
||||
},
|
||||
|
||||
{
|
||||
src: "a\xbd\xb2=\xbc ⌘",
|
||||
wantStr: "\uFFFD\uFFFD=\uFFFD ⌘",
|
||||
},
|
||||
|
||||
{
|
||||
// If we didn't replace illegal bytes with RuneError, the result
|
||||
// would be \u0300 or the code would need to be more complex.
|
||||
src: "\xcc\u0300\x80",
|
||||
wantStr: "\uFFFD\uFFFD",
|
||||
},
|
||||
|
||||
{
|
||||
src: "\xcc\u0300\x80",
|
||||
dstSize: 3,
|
||||
wantStr: "\uFFFD\uFFFD",
|
||||
wantIter: 2,
|
||||
},
|
||||
|
||||
{
|
||||
src: "\u2345",
|
||||
dstSize: 2,
|
||||
wantStr: "",
|
||||
wantErr: ErrShortDst,
|
||||
},
|
||||
|
||||
{
|
||||
src: "\xcc",
|
||||
dstSize: 2,
|
||||
wantStr: "",
|
||||
wantErr: ErrShortDst,
|
||||
},
|
||||
|
||||
{
|
||||
src: "\u0300",
|
||||
dstSize: 2,
|
||||
srcSize: 1,
|
||||
wantStr: "",
|
||||
wantErr: ErrShortSrc,
|
||||
},
|
||||
|
||||
{
|
||||
t: RemoveFunc(func(r rune) bool {
|
||||
return r == utf8.RuneError
|
||||
}),
|
||||
src: "\xcc\u0300\x80",
|
||||
wantStr: "\u0300",
|
||||
},
|
||||
}
|
||||
|
||||
for _, tc := range tests {
|
||||
tc.desc = tc.src
|
||||
if tc.t == nil {
|
||||
tc.t = filter
|
||||
}
|
||||
if tc.dstSize == 0 {
|
||||
tc.dstSize = 100
|
||||
}
|
||||
if tc.srcSize == 0 {
|
||||
tc.srcSize = 100
|
||||
}
|
||||
str, iter, err := doTransform(tc)
|
||||
mi := tc.wantIter != 0 && tc.wantIter != iter
|
||||
if str != tc.wantStr || err != tc.wantErr || mi {
|
||||
t.Errorf("%+q:\ngot iter:%d, %+q, %v\nwant iter:%d, %+q, %v", tc.src, iter, str, err, tc.wantIter, tc.wantStr, tc.wantErr)
|
||||
}
|
||||
|
||||
tc.src = str
|
||||
idem, _, _ := doTransform(tc)
|
||||
if str != idem {
|
||||
t.Errorf("%+q: found %+q; want %+q", tc.src, idem, str)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestBytes(t *testing.T) {
|
||||
for _, tt := range append(testCases, chainTests()...) {
|
||||
if tt.desc == "allowStutter = true" {
|
||||
// We don't have control over the buffer size, so we eliminate tests
|
||||
// that depend on a specific buffer size being set.
|
||||
continue
|
||||
}
|
||||
got := Bytes(tt.t, []byte(tt.src))
|
||||
if tt.wantErr != nil {
|
||||
if tt.wantErr != ErrShortDst && tt.wantErr != ErrShortSrc {
|
||||
// Bytes should return nil for non-recoverable errors.
|
||||
if g, w := (got == nil), (tt.wantErr != nil); g != w {
|
||||
t.Errorf("%s:error: got %v; want %v", tt.desc, g, w)
|
||||
}
|
||||
}
|
||||
// The output strings in the tests that expect an error will
|
||||
// almost certainly not be the same as the result of Bytes.
|
||||
continue
|
||||
}
|
||||
if string(got) != tt.wantStr {
|
||||
t.Errorf("%s:string: got %q; want %q", tt.desc, got, tt.wantStr)
|
||||
}
|
||||
}
|
||||
}
|
||||
Godeps/_workspace/src/code.google.com/p/go.text/unicode/norm/Makefile (generated, vendored, Normal file), 30 changed lines
@@ -0,0 +1,30 @@

# Copyright 2011 The Go Authors. All rights reserved.
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file.

maketables: maketables.go triegen.go
	go build $^

maketesttables: maketesttables.go triegen.go
	go build $^

normregtest: normregtest.go
	go build $^

tables: maketables
	./maketables > tables.go
	gofmt -w tables.go

trietesttables: maketesttables
	./maketesttables > triedata_test.go
	gofmt -w triedata_test.go

# Downloads from www.unicode.org, so not part
# of standard test scripts.
test: testtables regtest

testtables: maketables
	./maketables -test > data_test.go && go test -tags=test

regtest: normregtest
	./normregtest
130
Godeps/_workspace/src/code.google.com/p/go.text/unicode/norm/composition_test.go
generated
vendored
Normal file
130
Godeps/_workspace/src/code.google.com/p/go.text/unicode/norm/composition_test.go
generated
vendored
Normal file
@@ -0,0 +1,130 @@
|
||||
// Copyright 2011 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package norm
|
||||
|
||||
import "testing"
|
||||
|
||||
// TestCase is used for most tests.
|
||||
type TestCase struct {
|
||||
in []rune
|
||||
out []rune
|
||||
}
|
||||
|
||||
func runTests(t *testing.T, name string, fm Form, tests []TestCase) {
|
||||
rb := reorderBuffer{}
|
||||
rb.init(fm, nil)
|
||||
for i, test := range tests {
|
||||
rb.setFlusher(nil, appendFlush)
|
||||
for j, rune := range test.in {
|
||||
b := []byte(string(rune))
|
||||
src := inputBytes(b)
|
||||
info := rb.f.info(src, 0)
|
||||
if j == 0 {
|
||||
rb.ss.first(info)
|
||||
} else {
|
||||
rb.ss.next(info)
|
||||
}
|
||||
if rb.insertFlush(src, 0, info) < 0 {
|
||||
t.Errorf("%s:%d: insert failed for rune %d", name, i, j)
|
||||
}
|
||||
}
|
||||
rb.doFlush()
|
||||
was := string(rb.out)
|
||||
want := string(test.out)
|
||||
if len(was) != len(want) {
|
||||
t.Errorf("%s:%d: length = %d; want %d", name, i, len(was), len(want))
|
||||
}
|
||||
if was != want {
|
||||
k, pfx := pidx(was, want)
|
||||
t.Errorf("%s:%d: \nwas %s%+q; \nwant %s%+q", name, i, pfx, was[k:], pfx, want[k:])
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestFlush(t *testing.T) {
|
||||
const (
|
||||
hello = "Hello "
|
||||
world = "world!"
|
||||
)
|
||||
buf := make([]byte, maxByteBufferSize)
|
||||
p := copy(buf, hello)
|
||||
out := buf[p:]
|
||||
rb := reorderBuffer{}
|
||||
rb.initString(NFC, world)
|
||||
if i := rb.flushCopy(out); i != 0 {
|
||||
t.Errorf("wrote bytes on flush of empty buffer. (len(out) = %d)", i)
|
||||
}
|
||||
|
||||
for i := range world {
|
||||
// No need to set streamSafe values for this test.
|
||||
rb.insertFlush(rb.src, i, rb.f.info(rb.src, i))
|
||||
n := rb.flushCopy(out)
|
||||
out = out[n:]
|
||||
p += n
|
||||
}
|
||||
|
||||
was := buf[:p]
|
||||
want := hello + world
|
||||
if string(was) != want {
|
||||
t.Errorf(`output after flush was "%s"; want "%s"`, string(was), want)
|
||||
}
|
||||
if rb.nrune != 0 {
|
||||
t.Errorf("non-null size of info buffer (rb.nrune == %d)", rb.nrune)
|
||||
}
|
||||
if rb.nbyte != 0 {
|
||||
t.Errorf("non-null size of byte buffer (rb.nbyte == %d)", rb.nbyte)
|
||||
}
|
||||
}
|
||||
|
||||
var insertTests = []TestCase{
|
||||
{[]rune{'a'}, []rune{'a'}},
|
||||
{[]rune{0x300}, []rune{0x300}},
|
||||
{[]rune{0x300, 0x316}, []rune{0x316, 0x300}}, // CCC(0x300)==230; CCC(0x316)==220
|
||||
{[]rune{0x316, 0x300}, []rune{0x316, 0x300}},
|
||||
{[]rune{0x41, 0x316, 0x300}, []rune{0x41, 0x316, 0x300}},
|
||||
{[]rune{0x41, 0x300, 0x316}, []rune{0x41, 0x316, 0x300}},
|
||||
{[]rune{0x300, 0x316, 0x41}, []rune{0x316, 0x300, 0x41}},
|
||||
{[]rune{0x41, 0x300, 0x40, 0x316}, []rune{0x41, 0x300, 0x40, 0x316}},
|
||||
}
|
||||
|
||||
func TestInsert(t *testing.T) {
|
||||
runTests(t, "TestInsert", NFD, insertTests)
|
||||
}
|
||||
|
||||
var decompositionNFDTest = []TestCase{
|
||||
{[]rune{0xC0}, []rune{0x41, 0x300}},
|
||||
{[]rune{0xAC00}, []rune{0x1100, 0x1161}},
|
||||
{[]rune{0x01C4}, []rune{0x01C4}},
|
||||
{[]rune{0x320E}, []rune{0x320E}},
|
||||
{[]rune("음ẻ과"), []rune{0x110B, 0x1173, 0x11B7, 0x65, 0x309, 0x1100, 0x116A}},
|
||||
}
|
||||
|
||||
var decompositionNFKDTest = []TestCase{
|
||||
{[]rune{0xC0}, []rune{0x41, 0x300}},
|
||||
{[]rune{0xAC00}, []rune{0x1100, 0x1161}},
|
||||
{[]rune{0x01C4}, []rune{0x44, 0x5A, 0x030C}},
|
||||
{[]rune{0x320E}, []rune{0x28, 0x1100, 0x1161, 0x29}},
|
||||
}
|
||||
|
||||
func TestDecomposition(t *testing.T) {
|
||||
runTests(t, "TestDecompositionNFD", NFD, decompositionNFDTest)
|
||||
runTests(t, "TestDecompositionNFKD", NFKD, decompositionNFKDTest)
|
||||
}
|
||||
|
||||
var compositionTest = []TestCase{
|
||||
{[]rune{0x41, 0x300}, []rune{0xC0}},
|
||||
{[]rune{0x41, 0x316}, []rune{0x41, 0x316}},
|
||||
{[]rune{0x41, 0x300, 0x35D}, []rune{0xC0, 0x35D}},
|
||||
{[]rune{0x41, 0x316, 0x300}, []rune{0xC0, 0x316}},
|
||||
// blocking starter
|
||||
{[]rune{0x41, 0x316, 0x40, 0x300}, []rune{0x41, 0x316, 0x40, 0x300}},
|
||||
{[]rune{0x1100, 0x1161}, []rune{0xAC00}},
|
||||
// parenthesized Hangul, alternate between ASCII and Hangul.
|
||||
{[]rune{0x28, 0x1100, 0x1161, 0x29}, []rune{0x28, 0xAC00, 0x29}},
|
||||
}
|
||||
|
||||
func TestComposition(t *testing.T) {
|
||||
runTests(t, "TestComposition", NFC, compositionTest)
|
||||
}
|
||||
82
Godeps/_workspace/src/code.google.com/p/go.text/unicode/norm/example_iter_test.go
generated
vendored
Normal file
@@ -0,0 +1,82 @@
|
||||
// Copyright 2012 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package norm_test
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"unicode/utf8"
|
||||
|
||||
"code.google.com/p/go.text/unicode/norm"
|
||||
)
|
||||
|
||||
// EqualSimple uses a norm.Iter to compare two non-normalized
|
||||
// strings for equivalence.
|
||||
func EqualSimple(a, b string) bool {
|
||||
var ia, ib norm.Iter
|
||||
ia.InitString(norm.NFKD, a)
|
||||
ib.InitString(norm.NFKD, b)
|
||||
for !ia.Done() && !ib.Done() {
|
||||
if !bytes.Equal(ia.Next(), ib.Next()) {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return ia.Done() && ib.Done()
|
||||
}
|
||||
|
||||
// FindPrefix finds the longest common prefix of ASCII characters
|
||||
// of a and b.
|
||||
func FindPrefix(a, b string) int {
|
||||
i := 0
|
||||
for ; i < len(a) && i < len(b) && a[i] < utf8.RuneSelf && a[i] == b[i]; i++ {
|
||||
}
|
||||
return i
|
||||
}
|
||||
|
||||
// EqualOpt is like EqualSimple, but optimizes the special
|
||||
// case for ASCII characters.
|
||||
func EqualOpt(a, b string) bool {
|
||||
n := FindPrefix(a, b)
|
||||
a, b = a[n:], b[n:]
|
||||
var ia, ib norm.Iter
|
||||
ia.InitString(norm.NFKD, a)
|
||||
ib.InitString(norm.NFKD, b)
|
||||
for !ia.Done() && !ib.Done() {
|
||||
if !bytes.Equal(ia.Next(), ib.Next()) {
|
||||
return false
|
||||
}
|
||||
if n := int64(FindPrefix(a[ia.Pos():], b[ib.Pos():])); n != 0 {
|
||||
ia.Seek(n, 1)
|
||||
ib.Seek(n, 1)
|
||||
}
|
||||
}
|
||||
return ia.Done() && ib.Done()
|
||||
}
|
||||
|
||||
var compareTests = []struct{ a, b string }{
|
||||
{"aaa", "aaa"},
|
||||
{"aaa", "aab"},
|
||||
{"a\u0300a", "\u00E0a"},
|
||||
{"a\u0300\u0320b", "a\u0320\u0300b"},
|
||||
{"\u1E0A\u0323", "\x44\u0323\u0307"},
|
||||
// A character that decomposes into multiple segments
|
||||
// spans several iterations.
|
||||
{"\u3304", "\u30A4\u30CB\u30F3\u30AF\u3099"},
|
||||
}
|
||||
|
||||
func ExampleIter() {
|
||||
for i, t := range compareTests {
|
||||
r0 := EqualSimple(t.a, t.b)
|
||||
r1 := EqualOpt(t.a, t.b)
|
||||
fmt.Printf("%d: %v %v\n", i, r0, r1)
|
||||
}
|
||||
// Output:
|
||||
// 0: true true
|
||||
// 1: false false
|
||||
// 2: true true
|
||||
// 3: true true
|
||||
// 4: true true
|
||||
// 5: true true
|
||||
}
|
||||
@@ -201,17 +201,17 @@ func lookupInfoNFKC(b input, i int) Properties {
// Properties returns properties for the first rune in s.
func (f Form) Properties(s []byte) Properties {
	if f == NFC || f == NFD {
		return compInfo(nfcData.lookup(s))
		return compInfo(nfcTrie.lookup(s))
	}
	return compInfo(nfkcData.lookup(s))
	return compInfo(nfkcTrie.lookup(s))
}

// PropertiesString returns properties for the first rune in s.
func (f Form) PropertiesString(s string) Properties {
	if f == NFC || f == NFD {
		return compInfo(nfcData.lookupString(s))
		return compInfo(nfcTrie.lookupString(s))
	}
	return compInfo(nfkcData.lookupString(s))
	return compInfo(nfkcTrie.lookupString(s))
}

// compInfo converts the information contained in v and sz
54
Godeps/_workspace/src/code.google.com/p/go.text/unicode/norm/forminfo_test.go
generated
vendored
Normal file
@@ -0,0 +1,54 @@
|
||||
// Copyright 2013 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// +build test
|
||||
|
||||
package norm
|
||||
|
||||
import "testing"
|
||||
|
||||
func TestProperties(t *testing.T) {
|
||||
var d runeData
|
||||
CK := [2]string{"C", "K"}
|
||||
for k, r := 1, rune(0); r < 0x2ffff; r++ {
|
||||
if k < len(testData) && r == testData[k].r {
|
||||
d = testData[k]
|
||||
k++
|
||||
}
|
||||
s := string(r)
|
||||
for j, p := range []Properties{NFC.PropertiesString(s), NFKC.PropertiesString(s)} {
|
||||
f := d.f[j]
|
||||
if p.CCC() != d.ccc {
|
||||
t.Errorf("%U: ccc(%s): was %d; want %d %X", r, CK[j], p.CCC(), d.ccc, p.index)
|
||||
}
|
||||
if p.isYesC() != (f.qc == Yes) {
|
||||
t.Errorf("%U: YesC(%s): was %v; want %v", r, CK[j], p.isYesC(), f.qc == Yes)
|
||||
}
|
||||
if p.combinesBackward() != (f.qc == Maybe) {
|
||||
t.Errorf("%U: combines backwards(%s): was %v; want %v", r, CK[j], p.combinesBackward(), f.qc == Maybe)
|
||||
}
|
||||
if p.nLeadingNonStarters() != d.nLead {
|
||||
t.Errorf("%U: nLead(%s): was %d; want %d %#v %#v", r, CK[j], p.nLeadingNonStarters(), d.nLead, p, d)
|
||||
}
|
||||
if p.nTrailingNonStarters() != d.nTrail {
|
||||
t.Errorf("%U: nTrail(%s): was %d; want %d %#v %#v", r, CK[j], p.nTrailingNonStarters(), d.nTrail, p, d)
|
||||
}
|
||||
if p.combinesForward() != f.combinesForward {
|
||||
t.Errorf("%U: combines forward(%s): was %v; want %v %#v", r, CK[j], p.combinesForward(), f.combinesForward, p)
|
||||
}
|
||||
// Skip Hangul as it is algorithmically computed.
|
||||
if r >= hangulBase && r < hangulEnd {
|
||||
continue
|
||||
}
|
||||
if p.hasDecomposition() {
|
||||
if has := f.decomposition != ""; !has {
|
||||
t.Errorf("%U: hasDecomposition(%s): was %v; want %v", r, CK[j], p.hasDecomposition(), has)
|
||||
}
|
||||
if string(p.Decomposition()) != f.decomposition {
|
||||
t.Errorf("%U: decomp(%s): was %+q; want %+q", r, CK[j], p.Decomposition(), f.decomposition)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -77,16 +77,16 @@ func (in *input) copySlice(buf []byte, b, e int) int {

func (in *input) charinfoNFC(p int) (uint16, int) {
	if in.bytes == nil {
		return nfcData.lookupString(in.str[p:])
		return nfcTrie.lookupString(in.str[p:])
	}
	return nfcData.lookup(in.bytes[p:])
	return nfcTrie.lookup(in.bytes[p:])
}

func (in *input) charinfoNFKC(p int) (uint16, int) {
	if in.bytes == nil {
		return nfkcData.lookupString(in.str[p:])
		return nfkcTrie.lookupString(in.str[p:])
	}
	return nfkcData.lookup(in.bytes[p:])
	return nfkcTrie.lookup(in.bytes[p:])
}

func (in *input) hangul(p int) (r rune) {
@@ -9,8 +9,6 @@ import (
	"unicode/utf8"
)

// MaxSegmentSize is the maximum size of a byte buffer needed to consider any
// sequence of starter and non-starter runes for the purpose of normalization.
const MaxSegmentSize = maxByteBufferSize

// An Iter iterates over a string or byte slice, while normalizing it
98
Godeps/_workspace/src/code.google.com/p/go.text/unicode/norm/iter_test.go
generated
vendored
Normal file
@@ -0,0 +1,98 @@
|
||||
// Copyright 2011 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package norm
|
||||
|
||||
import (
|
||||
"strings"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func doIterNorm(f Form, s string) []byte {
|
||||
acc := []byte{}
|
||||
i := Iter{}
|
||||
i.InitString(f, s)
|
||||
for !i.Done() {
|
||||
acc = append(acc, i.Next()...)
|
||||
}
|
||||
return acc
|
||||
}
|
||||
|
||||
func TestIterNext(t *testing.T) {
|
||||
runNormTests(t, "IterNext", func(f Form, out []byte, s string) []byte {
|
||||
return doIterNorm(f, string(append(out, s...)))
|
||||
})
|
||||
}
|
||||
|
||||
type SegmentTest struct {
|
||||
in string
|
||||
out []string
|
||||
}
|
||||
|
||||
var segmentTests = []SegmentTest{
|
||||
{"\u1E0A\u0323a", []string{"\x44\u0323\u0307", "a", ""}},
|
||||
{rep('a', segSize), append(strings.Split(rep('a', segSize), ""), "")},
|
||||
{rep('a', segSize+2), append(strings.Split(rep('a', segSize+2), ""), "")},
|
||||
{rep('a', segSize) + "\u0300aa",
|
||||
append(strings.Split(rep('a', segSize-1), ""), "a\u0300", "a", "a", "")},
|
||||
|
||||
// U+0f73 is NOT treated as a starter as it is a modifier
|
||||
{"a" + grave(29) + "\u0f73", []string{"a" + grave(29), cgj + "\u0f73"}},
|
||||
{"a\u0f73", []string{"a\u0f73"}},
|
||||
|
||||
// U+ff9e is treated as a non-starter.
|
||||
// TODO: should we? Note that this will only affect iteration, as whether
|
||||
// or not we do so does not affect the normalization output and will either
|
||||
// way result in consistent iteration output.
|
||||
{"a" + grave(30) + "\uff9e", []string{"a" + grave(30), cgj + "\uff9e"}},
|
||||
{"a\uff9e", []string{"a\uff9e"}},
|
||||
}
|
||||
|
||||
var segmentTestsK = []SegmentTest{
|
||||
{"\u3332", []string{"\u30D5", "\u30A1", "\u30E9", "\u30C3", "\u30C8\u3099", ""}},
|
||||
// last segment of multi-segment decomposition needs normalization
|
||||
{"\u3332\u093C", []string{"\u30D5", "\u30A1", "\u30E9", "\u30C3", "\u30C8\u093C\u3099", ""}},
|
||||
{"\u320E", []string{"\x28", "\uAC00", "\x29"}},
|
||||
|
||||
// last segment should be copied to start of buffer.
|
||||
{"\ufdfa", []string{"\u0635", "\u0644", "\u0649", " ", "\u0627", "\u0644", "\u0644", "\u0647", " ", "\u0639", "\u0644", "\u064a", "\u0647", " ", "\u0648", "\u0633", "\u0644", "\u0645", ""}},
|
||||
{"\ufdfa" + grave(30), []string{"\u0635", "\u0644", "\u0649", " ", "\u0627", "\u0644", "\u0644", "\u0647", " ", "\u0639", "\u0644", "\u064a", "\u0647", " ", "\u0648", "\u0633", "\u0644", "\u0645" + grave(30), ""}},
|
||||
{"\uFDFA" + grave(64), []string{"\u0635", "\u0644", "\u0649", " ", "\u0627", "\u0644", "\u0644", "\u0647", " ", "\u0639", "\u0644", "\u064a", "\u0647", " ", "\u0648", "\u0633", "\u0644", "\u0645" + grave(30), cgj + grave(30), cgj + grave(4), ""}},
|
||||
|
||||
// Hangul and Jamo are grouped together.
|
||||
{"\uAC00", []string{"\u1100\u1161", ""}},
|
||||
{"\uAC01", []string{"\u1100\u1161\u11A8", ""}},
|
||||
{"\u1100\u1161", []string{"\u1100\u1161", ""}},
|
||||
}
|
||||
|
||||
// Note that, by design, segmentation is equal for composing and decomposing forms.
|
||||
func TestIterSegmentation(t *testing.T) {
|
||||
segmentTest(t, "SegmentTestD", NFD, segmentTests)
|
||||
segmentTest(t, "SegmentTestC", NFC, segmentTests)
|
||||
segmentTest(t, "SegmentTestKD", NFKD, segmentTestsK)
|
||||
segmentTest(t, "SegmentTestKC", NFKC, segmentTestsK)
|
||||
}
|
||||
|
||||
func segmentTest(t *testing.T, name string, f Form, tests []SegmentTest) {
|
||||
iter := Iter{}
|
||||
for i, tt := range tests {
|
||||
iter.InitString(f, tt.in)
|
||||
for j, seg := range tt.out {
|
||||
if seg == "" {
|
||||
if !iter.Done() {
|
||||
res := string(iter.Next())
|
||||
t.Errorf(`%s:%d:%d: expected Done()==true, found segment %+q`, name, i, j, res)
|
||||
}
|
||||
continue
|
||||
}
|
||||
if iter.Done() {
|
||||
t.Errorf("%s:%d:%d: Done()==true, want false", name, i, j)
|
||||
}
|
||||
seg = f.String(seg)
|
||||
if res := string(iter.Next()); res != seg {
|
||||
t.Errorf(`%s:%d:%d: segment was %+q (%d); want %+q (%d)`, name, i, j, pc(res), len(res), pc(seg), len(seg))
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -11,22 +11,23 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"bytes"
|
||||
"flag"
|
||||
"fmt"
|
||||
"io"
|
||||
"log"
|
||||
"net/http"
|
||||
"os"
|
||||
"regexp"
|
||||
"sort"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"golang.org/x/text/internal/gen"
|
||||
"golang.org/x/text/internal/triegen"
|
||||
"golang.org/x/text/internal/ucd"
|
||||
"unicode"
|
||||
)
|
||||
|
||||
func main() {
|
||||
gen.Init()
|
||||
flag.Parse()
|
||||
loadUnicodeData()
|
||||
compactCCC()
|
||||
loadCompositionExclusions()
|
||||
@@ -43,20 +44,50 @@ func main() {
|
||||
}
|
||||
}
|
||||
|
||||
var (
|
||||
tablelist = flag.String("tables",
|
||||
"all",
|
||||
"comma-separated list of which tables to generate; "+
|
||||
"can be 'decomp', 'recomp', 'info' and 'all'")
|
||||
test = flag.Bool("test",
|
||||
false,
|
||||
"test existing tables against DerivedNormalizationProps and generate test data for regression testing")
|
||||
verbose = flag.Bool("verbose",
|
||||
false,
|
||||
"write data to stdout as it is parsed")
|
||||
)
|
||||
var url = flag.String("url",
|
||||
"http://www.unicode.org/Public/"+unicode.Version+"/ucd/",
|
||||
"URL of Unicode database directory")
|
||||
var tablelist = flag.String("tables",
|
||||
"all",
|
||||
"comma-separated list of which tables to generate; "+
|
||||
"can be 'decomp', 'recomp', 'info' and 'all'")
|
||||
var test = flag.Bool("test",
|
||||
false,
|
||||
"test existing tables against DerivedNormalizationProps and generate test data for regression testing")
|
||||
var verbose = flag.Bool("verbose",
|
||||
false,
|
||||
"write data to stdout as it is parsed")
|
||||
var localFiles = flag.Bool("local",
|
||||
false,
|
||||
"data files have been copied to the current directory; for debugging only")
|
||||
|
||||
const MaxChar = 0x10FFFF // anything above this shouldn't exist
|
||||
var logger = log.New(os.Stderr, "", log.Lshortfile)
|
||||
|
||||
// UnicodeData.txt has form:
|
||||
// 0037;DIGIT SEVEN;Nd;0;EN;;7;7;7;N;;;;;
|
||||
// 007A;LATIN SMALL LETTER Z;Ll;0;L;;;;;N;;;005A;;005A
|
||||
// See http://unicode.org/reports/tr44/ for full explanation
|
||||
// The fields:
|
||||
const (
|
||||
FCodePoint = iota
|
||||
FName
|
||||
FGeneralCategory
|
||||
FCanonicalCombiningClass
|
||||
FBidiClass
|
||||
FDecompMapping
|
||||
FDecimalValue
|
||||
FDigitValue
|
||||
FNumericValue
|
||||
FBidiMirrored
|
||||
FUnicode1Name
|
||||
FISOComment
|
||||
FSimpleUppercaseMapping
|
||||
FSimpleLowercaseMapping
|
||||
FSimpleTitlecaseMapping
|
||||
NumField
|
||||
|
||||
MaxChar = 0x10FFFF // anything above this shouldn't exist
|
||||
)
|
||||
|
||||
// Quick Check properties of runes allow us to quickly
|
||||
// determine whether a rune may occur in a normal form.
|
||||
@@ -180,7 +211,28 @@ func (f FormInfo) String() string {
|
||||
|
||||
type Decomposition []rune
|
||||
|
||||
func parseDecomposition(s string, skipfirst bool) (a []rune, err error) {
|
||||
func openReader(file string) (input io.ReadCloser) {
|
||||
if *localFiles {
|
||||
f, err := os.Open(file)
|
||||
if err != nil {
|
||||
logger.Fatal(err)
|
||||
}
|
||||
input = f
|
||||
} else {
|
||||
path := *url + file
|
||||
resp, err := http.Get(path)
|
||||
if err != nil {
|
||||
logger.Fatal(err)
|
||||
}
|
||||
if resp.StatusCode != 200 {
|
||||
logger.Fatal("bad GET status for "+file, resp.Status)
|
||||
}
|
||||
input = resp.Body
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func parseDecomposition(s string, skipfirst bool) (a []rune, e error) {
|
||||
decomp := strings.Split(s, " ")
|
||||
if len(decomp) > 0 && skipfirst {
|
||||
decomp = decomp[1:]
|
||||
@@ -195,31 +247,56 @@ func parseDecomposition(s string, skipfirst bool) (a []rune, err error) {
|
||||
return a, nil
|
||||
}
|
||||
|
||||
func loadUnicodeData() {
|
||||
f := gen.OpenUCDFile("UnicodeData.txt")
|
||||
defer f.Close()
|
||||
p := ucd.New(f)
|
||||
for p.Next() {
|
||||
r := p.Rune(ucd.CodePoint)
|
||||
char := &chars[r]
|
||||
|
||||
char.ccc = uint8(p.Uint(ucd.CanonicalCombiningClass))
|
||||
decmap := p.String(ucd.DecompMapping)
|
||||
|
||||
exp, err := parseDecomposition(decmap, false)
|
||||
isCompat := false
|
||||
if err != nil {
|
||||
if len(decmap) > 0 {
|
||||
exp, err = parseDecomposition(decmap, true)
|
||||
if err != nil {
|
||||
log.Fatalf(`%U: bad decomp |%v|: "%s"`, r, decmap, err)
|
||||
}
|
||||
isCompat = true
|
||||
func parseCharacter(line string) {
|
||||
field := strings.Split(line, ";")
|
||||
if len(field) != NumField {
|
||||
logger.Fatalf("%5s: %d fields (expected %d)\n", line, len(field), NumField)
|
||||
}
|
||||
x, err := strconv.ParseUint(field[FCodePoint], 16, 64)
|
||||
point := int(x)
|
||||
if err != nil {
|
||||
logger.Fatalf("%.5s...: %s", line, err)
|
||||
}
|
||||
if point == 0 {
|
||||
return // not interesting and we use 0 as unset
|
||||
}
|
||||
if point > MaxChar {
|
||||
logger.Fatalf("%5s: Rune %X > MaxChar (%X)", line, point, MaxChar)
|
||||
return
|
||||
}
|
||||
state := SNormal
|
||||
switch {
|
||||
case strings.Index(field[FName], ", First>") > 0:
|
||||
state = SFirst
|
||||
case strings.Index(field[FName], ", Last>") > 0:
|
||||
state = SLast
|
||||
}
|
||||
firstChar := lastChar + 1
|
||||
lastChar = rune(point)
|
||||
if state != SLast {
|
||||
firstChar = lastChar
|
||||
}
|
||||
x, err = strconv.ParseUint(field[FCanonicalCombiningClass], 10, 64)
|
||||
if err != nil {
|
||||
logger.Fatalf("%U: bad ccc field: %s", int(x), err)
|
||||
}
|
||||
ccc := uint8(x)
|
||||
decmap := field[FDecompMapping]
|
||||
exp, e := parseDecomposition(decmap, false)
|
||||
isCompat := false
|
||||
if e != nil {
|
||||
if len(decmap) > 0 {
|
||||
exp, e = parseDecomposition(decmap, true)
|
||||
if e != nil {
|
||||
logger.Fatalf(`%U: bad decomp |%v|: "%s"`, int(x), decmap, e)
|
||||
}
|
||||
isCompat = true
|
||||
}
|
||||
|
||||
char.name = p.String(ucd.Name)
|
||||
char.codePoint = r
|
||||
}
|
||||
for i := firstChar; i <= lastChar; i++ {
|
||||
char := &chars[i]
|
||||
char.name = field[FName]
|
||||
char.codePoint = i
|
||||
char.forms[FCompatibility].decomp = exp
|
||||
if !isCompat {
|
||||
char.forms[FCanonical].decomp = exp
|
||||
@@ -229,9 +306,24 @@ func loadUnicodeData() {
|
||||
if len(decmap) > 0 {
|
||||
char.forms[FCompatibility].decomp = exp
|
||||
}
|
||||
char.ccc = ccc
|
||||
char.state = SMissing
|
||||
if i == lastChar {
|
||||
char.state = state
|
||||
}
|
||||
}
|
||||
if err := p.Err(); err != nil {
|
||||
log.Fatal(err)
|
||||
return
|
||||
}
|
||||
|
||||
func loadUnicodeData() {
|
||||
f := openReader("UnicodeData.txt")
|
||||
defer f.Close()
|
||||
scanner := bufio.NewScanner(f)
|
||||
for scanner.Scan() {
|
||||
parseCharacter(scanner.Text())
|
||||
}
|
||||
if scanner.Err() != nil {
|
||||
logger.Fatal(scanner.Err())
|
||||
}
|
||||
}
|
||||
|
||||
@@ -262,22 +354,47 @@ func compactCCC() {
|
||||
}
|
||||
}
|
||||
|
||||
var singlePointRe = regexp.MustCompile(`^([0-9A-F]+) *$`)
|
||||
|
||||
// CompositionExclusions.txt has form:
|
||||
// 0958 # ...
|
||||
// See http://unicode.org/reports/tr44/ for full explanation
|
||||
func parseExclusion(line string) int {
|
||||
comment := strings.Index(line, "#")
|
||||
if comment >= 0 {
|
||||
line = line[0:comment]
|
||||
}
|
||||
if len(line) == 0 {
|
||||
return 0
|
||||
}
|
||||
matches := singlePointRe.FindStringSubmatch(line)
|
||||
if len(matches) != 2 {
|
||||
logger.Fatalf("%s: %d matches (expected 1)\n", line, len(matches))
|
||||
}
|
||||
point, err := strconv.ParseUint(matches[1], 16, 64)
|
||||
if err != nil {
|
||||
logger.Fatalf("%.5s...: %s", line, err)
|
||||
}
|
||||
return int(point)
|
||||
}
|
||||
|
||||
func loadCompositionExclusions() {
|
||||
f := gen.OpenUCDFile("CompositionExclusions.txt")
|
||||
f := openReader("CompositionExclusions.txt")
|
||||
defer f.Close()
|
||||
p := ucd.New(f)
|
||||
for p.Next() {
|
||||
c := &chars[p.Rune(0)]
|
||||
scanner := bufio.NewScanner(f)
|
||||
for scanner.Scan() {
|
||||
point := parseExclusion(scanner.Text())
|
||||
if point == 0 {
|
||||
continue
|
||||
}
|
||||
c := &chars[point]
|
||||
if c.excludeInComp {
|
||||
log.Fatalf("%U: Duplicate entry in exclusions.", c.codePoint)
|
||||
logger.Fatalf("%U: Duplicate entry in exclusions.", c.codePoint)
|
||||
}
|
||||
c.excludeInComp = true
|
||||
}
|
||||
if e := p.Err(); e != nil {
|
||||
log.Fatal(e)
|
||||
if scanner.Err() != nil {
|
||||
log.Fatal(scanner.Err())
|
||||
}
|
||||
}
|
||||
|
||||
@@ -471,22 +588,29 @@ func computeNonStarterCounts() {
|
||||
if exp := c.forms[FCompatibility].expandedDecomp; len(exp) > 0 {
|
||||
runes = exp
|
||||
}
|
||||
// We consider runes that combine backwards to be non-starters for the
|
||||
// purpose of Stream-Safe Text Processing.
|
||||
for _, r := range runes {
|
||||
if cr := &chars[r]; cr.ccc == 0 && !cr.forms[FCompatibility].combinesBackward {
|
||||
if chars[r].ccc == 0 {
|
||||
break
|
||||
}
|
||||
c.nLeadingNonStarters++
|
||||
}
|
||||
for i := len(runes) - 1; i >= 0; i-- {
|
||||
if cr := &chars[runes[i]]; cr.ccc == 0 && !cr.forms[FCompatibility].combinesBackward {
|
||||
if chars[runes[i]].ccc == 0 {
|
||||
break
|
||||
}
|
||||
c.nTrailingNonStarters++
|
||||
}
|
||||
if c.nTrailingNonStarters > 3 {
|
||||
log.Fatalf("%U: Decomposition with more than 3 (%d) trailing modifiers (%U)", i, c.nTrailingNonStarters, runes)
|
||||
|
||||
// We consider runes that combine backwards to be non-starters for the
|
||||
// purpose of Stream-Safe Text Processing.
|
||||
for _, f := range c.forms {
|
||||
if c.ccc == 0 && f.combinesBackward {
|
||||
if len(c.forms[FCompatibility].expandedDecomp) > 0 {
|
||||
log.Fatalf("%U: CCC==0 modifier with an expansion is not supported.", i)
|
||||
}
|
||||
c.nTrailingNonStarters = 1
|
||||
c.nLeadingNonStarters = 1
|
||||
}
|
||||
}
|
||||
|
||||
if isHangul(rune(i)) {
|
||||
@@ -505,19 +629,19 @@ func computeNonStarterCounts() {
|
||||
}
|
||||
}
|
||||
|
||||
func printBytes(w io.Writer, b []byte, name string) {
|
||||
fmt.Fprintf(w, "// %s: %d bytes\n", name, len(b))
|
||||
fmt.Fprintf(w, "var %s = [...]byte {", name)
|
||||
func printBytes(b []byte, name string) {
|
||||
fmt.Printf("// %s: %d bytes\n", name, len(b))
|
||||
fmt.Printf("var %s = [...]byte {", name)
|
||||
for i, c := range b {
|
||||
switch {
|
||||
case i%64 == 0:
|
||||
fmt.Fprintf(w, "\n// Bytes %x - %x\n", i, i+63)
|
||||
fmt.Printf("\n// Bytes %x - %x\n", i, i+63)
|
||||
case i%8 == 0:
|
||||
fmt.Fprintf(w, "\n")
|
||||
fmt.Printf("\n")
|
||||
}
|
||||
fmt.Fprintf(w, "0x%.2X, ", c)
|
||||
fmt.Printf("0x%.2X, ", c)
|
||||
}
|
||||
fmt.Fprint(w, "\n}\n\n")
|
||||
fmt.Print("\n}\n\n")
|
||||
}
|
||||
|
||||
// See forminfo.go for format.
|
||||
@@ -573,13 +697,13 @@ func (m *decompSet) insert(key int, s string) {
|
||||
m[key][s] = true
|
||||
}
|
||||
|
||||
func printCharInfoTables(w io.Writer) int {
|
||||
func printCharInfoTables() int {
|
||||
mkstr := func(r rune, f *FormInfo) (int, string) {
|
||||
d := f.expandedDecomp
|
||||
s := string([]rune(d))
|
||||
if max := 1 << 6; len(s) >= max {
|
||||
const msg = "%U: too many bytes in decomposition: %d >= %d"
|
||||
log.Fatalf(msg, r, len(s), max)
|
||||
logger.Fatalf(msg, r, len(s), max)
|
||||
}
|
||||
head := uint8(len(s))
|
||||
if f.quickCheck[MComposed] != QCYes {
|
||||
@@ -594,11 +718,11 @@ func printCharInfoTables(w io.Writer) int {
|
||||
tccc := ccc(d[len(d)-1])
|
||||
cc := ccc(r)
|
||||
if cc != 0 && lccc == 0 && tccc == 0 {
|
||||
log.Fatalf("%U: trailing and leading ccc are 0 for non-zero ccc %d", r, cc)
|
||||
logger.Fatalf("%U: trailing and leading ccc are 0 for non-zero ccc %d", r, cc)
|
||||
}
|
||||
if tccc < lccc && lccc != 0 {
|
||||
const msg = "%U: lccc (%d) must be <= tcc (%d)"
|
||||
log.Fatalf(msg, r, lccc, tccc)
|
||||
logger.Fatalf(msg, r, lccc, tccc)
|
||||
}
|
||||
index := normalDecomp
|
||||
nTrail := chars[r].nTrailingNonStarters
|
||||
@@ -615,13 +739,13 @@ func printCharInfoTables(w io.Writer) int {
|
||||
if lccc > 0 {
|
||||
s += string([]byte{lccc})
|
||||
if index == firstCCC {
|
||||
log.Fatalf("%U: multi-segment decomposition not supported for decompositions with leading CCC != 0", r)
|
||||
logger.Fatalf("%U: multi-segment decomposition not supported for decompositions with leading CCC != 0", r)
|
||||
}
|
||||
index = firstLeadingCCC
|
||||
}
|
||||
if cc != lccc {
|
||||
if cc != 0 {
|
||||
log.Fatalf("%U: for lccc != ccc, expected ccc to be 0; was %d", r, cc)
|
||||
logger.Fatalf("%U: for lccc != ccc, expected ccc to be 0; was %d", r, cc)
|
||||
}
|
||||
index = firstCCCZeroExcept
|
||||
}
|
||||
@@ -643,7 +767,7 @@ func printCharInfoTables(w io.Writer) int {
|
||||
continue
|
||||
}
|
||||
if f.combinesBackward {
|
||||
log.Fatalf("%U: combinesBackward and decompose", c.codePoint)
|
||||
logger.Fatalf("%U: combinesBackward and decompose", c.codePoint)
|
||||
}
|
||||
index, s := mkstr(c.codePoint, &f)
|
||||
decompSet.insert(index, s)
|
||||
@@ -654,7 +778,7 @@ func printCharInfoTables(w io.Writer) int {
|
||||
size := 0
|
||||
positionMap := make(map[string]uint16)
|
||||
decompositions.WriteString("\000")
|
||||
fmt.Fprintln(w, "const (")
|
||||
fmt.Println("const (")
|
||||
for i, m := range decompSet {
|
||||
sa := []string{}
|
||||
for s := range m {
|
||||
@@ -667,44 +791,39 @@ func printCharInfoTables(w io.Writer) int {
|
||||
positionMap[s] = uint16(p)
|
||||
}
|
||||
if cname[i] != "" {
|
||||
fmt.Fprintf(w, "%s = 0x%X\n", cname[i], decompositions.Len())
|
||||
fmt.Printf("%s = 0x%X\n", cname[i], decompositions.Len())
|
||||
}
|
||||
}
|
||||
fmt.Fprintln(w, "maxDecomp = 0x8000")
|
||||
fmt.Fprintln(w, ")")
|
||||
fmt.Println("maxDecomp = 0x8000")
|
||||
fmt.Println(")")
|
||||
b := decompositions.Bytes()
|
||||
printBytes(w, b, "decomps")
|
||||
printBytes(b, "decomps")
|
||||
size += len(b)
|
||||
|
||||
varnames := []string{"nfc", "nfkc"}
|
||||
for i := 0; i < FNumberOfFormTypes; i++ {
|
||||
trie := triegen.NewTrie(varnames[i])
|
||||
|
||||
trie := newNode()
|
||||
for r, c := range chars {
|
||||
f := c.forms[i]
|
||||
d := f.expandedDecomp
|
||||
if len(d) != 0 {
|
||||
_, key := mkstr(c.codePoint, &f)
|
||||
trie.Insert(rune(r), uint64(positionMap[key]))
|
||||
trie.insert(rune(r), positionMap[key])
|
||||
if c.ccc != ccc(d[0]) {
|
||||
// We assume the lead ccc of a decomposition !=0 in this case.
|
||||
if ccc(d[0]) == 0 {
|
||||
log.Fatalf("Expected leading CCC to be non-zero; ccc is %d", c.ccc)
|
||||
logger.Fatalf("Expected leading CCC to be non-zero; ccc is %d", c.ccc)
|
||||
}
|
||||
}
|
||||
} else if c.nLeadingNonStarters > 0 && len(f.expandedDecomp) == 0 && c.ccc == 0 && !f.combinesBackward {
|
||||
// Handle cases where it can't be detected that the nLead should be equal
|
||||
// to nTrail.
|
||||
trie.Insert(c.codePoint, uint64(positionMap[nLeadStr]))
|
||||
trie.insert(c.codePoint, positionMap[nLeadStr])
|
||||
} else if v := makeEntry(&f, &c)<<8 | uint16(c.ccc); v != 0 {
|
||||
trie.Insert(c.codePoint, uint64(0x8000|v))
|
||||
trie.insert(c.codePoint, 0x8000|v)
|
||||
}
|
||||
}
|
||||
sz, err := trie.Gen(w, triegen.Compact(&normCompacter{name: varnames[i]}))
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
size += sz
|
||||
size += trie.printTables(varnames[i])
|
||||
}
|
||||
return size
|
||||
}
|
||||
@@ -718,9 +837,30 @@ func contains(sa []string, s string) bool {
|
||||
return false
|
||||
}
|
||||
|
||||
func makeTables() {
|
||||
w := &bytes.Buffer{}
|
||||
// Extract the version number from the URL.
|
||||
func version() string {
|
||||
// From http://www.unicode.org/standard/versions/#Version_Numbering:
|
||||
// for the later Unicode versions, data files are located in
|
||||
// versioned directories.
|
||||
fields := strings.Split(*url, "/")
|
||||
for _, f := range fields {
|
||||
if match, _ := regexp.MatchString(`[0-9]\.[0-9]\.[0-9]`, f); match {
|
||||
return f
|
||||
}
|
||||
}
|
||||
logger.Fatal("unknown version")
|
||||
return "Unknown"
|
||||
}
|
||||
|
||||
const fileHeader = `// Generated by running
|
||||
// maketables --tables=%s --url=%s
|
||||
// DO NOT EDIT
|
||||
|
||||
package norm
|
||||
|
||||
`
|
||||
|
||||
func makeTables() {
|
||||
size := 0
|
||||
if *tablelist == "" {
|
||||
return
|
||||
@@ -729,6 +869,7 @@ func makeTables() {
|
||||
if *tablelist == "all" {
|
||||
list = []string{"recomp", "info"}
|
||||
}
|
||||
fmt.Printf(fileHeader, *tablelist, *url)
|
||||
|
||||
// Compute maximum decomposition size.
|
||||
max := 0
|
||||
@@ -738,30 +879,30 @@ func makeTables() {
|
||||
}
|
||||
}
|
||||
|
||||
fmt.Fprintln(w, "const (")
|
||||
fmt.Fprintln(w, "\t// Version is the Unicode edition from which the tables are derived.")
|
||||
fmt.Fprintf(w, "\tVersion = %q\n", gen.UnicodeVersion())
|
||||
fmt.Fprintln(w)
|
||||
fmt.Fprintln(w, "\t// MaxTransformChunkSize indicates the maximum number of bytes that Transform")
|
||||
fmt.Fprintln(w, "\t// may need to write atomically for any Form. Making a destination buffer at")
|
||||
fmt.Fprintln(w, "\t// least this size ensures that Transform can always make progress and that")
|
||||
fmt.Fprintln(w, "\t// the user does not need to grow the buffer on an ErrShortDst.")
|
||||
fmt.Fprintf(w, "\tMaxTransformChunkSize = %d+maxNonStarters*4\n", len(string(0x034F))+max)
|
||||
fmt.Fprintln(w, ")\n")
|
||||
fmt.Println("const (")
|
||||
fmt.Println("\t// Version is the Unicode edition from which the tables are derived.")
|
||||
fmt.Printf("\tVersion = %q\n", version())
|
||||
fmt.Println()
|
||||
fmt.Println("\t// MaxTransformChunkSize indicates the maximum number of bytes that Transform")
|
||||
fmt.Println("\t// may need to write atomically for any Form. Making a destination buffer at")
|
||||
fmt.Println("\t// least this size ensures that Transform can always make progress and that")
|
||||
fmt.Println("\t// the user does not need to grow the buffer on an ErrShortDst.")
|
||||
fmt.Printf("\tMaxTransformChunkSize = %d+maxNonStarters*4\n", len(string(0x034F))+max)
|
||||
fmt.Println(")\n")
|
||||
|
||||
// Print the CCC remap table.
|
||||
size += len(cccMap)
|
||||
fmt.Fprintf(w, "var ccc = [%d]uint8{", len(cccMap))
|
||||
fmt.Printf("var ccc = [%d]uint8{", len(cccMap))
|
||||
for i := 0; i < len(cccMap); i++ {
|
||||
if i%8 == 0 {
|
||||
fmt.Fprintln(w)
|
||||
fmt.Println()
|
||||
}
|
||||
fmt.Fprintf(w, "%3d, ", cccMap[uint8(i)])
|
||||
fmt.Printf("%3d, ", cccMap[uint8(i)])
|
||||
}
|
||||
fmt.Fprintln(w, "\n}\n")
|
||||
fmt.Println("\n}\n")
|
||||
|
||||
if contains(list, "info") {
|
||||
size += printCharInfoTables(w)
|
||||
size += printCharInfoTables()
|
||||
}
|
||||
|
||||
if contains(list, "recomp") {
|
||||
@@ -783,21 +924,20 @@ func makeTables() {
|
||||
}
|
||||
sz := nrentries * 8
|
||||
size += sz
|
||||
fmt.Fprintf(w, "// recompMap: %d bytes (entries only)\n", sz)
|
||||
fmt.Fprintln(w, "var recompMap = map[uint32]rune{")
|
||||
fmt.Printf("// recompMap: %d bytes (entries only)\n", sz)
|
||||
fmt.Println("var recompMap = map[uint32]rune{")
|
||||
for i, c := range chars {
|
||||
f := c.forms[FCanonical]
|
||||
d := f.decomp
|
||||
if !f.isOneWay && len(d) > 0 {
|
||||
key := uint32(uint16(d[0]))<<16 + uint32(uint16(d[1]))
|
||||
fmt.Fprintf(w, "0x%.8X: 0x%.4X,\n", key, i)
|
||||
fmt.Printf("0x%.8X: 0x%.4X,\n", key, i)
|
||||
}
|
||||
}
|
||||
fmt.Fprintf(w, "}\n\n")
|
||||
fmt.Printf("}\n\n")
|
||||
}
|
||||
|
||||
fmt.Fprintf(w, "// Total size of tables: %dKB (%d bytes)\n", (size+512)/1024, size)
|
||||
gen.WriteGoFile("tables.go", "norm", w.Bytes())
|
||||
fmt.Printf("// Total size of tables: %dKB (%d bytes)\n", (size+512)/1024, size)
|
||||
}
|
||||
|
||||
func printChars() {
|
||||
@@ -832,16 +972,10 @@ func verifyComputed() {
|
||||
continue
|
||||
}
|
||||
if a, b := c.nLeadingNonStarters > 0, (c.ccc > 0 || f.combinesBackward); a != b {
|
||||
// We accept these runes to be treated differently (it only affects
|
||||
// segment breaking in iteration, most likely on improper use), but
|
||||
// We accept these two runes to be treated differently (it only affects
|
||||
// segment breaking in iteration, most likely on inproper use), but
|
||||
// reconsider if more characters are added.
|
||||
// U+FF9E HALFWIDTH KATAKANA VOICED SOUND MARK;Lm;0;L;<narrow> 3099;;;;N;;;;;
|
||||
// U+FF9F HALFWIDTH KATAKANA SEMI-VOICED SOUND MARK;Lm;0;L;<narrow> 309A;;;;N;;;;;
|
||||
// U+3133 HANGUL LETTER KIYEOK-SIOS;Lo;0;L;<compat> 11AA;;;;N;HANGUL LETTER GIYEOG SIOS;;;;
|
||||
// U+318E HANGUL LETTER ARAEAE;Lo;0;L;<compat> 11A1;;;;N;HANGUL LETTER ALAE AE;;;;
|
||||
// U+FFA3 HALFWIDTH HANGUL LETTER KIYEOK-SIOS;Lo;0;L;<narrow> 3133;;;;N;HALFWIDTH HANGUL LETTER GIYEOG SIOS;;;;
|
||||
// U+FFDC HALFWIDTH HANGUL LETTER I;Lo;0;L;<narrow> 3163;;;;N;;;;;
|
||||
if i != 0xFF9E && i != 0xFF9F && !(0x3133 <= i && i <= 0x318E) && !(0xFFA3 <= i && i <= 0xFFDC) {
|
||||
if i != 0xFF9E && i != 0xFF9F {
|
||||
log.Fatalf("%U: nLead was %v; want %v", i, a, b)
|
||||
}
|
||||
}
|
||||
@@ -849,11 +983,13 @@ func verifyComputed() {
|
||||
nfc := c.forms[FCanonical]
|
||||
nfkc := c.forms[FCompatibility]
|
||||
if nfc.combinesBackward != nfkc.combinesBackward {
|
||||
log.Fatalf("%U: Cannot combine combinesBackward\n", c.codePoint)
|
||||
logger.Fatalf("%U: Cannot combine combinesBackward\n", c.codePoint)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
var qcRe = regexp.MustCompile(`([0-9A-F\.]+) *; (NF.*_QC); ([YNM]) #.*`)
|
||||
|
||||
// Use values in DerivedNormalizationProps.txt to compare against the
|
||||
// values we computed.
|
||||
// DerivedNormalizationProps.txt has form:
|
||||
@@ -861,15 +997,29 @@ func verifyComputed() {
|
||||
// 0374 ; NFD_QC; N # ...
|
||||
// See http://unicode.org/reports/tr44/ for full explanation
|
||||
func testDerived() {
|
||||
f := gen.OpenUCDFile("DerivedNormalizationProps.txt")
|
||||
f := openReader("DerivedNormalizationProps.txt")
|
||||
defer f.Close()
|
||||
p := ucd.New(f)
|
||||
for p.Next() {
|
||||
r := p.Rune(0)
|
||||
c := &chars[r]
|
||||
|
||||
scanner := bufio.NewScanner(f)
|
||||
for scanner.Scan() {
|
||||
line := scanner.Text()
|
||||
qc := qcRe.FindStringSubmatch(line)
|
||||
if qc == nil {
|
||||
continue
|
||||
}
|
||||
rng := strings.Split(qc[1], "..")
|
||||
i, err := strconv.ParseUint(rng[0], 16, 64)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
j := i
|
||||
if len(rng) > 1 {
|
||||
j, err = strconv.ParseUint(rng[1], 16, 64)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
}
|
||||
var ftype, mode int
|
||||
qt := p.String(1)
|
||||
qt := strings.TrimSpace(qc[2])
|
||||
switch qt {
|
||||
case "NFC_QC":
|
||||
ftype, mode = FCanonical, MComposed
|
||||
@@ -880,10 +1030,10 @@ func testDerived() {
|
||||
case "NFKD_QC":
|
||||
ftype, mode = FCompatibility, MDecomposed
|
||||
default:
|
||||
continue
|
||||
log.Fatalf(`Unexpected quick check type "%s"`, qt)
|
||||
}
|
||||
var qr QCResult
|
||||
switch p.String(2) {
|
||||
switch qc[3] {
|
||||
case "Y":
|
||||
qr = QCYes
|
||||
case "N":
|
||||
@@ -891,15 +1041,27 @@ func testDerived() {
|
||||
case "M":
|
||||
qr = QCMaybe
|
||||
default:
|
||||
log.Fatalf(`Unexpected quick check value "%s"`, p.String(2))
|
||||
log.Fatalf(`Unexpected quick check value "%s"`, qc[3])
|
||||
}
|
||||
if got := c.forms[ftype].quickCheck[mode]; got != qr {
|
||||
log.Printf("%U: FAILED %s (was %v need %v)\n", r, qt, got, qr)
|
||||
var lastFailed bool
|
||||
// Verify current
|
||||
for ; i <= j; i++ {
|
||||
c := &chars[int(i)]
|
||||
c.forms[ftype].verified[mode] = true
|
||||
curqr := c.forms[ftype].quickCheck[mode]
|
||||
if curqr != qr {
|
||||
if !lastFailed {
|
||||
logger.Printf("%s: %.4X..%.4X -- %s\n",
|
||||
qt, int(i), int(j), line[0:50])
|
||||
}
|
||||
logger.Printf("%U: FAILED %s (was %v need %v)\n",
|
||||
int(i), qt, curqr, qr)
|
||||
lastFailed = true
|
||||
}
|
||||
}
|
||||
c.forms[ftype].verified[mode] = true
|
||||
}
|
||||
if err := p.Err(); err != nil {
|
||||
log.Fatal(err)
|
||||
if scanner.Err() != nil {
|
||||
logger.Fatal(scanner.Err())
|
||||
}
|
||||
// Any unspecified value must be QCYes. Verify this.
|
||||
for i, c := range chars {
|
||||
@@ -907,14 +1069,20 @@ func testDerived() {
|
||||
for k, qr := range fd.quickCheck {
|
||||
if !fd.verified[k] && qr != QCYes {
|
||||
m := "%U: FAIL F:%d M:%d (was %v need Yes) %s\n"
|
||||
log.Printf(m, i, j, k, qr, c.name)
|
||||
logger.Printf(m, i, j, k, qr, c.name)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
var testHeader = `const (
|
||||
var testHeader = `// Generated by running
|
||||
// maketables --test --url=%s
|
||||
// +build test
|
||||
|
||||
package norm
|
||||
|
||||
const (
|
||||
Yes = iota
|
||||
No
|
||||
Maybe
|
||||
@@ -952,10 +1120,8 @@ func printTestdata() {
|
||||
nTrail uint8
|
||||
f string
|
||||
}
|
||||
|
||||
last := lastInfo{}
|
||||
w := &bytes.Buffer{}
|
||||
fmt.Fprintf(w, testHeader)
|
||||
fmt.Printf(testHeader, *url)
|
||||
for r, c := range chars {
|
||||
f := c.forms[FCanonical]
|
||||
qc, cf, d := f.quickCheck[MComposed], f.combinesForward, string(f.expandedDecomp)
|
||||
@@ -969,10 +1135,9 @@ func printTestdata() {
|
||||
}
|
||||
current := lastInfo{c.ccc, c.nLeadingNonStarters, c.nTrailingNonStarters, s}
|
||||
if last != current {
|
||||
fmt.Fprintf(w, "\t{0x%x, %d, %d, %d, %s},\n", r, c.origCCC, c.nLeadingNonStarters, c.nTrailingNonStarters, s)
|
||||
fmt.Printf("\t{0x%x, %d, %d, %d, %s},\n", r, c.origCCC, c.nLeadingNonStarters, c.nTrailingNonStarters, s)
|
||||
last = current
|
||||
}
|
||||
}
|
||||
fmt.Fprintln(w, "}")
|
||||
gen.WriteGoFile("data_test.go", "norm", w.Bytes())
|
||||
fmt.Println("}")
|
||||
}
|
||||
45
Godeps/_workspace/src/code.google.com/p/go.text/unicode/norm/maketesttables.go
generated
vendored
Normal file
@@ -0,0 +1,45 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// +build ignore

// Generate test data for trie code.

package main

import (
	"fmt"
)

func main() {
	printTestTables()
}

// We take the smallest, largest and an arbitrary value for each
// of the UTF-8 sequence lengths.
var testRunes = []rune{
	0x01, 0x0C, 0x7F, // 1-byte sequences
	0x80, 0x100, 0x7FF, // 2-byte sequences
	0x800, 0x999, 0xFFFF, // 3-byte sequences
	0x10000, 0x10101, 0x10FFFF, // 4-byte sequences
	0x200, 0x201, 0x202, 0x210, 0x215, // five entries in one sparse block
}

const fileHeader = `// Generated by running
// maketesttables
// DO NOT EDIT

package norm

`

func printTestTables() {
	fmt.Print(fileHeader)
	fmt.Printf("var testRunes = %#v\n\n", testRunes)
	t := newNode()
	for i, r := range testRunes {
		t.insert(r, uint16(i))
	}
	t.printTables("testdata")
}
14
Godeps/_workspace/src/code.google.com/p/go.text/unicode/norm/norm_test.go
generated
vendored
Normal file
@@ -0,0 +1,14 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package norm_test

import (
	"testing"
)

func TestPlaceHolder(t *testing.T) {
	// Does nothing, just allows the Makefile to be canonical
	// while waiting for the package itself to be written.
}
@@ -2,9 +2,6 @@
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

//go:generate go run maketables.go triegen.go
//go:generate go run maketables.go triegen.go -test

// Package norm contains types and functions for normalizing Unicode strings.
package norm
1086
Godeps/_workspace/src/code.google.com/p/go.text/unicode/norm/normalize_test.go
generated
vendored
Normal file
File diff suppressed because it is too large
318
Godeps/_workspace/src/code.google.com/p/go.text/unicode/norm/normregtest.go
generated
vendored
Normal file
@@ -0,0 +1,318 @@
|
||||
// Copyright 2011 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// +build ignore
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"bytes"
|
||||
"flag"
|
||||
"fmt"
|
||||
"log"
|
||||
"net/http"
|
||||
"os"
|
||||
"path"
|
||||
"regexp"
|
||||
"runtime"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
"unicode"
|
||||
"unicode/utf8"
|
||||
|
||||
"code.google.com/p/go.text/unicode/norm"
|
||||
)
|
||||
|
||||
func main() {
|
||||
flag.Parse()
|
||||
loadTestData()
|
||||
CharacterByCharacterTests()
|
||||
StandardTests()
|
||||
PerformanceTest()
|
||||
if errorCount == 0 {
|
||||
fmt.Println("PASS")
|
||||
}
|
||||
}
|
||||
|
||||
const file = "NormalizationTest.txt"
|
||||
|
||||
var url = flag.String("url",
|
||||
"http://www.unicode.org/Public/"+unicode.Version+"/ucd/"+file,
|
||||
"URL of Unicode database directory")
|
||||
var localFiles = flag.Bool("local",
|
||||
false,
|
||||
"data files have been copied to the current directory; for debugging only")
|
||||
|
||||
var logger = log.New(os.Stderr, "", log.Lshortfile)
|
||||
|
||||
// This regression test runs the test set in NormalizationTest.txt
|
||||
// (taken from http://www.unicode.org/Public/<unicode.Version>/ucd/).
|
||||
//
|
||||
// NormalizationTest.txt has form:
|
||||
// @Part0 # Specific cases
|
||||
// #
|
||||
// 1E0A;1E0A;0044 0307;1E0A;0044 0307; # (Ḋ; Ḋ; D◌̇; Ḋ; D◌̇; ) LATIN CAPITAL LETTER D WITH DOT ABOVE
|
||||
// 1E0C;1E0C;0044 0323;1E0C;0044 0323; # (Ḍ; Ḍ; D◌̣; Ḍ; D◌̣; ) LATIN CAPITAL LETTER D WITH DOT BELOW
|
||||
//
|
||||
// Each test has 5 columns (c1, c2, c3, c4, c5), where
|
||||
// (c1, c2, c3, c4, c5) == (c1, NFC(c1), NFD(c1), NFKC(c1), NFKD(c1))
|
||||
//
|
||||
// CONFORMANCE:
|
||||
// 1. The following invariants must be true for all conformant implementations
|
||||
//
|
||||
// NFC
|
||||
// c2 == NFC(c1) == NFC(c2) == NFC(c3)
|
||||
// c4 == NFC(c4) == NFC(c5)
|
||||
//
|
||||
// NFD
|
||||
// c3 == NFD(c1) == NFD(c2) == NFD(c3)
|
||||
// c5 == NFD(c4) == NFD(c5)
|
||||
//
|
||||
// NFKC
|
||||
// c4 == NFKC(c1) == NFKC(c2) == NFKC(c3) == NFKC(c4) == NFKC(c5)
|
||||
//
|
||||
// NFKD
|
||||
// c5 == NFKD(c1) == NFKD(c2) == NFKD(c3) == NFKD(c4) == NFKD(c5)
|
||||
//
|
||||
// 2. For every code point X assigned in this version of Unicode that is not
|
||||
// specifically listed in Part 1, the following invariants must be true
|
||||
// for all conformant implementations:
|
||||
//
|
||||
// X == NFC(X) == NFD(X) == NFKC(X) == NFKD(X)
|
||||
//
|
||||
|
||||
// Column types.
|
||||
const (
|
||||
cRaw = iota
|
||||
cNFC
|
||||
cNFD
|
||||
cNFKC
|
||||
cNFKD
|
||||
cMaxColumns
|
||||
)
|
||||
|
||||
// Holds data from NormalizationTest.txt
|
||||
var part []Part
|
||||
|
||||
type Part struct {
|
||||
name string
|
||||
number int
|
||||
tests []Test
|
||||
}
|
||||
|
||||
type Test struct {
|
||||
name string
|
||||
partnr int
|
||||
number int
|
||||
r rune // used for character by character test
|
||||
cols [cMaxColumns]string // Each has 5 entries, see below.
|
||||
}
|
||||
|
||||
func (t Test) Name() string {
|
||||
if t.number < 0 {
|
||||
return part[t.partnr].name
|
||||
}
|
||||
return fmt.Sprintf("%s:%d", part[t.partnr].name, t.number)
|
||||
}
|
||||
|
||||
var partRe = regexp.MustCompile(`@Part(\d) # (.*)$`)
|
||||
var testRe = regexp.MustCompile(`^` + strings.Repeat(`([\dA-F ]+);`, 5) + ` # (.*)$`)
|
||||
|
||||
var counter int
|
||||
|
||||
// Load the data from NormalizationTest.txt
|
||||
func loadTestData() {
|
||||
if *localFiles {
|
||||
pwd, _ := os.Getwd()
|
||||
*url = "file://" + path.Join(pwd, file)
|
||||
}
|
||||
t := &http.Transport{}
|
||||
t.RegisterProtocol("file", http.NewFileTransport(http.Dir("/")))
|
||||
c := &http.Client{Transport: t}
|
||||
resp, err := c.Get(*url)
|
||||
if err != nil {
|
||||
logger.Fatal(err)
|
||||
}
|
||||
if resp.StatusCode != 200 {
|
||||
logger.Fatal("bad GET status for "+file, resp.Status)
|
||||
}
|
||||
f := resp.Body
|
||||
defer f.Close()
|
||||
scanner := bufio.NewScanner(f)
|
||||
for scanner.Scan() {
|
||||
line := scanner.Text()
|
||||
if len(line) == 0 || line[0] == '#' {
|
||||
continue
|
||||
}
|
||||
m := partRe.FindStringSubmatch(line)
|
||||
if m != nil {
|
||||
if len(m) < 3 {
|
||||
logger.Fatal("Failed to parse Part: ", line)
|
||||
}
|
||||
i, err := strconv.Atoi(m[1])
|
||||
if err != nil {
|
||||
logger.Fatal(err)
|
||||
}
|
||||
name := m[2]
|
||||
part = append(part, Part{name: name[:len(name)-1], number: i})
|
||||
continue
|
||||
}
|
||||
m = testRe.FindStringSubmatch(line)
|
||||
if m == nil || len(m) < 7 {
|
||||
logger.Fatalf(`Failed to parse: "%s" result: %#v`, line, m)
|
||||
}
|
||||
test := Test{name: m[6], partnr: len(part) - 1, number: counter}
|
||||
counter++
|
||||
for j := 1; j < len(m)-1; j++ {
|
||||
for _, split := range strings.Split(m[j], " ") {
|
||||
r, err := strconv.ParseUint(split, 16, 64)
|
||||
if err != nil {
|
||||
logger.Fatal(err)
|
||||
}
|
||||
if test.r == 0 {
|
||||
// save for CharacterByCharacterTests
|
||||
test.r = rune(r)
|
||||
}
|
||||
var buf [utf8.UTFMax]byte
|
||||
sz := utf8.EncodeRune(buf[:], rune(r))
|
||||
test.cols[j-1] += string(buf[:sz])
|
||||
}
|
||||
}
|
||||
part := &part[len(part)-1]
|
||||
part.tests = append(part.tests, test)
|
||||
}
|
||||
if scanner.Err() != nil {
|
||||
logger.Fatal(scanner.Err())
|
||||
}
|
||||
}
|
||||
|
||||
var fstr = []string{"NFC", "NFD", "NFKC", "NFKD"}
|
||||
|
||||
var errorCount int
|
||||
|
||||
func cmpResult(t *Test, name string, f norm.Form, gold, test, result string) {
|
||||
if gold != result {
|
||||
errorCount++
|
||||
if errorCount > 20 {
|
||||
return
|
||||
}
|
||||
logger.Printf("%s:%s: %s(%+q)=%+q; want %+q: %s",
|
||||
t.Name(), name, fstr[f], test, result, gold, t.name)
|
||||
}
|
||||
}
|
||||
|
||||
func cmpIsNormal(t *Test, name string, f norm.Form, test string, result, want bool) {
|
||||
if result != want {
|
||||
errorCount++
|
||||
if errorCount > 20 {
|
||||
return
|
||||
}
|
||||
logger.Printf("%s:%s: %s(%+q)=%v; want %v", t.Name(), name, fstr[f], test, result, want)
|
||||
}
|
||||
}
|
||||
|
||||
func doTest(t *Test, f norm.Form, gold, test string) {
|
||||
testb := []byte(test)
|
||||
result := f.Bytes(testb)
|
||||
cmpResult(t, "Bytes", f, gold, test, string(result))
|
||||
|
||||
sresult := f.String(test)
|
||||
cmpResult(t, "String", f, gold, test, sresult)
|
||||
|
||||
acc := []byte{}
|
||||
i := norm.Iter{}
|
||||
i.InitString(f, test)
|
||||
for !i.Done() {
|
||||
acc = append(acc, i.Next()...)
|
||||
}
|
||||
cmpResult(t, "Iter.Next", f, gold, test, string(acc))
|
||||
|
||||
buf := make([]byte, 128)
|
||||
acc = nil
|
||||
for p := 0; p < len(testb); {
|
||||
nDst, nSrc, _ := f.Transform(buf, testb[p:], true)
|
||||
acc = append(acc, buf[:nDst]...)
|
||||
p += nSrc
|
||||
}
|
||||
cmpResult(t, "Transform", f, gold, test, string(acc))
|
||||
|
||||
for i := range test {
|
||||
out := f.Append(f.Bytes([]byte(test[:i])), []byte(test[i:])...)
|
||||
cmpResult(t, fmt.Sprintf(":Append:%d", i), f, gold, test, string(out))
|
||||
}
|
||||
cmpIsNormal(t, "IsNormal", f, test, f.IsNormal([]byte(test)), test == gold)
|
||||
cmpIsNormal(t, "IsNormalString", f, test, f.IsNormalString(test), test == gold)
|
||||
}
|
||||
|
||||
func doConformanceTests(t *Test, partn int) {
|
||||
for i := 0; i <= 2; i++ {
|
||||
doTest(t, norm.NFC, t.cols[1], t.cols[i])
|
||||
doTest(t, norm.NFD, t.cols[2], t.cols[i])
|
||||
doTest(t, norm.NFKC, t.cols[3], t.cols[i])
|
||||
doTest(t, norm.NFKD, t.cols[4], t.cols[i])
|
||||
}
|
||||
for i := 3; i <= 4; i++ {
|
||||
doTest(t, norm.NFC, t.cols[3], t.cols[i])
|
||||
doTest(t, norm.NFD, t.cols[4], t.cols[i])
|
||||
doTest(t, norm.NFKC, t.cols[3], t.cols[i])
|
||||
doTest(t, norm.NFKD, t.cols[4], t.cols[i])
|
||||
}
|
||||
}
|
||||
|
||||
func CharacterByCharacterTests() {
|
||||
tests := part[1].tests
|
||||
var last rune = 0
|
||||
for i := 0; i <= len(tests); i++ { // last one is special case
|
||||
var r rune
|
||||
if i == len(tests) {
|
||||
r = 0x2FA1E // Don't have to go to 0x10FFFF
|
||||
} else {
|
||||
r = tests[i].r
|
||||
}
|
||||
for last++; last < r; last++ {
|
||||
// Check all characters that were not explicitly listed in the test.
|
||||
t := &Test{partnr: 1, number: -1}
|
||||
char := string(last)
|
||||
doTest(t, norm.NFC, char, char)
|
||||
doTest(t, norm.NFD, char, char)
|
||||
doTest(t, norm.NFKC, char, char)
|
||||
doTest(t, norm.NFKD, char, char)
|
||||
}
|
||||
if i < len(tests) {
|
||||
doConformanceTests(&tests[i], 1)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func StandardTests() {
|
||||
for _, j := range []int{0, 2, 3} {
|
||||
for _, test := range part[j].tests {
|
||||
doConformanceTests(&test, j)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// PerformanceTest verifies that normalization is O(n). If any of the
|
||||
// code does not properly check for maxCombiningChars, normalization
|
||||
// may exhibit O(n**2) behavior.
|
||||
func PerformanceTest() {
|
||||
runtime.GOMAXPROCS(2)
|
||||
success := make(chan bool, 1)
|
||||
go func() {
|
||||
buf := bytes.Repeat([]byte("\u035D"), 1024*1024)
|
||||
buf = append(buf, "\u035B"...)
|
||||
norm.NFC.Append(nil, buf...)
|
||||
success <- true
|
||||
}()
|
||||
timeout := time.After(1 * time.Second)
|
||||
select {
|
||||
case <-success:
|
||||
// test completed before the timeout
|
||||
case <-timeout:
|
||||
errorCount++
|
||||
logger.Printf(`unexpectedly long time to complete PerformanceTest`)
|
||||
}
|
||||
}
|
||||
56
Godeps/_workspace/src/code.google.com/p/go.text/unicode/norm/readwriter_test.go
generated
vendored
Normal file
@@ -0,0 +1,56 @@
|
||||
// Copyright 2011 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package norm
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"testing"
|
||||
)
|
||||
|
||||
var bufSizes = []int{1, 2, 3, 4, 5, 6, 7, 8, 100, 101, 102, 103, 4000, 4001, 4002, 4003}
|
||||
|
||||
func readFunc(size int) appendFunc {
|
||||
return func(f Form, out []byte, s string) []byte {
|
||||
out = append(out, s...)
|
||||
r := f.Reader(bytes.NewBuffer(out))
|
||||
buf := make([]byte, size)
|
||||
result := []byte{}
|
||||
for n, err := 0, error(nil); err == nil; {
|
||||
n, err = r.Read(buf)
|
||||
result = append(result, buf[:n]...)
|
||||
}
|
||||
return result
|
||||
}
|
||||
}
|
||||
|
||||
func TestReader(t *testing.T) {
|
||||
for _, s := range bufSizes {
|
||||
name := fmt.Sprintf("TestReader%d", s)
|
||||
runNormTests(t, name, readFunc(s))
|
||||
}
|
||||
}
|
||||
|
||||
func writeFunc(size int) appendFunc {
|
||||
return func(f Form, out []byte, s string) []byte {
|
||||
in := append(out, s...)
|
||||
result := new(bytes.Buffer)
|
||||
w := f.Writer(result)
|
||||
buf := make([]byte, size)
|
||||
for n := 0; len(in) > 0; in = in[n:] {
|
||||
n = copy(buf, in)
|
||||
_, _ = w.Write(buf[:n])
|
||||
}
|
||||
w.Close()
|
||||
return result.Bytes()
|
||||
}
|
||||
}
|
||||
|
||||
func TestWriter(t *testing.T) {
|
||||
for _, s := range bufSizes {
|
||||
name := fmt.Sprintf("TestWriter%d", s)
|
||||
runNormTests(t, name, writeFunc(s))
|
||||
}
|
||||
}
|
||||
6989
Godeps/_workspace/src/code.google.com/p/go.text/unicode/norm/tables.go
generated
vendored
Normal file
File diff suppressed because it is too large
@@ -7,16 +7,13 @@ package norm
import (
	"unicode/utf8"

	"golang.org/x/text/transform"
	"code.google.com/p/go.text/transform"
)

// Reset implements the Reset method of the transform.Transformer interface.
func (Form) Reset() {}

// Transform implements the Transform method of the transform.Transformer
// interface. It may need to write segments of up to MaxSegmentSize at once.
// Users should either catch ErrShortDst and allow dst to grow or have dst be at
// least of size MaxTransformChunkSize to be guaranteed of progress.
// Transform implements the transform.Transformer interface. It may need to
// write segments of up to MaxSegmentSize at once. Users should either catch
// ErrShortDst and allow dst to grow or have dst be at least of size
// MaxTransformChunkSize to be guaranteed of progress.
func (f Form) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
	n := 0
	// Cap the maximum number of src bytes to check.
101
Godeps/_workspace/src/code.google.com/p/go.text/unicode/norm/transform_test.go
generated
vendored
Normal file
@@ -0,0 +1,101 @@
|
||||
// Copyright 2011 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package norm
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"testing"
|
||||
|
||||
"code.google.com/p/go.text/transform"
|
||||
)
|
||||
|
||||
func TestTransform(t *testing.T) {
|
||||
tests := []struct {
|
||||
f Form
|
||||
in, out string
|
||||
eof bool
|
||||
dstSize int
|
||||
err error
|
||||
}{
|
||||
{NFC, "ab", "ab", true, 2, nil},
|
||||
{NFC, "qx", "qx", true, 2, nil},
|
||||
{NFD, "qx", "qx", true, 2, nil},
|
||||
{NFC, "", "", true, 1, nil},
|
||||
{NFD, "", "", true, 1, nil},
|
||||
{NFC, "", "", false, 1, nil},
|
||||
{NFD, "", "", false, 1, nil},
|
||||
|
||||
// Normalized segment does not fit in destination.
|
||||
{NFD, "ö", "", true, 1, transform.ErrShortDst},
|
||||
{NFD, "ö", "", true, 2, transform.ErrShortDst},
|
||||
|
||||
// As an artifact of the algorithm, only full segments are written.
|
||||
// This is not strictly required, and some bytes could be written.
|
||||
// In practice, for Transform to not block, the destination buffer
|
||||
// should be at least MaxSegmentSize to work anyway and these edge
|
||||
// conditions will be relatively rare.
|
||||
{NFC, "ab", "", true, 1, transform.ErrShortDst},
|
||||
// This is even true for inert runes.
|
||||
{NFC, "qx", "", true, 1, transform.ErrShortDst},
|
||||
{NFC, "a\u0300abc", "\u00e0a", true, 4, transform.ErrShortDst},
|
||||
|
||||
// We cannot write a segment if successive runes could still change the result.
|
||||
{NFD, "ö", "", false, 3, transform.ErrShortSrc},
|
||||
{NFC, "a\u0300", "", false, 4, transform.ErrShortSrc},
|
||||
{NFD, "a\u0300", "", false, 4, transform.ErrShortSrc},
|
||||
{NFC, "ö", "", false, 3, transform.ErrShortSrc},
|
||||
|
||||
{NFC, "a\u0300", "", true, 1, transform.ErrShortDst},
|
||||
// Theoretically could fit, but won't due to simplified checks.
|
||||
{NFC, "a\u0300", "", true, 2, transform.ErrShortDst},
|
||||
{NFC, "a\u0300", "", true, 3, transform.ErrShortDst},
|
||||
{NFC, "a\u0300", "\u00e0", true, 4, nil},
|
||||
|
||||
{NFD, "öa\u0300", "o\u0308", false, 8, transform.ErrShortSrc},
|
||||
{NFD, "öa\u0300ö", "o\u0308a\u0300", true, 8, transform.ErrShortDst},
|
||||
{NFD, "öa\u0300ö", "o\u0308a\u0300", false, 12, transform.ErrShortSrc},
|
||||
|
||||
// Illegal input is copied verbatim.
|
||||
{NFD, "\xbd\xb2=\xbc ", "\xbd\xb2=\xbc ", true, 8, nil},
|
||||
}
|
||||
b := make([]byte, 100)
|
||||
for i, tt := range tests {
|
||||
nDst, _, err := tt.f.Transform(b[:tt.dstSize], []byte(tt.in), tt.eof)
|
||||
out := string(b[:nDst])
|
||||
if out != tt.out || err != tt.err {
|
||||
t.Errorf("%d: was %+q (%v); want %+q (%v)", i, out, err, tt.out, tt.err)
|
||||
}
|
||||
if want := tt.f.String(tt.in)[:nDst]; want != out {
|
||||
t.Errorf("%d: incorect normalization: was %+q; want %+q", i, out, want)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
var transBufSizes = []int{
|
||||
MaxTransformChunkSize,
|
||||
3 * MaxTransformChunkSize / 2,
|
||||
2 * MaxTransformChunkSize,
|
||||
3 * MaxTransformChunkSize,
|
||||
100 * MaxTransformChunkSize,
|
||||
}
|
||||
|
||||
func doTransNorm(f Form, buf []byte, b []byte) []byte {
|
||||
acc := []byte{}
|
||||
for p := 0; p < len(b); {
|
||||
nd, ns, _ := f.Transform(buf[:], b[p:], true)
|
||||
p += ns
|
||||
acc = append(acc, buf[:nd]...)
|
||||
}
|
||||
return acc
|
||||
}
|
||||
|
||||
func TestTransformNorm(t *testing.T) {
|
||||
for _, sz := range transBufSizes {
|
||||
buf := make([]byte, sz)
|
||||
runNormTests(t, fmt.Sprintf("Transform:%d", sz), func(f Form, out []byte, s string) []byte {
|
||||
return doTransNorm(f, buf, append(out, s...))
|
||||
})
|
||||
}
|
||||
}
|
||||
232
Godeps/_workspace/src/code.google.com/p/go.text/unicode/norm/trie.go
generated
vendored
Normal file
@@ -0,0 +1,232 @@
|
||||
// Copyright 2011 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package norm
|
||||
|
||||
type valueRange struct {
|
||||
value uint16 // header: value:stride
|
||||
lo, hi byte // header: lo:n
|
||||
}
|
||||
|
||||
type trie struct {
|
||||
index []uint8
|
||||
values []uint16
|
||||
sparse []valueRange
|
||||
sparseOffset []uint16
|
||||
cutoff uint8 // indices >= cutoff are sparse
|
||||
}
|
||||
|
||||
// lookupValue determines the type of block n and looks up the value for b.
|
||||
// For n < t.cutoff, the block is a simple lookup table. Otherwise, the block
|
||||
// is a list of ranges with an accompanying value. Given a matching range r,
|
||||
// the value for b is given by r.value + (b - r.lo) * stride.
|
||||
func (t *trie) lookupValue(n uint8, b byte) uint16 {
|
||||
if n < t.cutoff {
|
||||
return t.values[uint16(n)<<6+uint16(b)]
|
||||
}
|
||||
offset := t.sparseOffset[n-t.cutoff]
|
||||
header := t.sparse[offset]
|
||||
lo := offset + 1
|
||||
hi := lo + uint16(header.lo)
|
||||
for lo < hi {
|
||||
m := lo + (hi-lo)/2
|
||||
r := t.sparse[m]
|
||||
if r.lo <= b && b <= r.hi {
|
||||
return r.value + uint16(b-r.lo)*header.value
|
||||
}
|
||||
if b < r.lo {
|
||||
hi = m
|
||||
} else {
|
||||
lo = m + 1
|
||||
}
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
const (
|
||||
t1 = 0x00 // 0000 0000
|
||||
tx = 0x80 // 1000 0000
|
||||
t2 = 0xC0 // 1100 0000
|
||||
t3 = 0xE0 // 1110 0000
|
||||
t4 = 0xF0 // 1111 0000
|
||||
t5 = 0xF8 // 1111 1000
|
||||
t6 = 0xFC // 1111 1100
|
||||
te = 0xFE // 1111 1110
|
||||
)
|
||||
|
||||
// lookup returns the trie value for the first UTF-8 encoding in s and
|
||||
// the width in bytes of this encoding. The size will be 0 if s does not
|
||||
// hold enough bytes to complete the encoding. len(s) must be greater than 0.
|
||||
func (t *trie) lookup(s []byte) (v uint16, sz int) {
|
||||
c0 := s[0]
|
||||
switch {
|
||||
case c0 < tx:
|
||||
return t.values[c0], 1
|
||||
case c0 < t2:
|
||||
return 0, 1
|
||||
case c0 < t3:
|
||||
if len(s) < 2 {
|
||||
return 0, 0
|
||||
}
|
||||
i := t.index[c0]
|
||||
c1 := s[1]
|
||||
if c1 < tx || t2 <= c1 {
|
||||
return 0, 1
|
||||
}
|
||||
return t.lookupValue(i, c1), 2
|
||||
case c0 < t4:
|
||||
if len(s) < 3 {
|
||||
return 0, 0
|
||||
}
|
||||
i := t.index[c0]
|
||||
c1 := s[1]
|
||||
if c1 < tx || t2 <= c1 {
|
||||
return 0, 1
|
||||
}
|
||||
o := uint16(i)<<6 + uint16(c1)
|
||||
i = t.index[o]
|
||||
c2 := s[2]
|
||||
if c2 < tx || t2 <= c2 {
|
||||
return 0, 2
|
||||
}
|
||||
return t.lookupValue(i, c2), 3
|
||||
case c0 < t5:
|
||||
if len(s) < 4 {
|
||||
return 0, 0
|
||||
}
|
||||
i := t.index[c0]
|
||||
c1 := s[1]
|
||||
if c1 < tx || t2 <= c1 {
|
||||
return 0, 1
|
||||
}
|
||||
o := uint16(i)<<6 + uint16(c1)
|
||||
i = t.index[o]
|
||||
c2 := s[2]
|
||||
if c2 < tx || t2 <= c2 {
|
||||
return 0, 2
|
||||
}
|
||||
o = uint16(i)<<6 + uint16(c2)
|
||||
i = t.index[o]
|
||||
c3 := s[3]
|
||||
if c3 < tx || t2 <= c3 {
|
||||
return 0, 3
|
||||
}
|
||||
return t.lookupValue(i, c3), 4
|
||||
}
|
||||
// Illegal rune
|
||||
return 0, 1
|
||||
}
|
||||
|
||||
// lookupString returns the trie value for the first UTF-8 encoding in s and
|
||||
// the width in bytes of this encoding. The size will be 0 if s does not
|
||||
// hold enough bytes to complete the encoding. len(s) must be greater than 0.
|
||||
func (t *trie) lookupString(s string) (v uint16, sz int) {
|
||||
c0 := s[0]
|
||||
switch {
|
||||
case c0 < tx:
|
||||
return t.values[c0], 1
|
||||
case c0 < t2:
|
||||
return 0, 1
|
||||
case c0 < t3:
|
||||
if len(s) < 2 {
|
||||
return 0, 0
|
||||
}
|
||||
i := t.index[c0]
|
||||
c1 := s[1]
|
||||
if c1 < tx || t2 <= c1 {
|
||||
return 0, 1
|
||||
}
|
||||
return t.lookupValue(i, c1), 2
|
||||
case c0 < t4:
|
||||
if len(s) < 3 {
|
||||
return 0, 0
|
||||
}
|
||||
i := t.index[c0]
|
||||
c1 := s[1]
|
||||
if c1 < tx || t2 <= c1 {
|
||||
return 0, 1
|
||||
}
|
||||
o := uint16(i)<<6 + uint16(c1)
|
||||
i = t.index[o]
|
||||
c2 := s[2]
|
||||
if c2 < tx || t2 <= c2 {
|
||||
return 0, 2
|
||||
}
|
||||
return t.lookupValue(i, c2), 3
|
||||
case c0 < t5:
|
||||
if len(s) < 4 {
|
||||
return 0, 0
|
||||
}
|
||||
i := t.index[c0]
|
||||
c1 := s[1]
|
||||
if c1 < tx || t2 <= c1 {
|
||||
return 0, 1
|
||||
}
|
||||
o := uint16(i)<<6 + uint16(c1)
|
||||
i = t.index[o]
|
||||
c2 := s[2]
|
||||
if c2 < tx || t2 <= c2 {
|
||||
return 0, 2
|
||||
}
|
||||
o = uint16(i)<<6 + uint16(c2)
|
||||
i = t.index[o]
|
||||
c3 := s[3]
|
||||
if c3 < tx || t2 <= c3 {
|
||||
return 0, 3
|
||||
}
|
||||
return t.lookupValue(i, c3), 4
|
||||
}
|
||||
// Illegal rune
|
||||
return 0, 1
|
||||
}
|
||||
|
||||
// lookupUnsafe returns the trie value for the first UTF-8 encoding in s.
|
||||
// s must hold a full encoding.
|
||||
func (t *trie) lookupUnsafe(s []byte) uint16 {
|
||||
c0 := s[0]
|
||||
if c0 < tx {
|
||||
return t.values[c0]
|
||||
}
|
||||
if c0 < t2 {
|
||||
return 0
|
||||
}
|
||||
i := t.index[c0]
|
||||
if c0 < t3 {
|
||||
return t.lookupValue(i, s[1])
|
||||
}
|
||||
i = t.index[uint16(i)<<6+uint16(s[1])]
|
||||
if c0 < t4 {
|
||||
return t.lookupValue(i, s[2])
|
||||
}
|
||||
i = t.index[uint16(i)<<6+uint16(s[2])]
|
||||
if c0 < t5 {
|
||||
return t.lookupValue(i, s[3])
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
// lookupStringUnsafe returns the trie value for the first UTF-8 encoding in s.
|
||||
// s must hold a full encoding.
|
||||
func (t *trie) lookupStringUnsafe(s string) uint16 {
|
||||
c0 := s[0]
|
||||
if c0 < tx {
|
||||
return t.values[c0]
|
||||
}
|
||||
if c0 < t2 {
|
||||
return 0
|
||||
}
|
||||
i := t.index[c0]
|
||||
if c0 < t3 {
|
||||
return t.lookupValue(i, s[1])
|
||||
}
|
||||
i = t.index[uint16(i)<<6+uint16(s[1])]
|
||||
if c0 < t4 {
|
||||
return t.lookupValue(i, s[2])
|
||||
}
|
||||
i = t.index[uint16(i)<<6+uint16(s[2])]
|
||||
if c0 < t5 {
|
||||
return t.lookupValue(i, s[3])
|
||||
}
|
||||
return 0
|
||||
}
|
||||
152
Godeps/_workspace/src/code.google.com/p/go.text/unicode/norm/trie_test.go
generated
vendored
Normal file
@@ -0,0 +1,152 @@
|
||||
// Copyright 2011 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package norm
|
||||
|
||||
import (
|
||||
"testing"
|
||||
"unicode/utf8"
|
||||
)
|
||||
|
||||
// Test data is located in triedata_test.go; generated by maketesttables.
|
||||
var testdata = testdataTrie
|
||||
|
||||
type rangeTest struct {
|
||||
block uint8
|
||||
lookup byte
|
||||
result uint16
|
||||
table []valueRange
|
||||
offsets []uint16
|
||||
}
|
||||
|
||||
var range1Off = []uint16{0, 2}
|
||||
var range1 = []valueRange{
|
||||
{0, 1, 0},
|
||||
{1, 0x80, 0x80},
|
||||
{0, 2, 0},
|
||||
{1, 0x80, 0x80},
|
||||
{9, 0xff, 0xff},
|
||||
}
|
||||
|
||||
var rangeTests = []rangeTest{
|
||||
{10, 0x80, 1, range1, range1Off},
|
||||
{10, 0x00, 0, range1, range1Off},
|
||||
{11, 0x80, 1, range1, range1Off},
|
||||
{11, 0xff, 9, range1, range1Off},
|
||||
{11, 0x00, 0, range1, range1Off},
|
||||
}
|
||||
|
||||
func TestLookupSparse(t *testing.T) {
|
||||
for i, test := range rangeTests {
|
||||
n := trie{sparse: test.table, sparseOffset: test.offsets, cutoff: 10}
|
||||
v := n.lookupValue(test.block, test.lookup)
|
||||
if v != test.result {
|
||||
t.Errorf("LookupSparse:%d: found %X; want %X", i, v, test.result)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Test cases for illegal runes.
|
||||
type trietest struct {
|
||||
size int
|
||||
bytes []byte
|
||||
}
|
||||
|
||||
var tests = []trietest{
|
||||
// illegal runes
|
||||
{1, []byte{0x80}},
|
||||
{1, []byte{0xFF}},
|
||||
{1, []byte{t2, tx - 1}},
|
||||
{1, []byte{t2, t2}},
|
||||
{2, []byte{t3, tx, tx - 1}},
|
||||
{2, []byte{t3, tx, t2}},
|
||||
{1, []byte{t3, tx - 1, tx}},
|
||||
{3, []byte{t4, tx, tx, tx - 1}},
|
||||
{3, []byte{t4, tx, tx, t2}},
|
||||
{1, []byte{t4, t2, tx, tx - 1}},
|
||||
{2, []byte{t4, tx, t2, tx - 1}},
|
||||
|
||||
// short runes
|
||||
{0, []byte{t2}},
|
||||
{0, []byte{t3, tx}},
|
||||
{0, []byte{t4, tx, tx}},
|
||||
|
||||
// we only support UTF-8 up to utf8.UTFMax bytes (4 bytes)
|
||||
{1, []byte{t5, tx, tx, tx, tx}},
|
||||
{1, []byte{t6, tx, tx, tx, tx, tx}},
|
||||
}
|
||||
|
||||
func mkUTF8(r rune) ([]byte, int) {
|
||||
var b [utf8.UTFMax]byte
|
||||
sz := utf8.EncodeRune(b[:], r)
|
||||
return b[:sz], sz
|
||||
}
|
||||
|
||||
func TestLookup(t *testing.T) {
|
||||
for i, tt := range testRunes {
|
||||
b, szg := mkUTF8(tt)
|
||||
v, szt := testdata.lookup(b)
|
||||
if int(v) != i {
|
||||
t.Errorf("lookup(%U): found value %#x, expected %#x", tt, v, i)
|
||||
}
|
||||
if szt != szg {
|
||||
t.Errorf("lookup(%U): found size %d, expected %d", tt, szt, szg)
|
||||
}
|
||||
}
|
||||
for i, tt := range tests {
|
||||
v, sz := testdata.lookup(tt.bytes)
|
||||
if v != 0 {
|
||||
t.Errorf("lookup of illegal rune, case %d: found value %#x, expected 0", i, v)
|
||||
}
|
||||
if sz != tt.size {
|
||||
t.Errorf("lookup of illegal rune, case %d: found size %d, expected %d", i, sz, tt.size)
|
||||
}
|
||||
}
|
||||
// Verify defaults.
|
||||
if v, _ := testdata.lookup([]byte{0xC1, 0x8C}); v != 0 {
|
||||
t.Errorf("lookup of non-existing rune should be 0; found %X", v)
|
||||
}
|
||||
}
|
||||
|
||||
func TestLookupUnsafe(t *testing.T) {
|
||||
for i, tt := range testRunes {
|
||||
b, _ := mkUTF8(tt)
|
||||
v := testdata.lookupUnsafe(b)
|
||||
if int(v) != i {
|
||||
t.Errorf("lookupUnsafe(%U): found value %#x, expected %#x", i, v, i)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestLookupString(t *testing.T) {
|
||||
for i, tt := range testRunes {
|
||||
b, szg := mkUTF8(tt)
|
||||
v, szt := testdata.lookupString(string(b))
|
||||
if int(v) != i {
|
||||
t.Errorf("lookup(%U): found value %#x, expected %#x", i, v, i)
|
||||
}
|
||||
if szt != szg {
|
||||
t.Errorf("lookup(%U): found size %d, expected %d", i, szt, szg)
|
||||
}
|
||||
}
|
||||
for i, tt := range tests {
|
||||
v, sz := testdata.lookupString(string(tt.bytes))
|
||||
if int(v) != 0 {
|
||||
t.Errorf("lookup of illegal rune, case %d: found value %#x, expected 0", i, v)
|
||||
}
|
||||
if sz != tt.size {
|
||||
t.Errorf("lookup of illegal rune, case %d: found size %d, expected %d", i, sz, tt.size)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestLookupStringUnsafe(t *testing.T) {
|
||||
for i, tt := range testRunes {
|
||||
b, _ := mkUTF8(tt)
|
||||
v := testdata.lookupStringUnsafe(string(b))
|
||||
if int(v) != i {
|
||||
t.Errorf("lookupUnsafe(%U): found value %#x, expected %#x", i, v, i)
|
||||
}
|
||||
}
|
||||
}
|
||||
85
Godeps/_workspace/src/code.google.com/p/go.text/unicode/norm/triedata_test.go
generated
vendored
Normal file
@@ -0,0 +1,85 @@
|
||||
// Generated by running
|
||||
// maketesttables
|
||||
// DO NOT EDIT
|
||||
|
||||
package norm
|
||||
|
||||
var testRunes = []int32{1, 12, 127, 128, 256, 2047, 2048, 2457, 65535, 65536, 65793, 1114111, 512, 513, 514, 528, 533}
|
||||
|
||||
// testdataValues: 192 entries, 384 bytes
|
||||
// Block 2 is the null block.
|
||||
var testdataValues = [192]uint16{
|
||||
// Block 0x0, offset 0x0
|
||||
0x000c: 0x0001,
|
||||
// Block 0x1, offset 0x40
|
||||
0x007f: 0x0002,
|
||||
// Block 0x2, offset 0x80
|
||||
}
|
||||
|
||||
// testdataSparseOffset: 10 entries, 20 bytes
|
||||
var testdataSparseOffset = []uint16{0x0, 0x2, 0x4, 0x8, 0xa, 0xc, 0xe, 0x10, 0x12, 0x14}
|
||||
|
||||
// testdataSparseValues: 22 entries, 88 bytes
|
||||
var testdataSparseValues = [22]valueRange{
|
||||
// Block 0x0, offset 0x1
|
||||
{value: 0x0000, lo: 0x01},
|
||||
{value: 0x0003, lo: 0x80, hi: 0x80},
|
||||
// Block 0x1, offset 0x2
|
||||
{value: 0x0000, lo: 0x01},
|
||||
{value: 0x0004, lo: 0x80, hi: 0x80},
|
||||
// Block 0x2, offset 0x3
|
||||
{value: 0x0001, lo: 0x03},
|
||||
{value: 0x000c, lo: 0x80, hi: 0x82},
|
||||
{value: 0x000f, lo: 0x90, hi: 0x90},
|
||||
{value: 0x0010, lo: 0x95, hi: 0x95},
|
||||
// Block 0x3, offset 0x4
|
||||
{value: 0x0000, lo: 0x01},
|
||||
{value: 0x0005, lo: 0xbf, hi: 0xbf},
|
||||
// Block 0x4, offset 0x5
|
||||
{value: 0x0000, lo: 0x01},
|
||||
{value: 0x0006, lo: 0x80, hi: 0x80},
|
||||
// Block 0x5, offset 0x6
|
||||
{value: 0x0000, lo: 0x01},
|
||||
{value: 0x0007, lo: 0x99, hi: 0x99},
|
||||
// Block 0x6, offset 0x7
|
||||
{value: 0x0000, lo: 0x01},
|
||||
{value: 0x0008, lo: 0xbf, hi: 0xbf},
|
||||
// Block 0x7, offset 0x8
|
||||
{value: 0x0000, lo: 0x01},
|
||||
{value: 0x0009, lo: 0x80, hi: 0x80},
|
||||
// Block 0x8, offset 0x9
|
||||
{value: 0x0000, lo: 0x01},
|
||||
{value: 0x000a, lo: 0x81, hi: 0x81},
|
||||
// Block 0x9, offset 0xa
|
||||
{value: 0x0000, lo: 0x01},
|
||||
{value: 0x000b, lo: 0xbf, hi: 0xbf},
|
||||
}
|
||||
|
||||
// testdataLookup: 640 bytes
|
||||
// Block 0 is the null block.
|
||||
var testdataLookup = [640]uint8{
|
||||
// Block 0x0, offset 0x0
|
||||
// Block 0x1, offset 0x40
|
||||
// Block 0x2, offset 0x80
|
||||
// Block 0x3, offset 0xc0
|
||||
0x0c2: 0x01, 0x0c4: 0x02,
|
||||
0x0c8: 0x03,
|
||||
0x0df: 0x04,
|
||||
0x0e0: 0x02,
|
||||
0x0ef: 0x03,
|
||||
0x0f0: 0x05, 0x0f4: 0x07,
|
||||
// Block 0x4, offset 0x100
|
||||
0x120: 0x05, 0x126: 0x06,
|
||||
// Block 0x5, offset 0x140
|
||||
0x17f: 0x07,
|
||||
// Block 0x6, offset 0x180
|
||||
0x180: 0x08, 0x184: 0x09,
|
||||
// Block 0x7, offset 0x1c0
|
||||
0x1d0: 0x04,
|
||||
// Block 0x8, offset 0x200
|
||||
0x23f: 0x0a,
|
||||
// Block 0x9, offset 0x240
|
||||
0x24f: 0x06,
|
||||
}
|
||||
|
||||
var testdataTrie = trie{testdataLookup[:], testdataValues[:], testdataSparseValues[:], testdataSparseOffset[:], 1}
|
||||
317
Godeps/_workspace/src/code.google.com/p/go.text/unicode/norm/triegen.go
generated
vendored
Normal file
@@ -0,0 +1,317 @@
|
||||
// Copyright 2011 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// +build ignore
|
||||
|
||||
// Trie table generator.
|
||||
// Used by make*tables tools to generate a go file with trie data structures
|
||||
// for mapping UTF-8 to a 16-bit value. All but the last byte in a UTF-8 byte
|
||||
// sequence are used to lookup offsets in the index table to be used for the
|
||||
// next byte. The last byte is used to index into a table with 16-bit values.
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"hash/crc32"
|
||||
"log"
|
||||
"unicode/utf8"
|
||||
)
|
||||
|
||||
const (
|
||||
blockSize = 64
|
||||
blockOffset = 2 // Subtract two blocks to compensate for the 0x80 added to continuation bytes.
|
||||
maxSparseEntries = 16
|
||||
)
|
||||
|
||||
// Intermediate trie structure
|
||||
type trieNode struct {
|
||||
table [256]*trieNode
|
||||
value int
|
||||
b byte
|
||||
leaf bool
|
||||
}
|
||||
|
||||
func newNode() *trieNode {
|
||||
return new(trieNode)
|
||||
}
|
||||
|
||||
func (n trieNode) String() string {
|
||||
s := fmt.Sprint("trieNode{table: { non-nil at index: ")
|
||||
for i, v := range n.table {
|
||||
if v != nil {
|
||||
s += fmt.Sprintf("%d, ", i)
|
||||
}
|
||||
}
|
||||
s += fmt.Sprintf("}, value:%#x, b:%#x leaf:%v}", n.value, n.b, n.leaf)
|
||||
return s
|
||||
}
|
||||
|
||||
func (n trieNode) isInternal() bool {
|
||||
internal := true
|
||||
for i := 0; i < 256; i++ {
|
||||
if nn := n.table[i]; nn != nil {
|
||||
if !internal && !nn.leaf {
|
||||
log.Fatalf("triegen: isInternal: node contains both leaf and non-leaf children (%v)", n)
|
||||
}
|
||||
internal = internal && !nn.leaf
|
||||
}
|
||||
}
|
||||
return internal
|
||||
}
|
||||
|
||||
func (n trieNode) mostFrequentStride() int {
|
||||
counts := make(map[int]int)
|
||||
v := 0
|
||||
for _, t := range n.table[0x80 : 0x80+blockSize] {
|
||||
if t != nil {
|
||||
if stride := t.value - v; v != 0 && stride >= 0 {
|
||||
counts[stride]++
|
||||
}
|
||||
v = t.value
|
||||
} else {
|
||||
v = 0
|
||||
}
|
||||
}
|
||||
var maxs, maxc int
|
||||
for stride, cnt := range counts {
|
||||
if cnt > maxc || (cnt == maxc && stride < maxs) {
|
||||
maxs, maxc = stride, cnt
|
||||
}
|
||||
}
|
||||
return maxs
|
||||
}
|
||||
|
||||
func (n trieNode) countSparseEntries() int {
|
||||
stride := n.mostFrequentStride()
|
||||
var count, v int
|
||||
for _, t := range n.table[0x80 : 0x80+blockSize] {
|
||||
tv := 0
|
||||
if t != nil {
|
||||
tv = t.value
|
||||
}
|
||||
if tv-v != stride {
|
||||
if tv != 0 {
|
||||
count++
|
||||
}
|
||||
}
|
||||
v = tv
|
||||
}
|
||||
return count
|
||||
}
|
||||
|
||||
func (n *trieNode) insert(r rune, value uint16) {
|
||||
var p [utf8.UTFMax]byte
|
||||
sz := utf8.EncodeRune(p[:], r)
|
||||
|
||||
for i := 0; i < sz; i++ {
|
||||
if n.leaf {
|
||||
log.Fatalf("triegen: insert: node (%#v) should not be a leaf", n)
|
||||
}
|
||||
nn := n.table[p[i]]
|
||||
if nn == nil {
|
||||
nn = newNode()
|
||||
nn.b = p[i]
|
||||
n.table[p[i]] = nn
|
||||
}
|
||||
n = nn
|
||||
}
|
||||
n.value = int(value)
|
||||
n.leaf = true
|
||||
}
|
||||
|
||||
type nodeIndex struct {
|
||||
lookupBlocks []*trieNode
|
||||
valueBlocks []*trieNode
|
||||
sparseBlocks []*trieNode
|
||||
sparseOffset []uint16
|
||||
sparseCount int
|
||||
|
||||
lookupBlockIdx map[uint32]int
|
||||
valueBlockIdx map[uint32]int
|
||||
}
|
||||
|
||||
func newIndex() *nodeIndex {
|
||||
index := &nodeIndex{}
|
||||
index.lookupBlocks = make([]*trieNode, 0)
|
||||
index.valueBlocks = make([]*trieNode, 0)
|
||||
index.sparseBlocks = make([]*trieNode, 0)
|
||||
index.sparseOffset = make([]uint16, 1)
|
||||
index.lookupBlockIdx = make(map[uint32]int)
|
||||
index.valueBlockIdx = make(map[uint32]int)
|
||||
return index
|
||||
}
|
||||
|
||||
func computeOffsets(index *nodeIndex, n *trieNode) int {
|
||||
if n.leaf {
|
||||
return n.value
|
||||
}
|
||||
hasher := crc32.New(crc32.MakeTable(crc32.IEEE))
|
||||
// We only index continuation bytes.
|
||||
for i := 0; i < blockSize; i++ {
|
||||
v := 0
|
||||
if nn := n.table[0x80+i]; nn != nil {
|
||||
v = computeOffsets(index, nn)
|
||||
}
|
||||
hasher.Write([]byte{uint8(v >> 8), uint8(v)})
|
||||
}
|
||||
h := hasher.Sum32()
|
||||
if n.isInternal() {
|
||||
v, ok := index.lookupBlockIdx[h]
|
||||
if !ok {
|
||||
v = len(index.lookupBlocks) - blockOffset
|
||||
index.lookupBlocks = append(index.lookupBlocks, n)
|
||||
index.lookupBlockIdx[h] = v
|
||||
}
|
||||
n.value = v
|
||||
} else {
|
||||
v, ok := index.valueBlockIdx[h]
|
||||
if !ok {
|
||||
if c := n.countSparseEntries(); c > maxSparseEntries {
|
||||
v = len(index.valueBlocks) - blockOffset
|
||||
index.valueBlocks = append(index.valueBlocks, n)
|
||||
index.valueBlockIdx[h] = v
|
||||
} else {
|
||||
v = -len(index.sparseOffset)
|
||||
index.sparseBlocks = append(index.sparseBlocks, n)
|
||||
index.sparseOffset = append(index.sparseOffset, uint16(index.sparseCount))
|
||||
index.sparseCount += c + 1
|
||||
index.valueBlockIdx[h] = v
|
||||
}
|
||||
}
|
||||
n.value = v
|
||||
}
|
||||
return n.value
|
||||
}
|
||||
|
||||
func printValueBlock(nr int, n *trieNode, offset int) {
|
||||
boff := nr * blockSize
|
||||
fmt.Printf("\n// Block %#x, offset %#x", nr, boff)
|
||||
var printnewline bool
|
||||
for i := 0; i < blockSize; i++ {
|
||||
if i%6 == 0 {
|
||||
printnewline = true
|
||||
}
|
||||
v := 0
|
||||
if nn := n.table[i+offset]; nn != nil {
|
||||
v = nn.value
|
||||
}
|
||||
if v != 0 {
|
||||
if printnewline {
|
||||
fmt.Printf("\n")
|
||||
printnewline = false
|
||||
}
|
||||
fmt.Printf("%#04x:%#04x, ", boff+i, v)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func printSparseBlock(nr int, n *trieNode) {
|
||||
boff := -n.value
|
||||
fmt.Printf("\n// Block %#x, offset %#x", nr, boff)
|
||||
v := 0
|
||||
//stride := f(n)
|
||||
stride := n.mostFrequentStride()
|
||||
c := n.countSparseEntries()
|
||||
fmt.Printf("\n{value:%#04x,lo:%#02x},", stride, uint8(c))
|
||||
for i, nn := range n.table[0x80 : 0x80+blockSize] {
|
||||
nv := 0
|
||||
if nn != nil {
|
||||
nv = nn.value
|
||||
}
|
||||
if nv-v != stride {
|
||||
if v != 0 {
|
||||
fmt.Printf(",hi:%#02x},", 0x80+i-1)
|
||||
}
|
||||
if nv != 0 {
|
||||
fmt.Printf("\n{value:%#04x,lo:%#02x", nv, nn.b)
|
||||
}
|
||||
}
|
||||
v = nv
|
||||
}
|
||||
if v != 0 {
|
||||
fmt.Printf(",hi:%#02x},", 0x80+blockSize-1)
|
||||
}
|
||||
}
|
||||
|
||||
func printLookupBlock(nr int, n *trieNode, offset, cutoff int) {
|
||||
boff := nr * blockSize
|
||||
fmt.Printf("\n// Block %#x, offset %#x", nr, boff)
|
||||
var printnewline bool
|
||||
for i := 0; i < blockSize; i++ {
|
||||
if i%8 == 0 {
|
||||
printnewline = true
|
||||
}
|
||||
v := 0
|
||||
if nn := n.table[i+offset]; nn != nil {
|
||||
v = nn.value
|
||||
}
|
||||
if v != 0 {
|
||||
if v < 0 {
|
||||
v = -v - 1 + cutoff
|
||||
}
|
||||
if printnewline {
|
||||
fmt.Printf("\n")
|
||||
printnewline = false
|
||||
}
|
||||
fmt.Printf("%#03x:%#02x, ", boff+i, v)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// printTables returns the size in bytes of the generated tables.
|
||||
func (t *trieNode) printTables(name string) int {
|
||||
index := newIndex()
|
||||
// Values for 7-bit ASCII are stored in first two block, followed by nil block.
|
||||
index.valueBlocks = append(index.valueBlocks, nil, nil, nil)
|
||||
// First byte of multi-byte UTF-8 codepoints are indexed in 4th block.
|
||||
index.lookupBlocks = append(index.lookupBlocks, nil, nil, nil, nil)
|
||||
// Index starter bytes of multi-byte UTF-8.
|
||||
for i := 0xC0; i < 0x100; i++ {
|
||||
if t.table[i] != nil {
|
||||
computeOffsets(index, t.table[i])
|
||||
}
|
||||
}
|
||||
|
||||
nv := len(index.valueBlocks) * blockSize
|
||||
fmt.Printf("// %sValues: %d entries, %d bytes\n", name, nv, nv*2)
|
||||
fmt.Printf("// Block 2 is the null block.\n")
|
||||
fmt.Printf("var %sValues = [%d]uint16 {", name, nv)
|
||||
printValueBlock(0, t, 0)
|
||||
printValueBlock(1, t, 64)
|
||||
printValueBlock(2, newNode(), 0)
|
||||
for i := 3; i < len(index.valueBlocks); i++ {
|
||||
printValueBlock(i, index.valueBlocks[i], 0x80)
|
||||
}
|
||||
fmt.Print("\n}\n\n")
|
||||
|
||||
ls := len(index.sparseBlocks)
|
||||
fmt.Printf("// %sSparseOffset: %d entries, %d bytes\n", name, ls, ls*2)
|
||||
fmt.Printf("var %sSparseOffset = %#v\n\n", name, index.sparseOffset[1:])
|
||||
|
||||
ns := index.sparseCount
|
||||
fmt.Printf("// %sSparseValues: %d entries, %d bytes\n", name, ns, ns*4)
|
||||
fmt.Printf("var %sSparseValues = [%d]valueRange {", name, ns)
|
||||
for i, n := range index.sparseBlocks {
|
||||
printSparseBlock(i, n)
|
||||
}
|
||||
fmt.Print("\n}\n\n")
|
||||
|
||||
cutoff := len(index.valueBlocks) - blockOffset
|
||||
ni := len(index.lookupBlocks) * blockSize
|
||||
fmt.Printf("// %sLookup: %d bytes\n", name, ni)
|
||||
fmt.Printf("// Block 0 is the null block.\n")
|
||||
fmt.Printf("var %sLookup = [%d]uint8 {", name, ni)
|
||||
printLookupBlock(0, newNode(), 0, cutoff)
|
||||
printLookupBlock(1, newNode(), 0, cutoff)
|
||||
printLookupBlock(2, newNode(), 0, cutoff)
|
||||
printLookupBlock(3, t, 0xC0, cutoff)
|
||||
for i := 4; i < len(index.lookupBlocks); i++ {
|
||||
printLookupBlock(i, index.lookupBlocks[i], 0x80, cutoff)
|
||||
}
|
||||
fmt.Print("\n}\n\n")
|
||||
fmt.Printf("var %sTrie = trie{ %sLookup[:], %sValues[:], %sSparseValues[:], %sSparseOffset[:], %d}\n\n",
|
||||
name, name, name, name, name, cutoff)
|
||||
return nv*2 + ns*4 + ni + ls*2
|
||||
}
|
||||
1
Godeps/_workspace/src/github.com/bkaradzic/go-lz4/.gitignore
generated
vendored
@@ -1 +0,0 @@
|
||||
/lz4-example/lz4-example
|
||||
9
Godeps/_workspace/src/github.com/bkaradzic/go-lz4/.travis.yml
generated
vendored
@@ -1,9 +0,0 @@
|
||||
language: go
|
||||
|
||||
go:
|
||||
- 1.1
|
||||
- 1.2
|
||||
- 1.3
|
||||
- 1.4
|
||||
- 1.5
|
||||
- tip
|
||||
24
Godeps/_workspace/src/github.com/bkaradzic/go-lz4/LICENSE
generated
vendored
@@ -1,24 +0,0 @@
|
||||
Copyright 2011-2012 Branimir Karadzic. All rights reserved.
|
||||
Copyright 2013 Damian Gryski. All rights reserved.
|
||||
|
||||
Redistribution and use in source and binary forms, with or without modification,
|
||||
are permitted provided that the following conditions are met:
|
||||
|
||||
1. Redistributions of source code must retain the above copyright notice, this
|
||||
list of conditions and the following disclaimer.
|
||||
|
||||
2. Redistributions in binary form must reproduce the above copyright notice,
|
||||
this list of conditions and the following disclaimer in the documentation
|
||||
and/or other materials provided with the distribution.
|
||||
|
||||
THIS SOFTWARE IS PROVIDED BY COPYRIGHT HOLDER ``AS IS'' AND ANY EXPRESS OR
|
||||
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
|
||||
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
|
||||
SHALL COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
|
||||
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
|
||||
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
|
||||
WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
|
||||
OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
|
||||
THE POSSIBILITY OF SUCH DAMAGE.
|
||||
|
||||
71
Godeps/_workspace/src/github.com/bkaradzic/go-lz4/README.md
generated
vendored
@@ -1,71 +0,0 @@
|
||||
go-lz4
|
||||
======
|
||||
|
||||
go-lz4 is port of LZ4 lossless compression algorithm to Go. The original C code
|
||||
is located at:
|
||||
|
||||
https://github.com/Cyan4973/lz4
|
||||
|
||||
Status
|
||||
------
|
||||
[](http://travis-ci.org/bkaradzic/go-lz4)
|
||||
[](https://godoc.org/github.com/bkaradzic/go-lz4)
|
||||
|
||||
Usage
|
||||
-----
|
||||
|
||||
go get github.com/bkaradzic/go-lz4
|
||||
|
||||
import "github.com/bkaradzic/go-lz4"
|
||||
|
||||
The package name is `lz4`
|
||||
|
||||
Notes
|
||||
-----
|
||||
|
||||
* go-lz4 saves a uint32 with the original uncompressed length at the beginning
|
||||
of the encoded buffer. This may get in the way of interoperability with
|
||||
other implementations.
|
||||
|
||||
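Since the README above is terse, here is a small, hedged round-trip sketch built only on the Encode and Decode signatures shown in writer.go and reader.go further down this diff; it is not part of the diff, and the input string is invented for illustration.

```go
package main

import (
	"bytes"
	"fmt"
	"log"

	lz4 "github.com/bkaradzic/go-lz4"
)

func main() {
	src := []byte("abcabcabcabc -- repetitive input compresses well")

	// Encode prepends a little-endian uint32 with len(src), as noted above.
	compressed, err := lz4.Encode(nil, src)
	if err != nil {
		log.Fatal(err)
	}

	// Decode reads that length prefix to size its output buffer.
	decompressed, err := lz4.Decode(nil, compressed)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(bytes.Equal(src, decompressed)) // true
}
```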
Contributors
|
||||
------------
|
||||
|
||||
Damian Gryski ([@dgryski](https://github.com/dgryski))
|
||||
Dustin Sallings ([@dustin](https://github.com/dustin))
|
||||
|
||||
Contact
|
||||
-------
|
||||
|
||||
[@bkaradzic](https://twitter.com/bkaradzic)
|
||||
http://www.stuckingeometry.com
|
||||
|
||||
Project page
|
||||
https://github.com/bkaradzic/go-lz4
|
||||
|
||||
License
|
||||
-------
|
||||
|
||||
Copyright 2011-2012 Branimir Karadzic. All rights reserved.
|
||||
Copyright 2013 Damian Gryski. All rights reserved.
|
||||
|
||||
Redistribution and use in source and binary forms, with or without modification,
|
||||
are permitted provided that the following conditions are met:
|
||||
|
||||
1. Redistributions of source code must retain the above copyright notice, this
|
||||
list of conditions and the following disclaimer.
|
||||
|
||||
2. Redistributions in binary form must reproduce the above copyright notice,
|
||||
this list of conditions and the following disclaimer in the documentation
|
||||
and/or other materials provided with the distribution.
|
||||
|
||||
THIS SOFTWARE IS PROVIDED BY COPYRIGHT HOLDER ``AS IS'' AND ANY EXPRESS OR
|
||||
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
|
||||
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
|
||||
SHALL COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
|
||||
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
|
||||
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
|
||||
WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
|
||||
OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
|
||||
THE POSSIBILITY OF SUCH DAMAGE.
|
||||
|
||||
23
Godeps/_workspace/src/github.com/bkaradzic/go-lz4/fuzz.go
generated
vendored
@@ -1,23 +0,0 @@
|
||||
// +build gofuzz
|
||||
|
||||
package lz4
|
||||
|
||||
import "encoding/binary"
|
||||
|
||||
func Fuzz(data []byte) int {
|
||||
|
||||
if len(data) < 4 {
|
||||
return 0
|
||||
}
|
||||
|
||||
ln := binary.LittleEndian.Uint32(data)
|
||||
if ln > (1 << 21) {
|
||||
return 0
|
||||
}
|
||||
|
||||
if _, err := Decode(nil, data); err != nil {
|
||||
return 0
|
||||
}
|
||||
|
||||
return 1
|
||||
}
|
||||
74
Godeps/_workspace/src/github.com/bkaradzic/go-lz4/fuzzer/main.go
generated
vendored
@@ -1,74 +0,0 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"math/rand"
|
||||
|
||||
"github.com/bkaradzic/go-lz4"
|
||||
|
||||
// lz4's API matches snappy's, so we can easily see how it performs
|
||||
// lz4 "code.google.com/p/snappy-go/snappy"
|
||||
)
|
||||
|
||||
var input = `
|
||||
ADVENTURE I. A SCANDAL IN BOHEMIA
|
||||
|
||||
I.
|
||||
|
||||
To Sherlock Holmes she is always THE woman. I have seldom heard
|
||||
him mention her under any other name. In his eyes she eclipses
|
||||
and predominates the whole of her sex. It was not that he felt
|
||||
any emotion akin to love for Irene Adler. All emotions, and that
|
||||
one particularly, were abhorrent to his cold, precise but
|
||||
admirably balanced mind. He was, I take it, the most perfect
|
||||
reasoning and observing machine that the world has seen, but as a
|
||||
lover he would have placed himself in a false position. He never
|
||||
spoke of the softer passions, save with a gibe and a sneer. They
|
||||
were admirable things for the observer--excellent for drawing the
|
||||
veil from men's motives and actions. But for the trained reasoner
|
||||
to admit such intrusions into his own delicate and finely
|
||||
adjusted temperament was to introduce a distracting factor which
|
||||
might throw a doubt upon all his mental results. Grit in a
|
||||
sensitive instrument, or a crack in one of his own high-power
|
||||
lenses, would not be more disturbing than a strong emotion in a
|
||||
nature such as his. And yet there was but one woman to him, and
|
||||
that woman was the late Irene Adler, of dubious and questionable
|
||||
memory.
|
||||
|
||||
I had seen little of Holmes lately. My marriage had drifted us
|
||||
away from each other. My own complete happiness, and the
|
||||
home-centred interests which rise up around the man who first
|
||||
finds himself master of his own establishment, were sufficient to
|
||||
absorb all my attention, while Holmes, who loathed every form of
|
||||
society with his whole Bohemian soul, remained in our lodgings in
|
||||
Baker Street, buried among his old books, and alternating from
|
||||
week to week between cocaine and ambition, the drowsiness of the
|
||||
drug, and the fierce energy of his own keen nature. He was still,
|
||||
as ever, deeply attracted by the study of crime, and occupied his
|
||||
immense faculties and extraordinary powers of observation in
|
||||
following out those clues, and clearing up those mysteries which
|
||||
had been abandoned as hopeless by the official police. From time
|
||||
to time I heard some vague account of his doings: of his summons
|
||||
to Odessa in the case of the Trepoff murder, of his clearing up
|
||||
of the singular tragedy of the Atkinson brothers at Trincomalee,
|
||||
and finally of the mission which he had accomplished so
|
||||
delicately and successfully for the reigning family of Holland.
|
||||
Beyond these signs of his activity, however, which I merely
|
||||
shared with all the readers of the daily press, I knew little of
|
||||
my former friend and companion.
|
||||
`
|
||||
|
||||
func main() {
|
||||
|
||||
compressed, _ := lz4.Encode(nil, []byte(input))
|
||||
|
||||
modified := make([]byte, len(compressed))
|
||||
|
||||
for {
|
||||
copy(modified, compressed)
|
||||
for i := 0; i < 100; i++ {
|
||||
modified[rand.Intn(len(compressed)-4)+4] = byte(rand.Intn(256))
|
||||
}
|
||||
lz4.Decode(nil, modified)
|
||||
}
|
||||
|
||||
}
|
||||
94
Godeps/_workspace/src/github.com/bkaradzic/go-lz4/lz4-example/main.go
generated
vendored
@@ -1,94 +0,0 @@
|
||||
/*
|
||||
* Copyright 2011 Branimir Karadzic. All rights reserved.
|
||||
*
|
||||
* Redistribution and use in source and binary forms, with or without modification,
|
||||
* are permitted provided that the following conditions are met:
|
||||
*
|
||||
* 1. Redistributions of source code must retain the above copyright notice, this
|
||||
* list of conditions and the following disclaimer.
|
||||
*
|
||||
* 2. Redistributions in binary form must reproduce the above copyright notice,
|
||||
* this list of conditions and the following disclaimer in the documentation
|
||||
* and/or other materials provided with the distribution.
|
||||
*
|
||||
* THIS SOFTWARE IS PROVIDED BY COPYRIGHT HOLDER ``AS IS'' AND ANY EXPRESS OR
|
||||
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
|
||||
* MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
|
||||
* SHALL COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
|
||||
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
|
||||
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
|
||||
* WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
|
||||
* OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
|
||||
* THE POSSIBILITY OF SUCH DAMAGE.
|
||||
*/
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"flag"
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"log"
|
||||
"os"
|
||||
"runtime/pprof"
|
||||
|
||||
lz4 "github.com/bkaradzic/go-lz4"
|
||||
)
|
||||
|
||||
var (
|
||||
decompress = flag.Bool("d", false, "decompress")
|
||||
)
|
||||
|
||||
func main() {
|
||||
|
||||
var optCPUProfile = flag.String("cpuprofile", "", "profile")
|
||||
flag.Parse()
|
||||
|
||||
if *optCPUProfile != "" {
|
||||
f, err := os.Create(*optCPUProfile)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
pprof.StartCPUProfile(f)
|
||||
defer pprof.StopCPUProfile()
|
||||
}
|
||||
|
||||
args := flag.Args()
|
||||
|
||||
var data []byte
|
||||
|
||||
if len(args) < 2 {
|
||||
fmt.Print("Usage: lz4 [-d] <input> <output>\n")
|
||||
os.Exit(1)
|
||||
}
|
||||
|
||||
input, err := os.OpenFile(args[0], os.O_RDONLY, 0644)
|
||||
if err != nil {
|
||||
fmt.Printf("Failed to open input file %s\n", args[0])
|
||||
os.Exit(1)
|
||||
}
|
||||
defer input.Close()
|
||||
|
||||
if *decompress {
|
||||
data, _ = ioutil.ReadAll(input)
|
||||
data, err = lz4.Decode(nil, data)
|
||||
if err != nil {
|
||||
fmt.Println("Failed to decode:", err)
|
||||
return
|
||||
}
|
||||
} else {
|
||||
data, _ = ioutil.ReadAll(input)
|
||||
data, err = lz4.Encode(nil, data)
|
||||
if err != nil {
|
||||
fmt.Println("Failed to encode:", err)
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
err = ioutil.WriteFile(args[1], data, 0644)
|
||||
if err != nil {
|
||||
fmt.Printf("Failed to open output file %s\n", args[1])
|
||||
os.Exit(1)
|
||||
}
|
||||
}
|
||||
199
Godeps/_workspace/src/github.com/bkaradzic/go-lz4/reader.go
generated
vendored
@@ -1,199 +0,0 @@
|
||||
/*
|
||||
* Copyright 2011-2012 Branimir Karadzic. All rights reserved.
|
||||
*
|
||||
* Redistribution and use in source and binary forms, with or without modification,
|
||||
* are permitted provided that the following conditions are met:
|
||||
*
|
||||
* 1. Redistributions of source code must retain the above copyright notice, this
|
||||
* list of conditions and the following disclaimer.
|
||||
*
|
||||
* 2. Redistributions in binary form must reproduce the above copyright notice,
|
||||
* this list of conditions and the following disclaimer in the documentation
|
||||
* and/or other materials provided with the distribution.
|
||||
*
|
||||
* THIS SOFTWARE IS PROVIDED BY COPYRIGHT HOLDER ``AS IS'' AND ANY EXPRESS OR
|
||||
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
|
||||
* MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
|
||||
* SHALL COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
|
||||
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
|
||||
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
|
||||
* WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
|
||||
* OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
|
||||
* THE POSSIBILITY OF SUCH DAMAGE.
|
||||
*/
|
||||
|
||||
package lz4
|
||||
|
||||
import (
|
||||
"encoding/binary"
|
||||
"errors"
|
||||
"io"
|
||||
)
|
||||
|
||||
var (
|
||||
// ErrCorrupt indicates the input was corrupt
|
||||
ErrCorrupt = errors.New("corrupt input")
|
||||
)
|
||||
|
||||
const (
|
||||
mlBits = 4
|
||||
mlMask = (1 << mlBits) - 1
|
||||
runBits = 8 - mlBits
|
||||
runMask = (1 << runBits) - 1
|
||||
)
|
||||
|
||||
type decoder struct {
|
||||
src []byte
|
||||
dst []byte
|
||||
spos uint32
|
||||
dpos uint32
|
||||
ref uint32
|
||||
}
|
||||
|
||||
func (d *decoder) readByte() (uint8, error) {
|
||||
if int(d.spos) == len(d.src) {
|
||||
return 0, io.EOF
|
||||
}
|
||||
b := d.src[d.spos]
|
||||
d.spos++
|
||||
return b, nil
|
||||
}
|
||||
|
||||
func (d *decoder) getLen() (uint32, error) {
|
||||
|
||||
length := uint32(0)
|
||||
ln, err := d.readByte()
|
||||
if err != nil {
|
||||
return 0, ErrCorrupt
|
||||
}
|
||||
for ln == 255 {
|
||||
length += 255
|
||||
ln, err = d.readByte()
|
||||
if err != nil {
|
||||
return 0, ErrCorrupt
|
||||
}
|
||||
}
|
||||
length += uint32(ln)
|
||||
|
||||
return length, nil
|
||||
}
|
||||
|
||||
func (d *decoder) cp(length, decr uint32) {
|
||||
|
||||
if int(d.ref+length) < int(d.dpos) {
|
||||
copy(d.dst[d.dpos:], d.dst[d.ref:d.ref+length])
|
||||
} else {
|
||||
for ii := uint32(0); ii < length; ii++ {
|
||||
d.dst[d.dpos+ii] = d.dst[d.ref+ii]
|
||||
}
|
||||
}
|
||||
d.dpos += length
|
||||
d.ref += length - decr
|
||||
}
|
||||
|
||||
func (d *decoder) finish(err error) error {
|
||||
if err == io.EOF {
|
||||
return nil
|
||||
}
|
||||
|
||||
return err
|
||||
}
|
||||
|
||||
// Decode returns the decoded form of src. The returned slice may be a
|
||||
// subslice of dst if it was large enough to hold the entire decoded block.
|
||||
func Decode(dst, src []byte) ([]byte, error) {
|
||||
|
||||
if len(src) < 4 {
|
||||
return nil, ErrCorrupt
|
||||
}
|
||||
|
||||
uncompressedLen := binary.LittleEndian.Uint32(src)
|
||||
|
||||
if uncompressedLen == 0 {
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
if uncompressedLen > MaxInputSize {
|
||||
return nil, ErrTooLarge
|
||||
}
|
||||
|
||||
if dst == nil || len(dst) < int(uncompressedLen) {
|
||||
dst = make([]byte, uncompressedLen)
|
||||
}
|
||||
|
||||
d := decoder{src: src, dst: dst[:uncompressedLen], spos: 4}
|
||||
|
||||
decr := []uint32{0, 3, 2, 3}
|
||||
|
||||
for {
|
||||
code, err := d.readByte()
|
||||
if err != nil {
|
||||
return d.dst, d.finish(err)
|
||||
}
|
||||
|
||||
length := uint32(code >> mlBits)
|
||||
if length == runMask {
|
||||
ln, err := d.getLen()
|
||||
if err != nil {
|
||||
return nil, ErrCorrupt
|
||||
}
|
||||
length += ln
|
||||
}
|
||||
|
||||
if int(d.spos+length) > len(d.src) || int(d.dpos+length) > len(d.dst) {
|
||||
return nil, ErrCorrupt
|
||||
}
|
||||
|
||||
for ii := uint32(0); ii < length; ii++ {
|
||||
d.dst[d.dpos+ii] = d.src[d.spos+ii]
|
||||
}
|
||||
|
||||
d.spos += length
|
||||
d.dpos += length
|
||||
|
||||
if int(d.spos) == len(d.src) {
|
||||
return d.dst, nil
|
||||
}
|
||||
|
||||
if int(d.spos+2) >= len(d.src) {
|
||||
return nil, ErrCorrupt
|
||||
}
|
||||
|
||||
back := uint32(d.src[d.spos]) | uint32(d.src[d.spos+1])<<8
|
||||
|
||||
if back > d.dpos {
|
||||
return nil, ErrCorrupt
|
||||
}
|
||||
|
||||
d.spos += 2
|
||||
d.ref = d.dpos - back
|
||||
|
||||
length = uint32(code & mlMask)
|
||||
if length == mlMask {
|
||||
ln, err := d.getLen()
|
||||
if err != nil {
|
||||
return nil, ErrCorrupt
|
||||
}
|
||||
length += ln
|
||||
}
|
||||
|
||||
literal := d.dpos - d.ref
|
||||
|
||||
if literal < 4 {
|
||||
if int(d.dpos+4) > len(d.dst) {
|
||||
return nil, ErrCorrupt
|
||||
}
|
||||
|
||||
d.cp(4, decr[literal])
|
||||
} else {
|
||||
length += 4
|
||||
}
|
||||
|
||||
if d.dpos+length > uncompressedLen {
|
||||
return nil, ErrCorrupt
|
||||
}
|
||||
|
||||
d.cp(length, 0)
|
||||
}
|
||||
}
|
||||
190
Godeps/_workspace/src/github.com/bkaradzic/go-lz4/writer.go
generated
vendored
@@ -1,190 +0,0 @@
|
||||
/*
|
||||
* Copyright 2011-2012 Branimir Karadzic. All rights reserved.
|
||||
*
|
||||
* Redistribution and use in source and binary forms, with or without modification,
|
||||
* are permitted provided that the following conditions are met:
|
||||
*
|
||||
* 1. Redistributions of source code must retain the above copyright notice, this
|
||||
* list of conditions and the following disclaimer.
|
||||
*
|
||||
* 2. Redistributions in binary form must reproduce the above copyright notice,
|
||||
* this list of conditions and the following disclaimer in the documentation
|
||||
* and/or other materials provided with the distribution.
|
||||
*
|
||||
* THIS SOFTWARE IS PROVIDED BY COPYRIGHT HOLDER ``AS IS'' AND ANY EXPRESS OR
|
||||
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
|
||||
* MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
|
||||
* SHALL COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
|
||||
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
|
||||
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
|
||||
* WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
|
||||
* OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
|
||||
* THE POSSIBILITY OF SUCH DAMAGE.
|
||||
*/
|
||||
|
||||
package lz4
|
||||
|
||||
import (
|
||||
"encoding/binary"
|
||||
"errors"
|
||||
)
|
||||
|
||||
const (
|
||||
minMatch = 4
|
||||
hashLog = 17
|
||||
hashTableSize = 1 << hashLog
|
||||
hashShift = (minMatch * 8) - hashLog
|
||||
incompressible uint32 = 128
|
||||
uninitHash = 0x88888888
|
||||
|
||||
// MaxInputSize is the largest buffer that can be compressed in a single block
|
||||
MaxInputSize = 0x7E000000
|
||||
)
|
||||
|
||||
var (
|
||||
// ErrTooLarge indicates the input buffer was too large
|
||||
ErrTooLarge = errors.New("input too large")
|
||||
)
|
||||
|
||||
type encoder struct {
|
||||
src []byte
|
||||
dst []byte
|
||||
hashTable []uint32
|
||||
pos uint32
|
||||
anchor uint32
|
||||
dpos uint32
|
||||
}
|
||||
|
||||
// CompressBound returns the maximum length of a lz4 block, given its uncompressed length
|
||||
func CompressBound(isize int) int {
|
||||
if isize > MaxInputSize {
|
||||
return 0
|
||||
}
|
||||
return isize + ((isize) / 255) + 16 + 4
|
||||
}
|
||||
|
||||
func (e *encoder) writeLiterals(length, mlLen, pos uint32) {
|
||||
|
||||
ln := length
|
||||
|
||||
var code byte
|
||||
if ln > runMask-1 {
|
||||
code = runMask
|
||||
} else {
|
||||
code = byte(ln)
|
||||
}
|
||||
|
||||
if mlLen > mlMask-1 {
|
||||
e.dst[e.dpos] = (code << mlBits) + byte(mlMask)
|
||||
} else {
|
||||
e.dst[e.dpos] = (code << mlBits) + byte(mlLen)
|
||||
}
|
||||
e.dpos++
|
||||
|
||||
if code == runMask {
|
||||
ln -= runMask
|
||||
for ; ln > 254; ln -= 255 {
|
||||
e.dst[e.dpos] = 255
|
||||
e.dpos++
|
||||
}
|
||||
|
||||
e.dst[e.dpos] = byte(ln)
|
||||
e.dpos++
|
||||
}
|
||||
|
||||
for ii := uint32(0); ii < length; ii++ {
|
||||
e.dst[e.dpos+ii] = e.src[pos+ii]
|
||||
}
|
||||
|
||||
e.dpos += length
|
||||
}
|
||||
|
||||
// Encode returns the encoded form of src. The returned array may be a
|
||||
// sub-slice of dst if it was large enough to hold the entire output.
|
||||
func Encode(dst, src []byte) ([]byte, error) {
|
||||
|
||||
if len(src) >= MaxInputSize {
|
||||
return nil, ErrTooLarge
|
||||
}
|
||||
|
||||
if n := CompressBound(len(src)); len(dst) < n {
|
||||
dst = make([]byte, n)
|
||||
}
|
||||
|
||||
e := encoder{src: src, dst: dst, hashTable: make([]uint32, hashTableSize)}
|
||||
|
||||
binary.LittleEndian.PutUint32(dst, uint32(len(src)))
|
||||
e.dpos = 4
|
||||
|
||||
var (
|
||||
step uint32 = 1
|
||||
limit = incompressible
|
||||
)
|
||||
|
||||
for {
|
||||
if int(e.pos)+12 >= len(e.src) {
|
||||
e.writeLiterals(uint32(len(e.src))-e.anchor, 0, e.anchor)
|
||||
return e.dst[:e.dpos], nil
|
||||
}
|
||||
|
||||
sequence := uint32(e.src[e.pos+3])<<24 | uint32(e.src[e.pos+2])<<16 | uint32(e.src[e.pos+1])<<8 | uint32(e.src[e.pos+0])
|
||||
|
||||
hash := (sequence * 2654435761) >> hashShift
|
||||
ref := e.hashTable[hash] + uninitHash
|
||||
e.hashTable[hash] = e.pos - uninitHash
|
||||
|
||||
if ((e.pos-ref)>>16) != 0 || uint32(e.src[ref+3])<<24|uint32(e.src[ref+2])<<16|uint32(e.src[ref+1])<<8|uint32(e.src[ref+0]) != sequence {
|
||||
if e.pos-e.anchor > limit {
|
||||
limit <<= 1
|
||||
step += 1 + (step >> 2)
|
||||
}
|
||||
e.pos += step
|
||||
continue
|
||||
}
|
||||
|
||||
if step > 1 {
|
||||
e.hashTable[hash] = ref - uninitHash
|
||||
e.pos -= step - 1
|
||||
step = 1
|
||||
continue
|
||||
}
|
||||
limit = incompressible
|
||||
|
||||
ln := e.pos - e.anchor
|
||||
back := e.pos - ref
|
||||
|
||||
anchor := e.anchor
|
||||
|
||||
e.pos += minMatch
|
||||
ref += minMatch
|
||||
e.anchor = e.pos
|
||||
|
||||
for int(e.pos) < len(e.src)-5 && e.src[e.pos] == e.src[ref] {
|
||||
e.pos++
|
||||
ref++
|
||||
}
|
||||
|
||||
mlLen := e.pos - e.anchor
|
||||
|
||||
e.writeLiterals(ln, mlLen, anchor)
|
||||
e.dst[e.dpos] = uint8(back)
|
||||
e.dst[e.dpos+1] = uint8(back >> 8)
|
||||
e.dpos += 2
|
||||
|
||||
if mlLen > mlMask-1 {
|
||||
mlLen -= mlMask
|
||||
for mlLen > 254 {
|
||||
mlLen -= 255
|
||||
|
||||
e.dst[e.dpos] = 255
|
||||
e.dpos++
|
||||
}
|
||||
|
||||
e.dst[e.dpos] = byte(mlLen)
|
||||
e.dpos++
|
||||
}
|
||||
|
||||
e.anchor = e.pos
|
||||
}
|
||||
}
|
||||
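The Encode comment in writer.go above notes that the returned slice may be a sub-slice of dst when dst is large enough. As a hedged, illustrative sketch (not part of the diff; example data invented), that can be combined with CompressBound to avoid a fresh allocation per block:

```go
package main

import (
	"fmt"

	lz4 "github.com/bkaradzic/go-lz4"
)

func main() {
	blocks := [][]byte{
		[]byte("first block first block first block"),
		[]byte("second block second block second block second block"),
	}

	var dst []byte
	for _, b := range blocks {
		// Grow dst to CompressBound(len(b)) once; Encode then writes into it
		// instead of allocating a new buffer.
		if n := lz4.CompressBound(len(b)); len(dst) < n {
			dst = make([]byte, n)
		}
		// Note: out aliases dst, so copy it before the next iteration if the
		// compressed bytes must be kept around.
		out, err := lz4.Encode(dst, b)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d -> %d bytes\n", len(b), len(out))
	}
}
```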
24
Godeps/_workspace/src/github.com/calmh/du/LICENSE
generated
vendored
@@ -1,24 +0,0 @@
|
||||
This is free and unencumbered software released into the public domain.
|
||||
|
||||
Anyone is free to copy, modify, publish, use, compile, sell, or
|
||||
distribute this software, either in source code form or as a compiled
|
||||
binary, for any purpose, commercial or non-commercial, and by any
|
||||
means.
|
||||
|
||||
In jurisdictions that recognize copyright laws, the author or authors
|
||||
of this software dedicate any and all copyright interest in the
|
||||
software to the public domain. We make this dedication for the benefit
|
||||
of the public at large and to the detriment of our heirs and
|
||||
successors. We intend this dedication to be an overt act of
|
||||
relinquishment in perpetuity of all present and future rights to this
|
||||
software under copyright law.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
|
||||
IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
|
||||
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
|
||||
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
|
||||
OTHER DEALINGS IN THE SOFTWARE.
|
||||
|
||||
For more information, please refer to <http://unlicense.org>
|
||||
14
Godeps/_workspace/src/github.com/calmh/du/README.md
generated
vendored
@@ -1,14 +0,0 @@
|
||||
du
|
||||
==
|
||||
|
||||
Get total and available disk space on a given volume.
|
||||
|
||||
Documentation
|
||||
-------------
|
||||
|
||||
http://godoc.org/github.com/calmh/du
|
||||
|
||||
License
|
||||
-------
|
||||
|
||||
Public Domain
|
||||
21
Godeps/_workspace/src/github.com/calmh/du/cmd/du/main.go
generated
vendored
@@ -1,21 +0,0 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"log"
|
||||
"os"
|
||||
|
||||
"github.com/calmh/du"
|
||||
)
|
||||
|
||||
var KB = int64(1024)
|
||||
|
||||
func main() {
|
||||
usage, err := du.Get(os.Args[1])
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
fmt.Println("Free:", usage.FreeBytes/(KB*KB), "MiB")
|
||||
fmt.Println("Available:", usage.AvailBytes/(KB*KB), "MiB")
|
||||
fmt.Println("Size:", usage.TotalBytes/(KB*KB), "MiB")
|
||||
}
|
||||
8
Godeps/_workspace/src/github.com/calmh/du/diskusage.go
generated
vendored
@@ -1,8 +0,0 @@
|
||||
package du
|
||||
|
||||
// Usage holds information about total and available storage on a volume.
|
||||
type Usage struct {
|
||||
TotalBytes int64 // Size of volume
|
||||
FreeBytes int64 // Unused size
|
||||
AvailBytes int64 // Available to a non-privileged user
|
||||
}
|
||||
24
Godeps/_workspace/src/github.com/calmh/du/diskusage_posix.go
generated
vendored
@@ -1,24 +0,0 @@
|
||||
// +build !windows,!netbsd,!openbsd,!solaris
|
||||
|
||||
package du
|
||||
|
||||
import (
|
||||
"path/filepath"
|
||||
"syscall"
|
||||
)
|
||||
|
||||
// Get returns the Usage of a given path, or an error if usage data is
|
||||
// unavailable.
|
||||
func Get(path string) (Usage, error) {
|
||||
var stat syscall.Statfs_t
|
||||
err := syscall.Statfs(filepath.Clean(path), &stat)
|
||||
if err != nil {
|
||||
return Usage{}, err
|
||||
}
|
||||
u := Usage{
|
||||
FreeBytes: int64(stat.Bfree) * int64(stat.Bsize),
|
||||
TotalBytes: int64(stat.Blocks) * int64(stat.Bsize),
|
||||
AvailBytes: int64(stat.Bavail) * int64(stat.Bsize),
|
||||
}
|
||||
return u, nil
|
||||
}
|
||||
13
Godeps/_workspace/src/github.com/calmh/du/diskusage_unsupported.go
generated
vendored
@@ -1,13 +0,0 @@
|
||||
// +build netbsd openbsd solaris
|
||||
|
||||
package du
|
||||
|
||||
import "errors"
|
||||
|
||||
var ErrUnsupported = errors.New("unsupported platform")
|
||||
|
||||
// Get returns the Usage of a given path, or an error if usage data is
|
||||
// unavailable.
|
||||
func Get(path string) (Usage, error) {
|
||||
return Usage{}, ErrUnsupported
|
||||
}
|
||||
27
Godeps/_workspace/src/github.com/calmh/du/diskusage_windows.go
generated
vendored
@@ -1,27 +0,0 @@
|
||||
package du
|
||||
|
||||
import (
|
||||
"syscall"
|
||||
"unsafe"
|
||||
)
|
||||
|
||||
// Get returns the Usage of a given path, or an error if usage data is
|
||||
// unavailable.
|
||||
func Get(path string) (Usage, error) {
|
||||
h := syscall.MustLoadDLL("kernel32.dll")
|
||||
c := h.MustFindProc("GetDiskFreeSpaceExW")
|
||||
|
||||
var u Usage
|
||||
|
||||
ret, _, err := c.Call(
|
||||
uintptr(unsafe.Pointer(syscall.StringToUTF16Ptr(path))),
|
||||
uintptr(unsafe.Pointer(&u.FreeBytes)),
|
||||
uintptr(unsafe.Pointer(&u.TotalBytes)),
|
||||
uintptr(unsafe.Pointer(&u.AvailBytes)))
|
||||
|
||||
if ret == 0 {
|
||||
return u, err
|
||||
}
|
||||
|
||||
return u, nil
|
||||
}
|
||||
0
lib/logger/LICENSE → Godeps/_workspace/src/github.com/calmh/ini/LICENSE
generated
vendored
39
Godeps/_workspace/src/github.com/calmh/ini/README.md
generated
vendored
Normal file
@@ -0,0 +1,39 @@
|
||||
ini [](https://drone.io/github.com/calmh/ini/latest)
|
||||
===
|
||||
|
||||
Yet another .INI file parser / writer. Created because the existing ones
|
||||
were either not general enough (allowing easy access to all parts of the
|
||||
original file) or made annoying assumptions about the format. And
|
||||
probably equal parts NIH. You might want to just write your own instead
|
||||
of using this one, you know that's where you'll end up in the end
|
||||
anyhow.
|
||||
|
||||
Documentation
|
||||
-------------
|
||||
|
||||
http://godoc.org/github.com/calmh/ini
|
||||
|
||||
Example
|
||||
-------
|
||||
|
||||
```go
|
||||
fd, _ := os.Open("foo.ini")
|
||||
cfg := ini.Parse(fd)
|
||||
fd.Close()
|
||||
|
||||
val := cfg.Get("general", "foo")
|
||||
cfg.Set("general", "bar", "baz")
|
||||
|
||||
fd, _ = os.Create("bar.ini")
|
||||
err := cfg.Write(fd)
|
||||
if err != nil {
|
||||
// ...
|
||||
}
|
||||
err = fd.Close()
|
||||
|
||||
```
|
||||
|
||||
License
|
||||
-------
|
||||
|
||||
MIT
|
||||
235
Godeps/_workspace/src/github.com/calmh/ini/ini.go
generated
vendored
Normal file
@@ -0,0 +1,235 @@
|
||||
// Package ini provides trivial parsing of .INI format files.
|
||||
package ini
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"fmt"
|
||||
"io"
|
||||
"regexp"
|
||||
"strconv"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// Config is a parsed INI format file.
|
||||
type Config struct {
|
||||
sections []section
|
||||
comments []string
|
||||
}
|
||||
|
||||
type section struct {
|
||||
name string
|
||||
comments []string
|
||||
options []option
|
||||
}
|
||||
|
||||
type option struct {
|
||||
name, value string
|
||||
}
|
||||
|
||||
var (
|
||||
iniSectionRe = regexp.MustCompile(`^\[(.+)\]$`)
|
||||
iniOptionRe = regexp.MustCompile(`^([^\s=]+)\s*=\s*(.+?)$`)
|
||||
)
|
||||
|
||||
// Sections returns the list of sections in the file.
|
||||
func (c *Config) Sections() []string {
|
||||
var sections []string
|
||||
for _, sect := range c.sections {
|
||||
sections = append(sections, sect.name)
|
||||
}
|
||||
return sections
|
||||
}
|
||||
|
||||
// Options returns the list of options in a given section.
|
||||
func (c *Config) Options(section string) []string {
|
||||
var options []string
|
||||
for _, sect := range c.sections {
|
||||
if sect.name == section {
|
||||
for _, opt := range sect.options {
|
||||
options = append(options, opt.name)
|
||||
}
|
||||
break
|
||||
}
|
||||
}
|
||||
return options
|
||||
}
|
||||
|
||||
// OptionMap returns the map option => value for a given section.
|
||||
func (c *Config) OptionMap(section string) map[string]string {
|
||||
options := make(map[string]string)
|
||||
for _, sect := range c.sections {
|
||||
if sect.name == section {
|
||||
for _, opt := range sect.options {
|
||||
options[opt.name] = opt.value
|
||||
}
|
||||
break
|
||||
}
|
||||
}
|
||||
return options
|
||||
}
|
||||
|
||||
// Comments returns the list of comments in a given section.
|
||||
// For the empty string, returns the file comments.
|
||||
func (c *Config) Comments(section string) []string {
|
||||
if section == "" {
|
||||
return c.comments
|
||||
}
|
||||
for _, sect := range c.sections {
|
||||
if sect.name == section {
|
||||
return sect.comments
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// AddComment appends the comment to the list of comments for the section.
|
||||
func (c *Config) AddComment(sect, comment string) {
|
||||
if sect == "" {
|
||||
c.comments = append(c.comments, comment)
|
||||
return
|
||||
}
|
||||
|
||||
for i, s := range c.sections {
|
||||
if s.name == sect {
|
||||
c.sections[i].comments = append(s.comments, comment)
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
c.sections = append(c.sections, section{
|
||||
name: sect,
|
||||
comments: []string{comment},
|
||||
})
|
||||
}
|
||||
|
||||
// Parse reads the given io.Reader and returns a parsed Config object.
|
||||
func Parse(stream io.Reader) Config {
|
||||
var cfg Config
|
||||
var curSection string
|
||||
|
||||
scanner := bufio.NewScanner(bufio.NewReader(stream))
|
||||
for scanner.Scan() {
|
||||
line := strings.TrimSpace(scanner.Text())
|
||||
if strings.HasPrefix(line, "#") || strings.HasPrefix(line, ";") {
|
||||
comment := strings.TrimLeft(line, ";# ")
|
||||
cfg.AddComment(curSection, comment)
|
||||
} else if len(line) > 0 {
|
||||
if m := iniSectionRe.FindStringSubmatch(line); len(m) > 0 {
|
||||
curSection = m[1]
|
||||
} else if m := iniOptionRe.FindStringSubmatch(line); len(m) > 0 {
|
||||
key := m[1]
|
||||
val := m[2]
|
||||
if !strings.Contains(val, "\"") {
|
||||
// If val does not contain any quote characters, we can make it
|
||||
// a quoted string and safely let strconv.Unquote sort out any
|
||||
// escapes
|
||||
val = "\"" + val + "\""
|
||||
}
|
||||
if val[0] == '"' {
|
||||
val, _ = strconv.Unquote(val)
|
||||
}
|
||||
|
||||
cfg.Set(curSection, key, val)
|
||||
}
|
||||
}
|
||||
}
|
||||
return cfg
|
||||
}
|
||||
|
||||
// Write writes the sections and options to the io.Writer in INI format.
|
||||
func (c *Config) Write(out io.Writer) error {
|
||||
for _, cmt := range c.comments {
|
||||
fmt.Fprintln(out, "; "+cmt)
|
||||
}
|
||||
if len(c.comments) > 0 {
|
||||
fmt.Fprintln(out)
|
||||
}
|
||||
|
||||
for _, sect := range c.sections {
|
||||
fmt.Fprintf(out, "[%s]\n", sect.name)
|
||||
for _, cmt := range sect.comments {
|
||||
fmt.Fprintln(out, "; "+cmt)
|
||||
}
|
||||
for _, opt := range sect.options {
|
||||
val := opt.value
|
||||
if len(val) == 0 {
|
||||
continue
|
||||
}
|
||||
|
||||
// Quote the string if it begins or ends with space
|
||||
needsQuoting := val[0] == ' ' || val[len(val)-1] == ' '
|
||||
|
||||
if !needsQuoting {
|
||||
// Quote the string if it contains any unprintable characters
|
||||
for _, r := range val {
|
||||
if !strconv.IsPrint(r) {
|
||||
needsQuoting = true
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if needsQuoting {
|
||||
val = strconv.Quote(val)
|
||||
}
|
||||
|
||||
fmt.Fprintf(out, "%s=%s\n", opt.name, val)
|
||||
}
|
||||
fmt.Fprintln(out)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Get gets the value from the specified section and key name, or the empty
|
||||
// string if either the section or the key is missing.
|
||||
func (c *Config) Get(section, key string) string {
|
||||
for _, sect := range c.sections {
|
||||
if sect.name == section {
|
||||
for _, opt := range sect.options {
|
||||
if opt.name == key {
|
||||
return opt.value
|
||||
}
|
||||
}
|
||||
return ""
|
||||
}
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
// Set sets a value for an option in a section. If the option exists, its
|
||||
// value will be overwritten. If the option does not exist, it will be added.
|
||||
// If the section does not exist, it will be added and the option added to it.
|
||||
func (c *Config) Set(sectionName, key, value string) {
|
||||
for i, sect := range c.sections {
|
||||
if sect.name == sectionName {
|
||||
for j, opt := range sect.options {
|
||||
if opt.name == key {
|
||||
c.sections[i].options[j].value = value
|
||||
return
|
||||
}
|
||||
}
|
||||
c.sections[i].options = append(sect.options, option{key, value})
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
c.sections = append(c.sections, section{
|
||||
name: sectionName,
|
||||
options: []option{{key, value}},
|
||||
})
|
||||
}
|
||||
|
||||
// Delete removes the option from the specified section.
|
||||
func (c *Config) Delete(section, key string) {
|
||||
for sn, sect := range c.sections {
|
||||
if sect.name == section {
|
||||
for i, opt := range sect.options {
|
||||
if opt.name == key {
|
||||
c.sections[sn].options = append(sect.options[:i], sect.options[i+1:]...)
|
||||
return
|
||||
}
|
||||
}
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
214 Godeps/_workspace/src/github.com/calmh/ini/ini_test.go generated vendored Normal file
@@ -0,0 +1,214 @@
|
||||
package ini_test
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"strings"
|
||||
"testing"
|
||||
|
||||
"github.com/calmh/ini"
|
||||
)
|
||||
|
||||
func TestParseValues(t *testing.T) {
|
||||
strs := []string{
|
||||
`[general]`,
|
||||
`k1=v1`,
|
||||
`k2 = v2`,
|
||||
` k3 = v3 `,
|
||||
`k4=" quoted spaces "`,
|
||||
`k5 = " quoted spaces " `,
|
||||
`k6 = with\nnewline`,
|
||||
`k7 = "with\nnewline"`,
|
||||
`k8 = a "quoted" word`,
|
||||
`k9 = "a \"quoted\" word"`,
|
||||
}
|
||||
buf := bytes.NewBufferString(strings.Join(strs, "\n"))
|
||||
cfg := ini.Parse(buf)
|
||||
|
||||
correct := map[string]string{
|
||||
"k1": "v1",
|
||||
"k2": "v2",
|
||||
"k3": "v3",
|
||||
"k4": " quoted spaces ",
|
||||
"k5": " quoted spaces ",
|
||||
"k6": "with\nnewline",
|
||||
"k7": "with\nnewline",
|
||||
"k8": "a \"quoted\" word",
|
||||
"k9": "a \"quoted\" word",
|
||||
}
|
||||
|
||||
for k, v := range correct {
|
||||
if v2 := cfg.Get("general", k); v2 != v {
|
||||
t.Errorf("Incorrect general.%s, %q != %q", k, v2, v)
|
||||
}
|
||||
}
|
||||
|
||||
if v := cfg.Get("general", "nonexistant"); v != "" {
|
||||
t.Errorf("Unexpected non-empty value %q", v)
|
||||
}
|
||||
}
|
||||
|
||||
func TestParseComments(t *testing.T) {
|
||||
strs := []string{
|
||||
";file comment 1", // No leading space
|
||||
"; file comment 2 ", // Trailing space
|
||||
"; file comment 3", // Multiple leading spaces
|
||||
"[general]",
|
||||
"; b general comment 1", // Comments in unsorted order
|
||||
"somekey = somevalue",
|
||||
"; a general comment 2",
|
||||
"[other]",
|
||||
"; other comment 1", // Comments in section with no values
|
||||
"; other comment 2",
|
||||
"[other2]",
|
||||
"; other2 comment 1",
|
||||
"; other2 comment 2", // Comments on last section
|
||||
"somekey = somevalue",
|
||||
}
|
||||
buf := bytes.NewBufferString(strings.Join(strs, "\n"))
|
||||
|
||||
correct := map[string][]string{
|
||||
"": []string{"file comment 1", "file comment 2", "file comment 3"},
|
||||
"general": []string{"b general comment 1", "a general comment 2"},
|
||||
"other": []string{"other comment 1", "other comment 2"},
|
||||
"other2": []string{"other2 comment 1", "other2 comment 2"},
|
||||
}
|
||||
|
||||
cfg := ini.Parse(buf)
|
||||
|
||||
for section, comments := range correct {
|
||||
cmts := cfg.Comments(section)
|
||||
if len(cmts) != len(comments) {
|
||||
t.Errorf("Incorrect number of comments for section %q: %d != %d", section, len(cmts), len(comments))
|
||||
} else {
|
||||
for i := range comments {
|
||||
if cmts[i] != comments[i] {
|
||||
t.Errorf("Incorrect comment: %q != %q", cmts[i], comments[i])
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestWrite(t *testing.T) {
|
||||
cfg := ini.Config{}
|
||||
cfg.Set("general", "k1", "v1")
|
||||
cfg.Set("general", "k2", "foo bar")
|
||||
cfg.Set("general", "k3", " foo bar ")
|
||||
cfg.Set("general", "k4", "foo\nbar")
|
||||
|
||||
var out bytes.Buffer
|
||||
cfg.Write(&out)
|
||||
|
||||
correct := `[general]
|
||||
k1=v1
|
||||
k2=foo bar
|
||||
k3=" foo bar "
|
||||
k4="foo\nbar"
|
||||
|
||||
`
|
||||
if s := out.String(); s != correct {
|
||||
t.Errorf("Incorrect written .INI:\n%s\ncorrect:\n%s", s, correct)
|
||||
}
|
||||
}
|
||||
|
||||
func TestSet(t *testing.T) {
|
||||
buf := bytes.NewBufferString("[general]\nfoo=bar\nfoo2=bar2\n")
|
||||
cfg := ini.Parse(buf)
|
||||
|
||||
cfg.Set("general", "foo", "baz") // Overwrite existing
|
||||
cfg.Set("general", "baz", "quux") // Create new value
|
||||
cfg.Set("other", "baz2", "quux2") // Create new section + value
|
||||
|
||||
var out bytes.Buffer
|
||||
cfg.Write(&out)
|
||||
|
||||
correct := `[general]
|
||||
foo=baz
|
||||
foo2=bar2
|
||||
baz=quux
|
||||
|
||||
[other]
|
||||
baz2=quux2
|
||||
|
||||
`
|
||||
|
||||
if s := out.String(); s != correct {
|
||||
t.Errorf("Incorrect INI after set:\n%s", s)
|
||||
}
|
||||
}
|
||||
|
||||
func TestDelete(t *testing.T) {
|
||||
buf := bytes.NewBufferString("[general]\nfoo=bar\nfoo2=bar2\nfoo3=baz\n")
|
||||
cfg := ini.Parse(buf)
|
||||
cfg.Delete("general", "foo")
|
||||
out := new(bytes.Buffer)
|
||||
cfg.Write(out)
|
||||
correct := "[general]\nfoo2=bar2\nfoo3=baz\n\n"
|
||||
|
||||
if s := out.String(); s != correct {
|
||||
t.Errorf("Incorrect INI after delete:\n%s", s)
|
||||
}
|
||||
|
||||
buf = bytes.NewBufferString("[general]\nfoo=bar\nfoo2=bar2\nfoo3=baz\n")
|
||||
cfg = ini.Parse(buf)
|
||||
cfg.Delete("general", "foo2")
|
||||
out = new(bytes.Buffer)
|
||||
cfg.Write(out)
|
||||
correct = "[general]\nfoo=bar\nfoo3=baz\n\n"
|
||||
|
||||
if s := out.String(); s != correct {
|
||||
t.Errorf("Incorrect INI after delete:\n%s", s)
|
||||
}
|
||||
|
||||
buf = bytes.NewBufferString("[general]\nfoo=bar\nfoo2=bar2\nfoo3=baz\n")
|
||||
cfg = ini.Parse(buf)
|
||||
cfg.Delete("general", "foo3")
|
||||
out = new(bytes.Buffer)
|
||||
cfg.Write(out)
|
||||
correct = "[general]\nfoo=bar\nfoo2=bar2\n\n"
|
||||
|
||||
if s := out.String(); s != correct {
|
||||
t.Errorf("Incorrect INI after delete:\n%s", s)
|
||||
}
|
||||
}
|
||||
|
||||
func TestSetManyEquals(t *testing.T) {
|
||||
buf := bytes.NewBufferString("[general]\nfoo=bar==\nfoo2=bar2==\n")
|
||||
cfg := ini.Parse(buf)
|
||||
|
||||
cfg.Set("general", "foo", "baz==")
|
||||
|
||||
var out bytes.Buffer
|
||||
cfg.Write(&out)
|
||||
|
||||
correct := `[general]
|
||||
foo=baz==
|
||||
foo2=bar2==
|
||||
|
||||
`
|
||||
|
||||
if s := out.String(); s != correct {
|
||||
t.Errorf("Incorrect INI after set:\n%s", s)
|
||||
}
|
||||
}
|
||||
|
||||
func TestRewriteDuplicate(t *testing.T) {
|
||||
buf := bytes.NewBufferString("[general]\nfoo=bar==\nfoo=bar2==\n")
|
||||
cfg := ini.Parse(buf)
|
||||
|
||||
if v := cfg.Get("general", "foo"); v != "bar2==" {
|
||||
t.Errorf("incorrect get %q", v)
|
||||
}
|
||||
|
||||
var out bytes.Buffer
|
||||
cfg.Write(&out)
|
||||
|
||||
correct := `[general]
|
||||
foo=bar2==
|
||||
|
||||
`
|
||||
|
||||
if s := out.String(); s != correct {
|
||||
t.Errorf("Incorrect INI after set:\n%s", s)
|
||||
}
|
||||
}
|
||||
19 Godeps/_workspace/src/github.com/calmh/luhn/LICENSE generated vendored
@@ -1,19 +0,0 @@
|
||||
Copyright (C) 2014 Jakob Borg
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy of
|
||||
this software and associated documentation files (the "Software"), to deal in
|
||||
the Software without restriction, including without limitation the rights to
|
||||
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
|
||||
of the Software, and to permit persons to whom the Software is furnished to do
|
||||
so, subject to the following conditions:
|
||||
|
||||
- The above copyright notice and this permission notice shall be included in
|
||||
all copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||
SOFTWARE.
|
||||
70 Godeps/_workspace/src/github.com/calmh/luhn/luhn.go generated vendored
@@ -1,70 +0,0 @@
// Copyright (C) 2014 Jakob Borg

// Package luhn generates and validates Luhn mod N check digits.
package luhn

import (
	"fmt"
	"strings"
)

// An alphabet is a string of N characters, representing the digits of a given
// base N.
type Alphabet string

var (
	Base32 Alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"
)

// Generate returns a check digit for the string s, which should be composed
// of characters from the Alphabet a.
func (a Alphabet) Generate(s string) (rune, error) {
	if err := a.check(); err != nil {
		return 0, err
	}

	factor := 1
	sum := 0
	n := len(a)

	for i := range s {
		codepoint := strings.IndexByte(string(a), s[i])
		if codepoint == -1 {
			return 0, fmt.Errorf("Digit %q not valid in alphabet %q", s[i], a)
		}
		addend := factor * codepoint
		if factor == 2 {
			factor = 1
		} else {
			factor = 2
		}
		addend = (addend / n) + (addend % n)
		sum += addend
	}
	remainder := sum % n
	checkCodepoint := (n - remainder) % n
	return rune(a[checkCodepoint]), nil
}

// Validate returns true if the last character of the string s is correct, for
// a string s composed of characters in the alphabet a.
func (a Alphabet) Validate(s string) bool {
	t := s[:len(s)-1]
	c, err := a.Generate(t)
	if err != nil {
		return false
	}
	return rune(s[len(s)-1]) == c
}

// check returns an error if the given alphabet does not consist of unique characters
func (a Alphabet) check() error {
	cm := make(map[byte]bool, len(a))
	for i := range a {
		if cm[a[i]] {
			return fmt.Errorf("Digit %q non-unique in alphabet %q", a[i], a)
		}
		cm[a[i]] = true
	}
	return nil
}
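A minimal sketch of generating and then validating a check character with the Base32 alphabet defined above (the input string is illustrative, any string drawn from the alphabet works the same way):

```go
package main

import (
	"fmt"
	"log"

	"github.com/calmh/luhn"
)

func main() {
	id := "MFZWI3D" // illustrative data composed of Base32 alphabet characters
	check, err := luhn.Base32.Generate(id)
	if err != nil {
		log.Fatal(err)
	}
	full := id + string(check) // append the check digit
	fmt.Println(full, luhn.Base32.Validate(full)) // the appended check digit validates
}
```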
1 Godeps/_workspace/src/github.com/calmh/xdr/.gitignore generated vendored
@@ -1 +0,0 @@
coverage.out
19 Godeps/_workspace/src/github.com/calmh/xdr/.travis.yml generated vendored
@@ -1,19 +0,0 @@
language: go
go:
  - tip

install:
  - export PATH=$PATH:$HOME/gopath/bin
  - go get golang.org/x/tools/cover
  - go get github.com/mattn/goveralls

script:
  - ./generate.sh
  - go test -coverprofile=coverage.out

after_success:
  - goveralls -coverprofile=coverage.out -service=travis-ci -package=calmh/xdr -repotoken="$COVERALLS_TOKEN"

env:
  global:
    secure: SmgnrGfp2zLrA44ChRMpjPeujubt9veZ8Fx/OseMWECmacyV5N/TuDhzIbwo6QwV4xB0sBacoPzvxQbJRVjNKsPiSu72UbcQmQ7flN4Tf7nW09tSh1iW8NgrpBCq/3UYLoBu2iPBEBKm93IK0aGNAKs6oEkB0fU27iTVBwiTXOY=
19 Godeps/_workspace/src/github.com/calmh/xdr/LICENSE generated vendored
@@ -1,19 +0,0 @@
|
||||
Copyright (C) 2014 Jakob Borg.
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
of this software and associated documentation files (the "Software"), to
|
||||
deal in the Software without restriction, including without limitation the
|
||||
rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
|
||||
sell copies of the Software, and to permit persons to whom the Software is
|
||||
furnished to do so, subject to the following conditions:
|
||||
|
||||
- The above copyright notice and this permission notice shall be included in
|
||||
all copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
|
||||
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
|
||||
IN THE SOFTWARE.
|
||||
12 Godeps/_workspace/src/github.com/calmh/xdr/README.md generated vendored
@@ -1,12 +0,0 @@
xdr
===

[](https://circleci.com/gh/calmh/xdr)
[](https://coveralls.io/r/calmh/xdr?branch=master)
[](http://godoc.org/github.com/calmh/xdr)
[](http://opensource.org/licenses/MIT)

This is an XDR encoding/decoding library. It uses code generation and
not reflection. It supports the IPDR bastardized XDR format when built
with `-tags ipdr`.
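The generated codecs later in this diff all bottom out in the package's Writer and Reader primitives. A hedged round-trip sketch of that low-level API; the reader calls (NewReader, ReadUint32, ReadString, Error) appear in reader.go below, while the writer names (NewWriter, WriteUint32, WriteString, Error) are assumed from the generator templates, since writer.go itself is not shown here:

```go
package main

import (
	"bytes"
	"fmt"

	"github.com/calmh/xdr"
)

func main() {
	var buf bytes.Buffer

	// Encode a couple of fields the same way a generated EncodeXDRInto would.
	xw := xdr.NewWriter(&buf)
	xw.WriteUint32(42)
	xw.WriteString("hello")
	if err := xw.Error(); err != nil {
		panic(err)
	}

	// Decode them back in the same order.
	xr := xdr.NewReader(&buf)
	n := xr.ReadUint32()
	s := xr.ReadString()
	fmt.Println(n, s, xr.Error()) // 42 hello <nil>
}
```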
3 Godeps/_workspace/src/github.com/calmh/xdr/circle.yml generated vendored
@@ -1,3 +0,0 @@
dependencies:
  post:
    - ./generate.sh
530 Godeps/_workspace/src/github.com/calmh/xdr/cmd/genxdr/main.go generated vendored
@@ -1,530 +0,0 @@
|
||||
// Copyright (C) 2014 Jakob Borg. All rights reserved. Use of this source code
|
||||
// is governed by an MIT-style license that can be found in the LICENSE file.
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"flag"
|
||||
"fmt"
|
||||
"go/ast"
|
||||
"go/format"
|
||||
"go/parser"
|
||||
"go/token"
|
||||
"io"
|
||||
"log"
|
||||
"os"
|
||||
"regexp"
|
||||
"strconv"
|
||||
"strings"
|
||||
"text/template"
|
||||
)
|
||||
|
||||
type fieldInfo struct {
|
||||
Name string
|
||||
IsBasic bool // handled by one of the native Read/WriteUint64 etc functions
|
||||
IsSlice bool // field is a slice of FieldType
|
||||
FieldType string // original type of field, i.e. "int"
|
||||
Encoder string // the encoder name, i.e. "Uint64" for Read/WriteUint64
|
||||
Convert string // what to convert to when encoding, i.e. "uint64"
|
||||
Max int // max size for slices and strings
|
||||
Submax int // max size for strings inside slices
|
||||
}
|
||||
|
||||
type structInfo struct {
|
||||
Name string
|
||||
Fields []fieldInfo
|
||||
}
|
||||
|
||||
var headerTpl = template.Must(template.New("header").Parse(`// ************************************************************
|
||||
// This file is automatically generated by genxdr. Do not edit.
|
||||
// ************************************************************
|
||||
|
||||
package {{.Package}}
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"io"
|
||||
|
||||
"github.com/calmh/xdr"
|
||||
)
|
||||
`))
|
||||
|
||||
var encodeTpl = template.Must(template.New("encoder").Parse(`
|
||||
func (o {{.TypeName}}) EncodeXDR(w io.Writer) (int, error) {
|
||||
var xw = xdr.NewWriter(w)
|
||||
return o.EncodeXDRInto(xw)
|
||||
}//+n
|
||||
|
||||
func (o {{.TypeName}}) MarshalXDR() ([]byte, error) {
|
||||
return o.AppendXDR(make([]byte, 0, 128))
|
||||
}//+n
|
||||
|
||||
func (o {{.TypeName}}) MustMarshalXDR() []byte {
|
||||
bs, err := o.MarshalXDR()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
return bs
|
||||
}//+n
|
||||
|
||||
func (o {{.TypeName}}) AppendXDR(bs []byte) ([]byte, error) {
|
||||
var aw = xdr.AppendWriter(bs)
|
||||
var xw = xdr.NewWriter(&aw)
|
||||
_, err := o.EncodeXDRInto(xw)
|
||||
return []byte(aw), err
|
||||
}//+n
|
||||
|
||||
func (o {{.TypeName}}) EncodeXDRInto(xw *xdr.Writer) (int, error) {
|
||||
{{range $fieldInfo := .Fields}}
|
||||
{{if not $fieldInfo.IsSlice}}
|
||||
{{if ne $fieldInfo.Convert ""}}
|
||||
xw.Write{{$fieldInfo.Encoder}}({{$fieldInfo.Convert}}(o.{{$fieldInfo.Name}}))
|
||||
{{else if $fieldInfo.IsBasic}}
|
||||
{{if ge $fieldInfo.Max 1}}
|
||||
if l := len(o.{{$fieldInfo.Name}}); l > {{$fieldInfo.Max}} {
|
||||
return xw.Tot(), xdr.ElementSizeExceeded("{{$fieldInfo.Name}}", l, {{$fieldInfo.Max}})
|
||||
}
|
||||
{{end}}
|
||||
xw.Write{{$fieldInfo.Encoder}}(o.{{$fieldInfo.Name}})
|
||||
{{else}}
|
||||
_, err := o.{{$fieldInfo.Name}}.EncodeXDRInto(xw)
|
||||
if err != nil {
|
||||
return xw.Tot(), err
|
||||
}
|
||||
{{end}}
|
||||
{{else}}
|
||||
{{if ge $fieldInfo.Max 1}}
|
||||
if l := len(o.{{$fieldInfo.Name}}); l > {{$fieldInfo.Max}} {
|
||||
return xw.Tot(), xdr.ElementSizeExceeded("{{$fieldInfo.Name}}", l, {{$fieldInfo.Max}})
|
||||
}
|
||||
{{end}}
|
||||
xw.WriteUint32(uint32(len(o.{{$fieldInfo.Name}})))
|
||||
for i := range o.{{$fieldInfo.Name}} {
|
||||
{{if ne $fieldInfo.Convert ""}}
|
||||
xw.Write{{$fieldInfo.Encoder}}({{$fieldInfo.Convert}}(o.{{$fieldInfo.Name}}[i]))
|
||||
{{else if $fieldInfo.IsBasic}}
|
||||
xw.Write{{$fieldInfo.Encoder}}(o.{{$fieldInfo.Name}}[i])
|
||||
{{else}}
|
||||
_, err := o.{{$fieldInfo.Name}}[i].EncodeXDRInto(xw)
|
||||
if err != nil {
|
||||
return xw.Tot(), err
|
||||
}
|
||||
{{end}}
|
||||
}
|
||||
{{end}}
|
||||
{{end}}
|
||||
return xw.Tot(), xw.Error()
|
||||
}//+n
|
||||
|
||||
func (o *{{.TypeName}}) DecodeXDR(r io.Reader) error {
|
||||
xr := xdr.NewReader(r)
|
||||
return o.DecodeXDRFrom(xr)
|
||||
}//+n
|
||||
|
||||
func (o *{{.TypeName}}) UnmarshalXDR(bs []byte) error {
|
||||
var br = bytes.NewReader(bs)
|
||||
var xr = xdr.NewReader(br)
|
||||
return o.DecodeXDRFrom(xr)
|
||||
}//+n
|
||||
|
||||
func (o *{{.TypeName}}) DecodeXDRFrom(xr *xdr.Reader) error {
|
||||
{{range $fieldInfo := .Fields}}
|
||||
{{if not $fieldInfo.IsSlice}}
|
||||
{{if ne $fieldInfo.Convert ""}}
|
||||
o.{{$fieldInfo.Name}} = {{$fieldInfo.FieldType}}(xr.Read{{$fieldInfo.Encoder}}())
|
||||
{{else if $fieldInfo.IsBasic}}
|
||||
{{if ge $fieldInfo.Max 1}}
|
||||
o.{{$fieldInfo.Name}} = xr.Read{{$fieldInfo.Encoder}}Max({{$fieldInfo.Max}})
|
||||
{{else}}
|
||||
o.{{$fieldInfo.Name}} = xr.Read{{$fieldInfo.Encoder}}()
|
||||
{{end}}
|
||||
{{else}}
|
||||
(&o.{{$fieldInfo.Name}}).DecodeXDRFrom(xr)
|
||||
{{end}}
|
||||
{{else}}
|
||||
_{{$fieldInfo.Name}}Size := int(xr.ReadUint32())
|
||||
if _{{$fieldInfo.Name}}Size < 0 {
|
||||
return xdr.ElementSizeExceeded("{{$fieldInfo.Name}}", _{{$fieldInfo.Name}}Size, {{$fieldInfo.Max}})
|
||||
}
|
||||
{{if ge $fieldInfo.Max 1}}
|
||||
if _{{$fieldInfo.Name}}Size > {{$fieldInfo.Max}} {
|
||||
return xdr.ElementSizeExceeded("{{$fieldInfo.Name}}", _{{$fieldInfo.Name}}Size, {{$fieldInfo.Max}})
|
||||
}
|
||||
{{end}}
|
||||
o.{{$fieldInfo.Name}} = make([]{{$fieldInfo.FieldType}}, _{{$fieldInfo.Name}}Size)
|
||||
for i := range o.{{$fieldInfo.Name}} {
|
||||
{{if ne $fieldInfo.Convert ""}}
|
||||
o.{{$fieldInfo.Name}}[i] = {{$fieldInfo.FieldType}}(xr.Read{{$fieldInfo.Encoder}}())
|
||||
{{else if $fieldInfo.IsBasic}}
|
||||
{{if ge $fieldInfo.Submax 1}}
|
||||
o.{{$fieldInfo.Name}}[i] = xr.Read{{$fieldInfo.Encoder}}Max({{$fieldInfo.Submax}})
|
||||
{{else}}
|
||||
o.{{$fieldInfo.Name}}[i] = xr.Read{{$fieldInfo.Encoder}}()
|
||||
{{end}}
|
||||
{{else}}
|
||||
(&o.{{$fieldInfo.Name}}[i]).DecodeXDRFrom(xr)
|
||||
{{end}}
|
||||
}
|
||||
{{end}}
|
||||
{{end}}
|
||||
return xr.Error()
|
||||
}`))
|
||||
|
||||
var emptyTypeTpl = template.Must(template.New("encoder").Parse(`
|
||||
func (o {{.TypeName}}) EncodeXDR(w io.Writer) (int, error) {
|
||||
return 0, nil
|
||||
}//+n
|
||||
|
||||
func (o {{.TypeName}}) MarshalXDR() ([]byte, error) {
|
||||
return nil, nil
|
||||
}//+n
|
||||
|
||||
func (o {{.TypeName}}) MustMarshalXDR() []byte {
|
||||
return nil
|
||||
}//+n
|
||||
|
||||
func (o {{.TypeName}}) AppendXDR(bs []byte) ([]byte, error) {
|
||||
return bs, nil
|
||||
}//+n
|
||||
|
||||
func (o {{.TypeName}}) EncodeXDRInto(xw *xdr.Writer) (int, error) {
|
||||
return xw.Tot(), xw.Error()
|
||||
}//+n
|
||||
|
||||
func (o *{{.TypeName}}) DecodeXDR(r io.Reader) error {
|
||||
return nil
|
||||
}//+n
|
||||
|
||||
func (o *{{.TypeName}}) UnmarshalXDR(bs []byte) error {
|
||||
return nil
|
||||
}//+n
|
||||
|
||||
func (o *{{.TypeName}}) DecodeXDRFrom(xr *xdr.Reader) error {
|
||||
return xr.Error()
|
||||
}`))
|
||||
|
||||
var maxRe = regexp.MustCompile(`(?:\Wmax:)(\d+)(?:\s*,\s*(\d+))?`)
|
||||
|
||||
type typeSet struct {
|
||||
Type string
|
||||
Encoder string
|
||||
}
|
||||
|
||||
var xdrEncoders = map[string]typeSet{
|
||||
"int8": typeSet{"uint8", "Uint8"},
|
||||
"uint8": typeSet{"", "Uint8"},
|
||||
"int16": typeSet{"uint16", "Uint16"},
|
||||
"uint16": typeSet{"", "Uint16"},
|
||||
"int32": typeSet{"uint32", "Uint32"},
|
||||
"uint32": typeSet{"", "Uint32"},
|
||||
"int64": typeSet{"uint64", "Uint64"},
|
||||
"uint64": typeSet{"", "Uint64"},
|
||||
"int": typeSet{"uint64", "Uint64"},
|
||||
"string": typeSet{"", "String"},
|
||||
"[]byte": typeSet{"", "Bytes"},
|
||||
"bool": typeSet{"", "Bool"},
|
||||
}
|
||||
|
||||
func handleStruct(t *ast.StructType) []fieldInfo {
|
||||
var fs []fieldInfo
|
||||
|
||||
for _, sf := range t.Fields.List {
|
||||
if len(sf.Names) == 0 {
|
||||
// We don't handle anonymous fields
|
||||
continue
|
||||
}
|
||||
|
||||
fn := sf.Names[0].Name
|
||||
var max1, max2 int
|
||||
if sf.Comment != nil {
|
||||
c := sf.Comment.List[0].Text
|
||||
m := maxRe.FindStringSubmatch(c)
|
||||
if len(m) >= 2 {
|
||||
max1, _ = strconv.Atoi(m[1])
|
||||
}
|
||||
if len(m) >= 3 {
|
||||
max2, _ = strconv.Atoi(m[2])
|
||||
}
|
||||
if strings.Contains(c, "noencode") {
|
||||
continue
|
||||
}
|
||||
}
|
||||
|
||||
var f fieldInfo
|
||||
switch ft := sf.Type.(type) {
|
||||
case *ast.Ident:
|
||||
tn := ft.Name
|
||||
if enc, ok := xdrEncoders[tn]; ok {
|
||||
f = fieldInfo{
|
||||
Name: fn,
|
||||
IsBasic: true,
|
||||
FieldType: tn,
|
||||
Encoder: enc.Encoder,
|
||||
Convert: enc.Type,
|
||||
Max: max1,
|
||||
Submax: max2,
|
||||
}
|
||||
} else {
|
||||
f = fieldInfo{
|
||||
Name: fn,
|
||||
IsBasic: false,
|
||||
FieldType: tn,
|
||||
Max: max1,
|
||||
Submax: max2,
|
||||
}
|
||||
}
|
||||
|
||||
case *ast.ArrayType:
|
||||
if ft.Len != nil {
|
||||
// We don't handle arrays
|
||||
continue
|
||||
}
|
||||
|
||||
tn := ft.Elt.(*ast.Ident).Name
|
||||
if enc, ok := xdrEncoders["[]"+tn]; ok {
|
||||
f = fieldInfo{
|
||||
Name: fn,
|
||||
IsBasic: true,
|
||||
FieldType: tn,
|
||||
Encoder: enc.Encoder,
|
||||
Convert: enc.Type,
|
||||
Max: max1,
|
||||
Submax: max2,
|
||||
}
|
||||
} else if enc, ok := xdrEncoders[tn]; ok {
|
||||
f = fieldInfo{
|
||||
Name: fn,
|
||||
IsBasic: true,
|
||||
IsSlice: true,
|
||||
FieldType: tn,
|
||||
Encoder: enc.Encoder,
|
||||
Convert: enc.Type,
|
||||
Max: max1,
|
||||
Submax: max2,
|
||||
}
|
||||
} else {
|
||||
f = fieldInfo{
|
||||
Name: fn,
|
||||
IsSlice: true,
|
||||
FieldType: tn,
|
||||
Max: max1,
|
||||
Submax: max2,
|
||||
}
|
||||
}
|
||||
|
||||
case *ast.SelectorExpr:
|
||||
f = fieldInfo{
|
||||
Name: fn,
|
||||
FieldType: ft.Sel.Name,
|
||||
Max: max1,
|
||||
Submax: max2,
|
||||
}
|
||||
}
|
||||
|
||||
fs = append(fs, f)
|
||||
}
|
||||
|
||||
return fs
|
||||
}
|
||||
|
||||
func generateCode(output io.Writer, s structInfo) {
|
||||
name := s.Name
|
||||
fs := s.Fields
|
||||
|
||||
var buf bytes.Buffer
|
||||
var err error
|
||||
if len(fs) == 0 {
|
||||
// This is an empty type. We can create a quite simple codec for it.
|
||||
err = emptyTypeTpl.Execute(&buf, map[string]interface{}{"TypeName": name})
|
||||
} else {
|
||||
// Generate with the default template.
|
||||
err = encodeTpl.Execute(&buf, map[string]interface{}{"TypeName": name, "Fields": fs})
|
||||
}
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
bs := regexp.MustCompile(`(\s*\n)+`).ReplaceAll(buf.Bytes(), []byte("\n"))
|
||||
bs = bytes.Replace(bs, []byte("//+n"), []byte("\n"), -1)
|
||||
|
||||
bs, err = format.Source(bs)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
fmt.Fprintln(output, string(bs))
|
||||
}
|
||||
|
||||
func uncamelize(s string) string {
|
||||
return regexp.MustCompile("[a-z][A-Z]").ReplaceAllStringFunc(s, func(camel string) string {
|
||||
return camel[:1] + " " + camel[1:]
|
||||
})
|
||||
}
|
||||
|
||||
func generateDiagram(output io.Writer, s structInfo) {
|
||||
sn := s.Name
|
||||
fs := s.Fields
|
||||
|
||||
fmt.Fprintln(output, sn+" Structure:")
|
||||
|
||||
if len(fs) == 0 {
|
||||
fmt.Fprintln(output, "(contains no fields)")
|
||||
fmt.Fprintln(output)
|
||||
fmt.Fprintln(output)
|
||||
return
|
||||
}
|
||||
|
||||
fmt.Fprintln(output)
|
||||
fmt.Fprintln(output, " 0 1 2 3")
|
||||
fmt.Fprintln(output, " 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1")
|
||||
line := "+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+"
|
||||
fmt.Fprintln(output, line)
|
||||
|
||||
for _, f := range fs {
|
||||
tn := f.FieldType
|
||||
name := uncamelize(f.Name)
|
||||
|
||||
if f.IsSlice {
|
||||
fmt.Fprintf(output, "| %s |\n", center("Number of "+name, 61))
|
||||
fmt.Fprintln(output, line)
|
||||
}
|
||||
switch tn {
|
||||
case "bool":
|
||||
fmt.Fprintf(output, "| %s |V|\n", center(name+" (V=0 or 1)", 59))
|
||||
fmt.Fprintln(output, line)
|
||||
case "int16", "uint16":
|
||||
fmt.Fprintf(output, "| %s | %s |\n", center("0x0000", 29), center(name, 29))
|
||||
fmt.Fprintln(output, line)
|
||||
case "int32", "uint32":
|
||||
fmt.Fprintf(output, "| %s |\n", center(name, 61))
|
||||
fmt.Fprintln(output, line)
|
||||
case "int64", "uint64":
|
||||
fmt.Fprintf(output, "| %-61s |\n", "")
|
||||
fmt.Fprintf(output, "+ %s +\n", center(name+" (64 bits)", 61))
|
||||
fmt.Fprintf(output, "| %-61s |\n", "")
|
||||
fmt.Fprintln(output, line)
|
||||
case "string", "byte": // XXX We assume slice of byte!
|
||||
fmt.Fprintf(output, "| %s |\n", center("Length of "+name, 61))
|
||||
fmt.Fprintln(output, line)
|
||||
fmt.Fprintf(output, "/ %61s /\n", "")
|
||||
fmt.Fprintf(output, "\\ %s \\\n", center(name+" (variable length)", 61))
|
||||
fmt.Fprintf(output, "/ %61s /\n", "")
|
||||
fmt.Fprintln(output, line)
|
||||
default:
|
||||
if f.IsSlice {
|
||||
tn = "Zero or more " + tn + " Structures"
|
||||
fmt.Fprintf(output, "/ %s /\n", center("", 61))
|
||||
fmt.Fprintf(output, "\\ %s \\\n", center(tn, 61))
|
||||
fmt.Fprintf(output, "/ %s /\n", center("", 61))
|
||||
} else {
|
||||
tn = tn + " Structure"
|
||||
fmt.Fprintf(output, "/ %s /\n", center("", 61))
|
||||
fmt.Fprintf(output, "\\ %s \\\n", center(tn, 61))
|
||||
fmt.Fprintf(output, "/ %s /\n", center("", 61))
|
||||
}
|
||||
fmt.Fprintln(output, line)
|
||||
}
|
||||
}
|
||||
fmt.Fprintln(output)
|
||||
fmt.Fprintln(output)
|
||||
}
|
||||
|
||||
func generateXdr(output io.Writer, s structInfo) {
|
||||
sn := s.Name
|
||||
fs := s.Fields
|
||||
|
||||
fmt.Fprintf(output, "struct %s {\n", sn)
|
||||
|
||||
for _, f := range fs {
|
||||
tn := f.FieldType
|
||||
fn := f.Name
|
||||
suf := ""
|
||||
l := ""
|
||||
if f.Max > 0 {
|
||||
l = strconv.Itoa(f.Max)
|
||||
}
|
||||
if f.IsSlice {
|
||||
suf = "<" + l + ">"
|
||||
}
|
||||
|
||||
switch tn {
|
||||
case "int16", "int32":
|
||||
fmt.Fprintf(output, "\tint %s%s;\n", fn, suf)
|
||||
case "uint16", "uint32":
|
||||
fmt.Fprintf(output, "\tunsigned int %s%s;\n", fn, suf)
|
||||
case "int64":
|
||||
fmt.Fprintf(output, "\thyper %s%s;\n", fn, suf)
|
||||
case "uint64":
|
||||
fmt.Fprintf(output, "\tunsigned hyper %s%s;\n", fn, suf)
|
||||
case "string":
|
||||
fmt.Fprintf(output, "\tstring %s<%s>;\n", fn, l)
|
||||
case "byte":
|
||||
fmt.Fprintf(output, "\topaque %s<%s>;\n", fn, l)
|
||||
default:
|
||||
fmt.Fprintf(output, "\t%s %s%s;\n", tn, fn, suf)
|
||||
}
|
||||
}
|
||||
fmt.Fprintln(output, "}")
|
||||
fmt.Fprintln(output)
|
||||
}
|
||||
|
||||
func center(s string, w int) string {
|
||||
w -= len(s)
|
||||
l := w / 2
|
||||
r := l
|
||||
if l+r < w {
|
||||
r++
|
||||
}
|
||||
return strings.Repeat(" ", l) + s + strings.Repeat(" ", r)
|
||||
}
|
||||
|
||||
func inspector(structs *[]structInfo) func(ast.Node) bool {
|
||||
return func(n ast.Node) bool {
|
||||
switch n := n.(type) {
|
||||
case *ast.TypeSpec:
|
||||
switch t := n.Type.(type) {
|
||||
case *ast.StructType:
|
||||
name := n.Name.Name
|
||||
fs := handleStruct(t)
|
||||
*structs = append(*structs, structInfo{name, fs})
|
||||
}
|
||||
return false
|
||||
default:
|
||||
return true
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func main() {
|
||||
outputFile := flag.String("o", "", "Output file, blank for stdout")
|
||||
flag.Parse()
|
||||
fname := flag.Arg(0)
|
||||
|
||||
fset := token.NewFileSet()
|
||||
f, err := parser.ParseFile(fset, fname, nil, parser.ParseComments)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
var structs []structInfo
|
||||
i := inspector(&structs)
|
||||
ast.Inspect(f, i)
|
||||
|
||||
var output io.Writer = os.Stdout
|
||||
if *outputFile != "" {
|
||||
fd, err := os.Create(*outputFile)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
output = fd
|
||||
}
|
||||
|
||||
headerTpl.Execute(output, map[string]string{"Package": f.Name.Name})
|
||||
for _, s := range structs {
|
||||
fmt.Fprintf(output, "\n/*\n\n")
|
||||
generateDiagram(output, s)
|
||||
generateXdr(output, s)
|
||||
fmt.Fprintf(output, "*/\n")
|
||||
generateCode(output, s)
|
||||
}
|
||||
}
|
||||
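To make the generator concrete: genxdr parses a Go source file, finds its struct types, and emits the EncodeXDR/MarshalXDR/DecodeXDR/UnmarshalXDR method set plus an ASCII wire diagram for each; a field comment of the form max:N (or max:N,M for slices of strings) becomes a size limit in the generated code. A hedged sketch of an input file, with hypothetical type and file names:

```go
// message.go — hypothetical input to genxdr
package protocol

type PingMessage struct {
	ID      uint32
	Name    string   // max:64
	Targets []string // max:16,64
	Payload []byte   // max:1048576
}
```

Running `go run cmd/genxdr/main.go -o message_xdr.go message.go` (mirroring the generate.sh script later in this diff) would then write the generated codec for PingMessage using the templates shown above.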
16 Godeps/_workspace/src/github.com/calmh/xdr/debug.go generated vendored
@@ -1,16 +0,0 @@
// Copyright (C) 2014 Jakob Borg. All rights reserved. Use of this source code
// is governed by an MIT-style license that can be found in the LICENSE file.

package xdr

import (
	"log"
	"os"
)

var (
	debug = len(os.Getenv("XDRTRACE")) > 0
	dl    = log.New(os.Stdout, "xdr: ", log.Lshortfile|log.Ltime|log.Lmicroseconds)
)

const maxDebugBytes = 32
5 Godeps/_workspace/src/github.com/calmh/xdr/doc.go generated vendored
@@ -1,5 +0,0 @@
// Copyright (C) 2014 Jakob Borg. All rights reserved. Use of this source code
// is governed by an MIT-style license that can be found in the LICENSE file.

// Package xdr implements an XDR (RFC 4506) encoder/decoder.
package xdr
4 Godeps/_workspace/src/github.com/calmh/xdr/generate.sh generated vendored
@@ -1,4 +0,0 @@
#!/bin/sh

go run cmd/genxdr/main.go -- bench_test.go > bench_xdr_test.go
go run cmd/genxdr/main.go -- encdec_test.go > encdec_xdr_test.go
10 Godeps/_workspace/src/github.com/calmh/xdr/pad_ipdr.go generated vendored
@@ -1,10 +0,0 @@
// Copyright (C) 2014 Jakob Borg. All rights reserved. Use of this source code
// is governed by an MIT-style license that can be found in the LICENSE file.

// +build ipdr

package xdr

func pad(l int) int {
	return 0
}
14 Godeps/_workspace/src/github.com/calmh/xdr/pad_xdr.go generated vendored
@@ -1,14 +0,0 @@
// Copyright (C) 2014 Jakob Borg. All rights reserved. Use of this source code
// is governed by an MIT-style license that can be found in the LICENSE file.

// +build !ipdr

package xdr

func pad(l int) int {
	d := l % 4
	if d == 0 {
		return 0
	}
	return 4 - d
}
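Standard XDR pads every variable-length field up to a four-byte boundary, which is what pad computes above, while the IPDR build drops padding entirely. A quick sanity check, written as a sketch of a test living inside package xdr (not part of the vendored sources):

```go
package xdr

import "testing"

// pad reports the number of fill bytes needed to reach the next multiple of
// four: 0→0, 1→3, 2→2, 3→1, 4→0, 5→3, ...
func TestPadAlignsToFourBytes(t *testing.T) {
	want := map[int]int{0: 0, 1: 3, 2: 2, 3: 1, 4: 0, 5: 3, 8: 0}
	for l, exp := range want {
		if got := pad(l); got != exp {
			t.Errorf("pad(%d) = %d, want %d", l, got, exp)
		}
	}
}
```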
171 Godeps/_workspace/src/github.com/calmh/xdr/reader.go generated vendored
@@ -1,171 +0,0 @@
|
||||
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
|
||||
// All rights reserved. Use of this source code is governed by an MIT-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package xdr
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io"
|
||||
"reflect"
|
||||
"unsafe"
|
||||
)
|
||||
|
||||
type Reader struct {
|
||||
r io.Reader
|
||||
err error
|
||||
b [8]byte
|
||||
}
|
||||
|
||||
func NewReader(r io.Reader) *Reader {
|
||||
return &Reader{
|
||||
r: r,
|
||||
}
|
||||
}
|
||||
|
||||
func (r *Reader) ReadRaw(bs []byte) (int, error) {
|
||||
if r.err != nil {
|
||||
return 0, r.err
|
||||
}
|
||||
|
||||
var n int
|
||||
n, r.err = io.ReadFull(r.r, bs)
|
||||
return n, r.err
|
||||
}
|
||||
|
||||
func (r *Reader) ReadString() string {
|
||||
return r.ReadStringMax(0)
|
||||
}
|
||||
|
||||
func (r *Reader) ReadStringMax(max int) string {
|
||||
buf := r.ReadBytesMaxInto(max, nil)
|
||||
bh := (*reflect.SliceHeader)(unsafe.Pointer(&buf))
|
||||
sh := reflect.StringHeader{
|
||||
Data: bh.Data,
|
||||
Len: bh.Len,
|
||||
}
|
||||
return *((*string)(unsafe.Pointer(&sh)))
|
||||
}
|
||||
|
||||
func (r *Reader) ReadBytes() []byte {
|
||||
return r.ReadBytesInto(nil)
|
||||
}
|
||||
|
||||
func (r *Reader) ReadBytesMax(max int) []byte {
|
||||
return r.ReadBytesMaxInto(max, nil)
|
||||
}
|
||||
|
||||
func (r *Reader) ReadBytesInto(dst []byte) []byte {
|
||||
return r.ReadBytesMaxInto(0, dst)
|
||||
}
|
||||
|
||||
func (r *Reader) ReadBytesMaxInto(max int, dst []byte) []byte {
|
||||
if r.err != nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
l := int(r.ReadUint32())
|
||||
if r.err != nil {
|
||||
return nil
|
||||
}
|
||||
if l < 0 || max > 0 && l > max {
|
||||
// l may be negative on 32 bit builds
|
||||
r.err = ElementSizeExceeded("bytes field", l, max)
|
||||
return nil
|
||||
}
|
||||
|
||||
if fullLen := l + pad(l); fullLen > len(dst) {
|
||||
dst = make([]byte, fullLen)
|
||||
} else {
|
||||
dst = dst[:fullLen]
|
||||
}
|
||||
|
||||
var n int
|
||||
n, r.err = io.ReadFull(r.r, dst)
|
||||
if r.err != nil {
|
||||
if debug {
|
||||
dl.Printf("rd bytes (%d): %v", len(dst), r.err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
if debug {
|
||||
if n > maxDebugBytes {
|
||||
dl.Printf("rd bytes (%d): %x...", len(dst), dst[:maxDebugBytes])
|
||||
} else {
|
||||
dl.Printf("rd bytes (%d): %x", len(dst), dst)
|
||||
}
|
||||
}
|
||||
return dst[:l]
|
||||
}
|
||||
|
||||
func (r *Reader) ReadBool() bool {
|
||||
return r.ReadUint8() != 0
|
||||
}
|
||||
|
||||
func (r *Reader) ReadUint32() uint32 {
|
||||
if r.err != nil {
|
||||
return 0
|
||||
}
|
||||
|
||||
_, r.err = io.ReadFull(r.r, r.b[:4])
|
||||
if r.err != nil {
|
||||
if debug {
|
||||
dl.Printf("rd uint32: %v", r.err)
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
v := uint32(r.b[3]) | uint32(r.b[2])<<8 | uint32(r.b[1])<<16 | uint32(r.b[0])<<24
|
||||
|
||||
if debug {
|
||||
dl.Printf("rd uint32=%d (0x%08x)", v, v)
|
||||
}
|
||||
return v
|
||||
}
|
||||
|
||||
func (r *Reader) ReadUint64() uint64 {
|
||||
if r.err != nil {
|
||||
return 0
|
||||
}
|
||||
|
||||
_, r.err = io.ReadFull(r.r, r.b[:8])
|
||||
if r.err != nil {
|
||||
if debug {
|
||||
dl.Printf("rd uint64: %v", r.err)
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
v := uint64(r.b[7]) | uint64(r.b[6])<<8 | uint64(r.b[5])<<16 | uint64(r.b[4])<<24 |
|
||||
uint64(r.b[3])<<32 | uint64(r.b[2])<<40 | uint64(r.b[1])<<48 | uint64(r.b[0])<<56
|
||||
|
||||
if debug {
|
||||
dl.Printf("rd uint64=%d (0x%016x)", v, v)
|
||||
}
|
||||
return v
|
||||
}
|
||||
|
||||
type XDRError struct {
|
||||
op string
|
||||
err error
|
||||
}
|
||||
|
||||
func (e XDRError) Error() string {
|
||||
return "xdr " + e.op + ": " + e.err.Error()
|
||||
}
|
||||
|
||||
func (e XDRError) IsEOF() bool {
|
||||
return e.err == io.EOF
|
||||
}
|
||||
|
||||
func (r *Reader) Error() error {
|
||||
if r.err == nil {
|
||||
return nil
|
||||
}
|
||||
return XDRError{"read", r.err}
|
||||
}
|
||||
|
||||
func ElementSizeExceeded(field string, size, limit int) error {
|
||||
return fmt.Errorf("%s exceeds size limit; %d > %d", field, size, limit)
|
||||
}
|
||||
49 Godeps/_workspace/src/github.com/calmh/xdr/reader_ipdr.go generated vendored
@@ -1,49 +0,0 @@
|
||||
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
|
||||
// All rights reserved. Use of this source code is governed by an MIT-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// +build ipdr
|
||||
|
||||
package xdr
|
||||
|
||||
import "io"
|
||||
|
||||
func (r *Reader) ReadUint8() uint8 {
|
||||
if r.err != nil {
|
||||
return 0
|
||||
}
|
||||
|
||||
_, r.err = io.ReadFull(r.r, r.b[:1])
|
||||
if r.err != nil {
|
||||
if debug {
|
||||
dl.Printf("rd uint8: %v", r.err)
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
if debug {
|
||||
dl.Printf("rd uint8=%d (0x%02x)", r.b[0], r.b[0])
|
||||
}
|
||||
return r.b[0]
|
||||
}
|
||||
|
||||
func (r *Reader) ReadUint16() uint16 {
|
||||
if r.err != nil {
|
||||
return 0
|
||||
}
|
||||
|
||||
_, r.err = io.ReadFull(r.r, r.b[:2])
|
||||
if r.err != nil {
|
||||
if debug {
|
||||
dl.Printf("rd uint16: %v", r.err)
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
v := uint16(r.b[1]) | uint16(r.b[0])<<8
|
||||
|
||||
if debug {
|
||||
dl.Printf("rd uint16=%d (0x%04x)", v, v)
|
||||
}
|
||||
return v
|
||||
}
|
||||
15 Godeps/_workspace/src/github.com/calmh/xdr/reader_xdr.go generated vendored
@@ -1,15 +0,0 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

// +build !ipdr

package xdr

func (r *Reader) ReadUint8() uint8 {
	return uint8(r.ReadUint32())
}

func (r *Reader) ReadUint16() uint16 {
	return uint16(r.ReadUint32())
}
41 Godeps/_workspace/src/github.com/calmh/xdr/writer_ipdr.go generated vendored
@@ -1,41 +0,0 @@
|
||||
// Copyright (C) 2014 Jakob Borg. All rights reserved. Use of this source code
|
||||
// is governed by an MIT-style license that can be found in the LICENSE file.
|
||||
|
||||
// +build ipdr
|
||||
|
||||
package xdr
|
||||
|
||||
func (w *Writer) WriteUint8(v uint8) (int, error) {
|
||||
if w.err != nil {
|
||||
return 0, w.err
|
||||
}
|
||||
|
||||
if debug {
|
||||
dl.Printf("wr uint8=%d", v)
|
||||
}
|
||||
|
||||
w.b[0] = byte(v)
|
||||
|
||||
var l int
|
||||
l, w.err = w.w.Write(w.b[:1])
|
||||
w.tot += l
|
||||
return l, w.err
|
||||
}
|
||||
|
||||
func (w *Writer) WriteUint16(v uint16) (int, error) {
|
||||
if w.err != nil {
|
||||
return 0, w.err
|
||||
}
|
||||
|
||||
if debug {
|
||||
dl.Printf("wr uint8=%d", v)
|
||||
}
|
||||
|
||||
w.b[0] = byte(v >> 8)
|
||||
w.b[1] = byte(v)
|
||||
|
||||
var l int
|
||||
l, w.err = w.w.Write(w.b[:2])
|
||||
w.tot += l
|
||||
return l, w.err
|
||||
}
|
||||
14 Godeps/_workspace/src/github.com/calmh/xdr/writer_xdr.go generated vendored
@@ -1,14 +0,0 @@
// Copyright (C) 2014 Jakob Borg. All rights reserved. Use of this source code
// is governed by an MIT-style license that can be found in the LICENSE file.

// +build !ipdr

package xdr

func (w *Writer) WriteUint8(v uint8) (int, error) {
	return w.WriteUint32(uint32(v))
}

func (w *Writer) WriteUint16(v uint16) (int, error) {
	return w.WriteUint32(uint32(v))
}
2 Godeps/_workspace/src/github.com/codegangsta/inject/.gitignore generated vendored Normal file
@@ -0,0 +1,2 @@
inject
inject.test
20 Godeps/_workspace/src/github.com/codegangsta/inject/LICENSE generated vendored Normal file
@@ -0,0 +1,20 @@
|
||||
The MIT License (MIT)
|
||||
|
||||
Copyright (c) 2013 Jeremy Saenz
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy of
|
||||
this software and associated documentation files (the "Software"), to deal in
|
||||
the Software without restriction, including without limitation the rights to
|
||||
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
|
||||
the Software, and to permit persons to whom the Software is furnished to do so,
|
||||
subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in all
|
||||
copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
|
||||
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
|
||||
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
|
||||
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
|
||||
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
4 Godeps/_workspace/src/github.com/codegangsta/inject/README.md generated vendored Normal file
@@ -0,0 +1,4 @@
inject
======

Dependency injection for go
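inject.go below shows the full API; the short version is that an Injector maps values by type and can then call functions (Invoke) or fill tagged struct fields (Apply) with those values. A hedged sketch based on the package as vendored here, with illustrative types:

```go
package main

import (
	"fmt"
	"log"

	"github.com/codegangsta/inject"
)

type Greeter interface{ Greet() string }

type englishGreeter struct{}

func (englishGreeter) Greet() string { return "hello" }

func main() {
	inj := inject.New()
	inj.Map("world")                             // map a concrete string by its type
	inj.MapTo(englishGreeter{}, (*Greeter)(nil)) // map a value as an interface

	// Invoke resolves each argument from the type map.
	_, err := inj.Invoke(func(name string, g Greeter) {
		fmt.Println(g.Greet(), name) // hello world
	})
	if err != nil {
		log.Fatal(err)
	}
}
```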
168 Godeps/_workspace/src/github.com/codegangsta/inject/inject.go generated vendored Normal file
@@ -0,0 +1,168 @@
|
||||
// Package inject provides utilities for mapping and injecting dependencies in various ways.
|
||||
package inject
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"reflect"
|
||||
)
|
||||
|
||||
// Injector represents an interface for mapping and injecting dependencies into structs
|
||||
// and function arguments.
|
||||
type Injector interface {
|
||||
Applicator
|
||||
Invoker
|
||||
TypeMapper
|
||||
// SetParent sets the parent of the injector. If the injector cannot find a
|
||||
// dependency in its Type map it will check its parent before returning an
|
||||
// error.
|
||||
SetParent(Injector)
|
||||
}
|
||||
|
||||
// Applicator represents an interface for mapping dependencies to a struct.
|
||||
type Applicator interface {
|
||||
// Maps dependencies in the Type map to each field in the struct
|
||||
// that is tagged with 'inject'. Returns an error if the injection
|
||||
// fails.
|
||||
Apply(interface{}) error
|
||||
}
|
||||
|
||||
// Invoker represents an interface for calling functions via reflection.
|
||||
type Invoker interface {
|
||||
// Invoke attempts to call the interface{} provided as a function,
|
||||
// providing dependencies for function arguments based on Type. Returns
|
||||
// a slice of reflect.Value representing the returned values of the function.
|
||||
// Returns an error if the injection fails.
|
||||
Invoke(interface{}) ([]reflect.Value, error)
|
||||
}
|
||||
|
||||
// TypeMapper represents an interface for mapping interface{} values based on type.
|
||||
type TypeMapper interface {
|
||||
// Maps the interface{} value based on its immediate type from reflect.TypeOf.
|
||||
Map(interface{}) TypeMapper
|
||||
// Maps the interface{} value based on the pointer of an Interface provided.
|
||||
// This is really only useful for mapping a value as an interface, as interfaces
|
||||
// cannot at this time be referenced directly without a pointer.
|
||||
MapTo(interface{}, interface{}) TypeMapper
|
||||
// Provides a possibility to directly insert a mapping based on type and value.
|
||||
// This makes it possible to directly map type arguments not possible to instantiate
|
||||
// with reflect like unidirectional channels.
|
||||
Set(reflect.Type, reflect.Value) TypeMapper
|
||||
// Returns the Value that is mapped to the current type. Returns a zeroed Value if
|
||||
// the Type has not been mapped.
|
||||
Get(reflect.Type) reflect.Value
|
||||
}
|
||||
|
||||
type injector struct {
|
||||
values map[reflect.Type]reflect.Value
|
||||
parent Injector
|
||||
}
|
||||
|
||||
// InterfaceOf dereferences a pointer to an Interface type.
|
||||
// It panics if value is not a pointer to an interface.
|
||||
func InterfaceOf(value interface{}) reflect.Type {
|
||||
t := reflect.TypeOf(value)
|
||||
|
||||
for t.Kind() == reflect.Ptr {
|
||||
t = t.Elem()
|
||||
}
|
||||
|
||||
if t.Kind() != reflect.Interface {
|
||||
panic("Called inject.InterfaceOf with a value that is not a pointer to an interface. (*MyInterface)(nil)")
|
||||
}
|
||||
|
||||
return t
|
||||
}
|
||||
|
||||
// New returns a new Injector.
|
||||
func New() Injector {
|
||||
return &injector{
|
||||
values: make(map[reflect.Type]reflect.Value),
|
||||
}
|
||||
}
|
||||
|
||||
// Invoke attempts to call the interface{} provided as a function,
|
||||
// providing dependencies for function arguments based on Type.
|
||||
// Returns a slice of reflect.Value representing the returned values of the function.
|
||||
// Returns an error if the injection fails.
|
||||
// It panics if f is not a function
|
||||
func (inj *injector) Invoke(f interface{}) ([]reflect.Value, error) {
|
||||
t := reflect.TypeOf(f)
|
||||
|
||||
var in = make([]reflect.Value, t.NumIn()) //Panic if t is not kind of Func
|
||||
for i := 0; i < t.NumIn(); i++ {
|
||||
argType := t.In(i)
|
||||
val := inj.Get(argType)
|
||||
if !val.IsValid() {
|
||||
return nil, fmt.Errorf("Value not found for type %v", argType)
|
||||
}
|
||||
|
||||
in[i] = val
|
||||
}
|
||||
|
||||
return reflect.ValueOf(f).Call(in), nil
|
||||
}
|
||||
|
||||
// Maps dependencies in the Type map to each field in the struct
|
||||
// that is tagged with 'inject'.
|
||||
// Returns an error if the injection fails.
|
||||
func (inj *injector) Apply(val interface{}) error {
|
||||
v := reflect.ValueOf(val)
|
||||
|
||||
for v.Kind() == reflect.Ptr {
|
||||
v = v.Elem()
|
||||
}
|
||||
|
||||
if v.Kind() != reflect.Struct {
|
||||
return nil // Should not panic here ?
|
||||
}
|
||||
|
||||
t := v.Type()
|
||||
|
||||
for i := 0; i < v.NumField(); i++ {
|
||||
f := v.Field(i)
|
||||
structField := t.Field(i)
|
||||
if f.CanSet() && structField.Tag == "inject" {
|
||||
ft := f.Type()
|
||||
v := inj.Get(ft)
|
||||
if !v.IsValid() {
|
||||
return fmt.Errorf("Value not found for type %v", ft)
|
||||
}
|
||||
|
||||
f.Set(v)
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// Maps the concrete value of val to its dynamic type using reflect.TypeOf,
|
||||
// It returns the TypeMapper registered in.
|
||||
func (i *injector) Map(val interface{}) TypeMapper {
|
||||
i.values[reflect.TypeOf(val)] = reflect.ValueOf(val)
|
||||
return i
|
||||
}
|
||||
|
||||
func (i *injector) MapTo(val interface{}, ifacePtr interface{}) TypeMapper {
|
||||
i.values[InterfaceOf(ifacePtr)] = reflect.ValueOf(val)
|
||||
return i
|
||||
}
|
||||
|
||||
// Maps the given reflect.Type to the given reflect.Value and returns
|
||||
// the Typemapper the mapping has been registered in.
|
||||
func (i *injector) Set(typ reflect.Type, val reflect.Value) TypeMapper {
|
||||
i.values[typ] = val
|
||||
return i
|
||||
}
|
||||
|
||||
func (i *injector) Get(t reflect.Type) reflect.Value {
|
||||
val := i.values[t]
|
||||
if !val.IsValid() && i.parent != nil {
|
||||
val = i.parent.Get(t)
|
||||
}
|
||||
return val
|
||||
}
|
||||
|
||||
func (i *injector) SetParent(parent Injector) {
|
||||
i.parent = parent
|
||||
}
|
||||
142 Godeps/_workspace/src/github.com/codegangsta/inject/inject_test.go generated vendored Normal file
@@ -0,0 +1,142 @@
|
||||
package inject_test
|
||||
|
||||
import (
|
||||
"github.com/codegangsta/inject"
|
||||
"reflect"
|
||||
"testing"
|
||||
)
|
||||
|
||||
type SpecialString interface {
|
||||
}
|
||||
|
||||
type TestStruct struct {
|
||||
Dep1 string `inject`
|
||||
Dep2 SpecialString `inject`
|
||||
Dep3 string
|
||||
}
|
||||
|
||||
/* Test Helpers */
|
||||
func expect(t *testing.T, a interface{}, b interface{}) {
|
||||
if a != b {
|
||||
t.Errorf("Expected %v (type %v) - Got %v (type %v)", b, reflect.TypeOf(b), a, reflect.TypeOf(a))
|
||||
}
|
||||
}
|
||||
|
||||
func refute(t *testing.T, a interface{}, b interface{}) {
|
||||
if a == b {
|
||||
t.Errorf("Did not expect %v (type %v) - Got %v (type %v)", b, reflect.TypeOf(b), a, reflect.TypeOf(a))
|
||||
}
|
||||
}
|
||||
|
||||
func Test_InjectorInvoke(t *testing.T) {
|
||||
injector := inject.New()
|
||||
expect(t, injector == nil, false)
|
||||
|
||||
dep := "some dependency"
|
||||
injector.Map(dep)
|
||||
dep2 := "another dep"
|
||||
injector.MapTo(dep2, (*SpecialString)(nil))
|
||||
dep3 := make(chan *SpecialString)
|
||||
dep4 := make(chan *SpecialString)
|
||||
typRecv := reflect.ChanOf(reflect.RecvDir, reflect.TypeOf(dep3).Elem())
|
||||
typSend := reflect.ChanOf(reflect.SendDir, reflect.TypeOf(dep4).Elem())
|
||||
injector.Set(typRecv, reflect.ValueOf(dep3))
|
||||
injector.Set(typSend, reflect.ValueOf(dep4))
|
||||
|
||||
_, err := injector.Invoke(func(d1 string, d2 SpecialString, d3 <-chan *SpecialString, d4 chan<- *SpecialString) {
|
||||
expect(t, d1, dep)
|
||||
expect(t, d2, dep2)
|
||||
expect(t, reflect.TypeOf(d3).Elem(), reflect.TypeOf(dep3).Elem())
|
||||
expect(t, reflect.TypeOf(d4).Elem(), reflect.TypeOf(dep4).Elem())
|
||||
expect(t, reflect.TypeOf(d3).ChanDir(), reflect.RecvDir)
|
||||
expect(t, reflect.TypeOf(d4).ChanDir(), reflect.SendDir)
|
||||
})
|
||||
|
||||
expect(t, err, nil)
|
||||
}
|
||||
|
||||
func Test_InjectorInvokeReturnValues(t *testing.T) {
|
||||
injector := inject.New()
|
||||
expect(t, injector == nil, false)
|
||||
|
||||
dep := "some dependency"
|
||||
injector.Map(dep)
|
||||
dep2 := "another dep"
|
||||
injector.MapTo(dep2, (*SpecialString)(nil))
|
||||
|
||||
result, err := injector.Invoke(func(d1 string, d2 SpecialString) string {
|
||||
expect(t, d1, dep)
|
||||
expect(t, d2, dep2)
|
||||
return "Hello world"
|
||||
})
|
||||
|
||||
expect(t, result[0].String(), "Hello world")
|
||||
expect(t, err, nil)
|
||||
}
|
||||
|
||||
func Test_InjectorApply(t *testing.T) {
|
||||
injector := inject.New()
|
||||
|
||||
injector.Map("a dep").MapTo("another dep", (*SpecialString)(nil))
|
||||
|
||||
s := TestStruct{}
|
||||
err := injector.Apply(&s)
|
||||
expect(t, err, nil)
|
||||
|
||||
expect(t, s.Dep1, "a dep")
|
||||
expect(t, s.Dep2, "another dep")
|
||||
}
|
||||
|
||||
func Test_InterfaceOf(t *testing.T) {
|
||||
iType := inject.InterfaceOf((*SpecialString)(nil))
|
||||
expect(t, iType.Kind(), reflect.Interface)
|
||||
|
||||
iType = inject.InterfaceOf((**SpecialString)(nil))
|
||||
expect(t, iType.Kind(), reflect.Interface)
|
||||
|
||||
// Expecting nil
|
||||
defer func() {
|
||||
rec := recover()
|
||||
refute(t, rec, nil)
|
||||
}()
|
||||
iType = inject.InterfaceOf((*testing.T)(nil))
|
||||
}
|
||||
|
||||
func Test_InjectorSet(t *testing.T) {
|
||||
injector := inject.New()
|
||||
typ := reflect.TypeOf("string")
|
||||
typSend := reflect.ChanOf(reflect.SendDir, typ)
|
||||
typRecv := reflect.ChanOf(reflect.RecvDir, typ)
|
||||
|
||||
// instantiating unidirectional channels is not possible using reflect
|
||||
// http://golang.org/src/pkg/reflect/value.go?s=60463:60504#L2064
|
||||
chanRecv := reflect.MakeChan(reflect.ChanOf(reflect.BothDir, typ), 0)
|
||||
chanSend := reflect.MakeChan(reflect.ChanOf(reflect.BothDir, typ), 0)
|
||||
|
||||
injector.Set(typSend, chanSend)
|
||||
injector.Set(typRecv, chanRecv)
|
||||
|
||||
expect(t, injector.Get(typSend).IsValid(), true)
|
||||
expect(t, injector.Get(typRecv).IsValid(), true)
|
||||
expect(t, injector.Get(chanSend.Type()).IsValid(), false)
|
||||
}
|
||||
|
||||
|
||||
func Test_InjectorGet(t *testing.T) {
|
||||
injector := inject.New()
|
||||
|
||||
injector.Map("some dependency")
|
||||
|
||||
expect(t, injector.Get(reflect.TypeOf("string")).IsValid(), true)
|
||||
expect(t, injector.Get(reflect.TypeOf(11)).IsValid(), false)
|
||||
}
|
||||
|
||||
func Test_InjectorSetParent(t *testing.T) {
|
||||
injector := inject.New()
|
||||
injector.MapTo("another dep", (*SpecialString)(nil))
|
||||
|
||||
injector2 := inject.New()
|
||||
injector2.SetParent(injector)
|
||||
|
||||
expect(t, injector2.Get(inject.InterfaceOf((*SpecialString)(nil))).IsValid(), true)
|
||||
}
|
||||
Godeps/_workspace/src/github.com/codegangsta/martini/.gitignore: 23 lines (generated, vendored, new file)
@@ -0,0 +1,23 @@
|
||||
# Compiled Object files, Static and Dynamic libs (Shared Objects)
|
||||
*.o
|
||||
*.a
|
||||
*.so
|
||||
|
||||
# Folders
|
||||
_obj
|
||||
_test
|
||||
|
||||
# Architecture specific extensions/prefixes
|
||||
*.[568vq]
|
||||
[568vq].out
|
||||
|
||||
*.cgo1.go
|
||||
*.cgo2.c
|
||||
_cgo_defun.c
|
||||
_cgo_gotypes.go
|
||||
_cgo_export.*
|
||||
|
||||
_testmain.go
|
||||
|
||||
*.exe
|
||||
*.test
|
||||
Godeps/_workspace/src/github.com/codegangsta/martini/LICENSE: 20 lines (generated, vendored, new file)
@@ -0,0 +1,20 @@
|
||||
The MIT License (MIT)
|
||||
|
||||
Copyright (c) 2013 Jeremy Saenz
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy of
|
||||
this software and associated documentation files (the "Software"), to deal in
|
||||
the Software without restriction, including without limitation the rights to
|
||||
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
|
||||
the Software, and to permit persons to whom the Software is furnished to do so,
|
||||
subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in all
|
||||
copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
|
||||
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
|
||||
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
|
||||
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
|
||||
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
Godeps/_workspace/src/github.com/codegangsta/martini/README.md: 345 lines (generated, vendored, new file)
@@ -0,0 +1,345 @@
|
||||
# Martini [](https://app.wercker.com/project/bykey/174bef7e3c999e103cacfe2770102266) [](http://godoc.org/github.com/codegangsta/martini)
|
||||
|
||||
Martini is a powerful package for quickly writing modular web applications/services in Golang.
|
||||
|
||||
Language Translations: [Simplified Chinese (zh_CN)](translations/README_zh_cn.md)
|
||||
|
||||
|
||||
## Getting Started
|
||||
|
||||
After installing Go and setting up your [GOPATH](http://golang.org/doc/code.html#GOPATH), create your first `.go` file. We'll call it `server.go`.
|
||||
|
||||
~~~ go
|
||||
package main
|
||||
|
||||
import "github.com/codegangsta/martini"
|
||||
|
||||
func main() {
|
||||
m := martini.Classic()
|
||||
m.Get("/", func() string {
|
||||
return "Hello world!"
|
||||
})
|
||||
m.Run()
|
||||
}
|
||||
~~~
|
||||
|
||||
Then install the Martini package (**go 1.1** or greater is required):
|
||||
~~~
|
||||
go get github.com/codegangsta/martini
|
||||
~~~
|
||||
|
||||
Then run your server:
|
||||
~~~
|
||||
go run server.go
|
||||
~~~
|
||||
|
||||
You will now have a Martini webserver running on `localhost:3000`.
|
||||
|
||||
## Getting Help
|
||||
|
||||
Join the [Mailing list](https://groups.google.com/forum/#!forum/martini-go)
|
||||
|
||||
Watch the [Demo Video](http://martini.codegangsta.io/#demo)
|
||||
|
||||
## Features
|
||||
* Extremely simple to use.
|
||||
* Non-intrusive design.
|
||||
* Plays nice with other Golang packages.
|
||||
* Awesome path matching and routing.
|
||||
* Modular design - Easy to add functionality, easy to rip stuff out.
|
||||
* Lots of good handlers/middlewares to use.
|
||||
* Great 'out of the box' feature set.
|
||||
* **Fully compatible with the [http.HandlerFunc](http://godoc.org/net/http#HandlerFunc) interface.**
|
||||
|
||||
## More Middleware
|
||||
For more middleware and functionality, check out the repositories in the [martini-contrib](https://github.com/martini-contrib) organization.
|
||||
|
||||
## Table of Contents
|
||||
* [Classic Martini](#classic-martini)
|
||||
* [Handlers](#handlers)
|
||||
* [Routing](#routing)
|
||||
* [Services](#services)
|
||||
* [Serving Static Files](#serving-static-files)
|
||||
* [Middleware Handlers](#middleware-handlers)
|
||||
* [Next()](#next)
|
||||
* [Martini Env](#martini-env)
|
||||
* [FAQ](#faq)
|
||||
|
||||
## Classic Martini
|
||||
To get up and running quickly, [martini.Classic()](http://godoc.org/github.com/codegangsta/martini#Classic) provides some reasonable defaults that work well for most web applications:
|
||||
~~~ go
|
||||
m := martini.Classic()
|
||||
// ... middleware and routing goes here
|
||||
m.Run()
|
||||
~~~
|
||||
|
||||
Below is some of the functionality [martini.Classic()](http://godoc.org/github.com/codegangsta/martini#Classic) pulls in automatically:
|
||||
* Request/Response Logging - [martini.Logger](http://godoc.org/github.com/codegangsta/martini#Logger)
|
||||
* Panic Recovery - [martini.Recovery](http://godoc.org/github.com/codegangsta/martini#Recovery)
|
||||
* Static File serving - [martini.Static](http://godoc.org/github.com/codegangsta/martini#Static)
|
||||
* Routing - [martini.Router](http://godoc.org/github.com/codegangsta/martini#Router)
|
||||
|
||||
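For readers who want these defaults without calling `Classic()`, here is a rough hand-rolled equivalent, sketched from the `Classic` implementation included later in this diff (not an exact substitute, since `ClassicMartini` also embeds the router for the `m.Get`/`m.Post` helpers):

~~~ go
m := martini.New()              // bare instance, no middleware
m.Use(martini.Logger())         // request/response logging
m.Use(martini.Recovery())       // panic recovery
m.Use(martini.Static("public")) // static files from ./public

r := martini.NewRouter()
m.MapTo(r, (*martini.Routes)(nil)) // expose route helpers as a service
m.Action(r.Handle)                 // run the router after the middleware
m.Run()
~~~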
### Handlers
|
||||
Handlers are the heart and soul of Martini. A handler is basically any kind of callable function:
|
||||
~~~ go
|
||||
m.Get("/", func() {
|
||||
println("hello world")
|
||||
})
|
||||
~~~
|
||||
|
||||
#### Return Values
|
||||
If a handler returns something, Martini will write the result to the current [http.ResponseWriter](http://godoc.org/net/http#ResponseWriter) as a string:
|
||||
~~~ go
|
||||
m.Get("/", func() string {
|
||||
return "hello world" // HTTP 200 : "hello world"
|
||||
})
|
||||
~~~
|
||||
|
||||
You can also optionally return a status code:
|
||||
~~~ go
|
||||
m.Get("/", func() (int, string) {
|
||||
return 418, "i'm a teapot" // HTTP 418 : "i'm a teapot"
|
||||
})
|
||||
~~~
|
||||
|
||||
#### Service Injection
|
||||
Handlers are invoked via reflection. Martini makes use of *Dependency Injection* to resolve dependencies in a Handler's argument list. **This makes Martini completely compatible with golang's `http.HandlerFunc` interface.**
|
||||
|
||||
If you add an argument to your Handler, Martini will search its list of services and attempt to resolve the dependency via type assertion:
|
||||
~~~ go
|
||||
m.Get("/", func(res http.ResponseWriter, req *http.Request) { // res and req are injected by Martini
|
||||
res.WriteHeader(200) // HTTP 200
|
||||
})
|
||||
~~~
|
||||
|
||||
The following services are included with [martini.Classic()](http://godoc.org/github.com/codegangsta/martini#Classic):
|
||||
* [*log.Logger](http://godoc.org/log#Logger) - Global logger for Martini.
|
||||
* [martini.Context](http://godoc.org/github.com/codegangsta/martini#Context) - http request context.
|
||||
* [martini.Params](http://godoc.org/github.com/codegangsta/martini#Params) - `map[string]string` of named params found by route matching.
|
||||
* [martini.Routes](http://godoc.org/github.com/codegangsta/martini#Routes) - Route helper service.
|
||||
* [http.ResponseWriter](http://godoc.org/net/http/#ResponseWriter) - http Response writer interface.
|
||||
* [*http.Request](http://godoc.org/net/http/#Request) - http Request.
|
||||
|
||||
### Routing
|
||||
In Martini, a route is an HTTP method paired with a URL-matching pattern.
|
||||
Each route can take one or more handler methods:
|
||||
~~~ go
|
||||
m.Get("/", func() {
|
||||
// show something
|
||||
})
|
||||
|
||||
m.Patch("/", func() {
|
||||
// update something
|
||||
})
|
||||
|
||||
m.Post("/", func() {
|
||||
// create something
|
||||
})
|
||||
|
||||
m.Put("/", func() {
|
||||
// replace something
|
||||
})
|
||||
|
||||
m.Delete("/", func() {
|
||||
// destroy something
|
||||
})
|
||||
|
||||
m.Options("/", func() {
|
||||
// http options
|
||||
})
|
||||
|
||||
m.NotFound(func() {
|
||||
// handle 404
|
||||
})
|
||||
~~~
|
||||
|
||||
Routes are matched in the order they are defined. The first route that
|
||||
matches the request is invoked.
|
||||
|
||||
Route patterns may include named parameters, accessible via the [martini.Params](http://godoc.org/github.com/codegangsta/martini#Params) service:
|
||||
~~~ go
|
||||
m.Get("/hello/:name", func(params martini.Params) string {
|
||||
return "Hello " + params["name"]
|
||||
})
|
||||
~~~
|
||||
|
||||
Routes can be matched with regular expressions and globs as well:
|
||||
~~~ go
|
||||
m.Get("/hello/**", func(params martini.Params) string {
|
||||
return "Hello " + params["_1"]
|
||||
})
|
||||
~~~
|
||||
|
||||
Route handlers can be stacked on top of each other, which is useful for things like authentication and authorization:
|
||||
~~~ go
|
||||
m.Get("/secret", authorize, func() {
|
||||
// this will execute as long as authorize doesn't write a response
|
||||
})
|
||||
~~~
|
||||
|
||||
Route groups can also be added using the Group method.
|
||||
~~~ go
|
||||
m.Group("/books", func(r Router) {
|
||||
r.Get("/:id", GetBooks)
|
||||
r.Post("/new", NewBook)
|
||||
r.Put("/update/:id", UpdateBook)
|
||||
r.Delete("/delete/:id", DeleteBook)
|
||||
})
|
||||
~~~
|
||||
|
||||
Just as you can pass middleware to a handler, you can pass middleware to groups.
|
||||
~~~ go
|
||||
m.Group("/books", func(r Router) {
|
||||
r.Get("/:id", GetBooks)
|
||||
r.Post("/new", NewBook)
|
||||
r.Put("/update/:id", UpdateBook)
|
||||
r.Delete("/delete/:id", DeleteBook)
|
||||
}, MyMiddleware1, MyMiddleware2)
|
||||
~~~
|
||||
|
||||
### Services
|
||||
Services are objects that are available to be injected into a Handler's argument list. You can map a service on a *Global* or *Request* level.
|
||||
|
||||
#### Global Mapping
|
||||
A Martini instance implements the inject.Injector interface, so mapping a service is easy:
|
||||
~~~ go
|
||||
db := &MyDatabase{}
|
||||
m := martini.Classic()
|
||||
m.Map(db) // the service will be available to all handlers as *MyDatabase
|
||||
// ...
|
||||
m.Run()
|
||||
~~~
|
||||
|
||||
#### Request-Level Mapping
|
||||
Mapping on the request level can be done in a handler via [martini.Context](http://godoc.org/github.com/codegangsta/martini#Context):
|
||||
~~~ go
|
||||
func MyCustomLoggerHandler(c martini.Context, req *http.Request) {
|
||||
logger := &MyCustomLogger{req}
|
||||
c.Map(logger) // mapped as *MyCustomLogger
|
||||
}
|
||||
~~~
|
||||
|
||||
#### Mapping values to Interfaces
|
||||
One of the most powerful aspects of services is the ability to map a service to an interface. For instance, if you want to override the [http.ResponseWriter](http://godoc.org/net/http#ResponseWriter) with an object that wraps it and performs extra operations, you can write the following handler:
|
||||
~~~ go
|
||||
func WrapResponseWriter(res http.ResponseWriter, c martini.Context) {
|
||||
rw := NewSpecialResponseWriter(res)
|
||||
c.MapTo(rw, (*http.ResponseWriter)(nil)) // override ResponseWriter with our wrapper ResponseWriter
|
||||
}
|
||||
~~~
|
||||
|
||||
### Serving Static Files
|
||||
A [martini.Classic()](http://godoc.org/github.com/codegangsta/martini#Classic) instance automatically serves static files from the "public" directory in the root of your server.
|
||||
You can serve from more directories by adding more [martini.Static](http://godoc.org/github.com/codegangsta/martini#Static) handlers.
|
||||
~~~ go
|
||||
m.Use(martini.Static("assets")) // serve from the "assets" directory as well
|
||||
~~~
|
||||
|
||||
## Middleware Handlers
|
||||
Middleware Handlers sit between the incoming http request and the router. In essence they are no different than any other Handler in Martini. You can add a middleware handler to the stack like so:
|
||||
~~~ go
|
||||
m.Use(func() {
|
||||
// do some middleware stuff
|
||||
})
|
||||
~~~
|
||||
|
||||
You can have full control over the middleware stack with the `Handlers` function. This will replace any handlers that have been previously set:
|
||||
~~~ go
|
||||
m.Handlers(
|
||||
Middleware1,
|
||||
Middleware2,
|
||||
Middleware3,
|
||||
)
|
||||
~~~
|
||||
|
||||
Middleware Handlers work really well for things like logging, authorization, authentication, sessions, gzipping, error pages and any other operations that must happen before or after an http request:
|
||||
~~~ go
|
||||
// validate an api key
|
||||
m.Use(func(res http.ResponseWriter, req *http.Request) {
|
||||
if req.Header.Get("X-API-KEY") != "secret123" {
|
||||
res.WriteHeader(http.StatusUnauthorized)
|
||||
}
|
||||
})
|
||||
~~~
|
||||
|
||||
### Next()
|
||||
[Context.Next()](http://godoc.org/github.com/codegangsta/martini#Context) is an optional function that Middleware Handlers can call to yield until after the other Handlers have been executed. This works really well for any operations that must happen after an http request:
|
||||
~~~ go
|
||||
// log before and after a request
|
||||
m.Use(func(c martini.Context, log *log.Logger){
|
||||
log.Println("before a request")
|
||||
|
||||
c.Next()
|
||||
|
||||
log.Println("after a request")
|
||||
})
|
||||
~~~
|
||||
|
||||
## Martini Env
|
||||
|
||||
Some Martini handlers make use of the `martini.Env` global variable to provide special functionality for development environments versus production environments. It is recommended that the `MARTINI_ENV=production` environment variable be set when deploying a Martini server into a production environment.
|
||||
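As a small illustration, a hypothetical handler might branch on the environment like this (`martini.Env` and `martini.Prod` are the values defined in `env.go` later in this diff):

~~~ go
m.Use(func(res http.ResponseWriter) {
	// Only expose a debug header outside of production.
	if martini.Env != martini.Prod {
		res.Header().Set("X-Environment", martini.Env)
	}
})
~~~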
|
||||
## FAQ
|
||||
|
||||
### Where do I find middleware X?
|
||||
|
||||
Start by looking in the [martini-contrib](https://github.com/martini-contrib) projects. If it is not there, feel free to contact a martini-contrib team member about adding a new repo to the organization.
|
||||
|
||||
* [auth](https://github.com/martini-contrib/auth) - Handlers for authentication.
|
||||
* [binding](https://github.com/martini-contrib/binding) - Handler for mapping/validating a raw request into a structure.
|
||||
* [gzip](https://github.com/martini-contrib/gzip) - Handler for adding gzip compression to responses
|
||||
* [render](https://github.com/martini-contrib/render) - Handler that provides a service for easily rendering JSON and HTML templates.
|
||||
* [acceptlang](https://github.com/martini-contrib/acceptlang) - Handler for parsing the `Accept-Language` HTTP header.
|
||||
* [sessions](https://github.com/martini-contrib/sessions) - Handler that provides a Session service.
|
||||
* [strip](https://github.com/martini-contrib/strip) - URL Prefix stripping.
|
||||
* [method](https://github.com/martini-contrib/method) - HTTP method overriding via Header or form fields.
|
||||
* [secure](https://github.com/martini-contrib/secure) - Implements a few quick security wins.
|
||||
* [encoder](https://github.com/martini-contrib/encoder) - Encoder service for rendering data in several formats and content negotiation.
|
||||
* [cors](https://github.com/martini-contrib/cors) - Handler that enables CORS support.
|
||||
* [oauth2](https://github.com/martini-contrib/oauth2) - Handler that provides OAuth 2.0 login for Martini apps. Google Sign-in, Facebook Connect and Github login are supported.
|
||||
|
||||
### How do I integrate with existing servers?
|
||||
|
||||
A Martini instance implements `http.Handler`, so it can easily be used to serve subtrees
|
||||
on existing Go servers. For example, this is a working Martini app for Google App Engine:
|
||||
|
||||
~~~ go
|
||||
package hello
|
||||
|
||||
import (
|
||||
"net/http"
|
||||
"github.com/codegangsta/martini"
|
||||
)
|
||||
|
||||
func init() {
|
||||
m := martini.Classic()
|
||||
m.Get("/", func() string {
|
||||
return "Hello world!"
|
||||
})
|
||||
http.Handle("/", m)
|
||||
}
|
||||
~~~
|
||||
|
||||
### How do I change the port/host?
|
||||
|
||||
Martini's `Run` function looks for the PORT and HOST environment variables and uses those. Otherwise Martini will default to localhost:3000.
|
||||
To have more flexibility over port and host, use the `http.ListenAndServe` function instead.
|
||||
|
||||
~~~ go
|
||||
m := martini.Classic()
|
||||
// ...
|
||||
log.Fatal(http.ListenAndServe(":8080", m))
|
||||
~~~
|
||||
|
||||
### Live code reload?
|
||||
|
||||
[gin](https://github.com/codegangsta/gin) and [fresh](https://github.com/pilu/fresh) both live reload martini apps.
|
||||
|
||||
## Contributing
|
||||
Martini is meant to be kept tiny and clean. Most contributions should end up in a repository in the [martini-contrib](https://github.com/martini-contrib) organization. If you do have a contribution for the core of Martini feel free to put up a Pull Request.
|
||||
|
||||
## About
|
||||
|
||||
Inspired by [express](https://github.com/visionmedia/express) and [sinatra](https://github.com/sinatra/sinatra)
|
||||
|
||||
Martini is obsessively designed by none other than the [Code Gangsta](http://codegangsta.io/)
|
||||
Godeps/_workspace/src/github.com/codegangsta/martini/env.go: 25 lines (generated, vendored, new file)
@@ -0,0 +1,25 @@
|
||||
package martini
|
||||
|
||||
import (
|
||||
"os"
|
||||
)
|
||||
|
||||
// Envs
|
||||
const (
|
||||
Dev string = "development"
|
||||
Prod string = "production"
|
||||
Test string = "test"
|
||||
)
|
||||
|
||||
// Env is the environment that Martini is executing in. The MARTINI_ENV environment variable is read on initialization to set this variable.
|
||||
var Env = Dev
|
||||
|
||||
func setENV(e string) {
|
||||
if len(e) > 0 {
|
||||
Env = e
|
||||
}
|
||||
}
|
||||
|
||||
func init() {
|
||||
setENV(os.Getenv("MARTINI_ENV"))
|
||||
}
|
||||
Godeps/_workspace/src/github.com/codegangsta/martini/env_test.go: 22 lines (generated, vendored, new file)
@@ -0,0 +1,22 @@
|
||||
package martini
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func Test_SetENV(t *testing.T) {
|
||||
tests := []struct {
|
||||
in string
|
||||
out string
|
||||
}{
|
||||
{"", "development"},
|
||||
{"not_development", "not_development"},
|
||||
}
|
||||
|
||||
for _, test := range tests {
|
||||
setENV(test.in)
|
||||
if Env != test.out {
|
||||
expect(t, Env, test.out)
|
||||
}
|
||||
}
|
||||
}
|
||||
Godeps/_workspace/src/github.com/codegangsta/martini/go_version.go: 7 lines (generated, vendored, new file)
@@ -0,0 +1,7 @@
|
||||
// +build !go1.1
|
||||
|
||||
package martini
|
||||
|
||||
func MartiniDoesNotSupportGo1Point0() {
|
||||
"Martini requires Go 1.1 or greater."
|
||||
}
|
||||
Godeps/_workspace/src/github.com/codegangsta/martini/logger.go: 20 lines (generated, vendored, new file)
@@ -0,0 +1,20 @@
|
||||
package martini
|
||||
|
||||
import (
|
||||
"log"
|
||||
"net/http"
|
||||
"time"
|
||||
)
|
||||
|
||||
// Logger returns a middleware handler that logs the request as it goes in and the response as it goes out.
|
||||
func Logger() Handler {
|
||||
return func(res http.ResponseWriter, req *http.Request, c Context, log *log.Logger) {
|
||||
start := time.Now()
|
||||
log.Printf("Started %s %s", req.Method, req.URL.Path)
|
||||
|
||||
rw := res.(ResponseWriter)
|
||||
c.Next()
|
||||
|
||||
log.Printf("Completed %v %s in %v\n", rw.Status(), http.StatusText(rw.Status()), time.Since(start))
|
||||
}
|
||||
}
|
||||
Godeps/_workspace/src/github.com/codegangsta/martini/logger_test.go: 31 lines (generated, vendored, new file)
@@ -0,0 +1,31 @@
|
||||
package martini
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"log"
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func Test_Logger(t *testing.T) {
|
||||
buff := bytes.NewBufferString("")
|
||||
recorder := httptest.NewRecorder()
|
||||
|
||||
m := New()
|
||||
// replace log for testing
|
||||
m.Map(log.New(buff, "[martini] ", 0))
|
||||
m.Use(Logger())
|
||||
m.Use(func(res http.ResponseWriter) {
|
||||
res.WriteHeader(http.StatusNotFound)
|
||||
})
|
||||
|
||||
req, err := http.NewRequest("GET", "http://localhost:3000/foobar", nil)
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
|
||||
m.ServeHTTP(recorder, req)
|
||||
expect(t, recorder.Code, http.StatusNotFound)
|
||||
refute(t, len(buff.String()), 0)
|
||||
}
|
||||
Godeps/_workspace/src/github.com/codegangsta/martini/martini.go: 173 lines (generated, vendored, new file)
@@ -0,0 +1,173 @@
|
||||
// Package martini is a powerful package for quickly writing modular web applications/services in Golang.
|
||||
//
|
||||
// For a full guide visit http://github.com/codegangsta/martini
|
||||
//
|
||||
// package main
|
||||
//
|
||||
// import "github.com/codegangsta/martini"
|
||||
//
|
||||
// func main() {
|
||||
// m := martini.Classic()
|
||||
//
|
||||
// m.Get("/", func() string {
|
||||
// return "Hello world!"
|
||||
// })
|
||||
//
|
||||
// m.Run()
|
||||
// }
|
||||
package martini
|
||||
|
||||
import (
|
||||
"log"
|
||||
"net/http"
|
||||
"os"
|
||||
"reflect"
|
||||
|
||||
"github.com/codegangsta/inject"
|
||||
)
|
||||
|
||||
// Martini represents the top level web application. inject.Injector methods can be invoked to map services on a global level.
|
||||
type Martini struct {
|
||||
inject.Injector
|
||||
handlers []Handler
|
||||
action Handler
|
||||
logger *log.Logger
|
||||
}
|
||||
|
||||
// New creates a bare bones Martini instance. Use this method if you want to have full control over the middleware that is used.
|
||||
func New() *Martini {
|
||||
m := &Martini{Injector: inject.New(), action: func() {}, logger: log.New(os.Stdout, "[martini] ", 0)}
|
||||
m.Map(m.logger)
|
||||
m.Map(defaultReturnHandler())
|
||||
return m
|
||||
}
|
||||
|
||||
// Handlers sets the entire middleware stack with the given Handlers. This will clear any current middleware handlers.
|
||||
// Will panic if any of the handlers is not a callable function
|
||||
func (m *Martini) Handlers(handlers ...Handler) {
|
||||
m.handlers = make([]Handler, 0)
|
||||
for _, handler := range handlers {
|
||||
m.Use(handler)
|
||||
}
|
||||
}
|
||||
|
||||
// Action sets the handler that will be called after all the middleware has been invoked. This is set to martini.Router in a martini.Classic().
|
||||
func (m *Martini) Action(handler Handler) {
|
||||
validateHandler(handler)
|
||||
m.action = handler
|
||||
}
|
||||
|
||||
// Use adds a middleware Handler to the stack. Will panic if the handler is not a callable func. Middleware Handlers are invoked in the order that they are added.
|
||||
func (m *Martini) Use(handler Handler) {
|
||||
validateHandler(handler)
|
||||
|
||||
m.handlers = append(m.handlers, handler)
|
||||
}
|
||||
|
||||
// ServeHTTP is the HTTP Entry point for a Martini instance. Useful if you want to control your own HTTP server.
|
||||
func (m *Martini) ServeHTTP(res http.ResponseWriter, req *http.Request) {
|
||||
m.createContext(res, req).run()
|
||||
}
|
||||
|
||||
// Run the http server. Listening on os.Getenv("PORT") or 3000 by default.
|
||||
func (m *Martini) Run() {
|
||||
port := os.Getenv("PORT")
|
||||
if port == "" {
|
||||
port = "3000"
|
||||
}
|
||||
|
||||
host := os.Getenv("HOST")
|
||||
|
||||
m.logger.Println("listening on " + host + ":" + port)
|
||||
m.logger.Fatalln(http.ListenAndServe(host+":"+port, m))
|
||||
}
|
||||
|
||||
func (m *Martini) createContext(res http.ResponseWriter, req *http.Request) *context {
|
||||
c := &context{inject.New(), m.handlers, m.action, NewResponseWriter(res), 0}
|
||||
c.SetParent(m)
|
||||
c.MapTo(c, (*Context)(nil))
|
||||
c.MapTo(c.rw, (*http.ResponseWriter)(nil))
|
||||
c.Map(req)
|
||||
return c
|
||||
}
|
||||
|
||||
// ClassicMartini represents a Martini with some reasonable defaults. Embeds the router functions for convenience.
|
||||
type ClassicMartini struct {
|
||||
*Martini
|
||||
Router
|
||||
}
|
||||
|
||||
// Classic creates a classic Martini with some basic default middleware - martini.Logger, martini.Recovery and martini.Static.
|
||||
// Classic also maps martini.Routes as a service.
|
||||
func Classic() *ClassicMartini {
|
||||
r := NewRouter()
|
||||
m := New()
|
||||
m.Use(Logger())
|
||||
m.Use(Recovery())
|
||||
m.Use(Static("public"))
|
||||
m.MapTo(r, (*Routes)(nil))
|
||||
m.Action(r.Handle)
|
||||
return &ClassicMartini{m, r}
|
||||
}
|
||||
|
||||
// Handler can be any callable function. Martini attempts to inject services into the handler's argument list.
|
||||
// Martini will panic if an argument could not be fulfilled via dependency injection.
|
||||
type Handler interface{}
|
||||
|
||||
func validateHandler(handler Handler) {
|
||||
if reflect.TypeOf(handler).Kind() != reflect.Func {
|
||||
panic("martini handler must be a callable func")
|
||||
}
|
||||
}
|
||||
|
||||
// Context represents a request context. Services can be mapped on the request level from this interface.
|
||||
type Context interface {
|
||||
inject.Injector
|
||||
// Next is an optional function that Middleware Handlers can call to yield until after
|
||||
// the other Handlers have been executed. This works really well for any operations that must
|
||||
// happen after an http request
|
||||
Next()
|
||||
// Written returns whether or not the response for this context has been written.
|
||||
Written() bool
|
||||
}
|
||||
|
||||
type context struct {
|
||||
inject.Injector
|
||||
handlers []Handler
|
||||
action Handler
|
||||
rw ResponseWriter
|
||||
index int
|
||||
}
|
||||
|
||||
func (c *context) handler() Handler {
|
||||
if c.index < len(c.handlers) {
|
||||
return c.handlers[c.index]
|
||||
}
|
||||
if c.index == len(c.handlers) {
|
||||
return c.action
|
||||
}
|
||||
panic("invalid index for context handler")
|
||||
}
|
||||
|
||||
func (c *context) Next() {
|
||||
c.index += 1
|
||||
c.run()
|
||||
}
|
||||
|
||||
func (c *context) Written() bool {
|
||||
return c.rw.Written()
|
||||
}
|
||||
|
||||
func (c *context) run() {
|
||||
for c.index <= len(c.handlers) {
|
||||
_, err := c.Invoke(c.handler())
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
c.index += 1
|
||||
|
||||
if c.Written() {
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
Godeps/_workspace/src/github.com/codegangsta/martini/martini_test.go: 141 lines (generated, vendored, new file)
@@ -0,0 +1,141 @@
|
||||
package martini
|
||||
|
||||
import (
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
"reflect"
|
||||
"testing"
|
||||
)
|
||||
|
||||
/* Test Helpers */
|
||||
func expect(t *testing.T, a interface{}, b interface{}) {
|
||||
if a != b {
|
||||
t.Errorf("Expected %v (type %v) - Got %v (type %v)", b, reflect.TypeOf(b), a, reflect.TypeOf(a))
|
||||
}
|
||||
}
|
||||
|
||||
func refute(t *testing.T, a interface{}, b interface{}) {
|
||||
if a == b {
|
||||
t.Errorf("Did not expect %v (type %v) - Got %v (type %v)", b, reflect.TypeOf(b), a, reflect.TypeOf(a))
|
||||
}
|
||||
}
|
||||
|
||||
func Test_New(t *testing.T) {
|
||||
m := New()
|
||||
if m == nil {
|
||||
t.Error("martini.New() cannot return nil")
|
||||
}
|
||||
}
|
||||
|
||||
func Test_Martini_Run(t *testing.T) {
|
||||
// just test that Run doesn't bomb
|
||||
go New().Run()
|
||||
}
|
||||
|
||||
func Test_Martini_ServeHTTP(t *testing.T) {
|
||||
result := ""
|
||||
response := httptest.NewRecorder()
|
||||
|
||||
m := New()
|
||||
m.Use(func(c Context) {
|
||||
result += "foo"
|
||||
c.Next()
|
||||
result += "ban"
|
||||
})
|
||||
m.Use(func(c Context) {
|
||||
result += "bar"
|
||||
c.Next()
|
||||
result += "baz"
|
||||
})
|
||||
m.Action(func(res http.ResponseWriter, req *http.Request) {
|
||||
result += "bat"
|
||||
res.WriteHeader(http.StatusBadRequest)
|
||||
})
|
||||
|
||||
m.ServeHTTP(response, (*http.Request)(nil))
|
||||
|
||||
expect(t, result, "foobarbatbazban")
|
||||
expect(t, response.Code, http.StatusBadRequest)
|
||||
}
|
||||
|
||||
func Test_Martini_Handlers(t *testing.T) {
|
||||
result := ""
|
||||
response := httptest.NewRecorder()
|
||||
|
||||
batman := func(c Context) {
|
||||
result += "batman!"
|
||||
}
|
||||
|
||||
m := New()
|
||||
m.Use(func(c Context) {
|
||||
result += "foo"
|
||||
c.Next()
|
||||
result += "ban"
|
||||
})
|
||||
m.Handlers(
|
||||
batman,
|
||||
batman,
|
||||
batman,
|
||||
)
|
||||
m.Action(func(res http.ResponseWriter, req *http.Request) {
|
||||
result += "bat"
|
||||
res.WriteHeader(http.StatusBadRequest)
|
||||
})
|
||||
|
||||
m.ServeHTTP(response, (*http.Request)(nil))
|
||||
|
||||
expect(t, result, "batman!batman!batman!bat")
|
||||
expect(t, response.Code, http.StatusBadRequest)
|
||||
}
|
||||
|
||||
func Test_Martini_EarlyWrite(t *testing.T) {
|
||||
result := ""
|
||||
response := httptest.NewRecorder()
|
||||
|
||||
m := New()
|
||||
m.Use(func(res http.ResponseWriter) {
|
||||
result += "foobar"
|
||||
res.Write([]byte("Hello world"))
|
||||
})
|
||||
m.Use(func() {
|
||||
result += "bat"
|
||||
})
|
||||
m.Action(func(res http.ResponseWriter) {
|
||||
result += "baz"
|
||||
res.WriteHeader(http.StatusBadRequest)
|
||||
})
|
||||
|
||||
m.ServeHTTP(response, (*http.Request)(nil))
|
||||
|
||||
expect(t, result, "foobar")
|
||||
expect(t, response.Code, http.StatusOK)
|
||||
}
|
||||
|
||||
func Test_Martini_Written(t *testing.T) {
|
||||
response := httptest.NewRecorder()
|
||||
|
||||
m := New()
|
||||
m.Handlers(func(res http.ResponseWriter) {
|
||||
res.WriteHeader(http.StatusOK)
|
||||
})
|
||||
|
||||
ctx := m.createContext(response, (*http.Request)(nil))
|
||||
expect(t, ctx.Written(), false)
|
||||
|
||||
ctx.run()
|
||||
expect(t, ctx.Written(), true)
|
||||
}
|
||||
|
||||
func Test_Martini_Basic_NoRace(t *testing.T) {
|
||||
m := New()
|
||||
handlers := []Handler{func() {}, func() {}}
|
||||
// Ensure append will not realloc to trigger the race condition
|
||||
m.handlers = handlers[:1]
|
||||
req, _ := http.NewRequest("GET", "/", nil)
|
||||
for i := 0; i < 2; i++ {
|
||||
go func() {
|
||||
response := httptest.NewRecorder()
|
||||
m.ServeHTTP(response, req)
|
||||
}()
|
||||
}
|
||||
}
|
||||
Godeps/_workspace/src/github.com/codegangsta/martini/recovery.go: 142 lines (generated, vendored, new file)
@@ -0,0 +1,142 @@
|
||||
package martini
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"log"
|
||||
"net/http"
|
||||
"runtime"
|
||||
|
||||
"github.com/codegangsta/inject"
|
||||
)
|
||||
|
||||
const (
|
||||
panicHtml = `<html>
|
||||
<head><title>PANIC: %s</title>
|
||||
<style type="text/css">
|
||||
html, body {
|
||||
font-family: "Roboto", sans-serif;
|
||||
color: #333333;
|
||||
background-color: #ea5343;
|
||||
margin: 0px;
|
||||
}
|
||||
h1 {
|
||||
color: #d04526;
|
||||
background-color: #ffffff;
|
||||
padding: 20px;
|
||||
border-bottom: 1px dashed #2b3848;
|
||||
}
|
||||
pre {
|
||||
margin: 20px;
|
||||
padding: 20px;
|
||||
border: 2px solid #2b3848;
|
||||
background-color: #ffffff;
|
||||
}
|
||||
</style>
|
||||
</head><body>
|
||||
<h1>PANIC</h1>
|
||||
<pre style="font-weight: bold;">%s</pre>
|
||||
<pre>%s</pre>
|
||||
</body>
|
||||
</html>`
|
||||
)
|
||||
|
||||
var (
|
||||
dunno = []byte("???")
|
||||
centerDot = []byte("·")
|
||||
dot = []byte(".")
|
||||
slash = []byte("/")
|
||||
)
|
||||
|
||||
// stack returns a nicely formatted stack frame, skipping skip frames
|
||||
func stack(skip int) []byte {
|
||||
buf := new(bytes.Buffer) // the returned data
|
||||
// As we loop, we open files and read them. These variables record the currently
|
||||
// loaded file.
|
||||
var lines [][]byte
|
||||
var lastFile string
|
||||
for i := skip; ; i++ { // Skip the expected number of frames
|
||||
pc, file, line, ok := runtime.Caller(i)
|
||||
if !ok {
|
||||
break
|
||||
}
|
||||
// Print this much at least. If we can't find the source, it won't show.
|
||||
fmt.Fprintf(buf, "%s:%d (0x%x)\n", file, line, pc)
|
||||
if file != lastFile {
|
||||
data, err := ioutil.ReadFile(file)
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
lines = bytes.Split(data, []byte{'\n'})
|
||||
lastFile = file
|
||||
}
|
||||
fmt.Fprintf(buf, "\t%s: %s\n", function(pc), source(lines, line))
|
||||
}
|
||||
return buf.Bytes()
|
||||
}
|
||||
|
||||
// source returns a space-trimmed slice of the n'th line.
|
||||
func source(lines [][]byte, n int) []byte {
|
||||
n-- // in stack trace, lines are 1-indexed but our array is 0-indexed
|
||||
if n < 0 || n >= len(lines) {
|
||||
return dunno
|
||||
}
|
||||
return bytes.TrimSpace(lines[n])
|
||||
}
|
||||
|
||||
// function returns, if possible, the name of the function containing the PC.
|
||||
func function(pc uintptr) []byte {
|
||||
fn := runtime.FuncForPC(pc)
|
||||
if fn == nil {
|
||||
return dunno
|
||||
}
|
||||
name := []byte(fn.Name())
|
||||
// The name includes the path name to the package, which is unnecessary
|
||||
// since the file name is already included. Plus, it has center dots.
|
||||
// That is, we see
|
||||
// runtime/debug.*T·ptrmethod
|
||||
// and want
|
||||
// *T.ptrmethod
|
||||
// Also the package path might contain dots (e.g. code.google.com/...),
|
||||
// so first eliminate the path prefix
|
||||
if lastslash := bytes.LastIndex(name, slash); lastslash >= 0 {
|
||||
name = name[lastslash+1:]
|
||||
}
|
||||
if period := bytes.Index(name, dot); period >= 0 {
|
||||
name = name[period+1:]
|
||||
}
|
||||
name = bytes.Replace(name, centerDot, dot, -1)
|
||||
return name
|
||||
}
|
||||
|
||||
// Recovery returns a middleware that recovers from any panics and writes a 500 if there was one.
|
||||
// While Martini is in development mode, Recovery will also output the panic as HTML.
|
||||
func Recovery() Handler {
|
||||
return func(c Context, log *log.Logger) {
|
||||
defer func() {
|
||||
if err := recover(); err != nil {
|
||||
stack := stack(3)
|
||||
log.Printf("PANIC: %s\n%s", err, stack)
|
||||
|
||||
// Lookup the current responsewriter
|
||||
val := c.Get(inject.InterfaceOf((*http.ResponseWriter)(nil)))
|
||||
res := val.Interface().(http.ResponseWriter)
|
||||
|
||||
// respond with panic message while in development mode
|
||||
var body []byte
|
||||
if Env == Dev {
|
||||
res.Header().Set("Content-Type", "text/html")
|
||||
body = []byte(fmt.Sprintf(panicHtml, err, err, stack))
|
||||
}
|
||||
|
||||
res.WriteHeader(http.StatusInternalServerError)
|
||||
if nil != body {
|
||||
res.Write(body)
|
||||
}
|
||||
}
|
||||
}()
|
||||
|
||||
c.Next()
|
||||
}
|
||||
}
|
||||
Godeps/_workspace/src/github.com/codegangsta/martini/recovery_test.go: 49 lines (generated, vendored, new file)
@@ -0,0 +1,49 @@
|
||||
package martini
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"log"
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func Test_Recovery(t *testing.T) {
|
||||
buff := bytes.NewBufferString("")
|
||||
recorder := httptest.NewRecorder()
|
||||
|
||||
setENV(Dev)
|
||||
m := New()
|
||||
// replace log for testing
|
||||
m.Map(log.New(buff, "[martini] ", 0))
|
||||
m.Use(func(res http.ResponseWriter, req *http.Request) {
|
||||
res.Header().Set("Content-Type", "unpredictable")
|
||||
})
|
||||
m.Use(Recovery())
|
||||
m.Use(func(res http.ResponseWriter, req *http.Request) {
|
||||
panic("here is a panic!")
|
||||
})
|
||||
m.ServeHTTP(recorder, (*http.Request)(nil))
|
||||
expect(t, recorder.Code, http.StatusInternalServerError)
|
||||
expect(t, recorder.HeaderMap.Get("Content-Type"), "text/html")
|
||||
refute(t, recorder.Body.Len(), 0)
|
||||
refute(t, len(buff.String()), 0)
|
||||
}
|
||||
|
||||
func Test_Recovery_ResponseWriter(t *testing.T) {
|
||||
recorder := httptest.NewRecorder()
|
||||
recorder2 := httptest.NewRecorder()
|
||||
|
||||
setENV(Dev)
|
||||
m := New()
|
||||
m.Use(Recovery())
|
||||
m.Use(func(c Context) {
|
||||
c.MapTo(recorder2, (*http.ResponseWriter)(nil))
|
||||
panic("here is a panic!")
|
||||
})
|
||||
m.ServeHTTP(recorder, (*http.Request)(nil))
|
||||
|
||||
expect(t, recorder2.Code, http.StatusInternalServerError)
|
||||
expect(t, recorder2.HeaderMap.Get("Content-Type"), "text/html")
|
||||
refute(t, recorder2.Body.Len(), 0)
|
||||
}
|
||||
Godeps/_workspace/src/github.com/codegangsta/martini/response_writer.go: 97 lines (generated, vendored, new file)
@@ -0,0 +1,97 @@
|
||||
package martini
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"fmt"
|
||||
"net"
|
||||
"net/http"
|
||||
)
|
||||
|
||||
// ResponseWriter is a wrapper around http.ResponseWriter that provides extra information about
|
||||
// the response. It is recommended that middleware handlers use this construct to wrap a responsewriter
|
||||
// if the functionality calls for it.
|
||||
type ResponseWriter interface {
|
||||
http.ResponseWriter
|
||||
http.Flusher
|
||||
// Status returns the status code of the response or 0 if the response has not been written.
|
||||
Status() int
|
||||
// Written returns whether or not the ResponseWriter has been written.
|
||||
Written() bool
|
||||
// Size returns the size of the response body.
|
||||
Size() int
|
||||
// Before allows for a function to be called before the ResponseWriter has been written to. This is
|
||||
// useful for setting headers or any other operations that must happen before a response has been written.
|
||||
Before(BeforeFunc)
|
||||
}
|
||||
|
||||
// BeforeFunc is a function that is called before the ResponseWriter has been written to.
|
||||
type BeforeFunc func(ResponseWriter)
|
||||
|
||||
// NewResponseWriter creates a ResponseWriter that wraps an http.ResponseWriter
|
||||
func NewResponseWriter(rw http.ResponseWriter) ResponseWriter {
|
||||
return &responseWriter{rw, 0, 0, nil}
|
||||
}
|
||||
|
||||
type responseWriter struct {
|
||||
http.ResponseWriter
|
||||
status int
|
||||
size int
|
||||
beforeFuncs []BeforeFunc
|
||||
}
|
||||
|
||||
func (rw *responseWriter) WriteHeader(s int) {
|
||||
rw.callBefore()
|
||||
rw.ResponseWriter.WriteHeader(s)
|
||||
rw.status = s
|
||||
}
|
||||
|
||||
func (rw *responseWriter) Write(b []byte) (int, error) {
|
||||
if !rw.Written() {
|
||||
// The status will be StatusOK if WriteHeader has not been called yet
|
||||
rw.WriteHeader(http.StatusOK)
|
||||
}
|
||||
size, err := rw.ResponseWriter.Write(b)
|
||||
rw.size += size
|
||||
return size, err
|
||||
}
|
||||
|
||||
func (rw *responseWriter) Status() int {
|
||||
return rw.status
|
||||
}
|
||||
|
||||
func (rw *responseWriter) Size() int {
|
||||
return rw.size
|
||||
}
|
||||
|
||||
func (rw *responseWriter) Written() bool {
|
||||
return rw.status != 0
|
||||
}
|
||||
|
||||
func (rw *responseWriter) Before(before BeforeFunc) {
|
||||
rw.beforeFuncs = append(rw.beforeFuncs, before)
|
||||
}
|
||||
|
||||
func (rw *responseWriter) Hijack() (net.Conn, *bufio.ReadWriter, error) {
|
||||
hijacker, ok := rw.ResponseWriter.(http.Hijacker)
|
||||
if !ok {
|
||||
return nil, nil, fmt.Errorf("the ResponseWriter doesn't support the Hijacker interface")
|
||||
}
|
||||
return hijacker.Hijack()
|
||||
}
|
||||
|
||||
func (rw *responseWriter) CloseNotify() <-chan bool {
|
||||
return rw.ResponseWriter.(http.CloseNotifier).CloseNotify()
|
||||
}
|
||||
|
||||
func (rw *responseWriter) callBefore() {
|
||||
for i := len(rw.beforeFuncs) - 1; i >= 0; i-- {
|
||||
rw.beforeFuncs[i](rw)
|
||||
}
|
||||
}
|
||||
|
||||
func (rw *responseWriter) Flush() {
|
||||
flusher, ok := rw.ResponseWriter.(http.Flusher)
|
||||
if ok {
|
||||
flusher.Flush()
|
||||
}
|
||||
}
|
||||
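As a usage sketch (hypothetical middleware, assuming the wrapped writer that Martini injects for http.ResponseWriter), the `Before` hook above lets a handler defer header changes until just before the response is written:

~~~ go
m.Use(func(res http.ResponseWriter) {
	if rw, ok := res.(martini.ResponseWriter); ok {
		rw.Before(func(martini.ResponseWriter) {
			// Runs immediately before the status and headers go out.
			res.Header().Set("X-Request-Handled", "true")
		})
	}
})
~~~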
Godeps/_workspace/src/github.com/codegangsta/martini/response_writer_test.go: 188 lines (generated, vendored, new file)
@@ -0,0 +1,188 @@
|
||||
package martini
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"io"
|
||||
"net"
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
"testing"
|
||||
"time"
|
||||
)
|
||||
|
||||
type closeNotifyingRecorder struct {
|
||||
*httptest.ResponseRecorder
|
||||
closed chan bool
|
||||
}
|
||||
|
||||
func newCloseNotifyingRecorder() *closeNotifyingRecorder {
|
||||
return &closeNotifyingRecorder{
|
||||
httptest.NewRecorder(),
|
||||
make(chan bool, 1),
|
||||
}
|
||||
}
|
||||
|
||||
func (c *closeNotifyingRecorder) close() {
|
||||
c.closed <- true
|
||||
}
|
||||
|
||||
func (c *closeNotifyingRecorder) CloseNotify() <-chan bool {
|
||||
return c.closed
|
||||
}
|
||||
|
||||
type hijackableResponse struct {
|
||||
Hijacked bool
|
||||
}
|
||||
|
||||
func newHijackableResponse() *hijackableResponse {
|
||||
return &hijackableResponse{}
|
||||
}
|
||||
|
||||
func (h *hijackableResponse) Header() http.Header { return nil }
|
||||
func (h *hijackableResponse) Write(buf []byte) (int, error) { return 0, nil }
|
||||
func (h *hijackableResponse) WriteHeader(code int) {}
|
||||
func (h *hijackableResponse) Flush() {}
|
||||
func (h *hijackableResponse) Hijack() (net.Conn, *bufio.ReadWriter, error) {
|
||||
h.Hijacked = true
|
||||
return nil, nil, nil
|
||||
}
|
||||
|
||||
func Test_ResponseWriter_WritingString(t *testing.T) {
|
||||
rec := httptest.NewRecorder()
|
||||
rw := NewResponseWriter(rec)
|
||||
|
||||
rw.Write([]byte("Hello world"))
|
||||
|
||||
expect(t, rec.Code, rw.Status())
|
||||
expect(t, rec.Body.String(), "Hello world")
|
||||
expect(t, rw.Status(), http.StatusOK)
|
||||
expect(t, rw.Size(), 11)
|
||||
expect(t, rw.Written(), true)
|
||||
}
|
||||
|
||||
func Test_ResponseWriter_WritingStrings(t *testing.T) {
|
||||
rec := httptest.NewRecorder()
|
||||
rw := NewResponseWriter(rec)
|
||||
|
||||
rw.Write([]byte("Hello world"))
|
||||
rw.Write([]byte("foo bar bat baz"))
|
||||
|
||||
expect(t, rec.Code, rw.Status())
|
||||
expect(t, rec.Body.String(), "Hello worldfoo bar bat baz")
|
||||
expect(t, rw.Status(), http.StatusOK)
|
||||
expect(t, rw.Size(), 26)
|
||||
}
|
||||
|
||||
func Test_ResponseWriter_WritingHeader(t *testing.T) {
|
||||
rec := httptest.NewRecorder()
|
||||
rw := NewResponseWriter(rec)
|
||||
|
||||
rw.WriteHeader(http.StatusNotFound)
|
||||
|
||||
expect(t, rec.Code, rw.Status())
|
||||
expect(t, rec.Body.String(), "")
|
||||
expect(t, rw.Status(), http.StatusNotFound)
|
||||
expect(t, rw.Size(), 0)
|
||||
}
|
||||
|
||||
func Test_ResponseWriter_Before(t *testing.T) {
|
||||
rec := httptest.NewRecorder()
|
||||
rw := NewResponseWriter(rec)
|
||||
result := ""
|
||||
|
||||
rw.Before(func(ResponseWriter) {
|
||||
result += "foo"
|
||||
})
|
||||
rw.Before(func(ResponseWriter) {
|
||||
result += "bar"
|
||||
})
|
||||
|
||||
rw.WriteHeader(http.StatusNotFound)
|
||||
|
||||
expect(t, rec.Code, rw.Status())
|
||||
expect(t, rec.Body.String(), "")
|
||||
expect(t, rw.Status(), http.StatusNotFound)
|
||||
expect(t, rw.Size(), 0)
|
||||
expect(t, result, "barfoo")
|
||||
}
|
||||
|
||||
func Test_ResponseWriter_Hijack(t *testing.T) {
|
||||
hijackable := newHijackableResponse()
|
||||
rw := NewResponseWriter(hijackable)
|
||||
hijacker, ok := rw.(http.Hijacker)
|
||||
expect(t, ok, true)
|
||||
_, _, err := hijacker.Hijack()
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
expect(t, hijackable.Hijacked, true)
|
||||
}
|
||||
|
||||
func Test_ResponseWrite_Hijack_NotOK(t *testing.T) {
|
||||
hijackable := new(http.ResponseWriter)
|
||||
rw := NewResponseWriter(*hijackable)
|
||||
hijacker, ok := rw.(http.Hijacker)
|
||||
expect(t, ok, true)
|
||||
_, _, err := hijacker.Hijack()
|
||||
|
||||
refute(t, err, nil)
|
||||
}
|
||||
|
||||
func Test_ResponseWriter_CloseNotify(t *testing.T) {
|
||||
rec := newCloseNotifyingRecorder()
|
||||
rw := NewResponseWriter(rec)
|
||||
closed := false
|
||||
notifier := rw.(http.CloseNotifier).CloseNotify()
|
||||
rec.close()
|
||||
select {
|
||||
case <-notifier:
|
||||
closed = true
|
||||
case <-time.After(time.Second):
|
||||
}
|
||||
expect(t, closed, true)
|
||||
}
|
||||
|
||||
func Test_ResponseWriter_Flusher(t *testing.T) {
|
||||
|
||||
rec := httptest.NewRecorder()
|
||||
rw := NewResponseWriter(rec)
|
||||
|
||||
_, ok := rw.(http.Flusher)
|
||||
expect(t, ok, true)
|
||||
}
|
||||
|
||||
func Test_ResponseWriter_FlusherHandler(t *testing.T) {
|
||||
|
||||
// New martini instance
|
||||
m := Classic()
|
||||
|
||||
m.Get("/events", func(w http.ResponseWriter, r *http.Request) {
|
||||
|
||||
f, ok := w.(http.Flusher)
|
||||
expect(t, ok, true)
|
||||
|
||||
w.Header().Set("Content-Type", "text/event-stream")
|
||||
w.Header().Set("Cache-Control", "no-cache")
|
||||
w.Header().Set("Connection", "keep-alive")
|
||||
|
||||
for i := 0; i < 2; i++ {
|
||||
time.Sleep(10 * time.Millisecond)
|
||||
io.WriteString(w, "data: Hello\n\n")
|
||||
f.Flush()
|
||||
}
|
||||
|
||||
})
|
||||
|
||||
recorder := httptest.NewRecorder()
|
||||
r, _ := http.NewRequest("GET", "/events", nil)
|
||||
m.ServeHTTP(recorder, r)
|
||||
|
||||
if recorder.Code != 200 {
|
||||
t.Error("Response not 200")
|
||||
}
|
||||
|
||||
if recorder.Body.String() != "data: Hello\n\ndata: Hello\n\n" {
|
||||
t.Error("Didn't receive correct body, got:", recorder.Body.String())
|
||||
}
|
||||
|
||||
}
|
||||
Godeps/_workspace/src/github.com/codegangsta/martini/return_handler.go: 43 lines (generated, vendored, new file)
@@ -0,0 +1,43 @@
|
||||
package martini
|
||||
|
||||
import (
|
||||
"github.com/codegangsta/inject"
|
||||
"net/http"
|
||||
"reflect"
|
||||
)
|
||||
|
||||
// ReturnHandler is a service that Martini provides that is called
|
||||
// when a route handler returns something. The ReturnHandler is
|
||||
// responsible for writing to the ResponseWriter based on the values
|
||||
// that are passed into this function.
|
||||
type ReturnHandler func(Context, []reflect.Value)
|
||||
|
||||
func defaultReturnHandler() ReturnHandler {
|
||||
return func(ctx Context, vals []reflect.Value) {
|
||||
rv := ctx.Get(inject.InterfaceOf((*http.ResponseWriter)(nil)))
|
||||
res := rv.Interface().(http.ResponseWriter)
|
||||
var responseVal reflect.Value
|
||||
if len(vals) > 1 && vals[0].Kind() == reflect.Int {
|
||||
res.WriteHeader(int(vals[0].Int()))
|
||||
responseVal = vals[1]
|
||||
} else if len(vals) > 0 {
|
||||
responseVal = vals[0]
|
||||
}
|
||||
if canDeref(responseVal) {
|
||||
responseVal = responseVal.Elem()
|
||||
}
|
||||
if isByteSlice(responseVal) {
|
||||
res.Write(responseVal.Bytes())
|
||||
} else {
|
||||
res.Write([]byte(responseVal.String()))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func isByteSlice(val reflect.Value) bool {
|
||||
return val.Kind() == reflect.Slice && val.Type().Elem().Kind() == reflect.Uint8
|
||||
}
|
||||
|
||||
func canDeref(val reflect.Value) bool {
|
||||
return val.Kind() == reflect.Interface || val.Kind() == reflect.Ptr
|
||||
}
|
||||
Godeps/_workspace/src/github.com/codegangsta/martini/router.go: 331 lines (generated, vendored, new file)
@@ -0,0 +1,331 @@
|
||||
package martini
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"net/http"
|
||||
"reflect"
|
||||
"regexp"
|
||||
"strconv"
|
||||
)
|
||||
|
||||
// Params is a map of name/value pairs for named routes. An instance of martini.Params is available to be injected into any route handler.
|
||||
type Params map[string]string
|
||||
|
||||
// Router is Martini's de-facto routing interface. Supports HTTP verbs, stacked handlers, and dependency injection.
|
||||
type Router interface {
|
||||
Routes
|
||||
|
||||
// Group adds a group where related routes can be added.
|
||||
Group(string, func(Router), ...Handler)
|
||||
// Get adds a route for a HTTP GET request to the specified matching pattern.
|
||||
Get(string, ...Handler) Route
|
||||
// Patch adds a route for a HTTP PATCH request to the specified matching pattern.
|
||||
Patch(string, ...Handler) Route
|
||||
// Post adds a route for a HTTP POST request to the specified matching pattern.
|
||||
Post(string, ...Handler) Route
|
||||
// Put adds a route for a HTTP PUT request to the specified matching pattern.
|
||||
Put(string, ...Handler) Route
|
||||
// Delete adds a route for a HTTP DELETE request to the specified matching pattern.
|
||||
Delete(string, ...Handler) Route
|
||||
// Options adds a route for a HTTP OPTIONS request to the specified matching pattern.
|
||||
Options(string, ...Handler) Route
|
||||
// Head adds a route for a HTTP HEAD request to the specified matching pattern.
|
||||
Head(string, ...Handler) Route
|
||||
// Any adds a route for any HTTP method request to the specified matching pattern.
|
||||
Any(string, ...Handler) Route
|
||||
|
||||
// NotFound sets the handlers that are called when no route matches a request. Throws a basic 404 by default.
|
||||
NotFound(...Handler)
|
||||
|
||||
// Handle is the entry point for routing. This is used as a martini.Handler
|
||||
Handle(http.ResponseWriter, *http.Request, Context)
|
||||
}
|
||||
|
||||
type router struct {
|
||||
routes []*route
|
||||
notFounds []Handler
|
||||
groups []group
|
||||
}
|
||||
|
||||
type group struct {
|
||||
pattern string
|
||||
handlers []Handler
|
||||
}
|
||||
|
||||
// NewRouter creates a new Router instance.
|
||||
// If you aren't using ClassicMartini, then you can add Routes as a
|
||||
// service with:
|
||||
//
|
||||
// m := martini.New()
|
||||
// r := martini.NewRouter()
|
||||
// m.MapTo(r, (*martini.Routes)(nil))
|
||||
//
|
||||
// If you are using ClassicMartini, then this is done for you.
|
||||
func NewRouter() Router {
|
||||
return &router{notFounds: []Handler{http.NotFound}, groups: make([]group, 0)}
|
||||
}
|
||||
|
||||
func (r *router) Group(pattern string, fn func(Router), h ...Handler) {
|
||||
r.groups = append(r.groups, group{pattern, h})
|
||||
fn(r)
|
||||
r.groups = r.groups[:len(r.groups)-1]
|
||||
}
|
||||
|
||||
func (r *router) Get(pattern string, h ...Handler) Route {
|
||||
return r.addRoute("GET", pattern, h)
|
||||
}
|
||||
|
||||
func (r *router) Patch(pattern string, h ...Handler) Route {
|
||||
return r.addRoute("PATCH", pattern, h)
|
||||
}
|
||||
|
||||
func (r *router) Post(pattern string, h ...Handler) Route {
|
||||
return r.addRoute("POST", pattern, h)
|
||||
}
|
||||
|
||||
func (r *router) Put(pattern string, h ...Handler) Route {
|
||||
return r.addRoute("PUT", pattern, h)
|
||||
}
|
||||
|
||||
func (r *router) Delete(pattern string, h ...Handler) Route {
|
||||
return r.addRoute("DELETE", pattern, h)
|
||||
}
|
||||
|
||||
func (r *router) Options(pattern string, h ...Handler) Route {
|
||||
return r.addRoute("OPTIONS", pattern, h)
|
||||
}
|
||||
|
||||
func (r *router) Head(pattern string, h ...Handler) Route {
|
||||
return r.addRoute("HEAD", pattern, h)
|
||||
}
|
||||
|
||||
func (r *router) Any(pattern string, h ...Handler) Route {
|
||||
return r.addRoute("*", pattern, h)
|
||||
}
|
||||
|
||||
func (r *router) Handle(res http.ResponseWriter, req *http.Request, context Context) {
|
||||
for _, route := range r.routes {
|
||||
ok, vals := route.Match(req.Method, req.URL.Path)
|
||||
if ok {
|
||||
params := Params(vals)
|
||||
context.Map(params)
|
||||
route.Handle(context, res)
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
// no route matched, 404
|
||||
c := &routeContext{context, 0, r.notFounds}
|
||||
context.MapTo(c, (*Context)(nil))
|
||||
c.run()
|
||||
}
|
||||
|
||||
func (r *router) NotFound(handler ...Handler) {
|
||||
r.notFounds = handler
|
||||
}
|
||||
|
||||
func (r *router) addRoute(method string, pattern string, handlers []Handler) *route {
|
||||
if len(r.groups) > 0 {
|
||||
group := r.groups[len(r.groups)-1]
|
||||
pattern = group.pattern + pattern
|
||||
h := make([]Handler, len(group.handlers)+len(handlers))
|
||||
copy(h, group.handlers)
|
||||
copy(h[len(group.handlers):], handlers)
|
||||
handlers = h
|
||||
}
|
||||
|
||||
route := newRoute(method, pattern, handlers)
|
||||
route.Validate()
|
||||
r.routes = append(r.routes, route)
|
||||
return route
|
||||
}
|
||||
|
||||
func (r *router) findRoute(name string) *route {
|
||||
for _, route := range r.routes {
|
||||
if route.name == name {
|
||||
return route
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// Route is an interface representing a Route in Martini's routing layer.
|
||||
type Route interface {
|
||||
// URLWith returns a rendering of the Route's url with the given string params.
|
||||
URLWith([]string) string
|
||||
Name(string)
|
||||
}
|
||||
|
||||
type route struct {
|
||||
method string
|
||||
regex *regexp.Regexp
|
||||
handlers []Handler
|
||||
pattern string
|
||||
name string
|
||||
}
|
||||
|
||||
func newRoute(method string, pattern string, handlers []Handler) *route {
|
||||
route := route{method, nil, handlers, pattern, ""}
|
||||
r := regexp.MustCompile(`:[^/#?()\.\\]+`)
|
||||
pattern = r.ReplaceAllStringFunc(pattern, func(m string) string {
|
||||
return fmt.Sprintf(`(?P<%s>[^/#?]+)`, m[1:])
|
||||
})
|
||||
r2 := regexp.MustCompile(`\*\*`)
|
||||
var index int
|
||||
pattern = r2.ReplaceAllStringFunc(pattern, func(m string) string {
|
||||
index++
|
||||
return fmt.Sprintf(`(?P<_%d>[^#?]*)`, index)
|
||||
})
|
||||
pattern += `\/?`
|
||||
route.regex = regexp.MustCompile(pattern)
|
||||
return &route
|
||||
}
|
||||
|
||||
func (r route) MatchMethod(method string) bool {
|
||||
return r.method == "*" || method == r.method || (method == "HEAD" && r.method == "GET")
|
||||
}
|
||||
|
||||
func (r route) Match(method string, path string) (bool, map[string]string) {
|
||||
// method matching is delegated to MatchMethod, which also handles "*" (Any) routes and HEAD-as-GET
|
||||
if !r.MatchMethod(method) {
|
||||
return false, nil
|
||||
}
|
||||
|
||||
matches := r.regex.FindStringSubmatch(path)
|
||||
if len(matches) > 0 && matches[0] == path {
|
||||
params := make(map[string]string)
|
||||
for i, name := range r.regex.SubexpNames() {
|
||||
if len(name) > 0 {
|
||||
params[name] = matches[i]
|
||||
}
|
||||
}
|
||||
return true, params
|
||||
}
|
||||
return false, nil
|
||||
}
|
||||
|
||||
func (r *route) Validate() {
|
||||
for _, handler := range r.handlers {
|
||||
validateHandler(handler)
|
||||
}
|
||||
}
|
||||
|
||||
func (r *route) Handle(c Context, res http.ResponseWriter) {
|
||||
context := &routeContext{c, 0, r.handlers}
|
||||
c.MapTo(context, (*Context)(nil))
|
||||
context.run()
|
||||
}
|
||||
|
||||
// URLWith returns the route's URL pattern with its named parameters replaced by the given values
|
||||
func (r *route) URLWith(args []string) string {
|
||||
if len(args) > 0 {
|
||||
reg := regexp.MustCompile(`:[^/#?()\.\\]+`)
|
||||
argCount := len(args)
|
||||
i := 0
|
||||
url := reg.ReplaceAllStringFunc(r.pattern, func(m string) string {
|
||||
var val interface{}
|
||||
if i < argCount {
|
||||
val = args[i]
|
||||
} else {
|
||||
val = m
|
||||
}
|
||||
i += 1
|
||||
return fmt.Sprintf(`%v`, val)
|
||||
})
|
||||
|
||||
return url
|
||||
}
|
||||
return r.pattern
|
||||
}
|
||||
|
||||
func (r *route) Name(name string) {
|
||||
r.name = name
|
||||
}
|
||||
|
||||
// Routes is a helper service for Martini's routing layer.
|
||||
type Routes interface {
|
||||
// URLFor returns a rendered URL for the given route. Optional params can be passed to fulfill named parameters in the route.
|
||||
URLFor(name string, params ...interface{}) string
|
||||
// MethodsFor returns an array of methods available for the path
|
||||
MethodsFor(path string) []string
|
||||
}
|
||||
|
||||
// URLFor returns the url for the given route name.
|
||||
func (r *router) URLFor(name string, params ...interface{}) string {
|
||||
route := r.findRoute(name)
|
||||
|
||||
if route == nil {
|
||||
panic("route not found")
|
||||
}
|
||||
|
||||
var args []string
|
||||
for _, param := range params {
|
||||
switch v := param.(type) {
|
||||
case int:
|
||||
args = append(args, strconv.FormatInt(int64(v), 10))
|
||||
case string:
|
||||
args = append(args, v)
|
||||
default:
|
||||
if v != nil {
|
||||
panic("Arguments passed to URLFor must be integers or strings")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return route.URLWith(args)
|
||||
}
|
||||
|
||||
func hasMethod(methods []string, method string) bool {
|
||||
for _, v := range methods {
|
||||
if v == method {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// MethodsFor returns all methods available for path
|
||||
func (r *router) MethodsFor(path string) []string {
|
||||
methods := []string{}
|
||||
for _, route := range r.routes {
|
||||
matches := route.regex.FindStringSubmatch(path)
|
||||
if len(matches) > 0 && matches[0] == path && !hasMethod(methods, route.method) {
|
||||
methods = append(methods, route.method)
|
||||
}
|
||||
}
|
||||
return methods
|
||||
}
|
||||
|
||||
type routeContext struct {
|
||||
Context
|
||||
index int
|
||||
handlers []Handler
|
||||
}
|
||||
|
||||
func (r *routeContext) Next() {
|
||||
r.index += 1
|
||||
r.run()
|
||||
}
|
||||
|
||||
func (r *routeContext) run() {
|
||||
for r.index < len(r.handlers) {
|
||||
handler := r.handlers[r.index]
|
||||
vals, err := r.Invoke(handler)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
r.index += 1
|
||||
|
||||
// if the handler returned something, write it to the http response
|
||||
if len(vals) > 0 {
|
||||
ev := r.Get(reflect.TypeOf(ReturnHandler(nil)))
|
||||
handleReturn := ev.Interface().(ReturnHandler)
|
||||
handleReturn(r, vals)
|
||||
}
|
||||
|
||||
if r.Written() {
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
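Aside: the Route and Routes helpers defined above (URLWith, Name, URLFor, MethodsFor) are easiest to see from application code. Below is a minimal sketch, not part of the vendored file, assuming the usual martini.Classic() setup; the route name `user_show` and the `/users/:id` pattern are invented for illustration:

~~~ go
package main

import (
	"fmt"

	"github.com/codegangsta/martini"
)

func main() {
	m := martini.Classic()

	// Name the route so it can be looked up later through the Routes service.
	m.Get("/users/:id", func(params martini.Params) string {
		return "user " + params["id"]
	}).Name("user_show")

	// martini.Classic() maps the router as martini.Routes, so it can be injected.
	m.Get("/where", func(routes martini.Routes) string {
		url := routes.URLFor("user_show", 42)     // "/users/42"
		methods := routes.MethodsFor("/users/42") // e.g. ["GET"]
		return fmt.Sprintf("%s %v", url, methods)
	})

	m.Run()
}
~~~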
410
Godeps/_workspace/src/github.com/codegangsta/martini/router_test.go
generated
vendored
Normal file
@@ -0,0 +1,410 @@
|
||||
package martini
|
||||
|
||||
import (
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
"strings"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func Test_Routing(t *testing.T) {
|
||||
router := NewRouter()
|
||||
recorder := httptest.NewRecorder()
|
||||
|
||||
req, _ := http.NewRequest("GET", "http://localhost:3000/foo", nil)
|
||||
context := New().createContext(recorder, req)
|
||||
|
||||
req2, _ := http.NewRequest("POST", "http://localhost:3000/bar/bat", nil)
|
||||
context2 := New().createContext(recorder, req2)
|
||||
|
||||
req3, _ := http.NewRequest("DELETE", "http://localhost:3000/baz", nil)
|
||||
context3 := New().createContext(recorder, req3)
|
||||
|
||||
req4, _ := http.NewRequest("PATCH", "http://localhost:3000/bar/foo", nil)
|
||||
context4 := New().createContext(recorder, req4)
|
||||
|
||||
req5, _ := http.NewRequest("GET", "http://localhost:3000/fez/this/should/match", nil)
|
||||
context5 := New().createContext(recorder, req5)
|
||||
|
||||
req6, _ := http.NewRequest("PUT", "http://localhost:3000/pop/blah/blah/blah/bap/foo/", nil)
|
||||
context6 := New().createContext(recorder, req6)
|
||||
|
||||
req7, _ := http.NewRequest("DELETE", "http://localhost:3000/wap//pow", nil)
|
||||
context7 := New().createContext(recorder, req7)
|
||||
|
||||
req8, _ := http.NewRequest("HEAD", "http://localhost:3000/wap//pow", nil)
|
||||
context8 := New().createContext(recorder, req8)
|
||||
|
||||
req9, _ := http.NewRequest("OPTIONS", "http://localhost:3000/opts", nil)
|
||||
context9 := New().createContext(recorder, req9)
|
||||
|
||||
req10, _ := http.NewRequest("HEAD", "http://localhost:3000/foo", nil)
|
||||
context10 := New().createContext(recorder, req10)
|
||||
|
||||
req11, _ := http.NewRequest("GET", "http://localhost:3000/bazz/inga", nil)
|
||||
context11 := New().createContext(recorder, req11)
|
||||
|
||||
req12, _ := http.NewRequest("POST", "http://localhost:3000/bazz/inga", nil)
|
||||
context12 := New().createContext(recorder, req12)
|
||||
|
||||
result := ""
|
||||
router.Get("/foo", func(req *http.Request) {
|
||||
result += "foo"
|
||||
})
|
||||
router.Patch("/bar/:id", func(params Params) {
|
||||
expect(t, params["id"], "foo")
|
||||
result += "barfoo"
|
||||
})
|
||||
router.Post("/bar/:id", func(params Params) {
|
||||
expect(t, params["id"], "bat")
|
||||
result += "barbat"
|
||||
})
|
||||
router.Put("/fizzbuzz", func() {
|
||||
result += "fizzbuzz"
|
||||
})
|
||||
router.Delete("/bazzer", func(c Context) {
|
||||
result += "baz"
|
||||
})
|
||||
router.Get("/fez/**", func(params Params) {
|
||||
expect(t, params["_1"], "this/should/match")
|
||||
result += "fez"
|
||||
})
|
||||
router.Put("/pop/**/bap/:id/**", func(params Params) {
|
||||
expect(t, params["id"], "foo")
|
||||
expect(t, params["_1"], "blah/blah/blah")
|
||||
expect(t, params["_2"], "")
|
||||
result += "popbap"
|
||||
})
|
||||
router.Delete("/wap/**/pow", func(params Params) {
|
||||
expect(t, params["_1"], "")
|
||||
result += "wappow"
|
||||
})
|
||||
router.Options("/opts", func() {
|
||||
result += "opts"
|
||||
})
|
||||
router.Head("/wap/**/pow", func(params Params) {
|
||||
expect(t, params["_1"], "")
|
||||
result += "wappow"
|
||||
})
|
||||
router.Group("/bazz", func(r Router) {
|
||||
r.Get("/inga", func() {
|
||||
result += "get"
|
||||
})
|
||||
|
||||
r.Post("/inga", func() {
|
||||
result += "post"
|
||||
})
|
||||
}, func() {
|
||||
result += "bazz"
|
||||
}, func() {
|
||||
result += "inga"
|
||||
})
|
||||
|
||||
router.Handle(recorder, req, context)
|
||||
router.Handle(recorder, req2, context2)
|
||||
router.Handle(recorder, req3, context3)
|
||||
router.Handle(recorder, req4, context4)
|
||||
router.Handle(recorder, req5, context5)
|
||||
router.Handle(recorder, req6, context6)
|
||||
router.Handle(recorder, req7, context7)
|
||||
router.Handle(recorder, req8, context8)
|
||||
router.Handle(recorder, req9, context9)
|
||||
router.Handle(recorder, req10, context10)
|
||||
router.Handle(recorder, req11, context11)
|
||||
router.Handle(recorder, req12, context12)
|
||||
expect(t, result, "foobarbatbarfoofezpopbapwappowwappowoptsfoobazzingagetbazzingapost")
|
||||
expect(t, recorder.Code, http.StatusNotFound)
|
||||
expect(t, recorder.Body.String(), "404 page not found\n")
|
||||
}
|
||||
|
||||
func Test_RouterHandlerStatusCode(t *testing.T) {
|
||||
router := NewRouter()
|
||||
router.Get("/foo", func() string {
|
||||
return "foo"
|
||||
})
|
||||
router.Get("/bar", func() (int, string) {
|
||||
return http.StatusForbidden, "bar"
|
||||
})
|
||||
router.Get("/baz", func() (string, string) {
|
||||
return "baz", "BAZ!"
|
||||
})
|
||||
router.Get("/bytes", func() []byte {
|
||||
return []byte("Bytes!")
|
||||
})
|
||||
router.Get("/interface", func() interface{} {
|
||||
return "Interface!"
|
||||
})
|
||||
|
||||
// code should be 200 if none is returned from the handler
|
||||
recorder := httptest.NewRecorder()
|
||||
req, _ := http.NewRequest("GET", "http://localhost:3000/foo", nil)
|
||||
context := New().createContext(recorder, req)
|
||||
router.Handle(recorder, req, context)
|
||||
expect(t, recorder.Code, http.StatusOK)
|
||||
expect(t, recorder.Body.String(), "foo")
|
||||
|
||||
// if a status code is returned, it should be used
|
||||
recorder = httptest.NewRecorder()
|
||||
req, _ = http.NewRequest("GET", "http://localhost:3000/bar", nil)
|
||||
context = New().createContext(recorder, req)
|
||||
router.Handle(recorder, req, context)
|
||||
expect(t, recorder.Code, http.StatusForbidden)
|
||||
expect(t, recorder.Body.String(), "bar")
|
||||
|
||||
// shouldn't use the first returned value as a status code if not an integer
|
||||
recorder = httptest.NewRecorder()
|
||||
req, _ = http.NewRequest("GET", "http://localhost:3000/baz", nil)
|
||||
context = New().createContext(recorder, req)
|
||||
router.Handle(recorder, req, context)
|
||||
expect(t, recorder.Code, http.StatusOK)
|
||||
expect(t, recorder.Body.String(), "baz")
|
||||
|
||||
// Should render bytes as a return value as well.
|
||||
recorder = httptest.NewRecorder()
|
||||
req, _ = http.NewRequest("GET", "http://localhost:3000/bytes", nil)
|
||||
context = New().createContext(recorder, req)
|
||||
router.Handle(recorder, req, context)
|
||||
expect(t, recorder.Code, http.StatusOK)
|
||||
expect(t, recorder.Body.String(), "Bytes!")
|
||||
|
||||
// Should render interface{} values.
|
||||
recorder = httptest.NewRecorder()
|
||||
req, _ = http.NewRequest("GET", "http://localhost:3000/interface", nil)
|
||||
context = New().createContext(recorder, req)
|
||||
router.Handle(recorder, req, context)
|
||||
expect(t, recorder.Code, http.StatusOK)
|
||||
expect(t, recorder.Body.String(), "Interface!")
|
||||
}
|
||||
|
||||
func Test_RouterHandlerStacking(t *testing.T) {
|
||||
router := NewRouter()
|
||||
recorder := httptest.NewRecorder()
|
||||
|
||||
req, _ := http.NewRequest("GET", "http://localhost:3000/foo", nil)
|
||||
context := New().createContext(recorder, req)
|
||||
|
||||
result := ""
|
||||
|
||||
f1 := func() {
|
||||
result += "foo"
|
||||
}
|
||||
|
||||
f2 := func(c Context) {
|
||||
result += "bar"
|
||||
c.Next()
|
||||
result += "bing"
|
||||
}
|
||||
|
||||
f3 := func() string {
|
||||
result += "bat"
|
||||
return "Hello world"
|
||||
}
|
||||
|
||||
f4 := func() {
|
||||
result += "baz"
|
||||
}
|
||||
|
||||
router.Get("/foo", f1, f2, f3, f4)
|
||||
|
||||
router.Handle(recorder, req, context)
|
||||
expect(t, result, "foobarbatbing")
|
||||
expect(t, recorder.Body.String(), "Hello world")
|
||||
}
|
||||
|
||||
var routeTests = []struct {
|
||||
// in
|
||||
method string
|
||||
path string
|
||||
|
||||
// out
|
||||
ok bool
|
||||
params map[string]string
|
||||
}{
|
||||
{"GET", "/foo/123/bat/321", true, map[string]string{"bar": "123", "baz": "321"}},
|
||||
{"POST", "/foo/123/bat/321", false, map[string]string{}},
|
||||
{"GET", "/foo/hello/bat/world", true, map[string]string{"bar": "hello", "baz": "world"}},
|
||||
{"GET", "foo/hello/bat/world", false, map[string]string{}},
|
||||
{"GET", "/foo/123/bat/321/", true, map[string]string{"bar": "123", "baz": "321"}},
|
||||
{"GET", "/foo/123/bat/321//", false, map[string]string{}},
|
||||
{"GET", "/foo/123//bat/321/", false, map[string]string{}},
|
||||
}
|
||||
|
||||
func Test_RouteMatching(t *testing.T) {
|
||||
route := newRoute("GET", "/foo/:bar/bat/:baz", nil)
|
||||
for _, tt := range routeTests {
|
||||
ok, params := route.Match(tt.method, tt.path)
|
||||
if ok != tt.ok || params["bar"] != tt.params["bar"] || params["baz"] != tt.params["baz"] {
|
||||
t.Errorf("expected: (%v, %v) got: (%v, %v)", tt.ok, tt.params, ok, params)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func Test_MethodsFor(t *testing.T) {
|
||||
router := NewRouter()
|
||||
recorder := httptest.NewRecorder()
|
||||
|
||||
req, _ := http.NewRequest("POST", "http://localhost:3000/foo", nil)
|
||||
context := New().createContext(recorder, req)
|
||||
context.MapTo(router, (*Routes)(nil))
|
||||
router.Post("/foo/bar", func() {
|
||||
})
|
||||
|
||||
router.Post("/fo", func() {
|
||||
})
|
||||
|
||||
router.Get("/foo", func() {
|
||||
})
|
||||
|
||||
router.Put("/foo", func() {
|
||||
})
|
||||
|
||||
router.NotFound(func(routes Routes, w http.ResponseWriter, r *http.Request) {
|
||||
methods := routes.MethodsFor(r.URL.Path)
|
||||
if len(methods) != 0 {
|
||||
w.Header().Set("Allow", strings.Join(methods, ","))
|
||||
w.WriteHeader(http.StatusMethodNotAllowed)
|
||||
}
|
||||
})
|
||||
router.Handle(recorder, req, context)
|
||||
expect(t, recorder.Code, http.StatusMethodNotAllowed)
|
||||
expect(t, recorder.Header().Get("Allow"), "GET,PUT")
|
||||
}
|
||||
|
||||
func Test_NotFound(t *testing.T) {
|
||||
router := NewRouter()
|
||||
recorder := httptest.NewRecorder()
|
||||
|
||||
req, _ := http.NewRequest("GET", "http://localhost:3000/foo", nil)
|
||||
context := New().createContext(recorder, req)
|
||||
|
||||
router.NotFound(func(res http.ResponseWriter) {
|
||||
http.Error(res, "Nope", http.StatusNotFound)
|
||||
})
|
||||
|
||||
router.Handle(recorder, req, context)
|
||||
expect(t, recorder.Code, http.StatusNotFound)
|
||||
expect(t, recorder.Body.String(), "Nope\n")
|
||||
}
|
||||
|
||||
func Test_NotFoundAsHandler(t *testing.T) {
|
||||
router := NewRouter()
|
||||
recorder := httptest.NewRecorder()
|
||||
|
||||
req, _ := http.NewRequest("GET", "http://localhost:3000/foo", nil)
|
||||
context := New().createContext(recorder, req)
|
||||
|
||||
router.NotFound(func() string {
|
||||
return "not found"
|
||||
})
|
||||
|
||||
router.Handle(recorder, req, context)
|
||||
expect(t, recorder.Code, http.StatusOK)
|
||||
expect(t, recorder.Body.String(), "not found")
|
||||
|
||||
recorder = httptest.NewRecorder()
|
||||
|
||||
context = New().createContext(recorder, req)
|
||||
|
||||
router.NotFound(func() (int, string) {
|
||||
return 404, "not found"
|
||||
})
|
||||
|
||||
router.Handle(recorder, req, context)
|
||||
expect(t, recorder.Code, http.StatusNotFound)
|
||||
expect(t, recorder.Body.String(), "not found")
|
||||
|
||||
recorder = httptest.NewRecorder()
|
||||
|
||||
context = New().createContext(recorder, req)
|
||||
|
||||
router.NotFound(func() (int, string) {
|
||||
return 200, ""
|
||||
})
|
||||
|
||||
router.Handle(recorder, req, context)
|
||||
expect(t, recorder.Code, http.StatusOK)
|
||||
expect(t, recorder.Body.String(), "")
|
||||
}
|
||||
|
||||
func Test_NotFoundStacking(t *testing.T) {
|
||||
router := NewRouter()
|
||||
recorder := httptest.NewRecorder()
|
||||
|
||||
req, _ := http.NewRequest("GET", "http://localhost:3000/foo", nil)
|
||||
context := New().createContext(recorder, req)
|
||||
|
||||
result := ""
|
||||
|
||||
f1 := func() {
|
||||
result += "foo"
|
||||
}
|
||||
|
||||
f2 := func(c Context) {
|
||||
result += "bar"
|
||||
c.Next()
|
||||
result += "bing"
|
||||
}
|
||||
|
||||
f3 := func() string {
|
||||
result += "bat"
|
||||
return "Not Found"
|
||||
}
|
||||
|
||||
f4 := func() {
|
||||
result += "baz"
|
||||
}
|
||||
|
||||
router.NotFound(f1, f2, f3, f4)
|
||||
|
||||
router.Handle(recorder, req, context)
|
||||
expect(t, result, "foobarbatbing")
|
||||
expect(t, recorder.Body.String(), "Not Found")
|
||||
}
|
||||
|
||||
func Test_Any(t *testing.T) {
|
||||
router := NewRouter()
|
||||
router.Any("/foo", func(res http.ResponseWriter) {
|
||||
http.Error(res, "Nope", http.StatusNotFound)
|
||||
})
|
||||
|
||||
recorder := httptest.NewRecorder()
|
||||
req, _ := http.NewRequest("GET", "http://localhost:3000/foo", nil)
|
||||
context := New().createContext(recorder, req)
|
||||
router.Handle(recorder, req, context)
|
||||
|
||||
expect(t, recorder.Code, http.StatusNotFound)
|
||||
expect(t, recorder.Body.String(), "Nope\n")
|
||||
|
||||
recorder = httptest.NewRecorder()
|
||||
req, _ = http.NewRequest("PUT", "http://localhost:3000/foo", nil)
|
||||
context = New().createContext(recorder, req)
|
||||
router.Handle(recorder, req, context)
|
||||
|
||||
expect(t, recorder.Code, http.StatusNotFound)
|
||||
expect(t, recorder.Body.String(), "Nope\n")
|
||||
}
|
||||
|
||||
func Test_URLFor(t *testing.T) {
|
||||
router := NewRouter()
|
||||
|
||||
router.Get("/foo", func() {
|
||||
// Nothing
|
||||
}).Name("foo")
|
||||
|
||||
router.Post("/bar/:id", func(params Params) {
|
||||
// Nothing
|
||||
}).Name("bar")
|
||||
|
||||
router.Get("/bar/:id/:name", func(params Params, routes Routes) {
|
||||
expect(t, routes.URLFor("foo", nil), "/foo")
|
||||
expect(t, routes.URLFor("bar", 5), "/bar/5")
|
||||
expect(t, routes.URLFor("bar_id", 5, "john"), "/bar/5/john")
|
||||
}).Name("bar_id")
|
||||
|
||||
// code should be 200 if none is returned from the handler
|
||||
recorder := httptest.NewRecorder()
|
||||
req, _ := http.NewRequest("GET", "http://localhost:3000/bar/foo/bar", nil)
|
||||
context := New().createContext(recorder, req)
|
||||
context.MapTo(router, (*Routes)(nil))
|
||||
router.Handle(recorder, req, context)
|
||||
}
|
||||
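The Test_MethodsFor case above doubles as a recipe for answering 405 Method Not Allowed instead of a plain 404. A hedged sketch of the same pattern as application code, not part of the vendored file; the `/articles` routes are invented for illustration:

~~~ go
package main

import (
	"net/http"
	"strings"

	"github.com/codegangsta/martini"
)

func main() {
	m := martini.Classic()

	m.Get("/articles", func() string { return "list" })
	m.Put("/articles", func() string { return "replace" })

	// Fall-through handler: if no route matched but the path is served under
	// other methods, answer 405 with an Allow header instead of 404.
	m.NotFound(func(routes martini.Routes, w http.ResponseWriter, r *http.Request) {
		if methods := routes.MethodsFor(r.URL.Path); len(methods) != 0 {
			w.Header().Set("Allow", strings.Join(methods, ","))
			w.WriteHeader(http.StatusMethodNotAllowed)
			return
		}
		http.NotFound(w, r)
	})

	m.Run()
}
~~~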
109
Godeps/_workspace/src/github.com/codegangsta/martini/static.go
generated
vendored
Normal file
@@ -0,0 +1,109 @@
|
||||
package martini
|
||||
|
||||
import (
|
||||
"log"
|
||||
"net/http"
|
||||
"path"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// StaticOptions is a struct for specifying configuration options for the martini.Static middleware.
|
||||
type StaticOptions struct {
|
||||
// Prefix is the optional prefix used to serve the static directory content
|
||||
Prefix string
|
||||
// SkipLogging will disable [Static] log messages when a static file is served.
|
||||
SkipLogging bool
|
||||
// IndexFile defines which file to serve as index if it exists.
|
||||
IndexFile string
|
||||
// Expires is an optional user-defined function that produces the value for the HTTP Expires header
|
||||
// https://developers.google.com/speed/docs/insights/LeverageBrowserCaching
|
||||
Expires func() string
|
||||
}
|
||||
|
||||
func prepareStaticOptions(options []StaticOptions) StaticOptions {
|
||||
var opt StaticOptions
|
||||
if len(options) > 0 {
|
||||
opt = options[0]
|
||||
}
|
||||
|
||||
// Defaults
|
||||
if len(opt.IndexFile) == 0 {
|
||||
opt.IndexFile = "index.html"
|
||||
}
|
||||
// Normalize the prefix if provided
|
||||
if opt.Prefix != "" {
|
||||
// Ensure we have a leading '/'
|
||||
if opt.Prefix[0] != '/' {
|
||||
opt.Prefix = "/" + opt.Prefix
|
||||
}
|
||||
// Remove any trailing '/'
|
||||
opt.Prefix = strings.TrimRight(opt.Prefix, "/")
|
||||
}
|
||||
return opt
|
||||
}
|
||||
|
||||
// Static returns a middleware handler that serves static files in the given directory.
|
||||
func Static(directory string, staticOpt ...StaticOptions) Handler {
|
||||
dir := http.Dir(directory)
|
||||
opt := prepareStaticOptions(staticOpt)
|
||||
|
||||
return func(res http.ResponseWriter, req *http.Request, log *log.Logger) {
|
||||
if req.Method != "GET" && req.Method != "HEAD" {
|
||||
return
|
||||
}
|
||||
file := req.URL.Path
|
||||
// if we have a prefix, filter requests by stripping the prefix
|
||||
if opt.Prefix != "" {
|
||||
if !strings.HasPrefix(file, opt.Prefix) {
|
||||
return
|
||||
}
|
||||
file = file[len(opt.Prefix):]
|
||||
if file != "" && file[0] != '/' {
|
||||
return
|
||||
}
|
||||
}
|
||||
f, err := dir.Open(file)
|
||||
if err != nil {
|
||||
// the file cannot be opened (e.g. it does not exist); pass the request on untouched
|
||||
return
|
||||
}
|
||||
defer f.Close()
|
||||
|
||||
fi, err := f.Stat()
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
// try to serve index file
|
||||
if fi.IsDir() {
|
||||
// redirect if missing trailing slash
|
||||
if !strings.HasSuffix(req.URL.Path, "/") {
|
||||
http.Redirect(res, req, req.URL.Path+"/", http.StatusFound)
|
||||
return
|
||||
}
|
||||
|
||||
file = path.Join(file, opt.IndexFile)
|
||||
f, err = dir.Open(file)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
defer f.Close()
|
||||
|
||||
fi, err = f.Stat()
|
||||
if err != nil || fi.IsDir() {
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
if !opt.SkipLogging {
|
||||
log.Println("[Static] Serving " + file)
|
||||
}
|
||||
|
||||
// Add an Expires header to the static content
|
||||
if opt.Expires != nil {
|
||||
res.Header().Set("Expires", opt.Expires())
|
||||
}
|
||||
|
||||
http.ServeContent(res, req, file, fi.ModTime(), f)
|
||||
}
|
||||
}
|
||||
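The StaticOptions fields documented above combine as follows. A minimal sketch, not part of the vendored file; the `assets` directory, the `/static` prefix, and the `index.htm` file name are assumptions for illustration:

~~~ go
package main

import "github.com/codegangsta/martini"

func main() {
	m := martini.Classic() // already serves ./public at the root

	// Serve ./assets under the /static prefix, with a custom index file and
	// without per-file "[Static] Serving ..." log lines.
	m.Use(martini.Static("assets", martini.StaticOptions{
		Prefix:      "/static",
		IndexFile:   "index.htm",
		SkipLogging: true,
	}))

	m.Run()
}
~~~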
200
Godeps/_workspace/src/github.com/codegangsta/martini/static_test.go
generated
vendored
Normal file
@@ -0,0 +1,200 @@
|
||||
package martini
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"log"
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
"testing"
|
||||
|
||||
"github.com/codegangsta/inject"
|
||||
)
|
||||
|
||||
func Test_Static(t *testing.T) {
|
||||
response := httptest.NewRecorder()
|
||||
response.Body = new(bytes.Buffer)
|
||||
|
||||
m := New()
|
||||
r := NewRouter()
|
||||
|
||||
m.Use(Static("."))
|
||||
m.Action(r.Handle)
|
||||
|
||||
req, err := http.NewRequest("GET", "http://localhost:3000/martini.go", nil)
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
m.ServeHTTP(response, req)
|
||||
expect(t, response.Code, http.StatusOK)
|
||||
expect(t, response.Header().Get("Expires"), "")
|
||||
if response.Body.Len() == 0 {
|
||||
t.Errorf("Got empty body for GET request")
|
||||
}
|
||||
}
|
||||
|
||||
func Test_Static_Head(t *testing.T) {
|
||||
response := httptest.NewRecorder()
|
||||
response.Body = new(bytes.Buffer)
|
||||
|
||||
m := New()
|
||||
r := NewRouter()
|
||||
|
||||
m.Use(Static("."))
|
||||
m.Action(r.Handle)
|
||||
|
||||
req, err := http.NewRequest("HEAD", "http://localhost:3000/martini.go", nil)
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
|
||||
m.ServeHTTP(response, req)
|
||||
expect(t, response.Code, http.StatusOK)
|
||||
if response.Body.Len() != 0 {
|
||||
t.Errorf("Got non-empty body for HEAD request")
|
||||
}
|
||||
}
|
||||
|
||||
func Test_Static_As_Post(t *testing.T) {
|
||||
response := httptest.NewRecorder()
|
||||
|
||||
m := New()
|
||||
r := NewRouter()
|
||||
|
||||
m.Use(Static("."))
|
||||
m.Action(r.Handle)
|
||||
|
||||
req, err := http.NewRequest("POST", "http://localhost:3000/martini.go", nil)
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
|
||||
m.ServeHTTP(response, req)
|
||||
expect(t, response.Code, http.StatusNotFound)
|
||||
}
|
||||
|
||||
func Test_Static_BadDir(t *testing.T) {
|
||||
response := httptest.NewRecorder()
|
||||
|
||||
m := Classic()
|
||||
|
||||
req, err := http.NewRequest("GET", "http://localhost:3000/martini.go", nil)
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
|
||||
m.ServeHTTP(response, req)
|
||||
refute(t, response.Code, http.StatusOK)
|
||||
}
|
||||
|
||||
func Test_Static_Options_Logging(t *testing.T) {
|
||||
response := httptest.NewRecorder()
|
||||
|
||||
var buffer bytes.Buffer
|
||||
m := &Martini{Injector: inject.New(), action: func() {}, logger: log.New(&buffer, "[martini] ", 0)}
|
||||
m.Map(m.logger)
|
||||
m.Map(defaultReturnHandler())
|
||||
|
||||
opt := StaticOptions{}
|
||||
m.Use(Static(".", opt))
|
||||
|
||||
req, err := http.NewRequest("GET", "http://localhost:3000/martini.go", nil)
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
|
||||
m.ServeHTTP(response, req)
|
||||
expect(t, response.Code, http.StatusOK)
|
||||
expect(t, buffer.String(), "[martini] [Static] Serving /martini.go\n")
|
||||
|
||||
// Now without logging
|
||||
m.Handlers()
|
||||
buffer.Reset()
|
||||
|
||||
// This should disable logging
|
||||
opt.SkipLogging = true
|
||||
m.Use(Static(".", opt))
|
||||
|
||||
m.ServeHTTP(response, req)
|
||||
expect(t, response.Code, http.StatusOK)
|
||||
expect(t, buffer.String(), "")
|
||||
}
|
||||
|
||||
func Test_Static_Options_ServeIndex(t *testing.T) {
|
||||
response := httptest.NewRecorder()
|
||||
|
||||
var buffer bytes.Buffer
|
||||
m := &Martini{Injector: inject.New(), action: func() {}, logger: log.New(&buffer, "[martini] ", 0)}
|
||||
m.Map(m.logger)
|
||||
m.Map(defaultReturnHandler())
|
||||
|
||||
opt := StaticOptions{IndexFile: "martini.go"} // Define martini.go as index file
|
||||
m.Use(Static(".", opt))
|
||||
|
||||
req, err := http.NewRequest("GET", "http://localhost:3000/", nil)
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
|
||||
m.ServeHTTP(response, req)
|
||||
expect(t, response.Code, http.StatusOK)
|
||||
expect(t, buffer.String(), "[martini] [Static] Serving /martini.go\n")
|
||||
}
|
||||
|
||||
func Test_Static_Options_Prefix(t *testing.T) {
|
||||
response := httptest.NewRecorder()
|
||||
|
||||
var buffer bytes.Buffer
|
||||
m := &Martini{Injector: inject.New(), action: func() {}, logger: log.New(&buffer, "[martini] ", 0)}
|
||||
m.Map(m.logger)
|
||||
m.Map(defaultReturnHandler())
|
||||
|
||||
// Serve current directory under /public
|
||||
m.Use(Static(".", StaticOptions{Prefix: "/public"}))
|
||||
|
||||
// Check file content behaviour
|
||||
req, err := http.NewRequest("GET", "http://localhost:3000/public/martini.go", nil)
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
|
||||
m.ServeHTTP(response, req)
|
||||
expect(t, response.Code, http.StatusOK)
|
||||
expect(t, buffer.String(), "[martini] [Static] Serving /martini.go\n")
|
||||
}
|
||||
|
||||
func Test_Static_Options_Expires(t *testing.T) {
|
||||
response := httptest.NewRecorder()
|
||||
|
||||
var buffer bytes.Buffer
|
||||
m := &Martini{Injector: inject.New(), action: func() {}, logger: log.New(&buffer, "[martini] ", 0)}
|
||||
m.Map(m.logger)
|
||||
m.Map(defaultReturnHandler())
|
||||
|
||||
// Serve current directory under /public
|
||||
m.Use(Static(".", StaticOptions{Expires: func() string { return "46" }}))
|
||||
|
||||
// Check file content behaviour
|
||||
req, err := http.NewRequest("GET", "http://localhost:3000/martini.go", nil)
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
|
||||
m.ServeHTTP(response, req)
|
||||
expect(t, response.Header().Get("Expires"), "46")
|
||||
}
|
||||
|
||||
func Test_Static_Redirect(t *testing.T) {
|
||||
response := httptest.NewRecorder()
|
||||
|
||||
m := New()
|
||||
m.Use(Static(".", StaticOptions{Prefix: "/public"}))
|
||||
|
||||
req, err := http.NewRequest("GET", "http://localhost:3000/public", nil)
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
|
||||
m.ServeHTTP(response, req)
|
||||
expect(t, response.Code, http.StatusFound)
|
||||
expect(t, response.Header().Get("Location"), "/public/")
|
||||
}
|
||||
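Test_Static_Options_Expires above returns an arbitrary string ("46"), which shows that the callback's return value is written into the Expires header verbatim. One plausible way to produce a real HTTP date, assuming a fixed 24-hour cache window (an assumption for illustration, not something the library prescribes):

~~~ go
package main

import (
	"net/http"
	"time"

	"github.com/codegangsta/martini"
)

func main() {
	m := martini.Classic()

	// Expires is called once per served file; format the value as an RFC 1123
	// HTTP date one day in the future.
	m.Use(martini.Static("assets", martini.StaticOptions{
		Expires: func() string {
			return time.Now().Add(24 * time.Hour).UTC().Format(http.TimeFormat)
		},
	}))

	m.Run()
}
~~~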
311
Godeps/_workspace/src/github.com/codegangsta/martini/translations/README_zh_cn.md
generated
vendored
Normal file
@@ -0,0 +1,311 @@
|
||||
# Martini [](https://app.wercker.com/project/bykey/174bef7e3c999e103cacfe2770102266) [](http://godoc.org/github.com/codegangsta/martini)

Martini is a powerful Go framework for writing modular web applications.

## Your First App

After installing Go and setting up your [GOPATH](http://golang.org/doc/code.html#GOPATH), create your own `.go` file; here we assume it is called `server.go`.

~~~ go
package main

import "github.com/codegangsta/martini"

func main() {
  m := martini.Classic()
  m.Get("/", func() string {
    return "Hello world!"
  })
  m.Run()
}
~~~

Then install the Martini package (note that Martini requires Go 1.1 or higher):
~~~
go get github.com/codegangsta/martini
~~~

Finally, run your server:
~~~
go run server.go
~~~

You will now have a Martini server listening on `localhost:3000`.

## Getting Help

Join the [mailing list](https://groups.google.com/forum/#!forum/martini-go)

Or watch the [demo video](http://martini.codegangsta.io/#demo)

## Features
* Extremely simple to use.
* Non-intrusive design.
* Plays nicely with other Go packages.
* Awesome path matching and routing.
* Modular design - easy to add functionality, easy to rip stuff out.
* Lots of middleware ready to use.
* Great 'out of the box' feature set.
* **Fully compatible with the [http.HandlerFunc](http://godoc.org/net/http#HandlerFunc) interface.**

## More Middleware
For more middleware and functionality, check out the [martini-contrib](https://github.com/martini-contrib) repositories.

## Table of Contents
* [Classic Martini](#classic-martini)
* [Handlers](#handlers)
* [Routing](#routing)
* [Services](#services)
* [Serving Static Files](#serving-static-files)
* [Middleware Handlers](#middleware-handlers)
* [Next()](#next)
* [FAQ](#faq)

## Classic Martini
To get up and running quickly, [martini.Classic()](http://godoc.org/github.com/codegangsta/martini#Classic) provides some sensible defaults for web development:
~~~ go
m := martini.Classic()
// ... middleware and routing goes here
m.Run()
~~~

Below is some of the functionality [martini.Classic()](http://godoc.org/github.com/codegangsta/martini#Classic) already pulls in:
* Request/Response Logging - [martini.Logger](http://godoc.org/github.com/codegangsta/martini#Logger)
* Panic Recovery - [martini.Recovery](http://godoc.org/github.com/codegangsta/martini#Recovery)
* Static File serving - [martini.Static](http://godoc.org/github.com/codegangsta/martini#Static)
* Routing - [martini.Router](http://godoc.org/github.com/codegangsta/martini#Router)

### Handlers
Handlers are the heart and soul of Martini. A handler is basically any kind of function:
~~~ go
m.Get("/", func() {
  println("hello world")
})
~~~

#### Return Values
If a handler returns something, Martini will write the result to the current [http.ResponseWriter](http://godoc.org/net/http#ResponseWriter) as a string:
~~~ go
m.Get("/", func() string {
  return "hello world" // HTTP 200 : "hello world"
})
~~~

You can also optionally return a status code:
~~~ go
m.Get("/", func() (int, string) {
  return 418, "i'm a teapot" // HTTP 418 : "i'm a teapot"
})
~~~

#### Service Injection
Handlers are invoked via reflection. Martini uses *Dependency Injection* to resolve a handler's argument list. **This makes Martini completely compatible with Go's `http.HandlerFunc` interface.**

If you add an argument to your handler, Martini will search its list of services and resolve the dependency by type:
~~~ go
m.Get("/", func(res http.ResponseWriter, req *http.Request) { // res and req are injected by Martini
  res.WriteHeader(200) // HTTP 200
})
~~~

The following services are included with [martini.Classic()](http://godoc.org/github.com/codegangsta/martini#Classic):
* [*log.Logger](http://godoc.org/log#Logger) - Global logger for Martini.
* [martini.Context](http://godoc.org/github.com/codegangsta/martini#Context) - http request context.
* [martini.Params](http://godoc.org/github.com/codegangsta/martini#Params) - `map[string]string` of named params found by route matching.
* [martini.Routes](http://godoc.org/github.com/codegangsta/martini#Routes) - Route helper service.
* [http.ResponseWriter](http://godoc.org/net/http/#ResponseWriter) - http Response writer interface.
* [*http.Request](http://godoc.org/net/http/#Request) - http Request.

### Routing
In Martini, a route is an HTTP method paired with a URL-matching pattern. Each route can take one or more handler methods:
~~~ go
m.Get("/", func() {
  // show something
})

m.Patch("/", func() {
  // update something
})

m.Post("/", func() {
  // create something
})

m.Put("/", func() {
  // replace something
})

m.Delete("/", func() {
  // destroy something
})

m.Options("/", func() {
  // http options
})

m.NotFound(func() {
  // handle 404
})
~~~

Routes are matched in the order they are defined. The first route that matches the request is invoked.

Route patterns may include named parameters, accessible via the [martini.Params](http://godoc.org/github.com/codegangsta/martini#Params) service:
~~~ go
m.Get("/hello/:name", func(params martini.Params) string {
  return "Hello " + params["name"]
})
~~~

Routes can also be matched with globs or regular expressions:
~~~ go
m.Get("/hello/**", func(params martini.Params) string {
  return "Hello " + params["_1"]
})
~~~

Route handlers can be stacked on top of each other, which is useful for things like authentication and authorization:
~~~ go
m.Get("/secret", authorize, func() {
  // this will execute as long as authorize doesn't write a response
})
~~~

### Services
Services are the objects injected into a handler's argument list. You can map a service at the *global* or *request* level.


#### Global Mapping
Since a Martini instance implements the inject.Injector interface, mapping a service is easy:
~~~ go
db := &MyDatabase{}
m := martini.Classic()
m.Map(db) // the *MyDatabase service will be available to all handlers
// ...
m.Run()
~~~

#### Request-Level Mapping
Mapping at the request level can be done in a handler via [martini.Context](http://godoc.org/github.com/codegangsta/martini#Context):
~~~ go
func MyCustomLoggerHandler(c martini.Context, req *http.Request) {
  logger := &MyCustomLogger{req}
  c.Map(logger) // mapped as *MyCustomLogger
}
~~~

#### Mapping Values to Interfaces
One of the most powerful parts of services is the ability to map a service to an interface. For instance, if you wanted to override [http.ResponseWriter](http://godoc.org/net/http#ResponseWriter) with an object that wraps it and adds extra operations, you can write the following handler:
~~~ go
func WrapResponseWriter(res http.ResponseWriter, c martini.Context) {
  rw := NewSpecialResponseWriter(res)
  c.MapTo(rw, (*http.ResponseWriter)(nil)) // override ResponseWriter with our wrapped ResponseWriter
}
~~~

### Serving Static Files
A [martini.Classic()](http://godoc.org/github.com/codegangsta/martini#Classic) instance automatically serves static files from the "public" directory in the root of your server.
You can serve more directories by adding more [martini.Static](http://godoc.org/github.com/codegangsta/martini#Static) handlers.
~~~ go
m.Use(martini.Static("assets")) // serve static files from the "assets" directory as well
~~~

## Middleware Handlers
Middleware handlers sit between the incoming http request and the router. In essence they are no different from any other handler in Martini. You can add a middleware handler to the stack like so:
~~~ go
m.Use(func() {
  // do some middleware stuff
})
~~~

You can have full control over the middleware stack with the `Handlers` function. This will replace any handlers that have been previously set:
~~~ go
m.Handlers(
  Middleware1,
  Middleware2,
  Middleware3,
)
~~~

Middleware handlers work really well for things like logging, authorization, authentication, sessions, error pages, and any other operation that must happen before or after an http request:

~~~ go
// validate an api key
m.Use(func(res http.ResponseWriter, req *http.Request) {
  if req.Header.Get("X-API-KEY") != "secret123" {
    res.WriteHeader(http.StatusUnauthorized)
  }
})
~~~

### Next()
[Context.Next()](http://godoc.org/github.com/codegangsta/martini#Context) is an optional function that a middleware handler can call to yield until all other handlers have run. This works really well for operations that must happen after the http request:
~~~ go
// log before and after a request (translator's note: quite clever!)
m.Use(func(c martini.Context, log *log.Logger){
  log.Println("before a request")

  c.Next()

  log.Println("after a request")
})
~~~

## FAQ

### Where can I find middleware resources?

Start by looking at the [martini-contrib](https://github.com/martini-contrib) projects. If what you need is not there, contact a martini-contrib team member about adding a new repo to the organization.

* [auth](https://github.com/martini-contrib/auth) - Handlers for authentication.
* [binding](https://github.com/martini-contrib/binding) - Handler for mapping/validating a raw request into a structure.
* [gzip](https://github.com/martini-contrib/gzip) - Handler for adding gzip compression to requests.
* [render](https://github.com/martini-contrib/render) - Handler for rendering JSON and HTML templates.
* [acceptlang](https://github.com/martini-contrib/acceptlang) - Handler for parsing the `Accept-Language` HTTP header.
* [sessions](https://github.com/martini-contrib/sessions) - Handler that provides a Session service.
* [strip](https://github.com/martini-contrib/strip) - URL Prefix stripping.
* [method](https://github.com/martini-contrib/method) - HTTP method overriding via Header or form fields.
* [secure](https://github.com/martini-contrib/secure) - Implements a few quick security wins.
* [encoder](https://github.com/martini-contrib/encoder) - Encoder service for rendering data in several formats and content negotiation.

### How do I integrate with my existing server?

Since Martini implements `http.Handler`, it can easily serve a subtree of an existing Go server. For example, here is a working app for Google App Engine:

~~~ go
package hello

import (
  "net/http"
  "github.com/codegangsta/martini"
)

func init() {
  m := martini.Classic()
  m.Get("/", func() string {
    return "Hello world!"
  })
  http.Handle("/", m)
}
~~~

### How do I change the port/host?

Martini's `Run` function looks for the PORT and HOST environment variables and uses those. Otherwise Martini defaults to localhost:3000.
For full control over the port and host, use the `http.ListenAndServe` function instead.

~~~ go
m := martini.Classic()
// ...
http.ListenAndServe(":8080", m)
~~~

## Contributing
Martini is meant to be kept tiny and clean. Most contributions should end up as a project in the [martini-contrib](https://github.com/martini-contrib) organization. If you have a contribution for Martini's core, feel free to open a Pull Request.

## About

Inspired by [express](https://github.com/visionmedia/express) and [sinatra](https://github.com/sinatra/sinatra)

Martini was written by [Code Gangsta](http://codegangsta.io/)
Translated by [Leon](http://github.com/leonli)
|
||||
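The routing section above does not mention route groups, but this vendored copy supports them (see the Group test and the group handling in addRoute: the group's handlers are prepended to each route's handlers). A minimal sketch, not part of the vendored file, with the `/api` prefix and the handlers invented for illustration:

~~~ go
package main

import "github.com/codegangsta/martini"

func main() {
	m := martini.Classic()

	// Group mounts a set of routes under a shared prefix; the trailing
	// handlers run before every route in the group (e.g. for auth checks).
	m.Group("/api", func(r martini.Router) {
		r.Get("/articles", func() string { return "list" })
		r.Post("/articles", func() string { return "create" })
	}, func() {
		// runs before each /api route; a real app might verify credentials here
	})

	m.Run()
}
~~~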
1
Godeps/_workspace/src/github.com/codegangsta/martini/wercker.yml
generated
vendored
Normal file
@@ -0,0 +1 @@
|
||||
box: wercker/golang@1.1.1
|
||||
14
Godeps/_workspace/src/github.com/golang/snappy/AUTHORS
generated
vendored
@@ -1,14 +0,0 @@
|
||||
# This is the official list of Snappy-Go authors for copyright purposes.
|
||||
# This file is distinct from the CONTRIBUTORS files.
|
||||
# See the latter for an explanation.
|
||||
|
||||
# Names should be added to this file as
|
||||
# Name or Organization <email address>
|
||||
# The email address is not required for organizations.
|
||||
|
||||
# Please keep the list sorted.
|
||||
|
||||
Damian Gryski <dgryski@gmail.com>
|
||||
Google Inc.
|
||||
Jan Mercl <0xjnml@gmail.com>
|
||||
Sebastien Binet <seb.binet@gmail.com>
|
||||
Some files were not shown because too many files have changed in this diff.