mirror of
https://github.com/syncthing/syncthing.git
synced 2025-12-31 09:59:14 -05:00
Compare commits
154 Commits
| Author | SHA1 | Date | |
|---|---|---|---|
|
|
2df78a9313 | ||
|
|
d2d32f26c7 | ||
|
|
473df1bd19 | ||
|
|
880f417ae3 | ||
|
|
043fa7f489 | ||
|
|
9b0768a71b | ||
|
|
446b21c568 | ||
|
|
b3c2ffc96a | ||
|
|
b5f652a815 | ||
|
|
9ec7de643e | ||
|
|
2553ba0463 | ||
|
|
52ee7d5724 | ||
|
|
d4ef6a6285 | ||
|
|
56b7d3c28d | ||
|
|
ae94b726a7 | ||
|
|
a88e4db1ee | ||
|
|
0ebd4a6ba1 | ||
|
|
1448cfe66a | ||
|
|
d6c9afd07f | ||
|
|
799f55e7ae | ||
|
|
04a3db132f | ||
|
|
d06204959e | ||
|
|
2d0600de38 | ||
|
|
6a1c055288 | ||
|
|
b9ec30ebdb | ||
|
|
428164f395 | ||
|
|
ba59e0d3f0 | ||
|
|
5d8f0f835e | ||
|
|
b4a1aadd1b | ||
|
|
8f41d90ab1 | ||
|
|
9743386166 | ||
|
|
0afcb5b7e7 | ||
|
|
043dea760f | ||
|
|
0618e2b9b4 | ||
|
|
3c171d281c | ||
|
|
c217b7cd22 | ||
|
|
23593c3d20 | ||
|
|
192117dc11 | ||
|
|
24b8f9211a | ||
|
|
51788d6f0e | ||
|
|
ea0bed2238 | ||
|
|
e2fe57c440 | ||
|
|
434a0ccf2a | ||
|
|
e7bf3ac108 | ||
|
|
c5bdaebf2b | ||
|
|
645233e7dc | ||
|
|
c6e396e8fb | ||
|
|
a57e2b358f | ||
|
|
d0863d495c | ||
|
|
5837277f8d | ||
|
|
87d473dc8f | ||
|
|
9744629c4b | ||
|
|
8f0a015abf | ||
|
|
f89fa6caed | ||
|
|
21a7f3960a | ||
|
|
9f63feef30 | ||
|
|
c171780c0d | ||
|
|
5daf6ecf70 | ||
|
|
6c8135126d | ||
|
|
91d5c4a1ae | ||
|
|
2cbe81f1c7 | ||
|
|
a26ce61d92 | ||
|
|
478300f6d8 | ||
|
|
3a5b816125 | ||
|
|
b6814241cc | ||
|
|
fc6eabea28 | ||
|
|
14b3791b2b | ||
|
|
e6b29988e5 | ||
|
|
3cb7b8f22b | ||
|
|
2297e29502 | ||
|
|
ea41acfff5 | ||
|
|
1aefc50e35 | ||
|
|
9bd4fa5008 | ||
|
|
89c2f61b30 | ||
|
|
a1d575894a | ||
|
|
71def3a970 | ||
|
|
13854250b3 | ||
|
|
e6078f9449 | ||
|
|
5980952495 | ||
|
|
618c376e18 | ||
|
|
d31a126408 | ||
|
|
6d3f8a2c06 | ||
|
|
b1ba976122 | ||
|
|
81d5d1d4a6 | ||
|
|
ea5ef28c5a | ||
|
|
fc2ebc6cad | ||
|
|
01096fff6c | ||
|
|
2ea3558283 | ||
|
|
20a47695fb | ||
|
|
1dde9ec2d8 | ||
|
|
0841a46055 | ||
|
|
84c0749d20 | ||
|
|
6b02f9e44f | ||
|
|
84d7452f9e | ||
|
|
9b449cb527 | ||
|
|
d9ffd359e2 | ||
|
|
b67443eb40 | ||
|
|
4ac204b604 | ||
|
|
fff50b5472 | ||
|
|
8d5aed410f | ||
|
|
ba0e4ded65 | ||
|
|
f0b18685a5 | ||
|
|
fc2b557ae6 | ||
|
|
af399ae9f3 | ||
|
|
45fcf4bc84 | ||
|
|
55f61ccb5e | ||
|
|
b601fc5627 | ||
|
|
832c0ffad0 | ||
|
|
cb33f27f23 | ||
|
|
92dee7c082 | ||
|
|
b9af45bc6b | ||
|
|
a18f6c6d90 | ||
|
|
6e11e3cda9 | ||
|
|
2935aebe53 | ||
|
|
71f78f0d62 | ||
|
|
3e1194e5ff | ||
|
|
6d64992e64 | ||
|
|
211180108e | ||
|
|
17e78d6f7e | ||
|
|
1ef86379fb | ||
|
|
884a7d6a1b | ||
|
|
334961fe10 | ||
|
|
2cfb24892f | ||
|
|
d4fe1400d2 | ||
|
|
69ef4d261d | ||
|
|
91c102e4fe | ||
|
|
b4db177045 | ||
|
|
340c9095dd | ||
|
|
e3bc33dc88 | ||
|
|
eebc145055 | ||
|
|
92b01fa48a | ||
|
|
2a0d1ab294 | ||
|
|
2bdab426ff | ||
|
|
e769de9986 | ||
|
|
4b11e66914 | ||
|
|
28d3936a3c | ||
|
|
986b15573a | ||
|
|
46d828e349 | ||
|
|
48603a1619 | ||
|
|
17d5f2bbfc | ||
|
|
b64af73607 | ||
|
|
c9cce9613e | ||
|
|
1392905d63 | ||
|
|
271d7eedc4 | ||
|
|
ab8482a424 | ||
|
|
c8a14d1c3d | ||
|
|
8974c33f2f | ||
|
|
ed675a61d7 | ||
|
|
60b00af0bb | ||
|
|
0ceddc4fa3 | ||
|
|
8c1996f7e5 | ||
|
|
6679c84cfb | ||
|
|
7b6f43cbb5 | ||
|
|
c124989163 |
3
.gitignore
vendored
3
.gitignore
vendored
@@ -1,3 +1,4 @@
|
||||
syncthing
|
||||
syncthing.exe
|
||||
*.tar.gz
|
||||
build
|
||||
*.zip
|
||||
|
||||
22
CONTRIBUTING.md
Normal file
22
CONTRIBUTING.md
Normal file
@@ -0,0 +1,22 @@
|
||||
Please do contribute!
|
||||
|
||||
## Building
|
||||
|
||||
[See the wiki](https://github.com/calmh/syncthing/wiki/Building)
|
||||
|
||||
## Tests
|
||||
|
||||
Yes please!
|
||||
|
||||
## Style
|
||||
|
||||
`go fmt`
|
||||
|
||||
## Documentation
|
||||
|
||||
[Hack it here](https://github.com/calmh/syncthing/wiki)
|
||||
|
||||
## License
|
||||
|
||||
MIT
|
||||
|
||||
2
LICENSE
2
LICENSE
@@ -1,4 +1,4 @@
|
||||
Copyright (C) 2013 Jakob Borg
|
||||
Copyright (C) 2013-2014 Jakob Borg
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy of
|
||||
this software and associated documentation files (the "Software"), to deal in
|
||||
|
||||
173
README.md
173
README.md
@@ -1,18 +1,18 @@
|
||||
syncthing
|
||||
syncthing [](https://drone.io/github.com/calmh/syncthing/latest)
|
||||
=========
|
||||
|
||||
This is `syncthing`, an open BitTorrent Sync alternative. It is
|
||||
currently far from ready for mass consumption, but it is a usable proof
|
||||
of concept and tech demo. The following are the project goals:
|
||||
This is the `syncthing` project. The following are the project goals:
|
||||
|
||||
1. Define an open, secure, language neutral protocol usable for
|
||||
efficient synchronization of a file repository between an arbitrary
|
||||
number of nodes. This is the [Block Exchange
|
||||
Protocol](https://github.com/calmh/syncthing/blob/master/protocol/PROTOCOL.md)
|
||||
(BEP).
|
||||
1. Define a protocol for synchronization of a file repository between a
|
||||
number of collaborating nodes. The protocol should be well defined,
|
||||
unambigous, easily understood, free to use, efficient, secure and
|
||||
languange neutral. This is the [Block Exchange
|
||||
Protocol](https://github.com/calmh/syncthing/blob/master/protocol/PROTOCOL.md).
|
||||
|
||||
2. Provide the reference implementation to demonstrate the usability of
|
||||
said protocol. This is the `syncthing` utility.
|
||||
said protocol. This is the `syncthing` utility. It is the hope that
|
||||
alternative, compatible implementations of the protocol will come to
|
||||
exist.
|
||||
|
||||
The two are evolving together; the protocol is not to be considered
|
||||
stable until syncthing 1.0 is released, at which point it is locked down
|
||||
@@ -25,153 +25,18 @@ making sure large swarms of selfish agents behave and somehow work
|
||||
towards a common goal. Here we have a much smaller swarm of cooperative
|
||||
agents and a simpler approach will suffice.
|
||||
|
||||
Features
|
||||
--------
|
||||
Documentation
|
||||
=============
|
||||
|
||||
The following features are _currently implemented and working_:
|
||||
|
||||
* The formation of a cluster of nodes, certificate authenticated and
|
||||
communicating over TLS over TCP.
|
||||
|
||||
* Synchronization of a single directory among the cluster nodes.
|
||||
|
||||
* Change detection by periodic scanning of the local repository.
|
||||
|
||||
* Static configuration of cluster nodes.
|
||||
|
||||
* Automatic discovery of cluster nodes. See [discover.go][discover.go]
|
||||
for the protocol specification. Discovery on the LAN is performed by
|
||||
broadcasts, Internet wide discovery is performed with the assistance
|
||||
of a global server.
|
||||
|
||||
* Handling of deleted files. Deletes can be propagated or ignored per
|
||||
client.
|
||||
|
||||
* Synchronizing multiple unrelated directory trees by following
|
||||
symlinks directly below the repository level.
|
||||
|
||||
The following features are _not yet implemented but planned_:
|
||||
|
||||
* Change detection by listening to file system notifications instead of
|
||||
periodic scanning.
|
||||
|
||||
* HTTP GUI.
|
||||
|
||||
The following features are _not implemented but may be implemented_ in
|
||||
the future:
|
||||
|
||||
* Syncing multiple directories from the same syncthing instance.
|
||||
|
||||
* Automatic NAT handling via UPNP.
|
||||
|
||||
* Conflict resolution. Currently whichever file has the newest
|
||||
modification time "wins". The correct behavior in the face of
|
||||
conflicts is open for discussion.
|
||||
|
||||
[discover.go]: (https://github.com/calmh/syncthing/blob/master/discover/discover.go
|
||||
|
||||
Security
|
||||
--------
|
||||
|
||||
Security is one of the primary project goals. This means that it should
|
||||
not be possible for an attacker to join a cluster uninvited, and it
|
||||
should not be possible to extract private information from intercepted
|
||||
traffic. Currently this is implemented as follows.
|
||||
|
||||
All traffic is protected by TLS. To prevent uninvited nodes from joining
|
||||
a cluster, the certificate fingerprint of each node is compared to a
|
||||
preset list of acceptable nodes at connection establishment. The
|
||||
fingerprint is computed as the SHA-1 hash of the certificate and
|
||||
displayed in BASE32 encoding to form a compact yet convenient string.
|
||||
Currently SHA-1 is deemed secure against preimage attacks.
|
||||
|
||||
Installing
|
||||
==========
|
||||
|
||||
Download the appropriate precompiled binary from the
|
||||
[releases](https://github.com/calmh/syncthing/releases) page. Untar and
|
||||
put the `syncthing` binary somewhere convenient in your `$PATH`.
|
||||
|
||||
If you are a developer and have Go 1.2 installed you can also install
|
||||
the latest version from source:
|
||||
|
||||
`go get github.com/calmh/syncthing`
|
||||
|
||||
Usage
|
||||
=====
|
||||
|
||||
Check out the options:
|
||||
|
||||
```
|
||||
$ syncthing --help
|
||||
Usage:
|
||||
syncthing [options]
|
||||
|
||||
...
|
||||
```
|
||||
|
||||
Run syncthing to let it create it's config directory and certificate:
|
||||
|
||||
```
|
||||
$ syncthing
|
||||
11:34:13 main.go:85: INFO: Version v0.1-40-gbb0fd87
|
||||
11:34:13 tls.go:61: OK: Created TLS certificate file
|
||||
11:34:13 tls.go:67: OK: Created TLS key file
|
||||
11:34:13 main.go:66: INFO: My ID: NCTBZAAHXR6ZZP3D7SL3DLYFFQERMW4Q
|
||||
11:34:13 main.go:90: FATAL: No config file
|
||||
```
|
||||
|
||||
Take note of the "My ID: ..." line. Perform the same operation on
|
||||
another computer to create another node. Take note of that ID as well,
|
||||
and create a config file `~/.syncthing/syncthing.ini` looking something
|
||||
like this:
|
||||
|
||||
```
|
||||
[repository]
|
||||
dir = /Users/jb/Synced
|
||||
|
||||
[nodes]
|
||||
NCTBZAAHXR6ZZP3D7SL3DLYFFQERMW4Q = 172.16.32.1:22000 192.23.34.56:22000
|
||||
CUGAE43Y5N64CRJU26YFH6MTWPSBLSUL = dynamic
|
||||
```
|
||||
|
||||
This assumes that the first node is reachable on either of the two
|
||||
addresses listed (perhaps one internal and one port-forwarded external)
|
||||
and that the other node is not normally reachable from the outside. Save
|
||||
this config file, identically, to both nodes.
|
||||
|
||||
If the nodes are running on the same network, or reachable on port 22000
|
||||
from the outside world, you can set all addresses to "dynamic" and they
|
||||
will find each other using automatic discovery. (This discovery,
|
||||
including port numbers, can be tweaked or disabled using command line
|
||||
options.)
|
||||
|
||||
Start syncthing on both nodes. For the cautious, one side can be set to
|
||||
be read only.
|
||||
|
||||
```
|
||||
$ syncthing --ro
|
||||
13:30:55 main.go:85: INFO: Version v0.1-40-gbb0fd87
|
||||
13:30:55 main.go:102: INFO: My ID: NCTBZAAHXR6ZZP3D7SL3DLYFFQERMW4Q
|
||||
13:30:55 main.go:149: INFO: Initial repository scan in progress
|
||||
13:30:59 main.go:153: INFO: Listening for incoming connections
|
||||
13:30:59 main.go:157: INFO: Attempting to connect to other nodes
|
||||
13:30:59 main.go:247: INFO: Starting local discovery
|
||||
13:30:59 main.go:165: OK: Ready to synchronize
|
||||
13:31:04 discover.go:113: INFO: Discovered node CUGAE43Y5N64CRJU26YFH6MTWPSBLSUL at 172.16.32.24:22000
|
||||
13:31:14 main.go:296: INFO: Connected to node CUGAE43Y5N64CRJU26YFH6MTWPSBLSUL
|
||||
13:31:19 main.go:345: INFO: Transferred 139 KiB in (14 KiB/s), 139 KiB out (14 KiB/s)
|
||||
13:32:20 model.go:94: INFO: CUGAE43Y5N64CRJU26YFH6MTWPSBLSUL: 263.4 KB/s in, 69.1 KB/s out
|
||||
13:32:20 model.go:104: INFO: 18289 files, 24.24 GB in cluster
|
||||
13:32:20 model.go:111: INFO: 17132 files, 22.39 GB in local repo
|
||||
13:32:20 model.go:117: INFO: 1157 files, 1.84 GB to synchronize
|
||||
...
|
||||
```
|
||||
You should see the synchronization start and then finish a short while
|
||||
later. Add nodes to taste.
|
||||
The syncthing documentation is kept on the
|
||||
[GitHub Wiki](https://github.com/calmh/syncthing/wiki).
|
||||
|
||||
License
|
||||
=======
|
||||
|
||||
MIT
|
||||
All documentation and protocol specifications are licensed
|
||||
under the [Creative Commons Attribution 4.0 International
|
||||
License](http://creativecommons.org/licenses/by/4.0/).
|
||||
|
||||
All code is licensed under the [MIT
|
||||
License](https://github.com/calmh/syncthing/blob/master/LICENSE).
|
||||
|
||||
27
assets.sh
Executable file
27
assets.sh
Executable file
@@ -0,0 +1,27 @@
|
||||
#!/bin/bash
|
||||
|
||||
cat <<EOT
|
||||
package auto
|
||||
|
||||
import "compress/gzip"
|
||||
import "bytes"
|
||||
import "io/ioutil"
|
||||
|
||||
var Assets = make(map[string][]byte)
|
||||
|
||||
func init() {
|
||||
var data []byte
|
||||
var gr *gzip.Reader
|
||||
EOT
|
||||
|
||||
cd gui
|
||||
for f in $(find . -type f) ; do
|
||||
f="${f#./}"
|
||||
echo "gr, _ = gzip.NewReader(bytes.NewBuffer([]byte{"
|
||||
gzip -n -c $f | od -vt x1 | sed 's/^[0-9a-f]*//' | sed 's/\([0-9a-f][0-9a-f]\)/0x\1,/g'
|
||||
echo "}))"
|
||||
echo "data, _ = ioutil.ReadAll(gr)"
|
||||
echo "Assets[\"$f\"] = data"
|
||||
done
|
||||
echo "}"
|
||||
|
||||
BIN
assets/st-logo.pxm
Normal file
BIN
assets/st-logo.pxm
Normal file
Binary file not shown.
2
auto/doc.go
Normal file
2
auto/doc.go
Normal file
@@ -0,0 +1,2 @@
|
||||
// Package auto contains auto generated files for web assets.
|
||||
package auto
|
||||
12927
auto/gui.files.go
Normal file
12927
auto/gui.files.go
Normal file
File diff suppressed because it is too large
Load Diff
67
blocks.go
67
blocks.go
@@ -1,67 +0,0 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"crypto/sha256"
|
||||
"io"
|
||||
)
|
||||
|
||||
type BlockList []Block
|
||||
|
||||
type Block struct {
|
||||
Offset uint64
|
||||
Length uint32
|
||||
Hash []byte
|
||||
}
|
||||
|
||||
// Blocks returns the blockwise hash of the reader.
|
||||
func Blocks(r io.Reader, blocksize int) (BlockList, error) {
|
||||
var blocks BlockList
|
||||
var offset uint64
|
||||
for {
|
||||
lr := &io.LimitedReader{r, int64(blocksize)}
|
||||
hf := sha256.New()
|
||||
n, err := io.Copy(hf, lr)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if n == 0 {
|
||||
break
|
||||
}
|
||||
|
||||
b := Block{
|
||||
Offset: offset,
|
||||
Length: uint32(n),
|
||||
Hash: hf.Sum(nil),
|
||||
}
|
||||
blocks = append(blocks, b)
|
||||
offset += uint64(n)
|
||||
}
|
||||
|
||||
return blocks, nil
|
||||
}
|
||||
|
||||
// To returns the list of blocks necessary to transform src into dst.
|
||||
// Both block lists must have been created with the same block size.
|
||||
func (src BlockList) To(tgt BlockList) (have, need BlockList) {
|
||||
if len(tgt) == 0 && len(src) != 0 {
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
if len(tgt) != 0 && len(src) == 0 {
|
||||
// Copy the entire file
|
||||
return nil, tgt
|
||||
}
|
||||
|
||||
for i := range tgt {
|
||||
if i >= len(src) || bytes.Compare(tgt[i].Hash, src[i].Hash) != 0 {
|
||||
// Copy differing block
|
||||
need = append(need, tgt[i])
|
||||
} else {
|
||||
have = append(have, tgt[i])
|
||||
}
|
||||
}
|
||||
|
||||
return have, need
|
||||
}
|
||||
@@ -1,3 +1,4 @@
|
||||
// Package buffers manages a set of reusable byte buffers.
|
||||
package buffers
|
||||
|
||||
const (
|
||||
|
||||
116
build.sh
116
build.sh
@@ -1,39 +1,91 @@
|
||||
#!/bin/bash
|
||||
|
||||
export COPYFILE_DISABLE=true
|
||||
|
||||
distFiles=(README.md LICENSE) # apart from the binary itself
|
||||
version=$(git describe --always)
|
||||
|
||||
go test ./... || exit 1
|
||||
build() {
|
||||
go build -ldflags "-w -X main.Version $version" ./cmd/syncthing
|
||||
}
|
||||
|
||||
rm -rf build
|
||||
mkdir -p build || exit 1
|
||||
prepare() {
|
||||
./assets.sh | gofmt > auto/gui.files.go
|
||||
go get -d
|
||||
}
|
||||
|
||||
for goos in darwin linux freebsd ; do
|
||||
for goarch in amd64 386 ; do
|
||||
echo "$goos-$goarch"
|
||||
export GOOS="$goos"
|
||||
export GOARCH="$goarch"
|
||||
export name="syncthing-$goos-$goarch"
|
||||
go build -ldflags "-X main.Version $version" \
|
||||
&& mkdir -p "$name" \
|
||||
&& cp syncthing "build/$name" \
|
||||
&& cp README.md LICENSE "$name" \
|
||||
&& mv syncthing "$name" \
|
||||
&& tar zcf "build/$name.tar.gz" "$name" \
|
||||
&& rm -r "$name"
|
||||
done
|
||||
done
|
||||
test() {
|
||||
go test ./...
|
||||
}
|
||||
|
||||
for goos in windows ; do
|
||||
for goarch in amd64 386 ; do
|
||||
echo "$goos-$goarch"
|
||||
export GOOS="$goos"
|
||||
export GOARCH="$goarch"
|
||||
export name="syncthing-$goos-$goarch"
|
||||
go build -ldflags "-X main.Version $version" \
|
||||
&& mkdir -p "$name" \
|
||||
&& cp syncthing.exe "build/$name.exe" \
|
||||
&& cp README.md LICENSE "$name" \
|
||||
&& zip -qr "build/$name.zip" "$name" \
|
||||
&& rm -r "$name"
|
||||
done
|
||||
done
|
||||
tarDist() {
|
||||
name="$1"
|
||||
mkdir -p "$name"
|
||||
cp syncthing "${distFiles[@]}" "$name"
|
||||
tar zcvf "$name.tar.gz" "$name"
|
||||
rm -rf "$name"
|
||||
}
|
||||
|
||||
zipDist() {
|
||||
name="$1"
|
||||
mkdir -p "$name"
|
||||
cp syncthing.exe "${distFiles[@]}" "$name"
|
||||
zip -r "$name.zip" "$name"
|
||||
rm -rf "$name"
|
||||
}
|
||||
|
||||
case "$1" in
|
||||
"")
|
||||
build
|
||||
;;
|
||||
|
||||
tar)
|
||||
rm -f *.tar.gz *.zip
|
||||
prepare
|
||||
test || exit 1
|
||||
build
|
||||
|
||||
eval $(go env)
|
||||
name="syncthing-$GOOS-$GOARCH-$version"
|
||||
|
||||
tarDist "$name"
|
||||
;;
|
||||
|
||||
all)
|
||||
rm -f *.tar.gz *.zip
|
||||
prepare
|
||||
test || exit 1
|
||||
|
||||
export GOARM=7
|
||||
for os in darwin-amd64 linux-amd64 linux-arm freebsd-amd64 windows-amd64 ; do
|
||||
export GOOS=${os%-*}
|
||||
export GOARCH=${os#*-}
|
||||
|
||||
build
|
||||
|
||||
name="syncthing-$os-$version"
|
||||
case $GOOS in
|
||||
windows)
|
||||
zipDist "$name"
|
||||
rm -f syncthing.exe
|
||||
;;
|
||||
*)
|
||||
tarDist "$name"
|
||||
rm -f syncthing
|
||||
;;
|
||||
esac
|
||||
done
|
||||
;;
|
||||
|
||||
upload)
|
||||
tag=$(git describe)
|
||||
shopt -s nullglob
|
||||
for f in *gz *zip ; do
|
||||
relup calmh/syncthing "$tag" "$f"
|
||||
done
|
||||
;;
|
||||
|
||||
*)
|
||||
echo "Unknown build parameter $1"
|
||||
;;
|
||||
esac
|
||||
|
||||
43
cid/cid.go
Normal file
43
cid/cid.go
Normal file
@@ -0,0 +1,43 @@
|
||||
// Package cid provides a manager for mappings between node ID:s and connection ID:s.
|
||||
package cid
|
||||
|
||||
type Map struct {
|
||||
toCid map[string]int
|
||||
toName []string
|
||||
}
|
||||
|
||||
func NewMap() *Map {
|
||||
return &Map{
|
||||
toCid: make(map[string]int),
|
||||
}
|
||||
}
|
||||
|
||||
func (m *Map) Get(name string) int {
|
||||
cid, ok := m.toCid[name]
|
||||
if ok {
|
||||
return cid
|
||||
}
|
||||
|
||||
// Find a free slot to get a new ID
|
||||
for i, n := range m.toName {
|
||||
if n == "" {
|
||||
m.toName[i] = name
|
||||
m.toCid[name] = i
|
||||
return i
|
||||
}
|
||||
}
|
||||
|
||||
// Add it to the end since we didn't find a free slot
|
||||
m.toName = append(m.toName, name)
|
||||
cid = len(m.toName) - 1
|
||||
m.toCid[name] = cid
|
||||
return cid
|
||||
}
|
||||
|
||||
func (m *Map) Clear(name string) {
|
||||
cid, ok := m.toCid[name]
|
||||
if ok {
|
||||
m.toName[cid] = ""
|
||||
delete(m.toCid, name)
|
||||
}
|
||||
}
|
||||
1
cmd/.gitignore
vendored
Normal file
1
cmd/.gitignore
vendored
Normal file
@@ -0,0 +1 @@
|
||||
!syncthing
|
||||
229
cmd/syncthing/config.go
Normal file
229
cmd/syncthing/config.go
Normal file
@@ -0,0 +1,229 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"crypto/sha256"
|
||||
"encoding/xml"
|
||||
"fmt"
|
||||
"io"
|
||||
"reflect"
|
||||
"sort"
|
||||
"strconv"
|
||||
"strings"
|
||||
)
|
||||
|
||||
type Configuration struct {
|
||||
Version int `xml:"version,attr" default:"1"`
|
||||
Repositories []RepositoryConfiguration `xml:"repository"`
|
||||
Options OptionsConfiguration `xml:"options"`
|
||||
XMLName xml.Name `xml:"configuration" json:"-"`
|
||||
}
|
||||
|
||||
type RepositoryConfiguration struct {
|
||||
Directory string `xml:"directory,attr"`
|
||||
Nodes []NodeConfiguration `xml:"node"`
|
||||
}
|
||||
|
||||
type NodeConfiguration struct {
|
||||
NodeID string `xml:"id,attr"`
|
||||
Name string `xml:"name,attr"`
|
||||
Addresses []string `xml:"address"`
|
||||
}
|
||||
|
||||
type OptionsConfiguration struct {
|
||||
ListenAddress []string `xml:"listenAddress" default:":22000" ini:"listen-address"`
|
||||
ReadOnly bool `xml:"readOnly" ini:"read-only"`
|
||||
AllowDelete bool `xml:"allowDelete" default:"true" ini:"allow-delete"`
|
||||
FollowSymlinks bool `xml:"followSymlinks" default:"true" ini:"follow-symlinks"`
|
||||
GUIEnabled bool `xml:"guiEnabled" default:"true" ini:"gui-enabled"`
|
||||
GUIAddress string `xml:"guiAddress" default:"127.0.0.1:8080" ini:"gui-address"`
|
||||
GlobalAnnServer string `xml:"globalAnnounceServer" default:"announce.syncthing.net:22025" ini:"global-announce-server"`
|
||||
GlobalAnnEnabled bool `xml:"globalAnnounceEnabled" default:"true" ini:"global-announce-enabled"`
|
||||
LocalAnnEnabled bool `xml:"localAnnounceEnabled" default:"true" ini:"local-announce-enabled"`
|
||||
ParallelRequests int `xml:"parallelRequests" default:"16" ini:"parallel-requests"`
|
||||
MaxSendKbps int `xml:"maxSendKbps" ini:"max-send-kbps"`
|
||||
RescanIntervalS int `xml:"rescanIntervalS" default:"60" ini:"rescan-interval"`
|
||||
ReconnectIntervalS int `xml:"reconnectionIntervalS" default:"60" ini:"reconnection-interval"`
|
||||
MaxChangeKbps int `xml:"maxChangeKbps" default:"1000" ini:"max-change-bw"`
|
||||
StartBrowser bool `xml:"startBrowser" default:"true"`
|
||||
}
|
||||
|
||||
func setDefaults(data interface{}) error {
|
||||
s := reflect.ValueOf(data).Elem()
|
||||
t := s.Type()
|
||||
|
||||
for i := 0; i < s.NumField(); i++ {
|
||||
f := s.Field(i)
|
||||
tag := t.Field(i).Tag
|
||||
|
||||
v := tag.Get("default")
|
||||
if len(v) > 0 {
|
||||
switch f.Interface().(type) {
|
||||
case string:
|
||||
f.SetString(v)
|
||||
|
||||
case int:
|
||||
i, err := strconv.ParseInt(v, 10, 64)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
f.SetInt(i)
|
||||
|
||||
case bool:
|
||||
f.SetBool(v == "true")
|
||||
|
||||
case []string:
|
||||
// We don't do anything with string slices here. Any default
|
||||
// we set will be appended to by the XML decoder, so we fill
|
||||
// those after decoding.
|
||||
|
||||
default:
|
||||
panic(f.Type())
|
||||
}
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// fillNilSlices sets default value on slices that are still nil.
|
||||
func fillNilSlices(data interface{}) error {
|
||||
s := reflect.ValueOf(data).Elem()
|
||||
t := s.Type()
|
||||
|
||||
for i := 0; i < s.NumField(); i++ {
|
||||
f := s.Field(i)
|
||||
tag := t.Field(i).Tag
|
||||
|
||||
v := tag.Get("default")
|
||||
if len(v) > 0 {
|
||||
switch f.Interface().(type) {
|
||||
case []string:
|
||||
if f.IsNil() {
|
||||
rv := reflect.MakeSlice(reflect.TypeOf([]string{}), 1, 1)
|
||||
rv.Index(0).SetString(v)
|
||||
f.Set(rv)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func readConfigINI(m map[string]string, data interface{}) error {
|
||||
s := reflect.ValueOf(data).Elem()
|
||||
t := s.Type()
|
||||
|
||||
for i := 0; i < s.NumField(); i++ {
|
||||
f := s.Field(i)
|
||||
tag := t.Field(i).Tag
|
||||
|
||||
name := tag.Get("ini")
|
||||
if len(name) == 0 {
|
||||
name = strings.ToLower(t.Field(i).Name)
|
||||
}
|
||||
|
||||
if v, ok := m[name]; ok {
|
||||
switch f.Interface().(type) {
|
||||
case string:
|
||||
f.SetString(v)
|
||||
|
||||
case int:
|
||||
i, err := strconv.ParseInt(v, 10, 64)
|
||||
if err == nil {
|
||||
f.SetInt(i)
|
||||
}
|
||||
|
||||
case bool:
|
||||
f.SetBool(v == "true")
|
||||
|
||||
default:
|
||||
panic(f.Type())
|
||||
}
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func writeConfigXML(wr io.Writer, cfg Configuration) error {
|
||||
e := xml.NewEncoder(wr)
|
||||
e.Indent("", " ")
|
||||
err := e.Encode(cfg)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
_, err = wr.Write([]byte("\n"))
|
||||
return err
|
||||
}
|
||||
|
||||
func uniqueStrings(ss []string) []string {
|
||||
var m = make(map[string]bool, len(ss))
|
||||
for _, s := range ss {
|
||||
m[s] = true
|
||||
}
|
||||
|
||||
var us = make([]string, 0, len(m))
|
||||
for k := range m {
|
||||
us = append(us, k)
|
||||
}
|
||||
|
||||
return us
|
||||
}
|
||||
|
||||
func readConfigXML(rd io.Reader) (Configuration, error) {
|
||||
var cfg Configuration
|
||||
|
||||
setDefaults(&cfg)
|
||||
setDefaults(&cfg.Options)
|
||||
|
||||
var err error
|
||||
if rd != nil {
|
||||
err = xml.NewDecoder(rd).Decode(&cfg)
|
||||
}
|
||||
|
||||
fillNilSlices(&cfg.Options)
|
||||
|
||||
cfg.Options.ListenAddress = uniqueStrings(cfg.Options.ListenAddress)
|
||||
return cfg, err
|
||||
}
|
||||
|
||||
type NodeConfigurationList []NodeConfiguration
|
||||
|
||||
func (l NodeConfigurationList) Less(a, b int) bool {
|
||||
return l[a].NodeID < l[b].NodeID
|
||||
}
|
||||
func (l NodeConfigurationList) Swap(a, b int) {
|
||||
l[a], l[b] = l[b], l[a]
|
||||
}
|
||||
func (l NodeConfigurationList) Len() int {
|
||||
return len(l)
|
||||
}
|
||||
|
||||
func clusterHash(nodes []NodeConfiguration) string {
|
||||
sort.Sort(NodeConfigurationList(nodes))
|
||||
h := sha256.New()
|
||||
for _, n := range nodes {
|
||||
h.Write([]byte(n.NodeID))
|
||||
}
|
||||
return fmt.Sprintf("%x", h.Sum(nil))
|
||||
}
|
||||
|
||||
func cleanNodeList(nodes []NodeConfiguration, myID string) []NodeConfiguration {
|
||||
var myIDExists bool
|
||||
for _, node := range nodes {
|
||||
if node.NodeID == myID {
|
||||
myIDExists = true
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
if !myIDExists {
|
||||
nodes = append(nodes, NodeConfiguration{
|
||||
NodeID: myID,
|
||||
Addresses: []string{"dynamic"},
|
||||
Name: "",
|
||||
})
|
||||
}
|
||||
|
||||
sort.Sort(NodeConfigurationList(nodes))
|
||||
|
||||
return nodes
|
||||
}
|
||||
116
cmd/syncthing/config_test.go
Normal file
116
cmd/syncthing/config_test.go
Normal file
@@ -0,0 +1,116 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"io"
|
||||
"reflect"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestDefaultValues(t *testing.T) {
|
||||
expected := OptionsConfiguration{
|
||||
ListenAddress: []string{":22000"},
|
||||
ReadOnly: false,
|
||||
AllowDelete: true,
|
||||
FollowSymlinks: true,
|
||||
GUIEnabled: true,
|
||||
GUIAddress: "127.0.0.1:8080",
|
||||
GlobalAnnServer: "announce.syncthing.net:22025",
|
||||
GlobalAnnEnabled: true,
|
||||
LocalAnnEnabled: true,
|
||||
ParallelRequests: 16,
|
||||
MaxSendKbps: 0,
|
||||
RescanIntervalS: 60,
|
||||
ReconnectIntervalS: 60,
|
||||
MaxChangeKbps: 1000,
|
||||
StartBrowser: true,
|
||||
}
|
||||
|
||||
cfg, err := readConfigXML(bytes.NewReader(nil))
|
||||
if err != io.EOF {
|
||||
t.Error(err)
|
||||
}
|
||||
|
||||
if !reflect.DeepEqual(cfg.Options, expected) {
|
||||
t.Errorf("Default config differs;\n E: %#v\n A: %#v", expected, cfg.Options)
|
||||
}
|
||||
}
|
||||
|
||||
func TestNoListenAddress(t *testing.T) {
|
||||
data := []byte(`<configuration version="1">
|
||||
<repository directory="~/Sync">
|
||||
<node id="..." name="...">
|
||||
<address>dynamic</address>
|
||||
</node>
|
||||
</repository>
|
||||
<options>
|
||||
<listenAddress></listenAddress>
|
||||
</options>
|
||||
</configuration>
|
||||
`)
|
||||
|
||||
cfg, err := readConfigXML(bytes.NewReader(data))
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
|
||||
expected := []string{""}
|
||||
if !reflect.DeepEqual(cfg.Options.ListenAddress, expected) {
|
||||
t.Errorf("Unexpected ListenAddress %#v", cfg.Options.ListenAddress)
|
||||
}
|
||||
}
|
||||
|
||||
func TestOverriddenValues(t *testing.T) {
|
||||
data := []byte(`<configuration version="1">
|
||||
<repository directory="~/Sync">
|
||||
<node id="..." name="...">
|
||||
<address>dynamic</address>
|
||||
</node>
|
||||
</repository>
|
||||
<options>
|
||||
<listenAddress>:23000</listenAddress>
|
||||
<readOnly>true</readOnly>
|
||||
<allowDelete>false</allowDelete>
|
||||
<followSymlinks>false</followSymlinks>
|
||||
<guiEnabled>false</guiEnabled>
|
||||
<guiAddress>125.2.2.2:8080</guiAddress>
|
||||
<globalAnnounceServer>syncthing.nym.se:22025</globalAnnounceServer>
|
||||
<globalAnnounceEnabled>false</globalAnnounceEnabled>
|
||||
<localAnnounceEnabled>false</localAnnounceEnabled>
|
||||
<parallelRequests>32</parallelRequests>
|
||||
<maxSendKbps>1234</maxSendKbps>
|
||||
<rescanIntervalS>600</rescanIntervalS>
|
||||
<reconnectionIntervalS>6000</reconnectionIntervalS>
|
||||
<maxChangeKbps>2345</maxChangeKbps>
|
||||
<startBrowser>false</startBrowser>
|
||||
</options>
|
||||
</configuration>
|
||||
`)
|
||||
|
||||
expected := OptionsConfiguration{
|
||||
ListenAddress: []string{":23000"},
|
||||
ReadOnly: true,
|
||||
AllowDelete: false,
|
||||
FollowSymlinks: false,
|
||||
GUIEnabled: false,
|
||||
GUIAddress: "125.2.2.2:8080",
|
||||
GlobalAnnServer: "syncthing.nym.se:22025",
|
||||
GlobalAnnEnabled: false,
|
||||
LocalAnnEnabled: false,
|
||||
ParallelRequests: 32,
|
||||
MaxSendKbps: 1234,
|
||||
RescanIntervalS: 600,
|
||||
ReconnectIntervalS: 6000,
|
||||
MaxChangeKbps: 2345,
|
||||
StartBrowser: false,
|
||||
}
|
||||
|
||||
cfg, err := readConfigXML(bytes.NewReader(data))
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
|
||||
if !reflect.DeepEqual(cfg.Options, expected) {
|
||||
t.Errorf("Overridden config differs;\n E: %#v\n A: %#v", expected, cfg.Options)
|
||||
}
|
||||
}
|
||||
15
cmd/syncthing/debug.go
Normal file
15
cmd/syncthing/debug.go
Normal file
@@ -0,0 +1,15 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"log"
|
||||
"os"
|
||||
"strings"
|
||||
)
|
||||
|
||||
var (
|
||||
dlog = log.New(os.Stderr, "main: ", log.Lmicroseconds|log.Lshortfile)
|
||||
debugNet = strings.Contains(os.Getenv("STTRACE"), "net")
|
||||
debugIdx = strings.Contains(os.Getenv("STTRACE"), "idx")
|
||||
debugNeed = strings.Contains(os.Getenv("STTRACE"), "need")
|
||||
debugPull = strings.Contains(os.Getenv("STTRACE"), "pull")
|
||||
)
|
||||
173
cmd/syncthing/filemonitor.go
Normal file
173
cmd/syncthing/filemonitor.go
Normal file
@@ -0,0 +1,173 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"errors"
|
||||
"fmt"
|
||||
"os"
|
||||
"path"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/calmh/syncthing/buffers"
|
||||
"github.com/calmh/syncthing/scanner"
|
||||
)
|
||||
|
||||
type fileMonitor struct {
|
||||
name string // in-repo name
|
||||
path string // full path
|
||||
writeDone sync.WaitGroup
|
||||
model *Model
|
||||
global scanner.File
|
||||
localBlocks []scanner.Block
|
||||
copyError error
|
||||
writeError error
|
||||
}
|
||||
|
||||
func (m *fileMonitor) FileBegins(cc <-chan content) error {
|
||||
if debugPull {
|
||||
dlog.Println("file begins:", m.name)
|
||||
}
|
||||
|
||||
tmp := defTempNamer.TempName(m.path)
|
||||
|
||||
dir := path.Dir(tmp)
|
||||
_, err := os.Stat(dir)
|
||||
if err != nil && os.IsNotExist(err) {
|
||||
err = os.MkdirAll(dir, 0777)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
outFile, err := os.Create(tmp)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
m.writeDone.Add(1)
|
||||
|
||||
var writeWg sync.WaitGroup
|
||||
if len(m.localBlocks) > 0 {
|
||||
writeWg.Add(1)
|
||||
inFile, err := os.Open(m.path)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Copy local blocks, close infile when done
|
||||
go m.copyLocalBlocks(inFile, outFile, &writeWg)
|
||||
}
|
||||
|
||||
// Write remote blocks,
|
||||
writeWg.Add(1)
|
||||
go m.copyRemoteBlocks(cc, outFile, &writeWg)
|
||||
|
||||
// Wait for both writing routines, then close the outfile
|
||||
go func() {
|
||||
writeWg.Wait()
|
||||
outFile.Close()
|
||||
m.writeDone.Done()
|
||||
}()
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *fileMonitor) copyLocalBlocks(inFile, outFile *os.File, writeWg *sync.WaitGroup) {
|
||||
defer inFile.Close()
|
||||
defer writeWg.Done()
|
||||
|
||||
var buf = buffers.Get(BlockSize)
|
||||
defer buffers.Put(buf)
|
||||
|
||||
for _, lb := range m.localBlocks {
|
||||
buf = buf[:lb.Size]
|
||||
_, err := inFile.ReadAt(buf, lb.Offset)
|
||||
if err != nil {
|
||||
m.copyError = err
|
||||
return
|
||||
}
|
||||
_, err = outFile.WriteAt(buf, lb.Offset)
|
||||
if err != nil {
|
||||
m.copyError = err
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (m *fileMonitor) copyRemoteBlocks(cc <-chan content, outFile *os.File, writeWg *sync.WaitGroup) {
|
||||
defer writeWg.Done()
|
||||
|
||||
for content := range cc {
|
||||
_, err := outFile.WriteAt(content.data, content.offset)
|
||||
buffers.Put(content.data)
|
||||
if err != nil {
|
||||
m.writeError = err
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (m *fileMonitor) FileDone() error {
|
||||
if debugPull {
|
||||
dlog.Println("file done:", m.name)
|
||||
}
|
||||
|
||||
m.writeDone.Wait()
|
||||
|
||||
tmp := defTempNamer.TempName(m.path)
|
||||
defer os.Remove(tmp)
|
||||
|
||||
if m.copyError != nil {
|
||||
return m.copyError
|
||||
}
|
||||
if m.writeError != nil {
|
||||
return m.writeError
|
||||
}
|
||||
|
||||
err := hashCheck(tmp, m.global.Blocks)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
err = os.Chtimes(tmp, time.Unix(m.global.Modified, 0), time.Unix(m.global.Modified, 0))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
err = os.Chmod(tmp, os.FileMode(m.global.Flags&0777))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
err = os.Rename(tmp, m.path)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
m.model.updateLocal(m.global)
|
||||
return nil
|
||||
}
|
||||
|
||||
func hashCheck(name string, correct []scanner.Block) error {
|
||||
rf, err := os.Open(name)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer rf.Close()
|
||||
|
||||
current, err := scanner.Blocks(rf, BlockSize)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if len(current) != len(correct) {
|
||||
return errors.New("incorrect number of blocks")
|
||||
}
|
||||
for i := range current {
|
||||
if bytes.Compare(current[i].Hash, correct[i].Hash) != 0 {
|
||||
return fmt.Errorf("hash mismatch: %x != %x", current[i], correct[i])
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
241
cmd/syncthing/filequeue.go
Normal file
241
cmd/syncthing/filequeue.go
Normal file
@@ -0,0 +1,241 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"log"
|
||||
"sort"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/calmh/syncthing/scanner"
|
||||
)
|
||||
|
||||
type Monitor interface {
|
||||
FileBegins(<-chan content) error
|
||||
FileDone() error
|
||||
}
|
||||
|
||||
type FileQueue struct {
|
||||
files queuedFileList
|
||||
sorted bool
|
||||
fmut sync.Mutex // protects files and sorted
|
||||
availability map[string][]string
|
||||
amut sync.Mutex // protects availability
|
||||
queued map[string]bool
|
||||
}
|
||||
|
||||
type queuedFile struct {
|
||||
name string
|
||||
blocks []scanner.Block
|
||||
activeBlocks []bool
|
||||
given int
|
||||
remaining int
|
||||
channel chan content
|
||||
nodes []string
|
||||
nodesChecked time.Time
|
||||
monitor Monitor
|
||||
}
|
||||
|
||||
type content struct {
|
||||
offset int64
|
||||
data []byte
|
||||
}
|
||||
|
||||
type queuedFileList []queuedFile
|
||||
|
||||
func (l queuedFileList) Len() int { return len(l) }
|
||||
|
||||
func (l queuedFileList) Swap(a, b int) { l[a], l[b] = l[b], l[a] }
|
||||
|
||||
func (l queuedFileList) Less(a, b int) bool {
|
||||
// Sort by most blocks already given out, then alphabetically
|
||||
if l[a].given != l[b].given {
|
||||
return l[a].given > l[b].given
|
||||
}
|
||||
return l[a].name < l[b].name
|
||||
}
|
||||
|
||||
type queuedBlock struct {
|
||||
name string
|
||||
block scanner.Block
|
||||
index int
|
||||
}
|
||||
|
||||
func NewFileQueue() *FileQueue {
|
||||
return &FileQueue{
|
||||
availability: make(map[string][]string),
|
||||
queued: make(map[string]bool),
|
||||
}
|
||||
}
|
||||
|
||||
func (q *FileQueue) Add(name string, blocks []scanner.Block, monitor Monitor) {
|
||||
q.fmut.Lock()
|
||||
defer q.fmut.Unlock()
|
||||
|
||||
if q.queued[name] {
|
||||
return
|
||||
}
|
||||
|
||||
q.files = append(q.files, queuedFile{
|
||||
name: name,
|
||||
blocks: blocks,
|
||||
activeBlocks: make([]bool, len(blocks)),
|
||||
remaining: len(blocks),
|
||||
channel: make(chan content),
|
||||
monitor: monitor,
|
||||
})
|
||||
q.queued[name] = true
|
||||
q.sorted = false
|
||||
}
|
||||
|
||||
func (q *FileQueue) Len() int {
|
||||
q.fmut.Lock()
|
||||
defer q.fmut.Unlock()
|
||||
|
||||
return len(q.files)
|
||||
}
|
||||
|
||||
func (q *FileQueue) Get(nodeID string) (queuedBlock, bool) {
|
||||
q.fmut.Lock()
|
||||
defer q.fmut.Unlock()
|
||||
|
||||
if !q.sorted {
|
||||
sort.Sort(q.files)
|
||||
q.sorted = true
|
||||
}
|
||||
|
||||
for i := range q.files {
|
||||
qf := &q.files[i]
|
||||
|
||||
q.amut.Lock()
|
||||
av := q.availability[qf.name]
|
||||
q.amut.Unlock()
|
||||
|
||||
if len(av) == 0 {
|
||||
// Noone has the file we want; abort.
|
||||
if qf.remaining != len(qf.blocks) {
|
||||
// We have already started on this file; close it down
|
||||
close(qf.channel)
|
||||
if mon := qf.monitor; mon != nil {
|
||||
mon.FileDone()
|
||||
}
|
||||
}
|
||||
delete(q.queued, qf.name)
|
||||
q.deleteAt(i)
|
||||
return queuedBlock{}, false
|
||||
}
|
||||
|
||||
for _, ni := range av {
|
||||
// Find and return the next block in the queue
|
||||
if ni == nodeID {
|
||||
for j, b := range qf.blocks {
|
||||
if !qf.activeBlocks[j] {
|
||||
qf.activeBlocks[j] = true
|
||||
qf.given++
|
||||
return queuedBlock{
|
||||
name: qf.name,
|
||||
block: b,
|
||||
index: j,
|
||||
}, true
|
||||
}
|
||||
}
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// We found nothing to do
|
||||
return queuedBlock{}, false
|
||||
}
|
||||
|
||||
func (q *FileQueue) Done(file string, offset int64, data []byte) {
|
||||
q.fmut.Lock()
|
||||
defer q.fmut.Unlock()
|
||||
|
||||
c := content{
|
||||
offset: offset,
|
||||
data: data,
|
||||
}
|
||||
for i := range q.files {
|
||||
qf := &q.files[i]
|
||||
|
||||
if qf.name == file {
|
||||
if qf.monitor != nil && qf.remaining == len(qf.blocks) {
|
||||
err := qf.monitor.FileBegins(qf.channel)
|
||||
if err != nil {
|
||||
log.Printf("WARNING: %s: %v (not synced)", qf.name, err)
|
||||
delete(q.queued, qf.name)
|
||||
q.deleteAt(i)
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
qf.channel <- c
|
||||
qf.remaining--
|
||||
|
||||
if qf.remaining == 0 {
|
||||
close(qf.channel)
|
||||
if qf.monitor != nil {
|
||||
err := qf.monitor.FileDone()
|
||||
if err != nil {
|
||||
log.Printf("WARNING: %s: %v", qf.name, err)
|
||||
}
|
||||
}
|
||||
delete(q.queued, qf.name)
|
||||
q.deleteAt(i)
|
||||
}
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
// We found nothing, might have errored out already
|
||||
}
|
||||
|
||||
func (q *FileQueue) QueuedFiles() (files []string) {
|
||||
q.fmut.Lock()
|
||||
defer q.fmut.Unlock()
|
||||
|
||||
for _, qf := range q.files {
|
||||
files = append(files, qf.name)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func (q *FileQueue) deleteAt(i int) {
|
||||
q.files = append(q.files[:i], q.files[i+1:]...)
|
||||
}
|
||||
|
||||
func (q *FileQueue) deleteFile(n string) {
|
||||
for i, file := range q.files {
|
||||
if n == file.name {
|
||||
q.deleteAt(i)
|
||||
delete(q.queued, file.name)
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (q *FileQueue) SetAvailable(file string, nodes []string) {
|
||||
q.amut.Lock()
|
||||
defer q.amut.Unlock()
|
||||
|
||||
q.availability[file] = nodes
|
||||
}
|
||||
|
||||
func (q *FileQueue) RemoveAvailable(toRemove string) {
|
||||
q.fmut.Lock()
|
||||
q.amut.Lock()
|
||||
defer q.amut.Unlock()
|
||||
defer q.fmut.Unlock()
|
||||
|
||||
for file, nodes := range q.availability {
|
||||
for i, node := range nodes {
|
||||
if node == toRemove {
|
||||
q.availability[file] = nodes[:i+copy(nodes[i:], nodes[i+1:])]
|
||||
if len(q.availability[file]) == 0 {
|
||||
q.deleteFile(file)
|
||||
}
|
||||
}
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
297
cmd/syncthing/filequeue_test.go
Normal file
297
cmd/syncthing/filequeue_test.go
Normal file
@@ -0,0 +1,297 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
"sync"
|
||||
"sync/atomic"
|
||||
"testing"
|
||||
|
||||
"github.com/calmh/syncthing/scanner"
|
||||
)
|
||||
|
||||
func TestFileQueueAdd(t *testing.T) {
|
||||
q := NewFileQueue()
|
||||
q.Add("foo", nil, nil)
|
||||
}
|
||||
|
||||
func TestFileQueueAddSorting(t *testing.T) {
|
||||
q := NewFileQueue()
|
||||
q.SetAvailable("zzz", []string{"nodeID"})
|
||||
q.SetAvailable("aaa", []string{"nodeID"})
|
||||
|
||||
q.Add("zzz", []scanner.Block{{Offset: 0, Size: 128}, {Offset: 128, Size: 128}}, nil)
|
||||
q.Add("aaa", []scanner.Block{{Offset: 0, Size: 128}, {Offset: 128, Size: 128}}, nil)
|
||||
b, _ := q.Get("nodeID")
|
||||
if b.name != "aaa" {
|
||||
t.Errorf("Incorrectly sorted get: %+v", b)
|
||||
}
|
||||
|
||||
q = NewFileQueue()
|
||||
q.SetAvailable("zzz", []string{"nodeID"})
|
||||
q.SetAvailable("aaa", []string{"nodeID"})
|
||||
|
||||
q.Add("zzz", []scanner.Block{{Offset: 0, Size: 128}, {Offset: 128, Size: 128}}, nil)
|
||||
b, _ = q.Get("nodeID") // Start on zzzz
|
||||
if b.name != "zzz" {
|
||||
t.Errorf("Incorrectly sorted get: %+v", b)
|
||||
}
|
||||
q.Add("aaa", []scanner.Block{{Offset: 0, Size: 128}, {Offset: 128, Size: 128}}, nil)
|
||||
b, _ = q.Get("nodeID")
|
||||
if b.name != "zzz" {
|
||||
// Continue rather than starting a new file
|
||||
t.Errorf("Incorrectly sorted get: %+v", b)
|
||||
}
|
||||
}
|
||||
|
||||
func TestFileQueueLen(t *testing.T) {
|
||||
q := NewFileQueue()
|
||||
q.Add("foo", nil, nil)
|
||||
q.Add("bar", nil, nil)
|
||||
|
||||
if l := q.Len(); l != 2 {
|
||||
t.Errorf("Incorrect len %d != 2 after adds", l)
|
||||
}
|
||||
}
|
||||
|
||||
func TestFileQueueGet(t *testing.T) {
|
||||
q := NewFileQueue()
|
||||
q.SetAvailable("foo", []string{"nodeID"})
|
||||
q.SetAvailable("bar", []string{"nodeID"})
|
||||
|
||||
q.Add("foo", []scanner.Block{
|
||||
{Offset: 0, Size: 128, Hash: []byte("some foo hash bytes")},
|
||||
{Offset: 128, Size: 128, Hash: []byte("some other foo hash bytes")},
|
||||
{Offset: 256, Size: 128, Hash: []byte("more foo hash bytes")},
|
||||
}, nil)
|
||||
q.Add("bar", []scanner.Block{
|
||||
{Offset: 0, Size: 128, Hash: []byte("some bar hash bytes")},
|
||||
{Offset: 128, Size: 128, Hash: []byte("some other bar hash bytes")},
|
||||
}, nil)
|
||||
|
||||
// First get should return the first block of the first file
|
||||
|
||||
expected := queuedBlock{
|
||||
name: "bar",
|
||||
block: scanner.Block{
|
||||
Offset: 0,
|
||||
Size: 128,
|
||||
Hash: []byte("some bar hash bytes"),
|
||||
},
|
||||
}
|
||||
actual, ok := q.Get("nodeID")
|
||||
|
||||
if !ok {
|
||||
t.Error("Unexpected non-OK Get()")
|
||||
}
|
||||
if !reflect.DeepEqual(expected, actual) {
|
||||
t.Errorf("Incorrect block returned (first)\n E: %+v\n A: %+v", expected, actual)
|
||||
}
|
||||
|
||||
// Second get should return the next block of the first file
|
||||
|
||||
expected = queuedBlock{
|
||||
name: "bar",
|
||||
block: scanner.Block{
|
||||
Offset: 128,
|
||||
Size: 128,
|
||||
Hash: []byte("some other bar hash bytes"),
|
||||
},
|
||||
index: 1,
|
||||
}
|
||||
actual, ok = q.Get("nodeID")
|
||||
|
||||
if !ok {
|
||||
t.Error("Unexpected non-OK Get()")
|
||||
}
|
||||
if !reflect.DeepEqual(expected, actual) {
|
||||
t.Errorf("Incorrect block returned (second)\n E: %+v\n A: %+v", expected, actual)
|
||||
}
|
||||
|
||||
// Third get should return the first block of the second file
|
||||
|
||||
expected = queuedBlock{
|
||||
name: "foo",
|
||||
block: scanner.Block{
|
||||
Offset: 0,
|
||||
Size: 128,
|
||||
Hash: []byte("some foo hash bytes"),
|
||||
},
|
||||
}
|
||||
actual, ok = q.Get("nodeID")
|
||||
|
||||
if !ok {
|
||||
t.Error("Unexpected non-OK Get()")
|
||||
}
|
||||
if !reflect.DeepEqual(expected, actual) {
|
||||
t.Errorf("Incorrect block returned (third)\n E: %+v\n A: %+v", expected, actual)
|
||||
}
|
||||
}
|
||||
|
||||
/*
|
||||
func TestFileQueueDone(t *testing.T) {
|
||||
ch := make(chan content)
|
||||
var recv sync.WaitGroup
|
||||
recv.Add(1)
|
||||
go func() {
|
||||
content := <-ch
|
||||
if bytes.Compare(content.data, []byte("first block bytes")) != 0 {
|
||||
t.Error("Incorrect data in first content block")
|
||||
}
|
||||
|
||||
content = <-ch
|
||||
if bytes.Compare(content.data, []byte("second block bytes")) != 0 {
|
||||
t.Error("Incorrect data in second content block")
|
||||
}
|
||||
|
||||
_, ok := <-ch
|
||||
if ok {
|
||||
t.Error("Content channel not closed")
|
||||
}
|
||||
|
||||
recv.Done()
|
||||
}()
|
||||
|
||||
q := FileQueue{resolver: fakeResolver{}}
|
||||
q.Add("foo", []scanner.Block{
|
||||
{Offset: 0, Length: 128, Hash: []byte("some foo hash bytes")},
|
||||
{Offset: 128, Length: 128, Hash: []byte("some other foo hash bytes")},
|
||||
}, ch)
|
||||
|
||||
b0, _ := q.Get("nodeID")
|
||||
b1, _ := q.Get("nodeID")
|
||||
|
||||
q.Done(b0.name, b0.block.Offset, []byte("first block bytes"))
|
||||
q.Done(b1.name, b1.block.Offset, []byte("second block bytes"))
|
||||
|
||||
recv.Wait()
|
||||
|
||||
// Queue should now have one file less
|
||||
|
||||
if l := q.Len(); l != 0 {
|
||||
t.Error("Queue not empty")
|
||||
}
|
||||
|
||||
_, ok := q.Get("nodeID")
|
||||
if ok {
|
||||
t.Error("Unexpected OK Get()")
|
||||
}
|
||||
}
|
||||
*/
|
||||
|
||||
func TestFileQueueGetNodeIDs(t *testing.T) {
|
||||
q := NewFileQueue()
|
||||
q.SetAvailable("a-foo", []string{"nodeID", "a"})
|
||||
q.SetAvailable("b-bar", []string{"nodeID", "b"})
|
||||
|
||||
q.Add("a-foo", []scanner.Block{
|
||||
{Offset: 0, Size: 128, Hash: []byte("some foo hash bytes")},
|
||||
{Offset: 128, Size: 128, Hash: []byte("some other foo hash bytes")},
|
||||
{Offset: 256, Size: 128, Hash: []byte("more foo hash bytes")},
|
||||
}, nil)
|
||||
q.Add("b-bar", []scanner.Block{
|
||||
{Offset: 0, Size: 128, Hash: []byte("some bar hash bytes")},
|
||||
{Offset: 128, Size: 128, Hash: []byte("some other bar hash bytes")},
|
||||
}, nil)
|
||||
|
||||
expected := queuedBlock{
|
||||
name: "b-bar",
|
||||
block: scanner.Block{
|
||||
Offset: 0,
|
||||
Size: 128,
|
||||
Hash: []byte("some bar hash bytes"),
|
||||
},
|
||||
}
|
||||
actual, ok := q.Get("b")
|
||||
if !ok {
|
||||
t.Error("Unexpected non-OK Get()")
|
||||
}
|
||||
if !reflect.DeepEqual(expected, actual) {
|
||||
t.Errorf("Incorrect block returned\n E: %+v\n A: %+v", expected, actual)
|
||||
}
|
||||
|
||||
expected = queuedBlock{
|
||||
name: "a-foo",
|
||||
block: scanner.Block{
|
||||
Offset: 0,
|
||||
Size: 128,
|
||||
Hash: []byte("some foo hash bytes"),
|
||||
},
|
||||
}
|
||||
actual, ok = q.Get("a")
|
||||
if !ok {
|
||||
t.Error("Unexpected non-OK Get()")
|
||||
}
|
||||
if !reflect.DeepEqual(expected, actual) {
|
||||
t.Errorf("Incorrect block returned\n E: %+v\n A: %+v", expected, actual)
|
||||
}
|
||||
|
||||
expected = queuedBlock{
|
||||
name: "a-foo",
|
||||
block: scanner.Block{
|
||||
Offset: 128,
|
||||
Size: 128,
|
||||
Hash: []byte("some other foo hash bytes"),
|
||||
},
|
||||
index: 1,
|
||||
}
|
||||
actual, ok = q.Get("nodeID")
|
||||
if !ok {
|
||||
t.Error("Unexpected non-OK Get()")
|
||||
}
|
||||
if !reflect.DeepEqual(expected, actual) {
|
||||
t.Errorf("Incorrect block returned\n E: %+v\n A: %+v", expected, actual)
|
||||
}
|
||||
}
|
||||
|
||||
func TestFileQueueThreadHandling(t *testing.T) {
|
||||
// This should pass with go test -race
|
||||
|
||||
const n = 100
|
||||
var total int
|
||||
var blocks []scanner.Block
|
||||
for i := 1; i <= n; i++ {
|
||||
blocks = append(blocks, scanner.Block{Offset: int64(i), Size: 1})
|
||||
total += i
|
||||
}
|
||||
|
||||
q := NewFileQueue()
|
||||
q.Add("foo", blocks, nil)
|
||||
q.SetAvailable("foo", []string{"nodeID"})
|
||||
|
||||
var start = make(chan bool)
|
||||
var gotTot uint32
|
||||
var wg sync.WaitGroup
|
||||
wg.Add(n)
|
||||
for i := 1; i <= n; i++ {
|
||||
go func() {
|
||||
<-start
|
||||
b, _ := q.Get("nodeID")
|
||||
atomic.AddUint32(&gotTot, uint32(b.block.Offset))
|
||||
wg.Done()
|
||||
}()
|
||||
}
|
||||
|
||||
close(start)
|
||||
wg.Wait()
|
||||
if int(gotTot) != total {
|
||||
t.Errorf("Total mismatch; %d != %d", gotTot, total)
|
||||
}
|
||||
}
|
||||
|
||||
func TestDeleteAt(t *testing.T) {
|
||||
q := FileQueue{}
|
||||
|
||||
for i := 0; i < 4; i++ {
|
||||
q.files = queuedFileList{{name: "a"}, {name: "b"}, {name: "c"}, {name: "d"}}
|
||||
q.deleteAt(i)
|
||||
if l := len(q.files); l != 3 {
|
||||
t.Fatalf("deleteAt(%d) failed; %d != 3", i, l)
|
||||
}
|
||||
}
|
||||
|
||||
q.files = queuedFileList{{name: "a"}}
|
||||
q.deleteAt(0)
|
||||
if l := len(q.files); l != 0 {
|
||||
t.Fatalf("deleteAt(only) failed; %d != 0", l)
|
||||
}
|
||||
}
|
||||
173
cmd/syncthing/gui.go
Normal file
173
cmd/syncthing/gui.go
Normal file
@@ -0,0 +1,173 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"io/ioutil"
|
||||
"log"
|
||||
"net/http"
|
||||
"runtime"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/calmh/syncthing/scanner"
|
||||
"github.com/codegangsta/martini"
|
||||
)
|
||||
|
||||
type guiError struct {
|
||||
Time time.Time
|
||||
Error string
|
||||
}
|
||||
|
||||
var (
|
||||
configInSync = true
|
||||
guiErrors = []guiError{}
|
||||
guiErrorsMut sync.Mutex
|
||||
)
|
||||
|
||||
func startGUI(addr string, m *Model) {
|
||||
router := martini.NewRouter()
|
||||
router.Get("/", getRoot)
|
||||
router.Get("/rest/version", restGetVersion)
|
||||
router.Get("/rest/model", restGetModel)
|
||||
router.Get("/rest/connections", restGetConnections)
|
||||
router.Get("/rest/config", restGetConfig)
|
||||
router.Get("/rest/config/sync", restGetConfigInSync)
|
||||
router.Get("/rest/need", restGetNeed)
|
||||
router.Get("/rest/system", restGetSystem)
|
||||
router.Get("/rest/errors", restGetErrors)
|
||||
|
||||
router.Post("/rest/config", restPostConfig)
|
||||
router.Post("/rest/restart", restPostRestart)
|
||||
router.Post("/rest/error", restPostError)
|
||||
|
||||
go func() {
|
||||
mr := martini.New()
|
||||
mr.Use(embeddedStatic())
|
||||
mr.Use(martini.Recovery())
|
||||
mr.Action(router.Handle)
|
||||
mr.Map(m)
|
||||
err := http.ListenAndServe(addr, mr)
|
||||
if err != nil {
|
||||
warnln("GUI not possible:", err)
|
||||
}
|
||||
}()
|
||||
}
|
||||
|
||||
func getRoot(w http.ResponseWriter, r *http.Request) {
|
||||
http.Redirect(w, r, "/index.html", 302)
|
||||
}
|
||||
|
||||
func restGetVersion() string {
|
||||
return Version
|
||||
}
|
||||
|
||||
func restGetModel(m *Model, w http.ResponseWriter) {
|
||||
var res = make(map[string]interface{})
|
||||
|
||||
globalFiles, globalDeleted, globalBytes := m.GlobalSize()
|
||||
res["globalFiles"], res["globalDeleted"], res["globalBytes"] = globalFiles, globalDeleted, globalBytes
|
||||
|
||||
localFiles, localDeleted, localBytes := m.LocalSize()
|
||||
res["localFiles"], res["localDeleted"], res["localBytes"] = localFiles, localDeleted, localBytes
|
||||
|
||||
inSyncFiles, inSyncBytes := m.InSyncSize()
|
||||
res["inSyncFiles"], res["inSyncBytes"] = inSyncFiles, inSyncBytes
|
||||
|
||||
files, total := m.NeedFiles()
|
||||
res["needFiles"], res["needBytes"] = len(files), total
|
||||
|
||||
w.Header().Set("Content-Type", "application/json")
|
||||
json.NewEncoder(w).Encode(res)
|
||||
}
|
||||
|
||||
func restGetConnections(m *Model, w http.ResponseWriter) {
|
||||
var res = m.ConnectionStats()
|
||||
w.Header().Set("Content-Type", "application/json")
|
||||
json.NewEncoder(w).Encode(res)
|
||||
}
|
||||
|
||||
func restGetConfig(w http.ResponseWriter) {
|
||||
json.NewEncoder(w).Encode(cfg)
|
||||
}
|
||||
|
||||
func restPostConfig(req *http.Request) {
|
||||
err := json.NewDecoder(req.Body).Decode(&cfg)
|
||||
if err != nil {
|
||||
log.Println(err)
|
||||
} else {
|
||||
saveConfig()
|
||||
configInSync = false
|
||||
}
|
||||
}
|
||||
|
||||
func restGetConfigInSync(w http.ResponseWriter) {
|
||||
json.NewEncoder(w).Encode(map[string]bool{"configInSync": configInSync})
|
||||
}
|
||||
|
||||
func restPostRestart(req *http.Request) {
|
||||
restart()
|
||||
}
|
||||
|
||||
type guiFile scanner.File
|
||||
|
||||
func (f guiFile) MarshalJSON() ([]byte, error) {
|
||||
type t struct {
|
||||
Name string
|
||||
Size int64
|
||||
}
|
||||
return json.Marshal(t{
|
||||
Name: f.Name,
|
||||
Size: scanner.File(f).Size,
|
||||
})
|
||||
}
|
||||
|
||||
func restGetNeed(m *Model, w http.ResponseWriter) {
|
||||
files, _ := m.NeedFiles()
|
||||
gfs := make([]guiFile, len(files))
|
||||
for i, f := range files {
|
||||
gfs[i] = guiFile(f)
|
||||
}
|
||||
w.Header().Set("Content-Type", "application/json")
|
||||
json.NewEncoder(w).Encode(gfs)
|
||||
}
|
||||
|
||||
var cpuUsagePercent float64
|
||||
var cpuUsageLock sync.RWMutex
|
||||
|
||||
func restGetSystem(w http.ResponseWriter) {
|
||||
var m runtime.MemStats
|
||||
runtime.ReadMemStats(&m)
|
||||
|
||||
res := make(map[string]interface{})
|
||||
res["myID"] = myID
|
||||
res["goroutines"] = runtime.NumGoroutine()
|
||||
res["alloc"] = m.Alloc
|
||||
res["sys"] = m.Sys
|
||||
cpuUsageLock.RLock()
|
||||
res["cpuPercent"] = cpuUsagePercent
|
||||
cpuUsageLock.RUnlock()
|
||||
|
||||
w.Header().Set("Content-Type", "application/json")
|
||||
json.NewEncoder(w).Encode(res)
|
||||
}
|
||||
|
||||
func restGetErrors(w http.ResponseWriter) {
|
||||
guiErrorsMut.Lock()
|
||||
json.NewEncoder(w).Encode(guiErrors)
|
||||
guiErrorsMut.Unlock()
|
||||
}
|
||||
|
||||
func restPostError(req *http.Request) {
|
||||
bs, _ := ioutil.ReadAll(req.Body)
|
||||
req.Body.Close()
|
||||
showGuiError(string(bs))
|
||||
}
|
||||
|
||||
func showGuiError(err string) {
|
||||
guiErrorsMut.Lock()
|
||||
guiErrors = append(guiErrors, guiError{time.Now(), err})
|
||||
if len(guiErrors) > 5 {
|
||||
guiErrors = guiErrors[len(guiErrors)-5:]
|
||||
}
|
||||
guiErrorsMut.Unlock()
|
||||
}
|
||||
9
cmd/syncthing/gui_development.go
Normal file
9
cmd/syncthing/gui_development.go
Normal file
@@ -0,0 +1,9 @@
|
||||
//+build guidev
|
||||
|
||||
package main
|
||||
|
||||
import "github.com/codegangsta/martini"
|
||||
|
||||
func embeddedStatic() interface{} {
|
||||
return martini.Static("gui")
|
||||
}
|
||||
40
cmd/syncthing/gui_embedded.go
Normal file
40
cmd/syncthing/gui_embedded.go
Normal file
@@ -0,0 +1,40 @@
|
||||
//+build !guidev
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"log"
|
||||
"mime"
|
||||
"net/http"
|
||||
"path/filepath"
|
||||
"time"
|
||||
|
||||
"github.com/calmh/syncthing/auto"
|
||||
)
|
||||
|
||||
func embeddedStatic() interface{} {
|
||||
var modt = time.Now().UTC().Format(http.TimeFormat)
|
||||
|
||||
return func(res http.ResponseWriter, req *http.Request, log *log.Logger) {
|
||||
file := req.URL.Path
|
||||
|
||||
if file[0] == '/' {
|
||||
file = file[1:]
|
||||
}
|
||||
|
||||
bs, ok := auto.Assets[file]
|
||||
if !ok {
|
||||
return
|
||||
}
|
||||
|
||||
mtype := mime.TypeByExtension(filepath.Ext(req.URL.Path))
|
||||
if len(mtype) != 0 {
|
||||
res.Header().Set("Content-Type", mtype)
|
||||
}
|
||||
res.Header().Set("Content-Size", fmt.Sprintf("%d", len(bs)))
|
||||
res.Header().Set("Last-Modified", modt)
|
||||
|
||||
res.Write(bs)
|
||||
}
|
||||
}
|
||||
92
cmd/syncthing/gui_solaris.go
Normal file
92
cmd/syncthing/gui_solaris.go
Normal file
@@ -0,0 +1,92 @@
//+build solaris

package main

import (
    "encoding/binary"
    "fmt"
    "os"
    "time"
)

type id_t int32
type ulong_t uint32

type timestruc_t struct {
    Tv_sec  int64
    Tv_nsec int64
}

func (tv timestruc_t) Nano() int64 {
    return tv.Tv_sec*1e9 + tv.Tv_nsec
}

type prusage_t struct {
    Pr_lwpid    id_t        /* lwp id. 0: process or defunct */
    Pr_count    int32       /* number of contributing lwps */
    Pr_tstamp   timestruc_t /* real time stamp, time of read() */
    Pr_create   timestruc_t /* process/lwp creation time stamp */
    Pr_term     timestruc_t /* process/lwp termination time stamp */
    Pr_rtime    timestruc_t /* total lwp real (elapsed) time */
    Pr_utime    timestruc_t /* user level CPU time */
    Pr_stime    timestruc_t /* system call CPU time */
    Pr_ttime    timestruc_t /* other system trap CPU time */
    Pr_tftime   timestruc_t /* text page fault sleep time */
    Pr_dftime   timestruc_t /* data page fault sleep time */
    Pr_kftime   timestruc_t /* kernel page fault sleep time */
    Pr_ltime    timestruc_t /* user lock wait sleep time */
    Pr_slptime  timestruc_t /* all other sleep time */
    Pr_wtime    timestruc_t /* wait-cpu (latency) time */
    Pr_stoptime timestruc_t /* stopped time */
    Pr_minf     ulong_t     /* minor page faults */
    Pr_majf     ulong_t     /* major page faults */
    Pr_nswap    ulong_t     /* swaps */
    Pr_inblk    ulong_t     /* input blocks */
    Pr_oublk    ulong_t     /* output blocks */
    Pr_msnd     ulong_t     /* messages sent */
    Pr_mrcv     ulong_t     /* messages received */
    Pr_sigs     ulong_t     /* signals received */
    Pr_vctx     ulong_t     /* voluntary context switches */
    Pr_ictx     ulong_t     /* involuntary context switches */
    Pr_sysc     ulong_t     /* system calls */
    Pr_ioch     ulong_t     /* chars read and written */

}

func solarisPrusage(pid int, rusage *prusage_t) error {
    fd, err := os.Open(fmt.Sprintf("/proc/%d/usage", pid))
    if err != nil {
        return err
    }
    err = binary.Read(fd, binary.LittleEndian, rusage)
    fd.Close()
    return err
}

func init() {
    go trackCPUUsage()
}

func trackCPUUsage() {
    var prevUsage int64
    var prevTime = time.Now().UnixNano()
    var rusage prusage_t
    var pid = os.Getpid()
    for {
        time.Sleep(10 * time.Second)
        err := solarisPrusage(pid, &rusage)
        if err != nil {
            warnln(err)
            continue
        }
        curTime := time.Now().UnixNano()
        timeDiff := curTime - prevTime
        curUsage := rusage.Pr_utime.Nano() + rusage.Pr_stime.Nano()
        usageDiff := curUsage - prevUsage
        cpuUsageLock.Lock()
        cpuUsagePercent = 100 * float64(usageDiff) / float64(timeDiff)
        cpuUsageLock.Unlock()
        prevTime = curTime
        prevUsage = curUsage
    }
}
cmd/syncthing/gui_unix.go (Normal file, 31 lines)
@@ -0,0 +1,31 @@
//+build !windows,!solaris

package main

import (
    "syscall"
    "time"
)

func init() {
    go trackCPUUsage()
}

func trackCPUUsage() {
    var prevUsage int64
    var prevTime = time.Now().UnixNano()
    var rusage syscall.Rusage
    for {
        time.Sleep(10 * time.Second)
        syscall.Getrusage(syscall.RUSAGE_SELF, &rusage)
        curTime := time.Now().UnixNano()
        timeDiff := curTime - prevTime
        curUsage := rusage.Utime.Nano() + rusage.Stime.Nano()
        usageDiff := curUsage - prevUsage
        cpuUsageLock.Lock()
        cpuUsagePercent = 100 * float64(usageDiff) / float64(timeDiff)
        cpuUsageLock.Unlock()
        prevTime = curTime
        prevUsage = curUsage
    }
}
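Both trackCPUUsage variants above (Solaris and generic Unix) compute the same figure: CPU time (user plus system) consumed during the sampling window, divided by the wall-clock length of that window, times 100. A minimal sketch of the arithmetic with invented sample values; note the result can exceed 100 on multi-core machines, since CPU time is summed across all threads:

// Illustrative arithmetic only, with made-up numbers: 1.5 s of CPU time spent
// during a 10 s sampling window yields cpuUsagePercent == 15.
package main

import (
    "fmt"
    "time"
)

func main() {
    usageDiff := int64(1500 * time.Millisecond) // CPU time consumed in the window, in ns
    timeDiff := int64(10 * time.Second)         // wall-clock length of the window, in ns
    cpuUsagePercent := 100 * float64(usageDiff) / float64(timeDiff)
    fmt.Println(cpuUsagePercent) // 15
}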
@@ -6,16 +6,11 @@ import (
    "os"
)

var logger = log.New(os.Stderr, "", log.Ltime)
var logger *log.Logger

func debugln(vals ...interface{}) {
    s := fmt.Sprintln(vals...)
    logger.Output(2, "DEBUG: "+s)
}

func debugf(format string, vals ...interface{}) {
    s := fmt.Sprintf(format, vals...)
    logger.Output(2, "DEBUG: "+s)
func init() {
    log.SetOutput(os.Stderr)
    logger = log.New(os.Stderr, "", log.Flags())
}

func infoln(vals ...interface{}) {
@@ -40,11 +35,13 @@ func okf(format string, vals ...interface{}) {

func warnln(vals ...interface{}) {
    s := fmt.Sprintln(vals...)
    showGuiError(s)
    logger.Output(2, "WARNING: "+s)
}

func warnf(format string, vals ...interface{}) {
    s := fmt.Sprintf(format, vals...)
    showGuiError(s)
    logger.Output(2, "WARNING: "+s)
}

cmd/syncthing/main.go (Normal file, 623 lines)
@@ -0,0 +1,623 @@
package main

import (
    "compress/gzip"
    "crypto/tls"
    "flag"
    "fmt"
    "log"
    "net"
    "net/http"
    _ "net/http/pprof"
    "os"
    "os/exec"
    "path"
    "runtime"
    "runtime/debug"
    "strconv"
    "strings"
    "time"

    "github.com/calmh/ini"
    "github.com/calmh/syncthing/discover"
    "github.com/calmh/syncthing/protocol"
    "github.com/calmh/syncthing/scanner"
)

const BlockSize = 128 * 1024

var cfg Configuration
var Version = "unknown-dev"

var (
    myID string
)

var (
    showVersion bool
    confDir     string
    verbose     bool
)

const (
    usage      = "syncthing [options]"
    extraUsage = `The following enviroment variables are interpreted by syncthing:

 STNORESTART  Do not attempt to restart when requested to, instead just exit.
              Set this variable when running under a service manager such as
              runit, launchd, etc.

 STPROFILER   Set to a listen address such as "127.0.0.1:9090" to start the
              profiler with HTTP access.

 STTRACE      A comma separated string of facilities to trace. The valid
              facility strings:
              - "scanner" (the file change scanner)
              - "discover" (the node discovery package)
              - "net" (connecting and disconnecting, network messages)
              - "idx" (index sending and receiving)
              - "need" (file need calculations)
              - "pull" (file pull activity)`
)
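The variables described in extraUsage are plain process environment variables. A minimal sketch of a hypothetical wrapper that launches syncthing with tracing enabled and automatic restart disabled, assuming the facility names listed above; the wrapper function itself is invented for illustration and is not part of syncthing:

// Hypothetical wrapper, illustration only: runs syncthing with the STTRACE
// and STNORESTART variables documented in extraUsage above.
package main

import (
    "os"
    "os/exec"
)

func runTracedSyncthing() error {
    cmd := exec.Command("syncthing", "-v")
    cmd.Env = append(os.Environ(),
        "STTRACE=net,pull", // trace network messages and pull activity
        "STNORESTART=1",    // exit instead of restarting, as under a service manager
    )
    cmd.Stdout = os.Stdout
    cmd.Stderr = os.Stderr
    return cmd.Run()
}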

func main() {
    flag.StringVar(&confDir, "home", getDefaultConfDir(), "Set configuration directory")
    flag.BoolVar(&showVersion, "version", false, "Show version")
    flag.BoolVar(&verbose, "v", false, "Be more verbose")
    flag.Usage = usageFor(flag.CommandLine, usage, extraUsage)
    flag.Parse()

    if len(os.Getenv("STRESTART")) > 0 {
        // Give the parent process time to exit and release sockets etc.
        time.Sleep(1 * time.Second)
    }

    if showVersion {
        fmt.Println(Version)
        os.Exit(0)
    }

    if len(os.Getenv("GOGC")) == 0 {
        debug.SetGCPercent(25)
    }

    if len(os.Getenv("GOMAXPROCS")) == 0 {
        runtime.GOMAXPROCS(runtime.NumCPU())
    }

    confDir = expandTilde(confDir)

    // Ensure that our home directory exists and that we have a certificate and key.

    ensureDir(confDir, 0700)
    cert, err := loadCert(confDir)
    if err != nil {
        newCertificate(confDir)
        cert, err = loadCert(confDir)
        fatalErr(err)
    }

    myID = string(certID(cert.Certificate[0]))
    log.SetPrefix("[" + myID[0:5] + "] ")
    logger.SetPrefix("[" + myID[0:5] + "] ")

    infoln("Version", Version)
    infoln("My ID:", myID)

    // Prepare to be able to save configuration

    cfgFile := path.Join(confDir, "config.xml")
    go saveConfigLoop(cfgFile)

    // Load the configuration file, if it exists.
    // If it does not, create a template.

    cf, err := os.Open(cfgFile)
    if err == nil {
        // Read config.xml
        cfg, err = readConfigXML(cf)
        if err != nil {
            fatalln(err)
        }
        cf.Close()
    } else {
        // No config.xml, let's try the old syncthing.ini
        iniFile := path.Join(confDir, "syncthing.ini")
        cf, err := os.Open(iniFile)
        if err == nil {
            infoln("Migrating syncthing.ini to config.xml")
            iniCfg := ini.Parse(cf)
            cf.Close()
            os.Rename(iniFile, path.Join(confDir, "migrated_syncthing.ini"))

            cfg, _ = readConfigXML(nil)
            cfg.Repositories = []RepositoryConfiguration{
                {Directory: iniCfg.Get("repository", "dir")},
            }
            readConfigINI(iniCfg.OptionMap("settings"), &cfg.Options)
            for name, addrs := range iniCfg.OptionMap("nodes") {
                n := NodeConfiguration{
                    NodeID:    name,
                    Addresses: strings.Fields(addrs),
                }
                cfg.Repositories[0].Nodes = append(cfg.Repositories[0].Nodes, n)
            }

            saveConfig()
        }
    }

    if len(cfg.Repositories) == 0 {
        infoln("No config file; starting with empty defaults")

        cfg, err = readConfigXML(nil)
        cfg.Repositories = []RepositoryConfiguration{
            {
                Directory: path.Join(getHomeDir(), "Sync"),
                Nodes: []NodeConfiguration{
                    {NodeID: myID, Addresses: []string{"dynamic"}},
                },
            },
        }

        saveConfig()
        infof("Edit %s to taste or use the GUI\n", cfgFile)
    }

    // Make sure the local node is in the node list.
    cfg.Repositories[0].Nodes = cleanNodeList(cfg.Repositories[0].Nodes, myID)

    var dir = expandTilde(cfg.Repositories[0].Directory)

    if profiler := os.Getenv("STPROFILER"); len(profiler) > 0 {
        go func() {
            dlog.Println("Starting profiler on", profiler)
            err := http.ListenAndServe(profiler, nil)
            if err != nil {
                dlog.Fatal(err)
            }
        }()
    }

    // The TLS configuration is used for both the listening socket and outgoing
    // connections.

    tlsCfg := &tls.Config{
        Certificates:           []tls.Certificate{cert},
        NextProtos:             []string{"bep/1.0"},
        ServerName:             myID,
        ClientAuth:             tls.RequestClientCert,
        SessionTicketsDisabled: true,
        InsecureSkipVerify:     true,
        MinVersion:             tls.VersionTLS12,
    }

    ensureDir(dir, -1)
    m := NewModel(dir, cfg.Options.MaxChangeKbps*1000)
    if cfg.Options.MaxSendKbps > 0 {
        m.LimitRate(cfg.Options.MaxSendKbps)
    }

    // GUI
    if cfg.Options.GUIEnabled && cfg.Options.GUIAddress != "" {
        addr, err := net.ResolveTCPAddr("tcp", cfg.Options.GUIAddress)
        if err != nil {
            warnf("Cannot start GUI on %q: %v", cfg.Options.GUIAddress, err)
        } else {
            var hostOpen, hostShow string
            switch {
            case addr.IP == nil:
                hostOpen = "localhost"
                hostShow = "0.0.0.0"
            case addr.IP.IsUnspecified():
                hostOpen = "localhost"
                hostShow = addr.IP.String()
            default:
                hostOpen = addr.IP.String()
                hostShow = hostOpen
            }

            infof("Starting web GUI on http://%s:%d/", hostShow, addr.Port)
            startGUI(cfg.Options.GUIAddress, m)
            if cfg.Options.StartBrowser && len(os.Getenv("STRESTART")) == 0 {
                openURL(fmt.Sprintf("http://%s:%d", hostOpen, addr.Port))
            }
        }
    }

    // Walk the repository and update the local model before establishing any
    // connections to other nodes.

    if verbose {
        infoln("Populating repository index")
    }
    loadIndex(m)

    sup := &suppressor{threshold: int64(cfg.Options.MaxChangeKbps)}
    w := &scanner.Walker{
        Dir:            m.dir,
        IgnoreFile:     ".stignore",
        FollowSymlinks: cfg.Options.FollowSymlinks,
        BlockSize:      BlockSize,
        TempNamer:      defTempNamer,
        Suppressor:     sup,
        CurrentFiler:   m,
    }
    updateLocalModel(m, w)

    connOpts := map[string]string{
        "clientId":      "syncthing",
        "clientVersion": Version,
        "clusterHash":   clusterHash(cfg.Repositories[0].Nodes),
    }

    // Routine to listen for incoming connections
    if verbose {
        infoln("Listening for incoming connections")
    }
    for _, addr := range cfg.Options.ListenAddress {
        go listen(myID, addr, m, tlsCfg, connOpts)
    }

    // Routine to connect out to configured nodes
    if verbose {
        infoln("Attempting to connect to other nodes")
    }
    disc := discovery(cfg.Options.ListenAddress[0])
    go connect(myID, disc, m, tlsCfg, connOpts)

    // Routine to pull blocks from other nodes to synchronize the local
    // repository. Does not run when we are in read only (publish only) mode.
    if !cfg.Options.ReadOnly {
        if verbose {
            if cfg.Options.AllowDelete {
                infoln("Deletes from peer nodes are allowed")
            } else {
                infoln("Deletes from peer nodes will be ignored")
            }
            okln("Ready to synchronize (read-write)")
        }
        m.StartRW(cfg.Options.AllowDelete, cfg.Options.ParallelRequests)
    } else if verbose {
        okln("Ready to synchronize (read only; no external updates accepted)")
    }

    // Periodically scan the repository and update the local
    // XXX: Should use some fsnotify mechanism.
    go func() {
        td := time.Duration(cfg.Options.RescanIntervalS) * time.Second
        for {
            time.Sleep(td)
            if m.LocalAge() > (td / 2).Seconds() {
                updateLocalModel(m, w)
            }
        }
    }()

    if verbose {
        // Periodically print statistics
        go printStatsLoop(m)
    }

    select {}
}

func restart() {
    infoln("Restarting")
    if os.Getenv("SMF_FMRI") != "" || os.Getenv("STNORESTART") != "" {
        // Solaris SMF
        infoln("Service manager detected; exit instead of restart")
        os.Exit(0)
    }

    env := os.Environ()
    if len(os.Getenv("STRESTART")) == 0 {
        env = append(env, "STRESTART=1")
    }
    pgm, err := exec.LookPath(os.Args[0])
    if err != nil {
        warnln(err)
        return
    }
    proc, err := os.StartProcess(pgm, os.Args, &os.ProcAttr{
        Env:   env,
        Files: []*os.File{os.Stdin, os.Stdout, os.Stderr},
    })
    if err != nil {
        fatalln(err)
    }
    proc.Release()
    os.Exit(0)
}

var saveConfigCh = make(chan struct{})

func saveConfigLoop(cfgFile string) {
    for _ = range saveConfigCh {
        fd, err := os.Create(cfgFile + ".tmp")
        if err != nil {
            warnln(err)
            continue
        }

        err = writeConfigXML(fd, cfg)
        if err != nil {
            warnln(err)
            fd.Close()
            continue
        }

        err = fd.Close()
        if err != nil {
            warnln(err)
            continue
        }

        if runtime.GOOS == "windows" {
            err := os.Remove(cfgFile)
            if err != nil && !os.IsNotExist(err) {
                warnln(err)
            }
        }

        err = os.Rename(cfgFile+".tmp", cfgFile)
        if err != nil {
            warnln(err)
        }
    }
}

func saveConfig() {
    saveConfigCh <- struct{}{}
}

func printStatsLoop(m *Model) {
    var lastUpdated int64
    var lastStats = make(map[string]ConnectionInfo)

    for {
        time.Sleep(60 * time.Second)

        for node, stats := range m.ConnectionStats() {
            secs := time.Since(lastStats[node].At).Seconds()
            inbps := 8 * int(float64(stats.InBytesTotal-lastStats[node].InBytesTotal)/secs)
            outbps := 8 * int(float64(stats.OutBytesTotal-lastStats[node].OutBytesTotal)/secs)

            if inbps+outbps > 0 {
                infof("%s: %sb/s in, %sb/s out", node[0:5], MetricPrefix(int64(inbps)), MetricPrefix(int64(outbps)))
            }

            lastStats[node] = stats
        }

        if lu := m.Generation(); lu > lastUpdated {
            lastUpdated = lu
            files, _, bytes := m.GlobalSize()
            infof("%6d files, %9sB in cluster", files, BinaryPrefix(bytes))
            files, _, bytes = m.LocalSize()
            infof("%6d files, %9sB in local repo", files, BinaryPrefix(bytes))
            needFiles, bytes := m.NeedFiles()
            infof("%6d files, %9sB to synchronize", len(needFiles), BinaryPrefix(bytes))
        }
    }
}

func listen(myID string, addr string, m *Model, tlsCfg *tls.Config, connOpts map[string]string) {
    if debugNet {
        dlog.Println("listening on", addr)
    }
    l, err := tls.Listen("tcp", addr, tlsCfg)
    fatalErr(err)

listen:
    for {
        conn, err := l.Accept()
        if err != nil {
            warnln(err)
            continue
        }

        if debugNet {
            dlog.Println("connect from", conn.RemoteAddr())
        }

        tc := conn.(*tls.Conn)
        err = tc.Handshake()
        if err != nil {
            warnln(err)
            tc.Close()
            continue
        }

        remoteID := certID(tc.ConnectionState().PeerCertificates[0].Raw)

        if remoteID == myID {
            warnf("Connect from myself (%s) - should not happen", remoteID)
            conn.Close()
            continue
        }

        if m.ConnectedTo(remoteID) {
            warnf("Connect from connected node (%s)", remoteID)
        }

        for _, nodeCfg := range cfg.Repositories[0].Nodes {
            if nodeCfg.NodeID == remoteID {
                protoConn := protocol.NewConnection(remoteID, conn, conn, m, connOpts)
                m.AddConnection(conn, protoConn)
                continue listen
            }
        }
        conn.Close()
    }
}

func discovery(addr string) *discover.Discoverer {
    _, portstr, err := net.SplitHostPort(addr)
    fatalErr(err)
    port, _ := strconv.Atoi(portstr)

    if !cfg.Options.LocalAnnEnabled {
        port = -1
    } else if verbose {
        infoln("Sending local discovery announcements")
    }

    if !cfg.Options.GlobalAnnEnabled {
        cfg.Options.GlobalAnnServer = ""
    } else if verbose {
        infoln("Sending external discovery announcements")
    }

    disc, err := discover.NewDiscoverer(myID, port, cfg.Options.GlobalAnnServer)

    if err != nil {
        warnf("No discovery possible (%v)", err)
    }

    return disc
}

func connect(myID string, disc *discover.Discoverer, m *Model, tlsCfg *tls.Config, connOpts map[string]string) {
    for {
    nextNode:
        for _, nodeCfg := range cfg.Repositories[0].Nodes {
            if nodeCfg.NodeID == myID {
                continue
            }
            if m.ConnectedTo(nodeCfg.NodeID) {
                continue
            }
            for _, addr := range nodeCfg.Addresses {
                if addr == "dynamic" {
                    if disc != nil {
                        t := disc.Lookup(nodeCfg.NodeID)
                        if len(t) == 0 {
                            continue
                        }
                        addr = t[0] //XXX: Handle all of them
                    }
                }

                if debugNet {
                    dlog.Println("dial", nodeCfg.NodeID, addr)
                }
                conn, err := tls.Dial("tcp", addr, tlsCfg)
                if err != nil {
                    if debugNet {
                        dlog.Println(err)
                    }
                    continue
                }

                remoteID := certID(conn.ConnectionState().PeerCertificates[0].Raw)
                if remoteID != nodeCfg.NodeID {
                    warnln("Unexpected nodeID", remoteID, "!=", nodeCfg.NodeID)
                    conn.Close()
                    continue
                }

                protoConn := protocol.NewConnection(remoteID, conn, conn, m, connOpts)
                m.AddConnection(conn, protoConn)
                continue nextNode
            }
        }

        time.Sleep(time.Duration(cfg.Options.ReconnectIntervalS) * time.Second)
    }
}

func updateLocalModel(m *Model, w *scanner.Walker) {
    files, _ := w.Walk()
    m.ReplaceLocal(files)
    saveIndex(m)
}

func saveIndex(m *Model) {
    name := m.RepoID() + ".idx.gz"
    fullName := path.Join(confDir, name)
    idxf, err := os.Create(fullName + ".tmp")
    if err != nil {
        return
    }

    gzw := gzip.NewWriter(idxf)

    protocol.IndexMessage{
        Repository: "local",
        Files:      m.ProtocolIndex(),
    }.EncodeXDR(gzw)
    gzw.Close()
    idxf.Close()
    os.Rename(fullName+".tmp", fullName)
}

func loadIndex(m *Model) {
    name := m.RepoID() + ".idx.gz"
    idxf, err := os.Open(path.Join(confDir, name))
    if err != nil {
        return
    }
    defer idxf.Close()

    gzr, err := gzip.NewReader(idxf)
    if err != nil {
        return
    }
    defer gzr.Close()

    var im protocol.IndexMessage
    err = im.DecodeXDR(gzr)
    if err != nil || im.Repository != "local" {
        return
    }
    m.SeedLocal(im.Files)
}

func ensureDir(dir string, mode int) {
    fi, err := os.Stat(dir)
    if os.IsNotExist(err) {
        err := os.MkdirAll(dir, 0700)
        fatalErr(err)
    } else if mode >= 0 && err == nil && int(fi.Mode()&0777) != mode {
        err := os.Chmod(dir, os.FileMode(mode))
        fatalErr(err)
    }
}

func expandTilde(p string) string {
    if runtime.GOOS == "windows" {
        return p
    }

    if strings.HasPrefix(p, "~/") {
        return strings.Replace(p, "~", getUnixHomeDir(), 1)
    }
    return p
}

func getUnixHomeDir() string {
    home := os.Getenv("HOME")
    if home == "" {
        fatalln("No home directory?")
    }
    return home
}

func getHomeDir() string {
    if runtime.GOOS == "windows" {
        home := os.Getenv("HOMEDRIVE") + os.Getenv("HOMEPATH")
        if home == "" {
            home = os.Getenv("USERPROFILE")
        }
        return home
    }
    return getUnixHomeDir()
}

func getDefaultConfDir() string {
    if runtime.GOOS == "windows" {
        return path.Join(os.Getenv("AppData"), "syncthing")
    }
    return expandTilde("~/.syncthing")
}
cmd/syncthing/model.go (Normal file, 929 lines)
@@ -0,0 +1,929 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"crypto/sha1"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"net"
|
||||
"os"
|
||||
"path"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/calmh/syncthing/buffers"
|
||||
"github.com/calmh/syncthing/protocol"
|
||||
"github.com/calmh/syncthing/scanner"
|
||||
)
|
||||
|
||||
type Model struct {
|
||||
dir string
|
||||
|
||||
global map[string]scanner.File // the latest version of each file as it exists in the cluster
|
||||
gmut sync.RWMutex // protects global
|
||||
local map[string]scanner.File // the files we currently have locally on disk
|
||||
lmut sync.RWMutex // protects local
|
||||
remote map[string]map[string]scanner.File
|
||||
rmut sync.RWMutex // protects remote
|
||||
protoConn map[string]Connection
|
||||
rawConn map[string]io.Closer
|
||||
pmut sync.RWMutex // protects protoConn and rawConn
|
||||
|
||||
// Queue for files to fetch. fq can call back into the model, so we must ensure
|
||||
// to hold no locks when calling methods on fq.
|
||||
fq *FileQueue
|
||||
dq chan scanner.File // queue for files to delete
|
||||
|
||||
updatedLocal int64 // timestamp of last update to local
|
||||
updateGlobal int64 // timestamp of last update to remote
|
||||
lastIdxBcast time.Time
|
||||
lastIdxBcastRequest time.Time
|
||||
umut sync.RWMutex // provides updated* and lastIdx*
|
||||
|
||||
rwRunning bool
|
||||
delete bool
|
||||
initmut sync.Mutex // protects rwRunning and delete
|
||||
|
||||
sup suppressor
|
||||
|
||||
parallelRequests int
|
||||
limitRequestRate chan struct{}
|
||||
|
||||
imut sync.Mutex // protects Index
|
||||
}
|
||||
|
||||
type Connection interface {
|
||||
ID() string
|
||||
Index(string, []protocol.FileInfo)
|
||||
Request(repo, name string, offset int64, size int) ([]byte, error)
|
||||
Statistics() protocol.Statistics
|
||||
Option(key string) string
|
||||
}
|
||||
|
||||
const (
|
||||
idxBcastHoldtime = 15 * time.Second // Wait at least this long after the last index modification
|
||||
idxBcastMaxDelay = 120 * time.Second // Unless we've already waited this long
|
||||
)
|
||||
|
||||
var (
|
||||
ErrNoSuchFile = errors.New("no such file")
|
||||
ErrInvalid = errors.New("file is invalid")
|
||||
)
|
||||
|
||||
// NewModel creates and starts a new model. The model starts in read-only mode,
|
||||
// where it sends index information to connected peers and responds to requests
|
||||
// for file data without altering the local repository in any way.
|
||||
func NewModel(dir string, maxChangeBw int) *Model {
|
||||
m := &Model{
|
||||
dir: dir,
|
||||
global: make(map[string]scanner.File),
|
||||
local: make(map[string]scanner.File),
|
||||
remote: make(map[string]map[string]scanner.File),
|
||||
protoConn: make(map[string]Connection),
|
||||
rawConn: make(map[string]io.Closer),
|
||||
lastIdxBcast: time.Now(),
|
||||
sup: suppressor{threshold: int64(maxChangeBw)},
|
||||
fq: NewFileQueue(),
|
||||
dq: make(chan scanner.File),
|
||||
}
|
||||
|
||||
go m.broadcastIndexLoop()
|
||||
return m
|
||||
}
|
||||
|
||||
func (m *Model) LimitRate(kbps int) {
|
||||
m.limitRequestRate = make(chan struct{}, kbps)
|
||||
n := kbps/10 + 1
|
||||
go func() {
|
||||
for {
|
||||
time.Sleep(100 * time.Millisecond)
|
||||
for i := 0; i < n; i++ {
|
||||
select {
|
||||
case m.limitRequestRate <- struct{}{}:
|
||||
}
|
||||
}
|
||||
}
|
||||
}()
|
||||
}
|
||||
|
||||
// StartRW starts read/write processing on the current model. When in
|
||||
// read/write mode the model will attempt to keep in sync with the cluster by
|
||||
// pulling needed files from peer nodes.
|
||||
func (m *Model) StartRW(del bool, threads int) {
|
||||
m.initmut.Lock()
|
||||
defer m.initmut.Unlock()
|
||||
|
||||
if m.rwRunning {
|
||||
panic("starting started model")
|
||||
}
|
||||
|
||||
m.rwRunning = true
|
||||
m.delete = del
|
||||
m.parallelRequests = threads
|
||||
|
||||
if del {
|
||||
go m.deleteLoop()
|
||||
}
|
||||
}
|
||||
|
||||
// Generation returns an opaque integer that is guaranteed to increment on
|
||||
// every change to the local repository or global model.
|
||||
func (m *Model) Generation() int64 {
|
||||
m.umut.RLock()
|
||||
defer m.umut.RUnlock()
|
||||
|
||||
return m.updatedLocal + m.updateGlobal
|
||||
}
|
||||
|
||||
func (m *Model) LocalAge() float64 {
|
||||
m.umut.RLock()
|
||||
defer m.umut.RUnlock()
|
||||
|
||||
return time.Since(time.Unix(m.updatedLocal, 0)).Seconds()
|
||||
}
|
||||
|
||||
type ConnectionInfo struct {
|
||||
protocol.Statistics
|
||||
Address string
|
||||
ClientID string
|
||||
ClientVersion string
|
||||
Completion int
|
||||
}
|
||||
|
||||
// ConnectionStats returns a map with connection statistics for each connected node.
|
||||
func (m *Model) ConnectionStats() map[string]ConnectionInfo {
|
||||
type remoteAddrer interface {
|
||||
RemoteAddr() net.Addr
|
||||
}
|
||||
|
||||
m.gmut.RLock()
|
||||
m.pmut.RLock()
|
||||
m.rmut.RLock()
|
||||
|
||||
var tot int64
|
||||
for _, f := range m.global {
|
||||
if f.Flags&protocol.FlagDeleted == 0 {
|
||||
tot += f.Size
|
||||
}
|
||||
}
|
||||
|
||||
var res = make(map[string]ConnectionInfo)
|
||||
for node, conn := range m.protoConn {
|
||||
ci := ConnectionInfo{
|
||||
Statistics: conn.Statistics(),
|
||||
ClientID: conn.Option("clientId"),
|
||||
ClientVersion: conn.Option("clientVersion"),
|
||||
}
|
||||
if nc, ok := m.rawConn[node].(remoteAddrer); ok {
|
||||
ci.Address = nc.RemoteAddr().String()
|
||||
}
|
||||
|
||||
var have int64
|
||||
for _, f := range m.remote[node] {
|
||||
if f.Equals(m.global[f.Name]) && f.Flags&protocol.FlagDeleted == 0 {
|
||||
have += f.Size
|
||||
}
|
||||
}
|
||||
|
||||
ci.Completion = 100
|
||||
if tot != 0 {
|
||||
ci.Completion = int(100 * have / tot)
|
||||
}
|
||||
|
||||
res[node] = ci
|
||||
}
|
||||
|
||||
m.rmut.RUnlock()
|
||||
m.pmut.RUnlock()
|
||||
m.gmut.RUnlock()
|
||||
return res
|
||||
}
|
||||
|
||||
// GlobalSize returns the number of files, deleted files and total bytes for all
|
||||
// files in the global model.
|
||||
func (m *Model) GlobalSize() (files, deleted int, bytes int64) {
|
||||
m.gmut.RLock()
|
||||
|
||||
for _, f := range m.global {
|
||||
if f.Flags&protocol.FlagDeleted == 0 {
|
||||
files++
|
||||
bytes += f.Size
|
||||
} else {
|
||||
deleted++
|
||||
}
|
||||
}
|
||||
|
||||
m.gmut.RUnlock()
|
||||
return
|
||||
}
|
||||
|
||||
// LocalSize returns the number of files, deleted files and total bytes for all
|
||||
// files in the local repository.
|
||||
func (m *Model) LocalSize() (files, deleted int, bytes int64) {
|
||||
m.lmut.RLock()
|
||||
|
||||
for _, f := range m.local {
|
||||
if f.Flags&protocol.FlagDeleted == 0 {
|
||||
files++
|
||||
bytes += f.Size
|
||||
} else {
|
||||
deleted++
|
||||
}
|
||||
}
|
||||
|
||||
m.lmut.RUnlock()
|
||||
return
|
||||
}
|
||||
|
||||
// InSyncSize returns the number and total byte size of the local files that
|
||||
// are in sync with the global model.
|
||||
func (m *Model) InSyncSize() (files, bytes int64) {
|
||||
m.gmut.RLock()
|
||||
m.lmut.RLock()
|
||||
|
||||
for n, f := range m.local {
|
||||
if gf, ok := m.global[n]; ok && f.Equals(gf) {
|
||||
if f.Flags&protocol.FlagDeleted == 0 {
|
||||
files++
|
||||
bytes += f.Size
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
m.lmut.RUnlock()
|
||||
m.gmut.RUnlock()
|
||||
return
|
||||
}
|
||||
|
||||
// NeedFiles returns the list of currently needed files and the total size.
|
||||
func (m *Model) NeedFiles() (files []scanner.File, bytes int64) {
|
||||
qf := m.fq.QueuedFiles()
|
||||
|
||||
m.gmut.RLock()
|
||||
|
||||
for _, n := range qf {
|
||||
f := m.global[n]
|
||||
files = append(files, f)
|
||||
bytes += f.Size
|
||||
}
|
||||
|
||||
m.gmut.RUnlock()
|
||||
return
|
||||
}
|
||||
|
||||
// Index is called when a new node is connected and we receive their full index.
|
||||
// Implements the protocol.Model interface.
|
||||
func (m *Model) Index(nodeID string, fs []protocol.FileInfo) {
|
||||
var files = make([]scanner.File, len(fs))
|
||||
for i := range fs {
|
||||
files[i] = fileFromFileInfo(fs[i])
|
||||
}
|
||||
|
||||
m.imut.Lock()
|
||||
defer m.imut.Unlock()
|
||||
|
||||
if debugNet {
|
||||
dlog.Printf("IDX(in): %s: %d files", nodeID, len(fs))
|
||||
}
|
||||
|
||||
repo := make(map[string]scanner.File)
|
||||
for _, f := range files {
|
||||
m.indexUpdate(repo, f)
|
||||
}
|
||||
|
||||
m.rmut.Lock()
|
||||
m.remote[nodeID] = repo
|
||||
m.rmut.Unlock()
|
||||
|
||||
m.recomputeGlobal()
|
||||
m.recomputeNeedForFiles(files)
|
||||
}
|
||||
|
||||
// IndexUpdate is called for incremental updates to connected nodes' indexes.
|
||||
// Implements the protocol.Model interface.
|
||||
func (m *Model) IndexUpdate(nodeID string, fs []protocol.FileInfo) {
|
||||
var files = make([]scanner.File, len(fs))
|
||||
for i := range fs {
|
||||
files[i] = fileFromFileInfo(fs[i])
|
||||
}
|
||||
|
||||
m.imut.Lock()
|
||||
defer m.imut.Unlock()
|
||||
|
||||
if debugNet {
|
||||
dlog.Printf("IDXUP(in): %s: %d files", nodeID, len(files))
|
||||
}
|
||||
|
||||
m.rmut.Lock()
|
||||
repo, ok := m.remote[nodeID]
|
||||
if !ok {
|
||||
warnf("Index update from node %s that does not have an index", nodeID)
|
||||
m.rmut.Unlock()
|
||||
return
|
||||
}
|
||||
|
||||
for _, f := range files {
|
||||
m.indexUpdate(repo, f)
|
||||
}
|
||||
m.rmut.Unlock()
|
||||
|
||||
m.recomputeGlobal()
|
||||
m.recomputeNeedForFiles(files)
|
||||
}
|
||||
|
||||
func (m *Model) indexUpdate(repo map[string]scanner.File, f scanner.File) {
|
||||
if debugIdx {
|
||||
var flagComment string
|
||||
if f.Flags&protocol.FlagDeleted != 0 {
|
||||
flagComment = " (deleted)"
|
||||
}
|
||||
dlog.Printf("IDX(in): %q m=%d f=%o%s v=%d (%d blocks)", f.Name, f.Modified, f.Flags, flagComment, f.Version, len(f.Blocks))
|
||||
}
|
||||
|
||||
if extraFlags := f.Flags &^ (protocol.FlagInvalid | protocol.FlagDeleted | 0xfff); extraFlags != 0 {
|
||||
warnf("IDX(in): Unknown flags 0x%x in index record %+v", extraFlags, f)
|
||||
return
|
||||
}
|
||||
|
||||
repo[f.Name] = f
|
||||
}
|
||||
|
||||
// Close removes the peer from the model and closes the underlying connection if possible.
|
||||
// Implements the protocol.Model interface.
|
||||
func (m *Model) Close(node string, err error) {
|
||||
if debugNet {
|
||||
dlog.Printf("%s: %v", node, err)
|
||||
}
|
||||
if err == protocol.ErrClusterHash {
|
||||
warnf("Connection to %s closed due to mismatched cluster hash. Ensure that the configured cluster members are identical on both nodes.", node)
|
||||
} else if err != io.EOF {
|
||||
warnf("Connection to %s closed: %v", node, err)
|
||||
}
|
||||
|
||||
m.fq.RemoveAvailable(node)
|
||||
|
||||
m.pmut.Lock()
|
||||
m.rmut.Lock()
|
||||
|
||||
conn, ok := m.rawConn[node]
|
||||
if ok {
|
||||
conn.Close()
|
||||
}
|
||||
|
||||
delete(m.remote, node)
|
||||
delete(m.protoConn, node)
|
||||
delete(m.rawConn, node)
|
||||
|
||||
m.rmut.Unlock()
|
||||
m.pmut.Unlock()
|
||||
|
||||
m.recomputeGlobal()
|
||||
m.recomputeNeedForGlobal()
|
||||
}
|
||||
|
||||
// Request returns the specified data segment by reading it from local disk.
|
||||
// Implements the protocol.Model interface.
|
||||
func (m *Model) Request(nodeID, repo, name string, offset int64, size int) ([]byte, error) {
|
||||
// Verify that the requested file exists in the local and global model.
|
||||
m.lmut.RLock()
|
||||
lf, localOk := m.local[name]
|
||||
m.lmut.RUnlock()
|
||||
|
||||
m.gmut.RLock()
|
||||
_, globalOk := m.global[name]
|
||||
m.gmut.RUnlock()
|
||||
|
||||
if !localOk || !globalOk {
|
||||
warnf("SECURITY (nonexistent file) REQ(in): %s: %q o=%d s=%d", nodeID, name, offset, size)
|
||||
return nil, ErrNoSuchFile
|
||||
}
|
||||
if lf.Suppressed {
|
||||
return nil, ErrInvalid
|
||||
}
|
||||
|
||||
if debugNet && nodeID != "<local>" {
|
||||
dlog.Printf("REQ(in): %s: %q o=%d s=%d", nodeID, name, offset, size)
|
||||
}
|
||||
fn := path.Join(m.dir, name)
|
||||
fd, err := os.Open(fn) // XXX: Inefficient, should cache fd?
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer fd.Close()
|
||||
|
||||
buf := buffers.Get(int(size))
|
||||
_, err = fd.ReadAt(buf, offset)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if m.limitRequestRate != nil {
|
||||
for s := 0; s < len(buf); s += 1024 {
|
||||
<-m.limitRequestRate
|
||||
}
|
||||
}
|
||||
|
||||
return buf, nil
|
||||
}
|
||||
|
||||
// ReplaceLocal replaces the local repository index with the given list of files.
|
||||
func (m *Model) ReplaceLocal(fs []scanner.File) {
|
||||
var updated bool
|
||||
var newLocal = make(map[string]scanner.File)
|
||||
|
||||
m.lmut.RLock()
|
||||
for _, f := range fs {
|
||||
newLocal[f.Name] = f
|
||||
if ef := m.local[f.Name]; !ef.Equals(f) {
|
||||
updated = true
|
||||
}
|
||||
}
|
||||
m.lmut.RUnlock()
|
||||
|
||||
if m.markDeletedLocals(newLocal) {
|
||||
updated = true
|
||||
}
|
||||
|
||||
m.lmut.RLock()
|
||||
if len(newLocal) != len(m.local) {
|
||||
updated = true
|
||||
}
|
||||
m.lmut.RUnlock()
|
||||
|
||||
if updated {
|
||||
m.lmut.Lock()
|
||||
m.local = newLocal
|
||||
m.lmut.Unlock()
|
||||
|
||||
m.recomputeGlobal()
|
||||
m.recomputeNeedForGlobal()
|
||||
|
||||
m.umut.Lock()
|
||||
m.updatedLocal = time.Now().Unix()
|
||||
m.lastIdxBcastRequest = time.Now()
|
||||
m.umut.Unlock()
|
||||
}
|
||||
}
|
||||
|
||||
// SeedLocal replaces the local repository index with the given list of files,
|
||||
// in protocol data types. Does not track deletes, should only be used to seed
|
||||
// the local index from a cache file at startup.
|
||||
func (m *Model) SeedLocal(fs []protocol.FileInfo) {
|
||||
m.lmut.Lock()
|
||||
m.local = make(map[string]scanner.File)
|
||||
for _, f := range fs {
|
||||
m.local[f.Name] = fileFromFileInfo(f)
|
||||
}
|
||||
m.lmut.Unlock()
|
||||
|
||||
m.recomputeGlobal()
|
||||
m.recomputeNeedForGlobal()
|
||||
}
|
||||
|
||||
// Implements scanner.CurrentFiler
|
||||
func (m *Model) CurrentFile(file string) scanner.File {
|
||||
m.lmut.RLock()
|
||||
f := m.local[file]
|
||||
m.lmut.RUnlock()
|
||||
return f
|
||||
}
|
||||
|
||||
// ConnectedTo returns true if we are connected to the named node.
|
||||
func (m *Model) ConnectedTo(nodeID string) bool {
|
||||
m.pmut.RLock()
|
||||
_, ok := m.protoConn[nodeID]
|
||||
m.pmut.RUnlock()
|
||||
return ok
|
||||
}
|
||||
|
||||
// RepoID returns a unique ID representing the current repository location.
|
||||
func (m *Model) RepoID() string {
|
||||
return fmt.Sprintf("%x", sha1.Sum([]byte(m.dir)))
|
||||
}
|
||||
|
||||
// AddConnection adds a new peer connection to the model. An initial index will
|
||||
// be sent to the connected peer, thereafter index updates whenever the local
|
||||
// repository changes.
|
||||
func (m *Model) AddConnection(rawConn io.Closer, protoConn Connection) {
|
||||
nodeID := protoConn.ID()
|
||||
m.pmut.Lock()
|
||||
m.protoConn[nodeID] = protoConn
|
||||
m.rawConn[nodeID] = rawConn
|
||||
m.pmut.Unlock()
|
||||
|
||||
go func() {
|
||||
idx := m.ProtocolIndex()
|
||||
if debugNet {
|
||||
dlog.Printf("IDX(out/initial): %s: %d files", nodeID, len(idx))
|
||||
}
|
||||
protoConn.Index("default", idx)
|
||||
}()
|
||||
|
||||
m.initmut.Lock()
|
||||
rw := m.rwRunning
|
||||
m.initmut.Unlock()
|
||||
if !rw {
|
||||
return
|
||||
}
|
||||
|
||||
for i := 0; i < m.parallelRequests; i++ {
|
||||
i := i
|
||||
go func() {
|
||||
if debugPull {
|
||||
dlog.Println("starting puller:", nodeID, i)
|
||||
}
|
||||
for {
|
||||
m.pmut.RLock()
|
||||
if _, ok := m.protoConn[nodeID]; !ok {
|
||||
if debugPull {
|
||||
dlog.Println("stopping puller:", nodeID, i)
|
||||
}
|
||||
m.pmut.RUnlock()
|
||||
return
|
||||
}
|
||||
m.pmut.RUnlock()
|
||||
|
||||
qb, ok := m.fq.Get(nodeID)
|
||||
if ok {
|
||||
if debugPull {
|
||||
dlog.Println("request: out", nodeID, i, qb.name, qb.block.Offset)
|
||||
}
|
||||
data, _ := protoConn.Request("default", qb.name, qb.block.Offset, int(qb.block.Size))
|
||||
m.fq.Done(qb.name, qb.block.Offset, data)
|
||||
} else {
|
||||
time.Sleep(1 * time.Second)
|
||||
}
|
||||
}
|
||||
}()
|
||||
}
|
||||
}
|
||||
|
||||
// ProtocolIndex returns the current local index in protocol data types.
|
||||
// Must be called with the read lock held.
|
||||
func (m *Model) ProtocolIndex() []protocol.FileInfo {
|
||||
var index []protocol.FileInfo
|
||||
|
||||
m.lmut.RLock()
|
||||
|
||||
for _, f := range m.local {
|
||||
mf := fileInfoFromFile(f)
|
||||
if debugIdx {
|
||||
var flagComment string
|
||||
if mf.Flags&protocol.FlagDeleted != 0 {
|
||||
flagComment = " (deleted)"
|
||||
}
|
||||
dlog.Printf("IDX(out): %q m=%d f=%o%s v=%d (%d blocks)", mf.Name, mf.Modified, mf.Flags, flagComment, mf.Version, len(mf.Blocks))
|
||||
}
|
||||
index = append(index, mf)
|
||||
}
|
||||
|
||||
m.lmut.RUnlock()
|
||||
return index
|
||||
}
|
||||
|
||||
func (m *Model) requestGlobal(nodeID, name string, offset int64, size int, hash []byte) ([]byte, error) {
|
||||
m.pmut.RLock()
|
||||
nc, ok := m.protoConn[nodeID]
|
||||
m.pmut.RUnlock()
|
||||
|
||||
if !ok {
|
||||
return nil, fmt.Errorf("requestGlobal: no such node: %s", nodeID)
|
||||
}
|
||||
|
||||
if debugNet {
|
||||
dlog.Printf("REQ(out): %s: %q o=%d s=%d h=%x", nodeID, name, offset, size, hash)
|
||||
}
|
||||
|
||||
return nc.Request("default", name, offset, size)
|
||||
}
|
||||
|
||||
func (m *Model) broadcastIndexLoop() {
|
||||
for {
|
||||
m.umut.RLock()
|
||||
bcastRequested := m.lastIdxBcastRequest.After(m.lastIdxBcast)
|
||||
holdtimeExceeded := time.Since(m.lastIdxBcastRequest) > idxBcastHoldtime
|
||||
m.umut.RUnlock()
|
||||
|
||||
maxDelayExceeded := time.Since(m.lastIdxBcast) > idxBcastMaxDelay
|
||||
if bcastRequested && (holdtimeExceeded || maxDelayExceeded) {
|
||||
idx := m.ProtocolIndex()
|
||||
|
||||
var indexWg sync.WaitGroup
|
||||
indexWg.Add(len(m.protoConn))
|
||||
|
||||
m.umut.Lock()
|
||||
m.lastIdxBcast = time.Now()
|
||||
m.umut.Unlock()
|
||||
|
||||
m.pmut.RLock()
|
||||
for _, node := range m.protoConn {
|
||||
node := node
|
||||
if debugNet {
|
||||
dlog.Printf("IDX(out/loop): %s: %d files", node.ID(), len(idx))
|
||||
}
|
||||
go func() {
|
||||
node.Index("default", idx)
|
||||
indexWg.Done()
|
||||
}()
|
||||
}
|
||||
m.pmut.RUnlock()
|
||||
|
||||
indexWg.Wait()
|
||||
}
|
||||
time.Sleep(idxBcastHoldtime)
|
||||
}
|
||||
}
|
||||
|
||||
// markDeletedLocals sets the deleted flag on files that have gone missing locally.
|
||||
func (m *Model) markDeletedLocals(newLocal map[string]scanner.File) bool {
|
||||
// For every file in the existing local table, check if they are also
|
||||
// present in the new local table. If they are not, check that we already
|
||||
// had the newest version available according to the global table and if so
|
||||
// note the file as having been deleted.
|
||||
var updated bool
|
||||
|
||||
m.gmut.RLock()
|
||||
m.lmut.RLock()
|
||||
|
||||
for n, f := range m.local {
|
||||
if _, ok := newLocal[n]; !ok {
|
||||
if gf := m.global[n]; !gf.NewerThan(f) {
|
||||
if f.Flags&protocol.FlagDeleted == 0 {
|
||||
f.Flags = protocol.FlagDeleted
|
||||
f.Version++
|
||||
f.Blocks = nil
|
||||
updated = true
|
||||
}
|
||||
newLocal[n] = f
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
m.lmut.RUnlock()
|
||||
m.gmut.RUnlock()
|
||||
|
||||
return updated
|
||||
}
|
||||
|
||||
func (m *Model) updateLocal(f scanner.File) {
|
||||
var updated bool
|
||||
|
||||
m.lmut.Lock()
|
||||
if ef, ok := m.local[f.Name]; !ok || !ef.Equals(f) {
|
||||
m.local[f.Name] = f
|
||||
updated = true
|
||||
}
|
||||
m.lmut.Unlock()
|
||||
|
||||
if updated {
|
||||
m.recomputeGlobal()
|
||||
// We don't recomputeNeed here for two reasons:
|
||||
// - a need shouldn't have arisen due to having a newer local file
|
||||
// - recomputeNeed might call into fq.Add but we might have been called by
|
||||
// fq which would be a deadlock on fq
|
||||
|
||||
m.umut.Lock()
|
||||
m.updatedLocal = time.Now().Unix()
|
||||
m.lastIdxBcastRequest = time.Now()
|
||||
m.umut.Unlock()
|
||||
}
|
||||
}
|
||||
|
||||
/*
|
||||
XXX: Not done, needs elegant handling of availability
|
||||
|
||||
func (m *Model) recomputeGlobalFor(files []scanner.File) bool {
|
||||
m.gmut.Lock()
|
||||
defer m.gmut.Unlock()
|
||||
|
||||
var updated bool
|
||||
for _, f := range files {
|
||||
if gf, ok := m.global[f.Name]; !ok || f.NewerThan(gf) {
|
||||
m.global[f.Name] = f
|
||||
updated = true
|
||||
// Fix availability
|
||||
}
|
||||
}
|
||||
return updated
|
||||
}
|
||||
*/
|
||||
|
||||
func (m *Model) recomputeGlobal() {
|
||||
var newGlobal = make(map[string]scanner.File)
|
||||
|
||||
m.lmut.RLock()
|
||||
for n, f := range m.local {
|
||||
newGlobal[n] = f
|
||||
}
|
||||
m.lmut.RUnlock()
|
||||
|
||||
var available = make(map[string][]string)
|
||||
|
||||
m.rmut.RLock()
|
||||
var highestMod int64
|
||||
for nodeID, fs := range m.remote {
|
||||
for n, nf := range fs {
|
||||
if lf, ok := newGlobal[n]; !ok || nf.NewerThan(lf) {
|
||||
newGlobal[n] = nf
|
||||
available[n] = []string{nodeID}
|
||||
if nf.Modified > highestMod {
|
||||
highestMod = nf.Modified
|
||||
}
|
||||
} else if lf.Equals(nf) {
|
||||
available[n] = append(available[n], nodeID)
|
||||
}
|
||||
}
|
||||
}
|
||||
m.rmut.RUnlock()
|
||||
|
||||
for f, ns := range available {
|
||||
m.fq.SetAvailable(f, ns)
|
||||
}
|
||||
|
||||
// Figure out if anything actually changed
|
||||
|
||||
m.gmut.RLock()
|
||||
var updated bool
|
||||
if highestMod > m.updateGlobal || len(newGlobal) != len(m.global) {
|
||||
updated = true
|
||||
} else {
|
||||
for n, f0 := range newGlobal {
|
||||
if f1, ok := m.global[n]; !ok || !f0.Equals(f1) {
|
||||
updated = true
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
m.gmut.RUnlock()
|
||||
|
||||
if updated {
|
||||
m.gmut.Lock()
|
||||
m.umut.Lock()
|
||||
m.global = newGlobal
|
||||
m.updateGlobal = time.Now().Unix()
|
||||
m.umut.Unlock()
|
||||
m.gmut.Unlock()
|
||||
}
|
||||
}
|
||||
|
||||
type addOrder struct {
|
||||
n string
|
||||
remote []scanner.Block
|
||||
fm *fileMonitor
|
||||
}
|
||||
|
||||
func (m *Model) recomputeNeedForGlobal() {
|
||||
var toDelete []scanner.File
|
||||
var toAdd []addOrder
|
||||
|
||||
m.gmut.RLock()
|
||||
|
||||
for _, gf := range m.global {
|
||||
toAdd, toDelete = m.recomputeNeedForFile(gf, toAdd, toDelete)
|
||||
}
|
||||
|
||||
m.gmut.RUnlock()
|
||||
|
||||
for _, ao := range toAdd {
|
||||
m.fq.Add(ao.n, ao.remote, ao.fm)
|
||||
}
|
||||
for _, gf := range toDelete {
|
||||
m.dq <- gf
|
||||
}
|
||||
}
|
||||
|
||||
func (m *Model) recomputeNeedForFiles(files []scanner.File) {
|
||||
var toDelete []scanner.File
|
||||
var toAdd []addOrder
|
||||
|
||||
m.gmut.RLock()
|
||||
|
||||
for _, gf := range files {
|
||||
toAdd, toDelete = m.recomputeNeedForFile(gf, toAdd, toDelete)
|
||||
}
|
||||
|
||||
m.gmut.RUnlock()
|
||||
|
||||
for _, ao := range toAdd {
|
||||
m.fq.Add(ao.n, ao.remote, ao.fm)
|
||||
}
|
||||
for _, gf := range toDelete {
|
||||
m.dq <- gf
|
||||
}
|
||||
}
|
||||
|
||||
func (m *Model) recomputeNeedForFile(gf scanner.File, toAdd []addOrder, toDelete []scanner.File) ([]addOrder, []scanner.File) {
|
||||
m.lmut.RLock()
|
||||
lf, ok := m.local[gf.Name]
|
||||
m.lmut.RUnlock()
|
||||
|
||||
if !ok || gf.NewerThan(lf) {
|
||||
if gf.Suppressed {
|
||||
// Never attempt to sync invalid files
|
||||
return toAdd, toDelete
|
||||
}
|
||||
if gf.Flags&protocol.FlagDeleted != 0 && !m.delete {
|
||||
// Don't want to delete files, so forget this need
|
||||
return toAdd, toDelete
|
||||
}
|
||||
if gf.Flags&protocol.FlagDeleted != 0 && !ok {
|
||||
// Don't have the file, so don't need to delete it
|
||||
return toAdd, toDelete
|
||||
}
|
||||
if debugNeed {
|
||||
dlog.Printf("need: lf:%v gf:%v", lf, gf)
|
||||
}
|
||||
|
||||
if gf.Flags&protocol.FlagDeleted != 0 {
|
||||
toDelete = append(toDelete, gf)
|
||||
} else {
|
||||
local, remote := scanner.BlockDiff(lf.Blocks, gf.Blocks)
|
||||
fm := fileMonitor{
|
||||
name: gf.Name,
|
||||
path: path.Clean(path.Join(m.dir, gf.Name)),
|
||||
global: gf,
|
||||
model: m,
|
||||
localBlocks: local,
|
||||
}
|
||||
toAdd = append(toAdd, addOrder{gf.Name, remote, &fm})
|
||||
}
|
||||
}
|
||||
|
||||
return toAdd, toDelete
|
||||
}
|
||||
|
||||
func (m *Model) WhoHas(name string) []string {
|
||||
var remote []string
|
||||
|
||||
m.gmut.RLock()
|
||||
m.rmut.RLock()
|
||||
|
||||
gf := m.global[name]
|
||||
for node, files := range m.remote {
|
||||
if file, ok := files[name]; ok && file.Equals(gf) {
|
||||
remote = append(remote, node)
|
||||
}
|
||||
}
|
||||
|
||||
m.rmut.RUnlock()
|
||||
m.gmut.RUnlock()
|
||||
return remote
|
||||
}
|
||||
|
||||
func (m *Model) deleteLoop() {
|
||||
for file := range m.dq {
|
||||
if debugPull {
|
||||
dlog.Println("delete", file.Name)
|
||||
}
|
||||
path := path.Clean(path.Join(m.dir, file.Name))
|
||||
err := os.Remove(path)
|
||||
if err != nil {
|
||||
warnf("%s: %v", file.Name, err)
|
||||
}
|
||||
|
||||
m.updateLocal(file)
|
||||
}
|
||||
}
|
||||
|
||||
func fileFromFileInfo(f protocol.FileInfo) scanner.File {
|
||||
var blocks = make([]scanner.Block, len(f.Blocks))
|
||||
var offset int64
|
||||
for i, b := range f.Blocks {
|
||||
blocks[i] = scanner.Block{
|
||||
Offset: offset,
|
||||
Size: b.Size,
|
||||
Hash: b.Hash,
|
||||
}
|
||||
offset += int64(b.Size)
|
||||
}
|
||||
return scanner.File{
|
||||
Name: f.Name,
|
||||
Size: offset,
|
||||
Flags: f.Flags &^ protocol.FlagInvalid,
|
||||
Modified: f.Modified,
|
||||
Version: f.Version,
|
||||
Blocks: blocks,
|
||||
Suppressed: f.Flags&protocol.FlagInvalid != 0,
|
||||
}
|
||||
}
|
||||
|
||||
func fileInfoFromFile(f scanner.File) protocol.FileInfo {
|
||||
var blocks = make([]protocol.BlockInfo, len(f.Blocks))
|
||||
for i, b := range f.Blocks {
|
||||
blocks[i] = protocol.BlockInfo{
|
||||
Size: b.Size,
|
||||
Hash: b.Hash,
|
||||
}
|
||||
}
|
||||
pf := protocol.FileInfo{
|
||||
Name: f.Name,
|
||||
Flags: f.Flags,
|
||||
Modified: f.Modified,
|
||||
Version: f.Version,
|
||||
Blocks: blocks,
|
||||
}
|
||||
if f.Suppressed {
|
||||
pf.Flags |= protocol.FlagInvalid
|
||||
}
|
||||
return pf
|
||||
}
|
||||
cmd/syncthing/model_test.go (Normal file, 556 lines)
@@ -0,0 +1,556 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"os"
|
||||
"reflect"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/calmh/syncthing/protocol"
|
||||
"github.com/calmh/syncthing/scanner"
|
||||
)
|
||||
|
||||
func TestNewModel(t *testing.T) {
|
||||
m := NewModel("foo", 1e6)
|
||||
|
||||
if m == nil {
|
||||
t.Fatalf("NewModel returned nil")
|
||||
}
|
||||
|
||||
if fs, _ := m.NeedFiles(); len(fs) > 0 {
|
||||
t.Errorf("New model should have no Need")
|
||||
}
|
||||
|
||||
if len(m.local) > 0 {
|
||||
t.Errorf("New model should have no Have")
|
||||
}
|
||||
}
|
||||
|
||||
var testDataExpected = map[string]scanner.File{
|
||||
"foo": scanner.File{
|
||||
Name: "foo",
|
||||
Flags: 0,
|
||||
Modified: 0,
|
||||
Size: 7,
|
||||
Blocks: []scanner.Block{{Offset: 0x0, Size: 0x7, Hash: []uint8{0xae, 0xc0, 0x70, 0x64, 0x5f, 0xe5, 0x3e, 0xe3, 0xb3, 0x76, 0x30, 0x59, 0x37, 0x61, 0x34, 0xf0, 0x58, 0xcc, 0x33, 0x72, 0x47, 0xc9, 0x78, 0xad, 0xd1, 0x78, 0xb6, 0xcc, 0xdf, 0xb0, 0x1, 0x9f}}},
|
||||
},
|
||||
"empty": scanner.File{
|
||||
Name: "empty",
|
||||
Flags: 0,
|
||||
Modified: 0,
|
||||
Size: 0,
|
||||
Blocks: []scanner.Block{{Offset: 0x0, Size: 0x0, Hash: []uint8{0xe3, 0xb0, 0xc4, 0x42, 0x98, 0xfc, 0x1c, 0x14, 0x9a, 0xfb, 0xf4, 0xc8, 0x99, 0x6f, 0xb9, 0x24, 0x27, 0xae, 0x41, 0xe4, 0x64, 0x9b, 0x93, 0x4c, 0xa4, 0x95, 0x99, 0x1b, 0x78, 0x52, 0xb8, 0x55}}},
|
||||
},
|
||||
"bar": scanner.File{
|
||||
Name: "bar",
|
||||
Flags: 0,
|
||||
Modified: 0,
|
||||
Size: 10,
|
||||
Blocks: []scanner.Block{{Offset: 0x0, Size: 0xa, Hash: []uint8{0x2f, 0x72, 0xcc, 0x11, 0xa6, 0xfc, 0xd0, 0x27, 0x1e, 0xce, 0xf8, 0xc6, 0x10, 0x56, 0xee, 0x1e, 0xb1, 0x24, 0x3b, 0xe3, 0x80, 0x5b, 0xf9, 0xa9, 0xdf, 0x98, 0xf9, 0x2f, 0x76, 0x36, 0xb0, 0x5c}}},
|
||||
},
|
||||
}
|
||||
|
||||
func init() {
|
||||
// Fix expected test data to match reality
|
||||
for n, f := range testDataExpected {
|
||||
fi, _ := os.Stat("testdata/" + n)
|
||||
f.Flags = uint32(fi.Mode())
|
||||
f.Modified = fi.ModTime().Unix()
|
||||
testDataExpected[n] = f
|
||||
}
|
||||
}
|
||||
|
||||
func TestUpdateLocal(t *testing.T) {
|
||||
m := NewModel("testdata", 1e6)
|
||||
w := scanner.Walker{Dir: "testdata", IgnoreFile: ".stignore", BlockSize: 128 * 1024}
|
||||
fs, _ := w.Walk()
|
||||
m.ReplaceLocal(fs)
|
||||
|
||||
if fs, _ := m.NeedFiles(); len(fs) > 0 {
|
||||
t.Fatalf("Model with only local data should have no need")
|
||||
}
|
||||
|
||||
if l1, l2 := len(m.local), len(testDataExpected); l1 != l2 {
|
||||
t.Fatalf("Model len(local) incorrect, %d != %d", l1, l2)
|
||||
}
|
||||
if l1, l2 := len(m.global), len(testDataExpected); l1 != l2 {
|
||||
t.Fatalf("Model len(global) incorrect, %d != %d", l1, l2)
|
||||
}
|
||||
for name, file := range testDataExpected {
|
||||
if f, ok := m.local[name]; ok {
|
||||
if !reflect.DeepEqual(f, file) {
|
||||
t.Errorf("Incorrect local\n%v !=\n%v\nfor file %q", f, file, name)
|
||||
}
|
||||
} else {
|
||||
t.Errorf("Missing file %q in local table", name)
|
||||
}
|
||||
if f, ok := m.global[name]; ok {
|
||||
if !reflect.DeepEqual(f, file) {
|
||||
t.Errorf("Incorrect global\n%v !=\n%v\nfor file %q", f, file, name)
|
||||
}
|
||||
} else {
|
||||
t.Errorf("Missing file %q in global table", name)
|
||||
}
|
||||
}
|
||||
|
||||
for _, f := range fs {
|
||||
if hf, ok := m.local[f.Name]; !ok || hf.Modified != f.Modified {
|
||||
t.Fatalf("Incorrect local for %q", f.Name)
|
||||
}
|
||||
if cf, ok := m.global[f.Name]; !ok || cf.Modified != f.Modified {
|
||||
t.Fatalf("Incorrect global for %q", f.Name)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestRemoteUpdateExisting(t *testing.T) {
|
||||
m := NewModel("testdata", 1e6)
|
||||
w := scanner.Walker{Dir: "testdata", IgnoreFile: ".stignore", BlockSize: 128 * 1024}
|
||||
fs, _ := w.Walk()
|
||||
m.ReplaceLocal(fs)
|
||||
|
||||
newFile := protocol.FileInfo{
|
||||
Name: "foo",
|
||||
Modified: time.Now().Unix(),
|
||||
Blocks: []protocol.BlockInfo{{100, []byte("some hash bytes")}},
|
||||
}
|
||||
m.Index("42", []protocol.FileInfo{newFile})
|
||||
|
||||
if fs, _ := m.NeedFiles(); len(fs) != 1 {
|
||||
t.Errorf("Model missing Need for one file (%d != 1)", len(fs))
|
||||
}
|
||||
}
|
||||
|
||||
func TestRemoteAddNew(t *testing.T) {
|
||||
m := NewModel("testdata", 1e6)
|
||||
w := scanner.Walker{Dir: "testdata", IgnoreFile: ".stignore", BlockSize: 128 * 1024}
|
||||
fs, _ := w.Walk()
|
||||
m.ReplaceLocal(fs)
|
||||
|
||||
newFile := protocol.FileInfo{
|
||||
Name: "a new file",
|
||||
Modified: time.Now().Unix(),
|
||||
Blocks: []protocol.BlockInfo{{100, []byte("some hash bytes")}},
|
||||
}
|
||||
m.Index("42", []protocol.FileInfo{newFile})
|
||||
|
||||
if fs, _ := m.NeedFiles(); len(fs) != 1 {
|
||||
t.Errorf("Model len(m.need) incorrect (%d != 1)", len(fs))
|
||||
}
|
||||
}
|
||||
|
||||
func TestRemoteUpdateOld(t *testing.T) {
|
||||
m := NewModel("testdata", 1e6)
|
||||
w := scanner.Walker{Dir: "testdata", IgnoreFile: ".stignore", BlockSize: 128 * 1024}
|
||||
fs, _ := w.Walk()
|
||||
m.ReplaceLocal(fs)
|
||||
|
||||
oldTimeStamp := int64(1234)
|
||||
newFile := protocol.FileInfo{
|
||||
Name: "foo",
|
||||
Modified: oldTimeStamp,
|
||||
Blocks: []protocol.BlockInfo{{100, []byte("some hash bytes")}},
|
||||
}
|
||||
m.Index("42", []protocol.FileInfo{newFile})
|
||||
|
||||
if fs, _ := m.NeedFiles(); len(fs) != 0 {
|
||||
t.Errorf("Model len(need) incorrect (%d != 0)", len(fs))
|
||||
}
|
||||
}
|
||||
|
||||
func TestRemoteIndexUpdate(t *testing.T) {
|
||||
m := NewModel("testdata", 1e6)
|
||||
w := scanner.Walker{Dir: "testdata", IgnoreFile: ".stignore", BlockSize: 128 * 1024}
|
||||
fs, _ := w.Walk()
|
||||
m.ReplaceLocal(fs)
|
||||
|
||||
foo := protocol.FileInfo{
|
||||
Name: "foo",
|
||||
Modified: time.Now().Unix(),
|
||||
Blocks: []protocol.BlockInfo{{100, []byte("some hash bytes")}},
|
||||
}
|
||||
|
||||
bar := protocol.FileInfo{
|
||||
Name: "bar",
|
||||
Modified: time.Now().Unix(),
|
||||
Blocks: []protocol.BlockInfo{{100, []byte("some hash bytes")}},
|
||||
}
|
||||
|
||||
m.Index("42", []protocol.FileInfo{foo})
|
||||
|
||||
if fs, _ := m.NeedFiles(); fs[0].Name != "foo" {
|
||||
t.Error("Model doesn't need 'foo'")
|
||||
}
|
||||
|
||||
m.IndexUpdate("42", []protocol.FileInfo{bar})
|
||||
|
||||
if fs, _ := m.NeedFiles(); fs[0].Name != "foo" {
|
||||
t.Error("Model doesn't need 'foo'")
|
||||
}
|
||||
if fs, _ := m.NeedFiles(); fs[1].Name != "bar" {
|
||||
t.Error("Model doesn't need 'bar'")
|
||||
}
|
||||
}
|
||||
|
||||
func TestDelete(t *testing.T) {
|
||||
m := NewModel("testdata", 1e6)
|
||||
w := scanner.Walker{Dir: "testdata", IgnoreFile: ".stignore", BlockSize: 128 * 1024}
|
||||
fs, _ := w.Walk()
|
||||
m.ReplaceLocal(fs)
|
||||
|
||||
if l1, l2 := len(m.local), len(fs); l1 != l2 {
|
||||
t.Errorf("Model len(local) incorrect (%d != %d)", l1, l2)
|
||||
}
|
||||
if l1, l2 := len(m.global), len(fs); l1 != l2 {
|
||||
t.Errorf("Model len(global) incorrect (%d != %d)", l1, l2)
|
||||
}
|
||||
|
||||
ot := time.Now().Unix()
|
||||
newFile := scanner.File{
|
||||
Name: "a new file",
|
||||
Modified: ot,
|
||||
Blocks: []scanner.Block{{0, 100, []byte("some hash bytes")}},
|
||||
}
|
||||
m.updateLocal(newFile)
|
||||
|
||||
if l1, l2 := len(m.local), len(fs)+1; l1 != l2 {
|
||||
t.Errorf("Model len(local) incorrect (%d != %d)", l1, l2)
|
||||
}
|
||||
if l1, l2 := len(m.global), len(fs)+1; l1 != l2 {
|
||||
t.Errorf("Model len(global) incorrect (%d != %d)", l1, l2)
|
||||
}
|
||||
|
||||
// The deleted file is kept in the local and global tables and marked as deleted.
|
||||
|
||||
m.ReplaceLocal(fs)
|
||||
|
||||
if l1, l2 := len(m.local), len(fs)+1; l1 != l2 {
|
||||
t.Errorf("Model len(local) incorrect (%d != %d)", l1, l2)
|
||||
}
|
||||
if l1, l2 := len(m.global), len(fs)+1; l1 != l2 {
|
||||
t.Errorf("Model len(global) incorrect (%d != %d)", l1, l2)
|
||||
}
|
||||
|
||||
if m.local["a new file"].Flags&(1<<12) == 0 {
|
||||
t.Error("Unexpected deleted flag = 0 in local table")
|
||||
}
|
||||
if len(m.local["a new file"].Blocks) != 0 {
|
||||
t.Error("Unexpected non-zero blocks for deleted file in local")
|
||||
}
|
||||
if ft := m.local["a new file"].Modified; ft != ot {
|
||||
t.Errorf("Unexpected time %d != %d for deleted file in local", ft, ot+1)
|
||||
}
|
||||
if fv := m.local["a new file"].Version; fv != 1 {
|
||||
t.Errorf("Unexpected version %d != 1 for deleted file in local", fv)
|
||||
}
|
||||
|
||||
if m.global["a new file"].Flags&(1<<12) == 0 {
|
||||
t.Error("Unexpected deleted flag = 0 in global table")
|
||||
}
|
||||
if len(m.global["a new file"].Blocks) != 0 {
|
||||
t.Error("Unexpected non-zero blocks for deleted file in global")
|
||||
}
|
||||
if ft := m.global["a new file"].Modified; ft != ot {
|
||||
t.Errorf("Unexpected time %d != %d for deleted file in global", ft, ot+1)
|
||||
}
|
||||
if fv := m.local["a new file"].Version; fv != 1 {
|
||||
t.Errorf("Unexpected version %d != 1 for deleted file in global", fv)
|
||||
}
|
||||
|
||||
// Another update should change nothing
|
||||
|
||||
m.ReplaceLocal(fs)
|
||||
|
||||
if l1, l2 := len(m.local), len(fs)+1; l1 != l2 {
|
||||
t.Errorf("Model len(local) incorrect (%d != %d)", l1, l2)
|
||||
}
|
||||
if l1, l2 := len(m.global), len(fs)+1; l1 != l2 {
|
||||
t.Errorf("Model len(global) incorrect (%d != %d)", l1, l2)
|
||||
}
|
||||
|
||||
if m.local["a new file"].Flags&(1<<12) == 0 {
|
||||
t.Error("Unexpected deleted flag = 0 in local table")
|
||||
}
|
||||
if len(m.local["a new file"].Blocks) != 0 {
|
||||
t.Error("Unexpected non-zero blocks for deleted file in local")
|
||||
}
|
||||
if ft := m.local["a new file"].Modified; ft != ot {
|
||||
t.Errorf("Unexpected time %d != %d for deleted file in local", ft, ot)
|
||||
}
|
||||
if fv := m.local["a new file"].Version; fv != 1 {
|
||||
t.Errorf("Unexpected version %d != 1 for deleted file in local", fv)
|
||||
}
|
||||
|
||||
if m.global["a new file"].Flags&(1<<12) == 0 {
|
||||
t.Error("Unexpected deleted flag = 0 in global table")
|
||||
}
|
||||
if len(m.global["a new file"].Blocks) != 0 {
|
||||
t.Error("Unexpected non-zero blocks for deleted file in global")
|
||||
}
|
||||
if ft := m.global["a new file"].Modified; ft != ot {
|
||||
t.Errorf("Unexpected time %d != %d for deleted file in global", ft, ot)
|
||||
}
|
||||
if fv := m.local["a new file"].Version; fv != 1 {
|
||||
t.Errorf("Unexpected version %d != 1 for deleted file in global", fv)
|
||||
}
|
||||
}
|
||||
|
||||
func TestForgetNode(t *testing.T) {
|
||||
m := NewModel("testdata", 1e6)
|
||||
w := scanner.Walker{Dir: "testdata", IgnoreFile: ".stignore", BlockSize: 128 * 1024}
|
||||
fs, _ := w.Walk()
|
||||
m.ReplaceLocal(fs)
|
||||
|
||||
if l1, l2 := len(m.local), len(fs); l1 != l2 {
|
||||
t.Errorf("Model len(local) incorrect (%d != %d)", l1, l2)
|
||||
}
|
||||
if l1, l2 := len(m.global), len(fs); l1 != l2 {
|
||||
t.Errorf("Model len(global) incorrect (%d != %d)", l1, l2)
|
||||
}
|
||||
if fs, _ := m.NeedFiles(); len(fs) != 0 {
|
||||
t.Errorf("Model len(need) incorrect (%d != 0)", len(fs))
|
||||
}
|
||||
|
||||
newFile := protocol.FileInfo{
|
||||
Name: "new file",
|
||||
Modified: time.Now().Unix(),
|
||||
Blocks: []protocol.BlockInfo{{100, []byte("some hash bytes")}},
|
||||
}
|
||||
m.Index("42", []protocol.FileInfo{newFile})
|
||||
|
||||
newFile = protocol.FileInfo{
|
||||
Name: "new file 2",
|
||||
Modified: time.Now().Unix(),
|
||||
Blocks: []protocol.BlockInfo{{100, []byte("some hash bytes")}},
|
||||
}
|
||||
m.Index("43", []protocol.FileInfo{newFile})
|
||||
|
||||
if l1, l2 := len(m.local), len(fs); l1 != l2 {
|
||||
t.Errorf("Model len(local) incorrect (%d != %d)", l1, l2)
|
||||
}
|
||||
if l1, l2 := len(m.global), len(fs)+2; l1 != l2 {
|
||||
t.Errorf("Model len(global) incorrect (%d != %d)", l1, l2)
|
||||
}
|
||||
if fs, _ := m.NeedFiles(); len(fs) != 2 {
|
||||
t.Errorf("Model len(need) incorrect (%d != 2)", len(fs))
|
||||
}
|
||||
|
||||
m.Close("42", nil)
|
||||
|
||||
if l1, l2 := len(m.local), len(fs); l1 != l2 {
|
||||
t.Errorf("Model len(local) incorrect (%d != %d)", l1, l2)
|
||||
}
|
||||
if l1, l2 := len(m.global), len(fs)+1; l1 != l2 {
|
||||
t.Errorf("Model len(global) incorrect (%d != %d)", l1, l2)
|
||||
}
|
||||
|
||||
if fs, _ := m.NeedFiles(); len(fs) != 1 {
|
||||
t.Errorf("Model len(need) incorrect (%d != 1)", len(fs))
|
||||
}
|
||||
}
|
||||
|
||||
func TestRequest(t *testing.T) {
|
||||
m := NewModel("testdata", 1e6)
|
||||
w := scanner.Walker{Dir: "testdata", IgnoreFile: ".stignore", BlockSize: 128 * 1024}
|
||||
fs, _ := w.Walk()
|
||||
m.ReplaceLocal(fs)
|
||||
|
||||
bs, err := m.Request("some node", "default", "foo", 0, 6)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if bytes.Compare(bs, []byte("foobar")) != 0 {
|
||||
t.Errorf("Incorrect data from request: %q", string(bs))
|
||||
}
|
||||
|
||||
bs, err = m.Request("some node", "default", "../walk.go", 0, 6)
|
||||
if err == nil {
|
||||
t.Error("Unexpected nil error on insecure file read")
|
||||
}
|
||||
if bs != nil {
|
||||
t.Errorf("Unexpected non nil data on insecure file read: %q", string(bs))
|
||||
}
|
||||
}
|
||||
|
||||
func TestIgnoreWithUnknownFlags(t *testing.T) {
|
||||
m := NewModel("testdata", 1e6)
|
||||
w := scanner.Walker{Dir: "testdata", IgnoreFile: ".stignore", BlockSize: 128 * 1024}
|
||||
fs, _ := w.Walk()
|
||||
m.ReplaceLocal(fs)
|
||||
|
||||
valid := protocol.FileInfo{
|
||||
Name: "valid",
|
||||
Modified: time.Now().Unix(),
|
||||
Blocks: []protocol.BlockInfo{{100, []byte("some hash bytes")}},
|
||||
Flags: protocol.FlagDeleted | 0755,
|
||||
}
|
||||
|
||||
invalid := protocol.FileInfo{
|
||||
Name: "invalid",
|
||||
Modified: time.Now().Unix(),
|
||||
Blocks: []protocol.BlockInfo{{100, []byte("some hash bytes")}},
|
||||
Flags: 1<<27 | protocol.FlagDeleted | 0755,
|
||||
}
|
||||
|
||||
m.Index("42", []protocol.FileInfo{valid, invalid})
|
||||
|
||||
if _, ok := m.global[valid.Name]; !ok {
|
||||
t.Error("Model should include", valid)
|
||||
}
|
||||
if _, ok := m.global[invalid.Name]; ok {
|
||||
t.Error("Model not should include", invalid)
|
||||
}
|
||||
}
|
||||
|
||||
func genFiles(n int) []protocol.FileInfo {
|
||||
files := make([]protocol.FileInfo, n)
|
||||
t := time.Now().Unix()
|
||||
for i := 0; i < n; i++ {
|
||||
files[i] = protocol.FileInfo{
|
||||
Name: fmt.Sprintf("file%d", i),
|
||||
Modified: t,
|
||||
Blocks: []protocol.BlockInfo{{100, []byte("some hash bytes")}},
|
||||
}
|
||||
}
|
||||
|
||||
return files
|
||||
}
|
||||
|
||||
func BenchmarkIndex10000(b *testing.B) {
|
||||
m := NewModel("testdata", 1e6)
|
||||
w := scanner.Walker{Dir: "testdata", IgnoreFile: ".stignore", BlockSize: 128 * 1024}
|
||||
fs, _ := w.Walk()
|
||||
m.ReplaceLocal(fs)
|
||||
files := genFiles(10000)
|
||||
|
||||
b.ResetTimer()
|
||||
for i := 0; i < b.N; i++ {
|
||||
m.Index("42", files)
|
||||
}
|
||||
}
|
||||
|
||||
func BenchmarkIndex00100(b *testing.B) {
|
||||
m := NewModel("testdata", 1e6)
|
||||
w := scanner.Walker{Dir: "testdata", IgnoreFile: ".stignore", BlockSize: 128 * 1024}
|
||||
fs, _ := w.Walk()
|
||||
m.ReplaceLocal(fs)
|
||||
files := genFiles(100)
|
||||
|
||||
b.ResetTimer()
|
||||
for i := 0; i < b.N; i++ {
|
||||
m.Index("42", files)
|
||||
}
|
||||
}
|
||||
|
||||
func BenchmarkIndexUpdate10000f10000(b *testing.B) {
|
||||
m := NewModel("testdata", 1e6)
|
||||
w := scanner.Walker{Dir: "testdata", IgnoreFile: ".stignore", BlockSize: 128 * 1024}
|
||||
fs, _ := w.Walk()
|
||||
m.ReplaceLocal(fs)
|
||||
files := genFiles(10000)
|
||||
m.Index("42", files)
|
||||
|
||||
b.ResetTimer()
|
||||
for i := 0; i < b.N; i++ {
|
||||
m.IndexUpdate("42", files)
|
||||
}
|
||||
}
|
||||
|
||||
func BenchmarkIndexUpdate10000f00100(b *testing.B) {
|
||||
m := NewModel("testdata", 1e6)
|
||||
w := scanner.Walker{Dir: "testdata", IgnoreFile: ".stignore", BlockSize: 128 * 1024}
|
||||
fs, _ := w.Walk()
|
||||
m.ReplaceLocal(fs)
|
||||
files := genFiles(10000)
|
||||
m.Index("42", files)
|
||||
|
||||
ufiles := genFiles(100)
|
||||
b.ResetTimer()
|
||||
for i := 0; i < b.N; i++ {
|
||||
m.IndexUpdate("42", ufiles)
|
||||
}
|
||||
}
|
||||
|
||||
func BenchmarkIndexUpdate10000f00001(b *testing.B) {
|
||||
m := NewModel("testdata", 1e6)
|
||||
w := scanner.Walker{Dir: "testdata", IgnoreFile: ".stignore", BlockSize: 128 * 1024}
|
||||
fs, _ := w.Walk()
|
||||
m.ReplaceLocal(fs)
|
||||
files := genFiles(10000)
|
||||
m.Index("42", files)
|
||||
|
||||
ufiles := genFiles(1)
|
||||
b.ResetTimer()
|
||||
for i := 0; i < b.N; i++ {
|
||||
m.IndexUpdate("42", ufiles)
|
||||
}
|
||||
}
|
||||
|
||||
type FakeConnection struct {
|
||||
id string
|
||||
requestData []byte
|
||||
}
|
||||
|
||||
func (FakeConnection) Close() error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (f FakeConnection) ID() string {
|
||||
return string(f.id)
|
||||
}
|
||||
|
||||
func (f FakeConnection) Option(string) string {
|
||||
return ""
|
||||
}
|
||||
|
||||
func (FakeConnection) Index(string, []protocol.FileInfo) {}
|
||||
|
||||
func (f FakeConnection) Request(repo, name string, offset int64, size int) ([]byte, error) {
|
||||
return f.requestData, nil
|
||||
}
|
||||
|
||||
func (FakeConnection) Ping() bool {
|
||||
return true
|
||||
}
|
||||
|
||||
func (FakeConnection) Statistics() protocol.Statistics {
|
||||
return protocol.Statistics{}
|
||||
}
|
||||
|
||||
func BenchmarkRequest(b *testing.B) {
|
||||
m := NewModel("testdata", 1e6)
|
||||
w := scanner.Walker{Dir: "testdata", IgnoreFile: ".stignore", BlockSize: 128 * 1024}
|
||||
fs, _ := w.Walk()
|
||||
m.ReplaceLocal(fs)
|
||||
|
||||
const n = 1000
|
||||
files := make([]protocol.FileInfo, n)
|
||||
t := time.Now().Unix()
|
||||
for i := 0; i < n; i++ {
|
||||
files[i] = protocol.FileInfo{
|
||||
Name: fmt.Sprintf("file%d", i),
|
||||
Modified: t,
|
||||
Blocks: []protocol.BlockInfo{{100, []byte("some hash bytes")}},
|
||||
}
|
||||
}
|
||||
|
||||
fc := FakeConnection{
|
||||
id: "42",
|
||||
requestData: []byte("some data to return"),
|
||||
}
|
||||
m.AddConnection(fc, fc)
|
||||
m.Index("42", files)
|
||||
|
||||
b.ResetTimer()
|
||||
for i := 0; i < b.N; i++ {
|
||||
data, err := m.requestGlobal("42", files[i%n].Name, 0, 32, nil)
|
||||
if err != nil {
|
||||
b.Error(err)
|
||||
}
|
||||
if data == nil {
|
||||
b.Error("nil data")
|
||||
}
|
||||
}
|
||||
}
|
||||
cmd/syncthing/openurl.go (new file, 34 lines)
/*
Copyright 2011 Google Inc.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package main

import (
    "os/exec"
    "runtime"
)

func openURL(url string) error {
    if runtime.GOOS == "windows" {
        return exec.Command("cmd.exe", "/C", "start "+url).Run()
    }

    if runtime.GOOS == "darwin" {
        return exec.Command("open", url).Run()
    }

    return exec.Command("xdg-open", url).Run()
}
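This helper is presumably what points the user's browser at the web GUI; as a minimal usage sketch (the address and the surrounding error handling are illustrative assumptions, not taken from this changeset):

    // Hypothetical call site: open the web GUI once the HTTP listener is up.
    if err := openURL("http://127.0.0.1:8080/"); err != nil {
        infoln("Could not open browser: " + err.Error())
    }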
cmd/syncthing/suppressor.go (new file, 78 lines)
package main

import (
    "os"
    "sync"
    "time"
)

const (
    MaxChangeHistory = 4
)

type change struct {
    size int64
    when time.Time
}

type changeHistory struct {
    changes []change
    next    int64
    prevSup bool
}

type suppressor struct {
    sync.Mutex
    changes   map[string]changeHistory
    threshold int64 // bytes/s
}

func (h changeHistory) bandwidth(t time.Time) int64 {
    if len(h.changes) == 0 {
        return 0
    }

    var t0 = h.changes[0].when
    if t == t0 {
        return 0
    }

    var bw float64
    for _, c := range h.changes {
        bw += float64(c.size)
    }
    return int64(bw / t.Sub(t0).Seconds())
}

func (h *changeHistory) append(size int64, t time.Time) {
    c := change{size, t}
    if len(h.changes) == MaxChangeHistory {
        h.changes = h.changes[1:MaxChangeHistory]
    }
    h.changes = append(h.changes, c)
}

func (s *suppressor) Suppress(name string, fi os.FileInfo) bool {
    sup, _ := s.suppress(name, fi.Size(), time.Now())
    return sup
}

func (s *suppressor) suppress(name string, size int64, t time.Time) (bool, bool) {
    s.Lock()

    if s.changes == nil {
        s.changes = make(map[string]changeHistory)
    }
    h := s.changes[name]
    sup := h.bandwidth(t) > s.threshold
    prevSup := h.prevSup
    h.prevSup = sup
    if !sup {
        h.append(size, t)
    }
    s.changes[name] = h

    s.Unlock()

    return sup, prevSup
}
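To make the thresholding concrete, here is a minimal sketch of driving the suppressor directly, for example from a test in this package; the file name, sizes, and the use of the unexported suppress method with explicit timestamps are illustrative assumptions, and fmt and time are assumed to be imported:

    func suppressorUsageSketch() {
        s := suppressor{threshold: 10000} // tolerate roughly 10 kB/s of change per file

        t0 := time.Now()
        s.suppress("big.log", 50000, t0) // the first change is never suppressed

        // One second later the estimated rate is 50000 B/s, over the threshold,
        // so this change would be suppressed (and not recorded in the history).
        sup, _ := s.suppress("big.log", 50000, t0.Add(time.Second))
        fmt.Println(sup) // true
    }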
cmd/syncthing/suppressor_test.go (new file, 113 lines)
|
||||
package main
|
||||
|
||||
import (
|
||||
"testing"
|
||||
"time"
|
||||
)
|
||||
|
||||
func TestSuppressor(t *testing.T) {
|
||||
s := suppressor{threshold: 10000}
|
||||
t0 := time.Now()
|
||||
|
||||
t1 := t0
|
||||
sup, prev := s.suppress("foo", 10000, t1)
|
||||
if sup {
|
||||
t.Fatal("Never suppress first change")
|
||||
}
|
||||
if prev {
|
||||
t.Fatal("Incorrect prev status")
|
||||
}
|
||||
|
||||
// bw is 10000 / 10 = 1000
|
||||
t1 = t0.Add(10 * time.Second)
|
||||
if bw := s.changes["foo"].bandwidth(t1); bw != 1000 {
|
||||
t.Errorf("Incorrect bw %d", bw)
|
||||
}
|
||||
sup, prev = s.suppress("foo", 10000, t1)
|
||||
if sup {
|
||||
t.Fatal("Should still be fine")
|
||||
}
|
||||
if prev {
|
||||
t.Fatal("Incorrect prev status")
|
||||
}
|
||||
|
||||
// bw is (10000 + 10000) / 11 = 1818
|
||||
t1 = t0.Add(11 * time.Second)
|
||||
if bw := s.changes["foo"].bandwidth(t1); bw != 1818 {
|
||||
t.Errorf("Incorrect bw %d", bw)
|
||||
}
|
||||
sup, prev = s.suppress("foo", 100500, t1)
|
||||
if sup {
|
||||
t.Fatal("Should still be fine")
|
||||
}
|
||||
if prev {
|
||||
t.Fatal("Incorrect prev status")
|
||||
}
|
||||
|
||||
// bw is (10000 + 10000 + 100500) / 12 = 10041
|
||||
t1 = t0.Add(12 * time.Second)
|
||||
if bw := s.changes["foo"].bandwidth(t1); bw != 10041 {
|
||||
t.Errorf("Incorrect bw %d", bw)
|
||||
}
|
||||
sup, prev = s.suppress("foo", 10000000, t1) // value will be ignored
|
||||
if !sup {
|
||||
t.Fatal("Should be over threshold")
|
||||
}
|
||||
if prev {
|
||||
t.Fatal("Incorrect prev status")
|
||||
}
|
||||
|
||||
// bw is (10000 + 10000 + 100500) / 15 = 8033
|
||||
t1 = t0.Add(15 * time.Second)
|
||||
if bw := s.changes["foo"].bandwidth(t1); bw != 8033 {
|
||||
t.Errorf("Incorrect bw %d", bw)
|
||||
}
|
||||
sup, prev = s.suppress("foo", 10000000, t1)
|
||||
if sup {
|
||||
t.Fatal("Should be Ok")
|
||||
}
|
||||
if !prev {
|
||||
t.Fatal("Incorrect prev status")
|
||||
}
|
||||
}
|
||||
|
||||
func TestHistory(t *testing.T) {
|
||||
h := changeHistory{}
|
||||
|
||||
t0 := time.Now()
|
||||
h.append(40, t0)
|
||||
|
||||
if l := len(h.changes); l != 1 {
|
||||
t.Errorf("Incorrect history length %d", l)
|
||||
}
|
||||
if s := h.changes[0].size; s != 40 {
|
||||
t.Errorf("Incorrect first record size %d", s)
|
||||
}
|
||||
|
||||
for i := 1; i < MaxChangeHistory; i++ {
|
||||
h.append(int64(40+i), t0.Add(time.Duration(i)*time.Second))
|
||||
}
|
||||
|
||||
if l := len(h.changes); l != MaxChangeHistory {
|
||||
t.Errorf("Incorrect history length %d", l)
|
||||
}
|
||||
if s := h.changes[0].size; s != 40 {
|
||||
t.Errorf("Incorrect first record size %d", s)
|
||||
}
|
||||
if s := h.changes[MaxChangeHistory-1].size; s != 40+MaxChangeHistory-1 {
|
||||
t.Errorf("Incorrect last record size %d", s)
|
||||
}
|
||||
|
||||
h.append(999, t0.Add(time.Duration(999)*time.Second))
|
||||
|
||||
if l := len(h.changes); l != MaxChangeHistory {
|
||||
t.Errorf("Incorrect history length %d", l)
|
||||
}
|
||||
if s := h.changes[0].size; s != 41 {
|
||||
t.Errorf("Incorrect first record size %d", s)
|
||||
}
|
||||
if s := h.changes[MaxChangeHistory-1].size; s != 999 {
|
||||
t.Errorf("Incorrect last record size %d", s)
|
||||
}
|
||||
|
||||
}
|
||||
cmd/syncthing/tempname.go (new file, 28 lines)
package main

import (
    "fmt"
    "path"
    "path/filepath"
    "runtime"
    "strings"
)

type tempNamer struct {
    prefix string
}

var defTempNamer = tempNamer{".syncthing"}

func (t tempNamer) IsTemporary(name string) bool {
    if runtime.GOOS == "windows" {
        name = filepath.ToSlash(name)
    }
    return strings.HasPrefix(path.Base(name), t.prefix)
}

func (t tempNamer) TempName(name string) string {
    tdir := path.Dir(name)
    tname := fmt.Sprintf("%s.%s", t.prefix, path.Base(name))
    return path.Join(tdir, tname)
}
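For clarity, the naming scheme this gives, with example paths only:

    // Sketch of the expected behaviour (paths are illustrative):
    //   defTempNamer.TempName("photos/cat.jpg")               // "photos/.syncthing.cat.jpg"
    //   defTempNamer.IsTemporary("photos/.syncthing.cat.jpg") // true
    //   defTempNamer.IsTemporary("photos/cat.jpg")            // false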
cmd/syncthing/testdata/.stignore (new file, 2 lines)
.*
quux
cmd/syncthing/testdata/empty (new empty file)
@@ -3,7 +3,7 @@ package main
import (
    "crypto/rand"
    "crypto/rsa"
-   "crypto/sha1"
+   "crypto/sha256"
    "crypto/tls"
    "crypto/x509"
    "crypto/x509/pkix"
@@ -12,11 +12,12 @@ import (
    "math/big"
    "os"
    "path"
+   "strings"
    "time"
)

const (
-   tlsRSABits = 2048
+   tlsRSABits = 3072
    tlsName    = "syncthing"
)

@@ -24,14 +25,16 @@ func loadCert(dir string) (tls.Certificate, error) {
    return tls.LoadX509KeyPair(path.Join(dir, "cert.pem"), path.Join(dir, "key.pem"))
}

-func certId(bs []byte) string {
-   hf := sha1.New()
+func certID(bs []byte) string {
+   hf := sha256.New()
    hf.Write(bs)
    id := hf.Sum(nil)
-   return base32.StdEncoding.EncodeToString(id)
+   return strings.Trim(base32.StdEncoding.EncodeToString(id), "=")
}

func newCertificate(dir string) {
    infoln("Generating RSA certificate and key...")

    priv, err := rsa.GenerateKey(rand.Reader, tlsRSABits)
    fatalErr(err)

@@ -47,7 +50,7 @@ func newCertificate(dir string) {
    NotAfter: notAfter,

    KeyUsage:    x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
-   ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
+   ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth, x509.ExtKeyUsageClientAuth},
    BasicConstraintsValid: true,
}

@@ -58,11 +61,11 @@ func newCertificate(dir string) {
    fatalErr(err)
    pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: derBytes})
    certOut.Close()
-   okln("Created TLS certificate file")
+   okln("Created RSA certificate file")

    keyOut, err := os.OpenFile(path.Join(dir, "key.pem"), os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
    fatalErr(err)
    pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(priv)})
    keyOut.Close()
-   okln("Created TLS key file")
+   okln("Created RSA key file")
}
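The certID helper in the diff above computes a certificate fingerprint: the SHA-256 hash of the certificate's DER bytes, base32 encoded with the trailing '=' padding stripped. A minimal sketch of computing it for the locally stored certificate, assuming the certificate already exists in a directory held in a confDir variable (the variable name and the logging line are illustrative):

    cert, err := loadCert(confDir)
    fatalErr(err)
    myID := certID(cert.Certificate[0]) // SHA-256 over the DER-encoded leaf certificate
    infoln("My ID: " + myID)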
cmd/syncthing/usage.go (new file, 52 lines)
package main

import (
    "bytes"
    "flag"
    "fmt"
    "io"
    "text/tabwriter"
)

func optionTable(w io.Writer, rows [][]string) {
    tw := tabwriter.NewWriter(w, 2, 4, 2, ' ', 0)
    for _, row := range rows {
        for i, cell := range row {
            if i > 0 {
                tw.Write([]byte("\t"))
            }
            tw.Write([]byte(cell))
        }
        tw.Write([]byte("\n"))
    }
    tw.Flush()
}

func usageFor(fs *flag.FlagSet, usage string, extra string) func() {
    return func() {
        var b bytes.Buffer
        b.WriteString("Usage:\n " + usage + "\n")

        var options [][]string
        fs.VisitAll(func(f *flag.Flag) {
            var opt = " -" + f.Name

            if f.DefValue != "false" {
                opt += "=" + f.DefValue
            }

            options = append(options, []string{opt, f.Usage})
        })

        if len(options) > 0 {
            b.WriteString("\nOptions:\n")
            optionTable(&b, options)
        }

        fmt.Println(b.String())

        if len(extra) > 0 {
            fmt.Println(extra)
        }
    }
}
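usageFor is intended to be installed as a FlagSet's Usage hook; a minimal sketch of the wiring (the program name, flag name, and default are illustrative, and os is assumed to be imported):

    fs := flag.NewFlagSet("syncthing", flag.ExitOnError)
    fs.String("gui", "127.0.0.1:8080", "GUI listen address") // example flag
    fs.Usage = usageFor(fs, "syncthing [options]", "")
    fs.Parse(os.Args[1:]) // an unknown flag now prints the tabulated usage text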
cmd/syncthing/util.go (new file, 29 lines)
package main

import "fmt"

func MetricPrefix(n int64) string {
    if n > 1e9 {
        return fmt.Sprintf("%.02f G", float64(n)/1e9)
    }
    if n > 1e6 {
        return fmt.Sprintf("%.02f M", float64(n)/1e6)
    }
    if n > 1e3 {
        return fmt.Sprintf("%.01f k", float64(n)/1e3)
    }
    return fmt.Sprintf("%d ", n)
}

func BinaryPrefix(n int64) string {
    if n > 1<<30 {
        return fmt.Sprintf("%.02f Gi", float64(n)/(1<<30))
    }
    if n > 1<<20 {
        return fmt.Sprintf("%.02f Mi", float64(n)/(1<<20))
    }
    if n > 1<<10 {
        return fmt.Sprintf("%.01f Ki", float64(n)/(1<<10))
    }
    return fmt.Sprintf("%d ", n)
}
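A couple of sample values make the difference between the two helpers clear (decimal versus binary multiples); the unit suffixes appended here are illustrative:

    fmt.Println(MetricPrefix(1234567) + "B/s") // "1.23 MB/s"
    fmt.Println(BinaryPrefix(1234567) + "B")   // "1.18 MiB"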
discover/PROTOCOL.md (new file, 115 lines)
Node Discovery Protocol v2
==========================

Mode of Operation
-----------------

There are two distinct modes: "local discovery", performed on a LAN
segment (broadcast domain), and "global discovery", performed over the
Internet in general with the support of a well known server.

Local discovery does not use Query packets. Instead, Announcement packets
are sent periodically and each participating node keeps a table of the
announcements it has seen. On multihomed hosts the announcement packets
should be sent on each interface on which syncthing will accept connections.

It is recommended that local discovery Announcement packets are sent on
a 30 to 60 second interval, possibly with forced transmissions when a
previously unknown node is discovered.

Global discovery is made possible by periodically updating a global server
using Announcement packets identical to those transmitted for local
discovery. The node performing discovery will transmit a Query packet to
the global server and expect an Announcement packet in response. In case
the global server has no knowledge of the queried node ID, there will be
no response. A timeout is to be used to determine lookup failure.

There is no message to unregister from the global server; instead
registrations are forgotten after 60 minutes. It is recommended to
send Announcement packets to the global server on a 30 minute interval.

Packet Formats
--------------

The Announcement packet has the following structure:

     0                   1                   2                   3
     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                   Magic Number (0x029E4C77)                   |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                       Length of Node ID                       |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    /                                                               /
    \                   Node ID (variable length)                   \
    /                                                               /
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                      Number of Addresses                      |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    /                                                               /
    \                Zero or more Address Structures                \
    /                                                               /
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Address Structure:

     0                   1                   2                   3
     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                          Length of IP                         |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    /                                                               /
    \                      IP (variable length)                     \
    /                                                               /
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |          Port Number          |            0x0000             |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

This is the XDR encoding of:

    struct Announcement {
        unsigned int MagicNumber;
        string NodeID<>;
        Address Addresses<>;
    }

    struct Address {
        opaque IP<>;
        unsigned short PortNumber;
    }

NodeID is padded to a multiple of 32 bits and all fields are sent in
network (big endian) byte order. In the Address structure, the IP field
can be of three different kinds:

 - A zero length indicates that the IP address should be taken from the
   source address of the announcement packet, be it IPv4 or IPv6. The
   source address must be a valid unicast address.

 - A four byte length indicates that the address is an IPv4 unicast
   address.

 - A sixteen byte length indicates that the address is an IPv6 unicast
   address.

The Query packet has the following structure:

     0                   1                   2                   3
     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                   Magic Number (0x23D63A9A)                   |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                       Length of Node ID                       |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    /                                                               /
    \                   Node ID (variable length)                   \
    /                                                               /
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

This is the XDR encoding of:

    struct Query {
        unsigned int MagicNumber;
        string NodeID<>;
    }
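As a worked example of the global discovery exchange described above, a minimal lookup client might look like the following sketch. It relies on the QueryV2/AnnounceV2 types and XDR helpers defined later in this package; the server address and node ID are placeholders, and error handling is reduced to panics for brevity.

    package main

    import (
        "fmt"
        "net"
        "time"

        "github.com/calmh/syncthing/discover"
    )

    func main() {
        // The discovery server address here is an example only.
        conn, err := net.Dial("udp", "announce.example.com:22025")
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // Ask the server what it knows about a given node ID.
        query := discover.QueryV2{Magic: discover.QueryMagicV2, NodeID: "SOMENODEID"}
        if _, err := conn.Write(query.MarshalXDR()); err != nil {
            panic(err)
        }

        // No reply means the node is unknown; a read timeout detects that.
        conn.SetReadDeadline(time.Now().Add(5 * time.Second))
        buf := make([]byte, 1024)
        n, err := conn.Read(buf)
        if err != nil {
            fmt.Println("no answer (node unknown or timeout)")
            return
        }

        var ann discover.AnnounceV2
        if err := ann.UnmarshalXDR(buf[:n]); err != nil {
            panic(err)
        }
        for _, a := range ann.Addresses {
            fmt.Printf("%v port %d\n", net.IP(a.IP), a.Port)
        }
    }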
@@ -1,74 +1,237 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"encoding/binary"
|
||||
"encoding/hex"
|
||||
"flag"
|
||||
"log"
|
||||
"net"
|
||||
"os"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/calmh/syncthing/discover"
|
||||
)
|
||||
|
||||
type Node struct {
|
||||
Addresses []Address
|
||||
Updated time.Time
|
||||
}
|
||||
|
||||
type Address struct {
|
||||
IP []byte
|
||||
Port uint16
|
||||
}
|
||||
|
||||
var (
|
||||
nodes = make(map[string]Node)
|
||||
lock sync.Mutex
|
||||
nodes = make(map[string]Node)
|
||||
lock sync.Mutex
|
||||
queries = 0
|
||||
answered = 0
|
||||
)
|
||||
|
||||
func main() {
|
||||
addr, _ := net.ResolveUDPAddr("udp", ":22025")
|
||||
var debug bool
|
||||
var listen string
|
||||
var timestamp bool
|
||||
|
||||
flag.StringVar(&listen, "listen", ":22025", "Listen address")
|
||||
flag.BoolVar(&debug, "debug", false, "Enable debug output")
|
||||
flag.BoolVar(×tamp, "timestamp", true, "Timestamp the log output")
|
||||
flag.Parse()
|
||||
|
||||
log.SetOutput(os.Stdout)
|
||||
if !timestamp {
|
||||
log.SetFlags(0)
|
||||
}
|
||||
|
||||
addr, _ := net.ResolveUDPAddr("udp", listen)
|
||||
conn, err := net.ListenUDP("udp", addr)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
go func() {
|
||||
for {
|
||||
time.Sleep(600 * time.Second)
|
||||
|
||||
lock.Lock()
|
||||
|
||||
var deleted = 0
|
||||
for id, node := range nodes {
|
||||
if time.Since(node.Updated) > 60*time.Minute {
|
||||
delete(nodes, id)
|
||||
deleted++
|
||||
}
|
||||
}
|
||||
log.Printf("Expired %d nodes; %d nodes in registry; %d queries (%d answered)", deleted, len(nodes), queries, answered)
|
||||
queries = 0
|
||||
answered = 0
|
||||
|
||||
lock.Unlock()
|
||||
}
|
||||
}()
|
||||
|
||||
var buf = make([]byte, 1024)
|
||||
for {
|
||||
buf = buf[:cap(buf)]
|
||||
n, addr, err := conn.ReadFromUDP(buf)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
pkt, err := discover.DecodePacket(buf[:n])
|
||||
if err != nil {
|
||||
log.Println("Warning:", err)
|
||||
if n < 4 {
|
||||
log.Printf("Received short packet (%d bytes)", n)
|
||||
continue
|
||||
}
|
||||
|
||||
switch pkt.Magic {
|
||||
case 0x20121025:
|
||||
// Announcement
|
||||
//lock.Lock()
|
||||
buf = buf[:n]
|
||||
magic := binary.BigEndian.Uint32(buf)
|
||||
|
||||
switch magic {
|
||||
case discover.AnnouncementMagicV1:
|
||||
var pkt discover.AnnounceV1
|
||||
err := pkt.UnmarshalXDR(buf)
|
||||
if err != nil {
|
||||
log.Println("AnnounceV1 Unmarshal:", err)
|
||||
log.Println(hex.Dump(buf))
|
||||
continue
|
||||
}
|
||||
if debug {
|
||||
log.Printf("<- %v %#v", addr, pkt)
|
||||
}
|
||||
|
||||
ip := addr.IP.To4()
|
||||
if ip == nil {
|
||||
ip = addr.IP.To16()
|
||||
}
|
||||
node := Node{ip, uint16(pkt.Port)}
|
||||
log.Println("<-", pkt.ID, node)
|
||||
nodes[pkt.ID] = node
|
||||
//lock.Unlock()
|
||||
case 0x19760309:
|
||||
// Query
|
||||
//lock.Lock()
|
||||
node, ok := nodes[pkt.ID]
|
||||
//lock.Unlock()
|
||||
if ok {
|
||||
pkt := discover.Packet{
|
||||
Magic: 0x20121025,
|
||||
ID: pkt.ID,
|
||||
Port: node.Port,
|
||||
IP: node.IP,
|
||||
node := Node{
|
||||
Addresses: []Address{{
|
||||
IP: ip,
|
||||
Port: pkt.Port,
|
||||
}},
|
||||
Updated: time.Now(),
|
||||
}
|
||||
|
||||
lock.Lock()
|
||||
nodes[pkt.NodeID] = node
|
||||
lock.Unlock()
|
||||
|
||||
case discover.QueryMagicV1:
|
||||
var pkt discover.QueryV1
|
||||
err := pkt.UnmarshalXDR(buf)
|
||||
if err != nil {
|
||||
log.Println("QueryV1 Unmarshal:", err)
|
||||
log.Println(hex.Dump(buf))
|
||||
continue
|
||||
}
|
||||
if debug {
|
||||
log.Printf("<- %v %#v", addr, pkt)
|
||||
}
|
||||
|
||||
lock.Lock()
|
||||
node, ok := nodes[pkt.NodeID]
|
||||
queries++
|
||||
lock.Unlock()
|
||||
|
||||
if ok && len(node.Addresses) > 0 {
|
||||
pkt := discover.AnnounceV1{
|
||||
Magic: discover.AnnouncementMagicV1,
|
||||
NodeID: pkt.NodeID,
|
||||
Port: node.Addresses[0].Port,
|
||||
IP: node.Addresses[0].IP,
|
||||
}
|
||||
_, _, err = conn.WriteMsgUDP(discover.EncodePacket(pkt), nil, addr)
|
||||
if debug {
|
||||
log.Printf("-> %v %#v", addr, pkt)
|
||||
}
|
||||
|
||||
tb := pkt.MarshalXDR()
|
||||
_, _, err = conn.WriteMsgUDP(tb, nil, addr)
|
||||
if err != nil {
|
||||
log.Println("Warning:", err)
|
||||
} else {
|
||||
log.Println("->", pkt.ID, node)
|
||||
log.Println("QueryV1 response write:", err)
|
||||
}
|
||||
|
||||
lock.Lock()
|
||||
answered++
|
||||
lock.Unlock()
|
||||
}
|
||||
|
||||
case discover.AnnouncementMagicV2:
|
||||
var pkt discover.AnnounceV2
|
||||
err := pkt.UnmarshalXDR(buf)
|
||||
if err != nil {
|
||||
log.Println("AnnounceV2 Unmarshal:", err)
|
||||
log.Println(hex.Dump(buf))
|
||||
continue
|
||||
}
|
||||
if debug {
|
||||
log.Printf("<- %v %#v", addr, pkt)
|
||||
}
|
||||
|
||||
ip := addr.IP.To4()
|
||||
if ip == nil {
|
||||
ip = addr.IP.To16()
|
||||
}
|
||||
|
||||
var addrs []Address
|
||||
for _, addr := range pkt.Addresses {
|
||||
tip := addr.IP
|
||||
if len(tip) == 0 {
|
||||
tip = ip
|
||||
}
|
||||
addrs = append(addrs, Address{
|
||||
IP: tip,
|
||||
Port: addr.Port,
|
||||
})
|
||||
}
|
||||
|
||||
node := Node{
|
||||
Addresses: addrs,
|
||||
Updated: time.Now(),
|
||||
}
|
||||
|
||||
lock.Lock()
|
||||
nodes[pkt.NodeID] = node
|
||||
lock.Unlock()
|
||||
|
||||
case discover.QueryMagicV2:
|
||||
var pkt discover.QueryV2
|
||||
err := pkt.UnmarshalXDR(buf)
|
||||
if err != nil {
|
||||
log.Println("QueryV2 Unmarshal:", err)
|
||||
log.Println(hex.Dump(buf))
|
||||
continue
|
||||
}
|
||||
if debug {
|
||||
log.Printf("<- %v %#v", addr, pkt)
|
||||
}
|
||||
|
||||
lock.Lock()
|
||||
node, ok := nodes[pkt.NodeID]
|
||||
queries++
|
||||
lock.Unlock()
|
||||
|
||||
if ok && len(node.Addresses) > 0 {
|
||||
pkt := discover.AnnounceV2{
|
||||
Magic: discover.AnnouncementMagicV2,
|
||||
NodeID: pkt.NodeID,
|
||||
}
|
||||
for _, addr := range node.Addresses {
|
||||
pkt.Addresses = append(pkt.Addresses, discover.Address{IP: addr.IP, Port: addr.Port})
|
||||
}
|
||||
if debug {
|
||||
log.Printf("-> %v %#v", addr, pkt)
|
||||
}
|
||||
|
||||
tb := pkt.MarshalXDR()
|
||||
_, _, err = conn.WriteMsgUDP(tb, nil, addr)
|
||||
if err != nil {
|
||||
log.Println("QueryV2 response write:", err)
|
||||
}
|
||||
|
||||
lock.Lock()
|
||||
answered++
|
||||
lock.Unlock()
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
discover/debug.go (new file, 12 lines)
package discover

import (
    "log"
    "os"
    "strings"
)

var (
    dlog  = log.New(os.Stderr, "discover: ", log.Lmicroseconds|log.Lshortfile)
    debug = strings.Contains(os.Getenv("STTRACE"), "discover")
)
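In practical terms, the debug flag above means that running syncthing with the STTRACE environment variable containing the string "discover" (for example STTRACE=discover) enables this package's trace logging on stderr.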
@@ -1,89 +1,12 @@
|
||||
/*
|
||||
This is the local node discovery protocol. In principle we might be better
|
||||
served by something more standardized, such as mDNS / DNS-SD. In practice, this
|
||||
was much easier and quicker to get up and running.
|
||||
|
||||
The mode of operation is to periodically (currently once every 30 seconds)
|
||||
broadcast an Announcement packet to UDP port 21025. The packet has the
|
||||
following format:
|
||||
|
||||
0 1 2 3
|
||||
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
| Magic Number (0x20121025) |
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
| Port Number | Reserved |
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
| Length of NodeID |
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
/ /
|
||||
\ NodeID (variable length) \
|
||||
/ /
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
| Length of IP |
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
/ /
|
||||
\ IP (variable length) \
|
||||
/ /
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
|
||||
This is the XDR encoding of:
|
||||
|
||||
struct Announcement {
|
||||
unsigned int Magic;
|
||||
unsigned short Port;
|
||||
string NodeID<>;
|
||||
}
|
||||
|
||||
(Hence NodeID is padded to a multiple of 32 bits)
|
||||
|
||||
The sending node's address is not encoded in local announcement -- the Length
|
||||
of IP field is set to zero and the address is taken to be the source address of
|
||||
the announcement. In announcement packets sent by a discovery server in
|
||||
response to a query, the IP is present and the length is either 4 (IPv4) or 16
|
||||
(IPv6).
|
||||
|
||||
Every time such a packet is received, a local table that maps NodeID to Address
|
||||
is updated. When the local node wants to connect to another node with the
|
||||
address specification 'dynamic', this table is consulted.
|
||||
|
||||
For external discovery, an identical packet is sent every 30 minutes to the
|
||||
external discovery server. The server keeps information for up to 60 minutes.
|
||||
To query the server, and UDP packet with the format below is sent.
|
||||
|
||||
0 1 2 3
|
||||
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
| Magic Number (0x19760309) |
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
| Length of NodeID |
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
/ /
|
||||
\ NodeID (variable length) \
|
||||
/ /
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
|
||||
This is the XDR encoding of:
|
||||
|
||||
struct Announcement {
|
||||
unsigned int Magic;
|
||||
string NodeID<>;
|
||||
}
|
||||
|
||||
(Hence NodeID is padded to a multiple of 32 bits)
|
||||
|
||||
It is answered with an announcement packet for the queried node ID if the
|
||||
information is available. There is no answer for queries about unknown nodes. A
|
||||
reasonable timeout is recommended instead. (This, combined with server side
|
||||
rate limits for packets per source IP and queries per node ID, prevents the
|
||||
server from being used as an amplifier in a DDoS attack.)
|
||||
*/
|
||||
package discover
|
||||
|
||||
import (
|
||||
"encoding/hex"
|
||||
"errors"
|
||||
"fmt"
|
||||
"log"
|
||||
"net"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
@@ -91,32 +14,36 @@ import (
|
||||
)
|
||||
|
||||
const (
|
||||
AnnouncementPort = 21025
|
||||
AnnouncementMagic = 0x20121025
|
||||
QueryMagic = 0x19760309
|
||||
AnnouncementPort = 21025
|
||||
)
|
||||
|
||||
type Discoverer struct {
|
||||
MyID string
|
||||
ListenPort int
|
||||
BroadcastIntv time.Duration
|
||||
ExtListenPort int
|
||||
ExtBroadcastIntv time.Duration
|
||||
|
||||
conn *net.UDPConn
|
||||
registry map[string]string
|
||||
registry map[string][]string
|
||||
registryLock sync.RWMutex
|
||||
extServer string
|
||||
|
||||
localBroadcastTick <-chan time.Time
|
||||
forcedBroadcastTick chan time.Time
|
||||
}
|
||||
|
||||
var (
|
||||
ErrIncorrectMagic = errors.New("incorrect magic number")
|
||||
)
|
||||
|
||||
// We tolerate a certain amount of errors because we might be running on
|
||||
// laptops that sleep and wake, have intermittent network connectivity, etc.
|
||||
// When we hit this many errors in succession, we stop.
|
||||
const maxErrors = 30
|
||||
|
||||
func NewDiscoverer(id string, port int, extPort int, extServer string) (*Discoverer, error) {
|
||||
local4 := &net.UDPAddr{IP: net.IP{0, 0, 0, 0}, Port: AnnouncementPort}
|
||||
conn, err := net.ListenUDP("udp4", local4)
|
||||
func NewDiscoverer(id string, port int, extServer string) (*Discoverer, error) {
|
||||
local := &net.UDPAddr{IP: nil, Port: AnnouncementPort}
|
||||
conn, err := net.ListenUDP("udp", local)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
@@ -125,58 +52,139 @@ func NewDiscoverer(id string, port int, extPort int, extServer string) (*Discove
|
||||
MyID: id,
|
||||
ListenPort: port,
|
||||
BroadcastIntv: 30 * time.Second,
|
||||
ExtListenPort: extPort,
|
||||
ExtBroadcastIntv: 1800 * time.Second,
|
||||
|
||||
conn: conn,
|
||||
registry: make(map[string]string),
|
||||
registry: make(map[string][]string),
|
||||
extServer: extServer,
|
||||
}
|
||||
|
||||
go disc.recvAnnouncements()
|
||||
|
||||
if disc.ListenPort > 0 {
|
||||
disc.sendAnnouncements()
|
||||
disc.localBroadcastTick = time.Tick(disc.BroadcastIntv)
|
||||
disc.forcedBroadcastTick = make(chan time.Time)
|
||||
go disc.sendAnnouncements()
|
||||
}
|
||||
if len(disc.extServer) > 0 && disc.ExtListenPort > 0 {
|
||||
disc.sendExtAnnouncements()
|
||||
if len(disc.extServer) > 0 {
|
||||
go disc.sendExtAnnouncements()
|
||||
}
|
||||
|
||||
return disc, nil
|
||||
}
|
||||
|
||||
func (d *Discoverer) sendAnnouncements() {
|
||||
remote4 := &net.UDPAddr{IP: net.IP{255, 255, 255, 255}, Port: AnnouncementPort}
|
||||
var pkt = AnnounceV2{AnnouncementMagicV2, d.MyID, []Address{{nil, 22000}}}
|
||||
var buf = pkt.MarshalXDR()
|
||||
var errCounter = 0
|
||||
var err error
|
||||
|
||||
buf := EncodePacket(Packet{AnnouncementMagic, uint16(d.ListenPort), d.MyID, nil})
|
||||
go d.writeAnnouncements(buf, remote4, d.BroadcastIntv)
|
||||
remote := &net.UDPAddr{
|
||||
IP: net.IP{255, 255, 255, 255},
|
||||
Port: AnnouncementPort,
|
||||
}
|
||||
|
||||
for errCounter < maxErrors {
|
||||
intfs, err := net.Interfaces()
|
||||
if err != nil {
|
||||
log.Printf("discover/listInterfaces: %v; no local announcements", err)
|
||||
return
|
||||
}
|
||||
|
||||
for _, intf := range intfs {
|
||||
if intf.Flags&(net.FlagBroadcast|net.FlagLoopback) == net.FlagBroadcast {
|
||||
addrs, err := intf.Addrs()
|
||||
if err != nil {
|
||||
log.Println("discover/listAddrs: warning:", err)
|
||||
errCounter++
|
||||
continue
|
||||
}
|
||||
|
||||
var srcAddr string
|
||||
for _, addr := range addrs {
|
||||
if strings.Contains(addr.String(), ".") {
|
||||
// Found an IPv4 adress
|
||||
parts := strings.Split(addr.String(), "/")
|
||||
srcAddr = parts[0]
|
||||
break
|
||||
}
|
||||
}
|
||||
if len(srcAddr) == 0 {
|
||||
if debug {
|
||||
dlog.Println("no source address found on interface", intf.Name)
|
||||
}
|
||||
continue
|
||||
}
|
||||
|
||||
iaddr, err := net.ResolveUDPAddr("udp4", srcAddr+":0")
|
||||
if err != nil {
|
||||
log.Println("discover/resolve: warning:", err)
|
||||
errCounter++
|
||||
continue
|
||||
}
|
||||
|
||||
conn, err := net.ListenUDP("udp4", iaddr)
|
||||
if err != nil {
|
||||
log.Println("discover/listen: warning:", err)
|
||||
errCounter++
|
||||
continue
|
||||
}
|
||||
|
||||
if debug {
|
||||
dlog.Println("send announcement from", conn.LocalAddr(), "to", remote, "on", intf.Name)
|
||||
}
|
||||
|
||||
_, err = conn.WriteTo(buf, remote)
|
||||
if err != nil {
|
||||
// Some interfaces don't seem to support broadcast even though the flags claims they do, i.e. vmnet
|
||||
conn.Close()
|
||||
|
||||
if debug {
|
||||
log.Println(err)
|
||||
}
|
||||
|
||||
errCounter++
|
||||
continue
|
||||
}
|
||||
|
||||
conn.Close()
|
||||
errCounter = 0
|
||||
}
|
||||
}
|
||||
|
||||
select {
|
||||
case <-d.localBroadcastTick:
|
||||
case <-d.forcedBroadcastTick:
|
||||
}
|
||||
}
|
||||
log.Println("discover/write: local: stopping due to too many errors:", err)
|
||||
}
|
||||
|
||||
func (d *Discoverer) sendExtAnnouncements() {
|
||||
extIP, err := net.ResolveUDPAddr("udp", d.extServer+":22025")
|
||||
remote, err := net.ResolveUDPAddr("udp", d.extServer)
|
||||
if err != nil {
|
||||
log.Printf("discover/external: %v; no external announcements", err)
|
||||
return
|
||||
}
|
||||
|
||||
buf := EncodePacket(Packet{AnnouncementMagic, uint16(d.ExtListenPort), d.MyID, nil})
|
||||
go d.writeAnnouncements(buf, extIP, d.ExtBroadcastIntv)
|
||||
}
|
||||
|
||||
func (d *Discoverer) writeAnnouncements(buf []byte, remote *net.UDPAddr, intv time.Duration) {
|
||||
var pkt = AnnounceV2{AnnouncementMagicV2, d.MyID, []Address{{nil, 22000}}}
|
||||
var buf = pkt.MarshalXDR()
|
||||
var errCounter = 0
|
||||
var err error
|
||||
|
||||
for errCounter < maxErrors {
|
||||
_, _, err = d.conn.WriteMsgUDP(buf, nil, remote)
|
||||
if debug {
|
||||
dlog.Println("send announcement -> ", remote)
|
||||
}
|
||||
_, err = d.conn.WriteTo(buf, remote)
|
||||
if err != nil {
|
||||
log.Println("discover/write: warning:", err)
|
||||
errCounter++
|
||||
} else {
|
||||
errCounter = 0
|
||||
}
|
||||
time.Sleep(intv)
|
||||
time.Sleep(d.ExtBroadcastIntv)
|
||||
}
|
||||
log.Println("discover/write: %v: stopping due to too many errors:", remote, err)
|
||||
log.Printf("discover/write: %v: stopping due to too many errors: %v", remote, err)
|
||||
}
|
||||
|
||||
func (d *Discoverer) recvAnnouncements() {
|
||||
@@ -191,90 +199,143 @@ func (d *Discoverer) recvAnnouncements() {
|
||||
continue
|
||||
}
|
||||
|
||||
pkt, err := DecodePacket(buf[:n])
|
||||
if err != nil || pkt.Magic != AnnouncementMagic {
|
||||
if debug {
|
||||
dlog.Printf("read announcement:\n%s", hex.Dump(buf[:n]))
|
||||
}
|
||||
|
||||
var pkt AnnounceV2
|
||||
err = pkt.UnmarshalXDR(buf[:n])
|
||||
if err != nil {
|
||||
errCounter++
|
||||
time.Sleep(time.Second)
|
||||
continue
|
||||
}
|
||||
|
||||
if debug {
|
||||
dlog.Printf("parsed announcement: %#v", pkt)
|
||||
}
|
||||
|
||||
errCounter = 0
|
||||
|
||||
if pkt.ID != d.MyID {
|
||||
nodeAddr := fmt.Sprintf("%s:%d", addr.IP.String(), pkt.Port)
|
||||
d.registryLock.Lock()
|
||||
if d.registry[pkt.ID] != nodeAddr {
|
||||
d.registry[pkt.ID] = nodeAddr
|
||||
if pkt.NodeID != d.MyID {
|
||||
var addrs []string
|
||||
for _, a := range pkt.Addresses {
|
||||
var nodeAddr string
|
||||
if len(a.IP) > 0 {
|
||||
nodeAddr = fmt.Sprintf("%s:%d", ipStr(a.IP), a.Port)
|
||||
} else {
|
||||
nodeAddr = fmt.Sprintf("%s:%d", addr.IP.String(), a.Port)
|
||||
}
|
||||
addrs = append(addrs, nodeAddr)
|
||||
}
|
||||
if debug {
|
||||
dlog.Printf("register: %#v", addrs)
|
||||
}
|
||||
d.registryLock.Lock()
|
||||
_, seen := d.registry[pkt.NodeID]
|
||||
if !seen {
|
||||
select {
|
||||
case d.forcedBroadcastTick <- time.Now():
|
||||
}
|
||||
}
|
||||
d.registry[pkt.NodeID] = addrs
|
||||
d.registryLock.Unlock()
|
||||
}
|
||||
}
|
||||
log.Println("discover/read: stopping due to too many errors:", err)
|
||||
}
|
||||
|
||||
func (d *Discoverer) externalLookup(node string) (string, bool) {
|
||||
extIP, err := net.ResolveUDPAddr("udp", d.extServer+":22025")
|
||||
func (d *Discoverer) externalLookup(node string) []string {
|
||||
extIP, err := net.ResolveUDPAddr("udp", d.extServer)
|
||||
if err != nil {
|
||||
log.Printf("discover/external: %v; no external lookup", err)
|
||||
return "", false
|
||||
return nil
|
||||
}
|
||||
|
||||
conn, err := net.DialUDP("udp", nil, extIP)
|
||||
if err != nil {
|
||||
log.Printf("discover/external: %v; no external lookup", err)
|
||||
return "", false
|
||||
return nil
|
||||
}
|
||||
defer conn.Close()
|
||||
|
||||
err = conn.SetDeadline(time.Now().Add(5 * time.Second))
|
||||
if err != nil {
|
||||
log.Printf("discover/external: %v; no external lookup", err)
|
||||
return "", false
|
||||
return nil
|
||||
}
|
||||
|
||||
_, err = conn.Write(EncodePacket(Packet{QueryMagic, 0, node, nil}))
|
||||
buf := QueryV2{QueryMagicV2, node}.MarshalXDR()
|
||||
_, err = conn.Write(buf)
|
||||
if err != nil {
|
||||
log.Printf("discover/external: %v; no external lookup", err)
|
||||
return "", false
|
||||
return nil
|
||||
}
|
||||
buffers.Put(buf)
|
||||
|
||||
var buf = buffers.Get(256)
|
||||
buf = buffers.Get(256)
|
||||
defer buffers.Put(buf)
|
||||
|
||||
n, err := conn.Read(buf)
|
||||
if err != nil {
|
||||
if err, ok := err.(net.Error); ok && err.Timeout() {
|
||||
// Expected if the server doesn't know about requested node ID
|
||||
return "", false
|
||||
return nil
|
||||
}
|
||||
log.Printf("discover/external/read: %v; no external lookup", err)
|
||||
return "", false
|
||||
return nil
|
||||
}
|
||||
|
||||
pkt, err := DecodePacket(buf[:n])
|
||||
if debug {
|
||||
dlog.Printf("read external:\n%s", hex.Dump(buf[:n]))
|
||||
}
|
||||
|
||||
var pkt AnnounceV2
|
||||
err = pkt.UnmarshalXDR(buf[:n])
|
||||
if err != nil {
|
||||
log.Printf("discover/external/read: %v; no external lookup", err)
|
||||
return "", false
|
||||
log.Println("discover/external/decode:", err)
|
||||
return nil
|
||||
}
|
||||
|
||||
if pkt.Magic != AnnouncementMagic {
|
||||
log.Printf("discover/external/read: bad magic; no external lookup", err)
|
||||
return "", false
|
||||
if debug {
|
||||
dlog.Printf("parsed external: %#v", pkt)
|
||||
}
|
||||
|
||||
return fmt.Sprintf("%s:%d", ipStr(pkt.IP), pkt.Port), true
|
||||
var addrs []string
|
||||
for _, a := range pkt.Addresses {
|
||||
var nodeAddr string
|
||||
if len(a.IP) > 0 {
|
||||
nodeAddr = fmt.Sprintf("%s:%d", ipStr(a.IP), a.Port)
|
||||
}
|
||||
addrs = append(addrs, nodeAddr)
|
||||
}
|
||||
return addrs
|
||||
}
|
||||
|
||||
func (d *Discoverer) Lookup(node string) (string, bool) {
|
||||
func (d *Discoverer) Lookup(node string) []string {
|
||||
d.registryLock.Lock()
|
||||
addr, ok := d.registry[node]
|
||||
d.registryLock.Unlock()
|
||||
|
||||
if ok {
|
||||
return addr, true
|
||||
return addr
|
||||
} else if len(d.extServer) != 0 {
|
||||
// We might want to cache this, but not permanently so it needs some intelligence
|
||||
return d.externalLookup(node)
|
||||
}
|
||||
return "", false
|
||||
return nil
|
||||
}
|
||||
|
||||
func ipStr(ip []byte) string {
|
||||
var f = "%d"
|
||||
var s = "."
|
||||
if len(ip) > 4 {
|
||||
f = "%x"
|
||||
s = ":"
|
||||
}
|
||||
var ss = make([]string, len(ip))
|
||||
for i := range ip {
|
||||
ss[i] = fmt.Sprintf(f, ip[i])
|
||||
}
|
||||
return strings.Join(ss, s)
|
||||
}
|
||||
|
||||
discover/doc.go (new file, 2 lines)
// Package discover implements the node discovery protocol.
package discover
@@ -1,160 +0,0 @@
|
||||
package discover
|
||||
|
||||
import (
|
||||
"encoding/binary"
|
||||
"errors"
|
||||
"fmt"
|
||||
)
|
||||
|
||||
type Packet struct {
|
||||
Magic uint32 // AnnouncementMagic or QueryMagic
|
||||
Port uint16 // unset if magic == QueryMagic
|
||||
ID string
|
||||
IP []byte // zero length in local announcements
|
||||
}
|
||||
|
||||
var (
|
||||
errBadMagic = errors.New("bad magic")
|
||||
errFormat = errors.New("incorrect packet format")
|
||||
)
|
||||
|
||||
func EncodePacket(pkt Packet) []byte {
|
||||
if l := len(pkt.IP); l != 0 && l != 4 && l != 16 {
|
||||
// bad ip format
|
||||
return nil
|
||||
}
|
||||
|
||||
var idbs = []byte(pkt.ID)
|
||||
var l = 4 + 4 + len(idbs) + pad(len(idbs))
|
||||
if pkt.Magic == AnnouncementMagic {
|
||||
l += 4 + 4 + len(pkt.IP)
|
||||
}
|
||||
|
||||
var buf = make([]byte, l)
|
||||
var offset = 0
|
||||
|
||||
binary.BigEndian.PutUint32(buf[offset:], pkt.Magic)
|
||||
offset += 4
|
||||
|
||||
if pkt.Magic == AnnouncementMagic {
|
||||
binary.BigEndian.PutUint16(buf[offset:], uint16(pkt.Port))
|
||||
offset += 4
|
||||
}
|
||||
|
||||
binary.BigEndian.PutUint32(buf[offset:], uint32(len(idbs)))
|
||||
offset += 4
|
||||
copy(buf[offset:], idbs)
|
||||
offset += len(idbs) + pad(len(idbs))
|
||||
|
||||
if pkt.Magic == AnnouncementMagic {
|
||||
binary.BigEndian.PutUint32(buf[offset:], uint32(len(pkt.IP)))
|
||||
offset += 4
|
||||
copy(buf[offset:], pkt.IP)
|
||||
offset += len(pkt.IP)
|
||||
}
|
||||
|
||||
return buf
|
||||
}
|
||||
|
||||
func DecodePacket(buf []byte) (*Packet, error) {
|
||||
var p Packet
|
||||
var offset int
|
||||
|
||||
if len(buf) < 4 {
|
||||
// short packet
|
||||
return nil, errFormat
|
||||
}
|
||||
p.Magic = binary.BigEndian.Uint32(buf[offset:])
|
||||
offset += 4
|
||||
|
||||
if p.Magic != AnnouncementMagic && p.Magic != QueryMagic {
|
||||
return nil, errBadMagic
|
||||
}
|
||||
|
||||
if p.Magic == AnnouncementMagic {
|
||||
// Port Number
|
||||
|
||||
if len(buf) < offset+4 {
|
||||
// short packet
|
||||
return nil, errFormat
|
||||
}
|
||||
p.Port = binary.BigEndian.Uint16(buf[offset:])
|
||||
offset += 2
|
||||
reserved := binary.BigEndian.Uint16(buf[offset:])
|
||||
if reserved != 0 {
|
||||
return nil, errFormat
|
||||
}
|
||||
offset += 2
|
||||
}
|
||||
|
||||
// Node ID
|
||||
|
||||
if len(buf) < offset+4 {
|
||||
// short packet
|
||||
return nil, errFormat
|
||||
}
|
||||
l := binary.BigEndian.Uint32(buf[offset:])
|
||||
offset += 4
|
||||
|
||||
if len(buf) < offset+int(l)+pad(int(l)) {
|
||||
// short packet
|
||||
return nil, errFormat
|
||||
}
|
||||
idbs := buf[offset : offset+int(l)]
|
||||
p.ID = string(idbs)
|
||||
offset += int(l) + pad(int(l))
|
||||
|
||||
if p.Magic == AnnouncementMagic {
|
||||
// IP
|
||||
|
||||
if len(buf) < offset+4 {
|
||||
// short packet
|
||||
return nil, errFormat
|
||||
}
|
||||
l = binary.BigEndian.Uint32(buf[offset:])
|
||||
offset += 4
|
||||
|
||||
if l != 0 && l != 4 && l != 16 {
|
||||
// weird ip length
|
||||
return nil, errFormat
|
||||
}
|
||||
if len(buf) < offset+int(l) {
|
||||
// short packet
|
||||
return nil, errFormat
|
||||
}
|
||||
if l > 0 {
|
||||
p.IP = buf[offset : offset+int(l)]
|
||||
offset += int(l)
|
||||
}
|
||||
}
|
||||
|
||||
if len(buf[offset:]) > 0 {
|
||||
// extra data
|
||||
return nil, errFormat
|
||||
}
|
||||
|
||||
return &p, nil
|
||||
}
|
||||
|
||||
func pad(l int) int {
|
||||
d := l % 4
|
||||
if d == 0 {
|
||||
return 0
|
||||
}
|
||||
return 4 - d
|
||||
}
|
||||
|
||||
func ipStr(ip []byte) string {
|
||||
switch len(ip) {
|
||||
case 4:
|
||||
return fmt.Sprintf("%d.%d.%d.%d", ip[0], ip[1], ip[2], ip[3])
|
||||
case 16:
|
||||
return fmt.Sprintf("%02x%02x:%02x%02x:%02x%02x:%02x%02x:%02x%02x:%02x%02x:%02x%02x:%02x%02x",
|
||||
ip[0], ip[1], ip[2], ip[3],
|
||||
ip[4], ip[5], ip[6], ip[7],
|
||||
ip[8], ip[9], ip[10], ip[11],
|
||||
ip[12], ip[13], ip[14], ip[15])
|
||||
default:
|
||||
return ""
|
||||
}
|
||||
}
|
||||
@@ -1,138 +0,0 @@
|
||||
package discover
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"reflect"
|
||||
"testing"
|
||||
)
|
||||
|
||||
var testdata = []struct {
|
||||
data []byte
|
||||
packet *Packet
|
||||
err error
|
||||
}{
|
||||
{
|
||||
[]byte{0x20, 0x12, 0x10, 0x25,
|
||||
0x12, 0x34, 0x00, 0x00,
|
||||
0x00, 0x00, 0x00, 0x05,
|
||||
0x68, 0x65, 0x6c, 0x6c, 0x6f, 0x00, 0x00, 0x00,
|
||||
0x00, 0x00, 0x00, 0x00},
|
||||
&Packet{
|
||||
Magic: 0x20121025,
|
||||
Port: 0x1234,
|
||||
ID: "hello",
|
||||
},
|
||||
nil,
|
||||
},
|
||||
{
|
||||
[]byte{0x20, 0x12, 0x10, 0x25,
|
||||
0x34, 0x56, 0x00, 0x00,
|
||||
0x00, 0x00, 0x00, 0x08,
|
||||
0x68, 0x65, 0x6c, 0x6c, 0x6f, 0x21, 0x21, 0x21,
|
||||
0x00, 0x00, 0x00, 0x04,
|
||||
0x01, 0x02, 0x03, 0x04},
|
||||
&Packet{
|
||||
Magic: 0x20121025,
|
||||
Port: 0x3456,
|
||||
ID: "hello!!!",
|
||||
IP: []byte{1, 2, 3, 4},
|
||||
},
|
||||
nil,
|
||||
},
|
||||
{
|
||||
[]byte{0x19, 0x76, 0x03, 0x09,
|
||||
0x00, 0x00, 0x00, 0x06,
|
||||
0x68, 0x65, 0x6c, 0x6c, 0x6f, 0x21, 0x00, 0x00},
|
||||
&Packet{
|
||||
Magic: 0x19760309,
|
||||
ID: "hello!",
|
||||
},
|
||||
nil,
|
||||
},
|
||||
{
|
||||
[]byte{0x20, 0x12, 0x10, 0x25,
|
||||
0x12, 0x34, 0x12, 0x34, // reserved bits not set to zero
|
||||
0x00, 0x00, 0x00, 0x06,
|
||||
0x68, 0x65, 0x6c, 0x6c, 0x6f, 0x21, 0x00, 0x00,
|
||||
0x00, 0x00, 0x00, 0x00},
|
||||
nil,
|
||||
errFormat,
|
||||
},
|
||||
{
|
||||
[]byte{0x20, 0x12, 0x10, 0x25,
|
||||
0x12, 0x34, 0x00, 0x00,
|
||||
0x00, 0x00, 0x00, 0x06,
|
||||
0x68, 0x65, 0x6c, 0x6c, 0x6f, 0x21, // missing padding
|
||||
0x00, 0x00, 0x00, 0x00},
|
||||
nil,
|
||||
errFormat,
|
||||
},
|
||||
{
|
||||
[]byte{0x19, 0x77, 0x03, 0x09, // incorrect Magic
|
||||
0x00, 0x00, 0x00, 0x06,
|
||||
0x68, 0x65, 0x6c, 0x6c, 0x6f, 0x21, 0x00, 0x00},
|
||||
nil,
|
||||
errBadMagic,
|
||||
},
|
||||
{
|
||||
[]byte{0x19, 0x76, 0x03, 0x09,
|
||||
0x6c, 0x6c, 0x6c, 0x6c, // length exceeds packet size
|
||||
0x68, 0x65, 0x6c, 0x6c, 0x6f, 0x21, 0x00, 0x00},
|
||||
nil,
|
||||
errFormat,
|
||||
},
|
||||
{
|
||||
[]byte{0x19, 0x76, 0x03, 0x09,
|
||||
0x00, 0x00, 0x00, 0x06,
|
||||
0x68, 0x65, 0x6c, 0x6c, 0x6f, 0x21, 0x00, 0x00,
|
||||
0x23}, // extra data at the end
|
||||
nil,
|
||||
errFormat,
|
||||
},
|
||||
}
|
||||
|
||||
func TestDecodePacket(t *testing.T) {
|
||||
for i, test := range testdata {
|
||||
p, err := DecodePacket(test.data)
|
||||
if err != test.err {
|
||||
t.Errorf("%d: unexpected error %v", i, err)
|
||||
} else {
|
||||
if !reflect.DeepEqual(p, test.packet) {
|
||||
t.Errorf("%d: incorrect packet\n%v\n%v", i, test.packet, p)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestEncodePacket(t *testing.T) {
|
||||
for i, test := range testdata {
|
||||
if test.err != nil {
|
||||
continue
|
||||
}
|
||||
buf := EncodePacket(*test.packet)
|
||||
if bytes.Compare(buf, test.data) != 0 {
|
||||
t.Errorf("%d: incorrect encoded packet\n% x\n% 0x", i, test.data, buf)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
var ipstrTests = []struct {
|
||||
d []byte
|
||||
s string
|
||||
}{
|
||||
{[]byte{192, 168, 34}, ""},
|
||||
{[]byte{192, 168, 0, 34}, "192.168.0.34"},
|
||||
{[]byte{0x20, 0x01, 0x12, 0x34,
|
||||
0x34, 0x56, 0x56, 0x78,
|
||||
0x78, 0x00, 0x00, 0xdc,
|
||||
0x00, 0x00, 0x43, 0x54}, "2001:1234:3456:5678:7800:00dc:0000:4354"},
|
||||
}
|
||||
|
||||
func TestIPStr(t *testing.T) {
|
||||
for _, tc := range ipstrTests {
|
||||
s1 := ipStr(tc.d)
|
||||
if s1 != tc.s {
|
||||
t.Errorf("Incorrect ipstr %q != %q", tc.s, s1)
|
||||
}
|
||||
}
|
||||
}
|
||||
discover/packets.go (new file, 39 lines)
package discover

const (
    AnnouncementMagicV1 = 0x20121025
    QueryMagicV1        = 0x19760309
)

type QueryV1 struct {
    Magic  uint32
    NodeID string // max:64
}

type AnnounceV1 struct {
    Magic  uint32
    Port   uint16
    NodeID string // max:64
    IP     []byte // max:16
}

const (
    AnnouncementMagicV2 = 0x029E4C77
    QueryMagicV2        = 0x23D63A9A
)

type QueryV2 struct {
    Magic  uint32
    NodeID string // max:64
}

type AnnounceV2 struct {
    Magic     uint32
    NodeID    string    // max:64
    Addresses []Address // max:16
}

type Address struct {
    IP   []byte // max:16
    Port uint16
}
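A short sketch of how these types are used together with the XDR helpers that follow in packets_xdr.go; the node ID, IP, and port values are examples only:

    pkt := AnnounceV2{
        Magic:  AnnouncementMagicV2,
        NodeID: "EXAMPLENODEID",
        Addresses: []Address{
            {IP: nil, Port: 22000},                    // empty IP: "use the packet's source address"
            {IP: []byte{192, 168, 0, 1}, Port: 22000}, // explicit IPv4 address
        },
    }
    buf := pkt.MarshalXDR()

    var back AnnounceV2
    if err := back.UnmarshalXDR(buf); err != nil {
        // a decode error here indicates malformed or oversized data
    }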
discover/packets_xdr.go (new file, 220 lines)
|
||||
package discover
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"io"
|
||||
|
||||
"github.com/calmh/syncthing/xdr"
|
||||
)
|
||||
|
||||
func (o QueryV1) EncodeXDR(w io.Writer) (int, error) {
|
||||
var xw = xdr.NewWriter(w)
|
||||
return o.encodeXDR(xw)
|
||||
}
|
||||
|
||||
func (o QueryV1) MarshalXDR() []byte {
|
||||
var buf bytes.Buffer
|
||||
var xw = xdr.NewWriter(&buf)
|
||||
o.encodeXDR(xw)
|
||||
return buf.Bytes()
|
||||
}
|
||||
|
||||
func (o QueryV1) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
xw.WriteUint32(o.Magic)
|
||||
if len(o.NodeID) > 64 {
|
||||
return xw.Tot(), xdr.ErrElementSizeExceeded
|
||||
}
|
||||
xw.WriteString(o.NodeID)
|
||||
return xw.Tot(), xw.Error()
|
||||
}
|
||||
|
||||
func (o *QueryV1) DecodeXDR(r io.Reader) error {
|
||||
xr := xdr.NewReader(r)
|
||||
return o.decodeXDR(xr)
|
||||
}
|
||||
|
||||
func (o *QueryV1) UnmarshalXDR(bs []byte) error {
|
||||
var buf = bytes.NewBuffer(bs)
|
||||
var xr = xdr.NewReader(buf)
|
||||
return o.decodeXDR(xr)
|
||||
}
|
||||
|
||||
func (o *QueryV1) decodeXDR(xr *xdr.Reader) error {
|
||||
o.Magic = xr.ReadUint32()
|
||||
o.NodeID = xr.ReadStringMax(64)
|
||||
return xr.Error()
|
||||
}
|
||||
|
||||
func (o AnnounceV1) EncodeXDR(w io.Writer) (int, error) {
|
||||
var xw = xdr.NewWriter(w)
|
||||
return o.encodeXDR(xw)
|
||||
}
|
||||
|
||||
func (o AnnounceV1) MarshalXDR() []byte {
|
||||
var buf bytes.Buffer
|
||||
var xw = xdr.NewWriter(&buf)
|
||||
o.encodeXDR(xw)
|
||||
return buf.Bytes()
|
||||
}
|
||||
|
||||
func (o AnnounceV1) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
xw.WriteUint32(o.Magic)
|
||||
xw.WriteUint16(o.Port)
|
||||
if len(o.NodeID) > 64 {
|
||||
return xw.Tot(), xdr.ErrElementSizeExceeded
|
||||
}
|
||||
xw.WriteString(o.NodeID)
|
||||
if len(o.IP) > 16 {
|
||||
return xw.Tot(), xdr.ErrElementSizeExceeded
|
||||
}
|
||||
xw.WriteBytes(o.IP)
|
||||
return xw.Tot(), xw.Error()
|
||||
}
|
||||
|
||||
func (o *AnnounceV1) DecodeXDR(r io.Reader) error {
|
||||
xr := xdr.NewReader(r)
|
||||
return o.decodeXDR(xr)
|
||||
}
|
||||
|
||||
func (o *AnnounceV1) UnmarshalXDR(bs []byte) error {
|
||||
var buf = bytes.NewBuffer(bs)
|
||||
var xr = xdr.NewReader(buf)
|
||||
return o.decodeXDR(xr)
|
||||
}
|
||||
|
||||
func (o *AnnounceV1) decodeXDR(xr *xdr.Reader) error {
|
||||
o.Magic = xr.ReadUint32()
|
||||
o.Port = xr.ReadUint16()
|
||||
o.NodeID = xr.ReadStringMax(64)
|
||||
o.IP = xr.ReadBytesMax(16)
|
||||
return xr.Error()
|
||||
}
|
||||
|
||||
func (o QueryV2) EncodeXDR(w io.Writer) (int, error) {
|
||||
var xw = xdr.NewWriter(w)
|
||||
return o.encodeXDR(xw)
|
||||
}
|
||||
|
||||
func (o QueryV2) MarshalXDR() []byte {
|
||||
var buf bytes.Buffer
|
||||
var xw = xdr.NewWriter(&buf)
|
||||
o.encodeXDR(xw)
|
||||
return buf.Bytes()
|
||||
}
|
||||
|
||||
func (o QueryV2) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
xw.WriteUint32(o.Magic)
|
||||
if len(o.NodeID) > 64 {
|
||||
return xw.Tot(), xdr.ErrElementSizeExceeded
|
||||
}
|
||||
xw.WriteString(o.NodeID)
|
||||
return xw.Tot(), xw.Error()
|
||||
}
|
||||
|
||||
func (o *QueryV2) DecodeXDR(r io.Reader) error {
|
||||
xr := xdr.NewReader(r)
|
||||
return o.decodeXDR(xr)
|
||||
}
|
||||
|
||||
func (o *QueryV2) UnmarshalXDR(bs []byte) error {
|
||||
var buf = bytes.NewBuffer(bs)
|
||||
var xr = xdr.NewReader(buf)
|
||||
return o.decodeXDR(xr)
|
||||
}
|
||||
|
||||
func (o *QueryV2) decodeXDR(xr *xdr.Reader) error {
|
||||
o.Magic = xr.ReadUint32()
|
||||
o.NodeID = xr.ReadStringMax(64)
|
||||
return xr.Error()
|
||||
}
|
||||
|
||||
func (o AnnounceV2) EncodeXDR(w io.Writer) (int, error) {
|
||||
var xw = xdr.NewWriter(w)
|
||||
return o.encodeXDR(xw)
|
||||
}
|
||||
|
||||
func (o AnnounceV2) MarshalXDR() []byte {
|
||||
var buf bytes.Buffer
|
||||
var xw = xdr.NewWriter(&buf)
|
||||
o.encodeXDR(xw)
|
||||
return buf.Bytes()
|
||||
}
|
||||
|
||||
func (o AnnounceV2) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
xw.WriteUint32(o.Magic)
|
||||
if len(o.NodeID) > 64 {
|
||||
return xw.Tot(), xdr.ErrElementSizeExceeded
|
||||
}
|
||||
xw.WriteString(o.NodeID)
|
||||
if len(o.Addresses) > 16 {
|
||||
return xw.Tot(), xdr.ErrElementSizeExceeded
|
||||
}
|
||||
xw.WriteUint32(uint32(len(o.Addresses)))
|
||||
for i := range o.Addresses {
|
||||
o.Addresses[i].encodeXDR(xw)
|
||||
}
|
||||
return xw.Tot(), xw.Error()
|
||||
}
|
||||
|
||||
func (o *AnnounceV2) DecodeXDR(r io.Reader) error {
|
||||
xr := xdr.NewReader(r)
|
||||
return o.decodeXDR(xr)
|
||||
}
|
||||
|
||||
func (o *AnnounceV2) UnmarshalXDR(bs []byte) error {
|
||||
var buf = bytes.NewBuffer(bs)
|
||||
var xr = xdr.NewReader(buf)
|
||||
return o.decodeXDR(xr)
|
||||
}
|
||||
|
||||
func (o *AnnounceV2) decodeXDR(xr *xdr.Reader) error {
|
||||
o.Magic = xr.ReadUint32()
|
||||
o.NodeID = xr.ReadStringMax(64)
|
||||
_AddressesSize := int(xr.ReadUint32())
|
||||
if _AddressesSize > 16 {
|
||||
return xdr.ErrElementSizeExceeded
|
||||
}
|
||||
o.Addresses = make([]Address, _AddressesSize)
|
||||
for i := range o.Addresses {
|
||||
(&o.Addresses[i]).decodeXDR(xr)
|
||||
}
|
||||
return xr.Error()
|
||||
}
|
||||
|
||||
func (o Address) EncodeXDR(w io.Writer) (int, error) {
|
||||
var xw = xdr.NewWriter(w)
|
||||
return o.encodeXDR(xw)
|
||||
}
|
||||
|
||||
func (o Address) MarshalXDR() []byte {
|
||||
var buf bytes.Buffer
|
||||
var xw = xdr.NewWriter(&buf)
|
||||
o.encodeXDR(xw)
|
||||
return buf.Bytes()
|
||||
}
|
||||
|
||||
func (o Address) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
if len(o.IP) > 16 {
|
||||
return xw.Tot(), xdr.ErrElementSizeExceeded
|
||||
}
|
||||
xw.WriteBytes(o.IP)
|
||||
xw.WriteUint16(o.Port)
|
||||
return xw.Tot(), xw.Error()
|
||||
}
|
||||
|
||||
func (o *Address) DecodeXDR(r io.Reader) error {
|
||||
xr := xdr.NewReader(r)
|
||||
return o.decodeXDR(xr)
|
||||
}
|
||||
|
||||
func (o *Address) UnmarshalXDR(bs []byte) error {
|
||||
var buf = bytes.NewBuffer(bs)
|
||||
var xr = xdr.NewReader(buf)
|
||||
return o.decodeXDR(xr)
|
||||
}
|
||||
|
||||
func (o *Address) decodeXDR(xr *xdr.Reader) error {
|
||||
o.IP = xr.ReadBytesMax(16)
|
||||
o.Port = xr.ReadUint16()
|
||||
return xr.Error()
|
||||
}
|
||||
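For orientation, a hedged usage sketch (not part of this changeset) that round-trips an AnnounceV2 through the generated codec above; the import path is an assumption based on the repository layout:

package main

import (
	"fmt"
	"log"

	"github.com/calmh/syncthing/discover" // assumed import path
)

func main() {
	pkt := discover.AnnounceV2{
		Magic:  discover.AnnouncementMagicV2,
		NodeID: "NODE1",
		Addresses: []discover.Address{
			{IP: []byte{192, 168, 0, 34}, Port: 22000},
		},
	}

	// Encode to the XDR wire format...
	bs := pkt.MarshalXDR()

	// ...and decode it back. The "// max:64" and "// max:16" annotations
	// above become hard limits in the generated code.
	var back discover.AnnounceV2
	if err := back.UnmarshalXDR(bs); err != nil {
		log.Fatal(err)
	}
	fmt.Println(back.NodeID, back.Addresses)
}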
174
files/set.go
Normal file
@@ -0,0 +1,174 @@
|
||||
// Package fileset provides a set type to track local/remote files with newness checks.
|
||||
package fileset
|
||||
|
||||
import "sync"
|
||||
|
||||
type File struct {
|
||||
Key Key
|
||||
Modified int64
|
||||
Flags uint32
|
||||
Data interface{}
|
||||
}
|
||||
|
||||
type Key struct {
|
||||
Name string
|
||||
Version uint32
|
||||
}
|
||||
|
||||
type fileRecord struct {
|
||||
Usage int
|
||||
File File
|
||||
}
|
||||
|
||||
type bitset uint64
|
||||
|
||||
func (a Key) newerThan(b Key) bool {
|
||||
return a.Version > b.Version
|
||||
}
|
||||
|
||||
type Set struct {
|
||||
mutex sync.RWMutex
|
||||
files map[Key]fileRecord
|
||||
remoteKey [64]map[string]Key
|
||||
globalAvailability map[string]bitset
|
||||
globalKey map[string]Key
|
||||
}
|
||||
|
||||
func NewSet() *Set {
|
||||
var m = Set{
|
||||
files: make(map[Key]fileRecord),
|
||||
globalAvailability: make(map[string]bitset),
|
||||
globalKey: make(map[string]Key),
|
||||
}
|
||||
return &m
|
||||
}
|
||||
|
||||
func (m *Set) AddLocal(fs []File) {
|
||||
m.mutex.Lock()
|
||||
m.unlockedAddRemote(0, fs)
|
||||
m.mutex.Unlock()
|
||||
}
|
||||
|
||||
func (m *Set) SetLocal(fs []File) {
|
||||
m.mutex.Lock()
|
||||
m.unlockedSetRemote(0, fs)
|
||||
m.mutex.Unlock()
|
||||
}
|
||||
|
||||
func (m *Set) AddRemote(cid uint, fs []File) {
|
||||
if cid < 1 || cid > 63 {
|
||||
panic("Connection ID must be in the range 1 - 63 inclusive")
|
||||
}
|
||||
m.mutex.Lock()
|
||||
m.unlockedAddRemote(cid, fs)
|
||||
m.mutex.Unlock()
|
||||
}
|
||||
|
||||
func (m *Set) SetRemote(cid uint, fs []File) {
|
||||
if cid < 1 || cid > 63 {
|
||||
panic("Connection ID must be in the range 1 - 63 inclusive")
|
||||
}
|
||||
m.mutex.Lock()
|
||||
m.unlockedSetRemote(cid, fs)
|
||||
m.mutex.Unlock()
|
||||
}
|
||||
|
||||
func (m *Set) unlockedAddRemote(cid uint, fs []File) {
|
||||
remFiles := m.remoteKey[cid]
|
||||
for _, f := range fs {
|
||||
n := f.Key.Name
|
||||
|
||||
if ck, ok := remFiles[n]; ok && ck == f.Key {
|
||||
// The remote already has exactly this file, skip it
|
||||
continue
|
||||
}
|
||||
|
||||
remFiles[n] = f.Key
|
||||
|
||||
// Keep the block list or increment the usage
|
||||
if br, ok := m.files[f.Key]; !ok {
|
||||
m.files[f.Key] = fileRecord{
|
||||
Usage: 1,
|
||||
File: f,
|
||||
}
|
||||
} else {
|
||||
br.Usage++
|
||||
m.files[f.Key] = br
|
||||
}
|
||||
|
||||
// Update global view
|
||||
gk, ok := m.globalKey[n]
|
||||
switch {
|
||||
case ok && f.Key == gk:
|
||||
av := m.globalAvailability[n]
|
||||
av |= 1 << cid
|
||||
m.globalAvailability[n] = av
|
||||
case f.Key.newerThan(gk):
|
||||
m.globalKey[n] = f.Key
|
||||
m.globalAvailability[n] = 1 << cid
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (m *Set) unlockedSetRemote(cid uint, fs []File) {
|
||||
// Decrement usage for all files belonging to this remote, and remove
|
||||
// those that are no longer needed.
|
||||
for _, fk := range m.remoteKey[cid] {
|
||||
br, ok := m.files[fk]
|
||||
switch {
|
||||
case ok && br.Usage == 1:
|
||||
delete(m.files, fk)
|
||||
case ok && br.Usage > 1:
|
||||
br.Usage--
|
||||
m.files[fk] = br
|
||||
}
|
||||
}
|
||||
|
||||
// Clear existing remote remoteKey
|
||||
m.remoteKey[cid] = make(map[string]Key)
|
||||
|
||||
// Recalculate global based on all remaining remoteKey
|
||||
for n := range m.globalKey {
|
||||
var nk Key // newest key
|
||||
var na bitset // newest availability
|
||||
|
||||
for i, rem := range m.remoteKey {
|
||||
if rk, ok := rem[n]; ok {
|
||||
switch {
|
||||
case rk == nk:
|
||||
na |= 1 << uint(i)
|
||||
case rk.newerThan(nk):
|
||||
nk = rk
|
||||
na = 1 << uint(i)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if na != 0 {
|
||||
// Someone had the file
|
||||
m.globalKey[n] = nk
|
||||
m.globalAvailability[n] = na
|
||||
} else {
|
||||
// No one had the file
|
||||
delete(m.globalKey, n)
|
||||
delete(m.globalAvailability, n)
|
||||
}
|
||||
}
|
||||
|
||||
// Add new remote remoteKey to the mix
|
||||
m.unlockedAddRemote(cid, fs)
|
||||
}
|
||||
|
||||
func (m *Set) Need(cid uint) []File {
|
||||
var fs []File
|
||||
m.mutex.Lock()
|
||||
|
||||
for name, gk := range m.globalKey {
|
||||
if gk.newerThan(m.remoteKey[cid][name]) {
|
||||
fs = append(fs, m.files[gk].File)
|
||||
}
|
||||
}
|
||||
|
||||
m.mutex.Unlock()
|
||||
return fs
|
||||
}
|
||||
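A small hedged sketch of how the set might be driven (illustration only, not code from this changeset); the import path is assumed from the files/ directory and aliased because the file declares package fileset:

package main

import (
	"fmt"

	fileset "github.com/calmh/syncthing/files" // assumed path; package name is fileset
)

func main() {
	s := fileset.NewSet()

	// Connection ID 0 is the local node; remotes use IDs 1-63.
	s.SetLocal([]fileset.File{
		{Key: fileset.Key{Name: "a", Version: 1000}},
		{Key: fileset.Key{Name: "b", Version: 1000}},
	})
	s.SetRemote(1, []fileset.File{
		{Key: fileset.Key{Name: "a", Version: 1001}}, // newer than ours
		{Key: fileset.Key{Name: "c", Version: 1000}}, // missing locally
	})

	// Need lists files where the global (newest) version is ahead of
	// what the given connection has announced.
	for _, f := range s.Need(0) {
		fmt.Println(f.Key.Name, f.Key.Version)
	}
}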
207
files/set_test.go
Normal file
@@ -0,0 +1,207 @@
|
||||
package fileset
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"reflect"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestGlobalSet(t *testing.T) {
|
||||
m := NewSet()
|
||||
|
||||
local := []File{
|
||||
File{Key{"a", 1000}, 0, 0, nil},
|
||||
File{Key{"b", 1000}, 0, 0, nil},
|
||||
File{Key{"c", 1000}, 0, 0, nil},
|
||||
File{Key{"d", 1000}, 0, 0, nil},
|
||||
}
|
||||
|
||||
remote := []File{
|
||||
File{Key{"a", 1000}, 0, 0, nil},
|
||||
File{Key{"b", 1001}, 0, 0, nil},
|
||||
File{Key{"c", 1002}, 0, 0, nil},
|
||||
File{Key{"e", 1000}, 0, 0, nil},
|
||||
}
|
||||
|
||||
expectedGlobal := map[string]Key{
|
||||
"a": local[0].Key,
|
||||
"b": remote[1].Key,
|
||||
"c": remote[2].Key,
|
||||
"d": local[3].Key,
|
||||
"e": remote[3].Key,
|
||||
}
|
||||
|
||||
m.SetLocal(local)
|
||||
m.SetRemote(1, remote)
|
||||
|
||||
if !reflect.DeepEqual(m.globalKey, expectedGlobal) {
|
||||
t.Errorf("Global incorrect;\n%v !=\n%v", m.globalKey, expectedGlobal)
|
||||
}
|
||||
|
||||
if lb := len(m.files); lb != 7 {
|
||||
t.Errorf("Num files incorrect %d != 7\n%v", lb, m.files)
|
||||
}
|
||||
}
|
||||
|
||||
func BenchmarkSetLocal10k(b *testing.B) {
|
||||
m := NewSet()
|
||||
|
||||
var local []File
|
||||
for i := 0; i < 10000; i++ {
|
||||
local = append(local, File{Key{fmt.Sprintf("file%d", i), 1000}, 0, 0, nil})
|
||||
}
|
||||
|
||||
var remote []File
|
||||
for i := 0; i < 10000; i++ {
|
||||
remote = append(remote, File{Key{fmt.Sprintf("file%d", i), 1000}, 0, 0, nil})
|
||||
}
|
||||
|
||||
m.SetRemote(1, remote)
|
||||
|
||||
b.ResetTimer()
|
||||
for i := 0; i < b.N; i++ {
|
||||
m.SetLocal(local)
|
||||
}
|
||||
}
|
||||
|
||||
func BenchmarkSetLocal10(b *testing.B) {
|
||||
m := NewSet()
|
||||
|
||||
var local []File
|
||||
for i := 0; i < 10; i++ {
|
||||
local = append(local, File{Key{fmt.Sprintf("file%d", i), 1000}, 0, 0, nil})
|
||||
}
|
||||
|
||||
var remote []File
|
||||
for i := 0; i < 10000; i++ {
|
||||
remote = append(remote, File{Key{fmt.Sprintf("file%d", i), 1000}, 0, 0, nil})
|
||||
}
|
||||
|
||||
m.SetRemote(1, remote)
|
||||
|
||||
b.ResetTimer()
|
||||
for i := 0; i < b.N; i++ {
|
||||
m.SetLocal(local)
|
||||
}
|
||||
}
|
||||
|
||||
func BenchmarkAddLocal10k(b *testing.B) {
|
||||
m := NewSet()
|
||||
|
||||
var local []File
|
||||
for i := 0; i < 10000; i++ {
|
||||
local = append(local, File{Key{fmt.Sprintf("file%d", i), 1000}, 0, 0, nil})
|
||||
}
|
||||
|
||||
var remote []File
|
||||
for i := 0; i < 10000; i++ {
|
||||
remote = append(remote, File{Key{fmt.Sprintf("file%d", i), 1000}, 0, 0, nil})
|
||||
}
|
||||
|
||||
m.SetRemote(1, remote)
|
||||
m.SetLocal(local)
|
||||
|
||||
b.ResetTimer()
|
||||
for i := 0; i < b.N; i++ {
|
||||
b.StopTimer()
|
||||
for j := range local {
|
||||
local[j].Key.Version++
|
||||
}
|
||||
b.StartTimer()
|
||||
m.AddLocal(local)
|
||||
}
|
||||
}
|
||||
|
||||
func BenchmarkAddLocal10(b *testing.B) {
|
||||
m := NewSet()
|
||||
|
||||
var local []File
|
||||
for i := 0; i < 10; i++ {
|
||||
local = append(local, File{Key{fmt.Sprintf("file%d", i), 1000}, 0, 0, nil})
|
||||
}
|
||||
|
||||
var remote []File
|
||||
for i := 0; i < 10000; i++ {
|
||||
remote = append(remote, File{Key{fmt.Sprintf("file%d", i), 1000}, 0, 0, nil})
|
||||
}
|
||||
|
||||
m.SetRemote(1, remote)
|
||||
m.SetLocal(local)
|
||||
|
||||
b.ResetTimer()
|
||||
for i := 0; i < b.N; i++ {
|
||||
for j := range local {
|
||||
local[j].Key.Version++
|
||||
}
|
||||
m.AddLocal(local)
|
||||
}
|
||||
}
|
||||
|
||||
func TestGlobalReset(t *testing.T) {
|
||||
m := NewSet()
|
||||
|
||||
local := []File{
|
||||
File{Key{"a", 1000}, 0, 0, nil},
|
||||
File{Key{"b", 1000}, 0, 0, nil},
|
||||
File{Key{"c", 1000}, 0, 0, nil},
|
||||
File{Key{"d", 1000}, 0, 0, nil},
|
||||
}
|
||||
|
||||
remote := []File{
|
||||
File{Key{"a", 1000}, 0, 0, nil},
|
||||
File{Key{"b", 1001}, 0, 0, nil},
|
||||
File{Key{"c", 1002}, 0, 0, nil},
|
||||
File{Key{"e", 1000}, 0, 0, nil},
|
||||
}
|
||||
|
||||
expectedGlobalKey := map[string]Key{
|
||||
"a": local[0].Key,
|
||||
"b": local[1].Key,
|
||||
"c": local[2].Key,
|
||||
"d": local[3].Key,
|
||||
}
|
||||
|
||||
m.SetLocal(local)
|
||||
m.SetRemote(1, remote)
|
||||
m.SetRemote(1, nil)
|
||||
|
||||
if !reflect.DeepEqual(m.globalKey, expectedGlobalKey) {
|
||||
t.Errorf("Global incorrect;\n%v !=\n%v", m.globalKey, expectedGlobalKey)
|
||||
}
|
||||
|
||||
if lb := len(m.files); lb != 4 {
|
||||
t.Errorf("Num files incorrect %d != 4\n%v", lb, m.files)
|
||||
}
|
||||
}
|
||||
|
||||
func TestNeed(t *testing.T) {
|
||||
m := NewSet()
|
||||
|
||||
local := []File{
|
||||
File{Key{"a", 1000}, 0, 0, nil},
|
||||
File{Key{"b", 1000}, 0, 0, nil},
|
||||
File{Key{"c", 1000}, 0, 0, nil},
|
||||
File{Key{"d", 1000}, 0, 0, nil},
|
||||
}
|
||||
|
||||
remote := []File{
|
||||
File{Key{"a", 1000}, 0, 0, nil},
|
||||
File{Key{"b", 1001}, 0, 0, nil},
|
||||
File{Key{"c", 1002}, 0, 0, nil},
|
||||
File{Key{"e", 1000}, 0, 0, nil},
|
||||
}
|
||||
|
||||
shouldNeed := []File{
|
||||
File{Key{"b", 1001}, 0, 0, nil},
|
||||
File{Key{"c", 1002}, 0, 0, nil},
|
||||
File{Key{"e", 1000}, 0, 0, nil},
|
||||
}
|
||||
|
||||
m.SetLocal(local)
|
||||
m.SetRemote(1, remote)
|
||||
|
||||
need := m.Need(0)
|
||||
if !reflect.DeepEqual(need, shouldNeed) {
|
||||
t.Errorf("Need incorrect;\n%v !=\n%v", need, shouldNeed)
|
||||
}
|
||||
}
|
||||
@@ -1,26 +0,0 @@
|
||||
Copyright (c) 2012 Jesse van den Kieboom. All rights reserved.
|
||||
Redistribution and use in source and binary forms, with or without
|
||||
modification, are permitted provided that the following conditions are
|
||||
met:
|
||||
|
||||
* Redistributions of source code must retain the above copyright
|
||||
notice, this list of conditions and the following disclaimer.
|
||||
* Redistributions in binary form must reproduce the above
|
||||
copyright notice, this list of conditions and the following disclaimer
|
||||
in the documentation and/or other materials provided with the
|
||||
distribution.
|
||||
* Neither the name of Google Inc. nor the names of its
|
||||
contributors may be used to endorse or promote products derived from
|
||||
this software without specific prior written permission.
|
||||
|
||||
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
|
||||
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
|
||||
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
|
||||
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
|
||||
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
|
||||
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
|
||||
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
@@ -1,128 +0,0 @@
|
||||
go-flags: a go library for parsing command line arguments
|
||||
=========================================================
|
||||
|
||||
This library provides similar functionality to the builtin flag library of
|
||||
go, but provides much more functionality and nicer formatting. From the
|
||||
documentation:
|
||||
|
||||
Package flags provides an extensive command line option parser.
|
||||
The flags package is similar in functionality to the go builtin flag package
|
||||
but provides more options and uses reflection to provide a convenient and
|
||||
succinct way of specifying command line options.
|
||||
|
||||
Supported features:
|
||||
* Options with short names (-v)
|
||||
* Options with long names (--verbose)
|
||||
* Options with and without arguments (bool v.s. other type)
|
||||
* Options with optional arguments and default values
|
||||
* Multiple option groups each containing a set of options
|
||||
* Generate and print well-formatted help message
|
||||
* Passing remaining command line arguments after -- (optional)
|
||||
* Ignoring unknown command line options (optional)
|
||||
* Supports -I/usr/include -I=/usr/include -I /usr/include option argument specification
|
||||
* Supports multiple short options -aux
|
||||
* Supports all primitive go types (string, int{8..64}, uint{8..64}, float)
|
||||
* Supports same option multiple times (can store in slice or last option counts)
|
||||
* Supports maps
|
||||
* Supports function callbacks
|
||||
|
||||
The flags package uses structs, reflection and struct field tags
|
||||
to allow users to specify command line options. This results in very simple
|
||||
and concise specification of your application options. For example:
|
||||
|
||||
type Options struct {
|
||||
Verbose []bool `short:"v" long:"verbose" description:"Show verbose debug information"`
|
||||
}
|
||||
|
||||
This specifies one option with a short name -v and a long name --verbose.
|
||||
When either -v or --verbose is found on the command line, a 'true' value
|
||||
will be appended to the Verbose field. e.g. when specifying -vvv, the
|
||||
resulting value of Verbose will be {[true, true, true]}.
|
||||
|
||||
Example:
|
||||
--------
|
||||
var opts struct {
|
||||
// Slice of bool will append 'true' each time the option
|
||||
// is encountered (can be set multiple times, like -vvv)
|
||||
Verbose []bool `short:"v" long:"verbose" description:"Show verbose debug information"`
|
||||
|
||||
// Example of automatic marshalling to desired type (uint)
|
||||
Offset uint `long:"offset" description:"Offset"`
|
||||
|
||||
// Example of a callback, called each time the option is found.
|
||||
Call func(string) `short:"c" description:"Call phone number"`
|
||||
|
||||
// Example of a required flag
|
||||
Name string `short:"n" long:"name" description:"A name" required:"true"`
|
||||
|
||||
// Example of a value name
|
||||
File string `short:"f" long:"file" description:"A file" value-name:"FILE"`
|
||||
|
||||
// Example of a pointer
|
||||
Ptr *int `short:"p" description:"A pointer to an integer"`
|
||||
|
||||
// Example of a slice of strings
|
||||
StringSlice []string `short:"s" description:"A slice of strings"`
|
||||
|
||||
// Example of a slice of pointers
|
||||
PtrSlice []*string `long:"ptrslice" description:"A slice of pointers to string"`
|
||||
|
||||
// Example of a map
|
||||
IntMap map[string]int `long:"intmap" description:"A map from string to int"`
|
||||
}
|
||||
|
||||
// Callback which will invoke callto:<argument> to call a number.
|
||||
// Note that this works just on OS X (and probably only with
|
||||
// Skype) but it shows the idea.
|
||||
opts.Call = func(num string) {
|
||||
cmd := exec.Command("open", "callto:"+num)
|
||||
cmd.Start()
|
||||
cmd.Process.Release()
|
||||
}
|
||||
|
||||
// Make some fake arguments to parse.
|
||||
args := []string{
|
||||
"-vv",
|
||||
"--offset=5",
|
||||
"-n", "Me",
|
||||
"-p", "3",
|
||||
"-s", "hello",
|
||||
"-s", "world",
|
||||
"--ptrslice", "hello",
|
||||
"--ptrslice", "world",
|
||||
"--intmap", "a:1",
|
||||
"--intmap", "b:5",
|
||||
"arg1",
|
||||
"arg2",
|
||||
"arg3",
|
||||
}
|
||||
|
||||
// Parse flags from `args'. Note that here we use flags.ParseArgs for
|
||||
// the sake of making a working example. Normally, you would simply use
|
||||
// flags.Parse(&opts) which uses os.Args
|
||||
args, err := flags.ParseArgs(&opts, args)
|
||||
|
||||
if err != nil {
|
||||
panic(err)
|
||||
os.Exit(1)
|
||||
}
|
||||
|
||||
fmt.Printf("Verbosity: %v\n", opts.Verbose)
|
||||
fmt.Printf("Offset: %d\n", opts.Offset)
|
||||
fmt.Printf("Name: %s\n", opts.Name)
|
||||
fmt.Printf("Ptr: %d\n", *opts.Ptr)
|
||||
fmt.Printf("StringSlice: %v\n", opts.StringSlice)
|
||||
fmt.Printf("PtrSlice: [%v %v]\n", *opts.PtrSlice[0], *opts.PtrSlice[1])
|
||||
fmt.Printf("IntMap: [a:%v b:%v]\n", opts.IntMap["a"], opts.IntMap["b"])
|
||||
fmt.Printf("Remaining args: %s\n", strings.Join(args, " "))
|
||||
|
||||
// Output: Verbosity: [true true]
|
||||
// Offset: 5
|
||||
// Name: Me
|
||||
// Ptr: 3
|
||||
// StringSlice: [hello world]
|
||||
// PtrSlice: [hello world]
|
||||
// IntMap: [a:1 b:5]
|
||||
// Remaining args: arg1 arg2 arg3
|
||||
|
||||
More information can be found in the godocs: <http://godoc.org/github.com/jessevdk/go-flags>
|
||||
@@ -1,82 +0,0 @@
|
||||
package flags
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func assertString(t *testing.T, a string, b string) {
|
||||
if a != b {
|
||||
t.Errorf("Expected %#v, but got %#v", b, a)
|
||||
}
|
||||
}
|
||||
func assertStringArray(t *testing.T, a []string, b []string) {
|
||||
if len(a) != len(b) {
|
||||
t.Errorf("Expected %#v, but got %#v", b, a)
|
||||
return
|
||||
}
|
||||
|
||||
for i, v := range a {
|
||||
if b[i] != v {
|
||||
t.Errorf("Expected %#v, but got %#v", b, a)
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func assertBoolArray(t *testing.T, a []bool, b []bool) {
|
||||
if len(a) != len(b) {
|
||||
t.Errorf("Expected %#v, but got %#v", b, a)
|
||||
return
|
||||
}
|
||||
|
||||
for i, v := range a {
|
||||
if b[i] != v {
|
||||
t.Errorf("Expected %#v, but got %#v", b, a)
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func assertParserSuccess(t *testing.T, data interface{}, args ...string) (*Parser, []string) {
|
||||
parser := NewParser(data, Default&^PrintErrors)
|
||||
ret, err := parser.ParseArgs(args)
|
||||
|
||||
if err != nil {
|
||||
t.Fatalf("Unexpected parse error: %s", err)
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
return parser, ret
|
||||
}
|
||||
|
||||
func assertParseSuccess(t *testing.T, data interface{}, args ...string) []string {
|
||||
_, ret := assertParserSuccess(t, data, args...)
|
||||
return ret
|
||||
}
|
||||
|
||||
func assertError(t *testing.T, err error, typ ErrorType, msg string) {
|
||||
if err == nil {
|
||||
t.Fatalf("Expected error: %s", msg)
|
||||
return
|
||||
}
|
||||
|
||||
if e, ok := err.(*Error); !ok {
|
||||
t.Fatalf("Expected Error type, but got %#v", err)
|
||||
return
|
||||
} else {
|
||||
if e.Type != typ {
|
||||
t.Errorf("Expected error type {%s}, but got {%s}", typ, e.Type)
|
||||
}
|
||||
|
||||
if e.Message != msg {
|
||||
t.Errorf("Expected error message %#v, but got %#v", msg, e.Message)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func assertParseFail(t *testing.T, typ ErrorType, msg string, data interface{}, args ...string) {
|
||||
parser := NewParser(data, Default&^PrintErrors)
|
||||
_, err := parser.ParseArgs(args)
|
||||
|
||||
assertError(t, err, typ, msg)
|
||||
}
|
||||
@@ -1,16 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
set -e
|
||||
|
||||
echo '# linux arm7'
|
||||
GOARM=7 GOARCH=arm GOOS=linux go build
|
||||
echo '# linux arm5'
|
||||
GOARM=5 GOARCH=arm GOOS=linux go build
|
||||
echo '# windows 386'
|
||||
GOARCH=386 GOOS=windows go build
|
||||
echo '# windows amd64'
|
||||
GOARCH=amd64 GOOS=windows go build
|
||||
echo '# darwin'
|
||||
GOARCH=amd64 GOOS=darwin go build
|
||||
echo '# freebsd'
|
||||
GOARCH=amd64 GOOS=freebsd go build
|
||||
@@ -1,61 +0,0 @@
|
||||
package flags
|
||||
|
||||
func levenshtein(s string, t string) int {
|
||||
if len(s) == 0 {
|
||||
return len(t)
|
||||
}
|
||||
|
||||
if len(t) == 0 {
|
||||
return len(s)
|
||||
}
|
||||
|
||||
var l1, l2, l3 int
|
||||
|
||||
if len(s) == 1 {
|
||||
l1 = len(t) + 1
|
||||
} else {
|
||||
l1 = levenshtein(s[1:len(s)-1], t) + 1
|
||||
}
|
||||
|
||||
if len(t) == 1 {
|
||||
l2 = len(s) + 1
|
||||
} else {
|
||||
l2 = levenshtein(t[1:len(t)-1], s) + 1
|
||||
}
|
||||
|
||||
l3 = levenshtein(s[1:len(s)], t[1:len(t)])
|
||||
|
||||
if s[0] != t[0] {
|
||||
l3 += 1
|
||||
}
|
||||
|
||||
if l2 < l1 {
|
||||
l1 = l2
|
||||
}
|
||||
|
||||
if l1 < l3 {
|
||||
return l1
|
||||
}
|
||||
|
||||
return l3
|
||||
}
|
||||
|
||||
func closestChoice(cmd string, choices []string) (string, int) {
|
||||
if len(choices) == 0 {
|
||||
return "", 0
|
||||
}
|
||||
|
||||
mincmd := -1
|
||||
mindist := -1
|
||||
|
||||
for i, c := range choices {
|
||||
l := levenshtein(cmd, c)
|
||||
|
||||
if mincmd < 0 || l < mindist {
|
||||
mindist = l
|
||||
mincmd = i
|
||||
}
|
||||
}
|
||||
|
||||
return choices[mincmd], mindist
|
||||
}
|
||||
@@ -1,84 +0,0 @@
|
||||
package flags
|
||||
|
||||
// Command represents an application command. Commands can be added to the
|
||||
// parser (which itself is a command) and are selected/executed when its name
|
||||
// is specified on the command line. The Command type embeds a Group and
|
||||
// therefore also carries a set of command specific options.
|
||||
type Command struct {
|
||||
// Embedded, see Group for more information
|
||||
*Group
|
||||
|
||||
// The name by which the command can be invoked
|
||||
Name string
|
||||
|
||||
// The active sub command (set by parsing) or nil
|
||||
Active *Command
|
||||
|
||||
commands []*Command
|
||||
hasBuiltinHelpGroup bool
|
||||
}
|
||||
|
||||
// Commander is an interface which can be implemented by any command added in
|
||||
// the options. When implemented, the Execute method will be called for the last
|
||||
// specified (sub)command providing the remaining command line arguments.
|
||||
type Commander interface {
|
||||
// Execute will be called for the last active (sub)command. The
|
||||
// args argument contains the remaining command line arguments. The
|
||||
// error that Execute returns will be eventually passed out of the
|
||||
// Parse method of the Parser.
|
||||
Execute(args []string) error
|
||||
}
|
||||
|
||||
// Usage is an interface which can be implemented to show a custom usage string
|
||||
// in the help message shown for a command.
|
||||
type Usage interface {
|
||||
// Usage is called for commands to allow customized printing of command
|
||||
// usage in the generated help message.
|
||||
Usage() string
|
||||
}
|
||||
|
||||
// AddCommand adds a new command to the parser with the given name and data. The
|
||||
// data needs to be a pointer to a struct from which the fields indicate which
|
||||
// options are in the command. The provided data can implement the Command and
|
||||
// Usage interfaces.
|
||||
func (c *Command) AddCommand(command string, shortDescription string, longDescription string, data interface{}) (*Command, error) {
|
||||
cmd := newCommand(command, shortDescription, longDescription, data)
|
||||
|
||||
if err := cmd.scan(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
c.commands = append(c.commands, cmd)
|
||||
return cmd, nil
|
||||
}
|
||||
|
||||
// AddGroup adds a new group to the command with the given name and data. The
|
||||
// data needs to be a pointer to a struct from which the fields indicate which
|
||||
// options are in the group.
|
||||
func (c *Command) AddGroup(shortDescription string, longDescription string, data interface{}) (*Group, error) {
|
||||
group := newGroup(shortDescription, longDescription, data)
|
||||
|
||||
if err := group.scanType(c.scanSubCommandHandler(group)); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
c.groups = append(c.groups, group)
|
||||
return group, nil
|
||||
}
|
||||
|
||||
// Commands returns a list of subcommands of this command.
|
||||
func (c *Command) Commands() []*Command {
|
||||
return c.commands
|
||||
}
|
||||
|
||||
// Find locates the subcommand with the given name and returns it. If no such
|
||||
// command can be found Find will return nil.
|
||||
func (c *Command) Find(name string) *Command {
|
||||
for _, cc := range c.commands {
|
||||
if cc.Name == name {
|
||||
return cc
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
@@ -1,161 +0,0 @@
|
||||
package flags
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
"sort"
|
||||
"strings"
|
||||
"unsafe"
|
||||
)
|
||||
|
||||
type lookup struct {
|
||||
shortNames map[string]*Option
|
||||
longNames map[string]*Option
|
||||
|
||||
required map[*Option]bool
|
||||
commands map[string]*Command
|
||||
}
|
||||
|
||||
func newCommand(name string, shortDescription string, longDescription string, data interface{}) *Command {
|
||||
return &Command{
|
||||
Group: newGroup(shortDescription, longDescription, data),
|
||||
Name: name,
|
||||
}
|
||||
}
|
||||
|
||||
func (c *Command) scanSubCommandHandler(parentg *Group) scanHandler {
|
||||
f := func(realval reflect.Value, sfield *reflect.StructField) (bool, error) {
|
||||
mtag := newMultiTag(string(sfield.Tag))
|
||||
|
||||
if err := mtag.Parse(); err != nil {
|
||||
return true, err
|
||||
}
|
||||
|
||||
subcommand := mtag.Get("command")
|
||||
|
||||
if len(subcommand) != 0 {
|
||||
ptrval := reflect.NewAt(realval.Type(), unsafe.Pointer(realval.UnsafeAddr()))
|
||||
|
||||
shortDescription := mtag.Get("description")
|
||||
longDescription := mtag.Get("long-description")
|
||||
|
||||
if _, err := c.AddCommand(subcommand, shortDescription, longDescription, ptrval.Interface()); err != nil {
|
||||
return true, err
|
||||
}
|
||||
|
||||
return true, nil
|
||||
}
|
||||
|
||||
return parentg.scanSubGroupHandler(realval, sfield)
|
||||
}
|
||||
|
||||
return f
|
||||
}
|
||||
|
||||
func (c *Command) scan() error {
|
||||
return c.scanType(c.scanSubCommandHandler(c.Group))
|
||||
}
|
||||
|
||||
func (c *Command) eachCommand(f func(*Command), recurse bool) {
|
||||
f(c)
|
||||
|
||||
for _, cc := range c.commands {
|
||||
if recurse {
|
||||
cc.eachCommand(f, true)
|
||||
} else {
|
||||
f(cc)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (c *Command) eachActiveGroup(f func(g *Group)) {
|
||||
c.eachGroup(f)
|
||||
|
||||
if c.Active != nil {
|
||||
c.Active.eachActiveGroup(f)
|
||||
}
|
||||
}
|
||||
|
||||
func (c *Command) addHelpGroups(showHelp func() error) {
|
||||
if !c.hasBuiltinHelpGroup {
|
||||
c.addHelpGroup(showHelp)
|
||||
c.hasBuiltinHelpGroup = true
|
||||
}
|
||||
|
||||
for _, cc := range c.commands {
|
||||
cc.addHelpGroups(showHelp)
|
||||
}
|
||||
}
|
||||
|
||||
func (c *Command) makeLookup() lookup {
|
||||
ret := lookup{
|
||||
shortNames: make(map[string]*Option),
|
||||
longNames: make(map[string]*Option),
|
||||
|
||||
required: make(map[*Option]bool),
|
||||
commands: make(map[string]*Command),
|
||||
}
|
||||
|
||||
c.eachGroup(func(g *Group) {
|
||||
for _, option := range g.options {
|
||||
if option.Required && option.canCli() {
|
||||
ret.required[option] = true
|
||||
}
|
||||
|
||||
if option.ShortName != 0 {
|
||||
ret.shortNames[string(option.ShortName)] = option
|
||||
}
|
||||
|
||||
if len(option.LongName) > 0 {
|
||||
ret.longNames[option.LongName] = option
|
||||
}
|
||||
}
|
||||
})
|
||||
|
||||
for _, subcommand := range c.commands {
|
||||
ret.commands[subcommand.Name] = subcommand
|
||||
}
|
||||
|
||||
return ret
|
||||
}
|
||||
|
||||
func (c *Command) groupByName(name string) *Group {
|
||||
if grp := c.Group.groupByName(name); grp != nil {
|
||||
return grp
|
||||
}
|
||||
|
||||
for _, subc := range c.commands {
|
||||
prefix := subc.Name + "."
|
||||
|
||||
if strings.HasPrefix(name, prefix) {
|
||||
if grp := subc.groupByName(name[len(prefix):]); grp != nil {
|
||||
return grp
|
||||
}
|
||||
} else if name == subc.Name {
|
||||
return subc.Group
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
type commandList []*Command
|
||||
|
||||
func (c commandList) Less(i, j int) bool {
|
||||
return c[i].Name < c[j].Name
|
||||
}
|
||||
|
||||
func (c commandList) Len() int {
|
||||
return len(c)
|
||||
}
|
||||
|
||||
func (c commandList) Swap(i, j int) {
|
||||
c[i], c[j] = c[j], c[i]
|
||||
}
|
||||
|
||||
func (c *Command) sortedCommands() []*Command {
|
||||
ret := make(commandList, len(c.commands))
|
||||
copy(ret, c.commands)
|
||||
|
||||
sort.Sort(ret)
|
||||
return []*Command(ret)
|
||||
}
|
||||
@@ -1,255 +0,0 @@
|
||||
package flags
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestCommandInline(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value bool `short:"v"`
|
||||
|
||||
Command struct {
|
||||
G bool `short:"g"`
|
||||
} `command:"cmd"`
|
||||
}{}
|
||||
|
||||
p, ret := assertParserSuccess(t, &opts, "-v", "cmd", "-g")
|
||||
|
||||
assertStringArray(t, ret, []string{})
|
||||
|
||||
if p.Active == nil {
|
||||
t.Errorf("Expected active command")
|
||||
}
|
||||
|
||||
if !opts.Value {
|
||||
t.Errorf("Expected Value to be true")
|
||||
}
|
||||
|
||||
if !opts.Command.G {
|
||||
t.Errorf("Expected Command.G to be true")
|
||||
}
|
||||
|
||||
if p.Command.Find("cmd") != p.Active {
|
||||
t.Errorf("Expected to find command `cmd' to be active")
|
||||
}
|
||||
}
|
||||
|
||||
func TestCommandInlineMulti(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value bool `short:"v"`
|
||||
|
||||
C1 struct {
|
||||
} `command:"c1"`
|
||||
|
||||
C2 struct {
|
||||
G bool `short:"g"`
|
||||
} `command:"c2"`
|
||||
}{}
|
||||
|
||||
p, ret := assertParserSuccess(t, &opts, "-v", "c2", "-g")
|
||||
|
||||
assertStringArray(t, ret, []string{})
|
||||
|
||||
if p.Active == nil {
|
||||
t.Errorf("Expected active command")
|
||||
}
|
||||
|
||||
if !opts.Value {
|
||||
t.Errorf("Expected Value to be true")
|
||||
}
|
||||
|
||||
if !opts.C2.G {
|
||||
t.Errorf("Expected C2.G to be true")
|
||||
}
|
||||
|
||||
if p.Command.Find("c1") == nil {
|
||||
t.Errorf("Expected to find command `c1'")
|
||||
}
|
||||
|
||||
if c2 := p.Command.Find("c2"); c2 == nil {
|
||||
t.Errorf("Expected to find command `c2'")
|
||||
} else if c2 != p.Active {
|
||||
t.Errorf("Expected to find command `c2' to be active")
|
||||
}
|
||||
}
|
||||
|
||||
func TestCommandFlagOrder1(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value bool `short:"v"`
|
||||
|
||||
Command struct {
|
||||
G bool `short:"g"`
|
||||
} `command:"cmd"`
|
||||
}{}
|
||||
|
||||
assertParseFail(t, ErrUnknownFlag, "unknown flag `g'", &opts, "-v", "-g", "cmd")
|
||||
}
|
||||
|
||||
func TestCommandFlagOrder2(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value bool `short:"v"`
|
||||
|
||||
Command struct {
|
||||
G bool `short:"g"`
|
||||
} `command:"cmd"`
|
||||
}{}
|
||||
|
||||
assertParseFail(t, ErrUnknownFlag, "unknown flag `v'", &opts, "cmd", "-v", "-g")
|
||||
}
|
||||
|
||||
func TestCommandEstimate(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value bool `short:"v"`
|
||||
|
||||
Cmd1 struct {
|
||||
} `command:"remove"`
|
||||
|
||||
Cmd2 struct {
|
||||
} `command:"add"`
|
||||
}{}
|
||||
|
||||
p := NewParser(&opts, None)
|
||||
_, err := p.ParseArgs([]string{})
|
||||
|
||||
assertError(t, err, ErrRequired, "Please specify one command of: add or remove")
|
||||
}
|
||||
|
||||
type testCommand struct {
|
||||
G bool `short:"g"`
|
||||
Executed bool
|
||||
EArgs []string
|
||||
}
|
||||
|
||||
func (c *testCommand) Execute(args []string) error {
|
||||
c.Executed = true
|
||||
c.EArgs = args
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func TestCommandExecute(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value bool `short:"v"`
|
||||
|
||||
Command testCommand `command:"cmd"`
|
||||
}{}
|
||||
|
||||
assertParseSuccess(t, &opts, "-v", "cmd", "-g", "a", "b")
|
||||
|
||||
if !opts.Value {
|
||||
t.Errorf("Expected Value to be true")
|
||||
}
|
||||
|
||||
if !opts.Command.Executed {
|
||||
t.Errorf("Did not execute command")
|
||||
}
|
||||
|
||||
if !opts.Command.G {
|
||||
t.Errorf("Expected Command.C to be true")
|
||||
}
|
||||
|
||||
assertStringArray(t, opts.Command.EArgs, []string{"a", "b"})
|
||||
}
|
||||
|
||||
func TestCommandClosest(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value bool `short:"v"`
|
||||
|
||||
Cmd1 struct {
|
||||
} `command:"remove"`
|
||||
|
||||
Cmd2 struct {
|
||||
} `command:"add"`
|
||||
}{}
|
||||
|
||||
assertParseFail(t, ErrRequired, "Unknown command `addd', did you mean `add'?", &opts, "-v", "addd")
|
||||
}
|
||||
|
||||
func TestCommandAdd(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value bool `short:"v"`
|
||||
}{}
|
||||
|
||||
var cmd = struct {
|
||||
G bool `short:"g"`
|
||||
}{}
|
||||
|
||||
p := NewParser(&opts, Default)
|
||||
c, err := p.AddCommand("cmd", "", "", &cmd)
|
||||
|
||||
if err != nil {
|
||||
t.Fatalf("Unexpected error: %v", err)
|
||||
return
|
||||
}
|
||||
|
||||
ret, err := p.ParseArgs([]string{"-v", "cmd", "-g", "rest"})
|
||||
|
||||
if err != nil {
|
||||
t.Fatalf("Unexpected error: %v", err)
|
||||
return
|
||||
}
|
||||
|
||||
assertStringArray(t, ret, []string{"rest"})
|
||||
|
||||
if !opts.Value {
|
||||
t.Errorf("Expected Value to be true")
|
||||
}
|
||||
|
||||
if !cmd.G {
|
||||
t.Errorf("Expected Command.G to be true")
|
||||
}
|
||||
|
||||
if p.Command.Find("cmd") != c {
|
||||
t.Errorf("Expected to find command `cmd'")
|
||||
}
|
||||
|
||||
if p.Commands()[0] != c {
|
||||
t.Errorf("Espected command #v, but got #v", c, p.Commands()[0])
|
||||
}
|
||||
|
||||
if c.Options()[0].ShortName != 'g' {
|
||||
t.Errorf("Expected short name `g' but got %v", c.Options()[0].ShortName)
|
||||
}
|
||||
}
|
||||
|
||||
func TestCommandNestedInline(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value bool `short:"v"`
|
||||
|
||||
Command struct {
|
||||
G bool `short:"g"`
|
||||
|
||||
Nested struct {
|
||||
N string `long:"n"`
|
||||
} `command:"nested"`
|
||||
} `command:"cmd"`
|
||||
}{}
|
||||
|
||||
p, ret := assertParserSuccess(t, &opts, "-v", "cmd", "-g", "nested", "--n", "n", "rest")
|
||||
|
||||
assertStringArray(t, ret, []string{"rest"})
|
||||
|
||||
if !opts.Value {
|
||||
t.Errorf("Expected Value to be true")
|
||||
}
|
||||
|
||||
if !opts.Command.G {
|
||||
t.Errorf("Expected Command.G to be true")
|
||||
}
|
||||
|
||||
assertString(t, opts.Command.Nested.N, "n")
|
||||
|
||||
if c := p.Command.Find("cmd"); c == nil {
|
||||
t.Errorf("Expected to find command `cmd'")
|
||||
} else {
|
||||
if c != p.Active {
|
||||
t.Errorf("Expected `cmd' to be the active parser command")
|
||||
}
|
||||
|
||||
if nested := c.Find("nested"); nested == nil {
|
||||
t.Errorf("Expected to find command `nested'")
|
||||
} else if nested != c.Active {
|
||||
t.Errorf("Expected to find command `nested' to be the active `cmd' command")
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -1,315 +0,0 @@
|
||||
// Copyright 2012 Jesse van den Kieboom. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package flags
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"reflect"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
)
|
||||
|
||||
// Marshaler is the interface implemented by types that can marshal themselves
|
||||
// to a string representation of the flag.
|
||||
type Marshaler interface {
|
||||
// MarshalFlag marshals a flag value to its string representation.
|
||||
MarshalFlag() (string, error)
|
||||
}
|
||||
|
||||
// Unmarshaler is the interface implemented by types that can unmarshal a flag
|
||||
// argument to themselves. The provided value is directly passed from the
|
||||
// command line.
|
||||
type Unmarshaler interface {
|
||||
// UnmarshalFlag unmarshals a string value representation to the flag
|
||||
// value (which therefore needs to be a pointer receiver).
|
||||
UnmarshalFlag(value string) error
|
||||
}
|
||||
|
||||
func getBase(options multiTag, base int) (int, error) {
|
||||
sbase := options.Get("base")
|
||||
|
||||
var err error
|
||||
var ivbase int64
|
||||
|
||||
if sbase != "" {
|
||||
ivbase, err = strconv.ParseInt(sbase, 10, 32)
|
||||
base = int(ivbase)
|
||||
}
|
||||
|
||||
return base, err
|
||||
}
|
||||
|
||||
func convertMarshal(val reflect.Value) (bool, string, error) {
|
||||
// Check first for the Marshaler interface
|
||||
if val.Type().NumMethod() > 0 && val.CanInterface() {
|
||||
if marshaler, ok := val.Interface().(Marshaler); ok {
|
||||
ret, err := marshaler.MarshalFlag()
|
||||
return true, ret, err
|
||||
}
|
||||
}
|
||||
|
||||
return false, "", nil
|
||||
}
|
||||
|
||||
func convertToString(val reflect.Value, options multiTag) (string, error) {
|
||||
if ok, ret, err := convertMarshal(val); ok {
|
||||
return ret, err
|
||||
}
|
||||
|
||||
tp := val.Type()
|
||||
|
||||
// Support for time.Duration
|
||||
if tp == reflect.TypeOf((*time.Duration)(nil)).Elem() {
|
||||
stringer := val.Interface().(fmt.Stringer)
|
||||
return stringer.String(), nil
|
||||
}
|
||||
|
||||
switch tp.Kind() {
|
||||
case reflect.String:
|
||||
return val.String(), nil
|
||||
case reflect.Bool:
|
||||
if val.Bool() {
|
||||
return "true", nil
|
||||
}
|
||||
|
||||
return "false", nil
|
||||
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
|
||||
base, _ := getBase(options, 10)
|
||||
return strconv.FormatInt(val.Int(), base), nil
|
||||
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
|
||||
base, _ := getBase(options, 10)
|
||||
return strconv.FormatUint(val.Uint(), base), nil
|
||||
case reflect.Float32, reflect.Float64:
|
||||
return strconv.FormatFloat(val.Float(), 'g', -1, tp.Bits()), nil
|
||||
case reflect.Slice:
|
||||
if val.Len() == 0 {
|
||||
return "", nil
|
||||
}
|
||||
|
||||
ret := "["
|
||||
|
||||
for i := 0; i < val.Len(); i++ {
|
||||
if i != 0 {
|
||||
ret += ", "
|
||||
}
|
||||
|
||||
item, err := convertToString(val.Index(i), options)
|
||||
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
ret += item
|
||||
}
|
||||
|
||||
return ret + "]", nil
|
||||
case reflect.Map:
|
||||
ret := "{"
|
||||
|
||||
for i, key := range val.MapKeys() {
|
||||
if i != 0 {
|
||||
ret += ", "
|
||||
}
|
||||
|
||||
item, err := convertToString(val.MapIndex(key), options)
|
||||
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
ret += item
|
||||
}
|
||||
|
||||
return ret + "}", nil
|
||||
case reflect.Ptr:
|
||||
return convertToString(reflect.Indirect(val), options)
|
||||
case reflect.Interface:
|
||||
if !val.IsNil() {
|
||||
return convertToString(val.Elem(), options)
|
||||
}
|
||||
}
|
||||
|
||||
return "", nil
|
||||
}
|
||||
|
||||
func convertUnmarshal(val string, retval reflect.Value) (bool, error) {
|
||||
if retval.Type().NumMethod() > 0 && retval.CanInterface() {
|
||||
if unmarshaler, ok := retval.Interface().(Unmarshaler); ok {
|
||||
return true, unmarshaler.UnmarshalFlag(val)
|
||||
}
|
||||
}
|
||||
|
||||
if retval.Type().Kind() != reflect.Ptr && retval.CanAddr() {
|
||||
return convertUnmarshal(val, retval.Addr())
|
||||
}
|
||||
|
||||
if retval.Type().Kind() == reflect.Interface && !retval.IsNil() {
|
||||
return convertUnmarshal(val, retval.Elem())
|
||||
}
|
||||
|
||||
return false, nil
|
||||
}
|
||||
|
||||
func convert(val string, retval reflect.Value, options multiTag) error {
|
||||
if ok, err := convertUnmarshal(val, retval); ok {
|
||||
return err
|
||||
}
|
||||
|
||||
tp := retval.Type()
|
||||
|
||||
// Support for time.Duration
|
||||
if tp == reflect.TypeOf((*time.Duration)(nil)).Elem() {
|
||||
parsed, err := time.ParseDuration(val)
|
||||
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
retval.SetInt(int64(parsed))
|
||||
return nil
|
||||
}
|
||||
|
||||
switch tp.Kind() {
|
||||
case reflect.String:
|
||||
retval.SetString(val)
|
||||
case reflect.Bool:
|
||||
if val == "" {
|
||||
retval.SetBool(true)
|
||||
} else {
|
||||
b, err := strconv.ParseBool(val)
|
||||
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
retval.SetBool(b)
|
||||
}
|
||||
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
|
||||
base, err := getBase(options, 10)
|
||||
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
parsed, err := strconv.ParseInt(val, base, tp.Bits())
|
||||
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
retval.SetInt(parsed)
|
||||
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
|
||||
base, err := getBase(options, 10)
|
||||
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
parsed, err := strconv.ParseUint(val, base, tp.Bits())
|
||||
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
retval.SetUint(parsed)
|
||||
case reflect.Float32, reflect.Float64:
|
||||
parsed, err := strconv.ParseFloat(val, tp.Bits())
|
||||
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
retval.SetFloat(parsed)
|
||||
case reflect.Slice:
|
||||
elemtp := tp.Elem()
|
||||
|
||||
elemvalptr := reflect.New(elemtp)
|
||||
elemval := reflect.Indirect(elemvalptr)
|
||||
|
||||
if err := convert(val, elemval, options); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
retval.Set(reflect.Append(retval, elemval))
|
||||
case reflect.Map:
|
||||
parts := strings.SplitN(val, ":", 2)
|
||||
|
||||
key := parts[0]
|
||||
var value string
|
||||
|
||||
if len(parts) == 2 {
|
||||
value = parts[1]
|
||||
}
|
||||
|
||||
keytp := tp.Key()
|
||||
keyval := reflect.New(keytp)
|
||||
|
||||
if err := convert(key, keyval, options); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
valuetp := tp.Elem()
|
||||
valueval := reflect.New(valuetp)
|
||||
|
||||
if err := convert(value, valueval, options); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if retval.IsNil() {
|
||||
retval.Set(reflect.MakeMap(tp))
|
||||
}
|
||||
|
||||
retval.SetMapIndex(reflect.Indirect(keyval), reflect.Indirect(valueval))
|
||||
case reflect.Ptr:
|
||||
if retval.IsNil() {
|
||||
retval.Set(reflect.New(retval.Type().Elem()))
|
||||
}
|
||||
|
||||
return convert(val, reflect.Indirect(retval), options)
|
||||
case reflect.Interface:
|
||||
if !retval.IsNil() {
|
||||
return convert(val, retval.Elem(), options)
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func wrapText(s string, l int, prefix string) string {
|
||||
// Basic text wrapping of s at spaces to fit in l
|
||||
var ret string
|
||||
|
||||
s = strings.TrimSpace(s)
|
||||
|
||||
for len(s) > l {
|
||||
// Try to split on space
|
||||
suffix := ""
|
||||
|
||||
pos := strings.LastIndex(s[:l], " ")
|
||||
|
||||
if pos < 0 {
|
||||
pos = l - 1
|
||||
suffix = "-\n"
|
||||
}
|
||||
|
||||
if len(ret) != 0 {
|
||||
ret += "\n" + prefix
|
||||
}
|
||||
|
||||
ret += strings.TrimSpace(s[:pos]) + suffix
|
||||
s = strings.TrimSpace(s[pos:])
|
||||
}
|
||||
|
||||
if len(s) > 0 {
|
||||
if len(ret) != 0 {
|
||||
ret += "\n" + prefix
|
||||
}
|
||||
|
||||
return ret + s
|
||||
}
|
||||
|
||||
return ret
|
||||
}
|
||||
@@ -1,113 +0,0 @@
|
||||
package flags
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
)
|
||||
|
||||
// ErrorType represents the type of error.
|
||||
type ErrorType uint
|
||||
|
||||
const (
|
||||
// ErrUnknown indicates a generic error.
|
||||
ErrUnknown ErrorType = iota
|
||||
|
||||
// ErrExpectedArgument indicates that an argument was expected.
|
||||
ErrExpectedArgument
|
||||
|
||||
// ErrUnknownFlag indicates an unknown flag.
|
||||
ErrUnknownFlag
|
||||
|
||||
// ErrUnknownGroup indicates an unknown group.
|
||||
ErrUnknownGroup
|
||||
|
||||
// ErrMarshal indicates a marshalling error while converting values.
|
||||
ErrMarshal
|
||||
|
||||
// ErrHelp indicates that the builtin help was shown (the error
|
||||
// contains the help message).
|
||||
ErrHelp
|
||||
|
||||
// ErrNoArgumentForBool indicates that an argument was given for a
|
||||
// boolean flag (which do not take any arguments).
|
||||
ErrNoArgumentForBool
|
||||
|
||||
// ErrRequired indicates that a required flag was not provided.
|
||||
ErrRequired
|
||||
|
||||
// ErrShortNameTooLong indicates that a short flag name was specified,
|
||||
// longer than one character.
|
||||
ErrShortNameTooLong
|
||||
|
||||
// ErrDuplicatedFlag indicates that a short or long flag has been
|
||||
// defined more than once
|
||||
ErrDuplicatedFlag
|
||||
|
||||
// ErrTag indicates an error while parsing flag tags.
|
||||
ErrTag
|
||||
)
|
||||
|
||||
// String returns a string representation of the error type.
|
||||
func (e ErrorType) String() string {
|
||||
switch e {
|
||||
case ErrUnknown:
|
||||
return "unknown"
|
||||
case ErrExpectedArgument:
|
||||
return "expected argument"
|
||||
case ErrUnknownFlag:
|
||||
return "unknown flag"
|
||||
case ErrUnknownGroup:
|
||||
return "unknown group"
|
||||
case ErrMarshal:
|
||||
return "marshal"
|
||||
case ErrHelp:
|
||||
return "help"
|
||||
case ErrNoArgumentForBool:
|
||||
return "no argument for bool"
|
||||
case ErrRequired:
|
||||
return "required"
|
||||
case ErrShortNameTooLong:
|
||||
return "short name too long"
|
||||
case ErrDuplicatedFlag:
|
||||
return "duplicated flag"
|
||||
case ErrTag:
|
||||
return "tag"
|
||||
}
|
||||
|
||||
return "unknown"
|
||||
}
|
||||
|
||||
// Error represents a parser error. The error returned from Parse is of this
|
||||
// type. The error contains both a Type and Message.
|
||||
type Error struct {
|
||||
// The type of error
|
||||
Type ErrorType
|
||||
|
||||
// The error message
|
||||
Message string
|
||||
}
|
||||
|
||||
// Error returns the error's message
|
||||
func (e *Error) Error() string {
|
||||
return e.Message
|
||||
}
|
||||
|
||||
func newError(tp ErrorType, message string) *Error {
|
||||
return &Error{
|
||||
Type: tp,
|
||||
Message: message,
|
||||
}
|
||||
}
|
||||
|
||||
func newErrorf(tp ErrorType, format string, args ...interface{}) *Error {
|
||||
return newError(tp, fmt.Sprintf(format, args...))
|
||||
}
|
||||
|
||||
func wrapError(err error) *Error {
|
||||
ret, ok := err.(*Error)
|
||||
|
||||
if !ok {
|
||||
return newError(ErrUnknown, err.Error())
|
||||
}
|
||||
|
||||
return ret
|
||||
}
|
||||
@@ -1,95 +0,0 @@
|
||||
// Example of use of the flags package.
|
||||
package flags
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"os"
|
||||
"os/exec"
|
||||
"strings"
|
||||
)
|
||||
|
||||
func Example() {
|
||||
var opts struct {
|
||||
// Slice of bool will append 'true' each time the option
|
||||
// is encountered (can be set multiple times, like -vvv)
|
||||
Verbose []bool `short:"v" long:"verbose" description:"Show verbose debug information"`
|
||||
|
||||
// Example of automatic marshalling to desired type (uint)
|
||||
Offset uint `long:"offset" description:"Offset"`
|
||||
|
||||
// Example of a callback, called each time the option is found.
|
||||
Call func(string) `short:"c" description:"Call phone number"`
|
||||
|
||||
// Example of a required flag
|
||||
Name string `short:"n" long:"name" description:"A name" required:"true"`
|
||||
|
||||
// Example of a value name
|
||||
File string `short:"f" long:"file" description:"A file" value-name:"FILE"`
|
||||
|
||||
// Example of a pointer
|
||||
Ptr *int `short:"p" description:"A pointer to an integer"`
|
||||
|
||||
// Example of a slice of strings
|
||||
StringSlice []string `short:"s" description:"A slice of strings"`
|
||||
|
||||
// Example of a slice of pointers
|
||||
PtrSlice []*string `long:"ptrslice" description:"A slice of pointers to string"`
|
||||
|
||||
// Example of a map
|
||||
IntMap map[string]int `long:"intmap" description:"A map from string to int"`
|
||||
}
|
||||
|
||||
// Callback which will invoke callto:<argument> to call a number.
|
||||
// Note that this works just on OS X (and probably only with
|
||||
// Skype) but it shows the idea.
|
||||
opts.Call = func(num string) {
|
||||
cmd := exec.Command("open", "callto:"+num)
|
||||
cmd.Start()
|
||||
cmd.Process.Release()
|
||||
}
|
||||
|
||||
// Make some fake arguments to parse.
|
||||
args := []string{
|
||||
"-vv",
|
||||
"--offset=5",
|
||||
"-n", "Me",
|
||||
"-p", "3",
|
||||
"-s", "hello",
|
||||
"-s", "world",
|
||||
"--ptrslice", "hello",
|
||||
"--ptrslice", "world",
|
||||
"--intmap", "a:1",
|
||||
"--intmap", "b:5",
|
||||
"arg1",
|
||||
"arg2",
|
||||
"arg3",
|
||||
}
|
||||
|
||||
// Parse flags from `args'. Note that here we use flags.ParseArgs for
|
||||
// the sake of making a working example. Normally, you would simply use
|
||||
// flags.Parse(&opts) which uses os.Args
|
||||
args, err := ParseArgs(&opts, args)
|
||||
|
||||
if err != nil {
|
||||
panic(err)
|
||||
os.Exit(1)
|
||||
}
|
||||
|
||||
fmt.Printf("Verbosity: %v\n", opts.Verbose)
|
||||
fmt.Printf("Offset: %d\n", opts.Offset)
|
||||
fmt.Printf("Name: %s\n", opts.Name)
|
||||
fmt.Printf("Ptr: %d\n", *opts.Ptr)
|
||||
fmt.Printf("StringSlice: %v\n", opts.StringSlice)
|
||||
fmt.Printf("PtrSlice: [%v %v]\n", *opts.PtrSlice[0], *opts.PtrSlice[1])
|
||||
fmt.Printf("IntMap: [a:%v b:%v]\n", opts.IntMap["a"], opts.IntMap["b"])
|
||||
fmt.Printf("Remaining args: %s\n", strings.Join(args, " "))
|
||||
|
||||
// Output: Verbosity: [true true]
|
||||
// Offset: 5
|
||||
// Name: Me
|
||||
// Ptr: 3
|
||||
// StringSlice: [hello world]
|
||||
// PtrSlice: [hello world]
|
||||
// IntMap: [a:1 b:5]
|
||||
// Remaining args: arg1 arg2 arg3
|
||||
}
|
||||
@@ -1,23 +0,0 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
)
|
||||
|
||||
type AddCommand struct {
|
||||
All bool `short:"a" long:"all" description:"Add all files"`
|
||||
}
|
||||
|
||||
var addCommand AddCommand
|
||||
|
||||
func (x *AddCommand) Execute(args []string) error {
|
||||
fmt.Printf("Adding (all=%v): %#v\n", x.All, args)
|
||||
return nil
|
||||
}
|
||||
|
||||
func init() {
|
||||
parser.AddCommand("add",
|
||||
"Add a file",
|
||||
"The add command adds a file to the repository. Use -a to add all files.",
|
||||
&addCommand)
|
||||
}
|
||||
@@ -1,75 +0,0 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"github.com/calmh/syncthing/github.com/jessevdk/go-flags"
|
||||
"os"
|
||||
"strconv"
|
||||
"strings"
|
||||
)
|
||||
|
||||
type EditorOptions struct {
|
||||
Input string `short:"i" long:"input" description:"Input file" default:"-"`
|
||||
Output string `short:"o" long:"output" description:"Output file" default:"-"`
|
||||
}
|
||||
|
||||
type Point struct {
|
||||
X, Y int
|
||||
}
|
||||
|
||||
func (p *Point) UnmarshalFlag(value string) error {
|
||||
parts := strings.Split(value, ",")
|
||||
|
||||
if len(parts) != 2 {
|
||||
return errors.New("Expected two numbers separated by a ,")
|
||||
}
|
||||
|
||||
x, err := strconv.ParseInt(parts[0], 10, 32)
|
||||
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
y, err := strconv.ParseInt(parts[1], 10, 32)
|
||||
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
p.X = int(x)
|
||||
p.Y = int(y)
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (p Point) MarshalFlag() (string, error) {
|
||||
return fmt.Sprintf("%d,%d", p.X, p.Y), nil
|
||||
}
|
||||
|
||||
type Options struct {
|
||||
// Example of verbosity with level
|
||||
Verbose []bool `short:"v" long:"verbose" description:"Verbose output"`
|
||||
|
||||
// Example of optional value
|
||||
User string `short:"u" long:"user" description:"User name" optional:"yes" optional-value:"pancake"`
|
||||
|
||||
// Example of map with multiple default values
|
||||
Users map[string]string `long:"users" description:"User e-mail map" default:"system:system@example.org" default:"admin:admin@example.org"`
|
||||
|
||||
// Example of option group
|
||||
Editor EditorOptions `group:"Editor Options"`
|
||||
|
||||
// Example of custom type Marshal/Unmarshal
|
||||
Point Point `long:"point" description:"A x,y point" default:"1,2"`
|
||||
}
|
||||
|
||||
var options Options
|
||||
|
||||
var parser = flags.NewParser(&options, flags.Default)
|
||||
|
||||
func main() {
|
||||
if _, err := parser.Parse(); err != nil {
|
||||
os.Exit(1)
|
||||
}
|
||||
}
|
||||
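The example above wires a custom `Point` type into the parser through its `UnmarshalFlag`/`MarshalFlag` pair. As a minimal, self-contained sketch of the same mechanism (the `Size` type, the `--max` flag and the `4k` suffix handling are illustrative, not part of the library or this diff), any type implementing those two methods can be used as an option field:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"

	flags "github.com/jessevdk/go-flags"
)

// Size is a hypothetical custom type implementing the same Marshaler /
// Unmarshaler pair as Point above, so "--max=4k" parses to 4096.
type Size int64

func (s *Size) UnmarshalFlag(value string) error {
	mult := int64(1)
	if strings.HasSuffix(value, "k") {
		mult = 1024
		value = strings.TrimSuffix(value, "k")
	}
	n, err := strconv.ParseInt(value, 10, 64)
	if err != nil {
		return err
	}
	*s = Size(n * mult)
	return nil
}

func (s Size) MarshalFlag() (string, error) {
	return strconv.FormatInt(int64(s), 10), nil
}

func main() {
	var opts struct {
		Max Size `long:"max" default:"1k"`
	}
	if _, err := flags.ParseArgs(&opts, []string{"--max=4k"}); err != nil {
		panic(err)
	}
	fmt.Println(opts.Max) // 4096
}
```

The `default:"1k"` tag should take the same UnmarshalFlag path as a command line value, which is what keeps the string form human-readable.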
@@ -1,23 +0,0 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
)
|
||||
|
||||
type RmCommand struct {
|
||||
Force bool `short:"f" long:"force" description:"Force removal of files"`
|
||||
}
|
||||
|
||||
var rmCommand RmCommand
|
||||
|
||||
func (x *RmCommand) Execute(args []string) error {
|
||||
fmt.Printf("Removing (force=%v): %#v\n", x.Force, args)
|
||||
return nil
|
||||
}
|
||||
|
||||
func init() {
|
||||
parser.AddCommand("rm",
|
||||
"Remove a file",
|
||||
"The rm command removes a file to the repository. Use -f to force removal of files.",
|
||||
&rmCommand)
|
||||
}
|
||||
@@ -1,141 +0,0 @@
|
||||
// Copyright 2012 Jesse van den Kieboom. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// Package flags provides an extensive command line option parser.
|
||||
// The flags package is similar in functionality to the go builtin flag package
|
||||
// but provides more options and uses reflection to provide a convenient and
|
||||
// succinct way of specifying command line options.
|
||||
//
|
||||
// Supported features:
|
||||
// Options with short names (-v)
|
||||
// Options with long names (--verbose)
|
||||
// Options with and without arguments (bool vs. other type)
|
||||
// Options with optional arguments and default values
|
||||
// Multiple option groups each containing a set of options
|
||||
// Generate and print well-formatted help message
|
||||
// Passing remaining command line arguments after -- (optional)
|
||||
// Ignoring unknown command line options (optional)
|
||||
// Supports -I/usr/include -I=/usr/include -I /usr/include option argument specification
|
||||
// Supports multiple short options -aux
|
||||
// Supports all primitive go types (string, int{8..64}, uint{8..64}, float)
|
||||
// Supports same option multiple times (can store in slice or last option counts)
|
||||
// Supports maps
|
||||
// Supports function callbacks
|
||||
//
|
||||
// Additional features specific to Windows:
|
||||
// Options with short names (/v)
|
||||
// Options with long names (/verbose)
|
||||
// Windows-style options with arguments use a colon as the delimiter
|
||||
// Modify generated help message with Windows-style / options
|
||||
//
|
||||
// The flags package uses structs, reflection and struct field tags
|
||||
// to allow users to specify command line options. This results in very simple
|
||||
// and concise specification of your application options. For example:
|
||||
//
|
||||
// type Options struct {
|
||||
// Verbose []bool `short:"v" long:"verbose" description:"Show verbose debug information"`
|
||||
// }
|
||||
//
|
||||
// This specifies one option with a short name -v and a long name --verbose.
|
||||
// When either -v or --verbose is found on the command line, a 'true' value
|
||||
// will be appended to the Verbose field. e.g. when specifying -vvv, the
|
||||
// resulting value of Verbose will be {[true, true, true]}.
|
||||
//
|
||||
// Slice options work exactly the same as primitive type options, except that
|
||||
// whenever the option is encountered, a value is appended to the slice.
|
||||
//
|
||||
// Map options from string to primitive type are also supported. On the command
|
||||
// line, you specify the value for such an option as key:value. For example
|
||||
//
|
||||
// type Options struct {
|
||||
// AuthorInfo map[string]string `short:"a"`
|
||||
// }
|
||||
//
|
||||
// Then, the AuthorInfo map can be filled with something like
|
||||
// -a name:Jesse -a "surname:van den Kieboom".
|
||||
//
|
||||
// Finally, for full control over the conversion between command line argument
|
||||
// values and options, user defined types can choose to implement the Marshaler
|
||||
// and Unmarshaler interfaces.
|
||||
//
|
||||
// Available field tags:
|
||||
// short: the short name of the option (single character)
|
||||
// long: the long name of the option
|
||||
// description: the description of the option (optional)
|
||||
// optional: whether an argument of the option is optional (optional)
|
||||
// optional-value: the value of an optional option when the option occurs
|
||||
// without an argument. This tag can be specified multiple
|
||||
// times in the case of maps or slices (optional)
|
||||
// default: the default value of an option. This tag can be specified
|
||||
// multiple times in the case of slices or maps (optional).
|
||||
// default-mask: when specified, this value will be displayed in the help
|
||||
// instead of the actual default value. This is useful
|
||||
// mostly for hiding otherwise sensitive information from
|
||||
// showing up in the help. If default-mask takes the special
|
||||
// value "-", then no default value will be shown at all
|
||||
// (optional)
|
||||
// required: whether an option is required to appear on the command
|
||||
// line. If a required option is not present, the parser
|
||||
// will return ErrRequired.
|
||||
// base: a base (radix) used to convert strings to integer values,
|
||||
// the default base is 10 (i.e. decimal) (optional)
|
||||
// value-name: the name of the argument value (to be shown in the help)
|
||||
// (optional)
|
||||
// group: when specified on a struct field, makes the struct field
|
||||
// a separate group with the given name (optional).
|
||||
// command: when specified on a struct field, makes the struct field
|
||||
// a (sub)command with the given name (optional).
|
||||
//
|
||||
// Either short: or long: must be specified to make the field eligible as an
|
||||
// option.
|
||||
//
|
||||
//
|
||||
// Option groups:
|
||||
//
|
||||
// Option groups are a simple way to semantically separate your options. The
|
||||
// only real difference is in how your options will appear in the builtin
|
||||
// generated help. All options in a particular group are shown together in the
|
||||
// help under the name of the group.
|
||||
//
|
||||
// There are currently three ways to specify option groups.
|
||||
//
|
||||
// 1. Use NewNamedParser specifying the various option groups.
|
||||
// 2. Use AddGroup to add a group to an existing parser.
|
||||
// 3. Add a struct field to the toplevel options annotated with the
|
||||
// group:"group-name" tag.
|
||||
//
|
||||
//
|
||||
//
|
||||
// Commands:
|
||||
//
|
||||
// The flags package also has basic support for commands. Commands are often
|
||||
// used in monolithic applications that support various commands or actions.
|
||||
// Take git, for example: add, commit, checkout, etc. are all called
|
||||
// commands. Using commands you can easily separate multiple functions of your
|
||||
// application.
|
||||
//
|
||||
// There are currently two ways to specify a command.
|
||||
//
|
||||
// 1. Use AddCommand on an existing parser.
|
||||
// 2. Add a struct field to your options struct annotated with the
|
||||
// command:"command-name" tag.
|
||||
//
|
||||
// The most common, idiomatic way to implement commands is to define a global
|
||||
// parser instance and implement each command in a separate file. These
|
||||
// command files should define a go init function which calls AddCommand on
|
||||
// the global parser.
|
||||
//
|
||||
// When parsing ends and there is an active command and that command implements
|
||||
// the Commander interface, then its Execute method will be run with the
|
||||
// remaining command line arguments.
|
||||
//
|
||||
// Command structs can have options which become valid to parse after the
|
||||
// command has been specified on the command line. It is currently not valid
|
||||
// to specify options from the parent level of the command after the command
|
||||
// name has occurred. Thus, given a toplevel option "-v" and a command "add":
|
||||
//
|
||||
// Valid: ./app -v add
|
||||
// Invalid: ./app add -v
|
||||
//
|
||||
package flags
|
||||
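The package comment above is the core of the API: exported struct fields with `short`/`long` tags become options. A minimal sketch of that usage, assuming the upstream import path `github.com/jessevdk/go-flags` (this tree vendors the same code under `github.com/calmh/syncthing/github.com/jessevdk/go-flags`); the struct and field names are placeholders:

```go
package main

import (
	"fmt"
	"os"

	flags "github.com/jessevdk/go-flags"
)

// Options follows the style described in the package comment: struct fields
// annotated with short/long/description tags become command line flags.
type Options struct {
	Verbose []bool            `short:"v" long:"verbose" description:"Show verbose debug information"`
	Name    string            `short:"n" long:"name" description:"A name" default:"anonymous"`
	Authors map[string]string `short:"a" description:"Author info as key:value"`
}

func main() {
	var opts Options

	// flags.Parse uses os.Args and the Default option set
	// (help flag, error printing).
	rest, err := flags.Parse(&opts)
	if err != nil {
		os.Exit(1)
	}

	fmt.Println("verbosity:", len(opts.Verbose))
	fmt.Println("name:", opts.Name)
	fmt.Println("remaining args:", rest)
}
```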
@@ -1,80 +0,0 @@
|
||||
// Copyright 2012 Jesse van den Kieboom. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package flags
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// ErrNotPointerToStruct indicates that a provided data container is not
|
||||
// a pointer to a struct. Only pointers to structs are valid data containers
|
||||
// for options.
|
||||
var ErrNotPointerToStruct = errors.New("provided data is not a pointer to struct")
|
||||
|
||||
// Group represents an option group. Option groups can be used to logically
|
||||
// group options together under a description. Groups are only used to provide
|
||||
// more structure to options both for the user (as displayed in the help message)
|
||||
// and for you, since groups can be nested.
|
||||
type Group struct {
|
||||
// A short description of the group. The
|
||||
// short description is primarily used in the builtin generated help
|
||||
// message
|
||||
ShortDescription string
|
||||
|
||||
// A long description of the group. The long
|
||||
// description is primarily used to present information on commands
|
||||
// (Command embeds Group) in the builtin generated help and man pages.
|
||||
LongDescription string
|
||||
|
||||
// All the options in the group
|
||||
options []*Option
|
||||
|
||||
// All the subgroups
|
||||
groups []*Group
|
||||
|
||||
data interface{}
|
||||
}
|
||||
|
||||
// AddGroup adds a new group to the command with the given name and data. The
|
||||
// data needs to be a pointer to a struct from which the fields indicate which
|
||||
// options are in the group.
|
||||
func (g *Group) AddGroup(shortDescription string, longDescription string, data interface{}) (*Group, error) {
|
||||
group := newGroup(shortDescription, longDescription, data)
|
||||
|
||||
if err := group.scan(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
g.groups = append(g.groups, group)
|
||||
return group, nil
|
||||
}
|
||||
|
||||
// Groups returns the list of groups embedded in this group.
|
||||
func (g *Group) Groups() []*Group {
|
||||
return g.groups
|
||||
}
|
||||
|
||||
// Options returns the list of options in this group.
|
||||
func (g *Group) Options() []*Option {
|
||||
return g.options
|
||||
}
|
||||
|
||||
// Find locates the subgroup with the given short description and returns it.
|
||||
// If no such group can be found, Find will return nil. Note that the description
|
||||
// is matched case insensitively.
|
||||
func (g *Group) Find(shortDescription string) *Group {
|
||||
lshortDescription := strings.ToLower(shortDescription)
|
||||
|
||||
var ret *Group
|
||||
|
||||
g.eachGroup(func(gg *Group) {
|
||||
if gg != g && strings.ToLower(gg.ShortDescription) == lshortDescription {
|
||||
ret = gg
|
||||
}
|
||||
})
|
||||
|
||||
return ret
|
||||
}
|
||||
@@ -1,263 +0,0 @@
|
||||
package flags
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
"unicode/utf8"
|
||||
"unsafe"
|
||||
)
|
||||
|
||||
type scanHandler func(reflect.Value, *reflect.StructField) (bool, error)
|
||||
|
||||
func newGroup(shortDescription string, longDescription string, data interface{}) *Group {
|
||||
return &Group{
|
||||
ShortDescription: shortDescription,
|
||||
LongDescription: longDescription,
|
||||
|
||||
data: data,
|
||||
}
|
||||
}
|
||||
|
||||
func (g *Group) optionByName(name string, namematch func(*Option, string) bool) *Option {
|
||||
prio := 0
|
||||
var retopt *Option
|
||||
|
||||
for _, opt := range g.options {
|
||||
if namematch != nil && namematch(opt, name) && prio < 4 {
|
||||
retopt = opt
|
||||
prio = 4
|
||||
}
|
||||
|
||||
if name == opt.field.Name && prio < 3 {
|
||||
retopt = opt
|
||||
prio = 3
|
||||
}
|
||||
|
||||
if name == opt.LongName && prio < 2 {
|
||||
retopt = opt
|
||||
prio = 2
|
||||
}
|
||||
|
||||
if opt.ShortName != 0 && name == string(opt.ShortName) && prio < 1 {
|
||||
retopt = opt
|
||||
prio = 1
|
||||
}
|
||||
}
|
||||
|
||||
return retopt
|
||||
}
|
||||
|
||||
func (g *Group) storeDefaults() {
|
||||
for _, option := range g.options {
|
||||
// First, empty out the value
|
||||
if len(option.Default) > 0 {
|
||||
option.clear()
|
||||
}
|
||||
|
||||
for _, d := range option.Default {
|
||||
option.set(&d)
|
||||
}
|
||||
|
||||
if !option.value.CanSet() {
|
||||
continue
|
||||
}
|
||||
|
||||
option.defaultValue = reflect.ValueOf(option.value.Interface())
|
||||
}
|
||||
}
|
||||
|
||||
func (g *Group) eachGroup(f func(*Group)) {
|
||||
f(g)
|
||||
|
||||
for _, gg := range g.groups {
|
||||
gg.eachGroup(f)
|
||||
}
|
||||
}
|
||||
|
||||
func (g *Group) scanStruct(realval reflect.Value, sfield *reflect.StructField, handler scanHandler) error {
|
||||
stype := realval.Type()
|
||||
|
||||
if sfield != nil {
|
||||
if ok, err := handler(realval, sfield); err != nil {
|
||||
return err
|
||||
} else if ok {
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
for i := 0; i < stype.NumField(); i++ {
|
||||
field := stype.Field(i)
|
||||
|
||||
// PkgPath is set only for non-exported fields, which we ignore
|
||||
if field.PkgPath != "" {
|
||||
continue
|
||||
}
|
||||
|
||||
mtag := newMultiTag(string(field.Tag))
|
||||
|
||||
if err := mtag.Parse(); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Skip fields with the no-flag tag
|
||||
if mtag.Get("no-flag") != "" {
|
||||
continue
|
||||
}
|
||||
|
||||
// Dive deep into structs or pointers to structs
|
||||
kind := field.Type.Kind()
|
||||
fld := realval.Field(i)
|
||||
|
||||
if kind == reflect.Struct {
|
||||
if err := g.scanStruct(fld, &field, handler); err != nil {
|
||||
return err
|
||||
}
|
||||
} else if kind == reflect.Ptr && field.Type.Elem().Kind() == reflect.Struct {
|
||||
if fld.IsNil() {
|
||||
fld.Set(reflect.New(fld.Type().Elem()))
|
||||
}
|
||||
|
||||
if err := g.scanStruct(reflect.Indirect(fld), &field, handler); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
longname := mtag.Get("long")
|
||||
shortname := mtag.Get("short")
|
||||
|
||||
// Need at least either a short or long name
|
||||
if longname == "" && shortname == "" && mtag.Get("ini-name") == "" {
|
||||
continue
|
||||
}
|
||||
|
||||
short := rune(0)
|
||||
rc := utf8.RuneCountInString(shortname)
|
||||
|
||||
if rc > 1 {
|
||||
return newErrorf(ErrShortNameTooLong,
|
||||
"short names can only be 1 character long, not `%s'",
|
||||
shortname)
|
||||
|
||||
} else if rc == 1 {
|
||||
short, _ = utf8.DecodeRuneInString(shortname)
|
||||
}
|
||||
|
||||
description := mtag.Get("description")
|
||||
def := mtag.GetMany("default")
|
||||
optionalValue := mtag.GetMany("optional-value")
|
||||
valueName := mtag.Get("value-name")
|
||||
defaultMask := mtag.Get("default-mask")
|
||||
|
||||
optional := (mtag.Get("optional") != "")
|
||||
required := (mtag.Get("required") != "")
|
||||
|
||||
option := &Option{
|
||||
Description: description,
|
||||
ShortName: short,
|
||||
LongName: longname,
|
||||
Default: def,
|
||||
OptionalArgument: optional,
|
||||
OptionalValue: optionalValue,
|
||||
Required: required,
|
||||
ValueName: valueName,
|
||||
DefaultMask: defaultMask,
|
||||
|
||||
field: field,
|
||||
value: realval.Field(i),
|
||||
tag: mtag,
|
||||
}
|
||||
|
||||
g.options = append(g.options, option)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (g *Group) checkForDuplicateFlags() *Error {
|
||||
shortNames := make(map[rune]*Option)
|
||||
longNames := make(map[string]*Option)
|
||||
|
||||
var duplicateError *Error
|
||||
|
||||
g.eachGroup(func(g *Group) {
|
||||
for _, option := range g.options {
|
||||
if option.LongName != "" {
|
||||
if otherOption, ok := longNames[option.LongName]; ok {
|
||||
duplicateError = newErrorf(ErrDuplicatedFlag, "option `%s' uses the same long name as option `%s'", option, otherOption)
|
||||
return
|
||||
}
|
||||
longNames[option.LongName] = option
|
||||
}
|
||||
if option.ShortName != 0 {
|
||||
if otherOption, ok := shortNames[option.ShortName]; ok {
|
||||
duplicateError = newErrorf(ErrDuplicatedFlag, "option `%s' uses the same short name as option `%s'", option, otherOption)
|
||||
return
|
||||
}
|
||||
shortNames[option.ShortName] = option
|
||||
}
|
||||
}
|
||||
})
|
||||
|
||||
return duplicateError
|
||||
}
|
||||
|
||||
func (g *Group) scanSubGroupHandler(realval reflect.Value, sfield *reflect.StructField) (bool, error) {
|
||||
mtag := newMultiTag(string(sfield.Tag))
|
||||
|
||||
if err := mtag.Parse(); err != nil {
|
||||
return true, err
|
||||
}
|
||||
|
||||
subgroup := mtag.Get("group")
|
||||
|
||||
if len(subgroup) != 0 {
|
||||
ptrval := reflect.NewAt(realval.Type(), unsafe.Pointer(realval.UnsafeAddr()))
|
||||
description := mtag.Get("description")
|
||||
|
||||
if _, err := g.AddGroup(subgroup, description, ptrval.Interface()); err != nil {
|
||||
return true, err
|
||||
}
|
||||
|
||||
return true, nil
|
||||
}
|
||||
|
||||
return false, nil
|
||||
}
|
||||
|
||||
func (g *Group) scanType(handler scanHandler) error {
|
||||
// Get all the public fields in the data struct
|
||||
ptrval := reflect.ValueOf(g.data)
|
||||
|
||||
if ptrval.Type().Kind() != reflect.Ptr {
|
||||
panic(ErrNotPointerToStruct)
|
||||
}
|
||||
|
||||
stype := ptrval.Type().Elem()
|
||||
|
||||
if stype.Kind() != reflect.Struct {
|
||||
panic(ErrNotPointerToStruct)
|
||||
}
|
||||
|
||||
realval := reflect.Indirect(ptrval)
|
||||
|
||||
if err := g.scanStruct(realval, nil, handler); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if err := g.checkForDuplicateFlags(); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (g *Group) scan() error {
|
||||
return g.scanType(g.scanSubGroupHandler)
|
||||
}
|
||||
|
||||
func (g *Group) groupByName(name string) *Group {
|
||||
if len(name) == 0 {
|
||||
return g
|
||||
}
|
||||
|
||||
return g.Find(name)
|
||||
}
|
||||
@@ -1,160 +0,0 @@
|
||||
package flags
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestGroupInline(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value bool `short:"v"`
|
||||
|
||||
Group struct {
|
||||
G bool `short:"g"`
|
||||
} `group:"Grouped Options"`
|
||||
}{}
|
||||
|
||||
p, ret := assertParserSuccess(t, &opts, "-v", "-g")
|
||||
|
||||
assertStringArray(t, ret, []string{})
|
||||
|
||||
if !opts.Value {
|
||||
t.Errorf("Expected Value to be true")
|
||||
}
|
||||
|
||||
if !opts.Group.G {
|
||||
t.Errorf("Expected Group.G to be true")
|
||||
}
|
||||
|
||||
if p.Command.Group.Find("Grouped Options") == nil {
|
||||
t.Errorf("Expected to find group `Grouped Options'")
|
||||
}
|
||||
}
|
||||
|
||||
func TestGroupAdd(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value bool `short:"v"`
|
||||
}{}
|
||||
|
||||
var grp = struct {
|
||||
G bool `short:"g"`
|
||||
}{}
|
||||
|
||||
p := NewParser(&opts, Default)
|
||||
g, err := p.AddGroup("Grouped Options", "", &grp)
|
||||
|
||||
if err != nil {
|
||||
t.Fatalf("Unexpected error: %v", err)
|
||||
return
|
||||
}
|
||||
|
||||
ret, err := p.ParseArgs([]string{"-v", "-g", "rest"})
|
||||
|
||||
if err != nil {
|
||||
t.Fatalf("Unexpected error: %v", err)
|
||||
return
|
||||
}
|
||||
|
||||
assertStringArray(t, ret, []string{"rest"})
|
||||
|
||||
if !opts.Value {
|
||||
t.Errorf("Expected Value to be true")
|
||||
}
|
||||
|
||||
if !grp.G {
|
||||
t.Errorf("Expected Group.G to be true")
|
||||
}
|
||||
|
||||
if p.Command.Group.Find("Grouped Options") != g {
|
||||
t.Errorf("Expected to find group `Grouped Options'")
|
||||
}
|
||||
|
||||
if p.Groups()[1] != g {
|
||||
t.Errorf("Espected group #v, but got #v", g, p.Groups()[0])
|
||||
}
|
||||
|
||||
if g.Options()[0].ShortName != 'g' {
|
||||
t.Errorf("Expected short name `g' but got %v", g.Options()[0].ShortName)
|
||||
}
|
||||
}
|
||||
|
||||
func TestGroupNestedInline(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value bool `short:"v"`
|
||||
|
||||
Group struct {
|
||||
G bool `short:"g"`
|
||||
|
||||
Nested struct {
|
||||
N string `long:"n"`
|
||||
} `group:"Nested Options"`
|
||||
} `group:"Grouped Options"`
|
||||
}{}
|
||||
|
||||
p, ret := assertParserSuccess(t, &opts, "-v", "-g", "--n", "n", "rest")
|
||||
|
||||
assertStringArray(t, ret, []string{"rest"})
|
||||
|
||||
if !opts.Value {
|
||||
t.Errorf("Expected Value to be true")
|
||||
}
|
||||
|
||||
if !opts.Group.G {
|
||||
t.Errorf("Expected Group.G to be true")
|
||||
}
|
||||
|
||||
assertString(t, opts.Group.Nested.N, "n")
|
||||
|
||||
if p.Command.Group.Find("Grouped Options") == nil {
|
||||
t.Errorf("Expected to find group `Grouped Options'")
|
||||
}
|
||||
|
||||
if p.Command.Group.Find("Nested Options") == nil {
|
||||
t.Errorf("Expected to find group `Nested Options'")
|
||||
}
|
||||
}
|
||||
|
||||
func TestDuplicateShortFlags(t *testing.T) {
|
||||
var opts struct {
|
||||
Verbose []bool `short:"v" long:"verbose" description:"Show verbose debug information"`
|
||||
Variables []string `short:"v" long:"variable" description:"Set a variable value."`
|
||||
}
|
||||
|
||||
args := []string{
|
||||
"--verbose",
|
||||
"-v", "123",
|
||||
"-v", "456",
|
||||
}
|
||||
|
||||
_, err := ParseArgs(&opts, args)
|
||||
|
||||
if err == nil {
|
||||
t.Errorf("Expected an error with type ErrDuplicatedFlag")
|
||||
} else {
|
||||
err2 := err.(*Error)
|
||||
if err2.Type != ErrDuplicatedFlag {
|
||||
t.Errorf("Expected an error with type ErrDuplicatedFlag")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestDuplicateLongFlags(t *testing.T) {
|
||||
var opts struct {
|
||||
Test1 []bool `short:"a" long:"testing" description:"Test 1"`
|
||||
Test2 []string `short:"b" long:"testing" description:"Test 2."`
|
||||
}
|
||||
|
||||
args := []string{
|
||||
"--testing",
|
||||
}
|
||||
|
||||
_, err := ParseArgs(&opts, args)
|
||||
|
||||
if err == nil {
|
||||
t.Errorf("Expected an error with type ErrDuplicatedFlag")
|
||||
} else {
|
||||
err2 := err.(*Error)
|
||||
if err2.Type != ErrDuplicatedFlag {
|
||||
t.Errorf("Expected an error with type ErrDuplicatedFlag")
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -1,275 +0,0 @@
|
||||
// Copyright 2012 Jesse van den Kieboom. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package flags
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"bytes"
|
||||
"fmt"
|
||||
"io"
|
||||
"reflect"
|
||||
"strings"
|
||||
"unicode/utf8"
|
||||
)
|
||||
|
||||
type alignmentInfo struct {
|
||||
maxLongLen int
|
||||
hasShort bool
|
||||
hasValueName bool
|
||||
terminalColumns int
|
||||
}
|
||||
|
||||
func (p *Parser) getAlignmentInfo() alignmentInfo {
|
||||
ret := alignmentInfo{
|
||||
maxLongLen: 0,
|
||||
hasShort: false,
|
||||
hasValueName: false,
|
||||
terminalColumns: getTerminalColumns(),
|
||||
}
|
||||
|
||||
if ret.terminalColumns <= 0 {
|
||||
ret.terminalColumns = 80
|
||||
}
|
||||
|
||||
p.eachActiveGroup(func(grp *Group) {
|
||||
for _, info := range grp.options {
|
||||
if info.ShortName != 0 {
|
||||
ret.hasShort = true
|
||||
}
|
||||
|
||||
lv := utf8.RuneCountInString(info.ValueName)
|
||||
|
||||
if lv != 0 {
|
||||
ret.hasValueName = true
|
||||
}
|
||||
|
||||
l := utf8.RuneCountInString(info.LongName) + lv
|
||||
|
||||
if l > ret.maxLongLen {
|
||||
ret.maxLongLen = l
|
||||
}
|
||||
}
|
||||
})
|
||||
|
||||
return ret
|
||||
}
|
||||
|
||||
func (p *Parser) writeHelpOption(writer *bufio.Writer, option *Option, info alignmentInfo) {
|
||||
line := &bytes.Buffer{}
|
||||
|
||||
distanceBetweenOptionAndDescription := 2
|
||||
paddingBeforeOption := 2
|
||||
|
||||
line.WriteString(strings.Repeat(" ", paddingBeforeOption))
|
||||
|
||||
if option.ShortName != 0 {
|
||||
line.WriteRune(defaultShortOptDelimiter)
|
||||
line.WriteRune(option.ShortName)
|
||||
} else if info.hasShort {
|
||||
line.WriteString(" ")
|
||||
}
|
||||
|
||||
descstart := info.maxLongLen + paddingBeforeOption + distanceBetweenOptionAndDescription
|
||||
|
||||
if info.hasShort {
|
||||
descstart += 2
|
||||
}
|
||||
|
||||
if info.maxLongLen > 0 {
|
||||
descstart += 4
|
||||
}
|
||||
|
||||
if info.hasValueName {
|
||||
descstart += 3
|
||||
}
|
||||
|
||||
if len(option.LongName) > 0 {
|
||||
if option.ShortName != 0 {
|
||||
line.WriteString(", ")
|
||||
} else if info.hasShort {
|
||||
line.WriteString(" ")
|
||||
}
|
||||
|
||||
line.WriteString(defaultLongOptDelimiter)
|
||||
line.WriteString(option.LongName)
|
||||
}
|
||||
|
||||
if option.canArgument() {
|
||||
line.WriteRune(defaultNameArgDelimiter)
|
||||
|
||||
if len(option.ValueName) > 0 {
|
||||
line.WriteString(option.ValueName)
|
||||
}
|
||||
}
|
||||
|
||||
written := line.Len()
|
||||
line.WriteTo(writer)
|
||||
|
||||
if option.Description != "" {
|
||||
dw := descstart - written
|
||||
writer.WriteString(strings.Repeat(" ", dw))
|
||||
|
||||
def := ""
|
||||
defs := option.Default
|
||||
|
||||
if len(option.DefaultMask) != 0 {
|
||||
if option.DefaultMask != "-" {
|
||||
def = option.DefaultMask
|
||||
}
|
||||
} else if len(defs) == 0 && option.canArgument() {
|
||||
var showdef bool
|
||||
|
||||
switch option.field.Type.Kind() {
|
||||
case reflect.Func, reflect.Ptr:
|
||||
showdef = !option.value.IsNil()
|
||||
case reflect.Slice, reflect.String, reflect.Array:
|
||||
showdef = option.value.Len() > 0
|
||||
case reflect.Map:
|
||||
showdef = !option.value.IsNil() && option.value.Len() > 0
|
||||
default:
|
||||
zeroval := reflect.Zero(option.field.Type)
|
||||
showdef = !reflect.DeepEqual(zeroval.Interface(), option.value.Interface())
|
||||
}
|
||||
|
||||
if showdef {
|
||||
def, _ = convertToString(option.value, option.tag)
|
||||
}
|
||||
} else if len(defs) != 0 {
|
||||
def = strings.Join(defs, ", ")
|
||||
}
|
||||
|
||||
var desc string
|
||||
|
||||
if def != "" {
|
||||
desc = fmt.Sprintf("%s (%v)", option.Description, def)
|
||||
} else {
|
||||
desc = option.Description
|
||||
}
|
||||
|
||||
writer.WriteString(wrapText(desc,
|
||||
info.terminalColumns-descstart,
|
||||
strings.Repeat(" ", descstart)))
|
||||
}
|
||||
|
||||
writer.WriteString("\n")
|
||||
}
|
||||
|
||||
func maxCommandLength(s []*Command) int {
|
||||
if len(s) == 0 {
|
||||
return 0
|
||||
}
|
||||
|
||||
ret := len(s[0].Name)
|
||||
|
||||
for _, v := range s[1:] {
|
||||
l := len(v.Name)
|
||||
|
||||
if l > ret {
|
||||
ret = l
|
||||
}
|
||||
}
|
||||
|
||||
return ret
|
||||
}
|
||||
|
||||
// WriteHelp writes a help message containing all the possible options and
|
||||
// their descriptions to the provided writer. Note that the HelpFlag parser
|
||||
// option provides a convenient way to add a -h/--help option group to the
|
||||
// command line parser which will automatically show the help messages using
|
||||
// this method.
|
||||
func (p *Parser) WriteHelp(writer io.Writer) {
|
||||
if writer == nil {
|
||||
return
|
||||
}
|
||||
|
||||
wr := bufio.NewWriter(writer)
|
||||
aligninfo := p.getAlignmentInfo()
|
||||
|
||||
cmd := p.Command
|
||||
|
||||
for cmd.Active != nil {
|
||||
cmd = cmd.Active
|
||||
}
|
||||
|
||||
if p.Name != "" {
|
||||
wr.WriteString("Usage:\n")
|
||||
wr.WriteString(" ")
|
||||
|
||||
allcmd := p.Command
|
||||
|
||||
for allcmd != nil {
|
||||
var usage string
|
||||
|
||||
if allcmd == p.Command {
|
||||
if len(p.Usage) != 0 {
|
||||
usage = p.Usage
|
||||
} else {
|
||||
usage = "[OPTIONS]"
|
||||
}
|
||||
} else if us, ok := allcmd.data.(Usage); ok {
|
||||
usage = us.Usage()
|
||||
} else {
|
||||
usage = fmt.Sprintf("[%s-OPTIONS]", allcmd.Name)
|
||||
}
|
||||
|
||||
if len(usage) != 0 {
|
||||
fmt.Fprintf(wr, " %s %s", allcmd.Name, usage)
|
||||
} else {
|
||||
fmt.Fprintf(wr, " %s", allcmd.Name)
|
||||
}
|
||||
|
||||
allcmd = allcmd.Active
|
||||
}
|
||||
|
||||
fmt.Fprintln(wr)
|
||||
|
||||
if len(cmd.LongDescription) != 0 {
|
||||
fmt.Fprintln(wr)
|
||||
|
||||
t := wrapText(cmd.LongDescription,
|
||||
aligninfo.terminalColumns,
|
||||
"")
|
||||
|
||||
fmt.Fprintln(wr, t)
|
||||
}
|
||||
}
|
||||
|
||||
p.eachActiveGroup(func(grp *Group) {
|
||||
first := true
|
||||
|
||||
for _, info := range grp.options {
|
||||
if info.canCli() {
|
||||
if first {
|
||||
fmt.Fprintf(wr, "\n%s:\n", grp.ShortDescription)
|
||||
first = false
|
||||
}
|
||||
|
||||
p.writeHelpOption(wr, info, aligninfo)
|
||||
}
|
||||
}
|
||||
})
|
||||
|
||||
scommands := cmd.sortedCommands()
|
||||
|
||||
if len(scommands) > 0 {
|
||||
maxnamelen := maxCommandLength(scommands)
|
||||
|
||||
fmt.Fprintln(wr)
|
||||
fmt.Fprintln(wr, "Available commands:")
|
||||
|
||||
for _, c := range scommands {
|
||||
fmt.Fprintf(wr, " %s", c.Name)
|
||||
|
||||
if len(c.ShortDescription) > 0 {
|
||||
pad := strings.Repeat(" ", maxnamelen-len(c.Name))
|
||||
fmt.Fprintf(wr, "%s %s", pad, c.ShortDescription)
|
||||
}
|
||||
|
||||
fmt.Fprintln(wr)
|
||||
}
|
||||
}
|
||||
|
||||
wr.Flush()
|
||||
}
|
||||
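WriteHelp, as documented above, renders the option groups of the active command, and the `HelpFlag` parser option wires the same output to `-h`/`--help`. A small usage sketch (the option struct and the error handling are illustrative):

```go
package main

import (
	"os"

	flags "github.com/jessevdk/go-flags"
)

func main() {
	var opts struct {
		Verbose []bool `short:"v" long:"verbose" description:"Show verbose debug information"`
	}

	// HelpFlag adds the -h/--help group mentioned in the WriteHelp comment;
	// on --help, Parse returns an Error with Type == flags.ErrHelp.
	parser := flags.NewParser(&opts, flags.HelpFlag)

	if _, err := parser.Parse(); err != nil {
		// WriteHelp can also be called directly, for example on any parse error.
		parser.WriteHelp(os.Stderr)
		os.Exit(1)
	}
}
```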
@@ -1,153 +0,0 @@
|
||||
package flags
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"os/exec"
|
||||
"testing"
|
||||
"time"
|
||||
)
|
||||
|
||||
func helpDiff(a, b string) (string, error) {
|
||||
atmp, err := ioutil.TempFile("", "help-diff")
|
||||
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
btmp, err := ioutil.TempFile("", "help-diff")
|
||||
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
if _, err := io.WriteString(atmp, a); err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
if _, err := io.WriteString(btmp, b); err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
ret, err := exec.Command("diff", "-u", "-d", "--label", "got", atmp.Name(), "--label", "expected", btmp.Name()).Output()
|
||||
|
||||
os.Remove(atmp.Name())
|
||||
os.Remove(btmp.Name())
|
||||
|
||||
return string(ret), nil
|
||||
}
|
||||
|
||||
type helpOptions struct {
|
||||
Verbose []bool `short:"v" long:"verbose" description:"Show verbose debug information" ini-name:"verbose"`
|
||||
Call func(string) `short:"c" description:"Call phone number" ini-name:"call"`
|
||||
PtrSlice []*string `long:"ptrslice" description:"A slice of pointers to string"`
|
||||
|
||||
OnlyIni string `ini-name:"only-ini" description:"Option only available in ini"`
|
||||
|
||||
Other struct {
|
||||
StringSlice []string `short:"s" description:"A slice of strings"`
|
||||
IntMap map[string]int `long:"intmap" description:"A map from string to int" ini-name:"int-map"`
|
||||
} `group:"Other Options"`
|
||||
}
|
||||
|
||||
func TestHelp(t *testing.T) {
|
||||
var opts helpOptions
|
||||
|
||||
p := NewNamedParser("TestHelp", HelpFlag)
|
||||
p.AddGroup("Application Options", "The application options", &opts)
|
||||
|
||||
_, err := p.ParseArgs([]string{"--help"})
|
||||
|
||||
if err == nil {
|
||||
t.Fatalf("Expected help error")
|
||||
}
|
||||
|
||||
if e, ok := err.(*Error); !ok {
|
||||
t.Fatalf("Expected flags.Error, but got %#T", err)
|
||||
} else {
|
||||
if e.Type != ErrHelp {
|
||||
t.Errorf("Expected flags.ErrHelp type, but got %s", e.Type)
|
||||
}
|
||||
|
||||
expected := `Usage:
|
||||
TestHelp [OPTIONS]
|
||||
|
||||
Application Options:
|
||||
-v, --verbose Show verbose debug information
|
||||
-c= Call phone number
|
||||
--ptrslice= A slice of pointers to string
|
||||
|
||||
Other Options:
|
||||
-s= A slice of strings
|
||||
--intmap= A map from string to int
|
||||
|
||||
Help Options:
|
||||
-h, --help Show this help message
|
||||
`
|
||||
|
||||
if e.Message != expected {
|
||||
ret, err := helpDiff(e.Message, expected)
|
||||
|
||||
if err != nil {
|
||||
t.Errorf("Unexpected diff error: %s", err)
|
||||
t.Errorf("Unexpected help message, expected:\n\n%s\n\nbut got\n\n%s", expected, e.Message)
|
||||
} else {
|
||||
t.Errorf("Unexpected help message:\n\n%s", ret)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestMan(t *testing.T) {
|
||||
var opts helpOptions
|
||||
|
||||
p := NewNamedParser("TestMan", HelpFlag)
|
||||
p.ShortDescription = "Test manpage generation"
|
||||
p.LongDescription = "This is a somewhat longer description of what this does"
|
||||
p.AddGroup("Application Options", "The application options", &opts)
|
||||
|
||||
var buf bytes.Buffer
|
||||
p.WriteManPage(&buf)
|
||||
|
||||
got := buf.String()
|
||||
|
||||
tt := time.Now()
|
||||
|
||||
expected := fmt.Sprintf(`.TH TestMan 1 "%s"
|
||||
.SH NAME
|
||||
TestMan \- Test manpage generation
|
||||
.SH SYNOPSIS
|
||||
\fBTestMan\fP [OPTIONS]
|
||||
.SH DESCRIPTION
|
||||
This is a somewhat longer description of what this does
|
||||
.SH OPTIONS
|
||||
.TP
|
||||
\fB-v, --verbose\fP
|
||||
Show verbose debug information
|
||||
.TP
|
||||
\fB-c\fP
|
||||
Call phone number
|
||||
.TP
|
||||
\fB--ptrslice\fP
|
||||
A slice of pointers to string
|
||||
.TP
|
||||
\fB-s\fP
|
||||
A slice of strings
|
||||
.TP
|
||||
\fB--intmap\fP
|
||||
A map from string to int
|
||||
`, tt.Format("2 January 2006"))
|
||||
|
||||
if got != expected {
|
||||
ret, err := helpDiff(got, expected)
|
||||
|
||||
if err != nil {
|
||||
t.Errorf("Unexpected man page, expected:\n\n%s\n\nbut got\n\n%s", expected, got)
|
||||
} else {
|
||||
t.Errorf("Unexpected man page:\n\n%s", ret)
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -1,146 +0,0 @@
|
||||
package flags
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
)
|
||||
|
||||
// IniError contains location information on where in the ini file an error
|
||||
// occurred.
|
||||
type IniError struct {
|
||||
// The error message.
|
||||
Message string
|
||||
|
||||
// The filename of the file in which the error occurred.
|
||||
File string
|
||||
|
||||
// The line number at which the error occurred.
|
||||
LineNumber uint
|
||||
}
|
||||
|
||||
// Error provides a "file:line: message" formatted message of the ini error.
|
||||
func (x *IniError) Error() string {
|
||||
return fmt.Sprintf("%s:%d: %s",
|
||||
x.File,
|
||||
x.LineNumber,
|
||||
x.Message)
|
||||
}
|
||||
|
||||
// IniOptions for writing ini files
|
||||
type IniOptions uint
|
||||
|
||||
const (
|
||||
// IniNone indicates no options.
|
||||
IniNone IniOptions = 0
|
||||
|
||||
// IniIncludeDefaults indicates that default values should be written
|
||||
// when writing options to an ini file.
|
||||
IniIncludeDefaults = 1 << iota
|
||||
|
||||
// IniIncludeComments indicates that comments containing the description
|
||||
// of an option should be written when writing options to an ini file.
|
||||
IniIncludeComments
|
||||
|
||||
// IniDefault provides a default set of options.
|
||||
IniDefault = IniIncludeComments
|
||||
)
|
||||
|
||||
// IniParser is a utility to read and write flags options from and to ini
|
||||
// files.
|
||||
type IniParser struct {
|
||||
parser *Parser
|
||||
}
|
||||
|
||||
// NewIniParser creates a new ini parser for a given Parser.
|
||||
func NewIniParser(p *Parser) *IniParser {
|
||||
return &IniParser{
|
||||
parser: p,
|
||||
}
|
||||
}
|
||||
|
||||
// IniParse is a convenience function to parse command line options with default
|
||||
// settings from an ini file. The provided data is a pointer to a struct
|
||||
// representing the default option group (named "Application Options"). For
|
||||
// more control, use flags.NewParser.
|
||||
func IniParse(filename string, data interface{}) error {
|
||||
p := NewParser(data, Default)
|
||||
return NewIniParser(p).ParseFile(filename)
|
||||
}
|
||||
|
||||
// ParseFile parses flags from an ini formatted file. See Parse for more
|
||||
// information on the ini file format. The returned errors can be of the type
|
||||
// flags.Error or flags.IniError.
|
||||
func (i *IniParser) ParseFile(filename string) error {
|
||||
i.parser.storeDefaults()
|
||||
|
||||
ini, err := readIniFromFile(filename)
|
||||
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return i.parse(ini)
|
||||
}
|
||||
|
||||
// Parse parses flags from an ini format. You can use ParseFile as a
|
||||
// convenience function to parse from a filename instead of a general
|
||||
// io.Reader.
|
||||
//
|
||||
// The format of the ini file is as follows:
|
||||
//
|
||||
// [Option group name]
|
||||
// option = value
|
||||
//
|
||||
// Each section in the ini file represents an option group or command in the
|
||||
// flags parser. The default flags parser option group (i.e. when using
|
||||
// flags.Parse) is named 'Application Options'. The ini option name is matched
|
||||
// in the following order:
|
||||
//
|
||||
// 1. Compared to the ini-name tag on the option struct field (if present)
|
||||
// 2. Compared to the struct field name
|
||||
// 3. Compared to the option long name (if present)
|
||||
// 4. Compared to the option short name (if present)
|
||||
//
|
||||
// Sections for nested groups and commands can be addressed using a dot `.'
|
||||
// namespacing notation (i.e. [subcommand.Options]). Group section names are
|
||||
// matched case insensitively.
|
||||
//
|
||||
// The returned errors can be of the type flags.Error or
|
||||
// flags.IniError.
|
||||
func (i *IniParser) Parse(reader io.Reader) error {
|
||||
i.parser.storeDefaults()
|
||||
|
||||
ini, err := readIni(reader, "")
|
||||
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return i.parse(ini)
|
||||
}
|
||||
|
||||
// WriteFile writes the flags as ini format into a file. See Write
|
||||
// for more information. The returned error occurs when the specified file
|
||||
// could not be opened for writing.
|
||||
func (i *IniParser) WriteFile(filename string, options IniOptions) error {
|
||||
file, err := os.Create(filename)
|
||||
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
defer file.Close()
|
||||
i.Write(file, options)
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// Write writes the current values of all the flags to an ini format.
|
||||
// See Parse for more information on the ini file format. You typically
|
||||
// call this only after settings have been parsed since the default values of each
|
||||
// option are stored just before parsing the flags (this is only relevant when
|
||||
// IniIncludeDefaults is _not_ set in options).
|
||||
func (i *IniParser) Write(writer io.Writer, options IniOptions) {
|
||||
writeIni(i, writer, options)
|
||||
}
|
||||
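The Parse documentation above defines the ini format and the name-matching order (ini-name tag, field name, long name, short name). A sketch of reading such a file through the `IniParse` convenience function; `settings.ini`, its contents and the option struct are hypothetical:

```go
package main

import (
	"fmt"

	flags "github.com/jessevdk/go-flags"
)

type Options struct {
	Verbose []bool `long:"verbose" ini-name:"verbose" description:"Show verbose debug information"`
	Name    string `long:"name" description:"A name"`
}

func main() {
	var opts Options

	// settings.ini is a hypothetical file using the format described above:
	//
	//   [Application Options]
	//   ; Show verbose debug information
	//   verbose = true
	//   name = example
	//
	if err := flags.IniParse("settings.ini", &opts); err != nil {
		fmt.Println("ini error:", err)
		return
	}

	fmt.Println(len(opts.Verbose), opts.Name)
}
```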
@@ -1,333 +0,0 @@
|
||||
package flags
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
"reflect"
|
||||
"strings"
|
||||
)
|
||||
|
||||
type iniValue struct {
|
||||
Name string
|
||||
Value string
|
||||
}
|
||||
|
||||
type iniSection []iniValue
|
||||
type ini map[string]iniSection
|
||||
|
||||
func readFullLine(reader *bufio.Reader) (string, error) {
|
||||
var line []byte
|
||||
|
||||
for {
|
||||
l, more, err := reader.ReadLine()
|
||||
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
if line == nil && !more {
|
||||
return string(l), nil
|
||||
}
|
||||
|
||||
line = append(line, l...)
|
||||
|
||||
if !more {
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
return string(line), nil
|
||||
}
|
||||
|
||||
func optionIniName(option *Option) string {
|
||||
name := option.tag.Get("_read-ini-name")
|
||||
|
||||
if len(name) != 0 {
|
||||
return name
|
||||
}
|
||||
|
||||
name = option.tag.Get("ini-name")
|
||||
|
||||
if len(name) != 0 {
|
||||
return name
|
||||
}
|
||||
|
||||
return option.field.Name
|
||||
}
|
||||
|
||||
func writeGroupIni(group *Group, namespace string, writer io.Writer, options IniOptions) {
|
||||
var sname string
|
||||
|
||||
if len(namespace) != 0 {
|
||||
sname = namespace + "." + group.ShortDescription
|
||||
} else {
|
||||
sname = group.ShortDescription
|
||||
}
|
||||
|
||||
sectionwritten := false
|
||||
comments := (options & IniIncludeComments) != IniNone
|
||||
|
||||
for _, option := range group.options {
|
||||
if option.isFunc() {
|
||||
continue
|
||||
}
|
||||
|
||||
if len(option.tag.Get("no-ini")) != 0 {
|
||||
continue
|
||||
}
|
||||
|
||||
val := option.value
|
||||
|
||||
if (options&IniIncludeDefaults) == IniNone &&
|
||||
reflect.DeepEqual(val, option.defaultValue) {
|
||||
continue
|
||||
}
|
||||
|
||||
if !sectionwritten {
|
||||
fmt.Fprintf(writer, "[%s]\n", sname)
|
||||
sectionwritten = true
|
||||
}
|
||||
|
||||
if comments {
|
||||
fmt.Fprintf(writer, "; %s\n", option.Description)
|
||||
}
|
||||
|
||||
oname := optionIniName(option)
|
||||
|
||||
switch val.Type().Kind() {
|
||||
case reflect.Slice:
|
||||
for idx := 0; idx < val.Len(); idx++ {
|
||||
v, _ := convertToString(val.Index(idx), option.tag)
|
||||
fmt.Fprintf(writer, "%s = %s\n", oname, v)
|
||||
}
|
||||
|
||||
if val.Len() == 0 {
|
||||
fmt.Fprintf(writer, "; %s =\n", oname)
|
||||
}
|
||||
case reflect.Map:
|
||||
for _, key := range val.MapKeys() {
|
||||
k, _ := convertToString(key, option.tag)
|
||||
v, _ := convertToString(val.MapIndex(key), option.tag)
|
||||
|
||||
fmt.Fprintf(writer, "%s = %s:%s\n", oname, k, v)
|
||||
}
|
||||
|
||||
if val.Len() == 0 {
|
||||
fmt.Fprintf(writer, "; %s =\n", oname)
|
||||
}
|
||||
default:
|
||||
v, _ := convertToString(val, option.tag)
|
||||
|
||||
if len(v) != 0 {
|
||||
fmt.Fprintf(writer, "%s = %s\n", oname, v)
|
||||
} else {
|
||||
fmt.Fprintf(writer, "%s =\n", oname)
|
||||
}
|
||||
}
|
||||
|
||||
if comments {
|
||||
fmt.Fprintln(writer)
|
||||
}
|
||||
}
|
||||
|
||||
if sectionwritten && !comments {
|
||||
fmt.Fprintln(writer)
|
||||
}
|
||||
}
|
||||
|
||||
func writeCommandIni(command *Command, namespace string, writer io.Writer, options IniOptions) {
|
||||
command.eachGroup(func(group *Group) {
|
||||
writeGroupIni(group, namespace, writer, options)
|
||||
})
|
||||
|
||||
for _, c := range command.commands {
|
||||
var nns string
|
||||
|
||||
if len(namespace) != 0 {
|
||||
nns = namespace + "." + c.Name
|
||||
} else {
|
||||
nns = c.Name
|
||||
}
|
||||
|
||||
writeCommandIni(c, nns, writer, options)
|
||||
}
|
||||
}
|
||||
|
||||
func writeIni(parser *IniParser, writer io.Writer, options IniOptions) {
|
||||
writeCommandIni(parser.parser.Command, "", writer, options)
|
||||
}
|
||||
|
||||
func readIniFromFile(filename string) (ini, error) {
|
||||
file, err := os.Open(filename)
|
||||
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
defer file.Close()
|
||||
|
||||
return readIni(file, filename)
|
||||
}
|
||||
|
||||
func readIni(contents io.Reader, filename string) (ini, error) {
|
||||
ret := make(ini)
|
||||
|
||||
reader := bufio.NewReader(contents)
|
||||
|
||||
// Empty global section
|
||||
section := make(iniSection, 0, 10)
|
||||
sectionname := ""
|
||||
|
||||
ret[sectionname] = section
|
||||
|
||||
var lineno uint
|
||||
|
||||
for {
|
||||
line, err := readFullLine(reader)
|
||||
|
||||
if err == io.EOF {
|
||||
break
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
lineno++
|
||||
line = strings.TrimSpace(line)
|
||||
|
||||
// Skip empty lines and lines starting with ; (comments)
|
||||
if len(line) == 0 || line[0] == ';' {
|
||||
continue
|
||||
}
|
||||
|
||||
if line[0] == '[' {
|
||||
if line[0] != '[' || line[len(line)-1] != ']' {
|
||||
return nil, &IniError{
|
||||
Message: "malformed section header",
|
||||
File: filename,
|
||||
LineNumber: lineno,
|
||||
}
|
||||
}
|
||||
|
||||
name := strings.TrimSpace(line[1 : len(line)-1])
|
||||
|
||||
if len(name) == 0 {
|
||||
return nil, &IniError{
|
||||
Message: "empty section name",
|
||||
File: filename,
|
||||
LineNumber: lineno,
|
||||
}
|
||||
}
|
||||
|
||||
sectionname = name
|
||||
section = ret[name]
|
||||
|
||||
if section == nil {
|
||||
section = make(iniSection, 0, 10)
|
||||
ret[name] = section
|
||||
}
|
||||
|
||||
continue
|
||||
}
|
||||
|
||||
// Parse option here
|
||||
keyval := strings.SplitN(line, "=", 2)
|
||||
|
||||
if len(keyval) != 2 {
|
||||
return nil, &IniError{
|
||||
Message: fmt.Sprintf("malformed key=value (%s)", line),
|
||||
File: filename,
|
||||
LineNumber: lineno,
|
||||
}
|
||||
}
|
||||
|
||||
name := strings.TrimSpace(keyval[0])
|
||||
value := strings.TrimSpace(keyval[1])
|
||||
|
||||
section = append(section, iniValue{
|
||||
Name: name,
|
||||
Value: value,
|
||||
})
|
||||
|
||||
ret[sectionname] = section
|
||||
}
|
||||
|
||||
return ret, nil
|
||||
}
|
||||
|
||||
func (i *IniParser) matchingGroups(name string) []*Group {
|
||||
if len(name) == 0 {
|
||||
var ret []*Group
|
||||
|
||||
i.parser.eachGroup(func(g *Group) {
|
||||
ret = append(ret, g)
|
||||
})
|
||||
|
||||
return ret
|
||||
}
|
||||
|
||||
g := i.parser.groupByName(name)
|
||||
|
||||
if g != nil {
|
||||
return []*Group{g}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (i *IniParser) parse(ini ini) error {
|
||||
p := i.parser
|
||||
|
||||
for name, section := range ini {
|
||||
groups := i.matchingGroups(name)
|
||||
|
||||
if len(groups) == 0 {
|
||||
return newError(ErrUnknownGroup,
|
||||
fmt.Sprintf("could not find option group `%s'", name))
|
||||
}
|
||||
|
||||
for _, inival := range section {
|
||||
var opt *Option
|
||||
|
||||
for _, group := range groups {
|
||||
opt = group.optionByName(inival.Name, func(o *Option, n string) bool {
|
||||
return strings.ToLower(o.tag.Get("ini-name")) == strings.ToLower(n)
|
||||
})
|
||||
|
||||
if opt != nil && len(opt.tag.Get("no-ini")) != 0 {
|
||||
opt = nil
|
||||
}
|
||||
|
||||
if opt != nil {
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
if opt == nil {
|
||||
if (p.Options & IgnoreUnknown) == None {
|
||||
return newError(ErrUnknownFlag,
|
||||
fmt.Sprintf("unknown option: %s", inival.Name))
|
||||
}
|
||||
|
||||
continue
|
||||
}
|
||||
|
||||
pval := &inival.Value
|
||||
|
||||
if !opt.canArgument() && len(inival.Value) == 0 {
|
||||
pval = nil
|
||||
}
|
||||
|
||||
if err := opt.set(pval); err != nil {
|
||||
return wrapError(err)
|
||||
}
|
||||
|
||||
opt.tag.Set("_read-ini-name", inival.Name)
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
@@ -1,170 +0,0 @@
|
||||
package flags
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"strings"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestWriteIni(t *testing.T) {
|
||||
var opts helpOptions
|
||||
|
||||
p := NewNamedParser("TestIni", Default)
|
||||
p.AddGroup("Application Options", "The application options", &opts)
|
||||
|
||||
p.ParseArgs([]string{"-vv", "--intmap=a:2", "--intmap", "b:3"})
|
||||
|
||||
inip := NewIniParser(p)
|
||||
|
||||
var b bytes.Buffer
|
||||
inip.Write(&b, IniDefault|IniIncludeDefaults)
|
||||
|
||||
got := b.String()
|
||||
expected := `[Application Options]
|
||||
; Show verbose debug information
|
||||
verbose = true
|
||||
verbose = true
|
||||
|
||||
; A slice of pointers to string
|
||||
; PtrSlice =
|
||||
|
||||
; Option only available in ini
|
||||
only-ini =
|
||||
|
||||
[Other Options]
|
||||
; A slice of strings
|
||||
; StringSlice =
|
||||
|
||||
; A map from string to int
|
||||
int-map = a:2
|
||||
int-map = b:3
|
||||
|
||||
`
|
||||
|
||||
if got != expected {
|
||||
ret, err := helpDiff(got, expected)
|
||||
|
||||
if err != nil {
|
||||
t.Errorf("Unexpected ini, expected:\n\n%s\n\nbut got\n\n%s", expected, got)
|
||||
} else {
|
||||
t.Errorf("Unexpected ini:\n\n%s", ret)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestReadIni(t *testing.T) {
|
||||
var opts helpOptions
|
||||
|
||||
p := NewNamedParser("TestIni", Default)
|
||||
p.AddGroup("Application Options", "The application options", &opts)
|
||||
|
||||
inip := NewIniParser(p)
|
||||
|
||||
inic := `
|
||||
; Show verbose debug information
|
||||
verbose = true
|
||||
verbose = true
|
||||
|
||||
[Application Options]
|
||||
; A slice of pointers to string
|
||||
; PtrSlice =
|
||||
|
||||
[Other Options]
|
||||
; A slice of strings
|
||||
; StringSlice =
|
||||
|
||||
; A map from string to int
|
||||
int-map = a:2
|
||||
int-map = b:3
|
||||
|
||||
`
|
||||
|
||||
b := strings.NewReader(inic)
|
||||
err := inip.Parse(b)
|
||||
|
||||
if err != nil {
|
||||
t.Fatalf("Unexpected error: %s", err)
|
||||
}
|
||||
|
||||
assertBoolArray(t, opts.Verbose, []bool{true, true})
|
||||
|
||||
if v, ok := opts.Other.IntMap["a"]; !ok {
|
||||
t.Errorf("Expected \"a\" in Other.IntMap")
|
||||
} else if v != 2 {
|
||||
t.Errorf("Expected Other.IntMap[\"a\"] = 2, but got %v", v)
|
||||
}
|
||||
|
||||
if v, ok := opts.Other.IntMap["b"]; !ok {
|
||||
t.Errorf("Expected \"b\" in Other.IntMap")
|
||||
} else if v != 3 {
|
||||
t.Errorf("Expected Other.IntMap[\"b\"] = 3, but got %v", v)
|
||||
}
|
||||
}
|
||||
|
||||
func TestIniCommands(t *testing.T) {
|
||||
var opts struct {
|
||||
Value string `short:"v" long:"value"`
|
||||
|
||||
Add struct {
|
||||
Name int `short:"n" long:"name" ini-name:"AliasName"`
|
||||
|
||||
Other struct {
|
||||
O string `short:"o" long:"other"`
|
||||
} `group:"Other Options"`
|
||||
} `command:"add"`
|
||||
}
|
||||
|
||||
p := NewNamedParser("TestIni", Default)
|
||||
p.AddGroup("Application Options", "The application options", &opts)
|
||||
|
||||
inip := NewIniParser(p)
|
||||
|
||||
inic := `[Application Options]
|
||||
value = some value
|
||||
|
||||
[add]
|
||||
AliasName = 5
|
||||
|
||||
[add.Other Options]
|
||||
other = subgroup
|
||||
`
|
||||
|
||||
b := strings.NewReader(inic)
|
||||
err := inip.Parse(b)
|
||||
|
||||
if err != nil {
|
||||
t.Fatalf("Unexpected error: %s", err)
|
||||
}
|
||||
|
||||
assertString(t, opts.Value, "some value")
|
||||
|
||||
if opts.Add.Name != 5 {
|
||||
t.Errorf("Expected opts.Add.Name to be 5, but got %v", opts.Add.Name)
|
||||
}
|
||||
|
||||
assertString(t, opts.Add.Other.O, "subgroup")
|
||||
}
|
||||
|
||||
func TestIniNoIni(t *testing.T) {
|
||||
var opts struct {
|
||||
Value string `short:"v" long:"value" no-ini:"yes"`
|
||||
}
|
||||
|
||||
p := NewNamedParser("TestIni", Default)
|
||||
p.AddGroup("Application Options", "The application options", &opts)
|
||||
|
||||
inip := NewIniParser(p)
|
||||
|
||||
inic := `[Application Options]
|
||||
value = some value
|
||||
`
|
||||
|
||||
b := strings.NewReader(inic)
|
||||
err := inip.Parse(b)
|
||||
|
||||
if err == nil {
|
||||
t.Fatalf("Expected error")
|
||||
}
|
||||
|
||||
assertError(t, err, ErrUnknownFlag, "unknown option: value")
|
||||
}
|
||||
@@ -1,85 +0,0 @@
|
||||
package flags
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestLong(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value bool `long:"value"`
|
||||
}{}
|
||||
|
||||
ret := assertParseSuccess(t, &opts, "--value")
|
||||
|
||||
assertStringArray(t, ret, []string{})
|
||||
|
||||
if !opts.Value {
|
||||
t.Errorf("Expected Value to be true")
|
||||
}
|
||||
}
|
||||
|
||||
func TestLongArg(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value string `long:"value"`
|
||||
}{}
|
||||
|
||||
ret := assertParseSuccess(t, &opts, "--value", "value")
|
||||
|
||||
assertStringArray(t, ret, []string{})
|
||||
assertString(t, opts.Value, "value")
|
||||
}
|
||||
|
||||
func TestLongArgEqual(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value string `long:"value"`
|
||||
}{}
|
||||
|
||||
ret := assertParseSuccess(t, &opts, "--value=value")
|
||||
|
||||
assertStringArray(t, ret, []string{})
|
||||
assertString(t, opts.Value, "value")
|
||||
}
|
||||
|
||||
func TestLongDefault(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value string `long:"value" default:"value"`
|
||||
}{}
|
||||
|
||||
ret := assertParseSuccess(t, &opts)
|
||||
|
||||
assertStringArray(t, ret, []string{})
|
||||
assertString(t, opts.Value, "value")
|
||||
}
|
||||
|
||||
func TestLongOptional(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value string `long:"value" optional:"yes" optional-value:"value"`
|
||||
}{}
|
||||
|
||||
ret := assertParseSuccess(t, &opts, "--value")
|
||||
|
||||
assertStringArray(t, ret, []string{})
|
||||
assertString(t, opts.Value, "value")
|
||||
}
|
||||
|
||||
func TestLongOptionalArg(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value string `long:"value" optional:"yes" optional-value:"value"`
|
||||
}{}
|
||||
|
||||
ret := assertParseSuccess(t, &opts, "--value", "no")
|
||||
|
||||
assertStringArray(t, ret, []string{"no"})
|
||||
assertString(t, opts.Value, "value")
|
||||
}
|
||||
|
||||
func TestLongOptionalArgEqual(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value string `long:"value" optional:"yes" optional-value:"value"`
|
||||
}{}
|
||||
|
||||
ret := assertParseSuccess(t, &opts, "--value=value", "no")
|
||||
|
||||
assertStringArray(t, ret, []string{"no"})
|
||||
assertString(t, opts.Value, "value")
|
||||
}
|
||||
@@ -1,134 +0,0 @@
|
||||
package flags
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io"
|
||||
"strings"
|
||||
"time"
|
||||
)
|
||||
|
||||
func formatForMan(wr io.Writer, s string) {
|
||||
for {
|
||||
idx := strings.IndexRune(s, '`')
|
||||
|
||||
if idx < 0 {
|
||||
fmt.Fprintf(wr, "%s", s)
|
||||
break
|
||||
}
|
||||
|
||||
fmt.Fprintf(wr, "%s", s[:idx])
|
||||
|
||||
s = s[idx+1:]
|
||||
idx = strings.IndexRune(s, '\'')
|
||||
|
||||
if idx < 0 {
|
||||
fmt.Fprintf(wr, "%s", s)
|
||||
break
|
||||
}
|
||||
|
||||
fmt.Fprintf(wr, "\\fB%s\\fP", s[:idx])
|
||||
s = s[idx+1:]
|
||||
}
|
||||
}
|
||||
|
||||
func writeManPageOptions(wr io.Writer, grp *Group) {
|
||||
grp.eachGroup(func(group *Group) {
|
||||
for _, opt := range group.options {
|
||||
if !opt.canCli() {
|
||||
continue
|
||||
}
|
||||
|
||||
fmt.Fprintln(wr, ".TP")
|
||||
fmt.Fprintf(wr, "\\fB")
|
||||
|
||||
if opt.ShortName != 0 {
|
||||
fmt.Fprintf(wr, "-%c", opt.ShortName)
|
||||
}
|
||||
|
||||
if len(opt.LongName) != 0 {
|
||||
if opt.ShortName != 0 {
|
||||
fmt.Fprintf(wr, ", ")
|
||||
}
|
||||
|
||||
fmt.Fprintf(wr, "--%s", opt.LongName)
|
||||
}
|
||||
|
||||
fmt.Fprintln(wr, "\\fP")
|
||||
formatForMan(wr, opt.Description)
|
||||
fmt.Fprintln(wr, "")
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
func writeManPageSubCommands(wr io.Writer, name string, root *Command) {
|
||||
commands := root.sortedCommands()
|
||||
|
||||
for _, c := range commands {
|
||||
var nn string
|
||||
|
||||
if len(name) != 0 {
|
||||
nn = name + " " + c.Name
|
||||
} else {
|
||||
nn = c.Name
|
||||
}
|
||||
|
||||
writeManPageCommand(wr, nn, c)
|
||||
}
|
||||
}
|
||||
|
||||
func writeManPageCommand(wr io.Writer, name string, command *Command) {
|
||||
fmt.Fprintf(wr, ".SS %s\n", name)
|
||||
fmt.Fprintln(wr, command.ShortDescription)
|
||||
|
||||
if len(command.LongDescription) > 0 {
|
||||
fmt.Fprintln(wr, "")
|
||||
|
||||
cmdstart := fmt.Sprintf("The %s command", command.Name)
|
||||
|
||||
if strings.HasPrefix(command.LongDescription, cmdstart) {
|
||||
fmt.Fprintf(wr, "The \\fI%s\\fP command", command.Name)
|
||||
|
||||
formatForMan(wr, command.LongDescription[len(cmdstart):])
|
||||
fmt.Fprintln(wr, "")
|
||||
} else {
|
||||
formatForMan(wr, command.LongDescription)
|
||||
fmt.Fprintln(wr, "")
|
||||
}
|
||||
}
|
||||
|
||||
writeManPageOptions(wr, command.Group)
|
||||
writeManPageSubCommands(wr, name, command)
|
||||
}
|
||||
|
||||
// WriteManPage writes a basic man page in groff format to the specified
|
||||
// writer.
|
||||
func (p *Parser) WriteManPage(wr io.Writer) {
|
||||
t := time.Now()
|
||||
|
||||
fmt.Fprintf(wr, ".TH %s 1 \"%s\"\n", p.Name, t.Format("2 January 2006"))
|
||||
fmt.Fprintln(wr, ".SH NAME")
|
||||
fmt.Fprintf(wr, "%s \\- %s\n", p.Name, p.ShortDescription)
|
||||
fmt.Fprintln(wr, ".SH SYNOPSIS")
|
||||
|
||||
usage := p.Usage
|
||||
|
||||
if len(usage) == 0 {
|
||||
usage = "[OPTIONS]"
|
||||
}
|
||||
|
||||
fmt.Fprintf(wr, "\\fB%s\\fP %s\n", p.Name, usage)
|
||||
fmt.Fprintln(wr, ".SH DESCRIPTION")
|
||||
|
||||
formatForMan(wr, p.LongDescription)
|
||||
fmt.Fprintln(wr, "")
|
||||
|
||||
fmt.Fprintln(wr, ".SH OPTIONS")
|
||||
|
||||
writeManPageOptions(wr, p.Command.Group)
|
||||
|
||||
if len(p.commands) > 0 {
|
||||
fmt.Fprintln(wr, ".SH COMMANDS")
|
||||
|
||||
writeManPageSubCommands(wr, "", p.Command)
|
||||
}
|
||||
}
|
||||
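WriteManPage emits groff using the parser's name and descriptions, as the code above shows. A short usage sketch (the program name, descriptions and group title are placeholders):

```go
package main

import (
	"os"

	flags "github.com/jessevdk/go-flags"
)

func main() {
	var opts struct {
		Verbose []bool `short:"v" long:"verbose" description:"Show verbose debug information"`
	}

	p := flags.NewNamedParser("myapp", flags.Default)
	p.ShortDescription = "Example manpage output"
	p.LongDescription = "A longer description used for the DESCRIPTION section."
	p.AddGroup("Application Options", "The application options", &opts)

	// Writes groff to stdout; typically redirected into e.g. myapp.1.
	p.WriteManPage(os.Stdout)
}
```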
@@ -1,78 +0,0 @@
|
||||
package flags
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"testing"
|
||||
)
|
||||
|
||||
type marshalled bool
|
||||
|
||||
func (m *marshalled) UnmarshalFlag(value string) error {
|
||||
if value == "yes" {
|
||||
*m = true
|
||||
} else if value == "no" {
|
||||
*m = false
|
||||
} else {
|
||||
return fmt.Errorf("`%s' is not a valid value, please specify `yes' or `no'", value)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m marshalled) MarshalFlag() string {
|
||||
if m {
|
||||
return "yes"
|
||||
}
|
||||
|
||||
return "no"
|
||||
}
|
||||
|
||||
func TestMarshal(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value marshalled `short:"v"`
|
||||
}{}
|
||||
|
||||
ret := assertParseSuccess(t, &opts, "-v=yes")
|
||||
|
||||
assertStringArray(t, ret, []string{})
|
||||
|
||||
if !opts.Value {
|
||||
t.Errorf("Expected Value to be true")
|
||||
}
|
||||
}
|
||||
|
||||
func TestMarshalDefault(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value marshalled `short:"v" default:"yes"`
|
||||
}{}
|
||||
|
||||
ret := assertParseSuccess(t, &opts)
|
||||
|
||||
assertStringArray(t, ret, []string{})
|
||||
|
||||
if !opts.Value {
|
||||
t.Errorf("Expected Value to be true")
|
||||
}
|
||||
}
|
||||
|
||||
func TestMarshalOptional(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value marshalled `short:"v" optional:"yes" optional-value:"yes"`
|
||||
}{}
|
||||
|
||||
ret := assertParseSuccess(t, &opts, "-v")
|
||||
|
||||
assertStringArray(t, ret, []string{})
|
||||
|
||||
if !opts.Value {
|
||||
t.Errorf("Expected Value to be true")
|
||||
}
|
||||
}
|
||||
|
||||
func TestMarshalError(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value marshalled `short:"v"`
|
||||
}{}
|
||||
|
||||
assertParseFail(t, ErrMarshal, "invalid argument for flag `-v' (expected flags.marshalled): `invalid' is not a valid value, please specify `yes' or `no'", &opts, "-vinvalid")
|
||||
}
|
||||
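The tests above exercise the UnmarshalFlag/MarshalFlag hooks; as a hedged illustration (package, import path and flag type below are assumptions, not code from this diff), a custom flag type in application code follows the same pattern:

// Hypothetical sketch: a custom flag type that parses "on"/"off" via UnmarshalFlag.
package main

import (
    "fmt"

    flags "github.com/jessevdk/go-flags" // assumed upstream import path
)

type onOff bool

func (o *onOff) UnmarshalFlag(value string) error {
    switch value {
    case "on":
        *o = true
    case "off":
        *o = false
    default:
        return fmt.Errorf("`%s' is not a valid value, use `on' or `off'", value)
    }
    return nil
}

func main() {
    var opts struct {
        Feature onOff `long:"feature"`
    }

    if _, err := flags.ParseArgs(&opts, []string{"--feature=on"}); err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println("feature enabled:", bool(opts.Feature))
}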
@@ -1,140 +0,0 @@
|
||||
package flags
|
||||
|
||||
import (
|
||||
"strconv"
|
||||
)
|
||||
|
||||
type multiTag struct {
|
||||
value string
|
||||
cache map[string][]string
|
||||
}
|
||||
|
||||
func newMultiTag(v string) multiTag {
|
||||
return multiTag{
|
||||
value: v,
|
||||
}
|
||||
}
|
||||
|
||||
func (x *multiTag) scan() (map[string][]string, error) {
|
||||
v := x.value
|
||||
|
||||
ret := make(map[string][]string)
|
||||
|
||||
// This is mostly copied from reflect.StructTag.Get
|
||||
for v != "" {
|
||||
i := 0
|
||||
|
||||
// Skip whitespace
|
||||
for i < len(v) && v[i] == ' ' {
|
||||
i++
|
||||
}
|
||||
|
||||
v = v[i:]
|
||||
|
||||
if v == "" {
|
||||
break
|
||||
}
|
||||
|
||||
// Scan to colon to find key
|
||||
i = 0
|
||||
|
||||
for i < len(v) && v[i] != ' ' && v[i] != ':' && v[i] != '"' {
|
||||
i++
|
||||
}
|
||||
|
||||
if i >= len(v) {
|
||||
return nil, newErrorf(ErrTag, "expected `:' after key name, but got end of tag (in `%v`)", x.value)
|
||||
}
|
||||
|
||||
if v[i] != ':' {
|
||||
return nil, newErrorf(ErrTag, "expected `:' after key name, but got `%v' (in `%v`)", v[i], x.value)
|
||||
}
|
||||
|
||||
if i+1 >= len(v) {
|
||||
return nil, newErrorf(ErrTag, "expected `\"' to start tag value at end of tag (in `%v`)", x.value)
|
||||
}
|
||||
|
||||
if v[i+1] != '"' {
|
||||
return nil, newErrorf(ErrTag, "expected `\"' to start tag value, but got `%v' (in `%v`)", v[i+1], x.value)
|
||||
}
|
||||
|
||||
name := v[:i]
|
||||
v = v[i+1:]
|
||||
|
||||
// Scan quoted string to find value
|
||||
i = 1
|
||||
|
||||
for i < len(v) && v[i] != '"' {
|
||||
if v[i] == '\n' {
|
||||
return nil, newErrorf(ErrTag, "unexpected newline in tag value `%v' (in `%v`)", name, x.value)
|
||||
}
|
||||
|
||||
if v[i] == '\\' {
|
||||
i++
|
||||
}
|
||||
i++
|
||||
}
|
||||
|
||||
if i >= len(v) {
|
||||
return nil, newErrorf(ErrTag, "expected end of tag value `\"' at end of tag (in `%v`)", x.value)
|
||||
}
|
||||
|
||||
val, err := strconv.Unquote(v[:i+1])
|
||||
|
||||
if err != nil {
|
||||
return nil, newErrorf(ErrTag, "Malformed value of tag `%v:%v` => %v (in `%v`)", name, v[:i+1], err, x.value)
|
||||
}
|
||||
|
||||
v = v[i+1:]
|
||||
|
||||
ret[name] = append(ret[name], val)
|
||||
}
|
||||
|
||||
return ret, nil
|
||||
}
|
||||
|
||||
func (x *multiTag) Parse() error {
|
||||
vals, err := x.scan()
|
||||
x.cache = vals
|
||||
|
||||
return err
|
||||
}
|
||||
|
||||
func (x *multiTag) cached() map[string][]string {
|
||||
if x.cache == nil {
|
||||
cache, _ := x.scan()
|
||||
|
||||
if cache == nil {
|
||||
cache = make(map[string][]string)
|
||||
}
|
||||
|
||||
x.cache = cache
|
||||
}
|
||||
|
||||
return x.cache
|
||||
}
|
||||
|
||||
func (x *multiTag) Get(key string) string {
|
||||
c := x.cached()
|
||||
|
||||
if v, ok := c[key]; ok {
|
||||
return v[len(v)-1]
|
||||
}
|
||||
|
||||
return ""
|
||||
}
|
||||
|
||||
func (x *multiTag) GetMany(key string) []string {
|
||||
c := x.cached()
|
||||
return c[key]
|
||||
}
|
||||
|
||||
func (x *multiTag) Set(key string, value string) {
|
||||
c := x.cached()
|
||||
c[key] = []string{value}
|
||||
}
|
||||
|
||||
func (x *multiTag) SetMany(key string, value []string) {
|
||||
c := x.cached()
|
||||
c[key] = value
|
||||
}
|
||||
@@ -1,95 +0,0 @@
|
||||
package flags
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"reflect"
|
||||
"unicode/utf8"
|
||||
)
|
||||
|
||||
// Option flag information. Contains a description of the option, short and
|
||||
// long name as well as a default value and whether an argument for this
|
||||
// flag is optional.
|
||||
type Option struct {
|
||||
// The description of the option flag. This description is shown
|
||||
// automatically in the builtin help.
|
||||
Description string
|
||||
|
||||
// The short name of the option (a single character). If not 0, the
|
||||
// option flag can be 'activated' using -<ShortName>. Either ShortName
|
||||
// or LongName needs to be non-empty.
|
||||
ShortName rune
|
||||
|
||||
// The long name of the option. If not "", the option flag can be
|
||||
// activated using --<LongName>. Either ShortName or LongName needs
|
||||
// to be non-empty.
|
||||
LongName string
|
||||
|
||||
// The default value of the option.
|
||||
Default []string
|
||||
|
||||
// If true, specifies that the argument to an option flag is optional.
|
||||
// When no argument to the flag is specified on the command line, the
|
||||
// value of Default will be set in the field this option represents.
|
||||
// This is only valid for non-boolean options.
|
||||
OptionalArgument bool
|
||||
|
||||
// The optional value of the option. The optional value is used when
|
||||
// the option flag is marked as having an OptionalArgument. This means
|
||||
// that when the flag is specified, but no option argument is given,
|
||||
// the value of the field this option represents will be set to
|
||||
// OptionalValue. This is only valid for non-boolean options.
|
||||
OptionalValue []string
|
||||
|
||||
// If true, the option _must_ be specified on the command line. If the
|
||||
// option is not specified, the parser will generate an ErrRequired type
|
||||
// error.
|
||||
Required bool
|
||||
|
||||
// A name for the value of an option shown in the Help as --flag [ValueName]
|
||||
ValueName string
|
||||
|
||||
// A mask value to show in the help instead of the default value. This
|
||||
// is useful for hiding sensitive information in the help, such as
|
||||
// passwords.
|
||||
DefaultMask string
|
||||
|
||||
// The struct field which the option represents.
|
||||
field reflect.StructField
|
||||
|
||||
// The struct field value which the option represents.
|
||||
value reflect.Value
|
||||
|
||||
defaultValue reflect.Value
|
||||
iniUsedName string
|
||||
tag multiTag
|
||||
}
|
||||
|
||||
// String converts an option to a human friendly readable string describing the
|
||||
// option.
|
||||
func (option *Option) String() string {
|
||||
var s string
|
||||
var short string
|
||||
|
||||
if option.ShortName != 0 {
|
||||
data := make([]byte, utf8.RuneLen(option.ShortName))
|
||||
utf8.EncodeRune(data, option.ShortName)
|
||||
short = string(data)
|
||||
|
||||
if len(option.LongName) != 0 {
|
||||
s = fmt.Sprintf("%s%s, %s%s",
|
||||
string(defaultShortOptDelimiter), short,
|
||||
defaultLongOptDelimiter, option.LongName)
|
||||
} else {
|
||||
s = fmt.Sprintf("%s%s", string(defaultShortOptDelimiter), short)
|
||||
}
|
||||
} else if len(option.LongName) != 0 {
|
||||
s = fmt.Sprintf("%s%s", defaultLongOptDelimiter, option.LongName)
|
||||
}
|
||||
|
||||
return s
|
||||
}
|
||||
|
||||
// Value returns the option value as an interface{}.
|
||||
func (option *Option) Value() interface{} {
|
||||
return option.value.Interface()
|
||||
}
|
||||
@@ -1,125 +0,0 @@
|
||||
package flags
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
)
|
||||
|
||||
// Set the value of an option to the specified value. An error will be returned
|
||||
// if the specified value could not be converted to the corresponding option
|
||||
// value type.
|
||||
func (option *Option) set(value *string) error {
|
||||
if option.isFunc() {
|
||||
return option.call(value)
|
||||
} else if value != nil {
|
||||
return convert(*value, option.value, option.tag)
|
||||
} else {
|
||||
return convert("", option.value, option.tag)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (option *Option) canCli() bool {
|
||||
return option.ShortName != 0 || len(option.LongName) != 0
|
||||
}
|
||||
|
||||
func (option *Option) canArgument() bool {
|
||||
if u := option.isUnmarshaler(); u != nil {
|
||||
return true
|
||||
}
|
||||
|
||||
return !option.isBool()
|
||||
}
|
||||
|
||||
func (option *Option) clear() {
|
||||
tp := option.value.Type()
|
||||
|
||||
switch tp.Kind() {
|
||||
case reflect.Func:
|
||||
// Skip
|
||||
case reflect.Map:
|
||||
// Empty the map
|
||||
option.value.Set(reflect.MakeMap(tp))
|
||||
default:
|
||||
zeroval := reflect.Zero(tp)
|
||||
option.value.Set(zeroval)
|
||||
}
|
||||
}
|
||||
|
||||
func (option *Option) isUnmarshaler() Unmarshaler {
|
||||
v := option.value
|
||||
|
||||
for {
|
||||
if !v.CanInterface() {
|
||||
return nil
|
||||
}
|
||||
|
||||
i := v.Interface()
|
||||
|
||||
if u, ok := i.(Unmarshaler); ok {
|
||||
return u
|
||||
}
|
||||
|
||||
if !v.CanAddr() {
|
||||
return nil
|
||||
}
|
||||
|
||||
v = v.Addr()
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (option *Option) isBool() bool {
|
||||
tp := option.value.Type()
|
||||
|
||||
for {
|
||||
switch tp.Kind() {
|
||||
case reflect.Bool:
|
||||
return true
|
||||
case reflect.Slice:
|
||||
return (tp.Elem().Kind() == reflect.Bool)
|
||||
case reflect.Func:
|
||||
return tp.NumIn() == 0
|
||||
case reflect.Ptr:
|
||||
tp = tp.Elem()
|
||||
default:
|
||||
return false
|
||||
}
|
||||
}
|
||||
|
||||
return false
|
||||
}
|
||||
|
||||
func (option *Option) isFunc() bool {
|
||||
return option.value.Type().Kind() == reflect.Func
|
||||
}
|
||||
|
||||
func (option *Option) call(value *string) error {
|
||||
var retval []reflect.Value
|
||||
|
||||
if value == nil {
|
||||
retval = option.value.Call(nil)
|
||||
} else {
|
||||
tp := option.value.Type().In(0)
|
||||
|
||||
val := reflect.New(tp)
|
||||
val = reflect.Indirect(val)
|
||||
|
||||
if err := convert(*value, val, option.tag); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
retval = option.value.Call([]reflect.Value{val})
|
||||
}
|
||||
|
||||
if len(retval) == 1 && retval[0].Type() == reflect.TypeOf((*error)(nil)).Elem() {
|
||||
if retval[0].Interface() == nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
return retval[0].Interface().(error)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
@@ -1,45 +0,0 @@
|
||||
package flags
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestPassDoubleDash(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value bool `short:"v"`
|
||||
}{}
|
||||
|
||||
p := NewParser(&opts, PassDoubleDash)
|
||||
ret, err := p.ParseArgs([]string{"-v", "--", "-v", "-g"})
|
||||
|
||||
if err != nil {
|
||||
t.Fatalf("Unexpected error: %v", err)
|
||||
return
|
||||
}
|
||||
|
||||
if !opts.Value {
|
||||
t.Errorf("Expected Value to be true")
|
||||
}
|
||||
|
||||
assertStringArray(t, ret, []string{"-v", "-g"})
|
||||
}
|
||||
|
||||
func TestPassAfterNonOption(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value bool `short:"v"`
|
||||
}{}
|
||||
|
||||
p := NewParser(&opts, PassAfterNonOption)
|
||||
ret, err := p.ParseArgs([]string{"-v", "arg", "-v", "-g"})
|
||||
|
||||
if err != nil {
|
||||
t.Fatalf("Unexpected error: %v", err)
|
||||
return
|
||||
}
|
||||
|
||||
if !opts.Value {
|
||||
t.Errorf("Expected Value to be true")
|
||||
}
|
||||
|
||||
assertStringArray(t, ret, []string{"arg", "-v", "-g"})
|
||||
}
|
||||
@@ -1,54 +0,0 @@
|
||||
// +build !windows
|
||||
|
||||
package flags
|
||||
|
||||
import (
|
||||
"strings"
|
||||
)
|
||||
|
||||
const (
|
||||
defaultShortOptDelimiter = '-'
|
||||
defaultLongOptDelimiter = "--"
|
||||
defaultNameArgDelimiter = '='
|
||||
)
|
||||
|
||||
func argumentIsOption(arg string) bool {
|
||||
return len(arg) > 0 && arg[0] == '-'
|
||||
}
|
||||
|
||||
// stripOptionPrefix returns the option without the prefix and whether or
|
||||
// not the option is a long option or not.
|
||||
func stripOptionPrefix(optname string) (prefix string, name string, islong bool) {
|
||||
if strings.HasPrefix(optname, "--") {
|
||||
return "--", optname[2:], true
|
||||
} else if strings.HasPrefix(optname, "-") {
|
||||
return "-", optname[1:], false
|
||||
}
|
||||
|
||||
return "", optname, false
|
||||
}
|
||||
|
||||
// splitOption attempts to split the passed option into a name and an argument.
|
||||
// When there is no argument specified, nil will be returned for it.
|
||||
func splitOption(prefix string, option string, islong bool) (string, *string) {
|
||||
pos := strings.Index(option, "=")
|
||||
|
||||
if (islong && pos >= 0) || (!islong && pos == 1) {
|
||||
rest := option[pos+1:]
|
||||
return option[:pos], &rest
|
||||
}
|
||||
|
||||
return option, nil
|
||||
}
|
||||
|
||||
// addHelpGroup adds a new group that contains default help parameters.
|
||||
func (c *Command) addHelpGroup(showHelp func() error) *Group {
|
||||
var help struct {
|
||||
ShowHelp func() error `short:"h" long:"help" description:"Show this help message"`
|
||||
}
|
||||
|
||||
help.ShowHelp = showHelp
|
||||
ret, _ := c.AddGroup("Help Options", "", &help)
|
||||
|
||||
return ret
|
||||
}
|
||||
@@ -1,85 +0,0 @@
|
||||
package flags
|
||||
|
||||
import (
|
||||
"strings"
|
||||
)
|
||||
|
||||
// Windows uses a front slash for both short and long options. Also it uses
|
||||
// a colon for the name/argument delimiter.
|
||||
const (
|
||||
defaultShortOptDelimiter = '/'
|
||||
defaultLongOptDelimiter = "/"
|
||||
defaultNameArgDelimiter = ':'
|
||||
)
|
||||
|
||||
func argumentIsOption(arg string) bool {
|
||||
// Windows-style options allow front slash for the option
|
||||
// delimiter.
|
||||
return len(arg) > 0 && (arg[0] == '-' || arg[0] == '/')
|
||||
}
|
||||
|
||||
// stripOptionPrefix returns the option without the prefix and whether or
|
||||
// not the option is a long option or not.
|
||||
func stripOptionPrefix(optname string) (prefix string, name string, islong bool) {
|
||||
// Determine if the argument is a long option or not. Windows
|
||||
// typically supports both long and short options with a single
|
||||
// front slash as the option delimiter, so handle this situation
|
||||
// nicely.
|
||||
possplit := 0
|
||||
|
||||
if strings.HasPrefix(optname, "--") {
|
||||
possplit = 2
|
||||
islong = true
|
||||
} else if strings.HasPrefix(optname, "-") {
|
||||
possplit = 1
|
||||
islong = false
|
||||
} else if strings.HasPrefix(optname, "/") {
|
||||
possplit = 1
|
||||
islong = len(optname) > 2
|
||||
}
|
||||
|
||||
return optname[:possplit], optname[possplit:], islong
|
||||
}
|
||||
|
||||
// splitOption attempts to split the passed option into a name and an argument.
|
||||
// When there is no argument specified, nil will be returned for it.
|
||||
func splitOption(prefix string, option string, islong bool) (string, *string) {
|
||||
if len(option) == 0 {
|
||||
return option, nil
|
||||
}
|
||||
|
||||
// Windows typically uses a colon for the option name and argument
|
||||
// delimiter while POSIX typically uses an equals. Support both styles,
|
||||
// but don't allow the two to be mixed. That is to say /foo:bar and
|
||||
// --foo=bar are acceptable, but /foo=bar and --foo:bar are not.
|
||||
var pos int
|
||||
|
||||
if prefix == "/" {
|
||||
pos = strings.Index(option, ":")
|
||||
} else if len(prefix) > 0 {
|
||||
pos = strings.Index(option, "=")
|
||||
}
|
||||
|
||||
if (islong && pos >= 0) || (!islong && pos == 1) {
|
||||
rest := option[pos+1:]
|
||||
return option[:pos], &rest
|
||||
}
|
||||
|
||||
return option, nil
|
||||
}
|
||||
|
||||
// addHelpGroup adds a new group that contains default help parameters.
|
||||
func (c *Command) addHelpGroup(showHelp func() error) *Group {
|
||||
// Windows CLI applications typically use /? for help, so make both
|
||||
// that available as well as the POSIX style h and help.
|
||||
var help struct {
|
||||
ShowHelpWindows func() error `short:"?" description:"Show this help message"`
|
||||
ShowHelpPosix func() error `short:"h" long:"help" description:"Show this help message"`
|
||||
}
|
||||
|
||||
help.ShowHelpWindows = showHelp
|
||||
help.ShowHelpPosix = showHelp
|
||||
|
||||
ret, _ := c.AddGroup("Help Options", "", &help)
|
||||
return ret
|
||||
}
|
||||
@@ -1,212 +0,0 @@
|
||||
// Copyright 2012 Jesse van den Kieboom. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package flags
|
||||
|
||||
import (
|
||||
"os"
|
||||
"path"
|
||||
)
|
||||
|
||||
// A Parser provides command line option parsing. It can contain several
|
||||
// option groups each with their own set of options.
|
||||
type Parser struct {
|
||||
// Embedded, see Command for more information
|
||||
*Command
|
||||
|
||||
// A usage string to be displayed in the help message.
|
||||
Usage string
|
||||
|
||||
// Option flags changing the behavior of the parser.
|
||||
Options Options
|
||||
|
||||
internalError error
|
||||
}
|
||||
|
||||
// Options provides parser options that change the behavior of the option
|
||||
// parser.
|
||||
type Options uint
|
||||
|
||||
const (
|
||||
// None indicates no options.
|
||||
None Options = 0
|
||||
|
||||
// HelpFlag adds a default Help Options group to the parser containing
|
||||
// -h and --help options. When either -h or --help is specified on the
|
||||
// command line, the parser will return the special error of type
|
||||
// ErrHelp. When PrintErrors is also specified, then the help message
|
||||
// will also be automatically printed to os.Stderr.
|
||||
HelpFlag = 1 << iota
|
||||
|
||||
// PassDoubleDash passes all arguments after a double dash, --, as
|
||||
// remaining command line arguments (i.e. they will not be parsed for
|
||||
// flags).
|
||||
PassDoubleDash
|
||||
|
||||
// IgnoreUnknown ignores any unknown options and passes them as
|
||||
// remaining command line arguments instead of generating an error.
|
||||
IgnoreUnknown
|
||||
|
||||
// PrintErrors prints any errors which occurred during parsing to
|
||||
// os.Stderr.
|
||||
PrintErrors
|
||||
|
||||
// PassAfterNonOption passes all arguments after the first non option
|
||||
// as remaining command line arguments. This is equivalent to strict
|
||||
// POSIX processing.
|
||||
PassAfterNonOption
|
||||
|
||||
// Default is a convenient default set of options which should cover
|
||||
// most of the uses of the flags package.
|
||||
Default = HelpFlag | PrintErrors | PassDoubleDash
|
||||
)
|
||||
|
||||
// Parse is a convenience function to parse command line options with default
|
||||
// settings. The provided data is a pointer to a struct representing the
|
||||
// default option group (named "Application Options"). For more control, use
|
||||
// flags.NewParser.
|
||||
func Parse(data interface{}) ([]string, error) {
|
||||
return NewParser(data, Default).Parse()
|
||||
}
|
||||
|
||||
// ParseArgs is a convenience function to parse command line options with default
|
||||
// settings. The provided data is a pointer to a struct representing the
|
||||
// default option group (named "Application Options"). The args argument is
|
||||
// the list of command line arguments to parse. If you just want to parse the
|
||||
// default program command line arguments (i.e. os.Args), then use flags.Parse
|
||||
// instead. For more control, use flags.NewParser.
|
||||
func ParseArgs(data interface{}, args []string) ([]string, error) {
|
||||
return NewParser(data, Default).ParseArgs(args)
|
||||
}
|
||||
|
||||
// NewParser creates a new parser. It uses os.Args[0] as the application
|
||||
// name and then calls Parser.NewNamedParser (see Parser.NewNamedParser for
|
||||
// more details). The provided data is a pointer to a struct representing the
|
||||
// default option group (named "Application Options"), or nil if the default
|
||||
// group should not be added. The options parameter specifies a set of options
|
||||
// for the parser.
|
||||
func NewParser(data interface{}, options Options) *Parser {
|
||||
ret := NewNamedParser(path.Base(os.Args[0]), options)
|
||||
|
||||
if data != nil {
|
||||
_, ret.internalError = ret.AddGroup("Application Options", "", data)
|
||||
}
|
||||
|
||||
return ret
|
||||
}
|
||||
|
||||
// NewNamedParser creates a new parser. The appname is used to display the
|
||||
// executable name in the builtin help message. Option groups and commands can
|
||||
// be added to this parser by using AddGroup and AddCommand.
|
||||
func NewNamedParser(appname string, options Options) *Parser {
|
||||
return &Parser{
|
||||
Command: newCommand(appname, "", "", nil),
|
||||
Options: options,
|
||||
}
|
||||
}
|
||||
|
||||
// Parse parses the command line arguments from os.Args using Parser.ParseArgs.
|
||||
// For more detailed information see ParseArgs.
|
||||
func (p *Parser) Parse() ([]string, error) {
|
||||
return p.ParseArgs(os.Args[1:])
|
||||
}
|
||||
|
||||
// ParseArgs parses the command line arguments according to the option groups that
|
||||
// were added to the parser. On successful parsing of the arguments, the
|
||||
// remaining, non-option, arguments (if any) are returned. The returned error
|
||||
// indicates a parsing error and can be used with PrintError to display
|
||||
// contextual information on where the error occurred exactly.
|
||||
//
|
||||
// When the common help group has been added (AddHelp) and either -h or --help
|
||||
// was specified in the command line arguments, a help message will be
|
||||
// automatically printed. Furthermore, the special error type ErrHelp is returned.
|
||||
// It is up to the caller to exit the program if so desired.
|
||||
func (p *Parser) ParseArgs(args []string) ([]string, error) {
|
||||
if p.internalError != nil {
|
||||
return nil, p.internalError
|
||||
}
|
||||
|
||||
p.eachCommand(func(c *Command) {
|
||||
p.eachGroup(func(g *Group) {
|
||||
g.storeDefaults()
|
||||
})
|
||||
}, true)
|
||||
|
||||
// Add builtin help group to all commands if necessary
|
||||
if (p.Options & HelpFlag) != None {
|
||||
p.addHelpGroups(p.showBuiltinHelp)
|
||||
}
|
||||
|
||||
s := &parseState{
|
||||
args: args,
|
||||
retargs: make([]string, 0, len(args)),
|
||||
command: p.Command,
|
||||
lookup: p.makeLookup(),
|
||||
}
|
||||
|
||||
for !s.eof() {
|
||||
arg := s.pop()
|
||||
|
||||
// When PassDoubleDash is set and we encounter a --, then
|
||||
// simply append all the rest as arguments and break out
|
||||
if (p.Options&PassDoubleDash) != None && arg == "--" {
|
||||
s.retargs = append(s.retargs, s.args...)
|
||||
break
|
||||
}
|
||||
|
||||
if !argumentIsOption(arg) {
|
||||
// Note: this also sets s.err, so we can just check for
|
||||
// nil here and use s.err later
|
||||
if p.parseNonOption(s) != nil {
|
||||
break
|
||||
}
|
||||
|
||||
continue
|
||||
}
|
||||
|
||||
var err error
|
||||
var option *Option
|
||||
|
||||
prefix, optname, islong := stripOptionPrefix(arg)
|
||||
optname, argument := splitOption(prefix, optname, islong)
|
||||
|
||||
if islong {
|
||||
option, err = p.parseLong(s, optname, argument)
|
||||
} else {
|
||||
option, err = p.parseShort(s, optname, argument)
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
ignoreUnknown := (p.Options & IgnoreUnknown) != None
|
||||
parseErr := wrapError(err)
|
||||
|
||||
if !(parseErr.Type == ErrUnknownFlag && ignoreUnknown) {
|
||||
s.err = parseErr
|
||||
break
|
||||
}
|
||||
|
||||
if ignoreUnknown {
|
||||
s.retargs = append(s.retargs, arg)
|
||||
}
|
||||
} else {
|
||||
delete(s.lookup.required, option)
|
||||
}
|
||||
}
|
||||
|
||||
if s.err == nil {
|
||||
s.checkRequired()
|
||||
}
|
||||
|
||||
if s.err != nil {
|
||||
return nil, p.printError(s.err)
|
||||
}
|
||||
|
||||
if len(s.command.commands) != 0 {
|
||||
return nil, p.printError(s.estimateCommand())
|
||||
} else if cmd, ok := s.command.data.(Commander); ok {
|
||||
return nil, p.printError(cmd.Execute(s.retargs))
|
||||
}
|
||||
|
||||
return s.retargs, nil
|
||||
}
|
||||
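ParseArgs above is the heart of the parser; a short, hypothetical program using the Parse convenience wrapper (import path assumed, option struct illustrative) ties the pieces together:

// Hypothetical sketch: parse os.Args with the Default option set
// (HelpFlag | PrintErrors | PassDoubleDash) shown above.
package main

import (
    "fmt"
    "os"

    flags "github.com/jessevdk/go-flags" // assumed upstream import path
)

func main() {
    var opts struct {
        Verbose []bool `short:"v" long:"verbose" description:"Verbose output"`
        Config  string `short:"c" long:"config" description:"Configuration file" default:"config.xml"`
    }

    rest, err := flags.Parse(&opts)
    if err != nil {
        os.Exit(1)
    }
    fmt.Println("verbosity:", len(opts.Verbose), "config:", opts.Config, "remaining:", rest)
}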
@@ -1,243 +0,0 @@
|
||||
package flags
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"os"
|
||||
"strings"
|
||||
"unicode/utf8"
|
||||
)
|
||||
|
||||
type parseState struct {
|
||||
arg string
|
||||
args []string
|
||||
retargs []string
|
||||
err error
|
||||
|
||||
command *Command
|
||||
lookup lookup
|
||||
}
|
||||
|
||||
func (p *parseState) eof() bool {
|
||||
return len(p.args) == 0
|
||||
}
|
||||
|
||||
func (p *parseState) pop() string {
|
||||
if p.eof() {
|
||||
return ""
|
||||
}
|
||||
|
||||
p.arg = p.args[0]
|
||||
p.args = p.args[1:]
|
||||
|
||||
return p.arg
|
||||
}
|
||||
|
||||
func (p *parseState) peek() string {
|
||||
if p.eof() {
|
||||
return ""
|
||||
}
|
||||
|
||||
return p.args[0]
|
||||
}
|
||||
|
||||
func (p *parseState) checkRequired() error {
|
||||
required := p.lookup.required
|
||||
|
||||
if len(required) == 0 {
|
||||
return nil
|
||||
}
|
||||
|
||||
names := make([]string, 0, len(required))
|
||||
|
||||
for k := range required {
|
||||
names = append(names, "`"+k.String()+"'")
|
||||
}
|
||||
|
||||
var msg string
|
||||
|
||||
if len(names) == 1 {
|
||||
msg = fmt.Sprintf("the required flag %s was not specified", names[0])
|
||||
} else {
|
||||
msg = fmt.Sprintf("the required flags %s and %s were not specified",
|
||||
strings.Join(names[:len(names)-1], ", "), names[len(names)-1])
|
||||
}
|
||||
|
||||
p.err = newError(ErrRequired, msg)
|
||||
return p.err
|
||||
}
|
||||
|
||||
func (p *parseState) estimateCommand() error {
|
||||
commands := p.command.sortedCommands()
|
||||
cmdnames := make([]string, len(commands))
|
||||
|
||||
for i, v := range commands {
|
||||
cmdnames[i] = v.Name
|
||||
}
|
||||
|
||||
var msg string
|
||||
|
||||
if len(p.retargs) != 0 {
|
||||
c, l := closestChoice(p.retargs[0], cmdnames)
|
||||
msg = fmt.Sprintf("Unknown command `%s'", p.retargs[0])
|
||||
|
||||
if float32(l)/float32(len(c)) < 0.5 {
|
||||
msg = fmt.Sprintf("%s, did you mean `%s'?", msg, c)
|
||||
} else if len(cmdnames) == 1 {
|
||||
msg = fmt.Sprintf("%s. You should use the %s command",
|
||||
msg,
|
||||
cmdnames[0])
|
||||
} else {
|
||||
msg = fmt.Sprintf("%s. Please specify one command of: %s or %s",
|
||||
msg,
|
||||
strings.Join(cmdnames[:len(cmdnames)-1], ", "),
|
||||
cmdnames[len(cmdnames)-1])
|
||||
}
|
||||
} else {
|
||||
if len(cmdnames) == 1 {
|
||||
msg = fmt.Sprintf("Please specify the %s command", cmdnames[0])
|
||||
} else {
|
||||
msg = fmt.Sprintf("Please specify one command of: %s or %s",
|
||||
strings.Join(cmdnames[:len(cmdnames)-1], ", "),
|
||||
cmdnames[len(cmdnames)-1])
|
||||
}
|
||||
}
|
||||
|
||||
return newError(ErrRequired, msg)
|
||||
}
|
||||
|
||||
func (p *Parser) parseOption(s *parseState, name string, option *Option, canarg bool, argument *string) (retoption *Option, err error) {
|
||||
if !option.canArgument() {
|
||||
if argument != nil {
|
||||
msg := fmt.Sprintf("bool flag `%s' cannot have an argument", option)
|
||||
return option, newError(ErrNoArgumentForBool, msg)
|
||||
}
|
||||
|
||||
err = option.set(nil)
|
||||
} else if argument != nil {
|
||||
err = option.set(argument)
|
||||
} else if canarg && !s.eof() {
|
||||
arg := s.pop()
|
||||
err = option.set(&arg)
|
||||
} else if option.OptionalArgument {
|
||||
option.clear()
|
||||
|
||||
for _, v := range option.OptionalValue {
|
||||
err = option.set(&v)
|
||||
|
||||
if err != nil {
|
||||
break
|
||||
}
|
||||
}
|
||||
} else {
|
||||
msg := fmt.Sprintf("expected argument for flag `%s'", option)
|
||||
err = newError(ErrExpectedArgument, msg)
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
if _, ok := err.(*Error); !ok {
|
||||
msg := fmt.Sprintf("invalid argument for flag `%s' (expected %s): %s",
|
||||
option,
|
||||
option.value.Type(),
|
||||
err.Error())
|
||||
|
||||
err = newError(ErrMarshal, msg)
|
||||
}
|
||||
}
|
||||
|
||||
return option, err
|
||||
}
|
||||
|
||||
func (p *Parser) parseLong(s *parseState, name string, argument *string) (option *Option, err error) {
|
||||
if option := s.lookup.longNames[name]; option != nil {
|
||||
// Only long options that are required can consume an argument
|
||||
// from the argument list
|
||||
canarg := !option.OptionalArgument
|
||||
|
||||
return p.parseOption(s, name, option, canarg, argument)
|
||||
}
|
||||
|
||||
return nil, newError(ErrUnknownFlag, fmt.Sprintf("unknown flag `%s'", name))
|
||||
}
|
||||
|
||||
func (p *Parser) splitShortConcatArg(s *parseState, optname string) (string, *string) {
|
||||
c, n := utf8.DecodeRuneInString(optname)
|
||||
|
||||
if n == len(optname) {
|
||||
return optname, nil
|
||||
}
|
||||
|
||||
first := string(c)
|
||||
|
||||
if option := s.lookup.shortNames[first]; option != nil && option.canArgument() {
|
||||
arg := optname[n:]
|
||||
return first, &arg
|
||||
}
|
||||
|
||||
return optname, nil
|
||||
}
|
||||
|
||||
func (p *Parser) parseShort(s *parseState, optname string, argument *string) (option *Option, err error) {
|
||||
if argument == nil {
|
||||
optname, argument = p.splitShortConcatArg(s, optname)
|
||||
}
|
||||
|
||||
for i, c := range optname {
|
||||
shortname := string(c)
|
||||
|
||||
if option = s.lookup.shortNames[shortname]; option != nil {
|
||||
// Only the last short argument can consume an argument from
|
||||
// the arguments list, and only if it's non optional
|
||||
canarg := (i+utf8.RuneLen(c) == len(optname)) && !option.OptionalArgument
|
||||
|
||||
if _, err := p.parseOption(s, shortname, option, canarg, argument); err != nil {
|
||||
return option, err
|
||||
}
|
||||
} else {
|
||||
return nil, newError(ErrUnknownFlag, fmt.Sprintf("unknown flag `%s'", shortname))
|
||||
}
|
||||
|
||||
// Only the first option can have a concatted argument, so just
|
||||
// clear argument here
|
||||
argument = nil
|
||||
}
|
||||
|
||||
return option, nil
|
||||
}
|
||||
|
||||
func (p *Parser) parseNonOption(s *parseState) error {
|
||||
if cmd := s.lookup.commands[s.arg]; cmd != nil {
|
||||
if err := s.checkRequired(); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
s.command.Active = cmd
|
||||
|
||||
s.command = cmd
|
||||
s.lookup = cmd.makeLookup()
|
||||
} else if (p.Options & PassAfterNonOption) != None {
|
||||
// If PassAfterNonOption is set then all remaining arguments
|
||||
// are considered positional
|
||||
s.retargs = append(append(s.retargs, s.arg), s.args...)
|
||||
s.args = []string{}
|
||||
} else {
|
||||
s.retargs = append(s.retargs, s.arg)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (p *Parser) showBuiltinHelp() error {
|
||||
var b bytes.Buffer
|
||||
|
||||
p.WriteHelp(&b)
|
||||
return newError(ErrHelp, b.String())
|
||||
}
|
||||
|
||||
func (p *Parser) printError(err error) error {
|
||||
if err != nil && (p.Options&PrintErrors) != None {
|
||||
fmt.Fprintln(os.Stderr, err)
|
||||
}
|
||||
|
||||
return err
|
||||
}
|
||||
@@ -1,81 +0,0 @@
|
||||
package flags
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestPointerBool(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value *bool `short:"v"`
|
||||
}{}
|
||||
|
||||
ret := assertParseSuccess(t, &opts, "-v")
|
||||
|
||||
assertStringArray(t, ret, []string{})
|
||||
|
||||
if !*opts.Value {
|
||||
t.Errorf("Expected Value to be true")
|
||||
}
|
||||
}
|
||||
|
||||
func TestPointerString(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value *string `short:"v"`
|
||||
}{}
|
||||
|
||||
ret := assertParseSuccess(t, &opts, "-v", "value")
|
||||
|
||||
assertStringArray(t, ret, []string{})
|
||||
assertString(t, *opts.Value, "value")
|
||||
}
|
||||
|
||||
func TestPointerSlice(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value *[]string `short:"v"`
|
||||
}{}
|
||||
|
||||
ret := assertParseSuccess(t, &opts, "-v", "value1", "-v", "value2")
|
||||
|
||||
assertStringArray(t, ret, []string{})
|
||||
assertStringArray(t, *opts.Value, []string{"value1", "value2"})
|
||||
}
|
||||
|
||||
func TestPointerMap(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value *map[string]int `short:"v"`
|
||||
}{}
|
||||
|
||||
ret := assertParseSuccess(t, &opts, "-v", "k1:2", "-v", "k2:-5")
|
||||
|
||||
assertStringArray(t, ret, []string{})
|
||||
|
||||
if v, ok := (*opts.Value)["k1"]; !ok {
|
||||
t.Errorf("Expected key \"k1\" to exist")
|
||||
} else if v != 2 {
|
||||
t.Errorf("Expected \"k1\" to be 2, but got %#v", v)
|
||||
}
|
||||
|
||||
if v, ok := (*opts.Value)["k2"]; !ok {
|
||||
t.Errorf("Expected key \"k2\" to exist")
|
||||
} else if v != -5 {
|
||||
t.Errorf("Expected \"k2\" to be -5, but got %#v", v)
|
||||
}
|
||||
}
|
||||
|
||||
type PointerGroup struct {
|
||||
Value bool `short:"v"`
|
||||
}
|
||||
|
||||
func TestPointerGroup(t *testing.T) {
|
||||
var opts = struct {
|
||||
Group *PointerGroup `group:"Group Options"`
|
||||
}{}
|
||||
|
||||
ret := assertParseSuccess(t, &opts, "-v")
|
||||
|
||||
assertStringArray(t, ret, []string{})
|
||||
|
||||
if !opts.Group.Value {
|
||||
t.Errorf("Expected Group.Value to be true")
|
||||
}
|
||||
}
|
||||
@@ -1,169 +0,0 @@
|
||||
package flags
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestShort(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value bool `short:"v"`
|
||||
}{}
|
||||
|
||||
ret := assertParseSuccess(t, &opts, "-v")
|
||||
|
||||
assertStringArray(t, ret, []string{})
|
||||
|
||||
if !opts.Value {
|
||||
t.Errorf("Expected Value to be true")
|
||||
}
|
||||
}
|
||||
|
||||
func TestShortTooLong(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value bool `short:"vv"`
|
||||
}{}
|
||||
|
||||
assertParseFail(t, ErrShortNameTooLong, "short names can only be 1 character long, not `vv'", &opts)
|
||||
}
|
||||
|
||||
func TestShortRequired(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value bool `short:"v" required:"true"`
|
||||
}{}
|
||||
|
||||
assertParseFail(t, ErrRequired, "the required flag `-v' was not specified", &opts)
|
||||
}
|
||||
|
||||
func TestShortMultiConcat(t *testing.T) {
|
||||
var opts = struct {
|
||||
V bool `short:"v"`
|
||||
O bool `short:"o"`
|
||||
F bool `short:"f"`
|
||||
}{}
|
||||
|
||||
ret := assertParseSuccess(t, &opts, "-vo", "-f")
|
||||
|
||||
assertStringArray(t, ret, []string{})
|
||||
|
||||
if !opts.V {
|
||||
t.Errorf("Expected V to be true")
|
||||
}
|
||||
|
||||
if !opts.O {
|
||||
t.Errorf("Expected O to be true")
|
||||
}
|
||||
|
||||
if !opts.F {
|
||||
t.Errorf("Expected F to be true")
|
||||
}
|
||||
}
|
||||
|
||||
func TestShortMultiSlice(t *testing.T) {
|
||||
var opts = struct {
|
||||
Values []bool `short:"v"`
|
||||
}{}
|
||||
|
||||
ret := assertParseSuccess(t, &opts, "-v", "-v")
|
||||
|
||||
assertStringArray(t, ret, []string{})
|
||||
assertBoolArray(t, opts.Values, []bool{true, true})
|
||||
}
|
||||
|
||||
func TestShortMultiSliceConcat(t *testing.T) {
|
||||
var opts = struct {
|
||||
Values []bool `short:"v"`
|
||||
}{}
|
||||
|
||||
ret := assertParseSuccess(t, &opts, "-vvv")
|
||||
|
||||
assertStringArray(t, ret, []string{})
|
||||
assertBoolArray(t, opts.Values, []bool{true, true, true})
|
||||
}
|
||||
|
||||
func TestShortWithEqualArg(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value string `short:"v"`
|
||||
}{}
|
||||
|
||||
ret := assertParseSuccess(t, &opts, "-v=value")
|
||||
|
||||
assertStringArray(t, ret, []string{})
|
||||
assertString(t, opts.Value, "value")
|
||||
}
|
||||
|
||||
func TestShortWithArg(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value string `short:"v"`
|
||||
}{}
|
||||
|
||||
ret := assertParseSuccess(t, &opts, "-vvalue")
|
||||
|
||||
assertStringArray(t, ret, []string{})
|
||||
assertString(t, opts.Value, "value")
|
||||
}
|
||||
|
||||
func TestShortArg(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value string `short:"v"`
|
||||
}{}
|
||||
|
||||
ret := assertParseSuccess(t, &opts, "-v", "value")
|
||||
|
||||
assertStringArray(t, ret, []string{})
|
||||
assertString(t, opts.Value, "value")
|
||||
}
|
||||
|
||||
func TestShortMultiWithEqualArg(t *testing.T) {
|
||||
var opts = struct {
|
||||
F []bool `short:"f"`
|
||||
Value string `short:"v"`
|
||||
}{}
|
||||
|
||||
assertParseFail(t, ErrExpectedArgument, "expected argument for flag `-v'", &opts, "-ffv=value")
|
||||
}
|
||||
|
||||
func TestShortMultiArg(t *testing.T) {
|
||||
var opts = struct {
|
||||
F []bool `short:"f"`
|
||||
Value string `short:"v"`
|
||||
}{}
|
||||
|
||||
ret := assertParseSuccess(t, &opts, "-ffv", "value")
|
||||
|
||||
assertStringArray(t, ret, []string{})
|
||||
assertBoolArray(t, opts.F, []bool{true, true})
|
||||
assertString(t, opts.Value, "value")
|
||||
}
|
||||
|
||||
func TestShortMultiArgConcatFail(t *testing.T) {
|
||||
var opts = struct {
|
||||
F []bool `short:"f"`
|
||||
Value string `short:"v"`
|
||||
}{}
|
||||
|
||||
assertParseFail(t, ErrExpectedArgument, "expected argument for flag `-v'", &opts, "-ffvvalue")
|
||||
}
|
||||
|
||||
func TestShortMultiArgConcat(t *testing.T) {
|
||||
var opts = struct {
|
||||
F []bool `short:"f"`
|
||||
Value string `short:"v"`
|
||||
}{}
|
||||
|
||||
ret := assertParseSuccess(t, &opts, "-vff")
|
||||
|
||||
assertStringArray(t, ret, []string{})
|
||||
assertString(t, opts.Value, "ff")
|
||||
}
|
||||
|
||||
func TestShortOptional(t *testing.T) {
|
||||
var opts = struct {
|
||||
F []bool `short:"f"`
|
||||
Value string `short:"v" optional:"yes" optional-value:"value"`
|
||||
}{}
|
||||
|
||||
ret := assertParseSuccess(t, &opts, "-fv", "f")
|
||||
|
||||
assertStringArray(t, ret, []string{"f"})
|
||||
assertString(t, opts.Value, "value")
|
||||
}
|
||||
@@ -1,39 +0,0 @@
|
||||
package flags
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestTagMissingColon(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value bool `short`
|
||||
}{}
|
||||
|
||||
assertParseFail(t, ErrTag, "expected `:' after key name, but got end of tag (in `short`)", &opts, "")
|
||||
}
|
||||
|
||||
func TestTagMissingValue(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value bool `short:`
|
||||
}{}
|
||||
|
||||
assertParseFail(t, ErrTag, "expected `\"' to start tag value at end of tag (in `short:`)", &opts, "")
|
||||
}
|
||||
|
||||
func TestTagMissingQuote(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value bool `short:"v`
|
||||
}{}
|
||||
|
||||
assertParseFail(t, ErrTag, "expected end of tag value `\"' at end of tag (in `short:\"v`)", &opts, "")
|
||||
}
|
||||
|
||||
func TestTagNewline(t *testing.T) {
|
||||
var opts = struct {
|
||||
Value bool `long:"verbose" description:"verbose
|
||||
something"`
|
||||
}{}
|
||||
|
||||
assertParseFail(t, ErrTag, "unexpected newline in tag value `description' (in `long:\"verbose\" description:\"verbose\nsomething\"`)", &opts, "")
|
||||
}
|
||||
|
||||
@@ -1,5 +0,0 @@
package flags

func getTerminalColumns() int {
    return 80
}
@@ -1,66 +0,0 @@
|
||||
package flags
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestUnknownFlags(t *testing.T) {
|
||||
var opts = struct {
|
||||
Verbose []bool `short:"v" long:"verbose" description:"Verbose output"`
|
||||
}{}
|
||||
|
||||
args := []string{
|
||||
"-f",
|
||||
}
|
||||
|
||||
p := NewParser(&opts, 0)
|
||||
args, err := p.ParseArgs(args)
|
||||
|
||||
if err == nil {
|
||||
t.Fatal("Expected error for unknown argument")
|
||||
}
|
||||
}
|
||||
|
||||
func TestIgnoreUnknownFlags(t *testing.T) {
|
||||
var opts = struct {
|
||||
Verbose []bool `short:"v" long:"verbose" description:"Verbose output"`
|
||||
}{}
|
||||
|
||||
args := []string{
|
||||
"hello",
|
||||
"world",
|
||||
"-v",
|
||||
"--foo=bar",
|
||||
"--verbose",
|
||||
"-f",
|
||||
}
|
||||
|
||||
p := NewParser(&opts, IgnoreUnknown)
|
||||
args, err := p.ParseArgs(args)
|
||||
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
exargs := []string{
|
||||
"hello",
|
||||
"world",
|
||||
"--foo=bar",
|
||||
"-f",
|
||||
}
|
||||
|
||||
issame := (len(args) == len(exargs))
|
||||
|
||||
if issame {
|
||||
for i := 0; i < len(args); i++ {
|
||||
if args[i] != exargs[i] {
|
||||
issame = false
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if !issame {
|
||||
t.Fatalf("Expected %v but got %v", exargs, args)
|
||||
}
|
||||
}
|
||||
203
gui/angular.min.js
vendored
Normal file
@@ -0,0 +1,203 @@
|
||||
/*
|
||||
AngularJS v1.2.7
|
||||
(c) 2010-2014 Google, Inc. http://angularjs.org
|
||||
License: MIT
|
||||
*/
|
||||
(function(Z,Q,r){'use strict';function F(b){return function(){var a=arguments[0],c,a="["+(b?b+":":"")+a+"] http://errors.angularjs.org/1.2.7/"+(b?b+"/":"")+a;for(c=1;c<arguments.length;c++)a=a+(1==c?"?":"&")+"p"+(c-1)+"="+encodeURIComponent("function"==typeof arguments[c]?arguments[c].toString().replace(/ \{[\s\S]*$/,""):"undefined"==typeof arguments[c]?"undefined":"string"!=typeof arguments[c]?JSON.stringify(arguments[c]):arguments[c]);return Error(a)}}function qb(b){if(null==b||za(b))return!1;var a=
|
||||
b.length;return 1===b.nodeType&&a?!0:D(b)||K(b)||0===a||"number"===typeof a&&0<a&&a-1 in b}function q(b,a,c){var d;if(b)if(L(b))for(d in b)"prototype"==d||("length"==d||"name"==d||b.hasOwnProperty&&!b.hasOwnProperty(d))||a.call(c,b[d],d);else if(b.forEach&&b.forEach!==q)b.forEach(a,c);else if(qb(b))for(d=0;d<b.length;d++)a.call(c,b[d],d);else for(d in b)b.hasOwnProperty(d)&&a.call(c,b[d],d);return b}function Ob(b){var a=[],c;for(c in b)b.hasOwnProperty(c)&&a.push(c);return a.sort()}function Oc(b,
|
||||
a,c){for(var d=Ob(b),e=0;e<d.length;e++)a.call(c,b[d[e]],d[e]);return d}function Pb(b){return function(a,c){b(c,a)}}function Ya(){for(var b=ka.length,a;b;){b--;a=ka[b].charCodeAt(0);if(57==a)return ka[b]="A",ka.join("");if(90==a)ka[b]="0";else return ka[b]=String.fromCharCode(a+1),ka.join("")}ka.unshift("0");return ka.join("")}function Qb(b,a){a?b.$$hashKey=a:delete b.$$hashKey}function t(b){var a=b.$$hashKey;q(arguments,function(a){a!==b&&q(a,function(a,c){b[c]=a})});Qb(b,a);return b}function S(b){return parseInt(b,
|
||||
10)}function Rb(b,a){return t(new (t(function(){},{prototype:b})),a)}function w(){}function Aa(b){return b}function $(b){return function(){return b}}function z(b){return"undefined"===typeof b}function B(b){return"undefined"!==typeof b}function X(b){return null!=b&&"object"===typeof b}function D(b){return"string"===typeof b}function rb(b){return"number"===typeof b}function Ja(b){return"[object Date]"===Za.call(b)}function K(b){return"[object Array]"===Za.call(b)}function L(b){return"function"===typeof b}
|
||||
function $a(b){return"[object RegExp]"===Za.call(b)}function za(b){return b&&b.document&&b.location&&b.alert&&b.setInterval}function Pc(b){return!(!b||!(b.nodeName||b.on&&b.find))}function Qc(b,a,c){var d=[];q(b,function(b,g,f){d.push(a.call(c,b,g,f))});return d}function ab(b,a){if(b.indexOf)return b.indexOf(a);for(var c=0;c<b.length;c++)if(a===b[c])return c;return-1}function Ka(b,a){var c=ab(b,a);0<=c&&b.splice(c,1);return a}function fa(b,a){if(za(b)||b&&b.$evalAsync&&b.$watch)throw La("cpws");if(a){if(b===
|
||||
a)throw La("cpi");if(K(b))for(var c=a.length=0;c<b.length;c++)a.push(fa(b[c]));else{c=a.$$hashKey;q(a,function(b,c){delete a[c]});for(var d in b)a[d]=fa(b[d]);Qb(a,c)}}else(a=b)&&(K(b)?a=fa(b,[]):Ja(b)?a=new Date(b.getTime()):$a(b)?a=RegExp(b.source):X(b)&&(a=fa(b,{})));return a}function Sb(b,a){a=a||{};for(var c in b)b.hasOwnProperty(c)&&("$"!==c.charAt(0)&&"$"!==c.charAt(1))&&(a[c]=b[c]);return a}function ua(b,a){if(b===a)return!0;if(null===b||null===a)return!1;if(b!==b&&a!==a)return!0;var c=typeof b,
|
||||
d;if(c==typeof a&&"object"==c)if(K(b)){if(!K(a))return!1;if((c=b.length)==a.length){for(d=0;d<c;d++)if(!ua(b[d],a[d]))return!1;return!0}}else{if(Ja(b))return Ja(a)&&b.getTime()==a.getTime();if($a(b)&&$a(a))return b.toString()==a.toString();if(b&&b.$evalAsync&&b.$watch||a&&a.$evalAsync&&a.$watch||za(b)||za(a)||K(a))return!1;c={};for(d in b)if("$"!==d.charAt(0)&&!L(b[d])){if(!ua(b[d],a[d]))return!1;c[d]=!0}for(d in a)if(!c.hasOwnProperty(d)&&"$"!==d.charAt(0)&&a[d]!==r&&!L(a[d]))return!1;return!0}return!1}
|
||||
function Tb(){return Q.securityPolicy&&Q.securityPolicy.isActive||Q.querySelector&&!(!Q.querySelector("[ng-csp]")&&!Q.querySelector("[data-ng-csp]"))}function bb(b,a){var c=2<arguments.length?va.call(arguments,2):[];return!L(a)||a instanceof RegExp?a:c.length?function(){return arguments.length?a.apply(b,c.concat(va.call(arguments,0))):a.apply(b,c)}:function(){return arguments.length?a.apply(b,arguments):a.call(b)}}function Rc(b,a){var c=a;"string"===typeof b&&"$"===b.charAt(0)?c=r:za(a)?c="$WINDOW":
|
||||
a&&Q===a?c="$DOCUMENT":a&&(a.$evalAsync&&a.$watch)&&(c="$SCOPE");return c}function pa(b,a){return"undefined"===typeof b?r:JSON.stringify(b,Rc,a?" ":null)}function Ub(b){return D(b)?JSON.parse(b):b}function Ma(b){"function"===typeof b?b=!0:b&&0!==b.length?(b=x(""+b),b=!("f"==b||"0"==b||"false"==b||"no"==b||"n"==b||"[]"==b)):b=!1;return b}function ga(b){b=A(b).clone();try{b.empty()}catch(a){}var c=A("<div>").append(b).html();try{return 3===b[0].nodeType?x(c):c.match(/^(<[^>]+>)/)[1].replace(/^<([\w\-]+)/,
|
||||
function(a,b){return"<"+x(b)})}catch(d){return x(c)}}function Vb(b){try{return decodeURIComponent(b)}catch(a){}}function Wb(b){var a={},c,d;q((b||"").split("&"),function(b){b&&(c=b.split("="),d=Vb(c[0]),B(d)&&(b=B(c[1])?Vb(c[1]):!0,a[d]?K(a[d])?a[d].push(b):a[d]=[a[d],b]:a[d]=b))});return a}function Xb(b){var a=[];q(b,function(b,d){K(b)?q(b,function(b){a.push(wa(d,!0)+(!0===b?"":"="+wa(b,!0)))}):a.push(wa(d,!0)+(!0===b?"":"="+wa(b,!0)))});return a.length?a.join("&"):""}function sb(b){return wa(b,
|
||||
!0).replace(/%26/gi,"&").replace(/%3D/gi,"=").replace(/%2B/gi,"+")}function wa(b,a){return encodeURIComponent(b).replace(/%40/gi,"@").replace(/%3A/gi,":").replace(/%24/g,"$").replace(/%2C/gi,",").replace(/%20/g,a?"%20":"+")}function Sc(b,a){function c(a){a&&d.push(a)}var d=[b],e,g,f=["ng:app","ng-app","x-ng-app","data-ng-app"],h=/\sng[:\-]app(:\s*([\w\d_]+);?)?\s/;q(f,function(a){f[a]=!0;c(Q.getElementById(a));a=a.replace(":","\\:");b.querySelectorAll&&(q(b.querySelectorAll("."+a),c),q(b.querySelectorAll("."+
|
||||
a+"\\:"),c),q(b.querySelectorAll("["+a+"]"),c))});q(d,function(a){if(!e){var b=h.exec(" "+a.className+" ");b?(e=a,g=(b[2]||"").replace(/\s+/g,",")):q(a.attributes,function(b){!e&&f[b.name]&&(e=a,g=b.value)})}});e&&a(e,g?[g]:[])}function Yb(b,a){var c=function(){b=A(b);if(b.injector()){var c=b[0]===Q?"document":ga(b);throw La("btstrpd",c);}a=a||[];a.unshift(["$provide",function(a){a.value("$rootElement",b)}]);a.unshift("ng");c=Zb(a);c.invoke(["$rootScope","$rootElement","$compile","$injector","$animate",
|
||||
function(a,b,c,d,e){a.$apply(function(){b.data("$injector",d);c(b)(a)})}]);return c},d=/^NG_DEFER_BOOTSTRAP!/;if(Z&&!d.test(Z.name))return c();Z.name=Z.name.replace(d,"");Na.resumeBootstrap=function(b){q(b,function(b){a.push(b)});c()}}function cb(b,a){a=a||"_";return b.replace(Tc,function(b,d){return(d?a:"")+b.toLowerCase()})}function tb(b,a,c){if(!b)throw La("areq",a||"?",c||"required");return b}function Oa(b,a,c){c&&K(b)&&(b=b[b.length-1]);tb(L(b),a,"not a function, got "+(b&&"object"==typeof b?
|
||||
b.constructor.name||"Object":typeof b));return b}function xa(b,a){if("hasOwnProperty"===b)throw La("badname",a);}function ub(b,a,c){if(!a)return b;a=a.split(".");for(var d,e=b,g=a.length,f=0;f<g;f++)d=a[f],b&&(b=(e=b)[d]);return!c&&L(b)?bb(e,b):b}function vb(b){var a=b[0];b=b[b.length-1];if(a===b)return A(a);var c=[a];do{a=a.nextSibling;if(!a)break;c.push(a)}while(a!==b);return A(c)}function Uc(b){var a=F("$injector"),c=F("ng");b=b.angular||(b.angular={});b.$$minErr=b.$$minErr||F;return b.module||
|
||||
(b.module=function(){var b={};return function(e,g,f){if("hasOwnProperty"===e)throw c("badname","module");g&&b.hasOwnProperty(e)&&(b[e]=null);return b[e]||(b[e]=function(){function b(a,d,e){return function(){c[e||"push"]([a,d,arguments]);return n}}if(!g)throw a("nomod",e);var c=[],d=[],l=b("$injector","invoke"),n={_invokeQueue:c,_runBlocks:d,requires:g,name:e,provider:b("$provide","provider"),factory:b("$provide","factory"),service:b("$provide","service"),value:b("$provide","value"),constant:b("$provide",
|
||||
"constant","unshift"),animation:b("$animateProvider","register"),filter:b("$filterProvider","register"),controller:b("$controllerProvider","register"),directive:b("$compileProvider","directive"),config:l,run:function(a){d.push(a);return this}};f&&l(f);return n}())}}())}function Pa(b){return b.replace(Vc,function(a,b,d,e){return e?d.toUpperCase():d}).replace(Wc,"Moz$1")}function wb(b,a,c,d){function e(b){var e=c&&b?[this.filter(b)]:[this],m=a,k,l,n,p,s,C;if(!d||null!=b)for(;e.length;)for(k=e.shift(),
|
||||
l=0,n=k.length;l<n;l++)for(p=A(k[l]),m?p.triggerHandler("$destroy"):m=!m,s=0,p=(C=p.children()).length;s<p;s++)e.push(Ba(C[s]));return g.apply(this,arguments)}var g=Ba.fn[b],g=g.$original||g;e.$original=g;Ba.fn[b]=e}function O(b){if(b instanceof O)return b;if(!(this instanceof O)){if(D(b)&&"<"!=b.charAt(0))throw xb("nosel");return new O(b)}if(D(b)){var a=Q.createElement("div");a.innerHTML="<div> </div>"+b;a.removeChild(a.firstChild);yb(this,a.childNodes);A(Q.createDocumentFragment()).append(this)}else yb(this,
|
||||
b)}function zb(b){return b.cloneNode(!0)}function Ca(b){$b(b);var a=0;for(b=b.childNodes||[];a<b.length;a++)Ca(b[a])}function ac(b,a,c,d){if(B(d))throw xb("offargs");var e=la(b,"events");la(b,"handle")&&(z(a)?q(e,function(a,c){Ab(b,c,a);delete e[c]}):q(a.split(" "),function(a){z(c)?(Ab(b,a,e[a]),delete e[a]):Ka(e[a]||[],c)}))}function $b(b,a){var c=b[db],d=Qa[c];d&&(a?delete Qa[c].data[a]:(d.handle&&(d.events.$destroy&&d.handle({},"$destroy"),ac(b)),delete Qa[c],b[db]=r))}function la(b,a,c){var d=
|
||||
b[db],d=Qa[d||-1];if(B(c))d||(b[db]=d=++Xc,d=Qa[d]={}),d[a]=c;else return d&&d[a]}function bc(b,a,c){var d=la(b,"data"),e=B(c),g=!e&&B(a),f=g&&!X(a);d||f||la(b,"data",d={});if(e)d[a]=c;else if(g){if(f)return d&&d[a];t(d,a)}else return d}function Bb(b,a){return b.getAttribute?-1<(" "+(b.getAttribute("class")||"")+" ").replace(/[\n\t]/g," ").indexOf(" "+a+" "):!1}function Cb(b,a){a&&b.setAttribute&&q(a.split(" "),function(a){b.setAttribute("class",aa((" "+(b.getAttribute("class")||"")+" ").replace(/[\n\t]/g,
|
||||
" ").replace(" "+aa(a)+" "," ")))})}function Db(b,a){if(a&&b.setAttribute){var c=(" "+(b.getAttribute("class")||"")+" ").replace(/[\n\t]/g," ");q(a.split(" "),function(a){a=aa(a);-1===c.indexOf(" "+a+" ")&&(c+=a+" ")});b.setAttribute("class",aa(c))}}function yb(b,a){if(a){a=a.nodeName||!B(a.length)||za(a)?[a]:a;for(var c=0;c<a.length;c++)b.push(a[c])}}function cc(b,a){return eb(b,"$"+(a||"ngController")+"Controller")}function eb(b,a,c){b=A(b);9==b[0].nodeType&&(b=b.find("html"));for(a=K(a)?a:[a];b.length;){for(var d=
|
||||
0,e=a.length;d<e;d++)if((c=b.data(a[d]))!==r)return c;b=b.parent()}}function dc(b){for(var a=0,c=b.childNodes;a<c.length;a++)Ca(c[a]);for(;b.firstChild;)b.removeChild(b.firstChild)}function ec(b,a){var c=fb[a.toLowerCase()];return c&&fc[b.nodeName]&&c}function Yc(b,a){var c=function(c,e){c.preventDefault||(c.preventDefault=function(){c.returnValue=!1});c.stopPropagation||(c.stopPropagation=function(){c.cancelBubble=!0});c.target||(c.target=c.srcElement||Q);if(z(c.defaultPrevented)){var g=c.preventDefault;
|
||||
c.preventDefault=function(){c.defaultPrevented=!0;g.call(c)};c.defaultPrevented=!1}c.isDefaultPrevented=function(){return c.defaultPrevented||!1===c.returnValue};var f=Sb(a[e||c.type]||[]);q(f,function(a){a.call(b,c)});8>=M?(c.preventDefault=null,c.stopPropagation=null,c.isDefaultPrevented=null):(delete c.preventDefault,delete c.stopPropagation,delete c.isDefaultPrevented)};c.elem=b;return c}function Da(b){var a=typeof b,c;"object"==a&&null!==b?"function"==typeof(c=b.$$hashKey)?c=b.$$hashKey():c===
|
||||
r&&(c=b.$$hashKey=Ya()):c=b;return a+":"+c}function Ra(b){q(b,this.put,this)}function gc(b){var a,c;"function"==typeof b?(a=b.$inject)||(a=[],b.length&&(c=b.toString().replace(Zc,""),c=c.match($c),q(c[1].split(ad),function(b){b.replace(bd,function(b,c,d){a.push(d)})})),b.$inject=a):K(b)?(c=b.length-1,Oa(b[c],"fn"),a=b.slice(0,c)):Oa(b,"fn",!0);return a}function Zb(b){function a(a){return function(b,c){if(X(b))q(b,Pb(a));else return a(b,c)}}function c(a,b){xa(a,"service");if(L(b)||K(b))b=n.instantiate(b);
|
||||
if(!b.$get)throw Sa("pget",a);return l[a+h]=b}function d(a,b){return c(a,{$get:b})}function e(a){var b=[],c,d,g,h;q(a,function(a){if(!k.get(a)){k.put(a,!0);try{if(D(a))for(c=Ta(a),b=b.concat(e(c.requires)).concat(c._runBlocks),d=c._invokeQueue,g=0,h=d.length;g<h;g++){var f=d[g],m=n.get(f[0]);m[f[1]].apply(m,f[2])}else L(a)?b.push(n.invoke(a)):K(a)?b.push(n.invoke(a)):Oa(a,"module")}catch(l){throw K(a)&&(a=a[a.length-1]),l.message&&(l.stack&&-1==l.stack.indexOf(l.message))&&(l=l.message+"\n"+l.stack),
|
||||
Sa("modulerr",a,l.stack||l.message||l);}}});return b}function g(a,b){function c(d){if(a.hasOwnProperty(d)){if(a[d]===f)throw Sa("cdep",m.join(" <- "));return a[d]}try{return m.unshift(d),a[d]=f,a[d]=b(d)}catch(e){throw a[d]===f&&delete a[d],e;}finally{m.shift()}}function d(a,b,e){var g=[],h=gc(a),f,k,m;k=0;for(f=h.length;k<f;k++){m=h[k];if("string"!==typeof m)throw Sa("itkn",m);g.push(e&&e.hasOwnProperty(m)?e[m]:c(m))}a.$inject||(a=a[f]);return a.apply(b,g)}return{invoke:d,instantiate:function(a,
|
||||
b){var c=function(){},e;c.prototype=(K(a)?a[a.length-1]:a).prototype;c=new c;e=d(a,c,b);return X(e)||L(e)?e:c},get:c,annotate:gc,has:function(b){return l.hasOwnProperty(b+h)||a.hasOwnProperty(b)}}}var f={},h="Provider",m=[],k=new Ra,l={$provide:{provider:a(c),factory:a(d),service:a(function(a,b){return d(a,["$injector",function(a){return a.instantiate(b)}])}),value:a(function(a,b){return d(a,$(b))}),constant:a(function(a,b){xa(a,"constant");l[a]=b;p[a]=b}),decorator:function(a,b){var c=n.get(a+h),
|
||||
d=c.$get;c.$get=function(){var a=s.invoke(d,c);return s.invoke(b,null,{$delegate:a})}}}},n=l.$injector=g(l,function(){throw Sa("unpr",m.join(" <- "));}),p={},s=p.$injector=g(p,function(a){a=n.get(a+h);return s.invoke(a.$get,a)});q(e(b),function(a){s.invoke(a||w)});return s}function cd(){var b=!0;this.disableAutoScrolling=function(){b=!1};this.$get=["$window","$location","$rootScope",function(a,c,d){function e(a){var b=null;q(a,function(a){b||"a"!==x(a.nodeName)||(b=a)});return b}function g(){var b=
|
||||
c.hash(),d;b?(d=f.getElementById(b))?d.scrollIntoView():(d=e(f.getElementsByName(b)))?d.scrollIntoView():"top"===b&&a.scrollTo(0,0):a.scrollTo(0,0)}var f=a.document;b&&d.$watch(function(){return c.hash()},function(){d.$evalAsync(g)});return g}]}function dd(b,a,c,d){function e(a){try{a.apply(null,va.call(arguments,1))}finally{if(C--,0===C)for(;y.length;)try{y.pop()()}catch(b){c.error(b)}}}function g(a,b){(function T(){q(E,function(a){a()});u=b(T,a)})()}function f(){v=null;R!=h.url()&&(R=h.url(),q(ha,
|
||||
function(a){a(h.url())}))}var h=this,m=a[0],k=b.location,l=b.history,n=b.setTimeout,p=b.clearTimeout,s={};h.isMock=!1;var C=0,y=[];h.$$completeOutstandingRequest=e;h.$$incOutstandingRequestCount=function(){C++};h.notifyWhenNoOutstandingRequests=function(a){q(E,function(a){a()});0===C?a():y.push(a)};var E=[],u;h.addPollFn=function(a){z(u)&&g(100,n);E.push(a);return a};var R=k.href,H=a.find("base"),v=null;h.url=function(a,c){k!==b.location&&(k=b.location);l!==b.history&&(l=b.history);if(a){if(R!=a)return R=
|
||||
a,d.history?c?l.replaceState(null,"",a):(l.pushState(null,"",a),H.attr("href",H.attr("href"))):(v=a,c?k.replace(a):k.href=a),h}else return v||k.href.replace(/%27/g,"'")};var ha=[],N=!1;h.onUrlChange=function(a){if(!N){if(d.history)A(b).on("popstate",f);if(d.hashchange)A(b).on("hashchange",f);else h.addPollFn(f);N=!0}ha.push(a);return a};h.baseHref=function(){var a=H.attr("href");return a?a.replace(/^(https?\:)?\/\/[^\/]*/,""):""};var V={},J="",ba=h.baseHref();h.cookies=function(a,b){var d,e,g,h;if(a)b===
r?m.cookie=escape(a)+"=;path="+ba+";expires=Thu, 01 Jan 1970 00:00:00 GMT":D(b)&&(d=(m.cookie=escape(a)+"="+escape(b)+";path="+ba).length+1,4096<d&&c.warn("Cookie '"+a+"' possibly not set or overflowed because it was too large ("+d+" > 4096 bytes)!"));else{if(m.cookie!==J)for(J=m.cookie,d=J.split("; "),V={},g=0;g<d.length;g++)e=d[g],h=e.indexOf("="),0<h&&(a=unescape(e.substring(0,h)),V[a]===r&&(V[a]=unescape(e.substring(h+1))));return V}};h.defer=function(a,b){var c;C++;c=n(function(){delete s[c];
e(a)},b||0);s[c]=!0;return c};h.defer.cancel=function(a){return s[a]?(delete s[a],p(a),e(w),!0):!1}}function ed(){this.$get=["$window","$log","$sniffer","$document",function(b,a,c,d){return new dd(b,d,a,c)}]}function fd(){this.$get=function(){function b(b,d){function e(a){a!=n&&(p?p==a&&(p=a.n):p=a,g(a.n,a.p),g(a,n),n=a,n.n=null)}function g(a,b){a!=b&&(a&&(a.p=b),b&&(b.n=a))}if(b in a)throw F("$cacheFactory")("iid",b);var f=0,h=t({},d,{id:b}),m={},k=d&&d.capacity||Number.MAX_VALUE,l={},n=null,p=null;
return a[b]={put:function(a,b){var c=l[a]||(l[a]={key:a});e(c);if(!z(b))return a in m||f++,m[a]=b,f>k&&this.remove(p.key),b},get:function(a){var b=l[a];if(b)return e(b),m[a]},remove:function(a){var b=l[a];b&&(b==n&&(n=b.p),b==p&&(p=b.n),g(b.n,b.p),delete l[a],delete m[a],f--)},removeAll:function(){m={};f=0;l={};n=p=null},destroy:function(){l=h=m=null;delete a[b]},info:function(){return t({},h,{size:f})}}}var a={};b.info=function(){var b={};q(a,function(a,e){b[e]=a.info()});return b};b.get=function(b){return a[b]};
return b}}function gd(){this.$get=["$cacheFactory",function(b){return b("templates")}]}function ic(b,a){var c={},d="Directive",e=/^\s*directive\:\s*([\d\w\-_]+)\s+(.*)$/,g=/(([\d\w\-_]+)(?:\:([^;]+))?;?)/,f=/^(on[a-z]+|formaction)$/;this.directive=function m(a,e){xa(a,"directive");D(a)?(tb(e,"directiveFactory"),c.hasOwnProperty(a)||(c[a]=[],b.factory(a+d,["$injector","$exceptionHandler",function(b,d){var e=[];q(c[a],function(c,g){try{var f=b.invoke(c);L(f)?f={compile:$(f)}:!f.compile&&f.link&&(f.compile=
$(f.link));f.priority=f.priority||0;f.index=g;f.name=f.name||a;f.require=f.require||f.controller&&f.name;f.restrict=f.restrict||"A";e.push(f)}catch(m){d(m)}});return e}])),c[a].push(e)):q(a,Pb(m));return this};this.aHrefSanitizationWhitelist=function(b){return B(b)?(a.aHrefSanitizationWhitelist(b),this):a.aHrefSanitizationWhitelist()};this.imgSrcSanitizationWhitelist=function(b){return B(b)?(a.imgSrcSanitizationWhitelist(b),this):a.imgSrcSanitizationWhitelist()};this.$get=["$injector","$interpolate",
"$exceptionHandler","$http","$templateCache","$parse","$controller","$rootScope","$document","$sce","$animate","$$sanitizeUri",function(a,b,l,n,p,s,C,y,E,u,R,H){function v(a,b,c,d,e){a instanceof A||(a=A(a));q(a,function(b,c){3==b.nodeType&&b.nodeValue.match(/\S+/)&&(a[c]=A(b).wrap("<span></span>").parent()[0])});var g=N(a,b,a,c,d,e);ha(a,"ng-scope");return function(b,c,d){tb(b,"scope");var e=c?Ea.clone.call(a):a;q(d,function(a,b){e.data("$"+b+"Controller",a)});d=0;for(var f=e.length;d<f;d++){var k=
e[d].nodeType;1!==k&&9!==k||e.eq(d).data("$scope",b)}c&&c(e,b);g&&g(b,e,e);return e}}function ha(a,b){try{a.addClass(b)}catch(c){}}function N(a,b,c,d,e,g){function f(a,c,d,e){var g,m,l,s,n,p,I;g=c.length;var C=Array(g);for(n=0;n<g;n++)C[n]=c[n];I=n=0;for(p=k.length;n<p;I++)m=C[I],c=k[n++],g=k[n++],l=A(m),c?(c.scope?(s=a.$new(),l.data("$scope",s)):s=a,(l=c.transclude)||!e&&b?c(g,s,m,d,V(a,l||b)):c(g,s,m,d,e)):g&&g(a,m.childNodes,r,e)}for(var k=[],m,l,s,n,p=0;p<a.length;p++)m=new Eb,l=J(a[p],[],m,0===
p?d:r,e),(g=l.length?ia(l,a[p],m,b,c,null,[],[],g):null)&&g.scope&&ha(A(a[p]),"ng-scope"),m=g&&g.terminal||!(s=a[p].childNodes)||!s.length?null:N(s,g?g.transclude:b),k.push(g,m),n=n||g||m,g=null;return n?f:null}function V(a,b){return function(c,d,e){var g=!1;c||(c=a.$new(),g=c.$$transcluded=!0);d=b(c,d,e);if(g)d.on("$destroy",bb(c,c.$destroy));return d}}function J(a,b,c,d,f){var k=c.$attr,m;switch(a.nodeType){case 1:T(b,ma(Fa(a).toLowerCase()),"E",d,f);var l,s,n;m=a.attributes;for(var p=0,C=m&&m.length;p<
C;p++){var y=!1,R=!1;l=m[p];if(!M||8<=M||l.specified){s=l.name;n=ma(s);W.test(n)&&(s=cb(n.substr(6),"-"));var v=n.replace(/(Start|End)$/,"");n===v+"Start"&&(y=s,R=s.substr(0,s.length-5)+"end",s=s.substr(0,s.length-6));n=ma(s.toLowerCase());k[n]=s;c[n]=l=aa(l.value);ec(a,n)&&(c[n]=!0);S(a,b,l,n);T(b,n,"A",d,f,y,R)}}a=a.className;if(D(a)&&""!==a)for(;m=g.exec(a);)n=ma(m[2]),T(b,n,"C",d,f)&&(c[n]=aa(m[3])),a=a.substr(m.index+m[0].length);break;case 3:F(b,a.nodeValue);break;case 8:try{if(m=e.exec(a.nodeValue))n=
ma(m[1]),T(b,n,"M",d,f)&&(c[n]=aa(m[2]))}catch(E){}}b.sort(z);return b}function ba(a,b,c){var d=[],e=0;if(b&&a.hasAttribute&&a.hasAttribute(b)){do{if(!a)throw ja("uterdir",b,c);1==a.nodeType&&(a.hasAttribute(b)&&e++,a.hasAttribute(c)&&e--);d.push(a);a=a.nextSibling}while(0<e)}else d.push(a);return A(d)}function P(a,b,c){return function(d,e,g,f,m){e=ba(e[0],b,c);return a(d,e,g,f,m)}}function ia(a,c,d,e,g,f,m,n,p){function y(a,b,c,d){if(a){c&&(a=P(a,c,d));a.require=G.require;if(H===G||G.$$isolateScope)a=
jc(a,{isolateScope:!0});m.push(a)}if(b){c&&(b=P(b,c,d));b.require=G.require;if(H===G||G.$$isolateScope)b=jc(b,{isolateScope:!0});n.push(b)}}function R(a,b,c){var d,e="data",g=!1;if(D(a)){for(;"^"==(d=a.charAt(0))||"?"==d;)a=a.substr(1),"^"==d&&(e="inheritedData"),g=g||"?"==d;d=null;c&&"data"===e&&(d=c[a]);d=d||b[e]("$"+a+"Controller");if(!d&&!g)throw ja("ctreq",a,ca);}else K(a)&&(d=[],q(a,function(a){d.push(R(a,b,c))}));return d}function E(a,e,g,f,p){function y(a,b){var c;2>arguments.length&&(b=a,
a=r);z&&(c=ba);return p(a,b,c)}var I,v,N,u,P,J,ba={},gb;I=c===g?d:Sb(d,new Eb(A(g),d.$attr));v=I.$$element;if(H){var T=/^\s*([@=&])(\??)\s*(\w*)\s*$/;f=A(g);J=e.$new(!0);ia&&ia===H.$$originalDirective?f.data("$isolateScope",J):f.data("$isolateScopeNoTemplate",J);ha(f,"ng-isolate-scope");q(H.scope,function(a,c){var d=a.match(T)||[],g=d[3]||c,f="?"==d[2],d=d[1],m,l,n,p;J.$$isolateBindings[c]=d+g;switch(d){case "@":I.$observe(g,function(a){J[c]=a});I.$$observers[g].$$scope=e;I[g]&&(J[c]=b(I[g])(e));
break;case "=":if(f&&!I[g])break;l=s(I[g]);p=l.literal?ua:function(a,b){return a===b};n=l.assign||function(){m=J[c]=l(e);throw ja("nonassign",I[g],H.name);};m=J[c]=l(e);J.$watch(function(){var a=l(e);p(a,J[c])||(p(a,m)?n(e,a=J[c]):J[c]=a);return m=a},null,l.literal);break;case "&":l=s(I[g]);J[c]=function(a){return l(e,a)};break;default:throw ja("iscp",H.name,c,a);}})}gb=p&&y;V&&q(V,function(a){var b={$scope:a===H||a.$$isolateScope?J:e,$element:v,$attrs:I,$transclude:gb},c;P=a.controller;"@"==P&&(P=
I[a.name]);c=C(P,b);ba[a.name]=c;z||v.data("$"+a.name+"Controller",c);a.controllerAs&&(b.$scope[a.controllerAs]=c)});f=0;for(N=m.length;f<N;f++)try{u=m[f],u(u.isolateScope?J:e,v,I,u.require&&R(u.require,v,ba),gb)}catch(G){l(G,ga(v))}f=e;H&&(H.template||null===H.templateUrl)&&(f=J);a&&a(f,g.childNodes,r,p);for(f=n.length-1;0<=f;f--)try{u=n[f],u(u.isolateScope?J:e,v,I,u.require&&R(u.require,v,ba),gb)}catch(B){l(B,ga(v))}}p=p||{};var N=-Number.MAX_VALUE,u,V=p.controllerDirectives,H=p.newIsolateScopeDirective,
ia=p.templateDirective;p=p.nonTlbTranscludeDirective;for(var T=!1,z=!1,t=d.$$element=A(c),G,ca,U,F=e,O,M=0,na=a.length;M<na;M++){G=a[M];var Ua=G.$$start,S=G.$$end;Ua&&(t=ba(c,Ua,S));U=r;if(N>G.priority)break;if(U=G.scope)u=u||G,G.templateUrl||(x("new/isolated scope",H,G,t),X(U)&&(H=G));ca=G.name;!G.templateUrl&&G.controller&&(U=G.controller,V=V||{},x("'"+ca+"' controller",V[ca],G,t),V[ca]=G);if(U=G.transclude)T=!0,G.$$tlb||(x("transclusion",p,G,t),p=G),"element"==U?(z=!0,N=G.priority,U=ba(c,Ua,S),
t=d.$$element=A(Q.createComment(" "+ca+": "+d[ca]+" ")),c=t[0],hb(g,A(va.call(U,0)),c),F=v(U,e,N,f&&f.name,{nonTlbTranscludeDirective:p})):(U=A(zb(c)).contents(),t.empty(),F=v(U,e));if(G.template)if(x("template",ia,G,t),ia=G,U=L(G.template)?G.template(t,d):G.template,U=Y(U),G.replace){f=G;U=A("<div>"+aa(U)+"</div>").contents();c=U[0];if(1!=U.length||1!==c.nodeType)throw ja("tplrt",ca,"");hb(g,t,c);na={$attr:{}};U=J(c,[],na);var W=a.splice(M+1,a.length-(M+1));H&&hc(U);a=a.concat(U).concat(W);B(d,na);
na=a.length}else t.html(U);if(G.templateUrl)x("template",ia,G,t),ia=G,G.replace&&(f=G),E=w(a.splice(M,a.length-M),t,d,g,F,m,n,{controllerDirectives:V,newIsolateScopeDirective:H,templateDirective:ia,nonTlbTranscludeDirective:p}),na=a.length;else if(G.compile)try{O=G.compile(t,d,F),L(O)?y(null,O,Ua,S):O&&y(O.pre,O.post,Ua,S)}catch(Z){l(Z,ga(t))}G.terminal&&(E.terminal=!0,N=Math.max(N,G.priority))}E.scope=u&&!0===u.scope;E.transclude=T&&F;return E}function hc(a){for(var b=0,c=a.length;b<c;b++)a[b]=Rb(a[b],
{$$isolateScope:!0})}function T(b,e,g,f,k,s,n){if(e===k)return null;k=null;if(c.hasOwnProperty(e)){var p;e=a.get(e+d);for(var C=0,y=e.length;C<y;C++)try{p=e[C],(f===r||f>p.priority)&&-1!=p.restrict.indexOf(g)&&(s&&(p=Rb(p,{$$start:s,$$end:n})),b.push(p),k=p)}catch(v){l(v)}}return k}function B(a,b){var c=b.$attr,d=a.$attr,e=a.$$element;q(a,function(d,e){"$"!=e.charAt(0)&&(b[e]&&(d+=("style"===e?";":" ")+b[e]),a.$set(e,d,!0,c[e]))});q(b,function(b,g){"class"==g?(ha(e,b),a["class"]=(a["class"]?a["class"]+
" ":"")+b):"style"==g?(e.attr("style",e.attr("style")+";"+b),a.style=(a.style?a.style+";":"")+b):"$"==g.charAt(0)||a.hasOwnProperty(g)||(a[g]=b,d[g]=c[g])})}function w(a,b,c,d,e,g,f,m){var k=[],l,s,C=b[0],y=a.shift(),v=t({},y,{templateUrl:null,transclude:null,replace:null,$$originalDirective:y}),R=L(y.templateUrl)?y.templateUrl(b,c):y.templateUrl;b.empty();n.get(u.getTrustedResourceUrl(R),{cache:p}).success(function(n){var p,E;n=Y(n);if(y.replace){n=A("<div>"+aa(n)+"</div>").contents();p=n[0];if(1!=
n.length||1!==p.nodeType)throw ja("tplrt",y.name,R);n={$attr:{}};hb(d,b,p);var u=J(p,[],n);X(y.scope)&&hc(u);a=u.concat(a);B(c,n)}else p=C,b.html(n);a.unshift(v);l=ia(a,p,c,e,b,y,g,f,m);q(d,function(a,c){a==p&&(d[c]=b[0])});for(s=N(b[0].childNodes,e);k.length;){n=k.shift();E=k.shift();var H=k.shift(),ha=k.shift(),u=b[0];E!==C&&(u=zb(p),hb(H,A(E),u));E=l.transclude?V(n,l.transclude):ha;l(s,n,u,d,E)}k=null}).error(function(a,b,c,d){throw ja("tpload",d.url);});return function(a,b,c,d,e){k?(k.push(b),
k.push(c),k.push(d),k.push(e)):l(s,b,c,d,e)}}function z(a,b){var c=b.priority-a.priority;return 0!==c?c:a.name!==b.name?a.name<b.name?-1:1:a.index-b.index}function x(a,b,c,d){if(b)throw ja("multidir",b.name,c.name,a,ga(d));}function F(a,c){var d=b(c,!0);d&&a.push({priority:0,compile:$(function(a,b){var c=b.parent(),e=c.data("$binding")||[];e.push(d);ha(c.data("$binding",e),"ng-binding");a.$watch(d,function(a){b[0].nodeValue=a})})})}function O(a,b){if("srcdoc"==b)return u.HTML;var c=Fa(a);if("xlinkHref"==
b||"FORM"==c&&"action"==b||"IMG"!=c&&("src"==b||"ngSrc"==b))return u.RESOURCE_URL}function S(a,c,d,e){var g=b(d,!0);if(g){if("multiple"===e&&"SELECT"===Fa(a))throw ja("selmulti",ga(a));c.push({priority:100,compile:function(){return{pre:function(c,d,m){d=m.$$observers||(m.$$observers={});if(f.test(e))throw ja("nodomevents");if(g=b(m[e],!0,O(a,e)))m[e]=g(c),(d[e]||(d[e]=[])).$$inter=!0,(m.$$observers&&m.$$observers[e].$$scope||c).$watch(g,function(a,b){"class"===e&&a!=b?m.$updateClass(a,b):m.$set(e,
a)})}}}})}}function hb(a,b,c){var d=b[0],e=b.length,g=d.parentNode,f,m;if(a)for(f=0,m=a.length;f<m;f++)if(a[f]==d){a[f++]=c;m=f+e-1;for(var k=a.length;f<k;f++,m++)m<k?a[f]=a[m]:delete a[f];a.length-=e-1;break}g&&g.replaceChild(c,d);a=Q.createDocumentFragment();a.appendChild(d);c[A.expando]=d[A.expando];d=1;for(e=b.length;d<e;d++)g=b[d],A(g).remove(),a.appendChild(g),delete b[d];b[0]=c;b.length=1}function jc(a,b){return t(function(){return a.apply(null,arguments)},a,b)}var Eb=function(a,b){this.$$element=
a;this.$attr=b||{}};Eb.prototype={$normalize:ma,$addClass:function(a){a&&0<a.length&&R.addClass(this.$$element,a)},$removeClass:function(a){a&&0<a.length&&R.removeClass(this.$$element,a)},$updateClass:function(a,b){this.$removeClass(kc(b,a));this.$addClass(kc(a,b))},$set:function(a,b,c,d){var e=ec(this.$$element[0],a);e&&(this.$$element.prop(a,b),d=e);this[a]=b;d?this.$attr[a]=d:(d=this.$attr[a])||(this.$attr[a]=d=cb(a,"-"));e=Fa(this.$$element);if("A"===e&&"href"===a||"IMG"===e&&"src"===a)this[a]=
b=H(b,"src"===a);!1!==c&&(null===b||b===r?this.$$element.removeAttr(d):this.$$element.attr(d,b));(c=this.$$observers)&&q(c[a],function(a){try{a(b)}catch(c){l(c)}})},$observe:function(a,b){var c=this,d=c.$$observers||(c.$$observers={}),e=d[a]||(d[a]=[]);e.push(b);y.$evalAsync(function(){e.$$inter||b(c[a])});return b}};var ca=b.startSymbol(),na=b.endSymbol(),Y="{{"==ca||"}}"==na?Aa:function(a){return a.replace(/\{\{/g,ca).replace(/}}/g,na)},W=/^ngAttr[A-Z]/;return v}]}function ma(b){return Pa(b.replace(hd,
""))}function kc(b,a){var c="",d=b.split(/\s+/),e=a.split(/\s+/),g=0;a:for(;g<d.length;g++){for(var f=d[g],h=0;h<e.length;h++)if(f==e[h])continue a;c+=(0<c.length?" ":"")+f}return c}function id(){var b={},a=/^(\S+)(\s+as\s+(\w+))?$/;this.register=function(a,d){xa(a,"controller");X(a)?t(b,a):b[a]=d};this.$get=["$injector","$window",function(c,d){return function(e,g){var f,h,m;D(e)&&(f=e.match(a),h=f[1],m=f[3],e=b.hasOwnProperty(h)?b[h]:ub(g.$scope,h,!0)||ub(d,h,!0),Oa(e,h,!0));f=c.instantiate(e,g);
if(m){if(!g||"object"!=typeof g.$scope)throw F("$controller")("noscp",h||e.name,m);g.$scope[m]=f}return f}}]}function jd(){this.$get=["$window",function(b){return A(b.document)}]}function kd(){this.$get=["$log",function(b){return function(a,c){b.error.apply(b,arguments)}}]}function lc(b){var a={},c,d,e;if(!b)return a;q(b.split("\n"),function(b){e=b.indexOf(":");c=x(aa(b.substr(0,e)));d=aa(b.substr(e+1));c&&(a[c]=a[c]?a[c]+(", "+d):d)});return a}function mc(b){var a=X(b)?b:r;return function(c){a||
(a=lc(b));return c?a[x(c)]||null:a}}function nc(b,a,c){if(L(c))return c(b,a);q(c,function(c){b=c(b,a)});return b}function ld(){var b=/^\s*(\[|\{[^\{])/,a=/[\}\]]\s*$/,c=/^\)\]\}',?\n/,d={"Content-Type":"application/json;charset=utf-8"},e=this.defaults={transformResponse:[function(d){D(d)&&(d=d.replace(c,""),b.test(d)&&a.test(d)&&(d=Ub(d)));return d}],transformRequest:[function(a){return X(a)&&"[object File]"!==Za.call(a)?pa(a):a}],headers:{common:{Accept:"application/json, text/plain, */*"},post:d,
put:d,patch:d},xsrfCookieName:"XSRF-TOKEN",xsrfHeaderName:"X-XSRF-TOKEN"},g=this.interceptors=[],f=this.responseInterceptors=[];this.$get=["$httpBackend","$browser","$cacheFactory","$rootScope","$q","$injector",function(a,b,c,d,n,p){function s(a){function c(a){var b=t({},a,{data:nc(a.data,a.headers,d.transformResponse)});return 200<=a.status&&300>a.status?b:n.reject(b)}var d={transformRequest:e.transformRequest,transformResponse:e.transformResponse},g=function(a){function b(a){var c;q(a,function(b,
d){L(b)&&(c=b(),null!=c?a[d]=c:delete a[d])})}var c=e.headers,d=t({},a.headers),g,f,c=t({},c.common,c[x(a.method)]);b(c);b(d);a:for(g in c){a=x(g);for(f in d)if(x(f)===a)continue a;d[g]=c[g]}return d}(a);t(d,a);d.headers=g;d.method=Ga(d.method);(a=Fb(d.url)?b.cookies()[d.xsrfCookieName||e.xsrfCookieName]:r)&&(g[d.xsrfHeaderName||e.xsrfHeaderName]=a);var f=[function(a){g=a.headers;var b=nc(a.data,mc(g),a.transformRequest);z(a.data)&&q(g,function(a,b){"content-type"===x(b)&&delete g[b]});z(a.withCredentials)&&
!z(e.withCredentials)&&(a.withCredentials=e.withCredentials);return C(a,b,g).then(c,c)},r],h=n.when(d);for(q(u,function(a){(a.request||a.requestError)&&f.unshift(a.request,a.requestError);(a.response||a.responseError)&&f.push(a.response,a.responseError)});f.length;){a=f.shift();var k=f.shift(),h=h.then(a,k)}h.success=function(a){h.then(function(b){a(b.data,b.status,b.headers,d)});return h};h.error=function(a){h.then(null,function(b){a(b.data,b.status,b.headers,d)});return h};return h}function C(b,
c,g){function f(a,b,c){u&&(200<=a&&300>a?u.put(r,[a,b,lc(c)]):u.remove(r));m(b,a,c);d.$$phase||d.$apply()}function m(a,c,d){c=Math.max(c,0);(200<=c&&300>c?p.resolve:p.reject)({data:a,status:c,headers:mc(d),config:b})}function k(){var a=ab(s.pendingRequests,b);-1!==a&&s.pendingRequests.splice(a,1)}var p=n.defer(),C=p.promise,u,q,r=y(b.url,b.params);s.pendingRequests.push(b);C.then(k,k);(b.cache||e.cache)&&(!1!==b.cache&&"GET"==b.method)&&(u=X(b.cache)?b.cache:X(e.cache)?e.cache:E);if(u)if(q=u.get(r),
B(q)){if(q.then)return q.then(k,k),q;K(q)?m(q[1],q[0],fa(q[2])):m(q,200,{})}else u.put(r,C);z(q)&&a(b.method,r,c,f,g,b.timeout,b.withCredentials,b.responseType);return C}function y(a,b){if(!b)return a;var c=[];Oc(b,function(a,b){null===a||z(a)||(K(a)||(a=[a]),q(a,function(a){X(a)&&(a=pa(a));c.push(wa(b)+"="+wa(a))}))});return a+(-1==a.indexOf("?")?"?":"&")+c.join("&")}var E=c("$http"),u=[];q(g,function(a){u.unshift(D(a)?p.get(a):p.invoke(a))});q(f,function(a,b){var c=D(a)?p.get(a):p.invoke(a);u.splice(b,
0,{response:function(a){return c(n.when(a))},responseError:function(a){return c(n.reject(a))}})});s.pendingRequests=[];(function(a){q(arguments,function(a){s[a]=function(b,c){return s(t(c||{},{method:a,url:b}))}})})("get","delete","head","jsonp");(function(a){q(arguments,function(a){s[a]=function(b,c,d){return s(t(d||{},{method:a,url:b,data:c}))}})})("post","put");s.defaults=e;return s}]}function md(b){return 8>=M&&"patch"===x(b)?new ActiveXObject("Microsoft.XMLHTTP"):new Z.XMLHttpRequest}function nd(){this.$get=
["$browser","$window","$document",function(b,a,c){return od(b,md,b.defer,a.angular.callbacks,c[0])}]}function od(b,a,c,d,e){function g(a,b){var c=e.createElement("script"),d=function(){c.onreadystatechange=c.onload=c.onerror=null;e.body.removeChild(c);b&&b()};c.type="text/javascript";c.src=a;M&&8>=M?c.onreadystatechange=function(){/loaded|complete/.test(c.readyState)&&d()}:c.onload=c.onerror=function(){d()};e.body.appendChild(c);return d}var f=-1;return function(e,m,k,l,n,p,s,C){function y(){u=f;
H&&H();v&&v.abort()}function E(a,d,e,g){var f=qa(m).protocol;r&&c.cancel(r);H=v=null;d="file"==f&&0===d?e?200:404:d;a(1223==d?204:d,e,g);b.$$completeOutstandingRequest(w)}var u;b.$$incOutstandingRequestCount();m=m||b.url();if("jsonp"==x(e)){var R="_"+(d.counter++).toString(36);d[R]=function(a){d[R].data=a};var H=g(m.replace("JSON_CALLBACK","angular.callbacks."+R),function(){d[R].data?E(l,200,d[R].data):E(l,u||-2);delete d[R]})}else{var v=a(e);v.open(e,m,!0);q(n,function(a,b){B(a)&&v.setRequestHeader(b,
a)});v.onreadystatechange=function(){if(v&&4==v.readyState){var a=null,b=null;u!==f&&(a=v.getAllResponseHeaders(),b=v.responseType?v.response:v.responseText);E(l,u||v.status,b,a)}};s&&(v.withCredentials=!0);C&&(v.responseType=C);v.send(k||null)}if(0<p)var r=c(y,p);else p&&p.then&&p.then(y)}}function pd(){var b="{{",a="}}";this.startSymbol=function(a){return a?(b=a,this):b};this.endSymbol=function(b){return b?(a=b,this):a};this.$get=["$parse","$exceptionHandler","$sce",function(c,d,e){function g(g,
k,l){for(var n,p,s=0,C=[],y=g.length,E=!1,u=[];s<y;)-1!=(n=g.indexOf(b,s))&&-1!=(p=g.indexOf(a,n+f))?(s!=n&&C.push(g.substring(s,n)),C.push(s=c(E=g.substring(n+f,p))),s.exp=E,s=p+h,E=!0):(s!=y&&C.push(g.substring(s)),s=y);(y=C.length)||(C.push(""),y=1);if(l&&1<C.length)throw oc("noconcat",g);if(!k||E)return u.length=y,s=function(a){try{for(var b=0,c=y,f;b<c;b++)"function"==typeof(f=C[b])&&(f=f(a),f=l?e.getTrusted(l,f):e.valueOf(f),null===f||z(f)?f="":"string"!=typeof f&&(f=pa(f))),u[b]=f;return u.join("")}catch(h){a=
oc("interr",g,h.toString()),d(a)}},s.exp=g,s.parts=C,s}var f=b.length,h=a.length;g.startSymbol=function(){return b};g.endSymbol=function(){return a};return g}]}function qd(){this.$get=["$rootScope","$window","$q",function(b,a,c){function d(d,f,h,m){var k=a.setInterval,l=a.clearInterval,n=c.defer(),p=n.promise,s=0,C=B(m)&&!m;h=B(h)?h:0;p.then(null,null,d);p.$$intervalId=k(function(){n.notify(s++);0<h&&s>=h&&(n.resolve(s),l(p.$$intervalId),delete e[p.$$intervalId]);C||b.$apply()},f);e[p.$$intervalId]=
n;return p}var e={};d.cancel=function(a){return a&&a.$$intervalId in e?(e[a.$$intervalId].reject("canceled"),clearInterval(a.$$intervalId),delete e[a.$$intervalId],!0):!1};return d}]}function rd(){this.$get=function(){return{id:"en-us",NUMBER_FORMATS:{DECIMAL_SEP:".",GROUP_SEP:",",PATTERNS:[{minInt:1,minFrac:0,maxFrac:3,posPre:"",posSuf:"",negPre:"-",negSuf:"",gSize:3,lgSize:3},{minInt:1,minFrac:2,maxFrac:2,posPre:"\u00a4",posSuf:"",negPre:"(\u00a4",negSuf:")",gSize:3,lgSize:3}],CURRENCY_SYM:"$"},
DATETIME_FORMATS:{MONTH:"January February March April May June July August September October November December".split(" "),SHORTMONTH:"Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec".split(" "),DAY:"Sunday Monday Tuesday Wednesday Thursday Friday Saturday".split(" "),SHORTDAY:"Sun Mon Tue Wed Thu Fri Sat".split(" "),AMPMS:["AM","PM"],medium:"MMM d, y h:mm:ss a","short":"M/d/yy h:mm a",fullDate:"EEEE, MMMM d, y",longDate:"MMMM d, y",mediumDate:"MMM d, y",shortDate:"M/d/yy",mediumTime:"h:mm:ss a",
shortTime:"h:mm a"},pluralCat:function(b){return 1===b?"one":"other"}}}}function pc(b){b=b.split("/");for(var a=b.length;a--;)b[a]=sb(b[a]);return b.join("/")}function qc(b,a,c){b=qa(b,c);a.$$protocol=b.protocol;a.$$host=b.hostname;a.$$port=S(b.port)||sd[b.protocol]||null}function rc(b,a,c){var d="/"!==b.charAt(0);d&&(b="/"+b);b=qa(b,c);a.$$path=decodeURIComponent(d&&"/"===b.pathname.charAt(0)?b.pathname.substring(1):b.pathname);a.$$search=Wb(b.search);a.$$hash=decodeURIComponent(b.hash);a.$$path&&
"/"!=a.$$path.charAt(0)&&(a.$$path="/"+a.$$path)}function oa(b,a){if(0===a.indexOf(b))return a.substr(b.length)}function Va(b){var a=b.indexOf("#");return-1==a?b:b.substr(0,a)}function Gb(b){return b.substr(0,Va(b).lastIndexOf("/")+1)}function sc(b,a){this.$$html5=!0;a=a||"";var c=Gb(b);qc(b,this,b);this.$$parse=function(a){var e=oa(c,a);if(!D(e))throw Hb("ipthprfx",a,c);rc(e,this,b);this.$$path||(this.$$path="/");this.$$compose()};this.$$compose=function(){var a=Xb(this.$$search),b=this.$$hash?"#"+
sb(this.$$hash):"";this.$$url=pc(this.$$path)+(a?"?"+a:"")+b;this.$$absUrl=c+this.$$url.substr(1)};this.$$rewrite=function(d){var e;if((e=oa(b,d))!==r)return d=e,(e=oa(a,e))!==r?c+(oa("/",e)||e):b+d;if((e=oa(c,d))!==r)return c+e;if(c==d+"/")return c}}function Ib(b,a){var c=Gb(b);qc(b,this,b);this.$$parse=function(d){var e=oa(b,d)||oa(c,d),e="#"==e.charAt(0)?oa(a,e):this.$$html5?e:"";if(!D(e))throw Hb("ihshprfx",d,a);rc(e,this,b);d=this.$$path;var g=/^\/?.*?:(\/.*)/;0===e.indexOf(b)&&(e=e.replace(b,
""));g.exec(e)||(d=(e=g.exec(d))?e[1]:d);this.$$path=d;this.$$compose()};this.$$compose=function(){var c=Xb(this.$$search),e=this.$$hash?"#"+sb(this.$$hash):"";this.$$url=pc(this.$$path)+(c?"?"+c:"")+e;this.$$absUrl=b+(this.$$url?a+this.$$url:"")};this.$$rewrite=function(a){if(Va(b)==Va(a))return a}}function tc(b,a){this.$$html5=!0;Ib.apply(this,arguments);var c=Gb(b);this.$$rewrite=function(d){var e;if(b==Va(d))return d;if(e=oa(c,d))return b+a+e;if(c===d+"/")return c}}function ib(b){return function(){return this[b]}}
function uc(b,a){return function(c){if(z(c))return this[b];this[b]=a(c);this.$$compose();return this}}function td(){var b="",a=!1;this.hashPrefix=function(a){return B(a)?(b=a,this):b};this.html5Mode=function(b){return B(b)?(a=b,this):a};this.$get=["$rootScope","$browser","$sniffer","$rootElement",function(c,d,e,g){function f(a){c.$broadcast("$locationChangeSuccess",h.absUrl(),a)}var h,m=d.baseHref(),k=d.url();a?(m=k.substring(0,k.indexOf("/",k.indexOf("//")+2))+(m||"/"),e=e.history?sc:tc):(m=Va(k),
e=Ib);h=new e(m,"#"+b);h.$$parse(h.$$rewrite(k));g.on("click",function(a){if(!a.ctrlKey&&!a.metaKey&&2!=a.which){for(var b=A(a.target);"a"!==x(b[0].nodeName);)if(b[0]===g[0]||!(b=b.parent())[0])return;var e=b.prop("href");X(e)&&"[object SVGAnimatedString]"===e.toString()&&(e=qa(e.animVal).href);var f=h.$$rewrite(e);e&&(!b.attr("target")&&f&&!a.isDefaultPrevented())&&(a.preventDefault(),f!=d.url()&&(h.$$parse(f),c.$apply(),Z.angular["ff-684208-preventDefault"]=!0))}});h.absUrl()!=k&&d.url(h.absUrl(),
!0);d.onUrlChange(function(a){h.absUrl()!=a&&(c.$evalAsync(function(){var b=h.absUrl();h.$$parse(a);c.$broadcast("$locationChangeStart",a,b).defaultPrevented?(h.$$parse(b),d.url(b)):f(b)}),c.$$phase||c.$digest())});var l=0;c.$watch(function(){var a=d.url(),b=h.$$replace;l&&a==h.absUrl()||(l++,c.$evalAsync(function(){c.$broadcast("$locationChangeStart",h.absUrl(),a).defaultPrevented?h.$$parse(a):(d.url(h.absUrl(),b),f(a))}));h.$$replace=!1;return l});return h}]}function ud(){var b=!0,a=this;this.debugEnabled=
function(a){return B(a)?(b=a,this):b};this.$get=["$window",function(c){function d(a){a instanceof Error&&(a.stack?a=a.message&&-1===a.stack.indexOf(a.message)?"Error: "+a.message+"\n"+a.stack:a.stack:a.sourceURL&&(a=a.message+"\n"+a.sourceURL+":"+a.line));return a}function e(a){var b=c.console||{},e=b[a]||b.log||w;a=!1;try{a=!!e.apply}catch(m){}return a?function(){var a=[];q(arguments,function(b){a.push(d(b))});return e.apply(b,a)}:function(a,b){e(a,null==b?"":b)}}return{log:e("log"),info:e("info"),
warn:e("warn"),error:e("error"),debug:function(){var c=e("debug");return function(){b&&c.apply(a,arguments)}}()}}]}function da(b,a){if("constructor"===b)throw ya("isecfld",a);return b}function Wa(b,a){if(b){if(b.constructor===b)throw ya("isecfn",a);if(b.document&&b.location&&b.alert&&b.setInterval)throw ya("isecwindow",a);if(b.children&&(b.nodeName||b.on&&b.find))throw ya("isecdom",a);}return b}function jb(b,a,c,d,e){e=e||{};a=a.split(".");for(var g,f=0;1<a.length;f++){g=da(a.shift(),d);var h=b[g];
h||(h={},b[g]=h);b=h;b.then&&e.unwrapPromises&&(ra(d),"$$v"in b||function(a){a.then(function(b){a.$$v=b})}(b),b.$$v===r&&(b.$$v={}),b=b.$$v)}g=da(a.shift(),d);return b[g]=c}function vc(b,a,c,d,e,g,f){da(b,g);da(a,g);da(c,g);da(d,g);da(e,g);return f.unwrapPromises?function(f,m){var k=m&&m.hasOwnProperty(b)?m:f,l;if(null==k)return k;(k=k[b])&&k.then&&(ra(g),"$$v"in k||(l=k,l.$$v=r,l.then(function(a){l.$$v=a})),k=k.$$v);if(null==k)return a?r:k;(k=k[a])&&k.then&&(ra(g),"$$v"in k||(l=k,l.$$v=r,l.then(function(a){l.$$v=
a})),k=k.$$v);if(null==k)return c?r:k;(k=k[c])&&k.then&&(ra(g),"$$v"in k||(l=k,l.$$v=r,l.then(function(a){l.$$v=a})),k=k.$$v);if(null==k)return d?r:k;(k=k[d])&&k.then&&(ra(g),"$$v"in k||(l=k,l.$$v=r,l.then(function(a){l.$$v=a})),k=k.$$v);if(null==k)return e?r:k;(k=k[e])&&k.then&&(ra(g),"$$v"in k||(l=k,l.$$v=r,l.then(function(a){l.$$v=a})),k=k.$$v);return k}:function(g,f){var k=f&&f.hasOwnProperty(b)?f:g;if(null==k)return k;k=k[b];if(null==k)return a?r:k;k=k[a];if(null==k)return c?r:k;k=k[c];if(null==
k)return d?r:k;k=k[d];return null==k?e?r:k:k=k[e]}}function vd(b,a){da(b,a);return function(a,d){return null==a?r:(d&&d.hasOwnProperty(b)?d:a)[b]}}function wd(b,a,c){da(b,c);da(a,c);return function(c,e){if(null==c)return r;c=(e&&e.hasOwnProperty(b)?e:c)[b];return null==c?r:c[a]}}function wc(b,a,c){if(Jb.hasOwnProperty(b))return Jb[b];var d=b.split("."),e=d.length,g;if(a.unwrapPromises||1!==e)if(a.unwrapPromises||2!==e)if(a.csp)g=6>e?vc(d[0],d[1],d[2],d[3],d[4],c,a):function(b,g){var f=0,h;do h=vc(d[f++],
d[f++],d[f++],d[f++],d[f++],c,a)(b,g),g=r,b=h;while(f<e);return h};else{var f="var p;\n";q(d,function(b,d){da(b,c);f+="if(s == null) return undefined;\ns="+(d?"s":'((k&&k.hasOwnProperty("'+b+'"))?k:s)')+'["'+b+'"];\n'+(a.unwrapPromises?'if (s && s.then) {\n pw("'+c.replace(/(["\r\n])/g,"\\$1")+'");\n if (!("$$v" in s)) {\n p=s;\n p.$$v = undefined;\n p.then(function(v) {p.$$v=v;});\n}\n s=s.$$v\n}\n':"")});var f=f+"return s;",h=new Function("s","k","pw",f);h.toString=$(f);g=a.unwrapPromises?function(a,
b){return h(a,b,ra)}:h}else g=wd(d[0],d[1],c);else g=vd(d[0],c);"hasOwnProperty"!==b&&(Jb[b]=g);return g}function xd(){var b={},a={csp:!1,unwrapPromises:!1,logPromiseWarnings:!0};this.unwrapPromises=function(b){return B(b)?(a.unwrapPromises=!!b,this):a.unwrapPromises};this.logPromiseWarnings=function(b){return B(b)?(a.logPromiseWarnings=b,this):a.logPromiseWarnings};this.$get=["$filter","$sniffer","$log",function(c,d,e){a.csp=d.csp;ra=function(b){a.logPromiseWarnings&&!xc.hasOwnProperty(b)&&(xc[b]=
!0,e.warn("[$parse] Promise found in the expression `"+b+"`. Automatic unwrapping of promises in Angular expressions is deprecated."))};return function(d){var e;switch(typeof d){case "string":if(b.hasOwnProperty(d))return b[d];e=new Kb(a);e=(new Xa(e,c,a)).parse(d,!1);"hasOwnProperty"!==d&&(b[d]=e);return e;case "function":return d;default:return w}}}]}function yd(){this.$get=["$rootScope","$exceptionHandler",function(b,a){return zd(function(a){b.$evalAsync(a)},a)}]}function zd(b,a){function c(a){return a}
function d(a){return f(a)}var e=function(){var h=[],m,k;return k={resolve:function(a){if(h){var c=h;h=r;m=g(a);c.length&&b(function(){for(var a,b=0,d=c.length;b<d;b++)a=c[b],m.then(a[0],a[1],a[2])})}},reject:function(a){k.resolve(f(a))},notify:function(a){if(h){var c=h;h.length&&b(function(){for(var b,d=0,e=c.length;d<e;d++)b=c[d],b[2](a)})}},promise:{then:function(b,g,f){var k=e(),C=function(d){try{k.resolve((L(b)?b:c)(d))}catch(e){k.reject(e),a(e)}},y=function(b){try{k.resolve((L(g)?g:d)(b))}catch(c){k.reject(c),
a(c)}},E=function(b){try{k.notify((L(f)?f:c)(b))}catch(d){a(d)}};h?h.push([C,y,E]):m.then(C,y,E);return k.promise},"catch":function(a){return this.then(null,a)},"finally":function(a){function b(a,c){var d=e();c?d.resolve(a):d.reject(a);return d.promise}function d(e,g){var f=null;try{f=(a||c)()}catch(h){return b(h,!1)}return f&&L(f.then)?f.then(function(){return b(e,g)},function(a){return b(a,!1)}):b(e,g)}return this.then(function(a){return d(a,!0)},function(a){return d(a,!1)})}}}},g=function(a){return a&&
L(a.then)?a:{then:function(c){var d=e();b(function(){d.resolve(c(a))});return d.promise}}},f=function(c){return{then:function(g,f){var l=e();b(function(){try{l.resolve((L(f)?f:d)(c))}catch(b){l.reject(b),a(b)}});return l.promise}}};return{defer:e,reject:f,when:function(h,m,k,l){var n=e(),p,s=function(b){try{return(L(m)?m:c)(b)}catch(d){return a(d),f(d)}},C=function(b){try{return(L(k)?k:d)(b)}catch(c){return a(c),f(c)}},y=function(b){try{return(L(l)?l:c)(b)}catch(d){a(d)}};b(function(){g(h).then(function(a){p||
(p=!0,n.resolve(g(a).then(s,C,y)))},function(a){p||(p=!0,n.resolve(C(a)))},function(a){p||n.notify(y(a))})});return n.promise},all:function(a){var b=e(),c=0,d=K(a)?[]:{};q(a,function(a,e){c++;g(a).then(function(a){d.hasOwnProperty(e)||(d[e]=a,--c||b.resolve(d))},function(a){d.hasOwnProperty(e)||b.reject(a)})});0===c&&b.resolve(d);return b.promise}}}function Ad(){var b=10,a=F("$rootScope"),c=null;this.digestTtl=function(a){arguments.length&&(b=a);return b};this.$get=["$injector","$exceptionHandler",
"$parse","$browser",function(d,e,g,f){function h(){this.$id=Ya();this.$$phase=this.$parent=this.$$watchers=this.$$nextSibling=this.$$prevSibling=this.$$childHead=this.$$childTail=null;this["this"]=this.$root=this;this.$$destroyed=!1;this.$$asyncQueue=[];this.$$postDigestQueue=[];this.$$listeners={};this.$$listenerCount={};this.$$isolateBindings={}}function m(b){if(p.$$phase)throw a("inprog",p.$$phase);p.$$phase=b}function k(a,b){var c=g(a);Oa(c,b);return c}function l(a,b,c){do a.$$listenerCount[c]-=
b,0===a.$$listenerCount[c]&&delete a.$$listenerCount[c];while(a=a.$parent)}function n(){}h.prototype={constructor:h,$new:function(a){a?(a=new h,a.$root=this.$root,a.$$asyncQueue=this.$$asyncQueue,a.$$postDigestQueue=this.$$postDigestQueue):(a=function(){},a.prototype=this,a=new a,a.$id=Ya());a["this"]=a;a.$$listeners={};a.$$listenerCount={};a.$parent=this;a.$$watchers=a.$$nextSibling=a.$$childHead=a.$$childTail=null;a.$$prevSibling=this.$$childTail;this.$$childHead?this.$$childTail=this.$$childTail.$$nextSibling=
a:this.$$childHead=this.$$childTail=a;return a},$watch:function(a,b,d){var e=k(a,"watch"),g=this.$$watchers,f={fn:b,last:n,get:e,exp:a,eq:!!d};c=null;if(!L(b)){var h=k(b||w,"listener");f.fn=function(a,b,c){h(c)}}if("string"==typeof a&&e.constant){var m=f.fn;f.fn=function(a,b,c){m.call(this,a,b,c);Ka(g,f)}}g||(g=this.$$watchers=[]);g.unshift(f);return function(){Ka(g,f);c=null}},$watchCollection:function(a,b){var c=this,d,e,f=0,h=g(a),m=[],k={},l=0;return this.$watch(function(){e=h(c);var a,b;if(X(e))if(qb(e))for(d!==
m&&(d=m,l=d.length=0,f++),a=e.length,l!==a&&(f++,d.length=l=a),b=0;b<a;b++)d[b]!==e[b]&&(f++,d[b]=e[b]);else{d!==k&&(d=k={},l=0,f++);a=0;for(b in e)e.hasOwnProperty(b)&&(a++,d.hasOwnProperty(b)?d[b]!==e[b]&&(f++,d[b]=e[b]):(l++,d[b]=e[b],f++));if(l>a)for(b in f++,d)d.hasOwnProperty(b)&&!e.hasOwnProperty(b)&&(l--,delete d[b])}else d!==e&&(d=e,f++);return f},function(){b(e,d,c)})},$digest:function(){var d,f,g,h,k=this.$$asyncQueue,l=this.$$postDigestQueue,q,v,r=b,N,V=[],J,A,P;m("$digest");c=null;do{v=
!1;for(N=this;k.length;){try{P=k.shift(),P.scope.$eval(P.expression)}catch(B){p.$$phase=null,e(B)}c=null}a:do{if(h=N.$$watchers)for(q=h.length;q--;)try{if(d=h[q])if((f=d.get(N))!==(g=d.last)&&!(d.eq?ua(f,g):"number"==typeof f&&"number"==typeof g&&isNaN(f)&&isNaN(g)))v=!0,c=d,d.last=d.eq?fa(f):f,d.fn(f,g===n?f:g,N),5>r&&(J=4-r,V[J]||(V[J]=[]),A=L(d.exp)?"fn: "+(d.exp.name||d.exp.toString()):d.exp,A+="; newVal: "+pa(f)+"; oldVal: "+pa(g),V[J].push(A));else if(d===c){v=!1;break a}}catch(t){p.$$phase=
null,e(t)}if(!(h=N.$$childHead||N!==this&&N.$$nextSibling))for(;N!==this&&!(h=N.$$nextSibling);)N=N.$parent}while(N=h);if(v&&!r--)throw p.$$phase=null,a("infdig",b,pa(V));}while(v||k.length);for(p.$$phase=null;l.length;)try{l.shift()()}catch(z){e(z)}},$destroy:function(){if(!this.$$destroyed){var a=this.$parent;this.$broadcast("$destroy");this.$$destroyed=!0;this!==p&&(q(this.$$listenerCount,bb(null,l,this)),a.$$childHead==this&&(a.$$childHead=this.$$nextSibling),a.$$childTail==this&&(a.$$childTail=
this.$$prevSibling),this.$$prevSibling&&(this.$$prevSibling.$$nextSibling=this.$$nextSibling),this.$$nextSibling&&(this.$$nextSibling.$$prevSibling=this.$$prevSibling),this.$parent=this.$$nextSibling=this.$$prevSibling=this.$$childHead=this.$$childTail=null)}},$eval:function(a,b){return g(a)(this,b)},$evalAsync:function(a){p.$$phase||p.$$asyncQueue.length||f.defer(function(){p.$$asyncQueue.length&&p.$digest()});this.$$asyncQueue.push({scope:this,expression:a})},$$postDigest:function(a){this.$$postDigestQueue.push(a)},
$apply:function(a){try{return m("$apply"),this.$eval(a)}catch(b){e(b)}finally{p.$$phase=null;try{p.$digest()}catch(c){throw e(c),c;}}},$on:function(a,b){var c=this.$$listeners[a];c||(this.$$listeners[a]=c=[]);c.push(b);var d=this;do d.$$listenerCount[a]||(d.$$listenerCount[a]=0),d.$$listenerCount[a]++;while(d=d.$parent);var e=this;return function(){c[ab(c,b)]=null;l(e,1,a)}},$emit:function(a,b){var c=[],d,f=this,g=!1,h={name:a,targetScope:f,stopPropagation:function(){g=!0},preventDefault:function(){h.defaultPrevented=
!0},defaultPrevented:!1},m=[h].concat(va.call(arguments,1)),k,l;do{d=f.$$listeners[a]||c;h.currentScope=f;k=0;for(l=d.length;k<l;k++)if(d[k])try{d[k].apply(null,m)}catch(p){e(p)}else d.splice(k,1),k--,l--;if(g)break;f=f.$parent}while(f);return h},$broadcast:function(a,b){for(var c=this,d=this,f={name:a,targetScope:this,preventDefault:function(){f.defaultPrevented=!0},defaultPrevented:!1},g=[f].concat(va.call(arguments,1)),h,k;c=d;){f.currentScope=c;d=c.$$listeners[a]||[];h=0;for(k=d.length;h<k;h++)if(d[h])try{d[h].apply(null,
g)}catch(m){e(m)}else d.splice(h,1),h--,k--;if(!(d=c.$$listenerCount[a]&&c.$$childHead||c!==this&&c.$$nextSibling))for(;c!==this&&!(d=c.$$nextSibling);)c=c.$parent}return f}};var p=new h;return p}]}function Bd(){var b=/^\s*(https?|ftp|mailto|tel|file):/,a=/^\s*(https?|ftp|file):|data:image\//;this.aHrefSanitizationWhitelist=function(a){return B(a)?(b=a,this):b};this.imgSrcSanitizationWhitelist=function(b){return B(b)?(a=b,this):a};this.$get=function(){return function(c,d){var e=d?a:b,g;if(!M||8<=
M)if(g=qa(c).href,""!==g&&!g.match(e))return"unsafe:"+g;return c}}}function Cd(b){if("self"===b)return b;if(D(b)){if(-1<b.indexOf("***"))throw sa("iwcard",b);b=b.replace(/([-()\[\]{}+?*.$\^|,:#<!\\])/g,"\\$1").replace(/\x08/g,"\\x08").replace("\\*\\*",".*").replace("\\*","[^:/.?&;]*");return RegExp("^"+b+"$")}if($a(b))return RegExp("^"+b.source+"$");throw sa("imatcher");}function yc(b){var a=[];B(b)&&q(b,function(b){a.push(Cd(b))});return a}function Dd(){this.SCE_CONTEXTS=ea;var b=["self"],a=[];this.resourceUrlWhitelist=
function(a){arguments.length&&(b=yc(a));return b};this.resourceUrlBlacklist=function(b){arguments.length&&(a=yc(b));return a};this.$get=["$injector",function(c){function d(a){var b=function(a){this.$$unwrapTrustedValue=function(){return a}};a&&(b.prototype=new a);b.prototype.valueOf=function(){return this.$$unwrapTrustedValue()};b.prototype.toString=function(){return this.$$unwrapTrustedValue().toString()};return b}var e=function(a){throw sa("unsafe");};c.has("$sanitize")&&(e=c.get("$sanitize"));
var g=d(),f={};f[ea.HTML]=d(g);f[ea.CSS]=d(g);f[ea.URL]=d(g);f[ea.JS]=d(g);f[ea.RESOURCE_URL]=d(f[ea.URL]);return{trustAs:function(a,b){var c=f.hasOwnProperty(a)?f[a]:null;if(!c)throw sa("icontext",a,b);if(null===b||b===r||""===b)return b;if("string"!==typeof b)throw sa("itype",a);return new c(b)},getTrusted:function(c,d){if(null===d||d===r||""===d)return d;var g=f.hasOwnProperty(c)?f[c]:null;if(g&&d instanceof g)return d.$$unwrapTrustedValue();if(c===ea.RESOURCE_URL){var g=qa(d.toString()),l,n,p=
!1;l=0;for(n=b.length;l<n;l++)if("self"===b[l]?Fb(g):b[l].exec(g.href)){p=!0;break}if(p)for(l=0,n=a.length;l<n;l++)if("self"===a[l]?Fb(g):a[l].exec(g.href)){p=!1;break}if(p)return d;throw sa("insecurl",d.toString());}if(c===ea.HTML)return e(d);throw sa("unsafe");},valueOf:function(a){return a instanceof g?a.$$unwrapTrustedValue():a}}}]}function Ed(){var b=!0;this.enabled=function(a){arguments.length&&(b=!!a);return b};this.$get=["$parse","$sniffer","$sceDelegate",function(a,c,d){if(b&&c.msie&&8>c.msieDocumentMode)throw sa("iequirks");
var e=fa(ea);e.isEnabled=function(){return b};e.trustAs=d.trustAs;e.getTrusted=d.getTrusted;e.valueOf=d.valueOf;b||(e.trustAs=e.getTrusted=function(a,b){return b},e.valueOf=Aa);e.parseAs=function(b,c){var d=a(c);return d.literal&&d.constant?d:function(a,c){return e.getTrusted(b,d(a,c))}};var g=e.parseAs,f=e.getTrusted,h=e.trustAs;q(ea,function(a,b){var c=x(b);e[Pa("parse_as_"+c)]=function(b){return g(a,b)};e[Pa("get_trusted_"+c)]=function(b){return f(a,b)};e[Pa("trust_as_"+c)]=function(b){return h(a,
b)}});return e}]}function Fd(){this.$get=["$window","$document",function(b,a){var c={},d=S((/android (\d+)/.exec(x((b.navigator||{}).userAgent))||[])[1]),e=/Boxee/i.test((b.navigator||{}).userAgent),g=a[0]||{},f=g.documentMode,h,m=/^(Moz|webkit|O|ms)(?=[A-Z])/,k=g.body&&g.body.style,l=!1,n=!1;if(k){for(var p in k)if(l=m.exec(p)){h=l[0];h=h.substr(0,1).toUpperCase()+h.substr(1);break}h||(h="WebkitOpacity"in k&&"webkit");l=!!("transition"in k||h+"Transition"in k);n=!!("animation"in k||h+"Animation"in
k);!d||l&&n||(l=D(g.body.style.webkitTransition),n=D(g.body.style.webkitAnimation))}return{history:!(!b.history||!b.history.pushState||4>d||e),hashchange:"onhashchange"in b&&(!f||7<f),hasEvent:function(a){if("input"==a&&9==M)return!1;if(z(c[a])){var b=g.createElement("div");c[a]="on"+a in b}return c[a]},csp:Tb(),vendorPrefix:h,transitions:l,animations:n,android:d,msie:M,msieDocumentMode:f}}]}function Gd(){this.$get=["$rootScope","$browser","$q","$exceptionHandler",function(b,a,c,d){function e(e,h,
m){var k=c.defer(),l=k.promise,n=B(m)&&!m;h=a.defer(function(){try{k.resolve(e())}catch(a){k.reject(a),d(a)}finally{delete g[l.$$timeoutId]}n||b.$apply()},h);l.$$timeoutId=h;g[h]=k;return l}var g={};e.cancel=function(b){return b&&b.$$timeoutId in g?(g[b.$$timeoutId].reject("canceled"),delete g[b.$$timeoutId],a.defer.cancel(b.$$timeoutId)):!1};return e}]}function qa(b,a){var c=b;M&&(Y.setAttribute("href",c),c=Y.href);Y.setAttribute("href",c);return{href:Y.href,protocol:Y.protocol?Y.protocol.replace(/:$/,
""):"",host:Y.host,search:Y.search?Y.search.replace(/^\?/,""):"",hash:Y.hash?Y.hash.replace(/^#/,""):"",hostname:Y.hostname,port:Y.port,pathname:"/"===Y.pathname.charAt(0)?Y.pathname:"/"+Y.pathname}}function Fb(b){b=D(b)?qa(b):b;return b.protocol===zc.protocol&&b.host===zc.host}function Hd(){this.$get=$(Z)}function Ac(b){function a(d,e){if(X(d)){var g={};q(d,function(b,c){g[c]=a(c,b)});return g}return b.factory(d+c,e)}var c="Filter";this.register=a;this.$get=["$injector",function(a){return function(b){return a.get(b+
c)}}];a("currency",Bc);a("date",Cc);a("filter",Id);a("json",Jd);a("limitTo",Kd);a("lowercase",Ld);a("number",Dc);a("orderBy",Ec);a("uppercase",Md)}function Id(){return function(b,a,c){if(!K(b))return b;var d=typeof c,e=[];e.check=function(a){for(var b=0;b<e.length;b++)if(!e[b](a))return!1;return!0};"function"!==d&&(c="boolean"===d&&c?function(a,b){return Na.equals(a,b)}:function(a,b){b=(""+b).toLowerCase();return-1<(""+a).toLowerCase().indexOf(b)});var g=function(a,b){if("string"==typeof b&&"!"===
b.charAt(0))return!g(a,b.substr(1));switch(typeof a){case "boolean":case "number":case "string":return c(a,b);case "object":switch(typeof b){case "object":return c(a,b);default:for(var d in a)if("$"!==d.charAt(0)&&g(a[d],b))return!0}return!1;case "array":for(d=0;d<a.length;d++)if(g(a[d],b))return!0;return!1;default:return!1}};switch(typeof a){case "boolean":case "number":case "string":a={$:a};case "object":for(var f in a)"$"==f?function(){if(a[f]){var b=f;e.push(function(c){return g(c,a[b])})}}():
function(){if("undefined"!=typeof a[f]){var b=f;e.push(function(c){return g(ub(c,b),a[b])})}}();break;case "function":e.push(a);break;default:return b}for(var d=[],h=0;h<b.length;h++){var m=b[h];e.check(m)&&d.push(m)}return d}}function Bc(b){var a=b.NUMBER_FORMATS;return function(b,d){z(d)&&(d=a.CURRENCY_SYM);return Fc(b,a.PATTERNS[1],a.GROUP_SEP,a.DECIMAL_SEP,2).replace(/\u00A4/g,d)}}function Dc(b){var a=b.NUMBER_FORMATS;return function(b,d){return Fc(b,a.PATTERNS[0],a.GROUP_SEP,a.DECIMAL_SEP,d)}}
function Fc(b,a,c,d,e){if(isNaN(b)||!isFinite(b))return"";var g=0>b;b=Math.abs(b);var f=b+"",h="",m=[],k=!1;if(-1!==f.indexOf("e")){var l=f.match(/([\d\.]+)e(-?)(\d+)/);l&&"-"==l[2]&&l[3]>e+1?f="0":(h=f,k=!0)}if(k)0<e&&(-1<b&&1>b)&&(h=b.toFixed(e));else{f=(f.split(Gc)[1]||"").length;z(e)&&(e=Math.min(Math.max(a.minFrac,f),a.maxFrac));f=Math.pow(10,e);b=Math.round(b*f)/f;b=(""+b).split(Gc);f=b[0];b=b[1]||"";var l=0,n=a.lgSize,p=a.gSize;if(f.length>=n+p)for(l=f.length-n,k=0;k<l;k++)0===(l-k)%p&&0!==
k&&(h+=c),h+=f.charAt(k);for(k=l;k<f.length;k++)0===(f.length-k)%n&&0!==k&&(h+=c),h+=f.charAt(k);for(;b.length<e;)b+="0";e&&"0"!==e&&(h+=d+b.substr(0,e))}m.push(g?a.negPre:a.posPre);m.push(h);m.push(g?a.negSuf:a.posSuf);return m.join("")}function Lb(b,a,c){var d="";0>b&&(d="-",b=-b);for(b=""+b;b.length<a;)b="0"+b;c&&(b=b.substr(b.length-a));return d+b}function W(b,a,c,d){c=c||0;return function(e){e=e["get"+b]();if(0<c||e>-c)e+=c;0===e&&-12==c&&(e=12);return Lb(e,a,d)}}function kb(b,a){return function(c,
d){var e=c["get"+b](),g=Ga(a?"SHORT"+b:b);return d[g][e]}}function Cc(b){function a(a){var b;if(b=a.match(c)){a=new Date(0);var g=0,f=0,h=b[8]?a.setUTCFullYear:a.setFullYear,m=b[8]?a.setUTCHours:a.setHours;b[9]&&(g=S(b[9]+b[10]),f=S(b[9]+b[11]));h.call(a,S(b[1]),S(b[2])-1,S(b[3]));g=S(b[4]||0)-g;f=S(b[5]||0)-f;h=S(b[6]||0);b=Math.round(1E3*parseFloat("0."+(b[7]||0)));m.call(a,g,f,h,b)}return a}var c=/^(\d{4})-?(\d\d)-?(\d\d)(?:T(\d\d)(?::?(\d\d)(?::?(\d\d)(?:\.(\d+))?)?)?(Z|([+-])(\d\d):?(\d\d))?)?$/;
return function(c,e){var g="",f=[],h,m;e=e||"mediumDate";e=b.DATETIME_FORMATS[e]||e;D(c)&&(c=Nd.test(c)?S(c):a(c));rb(c)&&(c=new Date(c));if(!Ja(c))return c;for(;e;)(m=Od.exec(e))?(f=f.concat(va.call(m,1)),e=f.pop()):(f.push(e),e=null);q(f,function(a){h=Pd[a];g+=h?h(c,b.DATETIME_FORMATS):a.replace(/(^'|'$)/g,"").replace(/''/g,"'")});return g}}function Jd(){return function(b){return pa(b,!0)}}function Kd(){return function(b,a){if(!K(b)&&!D(b))return b;a=S(a);if(D(b))return a?0<=a?b.slice(0,a):b.slice(a,
b.length):"";var c=[],d,e;a>b.length?a=b.length:a<-b.length&&(a=-b.length);0<a?(d=0,e=a):(d=b.length+a,e=b.length);for(;d<e;d++)c.push(b[d]);return c}}function Ec(b){return function(a,c,d){function e(a,b){return Ma(b)?function(b,c){return a(c,b)}:a}if(!K(a)||!c)return a;c=K(c)?c:[c];c=Qc(c,function(a){var c=!1,d=a||Aa;if(D(a)){if("+"==a.charAt(0)||"-"==a.charAt(0))c="-"==a.charAt(0),a=a.substring(1);d=b(a)}return e(function(a,b){var c;c=d(a);var e=d(b),f=typeof c,g=typeof e;f==g?("string"==f&&(c=
c.toLowerCase(),e=e.toLowerCase()),c=c===e?0:c<e?-1:1):c=f<g?-1:1;return c},c)});for(var g=[],f=0;f<a.length;f++)g.push(a[f]);return g.sort(e(function(a,b){for(var d=0;d<c.length;d++){var e=c[d](a,b);if(0!==e)return e}return 0},d))}}function ta(b){L(b)&&(b={link:b});b.restrict=b.restrict||"AC";return $(b)}function Hc(b,a){function c(a,c){c=c?"-"+cb(c,"-"):"";b.removeClass((a?lb:mb)+c).addClass((a?mb:lb)+c)}var d=this,e=b.parent().controller("form")||nb,g=0,f=d.$error={},h=[];d.$name=a.name||a.ngForm;
d.$dirty=!1;d.$pristine=!0;d.$valid=!0;d.$invalid=!1;e.$addControl(d);b.addClass(Ha);c(!0);d.$addControl=function(a){xa(a.$name,"input");h.push(a);a.$name&&(d[a.$name]=a)};d.$removeControl=function(a){a.$name&&d[a.$name]===a&&delete d[a.$name];q(f,function(b,c){d.$setValidity(c,!0,a)});Ka(h,a)};d.$setValidity=function(a,b,h){var n=f[a];if(b)n&&(Ka(n,h),n.length||(g--,g||(c(b),d.$valid=!0,d.$invalid=!1),f[a]=!1,c(!0,a),e.$setValidity(a,!0,d)));else{g||c(b);if(n){if(-1!=ab(n,h))return}else f[a]=n=[],
g++,c(!1,a),e.$setValidity(a,!1,d);n.push(h);d.$valid=!1;d.$invalid=!0}};d.$setDirty=function(){b.removeClass(Ha).addClass(ob);d.$dirty=!0;d.$pristine=!1;e.$setDirty()};d.$setPristine=function(){b.removeClass(ob).addClass(Ha);d.$dirty=!1;d.$pristine=!0;q(h,function(a){a.$setPristine()})}}function pb(b,a,c,d,e,g){if(!e.android){var f=!1;a.on("compositionstart",function(a){f=!0});a.on("compositionend",function(){f=!1})}var h=function(){if(!f){var e=a.val();Ma(c.ngTrim||"T")&&(e=aa(e));d.$viewValue!==
e&&(b.$$phase?d.$setViewValue(e):b.$apply(function(){d.$setViewValue(e)}))}};if(e.hasEvent("input"))a.on("input",h);else{var m,k=function(){m||(m=g.defer(function(){h();m=null}))};a.on("keydown",function(a){a=a.keyCode;91===a||(15<a&&19>a||37<=a&&40>=a)||k()});if(e.hasEvent("paste"))a.on("paste cut",k)}a.on("change",h);d.$render=function(){a.val(d.$isEmpty(d.$viewValue)?"":d.$viewValue)};var l=c.ngPattern,n=function(a,b){if(d.$isEmpty(b)||a.test(b))return d.$setValidity("pattern",!0),b;d.$setValidity("pattern",
!1);return r};l&&((e=l.match(/^\/(.*)\/([gim]*)$/))?(l=RegExp(e[1],e[2]),e=function(a){return n(l,a)}):e=function(c){var d=b.$eval(l);if(!d||!d.test)throw F("ngPattern")("noregexp",l,d,ga(a));return n(d,c)},d.$formatters.push(e),d.$parsers.push(e));if(c.ngMinlength){var p=S(c.ngMinlength);e=function(a){if(!d.$isEmpty(a)&&a.length<p)return d.$setValidity("minlength",!1),r;d.$setValidity("minlength",!0);return a};d.$parsers.push(e);d.$formatters.push(e)}if(c.ngMaxlength){var s=S(c.ngMaxlength);e=function(a){if(!d.$isEmpty(a)&&
a.length>s)return d.$setValidity("maxlength",!1),r;d.$setValidity("maxlength",!0);return a};d.$parsers.push(e);d.$formatters.push(e)}}function Mb(b,a){b="ngClass"+b;return function(){return{restrict:"AC",link:function(c,d,e){function g(b){if(!0===a||c.$index%2===a){var d=f(b||"");h?ua(b,h)||e.$updateClass(d,f(h)):e.$addClass(d)}h=fa(b)}function f(a){if(K(a))return a.join(" ");if(X(a)){var b=[];q(a,function(a,c){a&&b.push(c)});return b.join(" ")}return a}var h;c.$watch(e[b],g,!0);e.$observe("class",
function(a){g(c.$eval(e[b]))});"ngClass"!==b&&c.$watch("$index",function(d,g){var h=d&1;if(h!==g&1){var n=f(c.$eval(e[b]));h===a?e.$addClass(n):e.$removeClass(n)}})}}}}var x=function(b){return D(b)?b.toLowerCase():b},Ga=function(b){return D(b)?b.toUpperCase():b},M,A,Ba,va=[].slice,Qd=[].push,Za=Object.prototype.toString,La=F("ng"),Na=Z.angular||(Z.angular={}),Ta,Fa,ka=["0","0","0"];M=S((/msie (\d+)/.exec(x(navigator.userAgent))||[])[1]);isNaN(M)&&(M=S((/trident\/.*; rv:(\d+)/.exec(x(navigator.userAgent))||
[])[1]));w.$inject=[];Aa.$inject=[];var aa=function(){return String.prototype.trim?function(b){return D(b)?b.trim():b}:function(b){return D(b)?b.replace(/^\s\s*/,"").replace(/\s\s*$/,""):b}}();Fa=9>M?function(b){b=b.nodeName?b:b[0];return b.scopeName&&"HTML"!=b.scopeName?Ga(b.scopeName+":"+b.nodeName):b.nodeName}:function(b){return b.nodeName?b.nodeName:b[0].nodeName};var Tc=/[A-Z]/g,Rd={full:"1.2.7",major:1,minor:2,dot:7,codeName:"emoji-clairvoyance"},Qa=O.cache={},db=O.expando="ng-"+(new Date).getTime(),
Xc=1,Ic=Z.document.addEventListener?function(b,a,c){b.addEventListener(a,c,!1)}:function(b,a,c){b.attachEvent("on"+a,c)},Ab=Z.document.removeEventListener?function(b,a,c){b.removeEventListener(a,c,!1)}:function(b,a,c){b.detachEvent("on"+a,c)},Vc=/([\:\-\_]+(.))/g,Wc=/^moz([A-Z])/,xb=F("jqLite"),Ea=O.prototype={ready:function(b){function a(){c||(c=!0,b())}var c=!1;"complete"===Q.readyState?setTimeout(a):(this.on("DOMContentLoaded",a),O(Z).on("load",a))},toString:function(){var b=[];q(this,function(a){b.push(""+
a)});return"["+b.join(", ")+"]"},eq:function(b){return 0<=b?A(this[b]):A(this[this.length+b])},length:0,push:Qd,sort:[].sort,splice:[].splice},fb={};q("multiple selected checked disabled readOnly required open".split(" "),function(b){fb[x(b)]=b});var fc={};q("input select option textarea button form details".split(" "),function(b){fc[Ga(b)]=!0});q({data:bc,inheritedData:eb,scope:function(b){return A(b).data("$scope")||eb(b.parentNode||b,["$isolateScope","$scope"])},isolateScope:function(b){return A(b).data("$isolateScope")||
A(b).data("$isolateScopeNoTemplate")},controller:cc,injector:function(b){return eb(b,"$injector")},removeAttr:function(b,a){b.removeAttribute(a)},hasClass:Bb,css:function(b,a,c){a=Pa(a);if(B(c))b.style[a]=c;else{var d;8>=M&&(d=b.currentStyle&&b.currentStyle[a],""===d&&(d="auto"));d=d||b.style[a];8>=M&&(d=""===d?r:d);return d}},attr:function(b,a,c){var d=x(a);if(fb[d])if(B(c))c?(b[a]=!0,b.setAttribute(a,d)):(b[a]=!1,b.removeAttribute(d));else return b[a]||(b.attributes.getNamedItem(a)||w).specified?
d:r;else if(B(c))b.setAttribute(a,c);else if(b.getAttribute)return b=b.getAttribute(a,2),null===b?r:b},prop:function(b,a,c){if(B(c))b[a]=c;else return b[a]},text:function(){function b(b,d){var e=a[b.nodeType];if(z(d))return e?b[e]:"";b[e]=d}var a=[];9>M?(a[1]="innerText",a[3]="nodeValue"):a[1]=a[3]="textContent";b.$dv="";return b}(),val:function(b,a){if(z(a)){if("SELECT"===Fa(b)&&b.multiple){var c=[];q(b.options,function(a){a.selected&&c.push(a.value||a.text)});return 0===c.length?null:c}return b.value}b.value=
a},html:function(b,a){if(z(a))return b.innerHTML;for(var c=0,d=b.childNodes;c<d.length;c++)Ca(d[c]);b.innerHTML=a},empty:dc},function(b,a){O.prototype[a]=function(a,d){var e,g;if(b!==dc&&(2==b.length&&b!==Bb&&b!==cc?a:d)===r){if(X(a)){for(e=0;e<this.length;e++)if(b===bc)b(this[e],a);else for(g in a)b(this[e],g,a[g]);return this}e=b.$dv;g=e===r?Math.min(this.length,1):this.length;for(var f=0;f<g;f++){var h=b(this[f],a,d);e=e?e+h:h}return e}for(e=0;e<this.length;e++)b(this[e],a,d);return this}});q({removeData:$b,
dealoc:Ca,on:function a(c,d,e,g){if(B(g))throw xb("onargs");var f=la(c,"events"),h=la(c,"handle");f||la(c,"events",f={});h||la(c,"handle",h=Yc(c,f));q(d.split(" "),function(d){var g=f[d];if(!g){if("mouseenter"==d||"mouseleave"==d){var l=Q.body.contains||Q.body.compareDocumentPosition?function(a,c){var d=9===a.nodeType?a.documentElement:a,e=c&&c.parentNode;return a===e||!!(e&&1===e.nodeType&&(d.contains?d.contains(e):a.compareDocumentPosition&&a.compareDocumentPosition(e)&16))}:function(a,c){if(c)for(;c=
c.parentNode;)if(c===a)return!0;return!1};f[d]=[];a(c,{mouseleave:"mouseout",mouseenter:"mouseover"}[d],function(a){var c=a.relatedTarget;c&&(c===this||l(this,c))||h(a,d)})}else Ic(c,d,h),f[d]=[];g=f[d]}g.push(e)})},off:ac,one:function(a,c,d){a=A(a);a.on(c,function g(){a.off(c,d);a.off(c,g)});a.on(c,d)},replaceWith:function(a,c){var d,e=a.parentNode;Ca(a);q(new O(c),function(c){d?e.insertBefore(c,d.nextSibling):e.replaceChild(c,a);d=c})},children:function(a){var c=[];q(a.childNodes,function(a){1===
a.nodeType&&c.push(a)});return c},contents:function(a){return a.childNodes||[]},append:function(a,c){q(new O(c),function(c){1!==a.nodeType&&11!==a.nodeType||a.appendChild(c)})},prepend:function(a,c){if(1===a.nodeType){var d=a.firstChild;q(new O(c),function(c){a.insertBefore(c,d)})}},wrap:function(a,c){c=A(c)[0];var d=a.parentNode;d&&d.replaceChild(c,a);c.appendChild(a)},remove:function(a){Ca(a);var c=a.parentNode;c&&c.removeChild(a)},after:function(a,c){var d=a,e=a.parentNode;q(new O(c),function(a){e.insertBefore(a,
d.nextSibling);d=a})},addClass:Db,removeClass:Cb,toggleClass:function(a,c,d){z(d)&&(d=!Bb(a,c));(d?Db:Cb)(a,c)},parent:function(a){return(a=a.parentNode)&&11!==a.nodeType?a:null},next:function(a){if(a.nextElementSibling)return a.nextElementSibling;for(a=a.nextSibling;null!=a&&1!==a.nodeType;)a=a.nextSibling;return a},find:function(a,c){return a.getElementsByTagName?a.getElementsByTagName(c):[]},clone:zb,triggerHandler:function(a,c,d){c=(la(a,"events")||{})[c];d=d||[];var e=[{preventDefault:w,stopPropagation:w}];
q(c,function(c){c.apply(a,e.concat(d))})}},function(a,c){O.prototype[c]=function(c,e,g){for(var f,h=0;h<this.length;h++)z(f)?(f=a(this[h],c,e,g),B(f)&&(f=A(f))):yb(f,a(this[h],c,e,g));return B(f)?f:this};O.prototype.bind=O.prototype.on;O.prototype.unbind=O.prototype.off});Ra.prototype={put:function(a,c){this[Da(a)]=c},get:function(a){return this[Da(a)]},remove:function(a){var c=this[a=Da(a)];delete this[a];return c}};var $c=/^function\s*[^\(]*\(\s*([^\)]*)\)/m,ad=/,/,bd=/^\s*(_?)(\S+?)\1\s*$/,Zc=
/((\/\/.*$)|(\/\*[\s\S]*?\*\/))/mg,Sa=F("$injector"),Sd=F("$animate"),Td=["$provide",function(a){this.$$selectors={};this.register=function(c,d){var e=c+"-animation";if(c&&"."!=c.charAt(0))throw Sd("notcsel",c);this.$$selectors[c.substr(1)]=e;a.factory(e,d)};this.classNameFilter=function(a){1===arguments.length&&(this.$$classNameFilter=a instanceof RegExp?a:null);return this.$$classNameFilter};this.$get=["$timeout",function(a){return{enter:function(d,e,g,f){g?g.after(d):(e&&e[0]||(e=g.parent()),e.append(d));
f&&a(f,0,!1)},leave:function(d,e){d.remove();e&&a(e,0,!1)},move:function(a,c,g,f){this.enter(a,c,g,f)},addClass:function(d,e,g){e=D(e)?e:K(e)?e.join(" "):"";q(d,function(a){Db(a,e)});g&&a(g,0,!1)},removeClass:function(d,e,g){e=D(e)?e:K(e)?e.join(" "):"";q(d,function(a){Cb(a,e)});g&&a(g,0,!1)},enabled:w}}]}],ja=F("$compile");ic.$inject=["$provide","$$sanitizeUriProvider"];var hd=/^(x[\:\-_]|data[\:\-_])/i,oc=F("$interpolate"),Ud=/^([^\?#]*)(\?([^#]*))?(#(.*))?$/,sd={http:80,https:443,ftp:21},Hb=F("$location");
tc.prototype=Ib.prototype=sc.prototype={$$html5:!1,$$replace:!1,absUrl:ib("$$absUrl"),url:function(a,c){if(z(a))return this.$$url;var d=Ud.exec(a);d[1]&&this.path(decodeURIComponent(d[1]));(d[2]||d[1])&&this.search(d[3]||"");this.hash(d[5]||"",c);return this},protocol:ib("$$protocol"),host:ib("$$host"),port:ib("$$port"),path:uc("$$path",function(a){return"/"==a.charAt(0)?a:"/"+a}),search:function(a,c){switch(arguments.length){case 0:return this.$$search;case 1:if(D(a))this.$$search=Wb(a);else if(X(a))this.$$search=
a;else throw Hb("isrcharg");break;default:z(c)||null===c?delete this.$$search[a]:this.$$search[a]=c}this.$$compose();return this},hash:uc("$$hash",Aa),replace:function(){this.$$replace=!0;return this}};var ya=F("$parse"),xc={},ra,Ia={"null":function(){return null},"true":function(){return!0},"false":function(){return!1},undefined:w,"+":function(a,c,d,e){d=d(a,c);e=e(a,c);return B(d)?B(e)?d+e:d:B(e)?e:r},"-":function(a,c,d,e){d=d(a,c);e=e(a,c);return(B(d)?d:0)-(B(e)?e:0)},"*":function(a,c,d,e){return d(a,
c)*e(a,c)},"/":function(a,c,d,e){return d(a,c)/e(a,c)},"%":function(a,c,d,e){return d(a,c)%e(a,c)},"^":function(a,c,d,e){return d(a,c)^e(a,c)},"=":w,"===":function(a,c,d,e){return d(a,c)===e(a,c)},"!==":function(a,c,d,e){return d(a,c)!==e(a,c)},"==":function(a,c,d,e){return d(a,c)==e(a,c)},"!=":function(a,c,d,e){return d(a,c)!=e(a,c)},"<":function(a,c,d,e){return d(a,c)<e(a,c)},">":function(a,c,d,e){return d(a,c)>e(a,c)},"<=":function(a,c,d,e){return d(a,c)<=e(a,c)},">=":function(a,c,d,e){return d(a,
c)>=e(a,c)},"&&":function(a,c,d,e){return d(a,c)&&e(a,c)},"||":function(a,c,d,e){return d(a,c)||e(a,c)},"&":function(a,c,d,e){return d(a,c)&e(a,c)},"|":function(a,c,d,e){return e(a,c)(a,c,d(a,c))},"!":function(a,c,d){return!d(a,c)}},Vd={n:"\n",f:"\f",r:"\r",t:"\t",v:"\v","'":"'",'"':'"'},Kb=function(a){this.options=a};Kb.prototype={constructor:Kb,lex:function(a){this.text=a;this.index=0;this.ch=r;this.lastCh=":";this.tokens=[];var c;for(a=[];this.index<this.text.length;){this.ch=this.text.charAt(this.index);
|
||||
if(this.is("\"'"))this.readString(this.ch);else if(this.isNumber(this.ch)||this.is(".")&&this.isNumber(this.peek()))this.readNumber();else if(this.isIdent(this.ch))this.readIdent(),this.was("{,")&&("{"===a[0]&&(c=this.tokens[this.tokens.length-1]))&&(c.json=-1===c.text.indexOf("."));else if(this.is("(){}[].,;:?"))this.tokens.push({index:this.index,text:this.ch,json:this.was(":[,")&&this.is("{[")||this.is("}]:,")}),this.is("{[")&&a.unshift(this.ch),this.is("}]")&&a.shift(),this.index++;else if(this.isWhitespace(this.ch)){this.index++;
|
||||
continue}else{var d=this.ch+this.peek(),e=d+this.peek(2),g=Ia[this.ch],f=Ia[d],h=Ia[e];h?(this.tokens.push({index:this.index,text:e,fn:h}),this.index+=3):f?(this.tokens.push({index:this.index,text:d,fn:f}),this.index+=2):g?(this.tokens.push({index:this.index,text:this.ch,fn:g,json:this.was("[,:")&&this.is("+-")}),this.index+=1):this.throwError("Unexpected next character ",this.index,this.index+1)}this.lastCh=this.ch}return this.tokens},is:function(a){return-1!==a.indexOf(this.ch)},was:function(a){return-1!==
|
||||
a.indexOf(this.lastCh)},peek:function(a){a=a||1;return this.index+a<this.text.length?this.text.charAt(this.index+a):!1},isNumber:function(a){return"0"<=a&&"9">=a},isWhitespace:function(a){return" "===a||"\r"===a||"\t"===a||"\n"===a||"\v"===a||"\u00a0"===a},isIdent:function(a){return"a"<=a&&"z">=a||"A"<=a&&"Z">=a||"_"===a||"$"===a},isExpOperator:function(a){return"-"===a||"+"===a||this.isNumber(a)},throwError:function(a,c,d){d=d||this.index;c=B(c)?"s "+c+"-"+this.index+" ["+this.text.substring(c,d)+
|
||||
"]":" "+d;throw ya("lexerr",a,c,this.text);},readNumber:function(){for(var a="",c=this.index;this.index<this.text.length;){var d=x(this.text.charAt(this.index));if("."==d||this.isNumber(d))a+=d;else{var e=this.peek();if("e"==d&&this.isExpOperator(e))a+=d;else if(this.isExpOperator(d)&&e&&this.isNumber(e)&&"e"==a.charAt(a.length-1))a+=d;else if(!this.isExpOperator(d)||e&&this.isNumber(e)||"e"!=a.charAt(a.length-1))break;else this.throwError("Invalid exponent")}this.index++}a*=1;this.tokens.push({index:c,
|
||||
text:a,json:!0,fn:function(){return a}})},readIdent:function(){for(var a=this,c="",d=this.index,e,g,f,h;this.index<this.text.length;){h=this.text.charAt(this.index);if("."===h||this.isIdent(h)||this.isNumber(h))"."===h&&(e=this.index),c+=h;else break;this.index++}if(e)for(g=this.index;g<this.text.length;){h=this.text.charAt(g);if("("===h){f=c.substr(e-d+1);c=c.substr(0,e-d);this.index=g;break}if(this.isWhitespace(h))g++;else break}d={index:d,text:c};if(Ia.hasOwnProperty(c))d.fn=Ia[c],d.json=Ia[c];
|
||||
else{var m=wc(c,this.options,this.text);d.fn=t(function(a,c){return m(a,c)},{assign:function(d,e){return jb(d,c,e,a.text,a.options)}})}this.tokens.push(d);f&&(this.tokens.push({index:e,text:".",json:!1}),this.tokens.push({index:e+1,text:f,json:!1}))},readString:function(a){var c=this.index;this.index++;for(var d="",e=a,g=!1;this.index<this.text.length;){var f=this.text.charAt(this.index),e=e+f;if(g)"u"===f?(f=this.text.substring(this.index+1,this.index+5),f.match(/[\da-f]{4}/i)||this.throwError("Invalid unicode escape [\\u"+
|
||||
f+"]"),this.index+=4,d+=String.fromCharCode(parseInt(f,16))):d=(g=Vd[f])?d+g:d+f,g=!1;else if("\\"===f)g=!0;else{if(f===a){this.index++;this.tokens.push({index:c,text:e,string:d,json:!0,fn:function(){return d}});return}d+=f}this.index++}this.throwError("Unterminated quote",c)}};var Xa=function(a,c,d){this.lexer=a;this.$filter=c;this.options=d};Xa.ZERO=function(){return 0};Xa.prototype={constructor:Xa,parse:function(a,c){this.text=a;this.json=c;this.tokens=this.lexer.lex(a);c&&(this.assignment=this.logicalOR,
|
||||
this.functionCall=this.fieldAccess=this.objectIndex=this.filterChain=function(){this.throwError("is not valid json",{text:a,index:0})});var d=c?this.primary():this.statements();0!==this.tokens.length&&this.throwError("is an unexpected token",this.tokens[0]);d.literal=!!d.literal;d.constant=!!d.constant;return d},primary:function(){var a;if(this.expect("("))a=this.filterChain(),this.consume(")");else if(this.expect("["))a=this.arrayDeclaration();else if(this.expect("{"))a=this.object();else{var c=
|
||||
this.expect();(a=c.fn)||this.throwError("not a primary expression",c);c.json&&(a.constant=!0,a.literal=!0)}for(var d;c=this.expect("(","[",".");)"("===c.text?(a=this.functionCall(a,d),d=null):"["===c.text?(d=a,a=this.objectIndex(a)):"."===c.text?(d=a,a=this.fieldAccess(a)):this.throwError("IMPOSSIBLE");return a},throwError:function(a,c){throw ya("syntax",c.text,a,c.index+1,this.text,this.text.substring(c.index));},peekToken:function(){if(0===this.tokens.length)throw ya("ueoe",this.text);return this.tokens[0]},
|
||||
peek:function(a,c,d,e){if(0<this.tokens.length){var g=this.tokens[0],f=g.text;if(f===a||f===c||f===d||f===e||!(a||c||d||e))return g}return!1},expect:function(a,c,d,e){return(a=this.peek(a,c,d,e))?(this.json&&!a.json&&this.throwError("is not valid json",a),this.tokens.shift(),a):!1},consume:function(a){this.expect(a)||this.throwError("is unexpected, expecting ["+a+"]",this.peek())},unaryFn:function(a,c){return t(function(d,e){return a(d,e,c)},{constant:c.constant})},ternaryFn:function(a,c,d){return t(function(e,
|
||||
g){return a(e,g)?c(e,g):d(e,g)},{constant:a.constant&&c.constant&&d.constant})},binaryFn:function(a,c,d){return t(function(e,g){return c(e,g,a,d)},{constant:a.constant&&d.constant})},statements:function(){for(var a=[];;)if(0<this.tokens.length&&!this.peek("}",")",";","]")&&a.push(this.filterChain()),!this.expect(";"))return 1===a.length?a[0]:function(c,d){for(var e,g=0;g<a.length;g++){var f=a[g];f&&(e=f(c,d))}return e}},filterChain:function(){for(var a=this.expression(),c;;)if(c=this.expect("|"))a=
|
||||
this.binaryFn(a,c.fn,this.filter());else return a},filter:function(){for(var a=this.expect(),c=this.$filter(a.text),d=[];;)if(a=this.expect(":"))d.push(this.expression());else{var e=function(a,e,h){h=[h];for(var m=0;m<d.length;m++)h.push(d[m](a,e));return c.apply(a,h)};return function(){return e}}},expression:function(){return this.assignment()},assignment:function(){var a=this.ternary(),c,d;return(d=this.expect("="))?(a.assign||this.throwError("implies assignment but ["+this.text.substring(0,d.index)+
|
||||
"] can not be assigned to",d),c=this.ternary(),function(d,g){return a.assign(d,c(d,g),g)}):a},ternary:function(){var a=this.logicalOR(),c,d;if(this.expect("?")){c=this.ternary();if(d=this.expect(":"))return this.ternaryFn(a,c,this.ternary());this.throwError("expected :",d)}else return a},logicalOR:function(){for(var a=this.logicalAND(),c;;)if(c=this.expect("||"))a=this.binaryFn(a,c.fn,this.logicalAND());else return a},logicalAND:function(){var a=this.equality(),c;if(c=this.expect("&&"))a=this.binaryFn(a,
|
||||
c.fn,this.logicalAND());return a},equality:function(){var a=this.relational(),c;if(c=this.expect("==","!=","===","!=="))a=this.binaryFn(a,c.fn,this.equality());return a},relational:function(){var a=this.additive(),c;if(c=this.expect("<",">","<=",">="))a=this.binaryFn(a,c.fn,this.relational());return a},additive:function(){for(var a=this.multiplicative(),c;c=this.expect("+","-");)a=this.binaryFn(a,c.fn,this.multiplicative());return a},multiplicative:function(){for(var a=this.unary(),c;c=this.expect("*",
|
||||
"/","%");)a=this.binaryFn(a,c.fn,this.unary());return a},unary:function(){var a;return this.expect("+")?this.primary():(a=this.expect("-"))?this.binaryFn(Xa.ZERO,a.fn,this.unary()):(a=this.expect("!"))?this.unaryFn(a.fn,this.unary()):this.primary()},fieldAccess:function(a){var c=this,d=this.expect().text,e=wc(d,this.options,this.text);return t(function(c,d,h){return e(h||a(c,d),d)},{assign:function(e,f,h){return jb(a(e,h),d,f,c.text,c.options)}})},objectIndex:function(a){var c=this,d=this.expression();
|
||||
this.consume("]");return t(function(e,g){var f=a(e,g),h=d(e,g),m;if(!f)return r;(f=Wa(f[h],c.text))&&(f.then&&c.options.unwrapPromises)&&(m=f,"$$v"in f||(m.$$v=r,m.then(function(a){m.$$v=a})),f=f.$$v);return f},{assign:function(e,g,f){var h=d(e,f);return Wa(a(e,f),c.text)[h]=g}})},functionCall:function(a,c){var d=[];if(")"!==this.peekToken().text){do d.push(this.expression());while(this.expect(","))}this.consume(")");var e=this;return function(g,f){for(var h=[],m=c?c(g,f):g,k=0;k<d.length;k++)h.push(d[k](g,
|
||||
f));k=a(g,f,m)||w;Wa(m,e.text);Wa(k,e.text);h=k.apply?k.apply(m,h):k(h[0],h[1],h[2],h[3],h[4]);return Wa(h,e.text)}},arrayDeclaration:function(){var a=[],c=!0;if("]"!==this.peekToken().text){do{var d=this.expression();a.push(d);d.constant||(c=!1)}while(this.expect(","))}this.consume("]");return t(function(c,d){for(var f=[],h=0;h<a.length;h++)f.push(a[h](c,d));return f},{literal:!0,constant:c})},object:function(){var a=[],c=!0;if("}"!==this.peekToken().text){do{var d=this.expect(),d=d.string||d.text;
|
||||
this.consume(":");var e=this.expression();a.push({key:d,value:e});e.constant||(c=!1)}while(this.expect(","))}this.consume("}");return t(function(c,d){for(var e={},m=0;m<a.length;m++){var k=a[m];e[k.key]=k.value(c,d)}return e},{literal:!0,constant:c})}};var Jb={},sa=F("$sce"),ea={HTML:"html",CSS:"css",URL:"url",RESOURCE_URL:"resourceUrl",JS:"js"},Y=Q.createElement("a"),zc=qa(Z.location.href,!0);Ac.$inject=["$provide"];Bc.$inject=["$locale"];Dc.$inject=["$locale"];var Gc=".",Pd={yyyy:W("FullYear",4),
|
||||
yy:W("FullYear",2,0,!0),y:W("FullYear",1),MMMM:kb("Month"),MMM:kb("Month",!0),MM:W("Month",2,1),M:W("Month",1,1),dd:W("Date",2),d:W("Date",1),HH:W("Hours",2),H:W("Hours",1),hh:W("Hours",2,-12),h:W("Hours",1,-12),mm:W("Minutes",2),m:W("Minutes",1),ss:W("Seconds",2),s:W("Seconds",1),sss:W("Milliseconds",3),EEEE:kb("Day"),EEE:kb("Day",!0),a:function(a,c){return 12>a.getHours()?c.AMPMS[0]:c.AMPMS[1]},Z:function(a){a=-1*a.getTimezoneOffset();return a=(0<=a?"+":"")+(Lb(Math[0<a?"floor":"ceil"](a/60),2)+
|
||||
Lb(Math.abs(a%60),2))}},Od=/((?:[^yMdHhmsaZE']+)|(?:'(?:[^']|'')*')|(?:E+|y+|M+|d+|H+|h+|m+|s+|a|Z))(.*)/,Nd=/^\-?\d+$/;Cc.$inject=["$locale"];var Ld=$(x),Md=$(Ga);Ec.$inject=["$parse"];var Wd=$({restrict:"E",compile:function(a,c){8>=M&&(c.href||c.name||c.$set("href",""),a.append(Q.createComment("IE fix")));if(!c.href&&!c.name)return function(a,c){c.on("click",function(a){c.attr("href")||a.preventDefault()})}}}),Nb={};q(fb,function(a,c){if("multiple"!=a){var d=ma("ng-"+c);Nb[d]=function(){return{priority:100,
|
||||
compile:function(){return function(a,g,f){a.$watch(f[d],function(a){f.$set(c,!!a)})}}}}}});q(["src","srcset","href"],function(a){var c=ma("ng-"+a);Nb[c]=function(){return{priority:99,link:function(d,e,g){g.$observe(c,function(c){c&&(g.$set(a,c),M&&e.prop(a,g[a]))})}}}});var nb={$addControl:w,$removeControl:w,$setValidity:w,$setDirty:w,$setPristine:w};Hc.$inject=["$element","$attrs","$scope"];var Jc=function(a){return["$timeout",function(c){return{name:"form",restrict:a?"EAC":"E",controller:Hc,compile:function(){return{pre:function(a,
|
||||
e,g,f){if(!g.action){var h=function(a){a.preventDefault?a.preventDefault():a.returnValue=!1};Ic(e[0],"submit",h);e.on("$destroy",function(){c(function(){Ab(e[0],"submit",h)},0,!1)})}var m=e.parent().controller("form"),k=g.name||g.ngForm;k&&jb(a,k,f,k);if(m)e.on("$destroy",function(){m.$removeControl(f);k&&jb(a,k,r,k);t(f,nb)})}}}}}]},Xd=Jc(),Yd=Jc(!0),Zd=/^(ftp|http|https):\/\/(\w+:{0,1}\w*@)?(\S+)(:[0-9]+)?(\/|\/([\w#!:.?+=&%@!\-\/]))?$/,$d=/^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,6}$/,ae=
|
||||
/^\s*(\-|\+)?(\d+|(\d*(\.\d*)))\s*$/,Kc={text:pb,number:function(a,c,d,e,g,f){pb(a,c,d,e,g,f);e.$parsers.push(function(a){var c=e.$isEmpty(a);if(c||ae.test(a))return e.$setValidity("number",!0),""===a?null:c?a:parseFloat(a);e.$setValidity("number",!1);return r});e.$formatters.push(function(a){return e.$isEmpty(a)?"":""+a});d.min&&(a=function(a){var c=parseFloat(d.min);if(!e.$isEmpty(a)&&a<c)return e.$setValidity("min",!1),r;e.$setValidity("min",!0);return a},e.$parsers.push(a),e.$formatters.push(a));
|
||||
d.max&&(a=function(a){var c=parseFloat(d.max);if(!e.$isEmpty(a)&&a>c)return e.$setValidity("max",!1),r;e.$setValidity("max",!0);return a},e.$parsers.push(a),e.$formatters.push(a));e.$formatters.push(function(a){if(e.$isEmpty(a)||rb(a))return e.$setValidity("number",!0),a;e.$setValidity("number",!1);return r})},url:function(a,c,d,e,g,f){pb(a,c,d,e,g,f);a=function(a){if(e.$isEmpty(a)||Zd.test(a))return e.$setValidity("url",!0),a;e.$setValidity("url",!1);return r};e.$formatters.push(a);e.$parsers.push(a)},
|
||||
email:function(a,c,d,e,g,f){pb(a,c,d,e,g,f);a=function(a){if(e.$isEmpty(a)||$d.test(a))return e.$setValidity("email",!0),a;e.$setValidity("email",!1);return r};e.$formatters.push(a);e.$parsers.push(a)},radio:function(a,c,d,e){z(d.name)&&c.attr("name",Ya());c.on("click",function(){c[0].checked&&a.$apply(function(){e.$setViewValue(d.value)})});e.$render=function(){c[0].checked=d.value==e.$viewValue};d.$observe("value",e.$render)},checkbox:function(a,c,d,e){var g=d.ngTrueValue,f=d.ngFalseValue;D(g)||
|
||||
(g=!0);D(f)||(f=!1);c.on("click",function(){a.$apply(function(){e.$setViewValue(c[0].checked)})});e.$render=function(){c[0].checked=e.$viewValue};e.$isEmpty=function(a){return a!==g};e.$formatters.push(function(a){return a===g});e.$parsers.push(function(a){return a?g:f})},hidden:w,button:w,submit:w,reset:w},Lc=["$browser","$sniffer",function(a,c){return{restrict:"E",require:"?ngModel",link:function(d,e,g,f){f&&(Kc[x(g.type)]||Kc.text)(d,e,g,f,c,a)}}}],mb="ng-valid",lb="ng-invalid",Ha="ng-pristine",
|
||||
ob="ng-dirty",be=["$scope","$exceptionHandler","$attrs","$element","$parse",function(a,c,d,e,g){function f(a,c){c=c?"-"+cb(c,"-"):"";e.removeClass((a?lb:mb)+c).addClass((a?mb:lb)+c)}this.$modelValue=this.$viewValue=Number.NaN;this.$parsers=[];this.$formatters=[];this.$viewChangeListeners=[];this.$pristine=!0;this.$dirty=!1;this.$valid=!0;this.$invalid=!1;this.$name=d.name;var h=g(d.ngModel),m=h.assign;if(!m)throw F("ngModel")("nonassign",d.ngModel,ga(e));this.$render=w;this.$isEmpty=function(a){return z(a)||
|
||||
""===a||null===a||a!==a};var k=e.inheritedData("$formController")||nb,l=0,n=this.$error={};e.addClass(Ha);f(!0);this.$setValidity=function(a,c){n[a]!==!c&&(c?(n[a]&&l--,l||(f(!0),this.$valid=!0,this.$invalid=!1)):(f(!1),this.$invalid=!0,this.$valid=!1,l++),n[a]=!c,f(c,a),k.$setValidity(a,c,this))};this.$setPristine=function(){this.$dirty=!1;this.$pristine=!0;e.removeClass(ob).addClass(Ha)};this.$setViewValue=function(d){this.$viewValue=d;this.$pristine&&(this.$dirty=!0,this.$pristine=!1,e.removeClass(Ha).addClass(ob),
|
||||
k.$setDirty());q(this.$parsers,function(a){d=a(d)});this.$modelValue!==d&&(this.$modelValue=d,m(a,d),q(this.$viewChangeListeners,function(a){try{a()}catch(d){c(d)}}))};var p=this;a.$watch(function(){var c=h(a);if(p.$modelValue!==c){var d=p.$formatters,e=d.length;for(p.$modelValue=c;e--;)c=d[e](c);p.$viewValue!==c&&(p.$viewValue=c,p.$render())}return c})}],ce=function(){return{require:["ngModel","^?form"],controller:be,link:function(a,c,d,e){var g=e[0],f=e[1]||nb;f.$addControl(g);a.$on("$destroy",
|
||||
function(){f.$removeControl(g)})}}},de=$({require:"ngModel",link:function(a,c,d,e){e.$viewChangeListeners.push(function(){a.$eval(d.ngChange)})}}),Mc=function(){return{require:"?ngModel",link:function(a,c,d,e){if(e){d.required=!0;var g=function(a){if(d.required&&e.$isEmpty(a))e.$setValidity("required",!1);else return e.$setValidity("required",!0),a};e.$formatters.push(g);e.$parsers.unshift(g);d.$observe("required",function(){g(e.$viewValue)})}}}},ee=function(){return{require:"ngModel",link:function(a,
|
||||
c,d,e){var g=(a=/\/(.*)\//.exec(d.ngList))&&RegExp(a[1])||d.ngList||",";e.$parsers.push(function(a){if(!z(a)){var c=[];a&&q(a.split(g),function(a){a&&c.push(aa(a))});return c}});e.$formatters.push(function(a){return K(a)?a.join(", "):r});e.$isEmpty=function(a){return!a||!a.length}}}},fe=/^(true|false|\d+)$/,ge=function(){return{priority:100,compile:function(a,c){return fe.test(c.ngValue)?function(a,c,g){g.$set("value",a.$eval(g.ngValue))}:function(a,c,g){a.$watch(g.ngValue,function(a){g.$set("value",
|
||||
a)})}}}},he=ta(function(a,c,d){c.addClass("ng-binding").data("$binding",d.ngBind);a.$watch(d.ngBind,function(a){c.text(a==r?"":a)})}),ie=["$interpolate",function(a){return function(c,d,e){c=a(d.attr(e.$attr.ngBindTemplate));d.addClass("ng-binding").data("$binding",c);e.$observe("ngBindTemplate",function(a){d.text(a)})}}],je=["$sce","$parse",function(a,c){return function(d,e,g){e.addClass("ng-binding").data("$binding",g.ngBindHtml);var f=c(g.ngBindHtml);d.$watch(function(){return(f(d)||"").toString()},
|
||||
function(c){e.html(a.getTrustedHtml(f(d))||"")})}}],ke=Mb("",!0),le=Mb("Odd",0),me=Mb("Even",1),ne=ta({compile:function(a,c){c.$set("ngCloak",r);a.removeClass("ng-cloak")}}),oe=[function(){return{scope:!0,controller:"@",priority:500}}],Nc={};q("click dblclick mousedown mouseup mouseover mouseout mousemove mouseenter mouseleave keydown keyup keypress submit focus blur copy cut paste".split(" "),function(a){var c=ma("ng-"+a);Nc[c]=["$parse",function(d){return{compile:function(e,g){var f=d(g[c]);return function(c,
|
||||
d,e){d.on(x(a),function(a){c.$apply(function(){f(c,{$event:a})})})}}}}]});var pe=["$animate",function(a){return{transclude:"element",priority:600,terminal:!0,restrict:"A",$$tlb:!0,link:function(c,d,e,g,f){var h,m;c.$watch(e.ngIf,function(g){Ma(g)?m||(m=c.$new(),f(m,function(c){c[c.length++]=Q.createComment(" end ngIf: "+e.ngIf+" ");h={clone:c};a.enter(c,d.parent(),d)})):(m&&(m.$destroy(),m=null),h&&(a.leave(vb(h.clone)),h=null))})}}}],qe=["$http","$templateCache","$anchorScroll","$animate","$sce",
|
||||
function(a,c,d,e,g){return{restrict:"ECA",priority:400,terminal:!0,transclude:"element",controller:Na.noop,compile:function(f,h){var m=h.ngInclude||h.src,k=h.onload||"",l=h.autoscroll;return function(f,h,q,r,y){var A=0,u,t,H=function(){u&&(u.$destroy(),u=null);t&&(e.leave(t),t=null)};f.$watch(g.parseAsResourceUrl(m),function(g){var m=function(){!B(l)||l&&!f.$eval(l)||d()},q=++A;g?(a.get(g,{cache:c}).success(function(a){if(q===A){var c=f.$new();r.template=a;a=y(c,function(a){H();e.enter(a,null,h,m)});
|
||||
u=c;t=a;u.$emit("$includeContentLoaded");f.$eval(k)}}).error(function(){q===A&&H()}),f.$emit("$includeContentRequested")):(H(),r.template=null)})}}}}],re=["$compile",function(a){return{restrict:"ECA",priority:-400,require:"ngInclude",link:function(c,d,e,g){d.html(g.template);a(d.contents())(c)}}}],se=ta({priority:450,compile:function(){return{pre:function(a,c,d){a.$eval(d.ngInit)}}}}),te=ta({terminal:!0,priority:1E3}),ue=["$locale","$interpolate",function(a,c){var d=/{}/g;return{restrict:"EA",link:function(e,
|
||||
g,f){var h=f.count,m=f.$attr.when&&g.attr(f.$attr.when),k=f.offset||0,l=e.$eval(m)||{},n={},p=c.startSymbol(),s=c.endSymbol(),r=/^when(Minus)?(.+)$/;q(f,function(a,c){r.test(c)&&(l[x(c.replace("when","").replace("Minus","-"))]=g.attr(f.$attr[c]))});q(l,function(a,e){n[e]=c(a.replace(d,p+h+"-"+k+s))});e.$watch(function(){var c=parseFloat(e.$eval(h));if(isNaN(c))return"";c in l||(c=a.pluralCat(c-k));return n[c](e,g,!0)},function(a){g.text(a)})}}}],ve=["$parse","$animate",function(a,c){var d=F("ngRepeat");
|
||||
return{transclude:"element",priority:1E3,terminal:!0,$$tlb:!0,link:function(e,g,f,h,m){var k=f.ngRepeat,l=k.match(/^\s*([\s\S]+?)\s+in\s+([\s\S]+?)(?:\s+track\s+by\s+([\s\S]+?))?\s*$/),n,p,s,r,y,t,u={$id:Da};if(!l)throw d("iexp",k);f=l[1];h=l[2];(l=l[3])?(n=a(l),p=function(a,c,d){t&&(u[t]=a);u[y]=c;u.$index=d;return n(e,u)}):(s=function(a,c){return Da(c)},r=function(a){return a});l=f.match(/^(?:([\$\w]+)|\(([\$\w]+)\s*,\s*([\$\w]+)\))$/);if(!l)throw d("iidexp",f);y=l[3]||l[1];t=l[2];var B={};e.$watchCollection(h,
|
||||
function(a){var f,h,l=g[0],n,u={},z,P,D,x,T,w,F=[];if(qb(a))T=a,n=p||s;else{n=p||r;T=[];for(D in a)a.hasOwnProperty(D)&&"$"!=D.charAt(0)&&T.push(D);T.sort()}z=T.length;h=F.length=T.length;for(f=0;f<h;f++)if(D=a===T?f:T[f],x=a[D],x=n(D,x,f),xa(x,"`track by` id"),B.hasOwnProperty(x))w=B[x],delete B[x],u[x]=w,F[f]=w;else{if(u.hasOwnProperty(x))throw q(F,function(a){a&&a.scope&&(B[a.id]=a)}),d("dupes",k,x);F[f]={id:x};u[x]=!1}for(D in B)B.hasOwnProperty(D)&&(w=B[D],f=vb(w.clone),c.leave(f),q(f,function(a){a.$$NG_REMOVED=
|
||||
!0}),w.scope.$destroy());f=0;for(h=T.length;f<h;f++){D=a===T?f:T[f];x=a[D];w=F[f];F[f-1]&&(l=F[f-1].clone[F[f-1].clone.length-1]);if(w.scope){P=w.scope;n=l;do n=n.nextSibling;while(n&&n.$$NG_REMOVED);w.clone[0]!=n&&c.move(vb(w.clone),null,A(l));l=w.clone[w.clone.length-1]}else P=e.$new();P[y]=x;t&&(P[t]=D);P.$index=f;P.$first=0===f;P.$last=f===z-1;P.$middle=!(P.$first||P.$last);P.$odd=!(P.$even=0===(f&1));w.scope||m(P,function(a){a[a.length++]=Q.createComment(" end ngRepeat: "+k+" ");c.enter(a,null,
|
||||
A(l));l=a;w.scope=P;w.clone=a;u[w.id]=w})}B=u})}}}],we=["$animate",function(a){return function(c,d,e){c.$watch(e.ngShow,function(c){a[Ma(c)?"removeClass":"addClass"](d,"ng-hide")})}}],xe=["$animate",function(a){return function(c,d,e){c.$watch(e.ngHide,function(c){a[Ma(c)?"addClass":"removeClass"](d,"ng-hide")})}}],ye=ta(function(a,c,d){a.$watch(d.ngStyle,function(a,d){d&&a!==d&&q(d,function(a,d){c.css(d,"")});a&&c.css(a)},!0)}),ze=["$animate",function(a){return{restrict:"EA",require:"ngSwitch",controller:["$scope",
|
||||
function(){this.cases={}}],link:function(c,d,e,g){var f,h,m=[];c.$watch(e.ngSwitch||e.on,function(d){for(var l=0,n=m.length;l<n;l++)m[l].$destroy(),a.leave(h[l]);h=[];m=[];if(f=g.cases["!"+d]||g.cases["?"])c.$eval(e.change),q(f,function(d){var e=c.$new();m.push(e);d.transclude(e,function(c){var e=d.element;h.push(c);a.enter(c,e.parent(),e)})})})}}}],Ae=ta({transclude:"element",priority:800,require:"^ngSwitch",compile:function(a,c){return function(a,e,g,f,h){f.cases["!"+c.ngSwitchWhen]=f.cases["!"+
|
||||
c.ngSwitchWhen]||[];f.cases["!"+c.ngSwitchWhen].push({transclude:h,element:e})}}}),Be=ta({transclude:"element",priority:800,require:"^ngSwitch",link:function(a,c,d,e,g){e.cases["?"]=e.cases["?"]||[];e.cases["?"].push({transclude:g,element:c})}}),Ce=ta({controller:["$element","$transclude",function(a,c){if(!c)throw F("ngTransclude")("orphan",ga(a));this.$transclude=c}],link:function(a,c,d,e){e.$transclude(function(a){c.empty();c.append(a)})}}),De=["$templateCache",function(a){return{restrict:"E",terminal:!0,
|
||||
compile:function(c,d){"text/ng-template"==d.type&&a.put(d.id,c[0].text)}}}],Ee=F("ngOptions"),Fe=$({terminal:!0}),Ge=["$compile","$parse",function(a,c){var d=/^\s*(.*?)(?:\s+as\s+(.*?))?(?:\s+group\s+by\s+(.*))?\s+for\s+(?:([\$\w][\$\w]*)|(?:\(\s*([\$\w][\$\w]*)\s*,\s*([\$\w][\$\w]*)\s*\)))\s+in\s+(.*?)(?:\s+track\s+by\s+(.*?))?$/,e={$setViewValue:w};return{restrict:"E",require:["select","?ngModel"],controller:["$element","$scope","$attrs",function(a,c,d){var m=this,k={},l=e,n;m.databound=d.ngModel;
|
||||
m.init=function(a,c,d){l=a;n=d};m.addOption=function(c){xa(c,'"option value"');k[c]=!0;l.$viewValue==c&&(a.val(c),n.parent()&&n.remove())};m.removeOption=function(a){this.hasOption(a)&&(delete k[a],l.$viewValue==a&&this.renderUnknownOption(a))};m.renderUnknownOption=function(c){c="? "+Da(c)+" ?";n.val(c);a.prepend(n);a.val(c);n.prop("selected",!0)};m.hasOption=function(a){return k.hasOwnProperty(a)};c.$on("$destroy",function(){m.renderUnknownOption=w})}],link:function(e,f,h,m){function k(a,c,d,e){d.$render=
|
||||
function(){var a=d.$viewValue;e.hasOption(a)?(x.parent()&&x.remove(),c.val(a),""===a&&w.prop("selected",!0)):z(a)&&w?c.val(""):e.renderUnknownOption(a)};c.on("change",function(){a.$apply(function(){x.parent()&&x.remove();d.$setViewValue(c.val())})})}function l(a,c,d){var e;d.$render=function(){var a=new Ra(d.$viewValue);q(c.find("option"),function(c){c.selected=B(a.get(c.value))})};a.$watch(function(){ua(e,d.$viewValue)||(e=fa(d.$viewValue),d.$render())});c.on("change",function(){a.$apply(function(){var a=
|
||||
[];q(c.find("option"),function(c){c.selected&&a.push(c.value)});d.$setViewValue(a)})})}function n(e,f,g){function h(){var a={"":[]},c=[""],d,k,r,t,v;t=g.$modelValue;v=A(e)||[];var C=n?Ob(v):v,F,I,z;I={};r=!1;var E,H;if(s)if(w&&K(t))for(r=new Ra([]),z=0;z<t.length;z++)I[l]=t[z],r.put(w(e,I),t[z]);else r=new Ra(t);for(z=0;F=C.length,z<F;z++){k=z;if(n){k=C[z];if("$"===k.charAt(0))continue;I[n]=k}I[l]=v[k];d=p(e,I)||"";(k=a[d])||(k=a[d]=[],c.push(d));s?d=B(r.remove(w?w(e,I):q(e,I))):(w?(d={},d[l]=t,d=
|
||||
w(e,d)===w(e,I)):d=t===q(e,I),r=r||d);E=m(e,I);E=B(E)?E:"";k.push({id:w?w(e,I):n?C[z]:z,label:E,selected:d})}s||(y||null===t?a[""].unshift({id:"",label:"",selected:!r}):r||a[""].unshift({id:"?",label:"",selected:!0}));I=0;for(C=c.length;I<C;I++){d=c[I];k=a[d];x.length<=I?(t={element:D.clone().attr("label",d),label:k.label},v=[t],x.push(v),f.append(t.element)):(v=x[I],t=v[0],t.label!=d&&t.element.attr("label",t.label=d));E=null;z=0;for(F=k.length;z<F;z++)r=k[z],(d=v[z+1])?(E=d.element,d.label!==r.label&&
|
||||
E.text(d.label=r.label),d.id!==r.id&&E.val(d.id=r.id),E[0].selected!==r.selected&&E.prop("selected",d.selected=r.selected)):(""===r.id&&y?H=y:(H=u.clone()).val(r.id).attr("selected",r.selected).text(r.label),v.push({element:H,label:r.label,id:r.id,selected:r.selected}),E?E.after(H):t.element.append(H),E=H);for(z++;v.length>z;)v.pop().element.remove()}for(;x.length>I;)x.pop()[0].element.remove()}var k;if(!(k=t.match(d)))throw Ee("iexp",t,ga(f));var m=c(k[2]||k[1]),l=k[4]||k[6],n=k[5],p=c(k[3]||""),
|
||||
q=c(k[2]?k[1]:l),A=c(k[7]),w=k[8]?c(k[8]):null,x=[[{element:f,label:""}]];y&&(a(y)(e),y.removeClass("ng-scope"),y.remove());f.empty();f.on("change",function(){e.$apply(function(){var a,c=A(e)||[],d={},h,k,m,p,t,u,v;if(s)for(k=[],p=0,u=x.length;p<u;p++)for(a=x[p],m=1,t=a.length;m<t;m++){if((h=a[m].element)[0].selected){h=h.val();n&&(d[n]=h);if(w)for(v=0;v<c.length&&(d[l]=c[v],w(e,d)!=h);v++);else d[l]=c[h];k.push(q(e,d))}}else if(h=f.val(),"?"==h)k=r;else if(""===h)k=null;else if(w)for(v=0;v<c.length;v++){if(d[l]=
|
||||
c[v],w(e,d)==h){k=q(e,d);break}}else d[l]=c[h],n&&(d[n]=h),k=q(e,d);g.$setViewValue(k)})});g.$render=h;e.$watch(h)}if(m[1]){var p=m[0];m=m[1];var s=h.multiple,t=h.ngOptions,y=!1,w,u=A(Q.createElement("option")),D=A(Q.createElement("optgroup")),x=u.clone();h=0;for(var v=f.children(),F=v.length;h<F;h++)if(""===v[h].value){w=y=v.eq(h);break}p.init(m,y,x);s&&(m.$isEmpty=function(a){return!a||0===a.length});t?n(e,f,m):s?l(e,f,m):k(e,f,m,p)}}}}],He=["$interpolate",function(a){var c={addOption:w,removeOption:w};
|
||||
return{restrict:"E",priority:100,compile:function(d,e){if(z(e.value)){var g=a(d.text(),!0);g||e.$set("value",d.text())}return function(a,d,e){var k=d.parent(),l=k.data("$selectController")||k.parent().data("$selectController");l&&l.databound?d.prop("selected",!1):l=c;g?a.$watch(g,function(a,c){e.$set("value",a);a!==c&&l.removeOption(c);l.addOption(a)}):l.addOption(e.value);d.on("$destroy",function(){l.removeOption(e.value)})}}}}],Ie=$({restrict:"E",terminal:!0});(Ba=Z.jQuery)?(A=Ba,t(Ba.fn,{scope:Ea.scope,
|
||||
isolateScope:Ea.isolateScope,controller:Ea.controller,injector:Ea.injector,inheritedData:Ea.inheritedData}),wb("remove",!0,!0,!1),wb("empty",!1,!1,!1),wb("html",!1,!1,!0)):A=O;Na.element=A;(function(a){t(a,{bootstrap:Yb,copy:fa,extend:t,equals:ua,element:A,forEach:q,injector:Zb,noop:w,bind:bb,toJson:pa,fromJson:Ub,identity:Aa,isUndefined:z,isDefined:B,isString:D,isFunction:L,isObject:X,isNumber:rb,isElement:Pc,isArray:K,version:Rd,isDate:Ja,lowercase:x,uppercase:Ga,callbacks:{counter:0},$$minErr:F,
|
||||
$$csp:Tb});Ta=Uc(Z);try{Ta("ngLocale")}catch(c){Ta("ngLocale",[]).provider("$locale",rd)}Ta("ng",["ngLocale"],["$provide",function(a){a.provider({$$sanitizeUri:Bd});a.provider("$compile",ic).directive({a:Wd,input:Lc,textarea:Lc,form:Xd,script:De,select:Ge,style:Ie,option:He,ngBind:he,ngBindHtml:je,ngBindTemplate:ie,ngClass:ke,ngClassEven:me,ngClassOdd:le,ngCloak:ne,ngController:oe,ngForm:Yd,ngHide:xe,ngIf:pe,ngInclude:qe,ngInit:se,ngNonBindable:te,ngPluralize:ue,ngRepeat:ve,ngShow:we,ngStyle:ye,ngSwitch:ze,
|
||||
ngSwitchWhen:Ae,ngSwitchDefault:Be,ngOptions:Fe,ngTransclude:Ce,ngModel:ce,ngList:ee,ngChange:de,required:Mc,ngRequired:Mc,ngValue:ge}).directive({ngInclude:re}).directive(Nb).directive(Nc);a.provider({$anchorScroll:cd,$animate:Td,$browser:ed,$cacheFactory:fd,$controller:id,$document:jd,$exceptionHandler:kd,$filter:Ac,$interpolate:pd,$interval:qd,$http:ld,$httpBackend:nd,$location:td,$log:ud,$parse:xd,$rootScope:Ad,$q:yd,$sce:Ed,$sceDelegate:Dd,$sniffer:Fd,$templateCache:gd,$timeout:Gd,$window:Hd})}])})(Na);
|
||||
A(Q).ready(function(){Sc(Q,Yb)})})(window,document);!angular.$$csp()&&angular.element(document).find("head").prepend('<style type="text/css">@charset "UTF-8";[ng\\:cloak],[ng-cloak],[data-ng-cloak],[x-ng-cloak],.ng-cloak,.x-ng-cloak,.ng-hide{display:none !important;}ng\\:form{display:block;}</style>');
|
||||
//# sourceMappingURL=angular.min.js.map
|
||||
423
gui/app.js
Normal file
@@ -0,0 +1,423 @@
/*jslint browser: true, continue: true, plusplus: true */
/*global $: false, angular: false */

'use strict';

var syncthing = angular.module('syncthing', []);

syncthing.controller('SyncthingCtrl', function ($scope, $http) {
    var prevDate = 0,
        modelGetOK = true;

    $scope.connections = {};
    $scope.config = {};
    $scope.myID = '';
    $scope.nodes = [];
    $scope.configInSync = true;
    $scope.errors = [];
    $scope.seenError = '';

    // Strings before bools look better
    $scope.settings = [
        {id: 'ListenStr', descr: 'Sync Protocol Listen Addresses', type: 'text', restart: true},
        {id: 'GUIAddress', descr: 'GUI Listen Address', type: 'text', restart: true},
        {id: 'MaxSendKbps', descr: 'Outgoing Rate Limit (KBps)', type: 'number', restart: true},
        {id: 'RescanIntervalS', descr: 'Rescan Interval (s)', type: 'number', restart: true},
        {id: 'ReconnectIntervalS', descr: 'Reconnect Interval (s)', type: 'number', restart: true},
        {id: 'ParallelRequests', descr: 'Max Outstanding Requests', type: 'number', restart: true},
        {id: 'MaxChangeKbps', descr: 'Max File Change Rate (KBps)', type: 'number', restart: true},

        {id: 'ReadOnly', descr: 'Read Only', type: 'bool', restart: true},
        {id: 'AllowDelete', descr: 'Allow Delete', type: 'bool', restart: true},
        {id: 'FollowSymlinks', descr: 'Follow Symlinks', type: 'bool', restart: true},
        {id: 'GlobalAnnEnabled', descr: 'Global Announce', type: 'bool', restart: true},
        {id: 'LocalAnnEnabled', descr: 'Local Announce', type: 'bool', restart: true},
        {id: 'StartBrowser', descr: 'Start Browser', type: 'bool'},
    ];

    function modelGetSucceeded() {
        if (!modelGetOK) {
            $('#networkError').modal('hide');
            modelGetOK = true;
        }
    }

    function modelGetFailed() {
        if (modelGetOK) {
            $('#networkError').modal({backdrop: 'static', keyboard: false});
            modelGetOK = false;
        }
    }

    function nodeCompare(a, b) {
        if (a.NodeID === $scope.myID) {
            return -1;
        }
        if (b.NodeID === $scope.myID) {
            return 1;
        }
        if (a.NodeID < b.NodeID) {
            return -1;
        }
        return a.NodeID > b.NodeID;
    }

    $http.get('/rest/version').success(function (data) {
        $scope.version = data;
    });
    $http.get('/rest/system').success(function (data) {
        $scope.system = data;
        $scope.myID = data.myID;

        $http.get('/rest/config').success(function (data) {
            $scope.config = data;
            $scope.config.Options.ListenStr = $scope.config.Options.ListenAddress.join(', ');

            var nodes = $scope.config.Repositories[0].Nodes;
            nodes.sort(nodeCompare);
            $scope.nodes = nodes;
        });
        $http.get('/rest/config/sync').success(function (data) {
            $scope.configInSync = data.configInSync;
        });
    });

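The controller bootstraps itself from the REST API: it fetches /rest/version and /rest/system, then /rest/config (sorting the first repository's node list with nodeCompare) and /rest/config/sync. As a rough illustration, the same start-up sequence can be replayed outside Angular with the browser fetch API; only the endpoint paths and field names come from the code above, the base URL is an assumption.

```js
// Sketch only, not part of the diff: replay the controller's start-up
// requests against a locally running syncthing GUI.
var base = 'http://127.0.0.1:8080'; // assumed GUI listen address

async function loadInitialState() {
    // Treated as plain text here; the controller stores it as an opaque string.
    var version = await (await fetch(base + '/rest/version')).text();
    var system = await (await fetch(base + '/rest/system')).json();
    var config = await (await fetch(base + '/rest/config')).json();
    var sync = await (await fetch(base + '/rest/config/sync')).json();

    // Same post-processing as the controller: flatten the listen address
    // list and sort nodes so the local node comes first.
    config.Options.ListenStr = config.Options.ListenAddress.join(', ');
    var nodes = config.Repositories[0].Nodes.slice().sort(function (a, b) {
        if (a.NodeID === system.myID) { return -1; }
        if (b.NodeID === system.myID) { return 1; }
        return a.NodeID < b.NodeID ? -1 : 1;
    });

    return {version: version, myID: system.myID, nodes: nodes, configInSync: sync.configInSync};
}
```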
    $scope.refresh = function () {
        $http.get('/rest/system').success(function (data) {
            $scope.system = data;
        });
        $http.get('/rest/model').success(function (data) {
            $scope.model = data;
            modelGetSucceeded();
        }).error(function () {
            modelGetFailed();
        });
        $http.get('/rest/connections').success(function (data) {
            var now = Date.now(),
                td = (now - prevDate) / 1000,
                id;

            prevDate = now;
            $scope.inbps = 0;
            $scope.outbps = 0;

            for (id in data) {
                if (!data.hasOwnProperty(id)) {
                    continue;
                }
                try {
                    data[id].inbps = Math.max(0, 8 * (data[id].InBytesTotal - $scope.connections[id].InBytesTotal) / td);
                    data[id].outbps = Math.max(0, 8 * (data[id].OutBytesTotal - $scope.connections[id].OutBytesTotal) / td);
                } catch (e) {
                    data[id].inbps = 0;
                    data[id].outbps = 0;
                }
                $scope.inbps += data[id].inbps;
                $scope.outbps += data[id].outbps;
            }
            $scope.connections = data;
        });
        $http.get('/rest/need').success(function (data) {
            var i, name;
            for (i = 0; i < data.length; i++) {
                name = data[i].Name.split('/');
                data[i].ShortName = name[name.length - 1];
            }
            data.sort(function (a, b) {
                if (a.ShortName < b.ShortName) {
                    return -1;
                }
                if (a.ShortName > b.ShortName) {
                    return 1;
                }
                return 0;
            });
            $scope.need = data;
        });
        $http.get('/rest/errors').success(function (data) {
            $scope.errors = data;
        });
    };

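The /rest/connections handler above turns the absolute byte counters into per-connection transfer rates: it keeps the totals from the previous poll in $scope.connections, divides the difference by the elapsed time in seconds, and multiplies by 8 to get bits per second, clamping negatives to zero. A small standalone check of that arithmetic (the helper name is mine, not part of the diff):

```js
// Illustrative helper mirroring the rate calculation in the handler above:
// 8 * (bytes now - bytes at previous poll) / seconds elapsed = bits per second.
function bitsPerSecond(prevBytesTotal, curBytesTotal, elapsedSeconds) {
    return Math.max(0, 8 * (curBytesTotal - prevBytesTotal) / elapsedSeconds);
}

// 1 250 000 bytes transferred during a 10 s polling interval is 1 Mbit/s.
console.log(bitsPerSecond(4000000, 5250000, 10)); // 1000000
```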
    $scope.nodeStatus = function (nodeCfg) {
        var conn = $scope.connections[nodeCfg.NodeID];
        if (conn) {
            if (conn.Completion === 100) {
                return 'In Sync';
            } else {
                return 'Syncing (' + conn.Completion + '%)';
            }
        }

        return 'Disconnected';
    };

    $scope.nodeIcon = function (nodeCfg) {
        var conn = $scope.connections[nodeCfg.NodeID];
        if (conn) {
            if (conn.Completion === 100) {
                return 'ok';
            } else {
                return 'refresh';
            }
        }

        return 'minus';
    };

    $scope.nodeClass = function (nodeCfg) {
        var conn = $scope.connections[nodeCfg.NodeID];
        if (conn) {
            if (conn.Completion === 100) {
                return 'success';
            } else {
                return 'primary';
            }
        }

        return 'info';
    };

    $scope.nodeAddr = function (nodeCfg) {
        var conn = $scope.connections[nodeCfg.NodeID];
        if (conn) {
            return conn.Address;
        }
        return '(unknown address)';
    };

    $scope.nodeCompletion = function (nodeCfg) {
        var conn = $scope.connections[nodeCfg.NodeID];
        if (conn) {
            return conn.Completion + '%';
        }
        return '';
    };

    $scope.nodeVer = function (nodeCfg) {
        if (nodeCfg.NodeID === $scope.myID) {
            return $scope.version;
        }
        var conn = $scope.connections[nodeCfg.NodeID];
        if (conn) {
            return conn.ClientVersion;
        }
        return '(unknown version)';
    };

    $scope.nodeName = function (nodeCfg) {
        if (nodeCfg.Name) {
            return nodeCfg.Name;
        }
        return nodeCfg.NodeID.substr(0, 6);
    };

    $scope.saveSettings = function () {
        $scope.configInSync = false;
        $scope.config.Options.ListenAddress = $scope.config.Options.ListenStr.split(',').map(function (x) { return x.trim(); });
        $http.post('/rest/config', JSON.stringify($scope.config), {headers: {'Content-Type': 'application/json'}});
        $('#settingsTable').collapse('hide');
    };

    $scope.restart = function () {
        $http.post('/rest/restart');
        $scope.configInSync = true;
    };

    $scope.editNode = function (nodeCfg) {
        $scope.currentNode = nodeCfg;
        $scope.editingExisting = true;
        $scope.currentNode.AddressesStr = nodeCfg.Addresses.join(', ');
        $('#editNode').modal({backdrop: 'static', keyboard: false});
    };

    $scope.addNode = function () {
        $scope.currentNode = {NodeID: '', AddressesStr: 'dynamic'};
        $scope.editingExisting = false;
        $('#editNode').modal({backdrop: 'static', keyboard: false});
    };

    $scope.deleteNode = function () {
        var newNodes = [], i;

        $('#editNode').modal('hide');
        if (!$scope.editingExisting) {
            return;
        }

        for (i = 0; i < $scope.nodes.length; i++) {
            if ($scope.nodes[i].NodeID !== $scope.currentNode.NodeID) {
                newNodes.push($scope.nodes[i]);
            }
        }

        $scope.nodes = newNodes;
        $scope.config.Repositories[0].Nodes = newNodes;

        $scope.configInSync = false;
        $http.post('/rest/config', JSON.stringify($scope.config), {headers: {'Content-Type': 'application/json'}});
    };

    $scope.saveNode = function () {
        var nodeCfg, done, i;

        $scope.configInSync = false;
        $('#editNode').modal('hide');
        nodeCfg = $scope.currentNode;
        nodeCfg.Addresses = nodeCfg.AddressesStr.split(',').map(function (x) { return x.trim(); });

        done = false;
        for (i = 0; i < $scope.nodes.length; i++) {
            if ($scope.nodes[i].NodeID === nodeCfg.NodeID) {
                $scope.nodes[i] = nodeCfg;
                done = true;
                break;
            }
        }

        if (!done) {
            $scope.nodes.push(nodeCfg);
        }

        $scope.nodes.sort(nodeCompare);
        $scope.config.Repositories[0].Nodes = $scope.nodes;

        $http.post('/rest/config', JSON.stringify($scope.config), {headers: {'Content-Type': 'application/json'}});
    };

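saveSettings, deleteNode and saveNode all follow the same pattern: mutate the in-memory configuration, flag configInSync as false, and POST the complete config object back to /rest/config as JSON; options marked restart: true then take effect only after a POST to /rest/restart. A hedged sketch of that round trip outside Angular (the endpoint paths and headers come from the code above, the base URL is an assumption):

```js
// Sketch only: push a modified configuration and ask syncthing to restart,
// mirroring saveSettings() and restart() above.
var base = 'http://127.0.0.1:8080'; // assumed GUI listen address

async function saveConfigAndRestart(config) {
    // The GUI always posts the whole configuration object, not a patch.
    await fetch(base + '/rest/config', {
        method: 'POST',
        headers: {'Content-Type': 'application/json'},
        body: JSON.stringify(config)
    });
    // Settings flagged restart: true only apply after a restart.
    await fetch(base + '/rest/restart', {method: 'POST'});
}
```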
    $scope.otherNodes = function () {
        var nodes = [], i, n;

        for (i = 0; i < $scope.nodes.length; i++) {
            n = $scope.nodes[i];
            if (n.NodeID !== $scope.myID) {
                nodes.push(n);
            }
        }
        return nodes;
    };

    $scope.thisNode = function () {
        var i, n;

        for (i = 0; i < $scope.nodes.length; i++) {
            n = $scope.nodes[i];
            if (n.NodeID === $scope.myID) {
                return [n];
            }
        }
    };

    $scope.errorList = function () {
        var errors = [];
        for (var i = 0; i < $scope.errors.length; i++) {
            var e = $scope.errors[i];
            if (e.Time > $scope.seenError) {
                errors.push(e);
            }
        }
        return errors;
    };

    $scope.clearErrors = function () {
        $scope.seenError = $scope.errors[$scope.errors.length - 1].Time;
    };

    $scope.friendlyNodes = function (str) {
        for (var i = 0; i < $scope.nodes.length; i++) {
            var cfg = $scope.nodes[i];
            str = str.replace(cfg.NodeID, $scope.nodeName(cfg));
        }
        return str;
    };

    $scope.refresh();
    setInterval($scope.refresh, 10000);
});

function decimals(val, num) {
    var digits, decs;

    if (val === 0) {
        return 0;
    }

    digits = Math.floor(Math.log(Math.abs(val)) / Math.log(10));
    decs = Math.max(0, num - digits);
    return decs;
}

syncthing.filter('natural', function () {
    return function (input, valid) {
        return input.toFixed(decimals(input, valid));
    };
});

syncthing.filter('binary', function () {
    return function (input) {
        if (input === undefined) {
            return '0 ';
        }
        if (input > 1024 * 1024 * 1024) {
            input /= 1024 * 1024 * 1024;
            return input.toFixed(decimals(input, 2)) + ' Gi';
        }
        if (input > 1024 * 1024) {
            input /= 1024 * 1024;
            return input.toFixed(decimals(input, 2)) + ' Mi';
        }
        if (input > 1024) {
            input /= 1024;
            return input.toFixed(decimals(input, 2)) + ' Ki';
        }
        return Math.round(input) + ' ';
    };
});

syncthing.filter('metric', function () {
    return function (input) {
        if (input === undefined) {
            return '0 ';
        }
        if (input > 1000 * 1000 * 1000) {
            input /= 1000 * 1000 * 1000;
            return input.toFixed(decimals(input, 2)) + ' G';
        }
        if (input > 1000 * 1000) {
            input /= 1000 * 1000;
            return input.toFixed(decimals(input, 2)) + ' M';
        }
        if (input > 1000) {
            input /= 1000;
            return input.toFixed(decimals(input, 2)) + ' k';
        }
        return Math.round(input) + ' ';
    };
});

syncthing.filter('short', function () {
    return function (input) {
        return input.substr(0, 6);
    };
});

syncthing.filter('alwaysNumber', function () {
    return function (input) {
        if (input === undefined) {
            return 0;
        }
        return input;
    };
});

syncthing.directive('optionEditor', function () {
    return {
        restrict: 'C',
        replace: true,
        transclude: true,
        scope: {
            setting: '=setting',
        },
        template: '<input type="text" ng-model="config.Options[setting.id]"></input>',
    };
});
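decimals() chooses how many decimal places to keep so that roughly num significant digits survive: it takes the base-10 magnitude of the value and subtracts it from num, clamping at zero; the binary and metric filters then divide by 1024- or 1000-based units and append the prefix. A quick standalone check of that behaviour, restating decimals() so the snippet runs on its own:

```js
// Standalone check of the formatting logic above; decimals() is restated
// verbatim so the snippet runs outside Angular.
function decimals(val, num) {
    if (val === 0) {
        return 0;
    }
    var digits = Math.floor(Math.log(Math.abs(val)) / Math.log(10));
    return Math.max(0, num - digits);
}

// The 'binary' filter scales 1536000 bytes to 1536000 / (1024 * 1024) ≈ 1.46
// and asks decimals(1.46, 2) how many places to keep (2):
var scaled = 1536000 / (1024 * 1024);
console.log(scaled.toFixed(decimals(scaled, 2)) + ' Mi'); // "1.46 Mi"

// Larger magnitudes get fewer places: decimals(146.48, 2) === 0.
console.log(decimals(146.48, 2)); // 0
```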
7
gui/bootstrap/css/bootstrap-theme.min.css
vendored
Executable file
File diff suppressed because one or more lines are too long
7
gui/bootstrap/css/bootstrap.min.css
vendored
Executable file
File diff suppressed because one or more lines are too long
BIN
gui/bootstrap/fonts/glyphicons-halflings-regular.eot
Executable file
Binary file not shown.
BIN
gui/bootstrap/fonts/glyphicons-halflings-regular.svg
Executable file
Binary file not shown. (After: 61 KiB)
BIN
gui/bootstrap/fonts/glyphicons-halflings-regular.ttf
Executable file
Binary file not shown.
BIN
gui/bootstrap/fonts/glyphicons-halflings-regular.woff
Executable file
Binary file not shown.
7
gui/bootstrap/js/bootstrap.min.js
vendored
Executable file
File diff suppressed because one or more lines are too long
Some files were not shown because too many files have changed in this diff.