Compare commits

...

612 Commits

Author SHA1 Message Date
ProactiveServices
c953cdc375 gui: Package attribution and copyright bumps (fixes #3861)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3863
2017-01-10 07:50:11 +00:00
Jakob Borg
c20c17e3c6 gui, man: Update docs & translations 2017-01-10 08:41:14 +01:00
Audrius Butkevicius
1a1e35d998 lib/osutil: Replace IsDir with TraversesSymlink (fixes #3839)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3883
LGTM: calmh
2017-01-10 07:09:31 +00:00
Adel Qalieh
8d2a31e38e lib/model: Remove syncthing-specific files (fixes #3819)
Syncthing adds some hidden files when a folder is added, but there is currently
no equivalent cleanup procedure. This change is conservative so as not to
accidentally cause data loss.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3874
2017-01-07 17:05:30 +00:00
Jakob Borg
fe377a166a authors: Update adelq 2017-01-07 17:56:46 +01:00
Jakob Borg
5770d80bf9 authors: Add adelq 2017-01-07 17:54:41 +01:00
Jakob Borg
7a16dbd31d lib/scanner: Fix previous commit: don't stop scan completely 2017-01-05 15:05:49 +01:00
Jakob Borg
926c88cfc4 lib/scanner: Never, ever descend into symlinks (ref #3857)
On Windows we would descend into SYMLINKD type links when we scanned
them successfully, as we would return nil from the walk function and the
filepath.Walk iterator apparently thought it OK to descend into the
symlinked directory.

With this change we always return filepath.SkipDir no matter what.

Tested on Windows 10 as admin, does what it should.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3875
2017-01-05 13:26:29 +00:00
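
As an illustration of the approach described above, a minimal, hypothetical sketch (not the actual lib/scanner code) of a filepath.Walk callback that refuses to descend into symlinks by always returning filepath.SkipDir for them:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Walk a tree but never descend into symlinks. filepath.Walk uses Lstat,
        // so symlinks show up with os.ModeSymlink set; returning filepath.SkipDir
        // for them prevents the walker from ever entering a symlinked directory.
        err := filepath.Walk(".", func(path string, info os.FileInfo, err error) error {
            if err != nil {
                return err
            }
            if info.Mode()&os.ModeSymlink != 0 {
                fmt.Println("skipping symlink:", path)
                return filepath.SkipDir
            }
            return nil
        })
        if err != nil {
            fmt.Println("walk error:", err)
        }
    }
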
Audrius Butkevicius
29d010ec0e lib/model, lib/weakhash: Hash using adler32, add heuristic in puller
Adler32 is much faster, and the heuristic avoids the obvious cases where it
will not help.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3872
2017-01-04 21:04:13 +00:00
Jakob Borg
920274bce4 lib/db: Don't panic on unknown folder in ListFolders (fixes #3584)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3869
2017-01-04 10:34:52 +00:00
Jakob Borg
2ebd6ad77f lib/scanner: Don't stop byte counter ticks before scan is done 2017-01-03 15:03:32 +01:00
Kudalufi
79dd6918f2 cmd/syncthing: Add support for -auditfile= (fixes #3859)
Adds support for -auditfile=, where the value is "-" for stdout, "--" for stderr, or a
filename. It can be left blank (or left out entirely) for the original
behaviour of creating a timestamped filename.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3860
2017-01-03 08:54:28 +00:00
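
A minimal sketch of how such a flag value could be mapped to a writer, assuming the semantics described above; the auditWriter helper and file naming are illustrative, not Syncthing's actual implementation:

    package main

    import (
        "fmt"
        "io"
        "os"
        "time"
    )

    // auditWriter maps the -auditfile value to an io.Writer: "-" means stdout,
    // "--" means stderr, "" means a timestamped file, anything else is a filename.
    func auditWriter(value string) (io.Writer, error) {
        switch value {
        case "-":
            return os.Stdout, nil
        case "--":
            return os.Stderr, nil
        case "":
            name := fmt.Sprintf("audit-%s.log", time.Now().Format("20060102-150405"))
            return os.OpenFile(name, os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0600)
        default:
            return os.OpenFile(value, os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0600)
        }
    }

    func main() {
        w, err := auditWriter("-")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Fprintln(w, "audit entry")
    }
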
Jakob Borg
79f7f50c4d authors: Add Kudalufi 2017-01-03 09:24:29 +01:00
Jakob Borg
987718baf8 vendor: Update github.com/gogo/protobuf
Also tweaks the proto definitions:

 - [packed=false] on the block_indexes field to retain compat with
   v0.14.16 and earlier.

 - Uses the vendored protobuf package in include paths.

And, "build.go setup" will install the vendored protoc-gen-gogofast.
This should ensure that a proto rebuild isn't so dependent on whatever
version of the compiler and package the developer has installed...

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3864
2017-01-03 00:16:21 +00:00
Jakob Borg
4fb9c143ac authors: Update ProactiveServices 2017-01-02 15:12:57 +01:00
Jakob Borg
ec62888539 lib/connections: Allow on the fly changes to rate limits (fixes #3846)
Also replaces github.com/juju/ratelimit with golang.org/x/time/rate as
the latter supports changing the rate on the fly.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3862
2017-01-02 11:29:20 +00:00
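
The switch matters because golang.org/x/time/rate allows the limit of a live limiter to be changed. A rough, illustrative sketch of a shared limiter being adjusted at runtime:

    package main

    import (
        "context"
        "fmt"

        "golang.org/x/time/rate"
    )

    func main() {
        // A limiter shared by all connections, initially 100 KiB/s with some burst.
        limiter := rate.NewLimiter(rate.Limit(100*1024), 256*1024)

        // On a config change, the rate can be adjusted without recreating the
        // limiter (which the previous library did not allow).
        limiter.SetLimit(rate.Limit(50 * 1024))
        limiter.SetBurst(128 * 1024)

        // A writer would wait for its token allowance before sending n bytes.
        n := 4096
        if err := limiter.WaitN(context.Background(), n); err != nil {
            fmt.Println("limiter:", err)
            return
        }
        fmt.Println("allowed to send", n, "bytes")
    }
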
Jakob Borg
8c34a76f7a man: refresh.sh requires bash 2017-01-01 20:45:52 +01:00
Jakob Borg
6809d38cde lib/protocol: Revert protobuf encoder changes in v0.14.17 (fixes #3855)
The protobuf encoder now produces packed arrays for things like []int32,
which is actually correct according to the proto3 spec. However
Syncthing v0.14.16 and earlier doesn't support this. This reverts the
encoding change, but keeps the updated decoder so that we are both more
compatible with other proto3 implementations and can move to the updated
encoder in the future.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3856
2017-01-01 17:19:00 +00:00
Mark Pulford
69ae4aa024 cmd/syncthing: Avoid Keepalive/GUI refresh race
This avoids unnecessary browser request failures and retries. Eg:
- Browser reuses existing HTTP connection for GUI refresh request
- Server closes connection with request in flight
- Browser retries GET request.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3854
2017-01-01 12:38:31 +00:00
Jakob Borg
8e8b867fba authors: Add mpx 2017-01-01 13:28:33 +01:00
Jakob Borg
0a118d2979 lib/config, lib/model: Temporarily disable bad tests (ref #3834, #3843) 2017-01-01 13:27:18 +01:00
Nathan Morrison
8daaa5d0d2 gui: Populate global changes on load
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3848
2016-12-30 01:33:27 +00:00
Jakob Borg
eb14f85a57 vendor: Update github.com/syndtr/goleveldb 2016-12-28 12:19:14 +01:00
Jakob Borg
c69c3c7c36 lib/sha256: Smoke test the implementation on startup (hello OpenSUSE!) 2016-12-28 12:15:51 +01:00
Jakob Borg
54911d44c5 gui: s/foldersendonly.html/foldertypes.html 2016-12-27 11:29:12 +01:00
Jakob Borg
c765f7be8d gui, man: Update docs and translations 2016-12-26 14:23:55 +01:00
Jakob Borg
44bdaf3ac2 cmd/syncthing: Add -reset-deltas option to reset delta index IDs
Also rename and clarify the description of -reset-database (formerly
-reset).
2016-12-26 13:49:51 +01:00
Jakob Borg
fc16e49cb0 Merge branch 'v0.14.16-hotfix'
* v0.14.16-hotfix:
  gui, man: Update docs & translations
  lib/model: Allow empty subdirs in scan request (fixes #3829)
  lib/model: Don't send symlinks to old devices that can't handle them (fixes #3802)
  lib/model: Accept scan requests of paths ending in slash (fixes #3804)
  gui: Avoid pause between event polls (ref #3527)
2016-12-24 20:12:53 +01:00
Jakob Borg
f5a310ad64 Revert "lib/model: Handle filename conflicts on Windows."
This reverts commit 01e50eb3fa.
2016-12-23 11:10:58 +01:00
Unrud
01e50eb3fa lib/model: Handle filename conflicts on Windows.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3810
LGTM: calmh
2016-12-22 23:04:53 +00:00
Jakob Borg
722b81c6f0 gui, man: Update docs & translations 2016-12-21 19:46:28 +01:00
Jakob Borg
f0efa2b974 lib/model: Allow empty subdirs in scan request (fixes #3829)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3833
2016-12-21 19:45:38 +01:00
Audrius Butkevicius
bab7c8ebbf all: Add folder pause, make pauses permanent (fixes #3407, fixes #215, fixes #3001)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3520
2016-12-21 18:41:25 +00:00
Nathan Morrison
0725e3af38 all: Add a global change list
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3694
2016-12-21 16:35:20 +00:00
Jakob Borg
dd7bb6c4b8 lib/model: Fix tests, clean up pool usage in protocol 2016-12-21 14:53:45 +01:00
Jakob Borg
d41c131364 build: Enable gometalinter "gosimple" check, improve build.go 2016-12-21 14:53:45 +01:00
Jakob Borg
47f22ff3e5 build: Enable gometalinter "unconvert" check 2016-12-21 14:53:45 +01:00
Jakob Borg
744c2e82b5 build: Enable gometalinter "staticcheck" check 2016-12-21 14:53:45 +01:00
Jakob Borg
ead7281c20 build: Enable gometalinter "unused" check 2016-12-21 14:53:45 +01:00
AudriusButkevicius
aa3ef49dd7 lib/model: Fix lock order
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3823
2016-12-21 12:22:18 +00:00
Jakob Borg
5c067661f4 lib/model: Consistently show folder description in startup messages
Since we anyway need the folderConfig for this, I'm skipping the copying
of all its attributes that rwfolder did and just keeping the original
around instead.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3825
2016-12-21 11:23:20 +00:00
Jakob Borg
226da976dc lib/model: Allow empty subdirs in scan request (fixes #3829)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3833
2016-12-21 10:33:07 +00:00
AudriusButkevicius
ba17cc0a11 gui: Show introducedBy (fixes #3809) 2016-12-21 11:01:15 +01:00
AudriusButkevicius
9e0afb7d8a lib/connections: Support setting traffic class (fixes #3790) 2016-12-21 11:01:15 +01:00
AudriusButkevicius
9e7d50bc76 cmd/syncthing: Explain corruption panics (fixes #3689) 2016-12-21 11:01:15 +01:00
Jakob Borg
d7d5687faa lib/protocol: Warnln should have been Warnf 2016-12-21 10:43:12 +01:00
Jakob Borg
21eb098dd2 vendor: Update github.com/minio/sha256-simd, CPU detection (Linux) 2016-12-20 09:20:28 +01:00
Jakob Borg
2f770f8bfb lib/config, lib/protocol: Improve folder description with empty label 2016-12-19 10:12:06 +01:00
Jakob Borg
f09698d845 gui: Add missing strings to lang-en.json 2016-12-18 22:29:43 +01:00
Jakob Borg
3fbcb024a7 gui: Update link to send only documentation page 2016-12-18 22:29:43 +01:00
Jakob Borg
b8c1c0e048 cmd/syncthing: Enable better crypto, print negotiated cipher suite
This adds support for AES_256_GCM_SHA384 (in there since Go 1.5, a bit
of a shame we missed it) and ChaCha20-Poly1305 (if built with Go 1.8;
ignored on older Gos).

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3822
2016-12-18 21:07:44 +00:00
Jakob Borg
2d47242d54 jenkins: Don't clean out when building with old Go (nukes coverage data) 2016-12-18 19:58:46 +01:00
Jakob Borg
66a7829eee jenkins: Also try build with old Go version, if available 2016-12-18 19:25:27 +01:00
Jakob Borg
9c67bd2550 lib/connections: Fix port fixup in Go 1.8 (fixes #3817)
The test for the error string is fragile, and the error string changed
in Go 1.8 so the relevant part is no longer a prefix. This covers it
with a test though, so it should be fine in the future as well.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3818
2016-12-18 11:28:18 +00:00
Jakob Borg
f67c5a2fd6 lib/model: Don't send symlinks to old devices that can't handle them (fixes #3802)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3811
2016-12-17 19:48:33 +00:00
Jakob Borg
0437f6dd66 lib/model: Don't send symlinks to old devices that can't handle them (fixes #3802) 2016-12-17 12:39:22 +01:00
Jakob Borg
11b35d650d lib/model: Accept scan requests of paths ending in slash (fixes #3804)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3805
2016-12-17 12:39:22 +01:00
Jakob Borg
b279e261a1 gui: Avoid pause between event polls (ref #3527)
We lose important events due to the frequency of ItemStarted /
ItemFinished events.
2016-12-17 12:39:22 +01:00
Jakob Borg
263402f80a cmd/stcli: Add copyright headers to satisfy check 2016-12-17 12:28:59 +01:00
Jakob Borg
920a83ec7a cmd/stcli: Fix metalint ineffasign complaint 2016-12-17 10:51:48 +01:00
Jakob Borg
3c2ac3522c cmd/stcli: Update for new folder type 2016-12-17 01:44:18 +01:00
Jakob Borg
9fdaa637a8 vendor: Add github.com/AudriusButkevicius/cli 2016-12-17 01:33:17 +01:00
Jakob Borg
81a9d7f2b9 cmd/stcli: Make it build again 2016-12-17 01:33:17 +01:00
Audrius Butkevičius
d8d3f05164 cmd/stcli: Import from syncthing-cli repository 2016-12-17 01:33:17 +01:00
Jakob Borg
653be136ee gui: Update lang-en.json 2016-12-17 00:59:25 +01:00
Heiko Zuerker
398c356f22 lib/model: Clarify master terminology (fixes #2679)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3793
2016-12-16 22:23:35 +00:00
Audrius Butkevicius
542b76f687 lib/model: Moar sleep
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3807
2016-12-16 12:05:27 +00:00
Jakob Borg
abb8a1914a lib/model: Accept scan requests of paths ending in slash (fixes #3804)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3805
2016-12-16 11:21:22 +00:00
Jakob Borg
163d335078 gui: Update lang-en.json 2016-12-16 09:42:13 +01:00
Audrius Butkevicius
0582836820 lib/model, lib/scanner: Efficient inserts/deletes in the middle of the file
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3527
2016-12-14 23:30:29 +00:00
Jakob Borg
bb15776ae6 gui: Avoid pause between event polls (ref #3527)
We lose important events due to the frequency of ItemStarted /
ItemFinished events.
2016-12-14 10:31:16 +01:00
Jakob Borg
dde9d4c9eb lib/rc: Remove pause to aggregate events (ref #3527) 2016-12-14 10:24:36 +01:00
Jakob Borg
1ef75be1c6 gui, man: Update docs & translations 2016-12-13 11:29:40 +01:00
Jakob Borg
3582783972 lib/model, lib/osutil: Verify target directory before pulling / requesting
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3798
2016-12-13 10:24:10 +00:00
Jakob Borg
5070d52f2f lib/model: Add benchmark for model.Request() 2016-12-09 23:14:56 +01:00
Jakob Borg
7b07ed6580 lib/model, lib/protocol, lib/scanner: Include symlink target in index, pull symlinks synchronously
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3792
2016-12-09 18:02:18 +00:00
Han Boetes
f6a2b6252a gui: Tweak wording (fixes #3769)
Skip-check: authors

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3796
2016-12-09 17:16:29 +00:00
Jakob Borg
a9b03de99a gui, lib/db: Correct space accounting of symlinks, for "out of sync" status 2016-12-09 10:38:36 +01:00
Jakob Borg
a7f7058636 vendor: Update github.com/gobwas/glob (bugfix) 2016-12-07 09:25:58 +01:00
Lars K.W. Gohlke
8ce9b026e9 lib/model: Minor cleanups
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3765
2016-12-06 08:54:04 +00:00
Audrius Butkevicius
0dcf2f1bc8 cmd/strelaysrv: Use legacy dial (fixes #3753)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3784
2016-12-02 22:45:08 +00:00
Audrius Butkevicius
99922feb3b gui: Disable device removal when we know it will be reintroduced
Skip-check: pr-build-windows

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3762
2016-12-02 21:07:02 +00:00
Jakob Borg
2fd1dca905 lib/connections: Fix odd logging, forgot to call function 2016-12-02 12:56:14 +01:00
Jakob Borg
e3cf718998 lib/ignore: Add central check for internal files, used in scanning, pulling and requests
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3779
2016-12-01 14:00:11 +00:00
Stefan Kuntz
7157917a16 etc: Updated ufw firewall application preset with default GUI port
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3774
2016-12-01 12:36:15 +00:00
Jakob Borg
3266aae1c3 lib/protocol: Apply input filtering on file names
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3775
2016-12-01 12:35:32 +00:00
Jakob Borg
63194a37f6 lib/model: Double check results in filepath.Join where needed
Wherever we have untrusted relative paths, make sure they are not
escaping their folder root.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3776
2016-12-01 12:35:11 +00:00
Unrud
cabe94552a lib/model: Prevent collisions in the progressemitter registry
Using filepath.Join can cause collisions. The folder ID could be something
like ".." or "../..".

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3778
2016-12-01 12:34:20 +00:00
Jakob Borg
48a229a0cd lib/model: Temp names from all platforms should be recognized as such
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3777
LGTM: AudriusButkevicius
2016-11-30 21:23:24 +00:00
Jakob Borg
e4db86836b lib/model: Locking in the request test 2016-11-30 13:11:06 +01:00
Jakob Borg
913a85c571 lib/model: Add simple file syncing test
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3772
2016-11-30 09:32:28 +00:00
Jakob Borg
ed4f6fc4b3 lib/connections, lib/model: Connection service should expose a single interface
Makes testing easier, which we'll need

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3771
2016-11-30 07:54:20 +00:00
Jakob Borg
9da422f1c5 gui, man: Update docs & translations 2016-11-29 11:56:02 +01:00
Stefan Tatschner
ab1739ba34 cmd/syncthing: Trigger usage message on extra CLI parameters
fixes: #3690

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3763
2016-11-27 11:21:05 +00:00
Jakob Borg
fc1430aa92 lib/fs: The interface and basicfs
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3748
2016-11-24 12:07:14 +00:00
Jakob Borg
3cde608eda lib/db: Fix ineffassign lint issue 2016-11-24 12:08:44 +01:00
Jakob Borg
2898552f4b build: Setup should install deadcode metalinter 2016-11-24 12:05:04 +01:00
Jakob Borg
911c148c71 build: Improve setup, add metalint ineffasign 2016-11-24 11:33:43 +01:00
Jakob Borg
91568a173a lib/model: Remove ineffectual assignment in test 2016-11-24 11:33:27 +01:00
Jakob Borg
e57f5499a1 lib/sync: Remove unused struct field 2016-11-24 11:30:55 +01:00
Jakob Borg
9abb7b71a9 lib/osutil: Fix lint warning on error formatting (fixes #3760) 2016-11-24 11:20:51 +01:00
Jakob Borg
724c354d62 cmd/stdiscosrv: Fix lint warning on Context keys (fixes #3760) 2016-11-24 11:20:51 +01:00
Jakob Borg
c44779094d build: Setup should download latest version of linters etc 2016-11-24 11:20:51 +01:00
Wulf Weich
eeedab4091 gui: bottom nav always behind dropdown (fixes #3758)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3759
2016-11-23 17:03:43 +00:00
Jakob Borg
8559e20237 lib/osutil: Don't chmod in atomic file creation (fixes #2472)
Instead, trust (and test) that the temp file has appropriate permissions
from the start. The only place where this changes our behavior is for
ignores which go from 0644 to 0600. I'm OK with that.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3756
2016-11-23 14:06:08 +00:00
Jakob Borg
26730eb083 lib/model: Fix test that relies on ignore reloading 2016-11-23 14:42:29 +01:00
Simon Frei
4160ce674d model: consistently use cfg when referring to config instance and not package
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3755
2016-11-22 23:14:20 +00:00
Jakob Borg
2dbeea21c4 lib/ignore: Don't slow down tests by sleeping 2016-11-22 22:44:04 +01:00
Jakob Borg
a2b8485a89 lib/ignore: Fast reload of unchanged ignores (fixes #3394)
This changes the "seen" map that we're anyway keeping around to track
the modtimes of loaded files instead. When doing a Load() we check that
1) the file we are loading is in the modtime set, and 2) that none of
the files in the modtime set have changed modtimes. If that's the case
we do a quick return without parsing anything or clearing the cache.

This required adding two one-second sleeps in the tests to make sure
the modtimes were updated when we expect cache reloads, because I'm on a
crappy filesystem with one second timestamp granularity. That also
proves it works...

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3754
2016-11-22 21:30:45 +00:00
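
A simplified, hypothetical sketch of the idea (the cache type and unchanged helper are illustrative, not the actual lib/ignore code): Load() can return early when the file is known and none of the recorded modtimes have changed.

    package ignore

    import (
        "os"
        "time"
    )

    // cache tracks the modtime of every ignore file that was parsed.
    type cache struct {
        modtimes map[string]time.Time
    }

    // unchanged reports whether file is already loaded and none of the files
    // that went into the cache have changed modtimes; if so Load() can return
    // early without re-parsing or clearing the pattern cache.
    func (c *cache) unchanged(file string) bool {
        if _, ok := c.modtimes[file]; !ok {
            return false
        }
        for name, seen := range c.modtimes {
            info, err := os.Stat(name)
            if err != nil || !info.ModTime().Equal(seen) {
                return false
            }
        }
        return true
    }
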
Jakob Borg
5bb74ee61c gui, man: Update docs & translations 2016-11-22 09:32:57 +01:00
Jakob Borg
462fde5e7d cmd/syncthing: Make the default folder default again
The current way is quite confusing for new users - we create a default
folder, but it's not usable with the default folder created somewhere
else. Instead, when setting up for the first time with two devices, the
default folder must be removed and recreated on one of them. This comes
up on IRC and the forum now and then.

I think this matches expectations better.

Another alternative would be to remove it entirely (not create a default
folder), but then we should also add some guidance in the UI on how to
proceed.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3751
2016-11-22 08:18:43 +00:00
Unrud
f1e83a57cd lib/osutil: Remove unnecessary fsync in Copy()
Fsyncing the file has a small performance penalty and seems unnecessary. The
file will be fsynced anyway, when the changes are committed to the database.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3749
2016-11-22 07:59:54 +00:00
Jakob Borg
cc9a9fb390 lib/model, lib/protocol: Add Folder.Description() for logging (ref #3741) 2016-11-22 08:36:14 +01:00
Jakob Borg
8fbcceb742 authors: Add further Unrud address 2016-11-22 08:14:22 +01:00
Roman Zaynetdinov
d3a251e6d9 lib/model: Log folder IDs and labels (fixes #3724)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3741
2016-11-21 20:09:18 +01:00
Jakob Borg
be80b26c18 authors: Add zaynetro 2016-11-21 20:08:31 +01:00
Unrud
1574b7d834 lib/model: Add fsync of files and directories, option to disable (fixes #3711) 2016-11-21 18:09:51 +01:00
Jakob Borg
51e10e344d Add Unrud 2016-11-21 17:59:44 +01:00
kwhite17
0d55d8c5b0 gui: Convert URLs in warning messages to HTML links (fixes #3241)
Skip-check: metalint (annoying timeout)

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3747
2016-11-21 08:27:44 +00:00
Jakob Borg
1392589d36 authors: Add kwhite17 2016-11-21 09:12:03 +01:00
Jakob Borg
548a324256 lib/connections: Slow down failing listeners
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3745
2016-11-19 12:37:14 +00:00
Jakob Borg
a8a0bc356a lib/model: Minor cleanup to not fondle cfg.Raw things in handleDeintroductions
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3739
2016-11-17 08:56:55 +00:00
Jakob Borg
faee1d5a8d lib/model: Fix locking around introduction handling (fixes #3737)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3738
2016-11-17 08:50:24 +00:00
Jakob Borg
3088dac33b lib/model: Clean up generateClusterConfig, fix spurious test failure by sorting 2016-11-17 07:45:45 +01:00
Jakob Borg
2641062c17 gui, man: Update docs & translations 2016-11-15 07:23:48 +01:00
Jakob Borg
95c738ea28 lib/protocol: Serialize the all zeroes device ID to the empty string
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3734
2016-11-15 06:22:36 +00:00
Jakob Borg
562d2f67a6 snapcraft: Point home and config dir towards non-versioned snap home (fixes #3730) 2016-11-14 19:06:05 +01:00
Ben Schulz
ba6aff4a1b gui: Use icons and tooltips for folder size info (fixes #3710)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3731
2016-11-13 13:56:07 +00:00
Audrius Butkevicius
bb23e3940e cmd/strelaysrv: Use listen address for outgoing HTTP requests (fixes #3682) 2016-11-13 09:32:05 +01:00
Audrius Butkevicius
94e4370c7e cmd/strelaysrv: Outbox will get GCed (fixes #3718) 2016-11-13 09:32:05 +01:00
Audrius Butkevicius
38d28c3f4a lib/relay: Close invitation channel in all error cases (fixes #3726) 2016-11-13 09:32:05 +01:00
Audrius Butkevicius
f60b424d70 lib/config: Raw() -> RawCopy() 2016-11-13 09:29:35 +01:00
Audrius Butkevicius
a1a91d5ef4 lib/model: Introducer can remove stuff it introduced (fixes #1015)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3522
2016-11-13 09:29:33 +01:00
Jakob Borg
bfb48b5dde jenkins: Clean should remove old snaps 2016-11-12 10:08:13 +01:00
Jakob Borg
2860813a8e build: Set snap grade to "stable" for releases 2016-11-12 09:47:57 +01:00
Jakob Borg
72538e350d build: Snap versions should not have initial "v"
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3728
2016-11-12 08:36:19 +00:00
Jakob Borg
59f3d1445f Revert "lib/model: Introducer can remove stuff it introduced (fixes #1015)"
This reverts commit 0b88cf1d03.
2016-11-12 08:38:29 +01:00
Audrius Butkevicius
0b88cf1d03 lib/model: Introducer can remove stuff it introduced (fixes #1015)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3522
2016-11-11 15:54:25 +01:00
Audrius Butkevicius
56e2ba29d0 lib/config: Subscribers get a copy of the config
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3722
2016-11-11 14:52:23 +00:00
Jakob Borg
6ec9b84674 test: Fix test config 2016-11-09 09:02:55 +08:00
Leo Arias
afd15392b1 build: Build snaps for ARM
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3717
2016-11-09 00:52:33 +00:00
Jakob Borg
ae4cc94a9d lib/model: Fix locking order in Availability() (fixes #3634)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3714
2016-11-08 06:38:50 +00:00
Jakob Borg
3f9b75b7b3 Revert "lib/model: Introducer can remove stuff it introduced (fixes #1015)"
This reverts commit ec2b097313.
2016-11-08 14:27:32 +08:00
Audrius Butkevicius
ec2b097313 lib/model: Introducer can remove stuff it introduced (fixes #1015)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3522
2016-11-08 00:40:48 +08:00
Audrius Butkevicius
caaab462bc lib/sync: Fix broken build 2016-11-05 02:31:52 +00:00
Audrius Butkevicius
da413b823b lib/sync: Add option for sasha-s/go-deadlock 2016-11-05 02:24:53 +00:00
Audrius Butkevicius
14937e7dd2 build: Fix proto builder on Windows 2016-11-03 22:06:51 +00:00
Audrius Butkevicius
3418497f3d lib/sync: Log everything... 2016-11-03 21:33:33 +00:00
Stefan Kuntz
e408f1061a etc: Added ufw firewall application preset
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3703
2016-11-03 15:46:25 +00:00
佛跳墙
c08fe4e2c5 gui: Remove erroneous right parenthesis
Skip-check: authors

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3699
2016-11-02 13:03:57 +00:00
Jakob Borg
b7e21984a1 gui, man: Update docs & translations 2016-11-01 11:26:25 +01:00
Audrius Butkevicius
7fba8cf759 lib/sync: Print all lockers, add holder to RWMutex 2016-10-30 00:17:25 +01:00
Jakob Borg
0296c23685 lib/protocol: Use DeviceID in protocol messages, with custom marshalling
This makes the device ID a real type that can be used in the protobuf
schema. That avoids the juggling back and forth from []byte in a bunch
of places and simplifies the code.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3695
2016-10-29 21:56:24 +00:00
Jakob Borg
1cdfef4d6a script: Missed a newline in the commit-msg hook output 2016-10-27 21:47:14 +02:00
Jakob Borg
cead20ec91 script: Add commit message check hook 2016-10-27 21:42:05 +02:00
MikeLund
74dd051d51 all: Update docs.s.n links to use https
Skip-check: authors

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3691
2016-10-27 17:02:19 +00:00
Jakob Borg
5473285010 gui: Add help link for sync protocol listen addresses 2016-10-26 21:16:53 +02:00
Jakob Borg
4b3adfa21c vendor: Update gobwas/glob to fix question mark handling 2016-10-23 15:47:31 +02:00
Simon Frei
7c37301c91 lib/ignore: Add directory separator to glob.Compile call
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3674
LGTM: calmh
2016-10-21 07:33:40 +00:00
Jakob Borg
d9040f8038 authors: Add imsodin 2016-10-21 15:23:17 +08:00
Jakob Borg
f41606c0b0 jenkins: Build snap
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3678
2016-10-20 09:31:07 +00:00
Leo Arias
31d9750579 build: Add build method for snapcraft
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3636
2016-10-20 09:16:30 +00:00
Jakob Borg
173fb97832 authors: Add elopio 2016-10-20 16:38:14 +08:00
Wulf Weich
81248c3f56 gui: resurrect old dark theme as black theme
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3676
2016-10-19 09:06:27 +00:00
Frank Isemann
2914a0a0a5 gui: Slightly lighten Text for Disconnected/Scanning in Dark Theme 2016-10-19 08:27:17 +08:00
Audrius Butkevicius
815588daba lib/sync, lib/model: Capture locker routine ID, print locker details on deadlock 2016-10-18 21:00:01 +01:00
Jakob Borg
6152eb6d6d gui: Hide "Failed Items" unless there is an actual failure (fixes #3647)
Since delta indexes were introduced, it's perfectly normal for us to need files that are
currently unavailable due to devices being disconnected. This doesn't
imply a failure, so we should not show the "Failed Items" line and
corresponding eternal spinner (since it would never be filled in, since
there is no failure).

We still show state "Out of Sync" (correct) and the list of files we
need (correct).
2016-10-17 23:57:58 +02:00
Jakob Borg
ff0ebc196c gui: Improve display of local size, ignore pattern status (fixes #3623)
The discrepancy between global and local sizes is fine and expected in
the presence of ignores. This just moves the "we have ignore patterns"
indication to the actual local size metric, as an explanation of why it
may differ from the global size...
2016-10-17 23:57:58 +02:00
Jakob Borg
4e8c8d7e2c cmd/syncthing, lib/db, lib/model: Track more detailed file/dirs/links/deleted counts 2016-10-17 23:57:43 +02:00
Jakob Borg
b8a90b7eaa lib/model: Remove chatty delta index confirmation
This had its moment when making sure it worked initially, but nowadays
there is no point discussing the obvious on every folder on every
connect.
2016-10-17 09:21:58 +02:00
Jakob Borg
60e7ca4a4c gui, man: Update docs & translations 2016-10-17 09:12:16 +02:00
Dale Visser
3a3c8ec6b8 readme: Add Core Infrastructure Initiative Badge
Skip-check: authors, pr-build

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3515
2016-10-17 07:03:52 +00:00
Benny Ng
05c37e58c1 lib/osutil: Prevent infinite Glob recursion (fixes #3577)
Skip-check: authors

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3665
2016-10-12 20:55:38 +00:00
Jakob Borg
d203dd4770 cmd/syncthing: go fmt traceback.go 2016-10-12 20:37:26 +02:00
Jakob Borg
29ccf10d0b cmd/syncthing: Only SetTraceback on Go 1.7+ (fixes #3664) 2016-10-10 17:16:18 +02:00
Jakob Borg
b49df09fec build: Trivial perf improvement of shouldRebuildAssets 2016-10-09 14:28:20 +02:00
Jakob Borg
ce3e117976 cmd/stdiscosrv: rm 'cmd/stdiscosrv/stdiscosrv' (fixes #3663) 2016-10-09 14:26:43 +02:00
Audrius Butkevicius
309795198d cmd/strelaypoolsrv: Remove hostnames from statusAddr 2016-10-08 10:03:53 +01:00
Audrius Butkevicius
7db00132b2 cmd/strelaysrv: Fix sorting zeros versus undefined 2016-10-07 21:24:47 +01:00
Audrius Butkevicius
76a2862b7a authors: Add Xavier O. 2016-10-07 21:24:47 +01:00
Jakob Borg
215503b4f7 cmd/syncthing: Delay browser start until the GUI is ready (fixes #3619) 2016-10-07 12:10:26 +09:00
Jakob Borg
54d4010f1a gui, man: Update docs and translations 2016-10-07 11:09:19 +09:00
Jakob Borg
cb1b53cfb5 gui: Update English base strings (fixes #3638) 2016-10-07 11:07:12 +09:00
Xav
96e8f94833 skip-check: authors
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3650
2016-10-05 19:13:47 +00:00
MikeLund
1e54a3e801 jenkins: use https when downloading docs (fixes #3651)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3652
2016-10-05 12:17:35 +00:00
Tim Howes
fe9c2b9857 lib/ignore: Match directory contents for patterns ending in / (fixes #3639)
Appends "**" to patterns with a terminal slash, so that directory
contents are ignored, but not the directory itself.
2016-10-04 08:12:55 +09:00
Jakob Borg
2a2177e7fa authors: Add timhowes 2016-10-04 08:11:57 +09:00
Jakob Borg
d1d565e58b cmd/syncthing: Localhost header comparison should be case insensitive 2016-10-03 17:34:13 +09:00
Peter Hoeg
891ff383ec etc/linux-systemd: Remove bogus dependency on networking for user unit
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3627
2016-10-03 03:49:00 +00:00
Nathan Morrison
d322ebd0b9 Add API service for local disk changes
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3626
LGTM: calmh, AudriusButkevicius
2016-09-28 15:54:13 +00:00
Peter Hoeg
50190236bb Ignore pkill error on resume
The ```syncthing-resume.service``` unit will show as failed if there are no
syncthing processes running after resume, but this can be safely ignored
because it makes no difference.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3630
2016-09-28 10:21:15 +00:00
Jakob Borg
d5a0f91cb4 cmd/syncthing: Restore useful levels of traceback on panic 2016-09-26 21:14:17 +02:00
Jakob Borg
467c1b26fb cmd/syncthing, lib/config: Log errors replacing or saving config (ref #3567) 2016-09-24 09:59:09 +02:00
Jakob Borg
3cabecda04 lib/upnp: Correct the result deduplication mechanism (fixes #3578)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3618
2016-09-24 07:33:56 +00:00
Jakob Borg
6d3160b0ab gui: Folder is out of sync when it needs deletes, too (fixes #3588) 2016-09-24 09:11:38 +02:00
Jakob Borg
d328e0fb75 cmd/syncthing: Add selectable sha256 package (fixes #3613, fixes #3614)
This adds autodetection of the fastest hashing library on startup, thus
handling the performance regression. It also adds an environment
variable to control the selection, STHASHING=standard (Go standard
library version, avoids SIGILL crash when the minio library has bugs on
odd CPUs), STHASHING=minio (to force using the minio version) or unset
for the default autodetection.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3617
2016-09-23 19:33:54 +00:00
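
A rough sketch of how such a selection could work, assuming both packages expose the crypto/sha256-style API (hash.Hash via New()); the benchmark here is deliberately simplistic and only illustrative:

    package main

    import (
        stdsha256 "crypto/sha256"
        "fmt"
        "hash"
        "os"
        "time"

        miniosha256 "github.com/minio/sha256-simd"
    )

    // pick times a few rounds of hashing with each implementation and returns
    // the faster constructor, unless STHASHING forces a choice.
    func pick() func() hash.Hash {
        switch os.Getenv("STHASHING") {
        case "standard":
            return stdsha256.New
        case "minio":
            return miniosha256.New
        }
        buf := make([]byte, 1<<20)
        bench := func(newHash func() hash.Hash) time.Duration {
            t0 := time.Now()
            h := newHash()
            for i := 0; i < 16; i++ {
                h.Write(buf)
            }
            h.Sum(nil)
            return time.Since(t0)
        }
        if bench(miniosha256.New) < bench(stdsha256.New) {
            return miniosha256.New
        }
        return stdsha256.New
    }

    func main() {
        fmt.Printf("selected implementation: %T\n", pick()())
    }
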
Jakob Borg
5f01afb7ea build: No need for outdated go2xunit 2016-09-18 21:02:42 +02:00
fti7
6fe2fa5ff0 gui: Slightly lighten the dark theme
Skip-check: authors

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3605
2016-09-18 13:47:36 +00:00
Jakob Borg
b371b1fe34 lib/versioner: Test both spaces and parens in ext versioner paths
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3610
2016-09-18 12:24:55 +00:00
Jakob Borg
90c0a39df8 lib/versioner: Test for external versioner
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3609
2016-09-17 20:34:50 +00:00
Lars K.W. Gohlke
70c5a5dff1 lib/versioner: Rename versioner_test to simple_test
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3603
2016-09-16 11:01:43 +00:00
Jakob Borg
da0b7cc7f2 lib/model: Correct lock taking order in ConnectionStats (fixes #3596)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3597
2016-09-14 19:38:55 +00:00
Jakob Borg
139e9b144e lib/config: Fix tests for changes in previous commit 2016-09-13 22:20:22 +02:00
Jakob Borg
77c0a19451 vendor: Update github.com/d4l3k/messagediff 2016-09-13 22:20:22 +02:00
Jakob Borg
58cbd19742 vendor: Update golang.org/cznic/... 2016-09-13 22:20:22 +02:00
Jakob Borg
9bf6917ae8 vendor: Update golang.org/x/crypto/... 2016-09-13 22:20:22 +02:00
Jakob Borg
897cca0a82 vendor: Update golang.org/x/net/... 2016-09-13 22:20:22 +02:00
Jakob Borg
6af09c61be vendor: Update github.com/thejerf/suture 2016-09-13 22:20:22 +02:00
Jakob Borg
c3c7798446 vendor: Update github.com/gobwas/glob 2016-09-13 22:20:22 +02:00
Jakob Borg
06dc91fadf vendor: Update github.com/syndtr/goleveldb 2016-09-13 22:20:22 +02:00
Jakob Borg
526cab538a jenkins: Don't fetch --prune unnecessarily, print build version on Windows 2016-09-13 22:18:55 +02:00
Jakob Borg
81d19a00aa vendor: Add github.com/cznic/lldb and friends (new recursive dependency) 2016-09-13 21:57:19 +02:00
Jakob Borg
ca755ec9e0 vendor: Add golang.org/x/net/bpf 2016-09-13 21:56:33 +02:00
Jakob Borg
4f6206cb2d build: Simpler creation of Debian packages
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3591
2016-09-12 12:21:07 +00:00
Jakob Borg
7fb53ec954 lib/config: Correct name of discovery-v6-4 server 2016-09-12 11:30:06 +02:00
Jakob Borg
d8b5070ca8 lib/config: Update default set of discovery servers 2016-09-12 09:55:45 +02:00
Jakob Borg
5e99d38412 all: Use github.com/minio/sha256-simd
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3581
2016-09-09 09:57:51 +00:00
Laurent Etiemble
3990014073 cmd/syncthing: Conditionally enable CORS
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3541
LGTM: AudriusButkevicius
2016-09-06 22:16:50 +00:00
Jakob Borg
3e51206a6b build, jenkins: Jenkins version tag should be same as when building manually 2016-09-06 13:02:17 +02:00
Aranjedeath
7569b75d61 cmd/strelaysrv: Correct go get command in README
Skip-check: authors

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3564
2016-09-04 21:06:30 +00:00
Jakob Borg
8fcabac518 jenkins: Add batch file for Windows 2016-09-04 16:43:56 +02:00
Jakob Borg
abb0cfde72 jenkins: Add scripts for automated builds (Linux & Mac) 2016-09-04 15:30:16 +02:00
Jakob Borg
7990ffcc60 cmd/syncthing: Copy config on upgrade, instead of renaming (fixes #3525)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3560
2016-09-03 21:29:32 +00:00
Jakob Borg
49910a1d85 lib/config, cmd/syncthing: Enforce localhost only connections
When the GUI/API is bound to localhost, we enforce that the Host header
looks like localhost. This can be disabled by setting
insecureSkipHostCheck in the GUI config.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3558
2016-09-03 08:33:34 +00:00
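
A minimal sketch of the kind of check involved, written as generic net/http middleware (illustrative only, not the actual lib/config or cmd/syncthing code):

    package main

    import (
        "net"
        "net/http"
        "strings"
    )

    // localhostOnly rejects requests whose Host header does not look like
    // localhost, protecting a GUI that is bound to 127.0.0.1 against
    // DNS-rebinding style requests.
    func localhostOnly(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            host := r.Host
            if h, _, err := net.SplitHostPort(host); err == nil {
                host = h
            }
            host = strings.Trim(host, "[]") // bare IPv6 literals keep their brackets
            switch strings.ToLower(host) {
            case "localhost", "127.0.0.1", "::1":
                next.ServeHTTP(w, r)
            default:
                http.Error(w, "Host check error", http.StatusForbidden)
            }
        })
    }

    func main() {
        http.ListenAndServe("127.0.0.1:8384", localhostOnly(http.DefaultServeMux))
    }
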
Jakob Borg
46a143e80e lib/model: Handle deleted-then-ignored files (fixes #3502)
When files that were previously marked as deleted became ignored, we
used to do nothing at all. This changes that behavior to set the Invalid
bit (that we should rename to Ignored). This then becomes an update to
other devices that they should not trust our knowledge about the file in
question.

Read this diff without whitespace...

Tested by
- creating a bunch of files on s1
- letting them sync to s2
- shutting down s2
- deleting the files on s1 and rescanning
- adding the files to .stignore on s1 and rescanning
- starting up s2 and letting it sync
- observing the files are not deleted on s2, and it considers itself up
  to date.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3557
2016-09-02 13:23:24 +00:00
Jakob Borg
69b7f26e4c lib/model, cmd/syncthing: Also account for deleted files in folder summary events (ref #3496)
This should probably be reflected in the GUI somewhere as well...
2016-09-02 10:45:39 +02:00
Jakob Borg
5b37d0356c lib/model, gui: Correct completion percentages when there are lots of deletes (fixes #3496)
We used to consider deleted files & directories 128 bytes large. After
the delta indexes change a bug slipped in where deleted files would be
weighted according to their old non-deleted size. Both ways are
incorrect (but the latest change made it worse), as if there are more
files deleted than remaining data in the repo the needSize can be
greater than the globalSize, resulting in a negative completion
percentage.

This change makes it so that deleted items are zero bytes large, which
makes more sense. Instead we expose the number of files that we need to
delete as a separate field in the Completion() result, and hack the
percentage down to 95% complete if it was 100% complete but we need to
delete files. This latter part is sort of ugly, but necessary to give
the user some sort of feedback.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3556
2016-09-02 06:45:46 +00:00
Jakob Borg
1188ebbb7b gui, man: Update docs & translations 2016-08-23 10:42:09 +02:00
Audrius Butkevicius
76b903b2e0 lib/upgrade: Cleanup failed upgrades (fixes #3500, fixes #3530)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3535
2016-08-23 06:53:39 +00:00
Audrius Butkevicius
be38c2111f cmd/strelaysrv: Add uPNP support, ability to set listen protocol (fixes #3503, fixes #3505, fixes #3506) 2016-08-23 08:43:27 +02:00
Audrius Butkevicius
1de787fab8 cmd/strelaypoolsrv: Ability to select listen protocol 2016-08-23 08:42:57 +02:00
Audrius Butkevicius
81f683a61c cmd/stdiscosrv: Generate keys if missing on startup (fixes #3511) 2016-08-23 08:41:49 +02:00
Audrius Butkevicius
db6f68d031 cmd/stdiscosrv: Use UTC in database timestamps (fixes #3509) 2016-08-23 08:41:15 +02:00
Jakob Borg
d0a1c805e9 cmd/syncthing: uintptr may not be stored in a variable
Must do the whole uintptr(unsafe.Pointer(&whatever)) directly in the
call to syscall.Call, as per https://golang.org/pkg/unsafe/.
2016-08-22 18:24:26 +02:00
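
For reference, a Windows-only sketch of the rule being applied; the GetDiskFreeSpaceExW example is hypothetical and only shows that the uintptr(unsafe.Pointer(...)) conversions must be written inline in the Call arguments:

    //go:build windows

    package diskfree

    import (
        "syscall"
        "unsafe"
    )

    var (
        kernel32            = syscall.NewLazyDLL("kernel32.dll")
        getDiskFreeSpaceExW = kernel32.NewProc("GetDiskFreeSpaceExW")
    )

    // free returns the number of bytes available to the caller on the volume
    // containing path.
    func free(path string) (uint64, error) {
        p, err := syscall.UTF16PtrFromString(path)
        if err != nil {
            return 0, err
        }
        var avail, total, totalFree uint64
        // The uintptr(unsafe.Pointer(...)) expressions must be written inline
        // in the Call() arguments, not stored in variables first, as per the
        // rules documented in the unsafe package.
        r1, _, callErr := getDiskFreeSpaceExW.Call(
            uintptr(unsafe.Pointer(p)),
            uintptr(unsafe.Pointer(&avail)),
            uintptr(unsafe.Pointer(&total)),
            uintptr(unsafe.Pointer(&totalFree)),
        )
        if r1 == 0 {
            return 0, callErr
        }
        return avail, nil
    }
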
Jakob Borg
00a654845f cmd/syncthing: Remove old temp index dbs on startup (fixes #3529)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3532
2016-08-22 12:19:19 +00:00
Jakob Borg
04dad8485a authors: Add calmh email 2016-08-18 19:38:15 +02:00
Jakob Borg
0b1475169f lib/model: Correct virtual mtime handling (fixes #3516)
We previously set the mtime on the temp file, and then renamed it to the
real path. Unfortunately that means we'd save the real timestamp under
the temp name ".syncthing.foo.tmp" when the actual file that
we will look up on the next scan is "foo". This moves the Chtimes later,
ensuring that it gets recorded correctly under the right name.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3519
2016-08-16 18:22:19 +00:00
Audrius Butkevicius
6ec4fbc82b lib/model: Add minimum interval for progress emitter (fixes #3517)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3518
2016-08-16 18:22:01 +00:00
Jakob Borg
18cc7a663b lib: Remove osutil.Remove & osutil.RemoveAll (fixes #3513)
These are no longer required with Go 1.7. Change made by removing the
functions, doing a global s/osutil.Remove/os.Remove/.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3514
2016-08-16 10:01:58 +00:00
Jakob Borg
cf5febad47 build, cmd, lib: Minimum supported compiler version is Go 1.5 2016-08-15 08:37:32 +02:00
Jakob Borg
42849af5a8 cmd/syncthing: Default folder should also have lower case ID 2016-08-13 23:23:18 +02:00
Jakob Borg
e6364407a9 cmd/stdiscosrv: Fix index creation checks on startup 2016-08-12 11:39:10 +02:00
Jakob Borg
480b78f2c8 cmd/stdiscosrv: Longer address in schema 2016-08-12 11:38:37 +02:00
Jakob Borg
fa8f339478 gui: Fix division by zero in completion calc (ref #3493) 2016-08-12 08:49:16 +02:00
Jakob Borg
7776839c82 cmd/syncthing, gui: Improve completion calculation (fixes #3492)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3493
2016-08-12 06:41:43 +00:00
Jakob Borg
7114cacb85 gui, man: Update docs & translations 2016-08-10 11:41:46 +02:00
Jakob Borg
e52be3d83e lib/connections, lib/model: Refactor connection close handling (fixes #3466)
So there were some issues here. The main problem was that
model.Close(deviceID) was overloaded to mean "the connection was closed
by the protocol layer" and "I want to close this connection". That meant
it could get called twice - once *to* close the connection and then once
more when the connection *was* closed.

After this refactor there is instead a Closed(conn) method that is the
callback. I didn't need to change the parameter in the end, but I think
it's clearer what it means when it takes the connection that was closed
instead of a device ID. To close a connection, the new close(deviceID)
method is used instead, which only closes the underlying connection and
leaves the cleanup to the Closed() callback.

I also changed how we do connection switching. Instead of the connection
service calling close and then adding the connection, it just adds the
new connection. The model knows that it already has a connection and
makes sure to close and clean out that one before adding the new
connection.

To make sure to sequence this properly I added a new map of channels
that get created on connection add and closed by Closed(), so that
AddConnection() can do the close and wait for the cleanup to happen
before proceeding.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3490
2016-08-10 09:37:32 +00:00
Antoine Lamielle
c9cf01e0b6 gui: Weight remote device completion % by folder size (fixes #1300)
The completion of remote devices was based only on the average of the
percentages of all folders, which is misleading when two folders have very
different sizes.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3481
LGTM: calmh, AudriusButkevicius
2016-08-09 19:58:44 +00:00
Jakob Borg
dcbf68e104 lib/versioner: Hack to make test coverage stable from run to run
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3485
2016-08-08 18:27:55 +00:00
Jakob Borg
c2d8c07137 lib/events: Hack to make test coverage stable from run to run
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3484
2016-08-08 18:09:40 +00:00
Jakob Borg
a4ed50ca85 build, lib: Correct total test coverage calculation
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3483
2016-08-08 16:29:32 +00:00
Jakob Borg
b3788c8ea0 authors: Fixup 0x010C 2016-08-08 08:34:36 +02:00
Jakob Borg
946c074a41 authors: Add 0x010C 2016-08-08 08:19:02 +02:00
Jakob Borg
19f79afb0f build: Setup should install golint 2016-08-07 21:58:27 +02:00
Audrius Butkevicius
af3b6f9c83 lib/model, lib/config: Support "live" device removal, folder unsharing and folder configuration changes
Furthermore:
1. Cleans configs received, migrates them as we receive them.
2. Clears indexes of devices we no longer share the folder with

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3478
2016-08-07 16:21:59 +00:00
Jakob Borg
fbe42c156d gui: Move "ignore patterns" away from "remove" in folder edit dialog 2016-08-07 14:26:32 +02:00
Jakob Borg
a1f6cbd354 lib/protocol: Clean away outdated files 2016-08-07 14:24:25 +02:00
Audrius Butkevicius
a4f052ad31 lib/connections: Fix connection switching
It seems that it would be impossible to drop down to a relay after establishing a direct connection.
Also, we should not drop the existing connection until after we've passed the validation steps,
and it seems it's being dropped in two places unnecessarily at the moment.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3480
2016-08-07 12:20:37 +00:00
Jakob Borg
ea87bcefd6 lib/protocol, lib/model: Implement high precision time stamps (fixes #3305)
This adds a new nanoseconds field to the FileInfo, populates it during
scans and sets the non-truncated time in Chtimes calls.

The actual file modification time is defined as modified_s seconds +
modified_ns nanoseconds. It's expected that the modified_ns field is <=
1e9 (that is, all whole seconds should go in the modified_s field) but
not really enforced. Given that it's an int32, the timestamp can be
adjusted by up to about ±2.1 seconds by the modified_ns field...

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3431
2016-08-06 13:05:59 +00:00
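
A small sketch of how two such fields combine into a Go timestamp, assuming field names along the lines of the description above:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // The file modification time is modified_s whole seconds plus
        // modified_ns nanoseconds.
        var modifiedS int64 = 1470485159
        var modifiedNs int32 = 123456789

        mtime := time.Unix(modifiedS, int64(modifiedNs))
        fmt.Println(mtime.UTC().Format(time.RFC3339Nano))

        // Going the other way when scanning:
        s, ns := mtime.Unix(), int32(mtime.Nanosecond())
        fmt.Println(s, ns)
    }
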
Jakob Borg
0655991a19 lib/db, lib/fs, lib/model: Introduce fs.MtimeFS, remove VirtualMtimeRepo
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3479
2016-08-05 17:45:45 +00:00
Jakob Borg
f368d2278f lib/config, lib/connections: Refactor handling of ignored devices (fixes #3470)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3471
2016-08-05 09:29:49 +00:00
Jakob Borg
1eb6db6ca8 cmd/syncthing, lib/...: Correctly handle ignores & invalid file names (fixes #3012, fixes #3457, fixes #3458)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3464
2016-08-05 07:13:52 +00:00
Jakob Borg
a25b63e2df cmd/syncthing: Delete old format indexes after a while (fixes #3468)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3469
2016-08-02 15:44:09 +00:00
Jakob Borg
ffe7a2fcd7 cmd/syncthing, lib/config: Enable HTTP CPU/heap profile collection for users
This adds a config to enable debug functions on the API server, which is
by default disabled. When enabled, the /rest/debug things become
available, without requiring a CSRF token (although
authentication is required if configured).

We also add a new endpoint /rest/debug/cpuprof?duration=15s (with the
duration being configurable, defaulting to 30s). This runs a CPU profile
for the duration and returns it as a file. It sets headers so that a
browser will save the file with an informative name.

The same is done for heap profiles, /rest/debug/heapprof, which does not
take any parameters.

The purpose of this is that any user can enable debugging under
advanced, then point their browser to the endpoint above and get a file
that contains a CPU or heap profile we can use, with the filename
telling us what version and architecture the profile is from.

On the command line, this becomes

    curl -O -J http://localhost:8082/rest/debug/cpuprof?duration=5s
    curl: Saved to filename
    'syncthing-cpu-darwin-amd64-v0.14.3+4-g935bcc0-110307.pprof'

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3467
2016-08-02 11:06:45 +00:00
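
A simplified sketch of such an endpoint built on the standard runtime/pprof package; the handler, route and filename format here are illustrative, not the exact Syncthing code:

    package main

    import (
        "bytes"
        "fmt"
        "net/http"
        "runtime"
        "runtime/pprof"
        "time"
    )

    // cpuProf runs a CPU profile for ?duration= (default 30s) and returns it
    // as a downloadable file with an informative name.
    func cpuProf(w http.ResponseWriter, r *http.Request) {
        duration, err := time.ParseDuration(r.URL.Query().Get("duration"))
        if err != nil || duration <= 0 {
            duration = 30 * time.Second
        }
        var buf bytes.Buffer
        if err := pprof.StartCPUProfile(&buf); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        time.Sleep(duration)
        pprof.StopCPUProfile()

        name := fmt.Sprintf("cpu-%s-%s-%s.pprof", runtime.GOOS, runtime.GOARCH, time.Now().Format("150405"))
        w.Header().Set("Content-Disposition", fmt.Sprintf(`attachment; filename="%s"`, name))
        w.Write(buf.Bytes())
    }

    func main() {
        http.HandleFunc("/rest/debug/cpuprof", cpuProf)
        http.ListenAndServe("127.0.0.1:8082", nil)
    }
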
Audrius Butkevicius
08b5a7908f gui: Add one-off notifications that need to be acked
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3254
2016-08-02 08:07:30 +00:00
derekriemer
a8cd9d0154 gui: Improve accessibility (fixes #3297)
skip-check: authors

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3463
2016-07-31 22:59:44 +00:00
Jakob Borg
297240facf all: Rename LocalVersion to Sequence (fixes #3461)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3462
2016-07-29 19:54:24 +00:00
Jakob Borg
a022b0cfff gui, man: Update docs & translations 2016-07-28 13:15:14 +02:00
Jakob Borg
72026db599 lib/db, lib/model: Create temp sorting database in config dir (fixes #3449)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3454
2016-07-27 21:38:43 +00:00
Jakob Borg
aafc96f58f lib/model, lib/protocol: Sequence ClusterConfig properly (fixes #3448)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3452
2016-07-27 21:36:25 +00:00
Jakob Borg
7c7e8648ff lib/model: Trigger a puller iteration on connection (fixes #3451)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3453
2016-07-27 21:35:41 +00:00
Jakob Borg
24e2ce0764 build: Allow easily influencing the build user and build host
To facilitate reproducible builds.
2016-07-27 23:27:47 +02:00
aviau
d7cb4d407b man: Include stdiscosrv and strelaysrv manpages
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3450
2016-07-27 15:00:10 +00:00
Jakob Borg
66a506e72b lib/scanner: Correctly scan symlinks (fixes #3445)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3446
2016-07-26 11:55:25 +00:00
Jakob Borg
25a7b0a6f8 gui, man: Update docs & translations 2016-07-26 10:53:00 +02:00
Jakob Borg
7aaa1dd8a3 lib/scanner: Recheck file size and modification time after hashing (ref #3440)
To catch the case where the file changed. Also make sure we never let a
size-vs-blocklist mismatch slip through.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3443
2016-07-26 08:51:39 +00:00
Jakob Borg
2a6f164923 lib/scanner: When scanning a file, stick to the size given by Lstat (fixes #3440)
Otherwise if the file grows during scanning the block list will be out
of sync with the stated size and things get confused. We could fixup the
size afterwards based on the block list, but then we might see other
inconsistencies as the mtime should have changed to reflect the new size
etc. Better stick to the original state and let the next scan pick up
the change.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3442
2016-07-25 19:16:49 +00:00
Jakob Borg
0f28626bb4 cmd/syncthing: Generate FolderCompletion events for folders shared with a connecting device (fixes #3436)
This used to happen by itself as the connecting device always sent an
Index message and we triggered on that. Nowadays there's no guarantee
for that, but we anyway need to send out one event to let listeners know
the state of folders shared with the device.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3438
2016-07-25 10:42:17 +00:00
Jakob Borg
6ed22d0885 lib/model: Stricter temporary file permissions
We could have a file to sync with permissions rw------- but we'd create
the temp file with rw-rw-rw- minus umask, usually rw-r--r--. This
potentially exposes private data while the file is being synced.

Similarly, when ignorePerms was set and we were reusing a temp file we
would set the permissions to rw-r--r-- explicitly, potentially
overriding a strict umask that would otherwise have had the file be
rw-------.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3437
2016-07-25 10:18:05 +00:00
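
A minimal sketch of the general shape of the fix: create the temp file with restrictive permissions (0600) from the start instead of relying on the umask (the file name here is just an example):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Create the temp file with restrictive permissions up front rather than
        // relying on the umask; the final permissions can be widened later if the
        // source file is actually world-readable.
        f, err := os.OpenFile(".syncthing.example.tmp", os.O_WRONLY|os.O_CREATE|os.O_EXCL, 0600)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()
        fmt.Fprintln(f, "partial data")
    }
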
Jakob Borg
6715b91a6c build: Remove unused docker-* commands 2016-07-25 08:10:46 +02:00
Jakob Borg
694da60659 lib/db: Reinstate database update locking
The previous commit loosened the locking around database updates.
Apparently that was not fine - what happens is that parallel updates
to the same file for different devices stomp on each other's updates to
the global index, leaving it missing one of the two devices.
2016-07-23 20:32:15 +02:00
Jakob Borg
47fa4b0a2c cmd/syncthing, lib/db, lib/model, lib/protocol: Implement delta indexes (fixes #438)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3427
2016-07-23 12:46:31 +00:00
Jakob Borg
8ab6b60778 lib/model: Sort outgoing index updates by LocalVersion
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3411
2016-07-21 17:21:15 +00:00
Jakob Borg
e1a4f81e50 gui, man: Update docs & translations 2016-07-17 23:45:22 +02:00
Jakob Borg
7b7e35d339 lib/protocol: Hello message length is an int16
It used to be an int32, but that's unnecessary and the spec now says
int16. Also relaxes the size requirement to that which fits in a signed
int16 instead of limiting to 1024 bytes, to allow for future growth.

As reported in
https://forum.syncthing.net/t/difference-between-documented-and-implemented-protocol/7798

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3406
2016-07-17 21:41:20 +00:00
Jakob Borg
3176629410 cmd, lib: Fix ineffectual assignments (ineffasign) and comment spelling
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3405
2016-07-15 14:23:20 +00:00
Cedric Staniewski
e3ccc45d19 gui: Fix usage statistics URL in report usage preview (fixes #3397)
This applies the fix from 9d75652 to the usage report preview.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3402
2016-07-12 22:31:11 +00:00
Jakob Borg
beec9e834e gui, man: Update docs & translations 2016-07-10 09:23:58 +02:00
Audrius Butkevicius
f6f0486ff9 repo: Add message about voting
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3398
2016-07-09 15:58:55 +00:00
Jakob Borg
518f446d31 cmd/strelaypoolsrv: Fix vet warnings about type inference
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3393
2016-07-08 06:40:46 +00:00
Jakob Borg
fbbd510088 vendor: Update to latest github.com/syndtr/goleveldb 2016-07-06 09:57:15 +02:00
Jakob Borg
e440d30028 lib/protocol: Allow unknown message types
This lets us add message types in the future, for authentication or
other purposes, without completely breaking old clients. I see this as
similar behavior to adding fields to messages - newer clients must
simple be aware that older ones may ignore the message and act
accordingly.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3390
2016-07-05 09:29:28 +00:00
Jakob Borg
44d30c83bf lib/config, cmd/syncthing: Handle committing configuration better (fixes #3077)
This slightly changes the interface used for committing configuration
changes. The two parts are now:

 - VerifyConfiguration, which runs synchronously and locked, and can
   abort the config change. These callbacks shouldn't *do* anything
   apart from looking at the config changes and saying yes or no. No
   change from previously.

 - CommitConfiguration, which runs asynchronously (one goroutine per
   call) *after* replacing the config and releasing any locks. Returning
   false from these methods sets the "requires restart" flag, which now
   lives in the config.Wrapper.

This should be deadlock free as the CommitConfiguration calls can take
as long as they like and can wait for locks to be released when they
need to tweak things. I think this should be safe compared to before as
the CommitConfiguration calls were always made from a random background
goroutine (typically one from the HTTP server), so it was always
concurrent with everything else anyway.

Hence the CommitResponse type is gone, instead you get an error back on
verification failure only, and need to explicitly check
w.RequiresRestart() afterwards if you care.

As an added bonus this fixes a bug where we would reset the "requires
restart" indicator if a config that did not require restart was saved,
even if we already were in the requires-restart state.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3386
2016-07-04 20:32:34 +00:00
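
A sketch of the interface shape being described; the method names follow the message above, while the surrounding types are simplified placeholders:

    package config

    // Configuration stands in for the real config type.
    type Configuration struct{}

    // Committer is implemented by components that want to react to config
    // changes.
    type Committer interface {
        // VerifyConfiguration runs synchronously and may veto the change by
        // returning an error; it should only inspect the configs.
        VerifyConfiguration(from, to Configuration) error

        // CommitConfiguration runs in its own goroutine after the config has
        // been replaced; returning false sets the "requires restart" flag.
        CommitConfiguration(from, to Configuration) bool
    }
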
Jakob Borg
7ff7b55732 cmd/strelaypoolsrv: Remove unused var (metalint) 2016-07-04 21:22:53 +02:00
Jakob Borg
44346b3a5a cmd/strelaypoolsrv: Fixup import in main 2016-07-04 14:58:29 +02:00
Jakob Borg
23a538d61a script: Copyright in protofmt.go 2016-07-04 14:55:17 +02:00
Jakob Borg
dcb5026f33 script, lib/discover: Fixup copyright checks 2016-07-04 14:53:11 +02:00
Jakob Borg
778ff9daa9 script: Fixup check-authors after strelaypoolsrv merge 2016-07-04 14:46:24 +02:00
Jakob Borg
ce9dc809bc build, cmd/strelaypoolsrv: Build assets using standard script 2016-07-04 13:34:44 +02:00
Jakob Borg
59370588dd vendor: Add dependencies for strelaypoolsrv 2016-07-04 13:34:34 +02:00
Jakob Borg
7d434aa9c4 build: Add strelaypoolsrv target 2016-07-04 13:34:28 +02:00
Jakob Borg
59ce7c0424 cmd/strelaypoolsrv: Merge relaypoolsrv repo into main
* relaypoolsrv/master: (32 commits)
  Fetch deps of deps X_x
  Here we go with gvt bugs
  Screw godep
  Add solaris support back in
  Add font awesome
  No value is less than zero
  Screw solaris
  Godeps
  Refactor javascript, always show table, add sorting
  Add local geoip
  Update dependencies
  Hey look, had to check all code out on linux to fix the deps
  Update godeps, reduce amount of time spent testing a relay. Goddamit godeps.
  Add timeouts, deal with overlapping markers, add a table, increase circle radiuses
  Fix a couple of issues with the relays map (geoip, 'data unavailable')
  Rate infos are in kbps, not kBps
  Add support for header holding IP address
  Update relay parameters even if it already exists (fixes #3)
  Add missing space
  Add homepage
  ...
2016-07-04 13:33:57 +02:00
Jakob Borg
9a0e5a7c18 lib/discover: Add instance ID to local discovery (fixes #3278)
A random "instance ID" is generated on each start of the local discovery
service. The instance ID is included in the announcement. When we see a
new instance ID we treat it as a new device and respond with an
announcement of our own. Hence devices get to know each other quickly on
restart.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3385
2016-07-04 11:16:48 +00:00
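
A small sketch of the instance ID bookkeeping; the type and field names are illustrative, not the actual lib/discover code:

    package example

    // Announce is a simplified local discovery announcement.
    type Announce struct {
        DeviceID   string
        InstanceID int64 // regenerated on every start of the discovery service
        Addresses  []string
    }

    type tracker struct {
        seen map[string]int64 // device ID -> last seen instance ID
    }

    func newTracker() *tracker {
        return &tracker{seen: make(map[string]int64)}
    }

    // shouldReply reports whether we should answer with our own
    // announcement: an unseen instance ID means the peer (re)started and
    // has likely forgotten about us.
    func (t *tracker) shouldReply(a Announce) bool {
        last, ok := t.seen[a.DeviceID]
        t.seen[a.DeviceID] = a.InstanceID
        return !ok || last != a.InstanceID
    }
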
Jakob Borg
8d0019595f cmd/syncthing: Update code name for v0.14
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3384
2016-07-04 10:58:45 +00:00
aviau
6ff74cfcab build, cmd/stdiscosrv, cmd/strelaysrv: Rename binaries to add "st" prefix
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3371
2016-07-04 10:51:22 +00:00
Jakob Borg
aa50ef4069 lib/model: Invalidate files with trailing white space on Windows (fixes #3227)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3383
2016-07-04 10:44:30 +00:00
Jakob Borg
fa0101bd60 lib/protocol, lib/discover, lib/db: Use protocol buffer serialization (fixes #3080)
This changes the BEP protocol to use protocol buffer serialization
instead of XDR, and therefore also the database format. The local
discovery protocol is also updated to be protocol buffer format.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3276
LGTM: AudriusButkevicius
2016-07-04 10:40:29 +00:00
Cedric Staniewski
21f5b16e47 gui: Sort device folder lists by label
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3381
2016-07-03 21:11:39 +00:00
Cedric Staniewski
223a835f33 lib/discover: Respect the listen address scheme (fixes #3346)
This is a supplementary patch to commit a58f69b, which only fixed global
discovery. This patch adds the missing parts for the local discovery.

If the listen address scheme is set to tcp4:// or tcp6:// and no
explicit host is specified, an address should not be considered if the
source address does not match this scheme.

This prevents invalid URIs like tcp4://<IPv6 address>:<port> or tcp6://<IPv4
address>:<port> for local discovery.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3380
2016-07-03 20:43:26 +00:00
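
A minimal sketch of the scheme check, as a hypothetical helper rather than the actual lib/discover implementation:

    package example

    import (
        "net"
        "strings"
    )

    // matchesScheme reports whether a source IP is usable for a listen
    // address with the given scheme: tcp4 requires an IPv4 address, tcp6
    // an IPv6 address, and plain tcp accepts both.
    func matchesScheme(scheme string, ip net.IP) bool {
        switch {
        case strings.HasPrefix(scheme, "tcp4"):
            return ip.To4() != nil
        case strings.HasPrefix(scheme, "tcp6"):
            return ip.To4() == nil
        default:
            return true
        }
    }
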
Jakob Borg
223e14b0d0 gui, man: Update docs & translations 2016-07-03 13:29:32 +02:00
Cedric Staniewski
a58f69be04 cmd/discosrv: Respect the listen address scheme (fixes #3346)
If the listen address scheme is set to tcp4:// or tcp6://, it needs to be
made sure that the remote address matches this scheme before it is added to
the database.

This prevents invalid URIs like tcp4://<IPv6 address>:<port> or tcp6://<IPv4
address>:<port>.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3378
2016-07-03 11:19:12 +00:00
Phil Davis
e194eb1f69 cmd/relaysrv: Typos in options
Skip-check: authors

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3377
2016-07-03 08:44:41 +00:00
Jakob Borg
672824641b lib/connections: TLS handshake must complete in a timely fashion (fixes #3375)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3376
2016-07-02 20:33:31 +00:00
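
A sketch of the general technique, putting a deadline around the TLS handshake; the function name and the ten second timeout are illustrative, not necessarily what lib/connections uses:

    package example

    import (
        "crypto/tls"
        "net"
        "time"
    )

    // handshakeWithTimeout bounds the TLS handshake so a stalled peer
    // cannot hold the connection open indefinitely.
    func handshakeWithTimeout(conn net.Conn, cfg *tls.Config) (*tls.Conn, error) {
        tc := tls.Client(conn, cfg)
        conn.SetDeadline(time.Now().Add(10 * time.Second))
        if err := tc.Handshake(); err != nil {
            conn.Close()
            return nil, err
        }
        conn.SetDeadline(time.Time{}) // clear the deadline for normal traffic
        return tc, nil
    }
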
Jakob Borg
6d357211b2 lib/config: Remove "Invalid" attribute (fixes #2471)
This contains the following behavioral changes:

 - Duplicate folder IDs is now fatal during startup
 - Invalid folder flags in the ClusterConfig is fatal for the connection
   (this will go away soon with the proto changes, as we won't have any
   unknown flags any more then)
 - Empty path is a folder error reported at runtime

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3370
2016-07-02 19:38:39 +00:00
Nicolas Braud-Santoni
8e39e2889d lib: Fix typos in connections/service.go and model/model.go
Skip-check: authors

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3366
LGTM: aviau
2016-06-30 13:42:53 +00:00
Nicolas Braud-Santoni
a9ee4bb9f1 lib/upgrade: Remove TestGithubRelease (fixes #3362)
Skip-check: authors

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3365
2016-06-29 19:06:09 +00:00
Jakob Borg
80fd6c2400 build: Use SOURCE_DATE_EPOCH for build time stamp when available
Apparently common practice for reproducible builds:

   https://reproducible-builds.org/specs/source-date-epoch/

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3364
2016-06-29 18:52:49 +00:00
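
A minimal sketch of how a build script can honour SOURCE_DATE_EPOCH, falling back to the current time when the variable is unset; the function name is illustrative:

    package example

    import (
        "os"
        "strconv"
        "time"
    )

    // buildStamp returns the time to embed in the binary. With
    // SOURCE_DATE_EPOCH set, repeated builds of the same source get the
    // same timestamp and so can be byte-for-byte reproducible.
    func buildStamp() time.Time {
        if s := os.Getenv("SOURCE_DATE_EPOCH"); s != "" {
            if epoch, err := strconv.ParseInt(s, 10, 64); err == nil {
                return time.Unix(epoch, 0).UTC()
            }
        }
        return time.Now().UTC()
    }
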
aviau
3cbe7d40d1 script: Remove build date in genassets.go
The build date prevented the builds from being reproducible.

Debian bug: https://bugs.debian.org/828994

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3363
2016-06-29 18:41:33 +00:00
Lars K.W. Gohlke
af0bc95de5 lib/model: Refactor encapsulation of the folder scanning
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3017
2016-06-29 06:37:34 +00:00
Jakob Borg
4bf3e7485b lib/events: Make events test less likely to fail 2016-06-28 08:29:49 +02:00
Peter Dave Hello
b701de60ce gui, assets: Compress PNGs using ZopfliPNG
Skip-check: authors pr-build

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3358
2016-06-28 06:19:12 +00:00
Antony Male
7ef2743964 lib/events: Introduce per-subscription event IDs (fixes #3335)
Events API consumers rely on being able to detect that events were skipped
by the fact that the event ID has increased by more than 1. This is
documented, and is absolutely necessary when trying to maintain a local
model of Syncthing's state.

With the introduction of LocalChangeDetected, which is not exposed to the
Events API, this contract was broken.

This commit introduces separate concepts of a "Global ID" and a
"Subscription ID". The Global ID of an event is unique across all
subscriptions. The Subscription ID is local to a particular subscription,
and always increments by 1. They are both exposed over the Events API, but
the Subscription ID uses the key "id" for backwards compatibility, and
the "?since=xx" parameter refers to the Subscription ID (making the Global
ID for information only).

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3351
LGTM: calmh
2016-06-27 21:18:58 +00:00
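
A rough sketch of the two-ID scheme; the types below are illustrative, not the actual lib/events API:

    package example

    // Event carries both identifiers described above.
    type Event struct {
        GlobalID       int // unique across all subscriptions
        SubscriptionID int // increments by exactly one per subscription
        Type           string
        Data           interface{}
    }

    // Subscription stamps every delivered event with its own counter, so a
    // consumer can still detect skipped events by a gap larger than one.
    type Subscription struct {
        next   int
        events chan Event
    }

    func (s *Subscription) deliver(e Event) {
        s.next++
        e.SubscriptionID = s.next
        s.events <- e
    }
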
Jakob Borg
a165838cbd lib/model: Decrease max temp filename length (fixes #3338, fixes #3355)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3356
2016-06-27 11:47:40 +00:00
Jakob Borg
3c77b8388c gui: Suggest lower case only folder ID (fixes #3128)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3353
2016-06-27 09:39:25 +00:00
Jakob Borg
9d16f4545d lib/events: Add logging/receiving benchmark 2016-06-27 10:26:59 +02:00
Jakob Borg
d57e6808cc lib/db: Fix alignment crash on 32 bit platforms
Fixes #3347
Fixes #3348
Fixes #3349

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3350
2016-06-26 13:40:51 +00:00
Jakob Borg
b71cc8a580 gui, man: Update docs & translations 2016-06-26 12:48:18 +02:00
Jakob Borg
ac3b03881a gui: Add addresses for disconnected devices (fixes #3340)
Also fixes an issue where the discovery cache call would only return the
newest cache entry for a given device instead of the merged addresses
from all cache entries (which is more useful).

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3344
2016-06-26 10:47:23 +00:00
Jakob Borg
b0d03d1f1c lib/config: Retain slash at end of path after expanding ~ (fixes #2782)
The various path cleaning operations done in cleanedPath() remove
it, so we make sure it's added again at the end. This makes adding the
slash in prepare() unnecessary, but we keep it anyway for display purposes
(people looking at the config).

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3342
2016-06-26 10:17:20 +00:00
Jakob Borg
a2dcffcca2 lib/nat: Avoid concurrent reset of NAT timer (fixes #3337)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3341
2016-06-26 10:17:12 +00:00
Jakob Borg
9323f0faf8 lib/model: Refactor CheckFolderHealth into separate methods
While attempting to fix #2782 I thought the problem was the
CheckFolderHealth method, so I cleaned it up. That turned out not to be
the case, but I think this is better anyhow.

It also moves the "create folder and marker if the folder was empty in
the index" code to StartFolder where I think it makes better sense.

This is covered by a number of existing tests.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3343
2016-06-26 10:07:27 +00:00
Jakob Borg
f343c8ba36 lib/model, lib/scanner: Silence vet warnings
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3333
2016-06-20 21:00:39 +00:00
Audrius Butkevicius
502bee9a09 lib/osutil: Return "/" as filesystem root on non-windows (fixes #3321)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3332
2016-06-20 20:25:00 +00:00
Jakob Borg
379e2119a8 build: Use forward slashes in Zip and Tar files (fixes #3330)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3331
2016-06-20 09:49:19 +00:00
Cedric Staniewski
89a29946f9 gui: Sort folders by label, fall back to ids if required (fixes #3310)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3326
2016-06-18 19:16:53 +00:00
Jakob Borg
20a94fafa7 authors: Add xduugu 2016-06-18 21:04:32 +02:00
Daniel Harte
99ddf1e4ab gui: Adjust border-radius on accordion title buttons (fixes #3299)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3320
2016-06-17 06:59:07 +00:00
Daniel Harte
fb778218f5 gui: Improve layout of "out of sync" modal (fixes #3306)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3296
2016-06-17 06:54:33 +00:00
Daniel Harte
55fc3cb2c5 gui: Load modals before calling initController() (fixes #3301)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3319
2016-06-17 06:44:55 +00:00
Jakob Borg
b779e22205 lib/model: Don't set ignore bit when it's already set
This adds a metric for "committed items" to the database instance that I
use in the test code, and a couple of tests that ensure that scans that
don't change anything also don't commit anything.

There was a case in the scanner where we set the invalid bit on files
that are ignored, even though they were already ignored and had the
invalid bit set. I had assumed this would result in an extra database
commit, but it was in fact filtered out by the Set... Anyway, I think we
can save some work by not pushing that change to the Set at all.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3298
2016-06-13 17:44:03 +00:00
Jakob Borg
bb5b1f8f01 gui: Temporarily disable the usage reporting prompt (ref #3301) 2016-06-13 18:06:12 +02:00
Jakob Borg
c1a96d4900 gui, man: Update docs & translations 2016-06-12 16:21:34 +02:00
Daniel Harte
de298da532 gui: Modal tweaks
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3292
LGTM: AudriusButkevicius, calmh
2016-06-12 14:06:48 +00:00
Jakob Borg
6f5ca53f99 lib/connections: Limit rate at which we print warnings about version mismatch
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3291
2016-06-09 12:30:35 +00:00
Jakob Borg
d507126101 lib/protocol: Understand older/newer Hello messages (fixes #3287)
This is in preparation for future changes, but also improves the
handling when talking to pre-v0.13 clients. It breaks out the Hello
message and magic from the rest of the protocol implementation, with the
intention that this small part of the protocol will survive future
changes.

To enable this, and future testing, the new ExchangeHello function takes
an interface that can be implemented by future Hello versions and
returns a version-independent result type. It correctly detects pre-v0.13
protocols and returns a "too old" error message which gets logged to the
user at warning level:

   [I6KAH] 09:21:36 WARNING: Connecting to [...]:
     the remote device speaks an older version of the protocol (v0.12) not
     compatible with this version

Conversely, something entirely unknown will generate:

   [I6KAH] 09:40:27 WARNING: Connecting to [...]:
     the remote device speaks an unknown (newer?) version of the protocol

The intention is that in future iterations the Hello exchange will
succeed on at least one side and ExchangeHello will return the actual
data from the Hello together with ErrTooOld, so that an even more precise
message can be generated.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3289
2016-06-09 10:50:14 +00:00
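
A loose sketch of the version-independent classification step; the magic numbers and type names below are placeholders, not the real protocol constants:

    package example

    import "errors"

    var (
        ErrTooOld  = errors.New("remote speaks an older, incompatible protocol version")
        ErrUnknown = errors.New("remote speaks an unknown (newer?) protocol version")
    )

    // HelloResult is the version-independent data extracted from any Hello.
    type HelloResult struct {
        DeviceName, ClientName, ClientVersion string
    }

    const (
        currentMagic = 0x2EA7D90B // placeholder values, not the real magics
        oldV012Magic = 0x19F35CBE
    )

    func classify(magic uint32) error {
        switch magic {
        case currentMagic:
            return nil // proceed to parse the Hello into a HelloResult
        case oldV012Magic:
            return ErrTooOld
        default:
            return ErrUnknown
        }
    }
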
Daniel Harte
9a25df01fe gui: Add support for multiple stacked modals
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3288
2016-06-09 10:45:43 +00:00
perewa
11b9212948 cmd/syncthing: Increase timeout in hello message exchange
Required to establish connections on high latency links

Skip-check: authors

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3286
2016-06-08 19:46:54 +00:00
Jakob Borg
b4e2914b70 build: Move metalint to a separate build step (and add build step timings)
I run a lot of builds. They're quite slow now:

    jb@syno:~/s/g/s/syncthing $ BUILDDEBUG=1 ./build.sh
        ... snipped commands ...
    runError: gometalinter --disable-all --deadline=60s --enable=varcheck . ./cmd/... ./lib/...
    ... in 13.00592726s
    ... build completed in 15.392265235s

That's 15 s total build time, 13 s of which is the varcheck call. The
build server is welcome to run it, but I don't want to on each build. :)

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3285
2016-06-08 16:15:45 +00:00
Daniel Harte
09b7348595 gui: Accordion titles as buttons
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3284
LGTM: calmh, AudriusButkevicius
2016-06-08 15:55:44 +00:00
Daniel Harte
d2bb6e0c0a gui: Bootstrap tooltips (in modals)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3280
2016-06-08 14:55:30 +00:00
Daniel Harte
8632a03662 gui: Remove tooltip shown over whole modal
Changed the attribute 'title' (reserved) to 'heading' for the modal
template

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3281
2016-06-08 13:07:53 +00:00
Daniel Harte
e71c78ae84 cmd/syncthing: Remove folder limit on /rest/system/browse
Previously limited to 10 results, now unlimited.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3279
LGTM: calmh, AudriusButkevicius
2016-06-08 07:09:50 +00:00
Jakob Borg
03a8027efc cmd/syncthing: Refactor out staticsServer (prev. embeddedStatic) a bit
The purpose of this operation is to separate the serving of GUI assets a
bit from the serving of the REST API. It's by no means complete. The end
goal is something like a combined server type that embeds a statics
server and an API server and wraps it in authentication and HTTPS and
stuff, plus possibly a named pipe server that only provides the API and
does not wrap in the same authentication etc.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3273
2016-06-07 07:46:45 +00:00
Jakob Borg
b7e186b370 cmd/discosrv: Fix lint warnings
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3275
2016-06-07 07:33:11 +00:00
Jakob Borg
4a69f3987f cmd/relaysrv: Fix lint warnings
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3274
2016-06-07 07:31:43 +00:00
Lars K.W. Gohlke
343dc486e0 build: Extract runCommand from main
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3160
2016-06-07 07:12:10 +00:00
Jakob Borg
5aacfd1639 cmd/syncthing: Make API serve loop more robust (fixes #3136)
This sacrifices the ability to return an error when creating the service
in exchange for being more persistent in keeping it running.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3270
LGTM: AudriusButkevicius, canton7
2016-06-06 22:12:23 +00:00
Jakob Borg
06e63aedea gui: "Syncing" favicon is no longer animated (fixes #3267)
Resaved with just the first frame in Photoshop, then ran pngcrush on it.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3269
2016-06-06 13:01:40 +00:00
Daniel Harte
0320194757 gui: Swap edit / pause buttons on devices
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3266
2016-06-06 12:39:47 +00:00
Jakob Borg
1753771356 build: Tags must be joined by space, not comma (fixes #3262)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3268
2016-06-06 11:39:08 +00:00
scienmind
bc794e7c15 lib/connections: Relay failures should be informative, not warning
Skip-check: authors

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3263
2016-06-04 10:41:36 +00:00
Jakob Borg
eefcecc7ce gui, man: Update docs & translations 2016-06-03 13:03:24 +02:00
Daniel Harte
3795a786c9 gui: CSS tweaks for mobile views
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3257
2016-06-02 23:21:19 +00:00
Daniel Harte
855a1bef89 gui: Vertically center identicons in panel titles
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3256
2016-06-02 20:52:10 +00:00
Jakob Borg
6a67921e40 vendor: Revert to github.com/jackpal/gateway instead of fork
It now includes all the fixes

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3246
2016-06-02 20:40:30 +00:00
Daniel Harte
8709fec517 gui: Make warning titles more readable in Dark Theme
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3253
2016-06-02 20:03:21 +00:00
Majed Abdulaziz
48245effdf lib/model, lib/stats: Keep track of folder's last scan time (ref #3143)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3250
LGTM: calmh, AudriusButkevicius
2016-06-02 19:26:52 +00:00
Daniel Harte
16063933d1 gui: Vertically center identicons in accordion titles
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3252
2016-06-02 19:03:18 +00:00
Daniel Harte
d317f197be gui: Early return 'danger' over 'warning'
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3251
2016-06-02 18:34:25 +00:00
Jakob Borg
8ac862f50a build: Use purego build tag on tests 2016-06-02 16:39:17 +02:00
Jakob Borg
0e996c4664 build: Use purego tags on 'all' target 2016-06-02 16:32:23 +02:00
Jakob Borg
287cfee73c cmd/syncthing: Re-enable auto upgrade for dev builds (fixes #901)
As noted in the ticket I no longer agree that dev builds should not auto
upgrade. The main reason is that we give dev builds to users to test
specific fixes, and no one is made happier by being inadvertently stuck
on that version when a newer version including the fix is released.

For developers, it's first of all probably unlikely that development is
happening on a build that's older than release, and secondly STNOUPGRADE
can be set in the environment once and for all if it is an issue.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3244
2016-06-02 13:01:59 +00:00
Jakob Borg
a6c465e929 cmd/relaysrv: Go 1.3 fix (we should probably drop that compatibility soon) 2016-06-02 14:48:25 +02:00
Audrius Butkevicius
becb5ab1dc cmd/relaysrv: Missed changes in the repo merge (README, systemd) 2016-06-02 14:42:57 +02:00
Audrius Butkevicius
49170bf2d8 cmd/relaysrv: Add number of routines 2016-06-02 14:39:19 +02:00
Majed Abdulaziz
b1205db7ac cmd/discosrv: Accept host names in announced addresses
GitHub-Pull-Request: https://github.com/syncthing/discosrv/pull/48
2016-06-02 14:34:04 +02:00
Jakob Borg
ff0cd413e6 build: Add default purego tag to discosrv build 2016-06-02 14:22:40 +02:00
Jakob Borg
7a56e4a0e5 cmd/relaysrv: Copyright headers 2016-06-02 14:16:02 +02:00
Jakob Borg
d17608d0a0 cmd/relaysrv: vet: composite literal uses unkeyed fields 2016-06-02 14:10:55 +02:00
Jakob Borg
0af216fea0 cmd/relaysrv: Add build stamped version, print at startup 2016-06-02 14:09:36 +02:00
Jakob Borg
1287433a99 build: Add build steps for relaysrv 2016-06-02 14:07:29 +02:00
Jakob Borg
56a9964101 cmd/relaysrv: Merge relaysrv repo
* relaysrv/master: (60 commits)
  Add new dependencies
  Add more logging in the case of relaypoolsrv internal server error
  Dependency update
  Update deps
  Update packages, fix testutil. Goddamit godep.
  Typo
  Add signal handlers (fixes #15)
  Update readme (fixes #16)
  Limit number of connections (fixes #23)
  Enable extra logging in pool.go even when -debug not specified
  Add Antony Male to CONTRIBUTORS
  Allow extAddress to be set from the command line
  URLs should have Go units
  Add CORS headers
  Fix units
  Expose provided by in status endpoint
  Add ability to advertise provider
  Change the URL
  Rename relaysrv binary, see #11
  Jail the whole thing a bit more
  ...
2016-06-02 14:04:22 +02:00
Jakob Borg
532b4383bf cmd/discosrv: Add build stamped version, print at startup 2016-06-02 13:58:39 +02:00
Jakob Borg
f9e2623fdc vendor: Add dependencies for discosrv 2016-06-02 13:53:30 +02:00
Jakob Borg
eacae83886 authors: Add majedev 2016-06-02 13:52:18 +02:00
Jakob Borg
5fc53f59c7 build: Add build steps for discosrv 2016-06-02 13:51:43 +02:00
Jakob Borg
7035ea3ab7 cmd/discosrv: Merge discosrv repo
* discosrv/master: (64 commits)
  Use atomics for statistics handling (fixes #45)
  Lower case JSON fields are nicer
  Change v13 to v2
  Remove explicit relay handling
  Update vendored github.com/cznic/ql (fixes #34)
  Defer fd.Close() (fixes #37)
  There is no "get dependencies" step
  Add vendor/golang.org/x/net/context
  Use Go 1.5 vendoring instead of Godeps
  Add debug performance logging per request
  Must close result sets
  Set Retry-After header
  Ignores
  lru.Cache is not concurrency safe
  We need a limit on the number of PostgreSQL connections
  Correct example DSN (fixes #29)
  Allow plain HTTP serving behind a proxy
  Fix Query/Answer stats
  Reduce our patience with slow clients somewhat
  Discovery server should print device ID of certificate at startup
  ...
2016-06-02 13:51:17 +02:00
Jakob Borg
d67c0a1eda authors: Clean up AUTHORS and NICKS files
Git didn't really understand the multiple email addresses in the NICKS
file the same way I expected it to, and this fixes that. It also makes
AUTHORS the "master" file that everything else depends on, so it
now includes all of name, nickname and email addresses.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3243
2016-06-02 08:19:12 +00:00
Daniel Harte
36c6a1955f gui: Improve navigation header layout on mobile
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3240
2016-06-02 06:08:18 +00:00
Daniel Harte
f792989d9b gui: Show 'scanning' on unshared folders (fixes #3068)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3239
2016-06-02 00:17:48 +00:00
Daniel Harte
ee398f17e1 gui: Restore broken logo on mobile
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3238
2016-06-01 23:40:11 +00:00
Audrius Butkevicius
8c4723ff43 gui: Fix editing devices (fixes #3236)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3237
2016-06-01 20:24:43 +00:00
Daniel Harte
01ae866d58 gui: Use favicon as indication for status (fixes #1018)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3217
2016-06-01 19:06:36 +00:00
Jakob Borg
3b8ae33fe3 contributing: Clarify license situation for parts of the project
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3234
2016-06-01 13:47:25 +00:00
Audrius Butkevicius
6f63909c65 lib/db,cmd/stindex: Expose VersionList and use it in stindex
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3231
2016-05-31 19:29:26 +00:00
Audrius Butkevicius
1612baca92 gui: /rest/system/browse with no arguments returns drives on Windows (ref #3201)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3203
2016-05-31 19:27:07 +00:00
Jakob Borg
4970bd7f65 lib/relay: Correctly get IP from remote addr via proxy (fixes #3223)
Correctly handles addresses, and fixes one more panicking place.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3230
2016-05-31 14:42:10 +00:00
Jakob Borg
a775dd2b79 script: Improve changelog layout
Pull issue information from Github to show both the resolved issue
subject and the commit subject. Also show reviewer, when different from
author.

    * #3201: api: /rest/system/browse behaves strangely on Windows

      lib/osutil: Fix globbing at root (by @AudriusButkevicius, reviewed by
      @calmh)

    * #3174: Ignore patterns with non-ASCII characters causes out of memory
      crash

      vendor: Update github.com/gobwas/glob (by @calmh)

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3228
2016-05-31 12:40:30 +00:00
Jakob Borg
137894348b test: Update test configs to latest format 2016-05-31 10:36:33 +02:00
Jakob Borg
ac40b27c79 lib/connections: Handle wrapped connection in SetTCPOptions (fixes #3223)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3225
2016-05-31 08:11:57 +00:00
Jakob Borg
9d756525ce gui: Extract URL from translated string (fixes #3204)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3224
2016-05-31 07:24:42 +00:00
Antony Male
6361172bea cmd/syncthing: Be more explicit about how assets should be cached (fixes #3182)
With the previous setup, browsers were free to use a local cache for any
length of time they pleased: we didn't set an 'Expires' header (or max-age
directive), and Cache-Control just said "you're free to cache this".

Therefore be more explicit: we don't mind if browsers cache things, but they
MUST revalidate everything on every request.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3221
2016-05-30 13:54:55 +00:00
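
A minimal sketch of such a policy as a net/http middleware; the exact header value Syncthing sets may differ:

    package example

    import "net/http"

    // mustRevalidate allows caching but forces the browser to revalidate
    // the cached copy on every request.
    func mustRevalidate(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Header().Set("Cache-Control", "max-age=0, must-revalidate")
            next.ServeHTTP(w, r)
        })
    }
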
Antony Male
56b6383407 gui: Prevent log bar from flashing up while page is loading
The log bar is hidden by CSS, but will appear briefly while the page is
loading (after the html is fetched, but before dev.css is fetched).

Hide it by using an inline style instead, so this does not happen.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3220
2016-05-30 13:16:15 +00:00
Daniel Harte
46fa5a374b gui: Improve layout of accordion titles
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3172
2016-05-30 08:18:09 +00:00
Alexander Graf
7373d2eb3c cmd/syncthing: Fix upgrade of running syncthing from CLI (fixes #3193)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3198
2016-05-28 14:08:26 +00:00
Jakob Borg
4453236949 vendor: Update github.com/gobwas/glob (fixes #3174)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3207
2016-05-28 04:43:54 +00:00
Audrius Butkevicius
c2dc4a8e06 lib/db: Have prefix should be normalized
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3206
2016-05-28 04:18:31 +00:00
Audrius Butkevicius
92a23da3ec lib/model: Make the (?d) prefix actually work
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3205
2016-05-28 04:17:34 +00:00
Audrius Butkevicius
242db26343 lib/osutil: Fix globbing at root (fixes #3201)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3202
2016-05-28 04:13:34 +00:00
Audrius Butkevicius
87701339fe lib/nat, lib/connections: Fix a few issues with NAT traversal
1. For the same internal port we ask for the same external port on all devices. This can be a problem if one device speaks over two protocols.
2. Always add a nil address even if we managed to get the external address of the gateway, because the gateway might itself be in a DMZ behind another gateway.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3196
2016-05-27 06:28:46 +00:00
Jakob Borg
4669ce0766 debian: Rename debian directory to debtpl (fixes #3099)
To keep it out of the way for actual, real, Debian packagers

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3195
2016-05-26 16:37:09 +00:00
Jakob Borg
9bb5988b4e lib/model: Don't deadlock when returning temp index block counts
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3194
2016-05-26 09:16:08 +00:00
Jakob Borg
c513171014 gui: Update translations 2016-05-26 09:49:07 +02:00
Jakob Borg
da5010d37a cmd/syncthing: Use API to generate API Key and folder ID (fixes #3179)
Expose a random string generator in the API and use it when the GUI
needs random strings for API key and folder ID.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3192
2016-05-26 07:25:34 +00:00
Jakob Borg
e6b78e5d56 lib/rand: Break out random functions into separate package
The intention for this package is to provide a combination of the
security of crypto/rand and the convenience of math/rand. It should be
the first choice for random data unless ultimate performance is required
and the usage is provably irrelevant from a security standpoint.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3186
2016-05-26 07:02:56 +00:00
Audrius Butkevicius
410d700ae3 cmd/syncthing: Do not modify events (fixes #3002)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3190
2016-05-26 06:54:44 +00:00
Audrius Butkevicius
fc173bf679 lib/model: Fix wild completion percentages
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3188
2016-05-26 06:53:27 +00:00
Jakob Borg
72154aa668 lib/upgrade: Prefer a minor upgrade over a major (fixes #3163)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3184
2016-05-25 14:01:52 +00:00
Jakob Borg
31b5156191 lib/util: Add secure random numbers source (fixes #3178)
The math/rand package contains lots of convenient functions, for example
to get an integer in a specified range without running into issues
caused by just truncating a number from a different distribution and so
on. But it's insecure, and we use it for things that benefit from being
more secure like session IDs, CSRF tokens and API keys.

This implements a math/rand.Source that reads from crypto/rand.Reader,
thus bridging the gap between them. It also updates our RandomString to
use the new source, thus giving us secure session IDs and CSRF tokens.

Some future work remains:

 - Fix API keys by making the generation in the UI use this code as well

 - Refactor out these things into an actual random package, and audit
   our use of randomness everywhere

I'll leave both of those for the future in order to not muddy the waters
on this diff...

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3180
2016-05-25 06:38:38 +00:00
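
A compact sketch of a math/rand.Source backed by crypto/rand.Reader (illustrative, not necessarily the exact implementation):

    package example

    import (
        cryptorand "crypto/rand"
        "encoding/binary"
        mathrand "math/rand"
    )

    // cryptoSource is a math/rand.Source whose values come from
    // crypto/rand.Reader, so the convenience functions of math/rand can be
    // used with secure randomness.
    type cryptoSource struct{}

    func (cryptoSource) Seed(int64) {} // no-op; the source is not seedable

    func (cryptoSource) Int63() int64 {
        var buf [8]byte
        if _, err := cryptorand.Read(buf[:]); err != nil {
            panic("crypto/rand is unavailable: " + err.Error())
        }
        // Mask off the top bit so the result is a non-negative int64.
        return int64(binary.BigEndian.Uint64(buf[:]) &^ (1 << 63))
    }

    // secureIntn returns a random integer in [0, n) drawn from the secure
    // source, with the usual math/rand convenience.
    func secureIntn(n int) int {
        return mathrand.New(cryptoSource{}).Intn(n)
    }
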
Lars K.W. Gohlke
ebce5d07ac lib/connections: Shorten connection limiting lines
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3177
2016-05-24 21:57:56 +00:00
Audrius Butkevicius
915e1ac7de lib/model: Handle (?d) deletes of directories (fixes #3164)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3170
2016-05-23 23:32:08 +00:00
Lars K.W. Gohlke
b78bfc0a43 build.go: add gometalinter to lint runs
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3085
2016-05-23 21:19:08 +00:00
Lars K.W. Gohlke
30436741a7 build: Also vet and lint build script
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3159
2016-05-23 12:23:55 +00:00
Jakob Borg
98734375f2 cmd/syncthing: Correctly set, parse and compare modified time HTTP headers (fixes #3165)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3167
2016-05-23 12:16:14 +00:00
norgeous
37816e3818 gui: Remove extra href on folder panel titles
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3139
2016-05-22 16:17:33 +00:00
Jakob Borg
4bc2b3f369 gui: Set CSRF stuff earlier (fixes #3138)
We need to set these properties *before* Angular starts making requests,
and doing that from the response to a request is too late. The obvious
choice (to me) would be to use the angular $cookies service, but that
service isn't available until after initialization so we can't use it.
Instead, add a special file that is loaded by index.html and includes
the info we need before the JS app even starts running.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3152
2016-05-22 10:26:09 +00:00
Audrius Butkevicius
00be2bf18d lib/model: Track puller creation times (fixes #3145)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3150
2016-05-22 10:16:09 +00:00
Jakob Borg
44290a66b7 lib/model: Leave temp file in place when final rename fails (fixes #3146)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3148
2016-05-22 09:06:07 +00:00
Jakob Borg
f6cc344623 vendor: Replace github.com/jackpal/gateway with github.com/calmh/gateway (fixes #3142)
Switch to my forked version which contains a fix for this issue. I'll
track upstream in the future if things update there, and attempt to
contribute back fixes...

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3149
2016-05-22 09:04:27 +00:00
Jakob Borg
a89d487510 vendor: Bump github.com/AudriusButkevicius/go-nat-pmp 2016-05-22 17:46:36 +09:00
Jakob Borg
a0ec4467fd cmd/syncthing: Emit new RemoteDownloadProgress event to track remote download progress
Without this the summary service doesn't know to recalculate completion
percentage for remote devices when DownloadProgress messages come in.
That means that completion percentage isn't updated in the GUI while
transfers of large files are ongoing. With this change, it updates
correctly.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3144
2016-05-22 07:52:08 +00:00
Jakob Borg
7dddc0de9e Use atomics for statistics handling (fixes #45)
This is one of those rare cases where that's actually cleaner, I
think...
2016-05-22 09:24:11 +09:00
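
For illustration, counter handling with sync/atomic instead of a mutex might look roughly like this (the field names are made up):

    package example

    import "sync/atomic"

    type stats struct {
        queries int64
        answers int64
    }

    func (s *stats) query()  { atomic.AddInt64(&s.queries, 1) }
    func (s *stats) answer() { atomic.AddInt64(&s.answers, 1) }

    func (s *stats) snapshot() (queries, answers int64) {
        return atomic.LoadInt64(&s.queries), atomic.LoadInt64(&s.answers)
    }
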
Jakob Borg
7b43ba809b Lower case JSON fields are nicer 2016-04-30 10:53:42 +02:00
Audrius Butkevicius
175f65aabc Change v13 to v2
GitHub-Pull-Request: https://github.com/syncthing/discosrv/pull/41
2016-04-26 20:18:37 +00:00
Audrius Butkevicius
94a392144b Remove explicit relay handling
GitHub-Pull-Request: https://github.com/syncthing/discosrv/pull/40
2016-04-26 07:46:43 +00:00
Audrius Butkevicius
e9063c639a Fetch deps of deps X_x 2016-04-17 15:03:02 +01:00
Audrius Butkevicius
8d6dedc15b Here we go with gvt bugs 2016-04-17 14:57:31 +01:00
Audrius Butkevicius
1bc4c1a8ac Screw godep 2016-04-17 14:49:00 +01:00
Jakob Borg
6d3aae32bc Update vendored github.com/cznic/ql (fixes #34) 2016-04-16 12:59:53 +02:00
AudriusButkevicius
1a35c440e8 Add solaris support back in 2016-04-14 19:28:06 -04:00
Audrius Butkevicius
2c6c84ac61 Add font awesome 2016-04-14 22:31:56 +01:00
Audrius Butkevicius
bd666daf82 No value is less than zero 2016-04-14 22:26:31 +01:00
AudriusButkevicius
ca3831c4f5 Screw solaris 2016-04-14 17:21:44 -04:00
AudriusButkevicius
bbe0d34f43 Godeps 2016-04-14 17:19:56 -04:00
Audrius Butkevicius
dd364c962f Refactor javascript, always show table, add sorting 2016-04-14 22:01:25 +01:00
Audrius Butkevicius
50068b0b0f Add local geoip 2016-04-13 21:34:11 +01:00
Jakob Borg
96afcd90e3 Merge pull request #38 from kc1212/issue-37
Defer fd.Close() (fixes #37)
2016-03-21 14:29:47 +01:00
kc1212
ea61f8f597 Defer fd.Close() (fixes #37) 2016-03-21 01:07:51 +01:00
Jakob Borg
2e44473ce4 There is no "get dependencies" step 2016-03-18 14:42:12 +01:00
Jakob Borg
26d6969384 Add vendor/golang.org/x/net/context 2016-03-18 14:41:00 +01:00
Jakob Borg
2dbde224d9 Use Go 1.5 vendoring instead of Godeps 2016-03-18 14:38:08 +01:00
Jakob Borg
8d7ed9f8bf Add debug performance logging per request 2016-03-18 14:34:55 +01:00
Jakob Borg
1250850492 Must close result sets 2015-12-09 09:55:49 +01:00
Jakob Borg
ebfef15fb0 Add new dependencies 2015-12-08 09:19:16 +01:00
Jakob Borg
ad418abf91 Merge pull request #25 from syncthing/fixes
Fix ALL THE BUGS
2015-12-07 13:36:49 +01:00
Audrius Butkevicius
c7d51a26f6 Merge pull request #26 from canton7/feature/better-logging
Add more logging in the case of relaypoolsrv internal server error
2015-12-07 11:18:34 +00:00
Antony Male
2c01cc000e Add more logging in the case of relaypoolsrv internal server error
It's useful to know *why* relaypoolsrv returns an internal server error
2015-12-07 11:11:01 +00:00
Jakob Borg
175769b53e Update dependencies 2015-12-04 15:27:55 +01:00
Jakob Borg
22f193f042 Dependency update 2015-12-04 15:20:01 +01:00
Audrius Butkevicius
55da600433 Merge pull request #33 from syncthing/negcache
Set Retry-After header
2015-12-01 09:58:20 +00:00
Jakob Borg
96b5c2ae00 Set Retry-After header 2015-12-01 10:49:16 +01:00
Audrius Butkevicius
b24a9e57fd Update deps 2015-11-27 21:04:40 +00:00
Audrius Butkevicius
07722dc33d Hey look, had to check all code out on linux to fix the deps 2015-11-27 21:02:19 +00:00
Audrius Butkevicius
f39f816a98 Update godeps, reduce amount of time spent testing a relay. Goddamit godeps. 2015-11-23 21:33:22 +00:00
Audrius Butkevicius
bc5b95be8a Update packages, fix testutil. Goddamit godep. 2015-11-23 21:29:23 +00:00
Audrius Butkevicius
845f31b98f Add timeouts, deal with overlapping markers, add a table, increase circle radiuses 2015-11-22 22:47:48 +00:00
Audrius Butkevicius
89b6c32cee Merge pull request #6 from canton7/feature/fix-map
Fix a couple of issues with the relays map (geoip, 'data unavailable')
2015-11-22 14:27:58 +00:00
Antony Male
6ee36fe361 Fix a couple of issues with the relays map (geoip, 'data unavailable')
- Move to ipinfo.io for geoip, rather than Telize. Telize has been closed
   down. ipinfo.io has apparently got decent availability, and allows
   1,000 requests per day on the free tier. Since requests are made by the
   client, this should be more than enough (and the total across all clients
   should still be less than this).

 - Fix issue where one nonresponsive relay would cause 'data unavailable'
   to be shown for many relays. This was caused by the relay status
   promise not being correctly added to the list of things being waited
   for before the map was rendered. Any delayed relay status requests
   would therefore occur after the map was rendered, which was too late.
2015-11-22 14:10:29 +00:00
AudriusButkevicius
77572d0aee Typo 2015-11-21 18:58:52 +00:00
Audrius Butkevicius
37b79735bf Add signal handlers (fixes #15) 2015-11-21 00:35:38 +00:00
Audrius Butkevicius
9d9ad6de88 Update readme (fixes #16) 2015-11-21 00:35:38 +00:00
Audrius Butkevicius
20b925abec Limit number of connections (fixes #23) 2015-11-21 00:35:31 +00:00
Jakob Borg
7d00722bbf Ignores 2015-11-13 10:14:10 +01:00
Jakob Borg
4ea600d34e lru.Cache is not concurrency safe 2015-11-13 09:13:53 +01:00
Audrius Butkevicius
b61d7c2428 Merge pull request #5 from andyleap/patch-1
Rate infos are in kbps, not kBps
2015-11-10 10:05:54 -05:00
andyleap
bcc5d7c00f Rate infos are in kbps, not kBps 2015-11-10 09:52:07 -05:00
Jakob Borg
4a36cca703 We need a limit on the number of PostgreSQL connections 2015-11-09 15:11:21 +01:00
Audrius Butkevicius
f83ae630c1 Merge pull request #31 from syncthing/http
Allow plain HTTP serving behind a proxy
2015-11-08 12:26:05 -05:00
Jakob Borg
5894f35364 Correct example DSN (fixes #29) 2015-11-08 14:53:39 +01:00
Jakob Borg
c5acbf7e22 Allow plain HTTP serving behind a proxy 2015-11-07 16:01:31 +01:00
Audrius Butkevicius
567aaf87c6 Merge pull request #19 from canton7/feature/pool-logging
Enable extra logging in pool.go even when -debug not specified
2015-11-06 13:51:46 +00:00
Antony Male
e660d683a0 Enable extra logging in pool.go even when -debug not specified
Knowing why a relay server failed to join the pool can be important. This
is typically an issue which must be investigated after it occurred, so
having logs available is useful.

Running with -debug permanently enabled is impractical, due to the amount
of traffic that is generated, particularly when data is being transferred.

Logging is limited to at most one message per minute, although one message
per hour is more likely.
2015-11-06 12:58:44 +00:00
Jakob Borg
685306c386 Fix Query/Answer stats 2015-11-06 11:21:28 +01:00
Jakob Borg
5e04274d84 Reduce our patience with slow clients somewhat 2015-11-06 11:20:28 +01:00
Audrius Butkevicius
3357fded14 Merge pull request #18 from canton7/feature/contributer
Add Antony Male to CONTRIBUTORS
2015-11-05 23:24:04 +00:00
Antony Male
618fc54ac2 Add Antony Male to CONTRIBUTORS
A 1-line hack makes me a contributor, apparently :)
2015-11-05 23:10:14 +00:00
Audrius Butkevicius
339e058b64 Merge pull request #17 from canton7/feature/ext-address
Allow extAddress to be set from the command line
2015-11-05 21:43:12 +00:00
Antony Male
102027a343 Allow extAddress to be set from the command line
This allows relaysrv to listen on an unprivileged port, with port
forwarding directing traffic from 443, thus providing an alternative
to using setcap cap_net_bind_service=+ep
2015-11-05 21:26:58 +00:00
Jakob Borg
0d1df6bec3 Discovery server should print device ID of certificate at startup 2015-11-04 16:55:21 +00:00
Audrius Butkevicius
925f60d9c3 Add support for header holding IP address 2015-11-03 21:23:35 +00:00
Audrius Butkevicius
8b3f5fda07 Update relay parameters even if it already exists (fixes #3) 2015-10-31 17:27:43 +00:00
Audrius Butkevicius
ac17b2c584 Add missing space 2015-10-29 19:42:42 +00:00
Jakob Borg
c67c861dc6 Merge pull request #2 from syncthing/homepage
Add homepage
2015-10-29 17:45:21 +01:00
Audrius Butkevicius
09ba9e6259 Add homepage 2015-10-24 00:06:02 +01:00
Audrius Butkevicius
7775166477 URLs should have Go units 2015-10-23 22:24:53 +01:00
Audrius Butkevicius
0e167f5c24 Add CORS headers 2015-10-22 21:44:50 +01:00
Audrius Butkevicius
a310a32371 Add CORS headers 2015-10-22 21:44:29 +01:00
Audrius Butkevicius
c00e26be81 Fix units 2015-10-22 21:40:36 +01:00
Audrius Butkevicius
ce1a5cd2ce Expose provided by in status endpoint 2015-10-18 23:15:01 +01:00
Audrius Butkevicius
5c8a28d717 Add ability to advertise provider 2015-10-18 16:57:13 +01:00
Audrius Butkevicius
59c5d984af Change the URL 2015-10-17 00:07:01 +01:00
Audrius Butkevicius
c885903ff2 Change endpoint URL, as we might want to run some stats pages 2015-10-17 00:05:44 +01:00
Audrius Butkevicius
e4403ca396 Merge pull request #12 from rumpelsepp/systemd
Rename relaysrv binary, see #11
2015-10-10 14:26:12 +01:00
Stefan Tatschner
04912ea888 Rename relaysrv binary, see #11 2015-10-10 15:24:20 +02:00
Audrius Butkevicius
103238066d Merge pull request #11 from rumpelsepp/systemd
Jail the whole thing a bit more
2015-10-10 13:59:40 +01:00
Stefan Tatschner
7e4f08c033 Jail the whole thing a bit more
Add WorkingDirectory to create and use the certificates within
/var/lib/syncthing-relaysrv. Add RootDirectory to chroot(2) the whole
thing into that directory.
2015-10-10 14:56:47 +02:00
Jakob Borg
d47d82d8e1 Merge pull request #10 from syncthing/stuff
Add more info to status
2015-10-10 20:14:05 +09:00
Audrius Butkevicius
9b9b44dd65 Merge pull request #4 from rumpelsepp/systemd
Add systemd service file
2015-10-10 11:51:31 +01:00
Stefan Tatschner
dc5627a2ef Add systemd service file 2015-10-10 12:50:21 +02:00
Audrius Butkevicius
c1dfae1a6e Add options to status 2015-10-10 11:49:34 +01:00
Audrius Butkevicius
7b5e4ab426 Add uptime 2015-10-10 11:43:07 +01:00
Audrius Butkevicius
26a44068d8 Merge pull request #9 from syncthing/deps
Use vendored dependencies, new protocol location
2015-09-22 19:22:40 +01:00
Audrius Butkevicius
602b12dcf5 Merge pull request #23 from syncthing/deps
Use vendored dependencies, new relay/client location
2015-09-22 19:22:29 +01:00
Audrius Butkevicius
a91a836224 Merge pull request #1 from syncthing/deps
Use vendored dependencies, new relay/client location
2015-09-22 19:18:21 +01:00
Jakob Borg
969d7c802d Use vendored dependencies, new relay/client location 2015-09-22 19:55:12 +02:00
Jakob Borg
4e196d408a Use vendored dependencies, new protocol location 2015-09-22 19:54:20 +02:00
Jakob Borg
8450ab8dab Use vendored dependencies, new relay/client location 2015-09-22 19:51:40 +02:00
Jakob Borg
168889d999 Option for perm relay file, keep test cert in temp dir 2015-09-22 09:02:18 +02:00
Jakob Borg
e1339628d9 Default values tweak 2015-09-22 08:55:06 +02:00
Audrius Butkevicius
1ee190e844 Update README.md 2015-09-21 23:07:39 +01:00
Audrius Butkevicius
aadcfed17d Update README.md 2015-09-21 23:06:37 +01:00
Audrius Butkevicius
8f99f6eb66 Update README.md 2015-09-21 22:55:13 +01:00
Audrius Butkevicius
a51b948f45 Update README.md 2015-09-21 22:53:29 +01:00
Audrius Butkevicius
425f61cf34 Division by zero not good 2015-09-21 21:51:12 +00:00
Audrius Butkevicius
87cc2d2313 A bit more verbose 2015-09-21 22:33:29 +01:00
Audrius Butkevicius
0e2132ad3e Always print URI 2015-09-21 22:15:29 +01:00
Audrius Butkevicius
7d9df5abc6 Update README.md 2015-09-21 22:06:12 +01:00
Audrius Butkevicius
118cba4d9b Add build file 2015-09-21 20:53:01 +00:00
Jakob Borg
3b2adc9a3e /ping with empty response 2015-09-21 12:49:17 +02:00
Audrius Butkevicius
009b5bc72b Merge pull request #22 from syncthing/tls
New discovery protocol over HTTPS
2015-09-21 08:56:52 +01:00
Jakob Borg
9b541a28e6 New discovery protocol over HTTPS 2015-09-20 22:00:19 +02:00
Audrius Butkevicius
3533429563 Merge pull request #8 from syncthing/latency
Connected clients should know their own latency
2015-09-20 12:47:37 +01:00
Jakob Borg
500230af51 Connected clients should know their own latency 2015-09-20 13:40:24 +02:00
Jakob Borg
4a2cbc1715 Merge pull request #5 from syncthing/info
Tweaks
2015-09-14 16:20:08 +02:00
Audrius Butkevicius
61f8fdd9e8 Merge pull request #7 from syncthing/ping
Server should respond to ping
2015-09-14 12:49:12 +01:00
Jakob Borg
cfdca9f702 Server should respond to ping 2015-09-14 13:46:20 +02:00
Jakob Borg
cbe24d0c61 Errors should not increment for ever 2015-09-12 22:44:59 +02:00
Audrius Butkevicius
50f0da6793 Drop all sessions when we realize a node has gone away 2015-09-11 22:29:50 +01:00
Audrius Butkevicius
0b7ab0a095 Tweaks
1. Advertise relay server parameters so that clients can decide whether or not to connect
2. Generate certificate if it's not there.
2015-09-11 20:06:14 +01:00
AudriusButkevicius
3cacb48f3c Add IP based rate limiting, check if client IP matches advertised relay, reorder stuff 2015-09-07 18:13:50 +01:00
AudriusButkevicius
f6a58151cb Handle 403 2015-09-07 18:12:18 +01:00
AudriusButkevicius
3404393974 Join relay pool by default 2015-09-07 09:21:23 +01:00
AudriusButkevicius
6965812d79 Relays are matched by ip:port pairs 2015-09-07 09:14:14 +01:00
AudriusButkevicius
78fb7fe9f9 Implementation 2015-09-06 20:52:31 +01:00
AudriusButkevicius
24bcf6a088 Receive the invite, otherwise stop blocks, add extra arguments 2015-09-06 20:25:53 +01:00
AudriusButkevicius
25d0a363a8 Add a test method, fix nil pointer panic 2015-09-06 18:35:38 +01:00
Audrius Butkevicius
d7c8075862 Initial commit 2015-09-06 17:29:14 +01:00
AudriusButkevicius
041b97dd25 Use new method name 2015-09-02 22:02:17 +01:00
AudriusButkevicius
9b85a6fb7c Use a single socket for relaying 2015-09-02 21:35:52 +01:00
Jakob Borg
f407ff8861 Improve status reporter 2015-08-20 14:29:57 +02:00
Jakob Borg
a413b83c01 Fix broken connection close 2015-08-20 13:58:07 +02:00
Jakob Borg
81f4de965f Very basic status service 2015-08-20 12:59:44 +02:00
Jakob Borg
030b1f3467 I contribute stuff 2015-08-20 12:33:52 +02:00
Jakob Borg
b7a180114e Cleaner build 2015-08-20 12:33:11 +02:00
Jakob Borg
4c9a26dbca Cleaner build 2015-08-20 12:28:26 +02:00
Jakob Borg
e611828249 Merge branch 'v0.12'
* v0.12:
  Add relay support, add ql support
  Stats files
  Rewrite for a PostgreSQL backend
2015-08-20 12:20:09 +02:00
Audrius Butkevicius
e80a9b0075 Fix after package move 2015-08-19 20:49:34 +01:00
Jakob Borg
9370f9cae4 s/internal/lib/ 2015-08-09 09:39:28 +02:00
Audrius Butkevicius
604f2c9161 Connection errors are debug errors 2015-07-23 20:53:16 +01:00
Audrius Butkevicius
4d9ca822a7 Add relay support, add ql support 2015-07-23 19:12:40 +01:00
Audrius Butkevicius
d1f3d95c96 Add ability to lookup relay status 2015-07-22 22:34:05 +01:00
Audrius Butkevicius
efa0a06947 Merge pull request #1 from syncthing/review
Code review
2015-07-20 18:43:31 +01:00
Jakob Borg
11eb241c8f Style and minor fixes, client package 2015-07-20 14:04:34 +02:00
Jakob Borg
ebef239a06 Style and minor fixes, main package 2015-07-20 14:04:34 +02:00
Audrius Butkevicius
3d5507451b Merge pull request #2 from syncthing/ratelimit
Implement global and per session rate limiting
2015-07-20 12:57:55 +01:00
Jakob Borg
98a13204b2 Implement global and per session rate limiting 2015-07-20 13:37:11 +02:00
Jakob Borg
c318fdc94b Build script from discosrv 2015-07-20 12:11:06 +02:00
Audrius Butkevicius
d0229b62da Fix bugs 2015-07-17 22:04:02 +01:00
Audrius Butkevicius
37ad20a71b Change receiver type, add GoStringer 2015-07-17 21:49:45 +01:00
Audrius Butkevicius
fcd6ebb06e General cleanup 2015-07-17 20:17:49 +01:00
Audrius Butkevicius
dc9c86e3a1 Change EOL 2015-06-28 21:18:38 +01:00
Audrius Butkevicius
6bc6ae2d28 Do scheme validation in the client 2015-06-28 20:34:28 +01:00
Audrius Butkevicius
f8bedc55e5 Progress 2015-06-28 19:57:13 +01:00
Audrius Butkevicius
f376c79f7f Add initial code 2015-06-24 15:02:23 +01:00
Audrius Butkevicius
a98824b4cf Initial commit 2015-06-24 00:34:16 +01:00
Jakob Borg
860fbe48dd Stats files 2015-05-31 13:31:28 +02:00
Jakob Borg
9d06132743 Rewrite for a PostgreSQL backend 2015-03-25 15:37:00 +01:00
Jakob Borg
51eea3f90b GPL->MIT 2015-03-25 08:07:33 +01:00
Jakob Borg
27c70bdf07 Build moar platforms 2015-03-21 20:22:19 +01:00
Jakob Borg
f8abb8e541 Also build for solaris and freebsd 2015-02-11 13:40:58 +01:00
Jakob Borg
c8346d0581 Add a simple build script 2015-02-11 12:57:58 +01:00
Jakob Borg
7aaea6d005 Protocol has moved 2015-01-13 23:15:55 +01:00
Jakob Borg
e3911bacde Fix goleveldb API change 2014-12-11 12:53:00 +01:00
Jakob Borg
962eaa8a4b Handle error from XDR marshalling 2014-10-21 08:48:51 +02:00
Jakob Borg
bfba18fdcb Use WriteToUDP rather than WriteMsgUDP (fixes #4) 2014-10-07 10:50:09 +02:00
Jakob Borg
175669c61e GPL 2014-09-30 16:43:54 +02:00
Jakob Borg
3599b98dca Node -> Device here too 2014-09-28 22:39:38 +02:00
Jakob Borg
d1c3be3251 Use syncthing internal packages 2014-09-27 15:44:40 +02:00
Jakob Borg
b9f83c7780 Optionally log unknown packet data (for debugging) 2014-09-21 10:57:19 +02:00
Jakob Borg
cbf73ef29e Align cleaning routine in time 2014-09-08 12:43:30 +02:00
Jakob Borg
db6d3b495b Use persistent (leveldb) storage 2014-09-08 11:48:26 +02:00
Jakob Borg
6ea8e2525a Latest build badge should link to latest build 2014-08-20 12:22:23 +02:00
Jakob Borg
29296ec998 Link to build.syncthing.net instead 2014-08-13 13:52:53 +02:00
Jakob Borg
bdd265a1b1 Link to linux binary 2014-08-02 09:03:38 +02:00
Jakob Borg
2c9df7aad1 Update import paths, calmh -> syncthing 2014-08-02 08:25:17 +02:00
Jakob Borg
1fca248d4c Build status from drone.io 2014-07-31 12:37:18 +02:00
Jakob Borg
99081ea2a0 LICENSE & README 2014-07-30 22:15:16 +02:00
Jakob Borg
1f62247c7e New port number for new format global discovery 2014-07-13 09:36:22 +02:00
Jakob Borg
6415d1a6a5 Copyright wording 2014-07-13 01:07:49 +02:00
Jakob Borg
926b08c197 Refactor node ID handling, use check digits (fixes #269)
New node IDs contain four Luhn check digits and are grouped
differently. Code uses the NodeID type instead of string, so it's formatted
homogeneously everywhere.
2014-06-30 01:42:03 +02:00
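
For reference, a generic Luhn mod N check character over an arbitrary alphabet can be computed roughly like this (a sketch only; the actual device ID alphabet and grouping are defined elsewhere):

    package example

    import "strings"

    // luhnish returns the Luhn mod N check character for s over the given
    // alphabet, or false if s contains a character outside the alphabet.
    func luhnish(s, alphabet string) (byte, bool) {
        n := len(alphabet)
        factor := 2
        sum := 0
        for i := len(s) - 1; i >= 0; i-- {
            codepoint := strings.IndexByte(alphabet, s[i])
            if codepoint < 0 {
                return 0, false
            }
            addend := factor * codepoint
            factor = 3 - factor // alternate between 2 and 1
            sum += addend/n + addend%n
        }
        check := (n - sum%n) % n
        return alphabet[check], true
    }
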
Jakob Borg
aff41d0b08 discosrv: Tunable limiter settings 2014-06-27 22:39:03 +02:00
Jakob Borg
5d9c968614 Add license header 2014-06-01 22:50:14 +02:00
Jakob Borg
c020cf05e1 Fix discosrv build, build as part of all (fixes #257) 2014-05-22 08:46:19 +02:00
Jakob Borg
09e8d85b1e discosrv: Better statistics 2014-04-19 23:14:56 +02:00
Jakob Borg
4d3eb134a2 discosrv: Remove deprecated v1 support 2014-04-19 23:02:14 +02:00
Jakob Borg
b92df85893 discosrv: Clean up debug logging 2014-04-16 15:06:54 +02:00
Jakob Borg
545025ed2b discosrv: Remove duplicate logging of limiter cache entries 2014-04-04 12:00:52 +02:00
Jakob Borg
3158962506 discosrv: Source based rate limiting 2014-04-03 23:40:10 +02:00
Jakob Borg
c314f74de6 discosrv: Refactor handler loop 2014-04-03 23:40:03 +02:00
Jakob Borg
65615385e7 Rework XDR encoding 2014-02-20 17:42:17 +01:00
Jakob Borg
727f35b35b discosrv: Expire nodes, reduce debug logging 2014-02-17 09:23:37 +01:00
Jakob Borg
07ddf7e87b External discover 2013-12-22 21:35:05 -05:00
1731 changed files with 978665 additions and 22933 deletions

.gitignore (9 changed lines)

@@ -1,11 +1,11 @@
syncthing
!gui/syncthing
!debian/syncthing
!Godeps/_workspace/src/github.com/syncthing
/syncthing
/stdiscosrv
syncthing.exe
stdiscosrv.exe
*.tar.gz
*.zip
*.asc
*.deb
.jshintrc
coverage.out
files/pidx
@@ -16,3 +16,4 @@ syncthing.sig
RELEASE
deb
lib/auto/gui.files.go
snapcraft.yaml

AUTHORS (196 changed lines)

@@ -1,90 +1,110 @@
# This is the official list of Syncthing authors for copyright purposes.
# The format is:
#
# Name Name Name (nickname) <email1@example.com> <email2@example.com>
#
# The NICKS list is auto generated from this file.
Aaron Bieber <qbit@deftly.net>
Adam Piggott <aD@simplypeachy.co.uk> <simplypeachy@users.noreply.github.com>
Alessandro G. <alessandro.g89@gmail.com>
Alexander Graf <register-github@alex-graf.de>
Anderson Mesquita <andersonvom@gmail.com>
Andrew Dunham <andrew@du.nham.ca>
Antony Male <antony.male@gmail.com>
Arthur Axel fREW Schmidt <frew@afoolishmanifesto.com> <frioux@gmail.com>
Alexandre Viau <alexandre@alexandreviau.net> <aviau@debian.org>
Audrius Butkevicius <audrius.butkevicius@gmail.com>
Bart De Vries <devriesb@gmail.com>
Ben Curthoys <ben@bencurthoys.com>
Ben Schulz <ueomkail@gmail.com> <uok@users.noreply.github.com>
Ben Sidhom <bsidhom@gmail.com>
Benny Ng <benny.tpng@gmail.com>
Brandon Philips <brandon@ifup.org>
Brendan Long <self@brendanlong.com>
Brian R. Becker <brbecker@gmail.com>
Caleb Callaway <enlightened.despot@gmail.com>
Carsten Hagemann <moter8@gmail.com>
Cathryne Linenweaver <cathryne.linenweaver@gmail.com> <Cathryne@users.noreply.github.com>
Chris Howie <me@chrishowie.com>
Chris Joel <chris@scriptolo.gy>
Colin Kennedy <moshen.colin@gmail.com>
Daniel Bergmann <dan.arne.bergmann@gmail.com> <brgmnn@users.noreply.github.com>
Daniel Harte <daniel@harte.me> <daniel@danielharte.co.uk> <norgeous@users.noreply.github.com>
Daniel Martí <mvdan@mvdan.cc>
David Rimmer <dinosore@dbrsoftware.co.uk>
Denis A. <denisva@gmail.com>
Dennis Wilson <dw@risu.io>
Dominik Heidler <dominik@heidler.eu>
Elias Jarlebring <jarlebring@gmail.com>
Emil Hessman <emil@hessman.se>
Erik Meitner <e.meitner@willystreet.coop>
Federico Castagnini <federico.castagnini@gmail.com>
Felix Ableitner <me@nutomic.com>
Felix Unterpaintner <bigbear2nd@gmail.com>
Francois-Xavier Gsell <fxgsell@gmail.com>
Frank Isemann <frank@isemann.name>
Gilli Sigurdsson <gilli@vx.is>
Jaakko Hannikainen <jgke@jgke.fi>
Jacek Szafarkiewicz <szafar@linux.pl>
Jake Peterson <jake@acogdev.com>
Jakob Borg <jakob@nym.se>
James Patterson <jamespatterson@operamail.com> <jpjp@users.noreply.github.com>
Jaroslav Malec <dzardacz@gmail.com>
Jens Diemer <github.com@jensdiemer.de> <git@jensdiemer.de>
Jochen Voss <voss@seehuhn.de>
Johan Vromans <jvromans@squirrel.nl>
Karol Różycki <rozycki.karol@gmail.com>
Kelong Cong <kc04bc@gmx.com> <kc1212@users.noreply.github.com>
Ken'ichi Kamada <kamada@nanohz.org>
Kevin Allen <kma1660@gmail.com>
Lars K.W. Gohlke <lkwg82@gmx.de>
Laurent Etiemble <laurent.etiemble@gmail.com> <laurent.etiemble@monobjc.net>
Lode Hoste <zillode@zillode.be>
Lord Landon Agahnim <lordlandon@gmail.com>
Marc Laporte <marc@marclaporte.com> <marc@laporte.name>
Marc Pujol <kilburn@la3.org>
Marcin Dziadus <dziadus.marcin@gmail.com>
Mateusz Naściszewski <matin1111@wp.pl>
Matt Burke <mburke@amplify.com> <burkemw3@gmail.com>
Max Schulze <max.schulze@online.de> <kralo@users.noreply.github.com>
Michael Jephcote <rewt0r@gmx.com> <Rewt0r@users.noreply.github.com>
Michael Ploujnikov <ploujj@gmail.com>
Michael Tilli <pyfisch@gmail.com>
Nate Morrison <natemorrison@gmail.com>
Pascal Jungblut <github@pascalj.com> <mail@pascal-jungblut.com>
Peter Hoeg <peter@speartail.com>
Philippe Schommers <philippe@schommers.be>
Phill Luby <phill.luby@newredo.com>
Piotr Bejda <piotrb10@gmail.com>
Ryan Sullivan <kayoticsully@gmail.com>
Scott Klupfel <kluppy@going2blue.com>
Sergey Mishin <ralder@yandex.ru>
Stefan Kuntz <stefan.github@gmail.com> <Stefan.github@gmail.com>
Stefan Tatschner <stefan@sevenbyte.org> <rumpelsepp@sevenbyte.org>
Tim Abell <tim@timwise.co.uk>
Tobias Nygren <tnn@nygren.pp.se>
Tomas Cerveny <kozec@kozec.com>
Tully Robinson <tully@tojr.org>
Tyler Brazier <tyler@tylerbrazier.com>
Veeti Paananen <veeti.paananen@rojekti.fi>
Victor Buinsky <vix_booja@tut.by>
Vil Brekin <vilbrekin@gmail.com>
William A. Kennington III <william@wkennington.com>
Wulf Weich <wweich@users.noreply.github.com> <wweich@gmx.de>
Yannic A. <eipiminusone+github@gmail.com> <eipiminus1@users.noreply.github.com>
Aaron Bieber (qbit) <qbit@deftly.net>
Adam Piggott (ProactiveServices) <aD@simplypeachy.co.uk> <simplypeachy@users.noreply.github.com> <ProactiveServices@users.noreply.github.com>
Adel Qalieh (adelq) <aqalieh95@gmail.com> <adelq@users.noreply.github.com>
Alessandro G. (alessandro.g89) <alessandro.g89@gmail.com>
Alexander Graf (alex2108) <register-github@alex-graf.de>
Alexandre Viau (aviau) <alexandre@alexandreviau.net> <aviau@debian.org>
Anderson Mesquita (andersonvom) <andersonvom@gmail.com>
Andrew Dunham (andrew-d) <andrew@du.nham.ca>
Andrey D (scienmind) <scintertech@cryptolab.net>
Antoine Lamielle (0x010C) <antoine.lamielle@0x010c.fr> <gh@0x010c.fr>
Antony Male (canton7) <antony.male@gmail.com>
Arthur Axel fREW Schmidt (frioux) <frew@afoolishmanifesto.com> <frioux@gmail.com>
Audrius Butkevicius (AudriusButkevicius) <audrius.butkevicius@gmail.com>
Bart De Vries (mogwa1) <devriesb@gmail.com>
Ben Curthoys (bencurthoys) <ben@bencurthoys.com>
Ben Schulz (uok) <ueomkail@gmail.com> <uok@users.noreply.github.com>
Ben Sidhom (bsidhom) <bsidhom@gmail.com>
Benny Ng (tpng) <benny.tpng@gmail.com>
Brandon Philips (philips) <brandon@ifup.org>
Brendan Long (brendanlong) <self@brendanlong.com>
Brian R. Becker (brbecker) <brbecker@gmail.com>
Caleb Callaway (cqcallaw) <enlightened.despot@gmail.com>
Carsten Hagemann (Moter8) <moter8@gmail.com>
Cathryne Linenweaver (Cathryne) <cathryne.linenweaver@gmail.com> <Cathryne@users.noreply.github.com>
Cedric Staniewski (xduugu) <cedric@gmx.ca>
Chris Howie (cdhowie) <me@chrishowie.com>
Chris Joel (cdata) <chris@scriptolo.gy>
Colin Kennedy (moshen) <moshen.colin@gmail.com>
Daniel Bergmann (brgmnn) <dan.arne.bergmann@gmail.com> <brgmnn@users.noreply.github.com>
Daniel Harte (norgeous) <daniel@harte.me> <daniel@danielharte.co.uk> <norgeous@users.noreply.github.com>
Daniel Martí (mvdan) <mvdan@mvdan.cc>
David Rimmer (dinosore) <dinosore@dbrsoftware.co.uk>
Denis A. (dva) <denisva@gmail.com>
Dennis Wilson (snnd) <dw@risu.io>
Dominik Heidler (asdil12) <dominik@heidler.eu>
Elias Jarlebring (jarlebring) <jarlebring@gmail.com>
Emil Hessman (ceh) <emil@hessman.se>
Erik Meitner (WSGCSysadmin) <e.meitner@willystreet.coop>
Federico Castagnini (facastagnini) <federico.castagnini@gmail.com>
Felix Ableitner (Nutomic) <me@nutomic.com>
Felix Unterpaintner (bigbear2nd) <bigbear2nd@gmail.com>
Francois-Xavier Gsell (zukoo) <fxgsell@gmail.com>
Frank Isemann (fti7) <frank@isemann.name>
Gilli Sigurdsson (gillisig) <gilli@vx.is>
Heiko Zuerker (Smiley73) <heiko@zuerker.org>
Jaakko Hannikainen (jgke) <jgke@jgke.fi>
Jacek Szafarkiewicz (hadogenes) <szafar@linux.pl>
Jake Peterson (acogdev) <jake@acogdev.com>
Jakob Borg (calmh) <jakob@nym.se> <jakob@kastelo.net>
James Patterson (jpjp) <jamespatterson@operamail.com> <jpjp@users.noreply.github.com>
Jaroslav Malec (dzarda) <dzardacz@gmail.com>
Jens Diemer (jedie) <github.com@jensdiemer.de> <git@jensdiemer.de>
Jochen Voss (seehuhn) <voss@seehuhn.de>
Johan Vromans (sciurius) <jvromans@squirrel.nl>
Karol Różycki (krozycki) <rozycki.karol@gmail.com>
Kelong Cong (kc1212) <kc04bc@gmx.com> <kc1212@users.noreply.github.com>
Ken'ichi Kamada (kamadak) <kamada@nanohz.org>
Kevin Allen (ironmig) <kma1660@gmail.com>
Kevin White, Jr. (kwhite17) <kevinwhite1710@gmail.com>
Kurt Fitzner (Kudalufi) <kurt@va1der.ca>
Lars K.W. Gohlke (lkwg82) <lkwg82@gmx.de>
Laurent Etiemble (letiemble) <laurent.etiemble@gmail.com> <laurent.etiemble@monobjc.net>
Leo Arias (elopio) <yo@elopio.net>
Lode Hoste (Zillode) <zillode@zillode.be>
Lord Landon Agahnim (LordLandon) <lordlandon@gmail.com>
Majed Abdulaziz (majedev) <majed.alhajry@gmail.com>
Marc Laporte (marclaporte) <marc@marclaporte.com> <marc@laporte.name>
Marc Pujol (kilburn) <kilburn@la3.org>
Marcin Dziadus (marcindziadus) <dziadus.marcin@gmail.com>
Mark Pulford (mpx) <mark@kyne.com.au>
Mateusz Naściszewski (mateon1) <matin1111@wp.pl>
Matt Burke (burkemw3) <mburke@amplify.com> <burkemw3@gmail.com>
Max Schulze (kralo) <max.schulze@online.de> <kralo@users.noreply.github.com>
Michael Jephcote (Rewt0r) <rewt0r@gmx.com> <Rewt0r@users.noreply.github.com>
Michael Ploujnikov (plouj) <ploujj@gmail.com>
Michael Tilli (pyfisch) <pyfisch@gmail.com>
Nate Morrison (nrm21) <natemorrison@gmail.com>
Pascal Jungblut (pascalj) <github@pascalj.com> <mail@pascal-jungblut.com>
Peter Hoeg (peterhoeg) <peter@speartail.com>
Philippe Schommers (filoozoom) <philippe@schommers.be>
Phill Luby (pluby) <phill.luby@newredo.com>
Piotr Bejda (piobpl) <piotrb10@gmail.com>
Roman Zaynetdinov (zaynetro) <romanznet@gmail.com>
Ryan Sullivan (KayoticSully) <kayoticsully@gmail.com>
Scott Klupfel (kluppy) <kluppy@going2blue.com>
Sergey Mishin (ralder) <ralder@yandex.ru>
Simon Frei (imsodin) <freisim93@gmail.com>
Stefan Kuntz (Stefan-Code) <stefan.github@gmail.com> <Stefan.github@gmail.com>
Stefan Tatschner (rumpelsepp) <stefan@sevenbyte.org> <rumpelsepp@sevenbyte.org>
Tim Abell (timabell) <tim@timwise.co.uk>
Tim Howes (timhowes) <timhowes@berkeley.edu>
Tobias Nygren (tnn2) <tnn@nygren.pp.se>
Tomas Cerveny (kozec) <kozec@kozec.com>
Tully Robinson (tojrobinson) <tully@tojr.org>
Tyler Brazier (tylerbrazier) <tyler@tylerbrazier.com>
Unrud (Unrud) <unrud@openaliasbox.org> <Unrud@users.noreply.github.com>
Veeti Paananen (veeti) <veeti.paananen@rojekti.fi>
Victor Buinsky (buinsky) <vix_booja@tut.by>
Vil Brekin (Vilbrekin) <vilbrekin@gmail.com>
William A. Kennington III (wkennington) <william@wkennington.com>
Wulf Weich (wweich) <wweich@users.noreply.github.com> <wweich@gmx.de>
Xavier O. (damajor) <damajor@gmail.com>
Yannic A. (eipiminus1) <eipiminusone+github@gmail.com> <eipiminus1@users.noreply.github.com>


@@ -33,20 +33,31 @@ latest info on Transifex.
Every contribution is welcome. If you want to contribute but are unsure
where to start, any open issues are fair game! See the [Contribution
Guidelines](http://docs.syncthing.net/dev/contributing.html) for the full
Guidelines](https://docs.syncthing.net/dev/contributing.html) for the full
story on committing code.
## Contributing Documentation
Updates to the [documentation site](http://docs.syncthing.net/) can be
Updates to the [documentation site](https://docs.syncthing.net/) can be
made as pull requests on the [documentation
repository](https://github.com/syncthing/docs).
## Licensing
All contributions are made under the same MPLv2 license as the rest of
the project, except documentation, user interface text and translation
strings which are licensed under the Creative Commons Attribution 4.0
International License. You retain the copyright to code you have
written.
All contributions are made available under the same license as the already
existing material being contributed to. For most of the project and unless
otherwise stated this means MPLv2, but there are exceptions:
- Certain commands (under cmd/...) may have a separate license, indicated by
the presence of a LICENSE file in the corresponding directory.
- The documentation (man/...) is licensed under the Creative Commons
Attribution 4.0 International License.
- Projects under vendor/... are copyright by and licensed from their
respective original authors. Contributions should be made to the original
project, not here.
Regardless of the license in effect, you retain the copyright to your
contribution.

NICKS (215 lines changed)

@@ -1,89 +1,132 @@
# This file maps email addresses used in commits to nicks used in the changelog.
# It is auto generated from the AUTHORS file by script/authors.go.
acogdev <jake@acogdev.com>
alex2108 <register-github@alex-graf.de>
alessandro.g89 <alessandro.g89@gmail.com>
andersonvom <andersonvom@gmail.com>
andrew-d <andrew@du.nham.ca>
asdil12 <dominik@heidler.eu>
AudriusButkevicius <audrius.butkevicius@gmail.com>
aviau <alexandre@alexandreviau.net> <aviau@debian.org>
bencurthoys <ben@bencurthoys.com>
bigbear2nd <bigbear2nd@gmail.com>
brbecker <brbecker@gmail.com>
brendanlong <self@brendanlong.com>
brgmnn <dan.arne.bergmann@gmail.com> <brgmnn@users.noreply.github.com>
bsidhom <bsidhom@gmail.com>
buinsky <vix_booja@tut.by>
burkemw3 <mburke@amplify.com> <burkemw3@gmail.com>
calmh <jakob@nym.se>
canton7 <antony.male@gmail.com>
Cathryne <cathryne.linenweaver@gmail.com> <Cathryne@users.noreply.github.com>
cdata <chris@scriptolo.gy>
cdhowie <me@chrishowie.com>
ceh <emil@hessman.se>
cqcallaw <enlightened.despot@gmail.com>
dinosore <dinosore@dbrsoftware.co.uk>
dva <denisva@gmail.com>
dzarda <dzardacz@gmail.com>
eipiminus1 <eipiminusone+github@gmail.com> <eipiminus1@users.noreply.github.com>
facastagnini <federico.castagnini@gmail.com>
filoozoom <philippe@schommers.be>
frioux <frew@afoolishmanifesto.com> <frioux@gmail.com>
fti7 <frank@isemann.name>
gillisig <gilli@vx.is>
hadogenes <szafar@linux.pl>
ironmig <kma1660@gmail.com>
jarlebring <jarlebring@gmail.com>
jedie <github.com@jensdiemer.de> <git@jensdiemer.de>
jgke <jgke@jgke.fi>
jpjp <jamespatterson@operamail.com> <jpjp@users.noreply.github.com>
kamadak <kamada@nanohz.org>
KayoticSully <kayoticsully@gmail.com>
kilburn <kilburn@la3.org>
kluppy <kluppy@going2blue.com>
kozec <kozec@kozec.com>
kralo <max.schulze@online.de>
krozycki <rozycki.karol@gmail.com>
letiemble <laurent.etiemble@gmail.com> <laurent.etiemble@monobjc.net>
LordLandon <lordlandon@gmail.com>
lkwg82 <lkwg82@gmx.de>
marcindziadus <dziadus.marcin@gmail.com>
0x010C <antoine.lamielle@0x010c.fr>
0x010C <gh@0x010c.fr>
acogdev <jake@acogdev.com>
adelq <aqalieh95@gmail.com>
adelq <adelq@users.noreply.github.com>
alessandro.g89 <alessandro.g89@gmail.com>
alex2108 <register-github@alex-graf.de>
andersonvom <andersonvom@gmail.com>
andrew-d <andrew@du.nham.ca>
asdil12 <dominik@heidler.eu>
AudriusButkevicius <audrius.butkevicius@gmail.com>
aviau <alexandre@alexandreviau.net>
aviau <aviau@debian.org>
bencurthoys <ben@bencurthoys.com>
bigbear2nd <bigbear2nd@gmail.com>
brbecker <brbecker@gmail.com>
brendanlong <self@brendanlong.com>
brgmnn <dan.arne.bergmann@gmail.com>
brgmnn <brgmnn@users.noreply.github.com>
bsidhom <bsidhom@gmail.com>
buinsky <vix_booja@tut.by>
burkemw3 <mburke@amplify.com>
burkemw3 <burkemw3@gmail.com>
calmh <jakob@nym.se>
calmh <jakob@kastelo.net>
canton7 <antony.male@gmail.com>
Cathryne <cathryne.linenweaver@gmail.com>
Cathryne <Cathryne@users.noreply.github.com>
cdata <chris@scriptolo.gy>
cdhowie <me@chrishowie.com>
ceh <emil@hessman.se>
cqcallaw <enlightened.despot@gmail.com>
damajor <damajor@gmail.com>
dinosore <dinosore@dbrsoftware.co.uk>
dva <denisva@gmail.com>
dzarda <dzardacz@gmail.com>
eipiminus1 <eipiminusone+github@gmail.com>
eipiminus1 <eipiminus1@users.noreply.github.com>
elopio <yo@elopio.net>
facastagnini <federico.castagnini@gmail.com>
filoozoom <philippe@schommers.be>
frioux <frew@afoolishmanifesto.com>
frioux <frioux@gmail.com>
fti7 <frank@isemann.name>
gillisig <gilli@vx.is>
hadogenes <szafar@linux.pl>
imsodin <freisim93@gmail.com>
ironmig <kma1660@gmail.com>
jarlebring <jarlebring@gmail.com>
jedie <github.com@jensdiemer.de>
jedie <git@jensdiemer.de>
jgke <jgke@jgke.fi>
jpjp <jamespatterson@operamail.com>
jpjp <jpjp@users.noreply.github.com>
kamadak <kamada@nanohz.org>
KayoticSully <kayoticsully@gmail.com>
kc1212 <kc04bc@gmx.com>
kc1212 <kc1212@users.noreply.github.com>
kilburn <kilburn@la3.org>
kluppy <kluppy@going2blue.com>
kozec <kozec@kozec.com>
kralo <max.schulze@online.de>
kralo <kralo@users.noreply.github.com>
krozycki <rozycki.karol@gmail.com>
Kudalufi <kurt@va1der.ca>
kwhite17 <kevinwhite1710@gmail.com>
letiemble <laurent.etiemble@gmail.com>
letiemble <laurent.etiemble@monobjc.net>
lkwg82 <lkwg82@gmx.de>
LordLandon <lordlandon@gmail.com>
majedev <majed.alhajry@gmail.com>
marcindziadus <dziadus.marcin@gmail.com>
marclaporte <marc@marclaporte.com>
mateon1 <matin1111@wp.pl>
mogwa1 <devriesb@gmail.com>
moshen <moshen.colin@gmail.com>
Moter8 <moter8@gmail.com>
mvdan <mvdan@mvdan.cc>
norgeous <daniel@harte.me> <daniel@danielharte.co.uk> <norgeous@users.noreply.github.com>
nrm21 <natemorrison@gmail.com>
Nutomic <me@nutomic.com>
pascalj <github@pascalj.com> <mail@pascal-jungblut.com>
peterhoeg <peter@speartail.com>
philips <brandon@ifup.org>
piobpl <piotrb10@gmail.com>
plouj <ploujj@gmail.com>
pluby <phill.luby@newredo.com>
pyfisch <pyfisch@gmail.com>
qbit <qbit@deftly.net>
ralder <ralder@yandex.ru>
Rewt0r <rewt0r@gmx.com> <Rewt0r@users.noreply.github.com>
rumpelsepp <stefan@sevenbyte.org> <rumpelsepp@sevenbyte.org>
sciurius <jvromans@squirrel.nl>
seehuhn <voss@seehuhn.de>
simplypeachy <aD@simplypeachy.co.uk> <simplypeachy@users.noreply.github.com>
snnd <dw@risu.io>
Stefan-Code <stefan.github@gmail.com> <Stefan.github@gmail.com>
timabell <tim@timwise.co.uk>
tnn2 <tnn@nygren.pp.se>
tojrobinson <tully@tojr.org>
tpng <benny.tpng@gmail.com>
tylerbrazier <tyler@tylerbrazier.com>
uok <ueomkail@gmail.com> <uok@users.noreply.github.com>
veeti <veeti.paananen@rojekti.fi>
Vilbrekin <vilbrekin@gmail.com>
wkennington <william@wkennington.com>
wsgcsysadmin <e.meitner@willystreet.coo>
wweich <wweich@users.noreply.github.com> <wweich@gmx.de>
Zillode <zillode@zillode.be>
zukoo <fxgsell@gmail.com>
marclaporte <marc@laporte.name>
mateon1 <matin1111@wp.pl>
mogwa1 <devriesb@gmail.com>
moshen <moshen.colin@gmail.com>
Moter8 <moter8@gmail.com>
mpx <mark@kyne.com.au>
mvdan <mvdan@mvdan.cc>
norgeous <daniel@harte.me>
norgeous <daniel@danielharte.co.uk>
norgeous <norgeous@users.noreply.github.com>
nrm21 <natemorrison@gmail.com>
Nutomic <me@nutomic.com>
pascalj <github@pascalj.com>
pascalj <mail@pascal-jungblut.com>
peterhoeg <peter@speartail.com>
philips <brandon@ifup.org>
piobpl <piotrb10@gmail.com>
plouj <ploujj@gmail.com>
pluby <phill.luby@newredo.com>
ProactiveServices <aD@simplypeachy.co.uk>
ProactiveServices <simplypeachy@users.noreply.github.com>
ProactiveServices <ProactiveServices@users.noreply.github.com>
pyfisch <pyfisch@gmail.com>
qbit <qbit@deftly.net>
ralder <ralder@yandex.ru>
Rewt0r <rewt0r@gmx.com>
Rewt0r <Rewt0r@users.noreply.github.com>
rumpelsepp <stefan@sevenbyte.org>
rumpelsepp <rumpelsepp@sevenbyte.org>
scienmind <scintertech@cryptolab.net>
sciurius <jvromans@squirrel.nl>
seehuhn <voss@seehuhn.de>
Smiley73 <heiko@zuerker.org>
snnd <dw@risu.io>
Stefan-Code <stefan.github@gmail.com>
Stefan-Code <Stefan.github@gmail.com>
timabell <tim@timwise.co.uk>
timhowes <timhowes@berkeley.edu>
tnn2 <tnn@nygren.pp.se>
tojrobinson <tully@tojr.org>
tpng <benny.tpng@gmail.com>
tylerbrazier <tyler@tylerbrazier.com>
Unrud <unrud@openaliasbox.org>
Unrud <Unrud@users.noreply.github.com>
uok <ueomkail@gmail.com>
uok <uok@users.noreply.github.com>
veeti <veeti.paananen@rojekti.fi>
Vilbrekin <vilbrekin@gmail.com>
wkennington <william@wkennington.com>
WSGCSysadmin <e.meitner@willystreet.coop>
wweich <wweich@users.noreply.github.com>
wweich <wweich@gmx.de>
xduugu <cedric@gmx.ca>
zaynetro <romanznet@gmail.com>
Zillode <zillode@zillode.be>
zukoo <fxgsell@gmail.com>
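
The NICKS header above says the file is auto generated from the AUTHORS file by script/authors.go. As a rough, hypothetical sketch of that mapping (this is not the real generator), each AUTHORS entry of the form "Name (nick) <email> <email> ..." expands into one NICKS line per email address:

package main

import (
    "fmt"
    "regexp"
)

// Hypothetical sketch of the AUTHORS -> NICKS expansion; the real code in
// script/authors.go is not part of this diff.
var (
    authorRe = regexp.MustCompile(`^(.+?) \(([^)]+)\)((?: <[^>]+>)+)$`)
    emailRe  = regexp.MustCompile(`<[^>]+>`)
)

func nickLines(authorLine string) []string {
    m := authorRe.FindStringSubmatch(authorLine)
    if m == nil {
        return nil // no "(nick)" part, e.g. an old-style AUTHORS line
    }
    nick := m[2]
    var lines []string
    for _, email := range emailRe.FindAllString(m[3], -1) {
        lines = append(lines, nick+" "+email)
    }
    return lines
}

func main() {
    in := "Adel Qalieh (adelq) <aqalieh95@gmail.com> <adelq@users.noreply.github.com>"
    for _, l := range nickLines(in) {
        fmt.Println(l) // adelq <aqalieh95@gmail.com>, then adelq <adelq@users.noreply.github.com>
    }
}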


@@ -3,6 +3,7 @@
[![Latest Build (Official)](https://img.shields.io/jenkins/s/http/build.syncthing.net/syncthing.svg?style=flat-square&label=unix%20build)](http://build.syncthing.net/job/syncthing/lastBuild/)
[![API Documentation](https://img.shields.io/badge/api-Godoc-blue.svg?style=flat-square)](http://godoc.org/github.com/syncthing/syncthing)
[![MPLv2 License](https://img.shields.io/badge/license-MPLv2-blue.svg?style=flat-square)](https://www.mozilla.org/MPL/2.0/)
[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/88/badge)](https://bestpractices.coreinfrastructure.org/projects/88)
This is the Syncthing project which pursues the following goals:
@@ -27,6 +28,11 @@ There are a few examples for keeping Syncthing running in the background
on your system in [the etc directory][3]. There are also several [GUI
implementations][11] for Windows, Mac and Linux.
## Vote on features/bugs
We'd like to encourage you to [vote][12] on issues that matter to you.
This helps the team understand what are the biggest pain points for our users, and could potentially influence what is being worked on next.
## Getting in Touch
The first and best point of contact is the [Forum][8]. There is also an IRC
@@ -55,14 +61,15 @@ Please see the [Syncthing documentation site][6].
All code is licensed under the [MPLv2 License][7].
[1]: http://docs.syncthing.net/specs/bep-v1.html
[2]: http://docs.syncthing.net/intro/getting-started.html
[1]: https://docs.syncthing.net/specs/bep-v1.html
[2]: https://docs.syncthing.net/intro/getting-started.html
[3]: https://github.com/syncthing/syncthing/blob/master/etc
[4]: http://www.freenode.net/irc_servers.shtml
[5]: http://docs.syncthing.net/dev/building.html
[6]: http://docs.syncthing.net/
[5]: https://docs.syncthing.net/dev/building.html
[6]: https://docs.syncthing.net/
[7]: https://github.com/syncthing/syncthing/blob/master/LICENSE
[8]: https://forum.syncthing.net/
[9]: https://kiwiirc.com/client/irc.freenode.net/#syncthing
[10]: https://github.com/syncthing/syncthing/issues
[11]: http://docs.syncthing.net/users/contrib.html#gui-wrappers
[11]: https://docs.syncthing.net/users/contrib.html#gui-wrappers
[12]: https://www.bountysource.com/teams/syncthing/issues

Binary image assets (not shown): nine existing files shrank (12→9.8, 23→20, 3.4→2.2, 48→40, 6.4→4.9, 24→19, 47→38, 12→9.8 and 12→8.2 KiB) and four new image files were added (1.4, 1.9, 1.9 and 2.1 KiB; the last is assets/statusicons/sync.svg).

build.go (485 lines changed)

@@ -13,6 +13,7 @@ import (
"archive/zip"
"bytes"
"compress/gzip"
"errors"
"flag"
"fmt"
"io"
@@ -26,7 +27,6 @@ import (
"runtime"
"strconv"
"strings"
"syscall"
"text/template"
"time"
)
@@ -39,14 +39,16 @@ var (
version string
goVersion float64
race bool
debug = os.Getenv("BUILDDEBUG") != ""
)
type target struct {
name string
buildPkg string
binaryName string
archiveFiles []archiveFile
debianFiles []archiveFile
name string
buildPkg string
binaryName string
archiveFiles []archiveFile
installationFiles []archiveFile
tags []string
}
type archiveFile struct {
@@ -60,6 +62,7 @@ var targets = map[string]target{
// Only valid for the "build" and "install" commands as it lacks all
// the archive creation stuff.
buildPkg: "./cmd/...",
tags: []string{"purego"},
},
"syncthing": {
// The default target for "build", "install", "tar", "zip", "deb", etc.
@@ -73,7 +76,7 @@ var targets = map[string]target{
{src: "AUTHORS", dst: "AUTHORS.txt", perm: 0644},
// All files from etc/ and extra/ added automatically in init().
},
debianFiles: []archiveFile{
installationFiles: []archiveFile{
{src: "{{binary}}", dst: "deb/usr/bin/{{binary}}", perm: 0755},
{src: "README.md", dst: "deb/usr/share/doc/syncthing/README.txt", perm: 0644},
{src: "LICENSE", dst: "deb/usr/share/doc/syncthing/LICENSE.txt", perm: 0644},
@@ -91,10 +94,99 @@ var targets = map[string]target{
{src: "etc/linux-systemd/system/syncthing@.service", dst: "deb/lib/systemd/system/syncthing@.service", perm: 0644},
{src: "etc/linux-systemd/system/syncthing-resume.service", dst: "deb/lib/systemd/system/syncthing-resume.service", perm: 0644},
{src: "etc/linux-systemd/user/syncthing.service", dst: "deb/usr/lib/systemd/user/syncthing.service", perm: 0644},
{src: "etc/firewall-ufw/syncthing", dst: "deb/etc/ufw/applications.d/syncthing", perm: 0644},
},
},
"stdiscosrv": {
name: "stdiscosrv",
buildPkg: "./cmd/stdiscosrv",
binaryName: "stdiscosrv", // .exe will be added automatically for Windows builds
archiveFiles: []archiveFile{
{src: "{{binary}}", dst: "{{binary}}", perm: 0755},
{src: "cmd/stdiscosrv/README.md", dst: "README.txt", perm: 0644},
{src: "cmd/stdiscosrv/LICENSE", dst: "LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "AUTHORS.txt", perm: 0644},
},
installationFiles: []archiveFile{
{src: "{{binary}}", dst: "deb/usr/bin/{{binary}}", perm: 0755},
{src: "cmd/stdiscosrv/README.md", dst: "deb/usr/share/doc/stdiscosrv/README.txt", perm: 0644},
{src: "cmd/stdiscosrv/LICENSE", dst: "deb/usr/share/doc/stdiscosrv/LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "deb/usr/share/doc/stdiscosrv/AUTHORS.txt", perm: 0644},
{src: "man/stdiscosrv.1", dst: "deb/usr/share/man/man1/stdiscosrv.1", perm: 0644},
},
tags: []string{"purego"},
},
"strelaysrv": {
name: "strelaysrv",
buildPkg: "./cmd/strelaysrv",
binaryName: "strelaysrv", // .exe will be added automatically for Windows builds
archiveFiles: []archiveFile{
{src: "{{binary}}", dst: "{{binary}}", perm: 0755},
{src: "cmd/strelaysrv/README.md", dst: "README.txt", perm: 0644},
{src: "cmd/strelaysrv/LICENSE", dst: "LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "AUTHORS.txt", perm: 0644},
},
installationFiles: []archiveFile{
{src: "{{binary}}", dst: "deb/usr/bin/{{binary}}", perm: 0755},
{src: "cmd/strelaysrv/README.md", dst: "deb/usr/share/doc/strelaysrv/README.txt", perm: 0644},
{src: "cmd/strelaysrv/LICENSE", dst: "deb/usr/share/doc/strelaysrv/LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "deb/usr/share/doc/strelaysrv/AUTHORS.txt", perm: 0644},
{src: "man/strelaysrv.1", dst: "deb/usr/share/man/man1/strelaysrv.1", perm: 0644},
},
},
"strelaypoolsrv": {
name: "strelaypoolsrv",
buildPkg: "./cmd/strelaypoolsrv",
binaryName: "strelaypoolsrv", // .exe will be added automatically for Windows builds
archiveFiles: []archiveFile{
{src: "{{binary}}", dst: "{{binary}}", perm: 0755},
{src: "cmd/strelaypoolsrv/README.md", dst: "README.txt", perm: 0644},
{src: "cmd/strelaypoolsrv/LICENSE", dst: "LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "AUTHORS.txt", perm: 0644},
},
installationFiles: []archiveFile{
{src: "{{binary}}", dst: "deb/usr/bin/{{binary}}", perm: 0755},
{src: "cmd/strelaypoolsrv/README.md", dst: "deb/usr/share/doc/relaysrv/README.txt", perm: 0644},
{src: "cmd/strelaypoolsrv/LICENSE", dst: "deb/usr/share/doc/relaysrv/LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "deb/usr/share/doc/relaysrv/AUTHORS.txt", perm: 0644},
},
},
}
var (
// fast linters complete in a fraction of a second and might as well be
// run always as part of the build
fastLinters = []string{
"deadcode",
"golint",
"ineffassign",
"vet",
}
// slow linters take several seconds and are run only as part of the
// "metalint" command.
slowLinters = []string{
"gosimple",
"staticcheck",
"structcheck",
"unused",
"varcheck",
}
// Which parts of the tree to lint
lintDirs = []string{".", "./lib/...", "./cmd/..."}
// Messages to ignore
lintExcludes = []string{
".pb.go",
"should have comment",
"protocol.Vector composite literal uses unkeyed fields",
"cli.Requires composite literal uses unkeyed fields",
"Use DialContext instead", // Go 1.7
"os.SEEK_SET is deprecated", // Go 1.7
}
)
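// For orientation (an inferred example, not actual output): with the settings
// above, metalint(fastLinters, lintDirs) ends up invoking roughly
//
//     gometalinter --disable-all --concurrency=2 --deadline=300s \
//         --enable=deadcode --enable=golint --enable=ineffassign --enable=vet \
//         --exclude=".pb.go" --exclude="should have comment" ... . ./lib/... ./cmd/...
//
// See the gometalinter() helper further down for the exact argument handling.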
func init() {
// The "syncthing" target includes a few more files found in the "etc"
// and "extra" dirs.
@@ -106,17 +198,24 @@ func init() {
syncthingPkg.archiveFiles = append(syncthingPkg.archiveFiles, archiveFile{src: file, dst: file, perm: 0644})
}
for _, file := range listFiles("extra") {
syncthingPkg.debianFiles = append(syncthingPkg.debianFiles, archiveFile{src: file, dst: "deb/usr/share/doc/syncthing/" + filepath.Base(file), perm: 0644})
syncthingPkg.installationFiles = append(syncthingPkg.installationFiles, archiveFile{src: file, dst: "deb/usr/share/doc/syncthing/" + filepath.Base(file), perm: 0644})
}
targets["syncthing"] = syncthingPkg
}
const minGoVersion = 1.3
const minGoVersion = 1.5
func main() {
log.SetOutput(os.Stdout)
log.SetFlags(0)
if debug {
t0 := time.Now()
defer func() {
log.Println("... build completed in", time.Since(t0))
}()
}
if os.Getenv("GOPATH") == "" {
setGoPath()
}
@@ -130,44 +229,40 @@ func main() {
parseFlags()
checkArchitecture()
goVersion, _ = checkRequiredGoVersion()
// Invoking build.go with no parameters at all builds everything (incrementally),
// which is what you want for maximum error checking during development.
if flag.NArg() == 0 {
runCommand("install", targets["all"])
} else {
// with any command given but not a target, the target is
// "syncthing". So "go run build.go install" is "go run build.go install
// syncthing" etc.
targetName := "syncthing"
if flag.NArg() > 1 {
targetName = flag.Arg(1)
}
target, ok := targets[targetName]
if !ok {
log.Fatalln("Unknown target", target)
}
runCommand(flag.Arg(0), target)
}
}
func checkArchitecture() {
switch goarch {
case "386", "amd64", "arm", "arm64", "ppc64", "ppc64le":
break
default:
log.Printf("Unknown goarch %q; proceed with caution!", goarch)
}
}
goVersion, _ = checkRequiredGoVersion()
// Invoking build.go with no parameters at all is equivalent to "go run
// build.go install all" as that builds everything (incrementally),
// which is what you want for maximum error checking during development.
if flag.NArg() == 0 {
var tags []string
if noupgrade {
tags = []string{"noupgrade"}
}
install(targets["all"], tags)
vet("cmd", "lib")
lint("./cmd/...")
lint("./lib/...")
return
}
// Otherwise, with any command given but not a target, the target is
// "syncthing". So "go run build.go install" is "go run build.go install
// syncthing" etc.
targetName := "syncthing"
if flag.NArg() > 1 {
targetName = flag.Arg(1)
}
target, ok := targets[targetName]
if !ok {
log.Fatalln("Unknown target", target)
}
cmd := flag.Arg(0)
func runCommand(cmd string, target target) {
switch cmd {
case "setup":
setup()
@@ -178,6 +273,7 @@ func main() {
tags = []string{"noupgrade"}
}
install(target, tags)
metalint(fastLinters, lintDirs)
case "build":
var tags []string
@@ -185,6 +281,7 @@ func main() {
tags = []string{"noupgrade"}
}
build(target, tags)
metalint(fastLinters, lintDirs)
case "test":
test("./lib/...", "./cmd/...")
@@ -195,8 +292,8 @@ func main() {
case "assets":
rebuildAssets()
case "xdr":
xdr()
case "proto":
proto()
case "translate":
translate()
@@ -213,15 +310,24 @@ func main() {
case "deb":
buildDeb(target)
case "snap":
buildSnap(target)
case "clean":
clean()
case "vet":
vet("cmd", "lib")
metalint(fastLinters, lintDirs)
case "lint":
lint("./cmd/...")
lint("./lib/...")
metalint(fastLinters, lintDirs)
case "metalint":
metalint(fastLinters, lintDirs)
metalint(slowLinters, lintDirs)
case "version":
fmt.Println(getVersion())
default:
log.Fatalf("Unknown command %q", cmd)
@@ -273,12 +379,29 @@ func checkRequiredGoVersion() (float64, bool) {
}
func setup() {
runPrint("go", "get", "-v", "golang.org/x/tools/cmd/cover")
runPrint("go", "get", "-v", "golang.org/x/net/html")
runPrint("go", "get", "-v", "github.com/FiloSottile/gvt")
runPrint("go", "get", "-v", "github.com/axw/gocov/gocov")
runPrint("go", "get", "-v", "github.com/AlekSi/gocov-xml")
runPrint("go", "get", "-v", "bitbucket.org/tebeka/go2xunit")
packages := []string{
"github.com/alecthomas/gometalinter",
"github.com/AlekSi/gocov-xml",
"github.com/axw/gocov/gocov",
"github.com/FiloSottile/gvt",
"github.com/golang/lint/golint",
"github.com/gordonklaus/ineffassign",
"github.com/mdempsky/unconvert",
"github.com/mitchellh/go-wordwrap",
"github.com/opennota/check/cmd/...",
"github.com/tsenart/deadcode",
"golang.org/x/net/html",
"golang.org/x/tools/cmd/cover",
"honnef.co/go/simple/cmd/gosimple",
"honnef.co/go/staticcheck/cmd/staticcheck",
"honnef.co/go/unused/cmd/unused",
}
for _, pkg := range packages {
fmt.Println(pkg)
runPrint("go", "get", "-u", pkg)
}
runPrint("go", "install", "-v", "./vendor/github.com/gogo/protobuf/protoc-gen-gogofast")
}
func test(pkgs ...string) {
@@ -292,9 +415,9 @@ func test(pkgs ...string) {
}
if useRace {
runPrint("go", append([]string{"test", "-short", "-race", "-timeout", "60s"}, pkgs...)...)
runPrint("go", append([]string{"test", "-short", "-race", "-timeout", "60s", "-tags", "purego"}, pkgs...)...)
} else {
runPrint("go", append([]string{"test", "-short", "-timeout", "60s"}, pkgs...)...)
runPrint("go", append([]string{"test", "-short", "-timeout", "60s", "-tags", "purego"}, pkgs...)...)
}
}
@@ -306,6 +429,8 @@ func bench(pkgs ...string) {
func install(target target, tags []string) {
lazyRebuildAssets()
tags = append(target.tags, tags...)
cwd, err := os.Getwd()
if err != nil {
log.Fatal(err)
@@ -313,7 +438,7 @@ func install(target target, tags []string) {
os.Setenv("GOBIN", filepath.Join(cwd, "bin"))
args := []string{"install", "-v", "-ldflags", ldflags()}
if len(tags) > 0 {
args = append(args, "-tags", strings.Join(tags, ","))
args = append(args, "-tags", strings.Join(tags, " "))
}
if race {
args = append(args, "-race")
@@ -328,10 +453,12 @@ func install(target target, tags []string) {
func build(target target, tags []string) {
lazyRebuildAssets()
tags = append(target.tags, tags...)
rmr(target.binaryName)
args := []string{"build", "-i", "-v", "-ldflags", ldflags()}
if len(tags) > 0 {
args = append(args, "-tags", strings.Join(tags, ","))
args = append(args, "-tags", strings.Join(tags, " "))
}
if race {
args = append(args, "-race")
@@ -397,8 +524,8 @@ func buildDeb(target target) {
os.RemoveAll("deb")
// "goarch" here is set to whatever the Debian packages expect. We correct
// "it to what we actually know how to build and keep the Debian variant
// "name in "debarch".
// it to what we actually know how to build and keep the Debian variant
// name in "debarch".
debarch := goarch
switch goarch {
case "i386":
@@ -409,46 +536,69 @@ func buildDeb(target target) {
build(target, []string{"noupgrade"})
for i := range target.debianFiles {
target.debianFiles[i].src = strings.Replace(target.debianFiles[i].src, "{{binary}}", target.binaryName, 1)
target.debianFiles[i].dst = strings.Replace(target.debianFiles[i].dst, "{{binary}}", target.binaryName, 1)
for i := range target.installationFiles {
target.installationFiles[i].src = strings.Replace(target.installationFiles[i].src, "{{binary}}", target.binaryName, 1)
target.installationFiles[i].dst = strings.Replace(target.installationFiles[i].dst, "{{binary}}", target.binaryName, 1)
}
for _, af := range target.debianFiles {
for _, af := range target.installationFiles {
if err := copyFile(af.src, af.dst, af.perm); err != nil {
log.Fatal(err)
}
}
os.MkdirAll("deb/DEBIAN", 0755)
maintainer := "Syncthing Release Management <release@syncthing.net>"
debver := version
if strings.HasPrefix(debver, "v") {
debver = debver[1:]
}
runPrint("fpm", "-t", "deb", "-s", "dir", "-C", "deb",
"-n", "syncthing", "-v", debver, "-a", debarch,
"--vendor", maintainer, "-m", maintainer,
"-d", "libc6",
"-d", "procps", // because postinst script
"--url", "https://syncthing.net/",
"--description", "Open Source Continuous File Synchronization",
"--after-upgrade", "script/post-upgrade",
"--license", "MPL-2")
}
data := map[string]string{
"name": target.name,
"arch": debarch,
"version": version[1:],
"date": time.Now().Format(time.RFC1123),
func buildSnap(target target) {
os.RemoveAll("snap")
tmpl, err := template.ParseFiles("snapcraft.yaml.template")
if err != nil {
log.Fatal(err)
}
f, err := os.Create("snapcraft.yaml")
defer f.Close()
if err != nil {
log.Fatal(err)
}
debTemplateFiles := append(listFiles("debian/common"), listFiles("debian/"+target.name)...)
for _, file := range debTemplateFiles {
tpl, err := template.New(filepath.Base(file)).ParseFiles(file)
if err != nil {
log.Fatal(err)
}
outFile := filepath.Join("deb/DEBIAN", filepath.Base(file))
out, err := os.Create(outFile)
if err != nil {
log.Fatal(err)
}
if err := tpl.Execute(out, data); err != nil {
log.Fatal(err)
}
if err := out.Close(); err != nil {
log.Fatal(err)
}
info, _ := os.Lstat(file)
os.Chmod(outFile, info.Mode())
snaparch := goarch
if snaparch == "armhf" {
goarch = "arm"
}
snapver := version
if strings.HasPrefix(snapver, "v") {
snapver = snapver[1:]
}
snapgrade := "devel"
if matched, _ := regexp.MatchString(`^\d+\.\d+\.\d+$`, snapver); matched {
snapgrade = "stable"
}
err = tmpl.Execute(f, map[string]string{
"Version": snapver,
"Architecture": snaparch,
"Grade": snapgrade,
})
if err != nil {
log.Fatal(err)
}
runPrint("snapcraft", "clean")
build(target, []string{"noupgrade"})
runPrint("snapcraft")
}
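// snapcraft.yaml.template itself is not part of this diff; a hypothetical
// fragment matching the keys passed to Execute above could read
//
//     version: {{.Version}}
//     architectures: [{{.Architecture}}]
//     grade: {{.Grade}}
//
// (any field names beyond the three template keys are assumptions).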
func copyFile(src, dst string, perm os.FileMode) error {
@@ -484,16 +634,17 @@ func listFiles(dir string) []string {
func rebuildAssets() {
runPipe("lib/auto/gui.files.go", "go", "run", "script/genassets.go", "gui")
runPipe("cmd/strelaypoolsrv/auto/gui.go", "go", "run", "script/genassets.go", "cmd/strelaypoolsrv/gui")
}
func lazyRebuildAssets() {
if shouldRebuildAssets() {
if shouldRebuildAssets("lib/auto/gui.files.go", "gui") || shouldRebuildAssets("cmd/strelaypoolsrv/auto/gui.go", "cmd/strelaypoolsrv/auto/gui") {
rebuildAssets()
}
}
func shouldRebuildAssets() bool {
info, err := os.Stat("lib/auto/gui.files.go")
func shouldRebuildAssets(target, srcdir string) bool {
info, err := os.Stat(target)
if err != nil {
// If the file doesn't exist, we must rebuild it
return true
@@ -503,22 +654,23 @@ func shouldRebuildAssets() bool {
// so we should rebuild it.
currentBuild := info.ModTime()
assetsAreNewer := false
filepath.Walk("gui", func(path string, info os.FileInfo, err error) error {
stop := errors.New("no need to iterate further")
filepath.Walk(srcdir, func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}
if assetsAreNewer {
return nil
if info.ModTime().After(currentBuild) {
assetsAreNewer = true
return stop
}
assetsAreNewer = info.ModTime().After(currentBuild)
return nil
})
return assetsAreNewer
}
func xdr() {
runPrint("go", "generate", "./lib/discover", "./lib/db", "./lib/protocol", "./lib/relay/protocol")
func proto() {
runPrint("go", "generate", "./lib/...")
}
func translate() {
@@ -559,7 +711,9 @@ func ldflags() string {
func rmr(paths ...string) {
for _, path := range paths {
log.Println("rm -r", path)
if debug {
log.Println("rm -r", path)
}
os.RemoveAll(path)
}
}
@@ -658,15 +812,27 @@ func getBranchSuffix() string {
}
func buildStamp() int64 {
// If SOURCE_DATE_EPOCH is set, use that.
if s, _ := strconv.ParseInt(os.Getenv("SOURCE_DATE_EPOCH"), 10, 64); s > 0 {
return s
}
// Try to get the timestamp of the latest commit.
bs, err := runError("git", "show", "-s", "--format=%ct")
if err != nil {
// Fall back to "now".
return time.Now().Unix()
}
s, _ := strconv.ParseInt(string(bs), 10, 64)
return s
}
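// SOURCE_DATE_EPOCH is the reproducible-builds.org convention for pinning
// embedded timestamps. Illustrative usage (an assumption, not documented in
// this diff):
//
//     SOURCE_DATE_EPOCH=1483228800 go run build.go tar
//
// would make buildStamp() return 1483228800 regardless of git history or the
// current time.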
func buildUser() string {
if v := os.Getenv("BUILD_USER"); v != "" {
return v
}
u, err := user.Current()
if err != nil {
return "unknown-user"
@@ -675,6 +841,10 @@ func buildUser() string {
}
func buildHost() string {
if v := os.Getenv("BUILD_HOST"); v != "" {
return v
}
h, err := os.Hostname()
if err != nil {
return "unknown-host"
@@ -695,13 +865,26 @@ func archiveName(target target) string {
}
func runError(cmd string, args ...string) ([]byte, error) {
if debug {
t0 := time.Now()
log.Println("runError:", cmd, strings.Join(args, " "))
defer func() {
log.Println("... in", time.Since(t0))
}()
}
ecmd := exec.Command(cmd, args...)
bs, err := ecmd.CombinedOutput()
return bytes.TrimSpace(bs), err
}
func runPrint(cmd string, args ...string) {
log.Println(cmd, strings.Join(args, " "))
if debug {
t0 := time.Now()
log.Println("runPrint:", cmd, strings.Join(args, " "))
defer func() {
log.Println("... in", time.Since(t0))
}()
}
ecmd := exec.Command(cmd, args...)
ecmd.Stdout = os.Stdout
ecmd.Stderr = os.Stderr
@@ -712,7 +895,13 @@ func runPrint(cmd string, args ...string) {
}
func runPipe(file, cmd string, args ...string) {
log.Println(cmd, strings.Join(args, " "), ">", file)
if debug {
t0 := time.Now()
log.Println("runPipe:", cmd, strings.Join(args, " "))
defer func() {
log.Println("... in", time.Since(t0))
}()
}
fd, err := os.Create(file)
if err != nil {
log.Fatal(err)
@@ -801,7 +990,7 @@ func zipFile(out string, files []archiveFile) {
if err != nil {
log.Fatal(err)
}
fh.Name = f.dst
fh.Name = filepath.ToSlash(f.dst)
fh.Method = zip.Deflate
if strings.HasSuffix(f.dst, ".txt") {
@@ -842,45 +1031,6 @@ func zipFile(out string, files []archiveFile) {
}
}
func vet(dirs ...string) {
params := []string{"tool", "vet", "-all"}
params = append(params, dirs...)
bs, err := runError("go", params...)
if len(bs) > 0 {
log.Printf("%s", bs)
}
if err != nil {
if exitStatus(err) == 3 {
// Exit code 3, the "vet" tool is not installed
return
}
// A genuine error exit from the vet tool.
log.Fatal(err)
}
}
func lint(pkg string) {
bs, err := runError("golint", pkg)
if err != nil {
log.Println(`- No golint, not linting. Try "go get -u github.com/golang/lint/golint".`)
return
}
analCommentPolicy := regexp.MustCompile(`exported (function|method|const|type|var) [^\s]+ should have comment`)
for _, line := range bytes.Split(bs, []byte("\n")) {
if analCommentPolicy.Match(line) {
continue
}
if len(line) > 0 {
log.Printf("%s", line)
}
}
}
func macosCodesign(file string) {
if pass := os.Getenv("CODESIGN_KEYCHAIN_PASS"); pass != "" {
bs, err := runError("security", "unlock-keychain", "-p", pass)
@@ -900,12 +1050,59 @@ func macosCodesign(file string) {
}
}
func exitStatus(err error) int {
if err, ok := err.(*exec.ExitError); ok {
if ws, ok := err.ProcessState.Sys().(syscall.WaitStatus); ok {
return ws.ExitStatus()
func metalint(linters []string, dirs []string) {
ok := true
if isGometalinterInstalled() {
if !gometalinter(linters, dirs, lintExcludes...) {
ok = false
}
}
return -1
if !ok {
log.Fatal("Build succeeded, but there were lint warnings")
}
}
func isGometalinterInstalled() bool {
if _, err := runError("gometalinter", "--disable-all"); err != nil {
log.Println("gometalinter is not installed")
return false
}
return true
}
func gometalinter(linters []string, dirs []string, excludes ...string) bool {
params := []string{"--disable-all", "--concurrency=2", "--deadline=300s"}
for _, linter := range linters {
params = append(params, "--enable="+linter)
}
for _, exclude := range excludes {
params = append(params, "--exclude="+exclude)
}
for _, dir := range dirs {
params = append(params, dir)
}
bs, _ := runError("gometalinter", params...)
nerr := 0
lines := make(map[string]struct{})
for _, line := range strings.Split(string(bs), "\n") {
if line == "" {
continue
}
if _, ok := lines[line]; ok {
continue
}
log.Println(line)
if strings.Contains(line, "executable file not found") {
log.Println(` - Try "go run build.go setup" to install missing tools`)
}
lines[line] = struct{}{}
nerr++
}
return nerr == 0
}


@@ -104,7 +104,7 @@ case "${1:-default}" in
# For every package in the repo
for dir in $(go list ./lib/... ./cmd/...) ; do
# run the tests
GOPATH="$(pwd)/Godeps/_workspace:$GOPATH" go test -race -coverprofile=profile.out $dir
GOPATH="$(pwd)/Godeps/_workspace:$GOPATH" go test -coverprofile=profile.out $dir
if [ -f profile.out ] ; then
# and if there was test output, append it to coverage.out
grep -v "mode: " profile.out >> coverage.out
@@ -112,6 +112,11 @@ case "${1:-default}" in
fi
done
notCovered=$(egrep -c '\s0$' coverage.out)
total=$(wc -l coverage.out | awk '{print $1}')
coverPct=$(awk "BEGIN{print (1 - $notCovered / $total) * 100}")
echo "Total coverage is $coverPct%"
gocov convert coverage.out | gocov-xml > coverage.xml
# This is usually run from within Jenkins. If it is, we need to
@@ -131,55 +136,6 @@ case "${1:-default}" in
go2xunit -output tests.xml -fail < tests.out
;;
docker-all)
img=${DOCKERIMG:-syncthing/build:latest}
docker run --rm -h syncthing-builder -u $(id -u) -t \
-v $(pwd):/go/src/github.com/syncthing/syncthing \
-w /go/src/github.com/syncthing/syncthing \
-e "STTRACE=$STTRACE" \
"$img" \
sh -c './build.sh clean \
&& ./build.sh test-cov \
&& ./build.sh bench \
&& ./build.sh all'
;;
docker-test)
img=${DOCKERIMG:-syncthing/build:latest}
docker run --rm -h syncthing-builder -u $(id -u) -t \
-v $(pwd):/go/src/github.com/syncthing/syncthing \
-w /go/src/github.com/syncthing/syncthing \
-e "STTRACE=$STTRACE" \
"$img" \
sh -euxc './build.sh clean \
&& go run build.go -race \
&& export GOPATH=$(pwd)/Godeps/_workspace:$GOPATH \
&& cd test \
&& go test -tags integration -v -timeout 90m -short \
&& git clean -fxd .'
;;
docker-lint)
img=${DOCKERIMG:-syncthing/build:latest}
docker run --rm -h syncthing-builder -u $(id -u) -t \
-v $(pwd):/go/src/github.com/syncthing/syncthing \
-w /go/src/github.com/syncthing/syncthing \
-e "STTRACE=$STTRACE" \
"$img" \
sh -euxc 'go run build.go lint'
;;
docker-vet)
img=${DOCKERIMG:-syncthing/build:latest}
docker run --rm -h syncthing-builder -u $(id -u) -t \
-v $(pwd):/go/src/github.com/syncthing/syncthing \
-w /go/src/github.com/syncthing/syncthing \
-e "STTRACE=$STTRACE" \
"$img" \
sh -euxc 'go run build.go vet'
;;
*)
echo "Unknown build command $1"
;;

cmd/stcli/LICENSE (new file, 19 lines)

@@ -0,0 +1,19 @@
Copyright (C) 2014 Audrius Butkevičius
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
- The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

cmd/stcli/README.md (new file, 10 lines)

@@ -0,0 +1,10 @@
syncthing-cli
=============
[![Latest Build](http://img.shields.io/jenkins/s/http/build.syncthing.net/syncthing-cli.svg?style=flat-square)](http://build.syncthing.net/job/syncthing-cli/lastBuild/)
A CLI that talks to the Syncthing REST interface.
`go get github.com/syncthing/syncthing-cli`
Or download the [latest build](http://build.syncthing.net/job/syncthing-cli/lastSuccessfulBuild/artifact/).
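
For orientation, a hypothetical invocation of the CLI as relocated to cmd/stcli in this change (the binary name and local address are assumptions; the global flags come from client.go and the subcommands from the cmd_*.go files below):

stcli --endpoint 127.0.0.1:8384 --apikey YOUR-API-KEY devices list
stcli --endpoint 127.0.0.1:8384 --apikey YOUR-API-KEY folders list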

cmd/stcli/client.go (new file, 115 lines)

@@ -0,0 +1,115 @@
// Copyright (C) 2014 Audrius Butkevičius
package main
import (
"bytes"
"crypto/tls"
"net/http"
"strings"
"github.com/AudriusButkevicius/cli"
)
type APIClient struct {
httpClient http.Client
endpoint string
apikey string
username string
password string
id string
csrf string
}
var instance *APIClient
func getClient(c *cli.Context) *APIClient {
if instance != nil {
return instance
}
endpoint := c.GlobalString("endpoint")
if !strings.HasPrefix(endpoint, "http") {
endpoint = "http://" + endpoint
}
httpClient := http.Client{
Transport: &http.Transport{
TLSClientConfig: &tls.Config{
InsecureSkipVerify: c.GlobalBool("insecure"),
},
},
}
client := APIClient{
httpClient: httpClient,
endpoint: endpoint,
apikey: c.GlobalString("apikey"),
username: c.GlobalString("username"),
password: c.GlobalString("password"),
}
if client.apikey == "" {
request, err := http.NewRequest("GET", client.endpoint, nil)
die(err)
response := client.handleRequest(request)
client.id = response.Header.Get("X-Syncthing-ID")
if client.id == "" {
die("Failed to get device ID")
}
for _, item := range response.Cookies() {
if item.Name == "CSRF-Token-"+client.id[:5] {
client.csrf = item.Value
goto csrffound
}
}
die("Failed to get CSRF token")
csrffound:
}
instance = &client
return &client
}
func (client *APIClient) handleRequest(request *http.Request) *http.Response {
if client.apikey != "" {
request.Header.Set("X-API-Key", client.apikey)
}
if client.username != "" || client.password != "" {
request.SetBasicAuth(client.username, client.password)
}
if client.csrf != "" {
request.Header.Set("X-CSRF-Token-"+client.id[:5], client.csrf)
}
response, err := client.httpClient.Do(request)
die(err)
if response.StatusCode == 404 {
die("Invalid endpoint or API call")
} else if response.StatusCode == 401 {
die("Invalid username or password")
} else if response.StatusCode == 403 {
if client.apikey == "" {
die("Invalid CSRF token")
}
die("Invalid API key")
} else if response.StatusCode != 200 {
body := strings.TrimSpace(string(responseToBArray(response)))
if body != "" {
die(body)
}
die("Unknown HTTP status returned: " + response.Status)
}
return response
}
func httpGet(c *cli.Context, url string) *http.Response {
client := getClient(c)
request, err := http.NewRequest("GET", client.endpoint+"/rest/"+url, nil)
die(err)
return client.handleRequest(request)
}
func httpPost(c *cli.Context, url string, body string) *http.Response {
client := getClient(c)
request, err := http.NewRequest("POST", client.endpoint+"/rest/"+url, bytes.NewBufferString(body))
die(err)
return client.handleRequest(request)
}
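
client.go above handles the REST plumbing (API key header, basic auth, CSRF cookie). A minimal standalone sketch of the same API-key path in plain net/http follows; the address and key are placeholders, and /rest/system/config/insync is the endpoint cmd_general.go queries:

package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
)

func main() {
    // Assumed local GUI address and API key; adjust for your instance.
    req, err := http.NewRequest("GET", "http://127.0.0.1:8384/rest/system/config/insync", nil)
    if err != nil {
        panic(err)
    }
    // Same header client.go sets in handleRequest when an API key is configured.
    req.Header.Set("X-API-Key", "YOUR-API-KEY")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    body, _ := ioutil.ReadAll(resp.Body)
    fmt.Println(resp.Status, string(body))
}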

cmd/stcli/cmd_devices.go (new file, 188 lines)

@@ -0,0 +1,188 @@
// Copyright (C) 2014 Audrius Butkevičius
package main
import (
"fmt"
"strings"
"github.com/AudriusButkevicius/cli"
"github.com/syncthing/syncthing/lib/config"
)
func init() {
cliCommands = append(cliCommands, cli.Command{
Name: "devices",
HideHelp: true,
Usage: "Device command group",
Subcommands: []cli.Command{
{
Name: "list",
Usage: "List registered devices",
Requires: &cli.Requires{},
Action: devicesList,
},
{
Name: "add",
Usage: "Add a new device",
Requires: &cli.Requires{"device id", "device name?"},
Action: devicesAdd,
},
{
Name: "remove",
Usage: "Remove an existing device",
Requires: &cli.Requires{"device id"},
Action: devicesRemove,
},
{
Name: "get",
Usage: "Get a property of a device",
Requires: &cli.Requires{"device id", "property"},
Action: devicesGet,
},
{
Name: "set",
Usage: "Set a property of a device",
Requires: &cli.Requires{"device id", "property", "value..."},
Action: devicesSet,
},
},
})
}
func devicesList(c *cli.Context) {
cfg := getConfig(c)
first := true
writer := newTableWriter()
for _, device := range cfg.Devices {
if !first {
fmt.Fprintln(writer)
}
fmt.Fprintln(writer, "ID:\t", device.DeviceID, "\t")
fmt.Fprintln(writer, "Name:\t", device.Name, "\t(name)")
fmt.Fprintln(writer, "Address:\t", strings.Join(device.Addresses, " "), "\t(address)")
fmt.Fprintln(writer, "Compression:\t", device.Compression, "\t(compression)")
fmt.Fprintln(writer, "Certificate name:\t", device.CertName, "\t(certname)")
fmt.Fprintln(writer, "Introducer:\t", device.Introducer, "\t(introducer)")
first = false
}
writer.Flush()
}
func devicesAdd(c *cli.Context) {
nid := c.Args()[0]
id := parseDeviceID(nid)
newDevice := config.DeviceConfiguration{
DeviceID: id,
Name: nid,
Addresses: []string{"dynamic"},
}
if len(c.Args()) > 1 {
newDevice.Name = c.Args()[1]
}
if len(c.Args()) > 2 {
addresses := c.Args()[2:]
for _, item := range addresses {
if item == "dynamic" {
continue
}
validAddress(item)
}
newDevice.Addresses = addresses
}
cfg := getConfig(c)
for _, device := range cfg.Devices {
if device.DeviceID == id {
die("Device " + nid + " already exists")
}
}
cfg.Devices = append(cfg.Devices, newDevice)
setConfig(c, cfg)
}
func devicesRemove(c *cli.Context) {
nid := c.Args()[0]
id := parseDeviceID(nid)
if nid == getMyID(c) {
die("Cannot remove yourself")
}
cfg := getConfig(c)
for i, device := range cfg.Devices {
if device.DeviceID == id {
last := len(cfg.Devices) - 1
cfg.Devices[i] = cfg.Devices[last]
cfg.Devices = cfg.Devices[:last]
setConfig(c, cfg)
return
}
}
die("Device " + nid + " not found")
}
func devicesGet(c *cli.Context) {
nid := c.Args()[0]
id := parseDeviceID(nid)
arg := c.Args()[1]
cfg := getConfig(c)
for _, device := range cfg.Devices {
if device.DeviceID != id {
continue
}
switch strings.ToLower(arg) {
case "name":
fmt.Println(device.Name)
case "address":
fmt.Println(strings.Join(device.Addresses, "\n"))
case "compression":
fmt.Println(device.Compression.String())
case "certname":
fmt.Println(device.CertName)
case "introducer":
fmt.Println(device.Introducer)
default:
die("Invalid property: " + arg + "\nAvailable properties: name, address, compression, certname, introducer")
}
return
}
die("Device " + nid + " not found")
}
func devicesSet(c *cli.Context) {
nid := c.Args()[0]
id := parseDeviceID(nid)
arg := c.Args()[1]
config := getConfig(c)
for i, device := range config.Devices {
if device.DeviceID != id {
continue
}
switch strings.ToLower(arg) {
case "name":
config.Devices[i].Name = strings.Join(c.Args()[2:], " ")
case "address":
for _, item := range c.Args()[2:] {
if item == "dynamic" {
continue
}
validAddress(item)
}
config.Devices[i].Addresses = c.Args()[2:]
case "compression":
err := config.Devices[i].Compression.UnmarshalText([]byte(c.Args()[2]))
die(err)
case "certname":
config.Devices[i].CertName = strings.Join(c.Args()[2:], " ")
case "introducer":
config.Devices[i].Introducer = parseBool(c.Args()[2])
default:
die("Invalid property: " + arg + "\nAvailable properties: name, address, compression, certname, introducer")
}
setConfig(c, config)
return
}
die("Device " + nid + " not found")
}

cmd/stcli/cmd_errors.go (new file, 67 lines)

@@ -0,0 +1,67 @@
// Copyright (C) 2014 Audrius Butkevičius
package main
import (
"encoding/json"
"fmt"
"strings"
"github.com/AudriusButkevicius/cli"
)
func init() {
cliCommands = append(cliCommands, cli.Command{
Name: "errors",
HideHelp: true,
Usage: "Error command group",
Subcommands: []cli.Command{
{
Name: "show",
Usage: "Show pending errors",
Requires: &cli.Requires{},
Action: errorsShow,
},
{
Name: "push",
Usage: "Push an error to active clients",
Requires: &cli.Requires{"error message..."},
Action: errorsPush,
},
{
Name: "clear",
Usage: "Clear pending errors",
Requires: &cli.Requires{},
Action: wrappedHTTPPost("system/error/clear"),
},
},
})
}
func errorsShow(c *cli.Context) {
response := httpGet(c, "system/error")
var data map[string][]map[string]interface{}
json.Unmarshal(responseToBArray(response), &data)
writer := newTableWriter()
for _, item := range data["errors"] {
time := item["time"].(string)[:19]
time = strings.Replace(time, "T", " ", 1)
err := item["error"].(string)
err = strings.TrimSpace(err)
fmt.Fprintln(writer, time+":\t"+err)
}
writer.Flush()
}
func errorsPush(c *cli.Context) {
err := strings.Join(c.Args(), " ")
response := httpPost(c, "system/error", strings.TrimSpace(err))
if response.StatusCode != 200 {
err = fmt.Sprint("Failed to push error\nStatus code: ", response.StatusCode)
body := string(responseToBArray(response))
if body != "" {
err += "\nBody: " + body
}
die(err)
}
}

cmd/stcli/cmd_folders.go (new file, 351 lines)

@@ -0,0 +1,351 @@
// Copyright (C) 2014 Audrius Butkevičius
package main
import (
"fmt"
"path/filepath"
"strings"
"github.com/AudriusButkevicius/cli"
"github.com/syncthing/syncthing/lib/config"
)
func init() {
cliCommands = append(cliCommands, cli.Command{
Name: "folders",
HideHelp: true,
Usage: "Folder command group",
Subcommands: []cli.Command{
{
Name: "list",
Usage: "List available folders",
Requires: &cli.Requires{},
Action: foldersList,
},
{
Name: "add",
Usage: "Add a new folder",
Requires: &cli.Requires{"folder id", "directory"},
Action: foldersAdd,
},
{
Name: "remove",
Usage: "Remove an existing folder",
Requires: &cli.Requires{"folder id"},
Action: foldersRemove,
},
{
Name: "override",
Usage: "Override changes from other nodes for a master folder",
Requires: &cli.Requires{"folder id"},
Action: foldersOverride,
},
{
Name: "get",
Usage: "Get a property of a folder",
Requires: &cli.Requires{"folder id", "property"},
Action: foldersGet,
},
{
Name: "set",
Usage: "Set a property of a folder",
Requires: &cli.Requires{"folder id", "property", "value..."},
Action: foldersSet,
},
{
Name: "unset",
Usage: "Unset a property of a folder",
Requires: &cli.Requires{"folder id", "property"},
Action: foldersUnset,
},
{
Name: "devices",
Usage: "Folder devices command group",
HideHelp: true,
Subcommands: []cli.Command{
{
Name: "list",
Usage: "List of devices which the folder is shared with",
Requires: &cli.Requires{"folder id"},
Action: foldersDevicesList,
},
{
Name: "add",
Usage: "Share a folder with a device",
Requires: &cli.Requires{"folder id", "device id"},
Action: foldersDevicesAdd,
},
{
Name: "remove",
Usage: "Unshare a folder with a device",
Requires: &cli.Requires{"folder id", "device id"},
Action: foldersDevicesRemove,
},
{
Name: "clear",
Usage: "Unshare a folder with all devices",
Requires: &cli.Requires{"folder id"},
Action: foldersDevicesClear,
},
},
},
},
})
}
func foldersList(c *cli.Context) {
cfg := getConfig(c)
first := true
writer := newTableWriter()
for _, folder := range cfg.Folders {
if !first {
fmt.Fprintln(writer)
}
fmt.Fprintln(writer, "ID:\t", folder.ID, "\t")
fmt.Fprintln(writer, "Path:\t", folder.RawPath, "\t(directory)")
fmt.Fprintln(writer, "Folder type:\t", folder.Type, "\t(type)")
fmt.Fprintln(writer, "Ignore permissions:\t", folder.IgnorePerms, "\t(permissions)")
fmt.Fprintln(writer, "Rescan interval in seconds:\t", folder.RescanIntervalS, "\t(rescan)")
if folder.Versioning.Type != "" {
fmt.Fprintln(writer, "Versioning:\t", folder.Versioning.Type, "\t(versioning)")
for key, value := range folder.Versioning.Params {
fmt.Fprintf(writer, "Versioning %s:\t %s \t(versioning-%s)\n", key, value, key)
}
}
first = false
}
writer.Flush()
}
func foldersAdd(c *cli.Context) {
cfg := getConfig(c)
abs, err := filepath.Abs(c.Args()[1])
die(err)
folder := config.FolderConfiguration{
ID: c.Args()[0],
RawPath: filepath.Clean(abs),
}
cfg.Folders = append(cfg.Folders, folder)
setConfig(c, cfg)
}
func foldersRemove(c *cli.Context) {
cfg := getConfig(c)
rid := c.Args()[0]
for i, folder := range cfg.Folders {
if folder.ID == rid {
last := len(cfg.Folders) - 1
cfg.Folders[i] = cfg.Folders[last]
cfg.Folders = cfg.Folders[:last]
setConfig(c, cfg)
return
}
}
die("Folder " + rid + " not found")
}
func foldersOverride(c *cli.Context) {
cfg := getConfig(c)
rid := c.Args()[0]
for _, folder := range cfg.Folders {
if folder.ID == rid && folder.Type == config.FolderTypeSendOnly {
response := httpPost(c, "db/override", "")
if response.StatusCode != 200 {
err := fmt.Sprint("Failed to override changes\nStatus code: ", response.StatusCode)
body := string(responseToBArray(response))
if body != "" {
err += "\nBody: " + body
}
die(err)
}
return
}
}
die("Folder " + rid + " not found or folder not master")
}
func foldersGet(c *cli.Context) {
cfg := getConfig(c)
rid := c.Args()[0]
arg := strings.ToLower(c.Args()[1])
for _, folder := range cfg.Folders {
if folder.ID != rid {
continue
}
if strings.HasPrefix(arg, "versioning-") {
arg = arg[11:]
value, ok := folder.Versioning.Params[arg]
if ok {
fmt.Println(value)
return
}
die("Versioning property " + c.Args()[1][11:] + " not found")
}
switch arg {
case "directory":
fmt.Println(folder.RawPath)
case "type":
fmt.Println(folder.Type)
case "permissions":
fmt.Println(folder.IgnorePerms)
case "rescan":
fmt.Println(folder.RescanIntervalS)
case "versioning":
if folder.Versioning.Type != "" {
fmt.Println(folder.Versioning.Type)
}
default:
die("Invalid property: " + c.Args()[1] + "\nAvailable properties: directory, type, permissions, versioning, versioning-<key>")
}
return
}
die("Folder " + rid + " not found")
}
func foldersSet(c *cli.Context) {
rid := c.Args()[0]
arg := strings.ToLower(c.Args()[1])
val := strings.Join(c.Args()[2:], " ")
cfg := getConfig(c)
for i, folder := range cfg.Folders {
if folder.ID != rid {
continue
}
if strings.HasPrefix(arg, "versioning-") {
cfg.Folders[i].Versioning.Params[arg[11:]] = val
setConfig(c, cfg)
return
}
switch arg {
case "directory":
cfg.Folders[i].RawPath = val
case "type":
var t config.FolderType
if err := t.UnmarshalText([]byte(val)); err != nil {
die("Invalid folder type: " + err.Error())
}
cfg.Folders[i].Type = t
case "permissions":
cfg.Folders[i].IgnorePerms = parseBool(val)
case "rescan":
cfg.Folders[i].RescanIntervalS = parseInt(val)
case "versioning":
cfg.Folders[i].Versioning.Type = val
default:
die("Invalid property: " + c.Args()[1] + "\nAvailable properties: directory, master, permissions, versioning, versioning-<key>")
}
setConfig(c, cfg)
return
}
die("Folder " + rid + " not found")
}
func foldersUnset(c *cli.Context) {
rid := c.Args()[0]
arg := strings.ToLower(c.Args()[1])
cfg := getConfig(c)
for i, folder := range cfg.Folders {
if folder.ID != rid {
continue
}
if strings.HasPrefix(arg, "versioning-") {
arg = arg[11:]
if _, ok := folder.Versioning.Params[arg]; ok {
delete(cfg.Folders[i].Versioning.Params, arg)
setConfig(c, cfg)
return
}
die("Versioning property " + c.Args()[1][11:] + " not found")
}
switch arg {
case "versioning":
cfg.Folders[i].Versioning.Type = ""
cfg.Folders[i].Versioning.Params = make(map[string]string)
default:
die("Invalid property: " + c.Args()[1] + "\nAvailable properties: versioning, versioning-<key>")
}
setConfig(c, cfg)
return
}
die("Folder " + rid + " not found")
}
func foldersDevicesList(c *cli.Context) {
rid := c.Args()[0]
cfg := getConfig(c)
for _, folder := range cfg.Folders {
if folder.ID != rid {
continue
}
for _, device := range folder.Devices {
fmt.Println(device.DeviceID)
}
return
}
die("Folder " + rid + " not found")
}
func foldersDevicesAdd(c *cli.Context) {
rid := c.Args()[0]
nid := parseDeviceID(c.Args()[1])
cfg := getConfig(c)
for i, folder := range cfg.Folders {
if folder.ID != rid {
continue
}
for _, device := range folder.Devices {
if device.DeviceID == nid {
die("Device " + c.Args()[1] + " is already part of this folder")
}
}
for _, device := range cfg.Devices {
if device.DeviceID == nid {
cfg.Folders[i].Devices = append(folder.Devices, config.FolderDeviceConfiguration{
DeviceID: device.DeviceID,
})
setConfig(c, cfg)
return
}
}
die("Device " + c.Args()[1] + " not found in device list")
}
die("Folder " + rid + " not found")
}
func foldersDevicesRemove(c *cli.Context) {
rid := c.Args()[0]
nid := parseDeviceID(c.Args()[1])
cfg := getConfig(c)
for ri, folder := range cfg.Folders {
if folder.ID != rid {
continue
}
for ni, device := range folder.Devices {
if device.DeviceID == nid {
last := len(folder.Devices) - 1
cfg.Folders[ri].Devices[ni] = folder.Devices[last]
cfg.Folders[ri].Devices = cfg.Folders[ri].Devices[:last]
setConfig(c, cfg)
return
}
}
die("Device " + c.Args()[1] + " not found")
}
die("Folder " + rid + " not found")
}
func foldersDevicesClear(c *cli.Context) {
rid := c.Args()[0]
cfg := getConfig(c)
for i, folder := range cfg.Folders {
if folder.ID != rid {
continue
}
cfg.Folders[i].Devices = []config.FolderDeviceConfiguration{}
setConfig(c, cfg)
return
}
die("Folder " + rid + " not found")
}

78
cmd/stcli/cmd_general.go Normal file

@@ -0,0 +1,78 @@
// Copyright (C) 2014 Audrius Butkevičius
package main
import (
"encoding/json"
"fmt"
"github.com/AudriusButkevicius/cli"
)
func init() {
cliCommands = append(cliCommands, []cli.Command{
{
Name: "id",
Usage: "Get ID of the Syncthing client",
Requires: &cli.Requires{},
Action: generalID,
},
{
Name: "status",
Usage: "Configuration status, whether or not a restart is required for changes to take effect",
Requires: &cli.Requires{},
Action: generalStatus,
},
{
Name: "restart",
Usage: "Restart syncthing",
Requires: &cli.Requires{},
Action: wrappedHTTPPost("system/restart"),
},
{
Name: "shutdown",
Usage: "Shutdown syncthing",
Requires: &cli.Requires{},
Action: wrappedHTTPPost("system/shutdown"),
},
{
Name: "reset",
Usage: "Reset syncthing deleting all folders and devices",
Requires: &cli.Requires{},
Action: wrappedHTTPPost("system/reset"),
},
{
Name: "upgrade",
Usage: "Upgrade syncthing (if a newer version is available)",
Requires: &cli.Requires{},
Action: wrappedHTTPPost("system/upgrade"),
},
{
Name: "version",
Usage: "Syncthing client version",
Requires: &cli.Requires{},
Action: generalVersion,
},
}...)
}
func generalID(c *cli.Context) {
fmt.Println(getMyID(c))
}
func generalStatus(c *cli.Context) {
response := httpGet(c, "system/config/insync")
var status struct{ ConfigInSync bool }
json.Unmarshal(responseToBArray(response), &status)
if !status.ConfigInSync {
die("Config out of sync")
}
fmt.Println("Config in sync")
}
func generalVersion(c *cli.Context) {
response := httpGet(c, "system/version")
version := make(map[string]interface{})
json.Unmarshal(responseToBArray(response), &version)
prettyPrintJSON(version)
}

127
cmd/stcli/cmd_gui.go Normal file

@@ -0,0 +1,127 @@
// Copyright (C) 2014 Audrius Butkevičius
package main
import (
"fmt"
"strings"
"github.com/AudriusButkevicius/cli"
)
func init() {
cliCommands = append(cliCommands, cli.Command{
Name: "gui",
HideHelp: true,
Usage: "GUI command group",
Subcommands: []cli.Command{
{
Name: "dump",
Usage: "Show all GUI configuration settings",
Requires: &cli.Requires{},
Action: guiDump,
},
{
Name: "get",
Usage: "Get a GUI configuration setting",
Requires: &cli.Requires{"setting"},
Action: guiGet,
},
{
Name: "set",
Usage: "Set a GUI configuration setting",
Requires: &cli.Requires{"setting", "value"},
Action: guiSet,
},
{
Name: "unset",
Usage: "Unset a GUI configuration setting",
Requires: &cli.Requires{"setting"},
Action: guiUnset,
},
},
})
}
func guiDump(c *cli.Context) {
cfg := getConfig(c).GUI
writer := newTableWriter()
fmt.Fprintln(writer, "Enabled:\t", cfg.Enabled, "\t(enabled)")
fmt.Fprintln(writer, "Use HTTPS:\t", cfg.UseTLS(), "\t(tls)")
fmt.Fprintln(writer, "Listen Addresses:\t", cfg.Address(), "\t(address)")
if cfg.User != "" {
fmt.Fprintln(writer, "Authentication User:\t", cfg.User, "\t(username)")
fmt.Fprintln(writer, "Authentication Password:\t", cfg.Password, "\t(password)")
}
if cfg.APIKey != "" {
fmt.Fprintln(writer, "API Key:\t", cfg.APIKey, "\t(apikey)")
}
writer.Flush()
}
func guiGet(c *cli.Context) {
cfg := getConfig(c).GUI
arg := c.Args()[0]
switch strings.ToLower(arg) {
case "enabled":
fmt.Println(cfg.Enabled)
case "tls":
fmt.Println(cfg.UseTLS())
case "address":
fmt.Println(cfg.Address())
case "user":
if cfg.User != "" {
fmt.Println(cfg.User)
}
case "password":
if cfg.User != "" {
fmt.Println(cfg.Password)
}
case "apikey":
if cfg.APIKey != "" {
fmt.Println(cfg.APIKey)
}
default:
die("Invalid setting: " + arg + "\nAvailable settings: enabled, tls, address, user, password, apikey")
}
}
func guiSet(c *cli.Context) {
cfg := getConfig(c)
arg := c.Args()[0]
val := c.Args()[1]
switch strings.ToLower(arg) {
case "enabled":
cfg.GUI.Enabled = parseBool(val)
case "tls":
cfg.GUI.RawUseTLS = parseBool(val)
case "address":
validAddress(val)
cfg.GUI.RawAddress = val
case "user":
cfg.GUI.User = val
case "password":
cfg.GUI.Password = val
case "apikey":
cfg.GUI.APIKey = val
default:
die("Invalid setting: " + arg + "\nAvailable settings: enabled, tls, address, user, password, apikey")
}
setConfig(c, cfg)
}
func guiUnset(c *cli.Context) {
cfg := getConfig(c)
arg := c.Args()[0]
switch strings.ToLower(arg) {
case "user":
cfg.GUI.User = ""
case "password":
cfg.GUI.Password = ""
case "apikey":
cfg.GUI.APIKey = ""
default:
die("Invalid setting: " + arg + "\nAvailable settings: user, password, apikey")
}
setConfig(c, cfg)
}

173
cmd/stcli/cmd_options.go Normal file

@@ -0,0 +1,173 @@
// Copyright (C) 2014 Audrius Butkevičius
package main
import (
"fmt"
"strings"
"github.com/AudriusButkevicius/cli"
)
func init() {
cliCommands = append(cliCommands, cli.Command{
Name: "options",
HideHelp: true,
Usage: "Options command group",
Subcommands: []cli.Command{
{
Name: "dump",
Usage: "Show all Syncthing option settings",
Requires: &cli.Requires{},
Action: optionsDump,
},
{
Name: "get",
Usage: "Get a Syncthing option setting",
Requires: &cli.Requires{"setting"},
Action: optionsGet,
},
{
Name: "set",
Usage: "Set a Syncthing option setting",
Requires: &cli.Requires{"setting", "value..."},
Action: optionsSet,
},
},
})
}
func optionsDump(c *cli.Context) {
cfg := getConfig(c).Options
writer := newTableWriter()
fmt.Fprintln(writer, "Sync protocol listen addresses:\t", strings.Join(cfg.ListenAddresses, " "), "\t(addresses)")
fmt.Fprintln(writer, "Global discovery enabled:\t", cfg.GlobalAnnEnabled, "\t(globalannenabled)")
fmt.Fprintln(writer, "Global discovery servers:\t", strings.Join(cfg.GlobalAnnServers, " "), "\t(globalannserver)")
fmt.Fprintln(writer, "Local discovery enabled:\t", cfg.LocalAnnEnabled, "\t(localannenabled)")
fmt.Fprintln(writer, "Local discovery port:\t", cfg.LocalAnnPort, "\t(localannport)")
fmt.Fprintln(writer, "Outgoing rate limit in KiB/s:\t", cfg.MaxSendKbps, "\t(maxsend)")
fmt.Fprintln(writer, "Incoming rate limit in KiB/s:\t", cfg.MaxRecvKbps, "\t(maxrecv)")
fmt.Fprintln(writer, "Reconnect interval in seconds:\t", cfg.ReconnectIntervalS, "\t(reconnect)")
fmt.Fprintln(writer, "Start browser:\t", cfg.StartBrowser, "\t(browser)")
fmt.Fprintln(writer, "Enable UPnP:\t", cfg.NATEnabled, "\t(nat)")
fmt.Fprintln(writer, "UPnP Lease in minutes:\t", cfg.NATLeaseM, "\t(natlease)")
fmt.Fprintln(writer, "UPnP Renewal period in minutes:\t", cfg.NATRenewalM, "\t(natrenew)")
fmt.Fprintln(writer, "Restart on Wake Up:\t", cfg.RestartOnWakeup, "\t(wake)")
reporting := "unrecognized value"
switch cfg.URAccepted {
case -1:
reporting = "false"
case 0:
reporting = "undecided/false"
case 1:
reporting = "true"
}
fmt.Fprintln(writer, "Anonymous usage reporting:\t", reporting, "\t(reporting)")
writer.Flush()
}
func optionsGet(c *cli.Context) {
cfg := getConfig(c).Options
arg := c.Args()[0]
switch strings.ToLower(arg) {
case "address":
fmt.Println(strings.Join(cfg.ListenAddresses, "\n"))
case "globalannenabled":
fmt.Println(cfg.GlobalAnnEnabled)
case "globalannservers":
fmt.Println(strings.Join(cfg.GlobalAnnServers, "\n"))
case "localannenabled":
fmt.Println(cfg.LocalAnnEnabled)
case "localannport":
fmt.Println(cfg.LocalAnnPort)
case "maxsend":
fmt.Println(cfg.MaxSendKbps)
case "maxrecv":
fmt.Println(cfg.MaxRecvKbps)
case "reconnect":
fmt.Println(cfg.ReconnectIntervalS)
case "browser":
fmt.Println(cfg.StartBrowser)
case "nat":
fmt.Println(cfg.NATEnabled)
case "natlease":
fmt.Println(cfg.NATLeaseM)
case "natrenew":
fmt.Println(cfg.NATRenewalM)
case "reporting":
switch cfg.URAccepted {
case -1:
fmt.Println("false")
case 0:
fmt.Println("undecided/false")
case 1:
fmt.Println("true")
default:
fmt.Println("unknown")
}
case "wake":
fmt.Println(cfg.RestartOnWakeup)
default:
die("Invalid setting: " + arg + "\nAvailable settings: address, globalannenabled, globalannserver, localannenabled, localannport, maxsend, maxrecv, reconnect, browser, upnp, upnplease, upnprenew, reporting, wake")
}
}
func optionsSet(c *cli.Context) {
config := getConfig(c)
arg := c.Args()[0]
val := c.Args()[1]
switch strings.ToLower(arg) {
case "address":
for _, item := range c.Args().Tail() {
validAddress(item)
}
config.Options.ListenAddresses = c.Args().Tail()
case "globalannenabled":
config.Options.GlobalAnnEnabled = parseBool(val)
case "globalannserver":
for _, item := range c.Args().Tail() {
validAddress(item)
}
config.Options.GlobalAnnServers = c.Args().Tail()
case "localannenabled":
config.Options.LocalAnnEnabled = parseBool(val)
case "localannport":
config.Options.LocalAnnPort = parsePort(val)
case "maxsend":
config.Options.MaxSendKbps = parseUint(val)
case "maxrecv":
config.Options.MaxRecvKbps = parseUint(val)
case "reconnect":
config.Options.ReconnectIntervalS = parseUint(val)
case "browser":
config.Options.StartBrowser = parseBool(val)
case "nat":
config.Options.NATEnabled = parseBool(val)
case "natlease":
config.Options.NATLeaseM = parseUint(val)
case "natrenew":
config.Options.NATRenewalM = parseUint(val)
case "reporting":
switch strings.ToLower(val) {
case "u", "undecided", "unset":
config.Options.URAccepted = 0
default:
boolvalue := parseBool(val)
if boolvalue {
config.Options.URAccepted = 1
} else {
config.Options.URAccepted = -1
}
}
case "wake":
config.Options.RestartOnWakeup = parseBool(val)
default:
die("Invalid setting: " + arg + "\nAvailable settings: address, globalannenabled, globalannserver, localannenabled, localannport, maxsend, maxrecv, reconnect, browser, upnp, upnplease, upnprenew, reporting, wake")
}
setConfig(c, config)
}

72
cmd/stcli/cmd_report.go Normal file

@@ -0,0 +1,72 @@
// Copyright (C) 2014 Audrius Butkevičius
package main
import (
"encoding/json"
"fmt"
"github.com/AudriusButkevicius/cli"
)
func init() {
cliCommands = append(cliCommands, cli.Command{
Name: "report",
HideHelp: true,
Usage: "Reporting command group",
Subcommands: []cli.Command{
{
Name: "system",
Usage: "Report system state",
Requires: &cli.Requires{},
Action: reportSystem,
},
{
Name: "connections",
Usage: "Report about connections to other devices",
Requires: &cli.Requires{},
Action: reportConnections,
},
{
Name: "usage",
Usage: "Usage report",
Requires: &cli.Requires{},
Action: reportUsage,
},
},
})
}
func reportSystem(c *cli.Context) {
response := httpGet(c, "system/status")
data := make(map[string]interface{})
json.Unmarshal(responseToBArray(response), &data)
prettyPrintJSON(data)
}
func reportConnections(c *cli.Context) {
response := httpGet(c, "system/connections")
data := make(map[string]map[string]interface{})
json.Unmarshal(responseToBArray(response), &data)
var overall map[string]interface{}
for key, value := range data {
if key == "total" {
overall = value
continue
}
value["Device ID"] = key
prettyPrintJSON(value)
fmt.Println()
}
if overall != nil {
fmt.Println("=== Overall statistics ===")
prettyPrintJSON(overall)
}
}
func reportUsage(c *cli.Context) {
response := httpGet(c, "svc/report")
report := make(map[string]interface{})
json.Unmarshal(responseToBArray(response), &report)
prettyPrintJSON(report)
}

31
cmd/stcli/labels.go Normal file

@@ -0,0 +1,31 @@
// Copyright (C) 2014 Audrius Butkevičius
package main
var jsonAttributeLabels = map[string]string{
"folderMaxMiB": "Largest folder size in MiB",
"folderMaxFiles": "Largest folder file count",
"longVersion": "Long version",
"totMiB": "Total size in MiB",
"totFiles": "Total files",
"uniqueID": "Unique ID",
"numFolders": "Folder count",
"numDevices": "Device count",
"memoryUsageMiB": "Memory usage in MiB",
"memorySize": "Total memory in MiB",
"sha256Perf": "SHA256 Benchmark",
"At": "Last contacted",
"Completion": "Percent complete",
"InBytesTotal": "Total bytes received",
"OutBytesTotal": "Total bytes sent",
"ClientVersion": "Client version",
"alloc": "Memory allocated in bytes",
"sys": "Memory using in bytes",
"cpuPercent": "CPU load in percent",
"extAnnounceOK": "External announcments working",
"goroutines": "Number of Go routines",
"myID": "Client ID",
"tilde": "Tilde expands to",
"arch": "Architecture",
"os": "OS",
}

63
cmd/stcli/main.go Normal file

@@ -0,0 +1,63 @@
// Copyright (C) 2014 Audrius Butkevičius
package main
import (
"sort"
"github.com/AudriusButkevicius/cli"
)
type ByAlphabet []cli.Command
func (a ByAlphabet) Len() int { return len(a) }
func (a ByAlphabet) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
func (a ByAlphabet) Less(i, j int) bool { return a[i].Name < a[j].Name }
var cliCommands []cli.Command
func main() {
app := cli.NewApp()
app.Name = "syncthing-cli"
app.Author = "Audrius Butkevičius"
app.Email = "audrius.butkevicius@gmail.com"
app.Usage = "Syncthing command line interface"
app.Version = "0.1"
app.HideHelp = true
app.Flags = []cli.Flag{
cli.StringFlag{
Name: "endpoint, e",
Value: "http://127.0.0.1:8384",
Usage: "End point to connect to",
EnvVar: "STENDPOINT",
},
cli.StringFlag{
Name: "apikey, k",
Value: "",
Usage: "API Key",
EnvVar: "STAPIKEY",
},
cli.StringFlag{
Name: "username, u",
Value: "",
Usage: "Username",
EnvVar: "STUSERNAME",
},
cli.StringFlag{
Name: "password, p",
Value: "",
Usage: "Password",
EnvVar: "STPASSWORD",
},
cli.BoolFlag{
Name: "insecure, i",
Usage: "Do not verify SSL certificate",
EnvVar: "STINSECURE",
},
}
sort.Sort(ByAlphabet(cliCommands))
app.Commands = cliCommands
app.RunAndExitOnError()
}

165
cmd/stcli/utils.go Normal file

@@ -0,0 +1,165 @@
// Copyright (C) 2014 Audrius Butkevičius
package main
import (
"encoding/json"
"fmt"
"io/ioutil"
"net/http"
"os"
"regexp"
"sort"
"strconv"
"strings"
"text/tabwriter"
"unicode"
"github.com/AudriusButkevicius/cli"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/protocol"
)
func responseToBArray(response *http.Response) []byte {
defer response.Body.Close()
bytes, err := ioutil.ReadAll(response.Body)
if err != nil {
die(err)
}
return bytes
}
func die(vals ...interface{}) {
if len(vals) > 1 || vals[0] != nil {
os.Stderr.WriteString(fmt.Sprintln(vals...))
os.Exit(1)
}
}
func wrappedHTTPPost(url string) func(c *cli.Context) {
return func(c *cli.Context) {
httpPost(c, url, "")
}
}
func prettyPrintJSON(json map[string]interface{}) {
writer := newTableWriter()
remap := make(map[string]interface{})
for k, v := range json {
key, ok := jsonAttributeLabels[k]
if !ok {
key = firstUpper(k)
}
remap[key] = v
}
jsonKeys := make([]string, 0, len(remap))
for key := range remap {
jsonKeys = append(jsonKeys, key)
}
sort.Strings(jsonKeys)
for _, k := range jsonKeys {
var value string
rvalue := remap[k]
switch rvalue.(type) {
case int, int16, int32, int64, uint, uint16, uint32, uint64, float32, float64:
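// Numbers decoded from JSON arrive as float64; format them without a fractional part.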
value = fmt.Sprintf("%.0f", rvalue)
default:
value = fmt.Sprint(rvalue)
}
if value == "" {
continue
}
fmt.Fprintln(writer, k+":\t"+value)
}
writer.Flush()
}
func firstUpper(str string) string {
for i, v := range str {
return string(unicode.ToUpper(v)) + str[i+1:]
}
return ""
}
func newTableWriter() *tabwriter.Writer {
writer := new(tabwriter.Writer)
writer.Init(os.Stdout, 0, 8, 0, '\t', 0)
return writer
}
func getMyID(c *cli.Context) string {
response := httpGet(c, "system/status")
data := make(map[string]interface{})
json.Unmarshal(responseToBArray(response), &data)
return data["myID"].(string)
}
func getConfig(c *cli.Context) config.Configuration {
response := httpGet(c, "system/config")
config := config.Configuration{}
json.Unmarshal(responseToBArray(response), &config)
return config
}
func setConfig(c *cli.Context, cfg config.Configuration) {
body, err := json.Marshal(cfg)
die(err)
response := httpPost(c, "system/config", string(body))
if response.StatusCode != 200 {
die("Unexpected status code", response.StatusCode)
}
}
func parseBool(input string) bool {
val, err := strconv.ParseBool(input)
if err != nil {
die(input + " is not a valid value for a boolean")
}
return val
}
func parseInt(input string) int {
val, err := strconv.ParseInt(input, 0, 64)
if err != nil {
die(input + " is not a valid value for an integer")
}
return int(val)
}
func parseUint(input string) int {
val, err := strconv.ParseUint(input, 0, 64)
if err != nil {
die(input + " is not a valid value for an unsigned integer")
}
return int(val)
}
func parsePort(input string) int {
port := parseUint(input)
if port < 1 || port > 65535 {
die(input + " is not a valid port\nExpected value between 1 and 65535")
}
return port
}
func validAddress(input string) {
tokens := strings.Split(input, ":")
if len(tokens) != 2 {
die(input + " is not a valid value for an address\nExpected format <ip or hostname>:<port>")
}
matched, err := regexp.MatchString("^[a-zA-Z0-9]+([-a-zA-Z0-9.]+[-a-zA-Z0-9]+)?$", tokens[0])
die(err)
if !matched {
die(input + " is not a valid value for an address\nExpected format <ip or hostname>:<port>")
}
parsePort(tokens[1])
}
func parseDeviceID(input string) protocol.DeviceID {
device, err := protocol.DeviceIDFromString(input)
if err != nil {
die(input + " is not a valid device id")
}
return device
}


@@ -7,8 +7,8 @@
package main
import (
"bytes"
"crypto/rand"
"encoding/binary"
"flag"
"log"
"strings"
@@ -44,7 +44,7 @@ func main() {
flag.Parse()
if fake {
log.Println("My ID:", protocol.DeviceIDFromBytes(myID))
log.Println("My ID:", myID)
}
runbeacon(beacon.NewMulticast(mc), fake)
@@ -66,24 +66,25 @@ func recv(bc beacon.Interface) {
seen := make(map[string]bool)
for {
data, src := bc.Recv()
var ann discover.Announce
ann.UnmarshalXDR(data)
if m := binary.BigEndian.Uint32(data); m != discover.Magic {
log.Printf("Incorrect magic %x in announcement from %v", m, src)
continue
}
if bytes.Equal(ann.This.ID, myID) {
var ann discover.Announce
ann.Unmarshal(data[4:])
if ann.ID == myID {
// This is one of our own fake packets, don't print it.
continue
}
// Print announcement details for the first packet from a given
// device ID and source address, or if -all was given.
key := string(ann.This.ID) + src.String()
key := ann.ID.String() + src.String()
if all || !seen[key] {
log.Printf("Announcement from %v\n", src)
log.Printf(" %v at %s\n", protocol.DeviceIDFromBytes(ann.This.ID), strings.Join(addrStrs(ann.This), ", "))
for _, dev := range ann.Extra {
log.Printf(" %v at %s\n", protocol.DeviceIDFromBytes(dev.ID), strings.Join(addrStrs(dev), ", "))
}
log.Printf(" %v at %s\n", ann.ID, strings.Join(ann.Addresses, ", "))
seen[key] = true
}
}
@@ -92,15 +93,10 @@ func recv(bc beacon.Interface) {
// sends fake discovery announcements once every second
func send(bc beacon.Interface) {
ann := discover.Announce{
Magic: discover.AnnouncementMagic,
This: discover.Device{
ID: myID,
Addresses: []discover.Address{
{URL: "tcp://fake.example.com:12345"},
},
},
ID: myID,
Addresses: []string{"tcp://fake.example.com:12345"},
}
bs, _ := ann.MarshalXDR()
bs, _ := ann.Marshal()
for {
bc.Send(bs)
@@ -108,19 +104,10 @@ func send(bc beacon.Interface) {
}
}
// returns the list of address URLs
func addrStrs(dev discover.Device) []string {
ss := make([]string, len(dev.Addresses))
for i, addr := range dev.Addresses {
ss[i] = addr.URL
}
return ss
}
// returns a random but recognizable device ID
func randomDeviceID() []byte {
var id [32]byte
func randomDeviceID() protocol.DeviceID {
var id protocol.DeviceID
copy(id[:], randomPrefix)
rand.Read(id[len(randomPrefix):])
return id[:]
return id
}

19
cmd/stdiscosrv/LICENSE Normal file

@@ -0,0 +1,19 @@
Copyright (C) 2014-2015 The Discosrv Authors
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
- The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

44
cmd/stdiscosrv/README.md Normal file

@@ -0,0 +1,44 @@
stdiscosrv
==========
[![Latest Build](http://img.shields.io/jenkins/s/http/build.syncthing.net/stdiscosrv.svg?style=flat-square)](http://build.syncthing.net/job/stdiscosrv/lastBuild/)
This is the global discovery server for the `syncthing` project.
To get it, run `go get github.com/syncthing/stdiscosrv` or download the
[latest build](http://build.syncthing.net/job/stdiscosrv/lastSuccessfulBuild/artifact/)
from the build server.
Usage
-----
The discovery server supports `ql` and `postgres` backends.
Specify the backend via `-db-backend` and the database DSN via `-db-dsn`.
By default it will use the in-memory `ql` backend. If you wish to persist the
information on disk between restarts with `ql`, specify a file DSN:
```bash
$ stdiscosrv -db-dsn="file:///var/run/stdiscosrv.db"
```
For `postgres`, you will need to create a database and a user with permissions
to create tables in it, then start the stdiscosrv as follows:
```bash
$ export STDISCOSRV_DB_DSN="postgres://user:password@localhost/databasename"
$ stdiscosrv -db-backend="postgres"
```
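For reference, the prerequisite database and user can be created with the standard
PostgreSQL client tools, for example (a sketch; `user` and `databasename` are
placeholders matching the DSN above):
```bash
$ createuser --pwprompt user
$ createdb --owner=user databasename
```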
You can pass the DSN as a command line option, but the value you pass in will
be visible in most process managers, potentially exposing the database password
to other users.
In all cases, the appropriate tables and indexes will be created at first
startup. If the server doesn't exit with an error, you're fine.
See `stdiscosrv -help` for other options.
##### Third-party attribution
[cznic/lldb](https://github.com/cznic/lldb), Copyright (C) 2014 The lldb Authors.

75
cmd/stdiscosrv/clean.go Normal file

@@ -0,0 +1,75 @@
// Copyright (C) 2014-2015 Jakob Borg and Contributors (see the CONTRIBUTORS file).
package main
import (
"database/sql"
"log"
"time"
)
type cleansrv struct {
intv time.Duration
db *sql.DB
prep map[string]*sql.Stmt
}
func (s *cleansrv) Serve() {
for {
time.Sleep(next(s.intv))
err := s.cleanOldEntries()
if err != nil {
log.Println("Clean:", err)
}
}
}
func (s *cleansrv) Stop() {
panic("stop unimplemented")
}
func (s *cleansrv) cleanOldEntries() (err error) {
var tx *sql.Tx
tx, err = s.db.Begin()
if err != nil {
return err
}
defer func() {
if err == nil {
err = tx.Commit()
} else {
tx.Rollback()
}
}()
res, err := tx.Stmt(s.prep["cleanAddress"]).Exec()
if err != nil {
return err
}
if rows, _ := res.RowsAffected(); rows > 0 {
log.Printf("Clean: %d old addresses", rows)
}
res, err = tx.Stmt(s.prep["cleanDevice"]).Exec()
if err != nil {
return err
}
if rows, _ := res.RowsAffected(); rows > 0 {
log.Printf("Clean: %d old devices", rows)
}
var devs, addrs int
row := tx.Stmt(s.prep["countDevice"]).QueryRow()
if err = row.Scan(&devs); err != nil {
return err
}
row = tx.Stmt(s.prep["countAddress"]).QueryRow()
if err = row.Scan(&addrs); err != nil {
return err
}
log.Printf("Database: %d devices, %d addresses", devs, addrs)
return nil
}

32
cmd/stdiscosrv/db.go Normal file

@@ -0,0 +1,32 @@
// Copyright (C) 2014-2015 Jakob Borg and Contributors (see the CONTRIBUTORS file).
package main
import (
"database/sql"
"fmt"
)
type setupFunc func(db *sql.DB) error
type compileFunc func(db *sql.DB) (map[string]*sql.Stmt, error)
var (
setupFuncs = make(map[string]setupFunc)
compileFuncs = make(map[string]compileFunc)
)
func register(name string, setup setupFunc, compile compileFunc) {
setupFuncs[name] = setup
compileFuncs[name] = compile
}
func setup(backend string, db *sql.DB) (map[string]*sql.Stmt, error) {
setup, ok := setupFuncs[backend]
if !ok {
return nil, fmt.Errorf("Unsupported backend")
}
if err := setup(db); err != nil {
return nil, err
}
return compileFuncs[backend](db)
}

146
cmd/stdiscosrv/main.go Normal file

@@ -0,0 +1,146 @@
// Copyright (C) 2014-2015 Jakob Borg and Contributors (see the CONTRIBUTORS file).
package main
import (
"crypto/tls"
"database/sql"
"flag"
"fmt"
"log"
"os"
"runtime"
"strconv"
"time"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/tlsutil"
"github.com/thejerf/suture"
)
const (
minNegCache = 60 // seconds
maxNegCache = 3600 // seconds
maxDeviceAge = 7 * 86400 // one week, in seconds
)
var (
Version string
BuildStamp string
BuildUser string
BuildHost string
BuildDate time.Time
LongVersion string
)
func init() {
stamp, _ := strconv.Atoi(BuildStamp)
BuildDate = time.Unix(int64(stamp), 0)
date := BuildDate.UTC().Format("2006-01-02 15:04:05 MST")
LongVersion = fmt.Sprintf(`stdiscosrv %s (%s %s-%s) %s@%s %s`, Version, runtime.Version(), runtime.GOOS, runtime.GOARCH, BuildUser, BuildHost, date)
}
var (
lruSize = 10240
limitAvg = 5
limitBurst = 20
globalStats stats
statsFile string
backend = "ql"
dsn = getEnvDefault("STDISCOSRV_DB_DSN", "memory://stdiscosrv")
certFile = "cert.pem"
keyFile = "key.pem"
debug = false
useHTTP = false
)
func main() {
const (
cleanIntv = 1 * time.Hour
statsIntv = 5 * time.Minute
)
var listen string
log.SetOutput(os.Stdout)
log.SetFlags(0)
flag.StringVar(&listen, "listen", ":8443", "Listen address")
flag.IntVar(&lruSize, "limit-cache", lruSize, "Limiter cache entries")
flag.IntVar(&limitAvg, "limit-avg", limitAvg, "Allowed average package rate, per 10 s")
flag.IntVar(&limitBurst, "limit-burst", limitBurst, "Allowed burst size, packets")
flag.StringVar(&statsFile, "stats-file", statsFile, "File to write periodic operation stats to")
flag.StringVar(&backend, "db-backend", backend, "Database backend to use")
flag.StringVar(&dsn, "db-dsn", dsn, "Database DSN")
flag.StringVar(&certFile, "cert", certFile, "Certificate file")
flag.StringVar(&keyFile, "key", keyFile, "Key file")
flag.BoolVar(&debug, "debug", debug, "Debug")
flag.BoolVar(&useHTTP, "http", useHTTP, "Listen on HTTP (behind an HTTPS proxy)")
flag.Parse()
log.Println(LongVersion)
var cert tls.Certificate
var err error
if !useHTTP {
cert, err = tls.LoadX509KeyPair(certFile, keyFile)
if err != nil {
log.Println("Failed to load keypair. Generating one, this might take a while...")
cert, err = tlsutil.NewCertificate(certFile, keyFile, "stdiscosrv", 3072)
if err != nil {
log.Fatalln("Failed to generate X509 key pair:", err)
}
}
devID := protocol.NewDeviceID(cert.Certificate[0])
log.Println("Server device ID is", devID)
}
db, err := sql.Open(backend, dsn)
if err != nil {
log.Fatalln("sql.Open:", err)
}
prep, err := setup(backend, db)
if err != nil {
log.Fatalln("Setup:", err)
}
main := suture.NewSimple("main")
main.Add(&querysrv{
addr: listen,
cert: cert,
db: db,
prep: prep,
})
main.Add(&cleansrv{
intv: cleanIntv,
db: db,
prep: prep,
})
main.Add(&statssrv{
intv: statsIntv,
file: statsFile,
db: db,
})
globalStats.Reset()
main.Serve()
}
func getEnvDefault(key, def string) string {
if val := os.Getenv(key); val != "" {
return val
}
return def
}
func next(intv time.Duration) time.Duration {
t0 := time.Now()
t1 := t0.Add(intv).Truncate(intv)
return t1.Sub(t0)
}

98
cmd/stdiscosrv/psql.go Normal file

@@ -0,0 +1,98 @@
// Copyright (C) 2014-2015 Jakob Borg and Contributors (see the CONTRIBUTORS file).
package main
import (
"database/sql"
"fmt"
_ "github.com/lib/pq"
)
func init() {
register("postgres", postgresSetup, postgresCompile)
}
func postgresSetup(db *sql.DB) error {
var err error
db.SetMaxIdleConns(4)
db.SetMaxOpenConns(8)
_, err = db.Exec(`CREATE TABLE IF NOT EXISTS Devices (
DeviceID CHAR(63) NOT NULL PRIMARY KEY,
Seen TIMESTAMP NOT NULL
)`)
if err != nil {
return err
}
var tmp string
row := db.QueryRow(`SELECT 'DevicesDeviceIDIndex'::regclass`)
if err = row.Scan(&tmp); err != nil {
_, err = db.Exec(`CREATE INDEX DevicesDeviceIDIndex ON Devices (DeviceID)`)
}
if err != nil {
return err
}
row = db.QueryRow(`SELECT 'DevicesSeenIndex'::regclass`)
if err = row.Scan(&tmp); err != nil {
_, err = db.Exec(`CREATE INDEX DevicesSeenIndex ON Devices (Seen)`)
}
if err != nil {
return err
}
_, err = db.Exec(`CREATE TABLE IF NOT EXISTS Addresses (
DeviceID CHAR(63) NOT NULL,
Seen TIMESTAMP NOT NULL,
Address VARCHAR(2048) NOT NULL
)`)
if err != nil {
return err
}
row = db.QueryRow(`SELECT 'AddressesDeviceIDSeenIndex'::regclass`)
if err = row.Scan(&tmp); err != nil {
_, err = db.Exec(`CREATE INDEX AddressesDeviceIDSeenIndex ON Addresses (DeviceID, Seen)`)
}
if err != nil {
return err
}
row = db.QueryRow(`SELECT 'AddressesDeviceIDAddressIndex'::regclass`)
if err = row.Scan(&tmp); err != nil {
_, err = db.Exec(`CREATE INDEX AddressesDeviceIDAddressIndex ON Addresses (DeviceID, Address)`)
}
if err != nil {
return err
}
return nil
}
func postgresCompile(db *sql.DB) (map[string]*sql.Stmt, error) {
stmts := map[string]string{
"cleanAddress": "DELETE FROM Addresses WHERE Seen < now() - '2 hour'::INTERVAL",
"cleanDevice": fmt.Sprintf("DELETE FROM Devices WHERE Seen < now() - '%d hour'::INTERVAL", maxDeviceAge/3600),
"countAddress": "SELECT count(*) FROM Addresses",
"countDevice": "SELECT count(*) FROM Devices",
"insertAddress": "INSERT INTO Addresses (DeviceID, Seen, Address) VALUES ($1, now(), $2)",
"insertDevice": "INSERT INTO Devices (DeviceID, Seen) VALUES ($1, now())",
"selectAddress": "SELECT Address FROM Addresses WHERE DeviceID=$1 AND Seen > now() - '1 hour'::INTERVAL ORDER BY random() LIMIT 16",
"selectDevice": "SELECT Seen FROM Devices WHERE DeviceID=$1",
"updateAddress": "UPDATE Addresses SET Seen=now() WHERE DeviceID=$1 AND Address=$2",
"updateDevice": "UPDATE Devices SET Seen=now() WHERE DeviceID=$1",
}
res := make(map[string]*sql.Stmt, len(stmts))
for key, stmt := range stmts {
prep, err := db.Prepare(stmt)
if err != nil {
return nil, err
}
res[key] = prep
}
return res, nil
}

81
cmd/stdiscosrv/ql.go Normal file

@@ -0,0 +1,81 @@
// Copyright (C) 2015 Audrius Butkevicius and Contributors (see the CONTRIBUTORS file).
package main
import (
"database/sql"
"fmt"
"log"
"github.com/cznic/ql"
)
func init() {
ql.RegisterDriver()
register("ql", qlSetup, qlCompile)
}
func qlSetup(db *sql.DB) (err error) {
tx, err := db.Begin()
if err != nil {
return
}
defer func() {
if err == nil {
err = tx.Commit()
} else {
tx.Rollback()
}
}()
_, err = tx.Exec(`CREATE TABLE IF NOT EXISTS Devices (
DeviceID STRING NOT NULL,
Seen TIME NOT NULL
)`)
if err != nil {
return
}
if _, err = tx.Exec(`CREATE INDEX IF NOT EXISTS DevicesDeviceIDIndex ON Devices (DeviceID)`); err != nil {
return
}
_, err = tx.Exec(`CREATE TABLE IF NOT EXISTS Addresses (
DeviceID STRING NOT NULL,
Seen TIME NOT NULL,
Address STRING NOT NULL,
)`)
if err != nil {
return
}
_, err = tx.Exec(`CREATE INDEX IF NOT EXISTS AddressesDeviceIDAddressIndex ON Addresses (DeviceID, Address)`)
return
}
func qlCompile(db *sql.DB) (map[string]*sql.Stmt, error) {
stmts := map[string]string{
"cleanAddress": `DELETE FROM Addresses WHERE Seen < now() - duration("2h")`,
"cleanDevice": fmt.Sprintf(`DELETE FROM Devices WHERE Seen < now() - duration("%dh")`, maxDeviceAge/3600),
"countAddress": "SELECT count(*) FROM Addresses",
"countDevice": "SELECT count(*) FROM Devices",
"insertAddress": "INSERT INTO Addresses (DeviceID, Seen, Address) VALUES ($1, now(), $2)",
"insertDevice": "INSERT INTO Devices (DeviceID, Seen) VALUES ($1, now())",
"selectAddress": `SELECT Address from Addresses WHERE DeviceID==$1 AND Seen > now() - duration("1h") LIMIT 16`,
"selectDevice": "SELECT Seen FROM Devices WHERE DeviceID==$1",
"updateAddress": "UPDATE Addresses Seen=now() WHERE DeviceID==$1 AND Address==$2",
"updateDevice": "UPDATE Devices Seen=now() WHERE DeviceID==$1",
}
res := make(map[string]*sql.Stmt, len(stmts))
for key, stmt := range stmts {
prep, err := db.Prepare(stmt)
if err != nil {
log.Println("Failed to compile", stmt)
return nil, err
}
res[key] = prep
}
return res, nil
}

492
cmd/stdiscosrv/querysrv.go Normal file

@@ -0,0 +1,492 @@
// Copyright (C) 2014-2015 Jakob Borg and Contributors (see the CONTRIBUTORS file).
package main
import (
"bytes"
"crypto/tls"
"database/sql"
"encoding/json"
"encoding/pem"
"fmt"
"log"
"math/rand"
"net"
"net/http"
"net/url"
"strconv"
"sync"
"time"
"github.com/golang/groupcache/lru"
"github.com/syncthing/syncthing/lib/protocol"
"golang.org/x/net/context"
"golang.org/x/time/rate"
)
type querysrv struct {
addr string
db *sql.DB
prep map[string]*sql.Stmt
limiter *safeCache
cert tls.Certificate
listener net.Listener
}
type announcement struct {
Seen time.Time `json:"seen"`
Addresses []string `json:"addresses"`
}
type safeCache struct {
*lru.Cache
mut sync.Mutex
}
func (s *safeCache) Get(key string) (val interface{}, ok bool) {
s.mut.Lock()
val, ok = s.Cache.Get(key)
s.mut.Unlock()
return
}
func (s *safeCache) Add(key string, val interface{}) {
s.mut.Lock()
s.Cache.Add(key, val)
s.mut.Unlock()
}
type requestID int64
func (i requestID) String() string {
return fmt.Sprintf("%016x", int64(i))
}
type contextKey int
const idKey contextKey = iota
func negCacheFor(lastSeen time.Time) int {
since := time.Since(lastSeen).Seconds()
if since >= maxDeviceAge {
return maxNegCache
}
if since < 0 {
// That's weird
return minNegCache
}
// Return a value linearly scaled from minNegCache (at zero seconds ago)
// to maxNegCache (at maxDeviceAge seconds ago).
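// For example, with the constants defined in main.go (minNegCache=60,
// maxNegCache=3600, maxDeviceAge=7*86400) a device last seen 3.5 days ago
// gets a negative cache time of roughly 1830 seconds.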
r := since / maxDeviceAge
return int(minNegCache + r*(maxNegCache-minNegCache))
}
func (s *querysrv) Serve() {
s.limiter = &safeCache{
Cache: lru.New(lruSize),
}
if useHTTP {
listener, err := net.Listen("tcp", s.addr)
if err != nil {
log.Println("Listen:", err)
return
}
s.listener = listener
} else {
tlsCfg := &tls.Config{
Certificates: []tls.Certificate{s.cert},
ClientAuth: tls.RequestClientCert,
SessionTicketsDisabled: true,
MinVersion: tls.VersionTLS12,
CipherSuites: []uint16{
tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
tls.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,
tls.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
},
}
tlsListener, err := tls.Listen("tcp", s.addr, tlsCfg)
if err != nil {
log.Println("Listen:", err)
return
}
s.listener = tlsListener
}
http.HandleFunc("/v2/", s.handler)
http.HandleFunc("/ping", handlePing)
srv := &http.Server{
ReadTimeout: 5 * time.Second,
WriteTimeout: 5 * time.Second,
MaxHeaderBytes: 1 << 10,
}
if err := srv.Serve(s.listener); err != nil {
log.Println("Serve:", err)
}
}
var topCtx = context.Background()
func (s *querysrv) handler(w http.ResponseWriter, req *http.Request) {
reqID := requestID(rand.Int63())
ctx := context.WithValue(topCtx, idKey, reqID)
if debug {
log.Println(reqID, req.Method, req.URL)
}
t0 := time.Now()
defer func() {
diff := time.Since(t0)
var comment string
if diff > time.Second {
comment = "(very slow request)"
} else if diff > 100*time.Millisecond {
comment = "(slow request)"
}
if comment != "" || debug {
log.Println(reqID, req.Method, req.URL, "completed in", diff, comment)
}
}()
var remoteIP net.IP
if useHTTP {
remoteIP = net.ParseIP(req.Header.Get("X-Forwarded-For"))
} else {
addr, err := net.ResolveTCPAddr("tcp", req.RemoteAddr)
if err != nil {
log.Println("remoteAddr:", err)
http.Error(w, "Internal Server Error", http.StatusInternalServerError)
return
}
remoteIP = addr.IP
}
if s.limit(remoteIP) {
if debug {
log.Println(remoteIP, "is limited")
}
w.Header().Set("Retry-After", "60")
http.Error(w, "Too Many Requests", 429)
return
}
switch req.Method {
case "GET":
s.handleGET(ctx, w, req)
case "POST":
s.handlePOST(ctx, remoteIP, w, req)
default:
globalStats.Error()
http.Error(w, "Method Not Allowed", http.StatusMethodNotAllowed)
}
}
func (s *querysrv) handleGET(ctx context.Context, w http.ResponseWriter, req *http.Request) {
reqID := ctx.Value(idKey).(requestID)
deviceID, err := protocol.DeviceIDFromString(req.URL.Query().Get("device"))
if err != nil {
if debug {
log.Println(reqID, "bad device param")
}
globalStats.Error()
http.Error(w, "Bad Request", http.StatusBadRequest)
return
}
var ann announcement
ann.Seen, err = s.getDeviceSeen(deviceID)
negCache := strconv.Itoa(negCacheFor(ann.Seen))
w.Header().Set("Retry-After", negCache)
w.Header().Set("Cache-Control", "public, max-age="+negCache)
if err != nil {
// The device is not in the database.
globalStats.Query()
http.Error(w, "Not Found", http.StatusNotFound)
return
}
t0 := time.Now()
ann.Addresses, err = s.getAddresses(ctx, deviceID)
if err != nil {
log.Println(reqID, "getAddresses:", err)
globalStats.Error()
http.Error(w, "Internal Server Error", http.StatusInternalServerError)
return
}
if debug {
log.Println(reqID, "getAddresses in", time.Since(t0))
}
globalStats.Query()
if len(ann.Addresses) == 0 {
http.Error(w, "Not Found", http.StatusNotFound)
return
}
globalStats.Answer()
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(ann)
}
func (s *querysrv) handlePOST(ctx context.Context, remoteIP net.IP, w http.ResponseWriter, req *http.Request) {
reqID := ctx.Value(idKey).(requestID)
rawCert := certificateBytes(req)
if rawCert == nil {
if debug {
log.Println(reqID, "no certificates")
}
globalStats.Error()
http.Error(w, "Forbidden", http.StatusForbidden)
return
}
var ann announcement
if err := json.NewDecoder(req.Body).Decode(&ann); err != nil {
if debug {
log.Println(reqID, "decode:", err)
}
globalStats.Error()
http.Error(w, "Bad Request", http.StatusBadRequest)
return
}
deviceID := protocol.NewDeviceID(rawCert)
// handleAnnounce returns *two* errors. The first indicates a problem with
// something the client posted to us. We should return a 400 Bad Request
// and not worry about it. The second indicates that the request was fine,
// but something internal messed up. We should log it and respond with a
// more apologetic 500 Internal Server Error.
userErr, internalErr := s.handleAnnounce(ctx, remoteIP, deviceID, ann.Addresses)
if userErr != nil {
if debug {
log.Println(reqID, "handleAnnounce:", userErr)
}
globalStats.Error()
http.Error(w, "Bad Request", http.StatusBadRequest)
return
}
if internalErr != nil {
log.Println(reqID, "handleAnnounce:", internalErr)
globalStats.Error()
http.Error(w, "Internal Server Error", http.StatusInternalServerError)
return
}
globalStats.Announce()
// TODO: Slowly increase this for stable clients
w.Header().Set("Reannounce-After", "1800")
// We could return the lookup result here, but it's kind of unnecessarily
// expensive to go query the database again so we let the client decide to
// do a lookup if they really care.
w.WriteHeader(http.StatusNoContent)
}
func (s *querysrv) Stop() {
s.listener.Close()
}
func (s *querysrv) handleAnnounce(ctx context.Context, remote net.IP, deviceID protocol.DeviceID, addresses []string) (userErr, internalErr error) {
reqID := ctx.Value(idKey).(requestID)
tx, err := s.db.Begin()
if err != nil {
internalErr = err
return
}
defer func() {
// Since we return from a bunch of different places, we handle
// rollback in the defer.
if internalErr != nil || userErr != nil {
tx.Rollback()
}
}()
for _, annAddr := range addresses {
uri, err := url.Parse(annAddr)
if err != nil {
userErr = err
return
}
host, port, err := net.SplitHostPort(uri.Host)
if err != nil {
userErr = err
return
}
ip := net.ParseIP(host)
if host == "" || ip.IsUnspecified() {
// Do not use IPv6 remote address if requested scheme is tcp4
if uri.Scheme == "tcp4" && remote.To4() == nil {
continue
}
// Do not use IPv4 remote address if requested scheme is tcp6
if uri.Scheme == "tcp6" && remote.To4() != nil {
continue
}
host = remote.String()
}
uri.Host = net.JoinHostPort(host, port)
if err := s.updateAddress(ctx, tx, deviceID, uri.String()); err != nil {
internalErr = err
return
}
}
if err := s.updateDevice(ctx, tx, deviceID); err != nil {
internalErr = err
return
}
t0 := time.Now()
internalErr = tx.Commit()
if debug {
log.Println(reqID, "commit in", time.Since(t0))
}
return
}
func (s *querysrv) limit(remote net.IP) bool {
key := remote.String()
bkt, ok := s.limiter.Get(key)
if ok {
bkt := bkt.(*rate.Limiter)
if !bkt.Allow() {
// Rate limit exceeded; ignore packet
return true
}
} else {
// limitAvg is in packets per ten seconds.
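// With the default limitAvg of 5 and limitBurst of 20, this allows on
// average one request every two seconds per address, with bursts of up
// to 20 requests.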
s.limiter.Add(key, rate.NewLimiter(rate.Limit(limitAvg)/10, limitBurst))
}
return false
}
func (s *querysrv) updateDevice(ctx context.Context, tx *sql.Tx, device protocol.DeviceID) error {
reqID := ctx.Value(idKey).(requestID)
t0 := time.Now()
res, err := tx.Stmt(s.prep["updateDevice"]).Exec(device.String())
if err != nil {
return err
}
if debug {
log.Println(reqID, "updateDevice in", time.Since(t0))
}
if rows, _ := res.RowsAffected(); rows == 0 {
t0 = time.Now()
_, err := tx.Stmt(s.prep["insertDevice"]).Exec(device.String())
if err != nil {
return err
}
if debug {
log.Println(reqID, "insertDevice in", time.Since(t0))
}
}
return nil
}
func (s *querysrv) updateAddress(ctx context.Context, tx *sql.Tx, device protocol.DeviceID, uri string) error {
res, err := tx.Stmt(s.prep["updateAddress"]).Exec(device.String(), uri)
if err != nil {
return err
}
if rows, _ := res.RowsAffected(); rows == 0 {
_, err := tx.Stmt(s.prep["insertAddress"]).Exec(device.String(), uri)
if err != nil {
return err
}
}
return nil
}
func (s *querysrv) getAddresses(ctx context.Context, device protocol.DeviceID) ([]string, error) {
rows, err := s.prep["selectAddress"].Query(device.String())
if err != nil {
return nil, err
}
defer rows.Close()
var res []string
for rows.Next() {
var addr string
err := rows.Scan(&addr)
if err != nil {
log.Println("Scan:", err)
continue
}
res = append(res, addr)
}
return res, nil
}
func (s *querysrv) getDeviceSeen(device protocol.DeviceID) (time.Time, error) {
row := s.prep["selectDevice"].QueryRow(device.String())
var seen time.Time
if err := row.Scan(&seen); err != nil {
return time.Time{}, err
}
return seen.In(time.UTC), nil
}
func handlePing(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(204)
}
func certificateBytes(req *http.Request) []byte {
if req.TLS != nil && len(req.TLS.PeerCertificates) > 0 {
return req.TLS.PeerCertificates[0].Raw
}
if hdr := req.Header.Get("X-SSL-Cert"); hdr != "" {
bs := []byte(hdr)
// The certificate is in PEM format but with spaces for newlines. We
// need to reinstate the newlines for the PEM decoder. But we need to
// leave the spaces in the BEGIN and END lines - the first and last
// space - alone.
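// For example, "-----BEGIN CERTIFICATE----- MIIB... -----END CERTIFICATE-----"
// becomes a regular multi-line PEM block before decoding.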
firstSpace := bytes.Index(bs, []byte(" "))
lastSpace := bytes.LastIndex(bs, []byte(" "))
for i := firstSpace + 1; i < lastSpace; i++ {
if bs[i] == ' ' {
bs[i] = '\n'
}
}
block, _ := pem.Decode(bs)
if block == nil {
// Decoding failed
return nil
}
return block.Bytes
}
return nil
}

141
cmd/stdiscosrv/stats.go Normal file

@@ -0,0 +1,141 @@
// Copyright (C) 2014-2015 Jakob Borg and Contributors (see the CONTRIBUTORS file).
package main
import (
"bytes"
"database/sql"
"fmt"
"io/ioutil"
"log"
"os"
"sync/atomic"
"time"
)
type stats struct {
// Incremented atomically
announces int64
queries int64
answers int64
errors int64
}
func (s *stats) Announce() {
atomic.AddInt64(&s.announces, 1)
}
func (s *stats) Query() {
atomic.AddInt64(&s.queries, 1)
}
func (s *stats) Answer() {
atomic.AddInt64(&s.answers, 1)
}
func (s *stats) Error() {
atomic.AddInt64(&s.errors, 1)
}
// Reset returns a copy of the current stats and resets the counters to
// zero.
func (s *stats) Reset() stats {
// Create a copy of the stats using atomic reads
copy := stats{
announces: atomic.LoadInt64(&s.announces),
queries: atomic.LoadInt64(&s.queries),
answers: atomic.LoadInt64(&s.answers),
errors: atomic.LoadInt64(&s.errors),
}
// Reset the stats by subtracting the values that we copied
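// Subtracting (rather than storing zero) avoids losing any increments that
// happen between the atomic loads above and this reset.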
atomic.AddInt64(&s.announces, -copy.announces)
atomic.AddInt64(&s.queries, -copy.queries)
atomic.AddInt64(&s.answers, -copy.answers)
atomic.AddInt64(&s.errors, -copy.errors)
return copy
}
type statssrv struct {
intv time.Duration
file string
db *sql.DB
}
func (s *statssrv) Serve() {
lastReset := time.Now()
for {
time.Sleep(next(s.intv))
stats := globalStats.Reset()
d := time.Since(lastReset).Seconds()
lastReset = time.Now()
log.Printf("Stats: %.02f announces/s, %.02f queries/s, %.02f answers/s, %.02f errors/s",
float64(stats.announces)/d, float64(stats.queries)/d, float64(stats.answers)/d, float64(stats.errors)/d)
if s.file != "" {
s.writeToFile(stats, d)
}
}
}
func (s *statssrv) Stop() {
panic("stop unimplemented")
}
func (s *statssrv) writeToFile(stats stats, secs float64) {
newLine := []byte("\n")
var addrs int
row := s.db.QueryRow("SELECT COUNT(*) FROM Addresses")
if err := row.Scan(&addrs); err != nil {
log.Println("stats query:", err)
return
}
fd, err := os.OpenFile(s.file, os.O_RDWR|os.O_CREATE, 0666)
if err != nil {
log.Println("stats file:", err)
return
}
defer func() {
err = fd.Close()
if err != nil {
log.Println("stats file:", err)
}
}()
bs, err := ioutil.ReadAll(fd)
if err != nil {
log.Println("stats file:", err)
return
}
lines := bytes.Split(bytes.TrimSpace(bs), newLine)
if len(lines) > 12 {
lines = lines[len(lines)-12:]
}
latest := fmt.Sprintf("%v: %6d addresses, %8.02f announces/s, %8.02f queries/s, %8.02f answers/s, %8.02f errors/s\n",
time.Now().UTC().Format(time.RFC3339), addrs,
float64(stats.announces)/secs, float64(stats.queries)/secs, float64(stats.answers)/secs, float64(stats.errors)/secs)
lines = append(lines, []byte(latest))
_, err = fd.Seek(0, 0)
if err != nil {
log.Println("stats file:", err)
return
}
err = fd.Truncate(0)
if err != nil {
log.Println("stats file:", err)
return
}
_, err = fd.Write(bytes.Join(lines, newLine))
if err != nil {
log.Println("stats file:", err)
return
}
}


@@ -40,7 +40,8 @@ func main() {
log.Println("Lstat:")
log.Printf(" Size: %d bytes", fi.Size())
log.Printf(" Mode: 0%o", fi.Mode())
log.Printf(" Time: %v (%d)", fi.ModTime(), fi.ModTime().Unix())
log.Printf(" Time: %v", fi.ModTime())
log.Printf(" %d.%09d", fi.ModTime().Unix(), fi.ModTime().Nanosecond())
log.Println()
if !fi.Mode().IsDir() && !fi.Mode().IsRegular() {
@@ -52,7 +53,8 @@ func main() {
log.Println("Stat:")
log.Printf(" Size: %d bytes", fi.Size())
log.Printf(" Mode: 0%o", fi.Mode())
log.Printf(" Time: %v (%d)", fi.ModTime(), fi.ModTime().Unix())
log.Printf(" Time: %v", fi.ModTime())
log.Printf(" %d.%09d", fi.ModTime().Unix(), fi.ModTime().Nanosecond())
log.Println()
}


@@ -66,7 +66,7 @@ func checkServers(deviceID protocol.DeviceID, servers ...string) {
}()
}
for _ = range servers {
for range servers {
res := <-resc
u, _ := url.Parse(res.server)


@@ -66,7 +66,7 @@ func generateFiles(dir string, files, maxexp int, srcname string) error {
}
func generateOneFile(fd io.ReadSeeker, p1 string, s int64) error {
src := io.LimitReader(&inifiteReader{fd}, int64(s))
src := io.LimitReader(&inifiteReader{fd}, s)
dst, err := os.Create(p1)
if err != nil {
return err
@@ -85,12 +85,7 @@ func generateOneFile(fd io.ReadSeeker, p1 string, s int64) error {
_ = os.Chmod(p1, os.FileMode(rand.Intn(0777)|0400))
t := time.Now().Add(-time.Duration(rand.Intn(30*86400)) * time.Second)
err = os.Chtimes(p1, t, t)
if err != nil {
return err
}
return nil
return os.Chtimes(p1, t, t)
}
func randomName() string {


@@ -10,51 +10,71 @@ import (
"encoding/binary"
"fmt"
"log"
"time"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syndtr/goleveldb/leveldb"
)
func dump(ldb *leveldb.DB) {
func dump(ldb *db.Instance) {
it := ldb.NewIterator(nil, nil)
var dev protocol.DeviceID
for it.Next() {
key := it.Key()
switch key[0] {
case db.KeyTypeDevice:
folder := nulString(key[1 : 1+64])
devBytes := key[1+64 : 1+64+32]
name := nulString(key[1+64+32:])
copy(dev[:], devBytes)
fmt.Printf("[device] F:%q N:%q D:%v\n", folder, name, dev)
folder := binary.BigEndian.Uint32(key[1:])
device := binary.BigEndian.Uint32(key[1+4:])
name := nulString(key[1+4+4:])
fmt.Printf("[device] F:%d D:%d N:%q", folder, device, name)
var f protocol.FileInfo
err := f.UnmarshalXDR(it.Value())
err := f.Unmarshal(it.Value())
if err != nil {
log.Fatal(err)
}
fmt.Printf(" N:%q\n F:%#o\n M:%d\n V:%v\n S:%d\n B:%d\n", f.Name, f.Flags, f.Modified, f.Version, f.Size(), len(f.Blocks))
fmt.Printf(" V:%v\n", f)
case db.KeyTypeGlobal:
folder := nulString(key[1 : 1+64])
name := nulString(key[1+64:])
fmt.Printf("[global] F:%q N:%q V:%x\n", folder, name, it.Value())
folder := binary.BigEndian.Uint32(key[1:])
name := nulString(key[1+4:])
var flv db.VersionList
flv.Unmarshal(it.Value())
fmt.Printf("[global] F:%d N:%q V:%s\n", folder, name, flv)
case db.KeyTypeBlock:
folder := nulString(key[1 : 1+64])
hash := key[1+64 : 1+64+32]
name := nulString(key[1+64+32:])
fmt.Printf("[block] F:%q H:%x N:%q I:%d\n", folder, hash, name, binary.BigEndian.Uint32(it.Value()))
folder := binary.BigEndian.Uint32(key[1:])
hash := key[1+4 : 1+4+32]
name := nulString(key[1+4+32:])
fmt.Printf("[block] F:%d H:%x N:%q I:%d\n", folder, hash, name, binary.BigEndian.Uint32(it.Value()))
case db.KeyTypeDeviceStatistic:
fmt.Printf("[dstat]\n %x\n %x\n", it.Key(), it.Value())
fmt.Printf("[dstat] K:%x V:%x\n", it.Key(), it.Value())
case db.KeyTypeFolderStatistic:
fmt.Printf("[fstat]\n %x\n %x\n", it.Key(), it.Value())
fmt.Printf("[fstat] K:%x V:%x\n", it.Key(), it.Value())
case db.KeyTypeVirtualMtime:
fmt.Printf("[mtime]\n %x\n %x\n", it.Key(), it.Value())
folder := binary.BigEndian.Uint32(key[1:])
name := nulString(key[1+4:])
val := it.Value()
var real, virt time.Time
real.UnmarshalBinary(val[:len(val)/2])
virt.UnmarshalBinary(val[len(val)/2:])
fmt.Printf("[mtime] F:%d N:%q R:%v V:%v\n", folder, name, real, virt)
case db.KeyTypeFolderIdx:
key := binary.BigEndian.Uint32(it.Key()[1:])
fmt.Printf("[folderidx] K:%d V:%q\n", key, it.Value())
case db.KeyTypeDeviceIdx:
key := binary.BigEndian.Uint32(it.Key()[1:])
val := it.Value()
if len(val) == 0 {
fmt.Printf("[deviceidx] K:%d V:<nil>\n", key)
} else {
dev := protocol.DeviceIDFromBytes(val)
fmt.Printf("[deviceidx] K:%d V:%s\n", key, dev)
}
default:
fmt.Printf("[???]\n %x\n %x\n", it.Key(), it.Value())


@@ -8,11 +8,10 @@ package main
import (
"container/heap"
"encoding/binary"
"fmt"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syndtr/goleveldb/leveldb"
)
type SizedElement struct {
@@ -38,33 +37,31 @@ func (h *ElementHeap) Pop() interface{} {
return x
}
func dumpsize(ldb *leveldb.DB) {
func dumpsize(ldb *db.Instance) {
h := &ElementHeap{}
heap.Init(h)
it := ldb.NewIterator(nil, nil)
var dev protocol.DeviceID
var ele SizedElement
for it.Next() {
key := it.Key()
switch key[0] {
case db.KeyTypeDevice:
folder := nulString(key[1 : 1+64])
devBytes := key[1+64 : 1+64+32]
name := nulString(key[1+64+32:])
copy(dev[:], devBytes)
ele.key = fmt.Sprintf("DEVICE:%s:%s:%s", dev, folder, name)
folder := binary.BigEndian.Uint32(key[1:])
device := binary.BigEndian.Uint32(key[1+4:])
name := nulString(key[1+4+4:])
ele.key = fmt.Sprintf("DEVICE:%d:%d:%s", folder, device, name)
case db.KeyTypeGlobal:
folder := nulString(key[1 : 1+64])
name := nulString(key[1+64:])
ele.key = fmt.Sprintf("GLOBAL:%s:%s", folder, name)
folder := binary.BigEndian.Uint32(key[1:])
name := nulString(key[1+4:])
ele.key = fmt.Sprintf("GLOBAL:%d:%s", folder, name)
case db.KeyTypeBlock:
folder := nulString(key[1 : 1+64])
hash := key[1+64 : 1+64+32]
name := nulString(key[1+64+32:])
ele.key = fmt.Sprintf("BLOCK:%s:%x:%s", folder, hash, name)
folder := binary.BigEndian.Uint32(key[1:])
hash := key[1+4 : 1+4+32]
name := nulString(key[1+4+32:])
ele.key = fmt.Sprintf("BLOCK:%d:%x:%s", folder, hash, name)
case db.KeyTypeDeviceStatistic:
ele.key = fmt.Sprintf("DEVICESTATS:%s", key[1:])
@@ -75,6 +72,14 @@ func dumpsize(ldb *leveldb.DB) {
case db.KeyTypeVirtualMtime:
ele.key = fmt.Sprintf("MTIME:%s", key[1:])
case db.KeyTypeFolderIdx:
id := binary.BigEndian.Uint32(key[1:])
ele.key = fmt.Sprintf("FOLDERIDX:%d", id)
case db.KeyTypeDeviceIdx:
id := binary.BigEndian.Uint32(key[1:])
ele.key = fmt.Sprintf("DEVICEIDX:%d", id)
default:
ele.key = fmt.Sprintf("UNKNOWN:%x", key)
}


@@ -13,8 +13,7 @@ import (
"os"
"path/filepath"
"github.com/syndtr/goleveldb/leveldb"
"github.com/syndtr/goleveldb/leveldb/opt"
"github.com/syncthing/syncthing/lib/db"
)
func main() {
@@ -28,16 +27,12 @@ func main() {
path := flag.Arg(0)
if path == "" {
path = filepath.Join(defaultConfigDir(), "index-v0.11.0.db")
path = filepath.Join(defaultConfigDir(), "index-v0.14.0.db")
}
fmt.Println("Path:", path)
ldb, err := leveldb.OpenFile(path, &opt.Options{
ErrorIfMissing: true,
Strict: opt.StrictAll,
OpenFilesCacheCapacity: 100,
})
ldb, err := db.Open(path)
if err != nil {
log.Fatal(err)
}


@@ -0,0 +1,22 @@
The MIT License (MIT)
Copyright (c) 2015 The Syncthing Project
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@@ -0,0 +1,21 @@
# relaypoolsrv
[![Latest Build](http://img.shields.io/jenkins/s/http/build.syncthing.net/relaypoolsrv.svg?style=flat-square)](http://build.syncthing.net/job/relaypoolsrv/lastBuild/)
This is the relay pool server for the `syncthing` project, which allows community-hosted [relaysrv](https://github.com/syncthing/relaysrv) instances to join the public pool.
Servers that join the pool are then advertised to users of `syncthing` as potential connection points for those who are unable to connect directly due to NAT or firewall issues.
There is very little reason why you'd want to run this yourself, as `relaypoolsrv` is just used for announcement and lookup of public relay servers. If you are looking to set up a private or a public relay, please check the documentation for [relaysrv](https://github.com/syncthing/relaysrv), which also explains how to join the default public pool.
If you still want to run it, you can run `go get github.com/syncthing/relaypoolsrv` to download it, or download the
[latest build](http://build.syncthing.net/job/relaypoolsrv/lastSuccessfulBuild/artifact/)
from the build server.
See `relaypoolsrv -help` for configuration options.
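For reference, the pool's registry can also be read programmatically: a GET on the `/endpoint` path returns the currently known relays as JSON. The following is only a minimal illustrative sketch (the pool URL is the public default, and the fields mirror the `relay` struct in `main.go`), not an official client:
```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"time"
)

// relay mirrors the JSON objects served by the pool's /endpoint handler.
type relay struct {
	URL      string `json:"url"`
	Location struct {
		Latitude  float64 `json:"latitude"`
		Longitude float64 `json:"longitude"`
	} `json:"location"`
}

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get("https://relays.syncthing.net/endpoint")
	if err != nil {
		log.Fatalln("fetching relay list:", err)
	}
	defer resp.Body.Close()

	var result struct {
		Relays []relay `json:"relays"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		log.Fatalln("decoding relay list:", err)
	}
	for _, r := range result.Relays {
		fmt.Printf("%s (%.2f, %.2f)\n", r.URL, r.Location.Latitude, r.Location.Longitude)
	}
}
```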
##### Third-party attributions
[oschwald/geoip2-golang](https://github.com/oschwald/geoip2-golang), [oschwald/maxminddb-golang](https://github.com/oschwald/maxminddb-golang), Copyright (C) 2015 [Gregory J. Oschwald](mailto:oschwald@gmail.com).
[lib/pq](https://github.com/lib/pq), Copyright (C) 2011-2013 'pq' Contributors; Portions Copyright (C) 2011 Blake Mizerany.

cmd/strelaypoolsrv/auto/.gitignore vendored Normal file

@@ -0,0 +1 @@
gui.go


@@ -0,0 +1,408 @@
<!DOCTYPE html>
<html lang="en" ng-app="syncthing" ng-controller="relayDataController">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="">
<meta name="author" content="">
<title>Relay stats</title>
<link href="//maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css" rel="stylesheet">
<link rel="stylesheet" href="//maxcdn.bootstrapcdn.com/font-awesome/4.6.1/css/font-awesome.min.css">
<style>
#map {
height: 600px;
}
.ng-cloak {
display: none;
}
table {
font-size: 11px !important;
width: 100%;
border: 1px;
}
td {
padding: 0px !important;
}
tfoot td {
font-weight: bold;
}
</style>
</head>
<body class="ng-cloak">
<div class="container">
<h1>Relay Pool Data</h1>
<div ng-if="relays === undefined" class="text-center">
<img src="//cdnjs.cloudflare.com/ajax/libs/galleriffic/2.0.1/css/loader.gif"/>
<p>Please wait while we gather data</p>
</div>
<div>
<div ng-show="relays !== undefined" class="ng-hide">
<p>
Currently {{ relays.length }} relays online ({{ totals.goMaxProcs }} cores in total).
</p>
</div>
<div id="map"></div> <!-- Can't hide the map, otherwise it freaks out -->
<p>The circle size represents how many bytes the relay transferred relative to other relays</p>
</div>
<div>
<table class="table table-striped table-condensed table">
<thead>
<tr>
<th rowspan="2">Address</th>
<th rowspan="2">
<a ng-click="sortType = 'status.numActiveSessions'; sortReverse = !sortReverse">
Sessions
<span ng-show="sortType == 'status.numActiveSessions' && !sortReverse" class="fa fa-caret-down"></span>
<span ng-show="sortType == 'status.numActiveSessions' && sortReverse" class="fa fa-caret-up"></span>
</a>
</th>
<th rowspan="2">
<a ng-click="sortType = 'status.numConnections'; sortReverse = !sortReverse">
Connections
<span ng-show="sortType == 'status.numConnections' && !sortReverse" class="fa fa-caret-down"></span>
<span ng-show="sortType == 'status.numConnections' && sortReverse" class="fa fa-caret-up"></span>
</a>
</th>
<th rowspan="2">
<a ng-click="sortType = 'status.bytesProxied'; sortReverse = !sortReverse">
Data relayed
<span ng-show="sortType == 'status.bytesProxied' && !sortReverse" class="fa fa-caret-down"></span>
<span ng-show="sortType == 'status.bytesProxied' && sortReverse" class="fa fa-caret-up"></span>
</a>
</th>
<th colspan="6" class="text-center">Transfer rate in the last period</th>
<th rowspan="2">
<a ng-click="sortType = 'status.uptimeSeconds'; sortReverse = !sortReverse">
Uptime hours
<span ng-show="sortType == 'status.uptimeSeconds' && !sortReverse" class="fa fa-caret-down"></span>
<span ng-show="sortType == 'status.uptimeSeconds' && sortReverse" class="fa fa-caret-up"></span>
</a>
</th>
<th rowspan="2">
<a ng-click="sortType = 'status.options[\'provided-by\'] || \'\''; sortReverse = !sortReverse">
Provided by
<span ng-show="sortType == 'status.options[\'provided-by\'] || \'\'' && !sortReverse" class="fa fa-caret-down"></span>
<span ng-show="sortType == 'status.options[\'provided-by\'] || \'\'' && sortReverse" class="fa fa-caret-up"></span>
</a>
</th>
</tr>
<tr>
<th>
<a ng-click="sortType = 'status.kbps10s1m5m15m30m60m[0]'; sortReverse = !sortReverse">
10s
<span ng-show="sortType == 'status.kbps10s1m5m15m30m60m[0]' && !sortReverse" class="fa fa-caret-down"></span>
<span ng-show="sortType == 'status.kbps10s1m5m15m30m60m[0]' && sortReverse" class="fa fa-caret-up"></span>
</a>
</th>
<th>
<a ng-click="sortType = 'status.kbps10s1m5m15m30m60m[1]'; sortReverse = !sortReverse">
1m
<span ng-show="sortType == 'status.kbps10s1m5m15m30m60m[1]' && !sortReverse" class="fa fa-caret-down"></span>
<span ng-show="sortType == 'status.kbps10s1m5m15m30m60m[1]' && sortReverse" class="fa fa-caret-up"></span>
</a>
</th>
<th>
<a ng-click="sortType = 'status.kbps10s1m5m15m30m60m[2]'; sortReverse = !sortReverse">
5m
<span ng-show="sortType == 'status.kbps10s1m5m15m30m60m[2]' && !sortReverse" class="fa fa-caret-down"></span>
<span ng-show="sortType == 'status.kbps10s1m5m15m30m60m[2]' && sortReverse" class="fa fa-caret-up"></span>
</a>
</th>
<th>
<a ng-click="sortType = 'status.kbps10s1m5m15m30m60m[3]'; sortReverse = !sortReverse">
15m
<span ng-show="sortType == 'status.kbps10s1m5m15m30m60m[3]' && !sortReverse" class="fa fa-caret-down"></span>
<span ng-show="sortType == 'status.kbps10s1m5m15m30m60m[3]' && sortReverse" class="fa fa-caret-up"></span>
</a>
</th>
<th>
<a ng-click="sortType = 'status.kbps10s1m5m15m30m60m[4]'; sortReverse = !sortReverse">
30m
<span ng-show="sortType == 'status.kbps10s1m5m15m30m60m[4]' && !sortReverse" class="fa fa-caret-down"></span>
<span ng-show="sortType == 'status.kbps10s1m5m15m30m60m[4]' && sortReverse" class="fa fa-caret-up"></span>
</a>
</th>
<th>
<a ng-click="sortType = 'status.kbps10s1m5m15m30m60m[5]'; sortReverse = !sortReverse">
60m
<span ng-show="sortType == 'status.kbps10s1m5m15m30m60m[5]' && !sortReverse" class="fa fa-caret-down"></span>
<span ng-show="sortType == 'status.kbps10s1m5m15m30m60m[5]' && sortReverse" class="fa fa-caret-up"></span>
</a>
</th>
</tr>
</thead>
<tbody>
<tr ng-repeat="relay in relays | orderBy:sortType:sortReverse:sortCompare">
<td>{{ relay.address }}</td>
<td ng-if="relay.status === undefined" colspan="11" class="text-center">Looking up...</td>
<td ng-if-start="relay.status !== undefined">{{ relay.status.numActiveSessions }}</td>
<td>{{ relay.status.numConnections }}</td>
<td>{{ relay.status.bytesProxied | bytes }}</td>
<td>{{ relay.status.kbps10s1m5m15m30m60m[0] * 128 | bytes }}/s</td>
<td>{{ relay.status.kbps10s1m5m15m30m60m[1] * 128 | bytes }}/s</td>
<td>{{ relay.status.kbps10s1m5m15m30m60m[2] * 128 | bytes }}/s</td>
<td>{{ relay.status.kbps10s1m5m15m30m60m[3] * 128 | bytes }}/s</td>
<td>{{ relay.status.kbps10s1m5m15m30m60m[4] * 128 | bytes }}/s</td>
<td>{{ relay.status.kbps10s1m5m15m30m60m[5] * 128 | bytes }}/s</td>
<td ng-if="relay.status.uptimeSeconds != undefined">{{ relay.status.uptimeSeconds/60/60 | number:0 }}</td>
<td ng-if="relay.status.uptimeSeconds == undefined"></td>
<td title="{{ relay.status.options['provided-by'] || '' }}" ng-if-end>
{{ relay.status.options['provided-by'] || '' | limitTo:50 }}
<span ng-if="(relay.status.options['provided-by'] || '').length > 50">&hellip;</span>
</td>
</tr>
</tbody>
<tfoot>
<tr>
<td>Totals</td>
<td>{{ totals.numActiveSessions }}</td>
<td>{{ totals.numConnections }}</td>
<td>{{ totals.bytesProxied | bytes }}</td>
<td>{{ totals.kbps10s1m5m15m30m60m[0] * 128 | bytes }}/s</td>
<td>{{ totals.kbps10s1m5m15m30m60m[1] * 128 | bytes }}/s</td>
<td>{{ totals.kbps10s1m5m15m30m60m[2] * 128 | bytes }}/s</td>
<td>{{ totals.kbps10s1m5m15m30m60m[3] * 128 | bytes }}/s</td>
<td>{{ totals.kbps10s1m5m15m30m60m[4] * 128 | bytes }}/s</td>
<td>{{ totals.kbps10s1m5m15m30m60m[5] * 128 | bytes }}/s</td>
<td>{{ totals.uptimeSeconds/60/60 | number:0 }} hours</td>
<td>{{ relays.length }} relays</td>
</tr>
</tfoot>
</table>
</div>
<hr>
<p>
This product includes GeoLite2 data created by MaxMind, available from
<a href="http://www.maxmind.com">http://www.maxmind.com</a>.
</p>
</div>
<script src="//code.jquery.com/jquery-2.1.4.min.js"></script>
<script src="//cdnjs.cloudflare.com/ajax/libs/angular.js/1.5.8/angular.min.js"></script>
<script src="//maxcdn.bootstrapcdn.com/bootstrap/3.3.5/js/bootstrap.min.js"></script>
<script src="//maps.googleapis.com/maps/api/js"></script>
</body>
<script>
angular.module('syncthing', [
])
.config(function($httpProvider) {
$httpProvider.defaults.timeout = 5000;
})
.filter('bytes', function() {
return function(bytes, precision) {
if (isNaN(parseFloat(bytes)) || !isFinite(bytes)) return '-';
if (typeof precision === 'undefined') precision = 1;
var units = ['bytes', 'kB', 'MB', 'GB', 'TB', 'PB'],
number = Math.floor(Math.log(bytes) / Math.log(1024));
var value = (bytes / Math.pow(1000, Math.floor(number)));
if (!isFinite(value)) {
value = 0;
precision = 0;
}
if (!isFinite(number)) {
units = 'bytes';
} else {
units = units[number];
}
return value.toFixed(precision) + ' ' + units;
}
})
.controller('relayDataController', ['$scope', '$rootScope', '$http', '$q', '$compile', '$timeout', function($scope, $rootScope, $http, $q, $compile, $timeout) {
$scope.totals = {
bytesProxied: 0,
goMaxProcs: 0,
kbps10s1m5m15m30m60m: [0, 0, 0, 0, 0, 0],
numActiveSessions: 0,
numConnections: 0,
numPendingSessionKeys: 0,
numProxies: 0,
uptimeSeconds: 0,
};
$scope.map = new google.maps.Map(document.getElementById('map'), {
zoom: 1,
mapTypeId: google.maps.MapTypeId.ROADMAP
});
$scope.mapBounds = new google.maps.LatLngBounds();
$scope.tooltipTemplate = $('#infoTemplate').html();
$scope.usedLocations = {};
$scope.sortType = 'status.numActiveSessions';
$scope.sortReverse = true;
$scope.sortCompare = function(a, b) {
if (a.value == b.value) {
return 0;
}
if (a.type == "undefined") {
return -1;
}
if (b.type == "undefined") {
return 1;
}
return a.value > b.value ? 1 : -1;
}
$http.get("/endpoint").then(function(response) {
$scope.relays = response.data.relays;
var promises = [];
angular.forEach($scope.relays, function(relay) {
relay.uri = constructURI(relay.url);
relay.address = relay.url.split('/')[2];
addMarkerToMap(relay);
promises.push(getRelayStatus(relay));
});
// Can only add circles once we know the totals for transfers, which means
// we need to resolve all statuses.
$q.all(promises).then(function() {
angular.forEach($scope.relays, function(relay) {
if (relay.status) {
addCircleToMap(relay);
}
});
});
$scope.map.fitBounds($scope.mapBounds);
if ($scope.relays.length == 1) {
$scope.map.setZoom(13);
}
});
function addMarkerToMap(relay) {
var loc = relay.location.latitude + "," + relay.location.longitude;
// Deal with overlapping markers
while (loc in $scope.usedLocations) {
var locParts = loc.split(',');
locParts = [parseFloat(locParts[0]), parseFloat(locParts[1])];
locParts[Math.round(Math.random())] += 0.5 * (Math.random() >= 0.5 ? 1 : -1);
loc = locParts.join(',');
}
$scope.usedLocations[loc] = true;
var locParts = loc.split(',');
relay.marker = new google.maps.Marker({
map: $scope.map,
position: new google.maps.LatLng(locParts[0], locParts[1]),
title: relay.url,
});
var scope = $rootScope.$new(true);
scope.relay = relay;
relay.marker.info = new google.maps.InfoWindow({
content: $compile($scope.tooltipTemplate)(scope)[0],
});
relay.marker.addListener('mouseover', function() {
relay.marker.info.open($scope.map, relay.marker);
});
relay.marker.addListener('mouseout', function() {
relay.marker.info.close();
});
$scope.mapBounds.extend(relay.marker.position);
}
function addCircleToMap(relay) {
relay.marker.circle = new google.maps.Circle({
strokeColor: '#FF0000',
strokeOpacity: 0.8,
strokeWeight: 2,
fillColor: '#FF0000',
fillOpacity: 0.35,
map: $scope.map,
center: relay.marker.position,
radius: ((relay.status.bytesProxied * 100) / $scope.totals.bytesProxied) * 10000
});
}
function getRelayStatus(relay) {
// Normal timeout doesn't deal with relays which accept the TCP connection
// but don't respond (some firewalls do that), so deal with it this way.
var timeoutRequest = $q.defer();
var resolveStatus = $q.defer();
$http.get("http://" + relay.uri.hostname + ':' + ((relay.uri.args.statusAddr && relay.uri.args.statusAddr.split(':')[1]) || "22070") + "/status", { timeout: timeoutRequest.promise }).then(function (response) {
relay.status = response.data;
resolveStatus.resolve();
angular.forEach($scope.totals, function(value, key) {
if (typeof $scope.totals[key] == 'number') {
$scope.totals[key] += response.data[key];
} else if (typeof $scope.totals[key] == 'object' && $scope.totals[key] instanceof Array) {
angular.forEach($scope.totals[key], function(value, index) {
$scope.totals[key][index] += response.data[key][index];
});
}
});
}, function() {
relay.status = null;
resolveStatus.resolve();
});
$timeout(function() {
timeoutRequest.resolve();
}, 5000);
return resolveStatus.promise;
}
function constructURI(url) {
var uri = document.createElement('a');
// HAX, otherwise doesn't work
uri.href = url.replace('relay://', 'http://');
// Convert query string to object
uri.args = {};
angular.forEach(uri.search.replace(/^\?/, '').split('&'), function(query) {
var split = query.split('=');
uri.args[split[0]] = split[1];
});
return uri;
}
}]);
</script>
<script type="text/template" id="infoTemplate">
<div>
<p><b>{{ relay.uri.hostname }}</b> <span ng-if="relay.status.options['provided-by']">provided by <u>{{ relay.status.options['provided-by'] }}</u></span></p>
<div ng-if="relay.status">
<span ng-if="relay.status.startTime">Start time: {{ relay.status.startTime | date:"medium" }}</br></span>
<span ng-if="relay.status.bytesProxied != undefined">Proxied: {{ relay.status.bytesProxied | bytes }}</br></span>
<span ng-if="relay.status.numActiveSessions != undefined">Sessions: {{ relay.status.numActiveSessions }}</br></span>
<span ng-if="relay.status.numConnections != undefined">Clients: {{ relay.status.numConnections }}</br></span>
<span ng-if="relay.status.options.pools">Pools: {{ relay.status.options.pools.join(', ') }}</br></span>
<span ng-if="relay.status.options['global-rate'] != undefined">
<span ng-if="relay.status.options['global-rate'] > 0">Global rate limit: {{ relay.status.options['global-rate'] | bytes }}/s</span>
<span ng-if="relay.status.options['global-rate'] == 0">Global rate limit: unlimited</span>
</br>
</span>
<span ng-if="relay.status.options['per-session-rate'] != undefined">
<span ng-if="relay.status.options['per-session-rate'] > 0">Session rate limit: {{ relay.status.options['per-session-rate'] | bytes }}/s</span>
<span ng-if="relay.status.options['per-session-rate'] == 0">Session rate limit: unlimited</span>
</br>
</span>
</div>
<div ng-if="!relay.status">
Data unavailable.
</div>
</div>
</script>
</html>

cmd/strelaypoolsrv/main.go Normal file

@@ -0,0 +1,537 @@
// Copyright (C) 2015 Audrius Butkevicius and Contributors (see the CONTRIBUTORS file).
//go:generate go run ../../script/genassets.go gui >auto/gui.go
package main
import (
"bytes"
"compress/gzip"
"crypto/tls"
"encoding/json"
"flag"
"fmt"
"io/ioutil"
"log"
"math/rand"
"mime"
"net"
"net/http"
"net/url"
"path/filepath"
"strings"
"time"
"github.com/golang/groupcache/lru"
"github.com/oschwald/geoip2-golang"
"github.com/syncthing/syncthing/cmd/strelaypoolsrv/auto"
"github.com/syncthing/syncthing/lib/relay/client"
"github.com/syncthing/syncthing/lib/sync"
"github.com/syncthing/syncthing/lib/tlsutil"
"golang.org/x/time/rate"
)
type location struct {
Latitude float64 `json:"latitude"`
Longitude float64 `json:"longitude"`
}
type relay struct {
URL string `json:"url"`
Location location `json:"location"`
uri *url.URL
}
func (r relay) String() string {
return r.URL
}
type request struct {
relay relay
uri *url.URL
result chan result
}
type result struct {
err error
eviction time.Duration
}
var (
testCert tls.Certificate
listen = ":80"
dir string
evictionTime = time.Hour
debug bool
getLRUSize = 10 << 10
getLimitBurst = 10
getLimitAvg = 1
postLRUSize = 1 << 10
postLimitBurst = 2
postLimitAvg = 1
getLimit time.Duration
postLimit time.Duration
permRelaysFile string
ipHeader string
geoipPath string
proto string
getMut = sync.NewRWMutex()
getLRUCache *lru.Cache
postMut = sync.NewRWMutex()
postLRUCache *lru.Cache
requests = make(chan request, 10)
mut = sync.NewRWMutex()
knownRelays = make([]relay, 0)
permanentRelays = make([]relay, 0)
evictionTimers = make(map[string]*time.Timer)
)
func main() {
flag.StringVar(&listen, "listen", listen, "Listen address")
flag.StringVar(&dir, "keys", dir, "Directory where http-cert.pem and http-key.pem are stored for TLS listening")
flag.BoolVar(&debug, "debug", debug, "Enable debug output")
flag.DurationVar(&evictionTime, "eviction", evictionTime, "After how long the relay is evicted")
flag.IntVar(&getLRUSize, "get-limit-cache", getLRUSize, "Get request limiter cache size")
flag.IntVar(&getLimitAvg, "get-limit-avg", 2, "Allowed average get request rate, per 10 s")
flag.IntVar(&getLimitBurst, "get-limit-burst", getLimitBurst, "Allowed burst get requests")
flag.IntVar(&postLRUSize, "post-limit-cache", postLRUSize, "Post request limiter cache size")
flag.IntVar(&postLimitAvg, "post-limit-avg", 2, "Allowed average post request rate, per minute")
flag.IntVar(&postLimitBurst, "post-limit-burst", postLimitBurst, "Allowed burst post requests")
flag.StringVar(&permRelaysFile, "perm-relays", "", "Path to list of permanent relays")
flag.StringVar(&ipHeader, "ip-header", "", "Name of the header which holds the client's ip:port. Only meaningful when running behind a reverse proxy.")
flag.StringVar(&geoipPath, "geoip", "GeoLite2-City.mmdb", "Path to GeoLite2-City database")
flag.StringVar(&proto, "protocol", "tcp", "Protocol used for listening. 'tcp' for IPv4 and IPv6, 'tcp4' for IPv4, 'tcp6' for IPv6")
flag.Parse()
getLimit = 10 * time.Second / time.Duration(getLimitAvg)
postLimit = time.Minute / time.Duration(postLimitAvg)
getLRUCache = lru.New(getLRUSize)
postLRUCache = lru.New(postLRUSize)
var listener net.Listener
var err error
if permRelaysFile != "" {
loadPermanentRelays(permRelaysFile)
}
testCert = createTestCertificate()
go requestProcessor()
if dir != "" {
if debug {
log.Println("Starting TLS listener on", listen)
}
certFile, keyFile := filepath.Join(dir, "http-cert.pem"), filepath.Join(dir, "http-key.pem")
var cert tls.Certificate
cert, err = tls.LoadX509KeyPair(certFile, keyFile)
if err != nil {
log.Fatalln("Failed to load HTTP X509 key pair:", err)
}
tlsCfg := &tls.Config{
Certificates: []tls.Certificate{cert},
MinVersion: tls.VersionTLS10, // No SSLv3
CipherSuites: []uint16{
// No RC4
tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
tls.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,
tls.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
tls.TLS_RSA_WITH_AES_128_CBC_SHA,
tls.TLS_RSA_WITH_AES_256_CBC_SHA,
tls.TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
tls.TLS_RSA_WITH_3DES_EDE_CBC_SHA,
},
}
listener, err = tls.Listen(proto, listen, tlsCfg)
} else {
if debug {
log.Println("Starting plain listener on", listen)
}
listener, err = net.Listen(proto, listen)
}
if err != nil {
log.Fatalln("listen:", err)
}
handler := http.NewServeMux()
handler.HandleFunc("/", handleAssets)
handler.HandleFunc("/endpoint", handleRequest)
srv := http.Server{
Handler: handler,
ReadTimeout: 10 * time.Second,
}
err = srv.Serve(listener)
if err != nil {
log.Fatalln("serve:", err)
}
}
func handleAssets(w http.ResponseWriter, r *http.Request) {
assets := auto.Assets()
path := r.URL.Path[1:]
if path == "" {
path = "index.html"
}
bs, ok := assets[path]
if !ok {
w.WriteHeader(http.StatusNotFound)
return
}
mtype := mimeTypeForFile(path)
if len(mtype) != 0 {
w.Header().Set("Content-Type", mtype)
}
if strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
w.Header().Set("Content-Encoding", "gzip")
} else {
// Decompress the stored asset if the browser did not send an Accept-Encoding: gzip header
var gr *gzip.Reader
gr, _ = gzip.NewReader(bytes.NewReader(bs))
bs, _ = ioutil.ReadAll(gr)
gr.Close()
}
w.Header().Set("Content-Length", fmt.Sprintf("%d", len(bs)))
w.Write(bs)
}
func mimeTypeForFile(file string) string {
// We use a built in table of the common types since the system
// TypeByExtension might be unreliable. But if we don't know, we delegate
// to the system.
ext := filepath.Ext(file)
switch ext {
case ".htm", ".html":
return "text/html"
case ".css":
return "text/css"
case ".js":
return "application/javascript"
case ".json":
return "application/json"
case ".png":
return "image/png"
case ".ttf":
return "application/x-font-ttf"
case ".woff":
return "application/x-font-woff"
case ".svg":
return "image/svg+xml"
default:
return mime.TypeByExtension(ext)
}
}
func handleRequest(w http.ResponseWriter, r *http.Request) {
if ipHeader != "" {
r.RemoteAddr = r.Header.Get(ipHeader)
}
w.Header().Set("Access-Control-Allow-Origin", "*")
switch r.Method {
case "GET":
if limit(r.RemoteAddr, getLRUCache, getMut, getLimit, getLimitBurst) {
w.WriteHeader(429)
return
}
handleGetRequest(w, r)
case "POST":
if limit(r.RemoteAddr, postLRUCache, postMut, postLimit, postLimitBurst) {
w.WriteHeader(429)
return
}
handlePostRequest(w, r)
default:
if debug {
log.Println("Unhandled HTTP method", r.Method)
}
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
}
}
func handleGetRequest(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json; charset=utf-8")
mut.RLock()
relays := append(permanentRelays, knownRelays...)
mut.RUnlock()
// Shuffle
for i := range relays {
j := rand.Intn(i + 1)
relays[i], relays[j] = relays[j], relays[i]
}
json.NewEncoder(w).Encode(map[string][]relay{
"relays": relays,
})
}
func handlePostRequest(w http.ResponseWriter, r *http.Request) {
var newRelay relay
err := json.NewDecoder(r.Body).Decode(&newRelay)
r.Body.Close()
if err != nil {
if debug {
log.Println("Failed to parse payload")
}
http.Error(w, err.Error(), 500)
return
}
uri, err := url.Parse(newRelay.URL)
if err != nil {
if debug {
log.Println("Failed to parse URI", newRelay.URL)
}
http.Error(w, err.Error(), 500)
return
}
host, port, err := net.SplitHostPort(uri.Host)
if err != nil {
if debug {
log.Println("Failed to split URI", newRelay.URL)
}
http.Error(w, err.Error(), 500)
return
}
// Get the IP address of the client
rhost, _, err := net.SplitHostPort(r.RemoteAddr)
if err != nil {
if debug {
log.Println("Failed to split remote address", r.RemoteAddr)
}
http.Error(w, err.Error(), 500)
return
}
ip := net.ParseIP(host)
// The relay did not advertise a usable IP address in its URL; use the address it connected from.
if ip == nil || ip.IsUnspecified() {
uri.Host = net.JoinHostPort(rhost, port)
newRelay.URL = uri.String()
} else if host != rhost {
if debug {
log.Println("IP address advertised does not match client IP address", r.RemoteAddr, uri)
}
http.Error(w, "IP address does not match client IP", http.StatusUnauthorized)
return
}
newRelay.uri = uri
newRelay.Location = getLocation(uri.Host)
for _, current := range permanentRelays {
if current.uri.Host == newRelay.uri.Host {
if debug {
log.Println("Asked to add a relay", newRelay, "which exists in permanent list")
}
http.Error(w, "Invalid request", 500)
return
}
}
reschan := make(chan result)
select {
case requests <- request{newRelay, uri, reschan}:
result := <-reschan
if result.err != nil {
http.Error(w, result.err.Error(), 500)
return
}
w.Header().Set("Content-Type", "application/json; charset=utf-8")
json.NewEncoder(w).Encode(map[string]time.Duration{
"evictionIn": result.eviction,
})
default:
if debug {
log.Println("Dropping request")
}
w.WriteHeader(429)
}
}
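// requestProcessor serially handles queued relay registrations: it probes each
// relay with the test certificate, replaces any existing entry for the same
// host, and (re)arms the relay's eviction timer.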
func requestProcessor() {
for request := range requests {
if debug {
log.Println("Request for", request.relay)
}
if !client.TestRelay(request.uri, []tls.Certificate{testCert}, time.Second, 2*time.Second, 3) {
if debug {
log.Println("Test for relay", request.relay, "failed")
}
request.result <- result{fmt.Errorf("test failed"), 0}
continue
}
mut.Lock()
timer, ok := evictionTimers[request.relay.uri.Host]
if ok {
if debug {
log.Println("Stopping existing timer for", request.relay)
}
timer.Stop()
}
for i, current := range knownRelays {
if current.uri.Host == request.relay.uri.Host {
if debug {
log.Println("Relay", request.relay, "already exists")
}
// Evict the old entry anyway, as configuration might have changed.
last := len(knownRelays) - 1
knownRelays[i] = knownRelays[last]
knownRelays = knownRelays[:last]
goto found
}
}
if debug {
log.Println("Adding new relay", request.relay)
}
found:
knownRelays = append(knownRelays, request.relay)
evictionTimers[request.relay.uri.Host] = time.AfterFunc(evictionTime, evict(request.relay))
mut.Unlock()
request.result <- result{nil, evictionTime}
}
}
func evict(relay relay) func() {
return func() {
mut.Lock()
defer mut.Unlock()
if debug {
log.Println("Evicting", relay)
}
for i, current := range knownRelays {
if current.uri.Host == relay.uri.Host {
if debug {
log.Println("Evicted", relay)
}
last := len(knownRelays) - 1
knownRelays[i] = knownRelays[last]
knownRelays = knownRelays[:last]
}
}
delete(evictionTimers, relay.uri.Host)
}
}
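// limit reports whether the given remote address has exceeded its request
// allowance. One token-bucket limiter per host is kept in an LRU cache so that
// memory use stays bounded.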
func limit(addr string, cache *lru.Cache, lock sync.RWMutex, intv time.Duration, burst int) bool {
host, _, err := net.SplitHostPort(addr)
if err != nil {
return false
}
lock.RLock()
bkt, ok := cache.Get(host)
lock.RUnlock()
if ok {
bkt := bkt.(*rate.Limiter)
if !bkt.Allow() {
// Rate limit
return true
}
} else {
lock.Lock()
cache.Add(host, rate.NewLimiter(rate.Every(intv), burst))
lock.Unlock()
}
return false
}
func loadPermanentRelays(file string) {
content, err := ioutil.ReadFile(file)
if err != nil {
log.Fatal(err)
}
for _, line := range strings.Split(string(content), "\n") {
if len(line) == 0 {
continue
}
uri, err := url.Parse(line)
if err != nil {
if debug {
log.Println("Skipping permanent relay", line, "due to parse error", err)
}
continue
}
permanentRelays = append(permanentRelays, relay{
URL: line,
Location: getLocation(uri.Host),
uri: uri,
})
if debug {
log.Println("Adding permanent relay", line)
}
}
}
func createTestCertificate() tls.Certificate {
tmpDir, err := ioutil.TempDir("", "relaypoolsrv")
if err != nil {
log.Fatal(err)
}
certFile, keyFile := filepath.Join(tmpDir, "cert.pem"), filepath.Join(tmpDir, "key.pem")
cert, err := tlsutil.NewCertificate(certFile, keyFile, "relaypoolsrv", 3072)
if err != nil {
log.Fatalln("Failed to create test X509 key pair:", err)
}
return cert
}
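// getLocation resolves the host against the GeoLite2 database and returns its
// coordinates, or a zero location if the lookup fails for any reason.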
func getLocation(host string) location {
db, err := geoip2.Open(geoipPath)
if err != nil {
return location{}
}
defer db.Close()
addr, err := net.ResolveTCPAddr("tcp", host)
if err != nil {
return location{}
}
city, err := db.City(addr.IP)
if err != nil {
return location{}
}
return location{
Latitude: city.Location.Latitude,
Longitude: city.Location.Longitude,
}
}

cmd/strelaysrv/LICENSE Normal file

@@ -0,0 +1,22 @@
The MIT License (MIT)
Copyright (c) 2015 The Syncthing Project
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

cmd/strelaysrv/README.md Normal file

@@ -0,0 +1,137 @@
strelaysrv
==========
[![Latest Build](http://img.shields.io/jenkins/s/http/build.syncthing.net/strelaysrv.svg?style=flat-square)](http://build.syncthing.net/job/strelaysrv/lastBuild/)
This is the relay server for the `syncthing` project.
To get it, run `go get github.com/syncthing/syncthing/cmd/strelaysrv` or download the
[latest build](http://build.syncthing.net/job/strelaysrv/lastSuccessfulBuild/artifact/)
from the build server.
:exclamation:Warnings:exclamation: - Read or regret
-----
By default, all relay servers will join the default public relay pool, which means that the relay server will be available for public use, and **will consume your bandwidth** helping others to connect.
If you wish to disable this behaviour, please specify the `-pools=""` argument.
Please note that `strelaysrv` is only usable by `syncthing` **version v0.12 and onwards**.
To run `strelaysrv` you need port 22067 to be reachable from the internet, which means you might need to forward it and/or allow it through your firewall.
Furthermore, by default `strelaysrv` also exposes a /status HTTP endpoint on port 22070, which is used by the pool servers to read metrics from the `strelaysrv`, such as the current transfer rates, how many clients are connected, and so on. If you wish this information to be available, you may need to forward this port and allow it through your firewall as well. This is not mandatory for `strelaysrv` to function; it is only used to gather metrics and present them on the overview page of the pool server.
At the time of writing, the endpoint output looks as follows:
```
{
"bytesProxied": 0,
"goArch": "amd64",
"goMaxProcs": 1,
"goNumRoutine": 13,
"goOS": "linux",
"goVersion": "go1.6",
"kbps10s1m5m15m30m60m": [
0,
0,
0,
0,
0,
0
],
"numActiveSessions": 0,
"numConnections": 0,
"numPendingSessionKeys": 2,
"numProxies": 0,
"options": {
"global-rate": 0,
"message-timeout": 60,
"network-timeout": 120,
"per-session-rate": 0,
"ping-interval": 60,
"pools": [
"https://relays.syncthing.net/endpoint"
],
"provided-by": ""
},
"startTime": "2016-03-06T12:53:07.090847749-05:00",
"uptimeSeconds": 17
}
```
If you wish to disable the /status endpoint, provide `-status-srv=""` as one of the arguments when starting the strelaysrv.
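For scripting or monitoring, the endpoint can be consumed with any HTTP client. As a rough illustration only (the address below is an example, not a real relay, and the struct picks out just a few of the fields shown above), a Go sketch might look like:
```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://192.0.2.1:22070/status") // example relay address
	if err != nil {
		log.Fatalln("fetching status:", err)
	}
	defer resp.Body.Close()

	var status struct {
		BytesProxied      int64   `json:"bytesProxied"`
		NumActiveSessions int     `json:"numActiveSessions"`
		NumConnections    int     `json:"numConnections"`
		Rates             []int64 `json:"kbps10s1m5m15m30m60m"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&status); err != nil {
		log.Fatalln("decoding status:", err)
	}
	fmt.Printf("proxied %d bytes, %d sessions, %d connections\n",
		status.BytesProxied, status.NumActiveSessions, status.NumConnections)
	if len(status.Rates) == 6 {
		fmt.Printf("10s rate: %d kbps, 60m rate: %d kbps\n", status.Rates[0], status.Rates[5])
	}
}
```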
Running for public use
----
Make sure you have a public IP with port 22067 open, or have forwarded port 22067 if you are behind a NAT.
Run the `strelaysrv` with no arguments (or `-debug` if you want more output), and that should be enough for the server to join the public relay pool.
You should see a message saying:
```
2015/09/21 22:45:46 pool.go:60: Joined https://relays.syncthing.net/endpoint rejoining in 48m0s
```
See `strelaysrv -help` for other options, such as rate limits, timeout intervals, etc.
Running for private use
-----
Once you've started the `strelaysrv`, it will generate a key pair and print a URI:
```bash
relay://:22067/?id=EZQOIDM-6DDD4ZI-DJ65NSM-4OQWRAT-EIKSMJO-OZ552BO-WQZEGYY-STS5RQM&pingInterval=1m0s&networkTimeout=2m0s&sessionLimitBps=0&globalLimitBps=0&statusAddr=:22070
```
This URI contains a partial address of the relay server, as well as its options, which may in the future be taken into account when choosing the most suitable relay.
Because the `-listen` option was not used, `strelaysrv` does not know its external IP, so you should replace the host part of the URI with the public IP address on which `strelaysrv` will be reachable:
```bash
relay://192.0.2.1:22067/?id=EZQOIDM-6DDD4ZI-DJ65NSM-4OQWRAT-EIKSMJO-OZ552BO-WQZEGYY-STS5RQM&pingInterval=1m0s&networkTimeout=2m0s&sessionLimitBps=0&globalLimitBps=0&statusAddr=:22070
```
If you do not care about certificate pinning (improved security) or about passing verbose settings to the clients, you can shorten the URI to just the host part:
```bash
relay://192.0.2.1:22067
```
This URI can then be used in `syncthing` clients as one of the relay servers by adding the URI to the "Sync Protocol Listen Address" field, under Actions and Settings.
See `strelaysrv -help` for other options, such as rate limits, timeout intervals, etc.
Other items available in this repo
----
##### testutil
A test utility which can be used to test the connectivity of a relay server.
You need to generate two x509 key pairs (key.pem and cert.pem), one for the client and one for the server, in separate directories.
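One way to generate them, if you have the syncthing source tree available, is with the same `tlsutil` helper the relay itself uses. The program below is only an illustrative sketch (the directory argument, common name and 3072-bit key size are arbitrary choices); run it once per directory:
```go
package main

import (
	"log"
	"os"
	"path/filepath"

	"github.com/syncthing/syncthing/lib/tlsutil"
)

func main() {
	dir := "."
	if len(os.Args) > 1 {
		dir = os.Args[1]
	}
	certFile := filepath.Join(dir, "cert.pem")
	keyFile := filepath.Join(dir, "key.pem")
	// NewCertificate writes cert.pem and key.pem into the given directory.
	if _, err := tlsutil.NewCertificate(certFile, keyFile, "testutil", 3072); err != nil {
		log.Fatalln("generating key pair:", err)
	}
	log.Println("wrote", certFile, "and", keyFile)
}
```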
Afterwards, start the client:
```bash
./testutil -relay="relay://192.0.2.1:22067" -keys=certs/client/ -join
```
This prints out the client ID:
```
2015/09/21 23:00:52 main.go:42: ID: BG2C5ZA-W7XPFDO-LH222Z6-65F3HJX-ADFTGRT-3SBFIGM-KV26O2Q-E5RMRQ2
```
In the other terminal run the following:
```bash
./testutil -relay="relay://192.0.2.1:22067" -keys=certs/server/ -connect=BG2C5ZA-W7XPFDO-LH222Z6-65F3HJX-ADFTGRT-3SBFIGM-KV26O2Q-E5RMRQ2
```
This should give you an interactive prompt, where you can type things in one terminal and have them relayed to the other terminal.
Relay related libraries used by this repo
----
##### Relay protocol definition.
[Available here](https://github.com/syncthing/syncthing/tree/master/lib/relay/protocol)
##### Relay client
Only used by the testutil.
[Available here](https://github.com/syncthing/syncthing/tree/master/lib/relay/client)
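As a rough illustration of using it outside the testutil (this mirrors the `client.TestRelay` call that `strelaypoolsrv` makes when validating submissions; the relay address below is an example):
```go
package main

import (
	"crypto/tls"
	"log"
	"net/url"
	"time"

	"github.com/syncthing/syncthing/lib/relay/client"
	"github.com/syncthing/syncthing/lib/tlsutil"
)

func main() {
	// A throwaway certificate to identify ourselves to the relay.
	cert, err := tlsutil.NewCertificate("cert.pem", "key.pem", "relaytest", 3072)
	if err != nil {
		log.Fatalln(err)
	}
	uri, err := url.Parse("relay://192.0.2.1:22067")
	if err != nil {
		log.Fatalln(err)
	}
	// Durations and retry count follow the values strelaypoolsrv uses.
	if client.TestRelay(uri, []tls.Certificate{cert}, time.Second, 2*time.Second, 3) {
		log.Println("relay is reachable and hands out session invitations")
	} else {
		log.Println("relay test failed")
	}
}
```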


@@ -0,0 +1,17 @@
[Unit]
Description=Syncthing relay server
After=network.target
[Service]
User=strelaysrv
Group=strelaysrv
ExecStart=/usr/bin/strelaysrv
WorkingDirectory=/var/lib/strelaysrv
PrivateTmp=true
ProtectSystem=full
ProtectHome=true
NoNewPrivileges=true
[Install]
WantedBy=multi-user.target

cmd/strelaysrv/listener.go Normal file

@@ -0,0 +1,346 @@
// Copyright (C) 2015 Audrius Butkevicius and Contributors.
package main
import (
"crypto/tls"
"encoding/hex"
"log"
"net"
"sync"
"sync/atomic"
"time"
syncthingprotocol "github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/tlsutil"
"github.com/syncthing/syncthing/lib/relay/protocol"
)
var (
outboxesMut = sync.RWMutex{}
outboxes = make(map[syncthingprotocol.DeviceID]chan interface{})
numConnections int64
)
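// listener accepts raw TCP connections and dispatches them: TLS connections
// speak the relay protocol, while plain connections are expected to join an
// already negotiated session.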
func listener(proto, addr string, config *tls.Config) {
tcpListener, err := net.Listen("tcp", addr)
if err != nil {
log.Fatalln(err)
}
listener := tlsutil.DowngradingListener{
Listener: tcpListener,
}
for {
conn, isTLS, err := listener.AcceptNoWrapTLS()
if err != nil {
if debug {
log.Println("Listener failed to accept connection from", conn.RemoteAddr(), ". Possibly a TCP Ping.")
}
continue
}
setTCPOptions(conn)
if debug {
log.Println("Listener accepted connection from", conn.RemoteAddr(), "tls", isTLS)
}
if isTLS {
go protocolConnectionHandler(conn, config)
} else {
go sessionConnectionHandler(conn)
}
}
}
func protocolConnectionHandler(tcpConn net.Conn, config *tls.Config) {
conn := tls.Server(tcpConn, config)
err := conn.Handshake()
if err != nil {
if debug {
log.Println("Protocol connection TLS handshake:", conn.RemoteAddr(), err)
}
conn.Close()
return
}
state := conn.ConnectionState()
if (!state.NegotiatedProtocolIsMutual || state.NegotiatedProtocol != protocol.ProtocolName) && debug {
log.Println("Protocol negotiation error")
}
certs := state.PeerCertificates
if len(certs) != 1 {
if debug {
log.Println("Certificate list error")
}
conn.Close()
return
}
id := syncthingprotocol.NewDeviceID(certs[0].Raw)
messages := make(chan interface{})
errors := make(chan error, 1)
outbox := make(chan interface{})
// Read messages from the connection and send them on the messages
// channel. When there is an error, send it on the error channel and
// return. Applies also when the connection gets closed, so the pattern
// below is to close the connection on error, then wait for the error
// signal from messageReader to exit.
go messageReader(conn, messages, errors)
pingTicker := time.NewTicker(pingInterval)
timeoutTicker := time.NewTimer(networkTimeout)
joined := false
for {
select {
case message := <-messages:
timeoutTicker.Reset(networkTimeout)
if debug {
log.Printf("Message %T from %s", message, id)
}
switch msg := message.(type) {
case protocol.JoinRelayRequest:
if atomic.LoadInt32(&overLimit) > 0 {
protocol.WriteMessage(conn, protocol.RelayFull{})
if debug {
log.Println("Refusing join request from", id, "due to being over limits")
}
conn.Close()
limitCheckTimer.Reset(time.Second)
continue
}
outboxesMut.RLock()
_, ok := outboxes[id]
outboxesMut.RUnlock()
if ok {
protocol.WriteMessage(conn, protocol.ResponseAlreadyConnected)
if debug {
log.Println("Already have a peer with the same ID", id, conn.RemoteAddr())
}
conn.Close()
continue
}
outboxesMut.Lock()
outboxes[id] = outbox
outboxesMut.Unlock()
joined = true
protocol.WriteMessage(conn, protocol.ResponseSuccess)
case protocol.ConnectRequest:
requestedPeer := syncthingprotocol.DeviceIDFromBytes(msg.ID)
outboxesMut.RLock()
peerOutbox, ok := outboxes[requestedPeer]
outboxesMut.RUnlock()
if !ok {
if debug {
log.Println(id, "is looking for", requestedPeer, "which does not exist")
}
protocol.WriteMessage(conn, protocol.ResponseNotFound)
conn.Close()
continue
}
// requestedPeer is the server, id is the client
ses := newSession(requestedPeer, id, sessionLimiter, globalLimiter)
go ses.Serve()
clientInvitation := ses.GetClientInvitationMessage()
serverInvitation := ses.GetServerInvitationMessage()
if err := protocol.WriteMessage(conn, clientInvitation); err != nil {
if debug {
log.Printf("Error sending invitation from %s to client: %s", id, err)
}
conn.Close()
continue
}
select {
case peerOutbox <- serverInvitation:
if debug {
log.Println("Sent invitation from", id, "to", requestedPeer)
}
case <-time.After(time.Second):
if debug {
log.Println("Could not send invitation from", id, "to", requestedPeer, "as peer disconnected")
}
}
conn.Close()
case protocol.Ping:
if err := protocol.WriteMessage(conn, protocol.Pong{}); err != nil {
if debug {
log.Println("Error writing pong:", err)
}
conn.Close()
continue
}
case protocol.Pong:
// Nothing
default:
if debug {
log.Printf("Unknown message %s: %T", id, message)
}
protocol.WriteMessage(conn, protocol.ResponseUnexpectedMessage)
conn.Close()
}
case err := <-errors:
if debug {
log.Printf("Closing connection %s: %s", id, err)
}
// Potentially closing a second time.
conn.Close()
if joined {
// Only delete the outbox if the client is joined, as it might be
// a lookup request coming from the same client.
outboxesMut.Lock()
delete(outboxes, id)
outboxesMut.Unlock()
// Also, kill all sessions related to this node, as it probably
// went offline. This helps the other end realize more quickly that
// the client is no longer there. It also helps resolve
// 'already connected' errors when one of the sides is restarting
// and connects to the other peer before that peer has realized
// that the node went away.
dropSessions(id)
}
return
case <-pingTicker.C:
if !joined {
if debug {
log.Println(id, "didn't join within", pingInterval)
}
conn.Close()
continue
}
if err := protocol.WriteMessage(conn, protocol.Ping{}); err != nil {
if debug {
log.Println(id, err)
}
conn.Close()
}
if atomic.LoadInt32(&overLimit) > 0 && !hasSessions(id) {
if debug {
log.Println("Dropping", id, "as it has no sessions and we are over our limits")
}
protocol.WriteMessage(conn, protocol.RelayFull{})
conn.Close()
limitCheckTimer.Reset(time.Second)
}
case <-timeoutTicker.C:
// We should receive an error from the reader loop, which will cause
// us to quit this loop.
if debug {
log.Printf("%s timed out", id)
}
conn.Close()
case msg := <-outbox:
if debug {
log.Printf("Sending message %T to %s", msg, id)
}
if err := protocol.WriteMessage(conn, msg); err != nil {
if debug {
log.Println(id, err)
}
conn.Close()
}
}
}
}
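// sessionConnectionHandler services the plain-TCP side of a relay: it expects
// a JoinSessionRequest, attaches the connection to the matching pending
// session and acknowledges it.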
func sessionConnectionHandler(conn net.Conn) {
if err := conn.SetDeadline(time.Now().Add(messageTimeout)); err != nil {
if debug {
log.Println("Weird error setting deadline:", err, "on", conn.RemoteAddr())
}
return
}
message, err := protocol.ReadMessage(conn)
if err != nil {
return
}
switch msg := message.(type) {
case protocol.JoinSessionRequest:
ses := findSession(string(msg.Key))
if debug {
log.Println(conn.RemoteAddr(), "session lookup", ses, hex.EncodeToString(msg.Key)[:5])
}
if ses == nil {
protocol.WriteMessage(conn, protocol.ResponseNotFound)
conn.Close()
return
}
if !ses.AddConnection(conn) {
if debug {
log.Println("Failed to add", conn.RemoteAddr(), "to session", ses)
}
protocol.WriteMessage(conn, protocol.ResponseAlreadyConnected)
conn.Close()
return
}
if err := protocol.WriteMessage(conn, protocol.ResponseSuccess); err != nil {
if debug {
log.Println("Failed to send session join response to ", conn.RemoteAddr(), "for", ses)
}
return
}
if err := conn.SetDeadline(time.Time{}); err != nil {
if debug {
log.Println("Weird error setting deadline:", err, "on", conn.RemoteAddr())
}
conn.Close()
return
}
default:
if debug {
log.Println("Unexpected message from", conn.RemoteAddr(), message)
}
protocol.WriteMessage(conn, protocol.ResponseUnexpectedMessage)
conn.Close()
}
}
func messageReader(conn net.Conn, messages chan<- interface{}, errors chan<- error) {
atomic.AddInt64(&numConnections, 1)
defer atomic.AddInt64(&numConnections, -1)
for {
msg, err := protocol.ReadMessage(conn)
if err != nil {
errors <- err
return
}
messages <- msg
}
}

cmd/strelaysrv/main.go Normal file

@@ -0,0 +1,301 @@
// Copyright (C) 2015 Audrius Butkevicius and Contributors.
package main
import (
"crypto/tls"
"flag"
"fmt"
"log"
"net"
"net/http"
"net/url"
"os"
"os/signal"
"path/filepath"
"runtime"
"strconv"
"strings"
"sync/atomic"
"syscall"
"time"
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/relay/protocol"
"github.com/syncthing/syncthing/lib/tlsutil"
"golang.org/x/time/rate"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/nat"
_ "github.com/syncthing/syncthing/lib/pmp"
_ "github.com/syncthing/syncthing/lib/upnp"
syncthingprotocol "github.com/syncthing/syncthing/lib/protocol"
)
var (
Version string
BuildStamp string
BuildUser string
BuildHost string
BuildDate time.Time
LongVersion string
)
func init() {
stamp, _ := strconv.Atoi(BuildStamp)
BuildDate = time.Unix(int64(stamp), 0)
date := BuildDate.UTC().Format("2006-01-02 15:04:05 MST")
LongVersion = fmt.Sprintf(`strelaysrv %s (%s %s-%s) %s@%s %s`, Version, runtime.Version(), runtime.GOOS, runtime.GOARCH, BuildUser, BuildHost, date)
}
var (
listen string
debug bool
sessionAddress []byte
sessionPort uint16
networkTimeout = 2 * time.Minute
pingInterval = time.Minute
messageTimeout = time.Minute
limitCheckTimer *time.Timer
sessionLimitBps int
globalLimitBps int
overLimit int32
descriptorLimit int64
sessionLimiter *rate.Limiter
globalLimiter *rate.Limiter
statusAddr string
poolAddrs string
pools []string
providedBy string
defaultPoolAddrs = "https://relays.syncthing.net/endpoint"
natEnabled bool
natLease int
natRenewal int
natTimeout int
)
func main() {
log.SetFlags(log.Lshortfile | log.LstdFlags)
var dir, extAddress, proto string
flag.StringVar(&listen, "listen", ":22067", "Protocol listen address")
flag.StringVar(&dir, "keys", ".", "Directory where cert.pem and key.pem are stored")
flag.DurationVar(&networkTimeout, "network-timeout", networkTimeout, "Timeout for network operations between the client and the relay.\n\tIf no data is received between the client and the relay in this period of time, the connection is terminated.\n\tFurthermore, if no data is sent between either clients being relayed within this period of time, the session is also terminated.")
flag.DurationVar(&pingInterval, "ping-interval", pingInterval, "How often pings are sent")
flag.DurationVar(&messageTimeout, "message-timeout", messageTimeout, "Maximum amount of time we wait for relevant messages to arrive")
flag.IntVar(&sessionLimitBps, "per-session-rate", sessionLimitBps, "Per session rate limit, in bytes/s")
flag.IntVar(&globalLimitBps, "global-rate", globalLimitBps, "Global rate limit, in bytes/s")
flag.BoolVar(&debug, "debug", debug, "Enable debug output")
flag.StringVar(&statusAddr, "status-srv", ":22070", "Listen address for status service (blank to disable)")
flag.StringVar(&poolAddrs, "pools", defaultPoolAddrs, "Comma separated list of relay pool addresses to join")
flag.StringVar(&providedBy, "provided-by", "", "An optional description about who provides the relay")
flag.StringVar(&extAddress, "ext-address", "", "An optional address to advertise as being available on.\n\tAllows listening on an unprivileged port with port forwarding from e.g. 443, and be connected to on port 443.")
flag.StringVar(&proto, "protocol", "tcp", "Protocol used for listening. 'tcp' for IPv4 and IPv6, 'tcp4' for IPv4, 'tcp6' for IPv6")
flag.BoolVar(&natEnabled, "nat", false, "Use UPnP/NAT-PMP to acquire external port mapping")
flag.IntVar(&natLease, "nat-lease", 60, "NAT lease length in minutes")
flag.IntVar(&natRenewal, "nat-renewal", 30, "NAT renewal frequency in minutes")
flag.IntVar(&natTimeout, "nat-timeout", 10, "NAT discovery timeout in seconds")
flag.Parse()
if extAddress == "" {
extAddress = listen
}
if len(providedBy) > 30 {
log.Fatal("Provided-by cannot be longer than 30 characters")
}
addr, err := net.ResolveTCPAddr(proto, extAddress)
if err != nil {
log.Fatal(err)
}
laddr, err := net.ResolveTCPAddr(proto, listen)
if err != nil {
log.Fatal(err)
}
if laddr.IP != nil && !laddr.IP.IsUnspecified() {
laddr.Port = 0
transport, ok := http.DefaultTransport.(*http.Transport)
if ok {
transport.Dial = (&net.Dialer{
Timeout: 30 * time.Second,
LocalAddr: laddr,
}).Dial
}
}
log.Println(LongVersion)
maxDescriptors, err := osutil.MaximizeOpenFileLimit()
if maxDescriptors > 0 {
// Assume that 20% of FD's are leaked/unaccounted for.
descriptorLimit = int64(maxDescriptors*80) / 100
log.Println("Connection limit", descriptorLimit)
go monitorLimits()
} else if err != nil && runtime.GOOS != "windows" {
log.Println("Assuming no connection limit, due to error retrieving rlimits:", err)
}
sessionAddress = addr.IP[:]
sessionPort = uint16(addr.Port)
certFile, keyFile := filepath.Join(dir, "cert.pem"), filepath.Join(dir, "key.pem")
cert, err := tls.LoadX509KeyPair(certFile, keyFile)
if err != nil {
log.Println("Failed to load keypair. Generating one, this might take a while...")
cert, err = tlsutil.NewCertificate(certFile, keyFile, "strelaysrv", 3072)
if err != nil {
log.Fatalln("Failed to generate X509 key pair:", err)
}
}
tlsCfg := &tls.Config{
Certificates: []tls.Certificate{cert},
NextProtos: []string{protocol.ProtocolName},
ClientAuth: tls.RequestClientCert,
SessionTicketsDisabled: true,
InsecureSkipVerify: true,
MinVersion: tls.VersionTLS12,
CipherSuites: []uint16{
tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
tls.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,
tls.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
},
}
id := syncthingprotocol.NewDeviceID(cert.Certificate[0])
if debug {
log.Println("ID:", id)
}
wrapper := config.Wrap("config", config.New(id))
wrapper.SetOptions(config.OptionsConfiguration{
NATLeaseM: natLease,
NATRenewalM: natRenewal,
NATTimeoutS: natTimeout,
})
natSvc := nat.NewService(id, wrapper)
mapping := mapping{natSvc.NewMapping(nat.TCP, addr.IP, addr.Port)}
if natEnabled {
go natSvc.Serve()
found := make(chan struct{})
mapping.OnChanged(func(_ *nat.Mapping, _, _ []nat.Address) {
select {
case found <- struct{}{}:
default:
}
})
// Wait a few extra seconds, since the NAT library waits exactly natTimeout seconds on all interfaces.
timeout := time.Duration(natTimeout+2) * time.Second
log.Printf("Waiting %s to acquire NAT mapping", timeout)
select {
case <-found:
log.Printf("Found NAT mapping: %s", mapping.ExternalAddresses())
case <-time.After(timeout):
log.Println("Timeout out waiting for NAT mapping.")
}
}
if sessionLimitBps > 0 {
sessionLimiter = rate.NewLimiter(rate.Limit(sessionLimitBps), 2*sessionLimitBps)
}
if globalLimitBps > 0 {
globalLimiter = rate.NewLimiter(rate.Limit(globalLimitBps), 2*globalLimitBps)
}
if statusAddr != "" {
go statusService(statusAddr)
}
uri, err := url.Parse(fmt.Sprintf("relay://%s/?id=%s&pingInterval=%s&networkTimeout=%s&sessionLimitBps=%d&globalLimitBps=%d&statusAddr=%s&providedBy=%s", mapping.Address(), id, pingInterval, networkTimeout, sessionLimitBps, globalLimitBps, statusAddr, providedBy))
if err != nil {
log.Fatalln("Failed to construct URI", err)
}
log.Println("URI:", uri.String())
if poolAddrs == defaultPoolAddrs {
log.Println("!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!")
log.Println("!! Joining default relay pools, this relay will be available for public use. !!")
log.Println(`!! Use the -pools="" command line option to make the relay private. !!`)
log.Println("!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!")
}
pools = strings.Split(poolAddrs, ",")
for _, pool := range pools {
pool = strings.TrimSpace(pool)
if len(pool) > 0 {
go poolHandler(pool, uri, mapping)
}
}
go listener(proto, listen, tlsCfg)
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)
<-sigs
// Gracefully close all connections, hoping that clients will be faster
// to realize that the relay is now gone.
sessionMut.RLock()
for _, session := range activeSessions {
session.CloseConns()
}
for _, session := range pendingSessions {
session.CloseConns()
}
sessionMut.RUnlock()
outboxesMut.RLock()
for _, outbox := range outboxes {
close(outbox)
}
outboxesMut.RUnlock()
time.Sleep(500 * time.Millisecond)
}
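// monitorLimits periodically compares the number of open connections plus
// proxied streams against the descriptor-derived limit and raises or clears
// the overLimit flag accordingly.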
func monitorLimits() {
limitCheckTimer = time.NewTimer(time.Minute)
for range limitCheckTimer.C {
if atomic.LoadInt64(&numConnections)+atomic.LoadInt64(&numProxies) > descriptorLimit {
atomic.StoreInt32(&overLimit, 1)
log.Println("Gone past our connection limits. Starting to refuse new/drop idle connections.")
} else if atomic.CompareAndSwapInt32(&overLimit, 1, 0) {
log.Println("Dropped below our connection limits. Accepting new connections.")
}
limitCheckTimer.Reset(time.Minute)
}
}
type mapping struct {
*nat.Mapping
}
func (m *mapping) Address() nat.Address {
ext := m.ExternalAddresses()
if len(ext) > 0 {
return ext[0]
}
return m.Mapping.Address()
}

cmd/strelaysrv/pool.go Normal file

@@ -0,0 +1,66 @@
// Copyright (C) 2015 Audrius Butkevicius and Contributors.
package main
import (
"bytes"
"encoding/json"
"io/ioutil"
"log"
"net/http"
"net/url"
"time"
)
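// poolHandler keeps this relay registered with a single pool: it POSTs the
// advertised URI, then sleeps until shortly before the announced eviction
// time (backing off on errors) and re-announces.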
func poolHandler(pool string, uri *url.URL, mapping mapping) {
if debug {
log.Println("Joining", pool)
}
for {
uriCopy := *uri
uriCopy.Host = mapping.Address().String()
var b bytes.Buffer
json.NewEncoder(&b).Encode(struct {
URL string `json:"url"`
}{
uriCopy.String(),
})
resp, err := http.Post(pool, "application/json", &b)
if err != nil {
log.Println("Error joining pool", pool, err)
} else if resp.StatusCode == 500 {
bs, err := ioutil.ReadAll(resp.Body)
if err != nil {
log.Println("Failed to join", pool, "due to an internal server error. Could not read response:", err)
} else {
log.Println("Failed to join", pool, "due to an internal server error:", string(bs))
}
resp.Body.Close()
} else if resp.StatusCode == 429 {
log.Println(pool, "under load, will retry in a minute")
time.Sleep(time.Minute)
continue
} else if resp.StatusCode == 401 {
log.Println(pool, "failed to join due to IP address not matching external address. Aborting")
return
} else if resp.StatusCode == 200 {
var x struct {
EvictionIn time.Duration `json:"evictionIn"`
}
err := json.NewDecoder(resp.Body).Decode(&x)
if err == nil {
rejoin := x.EvictionIn - (x.EvictionIn / 5)
log.Println("Joined", pool, "rejoining in", rejoin)
time.Sleep(rejoin)
continue
} else {
log.Println("Failed to deserialize response", err)
}
} else {
log.Println(pool, "unknown response type from server", resp.StatusCode)
}
time.Sleep(time.Hour)
}
}

cmd/strelaysrv/session.go Normal file

@@ -0,0 +1,353 @@
// Copyright (C) 2015 Audrius Butkevicius and Contributors.
package main
import (
"crypto/rand"
"encoding/hex"
"fmt"
"log"
"math"
"net"
"sync"
"sync/atomic"
"time"
"golang.org/x/time/rate"
syncthingprotocol "github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/relay/protocol"
)
var (
sessionMut = sync.RWMutex{}
activeSessions = make([]*session, 0)
pendingSessions = make(map[string]*session, 0)
numProxies int64
bytesProxied int64
)
func newSession(serverid, clientid syncthingprotocol.DeviceID, sessionRateLimit, globalRateLimit *rate.Limiter) *session {
serverkey := make([]byte, 32)
_, err := rand.Read(serverkey)
if err != nil {
return nil
}
clientkey := make([]byte, 32)
_, err = rand.Read(clientkey)
if err != nil {
return nil
}
ses := &session{
serverkey: serverkey,
serverid: serverid,
clientkey: clientkey,
clientid: clientid,
rateLimit: makeRateLimitFunc(sessionRateLimit, globalRateLimit),
connsChan: make(chan net.Conn),
conns: make([]net.Conn, 0, 2),
}
if debug {
log.Println("New session", ses)
}
sessionMut.Lock()
pendingSessions[string(ses.serverkey)] = ses
pendingSessions[string(ses.clientkey)] = ses
sessionMut.Unlock()
return ses
}
func findSession(key string) *session {
sessionMut.Lock()
defer sessionMut.Unlock()
ses, ok := pendingSessions[key]
if !ok {
return nil
}
delete(pendingSessions, key)
return ses
}
func dropSessions(id syncthingprotocol.DeviceID) {
sessionMut.RLock()
for _, session := range activeSessions {
if session.HasParticipant(id) {
if debug {
log.Println("Dropping session", session, "involving", id)
}
session.CloseConns()
}
}
sessionMut.RUnlock()
}
func hasSessions(id syncthingprotocol.DeviceID) bool {
sessionMut.RLock()
has := false
for _, session := range activeSessions {
if session.HasParticipant(id) {
has = true
break
}
}
sessionMut.RUnlock()
return has
}
type session struct {
mut sync.Mutex
serverkey []byte
serverid syncthingprotocol.DeviceID
clientkey []byte
clientid syncthingprotocol.DeviceID
rateLimit func(bytes int)
connsChan chan net.Conn
conns []net.Conn
}
func (s *session) AddConnection(conn net.Conn) bool {
if debug {
log.Println("New connection for", s, "from", conn.RemoteAddr())
}
select {
case s.connsChan <- conn:
return true
default:
}
return false
}
func (s *session) Serve() {
timedout := time.After(messageTimeout)
if debug {
log.Println("Session", s, "serving")
}
for {
select {
case conn := <-s.connsChan:
s.mut.Lock()
s.conns = append(s.conns, conn)
s.mut.Unlock()
// We're the only ones mutating s.conns, hence we are free to read it.
if len(s.conns) < 2 {
continue
}
close(s.connsChan)
if debug {
log.Println("Session", s, "starting between", s.conns[0].RemoteAddr(), "and", s.conns[1].RemoteAddr())
}
wg := sync.WaitGroup{}
wg.Add(2)
var err0 error
go func() {
err0 = s.proxy(s.conns[0], s.conns[1])
wg.Done()
}()
var err1 error
go func() {
err1 = s.proxy(s.conns[1], s.conns[0])
wg.Done()
}()
sessionMut.Lock()
activeSessions = append(activeSessions, s)
sessionMut.Unlock()
wg.Wait()
if debug {
log.Println("Session", s, "ended, outcomes:", err0, "and", err1)
}
goto done
case <-timedout:
if debug {
log.Println("Session", s, "timed out")
}
goto done
}
}
done:
// We can end up here in 3 cases:
// 1. Timeout joining, in which case there are potentially entries in pendingSessions
// 2. General session end/timeout, in which case there are entries in activeSessions
// 3. The protocol handler calls dropSessions as one of its clients disconnects.
sessionMut.Lock()
delete(pendingSessions, string(s.serverkey))
delete(pendingSessions, string(s.clientkey))
for i, session := range activeSessions {
if session == s {
l := len(activeSessions) - 1
activeSessions[i] = activeSessions[l]
activeSessions[l] = nil
activeSessions = activeSessions[:l]
}
}
sessionMut.Unlock()
// If we are here because of case 2 or 3, we are potentially closing some or
// all connections a second time.
s.CloseConns()
if debug {
log.Println("Session", s, "stopping")
}
}
func (s *session) GetClientInvitationMessage() protocol.SessionInvitation {
return protocol.SessionInvitation{
From: s.serverid[:],
Key: s.clientkey,
Address: sessionAddress,
Port: sessionPort,
ServerSocket: false,
}
}
func (s *session) GetServerInvitationMessage() protocol.SessionInvitation {
return protocol.SessionInvitation{
From: s.clientid[:],
Key: s.serverkey,
Address: sessionAddress,
Port: sessionPort,
ServerSocket: true,
}
}
func (s *session) HasParticipant(id syncthingprotocol.DeviceID) bool {
return s.clientid == id || s.serverid == id
}
func (s *session) CloseConns() {
s.mut.Lock()
for _, conn := range s.conns {
conn.Close()
}
s.mut.Unlock()
}
func (s *session) proxy(c1, c2 net.Conn) error {
if debug {
log.Println("Proxy", c1.RemoteAddr(), "->", c2.RemoteAddr())
}
atomic.AddInt64(&numProxies, 1)
defer atomic.AddInt64(&numProxies, -1)
buf := make([]byte, 65536)
for {
c1.SetReadDeadline(time.Now().Add(networkTimeout))
n, err := c1.Read(buf)
if err != nil {
return err
}
atomic.AddInt64(&bytesProxied, int64(n))
if debug {
log.Printf("%d bytes from %s to %s", n, c1.RemoteAddr(), c2.RemoteAddr())
}
if s.rateLimit != nil {
s.rateLimit(n)
}
c2.SetWriteDeadline(time.Now().Add(networkTimeout))
_, err = c2.Write(buf[:n])
if err != nil {
return err
}
}
}
func (s *session) String() string {
return fmt.Sprintf("<%s/%s>", hex.EncodeToString(s.clientkey)[:5], hex.EncodeToString(s.serverkey)[:5])
}
func makeRateLimitFunc(sessionRateLimit, globalRateLimit *rate.Limiter) func(int) {
// This may be a case of super duper premature optimization... We build an
// optimized function to do the rate limiting here based on what we need
// to do and then use it in the loop.
if sessionRateLimit == nil && globalRateLimit == nil {
// No limiting needed. We could equally well return a func(int) {} and
// not do a nil check where we use it, but I think the nil check there
// makes it clear that there will be no limiting if none is
// configured...
return nil
}
if sessionRateLimit == nil {
// We only have a global limiter
return func(bytes int) {
take(bytes, globalRateLimit)
}
}
if globalRateLimit == nil {
// We only have a session limiter
return func(bytes int) {
take(bytes, sessionRateLimit)
}
}
// We have both. Queue the bytes on both the global and session specific
// rate limiters.
return func(bytes int) {
take(bytes, sessionRateLimit, globalRateLimit)
}
}
// take is a utility function to consume tokens from a set of rate.Limiters.
// Tokens are consumed in parallel on all limiters, respecting their
// individual burst sizes.
func take(tokens int, ls ...*rate.Limiter) {
// minBurst is the smallest burst size supported by all limiters.
minBurst := int(math.MaxInt32)
for _, l := range ls {
if burst := l.Burst(); burst < minBurst {
minBurst = burst
}
}
for tokens > 0 {
// chunk is how many tokens we can consume at a time
chunk := tokens
if chunk > minBurst {
chunk = minBurst
}
// maxDelay is the longest delay mandated by any of the limiters for
// the chosen chunk size.
var maxDelay time.Duration
for _, l := range ls {
res := l.ReserveN(time.Now(), chunk)
if del := res.Delay(); del > maxDelay {
maxDelay = del
}
}
time.Sleep(maxDelay)
tokens -= chunk
}
}
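For orientation, here is a standalone sketch of the reservation pattern that take implements (names and rates are illustrative, not part of the relay server): reserve the same chunk on every limiter, then sleep for the longest delay any of them mandates.

package main

import (
    "fmt"
    "time"

    "golang.org/x/time/rate"
)

func main() {
    // Example numbers only: a "session" limiter at 100 KB/s and a "global"
    // limiter at 1 MB/s, each with a burst of one second's worth of traffic.
    session := rate.NewLimiter(rate.Limit(100000), 100000)
    global := rate.NewLimiter(rate.Limit(1000000), 1000000)

    // Reserve 64 KiB on both limiters, as the proxy loop would after
    // forwarding one buffer, and wait for the slowest of the two.
    chunk := 65536
    var maxDelay time.Duration
    for _, l := range []*rate.Limiter{session, global} {
        if d := l.ReserveN(time.Now(), chunk).Delay(); d > maxDelay {
            maxDelay = d
        }
    }
    fmt.Println("sleeping", maxDelay, "before the next read")
    time.Sleep(maxDelay)
}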

cmd/strelaysrv/status.go (new file, 111 lines)

@@ -0,0 +1,111 @@
// Copyright (C) 2015 Audrius Butkevicius and Contributors.
package main
import (
"encoding/json"
"log"
"net/http"
"runtime"
"sync/atomic"
"time"
)
var rc *rateCalculator
func statusService(addr string) {
rc = newRateCalculator(360, 10*time.Second, &bytesProxied)
http.HandleFunc("/status", getStatus)
if err := http.ListenAndServe(addr, nil); err != nil {
log.Fatal(err)
}
}
func getStatus(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Access-Control-Allow-Origin", "*")
status := make(map[string]interface{})
sessionMut.Lock()
status["startTime"] = rc.startTime
status["uptimeSeconds"] = time.Since(rc.startTime) / time.Second
// This can potentially be double the number of pending sessions, as each
// session has two keys, one for each side.
status["numPendingSessionKeys"] = len(pendingSessions)
status["numActiveSessions"] = len(activeSessions)
sessionMut.Unlock()
status["numConnections"] = atomic.LoadInt64(&numConnections)
status["numProxies"] = atomic.LoadInt64(&numProxies)
status["bytesProxied"] = atomic.LoadInt64(&bytesProxied)
status["goVersion"] = runtime.Version()
status["goOS"] = runtime.GOOS
status["goArch"] = runtime.GOARCH
status["goMaxProcs"] = runtime.GOMAXPROCS(-1)
status["goNumRoutine"] = runtime.NumGoroutine()
status["kbps10s1m5m15m30m60m"] = []int64{
rc.rate(1) * 8 / 1000, // each interval is 10s
rc.rate(60/10) * 8 / 1000,
rc.rate(5*60/10) * 8 / 1000,
rc.rate(15*60/10) * 8 / 1000,
rc.rate(30*60/10) * 8 / 1000,
rc.rate(60*60/10) * 8 / 1000,
}
status["options"] = map[string]interface{}{
"network-timeout": networkTimeout / time.Second,
"ping-interval": pingInterval / time.Second,
"message-timeout": messageTimeout / time.Second,
"per-session-rate": sessionLimitBps,
"global-rate": globalLimitBps,
"pools": pools,
"provided-by": providedBy,
}
bs, err := json.MarshalIndent(status, "", " ")
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
w.Header().Set("Content-Type", "application/json")
w.Write(bs)
}
type rateCalculator struct {
rates []int64
prev int64
counter *int64
startTime time.Time
}
func newRateCalculator(keepIntervals int, interval time.Duration, counter *int64) *rateCalculator {
r := &rateCalculator{
rates: make([]int64, keepIntervals),
counter: counter,
startTime: time.Now(),
}
go r.updateRates(interval)
return r
}
func (r *rateCalculator) updateRates(interval time.Duration) {
for {
now := time.Now()
next := now.Truncate(interval).Add(interval)
time.Sleep(next.Sub(now))
cur := atomic.LoadInt64(r.counter)
rate := int64(float64(cur-r.prev) / interval.Seconds())
copy(r.rates[1:], r.rates)
r.rates[0] = rate
r.prev = cur
}
}
func (r *rateCalculator) rate(periods int) int64 {
var tot int64
for i := 0; i < periods; i++ {
tot += r.rates[i]
}
return tot / int64(periods)
}
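A hypothetical usage snippet for the rateCalculator above (assumed to live in the same package; exampleCounter and exampleRates are made-up names for this sketch):

package main

import (
    "sync/atomic"
    "time"
)

var exampleCounter int64

func exampleRates() int64 {
    // 360 buckets of 10 s each keep one hour of history, matching statusService.
    rc := newRateCalculator(360, 10*time.Second, &exampleCounter)

    // The proxy loop would normally bump the counter as bytes are forwarded.
    atomic.AddInt64(&exampleCounter, 1<<20)

    // rate(n) averages the most recent n ten-second buckets in bytes/s; the
    // status handler multiplies by 8/1000 to report kbit/s, as in the
    // kbps10s1m5m15m30m60m field above.
    return rc.rate(60/10) * 8 / 1000
}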

@@ -0,0 +1,152 @@
// Copyright (C) 2015 Audrius Butkevicius and Contributors (see the CONTRIBUTORS file).
package main
import (
"bufio"
"crypto/tls"
"flag"
"log"
"net"
"net/url"
"os"
"path/filepath"
"time"
syncthingprotocol "github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/relay/client"
"github.com/syncthing/syncthing/lib/relay/protocol"
)
func main() {
log.SetOutput(os.Stdout)
log.SetFlags(log.LstdFlags | log.Lshortfile)
var connect, relay, dir string
var join, test bool
flag.StringVar(&connect, "connect", "", "Device ID to connect to")
flag.BoolVar(&join, "join", false, "Join relay")
flag.BoolVar(&test, "test", false, "Generic relay test")
flag.StringVar(&relay, "relay", "relay://127.0.0.1:22067", "Relay address")
flag.StringVar(&dir, "keys", ".", "Directory where cert.pem and key.pem are stored")
flag.Parse()
certFile, keyFile := filepath.Join(dir, "cert.pem"), filepath.Join(dir, "key.pem")
cert, err := tls.LoadX509KeyPair(certFile, keyFile)
if err != nil {
log.Fatalln("Failed to load X509 key pair:", err)
}
id := syncthingprotocol.NewDeviceID(cert.Certificate[0])
log.Println("ID:", id)
uri, err := url.Parse(relay)
if err != nil {
log.Fatal(err)
}
stdin := make(chan string)
go stdinReader(stdin)
if join {
log.Println("Creating client")
relay, err := client.NewClient(uri, []tls.Certificate{cert}, nil, 10*time.Second)
if err != nil {
log.Fatal(err)
}
log.Println("Created client")
go relay.Serve()
recv := make(chan protocol.SessionInvitation)
go func() {
log.Println("Starting invitation receiver")
for invite := range relay.Invitations() {
select {
case recv <- invite:
log.Println("Received invitation", invite)
default:
log.Println("Discarding invitation", invite)
}
}
}()
for {
conn, err := client.JoinSession(<-recv)
if err != nil {
log.Fatalln("Failed to join", err)
}
log.Println("Joined", conn.RemoteAddr(), conn.LocalAddr())
connectToStdio(stdin, conn)
log.Println("Finished", conn.RemoteAddr(), conn.LocalAddr())
}
} else if connect != "" {
id, err := syncthingprotocol.DeviceIDFromString(connect)
if err != nil {
log.Fatal(err)
}
invite, err := client.GetInvitationFromRelay(uri, id, []tls.Certificate{cert}, 10*time.Second)
if err != nil {
log.Fatal(err)
}
log.Println("Received invitation", invite)
conn, err := client.JoinSession(invite)
if err != nil {
log.Fatalln("Failed to join", err)
}
log.Println("Joined", conn.RemoteAddr(), conn.LocalAddr())
connectToStdio(stdin, conn)
log.Println("Finished", conn.RemoteAddr(), conn.LocalAddr())
} else if test {
if client.TestRelay(uri, []tls.Certificate{cert}, time.Second, 2*time.Second, 4) {
log.Println("OK")
} else {
log.Println("FAIL")
}
} else {
log.Fatal("Requires either join or connect")
}
}
func stdinReader(c chan<- string) {
scanner := bufio.NewScanner(os.Stdin)
for scanner.Scan() {
c <- scanner.Text()
c <- "\n"
}
}
func connectToStdio(stdin <-chan string, conn net.Conn) {
go func() {
}()
buf := make([]byte, 1024)
for {
conn.SetReadDeadline(time.Now().Add(time.Millisecond))
n, err := conn.Read(buf[0:])
if err != nil {
nerr, ok := err.(net.Error)
if !ok || !nerr.Timeout() {
log.Println(err)
return
}
}
os.Stdout.Write(buf[:n])
select {
case msg := <-stdin:
_, err := conn.Write([]byte(msg))
if err != nil {
return
}
default:
}
}
}

cmd/strelaysrv/utils.go (new file, 28 lines)

@@ -0,0 +1,28 @@
// Copyright (C) 2015 Audrius Butkevicius and Contributors.
package main
import (
"errors"
"net"
)
func setTCPOptions(conn net.Conn) error {
tcpConn, ok := conn.(*net.TCPConn)
if !ok {
return errors.New("Not a TCP connection")
}
if err := tcpConn.SetLinger(0); err != nil {
return err
}
if err := tcpConn.SetNoDelay(true); err != nil {
return err
}
if err := tcpConn.SetKeepAlivePeriod(networkTimeout); err != nil {
return err
}
if err := tcpConn.SetKeepAlive(true); err != nil {
return err
}
return nil
}
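A hedged sketch of a typical caller for setTCPOptions (same package assumed; net and log are assumed to be imported by the server's main package, and the accept loop itself is illustrative only):

func acceptLoop(listener net.Listener) {
    for {
        conn, err := listener.Accept()
        if err != nil {
            continue
        }
        // Disable Nagle, drop lingering data on close and enable keep-alives
        // with a networkTimeout period before handing the connection on.
        if err := setTCPOptions(conn); err != nil {
            log.Println("TCP options:", err)
            conn.Close()
            continue
        }
        // ... pass conn to the protocol handler here ...
    }
}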

@@ -130,7 +130,7 @@ func printProgress(prefix string, count *int64) {
expectedIterations := float64(int(1) << uint(wantBits))
fmt.Printf("Want %d bits for prefix %q, about %.2g certs to test (statistically speaking)\n", wantBits, prefix, expectedIterations)
for _ = range time.NewTicker(15 * time.Second).C {
for range time.NewTicker(15 * time.Second).C {
tried := atomic.LoadInt64(count)
elapsed := time.Since(started)
rate := float64(tried) / elapsed.Seconds()

@@ -7,26 +7,23 @@
package main
import (
"bytes"
"compress/gzip"
"crypto/tls"
"encoding/json"
"fmt"
"io/ioutil"
"mime"
"net"
"net/http"
"os"
"path/filepath"
"reflect"
"runtime"
"runtime/pprof"
"sort"
"strconv"
"strings"
"time"
"github.com/rcrowley/go-metrics"
"github.com/syncthing/syncthing/lib/auto"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/discover"
@@ -35,18 +32,17 @@ import (
"github.com/syncthing/syncthing/lib/model"
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/rand"
"github.com/syncthing/syncthing/lib/stats"
"github.com/syncthing/syncthing/lib/sync"
"github.com/syncthing/syncthing/lib/tlsutil"
"github.com/syncthing/syncthing/lib/upgrade"
"github.com/syncthing/syncthing/lib/util"
"github.com/vitrun/qart/qr"
"golang.org/x/crypto/bcrypt"
)
var (
configInSync = true
startTime = time.Now()
startTime = time.Now()
)
type apiService struct {
@@ -54,20 +50,18 @@ type apiService struct {
cfg configIntf
httpsCertFile string
httpsKeyFile string
assetDir string
themes []string
statics *staticsServer
model modelIntf
eventSub events.BufferedSubscription
diskEventSub events.BufferedSubscription
discoverer discover.CachingMux
connectionsService connectionsIntf
fss *folderSummaryService
systemConfigMut sync.Mutex // serializes posts to /rest/system/config
stop chan struct{} // signals intentional stop
configChanged chan struct{} // signals intentional listener close due to config change
started chan struct{} // signals startup complete, for testing only
listener net.Listener
listenerMut sync.Mutex
started chan string // signals startup complete by sending the listener address, for testing only
startedOnce chan struct{} // the service has started successfully at least once
guiErrors logger.Recorder
systemLog logger.Recorder
@@ -75,10 +69,10 @@ type apiService struct {
type modelIntf interface {
GlobalDirectoryTree(folder, prefix string, levels int, dirsonly bool) map[string]interface{}
Completion(device protocol.DeviceID, folder string) float64
Completion(device protocol.DeviceID, folder string) model.FolderCompletion
Override(folder string)
NeedFolderFiles(folder string, page, perpage int) ([]db.FileInfoTruncated, []db.FileInfoTruncated, []db.FileInfoTruncated, int)
NeedSize(folder string) (nfiles int, bytes int64)
NeedSize(folder string) db.Counts
ConnectionStats() map[string]interface{}
DeviceStatistics() map[string]stats.DeviceStatistics
FolderStatistics() map[string]stats.FolderStatistics
@@ -88,78 +82,58 @@ type modelIntf interface {
Availability(folder, file string, version protocol.Vector, block protocol.BlockInfo) []model.Availability
GetIgnores(folder string) ([]string, []string, error)
SetIgnores(folder string, content []string) error
PauseDevice(device protocol.DeviceID)
ResumeDevice(device protocol.DeviceID)
DelayScan(folder string, next time.Duration)
ScanFolder(folder string) error
ScanFolders() map[string]error
ScanFolderSubs(folder string, subs []string) error
ScanFolderSubdirs(folder string, subs []string) error
BringToFront(folder, file string)
ConnectedTo(deviceID protocol.DeviceID) bool
GlobalSize(folder string) (nfiles, deleted int, bytes int64)
LocalSize(folder string) (nfiles, deleted int, bytes int64)
CurrentLocalVersion(folder string) (int64, bool)
RemoteLocalVersion(folder string) (int64, bool)
GlobalSize(folder string) db.Counts
LocalSize(folder string) db.Counts
CurrentSequence(folder string) (int64, bool)
RemoteSequence(folder string) (int64, bool)
State(folder string) (string, time.Time, error)
}
type configIntf interface {
GUI() config.GUIConfiguration
Raw() config.Configuration
RawCopy() config.Configuration
Options() config.OptionsConfiguration
Replace(cfg config.Configuration) config.CommitResponse
Replace(cfg config.Configuration) error
Subscribe(c config.Committer)
Folders() map[string]config.FolderConfiguration
Devices() map[protocol.DeviceID]config.DeviceConfiguration
SetDevice(config.DeviceConfiguration) error
Save() error
ListenAddresses() []string
RequiresRestart() bool
}
type connectionsIntf interface {
Status() map[string]interface{}
}
func newAPIService(id protocol.DeviceID, cfg configIntf, httpsCertFile, httpsKeyFile, assetDir string, m modelIntf, eventSub events.BufferedSubscription, discoverer discover.CachingMux, connectionsService connectionsIntf, errors, systemLog logger.Recorder) (*apiService, error) {
func newAPIService(id protocol.DeviceID, cfg configIntf, httpsCertFile, httpsKeyFile, assetDir string, m modelIntf, eventSub events.BufferedSubscription, diskEventSub events.BufferedSubscription, discoverer discover.CachingMux, connectionsService connectionsIntf, errors, systemLog logger.Recorder) *apiService {
service := &apiService{
id: id,
cfg: cfg,
httpsCertFile: httpsCertFile,
httpsKeyFile: httpsKeyFile,
assetDir: assetDir,
statics: newStaticsServer(cfg.GUI().Theme, assetDir),
model: m,
eventSub: eventSub,
diskEventSub: diskEventSub,
discoverer: discoverer,
connectionsService: connectionsService,
systemConfigMut: sync.NewMutex(),
stop: make(chan struct{}),
configChanged: make(chan struct{}),
listenerMut: sync.NewMutex(),
startedOnce: make(chan struct{}),
guiErrors: errors,
systemLog: systemLog,
}
seen := make(map[string]struct{})
// Load themes from compiled in assets.
for file := range auto.Assets() {
theme := strings.Split(file, "/")[0]
if _, ok := seen[theme]; !ok {
seen[theme] = struct{}{}
service.themes = append(service.themes, theme)
}
}
if assetDir != "" {
// Load any extra themes from the asset override dir.
for _, dir := range dirNames(assetDir) {
if _, ok := seen[dir]; !ok {
seen[dir] = struct{}{}
service.themes = append(service.themes, dir)
}
}
}
var err error
service.listener, err = service.getListener(cfg.GUI())
return service, err
return service
}
func (s *apiService) getListener(guiCfg config.GUIConfiguration) (net.Listener, error) {
@@ -226,9 +200,22 @@ func sendJSON(w http.ResponseWriter, jsonObject interface{}) {
}
func (s *apiService) Serve() {
s.listenerMut.Lock()
listener := s.listener
s.listenerMut.Unlock()
listener, err := s.getListener(s.cfg.GUI())
if err != nil {
select {
case <-s.startedOnce:
// We let this be a loud user-visible warning as it may be the only
// indication they get that the GUI won't be available.
l.Warnln("Starting API/GUI:", err)
return
default:
// This is during initialization. A failure here should be fatal
// as there will be no way for the user to communicate with us
// otherwise anyway.
l.Fatalln("Starting API/GUI:", err)
}
}
if listener == nil {
// Not much we can do here other than exit quickly. The supervisor
@@ -236,6 +223,8 @@ func (s *apiService) Serve() {
return
}
defer listener.Close()
// The GET handlers
getRestMux := http.NewServeMux()
getRestMux.HandleFunc("/rest/db/completion", s.getDBCompletion) // device folder
@@ -244,12 +233,14 @@ func (s *apiService) Serve() {
getRestMux.HandleFunc("/rest/db/need", s.getDBNeed) // folder [perpage] [page]
getRestMux.HandleFunc("/rest/db/status", s.getDBStatus) // folder
getRestMux.HandleFunc("/rest/db/browse", s.getDBBrowse) // folder [prefix] [dirsonly] [levels]
getRestMux.HandleFunc("/rest/events", s.getEvents) // since [limit]
getRestMux.HandleFunc("/rest/events", s.getIndexEvents) // since [limit]
getRestMux.HandleFunc("/rest/events/disk", s.getDiskEvents) // since [limit]
getRestMux.HandleFunc("/rest/stats/device", s.getDeviceStats) // -
getRestMux.HandleFunc("/rest/stats/folder", s.getFolderStats) // -
getRestMux.HandleFunc("/rest/svc/deviceid", s.getDeviceID) // id
getRestMux.HandleFunc("/rest/svc/lang", s.getLang) // -
getRestMux.HandleFunc("/rest/svc/report", s.getReport) // -
getRestMux.HandleFunc("/rest/svc/random/string", s.getRandomString) // [length]
getRestMux.HandleFunc("/rest/system/browse", s.getSystemBrowse) // current
getRestMux.HandleFunc("/rest/system/config", s.getSystemConfig) // -
getRestMux.HandleFunc("/rest/system/config/insync", s.getSystemConfigInsync) // -
@@ -266,25 +257,29 @@ func (s *apiService) Serve() {
// The POST handlers
postRestMux := http.NewServeMux()
postRestMux.HandleFunc("/rest/db/prio", s.postDBPrio) // folder file [perpage] [page]
postRestMux.HandleFunc("/rest/db/ignores", s.postDBIgnores) // folder
postRestMux.HandleFunc("/rest/db/override", s.postDBOverride) // folder
postRestMux.HandleFunc("/rest/db/scan", s.postDBScan) // folder [sub...] [delay]
postRestMux.HandleFunc("/rest/system/config", s.postSystemConfig) // <body>
postRestMux.HandleFunc("/rest/system/error", s.postSystemError) // <body>
postRestMux.HandleFunc("/rest/system/error/clear", s.postSystemErrorClear) // -
postRestMux.HandleFunc("/rest/system/ping", s.restPing) // -
postRestMux.HandleFunc("/rest/system/reset", s.postSystemReset) // [folder]
postRestMux.HandleFunc("/rest/system/restart", s.postSystemRestart) // -
postRestMux.HandleFunc("/rest/system/shutdown", s.postSystemShutdown) // -
postRestMux.HandleFunc("/rest/system/upgrade", s.postSystemUpgrade) // -
postRestMux.HandleFunc("/rest/system/pause", s.postSystemPause) // device
postRestMux.HandleFunc("/rest/system/resume", s.postSystemResume) // device
postRestMux.HandleFunc("/rest/system/debug", s.postSystemDebug) // [enable] [disable]
postRestMux.HandleFunc("/rest/db/prio", s.postDBPrio) // folder file [perpage] [page]
postRestMux.HandleFunc("/rest/db/ignores", s.postDBIgnores) // folder
postRestMux.HandleFunc("/rest/db/override", s.postDBOverride) // folder
postRestMux.HandleFunc("/rest/db/scan", s.postDBScan) // folder [sub...] [delay]
postRestMux.HandleFunc("/rest/system/config", s.postSystemConfig) // <body>
postRestMux.HandleFunc("/rest/system/error", s.postSystemError) // <body>
postRestMux.HandleFunc("/rest/system/error/clear", s.postSystemErrorClear) // -
postRestMux.HandleFunc("/rest/system/ping", s.restPing) // -
postRestMux.HandleFunc("/rest/system/reset", s.postSystemReset) // [folder]
postRestMux.HandleFunc("/rest/system/restart", s.postSystemRestart) // -
postRestMux.HandleFunc("/rest/system/shutdown", s.postSystemShutdown) // -
postRestMux.HandleFunc("/rest/system/upgrade", s.postSystemUpgrade) // -
postRestMux.HandleFunc("/rest/system/pause", s.makeDevicePauseHandler(true)) // device
postRestMux.HandleFunc("/rest/system/resume", s.makeDevicePauseHandler(false)) // device
postRestMux.HandleFunc("/rest/system/debug", s.postSystemDebug) // [enable] [disable]
// Debug endpoints, not for general use
getRestMux.HandleFunc("/rest/debug/peerCompletion", s.getPeerCompletion)
getRestMux.HandleFunc("/rest/debug/httpmetrics", s.getSystemHTTPMetrics)
debugMux := http.NewServeMux()
debugMux.HandleFunc("/rest/debug/peerCompletion", s.getPeerCompletion)
debugMux.HandleFunc("/rest/debug/httpmetrics", s.getSystemHTTPMetrics)
debugMux.HandleFunc("/rest/debug/cpuprof", s.getCPUProf) // duration
debugMux.HandleFunc("/rest/debug/heapprof", s.getHeapProf)
getRestMux.Handle("/rest/debug/", s.whenDebugging(debugMux))
// A handler that splits requests between the two above and disables
// caching
@@ -296,16 +291,10 @@ func (s *apiService) Serve() {
mux.HandleFunc("/qr/", s.getQR)
// Serve compiled in assets unless an asset directory was set (for development)
assets := &embeddedStatic{
theme: s.cfg.GUI().Theme,
lastModified: time.Now(),
mut: sync.NewRWMutex(),
assetDir: s.assetDir,
assets: auto.Assets(),
}
mux.Handle("/", assets)
mux.Handle("/", s.statics)
s.cfg.Subscribe(assets)
// Handle the special meta.js path
mux.HandleFunc("/meta.js", s.getJSMetadata)
guiCfg := s.cfg.GUI()
@@ -329,11 +318,18 @@ func (s *apiService) Serve() {
// Add the CORS handling
handler = corsMiddleware(handler)
if addressIsLocalhost(guiCfg.Address()) && !guiCfg.InsecureSkipHostCheck {
// Verify source host
handler = localhostMiddleware(handler)
}
handler = debugMiddleware(handler)
srv := http.Server{
Handler: handler,
ReadTimeout: 10 * time.Second,
Handler: handler,
// ReadTimeout must be longer than SyncthingController $scope.refresh
// interval to avoid HTTP keepalive/GUI refresh race.
ReadTimeout: 15 * time.Second,
}
s.fss = newFolderSummaryService(s.cfg, s.model)
@@ -344,33 +340,41 @@ func (s *apiService) Serve() {
l.Infoln("Access the GUI via the following URL:", guiCfg.URL())
if s.started != nil {
// only set when run by the tests
close(s.started)
s.started <- listener.Addr().String()
}
err := srv.Serve(listener)
// The return could be due to an intentional close. Wait for the stop
// signal before returning. IF there is no stop signal within a second, we
// assume it was unintentional and log the error before retrying.
// Indicate successful initial startup, to ourselves and to interested
// listeners (i.e. the thing that starts the browser).
select {
case <-s.startedOnce:
default:
close(s.startedOnce)
}
// Serve in the background
serveError := make(chan error, 1)
go func() {
serveError <- srv.Serve(listener)
}()
// Wait for stop, restart or error signals
select {
case <-s.stop:
// Shutting down permanently
l.Debugln("shutting down (stop)")
case <-s.configChanged:
case <-time.After(time.Second):
l.Warnln("API:", err)
// Soft restart due to configuration change
l.Debugln("restarting (config changed)")
case err := <-serveError:
// Restart due to listen/serve failure
l.Warnln("GUI/API:", err, "(restarting)")
}
}
func (s *apiService) Stop() {
s.listenerMut.Lock()
listener := s.listener
s.listenerMut.Unlock()
close(s.stop)
// listener may be nil here if we've had a config change to a broken
// configuration, in which case we shouldn't try to close it.
if listener != nil {
listener.Close()
}
}
func (s *apiService) String() string {
@@ -378,35 +382,23 @@ func (s *apiService) String() string {
}
func (s *apiService) VerifyConfiguration(from, to config.Configuration) error {
return nil
_, err := net.ResolveTCPAddr("tcp", to.GUI.Address())
return err
}
func (s *apiService) CommitConfiguration(from, to config.Configuration) bool {
// No action required when this changes, so mask the fact that it changed at all.
from.GUI.Debugging = to.GUI.Debugging
if to.GUI == from.GUI {
return true
}
// Order here is important. We must close the listener to stop Serve(). We
// must create a new listener before Serve() starts again. We can't create
// a new listener on the same port before the previous listener is closed.
// To assist in this little dance the Serve() method will wait for a
// signal on the configChanged channel after the listener has closed.
s.listenerMut.Lock()
defer s.listenerMut.Unlock()
s.listener.Close()
var err error
s.listener, err = s.getListener(to.GUI)
if err != nil {
// Ideally this should be a verification error, but we check it by
// creating a new listener which requires shutting down the previous
// one first, which is too destructive for the VerifyConfiguration
// method.
return false
if to.GUI.Theme != from.GUI.Theme {
s.statics.setTheme(to.GUI.Theme)
}
// Tell the serve loop to restart
s.configChanged <- struct{}{}
return true
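The comments above describe the close-then-recreate dance between CommitConfiguration and Serve. A stripped-down, standalone sketch of that pattern (the names are illustrative and not the actual apiService):

package main

import (
    "log"
    "net"
    "net/http"
    "time"
)

type restartableServer struct {
    stop          chan struct{} // closed for permanent shutdown
    configChanged chan struct{} // signalled when the listen address changes
    addr          func() string // returns the currently configured address
}

func (s *restartableServer) Serve() {
    listener, err := net.Listen("tcp", s.addr())
    if err != nil {
        log.Println("listen:", err)
        return
    }
    defer listener.Close()

    // Serve in the background so we can select on stop/restart signals.
    serveError := make(chan error, 1)
    go func() {
        serveError <- (&http.Server{ReadTimeout: 15 * time.Second}).Serve(listener)
    }()

    select {
    case <-s.stop:
        // Permanent shutdown; the supervisor will not restart us.
    case <-s.configChanged:
        // The config committer closed or invalidated the listener; return so
        // the supervisor calls Serve again with the new address.
    case err := <-serveError:
        log.Println("serve:", err, "(restarting)")
    }
}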
@@ -463,10 +455,12 @@ func corsMiddleware(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Process OPTIONS requests
if r.Method == "OPTIONS" {
// Add a generous access-control-allow-origin header for CORS requests
w.Header().Add("Access-Control-Allow-Origin", "*")
// Only GET/POST Methods are supported
w.Header().Set("Access-Control-Allow-Methods", "GET, POST")
// Only this custom header can be set
w.Header().Set("Access-Control-Allow-Headers", "X-API-Key")
// Only these headers can be set
w.Header().Set("Access-Control-Allow-Headers", "Content-Type, X-API-Key")
// The request is meant to be cached 10 minutes
w.Header().Set("Access-Control-Max-Age", "600")
@@ -521,10 +515,40 @@ func withDetailsMiddleware(id protocol.DeviceID, h http.Handler) http.Handler {
})
}
func localhostMiddleware(h http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if addressIsLocalhost(r.Host) {
h.ServeHTTP(w, r)
return
}
http.Error(w, "Host check error", http.StatusForbidden)
})
}
func (s *apiService) whenDebugging(h http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if s.cfg.GUI().Debugging {
h.ServeHTTP(w, r)
return
}
http.Error(w, "Debugging disabled", http.StatusBadRequest)
})
}
func (s *apiService) restPing(w http.ResponseWriter, r *http.Request) {
sendJSON(w, map[string]string{"ping": "pong"})
}
func (s *apiService) getJSMetadata(w http.ResponseWriter, r *http.Request) {
meta, _ := json.Marshal(map[string]string{
"deviceID": s.id.String(),
})
w.Header().Set("Content-Type", "application/javascript")
fmt.Fprintf(w, "var metadata = %s;\n", meta)
}
func (s *apiService) getSystemVersion(w http.ResponseWriter, r *http.Request) {
sendJSON(w, map[string]string{
"version": Version,
@@ -589,8 +613,12 @@ func (s *apiService) getDBCompletion(w http.ResponseWriter, r *http.Request) {
return
}
sendJSON(w, map[string]float64{
"completion": s.model.Completion(device, folder),
comp := s.model.Completion(device, folder)
sendJSON(w, map[string]interface{}{
"completion": comp.CompletionPct,
"needBytes": comp.NeedBytes,
"globalBytes": comp.GlobalBytes,
"needDeletes": comp.NeedDeletes,
})
}
@@ -603,18 +631,18 @@ func (s *apiService) getDBStatus(w http.ResponseWriter, r *http.Request) {
func folderSummary(cfg configIntf, m modelIntf, folder string) map[string]interface{} {
var res = make(map[string]interface{})
res["invalid"] = cfg.Folders()[folder].Invalid
res["invalid"] = "" // Deprecated, retains external API for now
globalFiles, globalDeleted, globalBytes := m.GlobalSize(folder)
res["globalFiles"], res["globalDeleted"], res["globalBytes"] = globalFiles, globalDeleted, globalBytes
global := m.GlobalSize(folder)
res["globalFiles"], res["globalDirectories"], res["globalSymlinks"], res["globalDeleted"], res["globalBytes"] = global.Files, global.Directories, global.Symlinks, global.Deleted, global.Bytes
localFiles, localDeleted, localBytes := m.LocalSize(folder)
res["localFiles"], res["localDeleted"], res["localBytes"] = localFiles, localDeleted, localBytes
local := m.LocalSize(folder)
res["localFiles"], res["localDirectories"], res["localSymlinks"], res["localDeleted"], res["localBytes"] = local.Files, local.Directories, local.Symlinks, local.Deleted, local.Bytes
needFiles, needBytes := m.NeedSize(folder)
res["needFiles"], res["needBytes"] = needFiles, needBytes
need := m.NeedSize(folder)
res["needFiles"], res["needDirectories"], res["needSymlinks"], res["needDeletes"], res["needBytes"] = need.Files, need.Directories, need.Symlinks, need.Deleted, need.Bytes
res["inSyncFiles"], res["inSyncBytes"] = globalFiles-needFiles, globalBytes-needBytes
res["inSyncFiles"], res["inSyncBytes"] = global.Files-need.Files, global.Bytes-need.Bytes
var err error
res["state"], res["stateChanged"], err = m.State(folder)
@@ -622,10 +650,11 @@ func folderSummary(cfg configIntf, m modelIntf, folder string) map[string]interf
res["error"] = err.Error()
}
lv, _ := m.CurrentLocalVersion(folder)
rv, _ := m.RemoteLocalVersion(folder)
ourSeq, _ := m.CurrentSequence(folder)
remoteSeq, _ := m.RemoteSequence(folder)
res["version"] = lv + rv
res["version"] = ourSeq + remoteSeq // legacy
res["sequence"] = ourSeq + remoteSeq // new name
ignorePatterns, _, _ := m.GetIgnores(folder)
res["ignorePatterns"] = false
@@ -706,7 +735,7 @@ func (s *apiService) getDBFile(w http.ResponseWriter, r *http.Request) {
}
func (s *apiService) getSystemConfig(w http.ResponseWriter, r *http.Request) {
sendJSON(w, s.cfg.Raw())
sendJSON(w, s.cfg.RawCopy())
}
func (s *apiService) postSystemConfig(w http.ResponseWriter, r *http.Request) {
@@ -716,7 +745,7 @@ func (s *apiService) postSystemConfig(w http.ResponseWriter, r *http.Request) {
to, err := config.ReadJSON(r.Body, myID)
r.Body.Close()
if err != nil {
l.Warnln("decoding posted config:", err)
l.Warnln("Decoding posted config:", err)
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
@@ -739,7 +768,7 @@ func (s *apiService) postSystemConfig(w http.ResponseWriter, r *http.Request) {
if curAcc := s.cfg.Options().URAccepted; to.Options.URAccepted > curAcc {
// UR was enabled
to.Options.URAccepted = usageReportVersion
to.Options.URUniqueID = util.RandomString(8)
to.Options.URUniqueID = rand.String(8)
} else if to.Options.URAccepted < curAcc {
// UR was disabled
to.Options.URAccepted = -1
@@ -748,13 +777,21 @@ func (s *apiService) postSystemConfig(w http.ResponseWriter, r *http.Request) {
// Activate and save
resp := s.cfg.Replace(to)
configInSync = !resp.RequiresRestart
s.cfg.Save()
if err := s.cfg.Replace(to); err != nil {
l.Warnln("Replacing config:", err)
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
if err := s.cfg.Save(); err != nil {
l.Warnln("Saving config:", err)
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
}
func (s *apiService) getSystemConfigInsync(w http.ResponseWriter, r *http.Request) {
sendJSON(w, map[string]bool{"configInSync": configInSync})
sendJSON(w, map[string]bool{"configInSync": !s.cfg.RequiresRestart()})
}
func (s *apiService) postSystemRestart(w http.ResponseWriter, r *http.Request) {
@@ -839,7 +876,6 @@ func (s *apiService) getSystemStatus(w http.ResponseWriter, r *http.Request) {
res["pathSeparator"] = string(filepath.Separator)
res["uptime"] = int(time.Since(startTime).Seconds())
res["startTime"] = startTime
res["themes"] = s.themes
sendJSON(w, res)
}
@@ -919,6 +955,16 @@ func (s *apiService) getReport(w http.ResponseWriter, r *http.Request) {
sendJSON(w, reportData(s.cfg, s.model))
}
func (s *apiService) getRandomString(w http.ResponseWriter, r *http.Request) {
length := 32
if val, _ := strconv.Atoi(r.URL.Query().Get("length")); val > 0 {
length = val
}
str := rand.String(length)
sendJSON(w, map[string]string{"random": str})
}
func (s *apiService) getDBIgnores(w http.ResponseWriter, r *http.Request) {
qs := r.URL.Query()
@@ -960,15 +1006,22 @@ func (s *apiService) postDBIgnores(w http.ResponseWriter, r *http.Request) {
s.getDBIgnores(w, r)
}
func (s *apiService) getEvents(w http.ResponseWriter, r *http.Request) {
func (s *apiService) getIndexEvents(w http.ResponseWriter, r *http.Request) {
s.fss.gotEventRequest()
s.getEvents(w, r, s.eventSub)
}
func (s *apiService) getDiskEvents(w http.ResponseWriter, r *http.Request) {
s.getEvents(w, r, s.diskEventSub)
}
func (s *apiService) getEvents(w http.ResponseWriter, r *http.Request, eventSub events.BufferedSubscription) {
qs := r.URL.Query()
sinceStr := qs.Get("since")
limitStr := qs.Get("limit")
since, _ := strconv.Atoi(sinceStr)
limit, _ := strconv.Atoi(limitStr)
s.fss.gotEventRequest()
// Flush before blocking, to indicate that we've received the request and
// that it should not be retried. Must set Content-Type header before
// flushing.
@@ -976,7 +1029,7 @@ func (s *apiService) getEvents(w http.ResponseWriter, r *http.Request) {
f := w.(http.Flusher)
f.Flush()
evs := s.eventSub.Since(since, nil)
evs := eventSub.Since(since, nil)
if 0 < limit && limit < len(evs) {
evs = evs[len(evs)-limit:]
}
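The flush-before-blocking behaviour noted in the comment above relies on the http.Flusher interface; a minimal, standalone long-poll handler sketch (illustrative only, not the actual event handler):

package main

import (
    "fmt"
    "net/http"
    "time"
)

func longPoll(w http.ResponseWriter, r *http.Request) {
    // Set the Content-Type and flush the headers before blocking, so the
    // client knows the request was received and will not retry it.
    w.Header().Set("Content-Type", "application/json; charset=utf-8")
    if f, ok := w.(http.Flusher); ok {
        f.Flush()
    }

    // ... block here waiting for events; a sleep stands in for that wait ...
    time.Sleep(time.Second)
    fmt.Fprint(w, `{"events": []}`)
}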
@@ -1051,30 +1104,27 @@ func (s *apiService) postSystemUpgrade(w http.ResponseWriter, r *http.Request) {
}
}
func (s *apiService) postSystemPause(w http.ResponseWriter, r *http.Request) {
var qs = r.URL.Query()
var deviceStr = qs.Get("device")
func (s *apiService) makeDevicePauseHandler(paused bool) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
var qs = r.URL.Query()
var deviceStr = qs.Get("device")
device, err := protocol.DeviceIDFromString(deviceStr)
if err != nil {
http.Error(w, err.Error(), 500)
return
device, err := protocol.DeviceIDFromString(deviceStr)
if err != nil {
http.Error(w, err.Error(), 500)
return
}
cfg, ok := s.cfg.Devices()[device]
if !ok {
http.Error(w, "not found", http.StatusNotFound)
}
cfg.Paused = paused
if err := s.cfg.SetDevice(cfg); err != nil {
http.Error(w, err.Error(), 500)
}
}
s.model.PauseDevice(device)
}
func (s *apiService) postSystemResume(w http.ResponseWriter, r *http.Request) {
var qs = r.URL.Query()
var deviceStr = qs.Get("device")
device, err := protocol.DeviceIDFromString(deviceStr)
if err != nil {
http.Error(w, err.Error(), 500)
return
}
s.model.ResumeDevice(device)
}
func (s *apiService) postDBScan(w http.ResponseWriter, r *http.Request) {
@@ -1088,7 +1138,7 @@ func (s *apiService) postDBScan(w http.ResponseWriter, r *http.Request) {
}
subs := qs["sub"]
err = s.model.ScanFolderSubs(folder, subs)
err = s.model.ScanFolderSubdirs(folder, subs)
if err != nil {
http.Error(w, err.Error(), 500)
return
@@ -1132,7 +1182,7 @@ func (s *apiService) getPeerCompletion(w http.ResponseWriter, r *http.Request) {
for _, device := range folder.DeviceIDs() {
deviceStr := device.String()
if s.model.ConnectedTo(device) {
tot[deviceStr] += s.model.Completion(device, folder.ID)
tot[deviceStr] += s.model.Completion(device, folder.ID).CompletionPct
} else {
tot[deviceStr] = 0
}
@@ -1151,150 +1201,55 @@ func (s *apiService) getPeerCompletion(w http.ResponseWriter, r *http.Request) {
func (s *apiService) getSystemBrowse(w http.ResponseWriter, r *http.Request) {
qs := r.URL.Query()
current := qs.Get("current")
if current == "" {
if roots, err := osutil.GetFilesystemRoots(); err == nil {
sendJSON(w, roots)
} else {
http.Error(w, err.Error(), 500)
}
return
}
search, _ := osutil.ExpandTilde(current)
pathSeparator := string(os.PathSeparator)
if strings.HasSuffix(current, pathSeparator) && !strings.HasSuffix(search, pathSeparator) {
search = search + pathSeparator
}
subdirectories, _ := osutil.Glob(search + "*")
ret := make([]string, 0, 10)
ret := make([]string, 0, len(subdirectories))
for _, subdirectory := range subdirectories {
info, err := os.Stat(subdirectory)
if err == nil && info.IsDir() {
ret = append(ret, subdirectory+pathSeparator)
if len(ret) > 9 {
break
}
}
}
sendJSON(w, ret)
}
type embeddedStatic struct {
theme string
lastModified time.Time
mut sync.RWMutex
assetDir string
assets map[string][]byte
func (s *apiService) getCPUProf(w http.ResponseWriter, r *http.Request) {
duration, err := time.ParseDuration(r.FormValue("duration"))
if err != nil {
duration = 30 * time.Second
}
filename := fmt.Sprintf("syncthing-cpu-%s-%s-%s-%s.pprof", runtime.GOOS, runtime.GOARCH, Version, time.Now().Format("150405")) // hhmmss
w.Header().Set("Content-Type", "application/octet-stream")
w.Header().Set("Content-Disposition", "attachment; filename="+filename)
pprof.StartCPUProfile(w)
time.Sleep(duration)
pprof.StopCPUProfile()
}
func (s embeddedStatic) ServeHTTP(w http.ResponseWriter, r *http.Request) {
file := r.URL.Path
func (s *apiService) getHeapProf(w http.ResponseWriter, r *http.Request) {
filename := fmt.Sprintf("syncthing-heap-%s-%s-%s-%s.pprof", runtime.GOOS, runtime.GOARCH, Version, time.Now().Format("150405")) // hhmmss
if file[0] == '/' {
file = file[1:]
}
w.Header().Set("Content-Type", "application/octet-stream")
w.Header().Set("Content-Disposition", "attachment; filename="+filename)
if len(file) == 0 {
file = "index.html"
}
s.mut.RLock()
theme := s.theme
modified := s.lastModified
s.mut.RUnlock()
// Check for an override for the current theme.
if s.assetDir != "" {
p := filepath.Join(s.assetDir, s.theme, filepath.FromSlash(file))
if _, err := os.Stat(p); err == nil {
http.ServeFile(w, r, p)
return
}
}
// Check for a compiled in asset for the current theme.
bs, ok := s.assets[theme+"/"+file]
if !ok {
// Check for an overridden default asset.
if s.assetDir != "" {
p := filepath.Join(s.assetDir, config.DefaultTheme, filepath.FromSlash(file))
if _, err := os.Stat(p); err == nil {
http.ServeFile(w, r, p)
return
}
}
// Check for a compiled in default asset.
bs, ok = s.assets[config.DefaultTheme+"/"+file]
if !ok {
http.NotFound(w, r)
return
}
}
if modifiedSince, err := time.Parse(r.Header.Get("If-Modified-Since"), http.TimeFormat); err == nil && modified.Before(modifiedSince) {
w.WriteHeader(http.StatusNotModified)
return
}
mtype := s.mimeTypeForFile(file)
if len(mtype) != 0 {
w.Header().Set("Content-Type", mtype)
}
if strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
w.Header().Set("Content-Encoding", "gzip")
} else {
// ungzip if browser not send gzip accepted header
var gr *gzip.Reader
gr, _ = gzip.NewReader(bytes.NewReader(bs))
bs, _ = ioutil.ReadAll(gr)
gr.Close()
}
w.Header().Set("Content-Length", fmt.Sprintf("%d", len(bs)))
w.Header().Set("Last-Modified", modified.Format(http.TimeFormat))
w.Header().Set("Cache-Control", "public")
w.Write(bs)
}
func (s embeddedStatic) mimeTypeForFile(file string) string {
// We use a built in table of the common types since the system
// TypeByExtension might be unreliable. But if we don't know, we delegate
// to the system.
ext := filepath.Ext(file)
switch ext {
case ".htm", ".html":
return "text/html"
case ".css":
return "text/css"
case ".js":
return "application/javascript"
case ".json":
return "application/json"
case ".png":
return "image/png"
case ".ttf":
return "application/x-font-ttf"
case ".woff":
return "application/x-font-woff"
case ".svg":
return "image/svg+xml"
default:
return mime.TypeByExtension(ext)
}
}
// VerifyConfiguration implements the config.Committer interface
func (s *embeddedStatic) VerifyConfiguration(from, to config.Configuration) error {
return nil
}
// CommitConfiguration implements the config.Committer interface
func (s *embeddedStatic) CommitConfiguration(from, to config.Configuration) bool {
s.mut.Lock()
if s.theme != to.GUI.Theme {
s.theme = to.GUI.Theme
s.lastModified = time.Now()
}
s.mut.Unlock()
return true
}
func (s *embeddedStatic) String() string {
return fmt.Sprintf("embeddedStatic@%p", s)
runtime.GC()
pprof.WriteHeapProfile(w)
}
func (s *apiService) toNeedSlice(fs []db.FileInfoTruncated) []jsonDBFileInfo {
@@ -1311,13 +1266,17 @@ type jsonFileInfo protocol.FileInfo
func (f jsonFileInfo) MarshalJSON() ([]byte, error) {
return json.Marshal(map[string]interface{}{
"name": f.Name,
"size": protocol.FileInfo(f).Size(),
"flags": fmt.Sprintf("%#o", f.Flags),
"modified": time.Unix(f.Modified, 0),
"localVersion": f.LocalVersion,
"numBlocks": len(f.Blocks),
"version": jsonVersionVector(f.Version),
"name": f.Name,
"type": f.Type,
"size": f.Size,
"permissions": fmt.Sprintf("%#o", f.Permissions),
"deleted": f.Deleted,
"invalid": f.Invalid,
"noPermissions": f.NoPermissions,
"modified": protocol.FileInfo(f).ModTime(),
"sequence": f.Sequence,
"numBlocks": len(f.Blocks),
"version": jsonVersionVector(f.Version),
})
}
@@ -1325,20 +1284,23 @@ type jsonDBFileInfo db.FileInfoTruncated
func (f jsonDBFileInfo) MarshalJSON() ([]byte, error) {
return json.Marshal(map[string]interface{}{
"name": f.Name,
"size": db.FileInfoTruncated(f).Size(),
"flags": fmt.Sprintf("%#o", f.Flags),
"modified": time.Unix(f.Modified, 0),
"localVersion": f.LocalVersion,
"version": jsonVersionVector(f.Version),
"name": f.Name,
"type": f.Type,
"size": f.Size,
"permissions": fmt.Sprintf("%#o", f.Permissions),
"deleted": f.Deleted,
"invalid": f.Invalid,
"noPermissions": f.NoPermissions,
"modified": db.FileInfoTruncated(f).ModTime(),
"sequence": f.Sequence,
})
}
type jsonVersionVector protocol.Vector
func (v jsonVersionVector) MarshalJSON() ([]byte, error) {
res := make([]string, len(v))
for i, c := range v {
res := make([]string, len(v.Counters))
for i, c := range v.Counters {
res[i] = fmt.Sprintf("%v:%d", c.ID, c.Value)
}
return json.Marshal(res)
@@ -1366,3 +1328,17 @@ func dirNames(dir string) []string {
sort.Strings(dirs)
return dirs
}
func addressIsLocalhost(addr string) bool {
host, _, err := net.SplitHostPort(addr)
if err != nil {
// There was no port, so we assume the address was just a hostname
host = addr
}
switch strings.ToLower(host) {
case "127.0.0.1", "::1", "localhost":
return true
default:
return false
}
}
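A quick, hypothetical check of the host gating above using net/http/httptest (same package assumed; the httptest import and the example function are additions for this sketch):

func exampleHostCheck() int {
    inner := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.WriteHeader(http.StatusOK)
    })
    guarded := localhostMiddleware(inner)

    req := httptest.NewRequest("GET", "http://localhost:8384/rest/system/status", nil)
    req.Host = "evil.example.com" // a spoofed Host header
    rec := httptest.NewRecorder()
    guarded.ServeHTTP(rec, req)

    // addressIsLocalhost("evil.example.com") is false, so the middleware
    // answers 403 Forbidden without reaching the inner handler.
    return rec.Code
}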

@@ -9,15 +9,14 @@ package main
import (
"bytes"
"encoding/base64"
"math/rand"
"net/http"
"strings"
"time"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/rand"
"github.com/syncthing/syncthing/lib/sync"
"github.com/syncthing/syncthing/lib/util"
"golang.org/x/crypto/bcrypt"
)
@@ -114,7 +113,7 @@ func basicAuthAndSessionMiddleware(cookieName string, cfg config.GUIConfiguratio
return
passwordOK:
sessionid := util.RandomString(32)
sessionid := rand.String(32)
sessionsMut.Lock()
sessions[sessionid] = true
sessionsMut.Unlock()

@@ -15,8 +15,8 @@ import (
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/rand"
"github.com/syncthing/syncthing/lib/sync"
"github.com/syncthing/syncthing/lib/util"
)
// csrfTokens is a list of valid tokens. It is sorted so that the most
@@ -37,6 +37,16 @@ func csrfMiddleware(unique string, prefix string, cfg config.GUIConfiguration, n
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Allow requests carrying a valid API key
if cfg.IsValidAPIKey(r.Header.Get("X-API-Key")) {
// Set the access-control-allow-origin header for CORS requests
// since a valid API key has been provided
w.Header().Add("Access-Control-Allow-Origin", "*")
next.ServeHTTP(w, r)
return
}
if strings.HasPrefix(r.URL.Path, "/rest/debug") {
// Debugging functions are only available when explicitly
// enabled, and can be accessed without a CSRF token
next.ServeHTTP(w, r)
return
}
@@ -87,7 +97,7 @@ func validCsrfToken(token string) bool {
}
func newCsrfToken() string {
token := util.RandomString(32)
token := rand.String(32)
csrfMut.Lock()
csrfTokens = append([]string{token}, csrfTokens...)
@@ -106,7 +116,7 @@ func saveCsrfTokens() {
// nothing relevant we can do about them anyway...
name := locations[locCsrfTokens]
f, err := osutil.CreateAtomic(name, 0600)
f, err := osutil.CreateAtomic(name)
if err != nil {
return
}

@@ -78,7 +78,7 @@ func trackCPUUsage() {
var prevTime = time.Now().UnixNano()
var rusage prusage_t
var pid = os.Getpid()
for _ = range time.NewTicker(time.Second).C {
for range time.NewTicker(time.Second).C {
err := solarisPrusage(pid, &rusage)
if err != nil {
l.Warnln("getting prusage:", err)

@@ -0,0 +1,176 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at http://mozilla.org/MPL/2.0/.
package main
import (
"bytes"
"compress/gzip"
"fmt"
"io/ioutil"
"mime"
"net/http"
"os"
"path/filepath"
"strings"
"github.com/syncthing/syncthing/lib/auto"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/sync"
)
type staticsServer struct {
assetDir string
assets map[string][]byte
availableThemes []string
mut sync.RWMutex
theme string
}
func newStaticsServer(theme, assetDir string) *staticsServer {
s := &staticsServer{
assetDir: assetDir,
assets: auto.Assets(),
mut: sync.NewRWMutex(),
theme: theme,
}
seen := make(map[string]struct{})
// Load themes from compiled in assets.
for file := range auto.Assets() {
theme := strings.Split(file, "/")[0]
if _, ok := seen[theme]; !ok {
seen[theme] = struct{}{}
s.availableThemes = append(s.availableThemes, theme)
}
}
if assetDir != "" {
// Load any extra themes from the asset override dir.
for _, dir := range dirNames(assetDir) {
if _, ok := seen[dir]; !ok {
seen[dir] = struct{}{}
s.availableThemes = append(s.availableThemes, dir)
}
}
}
return s
}
func (s *staticsServer) ServeHTTP(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/themes.json":
s.serveThemes(w, r)
default:
s.serveAsset(w, r)
}
}
func (s *staticsServer) serveAsset(w http.ResponseWriter, r *http.Request) {
file := r.URL.Path
if file[0] == '/' {
file = file[1:]
}
if len(file) == 0 {
file = "index.html"
}
s.mut.RLock()
theme := s.theme
s.mut.RUnlock()
// Check for an override for the current theme.
if s.assetDir != "" {
p := filepath.Join(s.assetDir, theme, filepath.FromSlash(file))
if _, err := os.Stat(p); err == nil {
http.ServeFile(w, r, p)
return
}
}
// Check for a compiled in asset for the current theme.
bs, ok := s.assets[theme+"/"+file]
if !ok {
// Check for an overridden default asset.
if s.assetDir != "" {
p := filepath.Join(s.assetDir, config.DefaultTheme, filepath.FromSlash(file))
if _, err := os.Stat(p); err == nil {
http.ServeFile(w, r, p)
return
}
}
// Check for a compiled in default asset.
bs, ok = s.assets[config.DefaultTheme+"/"+file]
if !ok {
http.NotFound(w, r)
return
}
}
mtype := s.mimeTypeForFile(file)
if len(mtype) != 0 {
w.Header().Set("Content-Type", mtype)
}
if strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
w.Header().Set("Content-Encoding", "gzip")
} else {
// The asset is stored gzip-compressed; decompress it since the browser did
// not send an Accept-Encoding: gzip header.
var gr *gzip.Reader
gr, _ = gzip.NewReader(bytes.NewReader(bs))
bs, _ = ioutil.ReadAll(gr)
gr.Close()
}
w.Header().Set("Content-Length", fmt.Sprintf("%d", len(bs)))
w.Write(bs)
}
func (s *staticsServer) serveThemes(w http.ResponseWriter, r *http.Request) {
sendJSON(w, map[string][]string{
"themes": s.availableThemes,
})
}
func (s *staticsServer) mimeTypeForFile(file string) string {
// We use a built in table of the common types since the system
// TypeByExtension might be unreliable. But if we don't know, we delegate
// to the system.
ext := filepath.Ext(file)
switch ext {
case ".htm", ".html":
return "text/html"
case ".css":
return "text/css"
case ".js":
return "application/javascript"
case ".json":
return "application/json"
case ".png":
return "image/png"
case ".ttf":
return "application/x-font-ttf"
case ".woff":
return "application/x-font-woff"
case ".svg":
return "image/svg+xml"
default:
return mime.TypeByExtension(ext)
}
}
func (s *staticsServer) setTheme(theme string) {
s.mut.Lock()
s.theme = theme
s.mut.Unlock()
}
func (s *staticsServer) String() string {
return fmt.Sprintf("staticsServer@%p", s)
}
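A brief, hypothetical snippet exercising the theme fallback order implemented above (same package assumed; httptest is an extra import for this sketch):

func exampleStatics() int {
    // No asset override directory, so only compiled-in assets are served.
    s := newStaticsServer(config.DefaultTheme, "")

    rec := httptest.NewRecorder()
    req := httptest.NewRequest("GET", "/index.html", nil)
    s.ServeHTTP(rec, req)

    // Lookup order: override dir for the current theme, compiled-in current
    // theme, override dir for the default theme, compiled-in default theme,
    // then 404. Switching themes takes effect for subsequent requests:
    s.setTheme("dark") // "dark" is an illustrative theme name
    return rec.Code
}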

@@ -9,11 +9,14 @@ package main
import (
"bytes"
"compress/gzip"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"net"
"net/http"
"net/http/httptest"
"strconv"
"strings"
"testing"
"time"
@@ -67,11 +70,8 @@ func TestStopAfterBrokenConfig(t *testing.T) {
}
w := config.Wrap("/dev/null", cfg)
srv, err := newAPIService(protocol.LocalDeviceID, w, "../../test/h1/https-cert.pem", "../../test/h1/https-key.pem", "", nil, nil, nil, nil, nil, nil)
if err != nil {
t.Fatal(err)
}
srv.started = make(chan struct{})
srv := newAPIService(protocol.LocalDeviceID, w, "../../test/h1/https-cert.pem", "../../test/h1/https-key.pem", "", nil, nil, nil, nil, nil, nil, nil)
srv.started = make(chan string)
sup := suture.NewSimple("test")
sup.Add(srv)
@@ -89,8 +89,8 @@ func TestStopAfterBrokenConfig(t *testing.T) {
RawUseTLS: false,
},
}
if srv.CommitConfiguration(cfg, newCfg) {
t.Fatal("Config commit should have failed")
if err := srv.VerifyConfiguration(cfg, newCfg); err == nil {
t.Fatal("Verify config should have failed")
}
// Nonetheless, it should be fine to Stop() it without panic.
@@ -118,7 +118,7 @@ func TestAssetsDir(t *testing.T) {
gw.Close()
foo := buf.Bytes()
e := embeddedStatic{
e := &staticsServer{
theme: "foo",
mut: sync.NewRWMutex(),
assetDir: "testdata",
@@ -461,7 +461,6 @@ func TestHTTPLogin(t *testing.T) {
if resp.StatusCode != http.StatusOK {
t.Errorf("Unexpected non-200 return code %d for authed request (ISO-8859-1)", resp.StatusCode)
}
}
func startHTTP(cfg *mockedConfig) (string, error) {
@@ -470,34 +469,36 @@ func startHTTP(cfg *mockedConfig) (string, error) {
httpsKeyFile := "../../test/h1/https-key.pem"
assetDir := "../../gui"
eventSub := new(mockedEventSub)
diskEventSub := new(mockedEventSub)
discoverer := new(mockedCachingMux)
connections := new(mockedConnections)
errorLog := new(mockedLoggerRecorder)
systemLog := new(mockedLoggerRecorder)
addrChan := make(chan string)
// Instantiate the API service
svc, err := newAPIService(protocol.LocalDeviceID, cfg, httpsCertFile, httpsKeyFile, assetDir, model,
eventSub, discoverer, connections, errorLog, systemLog)
if err != nil {
return "", err
}
// Make sure the API service is listening, and get the URL to use.
addr := svc.listener.Addr()
if addr == nil {
return "", fmt.Errorf("Nil listening address from API service")
}
tcpAddr, err := net.ResolveTCPAddr("tcp", addr.String())
if err != nil {
return "", fmt.Errorf("Weird address from API service: %v", err)
}
baseURL := fmt.Sprintf("http://127.0.0.1:%d", tcpAddr.Port)
svc := newAPIService(protocol.LocalDeviceID, cfg, httpsCertFile, httpsKeyFile, assetDir, model,
eventSub, diskEventSub, discoverer, connections, errorLog, systemLog)
svc.started = addrChan
// Actually start the API service
supervisor := suture.NewSimple("API test")
supervisor.Add(svc)
supervisor.ServeBackground()
// Make sure the API service is listening, and get the URL to use.
addr := <-addrChan
tcpAddr, err := net.ResolveTCPAddr("tcp", addr)
if err != nil {
return "", fmt.Errorf("Weird address from API service: %v", err)
}
host, _, _ := net.SplitHostPort(cfg.gui.RawAddress)
if host == "" || host == "0.0.0.0" {
host = "127.0.0.1"
}
baseURL := fmt.Sprintf("http://%s", net.JoinHostPort(host, strconv.Itoa(tcpAddr.Port)))
return baseURL, nil
}
@@ -506,6 +507,9 @@ func TestCSRFRequired(t *testing.T) {
cfg := new(mockedConfig)
cfg.gui.APIKey = testAPIKey
baseURL, err := startHTTP(cfg)
if err != nil {
t.Fatal("Unexpected error from getting base URL:", err)
}
cli := &http.Client{
Timeout: time.Second,
@@ -570,3 +574,353 @@ func TestCSRFRequired(t *testing.T) {
t.Fatal("Getting /rest/system/config with API key should succeed, not", resp.Status)
}
}
func TestRandomString(t *testing.T) {
const testAPIKey = "foobarbaz"
cfg := new(mockedConfig)
cfg.gui.APIKey = testAPIKey
baseURL, err := startHTTP(cfg)
if err != nil {
t.Fatal(err)
}
cli := &http.Client{
Timeout: time.Second,
}
// The default should be to return a 32 character random string
for _, url := range []string{"/rest/svc/random/string", "/rest/svc/random/string?length=-1", "/rest/svc/random/string?length=yo"} {
req, _ := http.NewRequest("GET", baseURL+url, nil)
req.Header.Set("X-API-Key", testAPIKey)
resp, err := cli.Do(req)
if err != nil {
t.Fatal(err)
}
var res map[string]string
if err := json.NewDecoder(resp.Body).Decode(&res); err != nil {
t.Fatal(err)
}
if len(res["random"]) != 32 {
t.Errorf("Expected 32 random characters, got %q of length %d", res["random"], len(res["random"]))
}
}
// We can ask for a different length if we like
req, _ := http.NewRequest("GET", baseURL+"/rest/svc/random/string?length=27", nil)
req.Header.Set("X-API-Key", testAPIKey)
resp, err := cli.Do(req)
if err != nil {
t.Fatal(err)
}
var res map[string]string
if err := json.NewDecoder(resp.Body).Decode(&res); err != nil {
t.Fatal(err)
}
if len(res["random"]) != 27 {
t.Errorf("Expected 27 random characters, got %q of length %d", res["random"], len(res["random"]))
}
}
func TestConfigPostOK(t *testing.T) {
cfg := bytes.NewBuffer([]byte(`{
"version": 15,
"folders": [
{"id": "foo"}
]
}`))
resp, err := testConfigPost(cfg)
if err != nil {
t.Fatal(err)
}
if resp.StatusCode != http.StatusOK {
t.Error("Expected 200 OK, not", resp.Status)
}
}
func TestConfigPostDupFolder(t *testing.T) {
cfg := bytes.NewBuffer([]byte(`{
"version": 15,
"folders": [
{"id": "foo"},
{"id": "foo"}
]
}`))
resp, err := testConfigPost(cfg)
if err != nil {
t.Fatal(err)
}
if resp.StatusCode != http.StatusBadRequest {
t.Error("Expected 400 Bad Request, not", resp.Status)
}
}
func testConfigPost(data io.Reader) (*http.Response, error) {
const testAPIKey = "foobarbaz"
cfg := new(mockedConfig)
cfg.gui.APIKey = testAPIKey
baseURL, err := startHTTP(cfg)
if err != nil {
return nil, err
}
cli := &http.Client{
Timeout: time.Second,
}
req, _ := http.NewRequest("POST", baseURL+"/rest/system/config", data)
req.Header.Set("X-API-Key", testAPIKey)
return cli.Do(req)
}
func TestHostCheck(t *testing.T) {
// An API service bound to localhost should reject non-localhost Host headers
cfg := new(mockedConfig)
cfg.gui.RawAddress = "127.0.0.1:0"
baseURL, err := startHTTP(cfg)
if err != nil {
t.Fatal(err)
}
// A normal HTTP get to the localhost-bound service should succeed
resp, err := http.Get(baseURL)
if err != nil {
t.Fatal(err)
}
resp.Body.Close()
if resp.StatusCode != http.StatusOK {
t.Error("Regular HTTP get: expected 200 OK, not", resp.Status)
}
// A request with a suspicious Host header should fail
req, _ := http.NewRequest("GET", baseURL, nil)
req.Host = "example.com"
resp, err = http.DefaultClient.Do(req)
if err != nil {
t.Fatal(err)
}
resp.Body.Close()
if resp.StatusCode != http.StatusForbidden {
t.Error("Suspicious Host header: expected 403 Forbidden, not", resp.Status)
}
// A request with an explicit "localhost:8384" Host header should pass
req, _ = http.NewRequest("GET", baseURL, nil)
req.Host = "localhost:8384"
resp, err = http.DefaultClient.Do(req)
if err != nil {
t.Fatal(err)
}
resp.Body.Close()
if resp.StatusCode != http.StatusOK {
t.Error("Explicit localhost:8384: expected 200 OK, not", resp.Status)
}
// A request with an explicit "localhost" Host header (no port) should pass
req, _ = http.NewRequest("GET", baseURL, nil)
req.Host = "localhost"
resp, err = http.DefaultClient.Do(req)
if err != nil {
t.Fatal(err)
}
resp.Body.Close()
if resp.StatusCode != http.StatusOK {
t.Error("Explicit localhost: expected 200 OK, not", resp.Status)
}
// A server with InsecureSkipHostCheck set behaves differently
cfg = new(mockedConfig)
cfg.gui.RawAddress = "127.0.0.1:0"
cfg.gui.InsecureSkipHostCheck = true
baseURL, err = startHTTP(cfg)
if err != nil {
t.Fatal(err)
}
// A request with a suspicious Host header should be allowed
req, _ = http.NewRequest("GET", baseURL, nil)
req.Host = "example.com"
resp, err = http.DefaultClient.Do(req)
if err != nil {
t.Fatal(err)
}
resp.Body.Close()
if resp.StatusCode != http.StatusOK {
t.Error("Incorrect host header, check disabled: expected 200 OK, not", resp.Status)
}
// A server bound to a wildcard address also doesn't do the check
cfg = new(mockedConfig)
cfg.gui.RawAddress = "0.0.0.0:0"
cfg.gui.InsecureSkipHostCheck = true
baseURL, err = startHTTP(cfg)
if err != nil {
t.Fatal(err)
}
// A request with a suspicious Host header should be allowed
req, _ = http.NewRequest("GET", baseURL, nil)
req.Host = "example.com"
resp, err = http.DefaultClient.Do(req)
if err != nil {
t.Fatal(err)
}
resp.Body.Close()
if resp.StatusCode != http.StatusOK {
t.Error("Incorrect host header, wildcard bound: expected 200 OK, not", resp.Status)
}
// This should all work over IPv6 as well
cfg = new(mockedConfig)
cfg.gui.RawAddress = "[::1]:0"
baseURL, err = startHTTP(cfg)
if err != nil {
t.Fatal(err)
}
// A normal HTTP get to the localhost-bound service should succeed
resp, err = http.Get(baseURL)
if err != nil {
t.Fatal(err)
}
resp.Body.Close()
if resp.StatusCode != http.StatusOK {
t.Error("Regular HTTP get (IPv6): expected 200 OK, not", resp.Status)
}
// A request with a suspicious Host header should fail
req, _ = http.NewRequest("GET", baseURL, nil)
req.Host = "example.com"
resp, err = http.DefaultClient.Do(req)
if err != nil {
t.Fatal(err)
}
resp.Body.Close()
if resp.StatusCode != http.StatusForbidden {
t.Error("Suspicious Host header (IPv6): expected 403 Forbidden, not", resp.Status)
}
// A request with an explicit "localhost:8384" Host header should pass
req, _ = http.NewRequest("GET", baseURL, nil)
req.Host = "localhost:8384"
resp, err = http.DefaultClient.Do(req)
if err != nil {
t.Fatal(err)
}
resp.Body.Close()
if resp.StatusCode != http.StatusOK {
t.Error("Explicit localhost:8384 (IPv6): expected 200 OK, not", resp.Status)
}
}
func TestAddressIsLocalhost(t *testing.T) {
testcases := []struct {
address string
result bool
}{
// These are all valid localhost addresses
{"localhost", true},
{"LOCALHOST", true},
{"::1", true},
{"127.0.0.1", true},
{"localhost:8080", true},
{"LOCALHOST:8000", true},
{"[::1]:8080", true},
{"127.0.0.1:8080", true},
// These are all non-localhost addresses
{"example.com", false},
{"example.com:8080", false},
{"192.0.2.10", false},
{"192.0.2.10:8080", false},
{"0.0.0.0", false},
{"0.0.0.0:8080", false},
{"::", false},
{"[::]:8080", false},
{":8080", false},
}
for _, tc := range testcases {
result := addressIsLocalhost(tc.address)
if result != tc.result {
t.Errorf("addressIsLocalhost(%q)=%v, expected %v", tc.address, result, tc.result)
}
}
}
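
For reference, the table above fully specifies the expected behaviour: bare hostnames and host:port pairs both count, "localhost" is matched case-insensitively, and otherwise only loopback IPs qualify. A hypothetical re-implementation consistent with those expectations (a sketch only; addressIsLocalhostSketch is not the actual function, which may be written differently):

```
package main

import (
	"net"
	"strings"
)

// addressIsLocalhostSketch mirrors the expectations encoded in
// TestAddressIsLocalhost above: an optional port is stripped, "localhost"
// matches case-insensitively, and any other host must parse as a loopback IP.
func addressIsLocalhostSketch(addr string) bool {
	host, _, err := net.SplitHostPort(addr)
	if err != nil {
		// No port present (or not parsable as host:port); use the whole string.
		host = addr
	}
	if strings.EqualFold(host, "localhost") {
		return true
	}
	ip := net.ParseIP(host)
	return ip != nil && ip.IsLoopback()
}
```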
func TestAccessControlAllowOriginHeader(t *testing.T) {
const testAPIKey = "foobarbaz"
cfg := new(mockedConfig)
cfg.gui.APIKey = testAPIKey
baseURL, err := startHTTP(cfg)
if err != nil {
t.Fatal(err)
}
cli := &http.Client{
Timeout: time.Second,
}
req, _ := http.NewRequest("GET", baseURL+"/rest/system/status", nil)
req.Header.Set("X-API-Key", testAPIKey)
resp, err := cli.Do(req)
if err != nil {
t.Fatal(err)
}
resp.Body.Close()
if resp.StatusCode != http.StatusOK {
t.Fatal("GET on /rest/system/status should succeed, not", resp.Status)
}
if resp.Header.Get("Access-Control-Allow-Origin") != "*" {
t.Fatal("GET on /rest/system/status should return a 'Access-Control-Allow-Origin: *' header")
}
}
func TestOptionsRequest(t *testing.T) {
const testAPIKey = "foobarbaz"
cfg := new(mockedConfig)
cfg.gui.APIKey = testAPIKey
baseURL, err := startHTTP(cfg)
if err != nil {
t.Fatal(err)
}
cli := &http.Client{
Timeout: time.Second,
}
req, _ := http.NewRequest("OPTIONS", baseURL+"/rest/system/status", nil)
resp, err := cli.Do(req)
if err != nil {
t.Fatal(err)
}
resp.Body.Close()
if resp.StatusCode != http.StatusNoContent {
t.Fatal("OPTIONS on /rest/system/status should succeed, not", resp.Status)
}
if resp.Header.Get("Access-Control-Allow-Origin") != "*" {
t.Fatal("OPTIONS on /rest/system/status should return a 'Access-Control-Allow-Origin: *' header")
}
if resp.Header.Get("Access-Control-Allow-Methods") != "GET, POST" {
t.Fatal("OPTIONS on /rest/system/status should return a 'Access-Control-Allow-Methods: GET, POST' header")
}
if resp.Header.Get("Access-Control-Allow-Headers") != "Content-Type, X-API-Key" {
t.Fatal("OPTIONS on /rest/system/status should return a 'Access-Control-Allow-Headers: Content-Type, X-API-KEY' header")
}
}
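
The two tests above also document the REST API's CORS contract: responses carry an Access-Control-Allow-Origin: * header, and authenticated requests pass the key in the X-API-Key header. As a usage illustration, a minimal standalone client along those lines (the base URL and API key below are placeholders to replace with your own instance's values):

```
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"time"
)

func main() {
	// Placeholder values; point these at your own Syncthing instance.
	const baseURL = "http://localhost:8384"
	const apiKey = "your-api-key-here"

	cli := &http.Client{Timeout: 5 * time.Second}
	req, err := http.NewRequest("GET", baseURL+"/rest/system/status", nil)
	if err != nil {
		panic(err)
	}
	// Authenticate using the API key header checked by the REST API.
	req.Header.Set("X-API-Key", apiKey)

	resp, err := cli.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}
```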

View File

@@ -21,7 +21,7 @@ func trackCPUUsage() {
var prevUsage int64
var prevTime = time.Now().UnixNano()
var rusage syscall.Rusage
for _ = range time.NewTicker(time.Second).C {
for range time.NewTicker(time.Second).C {
syscall.Getrusage(syscall.RUSAGE_SELF, &rusage)
curTime := time.Now().UnixNano()
timeDiff := curTime - prevTime

View File

@@ -34,7 +34,7 @@ func trackCPUUsage() {
prevTime := ctime.Nanoseconds()
prevUsage := ktime.Nanoseconds() + utime.Nanoseconds() // Always overflows
for _ = range time.NewTicker(time.Second).C {
for range time.NewTicker(time.Second).C {
err := syscall.GetProcessTimes(handle, &ctime, &etime, &ktime, &utime)
if err != nil {
continue

View File

@@ -48,7 +48,7 @@ var locations = map[locationEnum]string{
locKeyFile: "${config}/key.pem",
locHTTPSCertFile: "${config}/https-cert.pem",
locHTTPSKeyFile: "${config}/https-key.pem",
locDatabase: "${config}/index-v0.13.0.db",
locDatabase: "${config}/index-v0.14.0.db",
locLogFile: "${config}/syncthing.log", // -logfile on Windows
locCsrfTokens: "${config}/csrftokens.txt",
locPanicLog: "${config}/panic-${timestamp}.log",

View File

@@ -12,13 +12,15 @@ import (
"errors"
"flag"
"fmt"
"io"
"io/ioutil"
"log"
"net"
"net/http"
_ "net/http/pprof"
"net/url"
"os"
"os/signal"
"path"
"path/filepath"
"regexp"
"runtime"
@@ -39,17 +41,20 @@ import (
"github.com/syncthing/syncthing/lib/model"
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/rand"
"github.com/syncthing/syncthing/lib/sha256"
"github.com/syncthing/syncthing/lib/symlinks"
"github.com/syncthing/syncthing/lib/tlsutil"
"github.com/syncthing/syncthing/lib/upgrade"
"github.com/syncthing/syncthing/lib/util"
"github.com/thejerf/suture"
_ "net/http/pprof" // Need to import this to support STPROFILER.
)
var (
Version = "unknown-dev"
Codename = "Copper Cockroach"
Codename = "Dysprosium Dragonfly"
BuildStamp = "0"
BuildDate time.Time
BuildHost = "unknown"
@@ -136,11 +141,11 @@ show time only (2).
Development Settings
--------------------
The following environment variables modify syncthing's behavior in ways that
The following environment variables modify Syncthing's behavior in ways that
are mostly useful for developers. Use with care.
STNODEFAULTFOLDER Don't create a default folder when starting for the first
time. This variable will be ignored anytime after the first
time. This variable will be ignored anytime after the first
run.
STGUIASSETS Directory to load GUI assets from. Overrides compiled in
@@ -149,8 +154,8 @@ are mostly useful for developers. Use with care.
STTRACE A comma separated string of facilities to trace. The valid
facility strings listed below.
STPROFILER Set to a listen address such as "127.0.0.1:9090" to start the
profiler with HTTP access.
STPROFILER Set to a listen address such as "127.0.0.1:9090" to start
the profiler with HTTP access.
STCPUPROFILE Write a CPU profile to cpu-$pid.pprof on exit.
@@ -163,14 +168,33 @@ are mostly useful for developers. Use with care.
STPERFSTATS Write running performance statistics to perf-$pid.csv. Not
supported on Windows.
STDEADLOCK Used for debugging internal deadlocks. Use only under
direction of a developer.
STDEADLOCKTIMEOUT Used for debugging internal deadlocks; sets debug
sensitivity. Use only under direction of a developer.
STDEADLOCKTHRESHOLD Used for debugging internal deadlocks; sets debug
sensitivity. Use only under direction of a developer.
STNORESTART Equivalent to the -no-restart argument. Disable the
Syncthing monitor process which handles restarts for some
configuration changes, upgrades, crashes and also log file
writing (stdout is still written).
STNOUPGRADE Disable automatic upgrades.
STHASHING Select the SHA256 hashing package to use. Possible values
are "standard" for the Go standard library implementation,
"minio" for the github.com/minio/sha256-simd implementation,
and blank (the default) for auto detection.
GOMAXPROCS Set the maximum number of CPU cores to use. Defaults to all
available CPU cores.
GOGC Percentage of heap growth at which to trigger GC. Default is
100. Lower numbers keep peak memory usage down, at the price
of CPU usage (ie. performance).
of CPU usage (i.e. performance).
Debugging Facilities
@@ -190,7 +214,8 @@ var (
type RuntimeOptions struct {
confDir string
reset bool
resetDatabase bool
resetDeltaIdxs bool
showVersion bool
showPaths bool
doUpgrade bool
@@ -201,8 +226,10 @@ type RuntimeOptions struct {
hideConsole bool
logFile string
auditEnabled bool
auditFile string
verbose bool
paused bool
unpaused bool
guiAddress string
guiAPIKey string
generateDir string
@@ -249,8 +276,9 @@ func parseCommandLineOptions() RuntimeOptions {
flag.IntVar(&options.logFlags, "logflags", options.logFlags, "Select information in log line prefix (see below)")
flag.BoolVar(&options.noBrowser, "no-browser", false, "Do not start browser")
flag.BoolVar(&options.browserOnly, "browser-only", false, "Open GUI in browser")
flag.BoolVar(&options.noRestart, "no-restart", options.noRestart, "Do not restart; just exit")
flag.BoolVar(&options.reset, "reset", false, "Reset the database")
flag.BoolVar(&options.noRestart, "no-restart", options.noRestart, "Disable monitor process, managed restarts and log file writing")
flag.BoolVar(&options.resetDatabase, "reset-database", false, "Reset the database, forcing a full rescan and resync")
flag.BoolVar(&options.resetDeltaIdxs, "reset-deltas", false, "Reset delta index IDs, forcing a full index exchange")
flag.BoolVar(&options.doUpgrade, "upgrade", false, "Perform upgrade")
flag.BoolVar(&options.doUpgradeCheck, "upgrade-check", false, "Check for available upgrade")
flag.BoolVar(&options.showVersion, "version", false, "Show version")
@@ -258,8 +286,10 @@ func parseCommandLineOptions() RuntimeOptions {
flag.StringVar(&options.upgradeTo, "upgrade-to", options.upgradeTo, "Force upgrade directly from specified URL")
flag.BoolVar(&options.auditEnabled, "audit", false, "Write events to audit file")
flag.BoolVar(&options.verbose, "verbose", false, "Print verbose log output")
flag.BoolVar(&options.paused, "paused", false, "Start with all devices paused")
flag.BoolVar(&options.paused, "paused", false, "Start with all devices and folders paused")
flag.BoolVar(&options.unpaused, "unpaused", false, "Start with all devices and folders unpaused")
flag.StringVar(&options.logFile, "logfile", options.logFile, "Log file name (use \"-\" for stdout)")
flag.StringVar(&options.auditFile, "auditfile", options.auditFile, "Specify audit file (use \"-\" for stdout, \"--\" for stderr)")
if runtime.GOOS == "windows" {
// Allow user to hide the console window
flag.BoolVar(&options.hideConsole, "no-console", false, "Hide console window")
@@ -269,6 +299,11 @@ func parseCommandLineOptions() RuntimeOptions {
flag.Usage = usageFor(flag.CommandLine, usage, longUsage)
flag.Parse()
if len(flag.Args()) > 0 {
flag.Usage()
os.Exit(2)
}
return options
}
@@ -353,7 +388,7 @@ func main() {
return
}
if options.reset {
if options.resetDatabase {
resetDB()
return
}
@@ -476,8 +511,13 @@ func performUpgrade(release upgrade.Release) {
func upgradeViaRest() error {
cfg, _ := loadConfig()
target := cfg.GUI().URL()
r, _ := http.NewRequest("POST", target+"/rest/system/upgrade", nil)
u, err := url.Parse(cfg.GUI().URL())
if err != nil {
return err
}
u.Path = path.Join(u.Path, "rest/system/upgrade")
target := u.String()
r, _ := http.NewRequest("POST", target, nil)
r.Header.Set("X-API-Key", cfg.GUI().APIKey)
tr := &http.Transport{
@@ -522,7 +562,7 @@ func syncthingMain(runtimeOptions RuntimeOptions) {
l.SetPrefix("[start] ")
if runtimeOptions.auditEnabled {
startAuditing(mainService)
startAuditing(mainService, runtimeOptions.auditFile)
}
if runtimeOptions.verbose {
@@ -532,9 +572,11 @@ func syncthingMain(runtimeOptions RuntimeOptions) {
errors := logger.NewRecorder(l, logger.LevelWarn, maxSystemErrors, 0)
systemLog := logger.NewRecorder(l, logger.LevelDebug, maxSystemLog, initialSystemLog)
// Event subscription for the API; must start early to catch the early events. The LocalDiskUpdated
// event might overwhelm the event reciever in some situations so we will not subscribe to it here.
apiSub := events.NewBufferedSubscription(events.Default.Subscribe(events.AllEvents&^events.LocalChangeDetected), 1000)
// Event subscription for the API; must start early to catch the early
// events. The LocalChangeDetected event might overwhelm the event
// receiver in some situations so we will not subscribe to it here.
apiSub := events.NewBufferedSubscription(events.Default.Subscribe(events.AllEvents&^events.LocalChangeDetected&^events.RemoteChangeDetected), 1000)
diskSub := events.NewBufferedSubscription(events.Default.Subscribe(events.LocalChangeDetected|events.RemoteChangeDetected), 1000)
if len(os.Getenv("GOMAXPROCS")) == 0 {
runtime.GOMAXPROCS(runtime.NumCPU())
@@ -560,7 +602,9 @@ func syncthingMain(runtimeOptions RuntimeOptions) {
l.Infoln(LongVersion)
l.Infoln("My ID:", myID)
printHashRate()
sha256.SelectAlgo()
sha256.Report()
// Emit the Starting event, now that we know who we are.
@@ -597,6 +641,10 @@ func syncthingMain(runtimeOptions RuntimeOptions) {
InsecureSkipVerify: true,
MinVersion: tls.VersionTLS12,
CipherSuites: []uint16{
0xCCA8, // TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, Go 1.8
0xCCA9, // TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, Go 1.8
tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
@@ -606,9 +654,6 @@ func syncthingMain(runtimeOptions RuntimeOptions) {
},
}
// If the read or write rate should be limited, set up a rate limiter for it.
// This will be used on connections created in the connect and listen routines.
opts := cfg.Options()
if !opts.SymlinksEnabled {
@@ -640,6 +685,11 @@ func syncthingMain(runtimeOptions RuntimeOptions) {
l.Fatalln("Cannot open database:", err, "- Is another copy of Syncthing already running?")
}
if runtimeOptions.resetDeltaIdxs {
l.Infoln("Reinitializing delta index IDs")
ldb.DropDeltaIndexIDs()
}
protectedFiles := []string{
locations[locDatabase],
locations[locConfigFile],
@@ -656,8 +706,14 @@ func syncthingMain(runtimeOptions RuntimeOptions) {
}
}
if cfg.RawCopy().OriginalVersion == 15 {
// The config version 15->16 migration is about handling ignores and
// delta indexes and requires that we drop existing indexes that
// have been incorrectly ignore filtered.
ldb.DropDeltaIndexIDs()
}
m := model.NewModel(cfg, myID, myDeviceName(cfg), "syncthing", Version, ldb, protectedFiles)
cfg.Subscribe(m)
if t := os.Getenv("STDEADLOCKTIMEOUT"); len(t) > 0 {
it, err := strconv.Atoi(t)
@@ -668,23 +724,18 @@ func syncthingMain(runtimeOptions RuntimeOptions) {
m.StartDeadlockDetector(20 * time.Minute)
}
if runtimeOptions.paused {
for device := range cfg.Devices() {
m.PauseDevice(device)
}
if runtimeOptions.unpaused {
setPauseState(cfg, false)
} else if runtimeOptions.paused {
setPauseState(cfg, true)
}
// Clear out old indexes for other devices. Otherwise we'll start up and
// start needing a bunch of files which are nowhere to be found. This
// needs to be changed when we correctly do persistent indexes.
// Add and start folders
for _, folderCfg := range cfg.Folders() {
m.AddFolder(folderCfg)
for _, device := range folderCfg.DeviceIDs() {
if device == myID {
continue
}
m.Index(device, folderCfg.ID, nil, 0, nil)
if folderCfg.Paused {
continue
}
m.AddFolder(folderCfg)
m.StartFolder(folderCfg.ID)
}
@@ -735,7 +786,7 @@ func syncthingMain(runtimeOptions RuntimeOptions) {
// GUI
setupGUI(mainService, cfg, m, apiSub, cachedDiscovery, connectionsService, errors, systemLog, runtimeOptions)
setupGUI(mainService, cfg, m, apiSub, diskSub, cachedDiscovery, connectionsService, errors, systemLog, runtimeOptions)
if runtimeOptions.cpuProfile {
f, err := os.Create(fmt.Sprintf("cpu-%d.pprof", os.Getpid()))
@@ -761,7 +812,7 @@ func syncthingMain(runtimeOptions RuntimeOptions) {
if opts.URUniqueID == "" {
// Previously the ID was generated from the node ID. We now need
// to generate a new one.
opts.URUniqueID = util.RandomString(8)
opts.URUniqueID = rand.String(8)
cfg.SetOptions(opts)
cfg.Save()
}
@@ -779,10 +830,8 @@ func syncthingMain(runtimeOptions RuntimeOptions) {
if opts.AutoUpgradeIntervalH > 0 {
if noUpgrade {
l.Infof("No automatic upgrades; STNOUPGRADE environment variable defined.")
} else if IsRelease {
go autoUpgrade(cfg)
} else {
l.Infof("No automatic upgrades; %s is not a release version.", Version)
go autoUpgrade(cfg)
}
}
@@ -837,28 +886,11 @@ func setupSignalHandling() {
}()
}
// printHashRate prints the hashing performance in MB/s, formatting it with
// appropriate precision for the value, i.e. 182 MB/s, 18 MB/s, 1.8 MB/s, 0.18
// MB/s.
func printHashRate() {
hashRate := cpuBench(3, 100*time.Millisecond)
decimals := 0
if hashRate < 1 {
decimals = 2
} else if hashRate < 10 {
decimals = 1
}
l.Infof("Single thread hash performance is ~%.*f MB/s", decimals, hashRate)
}
func loadConfig() (*config.Wrapper, error) {
cfgFile := locations[locConfigFile]
cfg, err := config.Load(cfgFile, myID)
if err != nil {
l.Infoln("Error loading config file; using defaults for now")
myName, _ := os.Hostname()
newCfg := defaultConfig(myName)
cfg = config.Wrap(cfgFile, newCfg)
@@ -876,7 +908,7 @@ func loadOrCreateConfig() *config.Wrapper {
l.Fatalln("Config:", err)
}
if cfg.Raw().OriginalVersion != config.CurrentVersion {
if cfg.RawCopy().OriginalVersion != config.CurrentVersion {
err = archiveAndSaveConfig(cfg)
if err != nil {
l.Fatalln("Config archive:", err)
@@ -887,24 +919,57 @@ func loadOrCreateConfig() *config.Wrapper {
}
func archiveAndSaveConfig(cfg *config.Wrapper) error {
// To prevent previous config from being cleaned up, quickly touch it too
now := time.Now()
_ = os.Chtimes(cfg.ConfigPath(), now, now) // May return error on Android etc; no worries
archivePath := cfg.ConfigPath() + fmt.Sprintf(".v%d", cfg.Raw().OriginalVersion)
// Copy the existing config to an archive copy
archivePath := cfg.ConfigPath() + fmt.Sprintf(".v%d", cfg.RawCopy().OriginalVersion)
l.Infoln("Archiving a copy of old config file format at:", archivePath)
if err := osutil.Rename(cfg.ConfigPath(), archivePath); err != nil {
if err := copyFile(cfg.ConfigPath(), archivePath); err != nil {
return err
}
// Do a regular atomic config save
return cfg.Save()
}
func startAuditing(mainService *suture.Supervisor) {
auditFile := timestampedLoc(locAuditLog)
fd, err := os.OpenFile(auditFile, os.O_WRONLY|os.O_CREATE|os.O_EXCL, 0600)
func copyFile(src, dst string) error {
bs, err := ioutil.ReadFile(src)
if err != nil {
l.Fatalln("Audit:", err)
return err
}
if err := ioutil.WriteFile(dst, bs, 0600); err != nil {
// Attempt to clean up
os.Remove(dst)
return err
}
return nil
}
func startAuditing(mainService *suture.Supervisor, auditFile string) {
var fd io.Writer
var err error
var auditDest string
var auditFlags int
if auditFile == "-" {
fd = os.Stdout
auditDest = "stdout"
} else if auditFile == "--" {
fd = os.Stderr
auditDest = "stderr"
} else {
if auditFile == "" {
auditFile = timestampedLoc(locAuditLog)
auditFlags = os.O_WRONLY | os.O_CREATE | os.O_EXCL
} else {
auditFlags = os.O_WRONLY | os.O_CREATE | os.O_APPEND
}
fd, err = os.OpenFile(auditFile, auditFlags, 0600)
if err != nil {
l.Fatalln("Audit:", err)
}
auditDest = auditFile
}
auditService := newAuditService(fd)
@@ -914,10 +979,10 @@ func startAuditing(mainService *suture.Supervisor) {
// ensure we capture all events from the start.
auditService.WaitForStart()
l.Infoln("Audit log in", auditFile)
l.Infoln("Audit log in", auditDest)
}
func setupGUI(mainService *suture.Supervisor, cfg *config.Wrapper, m *model.Model, apiSub events.BufferedSubscription, discoverer discover.CachingMux, connectionsService *connections.Service, errors, systemLog logger.Recorder, runtimeOptions RuntimeOptions) {
func setupGUI(mainService *suture.Supervisor, cfg *config.Wrapper, m *model.Model, apiSub events.BufferedSubscription, diskSub events.BufferedSubscription, discoverer discover.CachingMux, connectionsService *connections.Service, errors, systemLog logger.Recorder, runtimeOptions RuntimeOptions) {
guiCfg := cfg.GUI()
if !guiCfg.Enabled {
@@ -928,16 +993,14 @@ func setupGUI(mainService *suture.Supervisor, cfg *config.Wrapper, m *model.Mode
l.Warnln("Insecure admin access is enabled.")
}
api, err := newAPIService(myID, cfg, locations[locHTTPSCertFile], locations[locHTTPSKeyFile], runtimeOptions.assetDir, m, apiSub, discoverer, connectionsService, errors, systemLog)
if err != nil {
l.Fatalln("Cannot start GUI:", err)
}
api := newAPIService(myID, cfg, locations[locHTTPSCertFile], locations[locHTTPSKeyFile], runtimeOptions.assetDir, m, apiSub, diskSub, discoverer, connectionsService, errors, systemLog)
cfg.Subscribe(api)
mainService.Add(api)
if cfg.Options().StartBrowser && !runtimeOptions.noBrowser && !runtimeOptions.stRestarting {
// Can potentially block if the utility we are invoking doesn't
// fork, and just execs, hence keep it in its own routine.
<-api.startedOnce
go openURL(guiCfg.URL())
}
}
@@ -947,9 +1010,8 @@ func defaultConfig(myName string) config.Configuration {
if !noDefaultFolder {
l.Infoln("Default folder created and/or linked to new config")
folderID := util.RandomString(5) + "-" + util.RandomString(5)
defaultFolder = config.NewFolderConfiguration(folderID, locations[locDefFolder])
defaultFolder.Label = "Default Folder (" + folderID + ")"
defaultFolder = config.NewFolderConfiguration("default", locations[locDefFolder])
defaultFolder.Label = "Default Folder"
defaultFolder.RescanIntervalS = 60
defaultFolder.MinDiskFreePct = 1
defaultFolder.Devices = []config.FolderDeviceConfiguration{{DeviceID: myID}}
@@ -1054,7 +1116,7 @@ func getFreePort(host string, ports ...int) (int, error) {
}
func standbyMonitor() {
restartDelay := time.Duration(60 * time.Second)
restartDelay := 60 * time.Second
now := time.Now()
for {
time.Sleep(10 * time.Second)
@@ -1125,13 +1187,16 @@ func autoUpgrade(cfg *config.Wrapper) {
// suitable time after they have gone out of fashion.
func cleanConfigDirectory() {
patterns := map[string]time.Duration{
"panic-*.log": 7 * 24 * time.Hour, // keep panic logs for a week
"audit-*.log": 7 * 24 * time.Hour, // keep audit logs for a week
"index": 14 * 24 * time.Hour, // keep old index format for two weeks
"index*.converted": 14 * 24 * time.Hour, // keep old converted indexes for two weeks
"config.xml.v*": 30 * 24 * time.Hour, // old config versions for a month
"*.idx.gz": 30 * 24 * time.Hour, // these should for sure no longer exist
"backup-of-v0.8": 30 * 24 * time.Hour, // these neither
"panic-*.log": 7 * 24 * time.Hour, // keep panic logs for a week
"audit-*.log": 7 * 24 * time.Hour, // keep audit logs for a week
"index": 14 * 24 * time.Hour, // keep old index format for two weeks
"index-v0.11.0.db": 14 * 24 * time.Hour, // keep old index format for two weeks
"index-v0.13.0.db": 14 * 24 * time.Hour, // keep old index format for two weeks
"index*.converted": 14 * 24 * time.Hour, // keep old converted indexes for two weeks
"config.xml.v*": 30 * 24 * time.Hour, // old config versions for a month
"*.idx.gz": 30 * 24 * time.Hour, // these should for sure no longer exist
"backup-of-v0.8": 30 * 24 * time.Hour, // these neither
"tmp-index-sorter.*": time.Minute, // these should never exist on startup
}
for pat, dur := range patterns {
@@ -1184,3 +1249,16 @@ func showPaths() {
fmt.Printf("GUI override directory:\n\t%s\n\n", locations[locGUIAssets])
fmt.Printf("Default sync folder directory:\n\t%s\n\n", locations[locDefFolder])
}
func setPauseState(cfg *config.Wrapper, paused bool) {
raw := cfg.RawCopy()
for i := range raw.Devices {
raw.Devices[i].Paused = paused
}
for i := range raw.Folders {
raw.Folders[i].Paused = paused
}
if err := cfg.Replace(raw); err != nil {
l.Fatalln("Cannot adjust paused state:", err)
}
}
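
The new startAuditing above encodes the -auditfile semantics: "-" means stdout, "--" means stderr, an empty value falls back to a timestamped file created exclusively, and any other value is opened for append. Pulled out as a hypothetical standalone helper (auditWriter is not a name used in the codebase; timestampedLoc and locAuditLog are the existing package helpers shown in the diffs above), the same dispatch could look like this:

```
// auditWriter is a hypothetical helper mirroring the startAuditing logic above;
// it returns the destination writer plus a human-readable description of it.
func auditWriter(auditFile string) (io.Writer, string, error) {
	switch auditFile {
	case "-":
		return os.Stdout, "stdout", nil
	case "--":
		return os.Stderr, "stderr", nil
	}

	flags := os.O_WRONLY | os.O_CREATE | os.O_APPEND
	if auditFile == "" {
		// No file given: use a timestamped default, refusing to overwrite.
		auditFile = timestampedLoc(locAuditLog)
		flags = os.O_WRONLY | os.O_CREATE | os.O_EXCL
	}
	fd, err := os.OpenFile(auditFile, flags, 0600)
	if err != nil {
		return nil, "", err
	}
	return fd, auditFile, nil
}
```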

View File

@@ -7,151 +7,12 @@
package main
import (
"os"
"testing"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/model"
"github.com/syncthing/syncthing/lib/protocol"
)
func TestFolderErrors(t *testing.T) {
// This test intentionally avoids starting the folders. If they are
// started, they will perform an initial scan, which will create missing
// folder markers and race with the stuff we do in the test.
fcfg := config.FolderConfiguration{
ID: "folder",
RawPath: "testdata/testfolder",
}
cfg := config.Wrap("/tmp/test", config.Configuration{
Folders: []config.FolderConfiguration{fcfg},
})
for _, file := range []string{".stfolder", "testfolder/.stfolder", "testfolder"} {
if err := os.Remove("testdata/" + file); err != nil && !os.IsNotExist(err) {
t.Fatal(err)
}
}
ldb := db.OpenMemory()
// Case 1 - new folder, directory and marker created
m := model.NewModel(cfg, protocol.LocalDeviceID, "device", "syncthing", "dev", ldb, nil)
m.AddFolder(fcfg)
if err := m.CheckFolderHealth("folder"); err != nil {
t.Error("Unexpected error", cfg.Folders()["folder"].Invalid)
}
s, err := os.Stat("testdata/testfolder")
if err != nil || !s.IsDir() {
t.Error(err)
}
_, err = os.Stat("testdata/testfolder/.stfolder")
if err != nil {
t.Error(err)
}
if err := os.Remove("testdata/testfolder/.stfolder"); err != nil {
t.Fatal(err)
}
if err := os.Remove("testdata/testfolder/"); err != nil {
t.Fatal(err)
}
// Case 2 - new folder, marker created
fcfg.RawPath = "testdata/"
cfg = config.Wrap("/tmp/test", config.Configuration{
Folders: []config.FolderConfiguration{fcfg},
})
m = model.NewModel(cfg, protocol.LocalDeviceID, "device", "syncthing", "dev", ldb, nil)
m.AddFolder(fcfg)
if err := m.CheckFolderHealth("folder"); err != nil {
t.Error("Unexpected error", cfg.Folders()["folder"].Invalid)
}
_, err = os.Stat("testdata/.stfolder")
if err != nil {
t.Error(err)
}
if err := os.Remove("testdata/.stfolder"); err != nil {
t.Fatal(err)
}
// Case 3 - Folder marker missing
set := db.NewFileSet("folder", ldb)
set.Update(protocol.LocalDeviceID, []protocol.FileInfo{
{Name: "dummyfile"},
})
m = model.NewModel(cfg, protocol.LocalDeviceID, "device", "syncthing", "dev", ldb, nil)
m.AddFolder(fcfg)
if err := m.CheckFolderHealth("folder"); err == nil || err.Error() != "folder marker missing" {
t.Error("Incorrect error: Folder marker missing !=", m.CheckFolderHealth("folder"))
}
// Case 3.1 - recover after folder marker missing
if err = fcfg.CreateMarker(); err != nil {
t.Error(err)
}
if err := m.CheckFolderHealth("folder"); err != nil {
t.Error("Unexpected error", cfg.Folders()["folder"].Invalid)
}
// Case 4 - Folder path missing
if err := os.Remove("testdata/testfolder/.stfolder"); err != nil && !os.IsNotExist(err) {
t.Fatal(err)
}
if err := os.Remove("testdata/testfolder"); err != nil && !os.IsNotExist(err) {
t.Fatal(err)
}
fcfg.RawPath = "testdata/testfolder"
cfg = config.Wrap("testdata/subfolder", config.Configuration{
Folders: []config.FolderConfiguration{fcfg},
})
m = model.NewModel(cfg, protocol.LocalDeviceID, "device", "syncthing", "dev", ldb, nil)
m.AddFolder(fcfg)
if err := m.CheckFolderHealth("folder"); err == nil || err.Error() != "folder path missing" {
t.Error("Incorrect error: Folder path missing !=", m.CheckFolderHealth("folder"))
}
// Case 4.1 - recover after folder path missing
if err := os.Mkdir("testdata/testfolder", 0700); err != nil {
t.Fatal(err)
}
if err := m.CheckFolderHealth("folder"); err == nil || err.Error() != "folder marker missing" {
t.Error("Incorrect error: Folder marker missing !=", m.CheckFolderHealth("folder"))
}
// Case 4.2 - recover after missing marker
if err = fcfg.CreateMarker(); err != nil {
t.Error(err)
}
if err := m.CheckFolderHealth("folder"); err != nil {
t.Error("Unexpected error", cfg.Folders()["folder"].Invalid)
}
}
func TestShortIDCheck(t *testing.T) {
cfg := config.Wrap("/tmp/test", config.Configuration{
Devices: []config.DeviceConfiguration{

View File

@@ -20,9 +20,8 @@ var (
func memorySize() (int64, error) {
var memoryStatusEx [64]byte
binary.LittleEndian.PutUint32(memoryStatusEx[:], 64)
p := uintptr(unsafe.Pointer(&memoryStatusEx[0]))
ret, _, callErr := syscall.Syscall(uintptr(globalMemoryStatusEx), 1, p, 0, 0)
ret, _, callErr := syscall.Syscall(uintptr(globalMemoryStatusEx), 1, uintptr(unsafe.Pointer(&memoryStatusEx[0])), 0, 0)
if ret == 0 {
return 0, callErr
}

View File

@@ -23,7 +23,7 @@ func (c *mockedConfig) ListenAddresses() []string {
return nil
}
func (c *mockedConfig) Raw() config.Configuration {
func (c *mockedConfig) RawCopy() config.Configuration {
return config.Configuration{}
}
@@ -31,8 +31,8 @@ func (c *mockedConfig) Options() config.OptionsConfiguration {
return config.OptionsConfiguration{}
}
func (c *mockedConfig) Replace(cfg config.Configuration) config.CommitResponse {
return config.CommitResponse{}
func (c *mockedConfig) Replace(cfg config.Configuration) error {
return nil
}
func (c *mockedConfig) Subscribe(cm config.Committer) {}
@@ -45,6 +45,14 @@ func (c *mockedConfig) Devices() map[protocol.DeviceID]config.DeviceConfiguratio
return nil
}
func (c *mockedConfig) SetDevice(config.DeviceConfiguration) error {
return nil
}
func (c *mockedConfig) Save() error {
return nil
}
func (c *mockedConfig) RequiresRestart() bool {
return false
}

View File

@@ -21,8 +21,8 @@ func (m *mockedModel) GlobalDirectoryTree(folder, prefix string, levels int, dir
return nil
}
func (m *mockedModel) Completion(device protocol.DeviceID, folder string) float64 {
return 0
func (m *mockedModel) Completion(device protocol.DeviceID, folder string) model.FolderCompletion {
return model.FolderCompletion{}
}
func (m *mockedModel) Override(folder string) {}
@@ -31,8 +31,8 @@ func (m *mockedModel) NeedFolderFiles(folder string, page, perpage int) ([]db.Fi
return nil, nil, nil, 0
}
func (m *mockedModel) NeedSize(folder string) (nfiles int, bytes int64) {
return 0, 0
func (m *mockedModel) NeedSize(folder string) db.Counts {
return db.Counts{}
}
func (m *mockedModel) ConnectionStats() map[string]interface{} {
@@ -85,7 +85,7 @@ func (m *mockedModel) ScanFolders() map[string]error {
return nil
}
func (m *mockedModel) ScanFolderSubs(folder string, subs []string) error {
func (m *mockedModel) ScanFolderSubdirs(folder string, subs []string) error {
return nil
}
@@ -95,19 +95,19 @@ func (m *mockedModel) ConnectedTo(deviceID protocol.DeviceID) bool {
return false
}
func (m *mockedModel) GlobalSize(folder string) (nfiles, deleted int, bytes int64) {
return 0, 0, 0
func (m *mockedModel) GlobalSize(folder string) db.Counts {
return db.Counts{}
}
func (m *mockedModel) LocalSize(folder string) (nfiles, deleted int, bytes int64) {
return 0, 0, 0
func (m *mockedModel) LocalSize(folder string) db.Counts {
return db.Counts{}
}
func (m *mockedModel) CurrentLocalVersion(folder string) (int64, bool) {
func (m *mockedModel) CurrentSequence(folder string) (int64, bool) {
return 0, false
}
func (m *mockedModel) RemoteLocalVersion(folder string) (int64, bool) {
func (m *mockedModel) RemoteSequence(folder string) (int64, bool) {
return 0, false
}

View File

@@ -178,6 +178,22 @@ func copyStderr(stderr io.Reader, dst io.Writer) {
if panicFd == nil {
dst.Write([]byte(line))
if strings.Contains(line, "SIGILL") {
l.Warnln(`
*******************************************************************************
* Crash due to illegal instruction detected. This is most likely due to a CPU *
* incompatibility with the high performance hashing package. Switching to the *
* standard hashing package instead. Please report this issue at: *
* *
* https://github.com/syncthing/syncthing/issues *
* *
* Include the details of your CPU. *
*******************************************************************************
`)
os.Setenv("STHASHING", "standard")
return
}
if strings.HasPrefix(line, "panic:") || strings.HasPrefix(line, "fatal error:") {
panicFd, err = os.Create(timestampedLoc(locPanicLog))
if err != nil {
@@ -186,8 +202,24 @@ func copyStderr(stderr io.Reader, dst io.Writer) {
}
l.Warnf("Panic detected, writing to \"%s\"", panicFd.Name())
l.Warnln("Please check for existing issues with similar panic message at https://github.com/syncthing/syncthing/issues/")
l.Warnln("If no issue with similar panic message exists, please create a new issue with the panic log attached")
if strings.Contains(line, "leveldb") && strings.Contains(line, "corrupt") {
l.Warnln(`
*********************************************************************************
* Crash due to corrupt database. *
* *
* This crash usually occurs due to one of the following reasons: *
* - Syncthing being stopped abruptly (killed/loss of power) *
* - Bad hardware (memory/disk issues) *
* - Software that affects disk writes (SSD caching software and similar) *
* *
* Please see the following URL for instructions on how to recover: *
* https://docs.syncthing.net/users/faq.html#my-syncthing-database-is-corrupt *
*********************************************************************************
`)
} else {
l.Warnln("Please check for existing issues with similar panic message at https://github.com/syncthing/syncthing/issues/")
l.Warnln("If no issue with similar panic message exists, please create a new issue with the panic log attached")
}
stdoutMut.Lock()
for _, line := range stdoutFirstLines {

View File

@@ -10,6 +10,7 @@ import (
"time"
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/sync"
"github.com/thejerf/suture"
)
@@ -59,7 +60,7 @@ func (c *folderSummaryService) Stop() {
// listenForUpdates subscribes to the event bus and makes note of folders that
// need their data recalculated.
func (c *folderSummaryService) listenForUpdates() {
sub := events.Default.Subscribe(events.LocalIndexUpdated | events.RemoteIndexUpdated | events.StateChanged)
sub := events.Default.Subscribe(events.LocalIndexUpdated | events.RemoteIndexUpdated | events.StateChanged | events.RemoteDownloadProgress | events.DeviceConnected)
defer events.Default.Unsubscribe(sub)
for {
@@ -67,8 +68,31 @@ func (c *folderSummaryService) listenForUpdates() {
select {
case ev := <-sub.C():
// Whenever the local or remote index is updated for a given
// folder we make a note of it.
if ev.Type == events.DeviceConnected {
// When a device connects we schedule a refresh of all
// folders shared with that device.
data := ev.Data.(map[string]string)
deviceID, _ := protocol.DeviceIDFromString(data["id"])
c.foldersMut.Lock()
nextFolder:
for _, folder := range c.cfg.Folders() {
for _, dev := range folder.Devices {
if dev.DeviceID == deviceID {
c.folders[folder.ID] = struct{}{}
continue nextFolder
}
}
}
c.foldersMut.Unlock()
continue
}
// The other events all have a "folder" attribute that they
// affect. Whenever the local or remote index is updated for a
// given folder we make a note of it.
data := ev.Data.(map[string]interface{})
folder := data["folder"].(string)
@@ -183,9 +207,11 @@ func (c *folderSummaryService) sendSummary(folder string) {
// remote device.
comp := c.model.Completion(devCfg.DeviceID, folder)
events.Default.Log(events.FolderCompletion, map[string]interface{}{
"folder": folder,
"device": devCfg.DeviceID.String(),
"completion": comp,
"folder": folder,
"device": devCfg.DeviceID.String(),
"completion": comp.CompletionPct,
"needBytes": comp.NeedBytes,
"globalBytes": comp.GlobalBytes,
})
}
}
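
The FolderCompletion event emitted in sendSummary now includes needBytes and globalBytes alongside the completion percentage. A minimal sketch of an in-process consumer, assuming the same events API used above (Subscribe, C(), Unsubscribe); logFolderCompletion is a hypothetical name and the function would have to live inside the syncthing process:

```
// logFolderCompletion is a hypothetical consumer of the FolderCompletion
// events emitted by sendSummary above; the map keys match the event data
// shown in the diff.
func logFolderCompletion() {
	sub := events.Default.Subscribe(events.FolderCompletion)
	defer events.Default.Unsubscribe(sub)

	for ev := range sub.C() {
		data := ev.Data.(map[string]interface{})
		l.Infof("folder %v on device %v: %v%% complete (%v of %v bytes still needed)",
			data["folder"], data["device"], data["completion"],
			data["needBytes"], data["globalBytes"])
	}
}
```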

View File

@@ -0,0 +1,16 @@
// Copyright (C) 2016 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at http://mozilla.org/MPL/2.0/.
//+build go1.7
package main
import "runtime/debug"
func init() {
// We want all (our) goroutines in panic traces.
debug.SetTraceback("all")
}

View File

@@ -9,7 +9,6 @@ package main
import (
"bytes"
"crypto/rand"
"crypto/sha256"
"crypto/tls"
"encoding/json"
"fmt"
@@ -23,6 +22,7 @@ import (
"github.com/syncthing/syncthing/lib/dialer"
"github.com/syncthing/syncthing/lib/model"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/sha256"
"github.com/syncthing/syncthing/lib/upgrade"
"github.com/thejerf/suture"
)
@@ -45,7 +45,7 @@ func newUsageReportingManager(cfg *config.Wrapper, m *model.Model) *usageReporti
}
// Start UR if it's enabled.
mgr.CommitConfiguration(config.Configuration{}, cfg.Raw())
mgr.CommitConfiguration(config.Configuration{}, cfg.RawCopy())
// Listen to future config changes so that we can start and stop as
// appropriate.
@@ -93,14 +93,14 @@ func reportData(cfg configIntf, m modelIntf) map[string]interface{} {
var totFiles, maxFiles int
var totBytes, maxBytes int64
for folderID := range cfg.Folders() {
files, _, bytes := m.GlobalSize(folderID)
totFiles += files
totBytes += bytes
if files > maxFiles {
maxFiles = files
global := m.GlobalSize(folderID)
totFiles += global.Files
totBytes += global.Bytes
if global.Files > maxFiles {
maxFiles = global.Files
}
if bytes > maxBytes {
maxBytes = bytes
if global.Bytes > maxBytes {
maxBytes = global.Bytes
}
}
@@ -134,7 +134,7 @@ func reportData(cfg configIntf, m modelIntf) map[string]interface{} {
for _, cfg := range cfg.Folders() {
rescanIntvs = append(rescanIntvs, cfg.RescanIntervalS)
if cfg.Type == config.FolderTypeReadOnly {
if cfg.Type == config.FolderTypeSendOnly {
folderUses["readonly"]++
}
if cfg.IgnorePerms {

View File

@@ -94,9 +94,12 @@ func (s *verboseService) formatEvent(ev events.Event) string {
case events.LocalChangeDetected:
data := ev.Data.(map[string]string)
// Local change detected in folder "foo": modified file /Users/jb/whatever
return fmt.Sprintf("Local change detected in folder %q: %s %s %s", data["folder"], data["action"], data["type"], data["path"])
case events.RemoteChangeDetected:
data := ev.Data.(map[string]string)
return fmt.Sprintf("Remote change detected in folder %q: %s %s %s", data["folder"], data["action"], data["type"], data["path"])
case events.RemoteIndexUpdated:
data := ev.Data.(map[string]interface{})
return fmt.Sprintf("Device %v sent an index update for %q with %d items", data["device"], data["folder"], data["items"])
@@ -132,11 +135,14 @@ func (s *verboseService) formatEvent(ev events.Event) string {
case events.FolderSummary:
data := ev.Data.(map[string]interface{})
sum := data["summary"].(map[string]interface{})
delete(sum, "invalid")
delete(sum, "ignorePatterns")
delete(sum, "stateChanged")
return fmt.Sprintf("Summary for folder %q is %v", data["folder"], data["summary"])
sum := make(map[string]interface{})
for k, v := range data["summary"].(map[string]interface{}) {
if k == "invalid" || k == "ignorePatterns" || k == "stateChanged" {
continue
}
sum[k] = v
}
return fmt.Sprintf("Summary for folder %q is %v", data["folder"], sum)
case events.FolderScanProgress:
data := ev.Data.(map[string]interface{})
@@ -148,7 +154,7 @@ func (s *verboseService) formatEvent(ev events.Event) string {
if total > 0 {
pct = 100 * current / total
}
return fmt.Sprintf("Scanning folder %q, %d%% done (%.01f MB/s)", folder, pct, rate)
return fmt.Sprintf("Scanning folder %q, %d%% done (%.01f MiB/s)", folder, pct, rate)
case events.DevicePaused:
data := ev.Data.(map[string]string)
@@ -160,6 +166,18 @@ func (s *verboseService) formatEvent(ev events.Event) string {
device := data["device"]
return fmt.Sprintf("Device %v was resumed", device)
case events.FolderPaused:
data := ev.Data.(map[string]string)
id := data["id"]
label := data["label"]
return fmt.Sprintf("Folder %v (%v) was paused", id, label)
case events.FolderResumed:
data := ev.Data.(map[string]string)
id := data["id"]
label := data["label"]
return fmt.Sprintf("Folder %v (%v) was resumed", id, label)
case events.ListenAddressesChanged:
data := ev.Data.(map[string]interface{})
address := data["address"]

View File

@@ -1,5 +0,0 @@
{{.name}} ({{.version}}); urgency=medium
* Packaging of {{.version}}.
-- Syncthing Release Management <release@syncthing.net> {{.date}}

View File

@@ -1 +0,0 @@
9

View File

@@ -1,16 +0,0 @@
Package: syncthing
Version: {{.version}}
Priority: optional
Section: net
Architecture: {{.arch}}
Depends: libc6, procps
Homepage: https://syncthing.net/
Maintainer: Syncthing Release Management <release@syncthing.net>
Description: Open Source Continuous File Synchronization
Syncthing is an application that lets you synchronize your files across
multiple devices. This means the creation, modification or deletion of files
on one machine will automatically be replicated to your other devices. We
believe your data is your data alone and you deserve to choose where it is
stored. Therefore Syncthing does not upload your data to the cloud but
exchanges your data across your machines as soon as they are online at the
same time.

View File

@@ -1,6 +0,0 @@
#!/bin/bash
set -euo pipefail
if [[ ${1:-} == configure ]]; then
pkill -HUP -x syncthing || true
fi

View File

@@ -0,0 +1,31 @@
Uncomplicated Firewall application preset
===================
Installation
-----------
**Please note:** If you installed syncthing using the official deb package, you can skip the copying step.
Copy the file `syncthing` to your ufw applications directory, usually located at `/etc/ufw/applications.d/` (root permissions required).
In a terminal run
```
sudo ufw app update syncthing
sudo ufw app update syncthing-gui
```
to load the presets.
To allow the syncthing ports, run
```
sudo ufw allow syncthing
```
If you want to access the web gui from anywhere (not only from localhost), you can also allow the gui port.
This step is **not** necessary for a "normal" installation!
```
sudo ufw allow syncthing-gui
```
Verification
----------
You can verify the opened ports by running
```
sudo ufw status verbose
```

View File

@@ -0,0 +1,9 @@
[syncthing]
title=Syncthing
description=Syncthing file synchronisation
ports=22000/tcp|21027/udp
[syncthing-gui]
title=Syncthing-GUI
description=Syncthing web gui
ports=8384/tcp

View File

@@ -3,6 +3,6 @@
This directory contains configuration files for running Syncthing under the
"systemd" service manager on Linux both under either a systemd system service or
systemd user service. For further documentation take a look at the [systemd
section][1] on http://docs.syncthing.net.
section][1] on https://docs.syncthing.net.
[1]: http://docs.syncthing.net/users/autostart.html#systemd
[1]: https://docs.syncthing.net/users/autostart.html#systemd

View File

@@ -5,7 +5,7 @@ After=suspend.target
[Service]
Type=oneshot
ExecStart=/usr/bin/pkill -HUP -x syncthing
ExecStart=-/usr/bin/pkill -HUP -x syncthing
[Install]
WantedBy=suspend.target

View File

@@ -1,7 +1,6 @@
[Unit]
Description=Syncthing - Open Source Continuous File Synchronization
Documentation=man:syncthing(1)
After=network.target
Wants=syncthing-inotify.service
[Service]

View File

@@ -11,6 +11,6 @@ To manually start syncthing via Upstart when using the system configuration use:
sudo initctl start syncthing
```
For further documentation see [http://docs.syncthing.net/users/autostart.html][1].
For further documentation see [https://docs.syncthing.net/users/autostart.html][1].
[1]: http://docs.syncthing.net/users/autostart.html#Upstart
[1]: https://docs.syncthing.net/users/autostart.html#Upstart

View File

@@ -0,0 +1,245 @@
/**
Black theme
Author: alessandro.g89
Source: https://userstyles.org/styles/122502/syncthing-dark
**/
body {
color: #aaa !important;
background-color: black !important;
}
a:hover,a:focus,a.focus{
outline: none !important;
}
/* navbar */
.navbar {
background-color: #333 !important;
border-color: #333 !important;
border-width: 2px !important;
}
.navbar-text, .dropdown>a, .dropdown-menu>li>a, .hidden-xs>a, .navbar-link {
color: #aaa !important;
}
.dropdown-menu {
border-color: #333 !important;
border-width: 2px !important;
background-color: #222 !important;
}
.dropdown-menu>li>a:hover, .dropdown-menu>li>a:focus {
color: #fff !important;
background-color: #333 !important;
}
.open>.dropdown-toggle, .dropdown-toggle:hover {
border-color: #333 !important;
background-color: #222 !important;
}
.divider {
background-color: #333 !important;
height: 2px !important;
}
li.hidden-xs:hover, .navbar-link:hover, .navbar-link:focus {
outline: none !important;
border-color: #333 !important;
background-color: #222 !important;
}
.dropdown-menu>.active>a {
color: #fff !important;
background-color: #217dbb !important;
}
/* main panel */
.panel {
background-color: #111 !important;
border-width: 2px !important;
}
.panel-default {
border-color: #222 !important;
}
.panel-default > .panel-heading {
color: #aaa !important;
border-color: #222 !important;
background-color: #222 !important;
}
.panel-warning > .panel-heading {
color: #222 !important;
}
.panel-progress {
background: #3498db;
}
.panel-footer {
background-color: #111 !important;
border-width: 0 !important;
}
.table-striped>tbody>tr:nth-of-type(odd) {
background-color: #181818 !important;
}
.panel-group .panel-heading+.panel-collapse>.panel-body, .panel-group .panel-heading+.panel-collapse>.list-group {
border-top: 1px solid #222 !important;
}
.identicon rect {
fill: #aaa;
}
.panel-warning .identicon rect {
fill: #222;
}
.panel-heading:hover, .panel-heading:focus {
text-decoration: none;
}
/* buttons */
.btn {
border-radius: 3px !important;
border-width: 0px !important;
}
.btn:hover, .btn:focus, .btn.focus {
outline: none !important;
}
.btn-default {
color: #aaa !important;
background-color: #333 !important;
}
.btn-default:hover, .btn-default:focus, .btn-default.focus {
color: #fff !important;
background-color: #484848 !important;
}
.btn-primary {
background-color: #217dbb !important;
}
.btn-primary:hover, .btn-primary:focus, .btn-primary.focus {
background-color: #3498db !important;
}
.btn-warning {
background-color: #c29d0b !important;
}
.btn-warning:hover, .btn-warning:focus, .btn-warning.focus {
background-color: #f1c40f !important;
}
.btn-danger {
background-color: #d62c1a !important;
}
.btn-danger:hover, .btn-danger:focus, .btn-danger.focus {
background-color: #e74c3c !important;
}
/* modal dialogs */
.modal-header {
border-bottom-color: #222 !important;
}
.modal-header:not(.alert) {
background-color: #222;
}
.alert-info {
color: #222 !important;
}
.alert-warning {
color: #222 !important;
}
.alert-danger {
color: #222 !important;
background-color: #d62c1a !important;
}
.modal-content {
border-color: #666 !important;
border-width: 2px !important;
background-color: #111 !important;
}
.modal-footer {
border-color: #111 !important;
background-color: #111 !important;
}
.help-block {
color: #aaa !important;
}
.form-control {
color: #aaa !important;
border-color: #444 !important;
background-color: black !important;
}
code.ng-binding{
color: #f99 !important;
background-color: #444 !important;
}
.well, .form-control[readonly="readonly"], .popover { /* read-only fields*/
color: #666 !important;
border-color: #444 !important;
background-color: #111 !important;
}
/* buttons for pagination */
.pagination>li>a, .pagination>li>span {
background-color: #333 !important;
border-color: #484848 !important;
}
.pagination>li>a:hover, .pagination>li>a:focus, .pagination>li>a.focus {
background-color: #484848 !important;
}
/* progress bars */
.progress-bar {
background-color: #217dbb !important;
}
.progress-bar-success {
background-color: #0A8522 !important;
}
.progress-bar-info {
background-color: #9b59b6 !important;
}
.progress-bar-warning {
background-color: #c29d0b !important;
}
.progress-bar-danger {
background-color: #d62c1a !important;
}
.progress .frontal {
color: #222;
}

View File

@@ -5,11 +5,13 @@ Dark theme
Author: alessandro.g89
Source: https://userstyles.org/styles/122502/syncthing-dark
Modified by: fti7
**/
body {
color: #aaa !important;
background-color: black !important;
background-color: #272727 !important;
}
a:hover,a:focus,a.focus{
@@ -20,8 +22,10 @@ a:hover,a:focus,a.focus{
/* navbar */
.navbar {
background-color: #333 !important;
border-color: #333 !important;
border-width: 2px !important;
border-color: #424242 !important;
border-top-width: 1px !important;
border-bottom-width: 1px !important;
}
.navbar-text, .dropdown>a, .dropdown-menu>li>a, .hidden-xs>a, .navbar-link {
@@ -29,7 +33,7 @@ a:hover,a:focus,a.focus{
}
.dropdown-menu {
border-color: #333 !important;
border-color: #424242 !important;
border-width: 2px !important;
background-color: #222 !important;
}
@@ -40,7 +44,7 @@ a:hover,a:focus,a.focus{
}
.open>.dropdown-toggle, .dropdown-toggle:hover {
border-color: #333 !important;
border-color: #424242 !important;
background-color: #222 !important;
}
@@ -51,7 +55,7 @@ a:hover,a:focus,a.focus{
li.hidden-xs:hover, .navbar-link:hover, .navbar-link:focus {
outline: none !important;
border-color: #333 !important;
border-color: #424242 !important;
background-color: #222 !important;
}
@@ -63,37 +67,53 @@ li.hidden-xs:hover, .navbar-link:hover, .navbar-link:focus {
/* main panel */
.panel {
background-color: #111 !important;
background-color: #323232 !important;
border-width: 2px !important;
}
.panel-default {
border-color: #222 !important;
border-color: #424242 !important;
}
.panel-default>.panel-heading {
.panel-default > .panel-heading {
color: #aaa !important;
border-color: #222 !important;
background-color: #222 !important;
background-color: #3B3B3B !important;
}
.panel-warning > .panel-heading {
color: #222 !important;
}
.panel-progress {
background: #3498db;
}
.panel-footer {
background-color: #111 !important;
background-color: #2D2D2D !important;
border-width: 0 !important;
}
.table-striped>tbody>tr:nth-of-type(odd) {
background-color: #181818 !important;
background-color: #2E2E2E !important;
}
.panel-group .panel-heading+.panel-collapse>.panel-body, .panel-group .panel-heading+.panel-collapse>.list-group {
border-top: 1px solid #222 !important;
border-top: 1px solid #424242 !important;
}
.identicon>rect {
fill: #aaa !important;
.identicon rect {
fill: #aaa;
}
.panel-warning .identicon rect {
fill: #222;
}
.panel-heading:hover, .panel-heading:focus {
text-decoration: none;
}
/* buttons */
.btn {
border-radius: 3px !important;
@@ -140,27 +160,35 @@ li.hidden-xs:hover, .navbar-link:hover, .navbar-link:focus {
/* modal dialogs */
.modal-header {
border-color: #222 !important;
background-color: #222;
border-bottom-color: #424242 !important;
}
.modal-header:not(.alert) {
background-color: #3B3B3B;
}
.alert-info {
color: #222 !important;
}
.alert-warning {
color: #222 !important;
}
.alert-danger {
color: #222 !important;
background-color: #d62c1a !important;
}
.modal-content {
border-color: #666 !important;
border-width: 2px !important;
background-color: #111 !important;
background-color: #272727 !important;
}
.modal-footer {
border-color: #111 !important;
background-color: #111 !important;
}
.alert-warning {
background-color: #c29d0b !important;
}
.alert-danger {
background-color: #d62c1a !important;
border-color: #303030 !important;
background-color: #2D2D2D !important;
}
.help-block {
@@ -169,8 +197,8 @@ li.hidden-xs:hover, .navbar-link:hover, .navbar-link:focus {
.form-control {
color: #aaa !important;
border-color: #444 !important;
background-color: black !important;
border-color: #424242 !important;
background-color: #3B3B3B !important;
}
code.ng-binding{
@@ -180,8 +208,8 @@ code.ng-binding{
.well, .form-control[readonly="readonly"], .popover { /* read-only fields*/
color: #666 !important;
border-color: #444 !important;
background-color: #111 !important;
border-color: #424242 !important;
background-color: #3B3B3B !important;
}
/* buttons for pagination */
@@ -214,4 +242,17 @@ code.ng-binding{
.progress-bar-danger {
background-color: #d62c1a !important;
}
}
.progress .frontal {
color: #222;
}
.text-info {
color: #9a6dc0;
}
.text-primary {
color: #3fa9f0;
}

View File

@@ -1,5 +1,4 @@
.dev-top-bar{
display: none;
background-color: yellow;
}

View File

@@ -33,29 +33,6 @@ ul+h5 {
display: block;
}
.panel-title {
position: relative;
text-overflow: ellipsis;
overflow: hidden;
}
.panel-title a:hover {
text-decoration: none;
}
identicon {
display: inline-block;
position: relative;
width: 1em;
height: 1em;
line-height: 1;
margin-right: 5px;
}
.identicon {
width: 1em;
height: 1em;
}
.checkbox {
margin-top: 0px;
}
@@ -73,15 +50,6 @@ identicon {
word-wrap:break-word;
}
.panel-heading .fa, .modal-header .fa {
margin-right: 10px;
}
.panel-heading {
position: relative;
overflow: hidden;
}
.text-monospace {
font-family: Menlo, Monaco, Consolas, "Courier New", monospace;
}
@@ -125,7 +93,7 @@ identicon {
}
.table td {
padding-left: 20px !important;
/*padding-left: 20px !important;*/
}
.table td.small-data {
@@ -163,12 +131,27 @@ table.table-condensed td.no-overflow-ellipse {
display: none;
}
*[language-select] > .dropdown-menu {
width: 450px;
}
*[language-select] > .dropdown-menu > li {
float: left;
width: 50%;
}
*[language-select] > .dropdown-menu > li > a {
overflow: hidden;
text-overflow: ellipsis;
}
.nav>li{
float: left;
}
.navbar-right {
/* to align with panel */
padding-right: 15px;
float: right;
}
.panel-body .table-condensed {
@@ -183,6 +166,56 @@ table.table-condensed td.no-overflow-ellipse {
margin-left: 60px;
}
/**
* Panel, Model and Accordion Title bars
*/
.panel-icon {
float: left;
margin-right: 15px;
margin-top: 0.125em;
margin-bottom: 0.125em;
line-height: 1;
}
.modal-title .panel-icon {
margin-top: 0.25em;
margin-bottom: 0.25em;
}
button.panel-heading {
display: block;
position: relative;
width: 100%;
text-align: left;
border-top-width: 0;
border-left-width: 0;
border-right-width: 0;
border-radius: 0 !important;
}
.panel-heading .panel-title-text {
text-overflow: ellipsis;
overflow: hidden;
white-space: nowrap;
}
.panel-heading .panel-status {
margin-left:15px;
}
identicon {
width: 1em;
height: 1em;
line-height: 1;
}
.identicon {
width: 1em;
height: 1em;
}
/**
* Progress bars with centered text
*/
@@ -216,6 +249,10 @@ ul.three-columns li, ul.two-columns li {
text-indent: -0.5em;
}
.navbar-fixed-bottom {
z-index: 980;
}
/** Footer nav on small devices **/
@media (max-width: 1199px) {
/* Stay at the end of the page, with space reserved for the footer
@@ -243,12 +280,37 @@ ul.three-columns li, ul.two-columns li {
padding-bottom: 0px;
}
.navbar-brand {
margin: 3.25px -15px;
}
.navbar-fixed-bottom {
position: relative;
}
.nav>li {
float:right;
.navbar-nav .open .dropdown-menu {
position: absolute;
left: auto;
right: 0;
background-color: #ffffff;
border: 1px solid #cccccc;
border: 1px solid rgba(0, 0, 0, 0.15);
-webkit-box-shadow: 0 6px 12px rgba(0, 0, 0, 0.175);
box-shadow: 0 6px 12px rgba(0, 0, 0, 0.175);
border-radius: 2px;
}
*[language-select] {
position: static !important;
}
*[language-select] > .dropdown-menu {
margin-left: 15px;
margin-right: 15px;
margin-top: -12px !important;
max-width: 450px;
height: 265px;
overflow-y: scroll;
}
table.table-condensed td {
@@ -258,47 +320,17 @@ ul.three-columns li, ul.two-columns li {
}
}
@media (min-width:650px) {
*[language-select] > .dropdown-menu > li {
width: 50%;
float: left;
}
*[language-select] > .dropdown-menu {
width: 440px;
}
}
/**
* Menu for select language
*/
@media (min-width:480px) and (max-width:649px) {
*[language-select] > .dropdown-menu {
width: 230px;
}
}
@media (max-width:479px) {
.dropdown-menu {
padding-top: 55px;
}
nav .dropdown-toggle {
font-size: 1em;
}
.dropdown-toggle {
float: left;
}
.logo{
margin:auto;
}
.navbar-nav .open .dropdown-menu > li > a {
padding: 12px 15px 12px 25px;
}
.navbar-fixed-bottom li{
width:100%;
}
.navbar-fixed-bottom li {
width: 100%;
}
}

View File

@@ -15,7 +15,15 @@
fill: #333;
}
.panel-warning .identicon rect {
fill: #fff;
}
.li-column {
background-color: rgb(236, 240, 241);
border-radius: 3px;
}
}
.panel-heading:hover, .panel-heading:focus {
text-decoration: none;
}

View File

Binary file not shown (image, 4.0 KiB).

Some files were not shown because too many files have changed in this diff.