Compare commits

793 Commits

Author SHA1 Message Date
Jakob Borg
5f40879a75 gui: Fix auto dismissing auth notification (ref #6536) 2020-06-08 08:29:55 +02:00
Simon Frei
0b65a616ba lib/scanner: Save one stat call per file (#6715) 2020-06-08 08:14:50 +02:00
Simon Frei
6b4fe5c063 gui: Ignore permissions isn't just about FAT (#6713)
Co-authored-by: Jakob Borg <jakob@kastelo.net>
2020-06-08 08:12:06 +02:00
Simon Frei
3065b127b5 lib/connections, lib/nat: Correctly dis-/enable nat (fixes #6552) (#6719) 2020-06-07 20:29:53 +02:00
Simon Frei
74ea9c5f67 gui: Fix string and update translations (ref #6536) (#6716) 2020-06-07 10:46:06 +02:00
André Colomb
46536509d7 lib/protocol: Avoid panic in DeviceIDFromBytes (#6714) 2020-06-07 10:31:12 +02:00
Jakob Borg
607bcc0b0e build: Fix rebuild of strelaypoolsrv assets 2020-06-06 08:43:56 +02:00
Jakob Borg
1950efb790 cmd/strelaysrv: Add note about relay operators 2020-06-06 08:35:01 +02:00
Jakob Borg
1b77ab2b52 cmd/stcrashreceiver: Pick up extra build tags and send to Sentry (#6710)
This extracts the extra tags from any `[foo]` stuff at the end of the
version and sends them to Sentry for indexing.

If I need to modify that regexp again I'll probably write a from scratch
tokenizer and parser for our version string instead...
2020-06-03 15:00:46 +02:00
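
A minimal sketch of that tag extraction, assuming a trailing `[tag1, tag2]` group at the end of the long version string (illustrative only, not the actual stcrashreceiver regexp):

```
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// extraTagsExp matches a trailing "[tag1, tag2]" group at the end of a
// long version string.
var extraTagsExp = regexp.MustCompile(`\[([^\]]+)\]\s*$`)

func extraTags(longVersion string) []string {
	m := extraTagsExp.FindStringSubmatch(longVersion)
	if m == nil {
		return nil
	}
	tags := strings.Split(m[1], ",")
	for i := range tags {
		tags[i] = strings.TrimSpace(tags[i])
	}
	return tags
}

func main() {
	v := `syncthing v1.6.1+47-g1eb104f3 "Fermium Flea" (go1.14.3 darwin-amd64) [stnoupgrade, use_badger]`
	fmt.Println(extraTags(v)) // [stnoupgrade use_badger]
}
```
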
Jakob Borg
4f367e4376 lib/build: Add some env vars as synthetic build tags (#6709)
This adds some env vars to the long version string as if they were build
tags. The purpose is to better understand what code was running or not
in the version output, usage reporting and crash reports. In order to
prevent possible privacy issues the actual value of the variable is not
reported, just the fact that it was set to something non-empty.

Example:

	% ./bin/syncthing --version
	syncthing v1.6.1+47-g1eb104f3-buildtags "Fermium Flea" (go1.14.3 darwin-amd64) jb@kvin.kastelo.net 2020-06-03 07:25:46 UTC [stnoupgrade, use_badger]
2020-06-03 15:00:28 +02:00
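
A sketch of the idea with a made-up list of watched variables; only the presence of each variable is reported, never its value:

```
package main

import (
	"fmt"
	"os"
	"strings"
)

// Hypothetical list of environment variables whose presence (not value)
// we surface as synthetic build tags.
var watchedEnvVars = []string{"STNOUPGRADE", "STHASHING", "GOGC"}

func syntheticTags() []string {
	var tags []string
	for _, name := range watchedEnvVars {
		if os.Getenv(name) != "" {
			// Report only the fact that it is set, never the value.
			tags = append(tags, strings.ToLower(name))
		}
	}
	return tags
}

func main() {
	fmt.Printf("syncthing v1.6.1 ... [%s]\n", strings.Join(syntheticTags(), ", "))
}
```
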
dependabot-preview[bot]
98418c9b5c build(deps): bump github.com/lucas-clemente/quic-go (#6695)
Bumps [github.com/lucas-clemente/quic-go](https://github.com/lucas-clemente/quic-go) from 0.15.7 to 0.16.0.
- [Release notes](https://github.com/lucas-clemente/quic-go/releases)
- [Changelog](https://github.com/lucas-clemente/quic-go/blob/master/Changelog.md)
- [Commits](https://github.com/lucas-clemente/quic-go/compare/v0.15.7...v0.16.0)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>

Co-authored-by: dependabot-preview[bot] <27856297+dependabot-preview[bot]@users.noreply.github.com>
2020-06-03 10:03:08 +02:00
Jakob Borg
3e4a90a2ba gui, man, authors: Update docs, translations, and contributors 2020-06-03 07:45:41 +02:00
Simon Frei
fac4dec840 lib/db: New VersionList migration fixes (ref #6638) (#6705) 2020-06-02 23:05:41 +02:00
Simon Frei
79bf1f1056 gui: Preserve folder-device info on folder edit (fixes #6706) (#6707) 2020-06-02 23:05:19 +02:00
Jonathan
9ef17322be lib/config: Add missing quic address in case of non-default port (fixes #6679) (#6703)
* Add quic listener in case of port blockage

* Update lib/config/config.go

Co-authored-by: Audrius Butkevicius <audrius.butkevicius@gmail.com>
2020-06-02 15:06:02 +01:00
Jakob Borg
c80e0bfc28 Merge branch 'release'
* release:
  cmd/syncthing: Correct auto upgrade criteria (fixes #6701) (#6702)
2020-06-02 12:09:14 +02:00
Jędrzej Kula
28d5c84599 gui, lib/config: Add GUI user and password notification (fixes #4703) (#6536) 2020-06-02 12:08:22 +02:00
Jakob Borg
d7c3d81dfb cmd/syncthing: Correct auto upgrade criteria (fixes #6701) (#6702) 2020-06-02 11:49:22 +02:00
Jakob Borg
b033c36b31 build: Add "changelog" command (#6700) 2020-06-02 11:40:45 +02:00
Jakob Borg
bfc9478965 cmd/syncthing: Correct auto upgrade criteria (fixes #6701) (#6702) 2020-06-02 11:38:39 +02:00
Jakob Borg
d9cb7e2739 lib/connections: Skip and warn on malformed URLs (fixes #6697) (#6699) 2020-06-02 11:19:51 +02:00
Simon Frei
1f8e6c55f6 lib/db: Refactor to use global list by version (fixes #6372) (#6638)
Group the global list of files by version, instead of having one flat list for all devices. This removes lots of duplicate protocol.Vectors.

Co-authored-by: Jakob Borg <jakob@kastelo.net>
2020-05-30 09:50:23 +02:00
Jakob Borg
1eea076f5c cmd/stindex: Detect and open Badger databases 2020-05-30 09:47:11 +02:00
Jakob Borg
94beed5c10 lib/db: Add Badger backend (fixes #5910) (#6250) 2020-05-29 13:43:02 +02:00
Simon Frei
ed6bfc5417 lib/model: Don't increase pull pause on triggered pull (#6690) 2020-05-29 09:52:28 +02:00
Jakob Borg
04ff890263 build: Clean up build.sh, add build.ps1 (#6689) 2020-05-28 12:42:15 +02:00
greatroar
9c0825c0d9 lib/scanner: Simplify, optimize and document Validate (#6674) (#6688) 2020-05-27 22:23:12 +02:00
Jakob Borg
f78133b8e9 lib/db: Adjust transaction flush sizes downwards (#6686)
This reduces the size of our write batches before we flush them. This
has two effects: reducing the amount of data lost if we crash when
updating the database, and reducing the amount of memory used when we do
large updates without checkpoint (e.g., deleting a folder).

I ran our SyncManyFiles benchmark as it is the one doing most
transactions; however, there was no relevant change in any metric (it's
limited by our fsync I expect). This is good as any visible change would
just be a decrease in performance.

I don't have a benchmark on deleting a large folder, taking that part on
trust for now...
2020-05-27 12:15:00 +02:00
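
The batching idea, sketched with an assumed Backend interface and a made-up threshold; the point is that the batch commits once its accumulated size crosses the limit:

```
package db

// Backend is a stand-in for the database write batch; names here are
// illustrative, not Syncthing's actual API.
type Backend interface {
	Put(key, val []byte) error
	Commit() error
}

// flushThreshold caps how much data we buffer before committing the batch.
// The real value is a tuning constant; this one is made up.
const flushThreshold = 128 << 10 // 128 KiB

type batchWriter struct {
	db      Backend
	pending int // bytes accumulated since the last flush
}

func (w *batchWriter) put(key, val []byte) error {
	if err := w.db.Put(key, val); err != nil {
		return err
	}
	w.pending += len(key) + len(val)
	if w.pending < flushThreshold {
		return nil
	}
	w.pending = 0
	// Committing earlier keeps memory bounded and limits how much is lost
	// if we crash mid-update.
	return w.db.Commit()
}
```
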
Simon Frei
b784f5b9e3 lib/config, lib/model: Commit auto-accepted folders all at once (#6684) 2020-05-27 08:05:26 +02:00
Jakob Borg
1c089a4d11 gui, man, authors: Update docs, translations, and contributors 2020-05-27 07:45:24 +02:00
greatroar
baa38eea7a lib/assets: Allow assets to remain uncompressed (#6661) 2020-05-25 08:51:27 +02:00
Simon Frei
c3b5eba205 lib/model: Fix checking children when trying to delete a dir (fixes #6646) (#6647) 2020-05-25 08:46:24 +02:00
Simon Frei
cf75329067 lib/model: Handle concurrently removed device in ClusterConfig (#6666) 2020-05-20 11:13:55 +02:00
Jakob Borg
8343db6766 Merge branch 'release'
* release:
  lib/db: Initialize need meta on first dev occurrence (fixes #6668) (#6669)
  lib/db: Don't panic on seq. corruption when debugging (#6662)
2020-05-20 11:05:53 +02:00
Simon Frei
8c74177699 lib/db: Initialize need meta on first dev occurrence (fixes #6668) (#6669) 2020-05-20 11:04:45 +02:00
Simon Frei
9a5f7fbadf lib/db: Don't panic on seq. corruption when debugging (#6662) 2020-05-20 11:04:38 +02:00
Simon Frei
8f344d0915 lib/db: Initialize need meta on first dev occurrence (fixes #6668) (#6669) 2020-05-20 11:01:27 +02:00
Jakob Borg
77dd874383 gui, man, authors: Update docs, translations, and contributors 2020-05-20 07:45:28 +02:00
Simon Frei
5b34c31cb3 lib/db: Don't panic on seq. corruption when debugging (#6662) 2020-05-18 10:42:51 +02:00
André Colomb
668979605b cmd/stindex: Add missing KeyType values in stindex dump code (#6659)
* cmd/stindex: Unify access to key from cached variable.

Avoid calling the Key() method from the iterator each time the value
is needed.  Just reuse the cache variable already assigned before the
switch block.

* cmd/stindex: Display the prefix byte value for unknown key types.

Make it easier to diagnose corrupt / unknown key type entries by
showing their decimal value, correlating with the definitions in
keyer.go.

* cmd/stindex: Add missing KeyType values in stindex dump code.

Recently added DB key prefixes KeyTypeBlockListMap and KeyTypeVersion
were unknown to the stindex dumping tool.  Add basic parsing to dump
their key structure.
2020-05-17 11:06:14 +02:00
Jakob Borg
5ffa012410 lib/model: Don't crash when taking rename shortcut (fixes #6654) (#6657)
If we fail to take the rename shortcut we may crash on a later loop,
because we do trickiness with the indexes but the original buckets[key]
in "range buckets[key]" isn't re-evaluated so i exceeds the max index.
2020-05-17 08:48:35 +01:00
Jakob Borg
5c54d879a1 cmd/syncthing: Don't crash when failing to create default config (fixes #6655) (#6658)
This is not an ignorable error, because it can happen if we fail to
allocate a free port for the GUI or sync port on first startup.
2020-05-17 07:56:24 +02:00
Jakob Borg
1c5af3a4bd Merge branch 'release'
* release:
  lib/model: Partial revert of rename fix (fixes #6653) (#6656)
2020-05-16 23:08:43 +02:00
Jakob Borg
22c222bf75 lib/model: Partial revert of rename fix (fixes #6653) (#6656)
We can't just drop the snap because it's in use elsewhere. This should
be equally functional.
2020-05-16 23:06:25 +02:00
Jakob Borg
651ee2ce74 lib/model: Partial revert of rename fix (fixes #6653) (#6656)
We can't just drop the snap because it's in use elsewhere. This should
be equally functional.
2020-05-16 23:05:33 +02:00
Jakob Borg
5c9dc4c883 Merge branch 'release'
* release:
  lib/model: Fix rename handling (ref #6650) (#6652)
  lib/db: Filter repeat files in one update (ref #6650) (#6651)
2020-05-16 14:42:11 +02:00
Audrius Butkevicius
258341f8bf lib/model: Fix rename handling (ref #6650) (#6652) 2020-05-16 14:40:20 +02:00
Simon Frei
f5ca213682 lib/db: Filter repeat files in one update (ref #6650) (#6651) 2020-05-16 14:40:20 +02:00
Audrius Butkevicius
a1c5b44c74 lib/model: Fix rename handling (ref #6650) (#6652) 2020-05-16 14:39:27 +02:00
Simon Frei
de9489585f lib/db: Filter repeat files in one update (ref #6650) (#6651) 2020-05-16 14:34:53 +02:00
Jakob Borg
438f687591 docker: Add tzdata for local time log entries 2020-05-16 11:34:46 +02:00
Shaarad Dalvi
7a8cc5fc99 docker: Improved health check for host networks (#6649) 2020-05-15 12:59:22 +02:00
xarx00
f05ccd775a lib/fs: Set execute bits on junctions converted to dirs (ref #6606) (#6645) 2020-05-14 08:09:58 +02:00
Simon Frei
e5cc55ce09 lib/model: Close conns when devices are removed (fixes #6564) (#6641) 2020-05-14 07:50:53 +02:00
Simon Frei
299b9d8883 lib/model: Adjust remote-rename-test to timer-based versions (fixes #6625) (#6644) 2020-05-14 00:31:05 +02:00
Simon Frei
074097a8e7 lib/fs: Prevent race-detector triggering in tests (fixes #6608) (#6642) 2020-05-14 00:30:09 +02:00
xarx00
ee445e35a0 lib/fs: Treat Windows junctions as normal directories (#6606)
Fixes #1830, presumably.
2020-05-13 21:46:24 +02:00
Jakob Borg
3ad049184e Merge branch 'release'
* release:
  lib/db: Don't add symlinks to blocks map (fixes #6637) (#6639)
2020-05-13 20:48:34 +02:00
Simon Frei
6768daec07 lib/db: Don't add symlinks to blocks map (fixes #6637) (#6639) 2020-05-13 20:47:05 +02:00
Simon Frei
974551375e lib/db: Don't add symlinks to blocks map (fixes #6637) (#6639) 2020-05-13 20:38:21 +02:00
Jakob Borg
531ceb2b0f Add indirection for large version vectors. (#6376)
This adds indirection of large version vectors in the same manner as we
already do for block lists. The effect is the same: less duplicated data in
some situations.

To mitigate the impact when this indirection isn't needed, I've added an
indirection cutoff for both blocks and
the new version vector stuff: we don't do the indirection at all for
small block lists or small version vectors, instead storing it directly
like we used to do. This is faster for small files and small setups.
2020-05-13 14:28:42 +02:00
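
A sketch of the cutoff logic with an invented store interface and cutoff value; small blobs stay inline, large ones are stored once under their SHA-256 and referenced by that key:

```
package db

import "crypto/sha256"

// indirectionCutoff is an assumed size limit: values smaller than this are
// stored inline, larger ones are stored once, keyed by hash.
const indirectionCutoff = 64

type store interface {
	Put(key, val []byte) error
}

// maybeIndirect stores blob under its SHA-256 key if it is large enough and
// returns the key to reference it by; small blobs are returned as-is with a
// nil key, meaning "keep it inline".
func maybeIndirect(s store, blob []byte) (key []byte, inline []byte, err error) {
	if len(blob) < indirectionCutoff {
		return nil, blob, nil // small: no indirection, faster for small files
	}
	h := sha256.Sum256(blob)
	if err := s.Put(h[:], blob); err != nil {
		return nil, nil, err
	}
	return h[:], nil, nil // large: deduplicated, referenced by hash
}
```
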
Simon Frei
ba4462a70b lib/db: Fix updating/removing from global (ref #6413) (#6635) 2020-05-13 12:56:49 +02:00
Jakob Borg
8419c05794 gui, man, authors: Update docs, translations, and contributors 2020-05-13 07:45:31 +02:00
NinoM4ster
c84f60f949 etc: Add RestartSec=5 to linux-systemd units
Might help avoid the 'Start request repeated too quickly.' error.

Fixes #6633, fixes #6634
2020-05-12 10:02:32 +02:00
Audrius Butkevicius
6201eebc98 lib/model: Add support for different puller block ordering (#6587)
* WIP

* Tests

* Header and format

* WIP

* Fix tests

* Goland you disappoint me

* Remove CC storage

* Update blockpullreorderer.go
2020-05-11 22:44:04 +01:00
Audrius Butkevicius
decb967969 all: Reorder sequences for better rename detection (#6574) 2020-05-11 20:15:11 +02:00
Jakob Borg
aac3750298 build: Enable hardened runtime in Mac codesigning
syncthing/syncthing-macos#111
2020-05-11 18:19:39 +02:00
Simon Frei
a94951becd lib/db, lib/model: Keep need stats in metadata (ref #5899) (#6413) 2020-05-11 15:07:06 +02:00
Audrius Butkevicius
7dc290c3ed lib/connections: React to listeners going up and down faster (#6590) 2020-05-11 15:02:22 +02:00
Jakob Borg
50faa8f7ef build: Check new assets location for rebuild 2020-05-11 12:20:21 +02:00
Simon Frei
5c87ceb392 build: Update .gitignores for new asset locs (ref #6624) (#6631) 2020-05-11 11:04:47 +01:00
dependabot-preview[bot]
6ffc8255b6 build(deps): bump github.com/lucas-clemente/quic-go (#6630)
Bumps [github.com/lucas-clemente/quic-go](https://github.com/lucas-clemente/quic-go) from 0.15.6 to 0.15.7.
- [Release notes](https://github.com/lucas-clemente/quic-go/releases)
- [Changelog](https://github.com/lucas-clemente/quic-go/blob/master/Changelog.md)
- [Commits](https://github.com/lucas-clemente/quic-go/compare/v0.15.6...v0.15.7)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>

Co-authored-by: dependabot-preview[bot] <27856297+dependabot-preview[bot]@users.noreply.github.com>
2020-05-11 11:04:23 +01:00
dependabot-preview[bot]
3354e60461 build(deps): bump github.com/go-ldap/ldap/v3 from 3.1.7 to 3.1.10 (#6629)
Bumps [github.com/go-ldap/ldap/v3](https://github.com/go-ldap/ldap) from 3.1.7 to 3.1.10.
- [Release notes](https://github.com/go-ldap/ldap/releases)
- [Commits](https://github.com/go-ldap/ldap/compare/v3.1.7...v3.1.10)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>

Co-authored-by: dependabot-preview[bot] <27856297+dependabot-preview[bot]@users.noreply.github.com>
2020-05-11 10:14:31 +01:00
greatroar
06365e5635 cmd/strelaypoolsrv, lib/api: Factor out static asset serving (#6624) 2020-05-10 11:44:34 +02:00
Audrius Butkevicius
da99203dcd lib/db: Fix non-truncated fileinfo loading (#6621) 2020-05-08 20:31:43 +01:00
Simon Frei
57ea8a1bf5 lib/model: Prevent panic on test failure (ref #6618) (#6620) 2020-05-08 17:56:49 +02:00
Jakob Borg
f72832d591 build: go.mod tidy 2020-05-08 16:50:17 +02:00
Jakob Borg
c0c18a568c lib/db: Hold the bloom filter the right way (fixes #6614) (#6617) 2020-05-08 14:18:00 +02:00
Shaarad Dalvi
b0b3abf76b docker: Remove health check (fixes #6604) (#6616)
Co-authored-by: Shaarad Dalvi <shdalv@microsoft.com>
2020-05-08 12:03:37 +02:00
Jakob Borg
c20ed80dc4 github: Update issue templates (#6610)
* github: Update issue templates

* wip

* wip
2020-05-07 12:23:00 +01:00
greatroar
e2febf246e all: Store assets as strings (#6611)
Storing assets as []byte requires every compiled-in asset to be copied
into writable memory at program startup. That currently takes up 1.6MB
per syncthing process. Strings stay in the RODATA section and should be
shared between processes running the same binary.
2020-05-07 11:47:23 +02:00
Simon Frei
2cdeb1bf70 lib/model: Spurious tmp file (ref #6607) (#6609) 2020-05-07 08:49:59 +02:00
Jakob Borg
876609a0f0 lib/model: Fix test after version vector changes (#6607) 2020-05-06 21:19:33 +02:00
Jakob Borg
744ef0d8ac lib/protocol: Avoid data loss on database wipe by higher version numbers (fixes #3876) (#6605)
This makes version vector values clock based instead of just incremented
from zero. The effect is that a vector that is created from scratch
(after database reset) will have a higher value for the local device
than what it could have been previously, causing a conflict. That is, if
we are A and we had

    {A: 42, B: 12}

in the old scheme, a reset and rescan would give us

    {A: 1}

which is a strict ancestor of the older file (this might be wrong). With
the new scheme we would instead have

    {A: someClockTime, B: otherClockTime}

and the new version after reset would become

    {A: someClockTime+delta}

which is in conflict with the previous entry (better).
In case the clocks are wrong (current time is less than the value in the
vector) we fall back to just simple increment like today.

This scheme is ineffective if we suffer a database reset while at the
same time setting the clock back far into the past. It's however no
worse than what we already do.

This loses the ability to emit the "added" event, as we can't look for
the magic 1 entry any more. That event was however already broken
(#5541).

Another place where we infer meaning from the vector itself is in
receive only folders, but there the only criteria is that the vector is
one item long and includes just ourselves, which remains the case with
this change.

* wip
2020-05-06 08:47:02 +02:00
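
The clock-based counter scheme described above, as a small sketch (not the actual lib/protocol code): the new value is the wall-clock time unless the clock is behind the previous value, in which case it falls back to a plain increment.

```
package protocol

import "time"

// updateCounter returns the next counter value for our device in a version
// vector: wall-clock based so that a rebuilt database still produces values
// that conflict with (rather than silently precede) older entries.
func updateCounter(prev uint64) uint64 {
	now := uint64(time.Now().Unix())
	if now > prev {
		return now
	}
	return prev + 1 // clock is behind: plain increment, like the old scheme
}
```
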
dependabot-preview[bot]
8d6fb86ee0 build(deps): bump github.com/greatroar/blobloom from 0.2.0 to 0.2.1 (#6600)
Bumps [github.com/greatroar/blobloom](https://github.com/greatroar/blobloom) from 0.2.0 to 0.2.1.
- [Release notes](https://github.com/greatroar/blobloom/releases)
- [Commits](https://github.com/greatroar/blobloom/compare/v0.2.0...v0.2.1)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>

Co-authored-by: dependabot-preview[bot] <27856297+dependabot-preview[bot]@users.noreply.github.com>
2020-05-06 08:35:37 +02:00
dependabot-preview[bot]
13c3dac89c build(deps): bump github.com/lucas-clemente/quic-go (#6599)
Bumps [github.com/lucas-clemente/quic-go](https://github.com/lucas-clemente/quic-go) from 0.15.5 to 0.15.6.
- [Release notes](https://github.com/lucas-clemente/quic-go/releases)
- [Changelog](https://github.com/lucas-clemente/quic-go/blob/master/Changelog.md)
- [Commits](https://github.com/lucas-clemente/quic-go/compare/v0.15.5...v0.15.6)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>

Co-authored-by: dependabot-preview[bot] <27856297+dependabot-preview[bot]@users.noreply.github.com>
2020-05-06 08:35:20 +02:00
Simon Frei
b5fc332782 lib/model: Merge add and start folder funcs and related refactor (#6594) 2020-05-06 08:34:54 +02:00
Jakob Borg
4a8a8b294d gui, man, authors: Update docs, translations, and contributors 2020-05-06 07:46:12 +02:00
Jakob Borg
92905d30e8 docker: Accept Go version as --build-arg 2020-05-04 12:45:36 +02:00
Simon Frei
914eb77ca4 lib/model: Don't include iolimiter wait into sync duration (#6593) 2020-05-04 08:43:35 +02:00
Tomasz Wilczyński
f560e8c850 gui: Prevent text overflow in file lists (fixes #6268) (#6292) 2020-05-02 19:34:28 +02:00
Simon Frei
2e3975e956 lib/model: Improve errors when deleting dirs (fixes #6575) (#6586) 2020-05-01 11:11:38 +02:00
Audrius Butkevicius
bd0c2bf237 lib/model: Do file recheck in folder loop (fixes #6583) (#6585) 2020-05-01 11:08:59 +02:00
Audrius Butkevicius
f86deedd9c lib/model: Progress emitter network operations don't need to be blocking (#6589)
* lib/model: Progress emitter network operations don't need to be blocking

* Do sends outside of the lock
2020-05-01 08:54:15 +01:00
Audrius Butkevicius
782bd08aad lib/model: Add option to disable fsync (#6588)
* lib/model: Add option to disable fsync

* Fix test

* Don't open stuff for no reason
2020-05-01 08:36:46 +01:00
Simon Frei
22242d51be lib/db: Refactor getting global lists (#6529)
* lib/db: Refactor getting global lists

* VL -> Versions

* keyBuf

* don't touch db migration
2020-05-01 08:30:20 +01:00
Audrius Butkevicius
ac7338f1f2 lib/connections: Update quic (#6591)
* lib/connections: Update quic

* Fix freebsd builds?

* Undo x/sys and gopsutil update

* Update quic_dial.go

* Update quic_listen.go
2020-05-01 08:14:28 +01:00
Jakob Borg
bdb25f9ba5 gui, man, authors: Update docs, translations, and contributors 2020-04-29 07:45:31 +02:00
Jakob Borg
0e2a07d71a lib/fs: Avoid dirty offset read in fakefs (fixes #6584) 2020-04-28 09:58:31 +02:00
dependabot-preview[bot]
5e1cd0e71a build(deps): bump github.com/greatroar/blobloom from 0.1.1 to 0.2.0 (#6580)
Bumps [github.com/greatroar/blobloom](https://github.com/greatroar/blobloom) from 0.1.1 to 0.2.0.
- [Release notes](https://github.com/greatroar/blobloom/releases)
- [Commits](https://github.com/greatroar/blobloom/compare/v0.1.1...v0.2.0)
2020-04-27 14:38:20 +02:00
MikolajTwarog
5224f07ac8 gui: Add "Pause All"/"Resume All" button for devices (fixes #6530) (#6549) 2020-04-27 00:18:05 +02:00
Jakob Borg
d9664a946d gui: Allow rescan on local additions (fixes #6578) (#6579) 2020-04-27 00:13:53 +02:00
Jakob Borg
8c61e0d6ab lib/config: Sort versioning options on marshal (fixes #6576) (#6577) 2020-04-27 00:13:35 +02:00
Jakob Borg
6c73617974 lib/model: Use semaphore to limit concurrent folder writes (fixes #6541) (#6573) 2020-04-27 00:13:18 +02:00
Jakob Borg
037934ec74 Merge branch 'release'
* release:
  gui: Fix regression on refreshNeed (fixes #6560, ref #6452) (#6561)
2020-04-22 20:41:03 +02:00
Jakob Borg
78a741d0be gui, man, authors: Update docs, translations, and contributors 2020-04-22 07:45:34 +02:00
Simon Frei
ebad9e2073 gui: Fix regression on refreshNeed (fixes #6560, ref #6452) (#6561) 2020-04-21 22:45:03 +02:00
Simon Frei
d68fa84055 gui: Fix regression on refreshNeed (fixes #6560, ref #6452) (#6561) 2020-04-21 22:42:59 +02:00
Simon Frei
d3ed4de4ed lib/model: Don't exit pullerRoutine on cancelled ctx (fixes #6559) (#6562)
* lib/model: Don't exit pullerRoutine on cancelled ctx (fixes #6559)

* actual fix
2020-04-21 18:55:14 +01:00
Simon Frei
6bbd24de12 lib/model: Refactor folder health/error handling (fixes #6557) (#6558) 2020-04-21 10:15:59 +02:00
Boqin Qin
c63ca4f563 lib/protocol, rc, utils: Add mutex Unlock before panic (#6556) 2020-04-20 14:52:16 +02:00
greatroar
0e5ba3ca05 lib/db: Upgrade to Blobloom v0.1.1 (#6553)
Now faster and Apache-licensed.
2020-04-20 14:23:36 +02:00
greatroar
44b0f0b456 lib/db: Switch to faster blobloom Bloom filter pkg (#6537) 2020-04-20 09:02:33 +02:00
MikolajTwarog
4aa2199d5b lib/connections: Accept new connections in place of old ones (fixes #5224) (#6548) 2020-04-20 08:23:38 +02:00
Simon Frei
49798552f2 lib/model: Delay watch setup on errors (fixes #5731) (#6544) 2020-04-17 17:43:42 +02:00
Jakob Borg
171b8139ab lib/model: Fix logging placeholder 2020-04-16 15:42:45 +02:00
Jakob Borg
7fa699e159 build, lib/build: Build faster (#6538)
This changes the build script to build all the things in one go
invocation, instead of one invocation per cmd. This is a lot faster
because it means more things get compiled concurrently. It's especially
a lot faster when things *don't* need to be rebuilt, possibly because it
only needs to build the dependency map and such once instead of once per
binary.

In order for this to work we need to be able to pass the same ldflags to
all the binaries. This means we can't set the program name with an
ldflag.

When it needs to rebuild everything (go clean -cache):

    ( ./old-build -gocmd go1.14.2 build all 2> /dev/null; )  65.82s user 11.28s system 574% cpu 13.409 total
    ( ./new-build -gocmd go1.14.2 build all 2> /dev/null; )  63.26s user 7.12s system 1220% cpu 5.766 total

On a subsequent run (nothing to build, just link the binaries):

    ( ./old-build -gocmd go1.14.2 build all 2> /dev/null; )  26.58s user 7.53s system 582% cpu 5.853 total
    ( ./new-build -gocmd go1.14.2 build all 2> /dev/null; )  18.66s user 2.45s system 1090% cpu 1.935 total
2020-04-16 10:09:33 +02:00
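
A sketch of the single-invocation approach, with placeholder package paths and ldflags; it assumes a Go toolchain where -o may name a directory:

```
package main

import (
	"os"
	"os/exec"
)

// One "go build" invocation for all binaries instead of one per cmd, so
// shared dependencies are compiled once and in parallel.
func main() {
	if err := os.MkdirAll("bin", 0o755); err != nil {
		os.Exit(1)
	}
	pkgs := []string{"./cmd/syncthing", "./cmd/stdiscosrv", "./cmd/strelaysrv"}
	args := append([]string{"build", "-o", "bin/", "-ldflags", "-w -s"}, pkgs...)
	cmd := exec.Command("go", args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
```
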
Jakob Borg
5373e38ac8 cmd/ursrv: Filter out ancient versions from chart 2020-04-16 09:13:01 +02:00
Jakob Borg
41ef945b2b gui, man, authors: Update docs, translations, and contributors 2020-04-15 07:45:24 +02:00
greatroar
81ff31b8fc lib/model: Harden sanitizePath (#6533)
In particular, non-printable runes and non-UTF8 sequences are no longer
allowed. Linux systems will happily create filenames containing these.
2020-04-14 20:26:26 +02:00
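
A simplified sketch of such sanitising (not the real sanitizePath), dropping invalid UTF-8 and non-printable runes using the standard library:

```
package model

import (
	"strings"
	"unicode"
	"unicode/utf8"
)

// sanitizeName drops invalid UTF-8 sequences and non-printable runes from a
// file name.
func sanitizeName(name string) string {
	if !utf8.ValidString(name) {
		name = strings.ToValidUTF8(name, "")
	}
	return strings.Map(func(r rune) rune {
		if !unicode.IsPrint(r) {
			return -1 // drop non-printable runes
		}
		return r
	}, name)
}
```
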
greatroar
82fbcb96f8 lib/db: Remove unused blockFinder global (#6534) 2020-04-14 17:09:59 +02:00
Jakob Borg
37ede49077 build: Remove snap build machinery (#6532) 2020-04-14 14:20:44 +02:00
Simon Frei
0ba3abdee4 lib/db: Handle missed error variable in old schema upgrade (#6528) 2020-04-13 22:58:04 +02:00
Simon Frei
ab92f8520c cmd/syncthing, lib/db: Store upgrade info to throttle queries (fixes #6513) (#6514) 2020-04-13 10:21:07 +02:00
Jakob Borg
0e67c036bb lib/db: Make database GC a service, stop on Stop() (#6518)
This makes the GC runner a service that will stop fairly quickly when
told to.

As a bonus, STTRACE=app will print the service tree on the way out,
including any errors they've flagged.
2020-04-12 10:26:57 +02:00
Jakob Borg
046bbdfbd4 Merge branch 'release'
* release:
  lib/db: Don't get blocklists on drop and missing continue (ref #6457) (#6502)
  Revert "cmd/syncthing: Do auto-upgrade before startup (fixes #6384) (#6385)"
  lib/ur: Correct freaky error handling (fixes #6499) (#6500)
2020-04-08 10:49:59 +02:00
Jakob Borg
c6c74e8291 gui, man, authors: Update docs, translations, and contributors 2020-04-08 07:45:32 +02:00
Jakob Borg
59b1b0e1dc lib/osutil: Fix "atomic" rename on Windows (ref #6495, ref #6493) (#6510)
So, in a funny plot twist, it turns out that WriteFile in Go 1.13
doesn't actually set the read only bit on Windows when called with
permissions 0444 so my test was broken. With an improved test it turns
out that Rename does not, in fact, overwrite a read-only file on
Windows. This adds a fix for that.

(Rename might get improved in Go 1.15...)
2020-04-07 15:38:55 +02:00
Jakob Borg
4b17c511f9 cmd/stcrashreceiver: Enable (rough) affected users count (#6511)
Seeing thousands of reports is no use when we don't know if they
represent one poor user or thousands.
2020-04-07 13:19:49 +02:00
Simon Frei
0f532a5607 lib/db: Don't get blocklists on drop and missing continue (ref #6457) (#6502) 2020-04-07 13:14:03 +02:00
Simon Frei
df318ed370 lib/db: Don't get blocklists on drop and missing continue (ref #6457) (#6502) 2020-04-07 13:13:18 +02:00
Jakob Borg
0275cbd66a Revert "cmd/syncthing: Do auto-upgrade before startup (fixes #6384) (#6385)"
This reverts commit c101a04179.
2020-04-07 12:55:25 +02:00
Jakob Borg
670a9809fa lib/ur: Correct freaky error handling (fixes #6499) (#6500) 2020-04-07 12:54:06 +02:00
Simon Frei
07ce3572a0 lib/ignore: Only skip for toplevel includes (fixes #6487) (#6508) 2020-04-07 10:23:38 +02:00
Jakob Borg
b64052bc26 Revert "build: Go 1.14 for the Debian etc builds"
This reverts commit d400e51422.
2020-04-07 09:37:39 +02:00
Jakob Borg
7956e7d0ef lib/{fs,scanner}: gofmt from Go 1.14 (#6509) 2020-04-07 09:31:29 +02:00
Jakob Borg
d400e51422 build: Go 1.14 for the Debian etc builds 2020-04-07 08:22:53 +02:00
greatroar
674a99e9ae cmd/strelaypoolsrv: Simplify LRU usage (#6507) 2020-04-06 12:43:56 +02:00
Jakob Borg
88cabb9e0a lib/ur: Correct freaky error handling (fixes #6499) (#6500) 2020-04-06 09:53:37 +02:00
greatroar
b7ba401c0b cmd/strelaypoolsrv: Fix race condition in caching (#6496)
Successful LRU cache lookups modify the cache's recency list, so
RWMutex.RLock isn't enough protection.

Secondarily, multiple concurrent lookups with the same key should not
create separate rate limiters, so release the lock only when presence
of the key in the cache has been ascertained.

Co-authored-by: greatroar <@>
2020-04-04 20:20:25 +01:00
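
A sketch of the locking pattern, using github.com/hashicorp/golang-lru for the LRU; the full mutex covers both the recency-list mutation and the existence check, so concurrent lookups can't create two limiters for the same key:

```
package pool

import (
	"sync"

	lru "github.com/hashicorp/golang-lru"
)

// limiterCache guards an LRU of per-key rate limiters. Lookups mutate the
// LRU's recency list, so a full Mutex is used rather than RWMutex.RLock.
type limiterCache struct {
	mut sync.Mutex
	lru *lru.Cache
}

func newLimiterCache(size int) (*limiterCache, error) {
	l, err := lru.New(size)
	if err != nil {
		return nil, err
	}
	return &limiterCache{lru: l}, nil
}

func (c *limiterCache) get(key string, newLimiter func() interface{}) interface{} {
	c.mut.Lock()
	defer c.mut.Unlock()
	if v, ok := c.lru.Get(key); ok {
		return v
	}
	v := newLimiter()
	c.lru.Add(key, v)
	return v
}
```
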
Jakob Borg
7505ea79a0 lib/osutil: Don't remove before rename on Windows (ref #6493) (#6495)
This was needed in ancient times but not currently.
2020-04-04 14:17:16 +02:00
Kevin Bushiri
e1324a0e23 cmd/strelaypoolsrv: Use OpenStreetMap (fixes #6150) (#6459) 2020-04-04 13:48:20 +02:00
Jakob Borg
b7b9476e5a cmd/strelaysrv: Harmonize and improve log output (ref #6492) 2020-04-04 13:31:42 +02:00
Jakob Borg
d1db7e3dd2 cmd/strelaypoolsrv: Configurable request processors & queue len 2020-04-04 13:31:42 +02:00
Jakob Borg
362da59396 cmd/strelaypoolsrv: Expose check error to client, fix incorrect response code handling 2020-04-04 13:31:42 +02:00
Jakob Borg
66262392c3 cmd/strelaypoolsrv: Correctly account status codes, tweak status codes 2020-04-04 13:31:42 +02:00
Jakob Borg
48f9d323fa lib/api: Add LDAP search filters (fixes #5376) (#6488)
This adds the functionality to run a user search with a filter for LDAP
authentication. The search is done after successful bind, as the binding
user. The typical use case is to limit authentication to users who are
member of a group or under a certain OU. For example, to only match
users in the "Syncthing" group in otherwise default Active Directory
set up for example.com:

    <searchBaseDN>CN=Users,DC=example,DC=com</searchBaseDN>
    <searchFilter>(&amp;(sAMAccountName=%s)(memberOf=CN=Syncthing,CN=Users,DC=example,DC=com))</searchFilter>

The search filter is an "and" of two criteria (with the ampersand being
XML quoted),

- "(sAMAccountName=%s)" matches the user logging in
- "(memberOf=CN=Syncthing,CN=Users,DC=example,DC=com)" matches members
  of the group in question.

Authentication will only proceed if the search filter matches precisely
one user.
2020-04-04 11:33:43 +02:00
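
A hedged sketch of the bind-then-search flow using github.com/go-ldap/ldap/v3; host, DNs and the filter are placeholders based on the example above:

```
package ldapauth

import (
	"fmt"

	ldap "github.com/go-ldap/ldap/v3"
)

// authenticate binds as the user, then runs the configured search filter as
// that user; it succeeds only if the filter matches exactly one entry.
func authenticate(username, password string) error {
	conn, err := ldap.DialURL("ldaps://ldap.example.com:636")
	if err != nil {
		return err
	}
	defer conn.Close()

	// Real code must escape the DN components properly.
	bindDN := fmt.Sprintf("CN=%s,CN=Users,DC=example,DC=com", username)
	if err := conn.Bind(bindDN, password); err != nil {
		return err // wrong credentials
	}

	filter := fmt.Sprintf("(&(sAMAccountName=%s)(memberOf=CN=Syncthing,CN=Users,DC=example,DC=com))",
		ldap.EscapeFilter(username))
	req := ldap.NewSearchRequest(
		"CN=Users,DC=example,DC=com",
		ldap.ScopeWholeSubtree, ldap.NeverDerefAliases, 0, 0, false,
		filter, nil, nil,
	)
	res, err := conn.Search(req)
	if err != nil {
		return err
	}
	if len(res.Entries) != 1 {
		return fmt.Errorf("expected exactly one match, got %d", len(res.Entries))
	}
	return nil
}
```
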
Simon Frei
f69c0b550c gui: Refactor out-of-sync modal (#6452) 2020-04-02 16:18:41 +02:00
Simon Frei
32245435e2 lib/model: Handle deleted items on RO for remote removes (fixes #6432) (#6464) 2020-04-02 16:14:25 +02:00
Jakob Borg
2658369051 gui: Expose LDAP config in advanced config editor (#6489)
One tiny step friendlier than vi on config.xml
2020-04-02 12:07:17 +02:00
Simon Mwepu
d50adb225b gui: Avoid validation error on closing folder editor (fixes #3808) (#6478) 2020-04-02 08:20:03 +02:00
Jakob Borg
7da898f2d6 go.mod: Use github.com/twmb/murmur3 for murmur3 (#6486)
Let's try this again shall we
2020-04-01 23:51:31 +02:00
Jakob Borg
0d919bd79c Revert "go.mod: Use github.com/twmb/murmur3 for murmur3"
"I shall not commit to master without testing all architectures on the
builder" * 100 on the black board
2020-04-01 21:16:15 +02:00
Jakob Borg
f91e90a94f go.mod: Use github.com/twmb/murmur3 for murmur3
It seems, like, maintained and stuff.
2020-04-01 21:04:43 +02:00
Jakob Borg
7ce20f197b gui, man, authors: Update docs, translations, and contributors 2020-04-01 07:45:33 +02:00
greatroar
2f26a95973 lib/db, lib/model: Code simplifications (#6484)
NamespacedKV.prefixedKey is still small enough to be inlined.
2020-03-31 14:32:24 +02:00
Simon Frei
123941cf62 lib/fs: Basic with empty root shouldn't point at / (#6483) 2020-03-31 13:56:07 +02:00
Jakob Borg
9c67d57c28 lib/api: Update ldap package (fixes #6479) (#6481) 2020-03-31 09:56:04 +02:00
mv1005
5b3466dc6e lib/versioner: Extended tests of intervals (#6462) 2020-03-31 00:14:05 +02:00
Michael Rienstra
bca6854c03 lib/versioner: Fix "30 days" interval (fixes #6410) (#6461) 2020-03-30 23:28:53 +02:00
greatroar
c930b2e9e2 lib/rand: Various fixes (#6474) 2020-03-30 23:26:28 +02:00
greatroar
d7a257b391 lib/util: Fix potential data race (#6477)
Co-authored-by: greatroar <@>
2020-03-30 14:10:08 +01:00
Alberto Donato
7709ac33a7 go.mod: Update jackpal/gateway dependency (fixes #5288) (#6469) 2020-03-30 11:11:55 +02:00
greatroar
1e2379df1b lib/protocol: Faster Luhn algorithm and better testing (#6475)
The previous implementation was very generic; its tests didn't cover the
actual alphabet for device IDs.

Benchmark results on amd64:

name         old time/op    new time/op     delta
Luhnify-8      1.00µs ± 1%     0.28µs ± 4%   -72.38%  (p=0.000 n=9+10)
Unluhnify-8     992ns ± 2%      274ns ± 1%   -72.39%  (p=0.000 n=10+9)
2020-03-29 22:28:04 +02:00
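
For reference, a straightforward Luhn mod N check-character computation specialised to the device ID alphabet (a sketch of the algorithm, not the optimised implementation referenced above):

```
package luhn

import "strings"

// base32Alphabet is the character set used in Syncthing device IDs.
const base32Alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"

// checkChar computes the Luhn mod N check character for s over the fixed
// base32 alphabet. The second return value is false if s contains a
// character outside the alphabet.
func checkChar(s string) (byte, bool) {
	n := len(base32Alphabet)
	factor := 2
	sum := 0
	for i := len(s) - 1; i >= 0; i-- {
		codePoint := strings.IndexByte(base32Alphabet, s[i])
		if codePoint < 0 {
			return 0, false
		}
		addend := factor * codePoint
		factor = 3 - factor // alternate 2, 1, 2, 1, ...
		sum += addend/n + addend%n
	}
	check := (n - sum%n) % n
	return base32Alphabet[check], true
}
```
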
greatroar
ea5c9176e1 lib/protocol: Remove unused channel Connection.preventSends (#6473)
Co-authored-by: greatroar <@>
2020-03-29 17:09:53 +01:00
greatroar
cc1b003f21 lib/weakhash: Fix speed reporting in benchmark (#6470) 2020-03-29 17:07:25 +02:00
Jakob Borg
38bd90e6f2 build: Simplify/correct Windows version tagging (fixes #6471) (#6472) 2020-03-29 16:51:50 +02:00
greatroar
1c47fae206 lib/ur: Use sysctl syscall to get RAM size on Mac (#6468) 2020-03-29 14:28:46 +02:00
Simon Frei
79a758be3c lib/model: Do Revert/Override synchronously (#6460) 2020-03-27 13:05:09 +01:00
Simon Frei
c7cf3ef899 lib/syncthing: Save version to db after upgrade ops are done (ref #6457) (#6458) 2020-03-26 16:58:21 +01:00
Jakob Borg
2c2e6cd0d5 cmd/ursrv: Minor heatmap tweaks 2020-03-26 15:19:05 +01:00
Simon Frei
b7dffc051e lib/model: Remove unused func (#6456) 2020-03-26 14:19:26 +01:00
Kevin Bushiri
963e9a4071 cmd/ursrv: Use OpenStreetMap and Leaflet for heat map (ref #6150) (#6454) 2020-03-26 12:32:14 +01:00
Jakob Borg
b28899ac07 cmd/ursrv: Provide cached locations.json 2020-03-25 14:19:35 +01:00
Jakob Borg
83f6da8dca authors: Fixup keevBush 2020-03-25 10:03:23 +01:00
Jakob Borg
1a29296d9d gui, man, authors: Update docs, translations, and contributors 2020-03-25 07:45:35 +01:00
Simon Frei
a7de4c68e3 go.mod: Update quic-go to 0.14.4 (#6453) 2020-03-24 21:12:57 +01:00
Simon Frei
7f23de4f03 all: Pass db intervals as args not env vars (#6448) 2020-03-24 13:53:20 +01:00
Jakob Borg
ca89f12be6 lib/api: Set ServerName on LDAPS connections (fixes #6450) (#6451)
tls.Dial needs it for certificate verification.
2020-03-24 12:56:43 +01:00
Simon Frei
ddfa82e990 lib/model: Unset local flag on deleted files (fixes #6436) (#6449) 2020-03-24 12:51:17 +01:00
Nicolas Perraut
61302c467c gui: Improve unused device status (fixes #6416) (#6445) 2020-03-22 19:30:18 +01:00
Simon Frei
d6b4873eed gui, lib/model: Fix for folder stats with r-o and ignoreDel (fixes #6430) (#6431) 2020-03-22 11:46:42 +01:00
Jakob Borg
1ea98a16b1 cmd/syncthing: Don't open browser on upgrade restarts (fixes #6437) (#6442)
We set the STRESTART environment variable when starting the inner process after
the first time, but this didn't persist when restarting the monitor
process. Now it does.
2020-03-22 11:39:09 +01:00
Jakob Borg
e2f3500df9 cmd/syncthing: Properly handle STNORESTART=1 (fixes #6440) (#6441)
Makes it truly equivalent to -no-restart, and also updates the option
descriptions to be more truthful.
2020-03-22 11:38:53 +01:00
Jakob Borg
8b025af1e5 Merge branch 'release'
* release:
  lib/db: Don't whack blocks when putting truncated file (#6434)
2020-03-20 12:08:16 +01:00
Jakob Borg
f1b253fc00 lib/db: Don't whack blocks when putting truncated file (#6434)
As of the latest database checker we are again putting files without
blocks. I'm not 100% convinced that's a great idea, but we also do it
for ignored files apparently so it looks like we probably should support
it. This adds an escape hatch that must be manually enabled...
2020-03-20 12:07:29 +01:00
Jakob Borg
c4abe6f815 lib/db: Don't whack blocks when putting truncated file (#6434)
As of the latest database checker we are again putting files without
blocks. I'm not 100% convinced that's a great idea, but we also do it
for ignored files apparently so it looks like we probably should support
it. This adds an escape hatch that must be manually enabled...
2020-03-20 12:07:14 +01:00
Jakob Borg
4c5e9cf921 Merge branch 'release'
* release:
  lib/db, lib/syncthing: Repair db once on upgrade (ref #6425, #6427) (#6429)
  lib/db: Fix removeFromGlobal and no filenames in error (fixes #6427) (#6428)
  lib/db: Remove emptied global list in checkGlobals (fixes #6425) (#6426)
2020-03-19 16:05:39 +01:00
Simon Frei
b33d5e57c6 lib/db, lib/syncthing: Repair db once on upgrade (ref #6425, #6427) (#6429) 2020-03-19 16:00:05 +01:00
Simon Frei
0060840249 lib/db: Fix removeFromGlobal and no filenames in error (fixes #6427) (#6428) 2020-03-19 16:00:05 +01:00
Simon Frei
71faae67f2 lib/db: Remove emptied global list in checkGlobals (fixes #6425) (#6426) 2020-03-19 15:59:49 +01:00
Simon Frei
74706bb02b lib/db, lib/syncthing: Repair db once on upgrade (ref #6425, #6427) (#6429) 2020-03-19 15:58:32 +01:00
Kevin Bushiri
5975772ed8 cmd/stdiscosrv: Only generate keypair if it doesn't exist (fixes #5809) (#6419) 2020-03-19 14:50:24 +01:00
Simon Frei
cf11fa4327 lib/db: Fix removeFromGlobal and no filenames in error (fixes #6427) (#6428) 2020-03-19 14:32:22 +01:00
Simon Frei
40580d8b9b lib/db: Remove emptied global list in checkGlobals (fixes #6425) (#6426) 2020-03-19 14:30:20 +01:00
Simon Frei
e25e71cdde cmd/syncthing, lib/locations: Separate data and config dirs (fixes #4924) (#6309) 2020-03-18 20:58:11 +01:00
Simon Frei
00b2340f9a lib/db: Checkpoint during schema updates (fixes #6422) (#6424) 2020-03-18 20:33:43 +01:00
Simon Frei
cc2a55892f lib: Repair sequence inconsistencies (#6367) 2020-03-18 17:34:46 +01:00
Jakob Borg
80107d5f5e lib/config: Correct spelling of address in LDAP config (#6420)
Literally no one uses this, so I don't see a need to call this out or
trigger a 1.5 release for it.
2020-03-18 10:44:00 +00:00
Mario Majila
f10e85d0c2 gui: Display folder name in modal (fixes #5380) (#6407) 2020-03-18 11:13:58 +01:00
Kevin Bushiri
f4a6e4439a gui: Add folder name for restore version modal (fixes #5380) (#6418) 2020-03-18 11:11:40 +01:00
Jakob Borg
2ae3ea0d52 gui, man, authors: Update docs, translations, and contributors 2020-03-18 07:45:30 +01:00
Alex Xu
1e68ab3f90 lib/beacon: Only send to appropriately flagged interfaces (ref #5957) (#6404)
Saves some traffic (potential mobile wakeups); may help with #5957.
2020-03-17 09:40:37 +01:00
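
A sketch of the interface filtering using the standard net package flags (illustrative, not the actual lib/beacon code):

```
package beacon

import "net"

// eligibleInterfaces returns interfaces that are up and support broadcast
// or multicast, skipping the rest to avoid useless sends.
func eligibleInterfaces() ([]net.Interface, error) {
	intfs, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	var out []net.Interface
	for _, intf := range intfs {
		if intf.Flags&net.FlagUp == 0 {
			continue
		}
		if intf.Flags&(net.FlagBroadcast|net.FlagMulticast) == 0 {
			continue
		}
		out = append(out, intf)
	}
	return out, nil
}
```
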
Simon Mwepu
f08d09f607 gui: Display device name in modal (fixes #5380) (#6405) 2020-03-17 08:23:45 +01:00
Jakob Borg
e053db6a5e lib/protocol: Zero pad index ID strings 2020-03-17 07:40:52 +01:00
Simon Frei
1bd4ea0cbb lib/db: Don't ignore failures unmarshaling version lists (#6411) 2020-03-16 10:09:27 +01:00
Simon Frei
a1cb1d70c4 lib/db: Use need func in withNeed and simplify (#6412) 2020-03-16 08:45:45 +01:00
Simon Frei
c101a04179 cmd/syncthing: Do auto-upgrade before startup (fixes #6384) (#6385) 2020-03-16 08:12:32 +01:00
Simon Frei
16698b12b1 lib/db: Extend set test with second remote (#6402) 2020-03-11 08:15:45 +01:00
Jakob Borg
0bc571b2fd gui, man, authors: Update docs, translations, and contributors 2020-03-11 07:45:28 +01:00
Jakob Borg
20aaa5927b lib/protocol: Use BlocksHash to compare block lists when available (#6401)
This is an optimization for faster equal checks on block lists.
2020-03-10 14:46:49 +01:00
Jakob Borg
d612c35290 lib/api: Ignore that one file that always shows up in git status 2020-03-07 11:46:54 +01:00
Jakob Borg
5ab257fb60 Merge branch 'release'
* release:
  lib/db: Be more lenient during migration (fixes #6397) (#6398)
2020-03-06 20:53:55 +01:00
Jakob Borg
db02545ef3 lib/db: Be more lenient during migration (fixes #6397) (#6398) 2020-03-06 20:52:22 +01:00
Jakob Borg
2faa1ad360 lib/db: Be more lenient during migration (fixes #6397) (#6398) 2020-03-06 20:50:55 +01:00
Jakob Borg
860ae7f395 cmd/ursrv: Analytics for Synology dist 2020-03-06 07:46:11 +01:00
Jakob Borg
135c71ca87 build: Build image should use Go 1.13 for now 2020-03-05 11:53:07 +01:00
Jakob Borg
c7d6a6d780 gui, lib/api: Remove CPU & RAM measurements (fixes #6249) (#6393) 2020-03-04 20:27:48 +01:00
Jakob Borg
92533dd9f0 gui, man, authors: Update docs, translations, and contributors 2020-03-04 07:45:31 +01:00
Jakob Borg
dd92b2b8f4 all: Tweak error creation (#6391)
- In the few places where we wrap errors, use the new Go 1.13 "%w"
  construction instead of %s or %v.

- Where we create errors with constant strings, consistently use
  errors.New and not fmt.Errorf.

- Remove capitalization from errors in the few places where we had that.
2020-03-03 22:40:00 +01:00
Jakob Borg
eddc8d3ff2 authors: Cleanup on request 2020-03-02 16:31:29 +01:00
Jakob Borg
dfdd5af7a6 build: We can now use Go 1.13 2020-03-01 12:59:49 +01:00
Jakob Borg
6b5c281dd5 Merge branch 'release'
* release:
  lib/db: Prevent GC concurrently with migration (fixes #6389) (#6390)
  build: Fix syso creation (fixes #6386) (#6387)
2020-02-29 19:58:49 +01:00
Jakob Borg
52e72e0122 lib/db: Prevent GC concurrently with migration (fixes #6389) (#6390) 2020-02-29 19:51:48 +01:00
Jakob Borg
c08e253e7c lib/db: Prevent GC concurrently with migration (fixes #6389) (#6390) 2020-02-29 19:51:32 +01:00
Evgeny Kuznetsov
d1e0a38c04 build: Fix syso creation (fixes #6386) (#6387) 2020-02-29 19:48:42 +01:00
Evgeny Kuznetsov
ac19cdb2cd build: Fix syso creation (fixes #6386) (#6387) 2020-02-28 20:40:14 +01:00
Jakob Borg
58607486af Merge branch 'release'
* release:
  lib/db: Correct metadata recalculation (fixes #6381) (#6382)
2020-02-28 11:21:51 +01:00
Jakob Borg
0b610017ea lib/db: Correct metadata recalculation (fixes #6381) (#6382)
If we decide to recalculate the metadata we shouldn't start from
whatever we loaded from the database, as that data is wrong. We should
start from a clean slate.
2020-02-28 11:17:02 +01:00
Jakob Borg
5de6f6d349 lib/db: Correct metadata recalculation (fixes #6381) (#6382)
If we decide to recalculate the metadata we shouldn't start from
whatever we loaded from the database, as that data is wrong. We should
start from a clean slate.
2020-02-28 11:16:33 +01:00
Jakob Borg
daf05c6509 Merge branch 'release'
* release:
  lib/db: Remove reference to env var that never existed
  lib/db: Slightly improve indirection (ref #6372) (#6373)
2020-02-27 11:24:18 +01:00
Jakob Borg
9a1df97c69 lib/db: Remove reference to env var that never existed 2020-02-27 11:22:09 +01:00
Jakob Borg
ee61da5b6a lib/db: Slightly improve indirection (ref #6372) (#6373)
I was working on indirecting version vectors, and that resulted in some
refactoring and improving the existing block indirection stuff. We may
or may not end up doing the version vector indirection, but I think
these changes are reasonable anyhow and will simplify the diff
significantly if we do go there. The main points are:

- A bunch of renaming to make the indirection and GC not about "blocks"
  but about "indirection".

- Adding a cutoff so that we don't actually indirect for small block
  lists. This gets us better performance when handling small files as it
  cuts out the indirection for quite small loss in space efficiency.

- Being paranoid and always recalculating the hash on put. This costs
  some CPU, but the consequences if a buggy or malicious implementation
  silently substituted the block list by lying about the hash would be bad.
2020-02-27 11:22:01 +01:00
Jakob Borg
883497966e lib/db: Remove reference to env var that never existed 2020-02-27 11:21:35 +01:00
Jakob Borg
4f7a77597e lib/db: Slightly improve indirection (ref #6372) (#6373)
I was working on indirecting version vectors, and that resulted in some
refactoring and improving the existing block indirection stuff. We may
or may not end up doing the version vector indirection, but I think
these changes are reasonable anyhow and will simplify the diff
significantly if we do go there. The main points are:

- A bunch of renaming to make the indirection and GC not about "blocks"
  but about "indirection".

- Adding a cutoff so that we don't actually indirect for small block
  lists. This gets us better performance when handling small files as it
  cuts out the indirection for quite small loss in space efficiency.

- Being paranoid and always recalculating the hash on put. This costs
  some CPU, but the consequences if a buggy or malicious implementation
  silently substituted the block list by lying about the hash would be bad.
2020-02-27 11:19:21 +01:00
Jakob Borg
c4b9046eaa build: Forked github.com/spaolacci/murmur3 for unsafe (ref #6371) 2020-02-26 20:25:24 +01:00
Simon Frei
299a80d328 cmd/syncthing: Do not truncate/rotate logs at start (#6359) 2020-02-26 13:49:03 +01:00
Jakob Borg
4e4b9a872a lib/dialer: Preserve nilness in error handling (fixes #6368) (#6369)
Also fix the call site, which shouldn't be looking at the conn when the
err is non-nil anyway. 2020-02-26 13:16:18 +01:00
2020-02-26 13:16:18 +01:00
Simon Frei
cb624dbf5d cmd/syncthing: Add indication that reset db happened (#6364) 2020-02-26 12:38:43 +01:00
Audrius Butkevicius
71aecc5cd4 lib/dialer: Bring back address faking connection (fixes #6289) (#6363) 2020-02-26 12:37:23 +01:00
Jakob Borg
10af09e4b4 gui, man, authors: Update docs, translations, and contributors 2020-02-26 07:45:29 +01:00
Simon Frei
680b0b14db lib/connections: Refactor status for testing (ref #6361) (#6362) 2020-02-25 21:18:31 +01:00
Jakob Borg
55238e3b5b lib/connections: Actually record connection errors (#6361) 2020-02-25 16:56:24 +01:00
Simon Frei
f0e33d052a lib: More contextification (#6343) 2020-02-24 21:57:15 +01:00
Jakob Borg
7b8622c2e9 build: Simplify build image for snaps 2020-02-23 08:40:42 +01:00
Jakob Borg
40e1835927 Merge branch 'release'
* release:
  lib/db: Allow put partial FileInfo without blocks (ref #6353)
  lib/db: Don't panic on incorrect BlocksHash (fixes #6353) (#6355)
2020-02-22 19:17:38 +01:00
Jakob Borg
a5e12a0a3d lib/db: Allow put partial FileInfo without blocks (ref #6353) 2020-02-22 17:49:23 +01:00
Jakob Borg
10cb14fcb8 lib/db: Allow put partial FileInfo without blocks (ref #6353) 2020-02-22 17:44:34 +01:00
Simon Frei
4f29180e7c lib/db: Don't panic on incorrect BlocksHash (fixes #6353) (#6355) 2020-02-22 16:52:34 +01:00
Simon Frei
32e12abb43 lib/db: Don't panic on incorrect BlocksHash (fixes #6353) (#6355) 2020-02-22 16:51:23 +01:00
Jakob Borg
4cc1b7f42c Merge branch 'release'
* release:
  lib/db: Schema update to repair sequence index (ref #6304) (#6350)
  lib: Modify FileInfos consistently (ref #6321) (#6349)
2020-02-22 09:40:27 +01:00
Simon Frei
0fb2cd52ff lib/db: Schema update to repair sequence index (ref #6304) (#6350) 2020-02-22 09:37:21 +01:00
Simon Frei
6489feb1d7 lib/db: Schema update to repair sequence index (ref #6304) (#6350) 2020-02-22 09:36:59 +01:00
Simon Frei
a4bd4d118a lib: Modify FileInfos consistently (ref #6321) (#6349) 2020-02-22 09:31:26 +01:00
Simon Frei
fae7425bbf lib: Modify FileInfos consistently (ref #6321) (#6349) 2020-02-19 16:58:09 +01:00
Jakob Borg
7b5551248a gui, man, authors: Update docs, translations, and contributors 2020-02-19 07:45:29 +01:00
Tyler Kropp
4026625c2d lib/config, gui: Set unix socket permissions for GUI listen address (fixes #5979) (#6310) 2020-02-18 08:52:12 +01:00
Jakob Borg
3e0241ea31 build: Dockerfile for the builder image 2020-02-14 10:44:31 +01:00
Jakob Borg
bb375b1aff lib/model: Stop summary sender faster (ref #6319) (#6341)
One of the causes of "panic: database is closed" is that we try to send
summaries after it's been closed. Calculating summaries can take a long
time and if we have a lot of folders it's not unreasonable to think
that we might be stopped in this loop, so prepare to bail here.

* push
2020-02-14 08:11:54 +01:00
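
The bail-out pattern, sketched with an invented signature: check the context between folders instead of finishing a potentially long loop against a database that may already be closing.

```
package model

import "context"

// sendSummaries sends a summary per folder, but bails out between folders
// if the service has been told to stop.
func sendSummaries(ctx context.Context, folders []string, send func(folder string)) error {
	for _, folder := range folders {
		select {
		case <-ctx.Done():
			return ctx.Err()
		default:
		}
		send(folder) // may be slow: summary calculation per folder
	}
	return nil
}
```
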
Simon Frei
05e23f1991 lib/db: Don't call backend.Commit twice (ref #6337) (#6342) 2020-02-14 08:11:24 +01:00
Jakob Borg
71de6fe290 lib/upnp: Exit quicker (#6339)
During NAT discovery we block for 10s (NatTimeoutS) before returning.
This is mostly noticeable when Ctrl-C:ing a Syncthing directly after
startup as we wait for those ten seconds before shutting down. This
makes it check the context a little bit more frequently.
2020-02-13 15:39:36 +01:00
Jakob Borg
6a840a040b lib/db: Keep metadata better in sync (ref #6335) (#6337)
This adds metadata updates to the same write batch as the underlying
file change. The odds of a metadata update going missing are greatly
reduced.

Bonus change: actually commit the transaction in recalcMeta.
2020-02-13 15:23:08 +01:00
Simon Frei
c3637f2191 lib: Faster termination on exit (ref #6319) (#6329) 2020-02-13 14:43:00 +01:00
Simon Frei
ca90f4e6af lib/db: Use flags from arg not LocalFlags() updating meta (#6333) 2020-02-13 14:02:30 +01:00
Jakob Borg
51fa36d61f lib/db: Recover sequence number and metadata on startup (fixes #6335) (#6336)
lib/db: Recover sequence number and metadata on startup (fixes #6335)

If we crashed after writing new file entries but before updating
metadata in the database the sequence number and metadata will be wrong.
This fixes that.
2020-02-13 13:05:26 +01:00
Jakob Borg
d95a087829 lib/db: Don't leak snapshot when closing (#6331)
We could potentially get a snapshot and then fail to get a releaser,
leaking the snapshot. This takes the releaser first and makes sure to
release it on snapshot error.
2020-02-12 12:00:17 +01:00
Jakob Borg
a728743c86 lib/db: Use Commit() instead of commit() (#6330)
The readWriteTransaction offered both commit() (the one to use) and
Commit() (via embedding) where the latter didn't close the read
transaction. This removes the lower cased variant in order to prevent
the mistake.

The only place where the mistake was made was the new gc runner, where
it would leave a read snapshot open forever.
2020-02-12 11:59:55 +01:00
Simon Frei
ce27780a4c lib/model: Return empty summary on paused folders (ref #6272) (#6326) 2020-02-12 11:59:12 +01:00
Jakob Borg
0df39ddc72 lib/fs, lib/model: Rewrite RecvOnly tests (#6318)
During some other work I discovered these tests weren't great, so I've
rewritten them to be a little better. The real changes here are:

- Don't play games with not starting the folder and such, and don't
  construct a fake folder instance -- just use the one the model has. The
  folder starts and scans but the folder contents are empty at this point
  so that's fine.

- Use a fakefs instead of a temp dir.

- To support the above, implement a fakefs option `?content=true` to
  make the fakefs actually retain written content. Use sparingly,
  obviously, but it means the fakefs can usually be used instead of an
  on disk real directory.
2020-02-12 07:47:05 +01:00
Jakob Borg
b84aa114be gui, man, authors: Update docs, translations, and contributors 2020-02-12 07:45:30 +01:00
Simon Frei
a596e5e2f0 lib/model: Consistent error return values for folder methods on model (#6325) 2020-02-12 07:35:24 +01:00
Jakob Borg
04e648fee6 lib/db: Handle missing block lists as missing file (ref #6321) (#6322)
Also explicitly handle non-nil but empty block lists (if they should
ever pop up as an effect of unmarshalling changes or whatnot).
2020-02-11 15:37:22 +01:00
Simon Frei
29736b1e33 lib/db: Add closeWaitGroup to allow async operation (#6317) 2020-02-11 14:31:43 +01:00
Jakob Borg
b61da487e4 all: Update metadata modtime on delete (ref #6284) (#6315)
This records the time a file was marked as deleted. It will, in the
future, aid in garbage collecting old files.
2020-02-10 10:48:30 +01:00
Jakob Borg
b4064e07dc build: Minor tidy 2020-02-07 16:21:01 +01:00
Jakob Borg
b4dc15bc06 gui, man, authors: Update docs, translations, and contributors 2020-02-05 07:45:30 +01:00
Anjan Momi
3304e0f832 etc/linux-runit: Add excecute permission (#6306) 2020-02-03 11:50:44 +01:00
Jakob Borg
b8a5e1a244 cmd/stindex: Print missing sequence ranges concisely 2020-02-03 09:18:21 +01:00
Jakob Borg
5823e7a5ce lib/model: Sort and group model initialization better (ref #6303) 2020-02-01 08:12:25 +01:00
Jakob Borg
55937b61ca lib/model: Add global request limiter (fixes #6302) (#6303)
This adds a new config with the simple and concise name
maxConcurrentIncomingRequestKiB. This limits how many bytes we have "in
the air" in the form of response data being read and processed.

After some testing I think that not having this limiter is seldom a
great idea and thus I propose a default value of 256 MiB for this new
setting.

I also refactored the folder IO limiter to be a model/folder attribute
instead of a package global.
2020-02-01 08:02:18 +01:00
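
A sketch of such a byte-budget limiter using golang.org/x/sync/semaphore; the name and the 256 MiB default come from the description above, the rest is invented:

```
package model

import (
	"context"

	"golang.org/x/sync/semaphore"
)

// requestLimiter caps how many response bytes are "in the air" at once,
// in the spirit of maxConcurrentIncomingRequestKiB.
var requestLimiter = semaphore.NewWeighted(256 << 20) // 256 MiB

func handleRequest(ctx context.Context, size int64, serve func() error) error {
	if err := requestLimiter.Acquire(ctx, size); err != nil {
		return err // cancelled while waiting for budget
	}
	defer requestLimiter.Release(size)
	return serve() // read and process the response data within the budget
}
```
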
Jakob Borg
9cef283151 cmd/stindex: Teach it about new key types 2020-01-31 08:27:20 +01:00
Jakob Borg
e0d4cdc9a3 lib/ignore: Don't crash on empty patterns (fixes #6300) (#6301)
Instead of panicking when we try to look closer at the empty pattern,
return something like `invalid pattern "(?d)" in ignore file: missing
pattern`.
2020-01-30 09:58:44 +01:00
Jakob Borg
ac3879e2b0 gui, man, authors: Update docs, translations, and contributors 2020-01-29 07:45:31 +01:00
Jakob Borg
d91c4b010b lib/config, lib/model: Limit concurrent pulls (fixes #5914) (#6290)
Adds a new folder state "Waiting to Sync" in the same vein as the
existing "Waiting to Scan". This vastly improves performances in the
rare cases when there are lots and lots of folders operating.
2020-01-27 17:31:17 +01:00
Jakob Borg
84920bff63 lib/db: Fixup last commit with better key name 2020-01-26 15:22:21 +01:00
Jakob Borg
bf4c8439e8 lib/db: Configurable block GC time (#6295)
Also retain the interval over restarts by storing last GC time in the
database. This is to make sure that GC eventually happens even if the
interval is configured to a long time (say, a month).
2020-01-26 15:13:28 +01:00
Jakob Borg
8fc2dfad0c lib/db: Deduplicate block lists in database (fixes #5898) (#6283)
* lib/db: Deduplicate block lists in database (fixes #5898)

This moves the block list in the database out from being just a field on
the FileInfo to being an object of its own. When putting a FileInfo we
marshal the block list separately and store it keyed by the sha256 of
the marshalled block list. When getting, if we are not doing a
"truncated" get, we do an extra read and unmarshal for the block list.

Old block lists are cleared out by a periodic GC sweep. The alternative
would be to use refcounting, but:

- There is a larger risk of getting that wrong and either dropping a
  block list in error or keeping them around forever.

- It's tricky with our current database, as we don't have dirty reads.
  This means that if we update two FileInfos with identical block lists in
  the same transaction we can't just do read/modify/write for the ref
  counters as we wouldn't see our own first update. See above about
  tracking this and risks about getting it wrong.

GC uses a bloom filter for keys to avoid heavy RAM usage. GC can't run
concurrently with FileInfo updates so there is a new lock around those
operations at the lowlevel.

The end result is a much more compact database, especially for setups
with many peers where files get duplicated many times.

This is per-key-class stats for a large database I'm currently working
with, under the current schema:

```
 0x00:  9138161 items, 870876 KB keys + 7397482 KB data, 95 B +  809 B avg, 1637651 B max
 0x01:   185656 items,  10388 KB keys + 1790909 KB data, 55 B + 9646 B avg,  924525 B max
 0x02:   916890 items,  84795 KB keys +    3667 KB data, 92 B +    4 B avg,     192 B max
 0x03:      384 items,     27 KB keys +       5 KB data, 72 B +   15 B avg,      87 B max
 0x04:     1109 items,     17 KB keys +      17 KB data, 15 B +   15 B avg,      69 B max
 0x06:      383 items,      3 KB keys +       0 KB data,  9 B +    2 B avg,      18 B max
 0x07:      510 items,      4 KB keys +      12 KB data,  9 B +   24 B avg,      41 B max
 0x08:     1349 items,     12 KB keys +      10 KB data,  9 B +    8 B avg,      17 B max
 0x09:      194 items,      0 KB keys +     123 KB data,  5 B +  634 B avg,   11484 B max
 0x0a:        3 items,      0 KB keys +       0 KB data, 14 B +    7 B avg,      30 B max
 0x0b:   181836 items,   2363 KB keys +   10694 KB data, 13 B +   58 B avg,     173 B max
 Total 10426475 items, 968490 KB keys + 9202925 KB data.
```

Note 7.4 GB of data in class 00, total size 9.2 GB. After running the
migration we get this instead:

```
 0x00:  9138161 items, 870876 KB keys + 2611392 KB data, 95 B +  285 B avg,    4788 B max
 0x01:   185656 items,  10388 KB keys + 1790909 KB data, 55 B + 9646 B avg,  924525 B max
 0x02:   916890 items,  84795 KB keys +    3667 KB data, 92 B +    4 B avg,     192 B max
 0x03:      384 items,     27 KB keys +       5 KB data, 72 B +   15 B avg,      87 B max
 0x04:     1109 items,     17 KB keys +      17 KB data, 15 B +   15 B avg,      69 B max
 0x06:      383 items,      3 KB keys +       0 KB data,  9 B +    2 B avg,      18 B max
 0x07:      510 items,      4 KB keys +      12 KB data,  9 B +   24 B avg,      41 B max
 0x09:      194 items,      0 KB keys +     123 KB data,  5 B +  634 B avg,   11484 B max
 0x0a:        3 items,      0 KB keys +       0 KB data, 14 B +   17 B avg,      51 B max
 0x0b:   181836 items,   2363 KB keys +   10694 KB data, 13 B +   58 B avg,     173 B max
 0x0d:    44282 items,   1461 KB keys +   61081 KB data, 33 B + 1379 B avg, 1637399 B max
 Total 10469408 items, 969939 KB keys + 4477905 KB data.
```

Class 00 is now down to 2.6 GB, with just 61 MB added in class 0d.

There will be some additional reads in some cases which theoretically
hurts performance, but this will be more than compensated for by smaller
writes and better compaction.

On my own home setup which just has three devices and a handful of
folders the difference is smaller in absolute numbers of course, but
still less than half the old size:

```
 0x00:  297122 items,  20894 KB keys + 306860 KB data, 70 B + 1032 B avg, 103237 B max
 0x01:  115299 items,   7738 KB keys +  17542 KB data, 67 B +  152 B avg,    419 B max
 0x02: 1430537 items, 121223 KB keys +   5722 KB data, 84 B +    4 B avg,    253 B max
 ...
 Total 1947412 items, 151268 KB keys + 337485 KB data.
```

to:

```
 0x00:  297122 items,  20894 KB keys +  37038 KB data, 70 B +  124 B avg,    520 B max
 0x01:  115299 items,   7738 KB keys +  17542 KB data, 67 B +  152 B avg,    419 B max
 0x02: 1430537 items, 121223 KB keys +   5722 KB data, 84 B +    4 B avg,    253 B max
 ...
 0x0d:   18041 items,    595 KB keys +  71964 KB data, 33 B + 3988 B avg, 101109 B max
 Total 1965447 items, 151863 KB keys + 139628 KB data.
```

* wip

* wip

* wip

* wip
2020-01-24 08:35:44 +01:00
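
A sketch of the GC sweep described above, with a plain map standing in for the Bloom filter and invented callback signatures: mark every block-list hash still referenced by some FileInfo, then delete the unreferenced records.

```
package db

// gcBlockLists deletes stored block lists whose hashes are no longer
// referenced by any FileInfo. The real code uses a Bloom filter instead of
// a map to keep memory bounded, and holds a lock so no FileInfo updates run
// concurrently with the sweep.
func gcBlockLists(
	referencedHashes func(yield func(hash string)),
	storedHashes func(yield func(hash string)),
	deleteBlockList func(hash string) error,
) error {
	marked := make(map[string]struct{})
	referencedHashes(func(h string) { marked[h] = struct{}{} })

	var firstErr error
	storedHashes(func(h string) {
		if _, ok := marked[h]; ok {
			return
		}
		if err := deleteBlockList(h); err != nil && firstErr == nil {
			firstErr = err
		}
	})
	return firstErr
}
```
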
Audrius Butkevicius
17b441c993 lib/relays: Fix incorrect timeout, bring back logging (ref #6289) (#6291)
* lib/relays: Fix incorrect timeout, bring back logging (ref #6289)

* Fix format strings
2020-01-23 21:37:35 +00:00
Jakob Borg
b9879e2013 gui, man, authors: Update docs, translations, and contributors 2020-01-22 07:45:33 +01:00
Simon Frei
08f0e125ef all: Transactionalize db.FileSet (fixes #5952) (#6239) 2020-01-21 18:23:08 +01:00
Jakob Borg
d62a0cf692 lib/model: Handle progress emitter zero interval (fixes #6281) (#6282)
Makes the logic a bit clearer and safer. This also sneakily redefines
the 0 interval to also mean disabled, whereas it previously meant ...
sometimes default to 1s, sometimes just spin.
2020-01-20 21:14:29 +01:00
dependabot-preview[bot]
ddd26f5c42 build(deps): bump github.com/pkg/errors from 0.9.0 to 0.9.1 (#6279)
Bumps [github.com/pkg/errors](https://github.com/pkg/errors) from 0.9.0 to 0.9.1.
- [Release notes](https://github.com/pkg/errors/releases)
- [Commits](https://github.com/pkg/errors/compare/v0.9.0...v0.9.1)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2020-01-20 20:05:19 +01:00
Simon Frei
69da11a263 cmd/syncthing: Always use monitor process (fixes #4774, fixes #5786) (#6278) 2020-01-20 09:07:46 +01:00
Simon Frei
879d757850 lib/syncthing: Wait for actual termination on Stop() (#6277) 2020-01-20 08:40:15 +01:00
Jakob Borg
6c8e8f0391 lib/model: Remove legacy handling of symlinks (#6276)
This hardly seems relevant any more; 0.14.14 has been dead for a long time.
2020-01-19 12:02:20 +01:00
Simon Frei
39891cdf42 lib/model: Return paused summary instead of error on paused folders (#6272) 2020-01-17 09:30:00 +01:00
Simon Frei
e782bab9fc lib/config: Add some info to the folder marker missing (ref #5207) (#6270) 2020-01-16 15:30:29 +01:00
Tomasz Wilczyński
9cc49aea77 assets, gui: Losslessly compress all JPG, PNG, and PDF images (#6265)
Use FileOptimizer (https://nikkhokkho.sourceforge.io/static.php?page=FileOptimizer)
to losslessly compress all JPG, PNG, and PDF images without reducing their
quality.

Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>
2020-01-16 13:52:43 +01:00
Jakob Borg
29690502f0 cmd/strelaypoolsrv: Serve gzip compressed responses 2020-01-15 10:36:21 +01:00
Jakob Borg
d323e9c106 gui, man, authors: Update docs, translations, and contributors 2020-01-15 07:46:01 +01:00
Jakob Borg
a79de840bd gui, man, authors: Update docs, translations, and contributors 2020-01-14 08:01:03 +01:00
Jakob Borg
f454e8b609 build: go mod tidy 2020-01-14 07:59:31 +01:00
dependabot-preview[bot]
c6cef168a5 build(deps): bump github.com/oschwald/geoip2-golang from 1.3.0 to 1.4.0 (#6245)
Bumps [github.com/oschwald/geoip2-golang](https://github.com/oschwald/geoip2-golang) from 1.3.0 to 1.4.0.
- [Release notes](https://github.com/oschwald/geoip2-golang/releases)
- [Commits](https://github.com/oschwald/geoip2-golang/compare/v1.3.0...v1.4.0)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2020-01-14 09:57:08 +04:00
dependabot-preview[bot]
4de6b94de7 build(deps): bump github.com/pkg/errors from 0.8.1 to 0.9.0 (#6267)
Bumps [github.com/pkg/errors](https://github.com/pkg/errors) from 0.8.1 to 0.9.0.
- [Release notes](https://github.com/pkg/errors/releases)
- [Commits](https://github.com/pkg/errors/compare/v0.8.1...v0.9.0)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2020-01-13 12:18:32 +04:00
Simon Frei
08bb730ad0 lib/db: Wrap errors from leveldb iterators (fixes #6263) (#6264) 2020-01-12 09:06:31 +04:00
Jakob Borg
1b52197f71 gui, man, authors: Update docs, translations, and contributors 2020-01-11 12:56:45 +01:00
Simon Frei
71882765f2 lib/events, lib/model: Unflake test and prevent deadlock on event unsubscribing (#6261) 2020-01-11 08:14:05 +01:00
Simon Frei
119d76d035 lib/stun: Refactor to remove unnecessary logging (fixes #6213) (#6260) 2020-01-10 10:24:15 +01:00
Simon Frei
08753ccabe lib/model: Reset queue after all pulling is done (fixes #5867) (#6256) 2020-01-08 12:21:22 +01:00
Dan
ceb9475668 etc: Fix misleading comment in discosrv options file (#6258) 2020-01-06 22:43:41 +00:00
Simon Frei
f56a5545d4 gui, lib/model: Prevent negative sync completion (fixes #4570) (#6248) 2020-01-03 14:07:57 +01:00
Simon Frei
7a8e73d599 build, pmp: Replace fork with upstream for go-nat-pmp and tidy go.mod (#6247) 2020-01-03 12:39:59 +01:00
Simon Frei
1e69c31d87 gui: Missing line break (fixes #6240) (#6241) 2019-12-28 20:51:45 +01:00
Jakob Borg
3dc3e01f80 docker: Fix HOME setting (fixes #6234) (#6235)
su-exec sets $HOME, and we used to have this env call in there to fix
that up. It disappeared in the latest entrypoint.sh rewrite.
2019-12-22 17:50:16 +01:00
Jakob Borg
fb5f1bb56a golangci: Skip godox 2019-12-18 11:33:36 +01:00
dependabot-preview[bot]
0f1e0eff05 build(deps): bump github.com/mattn/go-isatty from 0.0.10 to 0.0.11 (#6231)
Bumps [github.com/mattn/go-isatty](https://github.com/mattn/go-isatty) from 0.0.10 to 0.0.11.
- [Release notes](https://github.com/mattn/go-isatty/releases)
- [Commits](https://github.com/mattn/go-isatty/compare/v0.0.10...v0.0.11)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2019-12-16 07:52:02 +00:00
Jakob Borg
a963bc8b86 lib/upgrade: Let Mac load .zip archives (#6230)
There is no need to do this switch based on the current OS, instead do
it based on what the archive actually appears to be.

(Tested; works.)
2019-12-16 07:21:18 +01:00
Simon Frei
de64ffddab lib/api: Prevent leaks in tests (#6227) 2019-12-13 09:26:41 +01:00
Simon Frei
8140350094 lib/syncthing: Expose backend instead of lowlevel (#6224) 2019-12-12 16:50:09 +01:00
Simon Frei
82ed8e702c gui: Prevent spurious api call (fixes #6222) (#6223) 2019-12-11 20:37:35 +01:00
Artur Zubilewicz
fca2876795 gui: Fix a typo in a class name (#6220) 2019-12-11 08:34:51 +01:00
Artur Zubilewicz
633ddba2b2 gui: Allow to degrade 'Automatic upgrades' option to 'No upgrades' (fixes #6044) (#6168) 2019-12-11 08:33:18 +01:00
Jakob Borg
325c3c1fa7 lib/db, lib/protocol: Compact FileInfo and BlockInfo alignment (#6215)
* lib/db, lib/protocol: Compact FileInfo and BlockInfo alignment

This fixes the following two lint warnings

    FileInfo: struct of size 160 bytes could be of size 136 bytes
    BlockInfo: struct of size 48 bytes could be of size 40 bytes

by reordering fields in alignment order (64 bit fields, then 32 bit
fields, then 16 bit fields (if any), then small ones). The end result is
a slightly less aesthetically pleasing struct field order, but since
these are the objects we often juggle in bulk and keep large queues of, I
think it's worth it.

It's a micro optimization, but a cheap one.
2019-12-08 13:31:26 +01:00
Jakob Borg
be0508cf26 lib/model, lib/protocol: Use error handling to avoid panic on non-started folder (fixes #6174) (#6212)
This adds error returns to model methods called by the protocol layer.
Returning an error will cause the connection to be torn down as the
message couldn't be handled. Using this to signal that a folder isn't
currently available will then cause a reconnection a few moments later,
when it'll hopefully work better.

Tested manually by running with STRECHECKDBEVERY=0 on a nontrivially
sized setup. This panics reliably before this patch, but just causes a
disconnect/reconnect now.
2019-12-04 10:46:55 +01:00
Simon Frei
6fd5e78740 lib: Consistently unsubscribe from config-wrapper (fixes #6133) (#6205) 2019-12-04 07:15:00 +01:00
Jakob Borg
a9e490adfa cmd/ursrv: Show more architectures (fixes #6211) 2019-12-03 21:34:32 +01:00
Jakob Borg
d8e7e92512 build: Generalize code signing
Should, at the least, also codesign when building zip for mac.
2019-12-03 08:37:43 +01:00
Paul Brit
eca156fd7f lib/osutil: Increase maxfiles on macOS properly (fixes #6206) (#6207) 2019-12-03 07:26:22 +01:00
Marcus Legendre
b3fd9a8d53 lib/ignore: Don't create empty ".stignore" files (fixes #6190) (#6197)
This will:

1. prevent creation of a new .stignore if there are no ignore patterns
2. delete an existing .stignore if all ignore patterns are removed
2019-12-02 08:19:02 +01:00
Simon Frei
0bec01b827 lib/db: Remove *instance by making everything *Lowlevel (#6204) 2019-12-02 08:18:04 +01:00
Jakob Borg
e82a7e3dfa all: Propagate errors from NamespacedKV (#6203)
As foretold by the prophecy, "once the database refactor is merged, then
shall appear a request to propagate errors from the store known
throughout the land as the NamespacedKV, and it shall be good".
2019-11-30 13:03:24 +01:00
Jakob Borg
928767e316 lib/model: gofmt lol :( 2019-11-29 09:29:59 +01:00
Evgeny Kuznetsov
1c277fc096 lib/fs: Add case-insensitive fakefs (#6074) 2019-11-29 09:17:42 +01:00
Jakob Borg
c71116ee94 Implement database abstraction, error checking (ref #5907) (#6107)
This PR does two things, because one lead to the other:

- Move the leveldb specific stuff into a small "backend" package that
defines a backend interface and the leveldb implementation. This allows,
potentially, in the future, switching the db implementation so another
KV store should we wish to do so.

- Add proper error handling all along the way. The db and backend
packages are now errcheck clean. However, I drew the line at modifying
the FileSet API in order to keep this manageable and not continue
refactoring all of the rest of Syncthing. As such, the FileSet methods
still panic on database errors, except for the "database is closed"
error which is instead handled by silently returning as quickly as
possible, with the assumption that we're anyway "on the way out".
2019-11-29 09:11:52 +01:00
Robin Schoonover
a5bbc12625 gui: Sort versions by date in restore dropdown (#6201) 2019-11-29 08:32:25 +01:00
Simon Frei
606154b183 lib/model: Also send folder summary from sync-preparing (ref #6028) (#6202) 2019-11-29 08:30:17 +01:00
Jakob Borg
f9c380d45b cmd/syncthing: Implement log rotation (fixes #6104) (#6198)
Since we've taken upon ourselves to create a log file by default on
Windows, this adds proper management of that log file. There are two new
options:

  -log-max-old-files="3"    Number of old files to keep (zero to keep only current).
  -log-max-size="10485760"  Maximum size of any file (zero to disable log rotation).

The default values result in four files (syncthing.log, syncthing.0.log,
..., syncthing.3.log) each up to 10 MiB in size. To not use log rotation
at all, the user can say --log-max-size=0.
2019-11-28 12:26:14 +01:00
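
A minimal sketch of a rotation scheme consistent with the options above, with made-up helper names (the real implementation lives in cmd/syncthing): when the current file reaches the maximum size, the numbered files are shifted up by one and the oldest is dropped.

```go
package main

import (
	"fmt"
	"os"
)

// numbered returns the name of the n:th rotated file, e.g.
// numbered("syncthing", 0) == "syncthing.0.log". Illustrative only.
func numbered(stem string, n int) string {
	return fmt.Sprintf("%s.%d.log", stem, n)
}

// rotate shifts syncthing.log -> syncthing.0.log -> syncthing.1.log ...,
// keeping at most maxOld numbered files.
func rotate(stem string, maxOld int) {
	if maxOld <= 0 {
		os.Remove(stem + ".log") // keep only the (new, empty) current file
		return
	}
	os.Remove(numbered(stem, maxOld-1)) // drop the oldest, if present
	for i := maxOld - 2; i >= 0; i-- {
		os.Rename(numbered(stem, i), numbered(stem, i+1))
	}
	os.Rename(stem+".log", numbered(stem, 0))
}

func main() {
	// Thresholds in the spirit of the defaults above: rotate at 10 MiB,
	// keep three old files.
	const maxSize, maxOld = 10 << 20, 3
	if info, err := os.Stat("syncthing.log"); err == nil && info.Size() >= maxSize {
		rotate("syncthing", maxOld)
	}
}
```
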
Aman Gupta
a04f54a16a lib/upnp: Use simple continue in loop (#6192) 2019-11-26 22:55:34 +00:00
Aman Gupta
509d123251 lib/upnp: Ensure uPnP http requests have trailing \r\n (#6193) 2019-11-26 22:54:46 +00:00
Simon Frei
b32821a586 lib/config, lib/connections: Remove ListenAddresses hack (#6188) 2019-11-26 17:07:25 +01:00
Otiel
8ced8ad562 github: bump Syncthing version to v1... (#6191) 2019-11-26 08:25:50 +00:00
Simon Frei
1bae4b7f50 all: Use context in lib/dialer (#6177)
* all: Use context in lib/dialer

* a bit slimmer

* https://github.com/syncthing/syncthing/pull/5753

* bot

* missed adding debug.go

* errors.Cause

* simultaneous dialing

* anti-leak
2019-11-26 07:39:51 +00:00
Jakob Borg
4e151d380c lib/versioner: Reduce surface area (#6186)
* lib/versioner: Reduce surface area

This is a refactor while I was anyway rooting around in the versioner.
Instead of exporting every possible implementation and the factory and
letting the caller do whatever, this now encapsulates all that and
exposes a New() that takes a config.VersioningConfiguration.

Given that and that we don't know (from the outside) how a versioner
works or what state it keeps, we now just construct it once per folder
and keep it around. Previously it was recreated for each restore
request.

* unparam

* wip
2019-11-26 07:39:31 +00:00
Simon Frei
f747ba6d69 lib/ignore: Keep skipping ignored dirs for rooted patterns (#6151)
* lib/ignore: Keep skipping ignored dirs for rooted patterns

* review

* clarify comment and lint

* glob.QuoteMeta

* review
2019-11-26 07:37:41 +00:00
Simon Frei
33258b06f4 lib/connections: Dialer code deduplication (#6187) 2019-11-26 07:36:58 +00:00
Jakob Borg
4340589501 Merge branch 'release'
Discarding the commit on that branch...
2019-11-25 11:10:09 +01:00
Simon Frei
4d368a37e2 lib/model, lib/protocol: Add contexts sending indexes and download-progress (#6176) 2019-11-25 11:07:36 +01:00
dependabot-preview[bot]
999647b7d6 build(deps): bump github.com/urfave/cli from 1.22.1 to 1.22.2 (#6183)
Bumps [github.com/urfave/cli](https://github.com/urfave/cli) from 1.22.1 to 1.22.2.
- [Release notes](https://github.com/urfave/cli/releases)
- [Changelog](https://github.com/urfave/cli/blob/master/docs/CHANGELOG.md)
- [Commits](https://github.com/urfave/cli/compare/v1.22.1...v1.22.2)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2019-11-25 11:05:55 +01:00
Jakob Borg
45a711570e Revert "lib/model: Add folders on start in model (#6135)"
This reverts commit bee7cce081.
2019-11-24 09:33:58 +01:00
Simon Frei
cf312abc72 lib: Wrap errors with errors.Wrap instead of fmt.Errorf (#6181) 2019-11-23 15:20:54 +00:00
Mateusz Ż
e2f6d0d6c4 UI enhancements on mobile (#6180)
* Set fallback font for log viewer

* Enable logo scaling in About view

* Don't split "dependency list" into 2 columns on mobile
2019-11-23 12:25:25 +00:00
Simon Frei
65d4dd32cb lib/model: Also handle ServeBackground (#6173) 2019-11-22 21:30:16 +01:00
Simon Frei
de886b3f22 lib/relay: Prevent lock nil deref when creation dynamic client (#6175) 2019-11-21 17:45:06 +00:00
Jakob Borg
8c91e012c7 Merge branch 'release'
* release:
  gui: Prioritize non-idle folder states (fixes #6169) (#6170)
2019-11-21 09:36:03 +01:00
Simon Frei
6d27cf6563 gui: Prioritize non-idle folder states (fixes #6169) (#6170) 2019-11-21 09:33:39 +01:00
Simon Frei
57d668ed1d lib/config: Do introductions in a single config change (#6162) 2019-11-21 08:41:41 +01:00
Simon Frei
90d85fd0a2 lib: Replace done channel with contexts in and add names to util services (#6166) 2019-11-21 08:41:15 +01:00
Simon Frei
552ea68672 gui: Prioritize non-idle folder states (fixes #6169) (#6170) 2019-11-20 19:06:03 +01:00
Artur Zubilewicz
80eac473d9 gui: Make 'Nearby devices' look like links (fixes #6057) (#6165)
- The 'help-box' with nearby devices will appear only when there are
  any nearby devices
- IDs of nearby devices will look like links (i.e. underlined when
  hovered over)
2019-11-19 22:15:27 +01:00
Ruslan Yevdokymov
c1db8b2680 gui: Add upgrade confirmation dialog (fixes #5887) (#6167) 2019-11-19 22:05:41 +01:00
Jakob Borg
df866e10c8 gui: Increase padding a bit again (ref #6153)
I changed my mind on this; the modals need *some* padding to not look weird.
2019-11-19 22:03:31 +01:00
Simon Frei
0d14ee4142 lib/model: Don't info log repeat pull errors (#6149) 2019-11-19 09:56:53 +01:00
Simon Frei
28edf2f5bb lib/model: Keep fmut locked while adding/starting/restarting folders (#6156) 2019-11-18 21:15:26 +01:00
Jakob Borg
e7100bc573 golang-ci: Skip "cognitive complexity" check for now 2019-11-17 08:54:59 +01:00
Simon Frei
5edf4660e2 lib/model: Prevent cleanup-race in testing (ref #6152) (#6155) 2019-11-14 23:08:40 +01:00
Domenic Horner
a5699d40a8 gui: Decrease padding on the panel and modal bodies (#6153)
This allows better viewing on a condensed screen and slightly reduces the screen real estate used.
2019-11-13 15:14:00 +01:00
Simon Frei
f80ce17497 lib/model: In tests prevent goroutine leaks and increase timeouts (#6152) 2019-11-13 10:21:54 +01:00
Simon Frei
ce72bee576 lib/model: Simplify pull error/retry logic (fixes #6139) (#6141) 2019-11-11 15:50:28 +01:00
Jacob
0cc77feabb docker: Add stdiscosrv and strelaysrv Dockerfiles (#6143) 2019-11-11 09:37:08 +01:00
Jakob Borg
d19b12d3fe lib/protocol: Buffer allocation when compressing (fixes #6146) (#6147)
We incorrectly gave a too small buffer to lz4.Compress, causing it to
allocate in some cases (when the data actually becomes larger when
compressed). This then panicked when passed to the buffer pool.

This ensures a buffer that is large enough, and adds tripwires closer to
the source in case this ever pops up again. There is a test that
exercises the issue.
2019-11-11 08:36:31 +00:00
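
For context, incompressible data can grow slightly when LZ4-compressed, so the destination buffer must be sized for the worst case rather than for len(src). The helper below uses the commonly cited LZ4 worst-case bound as an illustration of the sizing principle; it is not the code from lib/protocol.

```go
package main

import "fmt"

// lz4Worst returns an upper bound on the size of LZ4-compressed output for
// an input of n bytes, per the commonly cited bound n + n/255 + 16. Sizing
// the destination with this avoids the reallocation described above, where
// a pooled buffer was silently replaced by a freshly allocated one.
func lz4Worst(n int) int {
	return n + n/255 + 16
}

func main() {
	src := make([]byte, 128<<10)            // a 128 KiB block
	dst := make([]byte, lz4Worst(len(src))) // always large enough
	fmt.Println(len(src), "->", len(dst))
}
```
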
Jakob Borg
1d406d62e3 golang-ci: Upgrade, skipping the white space complainer 2019-11-10 10:25:14 +01:00
Jakob Borg
1d99e5277a all: Cleanups enabled by Go 1.12 2019-11-10 10:16:10 +01:00
Jakob Borg
879f51b027 lib/tlsutil: Remove Go 1.12 TLS 1.3 beta opt-in
Go 1.13 enables this by default.
2019-11-10 09:32:48 +01:00
Audrius Butkevicius
d3d7408b17 lib/api: Make theme paths relative (#6142)
* Update theme.css

* Update syncthingController.js
2019-11-09 12:07:46 +00:00
Pablo
9b01e64c66 gui, lib/api: Adds support for prefers-color-scheme (fixes #6115)
* gui, lib/api: Adds support for prefers-color-scheme on default theme (fixes #6115)

- Renames current default theme into a new "light" theme
- Modifies assets serving to allow getting assets from different themes

* lib/api: Serve assets from arbitrary theme when path starts with "theme-assets"

* lib/api: Moves constant out of function

* Loads light theme in browsers without support for prefers-color-scheme

* gui: Disables dark theme when printing

* Prevents repeated injection and adds support for older browsers

The CSS is always loaded if there is no support for `matchMedia`.
2019-11-08 21:44:37 +00:00
Audrius Butkevicius
65c172cd8d lib/api: Reset mtime after theme change (fixes #5810) (#6140) 2019-11-08 22:37:42 +01:00
Simon Frei
85e6a77f25 lib/model: Remove some testing deadlocks (#6138) 2019-11-08 18:53:51 +01:00
Jakob Borg
88244b0c1f lib/model: Add test for previous commit 2019-11-08 17:03:25 +01:00
Simon Frei
cd290d2d05 lib/model: Add initial deviceStatRefs on model creation (fixes #6136) (#6137)
This is a regression introduced in PR #6005 / commit
f7b2e79fdc
2019-11-08 11:32:51 +00:00
Simon Frei
bee7cce081 lib/model: Add folders on start in model (#6135) 2019-11-08 10:56:16 +01:00
Jakob Borg
f15a1528fc cmd/stbench: rm -r cmd/stbench (#6131)
This is apparently an old benchmarking tool. I'd forgotten about it.
Since 67b8ef1f3e the build script tries to
build all binaries explicitly by default, and this fails on Windows as
this tool doesn't build on Windows.

Kill it with fire.
2019-11-07 07:20:21 +00:00
Jakob Borg
6be6de4b4a lib/api: Slightly unflake TestCSRFRequired by allowing longer timeout 2019-11-07 08:14:49 +01:00
Jakob Borg
6755a9ca63 Fix bufferpool puts (ref #4976) (#6125)
* Fix bufferpool puts (ref #4976)

There was a logic error in Put() which made us put all large blocks into
segment zero, where we subsequently did not look for them.

I also added a minimum size threshold, as we otherwise allocate a 128 KiB
buffer when we need 24 bytes for a header and such.

* wip

* wip

* wip

* wip

* wip

* wip

* wip

* wip

* wip

* wip

* smaller stress

* cap/len

* wip

* wip
2019-11-06 10:53:10 +00:00
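
The segmented pool described above can be sketched roughly like this: one sync.Pool per size class, a minimum size below which buffers are allocated directly, and a Put that files each buffer under the largest class it can still serve. The class sizes and names here are illustrative, not the lib/protocol implementation.

```go
package main

import "sync"

// Illustrative size classes; the real pool uses the protocol block sizes.
var classes = []int{1 << 10, 16 << 10, 128 << 10, 1 << 20}

var pools = make([]sync.Pool, len(classes))

const minPooled = 64 // below this, pooling costs more than it saves

// Get returns a buffer with len(buf) == size, taken from the smallest class
// that fits, or freshly allocated for tiny or oversized requests.
func Get(size int) []byte {
	for i, c := range classes {
		if size >= minPooled && size <= c {
			if b, ok := pools[i].Get().([]byte); ok && cap(b) >= size {
				return b[:size]
			}
			return make([]byte, size, c)
		}
	}
	return make([]byte, size)
}

// Put files the buffer under the largest class whose requests it could
// serve; the original bug was putting large buffers into segment zero,
// where Get never looked for them.
func Put(b []byte) {
	for i := len(classes) - 1; i >= 0; i-- {
		if cap(b) >= classes[i] {
			pools[i].Put(b[:cap(b)])
			return
		}
	}
	// Too small to be worth pooling; let the GC have it.
}

func main() {
	buf := Get(24) // small header; allocated directly, not from a big class
	Put(buf)
}
```
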
Audrius Butkevicius
98a1adebe1 all: Remove dead code, fix lost msgLen checks (#6129) 2019-11-06 07:09:58 +01:00
Aman Gupta
31569debeb lib/upnp: Fix outdated comment (#6110) 2019-11-05 18:56:51 +00:00
Simon Frei
cf420e135e gui: New folder state "Local Additions" for receive-only (fixes #5968) (#6117) 2019-11-01 20:44:23 +01:00
Ruslan Yevdokymov
3b5dff3f34 lib/model: Fix removal of a marker when there are still folders referencing it (#6114) 2019-10-30 15:11:07 +00:00
Jakob Borg
56cdf2f2d9 build: I like long, complicated things, ok? 2019-10-25 10:04:26 +02:00
dependabot-preview[bot]
b1dbe925d4 build(deps): bump github.com/prometheus/client_golang (#6099)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.1.0 to 1.2.1.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/master/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.1.0...v1.2.1)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2019-10-22 21:56:32 +02:00
Simon Frei
bbdda059bd lib/model: Check for symlinks before deleting during pull (fixes #6090) (#6100) 2019-10-22 21:55:51 +02:00
Simon Frei
72f26c1e45 gui: Fix loop selecting all devices (fixes #5980) (#6102) 2019-10-22 13:57:10 +01:00
Simon Frei
72194d137c lib/scanner: Don't scan if input path is below symlink (fixes #6090) (#6101) 2019-10-22 11:12:21 +02:00
André Colomb
8b5bd45a29 gui: Split device list in folder sharing options by shared / unshared (#5756) 2019-10-21 21:28:10 +02:00
Jakob Borg
9084510e1b cmd/stdiscosrv: Sort addresses before replication (fixes #6093) (#6094)
This makes sure addresses are sorted when coming in from the API. The
database merge operation still checks for correct ordering (which is
quick) and sorts if it isn't correct (legacy database record or
replication peer), but then does a copy first.

Tested with -race in production...
2019-10-18 10:50:19 +02:00
Audrius Butkevicius
c4f161d8c5 lib/connections: Rate limit quic accept loop (fixes #6081) (#6082) 2019-10-18 09:55:37 +02:00
Jakob Borg
ad2d3702ae all: Upgrade github.com/gogo/protobuf and regenerate (fixes #6085) 2019-10-18 09:53:59 +02:00
Jakob Borg
95acb26249 lib/syncthing: Fixup test after merge 2019-10-17 09:14:27 +02:00
Jakob Borg
4736cccda1 all: Update certificate lifetimes (fixes #6036) (#6078)
This adds a certificate lifetime parameter to our certificate generation
and hard codes it to twenty years in some uninteresting places. In the
main binary there are a couple of constants but it results in twenty
years for the device certificate and 820 days for the HTTPS one. 820 is
less than the 825 maximum Apple allows nowadays.

This also means we must be prepared for certificates to expire, so I add
some handling for that and generate a new certificate when needed. For
self signed certificates we regenerate a month ahead of time. For other
certificates we leave well enough alone.
2019-10-16 20:31:46 +02:00
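
A hedged sketch of the expiry handling described for self-signed certificates: parse the leaf certificate and regenerate when less than a month of validity remains. The file names and function are placeholders, not Syncthing's actual certificate handling.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"time"
)

// shouldRegenerate reports whether a self-signed certificate is within a
// month of expiry (or already expired). Sketch only.
func shouldRegenerate(cert tls.Certificate, now time.Time) (bool, error) {
	leaf, err := x509.ParseCertificate(cert.Certificate[0])
	if err != nil {
		return false, err
	}
	return now.After(leaf.NotAfter.AddDate(0, -1, 0)), nil
}

func main() {
	cert, err := tls.LoadX509KeyPair("https-cert.pem", "https-key.pem")
	if err != nil {
		log.Fatal(err)
	}
	if ok, err := shouldRegenerate(cert, time.Now()); err == nil && ok {
		log.Println("certificate expires soon, regenerating")
		// ... generate a new certificate with the desired lifetime ...
	}
}
```
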
Simon Frei
1a06ab68eb lib/sync: Cleanly fail instead of panic in tests (#6088) 2019-10-16 10:11:11 +02:00
Simon Frei
b8907b49f9 lib/syncthing: Prevent hangup on error during startup (fixes #6043) (#6047) 2019-10-16 10:10:42 +02:00
Simon Frei
7b33294955 gui, lib/model: Add new state FolderPreparingSync (fixes #6027) (#6028) 2019-10-16 09:08:54 +02:00
Simon Frei
031684116b lib/util: Add caller info to service (ref #5932) (#5973) 2019-10-16 09:06:16 +02:00
Simon Frei
a0c9db1d09 lib/api: Unify JSON marshalling of file infos (#6087) 2019-10-15 11:25:12 +02:00
dependabot-preview[bot]
aa4b918224 build(deps): bump github.com/lucas-clemente/quic-go (#6084)
Bumps [github.com/lucas-clemente/quic-go](https://github.com/lucas-clemente/quic-go) from 0.12.0 to 0.12.1.
- [Release notes](https://github.com/lucas-clemente/quic-go/releases)
- [Changelog](https://github.com/lucas-clemente/quic-go/blob/master/Changelog.md)
- [Commits](https://github.com/lucas-clemente/quic-go/compare/v0.12.0...v0.12.1)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2019-10-14 18:30:00 +01:00
dependabot-preview[bot]
7043b1fbba build(deps): bump github.com/mattn/go-isatty from 0.0.9 to 0.0.10 (#6083)
Bumps [github.com/mattn/go-isatty](https://github.com/mattn/go-isatty) from 0.0.9 to 0.0.10.
- [Release notes](https://github.com/mattn/go-isatty/releases)
- [Commits](https://github.com/mattn/go-isatty/compare/v0.0.9...v0.0.10)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2019-10-14 18:17:01 +01:00
chenrui
9d6b663d1c build: Update golangci for go v1.13 (#6042) 2019-10-13 16:32:20 +02:00
Arkadiusz Tymiński
7dc4ac6e1f gui: Hide select/deselect all buttons if there are no devices (fixes #6056) 2019-10-08 21:57:17 +01:00
Alan Pope
7bad9b3a11 snapcraft: Add desktop plug (#6069)
Without the desktop plug, launching the syncthing snap will not launch a browser window to the admin UI. Adding this one line will fix that. Tested here on my Ubuntu system with a build from tip of master.
2019-10-08 18:20:51 +01:00
Jakob Borg
6b570ee8dc lib/upgrade: Add html_url release field 2019-10-08 09:12:00 +02:00
Cyprien Devillez
6408a116f9 cmd/stdiscosrv: Add support for Traefik 2 as a reverse proxy (#6065) 2019-10-07 12:55:27 +01:00
Jakob Borg
67b8ef1f3e cmd/*, lib/build: Set correct LongVersion (fixes #5993) (#5997)
The relay and discosrv didn't use the new lib/build package; now they
do. Conversely, the lib/build package wasn't aware there might be other
users and hard-coded the program name; now it's set by the build
script.
2019-10-07 13:30:25 +02:00
Evgeny Kuznetsov
999d4a0e23 gui: Better info for stalled and lengthy scans (fixes #5627) (#6061) 2019-10-05 11:34:42 +02:00
Lukas Lihotzki
96bb1c8e29 all, lib/logger: Refactor SetDebug calls (#6054) 2019-10-04 13:03:34 +02:00
Audrius Butkevicius
8fb576ed54 lib/model: Adjust blocks reported in usage reporting (fixes #5995) (#6037)
* lib/model: Adjust blocks reported in usage reporting (fixes #5995)

* Use variables, fix go.mod
2019-10-04 12:03:13 +01:00
Jakob Borg
5e31e6356f lib/api: Report actual listener address (fixes #6049) (#6060) 2019-10-04 11:25:41 +01:00
Jakob Borg
1b5a61e03e build: Upgrade github.com/syndtr/goleveldb
Newer is always better. Always.
2019-10-03 17:45:45 +02:00
Jakob Borg
755e689627 lib/db: Always use small db settings on 32 bit archs (#6053) 2019-10-03 13:40:14 +01:00
boomsquared
3f5c9b578c gui: Fix tab headers in black and dark themes (fixes #5583) 2019-10-01 20:09:52 +01:00
Simon Frei
a2a14c8424 lib/model: Set empty version when unignoring deleted files (fixes #6038) (#6039) 2019-10-01 15:34:59 +02:00
Lukas Lihotzki
cff7a091f5 gui: Don't show auth warning when listening on UNIX socket (fixes #6040) (#6041) 2019-10-01 13:22:33 +02:00
Jakob Borg
757d9a5333 Merge branch 'release'
* release:
  readme: Fix broken link to README-Docker.md (#6025)
  docker: Make it easy to disable the GUI, document it (#6021)
2019-10-01 07:46:59 +02:00
Ilya Brin
2c88e473cb readme: Fix broken link to README-Docker.md (#6025) 2019-10-01 07:34:58 +02:00
Jakob Borg
875377981d docker: Make it easy to disable the GUI, document it (#6021) 2019-10-01 07:31:48 +02:00
Jakob Borg
52d80d8144 lib/fs: Improve root check (#6033)
The root check would allow things like c:\foobar\baz if the root was
c:\foo, because string-wise that's a prefix. Now it doesn't.
2019-09-29 23:38:11 +08:00
Ilya Brin
fd2e91c82d readme: Fix broken link to README-Docker.md (#6025) 2019-09-23 13:28:42 +09:00
Jakob Borg
c744a75cdd docker: Make it easy to disable the GUI, document it (#6021) 2019-09-22 11:33:29 +01:00
Simon Frei
35b699dc77 lib/fs: Check events against both the user and eval root (#6013) 2019-09-22 08:03:22 +01:00
Jakob Borg
7127c13f18 build: Tweak golang-ci config to build (#6022) 2019-09-22 07:57:58 +01:00
Jakob Borg
dab29287da Merge branch 'release'
* release:
  gui, lib/api: Use effective listen address for no auth warning
  docker: Build using Go 1.13
2019-09-21 12:10:04 +02:00
Jakob Borg
c0b5a70ce3 gui, lib/api: Use effective listen address for no auth warning
This adds a field `guiAddressUsed` to the system status response, that
holds the current listening address actually in use. This may be
different from the one stored in the config because it may have been
overridden by environment or command line flag.

The GUI now checks this field to see if we are listening on localhost.
If we are not, the authentication required warning is displayed,
regardless of the *configured* listening address.
2019-09-21 12:07:10 +02:00
Jakob Borg
7bcdc5b08e docker: Build using Go 1.13 2019-09-21 12:07:07 +02:00
Jakob Borg
db0ba2555a gui, lib/api: Use effective listen address for no auth warning
This adds a field `guiAddressUsed` to the system status response, that
holds the current listening address actually in use. This may be
different from the one stored in the config because it may have been
overridden by environment or command line flag.

The GUI now checks this field to see if we are listening on localhost.
If we are not, the authentication required warning is displayed,
regardless of the *configured* listening address.
2019-09-20 16:23:33 +02:00
Jakob Borg
1398fbb681 docker: Build using Go 1.13 2019-09-20 11:02:43 +02:00
dependabot-preview[bot]
f653f540f8 build(deps): bump github.com/urfave/cli from 1.21.0 to 1.22.1 (#6015)
Bumps [github.com/urfave/cli](https://github.com/urfave/cli) from 1.21.0 to 1.22.1.
- [Release notes](https://github.com/urfave/cli/releases)
- [Changelog](https://github.com/urfave/cli/blob/master/CHANGELOG.md)
- [Commits](https://github.com/urfave/cli/compare/v1.21.0...v1.22.1)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2019-09-16 08:31:00 +01:00
dependabot-preview[bot]
078923bd1a build(deps): bump github.com/minio/sha256-simd from 0.1.0 to 0.1.1 (#6014)
Bumps [github.com/minio/sha256-simd](https://github.com/minio/sha256-simd) from 0.1.0 to 0.1.1.
- [Release notes](https://github.com/minio/sha256-simd/releases)
- [Commits](https://github.com/minio/sha256-simd/compare/v0.1.0...v0.1.1)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2019-09-16 08:28:46 +01:00
ghjklw
80a83b605c lib/config: Remove stun.voxgratia.org (fixes #6010) (#6011)
DNS resolution fails for this server:
named[9495]: REFUSED unexpected RCODE resolving 'stun.voxgratia.org/A/IN': 2600:9000:5303:ae00::1#53
named[9495]: REFUSED unexpected RCODE resolving 'stun.voxgratia.org/A/IN': 205.251.198.31#53
2019-09-13 09:05:25 +01:00
Simon Frei
28b6e8b063 lib/db: Update db when only local flags change (fixes #6008) (#6007) 2019-09-12 08:47:39 +02:00
Simon Frei
f7b2e79fdc lib/model: Use read-locks wherever possible (#6005) 2019-09-12 05:55:23 +01:00
Jakob Borg
c0b3de2680 build: Correct hash for quic package 2019-09-11 15:31:43 +02:00
jelle van der Waa
9a9bcff3e9 build: Add EXTRA_LDFLAGS environment variable handling (fixes #5999) (#6000)
Allow extending LDFLAGS by setting EXTRA_LDFLAGS to be able to pass
-extldflags=-zrelro -ldflags=-extldflags=-znow for Arch Linux packaging
to get full relro.
2019-09-07 19:21:09 +01:00
Jakob Borg
88482b29ee build: Upgrade dependencies
go get -u ./...
go mod tidy
2019-09-05 15:13:51 +02:00
Jakob Borg
22dff7207c lib/api: Refactor to run tests in parallel (#5998)
This is an experiment in testing, based on the advice to always call
t.Parallel() at the start of every test. Doing so makes tests run in
parallel, which is usually faster, but also exposes package level state
and potential race conditions better.

To support this I had to redesign the CSRF manager to not be package
global, which was indeed an improvement. And tests run five times faster
now.
2019-09-05 12:35:51 +01:00
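
The pattern itself is one line per test (illustrative test shown, not one of the lib/api tests):

```go
package api

import "testing"

func TestSomething(t *testing.T) {
	t.Parallel() // opt this test into running alongside the others

	// Anything shared between tests (package-level state, a global CSRF
	// manager, fixed port numbers, ...) now shows up as a race or a
	// failure, which is the point.
}
```
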
Jakob Borg
0104e78589 lib/connections: Improve write rate limiting (fixes #5138) (#5996)
This splits large writes into smaller ones when using a rate limit,
making them into a legitimate trickle rather than large bursts with a
long time in between.
2019-09-04 11:12:17 +01:00
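
A minimal sketch of the idea, assuming golang.org/x/time/rate for the limiter (the actual limiter lives in lib/connections): instead of waiting once for one huge write, the write is cut into chunks no larger than the limiter's burst, each waiting its turn.

```go
package main

import (
	"context"
	"io"
	"io/ioutil"

	"golang.org/x/time/rate"
)

// limitedWrite writes buf to w in chunks, waiting on the limiter before
// each chunk, so traffic becomes a steady trickle rather than a burst
// followed by a long pause.
func limitedWrite(ctx context.Context, w io.Writer, buf []byte, lim *rate.Limiter) error {
	chunk := lim.Burst()
	for len(buf) > 0 {
		n := len(buf)
		if n > chunk {
			n = chunk
		}
		if err := lim.WaitN(ctx, n); err != nil {
			return err
		}
		if _, err := w.Write(buf[:n]); err != nil {
			return err
		}
		buf = buf[n:]
	}
	return nil
}

func main() {
	lim := rate.NewLimiter(50<<10, 16<<10) // 50 KiB/s with a 16 KiB burst
	_ = limitedWrite(context.Background(), ioutil.Discard, make([]byte, 64<<10), lim)
}
```
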
Jakob Borg
80894948f6 build: Upgrade github.com/gogo/protobuf (#5994)
This is the result of:

- Changing build.go to take the protobuf version from the modules
  instead of hardcoded
- `go get github.com/gogo/protobuf@v1.3.0` to upgrade
- `go run build.go proto` to regenerate our code
2019-09-04 07:33:29 +01:00
Jakob Borg
e945e65b13 lib/api: Skip an IPv6 specific test inside Docker (fixes #5991) (#5992)
Isn't this just the most beautiful thing you've ever seen?
2019-09-03 21:14:36 +01:00
Jakob Borg
ebd2e5bf30 lib/model: Correctly handle manual rescan with ignore error (fixes #5985) (#5987)
Assume a folder error was set due to bad ignores on the latest scan.
Previously, doing a manual rescan would result in:

1. Clearing the folder error, which schedules (immediately) an fs
   watcher restart

2. Attempting to load the ignores, which fails, so we set a folder
   error and bail.

3. Now the fs watcher restarts, as scheduled, so we trigger a scan.
   Goto 1.

This change fixes this by not clearing the error until the error is
actually cleared, that is, if both the health check and ignore loading
succeeds.
2019-08-30 13:27:26 +01:00
Jakob Borg
60c07b259e lib/ignore: Don't crash in partial #include line (ref #5985) (#5986)
If the line is just "#include" with nothing following it we would crash
with an index out of bounds error. Now it's a little more careful.
2019-08-30 11:36:31 +02:00
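
Being "a little more careful" amounts to checking that something actually follows the keyword before using it; a rough sketch with a hypothetical parse function, not the lib/ignore code:

```go
package main

import (
	"fmt"
	"strings"
)

// parseLine extracts the file name from an "#include <file>" line, without
// indexing past the end when the line is just "#include".
func parseLine(line string) (string, error) {
	fields := strings.Fields(line)
	if len(fields) < 2 || fields[0] != "#include" {
		return "", fmt.Errorf("failed to parse include line %q", line)
	}
	return fields[1], nil
}

func main() {
	fmt.Println(parseLine("#include extra-patterns"))
	fmt.Println(parseLine("#include")) // previously an index out of range panic
}
```
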
Jakob Borg
fe50f1a158 cmd/usrv: Use better caching 2019-08-29 19:49:27 +02:00
Jakob Borg
5851aabe02 lib/upgrade: Include browser_download_url field 2019-08-29 16:10:13 +02:00
Jakob Borg
0832285d79 lib/ur: Prevent trivial race in CPU bench 2019-08-25 23:43:28 +02:00
Jakob Borg
c2ea9d119d lib/connections: Upgrade QUIC package, use contexts for timeout (#5972) 2019-08-23 10:15:52 +02:00
Simon Frei
534f07d9ca lib/discover: Don't leak goroutines (#5950) 2019-08-22 12:30:57 +02:00
Jakob Borg
24d4290d03 lib/model, lib/scanner: Pass a valid event logger (fixes #5970) (#5971) 2019-08-21 08:05:43 +02:00
Jakob Borg
09b872cef4 build: go mod tidy 2019-08-21 07:47:05 +02:00
Simon Frei
2d124e053c test: Get integration tests up to speed (config, build and test fixes) (#5962) 2019-08-20 10:17:11 +02:00
Jakob Borg
90b70c7a16 lib/db: Use different defaults for larger databases (fixes #5966) (#5967)
This introduces a better set of defaults for large databases. I've
experimentally determined that it results in much better throughput in a
couple of scenarios with large databases, but I can't give any
guarantees the values are always optimal. They're probably no worse than
the defaults though.
2019-08-20 09:41:41 +02:00
dependabot-preview[bot]
e910acdc17 build(deps): bump github.com/mattn/go-isatty from 0.0.7 to 0.0.9 (#5965)
Bumps [github.com/mattn/go-isatty](https://github.com/mattn/go-isatty) from 0.0.7 to 0.0.9.
- [Release notes](https://github.com/mattn/go-isatty/releases)
- [Commits](https://github.com/mattn/go-isatty/compare/v0.0.7...v0.0.9)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2019-08-19 12:13:26 +01:00
Jakob Borg
96350d7600 cmd/stupgrades: Generate appropriate upgrade data (fixes #5924) (#5960)
This is a tiny tool to grab the GitHub releases info and generate a
more concise version of it. The conciseness comes from two aspects:

- We select only the latest stable and pre. There is no need to offer
  upgrades to versions that are older than the latest. (There might be, in
  the future, when we hit 2.0. We can revisit this at that time.)

- We use our structs to deserialize and reserialize the data. This means
  we remove all attributes that we don't understand and hence don't
  require.

All in all the new response is about 10% the size of the previous one and
avoids the issue where we only serve a bunch of release candidates and
no stable.
2019-08-16 10:04:10 +02:00
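
The deserialize-and-reserialize step is plain encoding/json: unmarshalling into structs that declare only the fields we care about drops everything else on re-marshal. A small illustration with a made-up subset of the GitHub release fields:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// release declares only the fields we need; unknown attributes in the
// GitHub API response are dropped when we re-marshal.
type release struct {
	Tag        string  `json:"tag_name"`
	Prerelease bool    `json:"prerelease"`
	Assets     []asset `json:"assets"`
}

type asset struct {
	Name string `json:"name"`
	URL  string `json:"browser_download_url"`
}

func main() {
	upstream := []byte(`[{"tag_name":"v1.2.0","prerelease":false,"assets":[],"node_id":"...","reactions":{}}]`)

	var rels []release
	if err := json.Unmarshal(upstream, &rels); err != nil {
		panic(err)
	}
	out, _ := json.Marshal(rels) // only the declared fields survive
	fmt.Println(string(out))
}
```
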
Simon Frei
77a5980747 lib/model: Do free disk space check later on pull (fixes #5948) (#5949) 2019-08-16 09:40:53 +02:00
Simon Frei
b677464dfa lib/model: Optimise locking around conn-close and puller states (#5954) 2019-08-16 09:35:19 +02:00
Simon Frei
b1c74860e8 all: Remove global events.Default (ref #4085) (#5886) 2019-08-15 16:29:37 +02:00
Simon Frei
f6f696c6c5 lib/config: Prevent nil deref in debug logging (fixes #5955) (#5956) 2019-08-15 15:51:09 +02:00
Simon Frei
cf40ed6cec lib/connections: Return exported intf from exported function (#5947) 2019-08-13 09:33:33 +02:00
Simon Frei
6fa02d5081 lib/model: Fix a few more problematic locks (ref #5929) (#5944) 2019-08-13 09:04:43 +02:00
Simon Frei
86e35f1879 lib/model: Less locking in ClusterConfig (#5943) 2019-08-11 19:30:24 +02:00
Jakob Borg
720a6bf62e lib/tlsutil: Remove hardcoded curve preferences (fixes #5940) (#5942)
They are arguably outdated, and we are better off trusting the standard
library than trying to keep up with it ourselves.
2019-08-11 19:01:57 +02:00
Simon Frei
4a619e74f2 lib/model: Fix incorrect locking (#5939) 2019-08-11 16:10:30 +02:00
Audrius Butkevicius
58ef5368f8 lib/connections: Validate device id before assuming success (fixes #5934) (#5935)
* lib/connections: Validate device id before assuming success (fixes #5934)

* Vet
2019-08-09 12:31:42 +01:00
Cromefire_
7b37d453f9 build, etc: Add systemd units and ufw rules for relay and discovery (fixes #5115) (#5350) 2019-08-08 18:04:52 +02:00
Oliver Freyermuth
edf2399ce6 Add NATSymmetricUDPFirewall to punchable NATs (#5931)
NATSymmetricUDPFirewall actually is not NAT at all, but means the machine has a global IP address and a UDP firewall in front (the RFC calls it Symmetric UDP Firewall). This can be punched through fine, both theoretically and in practical testing.
2019-08-06 12:26:02 +01:00
dependabot-preview[bot]
d43b0a4395 build(deps): bump github.com/urfave/cli from 1.20.0 to 1.21.0 (#5928)
Bumps [github.com/urfave/cli](https://github.com/urfave/cli) from 1.20.0 to 1.21.0.
- [Release notes](https://github.com/urfave/cli/releases)
- [Changelog](https://github.com/urfave/cli/blob/master/CHANGELOG.md)
- [Commits](https://github.com/urfave/cli/compare/v1.20.0...v1.21.0)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2019-08-05 12:18:35 +02:00
Jakob Borg
f2efd08e0f Merge branch 'release'
* release:
  cmd/syncthing: Print version information early (fixes #5891) (#5893)
2019-08-05 07:16:07 +02:00
Jakob Borg
61b9f7bd55 lib/config: Format bytes in insufficient space errors (fixes #5920) (#5921) 2019-08-02 14:43:05 +02:00
Simon Frei
1475c0344a gui: Parse strings before copying options object (fixes #5824) (#5922) 2019-08-02 12:46:27 +02:00
Simon Frei
77cc87dfca lib/connections: Log errors in relay clients (#5917) 2019-08-01 17:37:58 +02:00
Jakob Borg
bc7dd02e2b authors: Update carstenhag (Moter8) (fixes #5919) 2019-08-01 17:35:45 +02:00
Simon Frei
8a06cf0973 lib/model: Unflake TestPullInvalidIgnored (#5918) 2019-08-01 11:07:41 +02:00
Simon Frei
05835ed81f all: Remove potentially problematic errors from panics (fixes #5839) (#5912) 2019-07-31 10:53:35 +02:00
Simon Frei
df522576ac lib/model: Don't call t.Fatal in goroutines (fixes #5901) (#5903) 2019-07-30 17:50:51 +02:00
Simon Frei
d681ac11fe lib/config: Handle empty Fstype for mtime-window (#5906) 2019-07-30 15:23:00 +02:00
Simon Frei
7d5f7d508d lib/syncthing: Stop only once (fixes #5908) (#5909) 2019-07-29 20:07:19 +02:00
Simon Frei
1d182e4631 lib/fs: Use gopsutils for disk usage (#5905) 2019-07-29 20:06:17 +02:00
Simon Frei
710f5c199f lib/db: Don't use global fileset in benchmarks (#5902) 2019-07-28 22:31:24 +02:00
Simon Frei
fd847d4efe lib/model: Fix flakyness of TestRequestRemoteRenameChanged (#5904) 2019-07-28 22:29:31 +02:00
Jakob Borg
15e51fc045 cmd/stcrashreceiver: Store and serve compressed reports (#5892)
This changes the on disk format for new raw reports to be gzip
compressed. Also adds the ability to serve these reports in plain text,
to insulate web browsers from the change (previously we just served the
raw reports from disk using Caddy).
2019-07-28 11:13:04 +02:00
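
In outline (a sketch with hypothetical function names and paths, not the actual cmd/stcrashreceiver handlers): reports are gzipped on the way to disk, and decompressed on the way out when a browser asks for plain text.

```go
package main

import (
	"compress/gzip"
	"io"
	"net/http"
	"os"
)

// saveReport stores a raw crash report gzip-compressed on disk.
func saveReport(path string, report []byte) error {
	f, err := os.Create(path + ".gz")
	if err != nil {
		return err
	}
	defer f.Close()
	gw := gzip.NewWriter(f)
	if _, err := gw.Write(report); err != nil {
		return err
	}
	return gw.Close()
}

// serveReport decompresses a stored report so browsers still see plain text.
func serveReport(w http.ResponseWriter, path string) error {
	f, err := os.Open(path + ".gz")
	if err != nil {
		return err
	}
	defer f.Close()
	gr, err := gzip.NewReader(f)
	if err != nil {
		return err
	}
	defer gr.Close()
	w.Header().Set("Content-Type", "text/plain")
	_, err = io.Copy(w, gr)
	return err
}

func main() {
	_ = saveReport("/tmp/crash-report.txt", []byte("panic: something went wrong"))
}
```
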
Jakob Borg
c1c976aa2b lib/model: Don't panic on failed chmod-back on directory (fixes #5836) (#5896)
* lib/model: Don't panic on failed chmod-back on directory (fixes #5836)

This makes the "in writable dir"-wrapper log chmod-back errors instead
of panicking. To do that we need a logger, so the function moved into the
model package, which is also the only place it's used. The tests came
along.

(The test also exercised osutil.RenameOrCopy like some sort of
piggybacking. I removed that.)
2019-07-28 10:25:05 +02:00
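
The behavioural change can be sketched as follows; the hypothetical inWritableDir below logs a failed chmod-back instead of panicking, which is the point of the commit.

```go
package main

import (
	"log"
	"os"
	"path/filepath"
)

// inWritableDir makes the parent directory writable, runs fn on the path,
// and restores the original permissions afterwards. A failure to restore
// is logged rather than being treated as fatal.
func inWritableDir(fn func(string) error, path string) error {
	dir := filepath.Dir(path)
	info, err := os.Stat(dir)
	if err != nil {
		return err
	}
	if info.Mode()&0200 == 0 {
		if err := os.Chmod(dir, info.Mode()|0700); err != nil {
			return err
		}
		defer func() {
			if err := os.Chmod(dir, info.Mode()); err != nil {
				log.Println("failed to restore permissions on", dir, ":", err)
			}
		}()
	}
	return fn(path)
}

func main() {
	_ = inWritableDir(os.Remove, "/tmp/read-only-dir/file")
}
```
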
Jakob Borg
159d1a68e1 lib/api: Don't log random stuff in the HTTP server (fixes #5738) (#5897) 2019-07-28 09:49:07 +02:00
Simon Frei
dd850f66bb lib/api: Close server before exiting serve (fixes #5866) (#5895) 2019-07-28 08:03:55 +02:00
Jakob Borg
d0c3697152 cmd/syncthing: Print version information early (fixes #5891) (#5893) 2019-07-27 20:36:33 +02:00
Jakob Borg
669bcb748f lib/config, lib/model: Don't save on every pending folder/device update (fixes #5888) (#5890)
Wrapper methods generally don't save by themselves.
2019-07-27 11:05:00 +01:00
Jakob Borg
4e22a96602 cmd/syncthing: Print version information early (fixes #5891) (#5893) 2019-07-27 10:58:39 +01:00
Jakob Borg
a992559abc lib/db: Add hacky way to adjust database parameters (#5889)
This adds a set of magical environment variables that can be used to
tweak the database parameters. It's totally undocumented and not
intended to be a long term or supported thing.

It's ugly, but there is a backstory. I have a couple of large
installations where the database options are inefficient or otherwise
suboptimal (24/7 compaction going on and stuff like that). I don't
*know* the correct database parameters, nor yet the formula or method to
derive them by, so this requires experimentation. Experimentation needs
to happen partly in production, and rolling out new builds for every
tweak isn't practical. This provides override points for all reasonable
values, while not changing anything by default.

Ideally, at the end of such experimentation, we'll know which values are
relevant to change and in what manner, and can provide a more user
friendly knob to do so - or do it automatically based on the database
size.
2019-07-26 22:18:42 +02:00
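
The override points boil down to something like the following; the variable name and default are invented here, since the real set of variables is intentionally undocumented.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// envInt returns the value of an environment variable as an int, or the
// given default when it is unset or unparsable. This is the shape of the
// "magical environment variable" overrides described above.
func envInt(name string, def int) int {
	if v, err := strconv.Atoi(os.Getenv(name)); err == nil {
		return v
	}
	return def
}

func main() {
	// Hypothetical tuning knob; the actual variable names differ.
	blockCacheKiB := envInt("STDBBLOCKCACHEKIB", 16<<10)
	fmt.Println("using block cache of", blockCacheKiB, "KiB")
}
```
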
Simon Frei
46e72d76b5 cmd/syncthing, lib/syncthing: Create library utils (ref #4085) (#5871) 2019-07-23 23:39:20 +02:00
Simon Frei
7a4c88d4e4 lib: Add mtime window when comparing files (#5852) 2019-07-23 21:48:53 +02:00
Simon Frei
35f40e9a58 lib/model: Create new file-set after stopping folder (fixes #5882) (#5883) 2019-07-23 20:39:25 +02:00
Simon Frei
5de9b677c2 lib/fs: Fix kqueue event list (fixes #5308) (#5885) 2019-07-23 14:11:15 +02:00
Simon Frei
6f08162376 lib/model: Remove incorrect/useless panics (#5881) 2019-07-23 10:51:16 +02:00
Simon Frei
7b3d9a8dca lib/syncthing: Refactor to use util.AsService (#5858) 2019-07-23 10:50:37 +02:00
Simon Frei
942659fb06 lib/model, lib/nat: More service termination speedup (#5884) 2019-07-23 10:49:22 +02:00
dependabot-preview[bot]
15c262184b build(deps): bump github.com/maruel/panicparse from 1.2.1 to 1.3.0 (#5879)
Bumps [github.com/maruel/panicparse](https://github.com/maruel/panicparse) from 1.2.1 to 1.3.0.
- [Release notes](https://github.com/maruel/panicparse/releases)
- [Commits](https://github.com/maruel/panicparse/compare/v1.2.1...v1.3.0)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2019-07-22 19:46:52 +01:00
dependabot-preview[bot]
484fa0592e build(deps): bump github.com/lib/pq from 1.1.1 to 1.2.0 (#5878)
Bumps [github.com/lib/pq](https://github.com/lib/pq) from 1.1.1 to 1.2.0.
- [Release notes](https://github.com/lib/pq/releases)
- [Commits](https://github.com/lib/pq/compare/v1.1.1...v1.2.0)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2019-07-22 08:07:21 +01:00
Simon Frei
b5b54ff057 lib/model: No watch-error on missing folder (fixes #5833) (#5876) 2019-07-19 19:41:16 +02:00
Simon Frei
4d3432af3e lib: Ensure timely service termination (fixes #5860) (#5863) 2019-07-19 19:40:40 +02:00
Simon Frei
1cb55904bc lib/model: Prevent panic in NeedFolderFiles (fixes #5872) (#5875) 2019-07-19 19:39:52 +02:00
Simon Frei
2b622d0774 lib/model: Close conn on dev pause (fixes #5873) (#5874) 2019-07-19 19:37:29 +02:00
Simon Frei
1894123d3c lib/syncthing: Modify exit status before stopping (fixes #5869) (#5870) 2019-07-18 20:49:00 +02:00
Jakob Borg
e7e177a6fa lib/relay: Prevent spurious relay error message (fixes #5861) (#5864) 2019-07-17 10:55:28 +02:00
Simon Frei
eed1edcca0 cmd/syncthing: Ensure myID is set by making it local (fixes #5859) (#5862) 2019-07-17 07:19:14 +02:00
Simon Frei
0025e9ccfb all: Refactor cmd/syncthing creating lib/syncthing (ref #4085) (#5805)
* add skeleton for lib/syncthing

* copy syncthingMain to lib/syncthing (verbatim)

* Remove code to deduplicate copies of syncthingMain

* fix simple build errors

* move stuff from main to syncthing with minimal mod

* merge runtime options

* actually use syncthing.App

* pass io.writer to lib/syncthing for auditing

* get rid of env stuff in lib/syncthing

* add .Error() and comments

* review: Remove fs interactions from lib

* and go 1.13 happened

* utility functions
2019-07-14 11:43:13 +01:00
Simon Frei
82b70b9fae lib/model, lib/protocol: Track closing connections (fixes #5828) (#5829) 2019-07-14 11:03:55 +02:00
Simon Frei
def4b8cee5 lib/config: Error on empty folder path (fixes #5853) (#5854) 2019-07-14 11:03:14 +02:00
Aurélien Rainone
f1a7dd766e all: Add comment to ensure correct atomics alignment (fixes #5813)
Per the sync/atomic bug note:

> On ARM, x86-32, and 32-bit MIPS, it is the caller's
> responsibility to arrange for 64-bit alignment of 64-bit words
> accessed atomically. The first word in a variable or in an
> allocated struct, array, or slice can be relied upon to be
> 64-bit aligned.

All atomic accesses of 64-bit variables in syncthing code base are
currently ok (i.e they are all 64-bit aligned).

Generally, the bug is triggered because of incorrect alignment
of struct fields. Free variables (declared in a function) are
guaranteed to be 64-bit aligned by the Go compiler.

To ensure the code remains correct upon further addition/removal
of fields, which would change the currently correct alignment, I
added the following comment where required:

     // atomic, must remain 64-bit aligned

See https://golang.org/pkg/sync/atomic/#pkg-note-BUG.
2019-07-13 14:05:39 +01:00
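
In practice the comment marks fields like this (illustrative struct; the real ones are spread across the code base):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

type counters struct {
	bytesSent int64 // atomic, must remain 64-bit aligned

	// Any number of smaller fields may follow; what matters on ARM,
	// x86-32 and 32-bit MIPS is that the 64-bit field stays first (or
	// otherwise 64-bit aligned) as fields are added or removed.
	connections int32
	paused      bool
}

func main() {
	var c counters
	atomic.AddInt64(&c.bytesSent, 1024)
	fmt.Println(atomic.LoadInt64(&c.bytesSent))
}
```
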
Simon Frei
20c8dbd9ed lib/model: Fix integer conversion (fixes #5837) (#5851) 2019-07-12 16:37:12 +02:00
xduugu
4b3f9b1af9 lib/versioner: Replace multiple placeholders in a single token in external command (fixes #5849)
* lib/versioner: Add placeholder to provide the absolute file path to external commands

This commit adds support for a new placeholder, %FILE_PATH_FULL%, to the
command of the external versioner. The placeholder will be replaced by
the absolute path of the file that should be deleted.

* Revert "lib/versioner: Add placeholder to provide the absolute file path to external commands"

This reverts commit fb48962b94.

* lib/versioner: Replace all placeholders in external command (fixes #5849)

Before this commit, only placeholders that spanned a whole word were
replaced, for example "%FOLDER_PATH%". Words that consisted of more
than one placeholder or additional characters, for example
"%FOLDER_PATH%/%FILE_PATH%", were left untouched.

* fixup! lib/versioner: Replace all placeholders in external command (fixes #5849)
2019-07-12 08:45:39 +01:00
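
The end result is substitution inside each word rather than only on exact-token matches, roughly as in this sketch (hypothetical helper, not the lib/versioner code):

```go
package main

import (
	"fmt"
	"strings"
)

// expand replaces every occurrence of the placeholders in every word of the
// command, so "%FOLDER_PATH%/%FILE_PATH%" works as well as a bare
// "%FILE_PATH%".
func expand(words []string, vars map[string]string) []string {
	pairs := make([]string, 0, 2*len(vars))
	for k, v := range vars {
		pairs = append(pairs, "%"+k+"%", v)
	}
	r := strings.NewReplacer(pairs...)
	out := make([]string, len(words))
	for i, w := range words {
		out[i] = r.Replace(w)
	}
	return out
}

func main() {
	cmd := []string{"trash-put", "%FOLDER_PATH%/%FILE_PATH%"}
	vars := map[string]string{"FOLDER_PATH": "/data/photos", "FILE_PATH": "2019/img.jpg"}
	fmt.Println(expand(cmd, vars))
}
```
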
Simon Frei
3446d50201 lib/model: Remove pointless error that watch hasn't started (fixes #5833) (#5834) 2019-07-10 11:00:06 +02:00
Simon Frei
9fef1552fc lib/db, lib/model: Remove folder info from panics (ref #5839) (#5840) 2019-07-10 10:57:49 +02:00
Simon Frei
85318f3b82 gui: On update setting don't show RC msg when disabled (fixes #5803) (#5842) 2019-07-09 22:30:22 +01:00
Simon Frei
485acda63b lib/relay: Call the proper Error method (ref #5806) (#5841) 2019-07-09 22:29:19 +01:00
Simon Frei
05e9e0bfa9 build: Update notify dependency (#5838) 2019-07-09 21:33:22 +01:00
Simon Frei
ba056578ec lib: Add util.Service as suture.Service template (fixes #5801) (#5806) 2019-07-09 11:40:30 +02:00
Jakob Borg
d0ab65a178 cmd/stcrashreceiver: Don't leak clients
Use a global raven.Client because they allocate an http.Client for each,
with a separate CA bundle and infinite connection idle time. Infinite
connection idle time means that if the client is never used again it
will always keep the connection around, not verifying whether it's
closed server side or not. This leaks about a megabyte of memory for
each client ever created.

client.Close() doesn't help with this because the http.Client is still
around, retained by its own goroutines.

The thing with the map is just to retain the API on sendReport, even
though there will in practice only ever be one DSN per process
instance...
2019-07-09 11:11:06 +02:00
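
The gist, with a hypothetical newClient standing in for the Sentry client setup: construct the client (and its underlying http.Client) once per DSN and reuse it, instead of building a fresh one per report.

```go
package main

import (
	"net/http"
	"sync"
	"time"
)

// client wraps whatever is needed to talk to one DSN. Hypothetical type;
// the real code uses the Sentry client library.
type client struct {
	dsn  string
	http *http.Client
}

func newClient(dsn string) *client {
	return &client{dsn: dsn, http: &http.Client{Timeout: time.Minute}}
}

var (
	clientsMut sync.Mutex
	clients    = map[string]*client{}
)

// clientFor returns a shared client per DSN, so connections and CA bundles
// are reused across reports instead of leaking one per report.
func clientFor(dsn string) *client {
	clientsMut.Lock()
	defer clientsMut.Unlock()
	if c, ok := clients[dsn]; ok {
		return c
	}
	c := newClient(dsn)
	clients[dsn] = c
	return c
}

func main() {
	_ = clientFor("https://public@sentry.example.com/1")
}
```
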
Simon Frei
4cba433852 build: Add go major version to go.mod (#5822) 2019-06-30 13:18:34 +02:00
Simon Frei
863fe23347 gui, lib/model: Fix download progress accounting (fixes #5811) (#5815) 2019-06-30 09:23:47 +02:00
Jakob Borg
43b6ac9501 cmd/stcrashreceiver: Add source code loader (#5779) 2019-06-29 08:50:09 +02:00
Simon Frei
1cf352a722 lib/model: NewFileSet outside fmut (#5818) 2019-06-29 08:49:30 +02:00
Simon Frei
b58f6ca886 lib/model: Correct/unify check if item changed (#5819) 2019-06-29 07:45:41 +02:00
Jakob Borg
5cbc9089fd Merge branch 'release'
* release:
  go.mod: Update AudriusButkevicius/pfilter (fixes #5820)
2019-06-28 08:21:00 +02:00
Jakob Borg
20eab36a33 go.mod: Update AudriusButkevicius/pfilter (fixes #5820) 2019-06-28 08:03:29 +02:00
Jakob Borg
2b4df6b874 go.mod: Update AudriusButkevicius/pfilter (fixes #5820) 2019-06-28 07:38:52 +02:00
Simon Frei
3c7e7e971d lib/model: Make jobQueue.Jobs paginated (fixes #5754) (#5804)
* lib/model: Make jobQueue.Jobs paginated (fixes #5754)

* fix, no test yet

* add test
2019-06-27 19:25:38 +01:00
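
Paginating the job listing is mostly a matter of clamping slice bounds, as in this sketch (hypothetical names; the real jobQueue method differs):

```go
package main

import "fmt"

// page returns the given page of a job list, 1-based, without reading past
// the end; this keeps huge queues from being copied wholesale into every
// API response.
func page(jobs []string, pageNum, perPage int) []string {
	start := (pageNum - 1) * perPage
	if start < 0 || start >= len(jobs) {
		return nil
	}
	end := start + perPage
	if end > len(jobs) {
		end = len(jobs)
	}
	return jobs[start:end]
}

func main() {
	jobs := []string{"a", "b", "c", "d", "e"}
	fmt.Println(page(jobs, 2, 2)) // [c d]
}
```
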
Audrius Butkevicius
afde0727fe lib/versioner: Revert naming change (fixes #5807) (#5808) 2019-06-25 08:56:11 +03:00
Simon Frei
bf744ded31 cmd/syncthing, lib/db: Exit/close db faster (fixes #5781) (#5782)
This adds a 10s timeout on closing the db and additionally cancels active
db iterators and waits for them to terminate before closing the db.
2019-06-17 15:27:25 +03:00
Wulf Weich
0d86166890 gui: Optimize folder/device info for small screens (fixes #5774) (#5787)
Break the table layout and add button margins on small screens.
2019-06-17 15:24:45 +03:00
Simon Frei
cea5962417 lib/model: Unflake TestPullInvalidIgnoredSR/SO (fixes #5796) (#5799) 2019-06-17 15:23:28 +03:00
Simon Frei
02752af862 lib/protocol: Don't block on Close (fixes #5794) (#5795) 2019-06-14 19:04:41 +02:00
Simon Frei
6b1d7ac727 gui: Check data before calling .reverse() (#5793) 2019-06-14 13:14:15 +02:00
Simon Frei
abd363e8bb lib/model: Don't error on pulling deletion of invalid file (fixes #5791) (#5792) 2019-06-14 08:48:14 +02:00
Jakob Borg
bff1a5f5e4 build: Upgrade github.com/syndtr/goleveldb 2019-06-14 06:56:52 +02:00
Simon Frei
38302270d4 build: Include cross-package test coverage (#5735) 2019-06-13 19:28:14 +02:00
desbma
1b4fe39a89 etc: Remove Systemd hardening options unsupported in user mode (#5788) 2019-06-12 10:31:19 +02:00
Jakob Borg
6b74cdc613 gui, man, authors: Update docs, translations, and contributors 2019-06-12 07:45:26 +02:00
Simon Frei
13a746e0fb lib/model: Prevent nil deref if folder stopped (fixes #5780) (#5778) 2019-06-11 11:48:51 +02:00
Audrius Butkevicius
21f50e2f8f lib/versioner: Use mtime for version cleanup (fixes #5765) (#5769) 2019-06-11 09:16:55 +02:00
Audrius Butkevicius
b7c70a9817 lib/fs: Enhance mtimefs, use everywhere (fixes #5777) (#5776) 2019-06-11 08:27:12 +02:00
Jakob Borg
42ce6be9b9 lib/ur: Implement crash (panic) reporting (fixes #959) (#5702)
* lib/ur: Implement crash (panic) reporting (fixes #959)

This implements a simple crash reporting method. It piggybacks on the
panic log files created by the monitor process, picking these up and
uploading them from the usage reporting routine.

A new config value points to the crash receiver base URL, which defaults
to "https://crash.syncthing.net/newcrash" (following the pattern of
"https://data.syncthing.net/newdata" for usage reports, but allowing us
to separate the service as required).
2019-06-11 08:19:11 +02:00
dependabot-preview[bot]
93e57bd357 build(deps): bump github.com/prometheus/client_golang (#5775)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 0.9.3 to 0.9.4.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/master/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v0.9.3...v0.9.4)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2019-06-10 19:06:25 +02:00
Mingxuan Lin
eb4fe808c5 lib/fs: Fallback EvalSymlinks method on windows (fixes #5609) (#5611) 2019-06-10 14:33:53 +02:00
Simon Frei
1054ce9354 lib/model: Refactor sending indexes as suture service (#5757) 2019-06-10 13:27:22 +02:00
Simon Frei
97ad575b1f gui: Enable rescan button with failed items (fixes #5770) (#5771) 2019-06-10 10:12:30 +02:00
Audrius Butkevicius
ee746263fb lib/connections: Do not leak FDs, fix address copy (fixes #5767) (#5768)
* lib/connections: Do not leak FDs, fix address copy (fixes #5767)

* build

* Update quic_listen.go

* Update quic_listen.go
2019-06-09 22:14:00 +01:00
Jakob Borg
41ff4b323e lib/api: Correct logic for errors.json in support bundle (fixes #5766)
The err check should always be done with != and have the error case
first, to follow the pattern and avoid mistakes like these.
2019-06-09 09:35:05 +02:00
Jakob Borg
997bb5e7e1 all: Remove "large blocks" config (#5763)
We now always use large / variable blocks.
2019-06-06 15:57:38 +01:00
Jakob Borg
ca2fa8de4e lib/build: Version 1.2 will be the Fermium Flea 2019-06-06 14:45:07 +02:00
Jakob Borg
64185484b2 readme: Remove old, dead link (fixes #5760) 2019-06-06 08:35:03 +02:00
Simon Frei
5541697d18 gui: Don't call API when no modal is open (fixes #5652) (#5759) 2019-06-05 14:03:52 +08:00
Simon Frei
e39d3f95dd lib/protocol: Prioritize close msg and add close timeout (#5746) 2019-06-05 14:01:59 +08:00
Jakob Borg
6e8aa0ec25 gui, man, authors: Update docs, translations, and contributors 2019-06-05 07:45:26 +02:00
dependabot-preview[bot]
97057eb9de build(deps): bump github.com/lucas-clemente/quic-go (#5761)
Bumps [github.com/lucas-clemente/quic-go](https://github.com/lucas-clemente/quic-go) from 0.11.1 to 0.11.2.
- [Release notes](https://github.com/lucas-clemente/quic-go/releases)
- [Changelog](https://github.com/lucas-clemente/quic-go/blob/master/Changelog.md)
- [Commits](https://github.com/lucas-clemente/quic-go/compare/v0.11.1...v0.11.2)
2019-06-03 12:42:23 +01:00
Simon Frei
6664e01acf lib/protocol: Return from ClusterConfig when closed (#5752) 2019-05-29 12:14:00 +02:00
dependabot-preview[bot]
e2a647a6a4 build(deps): bump golang.org/x/text from 0.3.0 to 0.3.2 (#5751)
Bumps [golang.org/x/text](https://github.com/golang/text) from 0.3.0 to 0.3.2.
- [Release notes](https://github.com/golang/text/releases)
- [Commits](https://github.com/golang/text/compare/v0.3.0...v0.3.2)
2019-05-29 11:42:07 +02:00
Jakob Borg
5ce5b2c94a lib/config: Refactor migrations a bit (#5750)
This breaks out config migrations to a separate concept, making it
(imho) slightly easier to maintain and get an overview.
2019-05-29 11:37:44 +02:00
Audrius Butkevicius
e714df013f lib/connections: Add QUIC protocol support (fixes #5377) (#5737) 2019-05-29 09:56:40 +02:00
Kalle Laine
d8fa61e27c github: Create SECURITY.md (#5749)
Now that GitHub has added the security section, why not use it?
2019-05-29 09:39:00 +02:00
Jakob Borg
d3f583c8c9 gui, man, authors: Update docs, translations, and contributors 2019-05-29 07:45:25 +02:00
Jakob Borg
e096a14ff5 golang-ci: Disable confused scopelint check 2019-05-27 12:54:08 +02:00
Simon Frei
3775a64d5c lib/protocol: Don't send anything else before cluster config (#5741) 2019-05-27 11:15:34 +01:00
Simon Frei
129df0613b lib/model: Unflake folder restart with blocking conn test (ref #5707) (#5744) 2019-05-27 10:58:09 +01:00
Simon Frei
486230768e lib/fs, lib/model: Add error channel to Watch to avoid panics (fixes #5697) (#5734)
* lib/fs, lib/model: Add error channel to Watch to avoid panics (fixes #5697)

* forgot unsupported watch

* and more non(-standard)-unixy fixes

* and windows test

* review
2019-05-25 20:08:26 +01:00
Simon Frei
9e6db72535 lib/protocol: Don't call receiver after calling Closed (fixes #4170) (#5742)
* lib/protocol: Don't call receiver after calling Closed (fixes #4170)

* review
2019-05-25 20:08:07 +01:00
Simon Frei
d91da8feee lib/model: Readd special handling of conn close in TestIssue5063 (#5743)
This partially reverts commit 64518b0f7e.
2019-05-25 18:51:13 +01:00
Simon Frei
64518b0f7e lib/model: Close connections when model is stopped (#5733) 2019-05-25 16:00:32 +02:00
Simon Frei
5d35b2c540 lib/protocol: Test for Close on blocking send deadlock (ref #5442) (#5732) 2019-05-23 21:42:02 +01:00
Jakob Borg
224775ca81 golang-ci: Not as good at modules as you'd hope 2019-05-23 17:34:32 +02:00
Jakob Borg
98cda96b1c gui, man, authors: Update docs, translations, and contributors 2019-05-22 07:45:27 +02:00
Simon Frei
5b306510a0 lib/model: Consistently cleanup model in tests (#5724) 2019-05-19 14:29:07 +02:00
Jakob Borg
f593ac387c golang-ci: Turn up the heat 2019-05-18 12:51:49 +02:00
Jakob Borg
eb8df7f632 cmd/ursrv: Lint fixes 2019-05-18 11:59:32 +02:00
dependabot[bot]
78d6eee74a build(deps): bump github.com/prometheus/client_golang (#5729)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 0.9.2 to 0.9.3.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/master/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v0.9.2...v0.9.3)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-05-18 09:57:35 +02:00
Simon Frei
1b2b970f32 lib/model, lib/testutils: Test closing a connection on folder restart (#5707) 2019-05-18 08:53:59 +02:00
Simon Frei
5ffbb7668d lib/model: Fix test flakiness regression (ref #5592) (#5718) 2019-05-18 08:52:50 +02:00
Simon Frei
1df8701c46 test: Report time per MiB on transfer benchs (#5711) 2019-05-18 08:46:08 +02:00
Tom Jakubowski
cc36621b11 docker: Create entrypoint script (fixes #5631) (#5635) 2019-05-18 08:43:53 +02:00
Simon Frei
441ea109a1 lib/model: Refactor file deletions when pulling (#5699) 2019-05-17 18:29:54 +02:00
Jakob Borg
8015f3f937 cmd/ursrv: Add F-Droid and some third-party distributors to the distribution summary 2019-05-17 10:18:37 +02:00
Jakob Borg
2c866277a2 lib/api, lib/connections, gui: Show connection error for disconnected devices (fixes #3345) (#5727)
* lib/api, lib/connections, gui: Show connection error for disconnected devices (fixes #3345)

This adds functionality in the connections service to track the last
error per address. That is in turn exposed in the /rest/system/status
API method, as that is also where we already show the listener status
from the connection service.

The GUI uses this info where it lists addresses, showing errors (if any)
in red underneath each address.

I also slightly refactored the existing status method on the connection
service to have a better name and return typed information.

* ok

* review

* formatting

* review
2019-05-16 22:11:45 +01:00
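For context, a hedged sketch of consuming that information over the REST API: it simply fetches and dumps /rest/system/status, since the exact JSON field carrying the per-address errors is not spelled out in the commit message. The localhost:8384 address and the STGUIAPIKEY environment variable are assumptions.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Assumption: the GUI/API listens on localhost:8384 and the API key is
	// available in the STGUIAPIKEY environment variable.
	req, err := http.NewRequest("GET", "http://localhost:8384/rest/system/status", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("X-API-Key", os.Getenv("STGUIAPIKEY"))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Decode into a generic map and pretty-print it; the per-address dial
	// errors appear somewhere in this structure.
	var status map[string]interface{}
	if err := json.NewDecoder(resp.Body).Decode(&status); err != nil {
		panic(err)
	}
	out, _ := json.MarshalIndent(status, "", "  ")
	fmt.Println(string(out))
}
```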
Jakob Borg
1da9317a09 authors: Update acolomb (fixes #5726) 2019-05-15 14:47:35 +02:00
Jakob Borg
e16a65bacb cmd/ursrv: Summarize known distribution channels
That is, whether the binary was downloaded from GitHub, from our APT
repository, etc.
2019-05-15 13:42:55 +02:00
Jakob Borg
c02aed0a21 gui, man, authors: Update docs, translations, and contributors 2019-05-15 07:45:24 +02:00
André Colomb
e4956358fb lib/model: Remove superfluous check for IndexID in remote ClusterConfig (#5717)
The check in ClusterConfig(), when iterating through announced devices
in a folder, explicitly skips entries whose IndexID is zero. Therefore,
the check for IndexID == 0 just below will never be true and the
intended cleanup of local index data will not happen.

Plainly remove that check to make the intended case distinction work.
2019-05-12 21:17:55 +02:00
dependabot[bot]
ad44d77a21 build(deps): bump github.com/lib/pq from 1.0.0 to 1.1.1 (#5714)
Bumps [github.com/lib/pq](https://github.com/lib/pq) from 1.0.0 to 1.1.1.
- [Release notes](https://github.com/lib/pq/releases)
- [Commits](https://github.com/lib/pq/compare/v1.0.0...v1.1.1)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-05-11 22:15:30 +02:00
dependabot[bot]
6bb5d140fa build(deps): bump github.com/oschwald/geoip2-golang from 1.1.0 to 1.3.0 (#5713)
Bumps [github.com/oschwald/geoip2-golang](https://github.com/oschwald/geoip2-golang) from 1.1.0 to 1.3.0.
- [Release notes](https://github.com/oschwald/geoip2-golang/releases)
- [Commits](https://github.com/oschwald/geoip2-golang/compare/v1.1.0...v1.3.0)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-05-11 21:02:49 +01:00
dependabot[bot]
d082b2ba4c build(deps): bump github.com/mattn/go-isatty from 0.0.4 to 0.0.7 (#5712)
Bumps [github.com/mattn/go-isatty](https://github.com/mattn/go-isatty) from 0.0.4 to 0.0.7.
- [Release notes](https://github.com/mattn/go-isatty/releases)
- [Commits](https://github.com/mattn/go-isatty/compare/v0.0.4...v0.0.7)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-05-11 21:00:00 +01:00
dependabot[bot]
638789899c build(deps): bump github.com/gogo/protobuf from 1.2.0 to 1.2.1 (#5715)
Bumps [github.com/gogo/protobuf](https://github.com/gogo/protobuf) from 1.2.0 to 1.2.1.
- [Release notes](https://github.com/gogo/protobuf/releases)
- [Commits](https://github.com/gogo/protobuf/compare/v1.2.0...v1.2.1)

Signed-off-by: dependabot[bot] <support@dependabot.com>
2019-05-11 20:59:44 +01:00
Simon Frei
dfbbb286fc lib/fs: Consider win83 for root path as well when watching (ref #5706) (#5709) 2019-05-11 10:06:04 +02:00
Simon Frei
fbd445fe0a lib/api: Clean up after test, enables test caching (#5710) 2019-05-11 08:41:32 +02:00
Simon Frei
2b246eeb52 lib/model: Move test utilities to separate files (#5694) 2019-05-10 13:33:45 +02:00
Simon Frei
2558b021e5 lib/fs: Remove \\?\ for drive letters when watching (fixes #5578) (#5701) 2019-05-10 09:09:58 +02:00
Jakob Borg
31be810eb6 lib/model: Don't fail operation when fsync() fails (fixes #5704) (#5705) 2019-05-09 21:20:29 +01:00
Jakob Borg
62a6d619e7 Merge branch 'release'
* release:
  lib/fs: Revert removal of \\?\ for drive letters (fixes #5695)
2019-05-08 17:38:08 +02:00
Jakob Borg
a04fcfe749 lib/fs: Revert removal of \\?\ for drive letters (fixes #5695)
This reverts commit ca823bd591 from #5633.

This un-fixes bug #5578.
2019-05-08 17:31:52 +02:00
Simon Frei
283f39ae5f lib/protocol: Revert unreleased changes related to closing connections (#5688)
This reverts commits:
    ec7c88ca55
    19b51c9b92
    5da41f75fa
    04b927104f
2019-05-08 08:08:26 +02:00
Jakob Borg
59e1349499 gui, man, authors: Update docs, translations, and contributors 2019-05-08 07:45:28 +02:00
Simon Frei
b45d77b6be lib/model: Fix regression deleting directories on pull (ref #5690) (#5691) 2019-05-06 20:55:26 +02:00
Simon Frei
79e67b7f79 lib/model: Remove individual pull errors on retry (fixes #5659) (#5690) 2019-05-06 17:21:56 +02:00
Jakob Borg
36a4a9fd34 Merge branch 'release'
* release:
  etc: Revert systemd After=multi-user.target addition (fixes #5676)
2019-05-06 12:24:39 +02:00
Nitroretro
92ed31fe21 gui: Fix typo in log viewer modal (#5687) 2019-05-03 07:28:17 +02:00
Simon Frei
5fa8467756 lib/model: Consistent use of locks (#5684) 2019-05-02 18:55:39 +01:00
Simon Frei
5954b105cd lib/model: Let fakeConnection call Model.Closed on close (#5682) 2019-05-02 14:24:55 +02:00
Simon Frei
defc5dca65 lib/model: Use correct lock (#5683) 2019-05-02 14:09:42 +02:00
Simon Frei
9f358ecae0 lib/protocol: Refactor to pass only relevant argument to writeMessage (#5681) 2019-05-02 14:09:19 +02:00
Simon Frei
fe4daf242b cmd, lib/db: Actually close goleveldb (fixes #5505) (#5671) 2019-05-02 11:15:00 +02:00
Simon Frei
ec7c88ca55 lib/protocol: Fix yet another deadlock (fixes #5678) (#5679)
* lib/protocol: Fix yet another deadlock (fixes #5678)

* more consistency

* read deadlock

* naming

* more naming
2019-05-02 09:21:07 +01:00
Jakob Borg
26e6d94c00 gui, man, authors: Update docs, translations, and contributors 2019-05-01 07:45:25 +02:00
Jakob Borg
a54d7a32d1 etc: Revert systemd After=multi-user.target addition (fixes #5676)
This reverts commit 84fe285659.
2019-04-29 21:48:28 +02:00
Jakob Borg
e2470c8bc8 etc: Revert systemd After=multi-user.target addition (fixes #5676)
This reverts commit 84fe285659.
2019-04-29 21:45:01 +02:00
Simon Frei
19b51c9b92 lib/protocol: Don't close asyncMessage.done on success (fixes #5674) (#5675) 2019-04-29 17:52:57 +02:00
Audrius Butkevicius
0ca1f26ff8 lib/versioner: Restore for all versioners, cross-device support (#5514)
* lib/versioner: Restore for all versioners, cross-device support

Fixes #4631
Fixes #4586
Fixes #1634
Fixes #5338
Fixes #5419
2019-04-28 23:30:16 +01:00
Jakob Borg
2984d40641 lib/ignore: Additional test case (#5672) 2019-04-28 21:20:11 +01:00
Simon Frei
5da41f75fa lib/model, lib/protocol: Wait for reader/writer loops on close (fixes #4170) (#5657)
* lib/protocol: Wait for reader/writer loops on close (fixes #4170)

* waitgroup

* lib/model: Don't hold lock while closing connection

* fix comments

* review (lock once, func argument) and naming
2019-04-28 11:58:51 +01:00
Jakob Borg
32dec4a00d gui, man, authors: Update docs, translations, and contributors 2019-04-24 07:45:24 +02:00
Simon Frei
04b927104f lib/protocol: Don't send any messages before cluster config (#5646)
* lib/model: Send cluster config before releasing pmut

* reshuffle

* add model.connReady to track cluster-config status

* Corrected comments/strings

* do it in protocol
2019-04-23 20:47:11 +01:00
Simon Frei
110806842c lib/model: Pass the old not new fileinfo to deleteItemOnDisk (fixes #5654) (#5655) 2019-04-23 20:46:28 +01:00
Jonas Thelemann
d9b3415dec gui: Semicolon insertion (#5666) 2019-04-23 18:37:51 +01:00
Jonas Thelemann
d3d43d90f6 gui: Remove superfluous trailing argument (#5668) 2019-04-23 18:37:20 +01:00
Jonas Thelemann
e7d11adf3c gui: Remove unused variables (#5665) 2019-04-23 18:36:12 +01:00
Jonas Thelemann
afeb606b5b gui: Use of AngularJS markup in URL-valued attribute (#5667)
Using AngularJS markup (that is, AngularJS expressions enclosed in double curly braces) in HTML attributes that reference URLs is not recommended: the browser may attempt to fetch the URL before the AngularJS compiler evaluates the markup, resulting in a request for an invalid URL.
2019-04-23 18:33:40 +01:00
Jonas Thelemann
c6a179fa4d cmd/strelaypoolsrv: Missing explicit dependency injection (#5669)
https://lgtm.com/rules/1505800326162/
2019-04-23 12:17:27 +01:00
Simon Frei
e302ccf4b4 lib/fs: Improve IsParent (#5658) 2019-04-22 11:12:32 +02:00
Simon Frei
926e9228ed lib/scanner, lib/model: File -> item when logging error (#5664) 2019-04-21 16:19:59 +01:00
Simon Frei
86e72d9973 lib/model: Use RLock and legacy polish (#5663) 2019-04-21 13:21:36 +01:00
otbutz
952c8becf5 gui: Adjust table column width to content size (fixes #4531) (#5565) 2019-04-18 18:35:44 +02:00
Jakob Borg
32ee8d783d gui, man, authors: Update docs, translations, and contributors 2019-04-17 07:45:24 +02:00
Simon Frei
3bea59b0d9 lib/model: Refactor progressEmitter to de-/activate by config (fixes #4613) (#5623) 2019-04-13 14:20:51 +02:00
Simon Frei
8a4b65b937 gui: Fix setting page size on failed and locally changed modals (fixes #5421) (#5650) 2019-04-13 14:05:39 +02:00
Simon Frei
fca895a632 lib/model: Fix block index calculation for recheckFile (fixes #5649) (#5648) 2019-04-12 15:21:07 +02:00
Simon Frei
79360e2205 lib/fs: Add test that symlinks are skipped on walk (#5644) 2019-04-10 22:36:37 +02:00
Simon Frei
9b2a73f9ab lib/model: Pause pull for at least as long as failed pull took (fixes #5641) (#5643) 2019-04-10 16:22:23 +02:00
Simon Frei
c305265c62 lib/model: Request errors conforming to BEP specs (#5642) 2019-04-10 11:47:24 +02:00
Jakob Borg
1954239ffa gui, man, authors: Update docs, translations, and contributors 2019-04-10 07:45:25 +02:00
Simon Frei
ca823bd591 lib/fs: When watching remove \\?\ for drive letters (fixes #5578) (#5633) 2019-04-09 09:02:04 +02:00
Jakob Borg
eabd972667 docker: Update build image version 2019-04-07 16:13:55 +02:00
Simon Frei
395e524e2d lib/model: Update db on scan/pull in folder (#5608) 2019-04-07 13:29:17 +02:00
Jakob Borg
48b1a2b264 gui, man, authors: Update docs, translations, and contributors 2019-04-03 07:45:31 +02:00
Evgeny Kuznetsov
58953d799c gui: Add licensing information to aboutModalView (fixes #1223) (#5605) 2019-03-28 07:49:29 +01:00
Jakob Borg
f0f8bf7784 lib/config: Round times stored for pending folders/devices (fixes #5554) 2019-03-27 20:35:42 +01:00
Jakob Borg
bf3834e367 lib/protocol: Use constants instead of init time hashing (fixes #5624) (#5625)
This constructs the map of hashes of zero blocks from constants instead
of calculating it at startup time. A new test verifies that the map is
correct.
2019-03-27 20:20:30 +01:00
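A hedged sketch of how such a constant table could be generated offline and then kept honest by a test that recomputes the hashes; the SHA-256 hashing of all-zero blocks is the real idea, but the set of block sizes below is illustrative rather than the protocol's actual list.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	// Emit Go source for a size -> hex-hash table of all-zero blocks, to be
	// pasted into code and verified by a test that recomputes each hash.
	for size := 128 << 10; size <= 16<<20; size <<= 1 {
		sum := sha256.Sum256(make([]byte, size))
		fmt.Printf("%d: %q,\n", size, fmt.Sprintf("%x", sum))
	}
}
```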
Simon Frei
8d1eff7e41 lib/model: Missing queue.Done and reset between pulls (fixes #5332) (#5626) 2019-03-27 20:19:35 +01:00
Simon Frei
0cff66fcbc lib/model: Optimize puller for meta only changes (#5622) 2019-03-27 08:36:58 +00:00
Jakob Borg
d23e8be39f gui, man, authors: Update docs, translations, and contributors 2019-03-27 07:45:25 +01:00
Simon Frei
43a5be1c4b lib/model: Send item finished even after deregistering (fixes #5362) (#5620) 2019-03-26 21:31:33 +01:00
Simon Frei
b50039a920 cmd/syncthing, lib/api: Separate api/gui into own package (ref #4085) (#5529)
* cmd/syncthing, lib/gui: Separate gui into own package (ref #4085)

* fix tests

* Don't use main as interface name (make old go happy)

* gui->api

* don't leak state via locations and use in-tree config

* let api (un-)subscribe to config

* interface naming and exporting

* lib/ur

* fix tests and lib/foldersummary

* shorter URVersion and ur debug fix

* review

* model.JsonCompletion(FolderCompletion) -> FolderCompletion.Map()

* rename debug facility https -> api

* folder summaries in model

* disassociate unrelated constants

* fix merge fail

* missing id assignement
2019-03-26 19:53:58 +00:00
Simon Frei
d4e81fff8a lib/model: Remove unnecessary arguments in pulling functions (#5619) 2019-03-25 12:59:22 +01:00
Jakob Borg
3a557a43cd Merge branch 'release'
* release:
  lib/model: Pass correct file info to deleteItemOnDisk (fixes #5616) (#5617)
  gui: Add missing quote char
2019-03-25 12:58:12 +01:00
Simon Frei
bc53782f88 test: Update conflict integration test (ref #5511) (#5618)
The change in 225c0dda80 (#5511) results in
conflicts being immediately scanned and synced, while the test expects just one
conflict on either side. Change it to expect the same conflict on both sides.
2019-03-25 12:53:06 +01:00
Simon Frei
e4ab9d3312 lib/model: Pass correct file info to deleteItemOnDisk (fixes #5616) (#5617) 2019-03-25 12:43:21 +01:00
Simon Frei
e31a116e6e lib/model: Pass correct file info to deleteItemOnDisk (fixes #5616) (#5617) 2019-03-25 12:42:39 +01:00
Jakob Borg
24f41e169a gui: Add missing quote char 2019-03-25 12:37:10 +01:00
Simon Frei
675f289aef gui: Add new folder state "Failed Items" (fixes #5456) (#5614) 2019-03-22 17:57:53 +00:00
Simon Frei
e7ae851900 lib/model: Debug and test fixes (#5613) 2019-03-22 14:43:47 +01:00
Jakob Borg
73fa7a6e5b go.mod: Update golang.org/x/sys
There was a vulnerability in salsa20, which we don't use, but better
safe than sorry.
2019-03-21 08:08:15 +01:00
Simon Frei
1a6d023ba8 lib/model: Run recvonly tests in temporary dirs (#5587) 2019-03-20 17:38:03 +00:00
Jakob Borg
9207535028 gui: Add missing quote char 2019-03-14 07:37:18 +01:00
Jakob Borg
24967e99a7 gui, man, authors: Update docs, translations, and contributors 2019-03-13 07:45:25 +01:00
Simon Frei
50d8c43e7c lib/config: Set UseLargeBlocks to true by default (fixes #5599) (#5600) 2019-03-12 12:59:26 +00:00
Evgeny Kuznetsov
90d9b2de2b cmd/syncthing: Add a check for particular far-future version of config (fixes #1101) 2019-03-12 08:20:47 +00:00
Evgeny Kuznetsov
04f05f102d cmd: Add check for newer config file and an option to override it (fixes #4921) (#5597)
* Add check for newer config file and override option

* Expanded error message

* Polish previous commits

* Make it newER
2019-03-12 07:12:08 +00:00
Simon Frei
289a02e994 lib/model: Integrate stat refs in folder (#5596) 2019-03-11 16:57:21 +00:00
georgespatton
84fe285659 etc: Systemd unit should declare after=multiuser.target (fixes #5346) (#5593) 2019-03-11 15:50:34 +01:00
Simon Frei
445637ebec lib/model: Pass fset & ignores on folder creation (#5592) 2019-03-11 07:28:54 +01:00
Simon Frei
3f3d2c814b lib/model: Remove unused code (#5591) 2019-03-10 17:05:39 +01:00
Simon Frei
189e44488e lib/model: Introduce must test utility (#5586)
* lib/model: Introduce must test utility

* nice
2019-03-09 18:45:36 +00:00
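A must-style helper in Go tests generally takes the shape below; this is the common pattern, not necessarily the exact signature introduced by the commit.

```go
package example

import "testing"

// must fails the test immediately if err is non-nil, keeping test bodies
// free of repeated boilerplate error checks.
func must(t *testing.T, err error) {
	t.Helper()
	if err != nil {
		t.Fatal(err)
	}
}
```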
Simon Frei
27ff20faa3 lib/model: Introduce waitForState test utility (#5585)
* lib/model: Introduce waitForState test utility

* folder id as param to waitForState
2019-03-09 10:36:55 +00:00
Simon Frei
b1564e53e4 lib/model: Improve test utilities (#5584) 2019-03-08 20:29:09 +00:00
Simon Frei
3a75b63776 lib/model: Use temp dir from osutils in tests (#5581) 2019-03-07 16:34:41 +01:00
Simon Frei
8e238c8e48 lib/model: Check before replacing existing file on pull (fixes #5571) (#5567) 2019-03-07 15:15:14 +01:00
Jakob Borg
3d5af675db gui, man, authors: Update docs, translations, and contributors 2019-03-06 07:45:23 +01:00
Jakob Borg
9da3273eb8 lib/model: Clarify fileInfoBatch.flushIfFull criteria 2019-03-05 21:34:04 +01:00
Simon Frei
bd37f6da17 lib/model: Optimize dbUpdaterRoutine (#5576) 2019-03-05 21:32:37 +01:00
Simon Frei
6940d79f5b lib/model: Use errors.Wrap for pull errors (#5563) 2019-03-04 13:01:52 +00:00
Simon Frei
0f80318ef6 lib/scanner: Consistently use CreateFileInfo and remove outdated comment (#5574) 2019-03-04 13:27:33 +01:00
Simon Frei
d6622b1f68 lib/model: setUp -> setup (#5573) 2019-03-04 12:27:10 +00:00
Simon Frei
43bcb3d5a5 lib/model: Refactor conflict name handling (#5572) 2019-03-04 12:20:40 +00:00
Evgeny Kuznetsov
e2e8f6e940 gui: Update copyright notices (fixes #5569) (#5570) 2019-03-03 13:15:17 +01:00
Jakob Borg
88b0ce892d gui, man, authors: Update docs, translations, and contributors 2019-02-27 07:45:23 +01:00
otbutz
55cd4b3d9b gui: Use handshake icon for "Introduced by" (fixes #5560) (#5561) 2019-02-26 14:18:35 +01:00
Jakob Borg
f24676ba5a lib/tlsutil: Enable TLS 1.3 when available, on test builds (fixes #5065) (#5558)
* lib/tlsutil: Enable TLS 1.3 when available, on test builds (fixes #5065)

This enables TLS 1.3 negotiation on Go 1.12 by setting the GODEBUG
variable. For now, this just gets enabled on test versions (those with a
dash in the version number).

Users wishing to enable this on production builds can set GODEBUG
manually.

The string representation of connections now includes the TLS version
and cipher suite. This becomes part of the log output on connections.
That is, when talking to an old client:

    Established secure connection .../TLS1.2-TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256

and now potentially:

    Established secure connection .../TLS1.3-TLS_AES_128_GCM_SHA256

(The cipher suite was there previously in the log output, but not the
TLS version.)

I also added this info as a new Crypto() method on the connection, and
propagate this out to the API and GUI, where it can be seen in the
connection address hover (although with bad word wrapping sometimes).

* wip

* wip
2019-02-26 11:49:02 +01:00
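A hedged sketch of deriving a "TLS1.x-CIPHER" string from a finished handshake, in the spirit of the log lines quoted above; the cryptoString name is invented, and tls.CipherSuiteName requires Go 1.14 or later.

```go
package example

import (
	"crypto/tls"
	"fmt"
)

// cryptoString formats the negotiated TLS version and cipher suite,
// e.g. "TLS1.3-TLS_AES_128_GCM_SHA256".
func cryptoString(cs tls.ConnectionState) string {
	var ver string
	switch cs.Version {
	case tls.VersionTLS12:
		ver = "TLS1.2"
	case tls.VersionTLS13:
		ver = "TLS1.3"
	default:
		ver = fmt.Sprintf("TLS(0x%04x)", cs.Version)
	}
	return fmt.Sprintf("%s-%s", ver, tls.CipherSuiteName(cs.CipherSuite))
}
```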
Simon Frei
722b3fce6a all: Hide implementations behind interfaces for mocked testing (#5548)
* lib/model: Hide implementations behind interfaces for mocked testing

* review
2019-02-26 08:09:25 +00:00
Jonas Thelemann
8a05492622 docs: Correct Docker README (#5480) (#5545)
Plus some formatting.
2019-02-26 00:37:59 +04:00
Jakob Borg
63c4e7f6d0 Merge branch 'release'
* release:
  lib/scanner: Use standard adler32 when we don't need rolling (#5556)
2019-02-25 19:36:41 +01:00
Audrius Butkevicius
f0f79a3e3e lib/scanner: Use standard adler32 when we don't need rolling (#5556)
* lib/scanner: Use standard adler32 when we don't need rolling

It seems the rolling adler32 implementation is super slow when executed on large blocks, even though I can't explain why.

BenchmarkFind1MFile-16    				     100	  18991667 ns/op	  55.21 MB/s	  398844 B/op	      20 allocs/op
BenchmarkBlock/adler32-131072/#00-16     		     200	   9726519 ns/op	1078.06 MB/s	 2654936 B/op	     163 allocs/op
BenchmarkBlock/bozo32-131072/#00-16      		      20	  73435540 ns/op	 142.79 MB/s	 2654928 B/op	     163 allocs/op
BenchmarkBlock/buzhash32-131072/#00-16   		      20	  61482005 ns/op	 170.55 MB/s	 2654928 B/op	     163 allocs/op
BenchmarkBlock/buzhash64-131072/#00-16   		      20	  61673660 ns/op	 170.02 MB/s	 2654928 B/op	     163 allocs/op
BenchmarkBlock/vanilla-adler32-131072/#00-16         	     300	   4377307 ns/op	2395.48 MB/s	 2654935 B/op	     163 allocs/op
BenchmarkBlock/adler32-16777216/#00-16               	       2	 544010100 ns/op	  19.27 MB/s	   65624 B/op	       5 allocs/op
BenchmarkBlock/bozo32-16777216/#00-16                	       1	4678108500 ns/op	   2.24 MB/s	51970144 B/op	      24 allocs/op
BenchmarkBlock/buzhash32-16777216/#00-16             	       1	3880370700 ns/op	   2.70 MB/s	51970144 B/op	      24 allocs/op
BenchmarkBlock/buzhash64-16777216/#00-16             	       1	3875911700 ns/op	   2.71 MB/s	51970144 B/op	      24 allocs/op
BenchmarkBlock/vanilla-adler32-16777216/#00-16       	     300	   4010279 ns/op	2614.72 MB/s	   65624 B/op	       5 allocs/op
BenchmarkRoll/adler32-131072/#00-16                  	    2000	    974279 ns/op	 134.53 MB/s	     270 B/op	       0 allocs/op
BenchmarkRoll/bozo32-131072/#00-16                   	    2000	    791770 ns/op	 165.54 MB/s	     270 B/op	       0 allocs/op
BenchmarkRoll/buzhash32-131072/#00-16                	    2000	    917409 ns/op	 142.87 MB/s	     270 B/op	       0 allocs/op
BenchmarkRoll/buzhash64-131072/#00-16                	    2000	    881125 ns/op	 148.76 MB/s	     270 B/op	       0 allocs/op
BenchmarkRoll/adler32-16777216/#00-16                	      10	 124000400 ns/op	 135.30 MB/s	 7548937 B/op	       0 allocs/op
BenchmarkRoll/bozo32-16777216/#00-16                 	      10	 118008080 ns/op	 142.17 MB/s	 7548928 B/op	       0 allocs/op
BenchmarkRoll/buzhash32-16777216/#00-16              	      10	 126794440 ns/op	 132.32 MB/s	 7548928 B/op	       0 allocs/op
BenchmarkRoll/buzhash64-16777216/#00-16              	      10	 126631960 ns/op	 132.49 MB/s	 7548928 B/op	       0 allocs/op

* Update benchmark_test.go

* gofmt

* fixup benchmark
2019-02-25 19:25:08 +01:00
Audrius Butkevicius
fafd30f804 lib/scanner: Use standard adler32 when we don't need rolling (#5556)
* lib/scanner: Use standard adler32 when we don't need rolling

It seems the rolling adler32 implementation is super slow when executed on large blocks, even though I can't explain why.

BenchmarkFind1MFile-16    				     100	  18991667 ns/op	  55.21 MB/s	  398844 B/op	      20 allocs/op
BenchmarkBlock/adler32-131072/#00-16     		     200	   9726519 ns/op	1078.06 MB/s	 2654936 B/op	     163 allocs/op
BenchmarkBlock/bozo32-131072/#00-16      		      20	  73435540 ns/op	 142.79 MB/s	 2654928 B/op	     163 allocs/op
BenchmarkBlock/buzhash32-131072/#00-16   		      20	  61482005 ns/op	 170.55 MB/s	 2654928 B/op	     163 allocs/op
BenchmarkBlock/buzhash64-131072/#00-16   		      20	  61673660 ns/op	 170.02 MB/s	 2654928 B/op	     163 allocs/op
BenchmarkBlock/vanilla-adler32-131072/#00-16         	     300	   4377307 ns/op	2395.48 MB/s	 2654935 B/op	     163 allocs/op
BenchmarkBlock/adler32-16777216/#00-16               	       2	 544010100 ns/op	  19.27 MB/s	   65624 B/op	       5 allocs/op
BenchmarkBlock/bozo32-16777216/#00-16                	       1	4678108500 ns/op	   2.24 MB/s	51970144 B/op	      24 allocs/op
BenchmarkBlock/buzhash32-16777216/#00-16             	       1	3880370700 ns/op	   2.70 MB/s	51970144 B/op	      24 allocs/op
BenchmarkBlock/buzhash64-16777216/#00-16             	       1	3875911700 ns/op	   2.71 MB/s	51970144 B/op	      24 allocs/op
BenchmarkBlock/vanilla-adler32-16777216/#00-16       	     300	   4010279 ns/op	2614.72 MB/s	   65624 B/op	       5 allocs/op
BenchmarkRoll/adler32-131072/#00-16                  	    2000	    974279 ns/op	 134.53 MB/s	     270 B/op	       0 allocs/op
BenchmarkRoll/bozo32-131072/#00-16                   	    2000	    791770 ns/op	 165.54 MB/s	     270 B/op	       0 allocs/op
BenchmarkRoll/buzhash32-131072/#00-16                	    2000	    917409 ns/op	 142.87 MB/s	     270 B/op	       0 allocs/op
BenchmarkRoll/buzhash64-131072/#00-16                	    2000	    881125 ns/op	 148.76 MB/s	     270 B/op	       0 allocs/op
BenchmarkRoll/adler32-16777216/#00-16                	      10	 124000400 ns/op	 135.30 MB/s	 7548937 B/op	       0 allocs/op
BenchmarkRoll/bozo32-16777216/#00-16                 	      10	 118008080 ns/op	 142.17 MB/s	 7548928 B/op	       0 allocs/op
BenchmarkRoll/buzhash32-16777216/#00-16              	      10	 126794440 ns/op	 132.32 MB/s	 7548928 B/op	       0 allocs/op
BenchmarkRoll/buzhash64-16777216/#00-16              	      10	 126631960 ns/op	 132.49 MB/s	 7548928 B/op	       0 allocs/op

* Update benchmark_test.go

* gofmt

* fixup benchmark
2019-02-25 13:29:31 +04:00
Simon Frei
ad5a046843 lib/fs: Rename fsFile* to basicFile* (#5546) 2019-02-24 18:02:02 +01:00
Jakob Borg
4b8853bfde gui, man, authors: Update docs, translations, and contributors 2019-02-20 07:45:22 +01:00
Evgeny Kuznetsov
648fcf2c45 etc: Remove unnecessary quotes in .desktop (#5540) 2019-02-15 07:35:19 +00:00
Simon Frei
ca3ae64bbf lib/db: Flush batch based on size and refactor (fixes #5531) (#5536)
Flush the batch when exceeding a certain size, instead of when reaching a number
of batched operations.
Move batch to lowlevel to be able to use it in NamespacedKV.
Increase the leveldb memory buffer from 4 to 16 MiB.
2019-02-14 23:15:13 +00:00
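The size-based flush criterion can be sketched generically as below; the thresholds, type names, and the injected flush function are illustrative and not tied to the actual goleveldb wrapper.

```go
package example

type op struct{ key, val []byte }

// batch buffers writes and flushes when either too many operations or too
// many bytes have accumulated, whichever comes first.
type batch struct {
	flush func(ops []op) error // supplied by the caller; writes to the DB
	ops   []op
	size  int
}

const (
	maxOps       = 1000
	maxBatchSize = 16 << 20 // 16 MiB, mirroring the buffer size mentioned above
)

func (b *batch) put(key, val []byte) error {
	b.ops = append(b.ops, op{key, val})
	b.size += len(key) + len(val)
	return b.flushIfFull()
}

func (b *batch) flushIfFull() error {
	if len(b.ops) < maxOps && b.size < maxBatchSize {
		return nil
	}
	return b.doFlush()
}

func (b *batch) doFlush() error {
	if len(b.ops) == 0 {
		return nil
	}
	err := b.flush(b.ops)
	b.ops, b.size = b.ops[:0], 0
	return err
}
```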
Simon Frei
e2204d0071 build: Add option to get test coverage (#5539) 2019-02-14 22:38:47 +00:00
Simon Frei
d5ff2c41dc all: Get rid of fatal logging (#5537)
* cleanup Fatal in lib/config/config.go

* cleanup Fatal in lib/config/folderconfiguration.go

* cleanup Fatal in lib/model/model.go

* cleanup Fatal in cmd/syncthing/monitor.go

* cleanup Fatal in cmd/syncthing/main.go

* cleanup Fatal in lib/api

* remove Fatal methods from logger

* lowercase in errors.Wrap

* one less channel
2019-02-14 20:29:14 +00:00
Simon Frei
7a40c42e8b cmd/syncthing: Introduce exiter to handle termination (#5532) 2019-02-13 23:07:27 +00:00
Simon Frei
905c3594b0 lib/model: Various model test fixes and polish (#5528)
* lib/model: Various model test fixes and polish

Missing calls to m.Stop()
Don't fail test if temporary test dir cleanup fails

* drop laziness
2019-02-13 18:54:04 +00:00
Jakob Borg
5fd333e4f7 gui, man, authors: Update docs, translations, and contributors 2019-02-13 07:45:23 +01:00
Simon Frei
225c0dda80 lib/model: Scan conflicts after creation (#5511)
Also unflakes and improves TestRequestRemoteRenameChanged.
2019-02-12 16:05:20 +01:00
Simon Frei
5fd2cab102 lib/model: Run more tests in tmp dir (#5527) 2019-02-12 16:04:04 +01:00
Simon Frei
4299af1c63 lib/config, lib/model: Use path from locations to check disk space for db (#5525) 2019-02-12 12:25:11 +00:00
Simon Frei
d85ef949be lib/model: Introduce setupModel test utility (#5524) 2019-02-12 12:18:13 +00:00
Audrius Butkevicius
dc929946fe all: Use new reflect based CLI (#5487) 2019-02-12 07:58:24 +01:00
Simon Frei
7bac927ac8 lib/model: Use functions to generate config (#5513) 2019-02-12 07:50:07 +01:00
Matt Robenolt
93b4597d1a gui: Localize items counts (#5520) 2019-02-12 07:49:14 +01:00
Evgeny Kuznetsov
0928628e7b etc: Add keywords to .desktop files (fixes #5365) (#5521) 2019-02-11 14:33:05 +01:00
Audrius Butkevicius
c5a79bdfe6 Fix builds on Windows without CGO (#5518) 2019-02-10 15:58:29 +01:00
Jakob Borg
04fdafa280 lib/db, lib/model: Remove dead code (#5517) 2019-02-08 16:42:58 +01:00
Jakob Borg
cb49136269 gui: Add missing translation string (fixes #5515) 2019-02-07 07:30:22 +01:00
Simon Frei
2f415d8f09 lib/model: Don't use LocalDeviceID as normal id in tests (#5512) 2019-02-06 09:32:03 +01:00
Jakob Borg
b076031bfa gui, man, authors: Update docs, translations, and contributors 2019-02-06 07:45:24 +01:00
Simon Frei
e538797ce1 lib/model: Improve TestIssue5063 (#5509)
* lib/model: Improve TestIssue5063

* add not that helpful issue comment
2019-02-05 23:07:21 +00:00
Simon Frei
5df8219bcb lib/model: Add progressEmitter to supervisor (model) (#5510) 2019-02-05 18:02:36 +00:00
Simon Frei
af4fb97538 lib/model: Fail test instead of panic due to closing channel twice (#5508) 2019-02-05 18:01:56 +00:00
Simon Frei
5d9d87f770 lib/model: Helperize test os and remove error return value (#5507) 2019-02-05 18:01:05 +00:00
Jakob Borg
c75cfcfd06 gui: Enable large blocks by default, add to config dialog (#5405)
Also moves a couple of </div> that were out of balance and reindents
accordingly, so try a whitespace-insensitive diff when reviewing...
2019-02-02 13:06:01 +01:00
Jakob Borg
6cdd1c5158 cmd/syncthing: Fixup previous commit 2019-02-02 12:54:26 +01:00
Simon Frei
41d037da1f build: Remove outdated&non-functional setup command (fixes #5454) (#5455) 2019-02-02 12:47:46 +01:00
Simon Frei
82afe73a9a cmd/syncthing, lib/config: Update default config creation (#5492)
Also remove dead code in config.Wrapper.
2019-02-02 12:43:57 +01:00
Jakob Borg
6452e16f15 golangci: Add config file 2019-02-02 12:34:57 +01:00
Iskander (Alex) Sharipov
ca47b4c218 cmd/syncthing: Correct strings.HasPrefix args order (#5498) 2019-02-02 12:18:19 +01:00
Jakob Borg
c2ddc83509 all: Revert the underscore sillyness 2019-02-02 12:16:27 +01:00
Jakob Borg
9fd270d78e all: A few more interesting linter fixes (#5502)
A couple of minor bugs and simplifications
2019-02-02 12:09:07 +01:00
Jakob Borg
0b2cabbc31 all: Even more boring linter fixes (#5501) 2019-02-02 11:45:17 +01:00
Jakob Borg
df5c1eaf01 all: Bunch of more linter fixes (#5500) 2019-02-02 11:02:28 +01:00
Jakob Borg
2111386ee4 all: Fix some linter errors (#5499)
I'm working through linter complaints; these are some fixes. Broad
categories:

1) Ignore errors where we can ignore errors: add "_ = ..." construct.
you can argue that this is annoying noise, but apart from silencing the
linter it *does* serve the purpose of highlighting that an error is
being ignored. I think this is OK, because the linter highlighted some
error cases I wasn't aware of (starting CPU profiles, for example).

2) Untyped constants where we thought we had set the type.

3) A real bug where we ineffectually assigned to a shadowed err.

4) Some dead code removed.

There'll be more of these, because not all packages are fixed, but the
diff was already large enough.
2019-02-02 10:11:42 +01:00
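Two of those categories in miniature, as a hedged illustration rather than the actual diff: an explicitly discarded error, and the shadowed-err class of bug.

```go
package example

import (
	"os"
	"runtime/pprof"
)

func startProfile(path string, syncAfter bool) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}

	// Category 1: an error we deliberately ignore, made explicit with "_ ="
	// so both the linter and the reader can see the choice.
	_ = pprof.StartCPUProfile(f)

	// Category 3: the shadowed-err bug class. Writing "err := f.Sync()" here
	// would declare a new err that shadows the outer one, making the
	// assignment ineffectual and hiding the failure from the caller; plain
	// "=" is the fix.
	if syncAfter {
		err = f.Sync()
	}
	return err
}
```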
Simon Frei
583172dc8d lib/db: Fix race in NamespacedKV (#5496) 2019-02-01 09:54:21 +01:00
Jakob Borg
1529563332 docker: Build outside GOPATH (fixes #5495) 2019-01-31 20:38:33 +01:00
Simon Frei
5605877625 cmd/syncthing: Pass SIGTERM on in monitor (fixes #5493) (#5494) 2019-01-31 18:35:20 +01:00
Simon Frei
7236d56731 lib/model: In tests disable watching for changes by default (fixes #5246) (#5485) 2019-01-30 16:38:10 +01:00
Jakob Borg
47d68a0aec gui, man, authors: Update docs, translations, and contributors 2019-01-30 07:45:27 +01:00
Simon Frei
657be162dd test, lib/rc: Integration test fixes and polish (#5488) 2019-01-29 16:59:00 +01:00
Simon Frei
8815ef922b mod: Update dependencies and tidy (fixes #5311) (#5486) 2019-01-28 22:45:56 +01:00
Simon Frei
79d109a386 lib/config: Add omitempty to DeprecatedMinHomeDiskFreePct (fixes #5482) (#5484) 2019-01-28 11:46:28 +01:00
Jakob Borg
75dcff0a0e all: Copy owner/group from parent (fixes #5445) (#5479)
This adds a folder option "CopyOwnershipFromParent" which, when set,
makes Syncthing attempt to retain the owner/group information when
syncing files. Specifically, at the finisher stage we look at the parent
dir to get owner/group and then attempt a Lchown call on the temp file.
For this to succeed Syncthing must be running with the appropriate
permissions. On Linux this is CAP_FOWNER, which can be granted by the
service manager on startup or set on the binary in the filesystem. Other
operating systems do other things, but often it's not required to run as
full "root". On Windows this patch does nothing - ownership works
differently there and is generally less of a deal, as permissions are
inherited as ACLs anyway.

There are unit tests on the Lchown functionality, which requires the
above permissions to run. There is also a unit test on the folder which
uses the fake filesystem and hence does not need special permissions.
2019-01-25 09:52:21 +01:00
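A hedged, Unix-only sketch of the mechanism described above: stat the destination's parent directory and apply its uid/gid to the temp file with Lchown. The function name is invented and error handling is minimal.

```go
//go:build !windows

package example

import (
	"fmt"
	"os"
	"path/filepath"
	"syscall"
)

// copyOwnershipFromParent gives tempFile the same uid/gid as the directory
// that finalPath will live in. Requires CAP_FOWNER or equivalent privileges.
func copyOwnershipFromParent(tempFile, finalPath string) error {
	parent := filepath.Dir(finalPath)
	info, err := os.Lstat(parent)
	if err != nil {
		return err
	}
	st, ok := info.Sys().(*syscall.Stat_t)
	if !ok {
		return fmt.Errorf("no ownership information available for %s", parent)
	}
	return os.Lchown(tempFile, int(st.Uid), int(st.Gid))
}
```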
Jakob Borg
0e07f6bef4 vendor: rm -rf vendor (#5478)
This removes our vendored dependencies. They provide no value for our
own build or development processes. For our source releases, the build
job can accomplish the same thing by a "go mod vendor" to recreate the
vendor dir (from the cryptographically verified dependencies).
2019-01-24 09:03:00 +01:00
Simon Frei
a45ba70467 lib/model: Improve errors while pulling (#5474) 2019-01-24 08:18:55 +01:00
Simon Frei
42bd42df5a lib/db: Do all update operations on a single item at once (#5441)
To do so the BlockMap struct has been removed. It behaves like any other prefixed
part of the database, but was not integrated in the recent keyer refactor. Now
the database is only flushed when files are in a consistent state.
2019-01-23 10:22:33 +01:00
Jakob Borg
6421693cce gui, man, authors: Update docs, translations, and contributors 2019-01-23 07:45:23 +01:00
Simon Frei
a371b15398 lib/db: Various polish (#5471)
naming: buf -> keyBuf
dedup: use getFileTrunc
manually inline insertFile
2019-01-20 10:24:39 +01:00
Jakob Borg
29e4b417f2 vendor: Update vendor dir from a80c0fda 2019-01-20 08:50:40 +01:00
Simon Frei
00fa77dd47 lib/db: Consistent use of buffers (#5470) 2019-01-20 08:47:20 +01:00
Simon Frei
df4d754197 lib/db: Minor polish (#5469) 2019-01-19 20:26:46 +01:00
Simon Frei
f3d735c56a lib/db: Fix, optimize and extend benchmarks (#5467) 2019-01-19 20:24:44 +01:00
Simon Frei
1d99db9bc6 lib/db: Deduplicate getFile* code (#5468) 2019-01-19 20:21:58 +01:00
Audrius Butkevicius
96bd691f55 lib/fs: Skip some tests on OpenBSD (fixes #5077) (#5466) 2019-01-19 08:28:57 +01:00
Simon Frei
22e133cce6 lib/db: Deduplicate comparing old and new items (#5465) 2019-01-18 21:19:56 +00:00
Audrius Butkevicius
a80c0fdae8 vendor: Update minio/sha256-simd 2019-01-18 20:30:40 +00:00
Simon Frei
1f87b874af lib/db: Add "dirty" function terminology to getGlobal (ref #5462) (#5463) 2019-01-18 13:01:39 +01:00
Jakob Borg
1e69997ecd lib/db: Fix iterating sequence index (fixes #5340) (#5462)
There was a problem in iterating the sequence index that could result
in missing updates. The issue is that while the index was (correctly)
iterated in a snapshot, the actual file infos were read dirty outside of
the snapshot. This fixes this by doing the reads inside the snapshot,
and also updates a couple of other places that did the same thing more
or less harmfully (I didn't investigate).

To avoid similar issues in the future I did some renaming of the
getFile* methods - the ones in a transaction are just getFile, while the
ones directly on the database are variants of getFileDirty to highlight
what's going on.
2019-01-18 11:34:18 +01:00
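The shape of the fix can be sketched with goleveldb as below: iterate the index and resolve the referenced records through the same snapshot, instead of reading the live database. The key layout and prefix are invented for illustration.

```go
package example

import (
	"github.com/syndtr/goleveldb/leveldb"
	"github.com/syndtr/goleveldb/leveldb/util"
)

// walkBySequence iterates an (invented) sequence-index prefix and looks up
// each referenced record through the same snapshot, so the index entry and
// the record it points at are read from one consistent view of the database.
func walkBySequence(db *leveldb.DB, seqPrefix []byte, fn func(key, val []byte)) error {
	snap, err := db.GetSnapshot()
	if err != nil {
		return err
	}
	defer snap.Release()

	it := snap.NewIterator(util.BytesPrefix(seqPrefix), nil)
	defer it.Release()

	for it.Next() {
		// The index value is assumed to be the key of the actual record.
		rec, err := snap.Get(it.Value(), nil)
		if err == leveldb.ErrNotFound {
			continue // the index entry raced a delete; skip it
		}
		if err != nil {
			return err
		}
		fn(it.Value(), rec)
	}
	return it.Error()
}
```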
Jakob Borg
76af0cf07b lib/protocol: Remove support for v0.13 hello messages (#5461) 2019-01-17 20:48:43 +01:00
Jakob Borg
c2cc1dadea gui, man, authors: Update docs, translations, and contributors 2019-01-16 07:45:23 +01:00
Jakob Borg
f4bf18c1b4 cmd/syncthing, gui: Settings dialog upgrades/reporting for candidates (fixes #4216) (#5457)
This adds booleans to the /system/version response to advise the GUI
whether the running version is a candidate release or not. (We could
parse it from the version string, but why duplicate the logic?)

Additionally the settings dialog locks down the upgrade and usage
reporting options on candidate releases. This matches the current
behavior, it just makes it obvious what actually *can* be chosen.
2019-01-15 08:44:46 +01:00
Jakob Borg
b01edca420 all: Update protobuf package 1.0.0 -> 1.2.0 (#5452)
Also adds a few file global options to keep the generated code similar
to what we already had.
2019-01-14 11:53:36 +01:00
Simon Frei
c87411851d lib/protocol: Fix potential deadlock when closing connection (ref #5440) (#5442) 2019-01-14 08:32:37 +01:00
Simon Frei
51f65bd23a lib/model: Reset pull errors when succeeding (fix #5450) (#5451) 2019-01-14 08:30:52 +01:00
Audrius Butkevicius
801e9b57eb Update config (#5444)
* gui: Update config from remote after saving (fixes #5354)

* Refresh is async :(
2019-01-13 23:28:17 +00:00
Jakob Borg
93ae9a1c2e lib/logger: Clean up LOGGER_DISCARD handling to unbreak tests
When using newLogger() with an io.Writer, that writer should in fact be
used...
2019-01-13 10:22:01 +01:00
Audrius Butkevicius
d4f0544d9e build: Add -cc argument that is not used by build.go (#5447) 2019-01-11 23:07:53 +00:00
Simon Frei
0b03b6a9ec lib/model: Improve filesystem operations during tests (fixes #5422)
* lib/fs, lib/model: Improve filesystem operations during tests (fixes #5422)

Introduces MustFilesystem that panics on errors and should be used for operations
during testing which must never fail.
Create temporary directories outside of testdata.

* don't do a filesystem, just a wrapper around os for testing

* fix copyright
2019-01-11 12:56:05 +00:00
Simon Frei
24ffd8be99 all: Send Close BEP msg on intentional disconnect (#5440)
This avoids waiting for the next ping and timeout before the connection is
actually closed, both by notifying the peer of the disconnect and by
immediately closing the local end of the connection after that. As a nice
side effect, info-level logging about dropped connections now has the actual
reason in it, not a generic timeout error that looks like a real problem with
the connection.
2019-01-09 17:31:09 +01:00
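Conceptually the change amounts to something like this hedged sketch: send a close message with the reason on a best-effort basis, then close the socket immediately instead of waiting for a ping timeout. sendMsg stands in for the real BEP-level message writer and is not Syncthing's actual API.

```go
package example

import (
	"net"
	"time"
)

// closeWithReason tells the peer why we are hanging up before closing the
// socket, so the remote side logs a real reason instead of a ping timeout.
func closeWithReason(conn net.Conn, sendMsg func(kind, reason string) error, reason string) error {
	_ = conn.SetWriteDeadline(time.Now().Add(5 * time.Second))
	// Best effort: the peer may already be gone, in which case we just close.
	_ = sendMsg("close", reason)
	return conn.Close()
}
```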
Jakob Borg
d924bd7bd9 gui, man, authors: Update docs, translations, and contributors 2019-01-09 07:45:23 +01:00
Jakob Borg
1e71b00936 lib/model: Sanitize paths used for auto accepted folders (fixes #5411) (#5435) 2019-01-05 18:10:02 +01:00
Jakob Borg
f3630a69f1 gui: Fix bar chart icon (fixes #5389) 2019-01-05 11:36:25 +01:00
Jakob Borg
5503175854 lib/logger: Strip control characters from log output (fixes #5428) (#5434) 2019-01-05 11:31:02 +01:00
Audrius Butkevicius
ad30192dca vendor: Update minio/sha256-simd (#5433)
* vendor: Update minio/sha256-simd

* Add go module stuff
2019-01-05 10:21:42 +01:00
Simon Frei
158559023e lib/db: Fix sequence updating for remote invalid items (#5420)
* lib/db: Fix sequence updating for remote invalid items

* fix for the unit test introduced in the previous commit

* lib/db: Polish blockmap
2019-01-04 20:19:10 +01:00
Jakob Borg
04070b4848 lib/logger: Add test for stack level (#5430) 2019-01-04 17:51:58 +01:00
Jakob Borg
597cee67d3 cmd/ur*: Updates for 1.0 2019-01-03 21:46:02 +01:00
Jakob Borg
78ccedfeec gui, man, authors: Update docs, translations, and contributors 2019-01-02 07:45:22 +01:00
Ben S
69fe471dfa gui: Make sync status accessible by screen readers (fixes #2697) (#5381) 2019-01-01 10:18:08 +01:00
Simon Frei
47e08797cb lib/model: Don't pull if ignores failed to load and cleanup (#5418) 2019-01-01 10:17:14 +01:00
Jakob Borg
48dab8f201 Merge branch 'release'
* release:
  cmd/syncthing: Updated code name
2019-01-01 09:38:30 +01:00
Simon Frei
643cfe2e98 lib/config: Revert #5415 (#5417)
This reverts commit 9d075781ad:
"cmd/syncthing: Improve messages when free space is running out (#5415)"
2018-12-30 21:57:41 +01:00
Simon Frei
8bb9878f26 lib/model: Check folder context before setting error state (#5416) 2018-12-30 21:56:16 +01:00
Maurizio Tomasi
9d075781ad cmd/syncthing: Improve messages when free space is running out (#5415) 2018-12-29 20:48:20 +01:00
Hugo Locurcio
2ad40734f8 gui: Use a modern font stack inspired by Bootstrap 4 (#5407)
See https://github.com/twbs/bootstrap/pull/19098 for more information.
2018-12-26 23:05:20 +01:00
Jakob Borg
84ca86f095 gui, man, authors: Update docs, translations, and contributors 2018-12-26 07:45:23 +01:00
Jakob Borg
0ac6ea6f1e gui: Reformat HTML & JS for consistent white space
(white space only change)
2018-12-25 14:26:46 +01:00
Jakob Borg
41469c5393 lib/rc: Remove dead lock (fixes #5403) 2018-12-23 09:11:15 +01:00
Simon Frei
4783294a09 lib/model: Don't pass nil *Matcher to performFinish (fixes #5401) (#5402) 2018-12-22 21:58:17 +01:00
Jakob Borg
c8123bda28 cmd/syncthing: Don't rewrite config on startup unless necessary (#5399) 2018-12-21 14:26:36 +00:00
Simon Frei
99c9d65ddf lib/scanner: Check ignore patterns before reporting error (fixes #5397) (#5398) 2018-12-21 12:08:15 +01:00
Simon Frei
fc81e2b3d7 lib/model: Do folder watch operations under lock (fixes #5392) (#5395) 2018-12-21 12:06:21 +01:00
Simon Frei
c3b3e02f21 test: Update configs and revert changes during testing (#5393) 2018-12-21 11:50:28 +01:00
Simon Frei
2626143fc5 lib/rc: Fix hangups when Syncthing process ends unexpectedly (#5383) 2018-12-21 11:49:04 +01:00
Stefan Tatschner
ae0dfcd7ca build: Allow specifying the go command (#5396) 2018-12-20 15:24:35 +01:00
Jakob Borg
58f3e56729 gui, man, authors: Update docs, translations, and contributors 2018-12-19 07:45:22 +01:00
Jakob Borg
944ddcf768 all: Become a Go module (fixes #5148) (#5384)
* go mod init; rm -rf vendor

* tweak proto files and generation

* go mod vendor

* clean up build.go

* protobuf literals in tests

* downgrade gogo/protobuf
2018-12-18 12:36:38 +01:00
Simon Frei
3cc8918eb4 lib/db: Defer unlock to avoid hangup on panic (#5388) 2018-12-17 14:59:09 +01:00
Simon Frei
c40c9a8d6a lib/scanner: Don't report error on missing items (fixes #5385) (#5387) 2018-12-17 14:52:15 +01:00
2637 changed files with 38498 additions and 1085040 deletions

.codecov.yml (new file, 5 lines)

@@ -0,0 +1,5 @@
coverage:
  range: "40...100"
ignore:
  - "**.pb.go"


@@ -1,42 +0,0 @@
### DO NOT REPORT SECURITY ISSUES IN THIS ISSUE TRACKER
Instead, contact security@syncthing.net directly - see
https://syncthing.net/security.html for more information.
### DO NOT POST SUPPORT REQUESTS OR GENERAL QUESTIONS IN THIS ISSUE TRACKER
Please use the forum at https://forum.syncthing.net/ where a large number of
helpful people hang out. This issue tracker is for reporting bugs or feature
requests directly to the developers. Worst case you might get a short
"that's a bug, please report it on GitHub" response on the forum, in which
case we thank you for your patience and following our advice. :)
### Please use the correct issue tracker
If your problem relates to a Syncthing wrapper or [sub-project](https://github.com/syncthing) such as [Syncthing for Android](https://github.com/syncthing/syncthing-android/issues), [SyncTrayzor](https://github.com/canton7/synctrayzor) or the [documentation](https://github.com/syncthing/docs/issues), please use their respective issue trackers.
### Does your log mention database corruption?
If your Syncthing log reports panics because of database corruption it is most likely a fault with your system's storage or memory. Affected log entries will contain lines starting with `panic: leveldb`. You will need to delete the index database to clear this, by running `syncthing -reset-database`.
### Please do post actual bug reports and feature requests.
If your issue is a bug report, replace this boilerplate with a description
of the problem, being sure to include at least:
- what happened,
- what you expected to happen instead, and
- any steps to reproduce the problem.
Also fill out the version information below and add log output or
screenshots as appropriate.
If your issue is a feature request, simply replace this template text in
its entirety.
### Version Information
Syncthing Version: v0.x.y
OS Version: Windows 7 / Ubuntu 14.04 / ...
Browser Version: (if applicable, for GUI issues)

.github/ISSUE_TEMPLATE/01-feature.md (new file, 13 lines)

@@ -0,0 +1,13 @@
---
name: Feature request
about: If you're just not sure how to do something, see "ask a question".
labels: enhancement, needs-triage
---
### Include required information
Please be sure to include at least:
- what problem your new feature would solve
- how or why you think it is generally useful (i.e., not just for you)
- what alternatives or workarounds you considered

.github/ISSUE_TEMPLATE/02-bug.md (new file, 23 lines)

@@ -0,0 +1,23 @@
---
name: Bug report
about: If you're actually looking for support, see "ask a question".
labels: bug, needs-triage
---
### Does your log mention database corruption?
If your Syncthing log reports panics because of database corruption it is
most likely a fault with your system's storage or memory. Affected log
entries will contain lines starting with `panic: leveldb`. You will need to
delete the index database to clear this, by running `syncthing
-reset-database`.
### Include required information
Please be sure to include at least:
- which version of Syncthing and what operating system you are using
- browser and version, if applicable
- what happened,
- what you expected to happen instead, and
- any steps to reproduce the problem.

.github/ISSUE_TEMPLATE/config.yml (new file, 8 lines)

@@ -0,0 +1,8 @@
blank_issues_enabled: false
contact_links:
  - name: Ask a question
    url: https://forum.syncthing.net/
    about: Ask questions, get support, and discuss with other community members.
  - name: Android issues
    url: https://github.com/syncthing/syncthing-android/issues/
    about: The Android app has its own issue tracker.

.github/SECURITY.md (new file, 10 lines)

@@ -0,0 +1,10 @@
## Reporting a Vulnerability
If you believe that you've found a Syncthing-related security vulnerability,
please report it by sending email to the address security@syncthing.net. The
[PGP key for security@syncthing.net
(B683AD7B76CAB013)](https://syncthing.net/security-key.txt) can be used to
send encrypted mail or to verify responses received from that address.
You can read more about Syncthing security at
https://syncthing.net/security/.

.gitignore (8 changed lines)

@@ -15,11 +15,5 @@ coverage.xml
syncthing.sig
RELEASE
deb
lib/auto/gui.files.go
snapcraft.yaml
prime/
snap/
parts/
stage/
*.snap
*.bz2
/repos

.golangci.yml (new file, 26 lines)

@@ -0,0 +1,26 @@
linters-settings:
  maligned:
    suggest-new: true
linters:
  enable-all: true
  disable:
    - goimports
    - depguard
    - lll
    - gochecknoinits
    - gochecknoglobals
    - gofmt
    - scopelint
    - gocyclo
    - funlen
    - wsl
    - gocognit
    - godox
service:
  golangci-lint-version: 1.21.x
  prepare:
    - rm -f go.sum # 1.12 -> 1.13 issues with QUIC-go
    - GO111MODULE=on go mod vendor
    - go run build.go assets

AUTHORS (66 changed lines)

@@ -16,20 +16,29 @@
Aaron Bieber (qbit) <qbit@deftly.net>
Adam Piggott (ProactiveServices) <aD@simplypeachy.co.uk> <simplypeachy@users.noreply.github.com> <ProactiveServices@users.noreply.github.com> <adam@proactiveservices.co.uk>
Adel Qalieh (adelq) <aqalieh95@gmail.com> <adelq@users.noreply.github.com>
Alan Pope <alan@popey.com>
Alberto Donato <albertodonato@users.noreply.github.com>
Alessandro G. (alessandro.g89) <alessandro.g89@gmail.com>
Alex Xu <alex.hello71@gmail.com>
Alexander Graf (alex2108) <register-github@alex-graf.de>
Alexandre Viau (aviau) <alexandre@alexandreviau.net> <aviau@debian.org>
Aman Gupta <aman@tmm1.net>
Anderson Mesquita (andersonvom) <andersonvom@gmail.com>
andresvia <andres.via@gmail.com>
Andrew Dunham (andrew-d) <andrew@du.nham.ca>
Andrew Rabert (nvllsvm) <ar@nullsum.net> <6550543+nvllsvm@users.noreply.github.com>
Andrey D (scienmind) <scintertech@cryptolab.net> <scienmind@users.noreply.github.com>
André Colomb (acolomb) <src@andre.colomb.de> <github.com@andre.colomb.de>
andyleap <andyleap@gmail.com>
Anjan Momi <anjan@momi.ca>
Antoine Lamielle (0x010C) <antoine.lamielle@0x010c.fr> <gh@0x010c.fr>
Antony Male (canton7) <antony.male@gmail.com>
Aranjedeath <Aranjedeath@users.noreply.github.com>
Arkadiusz Tymiński <gevleeog@gmail.com>
Arthur Axel fREW Schmidt (frioux) <frew@afoolishmanifesto.com> <frioux@gmail.com>
Audrius Butkevicius (AudriusButkevicius) <audrius.butkevicius@gmail.com>
Artur Zubilewicz <AkaZecik@users.noreply.github.com>
Audrius Butkevicius (AudriusButkevicius) <audrius.butkevicius@gmail.com> <github@audrius.rocks>
Aurélien Rainone <476650+arl@users.noreply.github.com>
BAHADIR YILMAZ <bahadiryilmaz32@gmail.com>
Bart De Vries (mogwa1) <devriesb@gmail.com>
Ben Curthoys (bencurthoys) <ben@bencurthoys.com>
@@ -40,21 +49,26 @@ Benedikt Heine (bebehei) <bebe@bebehei.de>
Benedikt Morbach <benedikt.morbach@googlemail.com>
Benno Fünfstück <benno.fuenfstueck@gmail.com>
Benny Ng (tpng) <benny.tpng@gmail.com>
boomsquared <54829195+boomsquared@users.noreply.github.com>
Boqin Qin <bobbqqin@bupt.edu.cn>
Boris Rybalkin <ribalkin@gmail.com>
Brandon Philips (philips) <brandon@ifup.org>
Brendan Long (brendanlong) <self@brendanlong.com>
Brian R. Becker (brbecker) <brbecker@gmail.com>
Caleb Callaway (cqcallaw) <enlightened.despot@gmail.com>
Carsten Hagemann (Moter8) <moter8@gmail.com>
Carsten Hagemann (carstenhag) <moter8@gmail.com> <carsten@chagemann.de>
Cathryne Linenweaver (Cathryne) <cathryne.linenweaver@gmail.com> <Cathryne@users.noreply.github.com> <katrinleinweber@MAC.local>
Cedric Staniewski (xduugu) <cedric@gmx.ca>
chenrui <rui@meetup.com>
Chris Howie (cdhowie) <me@chrishowie.com>
Chris Joel (cdata) <chris@scriptolo.gy>
Chris Tonkinson <chris@masterbran.ch>
chucic <chucic@seznam.cz>
Colin Kennedy (moshen) <moshen.colin@gmail.com>
Cromefire_ <tim.l@nghorst.net>
Cromefire_ <tim.l@nghorst.net> <26320625+cromefire@users.noreply.github.com>
Cyprien Devillez <cypx@users.noreply.github.com>
Dale Visser <dale.visser@live.com>
Dan <benda.daniel@gmail.com>
Daniel Bergmann (brgmnn) <dan.arne.bergmann@gmail.com> <brgmnn@users.noreply.github.com>
Daniel Harte (norgeous) <daniel@harte.me> <daniel@danielharte.co.uk> <norgeous@users.noreply.github.com>
Daniel Martí (mvdan) <mvdan@mvdan.cc>
@@ -62,28 +76,39 @@ Darshil Chanpura (dtchanpura) <dtchanpura@gmail.com> <dcprime314@gmail.com>
David Rimmer (dinosore) <dinosore@dbrsoftware.co.uk>
Denis A. (dva) <denisva@gmail.com>
Dennis Wilson (snnd) <dw@risu.io>
dependabot-preview[bot] <dependabot-preview[bot]@users.noreply.github.com> <27856297+dependabot-preview[bot]@users.noreply.github.com>
dependabot[bot] <dependabot[bot]@users.noreply.github.com>
derekriemer <derek.riemer@colorado.edu>
desbma <desbma@users.noreply.github.com>
Dmitry Saveliev (dsaveliev) <d.e.saveliev@gmail.com>
Domenic Horner <domenic@tgxn.net>
Dominik Heidler (asdil12) <dominik@heidler.eu>
Elias Jarlebring (jarlebring) <jarlebring@gmail.com>
Elliot Huffman <thelich2@gmail.com>
Emil Hessman (ceh) <emil@hessman.se>
Erik Meitner (WSGCSysadmin) <e.meitner@willystreet.coop>
Evgeny Kuznetsov <evgeny@kuznetsov.md>
Federico Castagnini (facastagnini) <federico.castagnini@gmail.com>
Felix Ableitner (Nutomic) <me@nutomic.com>
Felix Unterpaintner (bigbear2nd) <bigbear2nd@gmail.com>
Francois-Xavier Gsell (zukoo) <fxgsell@gmail.com>
Frank Isemann (fti7) <frank@isemann.name>
georgespatton <georgespatton@users.noreply.github.com>
ghjklw <malo@jaffre.info>
Gilli Sigurdsson (gillisig) <gilli@vx.is>
Graham Miln (grahammiln) <graham.miln@dssw.co.uk> <graham.miln@miln.eu>
greatroar <61184462+greatroar@users.noreply.github.com>
Han Boetes <han@boetes.org>
Harrison Jones (harrisonhjones) <harrisonhjones@users.noreply.github.com>
Heiko Zuerker (Smiley73) <heiko@zuerker.org>
Hugo Locurcio <hugo.locurcio@hugo.pro>
Iain Barnett <iainspeed@gmail.com>
Ian Johnson (anonymouse64) <ian.johnson@canonical.com> <person.uwsome@gmail.com>
Ilya Brin <464157+ilyabrin@users.noreply.github.com>
Iskander Sharipov (Alex) <quasilyte@gmail.com>
Jaakko Hannikainen (jgke) <jgke@jgke.fi>
Jacek Szafarkiewicz (hadogenes) <szafar@linux.pl>
Jacob <jyundt@gmail.com>
Jake Peterson (acogdev) <jake@acogdev.com>
Jakob Borg (calmh) <jakob@nym.se> <jakob@kastelo.net>
James Patterson (jpjp) <jamespatterson@operamail.com> <jpjp@users.noreply.github.com>
@@ -91,20 +116,26 @@ janost <janost@tuta.io>
Jaroslav Malec (dzarda) <dzardacz@gmail.com>
jaseg <githubaccount@jaseg.net>
Jaya Chithra (jayachithra) <s.k.jayachithra@gmail.com>
jelle van der Waa <jelle@vdwaa.nl>
Jens Diemer (jedie) <github.com@jensdiemer.de> <git@jensdiemer.de>
Jerry Jacobs (xor-gate) <jerry.jacobs@xor-gate.org> <xor-gate@users.noreply.github.com>
Jochen Voss (seehuhn) <voss@seehuhn.de>
Johan Andersson <j@i19.se>
Johan Vromans (sciurius) <jvromans@squirrel.nl>
John Rinehart (fuzzybear3965) <johnrichardrinehart@gmail.com>
Jonas Thelemann <e-mail@jonas-thelemann.de>
Jonathan <artback@protonmail.com>
Jonathan Cross <jcross@gmail.com>
Jose Manuel Delicado (jmdaweb) <jmdaweb@hotmail.com> <jmdaweb@users.noreply.github.com>
Jörg Thalheim <Mic92@users.noreply.github.com>
Jędrzej Kula <kula.jedrek@gmail.com>
Kalle Laine <pahakalle@protonmail.com>
Karol Różycki (krozycki) <rozycki.karol@gmail.com>
Keith Turner <kturner@apache.org>
Kelong Cong (kc1212) <kc04bc@gmx.com> <kc1212@users.noreply.github.com>
Ken'ichi Kamada (kamadak) <kamada@nanohz.org>
Kevin Allen (ironmig) <kma1660@gmail.com>
Kevin Bushiri (keevBush) <keevbush@gmail.com> <36192217+keevBush@users.noreply.github.com>
Kevin White, Jr. (kwhite17) <kevinwhite1710@gmail.com>
klemens <ka7@github.com>
Kurt Fitzner (Kudalufi) <kurt@va1der.ca> <kurt.fitzner@gmail.com>
@@ -115,33 +146,51 @@ Leo Arias (elopio) <yo@elopio.net>
Liu Siyuan (liusy182) <liusy182@gmail.com> <liusy182@hotmail.com>
Lode Hoste (Zillode) <zillode@zillode.be>
Lord Landon Agahnim (LordLandon) <lordlandon@gmail.com>
Lukas Lihotzki <lukas@lihotzki.de>
Majed Abdulaziz (majedev) <majed.alhajry@gmail.com>
Marc Laporte (marclaporte) <marc@marclaporte.com> <marc@laporte.name>
Marc Pujol (kilburn) <kilburn@la3.org>
Marcin Dziadus (marcindziadus) <dziadus.marcin@gmail.com>
marco-m <marco.molteni@laposte.net>
Marcus Legendre <marcus.legendre@gmail.com>
Mario Majila <mariustshipichik@gmail.com>
Mark Pulford (mpx) <mark@kyne.com.au>
Mateusz Naściszewski (mateon1) <matin1111@wp.pl>
Mateusz Ż <thedead4fun@live.com>
Matic Potočnik <hairyfotr@gmail.com>
Matt Burke (burkemw3) <mburke@amplify.com> <burkemw3@gmail.com>
Matt Robenolt <matt@ydekproductions.com>
Matteo Ruina <matteo.ruina@gmail.com>
Maurizio Tomasi <ziotom78@gmail.com>
Max Schulze (kralo) <max.schulze@online.de> <kralo@users.noreply.github.com>
MaximAL <almaximal@ya.ru>
Maxime Thirouin <m@moox.io>
Michael Jephcote (Rewt0r) <rewt0r@gmx.com> <Rewt0r@users.noreply.github.com>
Michael Ploujnikov (plouj) <ploujj@gmail.com>
Michael Rienstra <mrienstra@gmail.com>
Michael Tilli (pyfisch) <pyfisch@gmail.com>
Mike Boone <mike@boonedocks.net>
MikeLund <MikeLund@users.noreply.github.com>
MikolajTwarog <43782609+MikolajTwarog@users.noreply.github.com>
Mingxuan Lin <gdlmx@users.noreply.github.com>
mv1005 <49659413+mv1005@users.noreply.github.com>
Nate Morrison (nrm21) <natemorrison@gmail.com>
Nicholas Rishel (PrototypeNM1) <rishel.nick@gmail.com> <PrototypeNM1@users.noreply.github.com>
Nico Stapelbroek <3368018+nstapelbroek@users.noreply.github.com>
Nicolas Braud-Santoni <nicolas@braud-santoni.eu>
Nicolas Perraut <n.perraut@gmail.com>
Niels Peter Roest (Niller303) <nielsproest@hotmail.com> <seje.niels@hotmail.com>
Nils Jakobi (thunderstorm99) <jakobi.nils@gmail.com>
NinoM4ster <ninom4ster@gmail.com>
Nitroretro <43112364+Nitroretro@users.noreply.github.com>
NoLooseEnds <jon.koslung@gmail.com>
Oliver Freyermuth <o.freyermuth@googlemail.com>
otbutz <tbutz@optitool.de>
Otiel <Otiel@users.noreply.github.com>
Oyebanji Jacob Mayowa <oyebanji05@gmail.com>
Pablo <pbaeyens31+github@gmail.com>
Pascal Jungblut (pascalj) <github@pascalj.com> <mail@pascal-jungblut.com>
Paul Brit <paulbrit44@gmail.com>
Pawel Palenica (qepasa) <pawelpalenica11@gmail.com>
Paweł Rozlach <vespian@users.noreply.github.com>
perewa <cavalcante.ten@gmail.com>
@@ -157,17 +206,21 @@ Piotr Bejda (piobpl) <piotrb10@gmail.com>
Pramodh KP (pramodhkp) <pramodh.p@directi.com> <1507241+pramodhkp@users.noreply.github.com>
Richard Hartmann <RichiH@users.noreply.github.com>
Robert Carosi (nov1n) <robert@carosi.nl>
Robin Schoonover <robin@cornhooves.org>
Roman Zaynetdinov (zaynetro) <romanznet@gmail.com>
Ross Smith II (rasa) <ross@smithii.com>
rubenbe <github-com-00ff86@vandamme.email>
Ruslan Yevdokymov <38809160+ruslanye@users.noreply.github.com>
Ryan Sullivan (KayoticSully) <kayoticsully@gmail.com>
Sacheendra Talluri (sacheendra) <sacheendra.t@gmail.com>
Scott Klupfel (kluppy) <kluppy@going2blue.com>
Sergey Mishin (ralder) <ralder@yandex.ru>
Shaarad Dalvi <60266155+shaaraddalvi@users.noreply.github.com>
Simon Frei (imsodin) <freisim93@gmail.com>
Simon Mwepu <simonmwepu@gmail.com>
Sly_tom_cat <slytomcat@mail.ru>
Stefan Kuntz (Stefan-Code) <stefan.github@gmail.com> <Stefan.github@gmail.com>
Stefan Tatschner (rumpelsepp) <stefan@sevenbyte.org> <rumpelsepp@sevenbyte.org>
Stefan Tatschner (rumpelsepp) <stefan@sevenbyte.org> <rumpelsepp@sevenbyte.org> <stefan@rumpelsepp.org>
Suhas Gundimeda (snugghash) <suhas.gundimeda@gmail.com> <snugghash@gmail.com>
Taylor Khan (nelsonkhan) <nelsonkhan@gmail.com>
Thomas Hipp <thomashipp@gmail.com>
@@ -175,10 +228,12 @@ Tim Abell (timabell) <tim@timwise.co.uk>
Tim Howes (timhowes) <timhowes@berkeley.edu>
Tobias Nygren (tnn2) <tnn@nygren.pp.se>
Tobias Tom (tobiastom) <t.tom@succont.de>
Tomas Cerveny (kozec) <kozec@kozec.com>
Tom Jakubowski <tom@crystae.net>
Tomasz Wilczyński <5626656+tomasz1986@users.noreply.github.com>
Tommy Thorn <tommy-github-email@thorn.ws>
Tully Robinson (tojrobinson) <tully@tojr.org>
Tyler Brazier (tylerbrazier) <tyler@tylerbrazier.com>
Tyler Kropp <kropptyler@gmail.com>
Unrud (Unrud) <unrud@openaliasbox.org> <Unrud@users.noreply.github.com>
Veeti Paananen (veeti) <veeti.paananen@rojekti.fi>
Victor Buinsky (buinsky) <vix_booja@tut.by>
@@ -187,6 +242,7 @@ Vladimir Rusinov <vrusinov@google.com>
wangguoliang <liangcszzu@163.com>
William A. Kennington III (wkennington) <william@wkennington.com>
Wulf Weich (wweich) <wweich@users.noreply.github.com> <wweich@gmx.de> <wulf@weich-kr.de>
xarx00 <xarx00@users.noreply.github.com>
Xavier O. (damajor) <damajor@gmail.com>
xjtdy888 (xjtdy888) <xjtdy888@163.com>
Yannic A. (eipiminus1) <eipiminusone+github@gmail.com> <eipiminus1@users.noreply.github.com>

View File

@@ -1,6 +1,7 @@
FROM golang:1.11 AS builder
ARG GOVERSION=latest
FROM golang:$GOVERSION AS builder
WORKDIR /go/src/github.com/syncthing/syncthing
WORKDIR /src
COPY . .
ENV CGO_ENABLED=0
@@ -14,19 +15,15 @@ EXPOSE 8384 22000 21027/udp
VOLUME ["/var/syncthing"]
RUN apk add --no-cache ca-certificates su-exec
RUN apk add --no-cache ca-certificates su-exec tzdata
COPY --from=builder /go/src/github.com/syncthing/syncthing/syncthing /bin/syncthing
COPY --from=builder /src/syncthing /bin/syncthing
COPY --from=builder /src/script/docker-entrypoint.sh /bin/entrypoint.sh
ENV PUID=1000 PGID=1000
ENV PUID=1000 PGID=1000 HOME=/var/syncthing
HEALTHCHECK --interval=1m --timeout=10s \
CMD nc -z localhost 8384 || exit 1
CMD nc -z 127.0.0.1 8384 || exit 1
ENTRYPOINT \
chown "${PUID}:${PGID}" /var/syncthing \
&& su-exec "${PUID}:${PGID}" \
env HOME=/var/syncthing \
/bin/syncthing \
-home /var/syncthing/config \
-gui-address 0.0.0.0:8384
ENV STGUIADDRESS=0.0.0.0:8384
ENTRYPOINT ["/bin/entrypoint.sh", "/bin/syncthing", "-home", "/var/syncthing/config"]

9
Dockerfile.builder Normal file
View File

@@ -0,0 +1,9 @@
ARG GOVERSION=latest
FROM golang:$GOVERSION
# FPM to build Debian packages
RUN apt-get update && apt-get install -y --no-install-recommends \
locales rubygems ruby-dev build-essential git \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* \
&& gem install --no-ri --no-rdoc fpm

29
Dockerfile.stdiscosrv Normal file
View File

@@ -0,0 +1,29 @@
ARG GOVERSION=latest
FROM golang:$GOVERSION AS builder
WORKDIR /src
COPY . .
ENV CGO_ENABLED=0
ENV BUILD_HOST=syncthing.net
ENV BUILD_USER=docker
RUN rm -f stdiscosrv && go run build.go -no-upgrade build stdiscosrv
FROM alpine
EXPOSE 19200 8443
VOLUME ["/var/stdiscosrv"]
RUN apk add --no-cache ca-certificates su-exec
COPY --from=builder /src/stdiscosrv /bin/stdiscosrv
COPY --from=builder /src/script/docker-entrypoint.sh /bin/entrypoint.sh
ENV PUID=1000 PGID=1000 HOME=/var/stdiscosrv
HEALTHCHECK --interval=1m --timeout=10s \
CMD nc -z localhost 8443 || exit 1
WORKDIR /var/stdiscosrv
ENTRYPOINT ["/bin/entrypoint.sh", "/bin/stdiscosrv"]

29
Dockerfile.strelaysrv Normal file
View File

@@ -0,0 +1,29 @@
ARG GOVERSION=latest
FROM golang:$GOVERSION AS builder
WORKDIR /src
COPY . .
ENV CGO_ENABLED=0
ENV BUILD_HOST=syncthing.net
ENV BUILD_USER=docker
RUN rm -f strelaysrv && go run build.go -no-upgrade build strelaysrv
FROM alpine
EXPOSE 22067 22070
VOLUME ["/var/strelaysrv"]
RUN apk add --no-cache ca-certificates su-exec
COPY --from=builder /src/strelaysrv /bin/strelaysrv
COPY --from=builder /src/script/docker-entrypoint.sh /bin/entrypoint.sh
ENV PUID=1000 PGID=1000 HOME=/var/strelaysrv
HEALTHCHECK --interval=1m --timeout=10s \
CMD nc -z localhost 22067 || exit 1
WORKDIR /var/strelaysrv
ENTRYPOINT ["/bin/entrypoint.sh", "/bin/strelaysrv"]

View File

@@ -1,39 +1,34 @@
# Docker Container for Syncthing
Use the Dockerfile in this repo, or pull the `syncthing/syncthing` image
from Docker Hub. Use volumes to have the synchronized files available on the
host.
from Docker Hub.
The exposed volumes are by default:
/var/syncthing/config - the configuration and index directory into the Container
/var/syncthing - the default sync folder into the Container
You can add more folders and map them as you prefer.
Use the `/var/syncthing` volume to have the synchronized files available on the
host. You can add more folders and map them as you prefer.
Note that Syncthing runs as UID 1000 and GID 1000 by default. These may be
altered with the ``PUID`` and ``PGID`` environment variables.
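As a minimal sketch only (not part of the README), mapping the container user onto the invoking host user could look like the following; the `PUID`/`PGID` values and volume paths are illustrative, taken from the examples below:
```
# Run the container as the current host user so synced files keep host ownership
$ docker run -e PUID=$(id -u) -e PGID=$(id -g) \
    -p 8384:8384 -p 22000:22000 \
    -v /wherever/st-cfg:/var/syncthing/config \
    -v /wherever/st-sync:/var/syncthing \
    syncthing/syncthing:latest
```
The inline entrypoint shown in the old Dockerfile uses these values to chown `/var/syncthing` and drop privileges with `su-exec`; the new `docker-entrypoint.sh` is expected to behave the same way.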
Example usage:
## Example Usage
```
$ docker pull syncthing/syncthing
$ docker run -p 8384:8384 -p 22000:22000 \
-v /wherever/st-cfg:/var/syncthing/config \
-v /wherever/st-sync:/var/syncthing \
syncthing/syncthing:latest
```
Note that local device discovery will not work with the above command resulting
in poor local transfer rates if local device addresses are not manually
configured.
## Discovery
Note that local device discovery will not work with the above command,
resulting in poor local transfer rates if local device addresses are not
manually configured.
To allow local discovery, the docker host network can be used instead:
```
$ docker pull syncthing/syncthing
$ docker run --network=host \
-v /wherever/st-cfg:/var/syncthing/config \
-v /wherever/st-sync:/var/syncthing \
syncthing/syncthing:latest
```
@@ -41,3 +36,24 @@ $ docker run --network=host \
Be aware that syncthing alone is now in control of what interfaces and ports it
listens on. You can edit the syncthing configuration to change the defaults if
there are conflicts.
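As a hedged illustration (not from the README), a conflict on port 8384 under host networking could be worked around by moving the GUI via the `STGUIADDRESS` environment variable described in the next section; port 9090 here is an arbitrary example:
```
$ docker run --network=host \
    -e STGUIADDRESS=127.0.0.1:9090 \
    -v /wherever/st-cfg:/var/syncthing/config \
    -v /wherever/st-sync:/var/syncthing \
    syncthing/syncthing:latest
```
Note that the image's HEALTHCHECK still probes port 8384, so it no longer reflects the GUI's actual state after such a change.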
## GUI Security
By default Syncthing inside the Docker image listens on 0.0.0.0:8384 to
allow GUI connections via the Docker proxy. This is set by the
`STGUIADDRESS` environment variable in the Dockerfile, as it differs from
what Syncthing would otherwise use by default. This means you should set up
authentication in the GUI, like for any other externally reachable Syncthing
instance. If you do not require the GUI, or you use host networking, you can
unset the `STGUIADDRESS` variable to have Syncthing fall back to listening
on 127.0.0.1:
```
$ docker pull syncthing/syncthing
$ docker run -e STGUIADDRESS= \
-v /wherever/st-sync:/var/syncthing \
syncthing/syncthing:latest
```
With the environment variable unset Syncthing will follow what is set in the
configuration file / GUI settings dialog.
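If the GUI should stay reachable through the Docker proxy but only from the host itself, one possible middle ground, purely as a sketch and not something the README prescribes, is to keep `STGUIADDRESS` at its default and bind the published port to the host's loopback interface instead:
```
$ docker run -p 127.0.0.1:8384:8384 -p 22000:22000 \
    -v /wherever/st-cfg:/var/syncthing/config \
    -v /wherever/st-sync:/var/syncthing \
    syncthing/syncthing:latest
```
Authentication in the GUI is still advisable; the loopback binding only limits which host interfaces forward to the container.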

View File

@@ -2,7 +2,6 @@
---
[![Latest Downloads](https://img.shields.io/badge/latest-downloads-brightgreen.svg?style=flat-square)](https://build.syncthing.net/latest/)
[![Latest Linux & Cross Build](https://img.shields.io/teamcity/https/build.syncthing.net/s/Syncthing_BuildLinuxCross.svg?style=flat-square&label=linux+%26+cross+build)](https://build.syncthing.net/viewType.html?buildTypeId=Syncthing_BuildLinuxCross&guest=1)
[![Latest Windows Build](https://img.shields.io/teamcity/https/build.syncthing.net/s/Syncthing_BuildWindows.svg?style=flat-square&label=windows+build)](https://build.syncthing.net/viewType.html?buildTypeId=Syncthing_BuildWindows&guest=1)
[![Latest Mac Build](https://img.shields.io/teamcity/https/build.syncthing.net/s/Syncthing_BuildMac.svg?style=flat-square&label=mac+build)](https://build.syncthing.net/viewType.html?buildTypeId=Syncthing_BuildMac&guest=1)
@@ -63,6 +62,10 @@ There are a few examples for keeping Syncthing running in the background
on your system in [the etc directory][3]. There are also several [GUI
implementations][11] for Windows, Mac and Linux.
## Docker
To run Syncthing in Docker, see [the Docker README][16].
## Vote on features/bugs
We'd like to encourage you to [vote][12] on issues that matter to you.
@@ -111,4 +114,5 @@ All code is licensed under the [MPLv2 License][7].
[13]: https://github.com/syncthing/syncthing/blob/master/GOALS.md
[14]: assets/logo-text-128.png
[15]: https://syncthing.net/
[16]: https://github.com/syncthing/syncthing/blob/master/README-Docker.md

(10 binary files changed; contents not shown. Most are image assets whose sizes shrank slightly, e.g. 9.8 KiB → 9.5 KiB, 20 KiB → 19 KiB and 40 KiB → 38 KiB.)

537
build.go
View File

@@ -15,6 +15,7 @@ import (
"compress/flate"
"compress/gzip"
"crypto/sha256"
"encoding/json"
"errors"
"flag"
"fmt"
@@ -29,7 +30,6 @@ import (
"runtime"
"strconv"
"strings"
"text/template"
"time"
)
@@ -39,26 +39,30 @@ var (
goos string
noupgrade bool
version string
goVersion float64
goCmd string
race bool
debug = os.Getenv("BUILDDEBUG") != ""
noBuildGopath bool
extraTags string
installSuffix string
pkgdir string
cc string
debugBinary bool
coverage bool
timeout = "120s"
numVersions = 5
)
type target struct {
name string
debname string
debdeps []string
debpre string
debpost string
description string
buildPkg string
buildPkgs []string
binaryName string
archiveFiles []archiveFile
systemdServices []string
installationFiles []archiveFile
tags []string
}
@@ -72,9 +76,8 @@ type archiveFile struct {
var targets = map[string]target{
"all": {
// Only valid for the "build" and "install" commands as it lacks all
// the archive creation stuff.
buildPkg: "github.com/syncthing/syncthing/cmd/...",
tags: []string{"purego"},
// the archive creation stuff. buildPkgs gets filled out in init()
tags: []string{"purego"},
},
"syncthing": {
// The default target for "build", "install", "tar", "zip", "deb", etc.
@@ -83,7 +86,7 @@ var targets = map[string]target{
debdeps: []string{"libc6", "procps"},
debpost: "script/post-upgrade",
description: "Open Source Continuous File Synchronization",
buildPkg: "github.com/syncthing/syncthing/cmd/syncthing",
buildPkgs: []string{"github.com/syncthing/syncthing/cmd/syncthing"},
binaryName: "syncthing", // .exe will be added automatically for Windows builds
archiveFiles: []archiveFile{
{src: "{{binary}}", dst: "{{binary}}", perm: 0755},
@@ -125,8 +128,9 @@ var targets = map[string]target{
name: "stdiscosrv",
debname: "syncthing-discosrv",
debdeps: []string{"libc6"},
debpre: "cmd/stdiscosrv/scripts/preinst",
description: "Syncthing Discovery Server",
buildPkg: "github.com/syncthing/syncthing/cmd/stdiscosrv",
buildPkgs: []string{"github.com/syncthing/syncthing/cmd/stdiscosrv"},
binaryName: "stdiscosrv", // .exe will be added automatically for Windows builds
archiveFiles: []archiveFile{
{src: "{{binary}}", dst: "{{binary}}", perm: 0755},
@@ -134,12 +138,17 @@ var targets = map[string]target{
{src: "LICENSE", dst: "LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "AUTHORS.txt", perm: 0644},
},
systemdServices: []string{
"cmd/stdiscosrv/etc/linux-systemd/stdiscosrv.service",
},
installationFiles: []archiveFile{
{src: "{{binary}}", dst: "deb/usr/bin/{{binary}}", perm: 0755},
{src: "cmd/stdiscosrv/README.md", dst: "deb/usr/share/doc/syncthing-discosrv/README.txt", perm: 0644},
{src: "LICENSE", dst: "deb/usr/share/doc/syncthing-discosrv/LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "deb/usr/share/doc/syncthing-discosrv/AUTHORS.txt", perm: 0644},
{src: "man/stdiscosrv.1", dst: "deb/usr/share/man/man1/stdiscosrv.1", perm: 0644},
{src: "cmd/stdiscosrv/etc/linux-systemd/default", dst: "deb/etc/default/syncthing-discosrv", perm: 0644},
{src: "cmd/stdiscosrv/etc/firewall-ufw/stdiscosrv", dst: "deb/etc/ufw/applications.d/stdiscosrv", perm: 0644},
},
tags: []string{"purego"},
},
@@ -147,8 +156,9 @@ var targets = map[string]target{
name: "strelaysrv",
debname: "syncthing-relaysrv",
debdeps: []string{"libc6"},
debpre: "cmd/strelaysrv/scripts/preinst",
description: "Syncthing Relay Server",
buildPkg: "github.com/syncthing/syncthing/cmd/strelaysrv",
buildPkgs: []string{"github.com/syncthing/syncthing/cmd/strelaysrv"},
binaryName: "strelaysrv", // .exe will be added automatically for Windows builds
archiveFiles: []archiveFile{
{src: "{{binary}}", dst: "{{binary}}", perm: 0755},
@@ -157,6 +167,9 @@ var targets = map[string]target{
{src: "LICENSE", dst: "LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "AUTHORS.txt", perm: 0644},
},
systemdServices: []string{
"cmd/strelaysrv/etc/linux-systemd/strelaysrv.service",
},
installationFiles: []archiveFile{
{src: "{{binary}}", dst: "deb/usr/bin/{{binary}}", perm: 0755},
{src: "cmd/strelaysrv/README.md", dst: "deb/usr/share/doc/syncthing-relaysrv/README.txt", perm: 0644},
@@ -164,6 +177,8 @@ var targets = map[string]target{
{src: "LICENSE", dst: "deb/usr/share/doc/syncthing-relaysrv/LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "deb/usr/share/doc/syncthing-relaysrv/AUTHORS.txt", perm: 0644},
{src: "man/strelaysrv.1", dst: "deb/usr/share/man/man1/strelaysrv.1", perm: 0644},
{src: "cmd/strelaysrv/etc/linux-systemd/default", dst: "deb/etc/default/syncthing-relaysrv", perm: 0644},
{src: "cmd/strelaysrv/etc/firewall-ufw/strelaysrv", dst: "deb/etc/ufw/applications.d/strelaysrv", perm: 0644},
},
},
"strelaypoolsrv": {
@@ -171,7 +186,7 @@ var targets = map[string]target{
debname: "syncthing-relaypoolsrv",
debdeps: []string{"libc6"},
description: "Syncthing Relay Pool Server",
buildPkg: "github.com/syncthing/syncthing/cmd/strelaypoolsrv",
buildPkgs: []string{"github.com/syncthing/syncthing/cmd/strelaypoolsrv"},
binaryName: "strelaypoolsrv", // .exe will be added automatically for Windows builds
archiveFiles: []archiveFile{
{src: "{{binary}}", dst: "{{binary}}", perm: 0755},
@@ -188,7 +203,31 @@ var targets = map[string]target{
},
}
// These are repos we need to clone to run "go generate"
type dependencyRepo struct {
path string
repo string
commit string
}
var dependencyRepos = []dependencyRepo{
{path: "xdr", repo: "https://github.com/calmh/xdr.git", commit: "08e072f9cb16"},
}
func init() {
all := targets["all"]
pkgs, _ := filepath.Glob("cmd/*")
for _, pkg := range pkgs {
pkg = filepath.Base(pkg)
if strings.HasPrefix(pkg, ".") {
// ignore dotfiles
continue
}
all.buildPkgs = append(all.buildPkgs, fmt.Sprintf("github.com/syncthing/syncthing/cmd/%s", pkg))
}
targets["all"] = all
// The "syncthing" target includes a few more files found in the "etc"
// and "extra" dirs.
syncthingPkg := targets["syncthing"]
@@ -216,39 +255,6 @@ func main() {
}()
}
if gopath := gopath(); gopath == "" {
gopath, err := temporaryBuildDir()
if err != nil {
log.Fatal(err)
}
os.Setenv("GOPATH", gopath)
log.Println("GOPATH is", gopath)
if !noBuildGopath {
if err := buildGOPATH(gopath); err != nil {
log.Fatal(err)
}
lazyRebuildAssets()
}
} else {
inside := false
wd, _ := os.Getwd()
wd, _ = filepath.EvalSymlinks(wd)
for _, p := range filepath.SplitList(gopath) {
p, _ = filepath.EvalSymlinks(p)
if filepath.Join(p, "src/github.com/syncthing/syncthing") == wd {
inside = true
break
}
}
if !inside {
fmt.Println("You seem to have GOPATH set but the Syncthing source not placed correctly within it, which may cause problems.")
}
}
// Set path to $GOPATH/bin:$PATH so that we can for sure find tools we
// might have installed during "build.go setup".
os.Setenv("PATH", fmt.Sprintf("%s%cbin%c%s", os.Getenv("GOPATH"), os.PathSeparator, os.PathListSeparator, os.Getenv("PATH")))
// Invoking build.go with no parameters at all builds everything (incrementally),
// which is what you want for maximum error checking during development.
if flag.NArg() == 0 {
@@ -272,9 +278,6 @@ func main() {
func runCommand(cmd string, target target) {
switch cmd {
case "setup":
setup()
case "install":
var tags []string
if noupgrade {
@@ -319,12 +322,6 @@ func runCommand(cmd string, target target) {
case "deb":
buildDeb(target)
case "snap":
buildSnap(target)
case "clean":
clean()
case "vet":
metalintShort()
@@ -337,12 +334,19 @@ func runCommand(cmd string, target target) {
case "version":
fmt.Println(getVersion())
case "gopath":
gopath, err := temporaryBuildDir()
case "changelog":
vers, err := currentAndLatestVersions(numVersions)
if err != nil {
log.Fatal(err)
}
fmt.Println(gopath)
for _, ver := range vers {
underline := strings.Repeat("=", len(ver))
msg, err := tagMessage(ver)
if err != nil {
log.Fatal(err)
}
fmt.Printf("%s\n%s\n\n%s\n\n", ver, underline, msg)
}
default:
log.Fatalf("Unknown command %q", cmd)
@@ -352,64 +356,42 @@ func runCommand(cmd string, target target) {
func parseFlags() {
flag.StringVar(&goarch, "goarch", runtime.GOARCH, "GOARCH")
flag.StringVar(&goos, "goos", runtime.GOOS, "GOOS")
flag.StringVar(&goCmd, "gocmd", "go", "Specify `go` command")
flag.BoolVar(&noupgrade, "no-upgrade", noupgrade, "Disable upgrade functionality")
flag.StringVar(&version, "version", getVersion(), "Set compiled in version string")
flag.BoolVar(&race, "race", race, "Use race detector")
flag.BoolVar(&noBuildGopath, "no-build-gopath", noBuildGopath, "Don't build GOPATH, assume it's OK")
flag.StringVar(&extraTags, "tags", extraTags, "Extra tags, space separated")
flag.StringVar(&installSuffix, "installsuffix", installSuffix, "Install suffix, optional")
flag.StringVar(&pkgdir, "pkgdir", "", "Set -pkgdir parameter for `go build`")
flag.StringVar(&cc, "cc", os.Getenv("CC"), "Set CC environment variable for `go build`")
flag.BoolVar(&debugBinary, "debug-binary", debugBinary, "Create unoptimized binary to use with delve, set -gcflags='-N -l' and omit -ldflags")
flag.BoolVar(&coverage, "coverage", coverage, "Write coverage profile of tests to coverage.txt")
flag.IntVar(&numVersions, "num-versions", numVersions, "Number of versions for changelog command")
flag.Parse()
}
func setup() {
packages := []string{
"github.com/alecthomas/gometalinter",
"github.com/AlekSi/gocov-xml",
"github.com/axw/gocov/gocov",
"github.com/FiloSottile/gvt",
"golang.org/x/lint/golint",
"github.com/gordonklaus/ineffassign",
"github.com/mdempsky/unconvert",
"github.com/mitchellh/go-wordwrap",
"github.com/opennota/check/cmd/...",
"github.com/tsenart/deadcode",
"golang.org/x/net/html",
"golang.org/x/tools/cmd/cover",
"honnef.co/go/tools/cmd/gosimple",
"honnef.co/go/tools/cmd/staticcheck",
"honnef.co/go/tools/cmd/unused",
"github.com/josephspurrier/goversioninfo",
}
for _, pkg := range packages {
fmt.Println(pkg)
runPrint("go", "get", "-u", pkg)
}
runPrint("go", "install", "-v", "github.com/syncthing/syncthing/vendor/github.com/gogo/protobuf/protoc-gen-gogofast")
}
func test(pkgs ...string) {
lazyRebuildAssets()
useRace := runtime.GOARCH == "amd64"
switch runtime.GOOS {
case "darwin", "linux", "freebsd", "windows":
default:
useRace = false
args := []string{"test", "-short", "-timeout", timeout, "-tags", "purego"}
if runtime.GOARCH == "amd64" {
switch runtime.GOOS {
case "darwin", "linux", "freebsd": // , "windows": # See https://github.com/golang/go/issues/27089
args = append(args, "-race")
}
}
if useRace {
runPrint("go", append([]string{"test", "-short", "-race", "-timeout", timeout, "-tags", "purego"}, pkgs...)...)
} else {
runPrint("go", append([]string{"test", "-short", "-timeout", timeout, "-tags", "purego"}, pkgs...)...)
if coverage {
args = append(args, "-covermode", "atomic", "-coverprofile", "coverage.txt", "-coverpkg", strings.Join(pkgs, ","))
}
runPrint(goCmd, append(args, pkgs...)...)
}
func bench(pkgs ...string) {
lazyRebuildAssets()
runPrint("go", append([]string{"test", "-run", "NONE", "-bench", "."}, pkgs...)...)
runPrint(goCmd, append([]string{"test", "-run", "NONE", "-bench", "."}, pkgs...)...)
}
func install(target target, tags []string) {
@@ -423,11 +405,9 @@ func install(target target, tags []string) {
}
os.Setenv("GOBIN", filepath.Join(cwd, "bin"))
args := []string{"install", "-v"}
args = appendParameters(args, tags, target)
os.Setenv("GOOS", goos)
os.Setenv("GOARCH", goarch)
os.Setenv("CC", cc)
// On Windows generate a special file which the Go compiler will
// automatically use when generating Windows binaries to set things like
@@ -440,21 +420,20 @@ func install(target target, tags []string) {
defer shouldCleanupSyso(sysoPath)
}
runPrint("go", args...)
args := []string{"install", "-v"}
args = appendParameters(args, tags, target.buildPkgs...)
runPrint(goCmd, args...)
}
func build(target target, tags []string) {
lazyRebuildAssets()
tags = append(target.tags, tags...)
rmr(target.BinaryName())
args := []string{"build", "-v"}
args = appendParameters(args, tags, target)
os.Setenv("GOOS", goos)
os.Setenv("GOARCH", goarch)
os.Setenv("CC", cc)
// On Windows generate a special file which the Go compiler will
// automatically use when generating Windows binaries to set things like
@@ -471,10 +450,12 @@ func build(target target, tags []string) {
defer shouldCleanupSyso(sysoPath)
}
runPrint("go", args...)
args := []string{"build", "-v"}
args = appendParameters(args, tags, target.buildPkgs...)
runPrint(goCmd, args...)
}
func appendParameters(args []string, tags []string, target target) []string {
func appendParameters(args []string, tags []string, pkgs ...string) []string {
if pkgdir != "" {
args = append(args, "-pkgdir", pkgdir)
}
@@ -499,7 +480,7 @@ func appendParameters(args []string, tags []string, target target) []string {
args = append(args, "-gcflags", "-N -l")
}
return append(args, target.buildPkg)
return append(args, pkgs...)
}
func buildTar(target target) {
@@ -513,10 +494,7 @@ func buildTar(target target) {
}
build(target, tags)
if goos == "darwin" {
macosCodesign(target.BinaryName())
}
codesign(target)
for i := range target.archiveFiles {
target.archiveFiles[i].src = strings.Replace(target.archiveFiles[i].src, "{{binary}}", target.BinaryName(), 1)
@@ -539,10 +517,7 @@ func buildZip(target target) {
}
build(target, tags)
if goos == "windows" {
windowsCodesign(target.BinaryName())
}
codesign(target)
for i := range target.archiveFiles {
target.archiveFiles[i].src = strings.Replace(target.archiveFiles[i].src, "{{binary}}", target.BinaryName(), 1)
@@ -606,79 +581,55 @@ func buildDeb(target target) {
for _, dep := range target.debdeps {
args = append(args, "-d", dep)
}
for _, service := range target.systemdServices {
args = append(args, "--deb-systemd", service)
}
if target.debpost != "" {
args = append(args, "--after-upgrade", target.debpost)
}
if target.debpre != "" {
args = append(args, "--before-install", target.debpre)
}
runPrint("fpm", args...)
}
func buildSnap(target target) {
os.RemoveAll("snap")
tmpl, err := template.ParseFiles("snapcraft.yaml.template")
if err != nil {
log.Fatal(err)
}
f, err := os.Create("snapcraft.yaml")
defer f.Close()
if err != nil {
log.Fatal(err)
}
snaparch := goarch
if snaparch == "armhf" {
goarch = "arm"
} else if snaparch == "i386" {
goarch = "386"
}
snapver := version
if strings.HasPrefix(snapver, "v") {
snapver = snapver[1:]
}
snapgrade := "devel"
if matched, _ := regexp.MatchString(`^\d+\.\d+\.\d+(-rc.\d+)?$`, snapver); matched {
snapgrade = "stable"
}
err = tmpl.Execute(f, map[string]string{
"Version": snapver,
"HostArchitecture": runtime.GOARCH,
"TargetArchitecture": snaparch,
"Grade": snapgrade,
func shouldBuildSyso(dir string) (string, error) {
type M map[string]interface{}
version := getVersion()
version = strings.TrimPrefix(version, "v")
major, minor, patch := semanticVersion()
bs, err := json.Marshal(M{
"FixedFileInfo": M{
"FileVersion": M{
"Major": major,
"Minor": minor,
"Patch": patch,
},
"ProductVersion": M{
"Major": major,
"Minor": minor,
"Patch": patch,
},
},
"StringFileInfo": M{
"FileDescription": "Open Source Continuous File Synchronization",
"LegalCopyright": "The Syncthing Authors",
"FileVersion": version,
"ProductVersion": version,
"ProductName": "Syncthing",
},
"IconPath": "assets/logo.ico",
})
if err != nil {
log.Fatal(err)
return "", err
}
runPrint("snapcraft", "clean")
build(target, []string{"noupgrade"})
runPrint("snapcraft")
}
func shouldBuildSyso(dir string) (string, error) {
jsonPath := filepath.Join(dir, "versioninfo.json")
file, err := os.Create(filepath.Join(dir, "versioninfo.json"))
err = ioutil.WriteFile(jsonPath, bs, 0644)
if err != nil {
return "", errors.New("failed to create " + jsonPath + ": " + err.Error())
}
major, minor, patch, build := semanticVersion()
fmt.Fprintf(file, `{
"FixedFileInfo": {
"FileVersion": {
"Major": %s,
"Minor": %s,
"Patch": %s,
"Build": %s
}
},
"StringFileInfo": {
"FileDescription": "Open Source Continuous File Synchronization",
"LegalCopyright": "The Syncthing Authors",
"ProductVersion": "%s",
"ProductName": "Syncthing"
},
"IconPath": "assets/logo.ico"
}`, major, minor, patch, build, getVersion())
file.Close()
defer func() {
if err := os.Remove(jsonPath); err != nil {
log.Printf("Warning: unable to remove generated %s: %v. Please remove it manually.", jsonPath, err)
@@ -741,6 +692,7 @@ func listFiles(dir string) []string {
if err != nil {
return err
}
if fi.Mode().IsRegular() {
res = append(res, path)
}
@@ -751,11 +703,11 @@ func listFiles(dir string) []string {
func rebuildAssets() {
os.Setenv("SOURCE_DATE_EPOCH", fmt.Sprint(buildStamp()))
runPrint("go", "generate", "github.com/syncthing/syncthing/lib/auto", "github.com/syncthing/syncthing/cmd/strelaypoolsrv/auto")
runPrint(goCmd, "generate", "github.com/syncthing/syncthing/lib/api/auto", "github.com/syncthing/syncthing/cmd/strelaypoolsrv/auto")
}
func lazyRebuildAssets() {
if shouldRebuildAssets("lib/auto/gui.files.go", "gui") || shouldRebuildAssets("cmd/strelaypoolsrv/auto/gui.files.go", "cmd/strelaypoolsrv/auto/gui") {
if shouldRebuildAssets("lib/api/auto/gui.files.go", "gui") || shouldRebuildAssets("cmd/strelaypoolsrv/auto/gui.files.go", "cmd/strelaypoolsrv/gui") {
rebuildAssets()
}
}
@@ -787,12 +739,28 @@ func shouldRebuildAssets(target, srcdir string) bool {
}
func proto() {
runPrint("go", "generate", "github.com/syncthing/syncthing/lib/...", "github.com/syncthing/syncthing/cmd/stdiscosrv")
pv := protobufVersion()
dependencyRepos = append(dependencyRepos,
dependencyRepo{path: "protobuf", repo: "https://github.com/gogo/protobuf.git", commit: pv},
)
runPrint(goCmd, "get", fmt.Sprintf("github.com/gogo/protobuf/protoc-gen-gogofast@%v", pv))
os.MkdirAll("repos", 0755)
for _, dep := range dependencyRepos {
path := filepath.Join("repos", dep.path)
if _, err := os.Stat(path); err != nil {
runPrintInDir("repos", "git", "clone", dep.repo, dep.path)
} else {
runPrintInDir(path, "git", "fetch")
}
runPrintInDir(path, "git", "checkout", dep.commit)
}
runPrint(goCmd, "generate", "github.com/syncthing/syncthing/lib/...", "github.com/syncthing/syncthing/cmd/stdiscosrv")
}
func translate() {
os.Chdir("gui/default/assets/lang")
runPipe("lang-en-new.json", "go", "run", "../../../../script/translate.go", "lang-en.json", "../../../")
runPipe("lang-en-new.json", goCmd, "run", "../../../../script/translate.go", "lang-en.json", "../../../")
os.Remove("lang-en.json")
err := os.Rename("lang-en-new.json", "lang-en.json")
if err != nil {
@@ -803,26 +771,19 @@ func translate() {
func transifex() {
os.Chdir("gui/default/assets/lang")
runPrint("go", "run", "../../../../script/transifexdl.go")
}
func clean() {
rmr("bin")
rmr(filepath.Join(os.Getenv("GOPATH"), fmt.Sprintf("pkg/%s_%s/github.com/syncthing", goos, goarch)))
runPrint(goCmd, "run", "../../../../script/transifexdl.go")
}
func ldflags() string {
sep := '='
if goVersion > 0 && goVersion < 1.5 {
sep = ' '
}
b := new(bytes.Buffer)
b := new(strings.Builder)
b.WriteString("-w")
fmt.Fprintf(b, " -X main.Version%c%s", sep, version)
fmt.Fprintf(b, " -X main.BuildStamp%c%d", sep, buildStamp())
fmt.Fprintf(b, " -X main.BuildUser%c%s", sep, buildUser())
fmt.Fprintf(b, " -X main.BuildHost%c%s", sep, buildHost())
fmt.Fprintf(b, " -X github.com/syncthing/syncthing/lib/build.Version=%s", version)
fmt.Fprintf(b, " -X github.com/syncthing/syncthing/lib/build.Stamp=%d", buildStamp())
fmt.Fprintf(b, " -X github.com/syncthing/syncthing/lib/build.User=%s", buildUser())
fmt.Fprintf(b, " -X github.com/syncthing/syncthing/lib/build.Host=%s", buildHost())
if v := os.Getenv("EXTRA_LDFLAGS"); v != "" {
fmt.Fprintf(b, " %s", v)
}
return b.String()
}
@@ -836,13 +797,7 @@ func rmr(paths ...string) {
}
func getReleaseVersion() (string, error) {
fd, err := os.Open("RELEASE")
if err != nil {
return "", err
}
defer fd.Close()
bs, err := ioutil.ReadAll(fd)
bs, err := ioutil.ReadFile("RELEASE")
if err != nil {
return "", err
}
@@ -879,13 +834,18 @@ func getVersion() string {
return "unknown-dev"
}
func semanticVersion() (major, minor, patch, build string) {
r := regexp.MustCompile(`v(?P<Major>\d+)\.(?P<Minor>\d+).(?P<Patch>\d+).*\+(?P<CommitsAhead>\d+)`)
func semanticVersion() (major, minor, patch int) {
r := regexp.MustCompile(`v(\d+)\.(\d+).(\d+)`)
matches := r.FindStringSubmatch(getVersion())
if len(matches) != 5 {
return "0", "0", "0", "0"
if len(matches) != 4 {
return 0, 0, 0
}
return matches[1], matches[2], matches[3], matches[4]
var ints [3]int
for i, s := range matches[1:] {
ints[i], _ = strconv.Atoi(s)
}
return ints[0], ints[1], ints[2]
}
func getBranchSuffix() string {
@@ -1005,6 +965,10 @@ func runError(cmd string, args ...string) ([]byte, error) {
}
func runPrint(cmd string, args ...string) {
runPrintInDir(".", cmd, args...)
}
func runPrintInDir(dir string, cmd string, args ...string) {
if debug {
t0 := time.Now()
log.Println("runPrint:", cmd, strings.Join(args, " "))
@@ -1015,6 +979,7 @@ func runPrint(cmd string, args ...string) {
ecmd := exec.Command(cmd, args...)
ecmd.Stdout = os.Stdout
ecmd.Stderr = os.Stderr
ecmd.Dir = dir
err := ecmd.Run()
if err != nil {
log.Fatal(err)
@@ -1176,6 +1141,15 @@ func zipFile(out string, files []archiveFile) {
}
}
func codesign(target target) {
switch goos {
case "windows":
windowsCodesign(target.BinaryName())
case "darwin":
macosCodesign(target.BinaryName())
}
}
func macosCodesign(file string) {
if pass := os.Getenv("CODESIGN_KEYCHAIN_PASS"); pass != "" {
bs, err := runError("security", "unlock-keychain", "-p", pass)
@@ -1186,7 +1160,7 @@ func macosCodesign(file string) {
}
if id := os.Getenv("CODESIGN_IDENTITY"); id != "" {
bs, err := runError("codesign", "-s", id, file)
bs, err := runError("codesign", "--options=runtime", "-s", id, file)
if err != nil {
log.Println("Codesign: signing failed:", string(bs))
return
@@ -1234,12 +1208,12 @@ func windowsCodesign(file string) {
func metalint() {
lazyRebuildAssets()
runPrint("go", "test", "-run", "Metalint", "./meta")
runPrint(goCmd, "test", "-run", "Metalint", "./meta")
}
func metalintShort() {
lazyRebuildAssets()
runPrint("go", "test", "-short", "-run", "Metalint", "./meta")
runPrint(goCmd, "test", "-short", "-run", "Metalint", "./meta")
}
func temporaryBuildDir() (string, error) {
@@ -1268,93 +1242,76 @@ func temporaryBuildDir() (string, error) {
return filepath.Join(tmpDir, base), nil
}
func buildGOPATH(gopath string) error {
pkg := filepath.Join(gopath, "src/github.com/syncthing/syncthing")
dirs := []string{"cmd", "gui", "lib", "meta", "script", "test", "vendor"}
if debug {
t0 := time.Now()
log.Println("build temporary GOPATH in", gopath)
defer func() {
log.Println("... in", time.Since(t0))
}()
}
// Walk the sources and copy the files into the temporary GOPATH.
// Remember which files are supposed to be present so we can clean
// out everything else in the next step. The copyFile() step will
// only actually copy the file if it doesn't exist or the contents
// differ.
exists := map[string]struct{}{}
for _, dir := range dirs {
err := filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}
if info.IsDir() {
return nil
}
dst := filepath.Join(pkg, path)
exists[dst] = struct{}{}
if err := copyFile(path, dst, info.Mode()); err != nil {
return err
}
return nil
})
if err != nil {
return err
}
}
// Walk the temporary GOPATH and remove any files that we wouldn't
// have copied there in the previous step.
filepath.Walk(pkg, func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}
if info.IsDir() {
return nil
}
if _, ok := exists[path]; !ok {
os.Remove(path)
}
return nil
})
return nil
}
func gopath() string {
if gopath := os.Getenv("GOPATH"); gopath != "" {
// The env var is set, use that.
return gopath
}
// Ask Go what it thinks.
bs, err := runError("go", "env", "GOPATH")
if err != nil {
return ""
}
// We got something. Check if we are in fact available in that location.
gopath := string(bs)
if _, err := os.Stat(filepath.Join(gopath, "src/github.com/syncthing/syncthing/build.go")); err == nil {
// That seems to be the gopath.
return gopath
}
// The gopath is not valid.
return ""
}
func (t target) BinaryName() string {
if goos == "windows" {
return t.binaryName + ".exe"
}
return t.binaryName
}
func protobufVersion() string {
bs, err := runError(goCmd, "list", "-f", "{{.Version}}", "-m", "github.com/gogo/protobuf")
if err != nil {
log.Fatal("Getting protobuf version:", err)
}
return string(bs)
}
func currentAndLatestVersions(n int) ([]string, error) {
bs, err := runError("git", "tag", "--sort", "taggerdate")
if err != nil {
return nil, err
}
lines := strings.Split(string(bs), "\n")
reverseStrings(lines)
// The one at the head is the latest version. We always keep that one.
// Then we filter out remaining ones with dashes (pre-releases etc).
latest := lines[:1]
nonPres := filterStrings(lines[1:], func(s string) bool { return !strings.Contains(s, "-") })
vers := append(latest, nonPres...)
return vers[:n], nil
}
func reverseStrings(ss []string) {
for i := 0; i < len(ss)/2; i++ {
ss[i], ss[len(ss)-1-i] = ss[len(ss)-1-i], ss[i]
}
}
func filterStrings(ss []string, op func(string) bool) []string {
n := ss[:0]
for _, s := range ss {
if op(s) {
n = append(n, s)
}
}
return n
}
func tagMessage(tag string) (string, error) {
hash, err := runError("git", "rev-parse", tag)
if err != nil {
return "", err
}
obj, err := runError("git", "cat-file", "-p", string(hash))
if err != nil {
return "", err
}
return trimTagMessage(string(obj), tag), nil
}
func trimTagMessage(msg, tag string) string {
firstBlank := strings.Index(msg, "\n\n")
if firstBlank > 0 {
msg = msg[firstBlank+2:]
}
msg = strings.TrimPrefix(msg, tag)
beginSig := strings.Index(msg, "-----BEGIN PGP")
if beginSig > 0 {
msg = msg[:beginSig]
}
return strings.TrimSpace(msg)
}

20
build.ps1 Normal file
View File

@@ -0,0 +1,20 @@
function build {
go run build.go @args
}
$cmd, $rest = $args
switch ($cmd) {
"test" {
$env:LOGGER_DISCARD=1
build test
}
"bench" {
$env:LOGGER_DISCARD=1
build bench
}
default {
build @rest
}
}

View File

@@ -2,8 +2,6 @@
set -euo pipefail
IFS=$'\n\t'
STTRACE=${STTRACE:-}
script() {
name="$1"
shift
@@ -15,88 +13,23 @@ build() {
}
case "${1:-default}" in
default)
build
;;
clean)
build "$@"
;;
tar)
build "$@"
;;
assets)
build "$@"
;;
xdr)
build "$@"
;;
translate)
build "$@"
;;
deb)
build "$@"
;;
setup)
build "$@"
;;
test)
LOGGER_DISCARD=1 build test
;;
bench)
LOGGER_DISCARD=1 build bench | script benchfilter
LOGGER_DISCARD=1 build bench
;;
prerelease)
go run script/authors.go
script authors
build transifex
pushd man ; ./refresh.sh ; popd
git add -A gui man AUTHORS
git commit -m 'gui, man, authors: Update docs, translations, and contributors'
;;
noupgrade)
build -no-upgrade tar
;;
all)
platforms=(
darwin-amd64 dragonfly-amd64 freebsd-amd64 linux-amd64 netbsd-amd64 openbsd-amd64 solaris-amd64 windows-amd64
freebsd-386 linux-386 netbsd-386 openbsd-386 windows-386
linux-arm linux-arm64 linux-ppc64 linux-ppc64le
)
for plat in "${platforms[@]}"; do
echo Building "$plat"
goos="${plat%-*}"
goarch="${plat#*-}"
dist="tar"
if [[ $goos == "windows" ]]; then
dist="zip"
fi
build -goos "$goos" -goarch "$goarch" "$dist"
echo
done
;;
test-xunit)
(GOPATH="$(pwd)/Godeps/_workspace:$GOPATH" go test -v -race ./lib/... ./cmd/... || true) > tests.out
go2xunit -output tests.xml -fail < tests.out
;;
*)
echo "Unknown build command $1"
build "$@"
;;
esac

View File

@@ -1,143 +0,0 @@
// Copyright (C) 2016 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
// This doesn't build on Windows due to the Rusage stuff.
// +build !windows
package main
import (
"flag"
"fmt"
"log"
"runtime"
"syscall"
"time"
"github.com/syncthing/syncthing/lib/rc"
)
var homeDir = "h1"
var syncthingBin = "./bin/syncthing"
var test = "scan"
func main() {
flag.StringVar(&homeDir, "home", homeDir, "Home directory location")
flag.StringVar(&syncthingBin, "bin", syncthingBin, "Binary location")
flag.StringVar(&test, "test", test, "Test to run")
flag.Parse()
switch test {
case "scan":
// scan measures the resource usage required to perform the initial
// scan, without cleaning away the database first.
testScan()
}
}
// testScan starts a process and reports on the resource usage required to
// perform the initial scan.
func testScan() {
log.Println("Starting...")
p := rc.NewProcess("127.0.0.1:8081")
if err := p.Start(syncthingBin, "-home", homeDir, "-no-browser"); err != nil {
log.Println(err)
return
}
defer p.Stop()
wallTime := awaitScanComplete(p)
report(p, wallTime)
}
// awaitScanComplete waits for a folder to transition idle->scanning and
// then scanning->idle and returns the time taken for the scan.
func awaitScanComplete(p *rc.Process) time.Duration {
log.Println("Awaiting scan completion...")
var t0, t1 time.Time
lastEvent := 0
loop:
for {
evs, err := p.Events(lastEvent)
if err != nil {
continue
}
for _, ev := range evs {
if ev.Type == "StateChanged" {
data := ev.Data.(map[string]interface{})
log.Println(ev)
if data["to"].(string) == "scanning" {
t0 = ev.Time
continue
}
if !t0.IsZero() && data["to"].(string) == "idle" {
t1 = ev.Time
break loop
}
}
lastEvent = ev.ID
}
time.Sleep(250 * time.Millisecond)
}
return t1.Sub(t0)
}
// report stops the given process and reports on its resource usage in two
// ways: human readable to stderr, and CSV to stdout.
func report(p *rc.Process, wallTime time.Duration) {
sv, err := p.SystemVersion()
if err != nil {
log.Println(err)
return
}
ss, err := p.SystemStatus()
if err != nil {
log.Println(err)
return
}
proc, err := p.Stop()
if err != nil {
return
}
rusage, ok := proc.SysUsage().(*syscall.Rusage)
if !ok {
return
}
log.Println("Version:", sv.Version)
log.Println("Alloc:", ss.Alloc/1024, "KiB")
log.Println("Sys:", ss.Sys/1024, "KiB")
log.Println("Goroutines:", ss.Goroutines)
log.Println("Wall time:", wallTime)
log.Println("Utime:", time.Duration(rusage.Utime.Nano()))
log.Println("Stime:", time.Duration(rusage.Stime.Nano()))
if runtime.GOOS == "darwin" {
// Darwin reports in bytes, Linux seems to report in KiB even
// though the manpage says otherwise.
rusage.Maxrss /= 1024
}
log.Println("MaxRSS:", rusage.Maxrss, "KiB")
fmt.Printf("%s,%d,%d,%d,%.02f,%.02f,%.02f,%d\n",
sv.Version,
ss.Alloc/1024,
ss.Sys/1024,
ss.Goroutines,
wallTime.Seconds(),
time.Duration(rusage.Utime.Nano()).Seconds(),
time.Duration(rusage.Stime.Nano()).Seconds(),
rusage.Maxrss)
}

View File

@@ -1,19 +0,0 @@
Copyright (C) 2014 Audrius Butkevičius
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
- The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@@ -1,115 +1,96 @@
// Copyright (C) 2014 Audrius Butkevičius
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"bytes"
"context"
"crypto/tls"
"errors"
"fmt"
"net"
"net/http"
"strings"
"github.com/AudriusButkevicius/cli"
"github.com/syncthing/syncthing/lib/config"
)
type APIClient struct {
httpClient http.Client
endpoint string
apikey string
username string
password string
id string
csrf string
http.Client
cfg config.GUIConfiguration
apikey string
}
var instance *APIClient
func getClient(c *cli.Context) *APIClient {
if instance != nil {
return instance
}
endpoint := c.GlobalString("endpoint")
if !strings.HasPrefix(endpoint, "http") {
endpoint = "http://" + endpoint
}
func getClient(cfg config.GUIConfiguration) *APIClient {
httpClient := http.Client{
Transport: &http.Transport{
TLSClientConfig: &tls.Config{
InsecureSkipVerify: c.GlobalBool("insecure"),
InsecureSkipVerify: true,
},
DialContext: func(_ context.Context, _, _ string) (net.Conn, error) {
return net.Dial(cfg.Network(), cfg.Address())
},
},
}
client := APIClient{
httpClient: httpClient,
endpoint: endpoint,
apikey: c.GlobalString("apikey"),
username: c.GlobalString("username"),
password: c.GlobalString("password"),
return &APIClient{
Client: httpClient,
cfg: cfg,
apikey: cfg.APIKey,
}
if client.apikey == "" {
request, err := http.NewRequest("GET", client.endpoint, nil)
die(err)
response := client.handleRequest(request)
client.id = response.Header.Get("X-Syncthing-ID")
if client.id == "" {
die("Failed to get device ID")
}
for _, item := range response.Cookies() {
if item.Name == "CSRF-Token-"+client.id[:5] {
client.csrf = item.Value
goto csrffound
}
}
die("Failed to get CSRF token")
csrffound:
}
instance = &client
return &client
}
func (client *APIClient) handleRequest(request *http.Request) *http.Response {
if client.apikey != "" {
request.Header.Set("X-API-Key", client.apikey)
func (c *APIClient) Endpoint() string {
if c.cfg.Network() == "unix" {
return "http://unix/"
}
if client.username != "" || client.password != "" {
request.SetBasicAuth(client.username, client.password)
}
if client.csrf != "" {
request.Header.Set("X-CSRF-Token-"+client.id[:5], client.csrf)
url := c.cfg.URL()
if !strings.HasSuffix(url, "/") {
url += "/"
}
return url
}
response, err := client.httpClient.Do(request)
die(err)
func (c *APIClient) Do(req *http.Request) (*http.Response, error) {
req.Header.Set("X-API-Key", c.apikey)
resp, err := c.Client.Do(req)
if err != nil {
return nil, err
}
return resp, checkResponse(resp)
}
func (c *APIClient) Get(url string) (*http.Response, error) {
request, err := http.NewRequest("GET", c.Endpoint()+"rest/"+url, nil)
if err != nil {
return nil, err
}
return c.Do(request)
}
func (c *APIClient) Post(url, body string) (*http.Response, error) {
request, err := http.NewRequest("POST", c.Endpoint()+"rest/"+url, bytes.NewBufferString(body))
if err != nil {
return nil, err
}
return c.Do(request)
}
func checkResponse(response *http.Response) error {
if response.StatusCode == 404 {
die("Invalid endpoint or API call")
} else if response.StatusCode == 401 {
die("Invalid username or password")
return errors.New("invalid endpoint or API call")
} else if response.StatusCode == 403 {
if client.apikey == "" {
die("Invalid CSRF token")
}
die("Invalid API key")
return errors.New("invalid API key")
} else if response.StatusCode != 200 {
body := strings.TrimSpace(string(responseToBArray(response)))
if body != "" {
die(body)
data, err := responseToBArray(response)
if err != nil {
return err
}
die("Unknown HTTP status returned: " + response.Status)
body := strings.TrimSpace(string(data))
return fmt.Errorf("unexpected HTTP status returned: %s\n%s", response.Status, body)
}
return response
}
func httpGet(c *cli.Context, url string) *http.Response {
client := getClient(c)
request, err := http.NewRequest("GET", client.endpoint+"/rest/"+url, nil)
die(err)
return client.handleRequest(request)
}
func httpPost(c *cli.Context, url string, body string) *http.Response {
client := getClient(c)
request, err := http.NewRequest("POST", client.endpoint+"/rest/"+url, bytes.NewBufferString(body))
die(err)
return client.handleRequest(request)
return nil
}

View File

@@ -1,188 +0,0 @@
// Copyright (C) 2014 Audrius Butkevičius
package main
import (
"fmt"
"strings"
"github.com/AudriusButkevicius/cli"
"github.com/syncthing/syncthing/lib/config"
)
func init() {
cliCommands = append(cliCommands, cli.Command{
Name: "devices",
HideHelp: true,
Usage: "Device command group",
Subcommands: []cli.Command{
{
Name: "list",
Usage: "List registered devices",
Requires: &cli.Requires{},
Action: devicesList,
},
{
Name: "add",
Usage: "Add a new device",
Requires: &cli.Requires{"device id", "device name?"},
Action: devicesAdd,
},
{
Name: "remove",
Usage: "Remove an existing device",
Requires: &cli.Requires{"device id"},
Action: devicesRemove,
},
{
Name: "get",
Usage: "Get a property of a device",
Requires: &cli.Requires{"device id", "property"},
Action: devicesGet,
},
{
Name: "set",
Usage: "Set a property of a device",
Requires: &cli.Requires{"device id", "property", "value..."},
Action: devicesSet,
},
},
})
}
func devicesList(c *cli.Context) {
cfg := getConfig(c)
first := true
writer := newTableWriter()
for _, device := range cfg.Devices {
if !first {
fmt.Fprintln(writer)
}
fmt.Fprintln(writer, "ID:\t", device.DeviceID, "\t")
fmt.Fprintln(writer, "Name:\t", device.Name, "\t(name)")
fmt.Fprintln(writer, "Address:\t", strings.Join(device.Addresses, " "), "\t(address)")
fmt.Fprintln(writer, "Compression:\t", device.Compression, "\t(compression)")
fmt.Fprintln(writer, "Certificate name:\t", device.CertName, "\t(certname)")
fmt.Fprintln(writer, "Introducer:\t", device.Introducer, "\t(introducer)")
first = false
}
writer.Flush()
}
func devicesAdd(c *cli.Context) {
nid := c.Args()[0]
id := parseDeviceID(nid)
newDevice := config.DeviceConfiguration{
DeviceID: id,
Name: nid,
Addresses: []string{"dynamic"},
}
if len(c.Args()) > 1 {
newDevice.Name = c.Args()[1]
}
if len(c.Args()) > 2 {
addresses := c.Args()[2:]
for _, item := range addresses {
if item == "dynamic" {
continue
}
validAddress(item)
}
newDevice.Addresses = addresses
}
cfg := getConfig(c)
for _, device := range cfg.Devices {
if device.DeviceID == id {
die("Device " + nid + " already exists")
}
}
cfg.Devices = append(cfg.Devices, newDevice)
setConfig(c, cfg)
}
func devicesRemove(c *cli.Context) {
nid := c.Args()[0]
id := parseDeviceID(nid)
if nid == getMyID(c) {
die("Cannot remove yourself")
}
cfg := getConfig(c)
for i, device := range cfg.Devices {
if device.DeviceID == id {
last := len(cfg.Devices) - 1
cfg.Devices[i] = cfg.Devices[last]
cfg.Devices = cfg.Devices[:last]
setConfig(c, cfg)
return
}
}
die("Device " + nid + " not found")
}
func devicesGet(c *cli.Context) {
nid := c.Args()[0]
id := parseDeviceID(nid)
arg := c.Args()[1]
cfg := getConfig(c)
for _, device := range cfg.Devices {
if device.DeviceID != id {
continue
}
switch strings.ToLower(arg) {
case "name":
fmt.Println(device.Name)
case "address":
fmt.Println(strings.Join(device.Addresses, "\n"))
case "compression":
fmt.Println(device.Compression.String())
case "certname":
fmt.Println(device.CertName)
case "introducer":
fmt.Println(device.Introducer)
default:
die("Invalid property: " + arg + "\nAvailable properties: name, address, compression, certname, introducer")
}
return
}
die("Device " + nid + " not found")
}
func devicesSet(c *cli.Context) {
nid := c.Args()[0]
id := parseDeviceID(nid)
arg := c.Args()[1]
config := getConfig(c)
for i, device := range config.Devices {
if device.DeviceID != id {
continue
}
switch strings.ToLower(arg) {
case "name":
config.Devices[i].Name = strings.Join(c.Args()[2:], " ")
case "address":
for _, item := range c.Args()[2:] {
if item == "dynamic" {
continue
}
validAddress(item)
}
config.Devices[i].Addresses = c.Args()[2:]
case "compression":
err := config.Devices[i].Compression.UnmarshalText([]byte(c.Args()[2]))
die(err)
case "certname":
config.Devices[i].CertName = strings.Join(c.Args()[2:], " ")
case "introducer":
config.Devices[i].Introducer = parseBool(c.Args()[2])
default:
die("Invalid property: " + arg + "\nAvailable properties: name, address, compression, certname, introducer")
}
setConfig(c, config)
return
}
die("Device " + nid + " not found")
}

View File

@@ -1,67 +0,0 @@
// Copyright (C) 2014 Audrius Butkevičius
package main
import (
"encoding/json"
"fmt"
"strings"
"github.com/AudriusButkevicius/cli"
)
func init() {
cliCommands = append(cliCommands, cli.Command{
Name: "errors",
HideHelp: true,
Usage: "Error command group",
Subcommands: []cli.Command{
{
Name: "show",
Usage: "Show pending errors",
Requires: &cli.Requires{},
Action: errorsShow,
},
{
Name: "push",
Usage: "Push an error to active clients",
Requires: &cli.Requires{"error message..."},
Action: errorsPush,
},
{
Name: "clear",
Usage: "Clear pending errors",
Requires: &cli.Requires{},
Action: wrappedHTTPPost("system/error/clear"),
},
},
})
}
func errorsShow(c *cli.Context) {
response := httpGet(c, "system/error")
var data map[string][]map[string]interface{}
json.Unmarshal(responseToBArray(response), &data)
writer := newTableWriter()
for _, item := range data["errors"] {
time := item["when"].(string)[:19]
time = strings.Replace(time, "T", " ", 1)
err := item["message"].(string)
err = strings.TrimSpace(err)
fmt.Fprintln(writer, time+":\t"+err)
}
writer.Flush()
}
func errorsPush(c *cli.Context) {
err := strings.Join(c.Args(), " ")
response := httpPost(c, "system/error", strings.TrimSpace(err))
if response.StatusCode != 200 {
err = fmt.Sprint("Failed to push error\nStatus code: ", response.StatusCode)
body := string(responseToBArray(response))
if body != "" {
err += "\nBody: " + body
}
die(err)
}
}

View File

@@ -1,361 +0,0 @@
// Copyright (C) 2014 Audrius Butkevičius
package main
import (
"fmt"
"path/filepath"
"strings"
"github.com/AudriusButkevicius/cli"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/fs"
)
func init() {
cliCommands = append(cliCommands, cli.Command{
Name: "folders",
HideHelp: true,
Usage: "Folder command group",
Subcommands: []cli.Command{
{
Name: "list",
Usage: "List available folders",
Requires: &cli.Requires{},
Action: foldersList,
},
{
Name: "add",
Usage: "Add a new folder",
Requires: &cli.Requires{"folder id", "directory"},
Action: foldersAdd,
},
{
Name: "remove",
Usage: "Remove an existing folder",
Requires: &cli.Requires{"folder id"},
Action: foldersRemove,
},
{
Name: "override",
Usage: "Override changes from other nodes for a master folder",
Requires: &cli.Requires{"folder id"},
Action: foldersOverride,
},
{
Name: "get",
Usage: "Get a property of a folder",
Requires: &cli.Requires{"folder id", "property"},
Action: foldersGet,
},
{
Name: "set",
Usage: "Set a property of a folder",
Requires: &cli.Requires{"folder id", "property", "value..."},
Action: foldersSet,
},
{
Name: "unset",
Usage: "Unset a property of a folder",
Requires: &cli.Requires{"folder id", "property"},
Action: foldersUnset,
},
{
Name: "devices",
Usage: "Folder devices command group",
HideHelp: true,
Subcommands: []cli.Command{
{
Name: "list",
Usage: "List of devices which the folder is shared with",
Requires: &cli.Requires{"folder id"},
Action: foldersDevicesList,
},
{
Name: "add",
Usage: "Share a folder with a device",
Requires: &cli.Requires{"folder id", "device id"},
Action: foldersDevicesAdd,
},
{
Name: "remove",
Usage: "Unshare a folder with a device",
Requires: &cli.Requires{"folder id", "device id"},
Action: foldersDevicesRemove,
},
{
Name: "clear",
Usage: "Unshare a folder with all devices",
Requires: &cli.Requires{"folder id"},
Action: foldersDevicesClear,
},
},
},
},
})
}
func foldersList(c *cli.Context) {
cfg := getConfig(c)
first := true
writer := newTableWriter()
for _, folder := range cfg.Folders {
if !first {
fmt.Fprintln(writer)
}
fs := folder.Filesystem()
fmt.Fprintln(writer, "ID:\t", folder.ID, "\t")
fmt.Fprintln(writer, "Path:\t", fs.URI(), "\t(directory)")
fmt.Fprintln(writer, "Path type:\t", fs.Type(), "\t(directory-type)")
fmt.Fprintln(writer, "Folder type:\t", folder.Type, "\t(type)")
fmt.Fprintln(writer, "Ignore permissions:\t", folder.IgnorePerms, "\t(permissions)")
fmt.Fprintln(writer, "Rescan interval in seconds:\t", folder.RescanIntervalS, "\t(rescan)")
if folder.Versioning.Type != "" {
fmt.Fprintln(writer, "Versioning:\t", folder.Versioning.Type, "\t(versioning)")
for key, value := range folder.Versioning.Params {
fmt.Fprintf(writer, "Versioning %s:\t %s \t(versioning-%s)\n", key, value, key)
}
}
first = false
}
writer.Flush()
}
func foldersAdd(c *cli.Context) {
cfg := getConfig(c)
abs, err := filepath.Abs(c.Args()[1])
die(err)
folder := config.FolderConfiguration{
ID: c.Args()[0],
Path: filepath.Clean(abs),
FilesystemType: fs.FilesystemTypeBasic,
}
cfg.Folders = append(cfg.Folders, folder)
setConfig(c, cfg)
}
func foldersRemove(c *cli.Context) {
cfg := getConfig(c)
rid := c.Args()[0]
for i, folder := range cfg.Folders {
if folder.ID == rid {
last := len(cfg.Folders) - 1
cfg.Folders[i] = cfg.Folders[last]
cfg.Folders = cfg.Folders[:last]
setConfig(c, cfg)
return
}
}
die("Folder " + rid + " not found")
}
func foldersOverride(c *cli.Context) {
cfg := getConfig(c)
rid := c.Args()[0]
for _, folder := range cfg.Folders {
if folder.ID == rid && folder.Type == config.FolderTypeSendOnly {
response := httpPost(c, "db/override", "")
if response.StatusCode != 200 {
err := fmt.Sprint("Failed to override changes\nStatus code: ", response.StatusCode)
body := string(responseToBArray(response))
if body != "" {
err += "\nBody: " + body
}
die(err)
}
return
}
}
die("Folder " + rid + " not found or folder not master")
}
func foldersGet(c *cli.Context) {
cfg := getConfig(c)
rid := c.Args()[0]
arg := strings.ToLower(c.Args()[1])
for _, folder := range cfg.Folders {
if folder.ID != rid {
continue
}
if strings.HasPrefix(arg, "versioning-") {
arg = arg[11:]
value, ok := folder.Versioning.Params[arg]
if ok {
fmt.Println(value)
return
}
die("Versioning property " + c.Args()[1][11:] + " not found")
}
switch arg {
case "directory":
fmt.Println(folder.Filesystem().URI())
case "directory-type":
fmt.Println(folder.Filesystem().Type())
case "type":
fmt.Println(folder.Type)
case "permissions":
fmt.Println(folder.IgnorePerms)
case "rescan":
fmt.Println(folder.RescanIntervalS)
case "versioning":
if folder.Versioning.Type != "" {
fmt.Println(folder.Versioning.Type)
}
default:
die("Invalid property: " + c.Args()[1] + "\nAvailable properties: directory, directory-type, type, permissions, versioning, versioning-<key>")
}
return
}
die("Folder " + rid + " not found")
}
func foldersSet(c *cli.Context) {
rid := c.Args()[0]
arg := strings.ToLower(c.Args()[1])
val := strings.Join(c.Args()[2:], " ")
cfg := getConfig(c)
for i, folder := range cfg.Folders {
if folder.ID != rid {
continue
}
if strings.HasPrefix(arg, "versioning-") {
cfg.Folders[i].Versioning.Params[arg[11:]] = val
setConfig(c, cfg)
return
}
switch arg {
case "directory":
cfg.Folders[i].Path = val
case "directory-type":
var fsType fs.FilesystemType
fsType.UnmarshalText([]byte(val))
cfg.Folders[i].FilesystemType = fsType
case "type":
var t config.FolderType
if err := t.UnmarshalText([]byte(val)); err != nil {
die("Invalid folder type: " + err.Error())
}
cfg.Folders[i].Type = t
case "permissions":
cfg.Folders[i].IgnorePerms = parseBool(val)
case "rescan":
cfg.Folders[i].RescanIntervalS = parseInt(val)
case "versioning":
cfg.Folders[i].Versioning.Type = val
default:
die("Invalid property: " + c.Args()[1] + "\nAvailable properties: directory, master, permissions, versioning, versioning-<key>")
}
setConfig(c, cfg)
return
}
die("Folder " + rid + " not found")
}
func foldersUnset(c *cli.Context) {
rid := c.Args()[0]
arg := strings.ToLower(c.Args()[1])
cfg := getConfig(c)
for i, folder := range cfg.Folders {
if folder.ID != rid {
continue
}
if strings.HasPrefix(arg, "versioning-") {
arg = arg[11:]
if _, ok := folder.Versioning.Params[arg]; ok {
delete(cfg.Folders[i].Versioning.Params, arg)
setConfig(c, cfg)
return
}
die("Versioning property " + c.Args()[1][11:] + " not found")
}
switch arg {
case "versioning":
cfg.Folders[i].Versioning.Type = ""
cfg.Folders[i].Versioning.Params = make(map[string]string)
default:
die("Invalid property: " + c.Args()[1] + "\nAvailable properties: versioning, versioning-<key>")
}
setConfig(c, cfg)
return
}
die("Folder " + rid + " not found")
}
func foldersDevicesList(c *cli.Context) {
rid := c.Args()[0]
cfg := getConfig(c)
for _, folder := range cfg.Folders {
if folder.ID != rid {
continue
}
for _, device := range folder.Devices {
fmt.Println(device.DeviceID)
}
return
}
die("Folder " + rid + " not found")
}
func foldersDevicesAdd(c *cli.Context) {
rid := c.Args()[0]
nid := parseDeviceID(c.Args()[1])
cfg := getConfig(c)
for i, folder := range cfg.Folders {
if folder.ID != rid {
continue
}
for _, device := range folder.Devices {
if device.DeviceID == nid {
die("Device " + c.Args()[1] + " is already part of this folder")
}
}
for _, device := range cfg.Devices {
if device.DeviceID == nid {
cfg.Folders[i].Devices = append(folder.Devices, config.FolderDeviceConfiguration{
DeviceID: device.DeviceID,
})
setConfig(c, cfg)
return
}
}
die("Device " + c.Args()[1] + " not found in device list")
}
die("Folder " + rid + " not found")
}
func foldersDevicesRemove(c *cli.Context) {
rid := c.Args()[0]
nid := parseDeviceID(c.Args()[1])
cfg := getConfig(c)
for ri, folder := range cfg.Folders {
if folder.ID != rid {
continue
}
for ni, device := range folder.Devices {
if device.DeviceID == nid {
last := len(folder.Devices) - 1
cfg.Folders[ri].Devices[ni] = folder.Devices[last]
cfg.Folders[ri].Devices = cfg.Folders[ri].Devices[:last]
setConfig(c, cfg)
return
}
}
die("Device " + c.Args()[1] + " not found")
}
die("Folder " + rid + " not found")
}
func foldersDevicesClear(c *cli.Context) {
rid := c.Args()[0]
cfg := getConfig(c)
for i, folder := range cfg.Folders {
if folder.ID != rid {
continue
}
cfg.Folders[i].Devices = []config.FolderDeviceConfiguration{}
setConfig(c, cfg)
return
}
die("Folder " + rid + " not found")
}
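The folders get/set/unset handlers above all follow the same convention: any property name beginning with "versioning-" has that prefix stripped and is used as a key in the folder's Versioning.Params map, while every other name maps to a fixed config field. A minimal, illustrative sketch of that lookup (the helper name is ours, not part of the CLI; it assumes lib/config's FolderConfiguration and the strings package):

// versioningParam is a hypothetical helper illustrating the "versioning-"
// property convention used by foldersGet/foldersSet/foldersUnset above.
func versioningParam(folder config.FolderConfiguration, property string) (string, bool) {
	const prefix = "versioning-"
	if !strings.HasPrefix(property, prefix) {
		return "", false
	}
	value, ok := folder.Versioning.Params[strings.TrimPrefix(property, prefix)]
	return value, ok
}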

View File

@@ -1,94 +0,0 @@
// Copyright (C) 2014 Audrius Butkevičius
package main
import (
"encoding/json"
"fmt"
"os"
"github.com/AudriusButkevicius/cli"
)
func init() {
cliCommands = append(cliCommands, []cli.Command{
{
Name: "id",
Usage: "Get ID of the Syncthing client",
Requires: &cli.Requires{},
Action: generalID,
},
{
Name: "status",
Usage: "Configuration status, whether or not a restart is required for changes to take effect",
Requires: &cli.Requires{},
Action: generalStatus,
},
{
Name: "config",
Usage: "Configuration",
Requires: &cli.Requires{},
Action: generalConfiguration,
},
{
Name: "restart",
Usage: "Restart syncthing",
Requires: &cli.Requires{},
Action: wrappedHTTPPost("system/restart"),
},
{
Name: "shutdown",
Usage: "Shutdown syncthing",
Requires: &cli.Requires{},
Action: wrappedHTTPPost("system/shutdown"),
},
{
Name: "reset",
Usage: "Reset syncthing deleting all folders and devices",
Requires: &cli.Requires{},
Action: wrappedHTTPPost("system/reset"),
},
{
Name: "upgrade",
Usage: "Upgrade syncthing (if a newer version is available)",
Requires: &cli.Requires{},
Action: wrappedHTTPPost("system/upgrade"),
},
{
Name: "version",
Usage: "Syncthing client version",
Requires: &cli.Requires{},
Action: generalVersion,
},
}...)
}
func generalID(c *cli.Context) {
fmt.Println(getMyID(c))
}
func generalStatus(c *cli.Context) {
response := httpGet(c, "system/config/insync")
var status struct{ ConfigInSync bool }
json.Unmarshal(responseToBArray(response), &status)
if !status.ConfigInSync {
die("Config out of sync")
}
fmt.Println("Config in sync")
}
func generalConfiguration(c *cli.Context) {
response := httpGet(c, "system/config")
var jsResponse interface{}
json.Unmarshal(responseToBArray(response), &jsResponse)
enc := json.NewEncoder(os.Stdout)
enc.SetIndent("", " ")
enc.Encode(jsResponse)
}
func generalVersion(c *cli.Context) {
response := httpGet(c, "system/version")
version := make(map[string]interface{})
json.Unmarshal(responseToBArray(response), &version)
prettyPrintJSON(version)
}

View File

@@ -1,127 +0,0 @@
// Copyright (C) 2014 Audrius Butkevičius
package main
import (
"fmt"
"strings"
"github.com/AudriusButkevicius/cli"
)
func init() {
cliCommands = append(cliCommands, cli.Command{
Name: "gui",
HideHelp: true,
Usage: "GUI command group",
Subcommands: []cli.Command{
{
Name: "dump",
Usage: "Show all GUI configuration settings",
Requires: &cli.Requires{},
Action: guiDump,
},
{
Name: "get",
Usage: "Get a GUI configuration setting",
Requires: &cli.Requires{"setting"},
Action: guiGet,
},
{
Name: "set",
Usage: "Set a GUI configuration setting",
Requires: &cli.Requires{"setting", "value"},
Action: guiSet,
},
{
Name: "unset",
Usage: "Unset a GUI configuration setting",
Requires: &cli.Requires{"setting"},
Action: guiUnset,
},
},
})
}
func guiDump(c *cli.Context) {
cfg := getConfig(c).GUI
writer := newTableWriter()
fmt.Fprintln(writer, "Enabled:\t", cfg.Enabled, "\t(enabled)")
fmt.Fprintln(writer, "Use HTTPS:\t", cfg.UseTLS(), "\t(tls)")
fmt.Fprintln(writer, "Listen Addresses:\t", cfg.Address(), "\t(address)")
if cfg.User != "" {
fmt.Fprintln(writer, "Authentication User:\t", cfg.User, "\t(username)")
fmt.Fprintln(writer, "Authentication Password:\t", cfg.Password, "\t(password)")
}
if cfg.APIKey != "" {
fmt.Fprintln(writer, "API Key:\t", cfg.APIKey, "\t(apikey)")
}
writer.Flush()
}
func guiGet(c *cli.Context) {
cfg := getConfig(c).GUI
arg := c.Args()[0]
switch strings.ToLower(arg) {
case "enabled":
fmt.Println(cfg.Enabled)
case "tls":
fmt.Println(cfg.UseTLS())
case "address":
fmt.Println(cfg.Address())
case "user":
if cfg.User != "" {
fmt.Println(cfg.User)
}
case "password":
if cfg.User != "" {
fmt.Println(cfg.Password)
}
case "apikey":
if cfg.APIKey != "" {
fmt.Println(cfg.APIKey)
}
default:
die("Invalid setting: " + arg + "\nAvailable settings: enabled, tls, address, user, password, apikey")
}
}
func guiSet(c *cli.Context) {
cfg := getConfig(c)
arg := c.Args()[0]
val := c.Args()[1]
switch strings.ToLower(arg) {
case "enabled":
cfg.GUI.Enabled = parseBool(val)
case "tls":
cfg.GUI.RawUseTLS = parseBool(val)
case "address":
validAddress(val)
cfg.GUI.RawAddress = val
case "user":
cfg.GUI.User = val
case "password":
cfg.GUI.Password = val
case "apikey":
cfg.GUI.APIKey = val
default:
die("Invalid setting: " + arg + "\nAvailable settings: enabled, tls, address, user, password, apikey")
}
setConfig(c, cfg)
}
func guiUnset(c *cli.Context) {
cfg := getConfig(c)
arg := c.Args()[0]
switch strings.ToLower(arg) {
case "user":
cfg.GUI.User = ""
case "password":
cfg.GUI.Password = ""
case "apikey":
cfg.GUI.APIKey = ""
default:
die("Invalid setting: " + arg + "\nAvailable settings: user, password, apikey")
}
setConfig(c, cfg)
}

View File

@@ -1,173 +0,0 @@
// Copyright (C) 2014 Audrius Butkevičius
package main
import (
"fmt"
"strings"
"github.com/AudriusButkevicius/cli"
)
func init() {
cliCommands = append(cliCommands, cli.Command{
Name: "options",
HideHelp: true,
Usage: "Options command group",
Subcommands: []cli.Command{
{
Name: "dump",
Usage: "Show all Syncthing option settings",
Requires: &cli.Requires{},
Action: optionsDump,
},
{
Name: "get",
Usage: "Get a Syncthing option setting",
Requires: &cli.Requires{"setting"},
Action: optionsGet,
},
{
Name: "set",
Usage: "Set a Syncthing option setting",
Requires: &cli.Requires{"setting", "value..."},
Action: optionsSet,
},
},
})
}
func optionsDump(c *cli.Context) {
cfg := getConfig(c).Options
writer := newTableWriter()
fmt.Fprintln(writer, "Sync protocol listen addresses:\t", strings.Join(cfg.ListenAddresses, " "), "\t(addresses)")
fmt.Fprintln(writer, "Global discovery enabled:\t", cfg.GlobalAnnEnabled, "\t(globalannenabled)")
fmt.Fprintln(writer, "Global discovery servers:\t", strings.Join(cfg.GlobalAnnServers, " "), "\t(globalannserver)")
fmt.Fprintln(writer, "Local discovery enabled:\t", cfg.LocalAnnEnabled, "\t(localannenabled)")
fmt.Fprintln(writer, "Local discovery port:\t", cfg.LocalAnnPort, "\t(localannport)")
fmt.Fprintln(writer, "Outgoing rate limit in KiB/s:\t", cfg.MaxSendKbps, "\t(maxsend)")
fmt.Fprintln(writer, "Incoming rate limit in KiB/s:\t", cfg.MaxRecvKbps, "\t(maxrecv)")
fmt.Fprintln(writer, "Reconnect interval in seconds:\t", cfg.ReconnectIntervalS, "\t(reconnect)")
fmt.Fprintln(writer, "Start browser:\t", cfg.StartBrowser, "\t(browser)")
fmt.Fprintln(writer, "Enable UPnP:\t", cfg.NATEnabled, "\t(nat)")
fmt.Fprintln(writer, "UPnP Lease in minutes:\t", cfg.NATLeaseM, "\t(natlease)")
fmt.Fprintln(writer, "UPnP Renewal period in minutes:\t", cfg.NATRenewalM, "\t(natrenew)")
fmt.Fprintln(writer, "Restart on Wake Up:\t", cfg.RestartOnWakeup, "\t(wake)")
reporting := "unrecognized value"
switch cfg.URAccepted {
case -1:
reporting = "false"
case 0:
reporting = "undecided/false"
case 1:
reporting = "true"
}
fmt.Fprintln(writer, "Anonymous usage reporting:\t", reporting, "\t(reporting)")
writer.Flush()
}
func optionsGet(c *cli.Context) {
cfg := getConfig(c).Options
arg := c.Args()[0]
switch strings.ToLower(arg) {
case "address":
fmt.Println(strings.Join(cfg.ListenAddresses, "\n"))
case "globalannenabled":
fmt.Println(cfg.GlobalAnnEnabled)
case "globalannservers":
fmt.Println(strings.Join(cfg.GlobalAnnServers, "\n"))
case "localannenabled":
fmt.Println(cfg.LocalAnnEnabled)
case "localannport":
fmt.Println(cfg.LocalAnnPort)
case "maxsend":
fmt.Println(cfg.MaxSendKbps)
case "maxrecv":
fmt.Println(cfg.MaxRecvKbps)
case "reconnect":
fmt.Println(cfg.ReconnectIntervalS)
case "browser":
fmt.Println(cfg.StartBrowser)
case "nat":
fmt.Println(cfg.NATEnabled)
case "natlease":
fmt.Println(cfg.NATLeaseM)
case "natrenew":
fmt.Println(cfg.NATRenewalM)
case "reporting":
switch cfg.URAccepted {
case -1:
fmt.Println("false")
case 0:
fmt.Println("undecided/false")
case 1:
fmt.Println("true")
default:
fmt.Println("unknown")
}
case "wake":
fmt.Println(cfg.RestartOnWakeup)
default:
die("Invalid setting: " + arg + "\nAvailable settings: address, globalannenabled, globalannserver, localannenabled, localannport, maxsend, maxrecv, reconnect, browser, upnp, upnplease, upnprenew, reporting, wake")
}
}
func optionsSet(c *cli.Context) {
config := getConfig(c)
arg := c.Args()[0]
val := c.Args()[1]
switch strings.ToLower(arg) {
case "address":
for _, item := range c.Args().Tail() {
validAddress(item)
}
config.Options.ListenAddresses = c.Args().Tail()
case "globalannenabled":
config.Options.GlobalAnnEnabled = parseBool(val)
case "globalannserver":
for _, item := range c.Args().Tail() {
validAddress(item)
}
config.Options.GlobalAnnServers = c.Args().Tail()
case "localannenabled":
config.Options.LocalAnnEnabled = parseBool(val)
case "localannport":
config.Options.LocalAnnPort = parsePort(val)
case "maxsend":
config.Options.MaxSendKbps = parseUint(val)
case "maxrecv":
config.Options.MaxRecvKbps = parseUint(val)
case "reconnect":
config.Options.ReconnectIntervalS = parseUint(val)
case "browser":
config.Options.StartBrowser = parseBool(val)
case "nat":
config.Options.NATEnabled = parseBool(val)
case "natlease":
config.Options.NATLeaseM = parseUint(val)
case "natrenew":
config.Options.NATRenewalM = parseUint(val)
case "reporting":
switch strings.ToLower(val) {
case "u", "undecided", "unset":
config.Options.URAccepted = 0
default:
boolvalue := parseBool(val)
if boolvalue {
config.Options.URAccepted = 1
} else {
config.Options.URAccepted = -1
}
}
case "wake":
config.Options.RestartOnWakeup = parseBool(val)
default:
die("Invalid setting: " + arg + "\nAvailable settings: address, globalannenabled, globalannserver, localannenabled, localannport, maxsend, maxrecv, reconnect, browser, upnp, upnplease, upnprenew, reporting, wake")
}
setConfig(c, config)
}

View File

@@ -1,72 +0,0 @@
// Copyright (C) 2014 Audrius Butkevičius
package main
import (
"encoding/json"
"fmt"
"github.com/AudriusButkevicius/cli"
)
func init() {
cliCommands = append(cliCommands, cli.Command{
Name: "report",
HideHelp: true,
Usage: "Reporting command group",
Subcommands: []cli.Command{
{
Name: "system",
Usage: "Report system state",
Requires: &cli.Requires{},
Action: reportSystem,
},
{
Name: "connections",
Usage: "Report about connections to other devices",
Requires: &cli.Requires{},
Action: reportConnections,
},
{
Name: "usage",
Usage: "Usage report",
Requires: &cli.Requires{},
Action: reportUsage,
},
},
})
}
func reportSystem(c *cli.Context) {
response := httpGet(c, "system/status")
data := make(map[string]interface{})
json.Unmarshal(responseToBArray(response), &data)
prettyPrintJSON(data)
}
func reportConnections(c *cli.Context) {
response := httpGet(c, "system/connections")
data := make(map[string]map[string]interface{})
json.Unmarshal(responseToBArray(response), &data)
var overall map[string]interface{}
for key, value := range data {
if key == "total" {
overall = value
continue
}
value["Device ID"] = key
prettyPrintJSON(value)
fmt.Println()
}
if overall != nil {
fmt.Println("=== Overall statistics ===")
prettyPrintJSON(overall)
}
}
func reportUsage(c *cli.Context) {
response := httpGet(c, "svc/report")
report := make(map[string]interface{})
json.Unmarshal(responseToBArray(response), &report)
prettyPrintJSON(report)
}

61
cmd/stcli/errors.go Normal file
View File

@@ -0,0 +1,61 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"errors"
"fmt"
"strings"
"github.com/urfave/cli"
)
var errorsCommand = cli.Command{
Name: "errors",
HideHelp: true,
Usage: "Error command group",
Subcommands: []cli.Command{
{
Name: "show",
Usage: "Show pending errors",
Action: expects(0, dumpOutput("system/error")),
},
{
Name: "push",
Usage: "Push an error to active clients",
ArgsUsage: "[error message]",
Action: expects(1, errorsPush),
},
{
Name: "clear",
Usage: "Clear pending errors",
Action: expects(0, emptyPost("system/error/clear")),
},
},
}
func errorsPush(c *cli.Context) error {
client := c.App.Metadata["client"].(*APIClient)
errStr := strings.Join(c.Args(), " ")
response, err := client.Post("system/error", strings.TrimSpace(errStr))
if err != nil {
return err
}
if response.StatusCode != 200 {
errStr = fmt.Sprint("Failed to push error\nStatus code: ", response.StatusCode)
bytes, err := responseToBArray(response)
if err != nil {
return err
}
body := string(bytes)
if body != "" {
errStr += "\nBody: " + body
}
return errors.New(errStr)
}
return nil
}

View File

@@ -1,31 +0,0 @@
// Copyright (C) 2014 Audrius Butkevičius
package main
var jsonAttributeLabels = map[string]string{
"folderMaxMiB": "Largest folder size in MiB",
"folderMaxFiles": "Largest folder file count",
"longVersion": "Long version",
"totMiB": "Total size in MiB",
"totFiles": "Total files",
"uniqueID": "Unique ID",
"numFolders": "Folder count",
"numDevices": "Device count",
"memoryUsageMiB": "Memory usage in MiB",
"memorySize": "Total memory in MiB",
"sha256Perf": "SHA256 Benchmark",
"At": "Last contacted",
"Completion": "Percent complete",
"InBytesTotal": "Total bytes received",
"OutBytesTotal": "Total bytes sent",
"ClientVersion": "Client version",
"alloc": "Memory allocated in bytes",
"sys": "Memory using in bytes",
"cpuPercent": "CPU load in percent",
"extAnnounceOK": "External announcments working",
"goroutines": "Number of Go routines",
"myID": "Client ID",
"tilde": "Tilde expands to",
"arch": "Architecture",
"os": "OS",
}

View File

@@ -1,63 +1,192 @@
// Copyright (C) 2014 Audrius Butkevičius
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"sort"
"bufio"
"crypto/tls"
"encoding/json"
"flag"
"log"
"os"
"reflect"
"github.com/AudriusButkevicius/cli"
"github.com/AudriusButkevicius/recli"
"github.com/flynn-archive/go-shlex"
"github.com/mattn/go-isatty"
"github.com/pkg/errors"
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/locations"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/urfave/cli"
)
type ByAlphabet []cli.Command
func (a ByAlphabet) Len() int { return len(a) }
func (a ByAlphabet) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
func (a ByAlphabet) Less(i, j int) bool { return a[i].Name < a[j].Name }
var cliCommands []cli.Command
func main() {
app := cli.NewApp()
app.Name = "syncthing-cli"
app.Author = "Audrius Butkevičius"
app.Email = "audrius.butkevicius@gmail.com"
app.Usage = "Syncthing command line interface"
app.Version = "0.1"
app.HideHelp = true
// This is somewhat a hack around a chicken and egg problem.
// We need to set the home directory and potentially other flags to know where the syncthing instance is running
// in order to get its config ... which we then use to construct the actual CLI ... at which point it's too late
// to add flags there...
homeBaseDir := locations.GetBaseDir(locations.ConfigBaseDir)
guiCfg := config.GUIConfiguration{}
app.Flags = []cli.Flag{
flags := flag.NewFlagSet("", flag.ContinueOnError)
flags.StringVar(&guiCfg.RawAddress, "gui-address", guiCfg.RawAddress, "Override GUI address (e.g. \"http://192.0.2.42:8443\")")
flags.StringVar(&guiCfg.APIKey, "gui-apikey", guiCfg.APIKey, "Override GUI API key")
flags.StringVar(&homeBaseDir, "home", homeBaseDir, "Set configuration directory")
// Implement the same flags at the lower CLI, with the same default values (pre-parse), but do nothing with them.
// This is so that we could reuse os.Args
fakeFlags := []cli.Flag{
cli.StringFlag{
Name: "endpoint, e",
Value: "http://127.0.0.1:8384",
Usage: "End point to connect to",
EnvVar: "STENDPOINT",
Name: "gui-address",
Value: guiCfg.RawAddress,
Usage: "Override GUI address (e.g. \"http://192.0.2.42:8443\")",
},
cli.StringFlag{
Name: "apikey, k",
Value: "",
Usage: "API Key",
EnvVar: "STAPIKEY",
Name: "gui-apikey",
Value: guiCfg.APIKey,
Usage: "Override GUI API key",
},
cli.StringFlag{
Name: "username, u",
Value: "",
Usage: "Username",
EnvVar: "STUSERNAME",
},
cli.StringFlag{
Name: "password, p",
Value: "",
Usage: "Password",
EnvVar: "STPASSWORD",
},
cli.BoolFlag{
Name: "insecure, i",
Usage: "Do not verify SSL certificate",
EnvVar: "STINSECURE",
Name: "home",
Value: homeBaseDir,
Usage: "Set configuration directory",
},
}
sort.Sort(ByAlphabet(cliCommands))
app.Commands = cliCommands
app.RunAndExitOnError()
// Do not print usage for these flags, and ignore parse errors, as this flag set does not understand many of the arguments
flags.Usage = func() {}
_ = flags.Parse(os.Args[1:])
// Now, if the API key and address are not provided (we are not connecting to a remote instance),
// try to rip it out of the config.
if guiCfg.RawAddress == "" && guiCfg.APIKey == "" {
// Update the base directory
err := locations.SetBaseDir(locations.ConfigBaseDir, homeBaseDir)
if err != nil {
log.Fatal(errors.Wrap(err, "setting home"))
}
// Load the certs and get the ID
cert, err := tls.LoadX509KeyPair(
locations.Get(locations.CertFile),
locations.Get(locations.KeyFile),
)
if err != nil {
log.Fatal(errors.Wrap(err, "reading device ID"))
}
myID := protocol.NewDeviceID(cert.Certificate[0])
// Load the config
cfg, err := config.Load(locations.Get(locations.ConfigFile), myID, events.NoopLogger)
if err != nil {
log.Fatalln(errors.Wrap(err, "loading config"))
}
guiCfg = cfg.GUI()
} else if guiCfg.Address() == "" || guiCfg.APIKey == "" {
log.Fatalln("Both -gui-address and -gui-apikey should be specified")
}
if guiCfg.Address() == "" {
log.Fatalln("Could not find GUI Address")
}
if guiCfg.APIKey == "" {
log.Fatalln("Could not find GUI API key")
}
client := getClient(guiCfg)
cfg, err := getConfig(client)
if err != nil {
log.Fatalln(errors.Wrap(err, "getting config"))
}
original := cfg.Copy()
// Copy the config and set the default flags
recliCfg := recli.DefaultConfig
recliCfg.IDTag.Name = "xml"
recliCfg.SkipTag.Name = "json"
commands, err := recli.New(recliCfg).Construct(&cfg)
if err != nil {
log.Fatalln(errors.Wrap(err, "config reflect"))
}
// Construct the actual CLI
app := cli.NewApp()
app.Name = "stcli"
app.HelpName = app.Name
app.Author = "The Syncthing Authors"
app.Usage = "Syncthing command line interface"
app.Version = build.Version
app.Flags = fakeFlags
app.Metadata = map[string]interface{}{
"client": client,
}
app.Commands = []cli.Command{
{
Name: "config",
HideHelp: true,
Usage: "Configuration modification command group",
Subcommands: commands,
},
showCommand,
operationCommand,
errorsCommand,
}
tty := isatty.IsTerminal(os.Stdin.Fd()) || isatty.IsCygwinTerminal(os.Stdin.Fd())
if !tty {
// Not a TTY, consume from stdin
scanner := bufio.NewScanner(os.Stdin)
for scanner.Scan() {
input, err := shlex.Split(scanner.Text())
if err != nil {
log.Fatalln(errors.Wrap(err, "parsing input"))
}
if len(input) == 0 {
continue
}
err = app.Run(append(os.Args, input...))
if err != nil {
log.Fatalln(err)
}
}
err = scanner.Err()
if err != nil {
log.Fatalln(err)
}
} else {
err = app.Run(os.Args)
if err != nil {
log.Fatalln(err)
}
}
if !reflect.DeepEqual(cfg, original) {
body, err := json.MarshalIndent(cfg, "", " ")
if err != nil {
log.Fatalln(err)
}
resp, err := client.Post("system/config", string(body))
if err != nil {
log.Fatalln(err)
}
if resp.StatusCode != 200 {
body, err := responseToBArray(resp)
if err != nil {
log.Fatalln(err)
}
log.Fatalln(string(body))
}
}
}

78
cmd/stcli/operations.go Normal file
View File

@@ -0,0 +1,78 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"fmt"
"github.com/urfave/cli"
)
var operationCommand = cli.Command{
Name: "operations",
HideHelp: true,
Usage: "Operation command group",
Subcommands: []cli.Command{
{
Name: "restart",
Usage: "Restart syncthing",
Action: expects(0, emptyPost("system/restart")),
},
{
Name: "shutdown",
Usage: "Shutdown syncthing",
Action: expects(0, emptyPost("system/shutdown")),
},
{
Name: "reset",
Usage: "Reset syncthing deleting all folders and devices",
Action: expects(0, emptyPost("system/reset")),
},
{
Name: "upgrade",
Usage: "Upgrade syncthing (if a newer version is available)",
Action: expects(0, emptyPost("system/upgrade")),
},
{
Name: "folder-override",
Usage: "Override changes on folder (remote for sendonly, local for receiveonly)",
ArgsUsage: "[folder id]",
Action: expects(1, foldersOverride),
},
},
}
func foldersOverride(c *cli.Context) error {
client := c.App.Metadata["client"].(*APIClient)
cfg, err := getConfig(client)
if err != nil {
return err
}
rid := c.Args()[0]
for _, folder := range cfg.Folders {
if folder.ID == rid {
response, err := client.Post("db/override", "")
if err != nil {
return err
}
if response.StatusCode != 200 {
errStr := fmt.Sprint("Failed to override changes\nStatus code: ", response.StatusCode)
bytes, err := responseToBArray(response)
if err != nil {
return err
}
body := string(bytes)
if body != "" {
errStr += "\nBody: " + body
}
return fmt.Errorf("%s", errStr)
}
return nil
}
}
return fmt.Errorf("Folder " + rid + " not found")
}

44
cmd/stcli/show.go Normal file
View File

@@ -0,0 +1,44 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"github.com/urfave/cli"
)
var showCommand = cli.Command{
Name: "show",
HideHelp: true,
Usage: "Show command group",
Subcommands: []cli.Command{
{
Name: "version",
Usage: "Show syncthing client version",
Action: expects(0, dumpOutput("system/version")),
},
{
Name: "config-status",
Usage: "Show configuration status, whether or not a restart is required for changes to take effect",
Action: expects(0, dumpOutput("system/config/insync")),
},
{
Name: "system",
Usage: "Show system status",
Action: expects(0, dumpOutput("system/status")),
},
{
Name: "connections",
Usage: "Report about connections to other devices",
Action: expects(0, dumpOutput("system/connections")),
},
{
Name: "usage",
Usage: "Show usage report",
Action: expects(0, dumpOutput("svc/report")),
},
},
}

View File

@@ -1,4 +1,8 @@
// Copyright (C) 2014 Audrius Butkevičius
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
@@ -8,158 +12,83 @@ import (
"io/ioutil"
"net/http"
"os"
"regexp"
"sort"
"strconv"
"strings"
"text/tabwriter"
"unicode"
"github.com/AudriusButkevicius/cli"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/urfave/cli"
)
func responseToBArray(response *http.Response) []byte {
defer response.Body.Close()
func responseToBArray(response *http.Response) ([]byte, error) {
bytes, err := ioutil.ReadAll(response.Body)
if err != nil {
die(err)
return nil, err
}
return bytes
return bytes, response.Body.Close()
}
func die(vals ...interface{}) {
if len(vals) > 1 || vals[0] != nil {
os.Stderr.WriteString(fmt.Sprintln(vals...))
os.Exit(1)
func emptyPost(url string) cli.ActionFunc {
return func(c *cli.Context) error {
client := c.App.Metadata["client"].(*APIClient)
_, err := client.Post(url, "")
return err
}
}
func wrappedHTTPPost(url string) func(c *cli.Context) {
return func(c *cli.Context) {
httpPost(c, url, "")
}
}
func prettyPrintJSON(json map[string]interface{}) {
writer := newTableWriter()
remap := make(map[string]interface{})
for k, v := range json {
key, ok := jsonAttributeLabels[k]
if !ok {
key = firstUpper(k)
func dumpOutput(url string) cli.ActionFunc {
return func(c *cli.Context) error {
client := c.App.Metadata["client"].(*APIClient)
response, err := client.Get(url)
if err != nil {
return err
}
remap[key] = v
return prettyPrintResponse(c, response)
}
}
jsonKeys := make([]string, 0, len(remap))
for key := range remap {
jsonKeys = append(jsonKeys, key)
func getConfig(c *APIClient) (config.Configuration, error) {
cfg := config.Configuration{}
response, err := c.Get("system/config")
if err != nil {
return cfg, err
}
sort.Strings(jsonKeys)
for _, k := range jsonKeys {
var value string
rvalue := remap[k]
switch rvalue.(type) {
case int, int16, int32, int64, uint, uint16, uint32, uint64, float32, float64:
value = fmt.Sprintf("%.0f", rvalue)
default:
value = fmt.Sprint(rvalue)
bytes, err := responseToBArray(response)
if err != nil {
return cfg, err
}
err = json.Unmarshal(bytes, &cfg)
if err != nil {
return cfg, err
}
return cfg, nil
}
func expects(n int, actionFunc cli.ActionFunc) cli.ActionFunc {
return func(ctx *cli.Context) error {
if ctx.NArg() != n {
plural := ""
if n != 1 {
plural = "s"
}
return fmt.Errorf("expected %d argument%s, got %d", n, plural, ctx.NArg())
}
if value == "" {
continue
}
fmt.Fprintln(writer, k+":\t"+value)
}
writer.Flush()
}
func firstUpper(str string) string {
for i, v := range str {
return string(unicode.ToUpper(v)) + str[i+1:]
}
return ""
}
func newTableWriter() *tabwriter.Writer {
writer := new(tabwriter.Writer)
writer.Init(os.Stdout, 0, 8, 0, '\t', 0)
return writer
}
func getMyID(c *cli.Context) string {
response := httpGet(c, "system/status")
data := make(map[string]interface{})
json.Unmarshal(responseToBArray(response), &data)
return data["myID"].(string)
}
func getConfig(c *cli.Context) config.Configuration {
response := httpGet(c, "system/config")
config := config.Configuration{}
json.Unmarshal(responseToBArray(response), &config)
return config
}
func setConfig(c *cli.Context, cfg config.Configuration) {
body, err := json.Marshal(cfg)
die(err)
response := httpPost(c, "system/config", string(body))
if response.StatusCode != 200 {
die("Unexpected status code", response.StatusCode)
return actionFunc(ctx)
}
}
func parseBool(input string) bool {
val, err := strconv.ParseBool(input)
func prettyPrintJSON(data interface{}) error {
enc := json.NewEncoder(os.Stdout)
enc.SetIndent("", " ")
return enc.Encode(data)
}
func prettyPrintResponse(c *cli.Context, response *http.Response) error {
bytes, err := responseToBArray(response)
if err != nil {
die(input + " is not a valid value for a boolean")
return err
}
return val
}
func parseInt(input string) int {
val, err := strconv.ParseInt(input, 0, 64)
if err != nil {
die(input + " is not a valid value for an integer")
}
return int(val)
}
func parseUint(input string) int {
val, err := strconv.ParseUint(input, 0, 64)
if err != nil {
die(input + " is not a valid value for an unsigned integer")
}
return int(val)
}
func parsePort(input string) int {
port := parseUint(input)
if port < 1 || port > 65535 {
die(input + " is not a valid port\nExpected value between 1 and 65535")
}
return port
}
func validAddress(input string) {
tokens := strings.Split(input, ":")
if len(tokens) != 2 {
die(input + " is not a valid value for an address\nExpected format <ip or hostname>:<port>")
}
matched, err := regexp.MatchString("^[a-zA-Z0-9]+([-a-zA-Z0-9.]+[-a-zA-Z0-9]+)?$", tokens[0])
die(err)
if !matched {
die(input + " is not a valid value for an address\nExpected format <ip or hostname>:<port>")
}
parsePort(tokens[1])
}
func parseDeviceID(input string) protocol.DeviceID {
device, err := protocol.DeviceIDFromString(input)
if err != nil {
die(input + " is not a valid device id")
}
return device
var data interface{}
if err := json.Unmarshal(bytes, &data); err != nil {
return err
}
// TODO: Check flag for pretty print format
return prettyPrintJSON(data)
}

View File

@@ -60,7 +60,7 @@ func compareDirectories(dirs ...string) error {
} else if res[i].name > res[0].name {
return fmt.Errorf("%s missing %v (present in %s)", dirs[i], res[0], dirs[0])
}
return fmt.Errorf("Mismatch; %v (%s) != %v (%s)", res[i], dirs[i], res[0], dirs[0])
return fmt.Errorf("mismatch; %v (%s) != %v (%s)", res[i], dirs[i], res[0], dirs[0])
}
}

View File

File diff suppressed because it is too large

View File

@@ -0,0 +1,219 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"bytes"
"errors"
"io/ioutil"
"regexp"
"strings"
"sync"
raven "github.com/getsentry/raven-go"
"github.com/maruel/panicparse/stack"
)
const reportServer = "https://crash.syncthing.net/report/"
var loader = newGithubSourceCodeLoader()
func init() {
raven.SetSourceCodeLoader(loader)
}
var (
clients = make(map[string]*raven.Client)
clientsMut sync.Mutex
)
func sendReport(dsn, path string, report []byte, userID string) error {
pkt, err := parseReport(path, report)
if err != nil {
return err
}
pkt.Interfaces = append(pkt.Interfaces, &raven.User{ID: userID})
clientsMut.Lock()
defer clientsMut.Unlock()
cli, ok := clients[dsn]
if !ok {
cli, err = raven.New(dsn)
if err != nil {
return err
}
clients[dsn] = cli
}
// The client sets release and such on the packet before sending, in the
// misguided idea that it knows this better than the packet we give
// it. So we copy the values from the packet to the client first...
cli.SetRelease(pkt.Release)
cli.SetEnvironment(pkt.Environment)
defer cli.Wait()
_, errC := cli.Capture(pkt, nil)
return <-errC
}
func parseReport(path string, report []byte) (*raven.Packet, error) {
parts := bytes.SplitN(report, []byte("\n"), 2)
if len(parts) != 2 {
return nil, errors.New("no first line")
}
version, err := parseVersion(string(parts[0]))
if err != nil {
return nil, err
}
report = parts[1]
foundPanic := false
var subjectLine []byte
for {
parts = bytes.SplitN(report, []byte("\n"), 2)
if len(parts) != 2 {
return nil, errors.New("no panic line found")
}
line := parts[0]
report = parts[1]
if foundPanic {
// The previous line was our "Panic at ..." header. We are now
// at the beginning of the real panic trace and this is our
// subject line.
subjectLine = line
break
} else if bytes.HasPrefix(line, []byte("Panic at")) {
foundPanic = true
}
}
r := bytes.NewReader(report)
ctx, err := stack.ParseDump(r, ioutil.Discard, false)
if err != nil {
return nil, err
}
// Lock the source code loader to the version we are processing here.
if version.commit != "" {
// We have a commit hash, so we know exactly which source to use
loader.LockWithVersion(version.commit)
} else if strings.HasPrefix(version.tag, "v") {
// Let's hope the tag is close enough
loader.LockWithVersion(version.tag)
} else {
// Last resort
loader.LockWithVersion("master")
}
defer loader.Unlock()
var trace raven.Stacktrace
for _, gr := range ctx.Goroutines {
if gr.First {
trace.Frames = make([]*raven.StacktraceFrame, len(gr.Stack.Calls))
for i, sc := range gr.Stack.Calls {
trace.Frames[len(trace.Frames)-1-i] = raven.NewStacktraceFrame(0, sc.Func.Name(), sc.SrcPath, sc.Line, 3, nil)
}
break
}
}
pkt := &raven.Packet{
Message: string(subjectLine),
Platform: "go",
Release: version.tag,
Environment: version.environment(),
Tags: raven.Tags{
raven.Tag{Key: "version", Value: version.version},
raven.Tag{Key: "tag", Value: version.tag},
raven.Tag{Key: "codename", Value: version.codename},
raven.Tag{Key: "runtime", Value: version.runtime},
raven.Tag{Key: "goos", Value: version.goos},
raven.Tag{Key: "goarch", Value: version.goarch},
raven.Tag{Key: "builder", Value: version.builder},
},
Extra: raven.Extra{
"url": reportServer + path,
},
Interfaces: []raven.Interface{&trace},
}
if version.commit != "" {
pkt.Tags = append(pkt.Tags, raven.Tag{Key: "commit", Value: version.commit})
}
for _, tag := range version.extra {
pkt.Tags = append(pkt.Tags, raven.Tag{Key: tag, Value: "1"})
}
return pkt, nil
}
// syncthing v1.1.4-rc.1+30-g6aaae618-dirty-crashrep "Erbium Earthworm" (go1.12.5 darwin-amd64) jb@kvin.kastelo.net 2019-05-23 16:08:14 UTC [foo, bar]
var longVersionRE = regexp.MustCompile(`syncthing\s+(v[^\s]+)\s+"([^"]+)"\s\(([^\s]+)\s+([^-]+)-([^)]+)\)\s+([^\s]+)[^\[]*(?:\[(.+)\])?$`)
type version struct {
version string // "v1.1.4-rc.1+30-g6aaae618-dirty-crashrep"
tag string // "v1.1.4-rc.1"
commit string // "6aaae618", blank when absent
codename string // "Erbium Earthworm"
runtime string // "go1.12.5"
goos string // "darwin"
goarch string // "amd64"
builder string // "jb@kvin.kastelo.net"
extra []string // "foo", "bar"
}
func (v version) environment() string {
if v.commit != "" {
return "Development"
}
if strings.Contains(v.tag, "-rc.") {
return "Candidate"
}
if strings.Contains(v.tag, "-") {
return "Beta"
}
return "Stable"
}
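As an illustration of the classification above (our example values, assuming fmt is available in this package): a commit hash means a development build, a "-rc." tag a release candidate, any other pre-release suffix a beta, and a bare tag a stable release.

// Illustrative only: how environment() classifies a few version values.
func exampleEnvironments() {
	fmt.Println(version{tag: "v1.6.1", commit: "1eb104f3"}.environment()) // Development
	fmt.Println(version{tag: "v1.6.1-rc.2"}.environment())                // Candidate
	fmt.Println(version{tag: "v1.6.1"}.environment())                     // Stable
}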
func parseVersion(line string) (version, error) {
m := longVersionRE.FindStringSubmatch(line)
if len(m) == 0 {
return version{}, errors.New("unintelligeble version string")
}
v := version{
version: m[1],
codename: m[2],
runtime: m[3],
goos: m[4],
goarch: m[5],
builder: m[6],
}
parts := strings.Split(v.version, "+")
v.tag = parts[0]
if len(parts) > 1 {
fields := strings.Split(parts[1], "-")
if len(fields) >= 2 && strings.HasPrefix(fields[1], "g") {
v.commit = fields[1][1:]
}
}
if len(m) >= 8 && m[7] != "" {
tags := strings.Split(m[7], ",")
for i := range tags {
tags[i] = strings.TrimSpace(tags[i])
}
v.extra = tags
}
return v, nil
}

View File

@@ -0,0 +1,78 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"fmt"
"io/ioutil"
"testing"
)
func TestParseVersion(t *testing.T) {
cases := []struct {
longVersion string
parsed version
}{
{
longVersion: `syncthing v1.1.4-rc.1+30-g6aaae618-dirty-crashrep "Erbium Earthworm" (go1.12.5 darwin-amd64) jb@kvin.kastelo.net 2019-05-23 16:08:14 UTC`,
parsed: version{
version: "v1.1.4-rc.1+30-g6aaae618-dirty-crashrep",
tag: "v1.1.4-rc.1",
commit: "6aaae618",
codename: "Erbium Earthworm",
runtime: "go1.12.5",
goos: "darwin",
goarch: "amd64",
builder: "jb@kvin.kastelo.net",
},
},
{
longVersion: `syncthing v1.1.4-rc.1+30-g6aaae618-dirty-crashrep "Erbium Earthworm" (go1.12.5 darwin-amd64) jb@kvin.kastelo.net 2019-05-23 16:08:14 UTC [foo, bar]`,
parsed: version{
version: "v1.1.4-rc.1+30-g6aaae618-dirty-crashrep",
tag: "v1.1.4-rc.1",
commit: "6aaae618",
codename: "Erbium Earthworm",
runtime: "go1.12.5",
goos: "darwin",
goarch: "amd64",
builder: "jb@kvin.kastelo.net",
extra: []string{"foo", "bar"},
},
},
}
for _, tc := range cases {
v, err := parseVersion(tc.longVersion)
if err != nil {
t.Errorf("%s\nerror: %v\n", tc.longVersion, err)
continue
}
if fmt.Sprint(v) != fmt.Sprint(tc.parsed) {
t.Errorf("%s\nA: %v\nE: %v\n", tc.longVersion, v, tc.parsed)
}
}
}
func TestParseReport(t *testing.T) {
bs, err := ioutil.ReadFile("_testdata/panic.log")
if err != nil {
t.Fatal(err)
}
pkt, err := parseReport("1/2/345", bs)
if err != nil {
t.Fatal(err)
}
bs, err = pkt.JSON()
if err != nil {
t.Fatal(err)
}
fmt.Printf("%s\n", bs)
}

View File

@@ -0,0 +1,114 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"bytes"
"fmt"
"io/ioutil"
"net/http"
"path/filepath"
"strings"
"sync"
"time"
)
const (
urlPrefix = "https://raw.githubusercontent.com/syncthing/syncthing/"
httpTimeout = 10 * time.Second
)
type githubSourceCodeLoader struct {
mut sync.Mutex
version string
cache map[string]map[string][][]byte // version -> file -> lines
client *http.Client
}
func newGithubSourceCodeLoader() *githubSourceCodeLoader {
return &githubSourceCodeLoader{
cache: make(map[string]map[string][][]byte),
client: &http.Client{Timeout: httpTimeout},
}
}
func (l *githubSourceCodeLoader) LockWithVersion(version string) {
l.mut.Lock()
l.version = version
if _, ok := l.cache[version]; !ok {
l.cache[version] = make(map[string][][]byte)
}
}
func (l *githubSourceCodeLoader) Unlock() {
l.mut.Unlock()
}
func (l *githubSourceCodeLoader) Load(filename string, line, context int) ([][]byte, int) {
filename = filepath.ToSlash(filename)
lines, ok := l.cache[l.version][filename]
if !ok {
// Cache whatever we managed to find (or nil if nothing, so we don't try again)
defer func() {
l.cache[l.version][filename] = lines
}()
knownPrefixes := []string{"/lib/", "/cmd/"}
var idx int
for _, pref := range knownPrefixes {
idx = strings.Index(filename, pref)
if idx >= 0 {
break
}
}
if idx == -1 {
return nil, 0
}
url := urlPrefix + l.version + filename[idx:]
resp, err := l.client.Get(url)
if err != nil {
fmt.Println("Loading source:", err.Error())
return nil, 0
}
if resp.StatusCode != http.StatusOK {
fmt.Println("Loading source: unexpected status", resp.Status)
_ = resp.Body.Close()
return nil, 0
}
data, err := ioutil.ReadAll(resp.Body)
_ = resp.Body.Close()
if err != nil {
fmt.Println("Loading source:", err.Error())
return nil, 0
}
lines = bytes.Split(data, []byte{'\n'})
}
return getLineFromLines(lines, line, context)
}
func getLineFromLines(lines [][]byte, line, context int) ([][]byte, int) {
if lines == nil {
// cached load failure: return no lines
return nil, 0
}
line-- // stack trace lines are 1-indexed
start := line - context
var idx int
if start < 0 {
start = 0
idx = line
} else {
idx = context
}
end := line + context + 1
if line >= len(lines) {
return nil, 0
}
if end > len(lines) {
end = len(lines)
}
return lines[start:end], idx
}
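As a quick illustration of the windowing logic above (a sketch of ours, reusing the bytes and fmt imports of this file, not part of the change): with ten source lines, a 1-indexed target line of 3 and a context of 2, the function returns the first five lines and reports the target at index 2 within that window.

func exampleWindow() {
	lines := bytes.Split([]byte("a\nb\nc\nd\ne\nf\ng\nh\ni\nj"), []byte{'\n'})
	window, idx := getLineFromLines(lines, 3, 2)
	fmt.Println(len(window), idx, string(window[idx])) // prints: 5 2 c
}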

View File

@@ -0,0 +1,185 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
// Command stcrashreceiver is a trivial HTTP server that allows two things:
//
// - uploading files (crash reports) named like a SHA256 hash using a PUT request
// - checking whether such file exists using a HEAD request
//
// Typically this should be deployed behind something that manages HTTPS.
package main
import (
"bytes"
"compress/gzip"
"flag"
"fmt"
"io"
"io/ioutil"
"log"
"net"
"net/http"
"os"
"path"
"path/filepath"
"strings"
"time"
"github.com/syncthing/syncthing/lib/sha256"
)
const maxRequestSize = 1 << 20 // 1 MiB
func main() {
dir := flag.String("dir", ".", "Directory to store reports in")
dsn := flag.String("dsn", "", "Sentry DSN")
listen := flag.String("listen", ":22039", "HTTP listen address")
flag.Parse()
cr := &crashReceiver{
dir: *dir,
dsn: *dsn,
}
log.SetOutput(os.Stdout)
if err := http.ListenAndServe(*listen, cr); err != nil {
log.Fatalln("HTTP serve:", err)
}
}
type crashReceiver struct {
dir string
dsn string
}
func (r *crashReceiver) ServeHTTP(w http.ResponseWriter, req *http.Request) {
// The final path component should be a SHA256 hash in hex, so 64 hex
// characters. We don't care about case on the request but use lower
// case internally.
reportID := strings.ToLower(path.Base(req.URL.Path))
if len(reportID) != 64 {
http.Error(w, "Bad request", http.StatusBadRequest)
return
}
for _, c := range reportID {
if c >= 'a' && c <= 'f' {
continue
}
if c >= '0' && c <= '9' {
continue
}
http.Error(w, "Bad request", http.StatusBadRequest)
return
}
// The location of the report on disk, compressed
fullPath := filepath.Join(r.dir, r.dirFor(reportID), reportID) + ".gz"
switch req.Method {
case http.MethodGet:
r.serveGet(fullPath, w, req)
case http.MethodHead:
r.serveHead(fullPath, w, req)
case http.MethodPut:
r.servePut(reportID, fullPath, w, req)
default:
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
}
}
// serveGet responds to GET requests by serving the uncompressed report.
func (r *crashReceiver) serveGet(fullPath string, w http.ResponseWriter, _ *http.Request) {
fd, err := os.Open(fullPath)
if err != nil {
http.Error(w, "Not found", http.StatusNotFound)
return
}
defer fd.Close()
gr, err := gzip.NewReader(fd)
if err != nil {
http.Error(w, "Internal server error", http.StatusInternalServerError)
return
}
_, _ = io.Copy(w, gr) // best effort
}
// serveHead responds to HEAD requests by checking if the named report
// already exists in the system.
func (r *crashReceiver) serveHead(fullPath string, w http.ResponseWriter, _ *http.Request) {
if _, err := os.Lstat(fullPath); err != nil {
http.Error(w, "Not found", http.StatusNotFound)
}
}
// servePut accepts and stores the given report.
func (r *crashReceiver) servePut(reportID, fullPath string, w http.ResponseWriter, req *http.Request) {
// Ensure the destination directory exists
if err := os.MkdirAll(filepath.Dir(fullPath), 0755); err != nil {
log.Println("Creating directory:", err)
http.Error(w, "Internal server error", http.StatusInternalServerError)
return
}
// Read at most maxRequestSize of report data.
log.Println("Receiving report", reportID)
lr := io.LimitReader(req.Body, maxRequestSize)
bs, err := ioutil.ReadAll(lr)
if err != nil {
log.Println("Reading report:", err)
http.Error(w, "Internal server error", http.StatusInternalServerError)
return
}
// Compress the report for storage
buf := new(bytes.Buffer)
gw := gzip.NewWriter(buf)
_, _ = gw.Write(bs) // can't fail
gw.Close()
// Create an output file with the compressed report
err = ioutil.WriteFile(fullPath, buf.Bytes(), 0644)
if err != nil {
log.Println("Saving report:", err)
http.Error(w, "Internal server error", http.StatusInternalServerError)
return
}
// Send the report to Sentry
if r.dsn != "" {
// Remote ID
user := userIDFor(req)
go func() {
// There's no need for the client to have to wait for this part.
if err := sendReport(r.dsn, reportID, bs, user); err != nil {
log.Println("Failed to send report:", err)
}
}()
}
}
// 01234567890abcdef... => 01/23
func (r *crashReceiver) dirFor(base string) string {
return filepath.Join(base[0:2], base[2:4])
}
// userIDFor returns a string we can use as the user ID for the purpose of
// counting affected users. It's the truncated hash of a salt, the user's
// remote IP, and the current month.
func userIDFor(req *http.Request) string {
addr := req.RemoteAddr
if fwd := req.Header.Get("x-forwarded-for"); fwd != "" {
addr = fwd
}
if host, _, err := net.SplitHostPort(addr); err == nil {
addr = host
}
now := time.Now().Format("200601")
salt := "stcrashreporter"
hash := sha256.Sum256([]byte(salt + addr + now))
return fmt.Sprintf("%x", hash[:8])
}
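To make the PUT/HEAD contract described in the package comment concrete, here is a hedged client-side sketch of ours (not part of the change): the report ID is the lowercase hex SHA-256 of the report body, a HEAD tells us whether the receiver already has that report, and a PUT uploads it. The base URL, helper name, and error handling are assumptions.

package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
	"net/http"
)

// uploadCrashReport is an illustrative client for the receiver above.
func uploadCrashReport(baseURL string, report []byte) error {
	id := fmt.Sprintf("%x", sha256.Sum256(report)) // 64 hex characters, as the receiver requires
	url := baseURL + "/" + id

	// If the receiver already has this exact report, there is nothing to do.
	if resp, err := http.Head(url); err == nil {
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return nil
		}
	}

	req, err := http.NewRequest(http.MethodPut, url, bytes.NewReader(report))
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("unexpected status %s", resp.Status)
	}
	return nil
}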

View File

@@ -18,6 +18,7 @@ import (
"net"
"net/http"
"net/url"
"sort"
"strconv"
"strings"
"sync"
@@ -279,6 +280,10 @@ func (s *apiSrv) handleAnnounce(remote net.IP, deviceID protocol.DeviceID, addre
dbAddrs[i].Expires = expire
}
// The address slice must always be sorted for database merges to work
// properly.
sort.Sort(databaseAddressOrder(dbAddrs))
seen := now.UnixNano()
if s.repl != nil {
s.repl.send(key, dbAddrs, seen)
@@ -295,28 +300,66 @@ func certificateBytes(req *http.Request) []byte {
return req.TLS.PeerCertificates[0].Raw
}
var bs []byte
if hdr := req.Header.Get("X-SSL-Cert"); hdr != "" {
bs := []byte(hdr)
// The certificate is in PEM format but with spaces for newlines. We
// need to reinstate the newlines for the PEM decoder. But we need to
// leave the spaces in the BEGIN and END lines - the first and last
// space - alone.
firstSpace := bytes.Index(bs, []byte(" "))
lastSpace := bytes.LastIndex(bs, []byte(" "))
for i := firstSpace + 1; i < lastSpace; i++ {
if bs[i] == ' ' {
bs[i] = '\n'
if strings.Contains(hdr, "%") {
// Nginx using $ssl_client_escaped_cert
// The certificate is in PEM format with url encoding.
// We need to decode for the PEM decoder
hdr, err := url.QueryUnescape(hdr)
if err != nil {
// Decoding failed
return nil
}
bs = []byte(hdr)
} else {
// Nginx using $ssl_client_cert
// The certificate is in PEM format but with spaces for newlines. We
// need to reinstate the newlines for the PEM decoder. But we need to
// leave the spaces in the BEGIN and END lines - the first and last
// space - alone.
bs = []byte(hdr)
firstSpace := bytes.Index(bs, []byte(" "))
lastSpace := bytes.LastIndex(bs, []byte(" "))
for i := firstSpace + 1; i < lastSpace; i++ {
if bs[i] == ' ' {
bs[i] = '\n'
}
}
}
block, _ := pem.Decode(bs)
if block == nil {
} else if hdr := req.Header.Get("X-Forwarded-Tls-Client-Cert"); hdr != "" {
// Traefik 2 passtlsclientcert
// The certificate is in PEM format with url encoding but without newlines
// and start/end statements. We need to decode, reinstate the newlines every
// 64 characters and add the start/end statements for the PEM decoder
hdr, err := url.QueryUnescape(hdr)
if err != nil {
// Decoding failed
return nil
}
return block.Bytes
for i := 64; i < len(hdr); i += 65 {
hdr = hdr[:i] + "\n" + hdr[i:]
}
hdr = "-----BEGIN CERTIFICATE-----\n" + hdr
hdr = hdr + "\n-----END CERTIFICATE-----\n"
bs = []byte(hdr)
}
return nil
if bs == nil {
return nil
}
block, _ := pem.Decode(bs)
if block == nil {
// Decoding failed
return nil
}
return block.Bytes
}
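For clarity, a small illustrative sketch of ours (not part of the change) of the nginx $ssl_client_cert case handled above: the proxy collapses the PEM newlines into spaces, and the loop restores them while leaving the single spaces inside the BEGIN/END markers untouched.

func restorePEMNewlines(header string) []byte {
	bs := []byte(header) // e.g. "-----BEGIN CERTIFICATE----- MIIB... -----END CERTIFICATE-----"
	firstSpace := bytes.Index(bs, []byte(" "))    // the space in "BEGIN CERTIFICATE"
	lastSpace := bytes.LastIndex(bs, []byte(" ")) // the space in "END CERTIFICATE"
	for i := firstSpace + 1; i < lastSpace; i++ {
		if bs[i] == ' ' {
			bs[i] = '\n'
		}
	}
	return bs // standard PEM, suitable for pem.Decode
}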
// fixupAddresses checks the list of addresses, removing invalid ones and

View File

@@ -5,11 +5,12 @@
// You can obtain one at https://mozilla.org/MPL/2.0/.
//go:generate go run ../../script/protofmt.go database.proto
//go:generate protoc -I ../../../../../ -I ../../vendor/ -I ../../vendor/github.com/gogo/protobuf/protobuf -I . --gogofast_out=. database.proto
//go:generate protoc -I ../../ -I . --gogofast_out=. database.proto
package main
import (
"log"
"sort"
"time"
@@ -263,12 +264,15 @@ func (s *levelDBStore) Stop() {
// chosen for any duplicates.
func merge(a, b DatabaseRecord) DatabaseRecord {
// Both lists must be sorted for this to work.
sort.Slice(a.Addresses, func(i, j int) bool {
return a.Addresses[i].Address < a.Addresses[j].Address
})
sort.Slice(b.Addresses, func(i, j int) bool {
return b.Addresses[i].Address < b.Addresses[j].Address
})
if !sort.IsSorted(databaseAddressOrder(a.Addresses)) {
log.Println("Warning: bug: addresses not correctly sorted in merge")
a.Addresses = sortedAddressCopy(a.Addresses)
}
if !sort.IsSorted(databaseAddressOrder(b.Addresses)) {
// no warning because this is the side we read from disk and it may
// legitimately predate correct sorting.
b.Addresses = sortedAddressCopy(b.Addresses)
}
res := DatabaseRecord{
Addresses: make([]DatabaseAddress, 0, len(a.Addresses)+len(b.Addresses)),
@@ -352,3 +356,24 @@ func expire(addrs []DatabaseAddress, now int64) []DatabaseAddress {
}
return addrs
}
func sortedAddressCopy(addrs []DatabaseAddress) []DatabaseAddress {
sorted := make([]DatabaseAddress, len(addrs))
copy(sorted, addrs)
sort.Sort(databaseAddressOrder(sorted))
return sorted
}
type databaseAddressOrder []DatabaseAddress
func (s databaseAddressOrder) Less(a, b int) bool {
return s[a].Address < s[b].Address
}
func (s databaseAddressOrder) Swap(a, b int) {
s[a], s[b] = s[b], s[a]
}
func (s databaseAddressOrder) Len() int {
return len(s)
}
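A minimal sketch of ours (with made-up addresses) of the invariant merge relies on: address lists are ordered by Address via databaseAddressOrder, and sortedAddressCopy restores that order when an on-disk record predates it.

func sortedAddressesExample() []DatabaseAddress {
	addrs := []DatabaseAddress{
		{Address: "tcp://b.example.com:22000", Expires: 2},
		{Address: "tcp://a.example.com:22000", Expires: 1},
	}
	sort.Sort(databaseAddressOrder(addrs)) // a.example.com now sorts first
	return addrs
}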

View File

@@ -1,25 +1,16 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: database.proto
/*
Package main is a generated protocol buffer package.
It is generated from these files:
database.proto
It has these top-level messages:
DatabaseRecord
ReplicationRecord
DatabaseAddress
*/
package main
import proto "github.com/gogo/protobuf/proto"
import fmt "fmt"
import math "math"
import _ "github.com/gogo/protobuf/gogoproto"
import io "io"
import (
fmt "fmt"
_ "github.com/gogo/protobuf/gogoproto"
proto "github.com/gogo/protobuf/proto"
io "io"
math "math"
math_bits "math/bits"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
@@ -30,50 +21,158 @@ var _ = math.Inf
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package
const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
type DatabaseRecord struct {
Addresses []DatabaseAddress `protobuf:"bytes,1,rep,name=addresses" json:"addresses"`
Addresses []DatabaseAddress `protobuf:"bytes,1,rep,name=addresses,proto3" json:"addresses"`
Misses int32 `protobuf:"varint,2,opt,name=misses,proto3" json:"misses,omitempty"`
Seen int64 `protobuf:"varint,3,opt,name=seen,proto3" json:"seen,omitempty"`
Missed int64 `protobuf:"varint,4,opt,name=missed,proto3" json:"missed,omitempty"`
}
func (m *DatabaseRecord) Reset() { *m = DatabaseRecord{} }
func (m *DatabaseRecord) String() string { return proto.CompactTextString(m) }
func (*DatabaseRecord) ProtoMessage() {}
func (*DatabaseRecord) Descriptor() ([]byte, []int) { return fileDescriptorDatabase, []int{0} }
func (m *DatabaseRecord) Reset() { *m = DatabaseRecord{} }
func (m *DatabaseRecord) String() string { return proto.CompactTextString(m) }
func (*DatabaseRecord) ProtoMessage() {}
func (*DatabaseRecord) Descriptor() ([]byte, []int) {
return fileDescriptor_b90fe3356ea5df07, []int{0}
}
func (m *DatabaseRecord) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *DatabaseRecord) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_DatabaseRecord.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *DatabaseRecord) XXX_Merge(src proto.Message) {
xxx_messageInfo_DatabaseRecord.Merge(m, src)
}
func (m *DatabaseRecord) XXX_Size() int {
return m.Size()
}
func (m *DatabaseRecord) XXX_DiscardUnknown() {
xxx_messageInfo_DatabaseRecord.DiscardUnknown(m)
}
var xxx_messageInfo_DatabaseRecord proto.InternalMessageInfo
type ReplicationRecord struct {
Key string `protobuf:"bytes,1,opt,name=key,proto3" json:"key,omitempty"`
Addresses []DatabaseAddress `protobuf:"bytes,2,rep,name=addresses" json:"addresses"`
Addresses []DatabaseAddress `protobuf:"bytes,2,rep,name=addresses,proto3" json:"addresses"`
Seen int64 `protobuf:"varint,3,opt,name=seen,proto3" json:"seen,omitempty"`
}
func (m *ReplicationRecord) Reset() { *m = ReplicationRecord{} }
func (m *ReplicationRecord) String() string { return proto.CompactTextString(m) }
func (*ReplicationRecord) ProtoMessage() {}
func (*ReplicationRecord) Descriptor() ([]byte, []int) { return fileDescriptorDatabase, []int{1} }
func (m *ReplicationRecord) Reset() { *m = ReplicationRecord{} }
func (m *ReplicationRecord) String() string { return proto.CompactTextString(m) }
func (*ReplicationRecord) ProtoMessage() {}
func (*ReplicationRecord) Descriptor() ([]byte, []int) {
return fileDescriptor_b90fe3356ea5df07, []int{1}
}
func (m *ReplicationRecord) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *ReplicationRecord) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_ReplicationRecord.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *ReplicationRecord) XXX_Merge(src proto.Message) {
xxx_messageInfo_ReplicationRecord.Merge(m, src)
}
func (m *ReplicationRecord) XXX_Size() int {
return m.Size()
}
func (m *ReplicationRecord) XXX_DiscardUnknown() {
xxx_messageInfo_ReplicationRecord.DiscardUnknown(m)
}
var xxx_messageInfo_ReplicationRecord proto.InternalMessageInfo
type DatabaseAddress struct {
Address string `protobuf:"bytes,1,opt,name=address,proto3" json:"address,omitempty"`
Expires int64 `protobuf:"varint,2,opt,name=expires,proto3" json:"expires,omitempty"`
}
func (m *DatabaseAddress) Reset() { *m = DatabaseAddress{} }
func (m *DatabaseAddress) String() string { return proto.CompactTextString(m) }
func (*DatabaseAddress) ProtoMessage() {}
func (*DatabaseAddress) Descriptor() ([]byte, []int) { return fileDescriptorDatabase, []int{2} }
func (m *DatabaseAddress) Reset() { *m = DatabaseAddress{} }
func (m *DatabaseAddress) String() string { return proto.CompactTextString(m) }
func (*DatabaseAddress) ProtoMessage() {}
func (*DatabaseAddress) Descriptor() ([]byte, []int) {
return fileDescriptor_b90fe3356ea5df07, []int{2}
}
func (m *DatabaseAddress) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *DatabaseAddress) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_DatabaseAddress.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *DatabaseAddress) XXX_Merge(src proto.Message) {
xxx_messageInfo_DatabaseAddress.Merge(m, src)
}
func (m *DatabaseAddress) XXX_Size() int {
return m.Size()
}
func (m *DatabaseAddress) XXX_DiscardUnknown() {
xxx_messageInfo_DatabaseAddress.DiscardUnknown(m)
}
var xxx_messageInfo_DatabaseAddress proto.InternalMessageInfo
func init() {
proto.RegisterType((*DatabaseRecord)(nil), "main.DatabaseRecord")
proto.RegisterType((*ReplicationRecord)(nil), "main.ReplicationRecord")
proto.RegisterType((*DatabaseAddress)(nil), "main.DatabaseAddress")
}
func init() { proto.RegisterFile("database.proto", fileDescriptor_b90fe3356ea5df07) }
var fileDescriptor_b90fe3356ea5df07 = []byte{
// 270 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x94, 0x90, 0x41, 0x4a, 0xc4, 0x30,
0x18, 0x85, 0x9b, 0x49, 0x1d, 0x99, 0x08, 0xa3, 0x06, 0x94, 0x20, 0x12, 0x4b, 0xdd, 0x74, 0xd5,
0x01, 0x5d, 0xb9, 0x74, 0xd0, 0x0b, 0xe4, 0x06, 0xe9, 0xe4, 0x77, 0x08, 0x3a, 0x4d, 0x49, 0x2a,
0xe8, 0x29, 0xf4, 0x58, 0x5d, 0xce, 0xd2, 0x95, 0x68, 0x7b, 0x11, 0x69, 0x26, 0x55, 0x14, 0x37,
0xb3, 0x7b, 0xdf, 0xff, 0xbf, 0x97, 0xbc, 0x84, 0x4c, 0x95, 0xac, 0x65, 0x21, 0x1d, 0xe4, 0x95,
0x35, 0xb5, 0xa1, 0xf1, 0x4a, 0xea, 0xf2, 0xe4, 0xdc, 0x42, 0x65, 0xdc, 0xcc, 0x8f, 0x8a, 0xc7,
0xbb, 0xd9, 0xd2, 0x2c, 0x8d, 0x07, 0xaf, 0x36, 0xd6, 0xf4, 0x05, 0x91, 0xe9, 0x4d, 0x48, 0x0b,
0x58, 0x18, 0xab, 0xe8, 0x15, 0x99, 0x48, 0xa5, 0x2c, 0x38, 0x07, 0x8e, 0xa1, 0x04, 0x67, 0x7b,
0x17, 0x47, 0x79, 0x7f, 0x62, 0x3e, 0x18, 0xaf, 0x37, 0xeb, 0x79, 0xdc, 0xbc, 0x9f, 0x45, 0xe2,
0xc7, 0x4d, 0x8f, 0xc9, 0x78, 0xa5, 0x7d, 0x6e, 0x94, 0xa0, 0x6c, 0x47, 0x04, 0xa2, 0x94, 0xc4,
0x0e, 0xa0, 0x64, 0x38, 0x41, 0x19, 0x16, 0x5e, 0x7f, 0x7b, 0x15, 0x8b, 0xfd, 0x34, 0x50, 0x5a,
0x93, 0x43, 0x01, 0xd5, 0x83, 0x5e, 0xc8, 0x5a, 0x9b, 0x32, 0x74, 0x3a, 0x20, 0xf8, 0x1e, 0x9e,
0x19, 0x4a, 0x50, 0x36, 0x11, 0xbd, 0xfc, 0xdd, 0x72, 0xb4, 0x55, 0xcb, 0x7f, 0xda, 0xa4, 0xb7,
0x64, 0xff, 0x4f, 0x8e, 0x32, 0xb2, 0x1b, 0x32, 0xe1, 0xde, 0x01, 0xfb, 0x0d, 0x3c, 0x55, 0xda,
0x86, 0x77, 0x62, 0x31, 0xe0, 0xfc, 0xb4, 0xf9, 0xe4, 0x51, 0xd3, 0x72, 0xb4, 0x6e, 0x39, 0xfa,
0x68, 0x39, 0x7a, 0xed, 0x78, 0xb4, 0xee, 0x78, 0xf4, 0xd6, 0xf1, 0xa8, 0x18, 0xfb, 0x3f, 0xbf,
0xfc, 0x0a, 0x00, 0x00, 0xff, 0xff, 0x7a, 0xa2, 0xf6, 0x1e, 0xb0, 0x01, 0x00, 0x00,
}
func (m *DatabaseRecord) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalTo(dAtA)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
@@ -81,44 +180,51 @@ func (m *DatabaseRecord) Marshal() (dAtA []byte, err error) {
}
func (m *DatabaseRecord) MarshalTo(dAtA []byte) (int, error) {
var i int
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *DatabaseRecord) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
if len(m.Addresses) > 0 {
for _, msg := range m.Addresses {
dAtA[i] = 0xa
i++
i = encodeVarintDatabase(dAtA, i, uint64(msg.Size()))
n, err := msg.MarshalTo(dAtA[i:])
if err != nil {
return 0, err
}
i += n
}
}
if m.Misses != 0 {
dAtA[i] = 0x10
i++
i = encodeVarintDatabase(dAtA, i, uint64(m.Misses))
if m.Missed != 0 {
i = encodeVarintDatabase(dAtA, i, uint64(m.Missed))
i--
dAtA[i] = 0x20
}
if m.Seen != 0 {
dAtA[i] = 0x18
i++
i = encodeVarintDatabase(dAtA, i, uint64(m.Seen))
i--
dAtA[i] = 0x18
}
if m.Missed != 0 {
dAtA[i] = 0x20
i++
i = encodeVarintDatabase(dAtA, i, uint64(m.Missed))
if m.Misses != 0 {
i = encodeVarintDatabase(dAtA, i, uint64(m.Misses))
i--
dAtA[i] = 0x10
}
return i, nil
if len(m.Addresses) > 0 {
for iNdEx := len(m.Addresses) - 1; iNdEx >= 0; iNdEx-- {
{
size, err := m.Addresses[iNdEx].MarshalToSizedBuffer(dAtA[:i])
if err != nil {
return 0, err
}
i -= size
i = encodeVarintDatabase(dAtA, i, uint64(size))
}
i--
dAtA[i] = 0xa
}
}
return len(dAtA) - i, nil
}
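The regenerated marshalers above write tail-first: the buffer is allocated at its exact final size, i starts at len(dAtA), and every field is prepended, value first and tag byte last, highest field number first, so the finished slice carries the fields in ascending order. A minimal standalone sketch of that pattern (editorial, not generated code; it uses only encoding/binary and math/bits from the standard library):

// prependUvarint writes v as a base-128 varint ending at offset i and
// returns the new, smaller offset.
func prependUvarint(buf []byte, i int, v uint64) int {
	n := (bits.Len64(v|1) + 6) / 7 // encoded length in bytes
	i -= n
	binary.PutUvarint(buf[i:], v)
	return i
}

// marshalSeenMissed mimics MarshalToSizedBuffer for two varint fields:
// field 4 (missed, tag 0x20) is prepended before field 3 (seen, tag 0x18),
// so the returned slice holds them in ascending field order.
func marshalSeenMissed(seen, missed uint64) []byte {
	buf := make([]byte, 2*(binary.MaxVarintLen64+1))
	i := len(buf)
	if missed != 0 {
		i = prependUvarint(buf, i, missed)
		i--
		buf[i] = 0x20 // field 4, wire type 0 (varint)
	}
	if seen != 0 {
		i = prependUvarint(buf, i, seen)
		i--
		buf[i] = 0x18 // field 3, wire type 0 (varint)
	}
	return buf[i:]
}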
func (m *ReplicationRecord) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalTo(dAtA)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
@@ -126,40 +232,48 @@ func (m *ReplicationRecord) Marshal() (dAtA []byte, err error) {
}
func (m *ReplicationRecord) MarshalTo(dAtA []byte) (int, error) {
var i int
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *ReplicationRecord) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
if len(m.Key) > 0 {
dAtA[i] = 0xa
i++
i = encodeVarintDatabase(dAtA, i, uint64(len(m.Key)))
i += copy(dAtA[i:], m.Key)
if m.Seen != 0 {
i = encodeVarintDatabase(dAtA, i, uint64(m.Seen))
i--
dAtA[i] = 0x18
}
if len(m.Addresses) > 0 {
for _, msg := range m.Addresses {
dAtA[i] = 0x12
i++
i = encodeVarintDatabase(dAtA, i, uint64(msg.Size()))
n, err := msg.MarshalTo(dAtA[i:])
if err != nil {
return 0, err
for iNdEx := len(m.Addresses) - 1; iNdEx >= 0; iNdEx-- {
{
size, err := m.Addresses[iNdEx].MarshalToSizedBuffer(dAtA[:i])
if err != nil {
return 0, err
}
i -= size
i = encodeVarintDatabase(dAtA, i, uint64(size))
}
i += n
i--
dAtA[i] = 0x12
}
}
if m.Seen != 0 {
dAtA[i] = 0x18
i++
i = encodeVarintDatabase(dAtA, i, uint64(m.Seen))
if len(m.Key) > 0 {
i -= len(m.Key)
copy(dAtA[i:], m.Key)
i = encodeVarintDatabase(dAtA, i, uint64(len(m.Key)))
i--
dAtA[i] = 0xa
}
return i, nil
return len(dAtA) - i, nil
}
func (m *DatabaseAddress) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalTo(dAtA)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
@@ -167,52 +281,45 @@ func (m *DatabaseAddress) Marshal() (dAtA []byte, err error) {
}
func (m *DatabaseAddress) MarshalTo(dAtA []byte) (int, error) {
var i int
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *DatabaseAddress) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
if len(m.Address) > 0 {
dAtA[i] = 0xa
i++
i = encodeVarintDatabase(dAtA, i, uint64(len(m.Address)))
i += copy(dAtA[i:], m.Address)
}
if m.Expires != 0 {
dAtA[i] = 0x10
i++
i = encodeVarintDatabase(dAtA, i, uint64(m.Expires))
i--
dAtA[i] = 0x10
}
return i, nil
if len(m.Address) > 0 {
i -= len(m.Address)
copy(dAtA[i:], m.Address)
i = encodeVarintDatabase(dAtA, i, uint64(len(m.Address)))
i--
dAtA[i] = 0xa
}
return len(dAtA) - i, nil
}
func encodeFixed64Database(dAtA []byte, offset int, v uint64) int {
dAtA[offset] = uint8(v)
dAtA[offset+1] = uint8(v >> 8)
dAtA[offset+2] = uint8(v >> 16)
dAtA[offset+3] = uint8(v >> 24)
dAtA[offset+4] = uint8(v >> 32)
dAtA[offset+5] = uint8(v >> 40)
dAtA[offset+6] = uint8(v >> 48)
dAtA[offset+7] = uint8(v >> 56)
return offset + 8
}
func encodeFixed32Database(dAtA []byte, offset int, v uint32) int {
dAtA[offset] = uint8(v)
dAtA[offset+1] = uint8(v >> 8)
dAtA[offset+2] = uint8(v >> 16)
dAtA[offset+3] = uint8(v >> 24)
return offset + 4
}
func encodeVarintDatabase(dAtA []byte, offset int, v uint64) int {
offset -= sovDatabase(v)
base := offset
for v >= 1<<7 {
dAtA[offset] = uint8(v&0x7f | 0x80)
v >>= 7
offset++
}
dAtA[offset] = uint8(v)
return offset + 1
return base
}
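A quick worked example of the rewritten helper above (illustrative only, relying on the function exactly as shown): encoding 300 takes two varint bytes, so the offset backs up by two and the bytes are then written forward.

buf := make([]byte, 2)
start := encodeVarintDatabase(buf, len(buf), 300) // sovDatabase(300) == 2
// start == 0, buf == []byte{0xac, 0x02}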
func (m *DatabaseRecord) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if len(m.Addresses) > 0 {
@@ -234,6 +341,9 @@ func (m *DatabaseRecord) Size() (n int) {
}
func (m *ReplicationRecord) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
l = len(m.Key)
@@ -253,6 +363,9 @@ func (m *ReplicationRecord) Size() (n int) {
}
func (m *DatabaseAddress) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
l = len(m.Address)
@@ -266,14 +379,7 @@ func (m *DatabaseAddress) Size() (n int) {
}
func sovDatabase(x uint64) (n int) {
for {
n++
x >>= 7
if x == 0 {
break
}
}
return n
return (math_bits.Len64(x|1) + 6) / 7
}
func sozDatabase(x uint64) (n int) {
return sovDatabase(uint64((x << 1) ^ uint64((int64(x) >> 63))))
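sovDatabase now computes the varint length in closed form: (bits.Len64(x|1)+6)/7 is ceil(bitlen/7), and the |1 makes x == 0 still count as one byte; sozDatabase zigzag-maps signed values first (0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, ...) so small negative numbers stay short. A hypothetical spot check (editorial; assumes it sits in the same package and imports "testing"):

func TestSovDatabaseClosedForm(t *testing.T) {
	for _, x := range []uint64{0, 1, 127, 128, 300, 1 << 21, ^uint64(0)} {
		want := 1 // recompute the size with the old shift loop
		for v := x; v >= 0x80; v >>= 7 {
			want++
		}
		if got := sovDatabase(x); got != want {
			t.Errorf("sovDatabase(%d) = %d, want %d", x, got, want)
		}
	}
}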
@@ -293,7 +399,7 @@ func (m *DatabaseRecord) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -321,7 +427,7 @@ func (m *DatabaseRecord) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
msglen |= (int(b) & 0x7F) << shift
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -330,6 +436,9 @@ func (m *DatabaseRecord) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthDatabase
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthDatabase
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -352,7 +461,7 @@ func (m *DatabaseRecord) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
m.Misses |= (int32(b) & 0x7F) << shift
m.Misses |= int32(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -371,7 +480,7 @@ func (m *DatabaseRecord) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
m.Seen |= (int64(b) & 0x7F) << shift
m.Seen |= int64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -390,7 +499,7 @@ func (m *DatabaseRecord) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
m.Missed |= (int64(b) & 0x7F) << shift
m.Missed |= int64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -404,6 +513,9 @@ func (m *DatabaseRecord) Unmarshal(dAtA []byte) error {
if skippy < 0 {
return ErrInvalidLengthDatabase
}
if (iNdEx + skippy) < 0 {
return ErrInvalidLengthDatabase
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
@@ -431,7 +543,7 @@ func (m *ReplicationRecord) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -459,7 +571,7 @@ func (m *ReplicationRecord) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -469,6 +581,9 @@ func (m *ReplicationRecord) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthDatabase
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthDatabase
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -488,7 +603,7 @@ func (m *ReplicationRecord) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
msglen |= (int(b) & 0x7F) << shift
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -497,6 +612,9 @@ func (m *ReplicationRecord) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthDatabase
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthDatabase
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -519,7 +637,7 @@ func (m *ReplicationRecord) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
m.Seen |= (int64(b) & 0x7F) << shift
m.Seen |= int64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -533,6 +651,9 @@ func (m *ReplicationRecord) Unmarshal(dAtA []byte) error {
if skippy < 0 {
return ErrInvalidLengthDatabase
}
if (iNdEx + skippy) < 0 {
return ErrInvalidLengthDatabase
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
@@ -560,7 +681,7 @@ func (m *DatabaseAddress) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -588,7 +709,7 @@ func (m *DatabaseAddress) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -598,6 +719,9 @@ func (m *DatabaseAddress) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthDatabase
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthDatabase
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -617,7 +741,7 @@ func (m *DatabaseAddress) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
m.Expires |= (int64(b) & 0x7F) << shift
m.Expires |= int64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -631,6 +755,9 @@ func (m *DatabaseAddress) Unmarshal(dAtA []byte) error {
if skippy < 0 {
return ErrInvalidLengthDatabase
}
if (iNdEx + skippy) < 0 {
return ErrInvalidLengthDatabase
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
@@ -646,6 +773,7 @@ func (m *DatabaseAddress) Unmarshal(dAtA []byte) error {
func skipDatabase(dAtA []byte) (n int, err error) {
l := len(dAtA)
iNdEx := 0
depth := 0
for iNdEx < l {
var wire uint64
for shift := uint(0); ; shift += 7 {
@@ -677,10 +805,8 @@ func skipDatabase(dAtA []byte) (n int, err error) {
break
}
}
return iNdEx, nil
case 1:
iNdEx += 8
return iNdEx, nil
case 2:
var length int
for shift := uint(0); ; shift += 7 {
@@ -697,76 +823,34 @@ func skipDatabase(dAtA []byte) (n int, err error) {
break
}
}
iNdEx += length
if length < 0 {
return 0, ErrInvalidLengthDatabase
}
return iNdEx, nil
iNdEx += length
case 3:
for {
var innerWire uint64
var start int = iNdEx
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowDatabase
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
innerWire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
innerWireType := int(innerWire & 0x7)
if innerWireType == 4 {
break
}
next, err := skipDatabase(dAtA[start:])
if err != nil {
return 0, err
}
iNdEx = start + next
}
return iNdEx, nil
depth++
case 4:
return iNdEx, nil
if depth == 0 {
return 0, ErrUnexpectedEndOfGroupDatabase
}
depth--
case 5:
iNdEx += 4
return iNdEx, nil
default:
return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
}
if iNdEx < 0 {
return 0, ErrInvalidLengthDatabase
}
if depth == 0 {
return iNdEx, nil
}
}
panic("unreachable")
return 0, io.ErrUnexpectedEOF
}
var (
ErrInvalidLengthDatabase = fmt.Errorf("proto: negative length found during unmarshaling")
ErrIntOverflowDatabase = fmt.Errorf("proto: integer overflow")
ErrInvalidLengthDatabase = fmt.Errorf("proto: negative length found during unmarshaling")
ErrIntOverflowDatabase = fmt.Errorf("proto: integer overflow")
ErrUnexpectedEndOfGroupDatabase = fmt.Errorf("proto: unexpected end of group")
)
func init() { proto.RegisterFile("database.proto", fileDescriptorDatabase) }
var fileDescriptorDatabase = []byte{
// 264 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0xe2, 0x4b, 0x49, 0x2c, 0x49,
0x4c, 0x4a, 0x2c, 0x4e, 0xd5, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0x62, 0xc9, 0x4d, 0xcc, 0xcc,
0x93, 0xd2, 0x4d, 0xcf, 0x2c, 0xc9, 0x28, 0x4d, 0xd2, 0x4b, 0xce, 0xcf, 0xd5, 0x4f, 0xcf, 0x4f,
0xcf, 0xd7, 0x07, 0x4b, 0x26, 0x95, 0xa6, 0x81, 0x79, 0x60, 0x0e, 0x98, 0x05, 0xd1, 0xa4, 0xd4,
0xcf, 0xc8, 0xc5, 0xe7, 0x02, 0x35, 0x27, 0x28, 0x35, 0x39, 0xbf, 0x28, 0x45, 0xc8, 0x92, 0x8b,
0x33, 0x31, 0x25, 0xa5, 0x28, 0xb5, 0xb8, 0x38, 0xb5, 0x58, 0x82, 0x51, 0x81, 0x59, 0x83, 0xdb,
0x48, 0x54, 0x0f, 0x64, 0xb6, 0x1e, 0x4c, 0xa1, 0x23, 0x44, 0xda, 0x89, 0xe5, 0xc4, 0x3d, 0x79,
0x86, 0x20, 0x84, 0x6a, 0x21, 0x31, 0x2e, 0xb6, 0xdc, 0x4c, 0xb0, 0x3e, 0x26, 0x05, 0x46, 0x0d,
0xd6, 0x20, 0x28, 0x4f, 0x48, 0x88, 0x8b, 0xa5, 0x38, 0x35, 0x35, 0x4f, 0x82, 0x59, 0x81, 0x51,
0x83, 0x39, 0x08, 0xcc, 0x86, 0xab, 0x4d, 0x91, 0x60, 0x01, 0x8b, 0x42, 0x79, 0x4a, 0x25, 0x5c,
0x82, 0x41, 0xa9, 0x05, 0x39, 0x99, 0xc9, 0x89, 0x25, 0x99, 0xf9, 0x79, 0x50, 0x37, 0x09, 0x70,
0x31, 0x67, 0xa7, 0x56, 0x4a, 0x30, 0x2a, 0x30, 0x6a, 0x70, 0x06, 0x81, 0x98, 0xa8, 0xae, 0x64,
0x22, 0xc9, 0x95, 0x58, 0x5c, 0xa3, 0xe4, 0xca, 0xc5, 0x8f, 0xa6, 0x4f, 0x48, 0x82, 0x8b, 0x1d,
0xaa, 0x07, 0x6a, 0x2f, 0x8c, 0x0b, 0x92, 0x49, 0xad, 0x28, 0xc8, 0x2c, 0x82, 0xfa, 0x93, 0x39,
0x08, 0xc6, 0x75, 0x12, 0x38, 0xf1, 0x50, 0x8e, 0xe1, 0xc4, 0x23, 0x39, 0xc6, 0x0b, 0x8f, 0xe4,
0x18, 0x1f, 0x3c, 0x92, 0x63, 0x4c, 0x62, 0x03, 0x87, 0xb3, 0x31, 0x20, 0x00, 0x00, 0xff, 0xff,
0x6a, 0x22, 0xa2, 0x85, 0xae, 0x01, 0x00, 0x00,
}

@@ -8,9 +8,12 @@ syntax = "proto3";
package main;
import "github.com/gogo/protobuf/gogoproto/gogo.proto";
import "repos/protobuf/gogoproto/gogo.proto";
option (gogoproto.goproto_getters_all) = false;
option (gogoproto.goproto_unkeyed_all) = false;
option (gogoproto.goproto_unrecognized_all) = false;
option (gogoproto.goproto_sizecache_all) = false;
message DatabaseRecord {
repeated DatabaseAddress addresses = 1 [(gogoproto.nullable) = false];

@@ -0,0 +1,4 @@
[stdiscosrv]
title=Syncthing discovery server
description=Lets syncthing clients discover each other
ports=8443/tcp
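This is a ufw application profile; assuming the package installs it under /etc/ufw/applications.d/, the discovery port can then be opened with "ufw allow stdiscosrv" and inspected with "ufw app info stdiscosrv".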

@@ -0,0 +1,3 @@
# Default settings for syncthing-discosrv (stdiscosrv).
## Add Options here:
DISCOSRV_OPTS=

@@ -0,0 +1,25 @@
[Unit]
Description=Syncthing Discovery Server
After=network.target
Documentation=man:stdiscosrv(1)
[Service]
WorkingDirectory=/var/lib/syncthing-discosrv
EnvironmentFile=/etc/default/syncthing-discosrv
ExecStart=/usr/bin/stdiscosrv $DISCOSRV_OPTS
# Hardening
User=syncthing-discosrv
Group=syncthing
ProtectSystem=strict
ReadWritePaths=/var/lib/syncthing-discosrv
NoNewPrivileges=true
PrivateTmp=true
PrivateDevices=true
ProtectHome=true
SystemCallArchitectures=native
MemoryDenyWriteExecute=true
[Install]
WantedBy=multi-user.target
Alias=syncthing-discosrv.service

@@ -9,17 +9,15 @@ package main
import (
"crypto/tls"
"flag"
"fmt"
"log"
"net"
"net/http"
"os"
"runtime"
"strconv"
"strings"
"time"
"github.com/prometheus/client_golang/prometheus/promhttp"
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/tlsutil"
"github.com/syndtr/goleveldb/leveldb/opt"
@@ -65,34 +63,11 @@ var levelDBOptions = &opt.Options{
WriteBuffer: 32 << 20, // default 4<<20
}
var (
Version string
BuildStamp string
BuildUser string
BuildHost string
BuildDate time.Time
LongVersion string
)
func init() {
stamp, _ := strconv.Atoi(BuildStamp)
BuildDate = time.Unix(int64(stamp), 0)
date := BuildDate.UTC().Format("2006-01-02 15:04:05 MST")
LongVersion = fmt.Sprintf(`stdiscosrv %s (%s %s-%s) %s@%s %s`, Version, runtime.Version(), runtime.GOOS, runtime.GOARCH, BuildUser, BuildHost, date)
}
var (
debug = false
)
func main() {
const (
cleanIntv = 1 * time.Hour
statsIntv = 5 * time.Minute
)
var listen string
var dir string
var metricsListen string
@@ -114,19 +89,24 @@ func main() {
flag.StringVar(&metricsListen, "metrics-listen", "", "Metrics listen address")
flag.StringVar(&replicationPeers, "replicate", "", "Replication peers, id@address, comma separated")
flag.StringVar(&replicationListen, "replication-listen", ":19200", "Replication listen address")
showVersion := flag.Bool("version", false, "Show version")
flag.Parse()
log.Println(LongVersion)
log.Println(build.LongVersionFor("stdiscosrv"))
if *showVersion {
return
}
cert, err := tls.LoadX509KeyPair(certFile, keyFile)
if err != nil {
if os.IsNotExist(err) {
log.Println("Failed to load keypair. Generating one, this might take a while...")
cert, err = tlsutil.NewCertificate(certFile, keyFile, "stdiscosrv")
cert, err = tlsutil.NewCertificate(certFile, keyFile, "stdiscosrv", 20*365)
if err != nil {
log.Fatalln("Failed to generate X509 key pair:", err)
}
} else if err != nil {
log.Fatalln("Failed to load keypair:", err)
}
devID := protocol.NewDeviceID(cert.Certificate[0])
log.Println("Server device ID is", devID)

@@ -0,0 +1,4 @@
#!/bin/bash
addgroup --system syncthing
adduser --system --home /var/lib/syncthing-discosrv --ingroup syncthing syncthing-discosrv

@@ -7,6 +7,7 @@
package main
import (
"context"
"crypto/tls"
"errors"
"flag"
@@ -17,6 +18,7 @@ import (
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/discover"
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/protocol"
)
@@ -82,7 +84,7 @@ func checkServers(deviceID protocol.DeviceID, servers ...string) {
}
func checkServer(deviceID protocol.DeviceID, server string) checkResult {
disco, err := discover.NewGlobal(server, tls.Certificate{}, nil)
disco, err := discover.NewGlobal(server, tls.Certificate{}, nil, events.NoopLogger)
if err != nil {
return checkResult{error: err}
}
@@ -94,7 +96,7 @@ func checkServer(deviceID protocol.DeviceID, server string) checkResult {
})
go func() {
addresses, err := disco.Lookup(deviceID)
addresses, err := disco.Lookup(context.Background(), deviceID)
res <- checkResult{addresses: addresses, error: err}
}()

@@ -82,7 +82,7 @@ func generateOneFile(fd io.ReadSeeker, p1 string, s int64) error {
return err
}
_ = os.Chmod(p1, os.FileMode(rand.Intn(0777)|0400))
os.Chmod(p1, os.FileMode(rand.Intn(0777)|0400))
t := time.Now().Add(-time.Duration(rand.Intn(30*86400)) * time.Second)
return os.Chtimes(p1, t, t)

cmd/stindex/accounting.go (new file)
@@ -0,0 +1,58 @@
// Copyright (C) 2020 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"fmt"
"log"
"os"
"text/tabwriter"
"github.com/syncthing/syncthing/lib/db/backend"
)
// account prints key and data size statistics per class
func account(ldb backend.Backend) {
it, err := ldb.NewPrefixIterator(nil)
if err != nil {
log.Fatal(err)
}
var ksizes [256]int
var dsizes [256]int
var counts [256]int
var max [256]int
for it.Next() {
key := it.Key()
t := key[0]
ds := len(it.Value())
ks := len(key)
s := ks + ds
counts[t]++
ksizes[t] += ks
dsizes[t] += ds
if s > max[t] {
max[t] = s
}
}
tw := tabwriter.NewWriter(os.Stdout, 1, 1, 1, ' ', tabwriter.AlignRight)
toti, totds, totks := 0, 0, 0
for t := range ksizes {
if ksizes[t] > 0 {
// yes metric kilobytes 🤘
fmt.Fprintf(tw, "0x%02x:\t%d items,\t%d KB keys +\t%d KB data,\t%d B +\t%d B avg,\t%d B max\t\n", t, counts[t], ksizes[t]/1000, dsizes[t]/1000, ksizes[t]/counts[t], dsizes[t]/counts[t], max[t])
toti += counts[t]
totds += dsizes[t]
totks += ksizes[t]
}
}
fmt.Fprintf(tw, "Total\t%d items,\t%d KB keys +\t%d KB data.\t\n", toti, totks/1000, totds/1000)
tw.Flush()
}
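As an editorial illustration of the output format: a key class with 1000 items, 50,000 bytes of keys and 900,000 bytes of data prints roughly as "0x09: 1000 items, 50 KB keys + 900 KB data, 50 B + 900 B avg, ... B max", with the class byte in hex and the tabwriter aligning the columns.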

@@ -13,11 +13,15 @@ import (
"time"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/db/backend"
"github.com/syncthing/syncthing/lib/protocol"
)
func dump(ldb *db.Lowlevel) {
it := ldb.NewIterator(nil, nil)
func dump(ldb backend.Backend) {
it, err := ldb.NewPrefixIterator(nil)
if err != nil {
log.Fatal(err)
}
for it.Next() {
key := it.Key()
switch key[0] {
@@ -48,10 +52,10 @@ func dump(ldb *db.Lowlevel) {
fmt.Printf("[block] F:%d H:%x N:%q I:%d\n", folder, hash, name, binary.BigEndian.Uint32(it.Value()))
case db.KeyTypeDeviceStatistic:
fmt.Printf("[dstat] K:%x V:%x\n", it.Key(), it.Value())
fmt.Printf("[dstat] K:%x V:%x\n", key, it.Value())
case db.KeyTypeFolderStatistic:
fmt.Printf("[fstat] K:%x V:%x\n", it.Key(), it.Value())
fmt.Printf("[fstat] K:%x V:%x\n", key, it.Value())
case db.KeyTypeVirtualMtime:
folder := binary.BigEndian.Uint32(key[1:])
@@ -63,21 +67,59 @@ func dump(ldb *db.Lowlevel) {
fmt.Printf("[mtime] F:%d N:%q R:%v V:%v\n", folder, name, real, virt)
case db.KeyTypeFolderIdx:
key := binary.BigEndian.Uint32(it.Key()[1:])
key := binary.BigEndian.Uint32(key[1:])
fmt.Printf("[folderidx] K:%d V:%q\n", key, it.Value())
case db.KeyTypeDeviceIdx:
key := binary.BigEndian.Uint32(it.Key()[1:])
key := binary.BigEndian.Uint32(key[1:])
val := it.Value()
if len(val) == 0 {
fmt.Printf("[deviceidx] K:%d V:<nil>\n", key)
} else {
dev := protocol.DeviceIDFromBytes(val)
fmt.Printf("[deviceidx] K:%d V:%s\n", key, dev)
device := "<nil>"
if len(val) > 0 {
dev, err := protocol.DeviceIDFromBytes(val)
if err != nil {
device = fmt.Sprintf("<invalid %d bytes>", len(val))
} else {
device = dev.String()
}
}
fmt.Printf("[deviceidx] K:%d V:%s\n", key, device)
case db.KeyTypeIndexID:
device := binary.BigEndian.Uint32(key[1:])
folder := binary.BigEndian.Uint32(key[5:])
fmt.Printf("[indexid] D:%d F:%d I:%x\n", device, folder, it.Value())
case db.KeyTypeFolderMeta:
folder := binary.BigEndian.Uint32(key[1:])
fmt.Printf("[foldermeta] F:%d V:%x\n", folder, it.Value())
case db.KeyTypeMiscData:
fmt.Printf("[miscdata] K:%q V:%q\n", key[1:], it.Value())
case db.KeyTypeSequence:
folder := binary.BigEndian.Uint32(key[1:])
seq := binary.BigEndian.Uint64(key[5:])
fmt.Printf("[sequence] F:%d S:%d V:%q\n", folder, seq, it.Value())
case db.KeyTypeNeed:
folder := binary.BigEndian.Uint32(key[1:])
file := string(key[5:])
fmt.Printf("[need] F:%d V:%q\n", folder, file)
case db.KeyTypeBlockList:
fmt.Printf("[blocklist] H:%x\n", key[1:])
case db.KeyTypeBlockListMap:
folder := binary.BigEndian.Uint32(key[1:])
hash := key[5:37]
fileName := string(key[37:])
fmt.Printf("[blocklistmap] F:%d H:%x N:%s\n", folder, hash, fileName)
case db.KeyTypeVersion:
fmt.Printf("[version] H:%x\n", key[1:])
default:
fmt.Printf("[???]\n %x\n %x\n", it.Key(), it.Value())
fmt.Printf("[??? %d]\n %x\n %x\n", key[0], key, it.Value())
}
}
}

@@ -10,8 +10,10 @@ import (
"container/heap"
"encoding/binary"
"fmt"
"log"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/db/backend"
)
type SizedElement struct {
@@ -37,11 +39,14 @@ func (h *ElementHeap) Pop() interface{} {
return x
}
func dumpsize(ldb *db.Lowlevel) {
func dumpsize(ldb backend.Backend) {
h := &ElementHeap{}
heap.Init(h)
it := ldb.NewIterator(nil, nil)
it, err := ldb.NewPrefixIterator(nil)
if err != nil {
log.Fatal(err)
}
var ele SizedElement
for it.Next() {
key := it.Key()

@@ -10,8 +10,11 @@ import (
"bytes"
"encoding/binary"
"fmt"
"log"
"sort"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/db/backend"
"github.com/syncthing/syncthing/lib/protocol"
)
@@ -31,7 +34,7 @@ type sequenceKey struct {
sequence uint64
}
func idxck(ldb *db.Lowlevel) (success bool) {
func idxck(ldb backend.Backend) (success bool) {
folders := make(map[uint32]string)
devices := make(map[uint32]string)
deviceToIDs := make(map[string]uint32)
@@ -39,10 +42,17 @@ func idxck(ldb *db.Lowlevel) (success bool) {
globals := make(map[globalKey]db.VersionList)
sequences := make(map[sequenceKey]string)
needs := make(map[globalKey]struct{})
blocklists := make(map[string]struct{})
versions := make(map[string]protocol.Vector)
usedBlocklists := make(map[string]struct{})
usedVersions := make(map[string]struct{})
var localDeviceKey uint32
success = true
it := ldb.NewIterator(nil, nil)
it, err := ldb.NewPrefixIterator(nil)
if err != nil {
log.Fatal(err)
}
for it.Next() {
key := it.Key()
switch key[0] {
@@ -94,6 +104,20 @@ func idxck(ldb *db.Lowlevel) (success bool) {
folder := binary.BigEndian.Uint32(key[1:])
name := nulString(key[1+4:])
needs[globalKey{folder, name}] = struct{}{}
case db.KeyTypeBlockList:
hash := string(key[1:])
blocklists[hash] = struct{}{}
case db.KeyTypeVersion:
hash := string(key[1:])
var v protocol.Vector
if err := v.Unmarshal(it.Value()); err != nil {
fmt.Println("Unable to unmarshal Vector:", err)
success = false
continue
}
versions[hash] = v
}
}
@@ -103,6 +127,7 @@ func idxck(ldb *db.Lowlevel) (success bool) {
return
}
var missingSeq []sequenceKey
for fk, fi := range fileInfos {
if fk.name != fi.Name {
fmt.Printf("Mismatching FileInfo name, %q (key) != %q (actual)\n", fk.name, fi.Name)
@@ -121,9 +146,11 @@ func idxck(ldb *db.Lowlevel) (success bool) {
}
if fk.device == localDeviceKey {
name, ok := sequences[sequenceKey{fk.folder, uint64(fi.Sequence)}]
sk := sequenceKey{fk.folder, uint64(fi.Sequence)}
name, ok := sequences[sk]
if !ok {
fmt.Printf("Sequence entry missing for FileInfo %q, folder %q, seq %d\n", fi.Name, folder, fi.Sequence)
missingSeq = append(missingSeq, sk)
success = false
continue
}
@@ -132,6 +159,58 @@ func idxck(ldb *db.Lowlevel) (success bool) {
success = false
}
}
if len(fi.Blocks) == 0 && len(fi.BlocksHash) != 0 {
key := string(fi.BlocksHash)
if _, ok := blocklists[key]; !ok {
fmt.Printf("Missing block list for file %q, block list hash %x\n", fi.Name, fi.BlocksHash)
success = false
} else {
usedBlocklists[key] = struct{}{}
}
}
if fi.VersionHash != nil {
key := string(fi.VersionHash)
if _, ok := versions[key]; !ok {
fmt.Printf("Missing version vector for file %q, version hash %x\n", fi.Name, fi.VersionHash)
success = false
} else {
usedVersions[key] = struct{}{}
}
}
_, ok := globals[globalKey{fk.folder, fk.name}]
if !ok {
fmt.Printf("Missing global for file %q\n", fi.Name)
success = false
continue
}
}
// Aggregate the ranges of missing sequence entries, print them
sort.Slice(missingSeq, func(a, b int) bool {
if missingSeq[a].folder != missingSeq[b].folder {
return missingSeq[a].folder < missingSeq[b].folder
}
return missingSeq[a].sequence < missingSeq[b].sequence
})
var folder uint32
var startSeq, prevSeq uint64
for _, sk := range missingSeq {
if folder != sk.folder || sk.sequence != prevSeq+1 {
if folder != 0 {
fmt.Printf("Folder %d missing %d sequence entries: #%d - #%d\n", folder, prevSeq-startSeq+1, startSeq, prevSeq)
}
startSeq = sk.sequence
folder = sk.folder
}
prevSeq = sk.sequence
}
if folder != 0 {
fmt.Printf("Folder %d missing %d sequence entries: #%d - #%d\n", folder, prevSeq-startSeq+1, startSeq, prevSeq)
}
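// Illustration (editorial, not part of the diff): if the missing sequence
// keys collected above were folder 7, sequences 3, 4, 5 and 9, the loop
// prints
//   Folder 7 missing 3 sequence entries: #3 - #5
// when the gap at 9 starts a new range, and the trailing check prints
//   Folder 7 missing 1 sequence entries: #9 - #9
// for the final open range.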
for gk, vl := range globals {
@@ -140,10 +219,10 @@ func idxck(ldb *db.Lowlevel) (success bool) {
fmt.Printf("Unknown folder ID %d for VersionList %q\n", gk.folder, gk.name)
success = false
}
for i, fv := range vl.Versions {
dev, ok := deviceToIDs[string(fv.Device)]
checkGlobal := func(i int, device []byte, version protocol.Vector, invalid, deleted bool) {
dev, ok := deviceToIDs[string(device)]
if !ok {
fmt.Printf("VersionList %q, folder %q refers to unknown device %q\n", gk.name, folder, fv.Device)
fmt.Printf("VersionList %q, folder %q refers to unknown device %q\n", gk.name, folder, device)
success = false
}
fi, ok := fileInfos[fileInfoKey{gk.folder, dev, gk.name}]
@@ -151,14 +230,31 @@ func idxck(ldb *db.Lowlevel) (success bool) {
fmt.Printf("VersionList %q, folder %q, entry %d refers to unknown FileInfo\n", gk.name, folder, i)
success = false
}
if !fi.Version.Equal(fv.Version) {
fmt.Printf("VersionList %q, folder %q, entry %d, FileInfo version mismatch, %v (VersionList) != %v (FileInfo)\n", gk.name, folder, i, fv.Version, fi.Version)
fiv := fi.Version
if fi.VersionHash != nil {
fiv = versions[string(fi.VersionHash)]
}
if !fiv.Equal(version) {
fmt.Printf("VersionList %q, folder %q, entry %d, FileInfo version mismatch, %v (VersionList) != %v (FileInfo)\n", gk.name, folder, i, version, fi.Version)
success = false
}
if fi.IsInvalid() != fv.Invalid {
fmt.Printf("VersionList %q, folder %q, entry %d, FileInfo invalid mismatch, %v (VersionList) != %v (FileInfo)\n", gk.name, folder, i, fv.Invalid, fi.IsInvalid())
if fi.IsInvalid() != invalid {
fmt.Printf("VersionList %q, folder %q, entry %d, FileInfo invalid mismatch, %v (VersionList) != %v (FileInfo)\n", gk.name, folder, i, invalid, fi.IsInvalid())
success = false
}
if fi.IsDeleted() != deleted {
fmt.Printf("VersionList %q, folder %q, entry %d, FileInfo deleted mismatch, %v (VersionList) != %v (FileInfo)\n", gk.name, folder, i, deleted, fi.IsDeleted())
success = false
}
}
for i, fv := range vl.RawVersions {
for _, device := range fv.Devices {
checkGlobal(i, device, fv.Version, false, fv.Deleted)
}
for _, device := range fv.InvalidDevices {
checkGlobal(i, device, fv.Version, true, fv.Deleted)
}
}
// If we need this file we should have a need entry for it. False
@@ -167,8 +263,10 @@ func idxck(ldb *db.Lowlevel) (success bool) {
if needsLocally(vl) {
_, ok := needs[gk]
if !ok {
dev, _ := deviceToIDs[string(vl.Versions[0].Device)]
fi, _ := fileInfos[fileInfoKey{gk.folder, dev, gk.name}]
fv, _ := vl.GetGlobal()
devB, _ := fv.FirstDevice()
dev := deviceToIDs[string(devB)]
fi := fileInfos[fileInfoKey{gk.folder, dev, gk.name}]
if !fi.IsDeleted() && !fi.IsIgnored() {
fmt.Printf("Missing need entry for needed file %q, folder %q\n", gk.name, folder)
}
@@ -224,19 +322,21 @@ func idxck(ldb *db.Lowlevel) (success bool) {
}
}
if d := len(blocklists) - len(usedBlocklists); d > 0 {
fmt.Printf("%d block list entries out of %d needs GC\n", d, len(blocklists))
}
if d := len(versions) - len(usedVersions); d > 0 {
fmt.Printf("%d version entries out of %d needs GC\n", d, len(versions))
}
return
}
func needsLocally(vl db.VersionList) bool {
var lv *protocol.Vector
for _, fv := range vl.Versions {
if bytes.Equal(fv.Device, protocol.LocalDeviceID[:]) {
lv = &fv.Version
break
}
}
if lv == nil {
fv, ok := vl.Get(protocol.LocalDeviceID[:])
if !ok {
return true // provisionally, it looks like we need the file
}
return !lv.GreaterEqual(vl.Versions[0].Version)
gfv, _ := vl.GetGlobal() // Can't not have a global if we got something above
return !fv.Version.GreaterEqual(gfv.Version)
}

@@ -13,7 +13,7 @@ import (
"os"
"path/filepath"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/db/backend"
)
func main() {
@@ -30,20 +30,34 @@ func main() {
path = filepath.Join(defaultConfigDir(), "index-v0.14.0.db")
}
ldb, err := db.OpenRO(path)
var ldb backend.Backend
var err error
if looksLikeBadger(path) {
ldb, err = backend.OpenBadger(path)
} else {
ldb, err = backend.OpenLevelDBRO(path)
}
if err != nil {
log.Fatal(err)
}
if mode == "dump" {
switch mode {
case "dump":
dump(ldb)
} else if mode == "dumpsize" {
case "dumpsize":
dumpsize(ldb)
} else if mode == "idxck" {
case "idxck":
if !idxck(ldb) {
os.Exit(1)
}
} else {
case "account":
account(ldb)
default:
fmt.Println("Unknown mode")
}
}
func looksLikeBadger(path string) bool {
_, err := os.Stat(filepath.Join(path, "KEYREGISTRY"))
return err == nil
}

@@ -9,8 +9,15 @@
<meta name="author" content=""/>
<title>Relay stats</title>
<link href="//maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css" rel="stylesheet"/>
<link rel="stylesheet" href="//use.fontawesome.com/releases/v5.0.13/css/all.css"/>
<link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.0.13/css/all.css"/>
<link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css" rel="stylesheet"/>
<link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.0.13/css/all.css"/>
<link rel="stylesheet" href="https://unpkg.com/leaflet@1.6.0/dist/leaflet.css"
integrity="sha512-xwE/Az9zrjBIphAcBb3F6JVqxf46+CDLwfLMHloNu6KEQCAWi6HcDUbeOfBIptF7tcCzusKFjFw2yuvEpDL9wQ=="
crossorigin=""/>
<script src="https://unpkg.com/leaflet@1.6.0/dist/leaflet.js"
integrity="sha512-gZwIG9x3wUXg2hdXF6+rVkLF/0Vi9U8D2Ntg4Ga5I5BZpVkVxlJWbSQtXPSiUTtC0TjtGOmxa1AJPuV0CPthew=="
crossorigin=""></script>
<style>
#map {
@@ -38,13 +45,15 @@
<div class="container">
<h1>Relay Pool Data</h1>
<div ng-if="relays === undefined" class="text-center">
<img src="//cdnjs.cloudflare.com/ajax/libs/galleriffic/2.0.1/css/loader.gif" alt=""/>
<img src="https://cdnjs.cloudflare.com/ajax/libs/galleriffic/2.0.1/css/loader.gif" alt=""/>
<p>Please wait while we gather data</p>
</div>
<div>
<div ng-show="relays !== undefined" class="ng-hide">
<p>
Currently {{ relays.length }} relays online ({{ totals.goMaxProcs }} cores in total).
The relays listed on this page are not managed or vetted by the Syncthing project.
Each relay is the responsibility of the relay operator.
Currently {{ relays.length }} relays online.
</p>
</div>
<div id="map"></div> <!-- Can't hide the map, otherwise it freaks out -->
@@ -184,18 +193,17 @@
</div>
<script type="text/javascript" src="//code.jquery.com/jquery-2.1.4.min.js"></script>
<script type="text/javascript" src="//cdnjs.cloudflare.com/ajax/libs/angular.js/1.5.8/angular.min.js"></script>
<script type="text/javascript" src="//maxcdn.bootstrapcdn.com/bootstrap/3.3.5/js/bootstrap.min.js"></script>
<script type="text/javascript" src="//maps.googleapis.com/maps/api/js?key=AIzaSyDk5WJ8s7ueLKb99X5DbQ-vkWtPDAKqYs0"></script>
<script type="text/javascript" src="https://code.jquery.com/jquery-2.1.4.min.js"></script>
<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.5.8/angular.min.js"></script>
<script type="text/javascript" src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/js/bootstrap.min.js"></script>
</body>
<script>
angular.module('syncthing', [
])
.config(function($httpProvider) {
.config(['$httpProvider', function($httpProvider) {
$httpProvider.defaults.timeout = 5000;
})
}])
.filter('bytes', function() {
return function(bytes, precision) {
if (isNaN(parseFloat(bytes)) || !isFinite(bytes)) return '-';
@@ -228,11 +236,12 @@
numProxies: 0,
uptimeSeconds: 0,
};
$scope.map = new google.maps.Map(document.getElementById('map'), {
zoom: 1,
mapTypeId: google.maps.MapTypeId.ROADMAP
});
$scope.mapBounds = new google.maps.LatLngBounds();
$scope.map = L.map('map').setView([40.90296, 1.90925], 2);
L.tileLayer('http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png',
{
attribution: 'Leaflet',
maxZoom: 17
}).addTo($scope.map);
$scope.tooltipTemplate = $('#infoTemplate').html();
$scope.usedLocations = {};
$scope.sortType = 'stats.numActiveSessions';
@@ -279,8 +288,9 @@
}
});
$scope.map.fitBounds($scope.mapBounds);
if ($scope.relays.length == 1) {
//Center to only relay with zoom
$scope.map.panTo(new L.LatLng(relays[0].location.latitude, relays[0].location.longitude));
$scope.map.setZoom(13);
}
});
@@ -300,44 +310,50 @@
var locParts = loc.split(',');
relay.marker = new google.maps.Marker({
map: $scope.map,
position: new google.maps.LatLng(locParts[0], locParts[1]),
relay.marker = new L.Marker([relay.location.latitude, relay.location.longitude],{
title: relay.url,
});
var scope = $rootScope.$new(true);
scope.relay = relay;
relay.marker.info = new google.maps.InfoWindow({
content: $compile($scope.tooltipTemplate)(scope)[0],
var icon = new L.Icon({
iconSize: [18, 28], // size of the icon
iconAnchor: [9, 28], // point of the icon which will correspond to marker's location
shadowAnchor: [0, 0], // the same for the shadow
popupAnchor: [0, -27], // popup anchor
shadowSize: [0,0],
iconUrl: 'https://cdn.rawgit.com/pointhi/leaflet-color-markers/master/img/marker-icon-red.png',
shadowUrl: 'https://cdnjs.cloudflare.com/ajax/libs/leaflet/0.7.7/images/marker-shadow.png',
});
relay.marker = new L.marker(new L.latLng(locParts[0], locParts[1]),{icon})
.bindPopup($compile($scope.tooltipTemplate)(scope)[0],{})
.on('mouseover', function (e) {
this.openPopup();
}).on('mouseout', function (e) {
this.closePopup();
}).addTo($scope.map);
relay.showMarker = function() {
relay.marker.info.open($scope.map, relay.marker);
relay.marker.openPopup();
}
relay.hideMarker = function() {
relay.marker.info.close();
relay.marker.closePopup();
}
}
relay.marker.addListener('mouseover', relay.showMarker);
relay.marker.addListener('mouseout', relay.hideMarker);
$scope.mapBounds.extend(relay.marker.position);
}
function addCircleToMap(relay) {
relay.marker.circle = new google.maps.Circle({
strokeColor: '#FF0000',
strokeOpacity: 0.8,
strokeWeight: 2,
fillColor: '#FF0000',
fillOpacity: 0.35,
map: $scope.map,
center: relay.marker.position,
radius: ((relay.stats.bytesProxied * 100) / $scope.totals.bytesProxied) * 10000
});
console.log(relay.location.latitude)
L.circle([relay.location.latitude, relay.location.longitude],
{
radius: ((relay.stats.bytesProxied * 100) / $scope.totals.bytesProxied) * 10000,
color: "FF0000",
fillColor: "#FF0000",
fillOpacity: 0.35,
}).addTo($scope.map);
}
function constructURI(url) {

@@ -1,20 +1,17 @@
// Copyright (C) 2015 Audrius Butkevicius and Contributors (see the CONTRIBUTORS file).
//go:generate go run ../../script/genassets.go gui >auto/gui.go
package main
import (
"bytes"
"compress/gzip"
"context"
"crypto/tls"
"encoding/json"
"flag"
"fmt"
"io"
"io/ioutil"
"log"
"math/rand"
"mime"
"net"
"net/http"
"net/url"
@@ -29,6 +26,8 @@ import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
"github.com/syncthing/syncthing/cmd/strelaypoolsrv/auto"
"github.com/syncthing/syncthing/lib/assets"
"github.com/syncthing/syncthing/lib/rand"
"github.com/syncthing/syncthing/lib/relay/client"
"github.com/syncthing/syncthing/lib/sync"
"github.com/syncthing/syncthing/lib/tlsutil"
@@ -92,33 +91,35 @@ type result struct {
}
var (
testCert tls.Certificate
knownRelaysFile = filepath.Join(os.TempDir(), "strelaypoolsrv_known_relays")
listen = ":80"
dir string
evictionTime = time.Hour
debug bool
getLRUSize = 10 << 10
getLimitBurst = 10
getLimitAvg = 2
postLRUSize = 1 << 10
postLimitBurst = 2
postLimitAvg = 2
getLimit time.Duration
postLimit time.Duration
permRelaysFile string
ipHeader string
geoipPath string
proto string
statsRefresh = time.Minute / 2
testCert tls.Certificate
knownRelaysFile = filepath.Join(os.TempDir(), "strelaypoolsrv_known_relays")
listen = ":80"
dir string
evictionTime = time.Hour
debug bool
getLRUSize = 10 << 10
getLimitBurst = 10
getLimitAvg = 2
postLRUSize = 1 << 10
postLimitBurst = 2
postLimitAvg = 2
getLimit time.Duration
postLimit time.Duration
permRelaysFile string
ipHeader string
geoipPath string
proto string
statsRefresh = time.Minute / 2
requestQueueLen = 10
requestProcessors = 1
getMut = sync.NewRWMutex()
getMut = sync.NewMutex()
getLRUCache *lru.Cache
postMut = sync.NewRWMutex()
postMut = sync.NewMutex()
postLRUCache *lru.Cache
requests = make(chan request, 10)
requests chan request
mut = sync.NewRWMutex()
knownRelays = make([]*relay, 0)
@@ -131,6 +132,9 @@ const (
)
func main() {
log.SetOutput(os.Stdout)
log.SetFlags(log.Lshortfile)
flag.StringVar(&listen, "listen", listen, "Listen address")
flag.StringVar(&dir, "keys", dir, "Directory where http-cert.pem and http-key.pem is stored for TLS listening")
flag.BoolVar(&debug, "debug", debug, "Enable debug output")
@@ -146,9 +150,13 @@ func main() {
flag.StringVar(&geoipPath, "geoip", "GeoLite2-City.mmdb", "Path to GeoLite2-City database")
flag.StringVar(&proto, "protocol", "tcp", "Protocol used for listening. 'tcp' for IPv4 and IPv6, 'tcp4' for IPv4, 'tcp6' for IPv6")
flag.DurationVar(&statsRefresh, "stats-refresh", statsRefresh, "Interval at which to refresh relay stats")
flag.IntVar(&requestQueueLen, "request-queue", requestQueueLen, "Queue length for incoming test requests")
flag.IntVar(&requestProcessors, "request-processors", requestProcessors, "Number of request processor routines")
flag.Parse()
requests = make(chan request, requestQueueLen)
getLimit = 10 * time.Second / time.Duration(getLimitAvg)
postLimit = time.Minute / time.Duration(postLimitAvg)
@@ -164,7 +172,9 @@ func main() {
testCert = createTestCertificate()
go requestProcessor()
for i := 0; i < requestProcessors; i++ {
go requestProcessor()
}
// Load relays from cache in the background.
// Load them in a serial fashion to make sure any genuine requests
@@ -253,88 +263,27 @@ func handleMetrics(w http.ResponseWriter, r *http.Request) {
func handleAssets(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Cache-Control", "no-cache, must-revalidate")
assets := auto.Assets()
path := r.URL.Path[1:]
if path == "" {
path = "index.html"
}
bs, ok := assets[path]
as, ok := auto.Assets()[path]
if !ok {
w.WriteHeader(http.StatusNotFound)
return
}
etag := fmt.Sprintf("%d", auto.Generated)
modified := time.Unix(auto.Generated, 0).UTC()
w.Header().Set("Last-Modified", modified.Format(http.TimeFormat))
w.Header().Set("Etag", etag)
mtype := mimeTypeForFile(path)
if len(mtype) != 0 {
w.Header().Set("Content-Type", mtype)
}
if t, err := time.Parse(http.TimeFormat, r.Header.Get("If-Modified-Since")); err == nil && modified.Add(time.Second).After(t) {
w.WriteHeader(http.StatusNotModified)
return
}
if match := r.Header.Get("If-None-Match"); match != "" {
if strings.Contains(match, etag) {
w.WriteHeader(http.StatusNotModified)
return
}
}
if strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
w.Header().Set("Content-Encoding", "gzip")
} else {
// ungzip if browser not send gzip accepted header
var gr *gzip.Reader
gr, _ = gzip.NewReader(bytes.NewReader(bs))
bs, _ = ioutil.ReadAll(gr)
gr.Close()
}
w.Header().Set("Content-Length", fmt.Sprintf("%d", len(bs)))
w.Write(bs)
}
func mimeTypeForFile(file string) string {
// We use a built in table of the common types since the system
// TypeByExtension might be unreliable. But if we don't know, we delegate
// to the system.
ext := filepath.Ext(file)
switch ext {
case ".htm", ".html":
return "text/html"
case ".css":
return "text/css"
case ".js":
return "application/javascript"
case ".json":
return "application/json"
case ".png":
return "image/png"
case ".ttf":
return "application/x-font-ttf"
case ".woff":
return "application/x-font-woff"
case ".svg":
return "image/svg+xml"
default:
return mime.TypeByExtension(ext)
}
assets.Serve(w, r, as)
}
func handleRequest(w http.ResponseWriter, r *http.Request) {
timer := prometheus.NewTimer(apiRequestsSeconds.WithLabelValues(r.Method))
lw := NewLoggingResponseWriter(w)
w = NewLoggingResponseWriter(w)
defer func() {
timer.ObserveDuration()
lw := w.(*loggingResponseWriter)
apiRequestsTotal.WithLabelValues(r.Method, strconv.Itoa(lw.statusCode)).Inc()
}()
@@ -363,19 +312,24 @@ func handleRequest(w http.ResponseWriter, r *http.Request) {
}
}
func handleGetRequest(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json; charset=utf-8")
func handleGetRequest(rw http.ResponseWriter, r *http.Request) {
rw.Header().Set("Content-Type", "application/json; charset=utf-8")
mut.RLock()
relays := append(permanentRelays, knownRelays...)
mut.RUnlock()
// Shuffle
for i := range relays {
j := rand.Intn(i + 1)
relays[i], relays[j] = relays[j], relays[i]
rand.Shuffle(relays)
w := io.Writer(rw)
if strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
rw.Header().Set("Content-Encoding", "gzip")
gw := gzip.NewWriter(rw)
defer gw.Close()
w = gw
}
json.NewEncoder(w).Encode(map[string][]*relay{
_ = json.NewEncoder(w).Encode(map[string][]*relay{
"relays": relays,
})
}
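Because the old and new lines are interleaved above, here is roughly what the new GET handler does once assembled (an editorial sketch: "relay" is the type defined elsewhere in this file, and the standard library's math/rand stands in for syncthing's lib/rand):

func serveRelays(rw http.ResponseWriter, r *http.Request, relays []*relay) {
	rw.Header().Set("Content-Type", "application/json; charset=utf-8")
	rand.Shuffle(len(relays), func(i, j int) { relays[i], relays[j] = relays[j], relays[i] })
	w := io.Writer(rw)
	if strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
		rw.Header().Set("Content-Encoding", "gzip")
		gw := gzip.NewWriter(rw)
		defer gw.Close()
		w = gw
	}
	_ = json.NewEncoder(w).Encode(map[string][]*relay{"relays": relays})
}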
@@ -389,7 +343,7 @@ func handlePostRequest(w http.ResponseWriter, r *http.Request) {
if debug {
log.Println("Failed to parse payload")
}
http.Error(w, err.Error(), 500)
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
@@ -398,7 +352,7 @@ func handlePostRequest(w http.ResponseWriter, r *http.Request) {
if debug {
log.Println("Failed to parse URI", newRelay.URL)
}
http.Error(w, err.Error(), 500)
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
@@ -407,7 +361,7 @@ func handlePostRequest(w http.ResponseWriter, r *http.Request) {
if debug {
log.Println("Failed to split URI", newRelay.URL)
}
http.Error(w, err.Error(), 500)
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
@@ -483,11 +437,11 @@ func handleRelayTest(request request) {
if debug {
log.Println("Request for", request.relay)
}
if !client.TestRelay(request.relay.uri, []tls.Certificate{testCert}, time.Second, 2*time.Second, 3) {
if err := client.TestRelay(context.TODO(), request.relay.uri, []tls.Certificate{testCert}, time.Second, 2*time.Second, 3); err != nil {
if debug {
log.Println("Test for relay", request.relay, "failed")
log.Println("Test for relay", request.relay, "failed:", err)
}
request.result <- result{fmt.Errorf("connection test failed"), 0}
request.result <- result{err, 0}
return
}
@@ -565,26 +519,21 @@ func evict(relay *relay) func() {
}
}
func limit(addr string, cache *lru.Cache, lock sync.RWMutex, intv time.Duration, burst int) bool {
func limit(addr string, cache *lru.Cache, lock sync.Mutex, intv time.Duration, burst int) bool {
if host, _, err := net.SplitHostPort(addr); err == nil {
addr = host
}
lock.RLock()
bkt, ok := cache.Get(addr)
lock.RUnlock()
if ok {
bkt := bkt.(*rate.Limiter)
if !bkt.Allow() {
// Rate limit
return true
}
} else {
lock.Lock()
cache.Add(addr, rate.NewLimiter(rate.Every(intv), burst))
lock.Unlock()
lock.Lock()
v, _ := cache.Get(addr)
bkt, ok := v.(*rate.Limiter)
if !ok {
bkt = rate.NewLimiter(rate.Every(intv), burst)
cache.Add(addr, bkt)
}
return false
lock.Unlock()
return !bkt.Allow()
}
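The rewritten limiter takes the lock once, fetches or creates the per-address token bucket, and answers from the bucket afterwards. A self-contained sketch of the same idea with a plain map in place of the LRU cache (editorial; assumes net, sync, time and golang.org/x/time/rate, which the file already uses):

type addrLimiter struct {
	mut     sync.Mutex
	buckets map[string]*rate.Limiter
	intv    time.Duration
	burst   int
}

func newAddrLimiter(intv time.Duration, burst int) *addrLimiter {
	return &addrLimiter{buckets: make(map[string]*rate.Limiter), intv: intv, burst: burst}
}

// limited reports whether a request from addr exceeds its rate budget.
func (a *addrLimiter) limited(addr string) bool {
	if host, _, err := net.SplitHostPort(addr); err == nil {
		addr = host // the whole host shares one bucket, regardless of port
	}
	a.mut.Lock()
	bkt, ok := a.buckets[addr]
	if !ok {
		bkt = rate.NewLimiter(rate.Every(a.intv), a.burst)
		a.buckets[addr] = bkt
	}
	a.mut.Unlock()
	return !bkt.Allow()
}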
func loadRelays(file string) []*relay {
@@ -636,7 +585,7 @@ func createTestCertificate() tls.Certificate {
}
certFile, keyFile := filepath.Join(tmpDir, "cert.pem"), filepath.Join(tmpDir, "key.pem")
cert, err := tlsutil.NewCertificate(certFile, keyFile, "relaypoolsrv")
cert, err := tlsutil.NewCertificate(certFile, keyFile, "relaypoolsrv", 20*365)
if err != nil {
log.Fatalln("Failed to create test X509 key pair:", err)
}

@@ -126,7 +126,7 @@ func refreshStats() {
go func(rel *relay) {
t0 := time.Now()
stats := fetchStats(rel)
duration := time.Now().Sub(t0).Seconds()
duration := time.Since(t0).Seconds()
result := "success"
if stats == nil {
result = "failed"

@@ -0,0 +1,9 @@
[strelaysrv]
title=Syncthing relay server
description=Proxies traffic of syncthing clients behind firewalls
ports=22067/tcp
[strelaysrv-metrics]
title=Syncthing relay metrics
description=Provides metrics about the syncthing relay server
ports=22070/tcp
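As with the discovery server profile, assuming installation under /etc/ufw/applications.d/, "ufw allow strelaysrv" opens the relay port and "ufw allow strelaysrv-metrics" the metrics endpoint.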

@@ -0,0 +1,5 @@
# Default settings for syncthing-relaysrv (strelaysrv).
NAT=true
## Add Options here:
RELAYSRV_OPTS=

@@ -1,17 +1,25 @@
[Unit]
Description=Syncthing relay server
Description=Syncthing Relay Server
After=network.target
Documentation=man:strelaysrv(1)
[Service]
User=strelaysrv
Group=strelaysrv
ExecStart=/usr/bin/strelaysrv
WorkingDirectory=/var/lib/strelaysrv
WorkingDirectory=/var/lib/syncthing-relaysrv
EnvironmentFile=/etc/default/syncthing-relaysrv
ExecStart=/usr/bin/strelaysrv -nat=${NAT} $RELAYSRV_OPTS
PrivateTmp=true
ProtectSystem=full
ProtectHome=true
# Hardening
User=syncthing-relaysrv
Group=syncthing
ProtectSystem=strict
ReadWritePaths=/var/lib/syncthing-relaysrv
NoNewPrivileges=true
PrivateTmp=true
PrivateDevices=true
ProtectHome=true
SystemCallArchitectures=native
MemoryDenyWriteExecute=true
[Install]
WantedBy=multi-user.target
Alias=syncthing-relaysrv.service

@@ -149,7 +149,15 @@ func protocolConnectionHandler(tcpConn net.Conn, config *tls.Config) {
protocol.WriteMessage(conn, protocol.ResponseSuccess)
case protocol.ConnectRequest:
requestedPeer := syncthingprotocol.DeviceIDFromBytes(msg.ID)
requestedPeer, err := syncthingprotocol.DeviceIDFromBytes(msg.ID)
if err != nil {
if debug {
log.Println(id, "is looking for an invalid peer ID")
}
protocol.WriteMessage(conn, protocol.ResponseNotFound)
conn.Close()
continue
}
outboxesMut.RLock()
peerOutbox, ok := outboxes[requestedPeer]
outboxesMut.RUnlock()

@@ -14,12 +14,13 @@ import (
"os/signal"
"path/filepath"
"runtime"
"strconv"
"strings"
"sync/atomic"
"syscall"
"time"
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/relay/protocol"
"github.com/syncthing/syncthing/lib/tlsutil"
@@ -33,24 +34,6 @@ import (
syncthingprotocol "github.com/syncthing/syncthing/lib/protocol"
)
var (
Version string
BuildStamp string
BuildUser string
BuildHost string
BuildDate time.Time
LongVersion string
)
func init() {
stamp, _ := strconv.Atoi(BuildStamp)
BuildDate = time.Unix(int64(stamp), 0)
date := BuildDate.UTC().Format("2006-01-02 15:04:05 MST")
LongVersion = fmt.Sprintf(`strelaysrv %s (%s %s-%s) %s@%s %s`, Version, runtime.Version(), runtime.GOOS, runtime.GOARCH, BuildUser, BuildHost, date)
}
var (
listen string
debug bool
@@ -116,8 +99,15 @@ func main() {
flag.IntVar(&natTimeout, "nat-timeout", 10, "NAT discovery timeout in seconds")
flag.BoolVar(&pprofEnabled, "pprof", false, "Enable the built in profiling on the status server")
flag.IntVar(&networkBufferSize, "network-buffer", 2048, "Network buffer size (two of these per proxied connection)")
showVersion := flag.Bool("version", false, "Show version")
flag.Parse()
longVer := build.LongVersionFor("strelaysrv")
if *showVersion {
fmt.Println(longVer)
return
}
if extAddress == "" {
extAddress = listen
}
@@ -146,7 +136,7 @@ func main() {
}
}
log.Println(LongVersion)
log.Println(longVer)
maxDescriptors, err := osutil.MaximizeOpenFileLimit()
if maxDescriptors > 0 {
@@ -166,7 +156,7 @@ func main() {
cert, err := tls.LoadX509KeyPair(certFile, keyFile)
if err != nil {
log.Println("Failed to load keypair. Generating one, this might take a while...")
cert, err = tlsutil.NewCertificate(certFile, keyFile, "strelaysrv")
cert, err = tlsutil.NewCertificate(certFile, keyFile, "strelaysrv", 20*365)
if err != nil {
log.Fatalln("Failed to generate X509 key pair:", err)
}
@@ -194,7 +184,7 @@ func main() {
log.Println("ID:", id)
}
wrapper := config.Wrap("config", config.New(id))
wrapper := config.Wrap("config", config.New(id), events.NoopLogger)
wrapper.SetOptions(config.OptionsConfiguration{
NATLeaseM: natLease,
NATRenewalM: natRenewal,

@@ -7,10 +7,15 @@ import (
"encoding/json"
"io/ioutil"
"log"
"net/http"
"net/url"
"time"
)
const (
httpStatusEnhanceYourCalm = 429
)
func poolHandler(pool string, uri *url.URL, mapping mapping) {
if debug {
log.Println("Joining", pool)
@@ -28,38 +33,62 @@ func poolHandler(pool string, uri *url.URL, mapping mapping) {
resp, err := httpClient.Post(pool, "application/json", &b)
if err != nil {
log.Println("Error joining pool", pool, err)
} else if resp.StatusCode == 500 {
bs, err := ioutil.ReadAll(resp.Body)
if err != nil {
log.Println("Failed to join", pool, "due to an internal server error. Could not read response:", err)
} else {
log.Println("Failed to join", pool, "due to an internal server error:", string(bs))
}
resp.Body.Close()
} else if resp.StatusCode == 429 {
log.Println(pool, "under load, will retry in a minute")
log.Printf("Error joining pool %v: HTTP request: %v", pool, err)
time.Sleep(time.Minute)
continue
} else if resp.StatusCode == 401 {
log.Println(pool, "failed to join due to IP address not matching external address. Aborting")
return
} else if resp.StatusCode == 200 {
}
bs, err := ioutil.ReadAll(resp.Body)
resp.Body.Close()
if err != nil {
log.Printf("Error joining pool %v: reading response: %v", pool, err)
time.Sleep(time.Minute)
continue
}
switch resp.StatusCode {
case http.StatusOK:
var x struct {
EvictionIn time.Duration `json:"evictionIn"`
}
err := json.NewDecoder(resp.Body).Decode(&x)
if err == nil {
if err := json.NewDecoder(resp.Body).Decode(&x); err == nil {
rejoin := x.EvictionIn - (x.EvictionIn / 5)
log.Println("Joined", pool, "rejoining in", rejoin)
log.Printf("Joined pool %s, rejoining in %v", pool, rejoin)
time.Sleep(rejoin)
continue
} else {
log.Println("Failed to deserialize response", err)
log.Printf("Joined pool %s, failed to deserialize response: %v", pool, err)
}
} else {
log.Println(pool, "unknown response type from server", resp.StatusCode)
case http.StatusInternalServerError:
log.Printf("Failed to join %v: server error", pool)
log.Printf("Response data: %s", bs)
time.Sleep(time.Minute)
continue
case http.StatusBadRequest:
log.Printf("Failed to join %v: request or check error", pool)
log.Printf("Response data: %s", bs)
time.Sleep(time.Minute)
continue
case httpStatusEnhanceYourCalm:
log.Printf("Failed to join %v: under load (rate limiting)", pool)
time.Sleep(time.Minute)
continue
case http.StatusUnauthorized:
log.Printf("Failed to join %v: IP address not matching external address", pool)
log.Println("Aborting")
return
default:
log.Printf("Failed to join %v: unexpected status code from server: %d", pool, resp.StatusCode)
log.Printf("Response data: %s", bs)
time.Sleep(time.Minute)
continue
}
time.Sleep(time.Hour)
}
}

@@ -0,0 +1,4 @@
#!/bin/bash
addgroup --system syncthing
adduser --system --home /var/lib/syncthing-relaysrv --ingroup syncthing syncthing-relaysrv

@@ -22,7 +22,7 @@ import (
var (
sessionMut = sync.RWMutex{}
activeSessions = make([]*session, 0)
pendingSessions = make(map[string]*session, 0)
pendingSessions = make(map[string]*session)
numProxies int64
bytesProxied int64
)

@@ -10,6 +10,8 @@ import (
"runtime"
"sync/atomic"
"time"
"github.com/syncthing/syncthing/lib/build"
)
var rc *rateCalculator
@@ -40,10 +42,10 @@ func getStatus(w http.ResponseWriter, r *http.Request) {
sessionMut.Lock()
// This can potentially be double the number of pending sessions, as each session has two keys, one for each side.
status["version"] = Version
status["buildHost"] = BuildHost
status["buildUser"] = BuildUser
status["buildDate"] = BuildDate
status["version"] = build.Version
status["buildHost"] = build.Host
status["buildUser"] = build.User
status["buildDate"] = build.Date
status["startTime"] = rc.startTime
status["uptimeSeconds"] = time.Since(rc.startTime) / time.Second
status["numPendingSessionKeys"] = len(pendingSessions)
@@ -86,9 +88,9 @@ func getStatus(w http.ResponseWriter, r *http.Request) {
}
type rateCalculator struct {
counter *int64 // atomic, must remain 64-bit aligned
rates []int64
prev int64
counter *int64
startTime time.Time
}

@@ -4,6 +4,7 @@ package main
import (
"bufio"
"context"
"crypto/tls"
"flag"
"log"
@@ -19,6 +20,8 @@ import (
)
func main() {
ctx := context.Background()
log.SetOutput(os.Stdout)
log.SetFlags(log.LstdFlags | log.Lshortfile)
@@ -76,7 +79,7 @@ func main() {
}()
for {
conn, err := client.JoinSession(<-recv)
conn, err := client.JoinSession(ctx, <-recv)
if err != nil {
log.Fatalln("Failed to join", err)
}
@@ -90,13 +93,13 @@ func main() {
log.Fatal(err)
}
invite, err := client.GetInvitationFromRelay(uri, id, []tls.Certificate{cert}, 10*time.Second)
invite, err := client.GetInvitationFromRelay(ctx, uri, id, []tls.Certificate{cert}, 10*time.Second)
if err != nil {
log.Fatal(err)
}
log.Println("Received invitation", invite)
conn, err := client.JoinSession(invite)
conn, err := client.JoinSession(ctx, invite)
if err != nil {
log.Fatalln("Failed to join", err)
}
@@ -104,10 +107,10 @@ func main() {
connectToStdio(stdin, conn)
log.Println("Finished", conn.RemoteAddr(), conn.LocalAddr())
} else if test {
if client.TestRelay(uri, []tls.Certificate{cert}, time.Second, 2*time.Second, 4) {
if err := client.TestRelay(ctx, uri, []tls.Certificate{cert}, time.Second, 2*time.Second, 4); err == nil {
log.Println("OK")
} else {
log.Println("FAIL")
log.Println("FAIL:", err)
}
} else {
log.Fatal("Requires either join or connect")

cmd/stupgrades/main.go (new file)
@@ -0,0 +1,57 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"encoding/json"
"flag"
"os"
"sort"
"github.com/syncthing/syncthing/lib/upgrade"
)
const defaultURL = "https://api.github.com/repos/syncthing/syncthing/releases?per_page=25"
func main() {
url := flag.String("u", defaultURL, "GitHub releases url")
flag.Parse()
rels := upgrade.FetchLatestReleases(*url, "")
if rels == nil {
// An error was already logged
os.Exit(1)
}
sort.Sort(upgrade.SortByRelease(rels))
rels = filterForLatest(rels)
if err := json.NewEncoder(os.Stdout).Encode(rels); err != nil {
os.Exit(1)
}
}
// filterForLatest returns the latest stable and prerelease only. If the
// stable version is newer (comes first in the list) there is no need to go
// looking for a prerelease at all.
func filterForLatest(rels []upgrade.Release) []upgrade.Release {
var filtered []upgrade.Release
var havePre bool
for _, rel := range rels {
if !rel.Prerelease {
// We found a stable version, we're good now.
filtered = append(filtered, rel)
break
}
if rel.Prerelease && !havePre {
// We remember the first prerelease we find.
filtered = append(filtered, rel)
havePre = true
}
}
return filtered
}
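A hypothetical check of the filter (editorial; assumes the Tag and Prerelease fields of upgrade.Release and a newest-first input, as produced by the sort in main):

func TestFilterForLatest(t *testing.T) {
	rels := []upgrade.Release{
		{Tag: "v1.7.0-rc.1", Prerelease: true},
		{Tag: "v1.6.1", Prerelease: false},
		{Tag: "v1.6.0", Prerelease: false},
	}
	got := filterForLatest(rels)
	if len(got) != 2 || got[0].Tag != "v1.7.0-rc.1" || got[1].Tag != "v1.6.1" {
		t.Fatalf("unexpected result: %+v", got)
	}
}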

@@ -14,6 +14,7 @@ import (
"crypto/x509"
"crypto/x509/pkix"
"encoding/pem"
"errors"
"flag"
"fmt"
"math/big"
@@ -191,7 +192,7 @@ func pemBlockForKey(priv interface{}) (*pem.Block, error) {
}
return &pem.Block{Type: "EC PRIVATE KEY", Bytes: b}, nil
default:
return nil, fmt.Errorf("unknown key type")
return nil, errors.New("unknown key type")
}
}

@@ -1,69 +0,0 @@
// Copyright (C) 2015 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"encoding/json"
"io"
"github.com/syncthing/syncthing/lib/events"
)
// The auditService subscribes to events and writes these in JSON format, one
// event per line, to the specified writer.
type auditService struct {
w io.Writer // audit destination
stop chan struct{} // signals time to stop
started chan struct{} // signals startup complete
stopped chan struct{} // signals stop complete
}
func newAuditService(w io.Writer) *auditService {
return &auditService{
w: w,
stop: make(chan struct{}),
started: make(chan struct{}),
stopped: make(chan struct{}),
}
}
// Serve runs the audit service.
func (s *auditService) Serve() {
defer close(s.stopped)
sub := events.Default.Subscribe(events.AllEvents)
defer events.Default.Unsubscribe(sub)
enc := json.NewEncoder(s.w)
// We're ready to start processing events.
close(s.started)
for {
select {
case ev := <-sub.C():
enc.Encode(ev)
case <-s.stop:
return
}
}
}
// Stop stops the audit service.
func (s *auditService) Stop() {
close(s.stop)
}
// WaitForStart returns once the audit service is ready to receive events, or
// immediately if it's already running.
func (s *auditService) WaitForStart() {
<-s.started
}
// WaitForStop returns once the audit service has stopped.
// (Needed by the tests.)
func (s *auditService) WaitForStop() {
<-s.stopped
}
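
The audit format above is plain JSON Lines: one JSON object per event, separated by newlines. A minimal standalone sketch of that encoding pattern, using a made-up event type rather than the lib/events one:

package main

import (
	"encoding/json"
	"os"
	"time"
)

// event is a hypothetical stand-in for the values delivered on sub.C().
type event struct {
	Time time.Time   `json:"time"`
	Type string      `json:"type"`
	Data interface{} `json:"data"`
}

func main() {
	// json.Encoder.Encode writes a trailing newline after every value, which
	// is what produces the "one event per line" audit output described above.
	enc := json.NewEncoder(os.Stdout)
	events := []event{
		{time.Now(), "Starting", map[string]string{"home": "/var/syncthing"}},
		{time.Now(), "StateChanged", map[string]string{"folder": "default", "to": "idle"}},
	}
	for _, ev := range events {
		_ = enc.Encode(ev)
	}
}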

View File

@@ -22,11 +22,15 @@ func init() {
panic("Couldn't find block profiler")
}
l.Debugln("Starting block profiling")
go saveBlockingProfiles(profiler)
go func() {
err := saveBlockingProfiles(profiler) // Only returns on error
l.Warnln("Block profiler failed:", err)
panic("Block profiler failed")
}()
}
}
func saveBlockingProfiles(profiler *pprof.Profile) {
func saveBlockingProfiles(profiler *pprof.Profile) error {
runtime.SetBlockProfileRate(1)
t0 := time.Now()
@@ -35,16 +39,16 @@ func saveBlockingProfiles(profiler *pprof.Profile) {
fd, err := os.Create(fmt.Sprintf("block-%05d-%07d.pprof", syscall.Getpid(), startms))
if err != nil {
panic(err)
return err
}
err = profiler.WriteTo(fd, 0)
if err != nil {
panic(err)
return err
}
err = fd.Close()
if err != nil {
panic(err)
return err
}
}
return nil
}
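
The profiler driven here is the standard runtime/pprof "block" profile: sampling is enabled with runtime.SetBlockProfileRate and the profile is written out with Profile.WriteTo. A minimal one-shot sketch of the same plumbing, without the periodic loop used above:

package main

import (
	"log"
	"os"
	"runtime"
	"runtime/pprof"
	"sync"
	"time"
)

func main() {
	runtime.SetBlockProfileRate(1) // include every blocking event in the profile

	// Block briefly on a mutex so the profile has some content.
	var mu sync.Mutex
	mu.Lock()
	go func() {
		time.Sleep(100 * time.Millisecond)
		mu.Unlock()
	}()
	mu.Lock() // blocks until the goroutine releases it

	fd, err := os.Create("block.pprof")
	if err != nil {
		log.Fatal(err)
	}
	defer fd.Close()

	// Look up the standard "block" profile and write it out, as the
	// saveBlockingProfiles loop above does on every tick.
	if err := pprof.Lookup("block").WriteTo(fd, 0); err != nil {
		log.Fatal(err)
	}
}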

View File

@@ -1,59 +0,0 @@
// Copyright (C) 2017 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"math"
"time"
metrics "github.com/rcrowley/go-metrics"
)
const cpuTickRate = 5 * time.Second
type cpuService struct {
avg metrics.EWMA
stop chan struct{}
}
func newCPUService() *cpuService {
return &cpuService{
// 10 second average. Magic alpha value comes from looking at EWMA package
// definitions of EWMA1, EWMA5. The tick rate *must* be five seconds (hard
// coded in the EWMA package).
avg: metrics.NewEWMA(1 - math.Exp(-float64(cpuTickRate)/float64(time.Second)/10.0)),
stop: make(chan struct{}),
}
}
func (s *cpuService) Serve() {
// Initialize prevUsage to an actual value returned by cpuUsage
// instead of zero, because at least Windows returns a huge negative
// number here that then slowly increments...
prevUsage := cpuUsage()
ticker := time.NewTicker(cpuTickRate)
defer ticker.Stop()
for {
select {
case <-ticker.C:
curUsage := cpuUsage()
s.avg.Update(int64((curUsage - prevUsage) / time.Millisecond))
prevUsage = curUsage
s.avg.Tick()
case <-s.stop:
return
}
}
}
func (s *cpuService) Stop() {
close(s.stop)
}
func (s *cpuService) Rate() float64 {
return s.avg.Rate()
}
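
The magic alpha in newCPUService follows the usual exponential-smoothing relation alpha = 1 - exp(-interval/window); with a 5 second tick and a 10 second window that comes out to roughly 0.39. A small standalone check of the formula and of one smoothing step (this is just the arithmetic, not the go-metrics API):

package main

import (
	"fmt"
	"math"
)

func main() {
	const tick = 5.0    // seconds, matching cpuTickRate above
	const window = 10.0 // seconds of smoothing, the "10 second average"

	// Same expression as in newCPUService: alpha = 1 - exp(-tick/window).
	alpha := 1 - math.Exp(-tick/window)
	fmt.Printf("alpha = %.4f\n", alpha) // prints 0.3935

	// One EWMA step per tick: avg += alpha * (sample - avg).
	var avg float64
	for _, sample := range []float64{100, 120, 80} { // e.g. ms of CPU used per tick
		avg += alpha * (sample - avg)
		fmt.Printf("avg = %.1f\n", avg)
	}
}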

View File

@@ -1,78 +0,0 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
//+build solaris
package main
import (
"encoding/binary"
"fmt"
"os"
"time"
)
type id_t int32
type ulong_t uint32
type timestruc_t struct {
Tv_sec int64
Tv_nsec int64
}
func (tv timestruc_t) Nano() int64 {
return tv.Tv_sec*1e9 + tv.Tv_nsec
}
type prusage_t struct {
Pr_lwpid id_t /* lwp id. 0: process or defunct */
Pr_count int32 /* number of contributing lwps */
Pr_tstamp timestruc_t /* real time stamp, time of read() */
Pr_create timestruc_t /* process/lwp creation time stamp */
Pr_term timestruc_t /* process/lwp termination time stamp */
Pr_rtime timestruc_t /* total lwp real (elapsed) time */
Pr_utime timestruc_t /* user level CPU time */
Pr_stime timestruc_t /* system call CPU time */
Pr_ttime timestruc_t /* other system trap CPU time */
Pr_tftime timestruc_t /* text page fault sleep time */
Pr_dftime timestruc_t /* data page fault sleep time */
Pr_kftime timestruc_t /* kernel page fault sleep time */
Pr_ltime timestruc_t /* user lock wait sleep time */
Pr_slptime timestruc_t /* all other sleep time */
Pr_wtime timestruc_t /* wait-cpu (latency) time */
Pr_stoptime timestruc_t /* stopped time */
Pr_minf ulong_t /* minor page faults */
Pr_majf ulong_t /* major page faults */
Pr_nswap ulong_t /* swaps */
Pr_inblk ulong_t /* input blocks */
Pr_oublk ulong_t /* output blocks */
Pr_msnd ulong_t /* messages sent */
Pr_mrcv ulong_t /* messages received */
Pr_sigs ulong_t /* signals received */
Pr_vctx ulong_t /* voluntary context switches */
Pr_ictx ulong_t /* involuntary context switches */
Pr_sysc ulong_t /* system calls */
Pr_ioch ulong_t /* chars read and written */
}
var procFile = fmt.Sprintf("/proc/%d/usage", os.Getpid())
func cpuUsage() time.Duration {
fd, err := os.Open(procFile)
if err != nil {
return 0
}
var rusage prusage_t
err = binary.Read(fd, binary.LittleEndian, rusage)
fd.Close()
if err != nil {
return 0
}
return time.Duration(rusage.Pr_utime.Nano() + rusage.Pr_stime.Nano())
}

View File

@@ -1,27 +0,0 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
//+build windows
package main
import "syscall"
import "time"
func cpuUsage() time.Duration {
handle, err := syscall.GetCurrentProcess()
if err != nil {
return 0
}
defer syscall.CloseHandle(handle)
var ctime, etime, ktime, utime syscall.Filetime
if err := syscall.GetProcessTimes(handle, &ctime, &etime, &ktime, &utime); err != nil {
return 0
}
return time.Duration(ktime.Nanoseconds() + utime.Nanoseconds())
}

View File

@@ -0,0 +1,152 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"bytes"
"context"
"fmt"
"io/ioutil"
"net/http"
"os"
"path/filepath"
"sort"
"strings"
"time"
"github.com/syncthing/syncthing/lib/sha256"
)
const (
headRequestTimeout = 10 * time.Second
putRequestTimeout = time.Minute
)
// uploadPanicLogs attempts to upload all the panic logs in the named
// directory to the crash reporting server at urlBase. Uploads are attempted
// with the newest log first.
//
// This can block for a long time. The context can set a final deadline
// for this.
func uploadPanicLogs(ctx context.Context, urlBase, dir string) {
files, err := filepath.Glob(filepath.Join(dir, "panic-*.log"))
if err != nil {
l.Warnln("Failed to list panic logs:", err)
return
}
sort.Sort(sort.Reverse(sort.StringSlice(files)))
for _, file := range files {
if strings.Contains(file, ".reported.") {
// We've already sent this file. It'll be cleaned out at some
// point.
continue
}
if err := uploadPanicLog(ctx, urlBase, file); err != nil {
l.Warnln("Reporting crash:", err)
} else {
// Rename the log so we don't have to try to report it again. This
// succeeds, or it does not. There is no point complaining about it.
_ = os.Rename(file, strings.Replace(file, ".log", ".reported.log", 1))
}
}
}
// uploadPanicLog attempts to upload the named panic log to the crash
// reporting server at urlBase. The panic ID is constructed as the sha256 of
// the log contents. A HEAD request is made to see if the log has already
// been reported. If not, a PUT is made with the log contents.
func uploadPanicLog(ctx context.Context, urlBase, file string) error {
data, err := ioutil.ReadFile(file)
if err != nil {
return err
}
// Remove log lines, for privacy.
data = filterLogLines(data)
hash := fmt.Sprintf("%x", sha256.Sum256(data))
l.Infof("Reporting crash found in %s (report ID %s) ...\n", filepath.Base(file), hash[:8])
url := fmt.Sprintf("%s/%s", urlBase, hash)
headReq, err := http.NewRequest(http.MethodHead, url, nil)
if err != nil {
return err
}
// Set a reasonable timeout on the HEAD request
headCtx, headCancel := context.WithTimeout(ctx, headRequestTimeout)
defer headCancel()
headReq = headReq.WithContext(headCtx)
resp, err := http.DefaultClient.Do(headReq)
if err != nil {
return err
}
resp.Body.Close()
if resp.StatusCode == http.StatusOK {
// It's known, we're done
return nil
}
putReq, err := http.NewRequest(http.MethodPut, url, bytes.NewReader(data))
if err != nil {
return err
}
// Set a reasonable timeout on the PUT request
putCtx, putCancel := context.WithTimeout(ctx, putRequestTimeout)
defer putCancel()
putReq = putReq.WithContext(putCtx)
resp, err = http.DefaultClient.Do(putReq)
if err != nil {
return err
}
resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return fmt.Errorf("upload: %s", resp.Status)
}
return nil
}
// filterLogLines returns the data without any log lines between the first
// line and the panic trace. This is done in-place: the original data slice
// is destroyed.
func filterLogLines(data []byte) []byte {
filtered := data[:0]
matched := false
for _, line := range bytes.Split(data, []byte("\n")) {
switch {
case !matched && bytes.HasPrefix(line, []byte("Panic ")):
// This begins the panic trace, set the matched flag and append.
matched = true
fallthrough
case len(filtered) == 0 || matched:
// This is the first line or inside the panic trace.
if len(filtered) > 0 {
// We add the newline before rather than after because
// bytes.Split sees the \n as *separator* and not line
// ender, so it will generate a last empty line that we
// don't really want. (We want to keep blank lines in the
// middle of the trace though.)
filtered = append(filtered, '\n')
}
// Remove the device ID prefix. The "plus two" stuff is because
// the line will look like "[foo] whatever" and the end variable
// will end up pointing at the ] and we want to step over that
// and the following space.
if end := bytes.Index(line, []byte("]")); end > 1 && end < len(line)-2 && bytes.HasPrefix(line, []byte("[")) {
line = line[end+2:]
}
filtered = append(filtered, line...)
}
}
return filtered
}
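
The deduplication scheme described above (report ID = sha256 of the filtered log, HEAD to check, PUT to upload) can be sketched on its own. The helper below is hypothetical, uses the standard crypto/sha256 rather than lib/sha256, and points at an example URL, not the real crash server:

package main

import (
	"bytes"
	"context"
	"crypto/sha256"
	"fmt"
	"net/http"
	"time"
)

// uploadIfUnknown is a stripped-down cousin of uploadPanicLog: derive the
// report ID from the content hash, HEAD to ask whether the server already
// has it, and PUT the contents only if it does not.
func uploadIfUnknown(ctx context.Context, urlBase string, report []byte) error {
	id := fmt.Sprintf("%x", sha256.Sum256(report))
	url := urlBase + "/" + id

	headCtx, headCancel := context.WithTimeout(ctx, 10*time.Second)
	defer headCancel()
	req, err := http.NewRequestWithContext(headCtx, http.MethodHead, url, nil)
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	resp.Body.Close()
	if resp.StatusCode == http.StatusOK {
		return nil // already known to the server, nothing to do
	}

	putCtx, putCancel := context.WithTimeout(ctx, time.Minute)
	defer putCancel()
	req, err = http.NewRequestWithContext(putCtx, http.MethodPut, url, bytes.NewReader(report))
	if err != nil {
		return err
	}
	resp, err = http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("upload: %s", resp.Status)
	}
	return nil
}

func main() {
	err := uploadIfUnknown(context.Background(), "https://crash.example.com/newcrash", []byte("panic: example"))
	fmt.Println(err)
}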

View File

@@ -0,0 +1,37 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"bytes"
"testing"
)
func TestFilterLogLines(t *testing.T) {
in := []byte(`[ABCD123] syncthing version whatever
here is more log data
and more
...
and some more
yet more
Panic detected at like right now
here is panic data
and yet more panic stuff
`)
filtered := []byte(`syncthing version whatever
Panic detected at like right now
here is panic data
and yet more panic stuff
`)
result := filterLogLines(in)
if !bytes.Equal(result, filtered) {
t.Logf("%q\n", result)
t.Error("it should have been filtered")
}
}

View File

@@ -7,22 +7,9 @@
package main
import (
"os"
"strings"
"github.com/syncthing/syncthing/lib/logger"
)
var (
l = logger.DefaultLogger.NewFacility("main", "Main package")
httpl = logger.DefaultLogger.NewFacility("http", "REST API")
l = logger.DefaultLogger.NewFacility("main", "Main package")
)
func shouldDebugHTTP() bool {
return l.ShouldDebug("http")
}
func init() {
l.SetDebug("main", strings.Contains(os.Getenv("STTRACE"), "main") || os.Getenv("STTRACE") == "all")
l.SetDebug("http", strings.Contains(os.Getenv("STTRACE"), "http") || os.Getenv("STTRACE") == "all")
}

View File

@@ -1,142 +0,0 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"bufio"
"fmt"
"net/http"
"os"
"strings"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/rand"
"github.com/syncthing/syncthing/lib/sync"
)
// csrfTokens is a list of valid tokens. It is sorted so that the most
// recently used token is first in the list. New tokens are added to the front
// of the list (as it is the most recently used at that time). The list is
// pruned to a maximum of maxCsrfTokens, throwing away the least recently used
// tokens.
var csrfTokens []string
var csrfMut = sync.NewMutex()
const maxCsrfTokens = 25
// Check for CSRF token on /rest/ URLs. If a correct one is not given, reject
// the request with 403. For / and /index.html, set a new CSRF cookie if none
// is currently set.
func csrfMiddleware(unique string, prefix string, cfg config.GUIConfiguration, next http.Handler) http.Handler {
loadCsrfTokens()
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Allow requests carrying a valid API key
if cfg.IsValidAPIKey(r.Header.Get("X-API-Key")) {
// Set the access-control-allow-origin header for CORS requests
// since a valid API key has been provided
w.Header().Add("Access-Control-Allow-Origin", "*")
next.ServeHTTP(w, r)
return
}
if strings.HasPrefix(r.URL.Path, "/rest/debug") {
// Debugging functions are only available when explicitly
// enabled, and can be accessed without a CSRF token
next.ServeHTTP(w, r)
return
}
// Allow requests for anything not under the protected path prefix,
// and set a CSRF cookie if there isn't already a valid one.
if !strings.HasPrefix(r.URL.Path, prefix) {
cookie, err := r.Cookie("CSRF-Token-" + unique)
if err != nil || !validCsrfToken(cookie.Value) {
httpl.Debugln("new CSRF cookie in response to request for", r.URL)
cookie = &http.Cookie{
Name: "CSRF-Token-" + unique,
Value: newCsrfToken(),
}
http.SetCookie(w, cookie)
}
next.ServeHTTP(w, r)
return
}
// Verify the CSRF token
token := r.Header.Get("X-CSRF-Token-" + unique)
if !validCsrfToken(token) {
http.Error(w, "CSRF Error", 403)
return
}
next.ServeHTTP(w, r)
})
}
func validCsrfToken(token string) bool {
csrfMut.Lock()
defer csrfMut.Unlock()
for i, t := range csrfTokens {
if t == token {
if i > 0 {
// Move this token to the head of the list. Copy the tokens at
// the front one step to the right and then replace the token
// at the head.
copy(csrfTokens[1:], csrfTokens[:i+1])
csrfTokens[0] = token
}
return true
}
}
return false
}
func newCsrfToken() string {
token := rand.String(32)
csrfMut.Lock()
csrfTokens = append([]string{token}, csrfTokens...)
if len(csrfTokens) > maxCsrfTokens {
csrfTokens = csrfTokens[:maxCsrfTokens]
}
defer csrfMut.Unlock()
saveCsrfTokens()
return token
}
func saveCsrfTokens() {
// We're ignoring errors in here. It's not super critical and there's
// nothing relevant we can do about them anyway...
name := locations[locCsrfTokens]
f, err := osutil.CreateAtomic(name)
if err != nil {
return
}
for _, t := range csrfTokens {
fmt.Fprintln(f, t)
}
f.Close()
}
func loadCsrfTokens() {
f, err := os.Open(locations[locCsrfTokens])
if err != nil {
return
}
defer f.Close()
s := bufio.NewScanner(f)
for s.Scan() {
csrfTokens = append(csrfTokens, s.Text())
}
}
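
The token list above behaves like a small most-recently-used cache: new and freshly validated tokens move to the front, and anything beyond maxCsrfTokens falls off the end. A self-contained sketch of that structure (the names and sizes here are made up):

package main

import "fmt"

const maxTokens = 3

// mruTokens keeps the most recently used token first and never grows
// beyond maxTokens entries.
type mruTokens struct {
	tokens []string
}

// Add puts a new token at the head and prunes the least recently used ones.
func (m *mruTokens) Add(tok string) {
	m.tokens = append([]string{tok}, m.tokens...)
	if len(m.tokens) > maxTokens {
		m.tokens = m.tokens[:maxTokens]
	}
}

// Valid reports whether tok is known, moving it to the head if so.
func (m *mruTokens) Valid(tok string) bool {
	for i, t := range m.tokens {
		if t == tok {
			if i > 0 {
				copy(m.tokens[1:i+1], m.tokens[:i]) // shift the newer entries right
				m.tokens[0] = tok
			}
			return true
		}
	}
	return false
}

func main() {
	var m mruTokens
	for _, t := range []string{"a", "b", "c", "d"} {
		m.Add(t)
	}
	fmt.Println(m.tokens)     // [d c b]  ("a" was pruned)
	fmt.Println(m.Valid("b")) // true, and "b" moves to the front
	fmt.Println(m.tokens)     // [b d c]
	fmt.Println(m.Valid("a")) // false
}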

View File

@@ -23,11 +23,15 @@ func init() {
rate = i
}
l.Debugln("Starting heap profiling")
go saveHeapProfiles(rate)
go func() {
err := saveHeapProfiles(rate) // Only returns on error
l.Warnln("Heap profiler failed:", err)
panic("Heap profiler failed")
}()
}
}
func saveHeapProfiles(rate int) {
func saveHeapProfiles(rate int) error {
runtime.MemProfileRate = rate
var memstats, prevMemstats runtime.MemStats
@@ -38,21 +42,21 @@ func saveHeapProfiles(rate int) {
if memstats.HeapInuse > prevMemstats.HeapInuse {
fd, err := os.Create(name + ".tmp")
if err != nil {
panic(err)
return err
}
err = pprof.WriteHeapProfile(fd)
if err != nil {
panic(err)
return err
}
err = fd.Close()
if err != nil {
panic(err)
return err
}
_ = os.Remove(name) // Error deliberately ignored
os.Remove(name) // Error deliberately ignored
err = os.Rename(name+".tmp", name)
if err != nil {
panic(err)
return err
}
prevMemstats = memstats

View File

@@ -1,125 +0,0 @@
// Copyright (C) 2015 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"os"
"path/filepath"
"runtime"
"strings"
"time"
"github.com/syncthing/syncthing/lib/fs"
)
type locationEnum string
// Use strings as keys to make printout and serialization of the locations map
// more meaningful.
const (
locConfigFile locationEnum = "config"
locCertFile = "certFile"
locKeyFile = "keyFile"
locHTTPSCertFile = "httpsCertFile"
locHTTPSKeyFile = "httpsKeyFile"
locDatabase = "database"
locLogFile = "logFile"
locCsrfTokens = "csrfTokens"
locPanicLog = "panicLog"
locAuditLog = "auditLog"
locGUIAssets = "GUIAssets"
locDefFolder = "defFolder"
)
// Platform dependent directories
var baseDirs = map[string]string{
"config": defaultConfigDir(), // Overridden by -home flag
"home": homeDir(), // User's home directory, *not* -home flag
}
// Use the variables from baseDirs here
var locations = map[locationEnum]string{
locConfigFile: "${config}/config.xml",
locCertFile: "${config}/cert.pem",
locKeyFile: "${config}/key.pem",
locHTTPSCertFile: "${config}/https-cert.pem",
locHTTPSKeyFile: "${config}/https-key.pem",
locDatabase: "${config}/index-v0.14.0.db",
locLogFile: "${config}/syncthing.log", // -logfile on Windows
locCsrfTokens: "${config}/csrftokens.txt",
locPanicLog: "${config}/panic-${timestamp}.log",
locAuditLog: "${config}/audit-${timestamp}.log",
locGUIAssets: "${config}/gui",
locDefFolder: "${home}/Sync",
}
// expandLocations replaces the variables in the location map with actual
// directory locations.
func expandLocations() error {
for key, dir := range locations {
for varName, value := range baseDirs {
dir = strings.Replace(dir, "${"+varName+"}", value, -1)
}
var err error
dir, err = fs.ExpandTilde(dir)
if err != nil {
return err
}
locations[key] = dir
}
return nil
}
// defaultConfigDir returns the default configuration directory, as figured
// out by the various environment variables present on each platform, or dies
// trying.
func defaultConfigDir() string {
switch runtime.GOOS {
case "windows":
if p := os.Getenv("LocalAppData"); p != "" {
return filepath.Join(p, "Syncthing")
}
return filepath.Join(os.Getenv("AppData"), "Syncthing")
case "darwin":
dir, err := fs.ExpandTilde("~/Library/Application Support/Syncthing")
if err != nil {
l.Fatalln(err)
}
return dir
default:
if xdgCfg := os.Getenv("XDG_CONFIG_HOME"); xdgCfg != "" {
return filepath.Join(xdgCfg, "syncthing")
}
dir, err := fs.ExpandTilde("~/.config/syncthing")
if err != nil {
l.Fatalln(err)
}
return dir
}
}
// homeDir returns the user's home directory, or dies trying.
func homeDir() string {
home, err := fs.ExpandTilde("~")
if err != nil {
l.Fatalln(err)
}
return home
}
func timestampedLoc(key locationEnum) string {
// We take the roundtrip via "${timestamp}" instead of passing the path
// directly through time.Format() to avoid issues when the path we are
// expanding contains numbers; otherwise for example
// /home/user2006/.../panic-20060102-150405.log would get both instances of
// 2006 replaced by 2015...
tpl := locations[key]
now := time.Now().Format("20060102-150405")
return strings.Replace(tpl, "${timestamp}", now, -1)
}
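
The ${timestamp} roundtrip deserves a concrete example: running the whole template through time.Format would also rewrite digits in the directory name that happen to match the reference layout, which the placeholder approach avoids. A quick demonstration with a fixed time:

package main

import (
	"fmt"
	"strings"
	"time"
)

func main() {
	now := time.Date(2015, 7, 8, 9, 10, 11, 0, time.UTC)

	// Naive approach: format the whole path. The "2006" in the user name is
	// treated as part of the layout and gets replaced too.
	naive := now.Format("/home/user2006/panic-20060102-150405.log")
	fmt.Println(naive) // /home/user2015/panic-20150708-091011.log

	// Placeholder approach, as in timestampedLoc: only the marked part is
	// expanded, the rest of the path is left alone.
	tpl := "/home/user2006/panic-${timestamp}.log"
	safe := strings.Replace(tpl, "${timestamp}", now.Format("20060102-150405"), -1)
	fmt.Println(safe) // /home/user2006/panic-20150708-091011.log
}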

View File

File diff suppressed because it is too large.

View File

@@ -1,31 +0,0 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"errors"
"os/exec"
"strconv"
"strings"
)
func memorySize() (int64, error) {
cmd := exec.Command("sysctl", "hw.memsize")
out, err := cmd.Output()
if err != nil {
return 0, err
}
fs := strings.Fields(string(out))
if len(fs) != 2 {
return 0, errors.New("sysctl parse error")
}
bytes, err := strconv.ParseInt(fs[1], 10, 64)
if err != nil {
return 0, err
}
return bytes, nil
}
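
For context, sysctl hw.memsize on macOS prints a line of the form "hw.memsize: 17179869184", which is what the Fields/ParseInt parsing above expects. A tiny sketch of that parsing against a captured sample line:

package main

import (
	"errors"
	"fmt"
	"strconv"
	"strings"
)

// parseSysctlMemsize mirrors the parsing above on a captured sample line.
func parseSysctlMemsize(out string) (int64, error) {
	fs := strings.Fields(out)
	if len(fs) != 2 {
		return 0, errors.New("sysctl parse error")
	}
	return strconv.ParseInt(fs[1], 10, 64)
}

func main() {
	// Typical output of `sysctl hw.memsize` on a 16 GiB machine.
	bytes, err := parseSysctlMemsize("hw.memsize: 17179869184\n")
	fmt.Println(bytes, err) // 17179869184 <nil>
}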

View File

@@ -1,17 +0,0 @@
// Copyright (C) 2016 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
type mockedConnections struct{}
func (m *mockedConnections) Status() map[string]interface{} {
return nil
}
func (m *mockedConnections) NATType() string {
return ""
}

View File

@@ -8,17 +8,25 @@ package main
import (
"bufio"
"context"
"fmt"
"io"
"os"
"os/exec"
"os/signal"
"path/filepath"
"runtime"
"strings"
"syscall"
"time"
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/locations"
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/sync"
"github.com/syncthing/syncthing/lib/syncthing"
)
var (
@@ -32,6 +40,8 @@ const (
loopThreshold = 60 * time.Second
logFileAutoCloseDelay = 5 * time.Second
logFileMaxOpenTime = time.Minute
panicUploadMaxWait = 30 * time.Second
panicUploadNoticeWait = 10 * time.Second
)
func monitorMain(runtimeOptions RuntimeOptions) {
@@ -41,28 +51,42 @@ func monitorMain(runtimeOptions RuntimeOptions) {
logFile := runtimeOptions.logFile
if logFile != "-" {
var fileDst io.Writer = newAutoclosedFile(logFile, logFileAutoCloseDelay, logFileMaxOpenTime)
if runtime.GOOS == "windows" {
// Translate line breaks to Windows standard
fileDst = osutil.ReplacingWriter{
Writer: fileDst,
From: '\n',
To: []byte{'\r', '\n'},
}
if expanded, err := fs.ExpandTilde(logFile); err == nil {
logFile = expanded
}
var fileDst io.Writer
var err error
open := func(name string) (io.WriteCloser, error) {
return newAutoclosedFile(name, logFileAutoCloseDelay, logFileMaxOpenTime)
}
if runtimeOptions.logMaxSize > 0 {
fileDst, err = newRotatedFile(logFile, open, int64(runtimeOptions.logMaxSize), runtimeOptions.logMaxFiles)
} else {
fileDst, err = open(logFile)
}
if err != nil {
l.Warnln("Failed to setup logging to file, proceeding with logging to stdout only:", err)
} else {
if runtime.GOOS == "windows" {
// Translate line breaks to Windows standard
fileDst = osutil.ReplacingWriter{
Writer: fileDst,
From: '\n',
To: []byte{'\r', '\n'},
}
}
// Log to both stdout and file.
dst = io.MultiWriter(dst, fileDst)
// Log to both stdout and file.
dst = io.MultiWriter(dst, fileDst)
l.Infof(`Log output saved to file "%s"`, logFile)
l.Infof(`Log output saved to file "%s"`, logFile)
}
}
args := os.Args
var restarts [countRestarts]time.Time
stopSign := make(chan os.Signal, 1)
sigTerm := syscall.Signal(15)
signal.Notify(stopSign, os.Interrupt, sigTerm)
restartSign := make(chan os.Signal, 1)
sigHup := syscall.Signal(1)
@@ -71,9 +95,11 @@ func monitorMain(runtimeOptions RuntimeOptions) {
childEnv := childEnv()
first := true
for {
maybeReportPanics()
if t := time.Since(restarts[0]); t < loopThreshold {
l.Warnf("%d restarts in %v; not retrying further", countRestarts, t)
os.Exit(exitError)
os.Exit(syncthing.ExitError.AsInt())
}
copy(restarts[0:], restarts[1:])
@@ -84,18 +110,19 @@ func monitorMain(runtimeOptions RuntimeOptions) {
stderr, err := cmd.StderrPipe()
if err != nil {
l.Fatalln("stderr:", err)
panic(err)
}
stdout, err := cmd.StdoutPipe()
if err != nil {
l.Fatalln("stdout:", err)
panic(err)
}
l.Infoln("Starting syncthing")
l.Debugln("Starting syncthing")
err = cmd.Start()
if err != nil {
l.Fatalln(err)
l.Warnln("Error starting the main Syncthing process:", err)
panic("Error starting the main Syncthing process")
}
stdoutMut.Lock()
@@ -124,12 +151,13 @@ func monitorMain(runtimeOptions RuntimeOptions) {
exit <- cmd.Wait()
}()
stopped := false
select {
case s := <-stopSign:
l.Infof("Signal %d received; exiting", s)
cmd.Process.Kill()
<-exit
return
cmd.Process.Signal(sigTerm)
err = <-exit
stopped = true
case s := <-restartSign:
l.Infof("Signal %d received; restarting", s)
@@ -137,23 +165,31 @@ func monitorMain(runtimeOptions RuntimeOptions) {
err = <-exit
case err = <-exit:
if err == nil {
// Successful exit indicates an intentional shutdown
return
} else if exiterr, ok := err.(*exec.ExitError); ok {
if status, ok := exiterr.Sys().(syscall.WaitStatus); ok {
switch status.ExitStatus() {
case exitUpgrading:
// Restart the monitor process to release the .old
// binary as part of the upgrade process.
l.Infoln("Restarting monitor...")
if err = restartMonitor(args); err != nil {
l.Warnln("Restart:", err)
}
return
}
}
}
if err == nil {
// Successful exit indicates an intentional shutdown
os.Exit(syncthing.ExitSuccess.AsInt())
}
if exiterr, ok := err.(*exec.ExitError); ok {
exitCode := exiterr.ExitCode()
if stopped || runtimeOptions.noRestart {
os.Exit(exitCode)
}
if exitCode == syncthing.ExitUpgrade.AsInt() {
// Restart the monitor process to release the .old
// binary as part of the upgrade process.
l.Infoln("Restarting monitor...")
if err = restartMonitor(args); err != nil {
l.Warnln("Restart:", err)
}
os.Exit(exitCode)
}
}
if runtimeOptions.noRestart {
os.Exit(syncthing.ExitError.AsInt())
}
l.Infoln("Syncthing exited:", err)
@@ -172,6 +208,13 @@ func copyStderr(stderr io.Reader, dst io.Writer) {
br := bufio.NewReader(stderr)
var panicFd *os.File
defer func() {
if panicFd != nil {
_ = panicFd.Close()
maybeReportPanics()
}
}()
for {
line, err := br.ReadString('\n')
if err != nil {
@@ -198,7 +241,7 @@ func copyStderr(stderr io.Reader, dst io.Writer) {
}
if strings.HasPrefix(line, "panic:") || strings.HasPrefix(line, "fatal error:") {
panicFd, err = os.Create(timestampedLoc(locPanicLog))
panicFd, err = os.Create(locations.GetTimestamped(locations.PanicLog))
if err != nil {
l.Warnln("Create panic log:", err)
continue
@@ -268,6 +311,11 @@ func copyStdout(stdout io.Reader, dst io.Writer) {
}
func restartMonitor(args []string) error {
// Set the STRESTART environment variable to indicate to the next
// process that this is a restart and not initial start. This prevents
// opening the browser on startup.
os.Setenv("STRESTART", "yes")
if runtime.GOOS != "windows" {
// syscall.Exec is the cleanest way to restart on Unixes as it
// replaces the current process with the new one, keeping the pid and
@@ -303,6 +351,89 @@ func restartMonitorWindows(args []string) error {
return cmd.Start()
}
// rotatedFile keeps a set of rotating logs. There will be the base file plus up
// to maxFiles rotated ones, each ~ maxSize bytes large.
type rotatedFile struct {
name string
create createFn
maxSize int64 // bytes
maxFiles int
currentFile io.WriteCloser
currentSize int64
}
type createFn func(name string) (io.WriteCloser, error)
func newRotatedFile(name string, create createFn, maxSize int64, maxFiles int) (*rotatedFile, error) {
var size int64
if info, err := os.Lstat(name); err != nil {
if !os.IsNotExist(err) {
return nil, err
}
size = 0
} else {
size = info.Size()
}
writer, err := create(name)
if err != nil {
return nil, err
}
return &rotatedFile{
name: name,
create: create,
maxSize: maxSize,
maxFiles: maxFiles,
currentFile: writer,
currentSize: size,
}, nil
}
func (r *rotatedFile) Write(bs []byte) (int, error) {
// Check if we're about to exceed the max size, and if so close this
// file so we'll start on a new one.
if r.currentSize+int64(len(bs)) > r.maxSize {
r.currentFile.Close()
r.currentSize = 0
r.rotate()
f, err := r.create(r.name)
if err != nil {
return 0, err
}
r.currentFile = f
}
n, err := r.currentFile.Write(bs)
r.currentSize += int64(n)
return n, err
}
func (r *rotatedFile) rotate() {
// The files are named "name", "name.0", "name.1", ...
// "name.(r.maxFiles-1)". Increase the numbers on the
// suffixed ones.
for i := r.maxFiles - 1; i > 0; i-- {
from := numberedFile(r.name, i-1)
to := numberedFile(r.name, i)
err := os.Rename(from, to)
if err != nil && !os.IsNotExist(err) {
fmt.Println("LOG: Rotating logs:", err)
}
}
// Rename the base to base.0
err := os.Rename(r.name, numberedFile(r.name, 0))
if err != nil && !os.IsNotExist(err) {
fmt.Println("LOG: Rotating logs:", err)
}
}
// numberedFile adds the number between the file name and the extension.
func numberedFile(name string, num int) string {
ext := filepath.Ext(name) // contains the dot
withoutExt := name[:len(name)-len(ext)]
return fmt.Sprintf("%s.%d%s", withoutExt, num, ext)
}
// An autoclosedFile is an io.WriteCloser that opens itself for appending on
// Write() and closes itself after an interval of no writes (closeDelay) or
// when the file has been open for too long (maxOpenTime). A call to Write()
@@ -321,7 +452,7 @@ type autoclosedFile struct {
mut sync.Mutex
}
func newAutoclosedFile(name string, closeDelay, maxOpenTime time.Duration) *autoclosedFile {
func newAutoclosedFile(name string, closeDelay, maxOpenTime time.Duration) (*autoclosedFile, error) {
f := &autoclosedFile{
name: name,
closeDelay: closeDelay,
@@ -330,8 +461,13 @@ func newAutoclosedFile(name string, closeDelay, maxOpenTime time.Duration) *auto
closed: make(chan struct{}),
closeTimer: time.NewTimer(time.Minute),
}
f.mut.Lock()
defer f.mut.Unlock()
if err := f.ensureOpenLocked(); err != nil {
return nil, err
}
go f.closerLoop()
return f
return f, nil
}
func (f *autoclosedFile) Write(bs []byte) (int, error) {
@@ -339,7 +475,7 @@ func (f *autoclosedFile) Write(bs []byte) (int, error) {
defer f.mut.Unlock()
// Make sure the file is open for appending
if err := f.ensureOpen(); err != nil {
if err := f.ensureOpenLocked(); err != nil {
return 0, err
}
@@ -369,22 +505,14 @@ func (f *autoclosedFile) Close() error {
}
// Must be called with f.mut held!
func (f *autoclosedFile) ensureOpen() error {
func (f *autoclosedFile) ensureOpenLocked() error {
if f.fd != nil {
// File is already open
return nil
}
// We open the file for write only, and create it if it doesn't exist.
flags := os.O_WRONLY | os.O_CREATE
if f.opened.IsZero() {
// This is the first time we are opening the file. We should truncate
// it to better emulate an os.Create() call.
flags |= os.O_TRUNC
} else {
// The file was already opened once, so we should append to it.
flags |= os.O_APPEND
}
flags := os.O_WRONLY | os.O_CREATE | os.O_APPEND
fd, err := os.OpenFile(f.name, flags, 0644)
if err != nil {
@@ -418,10 +546,10 @@ func (f *autoclosedFile) closerLoop() {
func childEnv() []string {
var env []string
for _, str := range os.Environ() {
if strings.HasPrefix("STNORESTART=", str) {
if strings.HasPrefix(str, "STNORESTART=") {
continue
}
if strings.HasPrefix("STMONITORED=", str) {
if strings.HasPrefix(str, "STMONITORED=") {
continue
}
env = append(env, str)
@@ -429,3 +557,39 @@ func childEnv() []string {
env = append(env, "STMONITORED=yes")
return env
}
// maybeReportPanics tries to figure out if crash reporting is on or off,
// and reports any panics it can find if it's enabled. We spend at most
// panicUploadMaxWait uploading panics...
func maybeReportPanics() {
// Try to get a config to see if/where panics should be reported.
cfg, err := loadOrDefaultConfig(protocol.EmptyDeviceID, events.NoopLogger)
if err != nil {
l.Warnln("Couldn't load config; not reporting crash")
return
}
// Bail if we're not supposed to report panics.
opts := cfg.Options()
if !opts.CREnabled {
return
}
// Set up a timeout on the whole operation.
ctx, cancel := context.WithTimeout(context.Background(), panicUploadMaxWait)
defer cancel()
// Print a notice if the upload takes a long time.
go func() {
select {
case <-ctx.Done():
return
case <-time.After(panicUploadNoticeWait):
l.Warnln("Uploading crash reports is taking a while, please wait...")
}
}()
// Report the panics.
dir := locations.GetBaseDir(locations.ConfigBaseDir)
uploadPanicLogs(ctx, opts.CRURL, dir)
}
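
maybeReportPanics pairs a hard deadline with a softer progress notice: one context times out the whole upload after panicUploadMaxWait, while a goroutine warns if things are still running after panicUploadNoticeWait. The pattern in isolation, with shortened made-up durations and a stand-in for the upload:

package main

import (
	"context"
	"fmt"
	"time"
)

func main() {
	const (
		maxWait    = 3 * time.Second // hard deadline for the whole operation
		noticeWait = 1 * time.Second // when to warn that it is taking a while
	)

	ctx, cancel := context.WithTimeout(context.Background(), maxWait)
	defer cancel()

	// Print a notice if the work outlives noticeWait but hasn't hit the deadline.
	go func() {
		select {
		case <-ctx.Done():
		case <-time.After(noticeWait):
			fmt.Println("still working, please wait...")
		}
	}()

	slowOperation(ctx)
}

// slowOperation is a stand-in for uploadPanicLogs: it respects ctx and gives
// up when the deadline passes.
func slowOperation(ctx context.Context) {
	select {
	case <-time.After(2 * time.Second):
		fmt.Println("done")
	case <-ctx.Done():
		fmt.Println("gave up:", ctx.Err())
	}
}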

View File

@@ -7,6 +7,7 @@
package main
import (
"io"
"io/ioutil"
"os"
"path/filepath"
@@ -14,6 +15,126 @@ import (
"time"
)
func TestRotatedFile(t *testing.T) {
// Verify that log rotation happens.
dir, err := ioutil.TempDir("", "syncthing")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(dir)
open := func(name string) (io.WriteCloser, error) {
return os.Create(name)
}
logName := filepath.Join(dir, "log.txt")
testData := []byte("12345678\n")
maxSize := int64(len(testData) + len(testData)/2)
// We allow the log file plus two rotated copies.
rf, err := newRotatedFile(logName, open, maxSize, 2)
if err != nil {
t.Fatal(err)
}
// Write some bytes.
if _, err := rf.Write(testData); err != nil {
t.Fatal(err)
}
// They should be in the log.
checkSize(t, logName, len(testData))
checkNotExist(t, logName+".0")
// Write some more bytes. We should rotate and write into a new file as the
// new bytes don't fit.
if _, err := rf.Write(testData); err != nil {
t.Fatal(err)
}
checkSize(t, logName, len(testData))
checkSize(t, numberedFile(logName, 0), len(testData))
checkNotExist(t, logName+".1")
// Write another byte. That should fit without causing an extra rotate.
_, _ = rf.Write([]byte{42})
checkSize(t, logName, len(testData)+1)
checkSize(t, numberedFile(logName, 0), len(testData))
checkNotExist(t, numberedFile(logName, 1))
// Write some more bytes. We should rotate and write into a new file as the
// new bytes don't fit.
if _, err := rf.Write(testData); err != nil {
t.Fatal(err)
}
checkSize(t, logName, len(testData))
checkSize(t, numberedFile(logName, 0), len(testData)+1) // the one we wrote extra to, now rotated
checkSize(t, numberedFile(logName, 1), len(testData))
checkNotExist(t, numberedFile(logName, 2))
// Write some more bytes. We should rotate and write into a new file as the
// new bytes don't fit.
if _, err := rf.Write(testData); err != nil {
t.Fatal(err)
}
checkSize(t, logName, len(testData))
checkSize(t, numberedFile(logName, 0), len(testData))
checkSize(t, numberedFile(logName, 1), len(testData)+1)
checkNotExist(t, numberedFile(logName, 2)) // exceeds maxFiles so deleted
}
func TestNumberedFile(t *testing.T) {
// Mostly just illustrates where the number ends up and makes sure it
// doesn't crash without an extension.
cases := []struct {
in string
num int
out string
}{
{
in: "syncthing.log",
num: 42,
out: "syncthing.42.log",
},
{
in: filepath.Join("asdfasdf", "syncthing.log.txt"),
num: 42,
out: filepath.Join("asdfasdf", "syncthing.log.42.txt"),
},
{
in: "syncthing-log",
num: 42,
out: "syncthing-log.42",
},
}
for _, tc := range cases {
res := numberedFile(tc.in, tc.num)
if res != tc.out {
t.Errorf("numberedFile(%q, %d) => %q, expected %q", tc.in, tc.num, res, tc.out)
}
}
}
func checkSize(t *testing.T, name string, size int) {
t.Helper()
info, err := os.Lstat(name)
if err != nil {
t.Fatal(err)
}
if info.Size() != int64(size) {
t.Errorf("%s wrong size: %d != expected %d", name, info.Size(), size)
}
}
func checkNotExist(t *testing.T, name string) {
t.Helper()
_, err := os.Lstat(name)
if !os.IsNotExist(err) {
t.Errorf("%s should not exist", name)
}
}
func TestAutoClosedFile(t *testing.T) {
os.RemoveAll("_autoclose")
defer os.RemoveAll("_autoclose")
@@ -22,7 +143,10 @@ func TestAutoClosedFile(t *testing.T) {
data := []byte("hello, world\n")
// An autoclosed file that closes very quickly
ac := newAutoclosedFile(file, time.Millisecond, time.Millisecond)
ac, err := newAutoclosedFile(file, time.Millisecond, time.Millisecond)
if err != nil {
t.Fatal(err)
}
// Write some data.
if _, err := ac.Write(data); err != nil {
@@ -64,21 +188,23 @@ func TestAutoClosedFile(t *testing.T) {
}
// Open the file again.
ac = newAutoclosedFile(file, time.Second, time.Second)
ac, err = newAutoclosedFile(file, time.Second, time.Second)
if err != nil {
t.Fatal(err)
}
// Write something
if _, err := ac.Write(data); err != nil {
t.Fatal(err)
}
// It should now contain only one write, because the first open
// should be a truncate.
// It should now contain three writes, as the file is always opened for appending
bs, err = ioutil.ReadFile(file)
if err != nil {
t.Fatal(err)
}
if len(bs) != len(data) {
t.Fatalf("Write failed, expected %d bytes, not %d", len(data), len(bs))
if len(bs) != 3*len(data) {
t.Fatalf("Write failed, expected %d bytes, not %d", 3*len(data), len(bs))
}
// Close.

View File

@@ -38,7 +38,10 @@ func savePerfStats(file string) {
t0 := time.Now()
for t := range time.NewTicker(250 * time.Millisecond).C {
syscall.Getrusage(syscall.RUSAGE_SELF, &rusage)
if err := syscall.Getrusage(syscall.RUSAGE_SELF, &rusage); err != nil {
continue
}
curTime := time.Now().UnixNano()
timeDiff := curTime - prevTime
curUsage := rusage.Utime.Nano() + rusage.Stime.Nano()

View File

@@ -1,231 +0,0 @@
// Copyright (C) 2015 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"time"
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/sync"
"github.com/thejerf/suture"
)
// The folderSummaryService adds summary information events (FolderSummary and
// FolderCompletion) into the event stream at certain intervals.
type folderSummaryService struct {
*suture.Supervisor
cfg configIntf
model modelIntf
stop chan struct{}
immediate chan string
// For keeping track of folders to recalculate for
foldersMut sync.Mutex
folders map[string]struct{}
// For keeping track of when the last event request on the API was
lastEventReq time.Time
lastEventReqMut sync.Mutex
}
func newFolderSummaryService(cfg configIntf, m modelIntf) *folderSummaryService {
service := &folderSummaryService{
Supervisor: suture.New("folderSummaryService", suture.Spec{
PassThroughPanics: true,
}),
cfg: cfg,
model: m,
stop: make(chan struct{}),
immediate: make(chan string),
folders: make(map[string]struct{}),
foldersMut: sync.NewMutex(),
lastEventReqMut: sync.NewMutex(),
}
service.Add(serviceFunc(service.listenForUpdates))
service.Add(serviceFunc(service.calculateSummaries))
return service
}
func (c *folderSummaryService) Stop() {
c.Supervisor.Stop()
close(c.stop)
}
// listenForUpdates subscribes to the event bus and makes note of folders that
// need their data recalculated.
func (c *folderSummaryService) listenForUpdates() {
sub := events.Default.Subscribe(events.LocalIndexUpdated | events.RemoteIndexUpdated | events.StateChanged | events.RemoteDownloadProgress | events.DeviceConnected | events.FolderWatchStateChanged)
defer events.Default.Unsubscribe(sub)
for {
// This loop needs to be fast so we don't miss too many events.
select {
case ev := <-sub.C():
if ev.Type == events.DeviceConnected {
// When a device connects we schedule a refresh of all
// folders shared with that device.
data := ev.Data.(map[string]string)
deviceID, _ := protocol.DeviceIDFromString(data["id"])
c.foldersMut.Lock()
nextFolder:
for _, folder := range c.cfg.Folders() {
for _, dev := range folder.Devices {
if dev.DeviceID == deviceID {
c.folders[folder.ID] = struct{}{}
continue nextFolder
}
}
}
c.foldersMut.Unlock()
continue
}
// The other events all have a "folder" attribute that they
// affect. Whenever the local or remote index is updated for a
// given folder we make a note of it.
data := ev.Data.(map[string]interface{})
folder := data["folder"].(string)
switch ev.Type {
case events.StateChanged:
if data["to"].(string) == "idle" && data["from"].(string) == "syncing" {
// The folder changed to idle from syncing. We should do an
// immediate refresh to update the GUI. The send to
// c.immediate must be nonblocking so that we can continue
// handling events.
c.foldersMut.Lock()
select {
case c.immediate <- folder:
delete(c.folders, folder)
default:
c.folders[folder] = struct{}{}
}
c.foldersMut.Unlock()
}
default:
// This folder needs to be refreshed whenever we do the next
// refresh.
c.foldersMut.Lock()
c.folders[folder] = struct{}{}
c.foldersMut.Unlock()
}
case <-c.stop:
return
}
}
}
// calculateSummaries periodically recalculates folder summaries and
// completion percentage, and sends the results on the event bus.
func (c *folderSummaryService) calculateSummaries() {
const pumpInterval = 2 * time.Second
pump := time.NewTimer(pumpInterval)
for {
select {
case <-pump.C:
t0 := time.Now()
for _, folder := range c.foldersToHandle() {
c.sendSummary(folder)
}
// We don't want to spend all our time calculating summaries. Let's
// set an arbitrary limit at not spending more than about 30% of
// our time here...
wait := 2*time.Since(t0) + pumpInterval
pump.Reset(wait)
case folder := <-c.immediate:
c.sendSummary(folder)
case <-c.stop:
return
}
}
}
// foldersToHandle returns the list of folders needing a summary update, and
// clears the list.
func (c *folderSummaryService) foldersToHandle() []string {
// We only recalculate summaries if someone is listening to events
// (a request to /rest/events has been made within the last
// pingEventInterval).
c.lastEventReqMut.Lock()
last := c.lastEventReq
c.lastEventReqMut.Unlock()
if time.Since(last) > defaultEventTimeout {
return nil
}
c.foldersMut.Lock()
res := make([]string, 0, len(c.folders))
for folder := range c.folders {
res = append(res, folder)
delete(c.folders, folder)
}
c.foldersMut.Unlock()
return res
}
// sendSummary sends the summary events for a single folder
func (c *folderSummaryService) sendSummary(folder string) {
// The folder summary contains how many bytes, files etc
// are in the folder and how in sync we are.
data, err := folderSummary(c.cfg, c.model, folder)
if err != nil {
return
}
events.Default.Log(events.FolderSummary, map[string]interface{}{
"folder": folder,
"summary": data,
})
for _, devCfg := range c.cfg.Folders()[folder].Devices {
if devCfg.DeviceID.Equals(myID) {
// We already know about ourselves.
continue
}
if _, ok := c.model.Connection(devCfg.DeviceID); !ok {
// We're not interested in disconnected devices.
continue
}
// Get completion percentage of this folder for the
// remote device.
comp := jsonCompletion(c.model.Completion(devCfg.DeviceID, folder))
comp["folder"] = folder
comp["device"] = devCfg.DeviceID.String()
events.Default.Log(events.FolderCompletion, comp)
}
}
func (c *folderSummaryService) gotEventRequest() {
c.lastEventReqMut.Lock()
c.lastEventReq = time.Now()
c.lastEventReqMut.Unlock()
}
// serviceFunc wraps a function to create a suture.Service without stop
// functionality.
type serviceFunc func()
func (f serviceFunc) Serve() { f() }
func (f serviceFunc) Stop() {}
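
The roughly-30% figure in calculateSummaries follows from the reset arithmetic: after spending elapsed time on summaries, the timer is reset to 2*elapsed + pumpInterval, so the share of time spent calculating is elapsed/(3*elapsed + pumpInterval), which stays below one third. A quick check with a few sample values:

package main

import (
	"fmt"
	"time"
)

func main() {
	const pumpInterval = 2 * time.Second

	// share = elapsed / (elapsed + wait) = elapsed / (3*elapsed + pumpInterval) < 1/3
	for _, elapsed := range []time.Duration{100 * time.Millisecond, time.Second, 10 * time.Second} {
		wait := 2*elapsed + pumpInterval
		share := float64(elapsed) / float64(elapsed+wait)
		fmt.Printf("elapsed=%v wait=%v share=%.0f%%\n", elapsed, wait, share*100)
	}
}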

View File

@@ -1,39 +0,0 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
// +build ignore
package main
import (
"bytes"
"fmt"
"io"
"os"
)
func main() {
buf := make([]byte, 4096)
var err error
for err == nil {
n, err := io.ReadFull(os.Stdin, buf)
if n > 0 {
buf = buf[:n]
repl := bytes.Replace(buf, []byte("\n"), []byte("\r\n"), -1)
_, err = os.Stdout.Write(repl)
if err != nil {
fmt.Println(err)
os.Exit(1)
}
}
if err == io.EOF {
return
}
buf = buf[:cap(buf)]
}
fmt.Println(err)
os.Exit(1)
}

Some files were not shown because too many files have changed in this diff.