Compare commits


1 Commit

Author: Jakob Borg
SHA1: 897f238b91
Message: lib/db: Add hacky way to adjust database parameters (#5889)
This adds a set of magical environment variables that can be used to
tweak the database parameters. It's totally undocumented and not
intended to be a long term or supported thing.

It's ugly, but there is a backstory. I have a couple of large
installations where the database options are inefficient or otherwise
suboptimal (24/7 compaction going on and stuff like that). I don't
*know* the correct database parameters, nor yet the formula or method to
derive them by, so this requires experimentation. Experimentation needs
to happen partly in production, and rolling out new builds for every
tweak isn't practical. This provides override points for all reasonable
values, while not changing anything by default.

Ideally, at the end of such experimentation, we'll know which values are
relevant to change and in what manner, and can provide a more user
friendly knob to do so - or do it automatically based on the database
size.
Date: 2019-07-30 21:41:18 +02:00
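
The override pattern described in the commit message — each database tunable keeps its built-in default unless a corresponding environment variable is set — can be sketched roughly like this. The variable names, defaults, and helper function below are hypothetical illustrations, not the ones used in the PR:

```
// Minimal sketch of the "env var overrides a default" pattern described in
// the commit message. Names and defaults are hypothetical, not from the PR.
package main

import (
	"fmt"
	"os"
	"strconv"
)

// envInt returns the value of the named environment variable as an int,
// falling back to def when the variable is unset or not a number.
func envInt(name string, def int) int {
	if v, ok := os.LookupEnv(name); ok {
		if n, err := strconv.Atoi(v); err == nil {
			return n
		}
	}
	return def
}

func main() {
	// Nothing changes by default; setting e.g. EXAMPLE_DB_WRITE_BUFFER_MIB=64
	// overrides just that one knob for an experiment.
	blockSizeKiB := envInt("EXAMPLE_DB_BLOCK_SIZE_KIB", 4)
	writeBufferMiB := envInt("EXAMPLE_DB_WRITE_BUFFER_MIB", 4)
	fmt.Printf("block size: %d KiB, write buffer: %d MiB\n", blockSizeKiB, writeBufferMiB)
}
```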
1672 changed files with 482412 additions and 25210 deletions


@@ -1,5 +0,0 @@
coverage:
range: "40...100"
ignore:
- "**.pb.go"


@@ -36,7 +36,7 @@ its entirety.
### Version Information
Syncthing Version: v1.x.y
Syncthing Version: v0.x.y
OS Version: Windows 7 / Ubuntu 14.04 / ...
Browser Version: (if applicable, for GUI issues)

.github/SECURITY.md (vendored, 48 changed lines)

@@ -1,48 +0,0 @@
## Reporting a Vulnerability
If you believe that you've found a Syncthing-related security vulnerability, please report it by sending email to the address security@syncthing.net. The PGP key for security@syncthing.net (B683AD7B76CAB013) below can be used to send encrypted mail or to verify responses received from that address.
```
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1
mQENBFShFlIBCACsW346HYskhKhxrdZMjyU5Ntsjvg6/ogqINDPoL10/oaIP0G+t
7zzC0K5Cq29ix43kNNLKTJNyPkdTeaJEcqslMUt6tovjHwoKJ073GP0W3KsNvBRg
ffCZOAexGfOsBSL9KHaYGK67Py3TFgtN1H/EmboU1arrLfAMrmqOip++EGqOxjse
gH0qk7Mk1TqEC5Xh3NGE7r1UobAlqdUv5E3v7U11NhAdP1zu/XZ/zvP5mwVQJMLv
iZyeWGliNI8nEeRjYw+S85f4gq7H2mgoeNBN4WwwK1hhz9qpvCsgPW3XqlExTPI4
1vM4PxpiFIuF0zuy2OwsmjrpTCZeBscr4Tj5ABEBAAG0K1N5bmN0aGluZyBTZWN1
cml0eSA8c2VjdXJpdHlAc3luY3RoaW5nLm5ldD6JATgEEwECACIFAlShFlICGwMG
CwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJELaDrXt2yrATB1UH/jnnIa6DekCA
2V36N/2+pFSNLVWeoZQxZ9ne9S7WSubaoK4oCWiuChPSAy5hagnKnNA3a9wrz5iN
0hgCA88NnTU18biZW6HX8xWEd3dnY3fX1sG0XdHuKlFria8ByfcrbShf/CttXkEd
Y5qPHH9aLIMtksBS1MIsRYrOCHiLNYFCKlbjDDCGT3tuk/yaU7aBAFVPIag54afC
zZAIvTIgRruLWkNT/sSEJx9ekhJzO/+pXNbSSHwhoj5OJh7sQjZWwPzNazelk48R
+ku8VpB5Wxgk2MnJPf1RxF+7saBAaeAVZsJcPN+Jp7u9LCFelIRn1ISsHbhLhyqL
cPXqllltLCKJAhwEEAECAAYFAlShFlsACgkQSfWuwLzlJMcEpA//VvZYHGZgZMM0
bBkYnjManYnJkHSQ2pHsjquSFH+fbfq2FxxTk/nQ/IAvBzt8NDTT3ylPOmsl4A8q
r8BxsRNENhU7zDQrKC5NtYUrzXhPGo8qfDwkeLyqd2msUvHn7EH3PNgiN2QaFWxE
21FqoXIpERKJRgRtv1SYZoMyNGjmT2hta1ZbwEfrPJnzjYhneoUGsRApG7p+uPqe
4LRpbG3Fa2vBm/UWUrOe+6jPzvMokjIJbdK0IjXamFAzwYW5fSZaFa94mR6L5f5m
X8Dhp66mBRTx7qk6ldEqptvaYGqihaWP1xbDaApQsGHQujtcBYGp+edlkabuW5h7
stl/7QTFkEPqHT4ybxEX0rLoFUGq56WUlKp27z40keStaXMgfrsxJtkz66Xs8vU4
7qZcLAcPsin+y0toqavtwtM/L7uCMu1yhbFRGJ+JL6saJAqzwS8l7r6E2R7OnVj1
BdASgxx1TgzW6ZFW5p1Iy6mtpkBypsnp8s3UcP3GFRnQe9gi1EXHjzuZAUGafe4Q
juvJ8t8xIcQMFuAylNIVyXvIWJoehqsVY9EBgVtE4y1SRh8XTT3Tddn8ab5fl7uh
HWAY8cRlv6WIOhK8w8oroiYx0SID8jMeEwJBS4DL7qWtJMkDo8ZEJiB+Pkd963MX
05QXt33AvJJ9PmbGCHkcH7198tCmA+e5AQ0EVKEWUgEIAPGczHpa6NdxY0pm+tIp
btiA6gdPE70pJYgJTKX7siCQ2w770H3CBSKmqEXadr7WnfIgUYIDaSxadeGzB/Mn
3SHCYRCqbA7mwu2k4wNNvCEM0xZqFAvaDJ4avlZ5oiMT8IFHKsjC77nkhmfXaIGt
hn10H2MFADjvJYul7vR+Ghg1wGhTGWo2u7VVj9BI+AfvnWaouFI0cx2KNWEI/Ocj
z6jk8nmC3yOEFQECM/hF4lkAOv9CQUa8UcvAr31trzawmV1iSsKjmVZgqd0N4T8f
hUikqUPZGNCRcqEUffTzggIyGPbedFnZw9Di7o1xByxyTrZycemAVqaVGF+9nFLG
pccAEQEAAYkBHwQYAQIACQUCVKEWUgIbDAAKCRC2g617dsqwEzrjB/9q0T8A4XUE
p0g6xq86jhmh4jlEedxrfXUL3R6ejFtuKMThulxEP0xiQ7xLzBhOnvyxLCsVbjp8
0JtJTVCq44UzUiIBuVNRoYG29uXuTUL2UtI27VhjFxMzLDwZL97tbGR6lzdM/+U5
9PC+PIvS0yz13z2t3/x3KUOnBgxZnpy9h4AdKwjrNVtnQdGDXSKlJBLb57TFcS94
f3roZ5Gpw3AWYSmSSiZWbhks2UNzYSQ84LAKlV1NkktO+qRc/pUqr6IxMjOKc7XP
e8u1Nst6fN3GNqZOV+jUYs/fqrJgp1TUWjNTuf22Rl0Idz7XLPJKYFh9W7T/4MbU
M7Q8GZuww1rk
=No/v
-----END PGP PUBLIC KEY BLOCK-----
```


@@ -1,26 +0,0 @@
linters-settings:
maligned:
suggest-new: true
linters:
enable-all: true
disable:
- goimports
- depguard
- lll
- gochecknoinits
- gochecknoglobals
- gofmt
- scopelint
- gocyclo
- funlen
- wsl
- gocognit
- godox
service:
golangci-lint-version: 1.21.x
prepare:
- rm -f go.sum # 1.12 -> 1.13 issues with QUIC-go
- GO111MODULE=on go mod vendor
- go run build.go assets

AUTHORS (42 changed lines)

@@ -16,26 +16,20 @@
Aaron Bieber (qbit) <qbit@deftly.net>
Adam Piggott (ProactiveServices) <aD@simplypeachy.co.uk> <simplypeachy@users.noreply.github.com> <ProactiveServices@users.noreply.github.com> <adam@proactiveservices.co.uk>
Adel Qalieh (adelq) <aqalieh95@gmail.com> <adelq@users.noreply.github.com>
Alan Pope <alan@popey.com>
Alessandro G. (alessandro.g89) <alessandro.g89@gmail.com>
Alexander Graf (alex2108) <register-github@alex-graf.de>
Alexandre Viau (aviau) <alexandre@alexandreviau.net> <aviau@debian.org>
Aman Gupta <aman@tmm1.net>
Anderson Mesquita (andersonvom) <andersonvom@gmail.com>
andresvia <andres.via@gmail.com>
Andrew Dunham (andrew-d) <andrew@du.nham.ca>
Andrew Rabert (nvllsvm) <ar@nullsum.net> <6550543+nvllsvm@users.noreply.github.com>
Andrey D (scienmind) <scintertech@cryptolab.net> <scienmind@users.noreply.github.com>
André Colomb (acolomb) <src@andre.colomb.de> <github.com@andre.colomb.de>
andyleap <andyleap@gmail.com>
Antoine Lamielle (0x010C) <antoine.lamielle@0x010c.fr> <gh@0x010c.fr>
Antony Male (canton7) <antony.male@gmail.com>
Aranjedeath <Aranjedeath@users.noreply.github.com>
Arkadiusz Tymiński <gevleeog@gmail.com>
Arthur Axel fREW Schmidt (frioux) <frew@afoolishmanifesto.com> <frioux@gmail.com>
Artur Zubilewicz <AkaZecik@users.noreply.github.com>
Audrius Butkevicius (AudriusButkevicius) <audrius.butkevicius@gmail.com> <github@audrius.rocks>
Aurélien Rainone <476650+arl@users.noreply.github.com>
Audrius Butkevicius (AudriusButkevicius) <audrius.butkevicius@gmail.com>
BAHADIR YILMAZ <bahadiryilmaz32@gmail.com>
Bart De Vries (mogwa1) <devriesb@gmail.com>
Ben Curthoys (bencurthoys) <ben@bencurthoys.com>
@@ -46,25 +40,21 @@ Benedikt Heine (bebehei) <bebe@bebehei.de>
Benedikt Morbach <benedikt.morbach@googlemail.com>
Benno Fünfstück <benno.fuenfstueck@gmail.com>
Benny Ng (tpng) <benny.tpng@gmail.com>
boomsquared <54829195+boomsquared@users.noreply.github.com>
Boris Rybalkin <ribalkin@gmail.com>
Brandon Philips (philips) <brandon@ifup.org>
Brendan Long (brendanlong) <self@brendanlong.com>
Brian R. Becker (brbecker) <brbecker@gmail.com>
Caleb Callaway (cqcallaw) <enlightened.despot@gmail.com>
Carsten Hagemann (carstenhag) <moter8@gmail.com> <carsten@chagemann.de>
Carsten Hagemann (Moter8) <moter8@gmail.com>
Cathryne Linenweaver (Cathryne) <cathryne.linenweaver@gmail.com> <Cathryne@users.noreply.github.com> <katrinleinweber@MAC.local>
Cedric Staniewski (xduugu) <cedric@gmx.ca>
chenrui <rui@meetup.com>
Chris Howie (cdhowie) <me@chrishowie.com>
Chris Joel (cdata) <chris@scriptolo.gy>
Chris Tonkinson <chris@masterbran.ch>
chucic <chucic@seznam.cz>
Colin Kennedy (moshen) <moshen.colin@gmail.com>
Cromefire_ <tim.l@nghorst.net> <26320625+cromefire@users.noreply.github.com>
Cyprien Devillez <cypx@users.noreply.github.com>
Cromefire_ <tim.l@nghorst.net>
Dale Visser <dale.visser@live.com>
Dan <benda.daniel@gmail.com>
Daniel Bergmann (brgmnn) <dan.arne.bergmann@gmail.com> <brgmnn@users.noreply.github.com>
Daniel Harte (norgeous) <daniel@harte.me> <daniel@danielharte.co.uk> <norgeous@users.noreply.github.com>
Daniel Martí (mvdan) <mvdan@mvdan.cc>
@@ -72,25 +62,19 @@ Darshil Chanpura (dtchanpura) <dtchanpura@gmail.com> <dcprime314@gmail.com>
David Rimmer (dinosore) <dinosore@dbrsoftware.co.uk>
Denis A. (dva) <denisva@gmail.com>
Dennis Wilson (snnd) <dw@risu.io>
dependabot-preview[bot] <dependabot-preview[bot]@users.noreply.github.com> <27856297+dependabot-preview[bot]@users.noreply.github.com>
dependabot[bot] <dependabot[bot]@users.noreply.github.com>
derekriemer <derek.riemer@colorado.edu>
desbma <desbma@users.noreply.github.com>
Dmitry Saveliev (dsaveliev) <d.e.saveliev@gmail.com>
Domenic Horner <domenic@tgxn.net>
Dominik Heidler (asdil12) <dominik@heidler.eu>
Elias Jarlebring (jarlebring) <jarlebring@gmail.com>
Elliot Huffman <thelich2@gmail.com>
Emil Hessman (ceh) <emil@hessman.se>
Erik Meitner (WSGCSysadmin) <e.meitner@willystreet.coop>
Evgeny Kuznetsov <evgeny@kuznetsov.md>
Federico Castagnini (facastagnini) <federico.castagnini@gmail.com>
Felix Ableitner (Nutomic) <me@nutomic.com>
Felix Unterpaintner (bigbear2nd) <bigbear2nd@gmail.com>
Francois-Xavier Gsell (zukoo) <fxgsell@gmail.com>
Frank Isemann (fti7) <frank@isemann.name>
georgespatton <georgespatton@users.noreply.github.com>
ghjklw <malo@jaffre.info>
Gilli Sigurdsson (gillisig) <gilli@vx.is>
Graham Miln (grahammiln) <graham.miln@dssw.co.uk> <graham.miln@miln.eu>
Han Boetes <han@boetes.org>
@@ -99,11 +83,8 @@ Heiko Zuerker (Smiley73) <heiko@zuerker.org>
Hugo Locurcio <hugo.locurcio@hugo.pro>
Iain Barnett <iainspeed@gmail.com>
Ian Johnson (anonymouse64) <ian.johnson@canonical.com> <person.uwsome@gmail.com>
Ilya Brin <464157+ilyabrin@users.noreply.github.com>
Iskander Sharipov (Alex) <quasilyte@gmail.com>
Jaakko Hannikainen (jgke) <jgke@jgke.fi>
Jacek Szafarkiewicz (hadogenes) <szafar@linux.pl>
Jacob <jyundt@gmail.com>
Jake Peterson (acogdev) <jake@acogdev.com>
Jakob Borg (calmh) <jakob@nym.se> <jakob@kastelo.net>
James Patterson (jpjp) <jamespatterson@operamail.com> <jpjp@users.noreply.github.com>
@@ -111,18 +92,15 @@ janost <janost@tuta.io>
Jaroslav Malec (dzarda) <dzardacz@gmail.com>
jaseg <githubaccount@jaseg.net>
Jaya Chithra (jayachithra) <s.k.jayachithra@gmail.com>
jelle van der Waa <jelle@vdwaa.nl>
Jens Diemer (jedie) <github.com@jensdiemer.de> <git@jensdiemer.de>
Jerry Jacobs (xor-gate) <jerry.jacobs@xor-gate.org> <xor-gate@users.noreply.github.com>
Jochen Voss (seehuhn) <voss@seehuhn.de>
Johan Andersson <j@i19.se>
Johan Vromans (sciurius) <jvromans@squirrel.nl>
John Rinehart (fuzzybear3965) <johnrichardrinehart@gmail.com>
Jonas Thelemann <e-mail@jonas-thelemann.de>
Jonathan Cross <jcross@gmail.com>
Jose Manuel Delicado (jmdaweb) <jmdaweb@hotmail.com> <jmdaweb@users.noreply.github.com>
Jörg Thalheim <Mic92@users.noreply.github.com>
Kalle Laine <pahakalle@protonmail.com>
Karol Różycki (krozycki) <rozycki.karol@gmail.com>
Keith Turner <kturner@apache.org>
Kelong Cong (kc1212) <kc04bc@gmx.com> <kc1212@users.noreply.github.com>
@@ -138,19 +116,15 @@ Leo Arias (elopio) <yo@elopio.net>
Liu Siyuan (liusy182) <liusy182@gmail.com> <liusy182@hotmail.com>
Lode Hoste (Zillode) <zillode@zillode.be>
Lord Landon Agahnim (LordLandon) <lordlandon@gmail.com>
Lukas Lihotzki <lukas@lihotzki.de>
Majed Abdulaziz (majedev) <majed.alhajry@gmail.com>
Marc Laporte (marclaporte) <marc@marclaporte.com> <marc@laporte.name>
Marc Pujol (kilburn) <kilburn@la3.org>
Marcin Dziadus (marcindziadus) <dziadus.marcin@gmail.com>
marco-m <marco.molteni@laposte.net>
Marcus Legendre <marcus.legendre@gmail.com>
Mark Pulford (mpx) <mark@kyne.com.au>
Mateusz Naściszewski (mateon1) <matin1111@wp.pl>
Mateusz Ż <thedead4fun@live.com>
Matic Potočnik <hairyfotr@gmail.com>
Matt Burke (burkemw3) <mburke@amplify.com> <burkemw3@gmail.com>
Matt Robenolt <matt@ydekproductions.com>
Matteo Ruina <matteo.ruina@gmail.com>
Maurizio Tomasi <ziotom78@gmail.com>
Max Schulze (kralo) <max.schulze@online.de> <kralo@users.noreply.github.com>
@@ -161,22 +135,15 @@ Michael Ploujnikov (plouj) <ploujj@gmail.com>
Michael Tilli (pyfisch) <pyfisch@gmail.com>
Mike Boone <mike@boonedocks.net>
MikeLund <MikeLund@users.noreply.github.com>
Mingxuan Lin <gdlmx@users.noreply.github.com>
Nate Morrison (nrm21) <natemorrison@gmail.com>
Nicholas Rishel (PrototypeNM1) <rishel.nick@gmail.com> <PrototypeNM1@users.noreply.github.com>
Nico Stapelbroek <3368018+nstapelbroek@users.noreply.github.com>
Nicolas Braud-Santoni <nicolas@braud-santoni.eu>
Niels Peter Roest (Niller303) <nielsproest@hotmail.com> <seje.niels@hotmail.com>
Nils Jakobi (thunderstorm99) <jakobi.nils@gmail.com>
Nitroretro <43112364+Nitroretro@users.noreply.github.com>
NoLooseEnds <jon.koslung@gmail.com>
Oliver Freyermuth <o.freyermuth@googlemail.com>
otbutz <tbutz@optitool.de>
Otiel <Otiel@users.noreply.github.com>
Oyebanji Jacob Mayowa <oyebanji05@gmail.com>
Pablo <pbaeyens31+github@gmail.com>
Pascal Jungblut (pascalj) <github@pascalj.com> <mail@pascal-jungblut.com>
Paul Brit <paulbrit44@gmail.com>
Pawel Palenica (qepasa) <pawelpalenica11@gmail.com>
Paweł Rozlach <vespian@users.noreply.github.com>
perewa <cavalcante.ten@gmail.com>
@@ -192,11 +159,9 @@ Piotr Bejda (piobpl) <piotrb10@gmail.com>
Pramodh KP (pramodhkp) <pramodh.p@directi.com> <1507241+pramodhkp@users.noreply.github.com>
Richard Hartmann <RichiH@users.noreply.github.com>
Robert Carosi (nov1n) <robert@carosi.nl>
Robin Schoonover <robin@cornhooves.org>
Roman Zaynetdinov (zaynetro) <romanznet@gmail.com>
Ross Smith II (rasa) <ross@smithii.com>
rubenbe <github-com-00ff86@vandamme.email>
Ruslan Yevdokymov <38809160+ruslanye@users.noreply.github.com>
Ryan Sullivan (KayoticSully) <kayoticsully@gmail.com>
Sacheendra Talluri (sacheendra) <sacheendra.t@gmail.com>
Scott Klupfel (kluppy) <kluppy@going2blue.com>
@@ -212,7 +177,6 @@ Tim Abell (timabell) <tim@timwise.co.uk>
Tim Howes (timhowes) <timhowes@berkeley.edu>
Tobias Nygren (tnn2) <tnn@nygren.pp.se>
Tobias Tom (tobiastom) <t.tom@succont.de>
Tom Jakubowski <tom@crystae.net>
Tomas Cerveny (kozec) <kozec@kozec.com>
Tommy Thorn <tommy-github-email@thorn.ws>
Tully Robinson (tojrobinson) <tully@tojr.org>


@@ -1,6 +1,6 @@
FROM golang:1.13 AS builder
FROM golang:1.11 AS builder
WORKDIR /src
WORKDIR /go/src/github.com/syncthing/syncthing
COPY . .
ENV CGO_ENABLED=0
@@ -16,13 +16,17 @@ VOLUME ["/var/syncthing"]
RUN apk add --no-cache ca-certificates su-exec
COPY --from=builder /src/syncthing /bin/syncthing
COPY --from=builder /src/script/docker-entrypoint.sh /bin/entrypoint.sh
COPY --from=builder /go/src/github.com/syncthing/syncthing/syncthing /bin/syncthing
ENV PUID=1000 PGID=1000 HOME=/var/syncthing
ENV PUID=1000 PGID=1000
HEALTHCHECK --interval=1m --timeout=10s \
CMD nc -z localhost 8384 || exit 1
ENV STGUIADDRESS=0.0.0.0:8384
ENTRYPOINT ["/bin/entrypoint.sh", "/bin/syncthing", "-home", "/var/syncthing/config"]
ENTRYPOINT \
chown "${PUID}:${PGID}" /var/syncthing \
&& su-exec "${PUID}:${PGID}" \
env HOME=/var/syncthing \
/bin/syncthing \
-home /var/syncthing/config \
-gui-address 0.0.0.0:8384


@@ -1,28 +0,0 @@
FROM golang:1.13 AS builder
WORKDIR /src
COPY . .
ENV CGO_ENABLED=0
ENV BUILD_HOST=syncthing.net
ENV BUILD_USER=docker
RUN rm -f stdiscosrv && go run build.go -no-upgrade build stdiscosrv
FROM alpine
EXPOSE 19200 8443
VOLUME ["/var/stdiscosrv"]
RUN apk add --no-cache ca-certificates su-exec
COPY --from=builder /src/stdiscosrv /bin/stdiscosrv
COPY --from=builder /src/script/docker-entrypoint.sh /bin/entrypoint.sh
ENV PUID=1000 PGID=1000 HOME=/var/stdiscosrv
HEALTHCHECK --interval=1m --timeout=10s \
CMD nc -z localhost 8443 || exit 1
WORKDIR /var/stdiscosrv
ENTRYPOINT ["/bin/entrypoint.sh", "/bin/stdiscosrv"]


@@ -1,28 +0,0 @@
FROM golang:1.13 AS builder
WORKDIR /src
COPY . .
ENV CGO_ENABLED=0
ENV BUILD_HOST=syncthing.net
ENV BUILD_USER=docker
RUN rm -f strelaysrv && go run build.go -no-upgrade build strelaysrv
FROM alpine
EXPOSE 22067 22070
VOLUME ["/var/strelaysrv"]
RUN apk add --no-cache ca-certificates su-exec
COPY --from=builder /src/strelaysrv /bin/strelaysrv
COPY --from=builder /src/script/docker-entrypoint.sh /bin/entrypoint.sh
ENV PUID=1000 PGID=1000 HOME=/var/strelaysrv
HEALTHCHECK --interval=1m --timeout=10s \
CMD nc -z localhost 22067 || exit 1
WORKDIR /var/strelaysrv
ENTRYPOINT ["/bin/entrypoint.sh", "/bin/strelaysrv"]


@@ -1,34 +1,39 @@
# Docker Container for Syncthing
Use the Dockerfile in this repo, or pull the `syncthing/syncthing` image
from Docker Hub.
from Docker Hub. Use volumes to have the synchronized files available on the
host.
Use the `/var/syncthing` volume to have the synchronized files available on the
host. You can add more folders and map them as you prefer.
The exposed volumes are by default:
/var/syncthing/config - the configuration and index directory into the Container
/var/syncthing - the default sync folder into the Container
You can add more folders and map them as you prefer.
Note that Syncthing runs as UID 1000 and GID 1000 by default. These may be
altered with the ``PUID`` and ``PGID`` environment variables.
## Example Usage
Example usage:
```
$ docker pull syncthing/syncthing
$ docker run -p 8384:8384 -p 22000:22000 \
-v /wherever/st-cfg:/var/syncthing/config \
-v /wherever/st-sync:/var/syncthing \
syncthing/syncthing:latest
```
## Discovery
Note that local device discovery will not work with the above command,
resulting in poor local transfer rates if local device addresses are not
manually configured.
Note that local device discovery will not work with the above command resulting
in poor local transfer rates if local device addresses are not manually
configured.
To allow local discovery, the docker host network can be used instead:
```
$ docker pull syncthing/syncthing
$ docker run --network=host \
-v /wherever/st-cfg:/var/syncthing/config \
-v /wherever/st-sync:/var/syncthing \
syncthing/syncthing:latest
```
@@ -36,24 +41,3 @@ $ docker run --network=host \
Be aware that syncthing alone is now in control of what interfaces and ports it
listens on. You can edit the syncthing configuration to change the defaults if
there are conflicts.
## GUI Security
By default Syncthing inside the Docker image listens on 0.0.0.0:8384 to
allow GUI connections via the Docker proxy. This is set by the
`STGUIADDRESS` environment variable in the Dockerfile, as it differs from
what Syncthing would otherwise use by default. This means you should set up
authentication in the GUI, like for any other externally reachable Syncthing
instance. If you do not require the GUI, or you use host networking, you can
unset the `STGUIADDRESS` variable to have Syncthing fall back to listening
on 127.0.0.1:
```
$ docker pull syncthing/syncthing
$ docker run -e STGUIADDRESS= \
-v /wherever/st-sync:/var/syncthing \
syncthing/syncthing:latest
```
With the environment variable unset Syncthing will follow what is set in the
configuration file / GUI settings dialog.


@@ -2,6 +2,7 @@
---
[![Latest Downloads](https://img.shields.io/badge/latest-downloads-brightgreen.svg?style=flat-square)](https://build.syncthing.net/latest/)
[![Latest Linux & Cross Build](https://img.shields.io/teamcity/https/build.syncthing.net/s/Syncthing_BuildLinuxCross.svg?style=flat-square&label=linux+%26+cross+build)](https://build.syncthing.net/viewType.html?buildTypeId=Syncthing_BuildLinuxCross&guest=1)
[![Latest Windows Build](https://img.shields.io/teamcity/https/build.syncthing.net/s/Syncthing_BuildWindows.svg?style=flat-square&label=windows+build)](https://build.syncthing.net/viewType.html?buildTypeId=Syncthing_BuildWindows&guest=1)
[![Latest Mac Build](https://img.shields.io/teamcity/https/build.syncthing.net/s/Syncthing_BuildMac.svg?style=flat-square&label=mac+build)](https://build.syncthing.net/viewType.html?buildTypeId=Syncthing_BuildMac&guest=1)
@@ -62,10 +63,6 @@ There are a few examples for keeping Syncthing running in the background
on your system in [the etc directory][3]. There are also several [GUI
implementations][11] for Windows, Mac and Linux.
## Docker
To run Syncthing in Docker, see [the Docker README][16].
## Vote on features/bugs
We'd like to encourage you to [vote][12] on issues that matter to you.
@@ -114,5 +111,4 @@ All code is licensed under the [MPLv2 License][7].
[13]: https://github.com/syncthing/syncthing/blob/master/GOALS.md
[14]: assets/logo-text-128.png
[15]: https://syncthing.net/
[16]: https://github.com/syncthing/syncthing/blob/master/README-Docker.md

build.go (180 changed lines)

@@ -24,7 +24,6 @@ import (
"os"
"os/exec"
"os/user"
"path"
"path/filepath"
"regexp"
"runtime"
@@ -49,7 +48,6 @@ var (
pkgdir string
cc string
debugBinary bool
coverage bool
timeout = "120s"
)
@@ -57,13 +55,11 @@ type target struct {
name string
debname string
debdeps []string
debpre string
debpost string
description string
buildPkgs []string
buildPkg string
binaryName string
archiveFiles []archiveFile
systemdServices []string
installationFiles []archiveFile
tags []string
}
@@ -77,8 +73,9 @@ type archiveFile struct {
var targets = map[string]target{
"all": {
// Only valid for the "build" and "install" commands as it lacks all
// the archive creation stuff. buildPkgs gets filled out in init()
tags: []string{"purego"},
// the archive creation stuff.
buildPkg: "github.com/syncthing/syncthing/cmd/...",
tags: []string{"purego"},
},
"syncthing": {
// The default target for "build", "install", "tar", "zip", "deb", etc.
@@ -87,7 +84,7 @@ var targets = map[string]target{
debdeps: []string{"libc6", "procps"},
debpost: "script/post-upgrade",
description: "Open Source Continuous File Synchronization",
buildPkgs: []string{"github.com/syncthing/syncthing/cmd/syncthing"},
buildPkg: "github.com/syncthing/syncthing/cmd/syncthing",
binaryName: "syncthing", // .exe will be added automatically for Windows builds
archiveFiles: []archiveFile{
{src: "{{binary}}", dst: "{{binary}}", perm: 0755},
@@ -129,9 +126,8 @@ var targets = map[string]target{
name: "stdiscosrv",
debname: "syncthing-discosrv",
debdeps: []string{"libc6"},
debpre: "cmd/stdiscosrv/scripts/preinst",
description: "Syncthing Discovery Server",
buildPkgs: []string{"github.com/syncthing/syncthing/cmd/stdiscosrv"},
buildPkg: "github.com/syncthing/syncthing/cmd/stdiscosrv",
binaryName: "stdiscosrv", // .exe will be added automatically for Windows builds
archiveFiles: []archiveFile{
{src: "{{binary}}", dst: "{{binary}}", perm: 0755},
@@ -139,17 +135,12 @@ var targets = map[string]target{
{src: "LICENSE", dst: "LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "AUTHORS.txt", perm: 0644},
},
systemdServices: []string{
"cmd/stdiscosrv/etc/linux-systemd/stdiscosrv.service",
},
installationFiles: []archiveFile{
{src: "{{binary}}", dst: "deb/usr/bin/{{binary}}", perm: 0755},
{src: "cmd/stdiscosrv/README.md", dst: "deb/usr/share/doc/syncthing-discosrv/README.txt", perm: 0644},
{src: "LICENSE", dst: "deb/usr/share/doc/syncthing-discosrv/LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "deb/usr/share/doc/syncthing-discosrv/AUTHORS.txt", perm: 0644},
{src: "man/stdiscosrv.1", dst: "deb/usr/share/man/man1/stdiscosrv.1", perm: 0644},
{src: "cmd/stdiscosrv/etc/linux-systemd/default", dst: "deb/etc/default/syncthing-discosrv", perm: 0644},
{src: "cmd/stdiscosrv/etc/firewall-ufw/stdiscosrv", dst: "deb/etc/ufw/applications.d/stdiscosrv", perm: 0644},
},
tags: []string{"purego"},
},
@@ -157,9 +148,8 @@ var targets = map[string]target{
name: "strelaysrv",
debname: "syncthing-relaysrv",
debdeps: []string{"libc6"},
debpre: "cmd/strelaysrv/scripts/preinst",
description: "Syncthing Relay Server",
buildPkgs: []string{"github.com/syncthing/syncthing/cmd/strelaysrv"},
buildPkg: "github.com/syncthing/syncthing/cmd/strelaysrv",
binaryName: "strelaysrv", // .exe will be added automatically for Windows builds
archiveFiles: []archiveFile{
{src: "{{binary}}", dst: "{{binary}}", perm: 0755},
@@ -168,9 +158,6 @@ var targets = map[string]target{
{src: "LICENSE", dst: "LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "AUTHORS.txt", perm: 0644},
},
systemdServices: []string{
"cmd/strelaysrv/etc/linux-systemd/strelaysrv.service",
},
installationFiles: []archiveFile{
{src: "{{binary}}", dst: "deb/usr/bin/{{binary}}", perm: 0755},
{src: "cmd/strelaysrv/README.md", dst: "deb/usr/share/doc/syncthing-relaysrv/README.txt", perm: 0644},
@@ -178,8 +165,6 @@ var targets = map[string]target{
{src: "LICENSE", dst: "deb/usr/share/doc/syncthing-relaysrv/LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "deb/usr/share/doc/syncthing-relaysrv/AUTHORS.txt", perm: 0644},
{src: "man/strelaysrv.1", dst: "deb/usr/share/man/man1/strelaysrv.1", perm: 0644},
{src: "cmd/strelaysrv/etc/linux-systemd/default", dst: "deb/etc/default/syncthing-relaysrv", perm: 0644},
{src: "cmd/strelaysrv/etc/firewall-ufw/strelaysrv", dst: "deb/etc/ufw/applications.d/strelaysrv", perm: 0644},
},
},
"strelaypoolsrv": {
@@ -187,7 +172,7 @@ var targets = map[string]target{
debname: "syncthing-relaypoolsrv",
debdeps: []string{"libc6"},
description: "Syncthing Relay Pool Server",
buildPkgs: []string{"github.com/syncthing/syncthing/cmd/strelaypoolsrv"},
buildPkg: "github.com/syncthing/syncthing/cmd/strelaypoolsrv",
binaryName: "strelaypoolsrv", // .exe will be added automatically for Windows builds
archiveFiles: []archiveFile{
{src: "{{binary}}", dst: "{{binary}}", perm: 0755},
@@ -213,22 +198,11 @@ type dependencyRepo struct {
}
var dependencyRepos = []dependencyRepo{
{path: "protobuf", repo: "https://github.com/gogo/protobuf.git", commit: "v1.2.0"},
{path: "xdr", repo: "https://github.com/calmh/xdr.git", commit: "08e072f9cb16"},
}
func init() {
all := targets["all"]
pkgs, _ := filepath.Glob("cmd/*")
for _, pkg := range pkgs {
pkg = filepath.Base(pkg)
if strings.HasPrefix(pkg, ".") {
// ignore dotfiles
continue
}
all.buildPkgs = append(all.buildPkgs, fmt.Sprintf("github.com/syncthing/syncthing/cmd/%s", pkg))
}
targets["all"] = all
// The "syncthing" target includes a few more files found in the "etc"
// and "extra" dirs.
syncthingPkg := targets["syncthing"]
@@ -279,6 +253,9 @@ func main() {
func runCommand(cmd string, target target) {
switch cmd {
case "setup":
setup()
case "install":
var tags []string
if noupgrade {
@@ -355,27 +332,51 @@ func parseFlags() {
flag.StringVar(&pkgdir, "pkgdir", "", "Set -pkgdir parameter for `go build`")
flag.StringVar(&cc, "cc", os.Getenv("CC"), "Set CC environment variable for `go build`")
flag.BoolVar(&debugBinary, "debug-binary", debugBinary, "Create unoptimized binary to use with delve, set -gcflags='-N -l' and omit -ldflags")
flag.BoolVar(&coverage, "coverage", coverage, "Write coverage profile of tests to coverage.txt")
flag.Parse()
}
func setup() {
packages := []string{
"github.com/alecthomas/gometalinter",
"github.com/AlekSi/gocov-xml",
"github.com/axw/gocov/gocov",
"github.com/FiloSottile/gvt",
"golang.org/x/lint/golint",
"github.com/gordonklaus/ineffassign",
"github.com/mdempsky/unconvert",
"github.com/mitchellh/go-wordwrap",
"github.com/opennota/check/cmd/...",
"github.com/tsenart/deadcode",
"golang.org/x/net/html",
"golang.org/x/tools/cmd/cover",
"honnef.co/go/tools/cmd/gosimple",
"honnef.co/go/tools/cmd/staticcheck",
"honnef.co/go/tools/cmd/unused",
"github.com/josephspurrier/goversioninfo",
}
for _, pkg := range packages {
fmt.Println(pkg)
runPrint(goCmd, "get", "-u", pkg)
}
runPrint(goCmd, "install", "-v", "github.com/syncthing/syncthing/vendor/github.com/gogo/protobuf/protoc-gen-gogofast")
}
func test(pkgs ...string) {
lazyRebuildAssets()
args := []string{"test", "-short", "-timeout", timeout, "-tags", "purego"}
if runtime.GOARCH == "amd64" {
switch runtime.GOOS {
case "darwin", "linux", "freebsd": // , "windows": # See https://github.com/golang/go/issues/27089
args = append(args, "-race")
}
useRace := runtime.GOARCH == "amd64"
switch runtime.GOOS {
case "darwin", "linux", "freebsd", "windows":
default:
useRace = false
}
if coverage {
args = append(args, "-covermode", "atomic", "-coverprofile", "coverage.txt", "-coverpkg", strings.Join(pkgs, ","))
if useRace {
runPrint(goCmd, append([]string{"test", "-short", "-race", "-timeout", timeout, "-tags", "purego"}, pkgs...)...)
} else {
runPrint(goCmd, append([]string{"test", "-short", "-timeout", timeout, "-tags", "purego"}, pkgs...)...)
}
runPrint(goCmd, append(args, pkgs...)...)
}
func bench(pkgs ...string) {
@@ -394,6 +395,9 @@ func install(target target, tags []string) {
}
os.Setenv("GOBIN", filepath.Join(cwd, "bin"))
args := []string{"install", "-v"}
args = appendParameters(args, tags, target)
os.Setenv("GOOS", goos)
os.Setenv("GOARCH", goarch)
os.Setenv("CC", cc)
@@ -409,20 +413,19 @@ func install(target target, tags []string) {
defer shouldCleanupSyso(sysoPath)
}
for _, pkg := range target.buildPkgs {
args := []string{"install", "-v"}
args = appendParameters(args, tags, pkg)
runPrint(goCmd, args...)
}
runPrint(goCmd, args...)
}
func build(target target, tags []string) {
lazyRebuildAssets()
tags = append(target.tags, tags...)
rmr(target.BinaryName())
args := []string{"build", "-v"}
args = appendParameters(args, tags, target)
os.Setenv("GOOS", goos)
os.Setenv("GOARCH", goarch)
os.Setenv("CC", cc)
@@ -442,15 +445,10 @@ func build(target target, tags []string) {
defer shouldCleanupSyso(sysoPath)
}
for _, pkg := range target.buildPkgs {
args := []string{"build", "-v"}
args = appendParameters(args, tags, pkg)
runPrint(goCmd, args...)
}
runPrint(goCmd, args...)
}
func appendParameters(args []string, tags []string, pkg string) []string {
func appendParameters(args []string, tags []string, target target) []string {
if pkgdir != "" {
args = append(args, "-pkgdir", pkgdir)
}
@@ -466,7 +464,7 @@ func appendParameters(args []string, tags []string, pkg string) []string {
if !debugBinary {
// Regular binaries get version tagged and skip some debug symbols
args = append(args, "-ldflags", ldflags(path.Base(pkg)))
args = append(args, "-ldflags", ldflags())
} else {
// -gcflags to disable optimizations and inlining. Skip -ldflags
// because `Could not launch program: decoding dwarf section info at
@@ -475,7 +473,7 @@ func appendParameters(args []string, tags []string, pkg string) []string {
args = append(args, "-gcflags", "-N -l")
}
return append(args, pkg)
return append(args, target.buildPkg)
}
func buildTar(target target) {
@@ -489,7 +487,10 @@ func buildTar(target target) {
}
build(target, tags)
codesign(target)
if goos == "darwin" {
macosCodesign(target.BinaryName())
}
for i := range target.archiveFiles {
target.archiveFiles[i].src = strings.Replace(target.archiveFiles[i].src, "{{binary}}", target.BinaryName(), 1)
@@ -512,7 +513,10 @@ func buildZip(target target) {
}
build(target, tags)
codesign(target)
if goos == "windows" {
windowsCodesign(target.BinaryName())
}
for i := range target.archiveFiles {
target.archiveFiles[i].src = strings.Replace(target.archiveFiles[i].src, "{{binary}}", target.BinaryName(), 1)
@@ -576,15 +580,9 @@ func buildDeb(target target) {
for _, dep := range target.debdeps {
args = append(args, "-d", dep)
}
for _, service := range target.systemdServices {
args = append(args, "--deb-systemd", service)
}
if target.debpost != "" {
args = append(args, "--after-upgrade", target.debpost)
}
if target.debpre != "" {
args = append(args, "--before-install", target.debpre)
}
runPrint("fpm", args...)
}
@@ -717,7 +715,6 @@ func listFiles(dir string) []string {
if err != nil {
return err
}
if fi.Mode().IsRegular() {
res = append(res, path)
}
@@ -764,21 +761,13 @@ func shouldRebuildAssets(target, srcdir string) bool {
}
func proto() {
pv := protobufVersion()
dependencyRepos = append(dependencyRepos,
dependencyRepo{path: "protobuf", repo: "https://github.com/gogo/protobuf.git", commit: pv},
)
runPrint(goCmd, "get", fmt.Sprintf("github.com/gogo/protobuf/protoc-gen-gogofast@%v", pv))
os.MkdirAll("repos", 0755)
for _, dep := range dependencyRepos {
path := filepath.Join("repos", dep.path)
if _, err := os.Stat(path); err != nil {
runPrintInDir("repos", "git", "clone", dep.repo, dep.path)
} else {
runPrintInDir(path, "git", "fetch")
runPrintInDir(path, "git", "checkout", dep.commit)
}
runPrintInDir(path, "git", "checkout", dep.commit)
}
runPrint(goCmd, "generate", "github.com/syncthing/syncthing/lib/...", "github.com/syncthing/syncthing/cmd/stdiscosrv")
}
@@ -799,7 +788,7 @@ func transifex() {
runPrint(goCmd, "run", "../../../../script/transifexdl.go")
}
func ldflags(program string) string {
func ldflags() string {
sep := '='
if goVersion > 0 && goVersion < 1.5 {
sep = ' '
@@ -807,14 +796,10 @@ func ldflags(program string) string {
b := new(bytes.Buffer)
b.WriteString("-w")
fmt.Fprintf(b, " -X github.com/syncthing/syncthing/lib/build.Version%c%s", sep, version)
fmt.Fprintf(b, " -X github.com/syncthing/syncthing/lib/build.Stamp%c%d", sep, buildStamp())
fmt.Fprintf(b, " -X github.com/syncthing/syncthing/lib/build.User%c%s", sep, buildUser())
fmt.Fprintf(b, " -X github.com/syncthing/syncthing/lib/build.Host%c%s", sep, buildHost())
fmt.Fprintf(b, " -X github.com/syncthing/syncthing/lib/build.Program%c%s", sep, program)
if v := os.Getenv("EXTRA_LDFLAGS"); v != "" {
fmt.Fprintf(b, " %s", v)
}
fmt.Fprintf(b, " -X main.Version%c%s", sep, version)
fmt.Fprintf(b, " -X main.BuildStamp%c%d", sep, buildStamp())
fmt.Fprintf(b, " -X main.BuildUser%c%s", sep, buildUser())
fmt.Fprintf(b, " -X main.BuildHost%c%s", sep, buildHost())
return b.String()
}
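
For context on the ldflags() hunk above: Go's `-X importpath.name=value` linker flag stamps a package-level string variable at build time, which is how the version, build stamp, user, and host values get injected. A minimal, self-contained illustration of that mechanism (not Syncthing's actual build code) follows:

```
// Stand-alone illustration of -X ldflags injection, as used by ldflags() above.
package main

import "fmt"

// Defaults apply when the binary is built without -ldflags.
var (
	Version = "unknown-dev"
	Host    = "unknown"
)

func main() {
	// go build -ldflags "-X main.Version=v1.2.3 -X main.Host=buildbox"
	fmt.Printf("version %s built on %s\n", Version, Host)
}
```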
@@ -1173,15 +1158,6 @@ func zipFile(out string, files []archiveFile) {
}
}
func codesign(target target) {
switch goos {
case "windows":
windowsCodesign(target.BinaryName())
case "darwin":
macosCodesign(target.BinaryName())
}
}
func macosCodesign(file string) {
if pass := os.Getenv("CODESIGN_KEYCHAIN_PASS"); pass != "" {
bs, err := runError("security", "unlock-keychain", "-p", pass)
@@ -1280,11 +1256,3 @@ func (t target) BinaryName() string {
}
return t.binaryName
}
func protobufVersion() string {
bs, err := runError(goCmd, "list", "-f", "{{.Version}}", "-m", "github.com/gogo/protobuf")
if err != nil {
log.Fatal("Getting protobuf version:", err)
}
return string(bs)
}

cmd/stbench/main.go (new file, 143 lines)

@@ -0,0 +1,143 @@
// Copyright (C) 2016 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
// This doesn't build on Windows due to the Rusage stuff.
// +build !windows
package main
import (
"flag"
"fmt"
"log"
"runtime"
"syscall"
"time"
"github.com/syncthing/syncthing/lib/rc"
)
var homeDir = "h1"
var syncthingBin = "./bin/syncthing"
var test = "scan"
func main() {
flag.StringVar(&homeDir, "home", homeDir, "Home directory location")
flag.StringVar(&syncthingBin, "bin", syncthingBin, "Binary location")
flag.StringVar(&test, "test", test, "Test to run")
flag.Parse()
switch test {
case "scan":
// scan measures the resource usage required to perform the initial
// scan, without cleaning away the database first.
testScan()
}
}
// testScan starts a process and reports on the resource usage required to
// perform the initial scan.
func testScan() {
log.Println("Starting...")
p := rc.NewProcess("127.0.0.1:8081")
if err := p.Start(syncthingBin, "-home", homeDir, "-no-browser"); err != nil {
log.Println(err)
return
}
defer p.Stop()
wallTime := awaitScanComplete(p)
report(p, wallTime)
}
// awaitScanComplete waits for a folder to transition idle->scanning and
// then scanning->idle and returns the time taken for the scan.
func awaitScanComplete(p *rc.Process) time.Duration {
log.Println("Awaiting scan completion...")
var t0, t1 time.Time
lastEvent := 0
loop:
for {
evs, err := p.Events(lastEvent)
if err != nil {
continue
}
for _, ev := range evs {
if ev.Type == "StateChanged" {
data := ev.Data.(map[string]interface{})
log.Println(ev)
if data["to"].(string) == "scanning" {
t0 = ev.Time
continue
}
if !t0.IsZero() && data["to"].(string) == "idle" {
t1 = ev.Time
break loop
}
}
lastEvent = ev.ID
}
time.Sleep(250 * time.Millisecond)
}
return t1.Sub(t0)
}
// report stops the given process and reports on its resource usage in two
// ways: human readable to stderr, and CSV to stdout.
func report(p *rc.Process, wallTime time.Duration) {
sv, err := p.SystemVersion()
if err != nil {
log.Println(err)
return
}
ss, err := p.SystemStatus()
if err != nil {
log.Println(err)
return
}
proc, err := p.Stop()
if err != nil {
return
}
rusage, ok := proc.SysUsage().(*syscall.Rusage)
if !ok {
return
}
log.Println("Version:", sv.Version)
log.Println("Alloc:", ss.Alloc/1024, "KiB")
log.Println("Sys:", ss.Sys/1024, "KiB")
log.Println("Goroutines:", ss.Goroutines)
log.Println("Wall time:", wallTime)
log.Println("Utime:", time.Duration(rusage.Utime.Nano()))
log.Println("Stime:", time.Duration(rusage.Stime.Nano()))
if runtime.GOOS == "darwin" {
// Darwin reports in bytes, Linux seems to report in KiB even
// though the manpage says otherwise.
rusage.Maxrss /= 1024
}
log.Println("MaxRSS:", rusage.Maxrss, "KiB")
fmt.Printf("%s,%d,%d,%d,%.02f,%.02f,%.02f,%d\n",
sv.Version,
ss.Alloc/1024,
ss.Sys/1024,
ss.Goroutines,
wallTime.Seconds(),
time.Duration(rusage.Utime.Nano()).Seconds(),
time.Duration(rusage.Stime.Nano()).Seconds(),
rusage.Maxrss)
}

cmd/stcli/LICENSE (new file, 19 lines)

@@ -0,0 +1,19 @@
Copyright (C) 2014 Audrius Butkevičius
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
- The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@@ -1,95 +1,115 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
// Copyright (C) 2014 Audrius Butkevičius
package main
import (
"bytes"
"context"
"crypto/tls"
"fmt"
"net"
"net/http"
"strings"
"github.com/syncthing/syncthing/lib/config"
"github.com/AudriusButkevicius/cli"
)
type APIClient struct {
http.Client
cfg config.GUIConfiguration
apikey string
httpClient http.Client
endpoint string
apikey string
username string
password string
id string
csrf string
}
func getClient(cfg config.GUIConfiguration) *APIClient {
var instance *APIClient
func getClient(c *cli.Context) *APIClient {
if instance != nil {
return instance
}
endpoint := c.GlobalString("endpoint")
if !strings.HasPrefix(endpoint, "http") {
endpoint = "http://" + endpoint
}
httpClient := http.Client{
Transport: &http.Transport{
TLSClientConfig: &tls.Config{
InsecureSkipVerify: true,
},
DialContext: func(_ context.Context, _, _ string) (net.Conn, error) {
return net.Dial(cfg.Network(), cfg.Address())
InsecureSkipVerify: c.GlobalBool("insecure"),
},
},
}
return &APIClient{
Client: httpClient,
cfg: cfg,
apikey: cfg.APIKey,
client := APIClient{
httpClient: httpClient,
endpoint: endpoint,
apikey: c.GlobalString("apikey"),
username: c.GlobalString("username"),
password: c.GlobalString("password"),
}
}
func (c *APIClient) Endpoint() string {
if c.cfg.Network() == "unix" {
return "http://unix/"
}
url := c.cfg.URL()
if !strings.HasSuffix(url, "/") {
url += "/"
}
return url
}
func (c *APIClient) Do(req *http.Request) (*http.Response, error) {
req.Header.Set("X-API-Key", c.apikey)
resp, err := c.Client.Do(req)
if err != nil {
return nil, err
}
return resp, checkResponse(resp)
}
func (c *APIClient) Get(url string) (*http.Response, error) {
request, err := http.NewRequest("GET", c.Endpoint()+"rest/"+url, nil)
if err != nil {
return nil, err
}
return c.Do(request)
}
func (c *APIClient) Post(url, body string) (*http.Response, error) {
request, err := http.NewRequest("POST", c.Endpoint()+"rest/"+url, bytes.NewBufferString(body))
if err != nil {
return nil, err
}
return c.Do(request)
}
func checkResponse(response *http.Response) error {
if response.StatusCode == 404 {
return fmt.Errorf("Invalid endpoint or API call")
} else if response.StatusCode == 403 {
return fmt.Errorf("Invalid API key")
} else if response.StatusCode != 200 {
data, err := responseToBArray(response)
if err != nil {
return err
if client.apikey == "" {
request, err := http.NewRequest("GET", client.endpoint, nil)
die(err)
response := client.handleRequest(request)
client.id = response.Header.Get("X-Syncthing-ID")
if client.id == "" {
die("Failed to get device ID")
}
body := strings.TrimSpace(string(data))
return fmt.Errorf("Unexpected HTTP status returned: %s\n%s", response.Status, body)
for _, item := range response.Cookies() {
if item.Name == "CSRF-Token-"+client.id[:5] {
client.csrf = item.Value
goto csrffound
}
}
die("Failed to get CSRF token")
csrffound:
}
return nil
instance = &client
return &client
}
func (client *APIClient) handleRequest(request *http.Request) *http.Response {
if client.apikey != "" {
request.Header.Set("X-API-Key", client.apikey)
}
if client.username != "" || client.password != "" {
request.SetBasicAuth(client.username, client.password)
}
if client.csrf != "" {
request.Header.Set("X-CSRF-Token-"+client.id[:5], client.csrf)
}
response, err := client.httpClient.Do(request)
die(err)
if response.StatusCode == 404 {
die("Invalid endpoint or API call")
} else if response.StatusCode == 401 {
die("Invalid username or password")
} else if response.StatusCode == 403 {
if client.apikey == "" {
die("Invalid CSRF token")
}
die("Invalid API key")
} else if response.StatusCode != 200 {
body := strings.TrimSpace(string(responseToBArray(response)))
if body != "" {
die(body)
}
die("Unknown HTTP status returned: " + response.Status)
}
return response
}
func httpGet(c *cli.Context, url string) *http.Response {
client := getClient(c)
request, err := http.NewRequest("GET", client.endpoint+"/rest/"+url, nil)
die(err)
return client.handleRequest(request)
}
func httpPost(c *cli.Context, url string, body string) *http.Response {
client := getClient(c)
request, err := http.NewRequest("POST", client.endpoint+"/rest/"+url, bytes.NewBufferString(body))
die(err)
return client.handleRequest(request)
}
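
The client above authenticates REST calls by setting the `X-API-Key` header on requests to `<endpoint>/rest/...`. A rough stand-alone usage sketch, assuming a local instance on the default GUI address and a hypothetical API key:

```
// Minimal REST call sketch using the same X-API-Key mechanism as the client
// above. The address and key here are assumptions for illustration.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://127.0.0.1:8384/rest/system/version", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("X-API-Key", "example-api-key") // hypothetical key
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```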

cmd/stcli/cmd_devices.go (new file, 188 lines)

@@ -0,0 +1,188 @@
// Copyright (C) 2014 Audrius Butkevičius
package main
import (
"fmt"
"strings"
"github.com/AudriusButkevicius/cli"
"github.com/syncthing/syncthing/lib/config"
)
func init() {
cliCommands = append(cliCommands, cli.Command{
Name: "devices",
HideHelp: true,
Usage: "Device command group",
Subcommands: []cli.Command{
{
Name: "list",
Usage: "List registered devices",
Requires: &cli.Requires{},
Action: devicesList,
},
{
Name: "add",
Usage: "Add a new device",
Requires: &cli.Requires{"device id", "device name?"},
Action: devicesAdd,
},
{
Name: "remove",
Usage: "Remove an existing device",
Requires: &cli.Requires{"device id"},
Action: devicesRemove,
},
{
Name: "get",
Usage: "Get a property of a device",
Requires: &cli.Requires{"device id", "property"},
Action: devicesGet,
},
{
Name: "set",
Usage: "Set a property of a device",
Requires: &cli.Requires{"device id", "property", "value..."},
Action: devicesSet,
},
},
})
}
func devicesList(c *cli.Context) {
cfg := getConfig(c)
first := true
writer := newTableWriter()
for _, device := range cfg.Devices {
if !first {
fmt.Fprintln(writer)
}
fmt.Fprintln(writer, "ID:\t", device.DeviceID, "\t")
fmt.Fprintln(writer, "Name:\t", device.Name, "\t(name)")
fmt.Fprintln(writer, "Address:\t", strings.Join(device.Addresses, " "), "\t(address)")
fmt.Fprintln(writer, "Compression:\t", device.Compression, "\t(compression)")
fmt.Fprintln(writer, "Certificate name:\t", device.CertName, "\t(certname)")
fmt.Fprintln(writer, "Introducer:\t", device.Introducer, "\t(introducer)")
first = false
}
writer.Flush()
}
func devicesAdd(c *cli.Context) {
nid := c.Args()[0]
id := parseDeviceID(nid)
newDevice := config.DeviceConfiguration{
DeviceID: id,
Name: nid,
Addresses: []string{"dynamic"},
}
if len(c.Args()) > 1 {
newDevice.Name = c.Args()[1]
}
if len(c.Args()) > 2 {
addresses := c.Args()[2:]
for _, item := range addresses {
if item == "dynamic" {
continue
}
validAddress(item)
}
newDevice.Addresses = addresses
}
cfg := getConfig(c)
for _, device := range cfg.Devices {
if device.DeviceID == id {
die("Device " + nid + " already exists")
}
}
cfg.Devices = append(cfg.Devices, newDevice)
setConfig(c, cfg)
}
func devicesRemove(c *cli.Context) {
nid := c.Args()[0]
id := parseDeviceID(nid)
if nid == getMyID(c) {
die("Cannot remove yourself")
}
cfg := getConfig(c)
for i, device := range cfg.Devices {
if device.DeviceID == id {
last := len(cfg.Devices) - 1
cfg.Devices[i] = cfg.Devices[last]
cfg.Devices = cfg.Devices[:last]
setConfig(c, cfg)
return
}
}
die("Device " + nid + " not found")
}
func devicesGet(c *cli.Context) {
nid := c.Args()[0]
id := parseDeviceID(nid)
arg := c.Args()[1]
cfg := getConfig(c)
for _, device := range cfg.Devices {
if device.DeviceID != id {
continue
}
switch strings.ToLower(arg) {
case "name":
fmt.Println(device.Name)
case "address":
fmt.Println(strings.Join(device.Addresses, "\n"))
case "compression":
fmt.Println(device.Compression.String())
case "certname":
fmt.Println(device.CertName)
case "introducer":
fmt.Println(device.Introducer)
default:
die("Invalid property: " + arg + "\nAvailable properties: name, address, compression, certname, introducer")
}
return
}
die("Device " + nid + " not found")
}
func devicesSet(c *cli.Context) {
nid := c.Args()[0]
id := parseDeviceID(nid)
arg := c.Args()[1]
config := getConfig(c)
for i, device := range config.Devices {
if device.DeviceID != id {
continue
}
switch strings.ToLower(arg) {
case "name":
config.Devices[i].Name = strings.Join(c.Args()[2:], " ")
case "address":
for _, item := range c.Args()[2:] {
if item == "dynamic" {
continue
}
validAddress(item)
}
config.Devices[i].Addresses = c.Args()[2:]
case "compression":
err := config.Devices[i].Compression.UnmarshalText([]byte(c.Args()[2]))
die(err)
case "certname":
config.Devices[i].CertName = strings.Join(c.Args()[2:], " ")
case "introducer":
config.Devices[i].Introducer = parseBool(c.Args()[2])
default:
die("Invalid property: " + arg + "\nAvailable properties: name, address, compression, certname, introducer")
}
setConfig(c, config)
return
}
die("Device " + nid + " not found")
}

cmd/stcli/cmd_errors.go (new file, 67 lines)

@@ -0,0 +1,67 @@
// Copyright (C) 2014 Audrius Butkevičius
package main
import (
"encoding/json"
"fmt"
"strings"
"github.com/AudriusButkevicius/cli"
)
func init() {
cliCommands = append(cliCommands, cli.Command{
Name: "errors",
HideHelp: true,
Usage: "Error command group",
Subcommands: []cli.Command{
{
Name: "show",
Usage: "Show pending errors",
Requires: &cli.Requires{},
Action: errorsShow,
},
{
Name: "push",
Usage: "Push an error to active clients",
Requires: &cli.Requires{"error message..."},
Action: errorsPush,
},
{
Name: "clear",
Usage: "Clear pending errors",
Requires: &cli.Requires{},
Action: wrappedHTTPPost("system/error/clear"),
},
},
})
}
func errorsShow(c *cli.Context) {
response := httpGet(c, "system/error")
var data map[string][]map[string]interface{}
json.Unmarshal(responseToBArray(response), &data)
writer := newTableWriter()
for _, item := range data["errors"] {
time := item["when"].(string)[:19]
time = strings.Replace(time, "T", " ", 1)
err := item["message"].(string)
err = strings.TrimSpace(err)
fmt.Fprintln(writer, time+":\t"+err)
}
writer.Flush()
}
func errorsPush(c *cli.Context) {
err := strings.Join(c.Args(), " ")
response := httpPost(c, "system/error", strings.TrimSpace(err))
if response.StatusCode != 200 {
err = fmt.Sprint("Failed to push error\nStatus code: ", response.StatusCode)
body := string(responseToBArray(response))
if body != "" {
err += "\nBody: " + body
}
die(err)
}
}

cmd/stcli/cmd_folders.go (new file, 361 lines)

@@ -0,0 +1,361 @@
// Copyright (C) 2014 Audrius Butkevičius
package main
import (
"fmt"
"path/filepath"
"strings"
"github.com/AudriusButkevicius/cli"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/fs"
)
func init() {
cliCommands = append(cliCommands, cli.Command{
Name: "folders",
HideHelp: true,
Usage: "Folder command group",
Subcommands: []cli.Command{
{
Name: "list",
Usage: "List available folders",
Requires: &cli.Requires{},
Action: foldersList,
},
{
Name: "add",
Usage: "Add a new folder",
Requires: &cli.Requires{"folder id", "directory"},
Action: foldersAdd,
},
{
Name: "remove",
Usage: "Remove an existing folder",
Requires: &cli.Requires{"folder id"},
Action: foldersRemove,
},
{
Name: "override",
Usage: "Override changes from other nodes for a master folder",
Requires: &cli.Requires{"folder id"},
Action: foldersOverride,
},
{
Name: "get",
Usage: "Get a property of a folder",
Requires: &cli.Requires{"folder id", "property"},
Action: foldersGet,
},
{
Name: "set",
Usage: "Set a property of a folder",
Requires: &cli.Requires{"folder id", "property", "value..."},
Action: foldersSet,
},
{
Name: "unset",
Usage: "Unset a property of a folder",
Requires: &cli.Requires{"folder id", "property"},
Action: foldersUnset,
},
{
Name: "devices",
Usage: "Folder devices command group",
HideHelp: true,
Subcommands: []cli.Command{
{
Name: "list",
Usage: "List of devices which the folder is shared with",
Requires: &cli.Requires{"folder id"},
Action: foldersDevicesList,
},
{
Name: "add",
Usage: "Share a folder with a device",
Requires: &cli.Requires{"folder id", "device id"},
Action: foldersDevicesAdd,
},
{
Name: "remove",
Usage: "Unshare a folder with a device",
Requires: &cli.Requires{"folder id", "device id"},
Action: foldersDevicesRemove,
},
{
Name: "clear",
Usage: "Unshare a folder with all devices",
Requires: &cli.Requires{"folder id"},
Action: foldersDevicesClear,
},
},
},
},
})
}
func foldersList(c *cli.Context) {
cfg := getConfig(c)
first := true
writer := newTableWriter()
for _, folder := range cfg.Folders {
if !first {
fmt.Fprintln(writer)
}
fs := folder.Filesystem()
fmt.Fprintln(writer, "ID:\t", folder.ID, "\t")
fmt.Fprintln(writer, "Path:\t", fs.URI(), "\t(directory)")
fmt.Fprintln(writer, "Path type:\t", fs.Type(), "\t(directory-type)")
fmt.Fprintln(writer, "Folder type:\t", folder.Type, "\t(type)")
fmt.Fprintln(writer, "Ignore permissions:\t", folder.IgnorePerms, "\t(permissions)")
fmt.Fprintln(writer, "Rescan interval in seconds:\t", folder.RescanIntervalS, "\t(rescan)")
if folder.Versioning.Type != "" {
fmt.Fprintln(writer, "Versioning:\t", folder.Versioning.Type, "\t(versioning)")
for key, value := range folder.Versioning.Params {
fmt.Fprintf(writer, "Versioning %s:\t %s \t(versioning-%s)\n", key, value, key)
}
}
first = false
}
writer.Flush()
}
func foldersAdd(c *cli.Context) {
cfg := getConfig(c)
abs, err := filepath.Abs(c.Args()[1])
die(err)
folder := config.FolderConfiguration{
ID: c.Args()[0],
Path: filepath.Clean(abs),
FilesystemType: fs.FilesystemTypeBasic,
}
cfg.Folders = append(cfg.Folders, folder)
setConfig(c, cfg)
}
func foldersRemove(c *cli.Context) {
cfg := getConfig(c)
rid := c.Args()[0]
for i, folder := range cfg.Folders {
if folder.ID == rid {
last := len(cfg.Folders) - 1
cfg.Folders[i] = cfg.Folders[last]
cfg.Folders = cfg.Folders[:last]
setConfig(c, cfg)
return
}
}
die("Folder " + rid + " not found")
}
func foldersOverride(c *cli.Context) {
cfg := getConfig(c)
rid := c.Args()[0]
for _, folder := range cfg.Folders {
if folder.ID == rid && folder.Type == config.FolderTypeSendOnly {
response := httpPost(c, "db/override", "")
if response.StatusCode != 200 {
err := fmt.Sprint("Failed to override changes\nStatus code: ", response.StatusCode)
body := string(responseToBArray(response))
if body != "" {
err += "\nBody: " + body
}
die(err)
}
return
}
}
die("Folder " + rid + " not found or folder not master")
}
func foldersGet(c *cli.Context) {
cfg := getConfig(c)
rid := c.Args()[0]
arg := strings.ToLower(c.Args()[1])
for _, folder := range cfg.Folders {
if folder.ID != rid {
continue
}
if strings.HasPrefix(arg, "versioning-") {
arg = arg[11:]
value, ok := folder.Versioning.Params[arg]
if ok {
fmt.Println(value)
return
}
die("Versioning property " + c.Args()[1][11:] + " not found")
}
switch arg {
case "directory":
fmt.Println(folder.Filesystem().URI())
case "directory-type":
fmt.Println(folder.Filesystem().Type())
case "type":
fmt.Println(folder.Type)
case "permissions":
fmt.Println(folder.IgnorePerms)
case "rescan":
fmt.Println(folder.RescanIntervalS)
case "versioning":
if folder.Versioning.Type != "" {
fmt.Println(folder.Versioning.Type)
}
default:
die("Invalid property: " + c.Args()[1] + "\nAvailable properties: directory, directory-type, type, permissions, versioning, versioning-<key>")
}
return
}
die("Folder " + rid + " not found")
}
func foldersSet(c *cli.Context) {
rid := c.Args()[0]
arg := strings.ToLower(c.Args()[1])
val := strings.Join(c.Args()[2:], " ")
cfg := getConfig(c)
for i, folder := range cfg.Folders {
if folder.ID != rid {
continue
}
if strings.HasPrefix(arg, "versioning-") {
cfg.Folders[i].Versioning.Params[arg[11:]] = val
setConfig(c, cfg)
return
}
switch arg {
case "directory":
cfg.Folders[i].Path = val
case "directory-type":
var fsType fs.FilesystemType
fsType.UnmarshalText([]byte(val))
cfg.Folders[i].FilesystemType = fsType
case "type":
var t config.FolderType
if err := t.UnmarshalText([]byte(val)); err != nil {
die("Invalid folder type: " + err.Error())
}
cfg.Folders[i].Type = t
case "permissions":
cfg.Folders[i].IgnorePerms = parseBool(val)
case "rescan":
cfg.Folders[i].RescanIntervalS = parseInt(val)
case "versioning":
cfg.Folders[i].Versioning.Type = val
default:
die("Invalid property: " + c.Args()[1] + "\nAvailable properties: directory, master, permissions, versioning, versioning-<key>")
}
setConfig(c, cfg)
return
}
die("Folder " + rid + " not found")
}
func foldersUnset(c *cli.Context) {
rid := c.Args()[0]
arg := strings.ToLower(c.Args()[1])
cfg := getConfig(c)
for i, folder := range cfg.Folders {
if folder.ID != rid {
continue
}
if strings.HasPrefix(arg, "versioning-") {
arg = arg[11:]
if _, ok := folder.Versioning.Params[arg]; ok {
delete(cfg.Folders[i].Versioning.Params, arg)
setConfig(c, cfg)
return
}
die("Versioning property " + c.Args()[1][11:] + " not found")
}
switch arg {
case "versioning":
cfg.Folders[i].Versioning.Type = ""
cfg.Folders[i].Versioning.Params = make(map[string]string)
default:
die("Invalid property: " + c.Args()[1] + "\nAvailable properties: versioning, versioning-<key>")
}
setConfig(c, cfg)
return
}
die("Folder " + rid + " not found")
}
func foldersDevicesList(c *cli.Context) {
rid := c.Args()[0]
cfg := getConfig(c)
for _, folder := range cfg.Folders {
if folder.ID != rid {
continue
}
for _, device := range folder.Devices {
fmt.Println(device.DeviceID)
}
return
}
die("Folder " + rid + " not found")
}
func foldersDevicesAdd(c *cli.Context) {
rid := c.Args()[0]
nid := parseDeviceID(c.Args()[1])
cfg := getConfig(c)
for i, folder := range cfg.Folders {
if folder.ID != rid {
continue
}
for _, device := range folder.Devices {
if device.DeviceID == nid {
die("Device " + c.Args()[1] + " is already part of this folder")
}
}
for _, device := range cfg.Devices {
if device.DeviceID == nid {
cfg.Folders[i].Devices = append(folder.Devices, config.FolderDeviceConfiguration{
DeviceID: device.DeviceID,
})
setConfig(c, cfg)
return
}
}
die("Device " + c.Args()[1] + " not found in device list")
}
die("Folder " + rid + " not found")
}
func foldersDevicesRemove(c *cli.Context) {
rid := c.Args()[0]
nid := parseDeviceID(c.Args()[1])
cfg := getConfig(c)
for ri, folder := range cfg.Folders {
if folder.ID != rid {
continue
}
for ni, device := range folder.Devices {
if device.DeviceID == nid {
last := len(folder.Devices) - 1
cfg.Folders[ri].Devices[ni] = folder.Devices[last]
cfg.Folders[ri].Devices = cfg.Folders[ri].Devices[:last]
setConfig(c, cfg)
return
}
}
die("Device " + c.Args()[1] + " not found")
}
die("Folder " + rid + " not found")
}
func foldersDevicesClear(c *cli.Context) {
rid := c.Args()[0]
cfg := getConfig(c)
for i, folder := range cfg.Folders {
if folder.ID != rid {
continue
}
cfg.Folders[i].Devices = []config.FolderDeviceConfiguration{}
setConfig(c, cfg)
return
}
die("Folder " + rid + " not found")
}

cmd/stcli/cmd_general.go (new file, 94 lines)

@@ -0,0 +1,94 @@
// Copyright (C) 2014 Audrius Butkevičius
package main
import (
"encoding/json"
"fmt"
"os"
"github.com/AudriusButkevicius/cli"
)
func init() {
cliCommands = append(cliCommands, []cli.Command{
{
Name: "id",
Usage: "Get ID of the Syncthing client",
Requires: &cli.Requires{},
Action: generalID,
},
{
Name: "status",
Usage: "Configuration status, whether or not a restart is required for changes to take effect",
Requires: &cli.Requires{},
Action: generalStatus,
},
{
Name: "config",
Usage: "Configuration",
Requires: &cli.Requires{},
Action: generalConfiguration,
},
{
Name: "restart",
Usage: "Restart syncthing",
Requires: &cli.Requires{},
Action: wrappedHTTPPost("system/restart"),
},
{
Name: "shutdown",
Usage: "Shutdown syncthing",
Requires: &cli.Requires{},
Action: wrappedHTTPPost("system/shutdown"),
},
{
Name: "reset",
Usage: "Reset syncthing deleting all folders and devices",
Requires: &cli.Requires{},
Action: wrappedHTTPPost("system/reset"),
},
{
Name: "upgrade",
Usage: "Upgrade syncthing (if a newer version is available)",
Requires: &cli.Requires{},
Action: wrappedHTTPPost("system/upgrade"),
},
{
Name: "version",
Usage: "Syncthing client version",
Requires: &cli.Requires{},
Action: generalVersion,
},
}...)
}
func generalID(c *cli.Context) {
fmt.Println(getMyID(c))
}
func generalStatus(c *cli.Context) {
response := httpGet(c, "system/config/insync")
var status struct{ ConfigInSync bool }
json.Unmarshal(responseToBArray(response), &status)
if !status.ConfigInSync {
die("Config out of sync")
}
fmt.Println("Config in sync")
}
func generalConfiguration(c *cli.Context) {
response := httpGet(c, "system/config")
var jsResponse interface{}
json.Unmarshal(responseToBArray(response), &jsResponse)
enc := json.NewEncoder(os.Stdout)
enc.SetIndent("", " ")
enc.Encode(jsResponse)
}
func generalVersion(c *cli.Context) {
response := httpGet(c, "system/version")
version := make(map[string]interface{})
json.Unmarshal(responseToBArray(response), &version)
prettyPrintJSON(version)
}

127
cmd/stcli/cmd_gui.go Normal file
View File

@@ -0,0 +1,127 @@
// Copyright (C) 2014 Audrius Butkevičius
package main
import (
"fmt"
"strings"
"github.com/AudriusButkevicius/cli"
)
func init() {
cliCommands = append(cliCommands, cli.Command{
Name: "gui",
HideHelp: true,
Usage: "GUI command group",
Subcommands: []cli.Command{
{
Name: "dump",
Usage: "Show all GUI configuration settings",
Requires: &cli.Requires{},
Action: guiDump,
},
{
Name: "get",
Usage: "Get a GUI configuration setting",
Requires: &cli.Requires{"setting"},
Action: guiGet,
},
{
Name: "set",
Usage: "Set a GUI configuration setting",
Requires: &cli.Requires{"setting", "value"},
Action: guiSet,
},
{
Name: "unset",
Usage: "Unset a GUI configuration setting",
Requires: &cli.Requires{"setting"},
Action: guiUnset,
},
},
})
}
func guiDump(c *cli.Context) {
cfg := getConfig(c).GUI
writer := newTableWriter()
fmt.Fprintln(writer, "Enabled:\t", cfg.Enabled, "\t(enabled)")
fmt.Fprintln(writer, "Use HTTPS:\t", cfg.UseTLS(), "\t(tls)")
fmt.Fprintln(writer, "Listen Addresses:\t", cfg.Address(), "\t(address)")
if cfg.User != "" {
fmt.Fprintln(writer, "Authentication User:\t", cfg.User, "\t(username)")
fmt.Fprintln(writer, "Authentication Password:\t", cfg.Password, "\t(password)")
}
if cfg.APIKey != "" {
fmt.Fprintln(writer, "API Key:\t", cfg.APIKey, "\t(apikey)")
}
writer.Flush()
}
func guiGet(c *cli.Context) {
cfg := getConfig(c).GUI
arg := c.Args()[0]
switch strings.ToLower(arg) {
case "enabled":
fmt.Println(cfg.Enabled)
case "tls":
fmt.Println(cfg.UseTLS())
case "address":
fmt.Println(cfg.Address())
case "user":
if cfg.User != "" {
fmt.Println(cfg.User)
}
case "password":
if cfg.User != "" {
fmt.Println(cfg.Password)
}
case "apikey":
if cfg.APIKey != "" {
fmt.Println(cfg.APIKey)
}
default:
die("Invalid setting: " + arg + "\nAvailable settings: enabled, tls, address, user, password, apikey")
}
}
func guiSet(c *cli.Context) {
cfg := getConfig(c)
arg := c.Args()[0]
val := c.Args()[1]
switch strings.ToLower(arg) {
case "enabled":
cfg.GUI.Enabled = parseBool(val)
case "tls":
cfg.GUI.RawUseTLS = parseBool(val)
case "address":
validAddress(val)
cfg.GUI.RawAddress = val
case "user":
cfg.GUI.User = val
case "password":
cfg.GUI.Password = val
case "apikey":
cfg.GUI.APIKey = val
default:
die("Invalid setting: " + arg + "\nAvailable settings: enabled, tls, address, user, password, apikey")
}
setConfig(c, cfg)
}
func guiUnset(c *cli.Context) {
cfg := getConfig(c)
arg := c.Args()[0]
switch strings.ToLower(arg) {
case "user":
cfg.GUI.User = ""
case "password":
cfg.GUI.Password = ""
case "apikey":
cfg.GUI.APIKey = ""
default:
die("Invalid setting: " + arg + "\nAvailable settings: user, password, apikey")
}
setConfig(c, cfg)
}

173
cmd/stcli/cmd_options.go Normal file
View File

@@ -0,0 +1,173 @@
// Copyright (C) 2014 Audrius Butkevičius
package main
import (
"fmt"
"strings"
"github.com/AudriusButkevicius/cli"
)
func init() {
cliCommands = append(cliCommands, cli.Command{
Name: "options",
HideHelp: true,
Usage: "Options command group",
Subcommands: []cli.Command{
{
Name: "dump",
Usage: "Show all Syncthing option settings",
Requires: &cli.Requires{},
Action: optionsDump,
},
{
Name: "get",
Usage: "Get a Syncthing option setting",
Requires: &cli.Requires{"setting"},
Action: optionsGet,
},
{
Name: "set",
Usage: "Set a Syncthing option setting",
Requires: &cli.Requires{"setting", "value..."},
Action: optionsSet,
},
},
})
}
func optionsDump(c *cli.Context) {
cfg := getConfig(c).Options
writer := newTableWriter()
fmt.Fprintln(writer, "Sync protocol listen addresses:\t", strings.Join(cfg.ListenAddresses, " "), "\t(addresses)")
fmt.Fprintln(writer, "Global discovery enabled:\t", cfg.GlobalAnnEnabled, "\t(globalannenabled)")
fmt.Fprintln(writer, "Global discovery servers:\t", strings.Join(cfg.GlobalAnnServers, " "), "\t(globalannserver)")
fmt.Fprintln(writer, "Local discovery enabled:\t", cfg.LocalAnnEnabled, "\t(localannenabled)")
fmt.Fprintln(writer, "Local discovery port:\t", cfg.LocalAnnPort, "\t(localannport)")
fmt.Fprintln(writer, "Outgoing rate limit in KiB/s:\t", cfg.MaxSendKbps, "\t(maxsend)")
fmt.Fprintln(writer, "Incoming rate limit in KiB/s:\t", cfg.MaxRecvKbps, "\t(maxrecv)")
fmt.Fprintln(writer, "Reconnect interval in seconds:\t", cfg.ReconnectIntervalS, "\t(reconnect)")
fmt.Fprintln(writer, "Start browser:\t", cfg.StartBrowser, "\t(browser)")
fmt.Fprintln(writer, "Enable UPnP:\t", cfg.NATEnabled, "\t(nat)")
fmt.Fprintln(writer, "UPnP Lease in minutes:\t", cfg.NATLeaseM, "\t(natlease)")
fmt.Fprintln(writer, "UPnP Renewal period in minutes:\t", cfg.NATRenewalM, "\t(natrenew)")
fmt.Fprintln(writer, "Restart on Wake Up:\t", cfg.RestartOnWakeup, "\t(wake)")
reporting := "unrecognized value"
switch cfg.URAccepted {
case -1:
reporting = "false"
case 0:
reporting = "undecided/false"
case 1:
reporting = "true"
}
fmt.Fprintln(writer, "Anonymous usage reporting:\t", reporting, "\t(reporting)")
writer.Flush()
}
func optionsGet(c *cli.Context) {
cfg := getConfig(c).Options
arg := c.Args()[0]
switch strings.ToLower(arg) {
case "address":
fmt.Println(strings.Join(cfg.ListenAddresses, "\n"))
case "globalannenabled":
fmt.Println(cfg.GlobalAnnEnabled)
case "globalannservers":
fmt.Println(strings.Join(cfg.GlobalAnnServers, "\n"))
case "localannenabled":
fmt.Println(cfg.LocalAnnEnabled)
case "localannport":
fmt.Println(cfg.LocalAnnPort)
case "maxsend":
fmt.Println(cfg.MaxSendKbps)
case "maxrecv":
fmt.Println(cfg.MaxRecvKbps)
case "reconnect":
fmt.Println(cfg.ReconnectIntervalS)
case "browser":
fmt.Println(cfg.StartBrowser)
case "nat":
fmt.Println(cfg.NATEnabled)
case "natlease":
fmt.Println(cfg.NATLeaseM)
case "natrenew":
fmt.Println(cfg.NATRenewalM)
case "reporting":
switch cfg.URAccepted {
case -1:
fmt.Println("false")
case 0:
fmt.Println("undecided/false")
case 1:
fmt.Println("true")
default:
fmt.Println("unknown")
}
case "wake":
fmt.Println(cfg.RestartOnWakeup)
default:
die("Invalid setting: " + arg + "\nAvailable settings: address, globalannenabled, globalannserver, localannenabled, localannport, maxsend, maxrecv, reconnect, browser, upnp, upnplease, upnprenew, reporting, wake")
}
}
func optionsSet(c *cli.Context) {
config := getConfig(c)
arg := c.Args()[0]
val := c.Args()[1]
switch strings.ToLower(arg) {
case "address":
for _, item := range c.Args().Tail() {
validAddress(item)
}
config.Options.ListenAddresses = c.Args().Tail()
case "globalannenabled":
config.Options.GlobalAnnEnabled = parseBool(val)
case "globalannserver":
for _, item := range c.Args().Tail() {
validAddress(item)
}
config.Options.GlobalAnnServers = c.Args().Tail()
case "localannenabled":
config.Options.LocalAnnEnabled = parseBool(val)
case "localannport":
config.Options.LocalAnnPort = parsePort(val)
case "maxsend":
config.Options.MaxSendKbps = parseUint(val)
case "maxrecv":
config.Options.MaxRecvKbps = parseUint(val)
case "reconnect":
config.Options.ReconnectIntervalS = parseUint(val)
case "browser":
config.Options.StartBrowser = parseBool(val)
case "nat":
config.Options.NATEnabled = parseBool(val)
case "natlease":
config.Options.NATLeaseM = parseUint(val)
case "natrenew":
config.Options.NATRenewalM = parseUint(val)
case "reporting":
switch strings.ToLower(val) {
case "u", "undecided", "unset":
config.Options.URAccepted = 0
default:
boolvalue := parseBool(val)
if boolvalue {
config.Options.URAccepted = 1
} else {
config.Options.URAccepted = -1
}
}
case "wake":
config.Options.RestartOnWakeup = parseBool(val)
default:
die("Invalid setting: " + arg + "\nAvailable settings: address, globalannenabled, globalannserver, localannenabled, localannport, maxsend, maxrecv, reconnect, browser, upnp, upnplease, upnprenew, reporting, wake")
}
setConfig(c, config)
}

72
cmd/stcli/cmd_report.go Normal file
View File

@@ -0,0 +1,72 @@
// Copyright (C) 2014 Audrius Butkevičius
package main
import (
"encoding/json"
"fmt"
"github.com/AudriusButkevicius/cli"
)
func init() {
cliCommands = append(cliCommands, cli.Command{
Name: "report",
HideHelp: true,
Usage: "Reporting command group",
Subcommands: []cli.Command{
{
Name: "system",
Usage: "Report system state",
Requires: &cli.Requires{},
Action: reportSystem,
},
{
Name: "connections",
Usage: "Report about connections to other devices",
Requires: &cli.Requires{},
Action: reportConnections,
},
{
Name: "usage",
Usage: "Usage report",
Requires: &cli.Requires{},
Action: reportUsage,
},
},
})
}
func reportSystem(c *cli.Context) {
response := httpGet(c, "system/status")
data := make(map[string]interface{})
json.Unmarshal(responseToBArray(response), &data)
prettyPrintJSON(data)
}
func reportConnections(c *cli.Context) {
response := httpGet(c, "system/connections")
data := make(map[string]map[string]interface{})
json.Unmarshal(responseToBArray(response), &data)
var overall map[string]interface{}
for key, value := range data {
if key == "total" {
overall = value
continue
}
value["Device ID"] = key
prettyPrintJSON(value)
fmt.Println()
}
if overall != nil {
fmt.Println("=== Overall statistics ===")
prettyPrintJSON(overall)
}
}
func reportUsage(c *cli.Context) {
response := httpGet(c, "svc/report")
report := make(map[string]interface{})
json.Unmarshal(responseToBArray(response), &report)
prettyPrintJSON(report)
}
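The loop in reportConnections above implies the response shape it expects from system/connections: a flat object keyed by device ID, plus one "total" entry for the overall statistics. A purely illustrative example of that assumed shape (all values made up):

// Illustrative only: the response shape reportConnections expects (values invented).
const exampleConnectionsJSON = `{
	"P56IOI7-DEVICE1": {"At": "2014-07-13T21:22:29Z", "InBytesTotal": 1234, "OutBytesTotal": 5678, "ClientVersion": "v0.8.15"},
	"total":           {"At": "2014-07-13T21:22:29Z", "InBytesTotal": 1234, "OutBytesTotal": 5678}
}`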

View File

@@ -1,60 +0,0 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"fmt"
"strings"
"github.com/urfave/cli"
)
var errorsCommand = cli.Command{
Name: "errors",
HideHelp: true,
Usage: "Error command group",
Subcommands: []cli.Command{
{
Name: "show",
Usage: "Show pending errors",
Action: expects(0, dumpOutput("system/error")),
},
{
Name: "push",
Usage: "Push an error to active clients",
ArgsUsage: "[error message]",
Action: expects(1, errorsPush),
},
{
Name: "clear",
Usage: "Clear pending errors",
Action: expects(0, emptyPost("system/error/clear")),
},
},
}
func errorsPush(c *cli.Context) error {
client := c.App.Metadata["client"].(*APIClient)
errStr := strings.Join(c.Args(), " ")
response, err := client.Post("system/error", strings.TrimSpace(errStr))
if err != nil {
return err
}
if response.StatusCode != 200 {
errStr = fmt.Sprint("Failed to push error\nStatus code: ", response.StatusCode)
bytes, err := responseToBArray(response)
if err != nil {
return err
}
body := string(bytes)
if body != "" {
errStr += "\nBody: " + body
}
return fmt.Errorf(errStr)
}
return nil
}

31
cmd/stcli/labels.go Normal file
View File

@@ -0,0 +1,31 @@
// Copyright (C) 2014 Audrius Butkevičius
package main
var jsonAttributeLabels = map[string]string{
"folderMaxMiB": "Largest folder size in MiB",
"folderMaxFiles": "Largest folder file count",
"longVersion": "Long version",
"totMiB": "Total size in MiB",
"totFiles": "Total files",
"uniqueID": "Unique ID",
"numFolders": "Folder count",
"numDevices": "Device count",
"memoryUsageMiB": "Memory usage in MiB",
"memorySize": "Total memory in MiB",
"sha256Perf": "SHA256 Benchmark",
"At": "Last contacted",
"Completion": "Percent complete",
"InBytesTotal": "Total bytes received",
"OutBytesTotal": "Total bytes sent",
"ClientVersion": "Client version",
"alloc": "Memory allocated in bytes",
"sys": "Memory using in bytes",
"cpuPercent": "CPU load in percent",
"extAnnounceOK": "External announcments working",
"goroutines": "Number of Go routines",
"myID": "Client ID",
"tilde": "Tilde expands to",
"arch": "Architecture",
"os": "OS",
}
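These labels are looked up by prettyPrintJSON in utils.go (shown further down in this diff); keys missing from the map fall back to firstUpper, and empty values are skipped. A hypothetical call and its output, for illustration only:

// Hypothetical usage of jsonAttributeLabels via prettyPrintJSON (utils.go).
func exampleLabels() {
	prettyPrintJSON(map[string]interface{}{
		"myID":       "ABC1234-XYZ", // labelled "Client ID"
		"goroutines": float64(42),   // JSON numbers arrive as float64
		"tilde":      "/home/user",  // labelled "Tilde expands to"
	})
	// Output (tab-aligned, sorted by label):
	// Client ID:              ABC1234-XYZ
	// Number of Go routines:  42
	// Tilde expands to:       /home/user
}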

View File

@@ -1,192 +1,63 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
// Copyright (C) 2014 Audrius Butkevičius
package main
import (
"bufio"
"crypto/tls"
"encoding/json"
"flag"
"log"
"os"
"reflect"
"sort"
"github.com/AudriusButkevicius/recli"
"github.com/flynn-archive/go-shlex"
"github.com/mattn/go-isatty"
"github.com/pkg/errors"
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/locations"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/urfave/cli"
"github.com/AudriusButkevicius/cli"
)
type ByAlphabet []cli.Command
func (a ByAlphabet) Len() int { return len(a) }
func (a ByAlphabet) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
func (a ByAlphabet) Less(i, j int) bool { return a[i].Name < a[j].Name }
var cliCommands []cli.Command
func main() {
// This is somewhat of a hack around a chicken-and-egg problem:
// we need to set the home directory and potentially other flags to know where the syncthing instance is running
// in order to get its config ... which we then use to construct the actual CLI ... at which point it's too late
// to add flags there.
homeBaseDir := locations.GetBaseDir(locations.ConfigBaseDir)
guiCfg := config.GUIConfiguration{}
flags := flag.NewFlagSet("", flag.ContinueOnError)
flags.StringVar(&guiCfg.RawAddress, "gui-address", guiCfg.RawAddress, "Override GUI address (e.g. \"http://192.0.2.42:8443\")")
flags.StringVar(&guiCfg.APIKey, "gui-apikey", guiCfg.APIKey, "Override GUI API key")
flags.StringVar(&homeBaseDir, "home", homeBaseDir, "Set configuration directory")
// Implement the same flags at the lower CLI, with the same default values (pre-parse), but do nothing with them.
// This is so that we can reuse os.Args.
fakeFlags := []cli.Flag{
cli.StringFlag{
Name: "gui-address",
Value: guiCfg.RawAddress,
Usage: "Override GUI address (e.g. \"http://192.0.2.42:8443\")",
},
cli.StringFlag{
Name: "gui-apikey",
Value: guiCfg.APIKey,
Usage: "Override GUI API key",
},
cli.StringFlag{
Name: "home",
Value: homeBaseDir,
Usage: "Set configuration directory",
},
}
// Do not print usage for these flags, and ignore parse errors, since this flag set does not understand all of the program's arguments.
flags.Usage = func() {}
_ = flags.Parse(os.Args[1:])
// Now, if the API key and address are not provided (we are not connecting to a remote instance),
// try to pull them out of the config.
if guiCfg.RawAddress == "" && guiCfg.APIKey == "" {
// Update the base directory
err := locations.SetBaseDir(locations.ConfigBaseDir, homeBaseDir)
if err != nil {
log.Fatal(errors.Wrap(err, "setting home"))
}
// Load the certs and get the ID
cert, err := tls.LoadX509KeyPair(
locations.Get(locations.CertFile),
locations.Get(locations.KeyFile),
)
if err != nil {
log.Fatal(errors.Wrap(err, "reading device ID"))
}
myID := protocol.NewDeviceID(cert.Certificate[0])
// Load the config
cfg, err := config.Load(locations.Get(locations.ConfigFile), myID, events.NoopLogger)
if err != nil {
log.Fatalln(errors.Wrap(err, "loading config"))
}
guiCfg = cfg.GUI()
} else if guiCfg.Address() == "" || guiCfg.APIKey == "" {
log.Fatalln("Both -gui-address and -gui-apikey should be specified")
}
if guiCfg.Address() == "" {
log.Fatalln("Could not find GUI Address")
}
if guiCfg.APIKey == "" {
log.Fatalln("Could not find GUI API key")
}
client := getClient(guiCfg)
cfg, err := getConfig(client)
if err != nil {
log.Fatalln(errors.Wrap(err, "getting config"))
}
original := cfg.Copy()
// Copy the config and set the default flags
recliCfg := recli.DefaultConfig
recliCfg.IDTag.Name = "xml"
recliCfg.SkipTag.Name = "json"
commands, err := recli.New(recliCfg).Construct(&cfg)
if err != nil {
log.Fatalln(errors.Wrap(err, "config reflect"))
}
// Construct the actual CLI
app := cli.NewApp()
app.Name = "stcli"
app.HelpName = app.Name
app.Author = "The Syncthing Authors"
app.Name = "syncthing-cli"
app.Author = "Audrius Butkevičius"
app.Email = "audrius.butkevicius@gmail.com"
app.Usage = "Syncthing command line interface"
app.Version = build.Version
app.Flags = fakeFlags
app.Metadata = map[string]interface{}{
"client": client,
}
app.Commands = []cli.Command{
{
Name: "config",
HideHelp: true,
Usage: "Configuration modification command group",
Subcommands: commands,
app.Version = "0.1"
app.HideHelp = true
app.Flags = []cli.Flag{
cli.StringFlag{
Name: "endpoint, e",
Value: "http://127.0.0.1:8384",
Usage: "End point to connect to",
EnvVar: "STENDPOINT",
},
cli.StringFlag{
Name: "apikey, k",
Value: "",
Usage: "API Key",
EnvVar: "STAPIKEY",
},
cli.StringFlag{
Name: "username, u",
Value: "",
Usage: "Username",
EnvVar: "STUSERNAME",
},
cli.StringFlag{
Name: "password, p",
Value: "",
Usage: "Password",
EnvVar: "STPASSWORD",
},
cli.BoolFlag{
Name: "insecure, i",
Usage: "Do not verify SSL certificate",
EnvVar: "STINSECURE",
},
showCommand,
operationCommand,
errorsCommand,
}
tty := isatty.IsTerminal(os.Stdin.Fd()) || isatty.IsCygwinTerminal(os.Stdin.Fd())
if !tty {
// Not a TTY, consume from stdin
scanner := bufio.NewScanner(os.Stdin)
for scanner.Scan() {
input, err := shlex.Split(scanner.Text())
if err != nil {
log.Fatalln(errors.Wrap(err, "parsing input"))
}
if len(input) == 0 {
continue
}
err = app.Run(append(os.Args, input...))
if err != nil {
log.Fatalln(err)
}
}
err = scanner.Err()
if err != nil {
log.Fatalln(err)
}
} else {
err = app.Run(os.Args)
if err != nil {
log.Fatalln(err)
}
}
if !reflect.DeepEqual(cfg, original) {
body, err := json.MarshalIndent(cfg, "", " ")
if err != nil {
log.Fatalln(err)
}
resp, err := client.Post("system/config", string(body))
if err != nil {
log.Fatalln(err)
}
if resp.StatusCode != 200 {
body, err := responseToBArray(resp)
if err != nil {
log.Fatalln(err)
}
log.Fatalln(string(body))
}
}
sort.Sort(ByAlphabet(cliCommands))
app.Commands = cliCommands
app.RunAndExitOnError()
}

View File

@@ -1,78 +0,0 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"fmt"
"github.com/urfave/cli"
)
var operationCommand = cli.Command{
Name: "operations",
HideHelp: true,
Usage: "Operation command group",
Subcommands: []cli.Command{
{
Name: "restart",
Usage: "Restart syncthing",
Action: expects(0, emptyPost("system/restart")),
},
{
Name: "shutdown",
Usage: "Shutdown syncthing",
Action: expects(0, emptyPost("system/shutdown")),
},
{
Name: "reset",
Usage: "Reset syncthing deleting all folders and devices",
Action: expects(0, emptyPost("system/reset")),
},
{
Name: "upgrade",
Usage: "Upgrade syncthing (if a newer version is available)",
Action: expects(0, emptyPost("system/upgrade")),
},
{
Name: "folder-override",
Usage: "Override changes on folder (remote for sendonly, local for receiveonly)",
ArgsUsage: "[folder id]",
Action: expects(1, foldersOverride),
},
},
}
func foldersOverride(c *cli.Context) error {
client := c.App.Metadata["client"].(*APIClient)
cfg, err := getConfig(client)
if err != nil {
return err
}
rid := c.Args()[0]
for _, folder := range cfg.Folders {
if folder.ID == rid {
response, err := client.Post("db/override", "")
if err != nil {
return err
}
if response.StatusCode != 200 {
errStr := fmt.Sprint("Failed to override changes\nStatus code: ", response.StatusCode)
bytes, err := responseToBArray(response)
if err != nil {
return err
}
body := string(bytes)
if body != "" {
errStr += "\nBody: " + body
}
return fmt.Errorf(errStr)
}
return nil
}
}
return fmt.Errorf("Folder " + rid + " not found")
}

View File

@@ -1,44 +0,0 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"github.com/urfave/cli"
)
var showCommand = cli.Command{
Name: "show",
HideHelp: true,
Usage: "Show command group",
Subcommands: []cli.Command{
{
Name: "version",
Usage: "Show syncthing client version",
Action: expects(0, dumpOutput("system/version")),
},
{
Name: "config-status",
Usage: "Show configuration status, whether or not a restart is required for changes to take effect",
Action: expects(0, dumpOutput("system/config/insync")),
},
{
Name: "system",
Usage: "Show system status",
Action: expects(0, dumpOutput("system/status")),
},
{
Name: "connections",
Usage: "Report about connections to other devices",
Action: expects(0, dumpOutput("system/connections")),
},
{
Name: "usage",
Usage: "Show usage report",
Action: expects(0, dumpOutput("svc/report")),
},
},
}

View File

@@ -1,8 +1,4 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
// Copyright (C) 2014 Audrius Butkevičius
package main
@@ -12,83 +8,158 @@ import (
"io/ioutil"
"net/http"
"os"
"regexp"
"sort"
"strconv"
"strings"
"text/tabwriter"
"unicode"
"github.com/AudriusButkevicius/cli"
"github.com/syncthing/syncthing/lib/config"
"github.com/urfave/cli"
"github.com/syncthing/syncthing/lib/protocol"
)
func responseToBArray(response *http.Response) ([]byte, error) {
func responseToBArray(response *http.Response) []byte {
defer response.Body.Close()
bytes, err := ioutil.ReadAll(response.Body)
if err != nil {
return nil, err
die(err)
}
return bytes, response.Body.Close()
return bytes
}
func emptyPost(url string) cli.ActionFunc {
return func(c *cli.Context) error {
client := c.App.Metadata["client"].(*APIClient)
_, err := client.Post(url, "")
return err
func die(vals ...interface{}) {
if len(vals) > 1 || vals[0] != nil {
os.Stderr.WriteString(fmt.Sprintln(vals...))
os.Exit(1)
}
}
func dumpOutput(url string) cli.ActionFunc {
return func(c *cli.Context) error {
client := c.App.Metadata["client"].(*APIClient)
response, err := client.Get(url)
if err != nil {
return err
func wrappedHTTPPost(url string) func(c *cli.Context) {
return func(c *cli.Context) {
httpPost(c, url, "")
}
}
func prettyPrintJSON(json map[string]interface{}) {
writer := newTableWriter()
remap := make(map[string]interface{})
for k, v := range json {
key, ok := jsonAttributeLabels[k]
if !ok {
key = firstUpper(k)
}
return prettyPrintResponse(c, response)
remap[key] = v
}
}
func getConfig(c *APIClient) (config.Configuration, error) {
cfg := config.Configuration{}
response, err := c.Get("system/config")
if err != nil {
return cfg, err
jsonKeys := make([]string, 0, len(remap))
for key := range remap {
jsonKeys = append(jsonKeys, key)
}
bytes, err := responseToBArray(response)
if err != nil {
return cfg, err
}
err = json.Unmarshal(bytes, &cfg)
if err != nil {
return cfg, err
}
return cfg, nil
}
func expects(n int, actionFunc cli.ActionFunc) cli.ActionFunc {
return func(ctx *cli.Context) error {
if ctx.NArg() != n {
plural := ""
if n != 1 {
plural = "s"
}
return fmt.Errorf("expected %d argument%s, got %d", n, plural, ctx.NArg())
sort.Strings(jsonKeys)
for _, k := range jsonKeys {
var value string
rvalue := remap[k]
switch rvalue.(type) {
case int, int16, int32, int64, uint, uint16, uint32, uint64, float32, float64:
value = fmt.Sprintf("%.0f", rvalue)
default:
value = fmt.Sprint(rvalue)
}
return actionFunc(ctx)
if value == "" {
continue
}
fmt.Fprintln(writer, k+":\t"+value)
}
writer.Flush()
}
func firstUpper(str string) string {
for i, v := range str {
return string(unicode.ToUpper(v)) + str[i+1:]
}
return ""
}
func newTableWriter() *tabwriter.Writer {
writer := new(tabwriter.Writer)
writer.Init(os.Stdout, 0, 8, 0, '\t', 0)
return writer
}
func getMyID(c *cli.Context) string {
response := httpGet(c, "system/status")
data := make(map[string]interface{})
json.Unmarshal(responseToBArray(response), &data)
return data["myID"].(string)
}
func getConfig(c *cli.Context) config.Configuration {
response := httpGet(c, "system/config")
config := config.Configuration{}
json.Unmarshal(responseToBArray(response), &config)
return config
}
func setConfig(c *cli.Context, cfg config.Configuration) {
body, err := json.Marshal(cfg)
die(err)
response := httpPost(c, "system/config", string(body))
if response.StatusCode != 200 {
die("Unexpected status code", response.StatusCode)
}
}
func prettyPrintJSON(data interface{}) error {
enc := json.NewEncoder(os.Stdout)
enc.SetIndent("", " ")
return enc.Encode(data)
}
func prettyPrintResponse(c *cli.Context, response *http.Response) error {
bytes, err := responseToBArray(response)
func parseBool(input string) bool {
val, err := strconv.ParseBool(input)
if err != nil {
return err
die(input + " is not a valid value for a boolean")
}
var data interface{}
if err := json.Unmarshal(bytes, &data); err != nil {
return err
}
// TODO: Check flag for pretty print format
return prettyPrintJSON(data)
return val
}
func parseInt(input string) int {
val, err := strconv.ParseInt(input, 0, 64)
if err != nil {
die(input + " is not a valid value for an integer")
}
return int(val)
}
func parseUint(input string) int {
val, err := strconv.ParseUint(input, 0, 64)
if err != nil {
die(input + " is not a valid value for an unsigned integer")
}
return int(val)
}
func parsePort(input string) int {
port := parseUint(input)
if port < 1 || port > 65535 {
die(input + " is not a valid port\nExpected value between 1 and 65535")
}
return port
}
func validAddress(input string) {
tokens := strings.Split(input, ":")
if len(tokens) != 2 {
die(input + " is not a valid value for an address\nExpected format <ip or hostname>:<port>")
}
matched, err := regexp.MatchString("^[a-zA-Z0-9]+([-a-zA-Z0-9.]+[-a-zA-Z0-9]+)?$", tokens[0])
die(err)
if !matched {
die(input + " is not a valid value for an address\nExpected format <ip or hostname>:<port>")
}
parsePort(tokens[1])
}
func parseDeviceID(input string) protocol.DeviceID {
device, err := protocol.DeviceIDFromString(input)
if err != nil {
die(input + " is not a valid device id")
}
return device
}
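A few hypothetical calls to the helpers above, to illustrate what they accept; invalid input makes them call die(), which prints to stderr and exits:

// Hypothetical examples only; these helpers exit the process on bad input.
func exampleParsers() {
	fmt.Println(parseBool("true"))  // true
	fmt.Println(parseInt("-5"))     // -5
	fmt.Println(parsePort("8384"))  // 8384
	validAddress("127.0.0.1:8384")  // returns normally
	// validAddress("no-port") or parsePort("99999") would print an error and exit.
}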

View File

File diff suppressed because it is too large Load Diff

View File

@@ -1,204 +0,0 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"bytes"
"errors"
"io/ioutil"
"regexp"
"strings"
"sync"
raven "github.com/getsentry/raven-go"
"github.com/maruel/panicparse/stack"
)
const reportServer = "https://crash.syncthing.net/report/"
var loader = newGithubSourceCodeLoader()
func init() {
raven.SetSourceCodeLoader(loader)
}
var (
clients = make(map[string]*raven.Client)
clientsMut sync.Mutex
)
func sendReport(dsn, path string, report []byte) error {
pkt, err := parseReport(path, report)
if err != nil {
return err
}
clientsMut.Lock()
defer clientsMut.Unlock()
cli, ok := clients[dsn]
if !ok {
cli, err = raven.New(dsn)
if err != nil {
return err
}
clients[dsn] = cli
}
// The client sets release and such on the packet before sending, in the
// misguided idea that it knows this better than the packet we give
// it. So we copy the values from the packet to the client first...
cli.SetRelease(pkt.Release)
cli.SetEnvironment(pkt.Environment)
defer cli.Wait()
_, errC := cli.Capture(pkt, nil)
return <-errC
}
func parseReport(path string, report []byte) (*raven.Packet, error) {
parts := bytes.SplitN(report, []byte("\n"), 2)
if len(parts) != 2 {
return nil, errors.New("no first line")
}
version, err := parseVersion(string(parts[0]))
if err != nil {
return nil, err
}
report = parts[1]
foundPanic := false
var subjectLine []byte
for {
parts = bytes.SplitN(report, []byte("\n"), 2)
if len(parts) != 2 {
return nil, errors.New("no panic line found")
}
line := parts[0]
report = parts[1]
if foundPanic {
// The previous line was our "Panic at ..." header. We are now
// at the beginning of the real panic trace and this is our
// subject line.
subjectLine = line
break
} else if bytes.HasPrefix(line, []byte("Panic at")) {
foundPanic = true
}
}
r := bytes.NewReader(report)
ctx, err := stack.ParseDump(r, ioutil.Discard, false)
if err != nil {
return nil, err
}
// Lock the source code loader to the version we are processing here.
if version.commit != "" {
// We have a commit hash, so we know exactly which source to use
loader.LockWithVersion(version.commit)
} else if strings.HasPrefix(version.tag, "v") {
// Let's hope the tag is close enough
loader.LockWithVersion(version.tag)
} else {
// Last resort
loader.LockWithVersion("master")
}
defer loader.Unlock()
var trace raven.Stacktrace
for _, gr := range ctx.Goroutines {
if gr.First {
trace.Frames = make([]*raven.StacktraceFrame, len(gr.Stack.Calls))
for i, sc := range gr.Stack.Calls {
trace.Frames[len(trace.Frames)-1-i] = raven.NewStacktraceFrame(0, sc.Func.Name(), sc.SrcPath, sc.Line, 3, nil)
}
break
}
}
pkt := &raven.Packet{
Message: string(subjectLine),
Platform: "go",
Release: version.tag,
Environment: version.environment(),
Tags: raven.Tags{
raven.Tag{Key: "version", Value: version.version},
raven.Tag{Key: "tag", Value: version.tag},
raven.Tag{Key: "codename", Value: version.codename},
raven.Tag{Key: "runtime", Value: version.runtime},
raven.Tag{Key: "goos", Value: version.goos},
raven.Tag{Key: "goarch", Value: version.goarch},
raven.Tag{Key: "builder", Value: version.builder},
},
Extra: raven.Extra{
"url": reportServer + path,
},
Interfaces: []raven.Interface{&trace},
}
if version.commit != "" {
pkt.Tags = append(pkt.Tags, raven.Tag{Key: "commit", Value: version.commit})
}
return pkt, nil
}
// syncthing v1.1.4-rc.1+30-g6aaae618-dirty-crashrep "Erbium Earthworm" (go1.12.5 darwin-amd64) jb@kvin.kastelo.net 2019-05-23 16:08:14 UTC
var longVersionRE = regexp.MustCompile(`syncthing\s+(v[^\s]+)\s+"([^"]+)"\s\(([^\s]+)\s+([^-]+)-([^)]+)\)\s+([^\s]+)`)
type version struct {
version string // "v1.1.4-rc.1+30-g6aaae618-dirty-crashrep"
tag string // "v1.1.4-rc.1"
commit string // "6aaae618", blank when absent
codename string // "Erbium Earthworm"
runtime string // "go1.12.5"
goos string // "darwin"
goarch string // "amd64"
builder string // "jb@kvin.kastelo.net"
}
func (v version) environment() string {
if v.commit != "" {
return "Development"
}
if strings.Contains(v.tag, "-rc.") {
return "Candidate"
}
if strings.Contains(v.tag, "-") {
return "Beta"
}
return "Stable"
}
func parseVersion(line string) (version, error) {
m := longVersionRE.FindStringSubmatch(line)
if len(m) == 0 {
return version{}, errors.New("unintelligeble version string")
}
v := version{
version: m[1],
codename: m[2],
runtime: m[3],
goos: m[4],
goarch: m[5],
builder: m[6],
}
parts := strings.Split(v.version, "+")
v.tag = parts[0]
if len(parts) > 1 {
fields := strings.Split(parts[1], "-")
if len(fields) >= 2 && strings.HasPrefix(fields[1], "g") {
v.commit = fields[1][1:]
}
}
return v, nil
}

View File

@@ -1,64 +0,0 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"fmt"
"io/ioutil"
"testing"
)
func TestParseVersion(t *testing.T) {
cases := []struct {
longVersion string
parsed version
}{
{
longVersion: `syncthing v1.1.4-rc.1+30-g6aaae618-dirty-crashrep "Erbium Earthworm" (go1.12.5 darwin-amd64) jb@kvin.kastelo.net 2019-05-23 16:08:14 UTC`,
parsed: version{
version: "v1.1.4-rc.1+30-g6aaae618-dirty-crashrep",
tag: "v1.1.4-rc.1",
commit: "6aaae618",
codename: "Erbium Earthworm",
runtime: "go1.12.5",
goos: "darwin",
goarch: "amd64",
builder: "jb@kvin.kastelo.net",
},
},
}
for _, tc := range cases {
v, err := parseVersion(tc.longVersion)
if err != nil {
t.Error(err)
continue
}
if v != tc.parsed {
t.Error(v)
}
}
}
func TestParseReport(t *testing.T) {
bs, err := ioutil.ReadFile("_testdata/panic.log")
if err != nil {
t.Fatal(err)
}
pkt, err := parseReport("1/2/345", bs)
if err != nil {
t.Fatal(err)
}
bs, err = pkt.JSON()
if err != nil {
t.Fatal(err)
}
fmt.Printf("%s\n", bs)
}

View File

@@ -1,114 +0,0 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"bytes"
"fmt"
"io/ioutil"
"net/http"
"path/filepath"
"strings"
"sync"
"time"
)
const (
urlPrefix = "https://raw.githubusercontent.com/syncthing/syncthing/"
httpTimeout = 10 * time.Second
)
type githubSourceCodeLoader struct {
mut sync.Mutex
version string
cache map[string]map[string][][]byte // version -> file -> lines
client *http.Client
}
func newGithubSourceCodeLoader() *githubSourceCodeLoader {
return &githubSourceCodeLoader{
cache: make(map[string]map[string][][]byte),
client: &http.Client{Timeout: httpTimeout},
}
}
func (l *githubSourceCodeLoader) LockWithVersion(version string) {
l.mut.Lock()
l.version = version
if _, ok := l.cache[version]; !ok {
l.cache[version] = make(map[string][][]byte)
}
}
func (l *githubSourceCodeLoader) Unlock() {
l.mut.Unlock()
}
func (l *githubSourceCodeLoader) Load(filename string, line, context int) ([][]byte, int) {
filename = filepath.ToSlash(filename)
lines, ok := l.cache[l.version][filename]
if !ok {
// Cache whatever we managed to find (or nil if nothing, so we don't try again)
defer func() {
l.cache[l.version][filename] = lines
}()
knownPrefixes := []string{"/lib/", "/cmd/"}
var idx int
for _, pref := range knownPrefixes {
idx = strings.Index(filename, pref)
if idx >= 0 {
break
}
}
if idx == -1 {
return nil, 0
}
url := urlPrefix + l.version + filename[idx:]
resp, err := l.client.Get(url)
if err != nil || resp.StatusCode != http.StatusOK {
fmt.Println("Loading source:", err.Error())
return nil, 0
}
data, err := ioutil.ReadAll(resp.Body)
_ = resp.Body.Close()
if err != nil {
fmt.Println("Loading source:", err.Error())
return nil, 0
}
lines = bytes.Split(data, []byte{'\n'})
}
return getLineFromLines(lines, line, context)
}
func getLineFromLines(lines [][]byte, line, context int) ([][]byte, int) {
if lines == nil {
// cached error from ReadFile: return no lines
return nil, 0
}
line-- // stack trace lines are 1-indexed
start := line - context
var idx int
if start < 0 {
start = 0
idx = line
} else {
idx = context
}
end := line + context + 1
if line >= len(lines) {
return nil, 0
}
if end > len(lines) {
end = len(lines)
}
return lines[start:end], idx
}
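A quick illustration of getLineFromLines above (hypothetical, not part of the original file): stack-trace line numbers are 1-based, and the returned index points at the requested line inside the returned window.

// Hypothetical example: ask for line 3 with one line of context on each side.
func exampleGetLineFromLines() {
	lines := [][]byte{
		[]byte("package main"), // line 1
		[]byte("func a() {}"),  // line 2
		[]byte("func b() {}"),  // line 3
		[]byte("func c() {}"),  // line 4
	}
	window, idx := getLineFromLines(lines, 3, 1)
	// window holds lines 2-4, and window[idx] (idx == 1) is line 3.
	_, _ = window, idx
}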

View File

@@ -1,160 +0,0 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
// Command stcrashreceiver is a trivial HTTP server that allows two things:
//
// - uploading files (crash reports) named like a SHA256 hash using a PUT request
// - checking whether such file exists using a HEAD request
//
// Typically this should be deployed behind something that manages HTTPS.
package main
import (
"bytes"
"compress/gzip"
"flag"
"io"
"io/ioutil"
"log"
"net/http"
"os"
"path"
"path/filepath"
"strings"
)
const maxRequestSize = 1 << 20 // 1 MiB
func main() {
dir := flag.String("dir", ".", "Directory to store reports in")
dsn := flag.String("dsn", "", "Sentry DSN")
listen := flag.String("listen", ":22039", "HTTP listen address")
flag.Parse()
cr := &crashReceiver{
dir: *dir,
dsn: *dsn,
}
log.SetOutput(os.Stdout)
if err := http.ListenAndServe(*listen, cr); err != nil {
log.Fatalln("HTTP serve:", err)
}
}
type crashReceiver struct {
dir string
dsn string
}
func (r *crashReceiver) ServeHTTP(w http.ResponseWriter, req *http.Request) {
// The final path component should be a SHA256 hash in hex, so 64 hex
// characters. We don't care about case on the request but use lower
// case internally.
reportID := strings.ToLower(path.Base(req.URL.Path))
if len(reportID) != 64 {
http.Error(w, "Bad request", http.StatusBadRequest)
return
}
for _, c := range reportID {
if c >= 'a' && c <= 'f' {
continue
}
if c >= '0' && c <= '9' {
continue
}
http.Error(w, "Bad request", http.StatusBadRequest)
return
}
// The location of the report on disk, compressed
fullPath := filepath.Join(r.dir, r.dirFor(reportID), reportID) + ".gz"
switch req.Method {
case http.MethodGet:
r.serveGet(fullPath, w, req)
case http.MethodHead:
r.serveHead(fullPath, w, req)
case http.MethodPut:
r.servePut(reportID, fullPath, w, req)
default:
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
}
}
// serveGet responds to GET requests by serving the uncompressed report.
func (r *crashReceiver) serveGet(fullPath string, w http.ResponseWriter, _ *http.Request) {
fd, err := os.Open(fullPath)
if err != nil {
http.Error(w, "Not found", http.StatusNotFound)
return
}
defer fd.Close()
gr, err := gzip.NewReader(fd)
if err != nil {
http.Error(w, "Internal server error", http.StatusInternalServerError)
return
}
_, _ = io.Copy(w, gr) // best effort
}
// serveHead responds to HEAD requests by checking if the named report
// already exists in the system.
func (r *crashReceiver) serveHead(fullPath string, w http.ResponseWriter, _ *http.Request) {
if _, err := os.Lstat(fullPath); err != nil {
http.Error(w, "Not found", http.StatusNotFound)
}
}
// servePut accepts and stores the given report.
func (r *crashReceiver) servePut(reportID, fullPath string, w http.ResponseWriter, req *http.Request) {
// Ensure the destination directory exists
if err := os.MkdirAll(filepath.Dir(fullPath), 0755); err != nil {
log.Println("Creating directory:", err)
http.Error(w, "Internal server error", http.StatusInternalServerError)
return
}
// Read at most maxRequestSize of report data.
log.Println("Receiving report", reportID)
lr := io.LimitReader(req.Body, maxRequestSize)
bs, err := ioutil.ReadAll(lr)
if err != nil {
log.Println("Reading report:", err)
http.Error(w, "Internal server error", http.StatusInternalServerError)
return
}
// Compress the report for storage
buf := new(bytes.Buffer)
gw := gzip.NewWriter(buf)
_, _ = gw.Write(bs) // can't fail
gw.Close()
// Create an output file with the compressed report
err = ioutil.WriteFile(fullPath, buf.Bytes(), 0644)
if err != nil {
log.Println("Saving report:", err)
http.Error(w, "Internal server error", http.StatusInternalServerError)
return
}
// Send the report to Sentry
if r.dsn != "" {
go func() {
// There's no need for the client to have to wait for this part.
if err := sendReport(r.dsn, reportID, bs); err != nil {
log.Println("Failed to send report:", err)
}
}()
}
}
// 01234567890abcdef... => 01/23
func (r *crashReceiver) dirFor(base string) string {
return filepath.Join(base[0:2], base[2:4])
}
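For reference, a minimal client sketch for the receiver above, assuming it listens on the default :22039 and the report lives in panic.log. It names the report by its SHA256, checks for it with HEAD and uploads it with PUT, matching the doc comment at the top of the file.

// Minimal client sketch (assumptions: receiver on localhost:22039, report in panic.log).
package main

import (
	"bytes"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
)

func main() {
	report, err := ioutil.ReadFile("panic.log")
	if err != nil {
		log.Fatal(err)
	}
	sum := sha256.Sum256(report)
	id := hex.EncodeToString(sum[:])
	base := "http://localhost:22039/" // hypothetical local deployment

	// HEAD tells us whether this report has been uploaded before.
	if resp, err := http.Head(base + id); err == nil && resp.StatusCode == http.StatusOK {
		fmt.Println("report already known")
		return
	}

	// PUT stores the report under its hash.
	req, err := http.NewRequest(http.MethodPut, base+id, bytes.NewReader(report))
	if err != nil {
		log.Fatal(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("upload status:", resp.Status)
}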

View File

@@ -18,7 +18,6 @@ import (
"net"
"net/http"
"net/url"
"sort"
"strconv"
"strings"
"sync"
@@ -280,10 +279,6 @@ func (s *apiSrv) handleAnnounce(remote net.IP, deviceID protocol.DeviceID, addre
dbAddrs[i].Expires = expire
}
// The address slice must always be sorted for database merges to work
// properly.
sort.Sort(databaseAddressOrder(dbAddrs))
seen := now.UnixNano()
if s.repl != nil {
s.repl.send(key, dbAddrs, seen)
@@ -300,66 +295,28 @@ func certificateBytes(req *http.Request) []byte {
return req.TLS.PeerCertificates[0].Raw
}
var bs []byte
if hdr := req.Header.Get("X-SSL-Cert"); hdr != "" {
if strings.Contains(hdr, "%") {
// Nginx using $ssl_client_escaped_cert
// The certificate is in PEM format with url encoding.
// We need to decode for the PEM decoder
hdr, err := url.QueryUnescape(hdr)
if err != nil {
// Decoding failed
return nil
}
bs = []byte(hdr)
} else {
// Nginx using $ssl_client_cert
// The certificate is in PEM format but with spaces for newlines. We
// need to reinstate the newlines for the PEM decoder. But we need to
// leave the spaces in the BEGIN and END lines - the first and last
// space - alone.
bs = []byte(hdr)
firstSpace := bytes.Index(bs, []byte(" "))
lastSpace := bytes.LastIndex(bs, []byte(" "))
for i := firstSpace + 1; i < lastSpace; i++ {
if bs[i] == ' ' {
bs[i] = '\n'
}
bs := []byte(hdr)
// The certificate is in PEM format but with spaces for newlines. We
// need to reinstate the newlines for the PEM decoder. But we need to
// leave the spaces in the BEGIN and END lines - the first and last
// space - alone.
firstSpace := bytes.Index(bs, []byte(" "))
lastSpace := bytes.LastIndex(bs, []byte(" "))
for i := firstSpace + 1; i < lastSpace; i++ {
if bs[i] == ' ' {
bs[i] = '\n'
}
}
} else if hdr := req.Header.Get("X-Forwarded-Tls-Client-Cert"); hdr != "" {
// Traefik 2 passtlsclientcert
// The certificate is in PEM format with url encoding but without newlines
// and start/end statements. We need to decode, reinstate the newlines every 64
// character and add statements for the PEM decoder
hdr, err := url.QueryUnescape(hdr)
if err != nil {
block, _ := pem.Decode(bs)
if block == nil {
// Decoding failed
return nil
}
for i := 64; i < len(hdr); i += 65 {
hdr = hdr[:i] + "\n" + hdr[i:]
}
hdr = "-----BEGIN CERTIFICATE-----\n" + hdr
hdr = hdr + "\n-----END CERTIFICATE-----\n"
bs = []byte(hdr)
return block.Bytes
}
if bs == nil {
return nil
}
block, _ := pem.Decode(bs)
if block == nil {
// Decoding failed
return nil
}
return block.Bytes
return nil
}
// fixupAddresses checks the list of addresses, removing invalid ones and

View File

@@ -10,7 +10,6 @@
package main
import (
"log"
"sort"
"time"
@@ -264,15 +263,12 @@ func (s *levelDBStore) Stop() {
// chosen for any duplicates.
func merge(a, b DatabaseRecord) DatabaseRecord {
// Both lists must be sorted for this to work.
if !sort.IsSorted(databaseAddressOrder(a.Addresses)) {
log.Println("Warning: bug: addresses not correctly sorted in merge")
a.Addresses = sortedAddressCopy(a.Addresses)
}
if !sort.IsSorted(databaseAddressOrder(b.Addresses)) {
// no warning because this is the side we read from disk and it may
// legitimately predate correct sorting.
b.Addresses = sortedAddressCopy(b.Addresses)
}
sort.Slice(a.Addresses, func(i, j int) bool {
return a.Addresses[i].Address < a.Addresses[j].Address
})
sort.Slice(b.Addresses, func(i, j int) bool {
return b.Addresses[i].Address < b.Addresses[j].Address
})
res := DatabaseRecord{
Addresses: make([]DatabaseAddress, 0, len(a.Addresses)+len(b.Addresses)),
@@ -356,24 +352,3 @@ func expire(addrs []DatabaseAddress, now int64) []DatabaseAddress {
}
return addrs
}
func sortedAddressCopy(addrs []DatabaseAddress) []DatabaseAddress {
sorted := make([]DatabaseAddress, len(addrs))
copy(sorted, addrs)
sort.Sort(databaseAddressOrder(sorted))
return sorted
}
type databaseAddressOrder []DatabaseAddress
func (s databaseAddressOrder) Less(a, b int) bool {
return s[a].Address < s[b].Address
}
func (s databaseAddressOrder) Swap(a, b int) {
s[a], s[b] = s[b], s[a]
}
func (s databaseAddressOrder) Len() int {
return len(s)
}
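The replacement code above sorts both address lists with sort.Slice before merging. The merge body itself falls outside this hunk; the sketch below shows the general idea of walking two sorted lists in step, under the assumption that duplicates keep the later expiry (an assumption, not taken from this diff).

// Sketch only: merge two DatabaseAddress slices sorted by Address, keeping the
// later Expires when the same address appears in both (assumed semantics).
func mergeAddressesSketch(a, b []DatabaseAddress) []DatabaseAddress {
	res := make([]DatabaseAddress, 0, len(a)+len(b))
	i, j := 0, 0
	for i < len(a) && j < len(b) {
		switch {
		case a[i].Address < b[j].Address:
			res = append(res, a[i])
			i++
		case a[i].Address > b[j].Address:
			res = append(res, b[j])
			j++
		default: // same address on both sides; keep the one that expires last
			if a[i].Expires >= b[j].Expires {
				res = append(res, a[i])
			} else {
				res = append(res, b[j])
			}
			i++
			j++
		}
	}
	res = append(res, a[i:]...)
	return append(res, b[j:]...)
}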

View File

@@ -3,14 +3,12 @@
package main
import (
fmt "fmt"
_ "github.com/gogo/protobuf/gogoproto"
proto "github.com/gogo/protobuf/proto"
io "io"
math "math"
math_bits "math/bits"
)
import proto "github.com/gogo/protobuf/proto"
import fmt "fmt"
import math "math"
import _ "github.com/gogo/protobuf/gogoproto"
import io "io"
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
@@ -21,7 +19,7 @@ var _ = math.Inf
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
const _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package
type DatabaseRecord struct {
Addresses []DatabaseAddress `protobuf:"bytes,1,rep,name=addresses,proto3" json:"addresses"`
@@ -34,7 +32,7 @@ func (m *DatabaseRecord) Reset() { *m = DatabaseRecord{} }
func (m *DatabaseRecord) String() string { return proto.CompactTextString(m) }
func (*DatabaseRecord) ProtoMessage() {}
func (*DatabaseRecord) Descriptor() ([]byte, []int) {
return fileDescriptor_b90fe3356ea5df07, []int{0}
return fileDescriptor_database_0f49e029703a04f5, []int{0}
}
func (m *DatabaseRecord) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -44,15 +42,15 @@ func (m *DatabaseRecord) XXX_Marshal(b []byte, deterministic bool) ([]byte, erro
return xxx_messageInfo_DatabaseRecord.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
n, err := m.MarshalTo(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *DatabaseRecord) XXX_Merge(src proto.Message) {
xxx_messageInfo_DatabaseRecord.Merge(m, src)
func (dst *DatabaseRecord) XXX_Merge(src proto.Message) {
xxx_messageInfo_DatabaseRecord.Merge(dst, src)
}
func (m *DatabaseRecord) XXX_Size() int {
return m.Size()
@@ -73,7 +71,7 @@ func (m *ReplicationRecord) Reset() { *m = ReplicationRecord{} }
func (m *ReplicationRecord) String() string { return proto.CompactTextString(m) }
func (*ReplicationRecord) ProtoMessage() {}
func (*ReplicationRecord) Descriptor() ([]byte, []int) {
return fileDescriptor_b90fe3356ea5df07, []int{1}
return fileDescriptor_database_0f49e029703a04f5, []int{1}
}
func (m *ReplicationRecord) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -83,15 +81,15 @@ func (m *ReplicationRecord) XXX_Marshal(b []byte, deterministic bool) ([]byte, e
return xxx_messageInfo_ReplicationRecord.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
n, err := m.MarshalTo(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *ReplicationRecord) XXX_Merge(src proto.Message) {
xxx_messageInfo_ReplicationRecord.Merge(m, src)
func (dst *ReplicationRecord) XXX_Merge(src proto.Message) {
xxx_messageInfo_ReplicationRecord.Merge(dst, src)
}
func (m *ReplicationRecord) XXX_Size() int {
return m.Size()
@@ -111,7 +109,7 @@ func (m *DatabaseAddress) Reset() { *m = DatabaseAddress{} }
func (m *DatabaseAddress) String() string { return proto.CompactTextString(m) }
func (*DatabaseAddress) ProtoMessage() {}
func (*DatabaseAddress) Descriptor() ([]byte, []int) {
return fileDescriptor_b90fe3356ea5df07, []int{2}
return fileDescriptor_database_0f49e029703a04f5, []int{2}
}
func (m *DatabaseAddress) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -121,15 +119,15 @@ func (m *DatabaseAddress) XXX_Marshal(b []byte, deterministic bool) ([]byte, err
return xxx_messageInfo_DatabaseAddress.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
n, err := m.MarshalTo(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *DatabaseAddress) XXX_Merge(src proto.Message) {
xxx_messageInfo_DatabaseAddress.Merge(m, src)
func (dst *DatabaseAddress) XXX_Merge(src proto.Message) {
xxx_messageInfo_DatabaseAddress.Merge(dst, src)
}
func (m *DatabaseAddress) XXX_Size() int {
return m.Size()
@@ -145,34 +143,10 @@ func init() {
proto.RegisterType((*ReplicationRecord)(nil), "main.ReplicationRecord")
proto.RegisterType((*DatabaseAddress)(nil), "main.DatabaseAddress")
}
func init() { proto.RegisterFile("database.proto", fileDescriptor_b90fe3356ea5df07) }
var fileDescriptor_b90fe3356ea5df07 = []byte{
// 270 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x94, 0x90, 0x41, 0x4a, 0xc4, 0x30,
0x18, 0x85, 0x9b, 0x49, 0x1d, 0x99, 0x08, 0xa3, 0x06, 0x94, 0x20, 0x12, 0x4b, 0xdd, 0x74, 0xd5,
0x01, 0x5d, 0xb9, 0x74, 0xd0, 0x0b, 0xe4, 0x06, 0xe9, 0xe4, 0x77, 0x08, 0x3a, 0x4d, 0x49, 0x2a,
0xe8, 0x29, 0xf4, 0x58, 0x5d, 0xce, 0xd2, 0x95, 0x68, 0x7b, 0x11, 0x69, 0x26, 0x55, 0x14, 0x37,
0xb3, 0x7b, 0xdf, 0xff, 0xbf, 0x97, 0xbc, 0x84, 0x4c, 0x95, 0xac, 0x65, 0x21, 0x1d, 0xe4, 0x95,
0x35, 0xb5, 0xa1, 0xf1, 0x4a, 0xea, 0xf2, 0xe4, 0xdc, 0x42, 0x65, 0xdc, 0xcc, 0x8f, 0x8a, 0xc7,
0xbb, 0xd9, 0xd2, 0x2c, 0x8d, 0x07, 0xaf, 0x36, 0xd6, 0xf4, 0x05, 0x91, 0xe9, 0x4d, 0x48, 0x0b,
0x58, 0x18, 0xab, 0xe8, 0x15, 0x99, 0x48, 0xa5, 0x2c, 0x38, 0x07, 0x8e, 0xa1, 0x04, 0x67, 0x7b,
0x17, 0x47, 0x79, 0x7f, 0x62, 0x3e, 0x18, 0xaf, 0x37, 0xeb, 0x79, 0xdc, 0xbc, 0x9f, 0x45, 0xe2,
0xc7, 0x4d, 0x8f, 0xc9, 0x78, 0xa5, 0x7d, 0x6e, 0x94, 0xa0, 0x6c, 0x47, 0x04, 0xa2, 0x94, 0xc4,
0x0e, 0xa0, 0x64, 0x38, 0x41, 0x19, 0x16, 0x5e, 0x7f, 0x7b, 0x15, 0x8b, 0xfd, 0x34, 0x50, 0x5a,
0x93, 0x43, 0x01, 0xd5, 0x83, 0x5e, 0xc8, 0x5a, 0x9b, 0x32, 0x74, 0x3a, 0x20, 0xf8, 0x1e, 0x9e,
0x19, 0x4a, 0x50, 0x36, 0x11, 0xbd, 0xfc, 0xdd, 0x72, 0xb4, 0x55, 0xcb, 0x7f, 0xda, 0xa4, 0xb7,
0x64, 0xff, 0x4f, 0x8e, 0x32, 0xb2, 0x1b, 0x32, 0xe1, 0xde, 0x01, 0xfb, 0x0d, 0x3c, 0x55, 0xda,
0x86, 0x77, 0x62, 0x31, 0xe0, 0xfc, 0xb4, 0xf9, 0xe4, 0x51, 0xd3, 0x72, 0xb4, 0x6e, 0x39, 0xfa,
0x68, 0x39, 0x7a, 0xed, 0x78, 0xb4, 0xee, 0x78, 0xf4, 0xd6, 0xf1, 0xa8, 0x18, 0xfb, 0x3f, 0xbf,
0xfc, 0x0a, 0x00, 0x00, 0xff, 0xff, 0x7a, 0xa2, 0xf6, 0x1e, 0xb0, 0x01, 0x00, 0x00,
}
func (m *DatabaseRecord) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
n, err := m.MarshalTo(dAtA)
if err != nil {
return nil, err
}
@@ -180,51 +154,44 @@ func (m *DatabaseRecord) Marshal() (dAtA []byte, err error) {
}
func (m *DatabaseRecord) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *DatabaseRecord) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
var i int
_ = i
var l int
_ = l
if m.Missed != 0 {
i = encodeVarintDatabase(dAtA, i, uint64(m.Missed))
i--
dAtA[i] = 0x20
}
if m.Seen != 0 {
i = encodeVarintDatabase(dAtA, i, uint64(m.Seen))
i--
dAtA[i] = 0x18
}
if m.Misses != 0 {
i = encodeVarintDatabase(dAtA, i, uint64(m.Misses))
i--
dAtA[i] = 0x10
}
if len(m.Addresses) > 0 {
for iNdEx := len(m.Addresses) - 1; iNdEx >= 0; iNdEx-- {
{
size, err := m.Addresses[iNdEx].MarshalToSizedBuffer(dAtA[:i])
if err != nil {
return 0, err
}
i -= size
i = encodeVarintDatabase(dAtA, i, uint64(size))
}
i--
for _, msg := range m.Addresses {
dAtA[i] = 0xa
i++
i = encodeVarintDatabase(dAtA, i, uint64(msg.Size()))
n, err := msg.MarshalTo(dAtA[i:])
if err != nil {
return 0, err
}
i += n
}
}
return len(dAtA) - i, nil
if m.Misses != 0 {
dAtA[i] = 0x10
i++
i = encodeVarintDatabase(dAtA, i, uint64(m.Misses))
}
if m.Seen != 0 {
dAtA[i] = 0x18
i++
i = encodeVarintDatabase(dAtA, i, uint64(m.Seen))
}
if m.Missed != 0 {
dAtA[i] = 0x20
i++
i = encodeVarintDatabase(dAtA, i, uint64(m.Missed))
}
return i, nil
}
func (m *ReplicationRecord) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
n, err := m.MarshalTo(dAtA)
if err != nil {
return nil, err
}
@@ -232,48 +199,40 @@ func (m *ReplicationRecord) Marshal() (dAtA []byte, err error) {
}
func (m *ReplicationRecord) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *ReplicationRecord) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
var i int
_ = i
var l int
_ = l
if m.Seen != 0 {
i = encodeVarintDatabase(dAtA, i, uint64(m.Seen))
i--
dAtA[i] = 0x18
if len(m.Key) > 0 {
dAtA[i] = 0xa
i++
i = encodeVarintDatabase(dAtA, i, uint64(len(m.Key)))
i += copy(dAtA[i:], m.Key)
}
if len(m.Addresses) > 0 {
for iNdEx := len(m.Addresses) - 1; iNdEx >= 0; iNdEx-- {
{
size, err := m.Addresses[iNdEx].MarshalToSizedBuffer(dAtA[:i])
if err != nil {
return 0, err
}
i -= size
i = encodeVarintDatabase(dAtA, i, uint64(size))
}
i--
for _, msg := range m.Addresses {
dAtA[i] = 0x12
i++
i = encodeVarintDatabase(dAtA, i, uint64(msg.Size()))
n, err := msg.MarshalTo(dAtA[i:])
if err != nil {
return 0, err
}
i += n
}
}
if len(m.Key) > 0 {
i -= len(m.Key)
copy(dAtA[i:], m.Key)
i = encodeVarintDatabase(dAtA, i, uint64(len(m.Key)))
i--
dAtA[i] = 0xa
if m.Seen != 0 {
dAtA[i] = 0x18
i++
i = encodeVarintDatabase(dAtA, i, uint64(m.Seen))
}
return len(dAtA) - i, nil
return i, nil
}
func (m *DatabaseAddress) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
n, err := m.MarshalTo(dAtA)
if err != nil {
return nil, err
}
@@ -281,40 +240,32 @@ func (m *DatabaseAddress) Marshal() (dAtA []byte, err error) {
}
func (m *DatabaseAddress) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *DatabaseAddress) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
var i int
_ = i
var l int
_ = l
if m.Expires != 0 {
i = encodeVarintDatabase(dAtA, i, uint64(m.Expires))
i--
dAtA[i] = 0x10
}
if len(m.Address) > 0 {
i -= len(m.Address)
copy(dAtA[i:], m.Address)
i = encodeVarintDatabase(dAtA, i, uint64(len(m.Address)))
i--
dAtA[i] = 0xa
i++
i = encodeVarintDatabase(dAtA, i, uint64(len(m.Address)))
i += copy(dAtA[i:], m.Address)
}
return len(dAtA) - i, nil
if m.Expires != 0 {
dAtA[i] = 0x10
i++
i = encodeVarintDatabase(dAtA, i, uint64(m.Expires))
}
return i, nil
}
func encodeVarintDatabase(dAtA []byte, offset int, v uint64) int {
offset -= sovDatabase(v)
base := offset
for v >= 1<<7 {
dAtA[offset] = uint8(v&0x7f | 0x80)
v >>= 7
offset++
}
dAtA[offset] = uint8(v)
return base
return offset + 1
}
func (m *DatabaseRecord) Size() (n int) {
if m == nil {
@@ -379,7 +330,14 @@ func (m *DatabaseAddress) Size() (n int) {
}
func sovDatabase(x uint64) (n int) {
return (math_bits.Len64(x|1) + 6) / 7
for {
n++
x >>= 7
if x == 0 {
break
}
}
return n
}
func sozDatabase(x uint64) (n int) {
return sovDatabase(uint64((x << 1) ^ uint64((int64(x) >> 63))))
@@ -399,7 +357,7 @@ func (m *DatabaseRecord) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
wire |= uint64(b&0x7F) << shift
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
@@ -427,7 +385,7 @@ func (m *DatabaseRecord) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
msglen |= int(b&0x7F) << shift
msglen |= (int(b) & 0x7F) << shift
if b < 0x80 {
break
}
@@ -436,9 +394,6 @@ func (m *DatabaseRecord) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthDatabase
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthDatabase
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -461,7 +416,7 @@ func (m *DatabaseRecord) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
m.Misses |= int32(b&0x7F) << shift
m.Misses |= (int32(b) & 0x7F) << shift
if b < 0x80 {
break
}
@@ -480,7 +435,7 @@ func (m *DatabaseRecord) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
m.Seen |= int64(b&0x7F) << shift
m.Seen |= (int64(b) & 0x7F) << shift
if b < 0x80 {
break
}
@@ -499,7 +454,7 @@ func (m *DatabaseRecord) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
m.Missed |= int64(b&0x7F) << shift
m.Missed |= (int64(b) & 0x7F) << shift
if b < 0x80 {
break
}
@@ -513,9 +468,6 @@ func (m *DatabaseRecord) Unmarshal(dAtA []byte) error {
if skippy < 0 {
return ErrInvalidLengthDatabase
}
if (iNdEx + skippy) < 0 {
return ErrInvalidLengthDatabase
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
@@ -543,7 +495,7 @@ func (m *ReplicationRecord) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
wire |= uint64(b&0x7F) << shift
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
@@ -571,7 +523,7 @@ func (m *ReplicationRecord) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
stringLen |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
@@ -581,9 +533,6 @@ func (m *ReplicationRecord) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthDatabase
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthDatabase
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -603,7 +552,7 @@ func (m *ReplicationRecord) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
msglen |= int(b&0x7F) << shift
msglen |= (int(b) & 0x7F) << shift
if b < 0x80 {
break
}
@@ -612,9 +561,6 @@ func (m *ReplicationRecord) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthDatabase
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthDatabase
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -637,7 +583,7 @@ func (m *ReplicationRecord) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
m.Seen |= int64(b&0x7F) << shift
m.Seen |= (int64(b) & 0x7F) << shift
if b < 0x80 {
break
}
@@ -651,9 +597,6 @@ func (m *ReplicationRecord) Unmarshal(dAtA []byte) error {
if skippy < 0 {
return ErrInvalidLengthDatabase
}
if (iNdEx + skippy) < 0 {
return ErrInvalidLengthDatabase
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
@@ -681,7 +624,7 @@ func (m *DatabaseAddress) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
wire |= uint64(b&0x7F) << shift
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
@@ -709,7 +652,7 @@ func (m *DatabaseAddress) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
stringLen |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
@@ -719,9 +662,6 @@ func (m *DatabaseAddress) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthDatabase
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthDatabase
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -741,7 +681,7 @@ func (m *DatabaseAddress) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
m.Expires |= int64(b&0x7F) << shift
m.Expires |= (int64(b) & 0x7F) << shift
if b < 0x80 {
break
}
@@ -755,9 +695,6 @@ func (m *DatabaseAddress) Unmarshal(dAtA []byte) error {
if skippy < 0 {
return ErrInvalidLengthDatabase
}
if (iNdEx + skippy) < 0 {
return ErrInvalidLengthDatabase
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
@@ -773,7 +710,6 @@ func (m *DatabaseAddress) Unmarshal(dAtA []byte) error {
func skipDatabase(dAtA []byte) (n int, err error) {
l := len(dAtA)
iNdEx := 0
depth := 0
for iNdEx < l {
var wire uint64
for shift := uint(0); ; shift += 7 {
@@ -805,8 +741,10 @@ func skipDatabase(dAtA []byte) (n int, err error) {
break
}
}
return iNdEx, nil
case 1:
iNdEx += 8
return iNdEx, nil
case 2:
var length int
for shift := uint(0); ; shift += 7 {
@@ -823,34 +761,76 @@ func skipDatabase(dAtA []byte) (n int, err error) {
break
}
}
iNdEx += length
if length < 0 {
return 0, ErrInvalidLengthDatabase
}
iNdEx += length
return iNdEx, nil
case 3:
depth++
case 4:
if depth == 0 {
return 0, ErrUnexpectedEndOfGroupDatabase
for {
var innerWire uint64
var start int = iNdEx
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowDatabase
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
innerWire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
innerWireType := int(innerWire & 0x7)
if innerWireType == 4 {
break
}
next, err := skipDatabase(dAtA[start:])
if err != nil {
return 0, err
}
iNdEx = start + next
}
depth--
return iNdEx, nil
case 4:
return iNdEx, nil
case 5:
iNdEx += 4
return iNdEx, nil
default:
return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
}
if iNdEx < 0 {
return 0, ErrInvalidLengthDatabase
}
if depth == 0 {
return iNdEx, nil
}
}
return 0, io.ErrUnexpectedEOF
panic("unreachable")
}
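// For illustration, a minimal, self-contained sketch (hypothetical function
// name, standard library only) of the varint decoding pattern that the
// Unmarshal and skip functions above repeat inline: each byte contributes
// 7 bits, least-significant group first, and a set high bit means more
// bytes follow.
package main

import (
	"fmt"
	"io"
)

func decodeVarint(buf []byte) (x uint64, n int, err error) {
	for shift := uint(0); ; shift += 7 {
		if shift >= 64 {
			return 0, 0, fmt.Errorf("varint overflows 64 bits")
		}
		if n >= len(buf) {
			return 0, 0, io.ErrUnexpectedEOF
		}
		b := buf[n]
		n++
		x |= uint64(b&0x7F) << shift
		if b < 0x80 { // high bit clear: this was the last byte
			return x, n, nil
		}
	}
}

func main() {
	x, n, err := decodeVarint([]byte{0xAC, 0x02}) // 300 encoded as a varint
	fmt.Println(x, n, err)                        // 300 2 <nil>
}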
var (
ErrInvalidLengthDatabase = fmt.Errorf("proto: negative length found during unmarshaling")
ErrIntOverflowDatabase = fmt.Errorf("proto: integer overflow")
ErrUnexpectedEndOfGroupDatabase = fmt.Errorf("proto: unexpected end of group")
ErrInvalidLengthDatabase = fmt.Errorf("proto: negative length found during unmarshaling")
ErrIntOverflowDatabase = fmt.Errorf("proto: integer overflow")
)
func init() { proto.RegisterFile("database.proto", fileDescriptor_database_0f49e029703a04f5) }
var fileDescriptor_database_0f49e029703a04f5 = []byte{
// 270 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x94, 0x90, 0x41, 0x4a, 0xc4, 0x30,
0x18, 0x85, 0x9b, 0x49, 0x1d, 0x99, 0x08, 0xa3, 0x06, 0x94, 0x20, 0x12, 0x4b, 0xdd, 0x74, 0xd5,
0x01, 0x5d, 0xb9, 0x74, 0xd0, 0x0b, 0xe4, 0x06, 0xe9, 0xe4, 0x77, 0x08, 0x3a, 0x4d, 0x49, 0x2a,
0xe8, 0x29, 0xf4, 0x58, 0x5d, 0xce, 0xd2, 0x95, 0x68, 0x7b, 0x11, 0x69, 0x26, 0x55, 0x14, 0x37,
0xb3, 0x7b, 0xdf, 0xff, 0xbf, 0x97, 0xbc, 0x84, 0x4c, 0x95, 0xac, 0x65, 0x21, 0x1d, 0xe4, 0x95,
0x35, 0xb5, 0xa1, 0xf1, 0x4a, 0xea, 0xf2, 0xe4, 0xdc, 0x42, 0x65, 0xdc, 0xcc, 0x8f, 0x8a, 0xc7,
0xbb, 0xd9, 0xd2, 0x2c, 0x8d, 0x07, 0xaf, 0x36, 0xd6, 0xf4, 0x05, 0x91, 0xe9, 0x4d, 0x48, 0x0b,
0x58, 0x18, 0xab, 0xe8, 0x15, 0x99, 0x48, 0xa5, 0x2c, 0x38, 0x07, 0x8e, 0xa1, 0x04, 0x67, 0x7b,
0x17, 0x47, 0x79, 0x7f, 0x62, 0x3e, 0x18, 0xaf, 0x37, 0xeb, 0x79, 0xdc, 0xbc, 0x9f, 0x45, 0xe2,
0xc7, 0x4d, 0x8f, 0xc9, 0x78, 0xa5, 0x7d, 0x6e, 0x94, 0xa0, 0x6c, 0x47, 0x04, 0xa2, 0x94, 0xc4,
0x0e, 0xa0, 0x64, 0x38, 0x41, 0x19, 0x16, 0x5e, 0x7f, 0x7b, 0x15, 0x8b, 0xfd, 0x34, 0x50, 0x5a,
0x93, 0x43, 0x01, 0xd5, 0x83, 0x5e, 0xc8, 0x5a, 0x9b, 0x32, 0x74, 0x3a, 0x20, 0xf8, 0x1e, 0x9e,
0x19, 0x4a, 0x50, 0x36, 0x11, 0xbd, 0xfc, 0xdd, 0x72, 0xb4, 0x55, 0xcb, 0x7f, 0xda, 0xa4, 0xb7,
0x64, 0xff, 0x4f, 0x8e, 0x32, 0xb2, 0x1b, 0x32, 0xe1, 0xde, 0x01, 0xfb, 0x0d, 0x3c, 0x55, 0xda,
0x86, 0x77, 0x62, 0x31, 0xe0, 0xfc, 0xb4, 0xf9, 0xe4, 0x51, 0xd3, 0x72, 0xb4, 0x6e, 0x39, 0xfa,
0x68, 0x39, 0x7a, 0xed, 0x78, 0xb4, 0xee, 0x78, 0xf4, 0xd6, 0xf1, 0xa8, 0x18, 0xfb, 0x3f, 0xbf,
0xfc, 0x0a, 0x00, 0x00, 0xff, 0xff, 0x7a, 0xa2, 0xf6, 0x1e, 0xb0, 0x01, 0x00, 0x00,
}



@@ -1,4 +0,0 @@
[stdiscosrv]
title=Syncthing discovery server
description=Lets syncthing clients discover each other
ports=8443/tcp


@@ -1,3 +0,0 @@
# Default settings for syncthing-discosrv (stdiscosrv).
## Add Options here:
DISCOSRV_OPTS=


@@ -1,25 +0,0 @@
[Unit]
Description=Syncthing Discovery Server
After=network.target
Documentation=man:stdiscosrv(1)
[Service]
WorkingDirectory=/var/lib/syncthing-discosrv
EnvironmentFile=/etc/default/syncthing-discosrv
ExecStart=/usr/bin/stdiscosrv $DISCOSRV_OPTS
# Hardening
User=syncthing-discosrv
Group=syncthing
ProtectSystem=strict
ReadWritePaths=/var/lib/syncthing-discosrv
NoNewPrivileges=true
PrivateTmp=true
PrivateDevices=true
ProtectHome=true
SystemCallArchitectures=native
MemoryDenyWriteExecute=true
[Install]
WantedBy=multi-user.target
Alias=syncthing-discosrv.service


@@ -9,15 +9,17 @@ package main
import (
"crypto/tls"
"flag"
"fmt"
"log"
"net"
"net/http"
"os"
"runtime"
"strconv"
"strings"
"time"
"github.com/prometheus/client_golang/prometheus/promhttp"
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/tlsutil"
"github.com/syndtr/goleveldb/leveldb/opt"
@@ -63,11 +65,34 @@ var levelDBOptions = &opt.Options{
WriteBuffer: 32 << 20, // default 4<<20
}
var (
Version string
BuildStamp string
BuildUser string
BuildHost string
BuildDate time.Time
LongVersion string
)
func init() {
stamp, _ := strconv.Atoi(BuildStamp)
BuildDate = time.Unix(int64(stamp), 0)
date := BuildDate.UTC().Format("2006-01-02 15:04:05 MST")
LongVersion = fmt.Sprintf(`stdiscosrv %s (%s %s-%s) %s@%s %s`, Version, runtime.Version(), runtime.GOOS, runtime.GOARCH, BuildUser, BuildHost, date)
}
var (
debug = false
)
func main() {
const (
cleanIntv = 1 * time.Hour
statsIntv = 5 * time.Minute
)
var listen string
var dir string
var metricsListen string
@@ -89,18 +114,14 @@ func main() {
flag.StringVar(&metricsListen, "metrics-listen", "", "Metrics listen address")
flag.StringVar(&replicationPeers, "replicate", "", "Replication peers, id@address, comma separated")
flag.StringVar(&replicationListen, "replication-listen", ":19200", "Replication listen address")
showVersion := flag.Bool("version", false, "Show version")
flag.Parse()
log.Println(build.LongVersion)
if *showVersion {
return
}
log.Println(LongVersion)
cert, err := tls.LoadX509KeyPair(certFile, keyFile)
if err != nil {
log.Println("Failed to load keypair. Generating one, this might take a while...")
cert, err = tlsutil.NewCertificate(certFile, keyFile, "stdiscosrv", 20*365)
cert, err = tlsutil.NewCertificate(certFile, keyFile, "stdiscosrv")
if err != nil {
log.Fatalln("Failed to generate X509 key pair:", err)
}
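// For illustration, a minimal sketch of the version-string construction in the
// init() shown above: BuildStamp is an integer Unix time injected at link
// time, parsed with strconv.Atoi and formatted with Go's reference layout.
// The stamp and names below are made-up example values.
package main

import (
	"fmt"
	"runtime"
	"strconv"
	"time"
)

func main() {
	buildStamp := "1564515678" // hypothetical -ldflags "-X main.BuildStamp=..."
	stamp, _ := strconv.Atoi(buildStamp)
	date := time.Unix(int64(stamp), 0).UTC().Format("2006-01-02 15:04:05 MST")
	fmt.Printf("stdiscosrv v0.0.0 (%s %s-%s) builder@example %s\n",
		runtime.Version(), runtime.GOOS, runtime.GOARCH, date)
}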


@@ -1,4 +0,0 @@
#!/bin/bash
addgroup --system syncthing
adduser --system --home /var/lib/syncthing-discosrv --ingroup syncthing syncthing-discosrv


@@ -17,7 +17,6 @@ import (
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/discover"
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/protocol"
)
@@ -83,7 +82,7 @@ func checkServers(deviceID protocol.DeviceID, servers ...string) {
}
func checkServer(deviceID protocol.DeviceID, server string) checkResult {
disco, err := discover.NewGlobal(server, tls.Certificate{}, nil, events.NoopLogger)
disco, err := discover.NewGlobal(server, tls.Certificate{}, nil)
if err != nil {
return checkResult{error: err}
}


@@ -82,7 +82,7 @@ func generateOneFile(fd io.ReadSeeker, p1 string, s int64) error {
return err
}
os.Chmod(p1, os.FileMode(rand.Intn(0777)|0400))
_ = os.Chmod(p1, os.FileMode(rand.Intn(0777)|0400))
t := time.Now().Add(-time.Duration(rand.Intn(30*86400)) * time.Second)
return os.Chtimes(p1, t, t)


@@ -13,15 +13,11 @@ import (
"time"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/db/backend"
"github.com/syncthing/syncthing/lib/protocol"
)
func dump(ldb backend.Backend) {
it, err := ldb.NewPrefixIterator(nil)
if err != nil {
log.Fatal(err)
}
func dump(ldb *db.Lowlevel) {
it := ldb.NewIterator(nil, nil)
for it.Next() {
key := it.Key()
switch key[0] {


@@ -10,10 +10,8 @@ import (
"container/heap"
"encoding/binary"
"fmt"
"log"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/db/backend"
)
type SizedElement struct {
@@ -39,14 +37,11 @@ func (h *ElementHeap) Pop() interface{} {
return x
}
func dumpsize(ldb backend.Backend) {
func dumpsize(ldb *db.Lowlevel) {
h := &ElementHeap{}
heap.Init(h)
it, err := ldb.NewPrefixIterator(nil)
if err != nil {
log.Fatal(err)
}
it := ldb.NewIterator(nil, nil)
var ele SizedElement
for it.Next() {
key := it.Key()


@@ -10,10 +10,8 @@ import (
"bytes"
"encoding/binary"
"fmt"
"log"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/db/backend"
"github.com/syncthing/syncthing/lib/protocol"
)
@@ -33,7 +31,7 @@ type sequenceKey struct {
sequence uint64
}
func idxck(ldb backend.Backend) (success bool) {
func idxck(ldb *db.Lowlevel) (success bool) {
folders := make(map[uint32]string)
devices := make(map[uint32]string)
deviceToIDs := make(map[string]uint32)
@@ -44,10 +42,7 @@ func idxck(ldb backend.Backend) (success bool) {
var localDeviceKey uint32
success = true
it, err := ldb.NewPrefixIterator(nil)
if err != nil {
log.Fatal(err)
}
it := ldb.NewIterator(nil, nil)
for it.Next() {
key := it.Key()
switch key[0] {
@@ -172,8 +167,8 @@ func idxck(ldb backend.Backend) (success bool) {
if needsLocally(vl) {
_, ok := needs[gk]
if !ok {
dev := deviceToIDs[string(vl.Versions[0].Device)]
fi := fileInfos[fileInfoKey{gk.folder, dev, gk.name}]
dev, _ := deviceToIDs[string(vl.Versions[0].Device)]
fi, _ := fileInfos[fileInfoKey{gk.folder, dev, gk.name}]
if !fi.IsDeleted() && !fi.IsIgnored() {
fmt.Printf("Missing need entry for needed file %q, folder %q\n", gk.name, folder)
}


@@ -13,7 +13,7 @@ import (
"os"
"path/filepath"
"github.com/syncthing/syncthing/lib/db/backend"
"github.com/syncthing/syncthing/lib/db"
)
func main() {
@@ -30,7 +30,7 @@ func main() {
path = filepath.Join(defaultConfigDir(), "index-v0.14.0.db")
}
ldb, err := backend.OpenLevelDBRO(path)
ldb, err := db.OpenRO(path)
if err != nil {
log.Fatal(err)
}


@@ -193,9 +193,9 @@
<script>
angular.module('syncthing', [
])
.config(['$httpProvider', function($httpProvider) {
.config(function($httpProvider) {
$httpProvider.defaults.timeout = 5000;
}])
})
.filter('bytes', function() {
return function(bytes, precision) {
if (isNaN(parseFloat(bytes)) || !isFinite(bytes)) return '-';


@@ -7,13 +7,13 @@ package main
import (
"bytes"
"compress/gzip"
"context"
"crypto/tls"
"encoding/json"
"flag"
"fmt"
"io/ioutil"
"log"
"math/rand"
"mime"
"net"
"net/http"
@@ -29,7 +29,6 @@ import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
"github.com/syncthing/syncthing/cmd/strelaypoolsrv/auto"
"github.com/syncthing/syncthing/lib/rand"
"github.com/syncthing/syncthing/lib/relay/client"
"github.com/syncthing/syncthing/lib/sync"
"github.com/syncthing/syncthing/lib/tlsutil"
@@ -371,7 +370,10 @@ func handleGetRequest(w http.ResponseWriter, r *http.Request) {
mut.RUnlock()
// Shuffle
rand.Shuffle(relays)
for i := range relays {
j := rand.Intn(i + 1)
relays[i], relays[j] = relays[j], relays[i]
}
json.NewEncoder(w).Encode(map[string][]*relay{
"relays": relays,
@@ -481,7 +483,7 @@ func handleRelayTest(request request) {
if debug {
log.Println("Request for", request.relay)
}
if !client.TestRelay(context.TODO(), request.relay.uri, []tls.Certificate{testCert}, time.Second, 2*time.Second, 3) {
if !client.TestRelay(request.relay.uri, []tls.Certificate{testCert}, time.Second, 2*time.Second, 3) {
if debug {
log.Println("Test for relay", request.relay, "failed")
}
@@ -634,7 +636,7 @@ func createTestCertificate() tls.Certificate {
}
certFile, keyFile := filepath.Join(tmpDir, "cert.pem"), filepath.Join(tmpDir, "key.pem")
cert, err := tlsutil.NewCertificate(certFile, keyFile, "relaypoolsrv", 20*365)
cert, err := tlsutil.NewCertificate(certFile, keyFile, "relaypoolsrv")
if err != nil {
log.Fatalln("Failed to create test X509 key pair:", err)
}
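// For illustration, a minimal sketch of the two shuffle variants in the
// handleGetRequest hunk above: the manual loop is a Fisher-Yates shuffle, and
// the standard library's math/rand.Shuffle runs the same procedure given a
// swap callback (the lib/rand helper used above presumably wraps something
// similar; that is an assumption). Relay names are made up.
package main

import (
	"fmt"
	"math/rand"
)

func main() {
	relays := []string{"relay-a", "relay-b", "relay-c", "relay-d"}
	rand.Shuffle(len(relays), func(i, j int) {
		relays[i], relays[j] = relays[j], relays[i]
	})
	fmt.Println(relays) // some permutation of the four relays
}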


@@ -126,7 +126,7 @@ func refreshStats() {
go func(rel *relay) {
t0 := time.Now()
stats := fetchStats(rel)
duration := time.Since(t0).Seconds()
duration := time.Now().Sub(t0).Seconds()
result := "success"
if stats == nil {
result = "failed"


@@ -1,9 +0,0 @@
[strelaysrv]
title=Syncthing relay server
description=Proxies traffic of syncthing client behind firewalls
ports=22067/tcp
[strelaysrv-metrics]
title=Syncthing relay metrics
description=Provides metrics about the syncthing relay server
ports=22070/tcp


@@ -1,5 +0,0 @@
# Default settings for syncthing-relaysrv (strelaysrv).
NAT=true
## Add Options here:
RELAYSRV_OPTS=


@@ -1,25 +1,17 @@
[Unit]
Description=Syncthing Relay Server
Description=Syncthing relay server
After=network.target
Documentation=man:strelaysrv(1)
[Service]
WorkingDirectory=/var/lib/syncthing-relaysrv
EnvironmentFile=/etc/default/syncthing-relaysrv
ExecStart=/usr/bin/strelaysrv -nat=${NAT} $RELAYSRV_OPTS
User=strelaysrv
Group=strelaysrv
ExecStart=/usr/bin/strelaysrv
WorkingDirectory=/var/lib/strelaysrv
# Hardening
User=syncthing-relaysrv
Group=syncthing
ProtectSystem=strict
ReadWritePaths=/var/lib/syncthing-relaysrv
NoNewPrivileges=true
PrivateTmp=true
PrivateDevices=true
ProtectSystem=full
ProtectHome=true
SystemCallArchitectures=native
MemoryDenyWriteExecute=true
NoNewPrivileges=true
[Install]
WantedBy=multi-user.target
Alias=syncthing-relaysrv.service


@@ -14,13 +14,12 @@ import (
"os/signal"
"path/filepath"
"runtime"
"strconv"
"strings"
"sync/atomic"
"syscall"
"time"
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/relay/protocol"
"github.com/syncthing/syncthing/lib/tlsutil"
@@ -34,6 +33,24 @@ import (
syncthingprotocol "github.com/syncthing/syncthing/lib/protocol"
)
var (
Version string
BuildStamp string
BuildUser string
BuildHost string
BuildDate time.Time
LongVersion string
)
func init() {
stamp, _ := strconv.Atoi(BuildStamp)
BuildDate = time.Unix(int64(stamp), 0)
date := BuildDate.UTC().Format("2006-01-02 15:04:05 MST")
LongVersion = fmt.Sprintf(`strelaysrv %s (%s %s-%s) %s@%s %s`, Version, runtime.Version(), runtime.GOOS, runtime.GOARCH, BuildUser, BuildHost, date)
}
var (
listen string
debug bool
@@ -99,14 +116,8 @@ func main() {
flag.IntVar(&natTimeout, "nat-timeout", 10, "NAT discovery timeout in seconds")
flag.BoolVar(&pprofEnabled, "pprof", false, "Enable the built in profiling on the status server")
flag.IntVar(&networkBufferSize, "network-buffer", 2048, "Network buffer size (two of these per proxied connection)")
showVersion := flag.Bool("version", false, "Show version")
flag.Parse()
if *showVersion {
fmt.Println(build.LongVersion)
return
}
if extAddress == "" {
extAddress = listen
}
@@ -135,7 +146,7 @@ func main() {
}
}
log.Println(build.LongVersion)
log.Println(LongVersion)
maxDescriptors, err := osutil.MaximizeOpenFileLimit()
if maxDescriptors > 0 {
@@ -155,7 +166,7 @@ func main() {
cert, err := tls.LoadX509KeyPair(certFile, keyFile)
if err != nil {
log.Println("Failed to load keypair. Generating one, this might take a while...")
cert, err = tlsutil.NewCertificate(certFile, keyFile, "strelaysrv", 20*365)
cert, err = tlsutil.NewCertificate(certFile, keyFile, "strelaysrv")
if err != nil {
log.Fatalln("Failed to generate X509 key pair:", err)
}
@@ -183,7 +194,7 @@ func main() {
log.Println("ID:", id)
}
wrapper := config.Wrap("config", config.New(id), events.NoopLogger)
wrapper := config.Wrap("config", config.New(id))
wrapper.SetOptions(config.OptionsConfiguration{
NATLeaseM: natLease,
NATRenewalM: natRenewal,


@@ -1,4 +0,0 @@
#!/bin/bash
addgroup --system syncthing
adduser --system --home /var/lib/syncthing-relaysrv --ingroup syncthing syncthing-relaysrv


@@ -22,7 +22,7 @@ import (
var (
sessionMut = sync.RWMutex{}
activeSessions = make([]*session, 0)
pendingSessions = make(map[string]*session)
pendingSessions = make(map[string]*session, 0)
numProxies int64
bytesProxied int64
)


@@ -10,8 +10,6 @@ import (
"runtime"
"sync/atomic"
"time"
"github.com/syncthing/syncthing/lib/build"
)
var rc *rateCalculator
@@ -42,10 +40,10 @@ func getStatus(w http.ResponseWriter, r *http.Request) {
sessionMut.Lock()
// This can potentially be double the number of pending sessions, as each session has two keys, one for each side.
status["version"] = build.Version
status["buildHost"] = build.Host
status["buildUser"] = build.User
status["buildDate"] = build.Date
status["version"] = Version
status["buildHost"] = BuildHost
status["buildUser"] = BuildUser
status["buildDate"] = BuildDate
status["startTime"] = rc.startTime
status["uptimeSeconds"] = time.Since(rc.startTime) / time.Second
status["numPendingSessionKeys"] = len(pendingSessions)
@@ -88,9 +86,9 @@ func getStatus(w http.ResponseWriter, r *http.Request) {
}
type rateCalculator struct {
counter *int64 // atomic, must remain 64-bit aligned
rates []int64
prev int64
counter *int64
startTime time.Time
}


@@ -4,7 +4,6 @@ package main
import (
"bufio"
"context"
"crypto/tls"
"flag"
"log"
@@ -20,8 +19,6 @@ import (
)
func main() {
ctx := context.Background()
log.SetOutput(os.Stdout)
log.SetFlags(log.LstdFlags | log.Lshortfile)
@@ -79,7 +76,7 @@ func main() {
}()
for {
conn, err := client.JoinSession(ctx, <-recv)
conn, err := client.JoinSession(<-recv)
if err != nil {
log.Fatalln("Failed to join", err)
}
@@ -93,13 +90,13 @@ func main() {
log.Fatal(err)
}
invite, err := client.GetInvitationFromRelay(ctx, uri, id, []tls.Certificate{cert}, 10*time.Second)
invite, err := client.GetInvitationFromRelay(uri, id, []tls.Certificate{cert}, 10*time.Second)
if err != nil {
log.Fatal(err)
}
log.Println("Received invitation", invite)
conn, err := client.JoinSession(ctx, invite)
conn, err := client.JoinSession(invite)
if err != nil {
log.Fatalln("Failed to join", err)
}
@@ -107,7 +104,7 @@ func main() {
connectToStdio(stdin, conn)
log.Println("Finished", conn.RemoteAddr(), conn.LocalAddr())
} else if test {
if client.TestRelay(ctx, uri, []tls.Certificate{cert}, time.Second, 2*time.Second, 4) {
if client.TestRelay(uri, []tls.Certificate{cert}, time.Second, 2*time.Second, 4) {
log.Println("OK")
} else {
log.Println("FAIL")


@@ -1,57 +0,0 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"encoding/json"
"flag"
"os"
"sort"
"github.com/syncthing/syncthing/lib/upgrade"
)
const defaultURL = "https://api.github.com/repos/syncthing/syncthing/releases?per_page=25"
func main() {
url := flag.String("u", defaultURL, "GitHub releases url")
flag.Parse()
rels := upgrade.FetchLatestReleases(*url, "")
if rels == nil {
// An error was already logged
os.Exit(1)
}
sort.Sort(upgrade.SortByRelease(rels))
rels = filterForLatest(rels)
if err := json.NewEncoder(os.Stdout).Encode(rels); err != nil {
os.Exit(1)
}
}
// filterForLatest returns the latest stable and prerelease only. If the
// stable version is newer (comes first in the list) there is no need to go
// looking for a prerelease at all.
func filterForLatest(rels []upgrade.Release) []upgrade.Release {
var filtered []upgrade.Release
var havePre bool
for _, rel := range rels {
if !rel.Prerelease {
// We found a stable version, we're good now.
filtered = append(filtered, rel)
break
}
if rel.Prerelease && !havePre {
// We remember the first prerelease we find.
filtered = append(filtered, rel)
havePre = true
}
}
return filtered
}
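// For illustration, a minimal, self-contained model of the selection logic in
// filterForLatest above, using a simplified release type (the real
// upgrade.Release carries more fields): given a list sorted newest-first, it
// keeps the newest stable release plus, if one comes before it, the newest
// prerelease. Tags below are made up.
package main

import "fmt"

type release struct {
	Tag        string
	Prerelease bool
}

func latestStableAndPre(rels []release) []release {
	var filtered []release
	var havePre bool
	for _, rel := range rels {
		if !rel.Prerelease {
			filtered = append(filtered, rel) // newest stable found; done
			break
		}
		if rel.Prerelease && !havePre {
			filtered = append(filtered, rel) // remember the newest prerelease
			havePre = true
		}
	}
	return filtered
}

func main() {
	rels := []release{
		{Tag: "v1.2.0-rc.1", Prerelease: true},
		{Tag: "v1.1.2", Prerelease: false},
		{Tag: "v1.1.1", Prerelease: false},
	}
	fmt.Println(latestStableAndPre(rels)) // [{v1.2.0-rc.1 true} {v1.1.2 false}]
}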


@@ -0,0 +1,69 @@
// Copyright (C) 2015 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"encoding/json"
"io"
"github.com/syncthing/syncthing/lib/events"
)
// The auditService subscribes to events and writes these in JSON format, one
// event per line, to the specified writer.
type auditService struct {
w io.Writer // audit destination
stop chan struct{} // signals time to stop
started chan struct{} // signals startup complete
stopped chan struct{} // signals stop complete
}
func newAuditService(w io.Writer) *auditService {
return &auditService{
w: w,
stop: make(chan struct{}),
started: make(chan struct{}),
stopped: make(chan struct{}),
}
}
// Serve runs the audit service.
func (s *auditService) Serve() {
defer close(s.stopped)
sub := events.Default.Subscribe(events.AllEvents)
defer events.Default.Unsubscribe(sub)
enc := json.NewEncoder(s.w)
// We're ready to start processing events.
close(s.started)
for {
select {
case ev := <-sub.C():
enc.Encode(ev)
case <-s.stop:
return
}
}
}
// Stop stops the audit service.
func (s *auditService) Stop() {
close(s.stop)
}
// WaitForStart returns once the audit service is ready to receive events, or
// immediately if it's already running.
func (s *auditService) WaitForStart() {
<-s.started
}
// WaitForStop returns once the audit service has stopped.
// (Needed by the tests.)
func (s *auditService) WaitForStop() {
<-s.stopped
}
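// For illustration, a minimal sketch of how the audit service above might be
// wired up, assuming it sits in the same package so newAuditService resolves;
// the file name and event payload are hypothetical, and in the real program
// this wiring lives in the startup code rather than a helper like this.
package main

import (
	"os"

	"github.com/syncthing/syncthing/lib/events"
)

func exampleAuditSetup() {
	fd, err := os.Create("audit-example.log") // hypothetical destination
	if err != nil {
		panic(err)
	}

	svc := newAuditService(fd)
	go svc.Serve()
	svc.WaitForStart() // events logged from here on are captured

	events.Default.Log(events.ConfigSaved, "example event")

	svc.Stop()
	svc.WaitForStop()
	fd.Close()
}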


@@ -4,7 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package syncthing
package main
import (
"bytes"
@@ -17,32 +17,27 @@ import (
func TestAuditService(t *testing.T) {
buf := new(bytes.Buffer)
evLogger := events.NewLogger()
go evLogger.Serve()
defer evLogger.Stop()
sub := evLogger.Subscribe(events.AllEvents)
defer sub.Unsubscribe()
service := newAuditService(buf)
// Event sent before start, will not be logged
evLogger.Log(events.ConfigSaved, "the first event")
// Make sure the event goes through before creating the service
<-sub.C()
events.Default.Log(events.ConfigSaved, "the first event")
service := newAuditService(buf, evLogger)
go service.Serve()
service.WaitForStart()
// Event that should end up in the audit log
evLogger.Log(events.ConfigSaved, "the second event")
events.Default.Log(events.ConfigSaved, "the second event")
// We need to give the events time to arrive, since the channels are buffered etc.
time.Sleep(10 * time.Millisecond)
service.Stop()
service.WaitForStop()
// This event should not be logged, since we have stopped.
evLogger.Log(events.ConfigSaved, "the third event")
events.Default.Log(events.ConfigSaved, "the third event")
result := buf.String()
result := string(buf.Bytes())
t.Log(result)
if strings.Contains(result, "first event") {


@@ -22,15 +22,11 @@ func init() {
panic("Couldn't find block profiler")
}
l.Debugln("Starting block profiling")
go func() {
err := saveBlockingProfiles(profiler) // Only returns on error
l.Warnln("Block profiler failed:", err)
panic("Block profiler failed")
}()
go saveBlockingProfiles(profiler)
}
}
func saveBlockingProfiles(profiler *pprof.Profile) error {
func saveBlockingProfiles(profiler *pprof.Profile) {
runtime.SetBlockProfileRate(1)
t0 := time.Now()
@@ -39,16 +35,16 @@ func saveBlockingProfiles(profiler *pprof.Profile) error {
fd, err := os.Create(fmt.Sprintf("block-%05d-%07d.pprof", syscall.Getpid(), startms))
if err != nil {
return err
panic(err)
}
err = profiler.WriteTo(fd, 0)
if err != nil {
return err
panic(err)
}
err = fd.Close()
if err != nil {
return err
panic(err)
}
}
return nil
}


@@ -6,8 +6,8 @@
//+build noupgrade
package build
package main
func init() {
Tags = append(Tags, "noupgrade")
BuildTags = append(BuildTags, "noupgrade")
}


@@ -6,8 +6,8 @@
//+build race
package build
package main
func init() {
Tags = append(Tags, "race")
BuildTags = append(BuildTags, "race")
}


@@ -4,7 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package syncthing
package main
import (
"math"


@@ -6,7 +6,7 @@
//+build solaris
package syncthing
package main
import (
"encoding/binary"


@@ -6,7 +6,7 @@
//+build !windows,!solaris
package syncthing
package main
import "syscall"
import "time"


@@ -6,7 +6,7 @@
//+build windows
package syncthing
package main
import "syscall"
import "time"


@@ -1,152 +0,0 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"bytes"
"context"
"fmt"
"io/ioutil"
"net/http"
"os"
"path/filepath"
"sort"
"strings"
"time"
"github.com/syncthing/syncthing/lib/sha256"
)
const (
headRequestTimeout = 10 * time.Second
putRequestTimeout = time.Minute
)
// uploadPanicLogs attempts to upload all the panic logs in the named
// directory to the crash reporting server at urlBase. Uploads are attempted
// with the newest log first.
//
// This can block for a long time. The context can set a final deadline
// for this.
func uploadPanicLogs(ctx context.Context, urlBase, dir string) {
files, err := filepath.Glob(filepath.Join(dir, "panic-*.log"))
if err != nil {
l.Warnln("Failed to list panic logs:", err)
return
}
sort.Sort(sort.Reverse(sort.StringSlice(files)))
for _, file := range files {
if strings.Contains(file, ".reported.") {
// We've already sent this file. It'll be cleaned out at some
// point.
continue
}
if err := uploadPanicLog(ctx, urlBase, file); err != nil {
l.Warnln("Reporting crash:", err)
} else {
// Rename the log so we don't have to try to report it again. This
// succeeds, or it does not. There is no point complaining about it.
_ = os.Rename(file, strings.Replace(file, ".log", ".reported.log", 1))
}
}
}
// uploadPanicLog attempts to upload the named panic log to the crash
// reporting server at urlBase. The panic ID is constructed as the sha256 of
// the log contents. A HEAD request is made to see if the log has already
// been reported. If not, a PUT is made with the log contents.
func uploadPanicLog(ctx context.Context, urlBase, file string) error {
data, err := ioutil.ReadFile(file)
if err != nil {
return err
}
// Remove log lines, for privacy.
data = filterLogLines(data)
hash := fmt.Sprintf("%x", sha256.Sum256(data))
l.Infof("Reporting crash found in %s (report ID %s) ...\n", filepath.Base(file), hash[:8])
url := fmt.Sprintf("%s/%s", urlBase, hash)
headReq, err := http.NewRequest(http.MethodHead, url, nil)
if err != nil {
return err
}
// Set a reasonable timeout on the HEAD request
headCtx, headCancel := context.WithTimeout(ctx, headRequestTimeout)
defer headCancel()
headReq = headReq.WithContext(headCtx)
resp, err := http.DefaultClient.Do(headReq)
if err != nil {
return err
}
resp.Body.Close()
if resp.StatusCode == http.StatusOK {
// It's known, we're done
return nil
}
putReq, err := http.NewRequest(http.MethodPut, url, bytes.NewReader(data))
if err != nil {
return err
}
// Set a reasonable timeout on the PUT request
putCtx, putCancel := context.WithTimeout(ctx, putRequestTimeout)
defer putCancel()
putReq = putReq.WithContext(putCtx)
resp, err = http.DefaultClient.Do(putReq)
if err != nil {
return err
}
resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return fmt.Errorf("upload: %s", resp.Status)
}
return nil
}
// filterLogLines returns the data without any log lines between the first
// line and the panic trace. This is done in-place: the original data slice
// is destroyed.
func filterLogLines(data []byte) []byte {
filtered := data[:0]
matched := false
for _, line := range bytes.Split(data, []byte("\n")) {
switch {
case !matched && bytes.HasPrefix(line, []byte("Panic ")):
// This begins the panic trace, set the matched flag and append.
matched = true
fallthrough
case len(filtered) == 0 || matched:
// This is the first line or inside the panic trace.
if len(filtered) > 0 {
// We add the newline before rather than after because
// bytes.Split sees the \n as a *separator* and not a line
// ender, so it will generate a last empty line that we
// don't really want. (We want to keep blank lines in the
// middle of the trace though.)
filtered = append(filtered, '\n')
}
// Remove the device ID prefix. The "plus two" stuff is because
// the line will look like "[foo] whatever" and the end variable
// will end up pointing at the ] and we want to step over that
// and the following space.
if end := bytes.Index(line, []byte("]")); end > 1 && end < len(line)-2 && bytes.HasPrefix(line, []byte("[")) {
line = line[end+2:]
}
filtered = append(filtered, line...)
}
}
return filtered
}


@@ -1,37 +0,0 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"bytes"
"testing"
)
func TestFilterLogLines(t *testing.T) {
in := []byte(`[ABCD123] syncthing version whatever
here is more log data
and more
...
and some more
yet more
Panic detected at like right now
here is panic data
and yet more panic stuff
`)
filtered := []byte(`syncthing version whatever
Panic detected at like right now
here is panic data
and yet more panic stuff
`)
result := filterLogLines(in)
if !bytes.Equal(result, filtered) {
t.Logf("%q\n", result)
t.Error("it should have been filtered")
}
}


@@ -7,9 +7,22 @@
package main
import (
"os"
"strings"
"github.com/syncthing/syncthing/lib/logger"
)
var (
l = logger.DefaultLogger.NewFacility("main", "Main package")
l = logger.DefaultLogger.NewFacility("main", "Main package")
httpl = logger.DefaultLogger.NewFacility("http", "REST API")
)
func shouldDebugHTTP() bool {
return l.ShouldDebug("http")
}
func init() {
l.SetDebug("main", strings.Contains(os.Getenv("STTRACE"), "main") || os.Getenv("STTRACE") == "all")
l.SetDebug("http", strings.Contains(os.Getenv("STTRACE"), "http") || os.Getenv("STTRACE") == "all")
}


File diff suppressed because it is too large.


@@ -4,7 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package api
package main
import (
"bytes"
@@ -20,7 +20,7 @@ import (
"github.com/syncthing/syncthing/lib/rand"
"github.com/syncthing/syncthing/lib/sync"
"golang.org/x/crypto/bcrypt"
ldap "gopkg.in/ldap.v2"
"gopkg.in/ldap.v2"
)
var (
@@ -28,14 +28,14 @@ var (
sessionsMut = sync.NewMutex()
)
func emitLoginAttempt(success bool, username string, evLogger events.Logger) {
evLogger.Log(events.LoginAttempt, map[string]interface{}{
func emitLoginAttempt(success bool, username string) {
events.Default.Log(events.LoginAttempt, map[string]interface{}{
"success": success,
"username": username,
})
}
func basicAuthAndSessionMiddleware(cookieName string, guiCfg config.GUIConfiguration, ldapCfg config.LDAPConfiguration, next http.Handler, evLogger events.Logger) http.Handler {
func basicAuthAndSessionMiddleware(cookieName string, guiCfg config.GUIConfiguration, ldapCfg config.LDAPConfiguration, next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if guiCfg.IsValidAPIKey(r.Header.Get("X-API-Key")) {
next.ServeHTTP(w, r)
@@ -53,7 +53,7 @@ func basicAuthAndSessionMiddleware(cookieName string, guiCfg config.GUIConfigura
}
}
l.Debugln("Sessionless HTTP request with authentication; this is expensive.")
httpl.Debugln("Sessionless HTTP request with authentication; this is expensive.")
error := func() {
time.Sleep(time.Duration(rand.Intn(100)+100) * time.Millisecond)
@@ -80,10 +80,11 @@ func basicAuthAndSessionMiddleware(cookieName string, guiCfg config.GUIConfigura
return
}
authOk := false
username := string(fields[0])
password := string(fields[1])
authOk := auth(username, password, guiCfg, ldapCfg)
authOk = auth(username, password, guiCfg, ldapCfg)
if !authOk {
usernameIso := string(iso88591ToUTF8([]byte(username)))
passwordIso := string(iso88591ToUTF8([]byte(password)))
@@ -94,7 +95,7 @@ func basicAuthAndSessionMiddleware(cookieName string, guiCfg config.GUIConfigura
}
if !authOk {
emitLoginAttempt(false, username, evLogger)
emitLoginAttempt(false, username)
error()
return
}
@@ -109,7 +110,7 @@ func basicAuthAndSessionMiddleware(cookieName string, guiCfg config.GUIConfigura
MaxAge: 0,
})
emitLoginAttempt(true, username, evLogger)
emitLoginAttempt(true, username)
next.ServeHTTP(w, r)
})
}


@@ -4,7 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package api
package main
import (
"testing"
@@ -19,8 +19,6 @@ func init() {
}
func TestStaticAuthOK(t *testing.T) {
t.Parallel()
ok := authStatic("user", "pass", "user", string(passwordHashBytes))
if !ok {
t.Fatalf("should pass auth")
@@ -28,8 +26,6 @@ func TestStaticAuthOK(t *testing.T) {
}
func TestSimpleAuthUsernameFail(t *testing.T) {
t.Parallel()
ok := authStatic("userWRONG", "pass", "user", string(passwordHashBytes))
if ok {
t.Fatalf("should fail auth")
@@ -37,8 +33,6 @@ func TestSimpleAuthUsernameFail(t *testing.T) {
}
func TestStaticAuthPasswordFail(t *testing.T) {
t.Parallel()
ok := authStatic("user", "passWRONG", "user", string(passwordHashBytes))
if ok {
t.Fatalf("should fail auth")

cmd/syncthing/gui_csrf.go (new file, 142 lines)

@@ -0,0 +1,142 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"bufio"
"fmt"
"net/http"
"os"
"strings"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/rand"
"github.com/syncthing/syncthing/lib/sync"
)
// csrfTokens is a list of valid tokens. It is sorted so that the most
// recently used token is first in the list. New tokens are added to the front
// of the list (as it is the most recently used at that time). The list is
// pruned to a maximum of maxCsrfTokens, throwing away the least recently used
// tokens.
var csrfTokens []string
var csrfMut = sync.NewMutex()
const maxCsrfTokens = 25
// Check for CSRF token on /rest/ URLs. If a correct one is not given, reject
// the request with 403. For / and /index.html, set a new CSRF cookie if none
// is currently set.
func csrfMiddleware(unique string, prefix string, cfg config.GUIConfiguration, next http.Handler) http.Handler {
loadCsrfTokens()
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Allow requests carrying a valid API key
if cfg.IsValidAPIKey(r.Header.Get("X-API-Key")) {
// Set the access-control-allow-origin header for CORS requests
// since a valid API key has been provided
w.Header().Add("Access-Control-Allow-Origin", "*")
next.ServeHTTP(w, r)
return
}
if strings.HasPrefix(r.URL.Path, "/rest/debug") {
// Debugging functions are only available when explicitly
// enabled, and can be accessed without a CSRF token
next.ServeHTTP(w, r)
return
}
// Allow requests for anything not under the protected path prefix,
// and set a CSRF cookie if there isn't already a valid one.
if !strings.HasPrefix(r.URL.Path, prefix) {
cookie, err := r.Cookie("CSRF-Token-" + unique)
if err != nil || !validCsrfToken(cookie.Value) {
httpl.Debugln("new CSRF cookie in response to request for", r.URL)
cookie = &http.Cookie{
Name: "CSRF-Token-" + unique,
Value: newCsrfToken(),
}
http.SetCookie(w, cookie)
}
next.ServeHTTP(w, r)
return
}
// Verify the CSRF token
token := r.Header.Get("X-CSRF-Token-" + unique)
if !validCsrfToken(token) {
http.Error(w, "CSRF Error", 403)
return
}
next.ServeHTTP(w, r)
})
}
func validCsrfToken(token string) bool {
csrfMut.Lock()
defer csrfMut.Unlock()
for i, t := range csrfTokens {
if t == token {
if i > 0 {
// Move this token to the head of the list. Copy the tokens at
// the front one step to the right and then replace the token
// at the head.
copy(csrfTokens[1:], csrfTokens[:i+1])
csrfTokens[0] = token
}
return true
}
}
return false
}
func newCsrfToken() string {
token := rand.String(32)
csrfMut.Lock()
csrfTokens = append([]string{token}, csrfTokens...)
if len(csrfTokens) > maxCsrfTokens {
csrfTokens = csrfTokens[:maxCsrfTokens]
}
defer csrfMut.Unlock()
saveCsrfTokens()
return token
}
func saveCsrfTokens() {
// We're ignoring errors in here. It's not super critical and there's
// nothing relevant we can do about them anyway...
name := locations[locCsrfTokens]
f, err := osutil.CreateAtomic(name)
if err != nil {
return
}
for _, t := range csrfTokens {
fmt.Fprintln(f, t)
}
f.Close()
}
func loadCsrfTokens() {
f, err := os.Open(locations[locCsrfTokens])
if err != nil {
return
}
defer f.Close()
s := bufio.NewScanner(f)
for s.Scan() {
csrfTokens = append(csrfTokens, s.Text())
}
}


@@ -4,7 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package api
package main
import (
"bytes"
@@ -23,25 +23,21 @@ import (
"github.com/syncthing/syncthing/lib/sync"
)
const themePrefix = "theme-assets/"
type staticsServer struct {
assetDir string
assets map[string][]byte
availableThemes []string
mut sync.RWMutex
theme string
lastThemeChange time.Time
mut sync.RWMutex
theme string
}
func newStaticsServer(theme, assetDir string) *staticsServer {
s := &staticsServer{
assetDir: assetDir,
assets: auto.Assets(),
mut: sync.NewRWMutex(),
theme: theme,
lastThemeChange: time.Now().UTC(),
assetDir: assetDir,
assets: auto.Assets(),
mut: sync.NewRWMutex(),
theme: theme,
}
seen := make(map[string]struct{})
@@ -90,23 +86,8 @@ func (s *staticsServer) serveAsset(w http.ResponseWriter, r *http.Request) {
s.mut.RLock()
theme := s.theme
modificationTime := s.lastThemeChange
s.mut.RUnlock()
// If path starts with special prefix, get theme and file from path
if strings.HasPrefix(file, themePrefix) {
path := file[len(themePrefix):]
i := strings.IndexRune(path, '/')
if i == -1 {
http.NotFound(w, r)
return
}
theme = path[:i]
file = path[i+1:]
}
// Check for an override for the current theme.
if s.assetDir != "" {
p := filepath.Join(s.assetDir, theme, filepath.FromSlash(file))
@@ -144,12 +125,14 @@ func (s *staticsServer) serveAsset(w http.ResponseWriter, r *http.Request) {
}
}
etag := fmt.Sprintf("%d", modificationTime.Unix())
w.Header().Set("Last-Modified", modificationTime.Format(http.TimeFormat))
etag := fmt.Sprintf("%d", auto.Generated)
modified := time.Unix(auto.Generated, 0).UTC()
w.Header().Set("Last-Modified", modified.Format(http.TimeFormat))
w.Header().Set("Etag", etag)
if t, err := time.Parse(http.TimeFormat, r.Header.Get("If-Modified-Since")); err == nil {
if modificationTime.Equal(t) || modificationTime.Before(t) {
if modified.Equal(t) || modified.Before(t) {
w.WriteHeader(http.StatusNotModified)
return
}
@@ -216,7 +199,6 @@ func (s *staticsServer) mimeTypeForFile(file string) string {
func (s *staticsServer) setTheme(theme string) {
s.mut.Lock()
s.theme = theme
s.lastThemeChange = time.Now().UTC()
s.mut.Unlock()
}


@@ -4,7 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package api
package main
import (
"bytes"
@@ -18,7 +18,6 @@ import (
"net/http/httptest"
"os"
"path/filepath"
"runtime"
"strconv"
"strings"
"testing"
@@ -28,88 +27,54 @@ import (
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/locations"
"github.com/syncthing/syncthing/lib/model"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/sync"
"github.com/syncthing/syncthing/lib/tlsutil"
"github.com/syncthing/syncthing/lib/ur"
"github.com/thejerf/suture"
)
var (
confDir = filepath.Join("testdata", "config")
token = filepath.Join(confDir, "csrftokens.txt")
)
func TestMain(m *testing.M) {
orig := locations.GetBaseDir(locations.ConfigBaseDir)
locations.SetBaseDir(locations.ConfigBaseDir, confDir)
exitCode := m.Run()
locations.SetBaseDir(locations.ConfigBaseDir, orig)
os.Exit(exitCode)
}
func TestCSRFToken(t *testing.T) {
t.Parallel()
t1 := newCsrfToken()
t2 := newCsrfToken()
max := 250
int := 5
if testing.Short() {
max = 20
int = 2
}
m := newCsrfManager("unique", "prefix", config.GUIConfiguration{}, nil, "")
t1 := m.newToken()
t2 := m.newToken()
t3 := m.newToken()
if !m.validToken(t3) {
t3 := newCsrfToken()
if !validCsrfToken(t3) {
t.Fatal("t3 should be valid")
}
for i := 0; i < max; i++ {
if i%int == 0 {
for i := 0; i < 250; i++ {
if i%5 == 0 {
// t1 and t2 should remain valid by virtue of us checking them now
// and then.
if !m.validToken(t1) {
if !validCsrfToken(t1) {
t.Fatal("t1 should be valid at iteration", i)
}
if !m.validToken(t2) {
if !validCsrfToken(t2) {
t.Fatal("t2 should be valid at iteration", i)
}
}
// The newly generated token is always valid
t4 := m.newToken()
if !m.validToken(t4) {
t4 := newCsrfToken()
if !validCsrfToken(t4) {
t.Fatal("t4 should be valid at iteration", i)
}
}
if m.validToken(t3) {
if validCsrfToken(t3) {
t.Fatal("t3 should have expired by now")
}
}
func TestStopAfterBrokenConfig(t *testing.T) {
t.Parallel()
cfg := config.Configuration{
GUI: config.GUIConfiguration{
RawAddress: "127.0.0.1:0",
RawUseTLS: false,
},
}
w := config.Wrap("/dev/null", cfg, events.NoopLogger)
w := config.Wrap("/dev/null", cfg)
srv := New(protocol.LocalDeviceID, w, "", "syncthing", nil, nil, nil, events.NoopLogger, nil, nil, nil, nil, nil, nil, nil, nil, false).(*service)
defer os.Remove(token)
srv := newAPIService(protocol.LocalDeviceID, w, "../../test/h1/https-cert.pem", "../../test/h1/https-key.pem", "", nil, nil, nil, nil, nil, nil, nil, nil)
srv.started = make(chan string)
sup := suture.New("test", suture.Spec{
@@ -140,8 +105,6 @@ func TestStopAfterBrokenConfig(t *testing.T) {
}
func TestAssetsDir(t *testing.T) {
t.Parallel()
// For any given request to $FILE, we should return the first found of
// - assetsdir/$THEME/$FILE
// - compiled in asset $THEME/$FILE
@@ -216,10 +179,8 @@ func expectURLToContain(t *testing.T, url, exp string) {
}
func TestDirNames(t *testing.T) {
t.Parallel()
names := dirNames("testdata")
expected := []string{"config", "default", "foo", "testfolder"}
expected := []string{"default", "foo", "testfolder"}
if diff, equal := messagediff.PrettyDiff(expected, names); !equal {
t.Errorf("Unexpected dirNames return: %#v\n%s", names, diff)
}
@@ -234,16 +195,13 @@ type httpTestCase struct {
}
func TestAPIServiceRequests(t *testing.T) {
t.Parallel()
const testAPIKey = "foobarbaz"
cfg := new(mockedConfig)
cfg.gui.APIKey = testAPIKey
baseURL, sup, err := startHTTP(cfg)
baseURL, err := startHTTP(cfg)
if err != nil {
t.Fatal(err)
}
defer sup.Stop()
cases := []httpTestCase{
// /rest/db
@@ -447,16 +405,13 @@ func testHTTPRequest(t *testing.T, baseURL string, tc httpTestCase, apikey strin
}
func TestHTTPLogin(t *testing.T) {
t.Parallel()
cfg := new(mockedConfig)
cfg.gui.User = "üser"
cfg.gui.Password = "$2a$10$IdIZTxTg/dCNuNEGlmLynOjqg4B1FvDKuIV5e0BB3pnWVHNb8.GSq" // bcrypt of "räksmörgås" in UTF-8
baseURL, sup, err := startHTTP(cfg)
baseURL, err := startHTTP(cfg)
if err != nil {
t.Fatal(err)
}
defer sup.Stop()
// Verify rejection when not using authorization
@@ -514,8 +469,10 @@ func TestHTTPLogin(t *testing.T) {
}
}
func startHTTP(cfg *mockedConfig) (string, *suture.Supervisor, error) {
m := new(mockedModel)
func startHTTP(cfg *mockedConfig) (string, error) {
model := new(mockedModel)
httpsCertFile := "../../test/h1/https-cert.pem"
httpsKeyFile := "../../test/h1/https-key.pem"
assetDir := "../../gui"
eventSub := new(mockedEventSub)
diskEventSub := new(mockedEventSub)
@@ -527,10 +484,8 @@ func startHTTP(cfg *mockedConfig) (string, *suture.Supervisor, error) {
addrChan := make(chan string)
// Instantiate the API service
urService := ur.New(cfg, m, connections, false)
summaryService := model.NewFolderSummaryService(cfg, m, protocol.LocalDeviceID, events.NoopLogger)
svc := New(protocol.LocalDeviceID, cfg, assetDir, "syncthing", m, eventSub, diskEventSub, events.NoopLogger, discoverer, connections, urService, summaryService, errorLog, systemLog, cpu, nil, false).(*service)
defer os.Remove(token)
svc := newAPIService(protocol.LocalDeviceID, cfg, httpsCertFile, httpsKeyFile, assetDir, model,
eventSub, diskEventSub, discoverer, connections, errorLog, systemLog, cpu)
svc.started = addrChan
// Actually start the API service
@@ -544,8 +499,7 @@ func startHTTP(cfg *mockedConfig) (string, *suture.Supervisor, error) {
addr := <-addrChan
tcpAddr, err := net.ResolveTCPAddr("tcp", addr)
if err != nil {
supervisor.Stop()
return "", nil, fmt.Errorf("Weird address from API service: %v", err)
return "", fmt.Errorf("Weird address from API service: %v", err)
}
host, _, _ := net.SplitHostPort(cfg.gui.RawAddress)
@@ -554,23 +508,20 @@ func startHTTP(cfg *mockedConfig) (string, *suture.Supervisor, error) {
}
baseURL := fmt.Sprintf("http://%s", net.JoinHostPort(host, strconv.Itoa(tcpAddr.Port)))
return baseURL, supervisor, nil
return baseURL, nil
}
func TestCSRFRequired(t *testing.T) {
t.Parallel()
const testAPIKey = "foobarbaz"
cfg := new(mockedConfig)
cfg.gui.APIKey = testAPIKey
baseURL, sup, err := startHTTP(cfg)
baseURL, err := startHTTP(cfg)
if err != nil {
t.Fatal("Unexpected error from getting base URL:", err)
}
defer sup.Stop()
cli := &http.Client{
Timeout: time.Minute,
Timeout: time.Second,
}
// Getting the base URL (i.e. "/") should succeed.
@@ -634,16 +585,13 @@ func TestCSRFRequired(t *testing.T) {
}
func TestRandomString(t *testing.T) {
t.Parallel()
const testAPIKey = "foobarbaz"
cfg := new(mockedConfig)
cfg.gui.APIKey = testAPIKey
baseURL, sup, err := startHTTP(cfg)
baseURL, err := startHTTP(cfg)
if err != nil {
t.Fatal(err)
}
defer sup.Stop()
cli := &http.Client{
Timeout: time.Second,
}
@@ -686,15 +634,10 @@ func TestRandomString(t *testing.T) {
}
func TestConfigPostOK(t *testing.T) {
t.Parallel()
cfg := bytes.NewBuffer([]byte(`{
"version": 15,
"folders": [
{
"id": "foo",
"path": "TestConfigPostOK"
}
{"id": "foo"}
]
}`))
@@ -705,12 +648,9 @@ func TestConfigPostOK(t *testing.T) {
if resp.StatusCode != http.StatusOK {
t.Error("Expected 200 OK, not", resp.Status)
}
os.RemoveAll("TestConfigPostOK")
}
func TestConfigPostDupFolder(t *testing.T) {
t.Parallel()
cfg := bytes.NewBuffer([]byte(`{
"version": 15,
"folders": [
@@ -732,11 +672,10 @@ func testConfigPost(data io.Reader) (*http.Response, error) {
const testAPIKey = "foobarbaz"
cfg := new(mockedConfig)
cfg.gui.APIKey = testAPIKey
baseURL, sup, err := startHTTP(cfg)
baseURL, err := startHTTP(cfg)
if err != nil {
return nil, err
}
defer sup.Stop()
cli := &http.Client{
Timeout: time.Second,
}
@@ -747,17 +686,14 @@ func testConfigPost(data io.Reader) (*http.Response, error) {
}
func TestHostCheck(t *testing.T) {
t.Parallel()
// An API service bound to localhost should reject non-localhost host Headers
cfg := new(mockedConfig)
cfg.gui.RawAddress = "127.0.0.1:0"
baseURL, sup, err := startHTTP(cfg)
baseURL, err := startHTTP(cfg)
if err != nil {
t.Fatal(err)
}
defer sup.Stop()
// A normal HTTP get to the localhost-bound service should succeed
@@ -814,11 +750,10 @@ func TestHostCheck(t *testing.T) {
cfg = new(mockedConfig)
cfg.gui.RawAddress = "127.0.0.1:0"
cfg.gui.InsecureSkipHostCheck = true
baseURL, sup, err = startHTTP(cfg)
baseURL, err = startHTTP(cfg)
if err != nil {
t.Fatal(err)
}
defer sup.Stop()
// A request with a suspicious Host header should be allowed
@@ -838,11 +773,10 @@ func TestHostCheck(t *testing.T) {
cfg = new(mockedConfig)
cfg.gui.RawAddress = "0.0.0.0:0"
cfg.gui.InsecureSkipHostCheck = true
baseURL, sup, err = startHTTP(cfg)
baseURL, err = startHTTP(cfg)
if err != nil {
t.Fatal(err)
}
defer sup.Stop()
// A request with a suspicious Host header should be allowed
@@ -859,18 +793,12 @@ func TestHostCheck(t *testing.T) {
// This should all work over IPv6 as well
if runningInContainer() {
// Working IPv6 in Docker can't be taken for granted.
return
}
cfg = new(mockedConfig)
cfg.gui.RawAddress = "[::1]:0"
baseURL, sup, err = startHTTP(cfg)
baseURL, err = startHTTP(cfg)
if err != nil {
t.Fatal(err)
}
defer sup.Stop()
// A normal HTTP get to the localhost-bound service should succeed
@@ -911,8 +839,6 @@ func TestHostCheck(t *testing.T) {
}
func TestAddressIsLocalhost(t *testing.T) {
t.Parallel()
testcases := []struct {
address string
result bool
@@ -956,16 +882,13 @@ func TestAddressIsLocalhost(t *testing.T) {
}
func TestAccessControlAllowOriginHeader(t *testing.T) {
t.Parallel()
const testAPIKey = "foobarbaz"
cfg := new(mockedConfig)
cfg.gui.APIKey = testAPIKey
baseURL, sup, err := startHTTP(cfg)
baseURL, err := startHTTP(cfg)
if err != nil {
t.Fatal(err)
}
defer sup.Stop()
cli := &http.Client{
Timeout: time.Second,
}
@@ -987,16 +910,13 @@ func TestAccessControlAllowOriginHeader(t *testing.T) {
}
func TestOptionsRequest(t *testing.T) {
t.Parallel()
const testAPIKey = "foobarbaz"
cfg := new(mockedConfig)
cfg.gui.APIKey = testAPIKey
baseURL, sup, err := startHTTP(cfg)
baseURL, err := startHTTP(cfg)
if err != nil {
t.Fatal(err)
}
defer sup.Stop()
cli := &http.Client{
Timeout: time.Second,
}
@@ -1023,16 +943,13 @@ func TestOptionsRequest(t *testing.T) {
}
func TestEventMasks(t *testing.T) {
t.Parallel()
cfg := new(mockedConfig)
defSub := new(mockedEventSub)
diskSub := new(mockedEventSub)
svc := New(protocol.LocalDeviceID, cfg, "", "syncthing", nil, defSub, diskSub, events.NoopLogger, nil, nil, nil, nil, nil, nil, nil, nil, false).(*service)
defer os.Remove(token)
svc := newAPIService(protocol.LocalDeviceID, cfg, "", "", "", nil, defSub, diskSub, nil, nil, nil, nil, nil)
if mask := svc.getEventMask(""); mask != DefaultEventMask {
t.Errorf("incorrect default mask %x != %x", int64(mask), int64(DefaultEventMask))
if mask := svc.getEventMask(""); mask != defaultEventMask {
t.Errorf("incorrect default mask %x != %x", int64(mask), int64(defaultEventMask))
}
expected := events.FolderSummary | events.LocalChangeDetected
@@ -1045,10 +962,10 @@ func TestEventMasks(t *testing.T) {
t.Errorf("incorrect parsed mask %x != %x", int64(mask), int64(expected))
}
if res := svc.getEventSub(DefaultEventMask); res != defSub {
if res := svc.getEventSub(defaultEventMask); res != defSub {
t.Errorf("should have returned the given default event sub")
}
if res := svc.getEventSub(DiskEventMask); res != diskSub {
if res := svc.getEventSub(diskEventMask); res != diskSub {
t.Errorf("should have returned the given disk event sub")
}
if res := svc.getEventSub(events.LocalIndexUpdated); res == nil || res == defSub || res == diskSub {
@@ -1057,8 +974,6 @@ func TestEventMasks(t *testing.T) {
}
func TestBrowse(t *testing.T) {
t.Parallel()
pathSep := string(os.PathSeparator)
tmpDir, err := ioutil.TempDir("", "syncthing")
@@ -1111,8 +1026,6 @@ func TestBrowse(t *testing.T) {
}
func TestPrefixMatch(t *testing.T) {
t.Parallel()
cases := []struct {
s string
prefix string
@@ -1132,44 +1045,6 @@ func TestPrefixMatch(t *testing.T) {
}
}
func TestCheckExpiry(t *testing.T) {
dir, err := ioutil.TempDir("", "syncthing-test")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(dir)
// Self signed certificates expiring in less than a month are errored so we
// can regenerate in time.
crt, err := tlsutil.NewCertificate(filepath.Join(dir, "crt"), filepath.Join(dir, "key"), "foo.example.com", 29)
if err != nil {
t.Fatal(err)
}
if err := checkExpiry(crt); err == nil {
t.Error("expected expiry error")
}
// Certificates with at least 31 days of life left are fine.
crt, err = tlsutil.NewCertificate(filepath.Join(dir, "crt"), filepath.Join(dir, "key"), "foo.example.com", 31)
if err != nil {
t.Fatal(err)
}
if err := checkExpiry(crt); err != nil {
t.Error("expected no error:", err)
}
if runtime.GOOS == "darwin" {
// Certificates with too long an expiry time are not allowed on macOS
crt, err = tlsutil.NewCertificate(filepath.Join(dir, "crt"), filepath.Join(dir, "key"), "foo.example.com", 1000)
if err != nil {
t.Fatal(err)
}
if err := checkExpiry(crt); err == nil {
t.Error("expected expiry error")
}
}
}
func equalStrings(a, b []string) bool {
if len(a) != len(b) {
return false
@@ -1181,24 +1056,3 @@ func equalStrings(a, b []string) bool {
}
return true
}
// runningInContainer returns true if we are inside Docker or LXC. It might
// be prone to false negatives if things change in the future, but likely
// not false positives.
func runningInContainer() bool {
if runtime.GOOS != "linux" {
return false
}
bs, err := ioutil.ReadFile("/proc/1/cgroup")
if err != nil {
return false
}
if bytes.Contains(bs, []byte("/docker/")) {
return true
}
if bytes.Contains(bs, []byte("/lxc/")) {
return true
}
return false
}


@@ -23,15 +23,11 @@ func init() {
rate = i
}
l.Debugln("Starting heap profiling")
go func() {
err := saveHeapProfiles(rate) // Only returns on error
l.Warnln("Heap profiler failed:", err)
panic("Heap profiler failed")
}()
go saveHeapProfiles(rate)
}
}
func saveHeapProfiles(rate int) error {
func saveHeapProfiles(rate int) {
runtime.MemProfileRate = rate
var memstats, prevMemstats runtime.MemStats
@@ -42,21 +38,21 @@ func saveHeapProfiles(rate int) error {
if memstats.HeapInuse > prevMemstats.HeapInuse {
fd, err := os.Create(name + ".tmp")
if err != nil {
return err
panic(err)
}
err = pprof.WriteHeapProfile(fd)
if err != nil {
return err
panic(err)
}
err = fd.Close()
if err != nil {
return err
panic(err)
}
os.Remove(name) // Error deliberately ignored
_ = os.Remove(name) // Error deliberately ignored
err = os.Rename(name+".tmp", name)
if err != nil {
return err
panic(err)
}
prevMemstats = memstats
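// For illustration, a minimal sketch (hypothetical names, standard library
// only) of the write-then-rename pattern saveHeapProfiles uses above: write
// to a temporary name first, then rename into place, so a half-written
// profile never replaces a complete one.
package main

import (
	"io/ioutil"
	"os"
)

func replaceFile(name string, data []byte) error {
	if err := ioutil.WriteFile(name+".tmp", data, 0644); err != nil {
		return err
	}
	// Remove the old file first: on platforms where rename does not overwrite
	// an existing file, the rename would otherwise fail. The error is
	// deliberately ignored, as in the code above.
	_ = os.Remove(name)
	return os.Rename(name+".tmp", name)
}

func main() {
	_ = replaceFile("heap-example.pprof", []byte("profile bytes"))
}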

cmd/syncthing/locations.go (new file, 125 lines)

@@ -0,0 +1,125 @@
// Copyright (C) 2015 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"os"
"path/filepath"
"runtime"
"strings"
"time"
"github.com/syncthing/syncthing/lib/fs"
)
type locationEnum string
// Use strings as keys to make printout and serialization of the locations map
// more meaningful.
const (
locConfigFile locationEnum = "config"
locCertFile = "certFile"
locKeyFile = "keyFile"
locHTTPSCertFile = "httpsCertFile"
locHTTPSKeyFile = "httpsKeyFile"
locDatabase = "database"
locLogFile = "logFile"
locCsrfTokens = "csrfTokens"
locPanicLog = "panicLog"
locAuditLog = "auditLog"
locGUIAssets = "GUIAssets"
locDefFolder = "defFolder"
)
// Platform dependent directories
var baseDirs = map[string]string{
"config": defaultConfigDir(), // Overridden by -home flag
"home": homeDir(), // User's home directory, *not* -home flag
}
// Use the variables from baseDirs here
var locations = map[locationEnum]string{
locConfigFile: "${config}/config.xml",
locCertFile: "${config}/cert.pem",
locKeyFile: "${config}/key.pem",
locHTTPSCertFile: "${config}/https-cert.pem",
locHTTPSKeyFile: "${config}/https-key.pem",
locDatabase: "${config}/index-v0.14.0.db",
locLogFile: "${config}/syncthing.log", // -logfile on Windows
locCsrfTokens: "${config}/csrftokens.txt",
locPanicLog: "${config}/panic-${timestamp}.log",
locAuditLog: "${config}/audit-${timestamp}.log",
locGUIAssets: "${config}/gui",
locDefFolder: "${home}/Sync",
}
// expandLocations replaces the variables in the location map with actual
// directory locations.
func expandLocations() error {
for key, dir := range locations {
for varName, value := range baseDirs {
dir = strings.Replace(dir, "${"+varName+"}", value, -1)
}
var err error
dir, err = fs.ExpandTilde(dir)
if err != nil {
return err
}
locations[key] = dir
}
return nil
}
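For example, on a typical Linux system without XDG_CONFIG_HOME set, ${config} expands to ~/.config/syncthing, so locConfigFile resolves to ~/.config/syncthing/config.xml and locDefFolder to the Sync directory in the user's home (with the tilde expanded to an absolute path by fs.ExpandTilde).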
// defaultConfigDir returns the default configuration directory, as figured
// out by the various environment variables present on each platform, or dies
// trying.
func defaultConfigDir() string {
switch runtime.GOOS {
case "windows":
if p := os.Getenv("LocalAppData"); p != "" {
return filepath.Join(p, "Syncthing")
}
return filepath.Join(os.Getenv("AppData"), "Syncthing")
case "darwin":
dir, err := fs.ExpandTilde("~/Library/Application Support/Syncthing")
if err != nil {
l.Fatalln(err)
}
return dir
default:
if xdgCfg := os.Getenv("XDG_CONFIG_HOME"); xdgCfg != "" {
return filepath.Join(xdgCfg, "syncthing")
}
dir, err := fs.ExpandTilde("~/.config/syncthing")
if err != nil {
l.Fatalln(err)
}
return dir
}
}
// homeDir returns the user's home directory, or dies trying.
func homeDir() string {
home, err := fs.ExpandTilde("~")
if err != nil {
l.Fatalln(err)
}
return home
}
func timestampedLoc(key locationEnum) string {
// We take the roundtrip via "${timestamp}" instead of passing the path
// directly through time.Format() to avoid issues when the path we are
// expanding contains numbers; otherwise for example
// /home/user2006/.../panic-20060102-150405.log would get both instances of
// 2006 replaced by 2015...
tpl := locations[key]
now := time.Now().Format("20060102-150405")
return strings.Replace(tpl, "${timestamp}", now, -1)
}
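A small runnable illustration of the problem the comment above describes, using a hypothetical home directory that happens to contain reference-time digits:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Formatting the fully expanded path rewrites every reference-time token
	// it contains, so the "2006" in "user2006" becomes the current year too.
	// Substituting a "${timestamp}" placeholder after expansion, as above,
	// avoids this.
	tpl := "/home/user2006/.config/syncthing/panic-20060102-150405.log"
	fmt.Println(time.Now().Format(tpl))
}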

View File

File diff suppressed because it is too large

View File

@@ -1,15 +1,42 @@
// Copyright (C) 2019 The Syncthing Authors.
// Copyright (C) 2014 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package build
package main
import (
"testing"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/protocol"
)
func TestShortIDCheck(t *testing.T) {
cfg := config.Wrap("/tmp/test", config.Configuration{
Devices: []config.DeviceConfiguration{
{DeviceID: protocol.DeviceID{8, 16, 24, 32, 40, 48, 56, 0, 0}},
{DeviceID: protocol.DeviceID{8, 16, 24, 32, 40, 48, 56, 1, 1}}, // first 56 bits same, differ in the first 64 bits
},
})
if err := checkShortIDs(cfg); err != nil {
t.Error("Unexpected error:", err)
}
cfg = config.Wrap("/tmp/test", config.Configuration{
Devices: []config.DeviceConfiguration{
{DeviceID: protocol.DeviceID{8, 16, 24, 32, 40, 48, 56, 64, 0}},
{DeviceID: protocol.DeviceID{8, 16, 24, 32, 40, 48, 56, 64, 1}}, // first 64 bits same
},
})
if err := checkShortIDs(cfg); err == nil {
t.Error("Should have gotten an error")
}
}
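checkShortIDs itself is not shown in this diff; as an illustration of the property the test exercises, a collision check over the 64-bit short IDs could look like this:

package main

import (
	"fmt"

	"github.com/syncthing/syncthing/lib/protocol"
)

// shortIDCollision reports an error if any two device IDs map to the same
// 64-bit short ID (derived from the leading bytes of the full ID).
func shortIDCollision(ids []protocol.DeviceID) error {
	seen := make(map[protocol.ShortID]protocol.DeviceID)
	for _, id := range ids {
		if other, ok := seen[id.Short()]; ok {
			return fmt.Errorf("short IDs collide: %v and %v", other, id)
		}
		seen[id.Short()] = id
	}
	return nil
}

func main() {
	ids := []protocol.DeviceID{
		{8, 16, 24, 32, 40, 48, 56, 64, 0},
		{8, 16, 24, 32, 40, 48, 56, 64, 1}, // same first 64 bits
	}
	fmt.Println(shortIDCollision(ids)) // reports a collision
}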
func TestAllowedVersions(t *testing.T) {
testcases := []struct {
ver string

View File

@@ -4,7 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package ur
package main
import (
"errors"

View File

@@ -4,7 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package ur
package main
import (
"bufio"

View File

@@ -4,7 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package ur
package main
import (
"errors"

View File

@@ -6,7 +6,7 @@
// +build solaris
package ur
package main
import (
"os/exec"

View File

@@ -6,7 +6,7 @@
// +build freebsd openbsd dragonfly
package ur
package main
import "errors"

View File

@@ -4,7 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package ur
package main
import (
"encoding/binary"

View File

@@ -4,7 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package api
package main
import (
"github.com/syncthing/syncthing/lib/config"
@@ -44,8 +44,6 @@ func (c *mockedConfig) Replace(cfg config.Configuration) (config.Waiter, error)
func (c *mockedConfig) Subscribe(cm config.Committer) {}
func (c *mockedConfig) Unsubscribe(cm config.Committer) {}
func (c *mockedConfig) Folders() map[string]config.FolderConfiguration {
return nil
}
@@ -70,62 +68,6 @@ func (c *mockedConfig) RequiresRestart() bool {
return false
}
func (c *mockedConfig) AddOrUpdatePendingDevice(device protocol.DeviceID, name, address string) {}
func (c *mockedConfig) AddOrUpdatePendingFolder(id, label string, device protocol.DeviceID) {}
func (c *mockedConfig) MyName() string {
return ""
}
func (c *mockedConfig) ConfigPath() string {
return ""
}
func (c *mockedConfig) SetGUI(gui config.GUIConfiguration) (config.Waiter, error) {
return noopWaiter{}, nil
}
func (c *mockedConfig) SetOptions(opts config.OptionsConfiguration) (config.Waiter, error) {
return noopWaiter{}, nil
}
func (c *mockedConfig) Folder(id string) (config.FolderConfiguration, bool) {
return config.FolderConfiguration{}, false
}
func (c *mockedConfig) FolderList() []config.FolderConfiguration {
return nil
}
func (c *mockedConfig) SetFolder(fld config.FolderConfiguration) (config.Waiter, error) {
return noopWaiter{}, nil
}
func (c *mockedConfig) Device(id protocol.DeviceID) (config.DeviceConfiguration, bool) {
return config.DeviceConfiguration{}, false
}
func (c *mockedConfig) RemoveDevice(id protocol.DeviceID) (config.Waiter, error) {
return noopWaiter{}, nil
}
func (c *mockedConfig) IgnoredDevice(id protocol.DeviceID) bool {
return false
}
func (c *mockedConfig) IgnoredFolder(device protocol.DeviceID, folder string) bool {
return false
}
func (c *mockedConfig) GlobalDiscoveryServers() []string {
return nil
}
func (c *mockedConfig) StunServers() []string {
return nil
}
type noopWaiter struct{}
func (noopWaiter) Wait() {}

View File

@@ -0,0 +1,17 @@
// Copyright (C) 2016 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
type mockedConnections struct{}
func (m *mockedConnections) Status() map[string]interface{} {
return nil
}
func (m *mockedConnections) NATType() string {
return ""
}

View File

@@ -4,7 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package api
package main
type mockedCPUService struct{}

View File

@@ -4,7 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package api
package main
import (
"time"

View File

@@ -4,7 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package api
package main
import (
"time"

View File

@@ -4,7 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package api
package main
import (
"time"

View File

@@ -4,10 +4,9 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package api
package main
import (
"net"
"time"
"github.com/syncthing/syncthing/lib/connections"
@@ -48,12 +47,12 @@ func (m *mockedModel) ConnectionStats() map[string]interface{} {
return nil
}
func (m *mockedModel) DeviceStatistics() (map[string]stats.DeviceStatistics, error) {
return nil, nil
func (m *mockedModel) DeviceStatistics() map[string]stats.DeviceStatistics {
return nil
}
func (m *mockedModel) FolderStatistics() (map[string]stats.FolderStatistics, error) {
return nil, nil
func (m *mockedModel) FolderStatistics() map[string]stats.FolderStatistics {
return nil
}
func (m *mockedModel) CurrentFolderFile(folder string, file string) (protocol.FileInfo, bool) {
@@ -151,40 +150,3 @@ func (m *mockedModel) WatchError(folder string) error {
func (m *mockedModel) LocalChangedFiles(folder string, page, perpage int) []db.FileInfoTruncated {
return nil
}
func (m *mockedModel) Serve() {}
func (m *mockedModel) Stop() {}
func (m *mockedModel) Index(deviceID protocol.DeviceID, folder string, files []protocol.FileInfo) error {
return nil
}
func (m *mockedModel) IndexUpdate(deviceID protocol.DeviceID, folder string, files []protocol.FileInfo) error {
return nil
}
func (m *mockedModel) Request(deviceID protocol.DeviceID, folder, name string, size int32, offset int64, hash []byte, weakHash uint32, fromTemporary bool) (protocol.RequestResponse, error) {
return nil, nil
}
func (m *mockedModel) ClusterConfig(deviceID protocol.DeviceID, config protocol.ClusterConfig) error {
return nil
}
func (m *mockedModel) Closed(conn protocol.Connection, err error) {}
func (m *mockedModel) DownloadProgress(deviceID protocol.DeviceID, folder string, updates []protocol.FileDownloadProgressUpdate) error {
return nil
}
func (m *mockedModel) AddConnection(conn connections.Connection, hello protocol.HelloResult) {}
func (m *mockedModel) OnHello(protocol.DeviceID, net.Addr, protocol.HelloResult) error {
return nil
}
func (m *mockedModel) GetHello(protocol.DeviceID) protocol.HelloIntf {
return nil
}
func (m *mockedModel) StartDeadlockDetector(timeout time.Duration) {}

View File

@@ -8,24 +8,17 @@ package main
import (
"bufio"
"context"
"fmt"
"io"
"os"
"os/exec"
"os/signal"
"path/filepath"
"runtime"
"strings"
"syscall"
"time"
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/locations"
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/sync"
"github.com/syncthing/syncthing/lib/syncthing"
)
var (
@@ -39,8 +32,6 @@ const (
loopThreshold = 60 * time.Second
logFileAutoCloseDelay = 5 * time.Second
logFileMaxOpenTime = time.Minute
panicUploadMaxWait = 30 * time.Second
panicUploadNoticeWait = 10 * time.Second
)
func monitorMain(runtimeOptions RuntimeOptions) {
@@ -50,15 +41,7 @@ func monitorMain(runtimeOptions RuntimeOptions) {
logFile := runtimeOptions.logFile
if logFile != "-" {
var fileDst io.Writer
if runtimeOptions.logMaxSize > 0 {
open := func(name string) (io.WriteCloser, error) {
return newAutoclosedFile(name, logFileAutoCloseDelay, logFileMaxOpenTime), nil
}
fileDst = newRotatedFile(logFile, open, int64(runtimeOptions.logMaxSize), runtimeOptions.logMaxFiles)
} else {
fileDst = newAutoclosedFile(logFile, logFileAutoCloseDelay, logFileMaxOpenTime)
}
var fileDst io.Writer = newAutoclosedFile(logFile, logFileAutoCloseDelay, logFileMaxOpenTime)
if runtime.GOOS == "windows" {
// Translate line breaks to Windows standard
@@ -88,11 +71,9 @@ func monitorMain(runtimeOptions RuntimeOptions) {
childEnv := childEnv()
first := true
for {
maybeReportPanics()
if t := time.Since(restarts[0]); t < loopThreshold {
l.Warnf("%d restarts in %v; not retrying further", countRestarts, t)
os.Exit(syncthing.ExitError.AsInt())
os.Exit(exitError)
}
copy(restarts[0:], restarts[1:])
@@ -103,19 +84,18 @@ func monitorMain(runtimeOptions RuntimeOptions) {
stderr, err := cmd.StderrPipe()
if err != nil {
panic(err)
l.Fatalln("stderr:", err)
}
stdout, err := cmd.StdoutPipe()
if err != nil {
panic(err)
l.Fatalln("stdout:", err)
}
l.Infoln("Starting syncthing")
err = cmd.Start()
if err != nil {
l.Warnln("Error starting the main Syncthing process:", err)
panic("Error starting the main Syncthing process")
l.Fatalln(err)
}
stdoutMut.Lock()
@@ -147,7 +127,7 @@ func monitorMain(runtimeOptions RuntimeOptions) {
select {
case s := <-stopSign:
l.Infof("Signal %d received; exiting", s)
cmd.Process.Signal(sigTerm)
cmd.Process.Kill()
<-exit
return
@@ -161,14 +141,17 @@ func monitorMain(runtimeOptions RuntimeOptions) {
// Successful exit indicates an intentional shutdown
return
} else if exiterr, ok := err.(*exec.ExitError); ok {
if exiterr.ExitCode() == syncthing.ExitUpgrade.AsInt() {
// Restart the monitor process to release the .old
// binary as part of the upgrade process.
l.Infoln("Restarting monitor...")
if err = restartMonitor(args); err != nil {
l.Warnln("Restart:", err)
if status, ok := exiterr.Sys().(syscall.WaitStatus); ok {
switch status.ExitStatus() {
case exitUpgrading:
// Restart the monitor process to release the .old
// binary as part of the upgrade process.
l.Infoln("Restarting monitor...")
if err = restartMonitor(args); err != nil {
l.Warnln("Restart:", err)
}
return
}
return
}
}
}
@@ -189,13 +172,6 @@ func copyStderr(stderr io.Reader, dst io.Writer) {
br := bufio.NewReader(stderr)
var panicFd *os.File
defer func() {
if panicFd != nil {
_ = panicFd.Close()
maybeReportPanics()
}
}()
for {
line, err := br.ReadString('\n')
if err != nil {
@@ -222,7 +198,7 @@ func copyStderr(stderr io.Reader, dst io.Writer) {
}
if strings.HasPrefix(line, "panic:") || strings.HasPrefix(line, "fatal error:") {
panicFd, err = os.Create(locations.GetTimestamped(locations.PanicLog))
panicFd, err = os.Create(timestampedLoc(locPanicLog))
if err != nil {
l.Warnln("Create panic log:", err)
continue
@@ -327,81 +303,6 @@ func restartMonitorWindows(args []string) error {
return cmd.Start()
}
// rotatedFile keeps a set of rotating logs. There will be the base file plus up
// to maxFiles rotated ones, each ~ maxSize bytes large.
type rotatedFile struct {
name string
create createFn
maxSize int64 // bytes
maxFiles int
currentFile io.WriteCloser
currentSize int64
}
// the createFn should act equivalently to os.Create
type createFn func(name string) (io.WriteCloser, error)
func newRotatedFile(name string, create createFn, maxSize int64, maxFiles int) *rotatedFile {
return &rotatedFile{
name: name,
create: create,
maxSize: maxSize,
maxFiles: maxFiles,
}
}
func (r *rotatedFile) Write(bs []byte) (int, error) {
// Check if we're about to exceed the max size, and if so close this
// file so we'll start on a new one.
if r.currentSize+int64(len(bs)) > r.maxSize {
r.currentFile.Close()
r.currentFile = nil
r.currentSize = 0
}
// If we have no current log, rotate old files out of the way and create
// a new one.
if r.currentFile == nil {
r.rotate()
fd, err := r.create(r.name)
if err != nil {
return 0, err
}
r.currentFile = fd
}
n, err := r.currentFile.Write(bs)
r.currentSize += int64(n)
return n, err
}
func (r *rotatedFile) rotate() {
// The files are named "name", "name.0", "name.1", ...
// "name.(r.maxFiles-1)". Increase the numbers on the
// suffixed ones.
for i := r.maxFiles - 1; i > 0; i-- {
from := numberedFile(r.name, i-1)
to := numberedFile(r.name, i)
err := os.Rename(from, to)
if err != nil && !os.IsNotExist(err) {
fmt.Println("LOG: Rotating logs:", err)
}
}
// Rename the base to base.0
err := os.Rename(r.name, numberedFile(r.name, 0))
if err != nil && !os.IsNotExist(err) {
fmt.Println("LOG: Rotating logs:", err)
}
}
// numberedFile adds the number between the file name and the extension.
func numberedFile(name string, num int) string {
ext := filepath.Ext(name) // contains the dot
withoutExt := name[:len(name)-len(ext)]
return fmt.Sprintf("%s.%d%s", withoutExt, num, ext)
}
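A brief usage sketch of the rotated log defined above, assuming it sits in the same package as newRotatedFile (needs the io and os imports); plain os.Create stands in here for the autoclosed files the monitor actually uses:

// exampleRotatedLog writes through a rotatedFile that rolls over after about
// 1 MiB and keeps up to three older copies (log.0.txt, log.1.txt, log.2.txt).
func exampleRotatedLog() error {
	open := func(name string) (io.WriteCloser, error) { return os.Create(name) }
	logFile := newRotatedFile("log.txt", open, 1<<20, 3)
	_, err := logFile.Write([]byte("hello log rotation\n"))
	return err
}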
// An autoclosedFile is an io.WriteCloser that opens itself for appending on
// Write() and closes itself after an interval of no writes (closeDelay) or
// when the file has been open for too long (maxOpenTime). A call to Write()
@@ -517,10 +418,10 @@ func (f *autoclosedFile) closerLoop() {
func childEnv() []string {
var env []string
for _, str := range os.Environ() {
if strings.HasPrefix(str, "STNORESTART=") {
if strings.HasPrefix("STNORESTART=", str) {
continue
}
if strings.HasPrefix(str, "STMONITORED=") {
if strings.HasPrefix("STMONITORED=", str) {
continue
}
env = append(env, str)
@@ -528,39 +429,3 @@ func childEnv() []string {
env = append(env, "STMONITORED=yes")
return env
}
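A note on the hunk above: strings.HasPrefix takes the string first and the prefix second, so the two variants differ only in argument order, and the reversed order only matches entries whose value is empty. A runnable illustration:

package main

import (
	"fmt"
	"strings"
)

func main() {
	env := "STNORESTART=1"
	fmt.Println(strings.HasPrefix(env, "STNORESTART=")) // true: entry starts with the prefix
	fmt.Println(strings.HasPrefix("STNORESTART=", env)) // false: arguments reversed
}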
// maybeReportPanics tries to figure out if crash reporting is on or off,
// and reports any panics it can find if it's enabled. We spend at most
// panicUploadMaxWait uploading panics...
func maybeReportPanics() {
// Try to get a config to see if/where panics should be reported.
cfg, err := loadOrDefaultConfig(protocol.EmptyDeviceID, events.NoopLogger)
if err != nil {
l.Warnln("Couldn't load config; not reporting crash")
return
}
// Bail if we're not supposed to report panics.
opts := cfg.Options()
if !opts.CREnabled {
return
}
// Set up a timeout on the whole operation.
ctx, cancel := context.WithTimeout(context.Background(), panicUploadMaxWait)
defer cancel()
// Print a notice if the upload takes a long time.
go func() {
select {
case <-ctx.Done():
return
case <-time.After(panicUploadNoticeWait):
l.Warnln("Uploading crash reports is taking a while, please wait...")
}
}()
// Report the panics.
dir := locations.GetBaseDir(locations.ConfigBaseDir)
uploadPanicLogs(ctx, opts.CRURL, dir)
}

View File

@@ -7,7 +7,6 @@
package main
import (
"io"
"io/ioutil"
"os"
"path/filepath"
@@ -15,123 +14,6 @@ import (
"time"
)
func TestRotatedFile(t *testing.T) {
// Verify that log rotation happens.
dir, err := ioutil.TempDir("", "syncthing")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(dir)
open := func(name string) (io.WriteCloser, error) {
return os.Create(name)
}
logName := filepath.Join(dir, "log.txt")
testData := []byte("12345678\n")
maxSize := int64(len(testData) + len(testData)/2)
// We allow the log file plus two rotated copies.
rf := newRotatedFile(logName, open, maxSize, 2)
// Write some bytes.
if _, err := rf.Write(testData); err != nil {
t.Fatal(err)
}
// They should be in the log.
checkSize(t, logName, len(testData))
checkNotExist(t, logName+".0")
// Write some more bytes. We should rotate and write into a new file as the
// new bytes don't fit.
if _, err := rf.Write(testData); err != nil {
t.Fatal(err)
}
checkSize(t, logName, len(testData))
checkSize(t, numberedFile(logName, 0), len(testData))
checkNotExist(t, logName+".1")
// Write another byte. That should fit without causing an extra rotate.
_, _ = rf.Write([]byte{42})
checkSize(t, logName, len(testData)+1)
checkSize(t, numberedFile(logName, 0), len(testData))
checkNotExist(t, numberedFile(logName, 1))
// Write some more bytes. We should rotate and write into a new file as the
// new bytes don't fit.
if _, err := rf.Write(testData); err != nil {
t.Fatal(err)
}
checkSize(t, logName, len(testData))
checkSize(t, numberedFile(logName, 0), len(testData)+1) // the one we wrote extra to, now rotated
checkSize(t, numberedFile(logName, 1), len(testData))
checkNotExist(t, numberedFile(logName, 2))
// Write some more bytes. We should rotate and write into a new file as the
// new bytes don't fit.
if _, err := rf.Write(testData); err != nil {
t.Fatal(err)
}
checkSize(t, logName, len(testData))
checkSize(t, numberedFile(logName, 0), len(testData))
checkSize(t, numberedFile(logName, 1), len(testData)+1)
checkNotExist(t, numberedFile(logName, 2)) // exceeds maxFiles so deleted
}
func TestNumberedFile(t *testing.T) {
// Mostly just illustrates where the number ends up and makes sure it
// doesn't crash without an extension.
cases := []struct {
in string
num int
out string
}{
{
in: "syncthing.log",
num: 42,
out: "syncthing.42.log",
},
{
in: filepath.Join("asdfasdf", "syncthing.log.txt"),
num: 42,
out: filepath.Join("asdfasdf", "syncthing.log.42.txt"),
},
{
in: "syncthing-log",
num: 42,
out: "syncthing-log.42",
},
}
for _, tc := range cases {
res := numberedFile(tc.in, tc.num)
if res != tc.out {
t.Errorf("numberedFile(%q, %d) => %q, expected %q", tc.in, tc.num, res, tc.out)
}
}
}
func checkSize(t *testing.T, name string, size int) {
t.Helper()
info, err := os.Lstat(name)
if err != nil {
t.Fatal(err)
}
if info.Size() != int64(size) {
t.Errorf("%s wrong size: %d != expected %d", name, info.Size(), size)
}
}
func checkNotExist(t *testing.T, name string) {
t.Helper()
_, err := os.Lstat(name)
if !os.IsNotExist(err) {
t.Errorf("%s should not exist", name)
}
}
func TestAutoClosedFile(t *testing.T) {
os.RemoveAll("_autoclose")
defer os.RemoveAll("_autoclose")

View File

@@ -38,10 +38,7 @@ func savePerfStats(file string) {
t0 := time.Now()
for t := range time.NewTicker(250 * time.Millisecond).C {
if err := syscall.Getrusage(syscall.RUSAGE_SELF, &rusage); err != nil {
continue
}
syscall.Getrusage(syscall.RUSAGE_SELF, &rusage)
curTime := time.Now().UnixNano()
timeDiff := curTime - prevTime
curUsage := rusage.Utime.Nano() + rusage.Stime.Nano()

View File

@@ -0,0 +1,231 @@
// Copyright (C) 2015 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"time"
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/sync"
"github.com/thejerf/suture"
)
// The folderSummaryService adds summary information events (FolderSummary and
// FolderCompletion) into the event stream at certain intervals.
type folderSummaryService struct {
*suture.Supervisor
cfg configIntf
model modelIntf
stop chan struct{}
immediate chan string
// For keeping track of folders to recalculate for
foldersMut sync.Mutex
folders map[string]struct{}
// For keeping track of when the last event request on the API was
lastEventReq time.Time
lastEventReqMut sync.Mutex
}
func newFolderSummaryService(cfg configIntf, m modelIntf) *folderSummaryService {
service := &folderSummaryService{
Supervisor: suture.New("folderSummaryService", suture.Spec{
PassThroughPanics: true,
}),
cfg: cfg,
model: m,
stop: make(chan struct{}),
immediate: make(chan string),
folders: make(map[string]struct{}),
foldersMut: sync.NewMutex(),
lastEventReqMut: sync.NewMutex(),
}
service.Add(serviceFunc(service.listenForUpdates))
service.Add(serviceFunc(service.calculateSummaries))
return service
}
func (c *folderSummaryService) Stop() {
c.Supervisor.Stop()
close(c.stop)
}
// listenForUpdates subscribes to the event bus and makes note of folders that
// need their data recalculated.
func (c *folderSummaryService) listenForUpdates() {
sub := events.Default.Subscribe(events.LocalIndexUpdated | events.RemoteIndexUpdated | events.StateChanged | events.RemoteDownloadProgress | events.DeviceConnected | events.FolderWatchStateChanged)
defer events.Default.Unsubscribe(sub)
for {
// This loop needs to be fast so we don't miss too many events.
select {
case ev := <-sub.C():
if ev.Type == events.DeviceConnected {
// When a device connects we schedule a refresh of all
// folders shared with that device.
data := ev.Data.(map[string]string)
deviceID, _ := protocol.DeviceIDFromString(data["id"])
c.foldersMut.Lock()
nextFolder:
for _, folder := range c.cfg.Folders() {
for _, dev := range folder.Devices {
if dev.DeviceID == deviceID {
c.folders[folder.ID] = struct{}{}
continue nextFolder
}
}
}
c.foldersMut.Unlock()
continue
}
// The other events all have a "folder" attribute that they
// affect. Whenever the local or remote index is updated for a
// given folder we make a note of it.
data := ev.Data.(map[string]interface{})
folder := data["folder"].(string)
switch ev.Type {
case events.StateChanged:
if data["to"].(string) == "idle" && data["from"].(string) == "syncing" {
// The folder changed to idle from syncing. We should do an
// immediate refresh to update the GUI. The send to
// c.immediate must be nonblocking so that we can continue
// handling events.
c.foldersMut.Lock()
select {
case c.immediate <- folder:
delete(c.folders, folder)
default:
c.folders[folder] = struct{}{}
}
c.foldersMut.Unlock()
}
default:
// This folder needs to be refreshed whenever we do the next
// refresh.
c.foldersMut.Lock()
c.folders[folder] = struct{}{}
c.foldersMut.Unlock()
}
case <-c.stop:
return
}
}
}
// calculateSummaries periodically recalculates folder summaries and
// completion percentage, and sends the results on the event bus.
func (c *folderSummaryService) calculateSummaries() {
const pumpInterval = 2 * time.Second
pump := time.NewTimer(pumpInterval)
for {
select {
case <-pump.C:
t0 := time.Now()
for _, folder := range c.foldersToHandle() {
c.sendSummary(folder)
}
// We don't want to spend all our time calculating summaries. Let's
// set an arbitrary limit at not spending more than about 30% of
// our time here...
wait := 2*time.Since(t0) + pumpInterval
pump.Reset(wait)
case folder := <-c.immediate:
c.sendSummary(folder)
case <-c.stop:
return
}
}
}
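To make the pacing above concrete: if a round of summaries takes d seconds, the next tick is scheduled 2d + 2 seconds later, so each cycle lasts 3d + 2 seconds and the fraction spent calculating is d / (3d + 2), which stays below one third no matter how slow the calculation gets.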
// foldersToHandle returns the list of folders needing a summary update, and
// clears the list.
func (c *folderSummaryService) foldersToHandle() []string {
// We only recalculate summaries if someone is listening to events
// (a request to /rest/events has been made within the last
// pingEventInterval).
c.lastEventReqMut.Lock()
last := c.lastEventReq
c.lastEventReqMut.Unlock()
if time.Since(last) > defaultEventTimeout {
return nil
}
c.foldersMut.Lock()
res := make([]string, 0, len(c.folders))
for folder := range c.folders {
res = append(res, folder)
delete(c.folders, folder)
}
c.foldersMut.Unlock()
return res
}
// sendSummary sends the summary events for a single folder
func (c *folderSummaryService) sendSummary(folder string) {
// The folder summary contains how many bytes, files etc
// are in the folder and how in sync we are.
data, err := folderSummary(c.cfg, c.model, folder)
if err != nil {
return
}
events.Default.Log(events.FolderSummary, map[string]interface{}{
"folder": folder,
"summary": data,
})
for _, devCfg := range c.cfg.Folders()[folder].Devices {
if devCfg.DeviceID.Equals(myID) {
// We already know about ourselves.
continue
}
if _, ok := c.model.Connection(devCfg.DeviceID); !ok {
// We're not interested in disconnected devices.
continue
}
// Get completion percentage of this folder for the
// remote device.
comp := jsonCompletion(c.model.Completion(devCfg.DeviceID, folder))
comp["folder"] = folder
comp["device"] = devCfg.DeviceID.String()
events.Default.Log(events.FolderCompletion, comp)
}
}
func (c *folderSummaryService) gotEventRequest() {
c.lastEventReqMut.Lock()
c.lastEventReq = time.Now()
c.lastEventReqMut.Unlock()
}
// serviceFunc wraps a function to create a suture.Service without stop
// functionality.
type serviceFunc func()
func (f serviceFunc) Serve() { f() }
func (f serviceFunc) Stop() {}

View File

@@ -6,7 +6,7 @@
// +build !windows
package syncthing
package main
import (
"os"

View File

@@ -4,7 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package syncthing
package main
import "syscall"

View File

@@ -4,7 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package api
package main
import (
"archive/zip"
@@ -14,7 +14,7 @@ import (
)
// getRedactedConfig redacts some parts of the config
func getRedactedConfig(s *service) config.Configuration {
func getRedactedConfig(s *apiService) config.Configuration {
rawConf := s.cfg.RawCopy()
rawConf.GUI.APIKey = "REDACTED"
if rawConf.GUI.Password != "" {

Some files were not shown because too many files have changed in this diff