The relay and discosrv didn't use the new lib/build package; now they
do. Conversely, the lib/build package wasn't aware there might be other
users and hard-coded the program name - now it's set by the build
script.
This adds a field `guiAddressUsed` to the system status response, which
holds the listening address actually in use. This may be
different from the one stored in the config because it may have been
overridden by an environment variable or command line flag.
The GUI now checks this field to see if we are listening on localhost.
If we are not, the authentication required warning is displayed,
regardless of the *configured* listening address.
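A minimal sketch of the idea in Go; the struct and the localhost check
below are illustrative (the real check lives in the GUI's JavaScript),
only the `guiAddressUsed` field name comes from the change itself:

    package example

    import (
        "net"
        "strings"
    )

    // systemStatus is a hypothetical slice of the /rest/system/status
    // response, carrying the GUI listen address actually in use.
    type systemStatus struct {
        GUIAddressUsed string `json:"guiAddressUsed"`
    }

    // listensOnLocalhost sketches the check performed on the reported
    // address before deciding whether the authentication warning applies.
    func listensOnLocalhost(addr string) bool {
        host, _, err := net.SplitHostPort(addr)
        if err != nil {
            return false
        }
        if strings.EqualFold(host, "localhost") {
            return true
        }
        ip := net.ParseIP(host)
        return ip != nil && ip.IsLoopback()
    }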
Allow extending LDFLAGS by setting EXTRA_LDFLAGS to be able to pass
-extldflags=-zrelro -ldflags=-extldflags=-znow for Arch Linux packaging
to get full relro.
This is an experiment in testing, based on the advice to always call
t.Parallel() at the start of every test. Doing so makes tests run in
parallel, which is usually faster, but also exposes package level state
and potential race conditions better.
To support this I had to redesign the CSRF manager to not be package
global, which was indeed an improvement. And tests run five times faster
now.
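A minimal sketch of the pattern, with a hypothetical test name; the
point is simply that t.Parallel() is the first call and the test avoids
mutable package-level state:

    package example

    import "testing"

    // TestSomething is a hypothetical test illustrating the pattern
    // described above.
    func TestSomething(t *testing.T) {
        t.Parallel() // run concurrently with other parallel tests in the package

        // ... test body, no shared package-level state ...
    }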
This splits large writes into smaller ones when using a rate limit,
making them into a legitimate trickle rather than large bursts with a
long time in between.
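A rough sketch of the technique, assuming golang.org/x/time/rate for the
limiter; the wrapper type and chunk size are illustrative, not the
actual Syncthing code:

    package example

    import (
        "context"
        "io"

        "golang.org/x/time/rate"
    )

    const chunkSize = 1024 // hypothetical per-chunk size

    // limitedWriter splits each large write into smaller chunks and waits
    // on the rate limiter per chunk, turning bursts into a steady trickle.
    type limitedWriter struct {
        w       io.Writer
        limiter *rate.Limiter
    }

    func (lw *limitedWriter) Write(p []byte) (int, error) {
        written := 0
        for len(p) > 0 {
            n := len(p)
            if n > chunkSize {
                n = chunkSize
            }
            if err := lw.limiter.WaitN(context.Background(), n); err != nil {
                return written, err
            }
            k, err := lw.w.Write(p[:n])
            written += k
            if err != nil {
                return written, err
            }
            p = p[n:]
        }
        return written, nil
    }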
This is the result of:
- Changing build.go to take the protobuf version from the modules
instead of hardcoding it
- `go get github.com/gogo/protobuf@v1.3.0` to upgrade
- `go run build.go proto` to regenerate our code
Assume a folder error was set due to bad ignores on the latest scan.
Previously, doing a manual rescan would result in:
1. Clearing the folder error, which schedules (immediately) an fs
watcher restart
2. Attempting to load the ignores, which fails, so we set a folder
error and bail.
3. Now the fs watcher restarts, as scheduled, so we trigger a scan.
Goto 1.
This change fixes this by not clearing the error until the underlying
condition is actually resolved, that is, only when both the health check
and ignore loading succeed.
This introduces a better set of defaults for large databases. I've
experimentally determined that it results in much better throughput in a
couple of scenarios with large databases, but I can't give any
guarantees the values are always optimal. They're probably no worse than
the defaults though.
This is a tiny tool to grab the GitHub releases info and generate a
more concise version of it. The conciseness comes from two aspects:
- We select only the latest stable and pre-release. There is no need to offer
upgrades to versions that are older than the latest. (There might be, in
the future, when we hit 2.0. We can revisit this at that time.)
- We use our structs to deserialize and reserialize the data. This means
we remove all attributes that we don't understand and hence don't
require.
All in all the new response is about 10% the size of the previous one and
avoids the issue where we only serve a bunch of release candidates and
no stable.
NATSymmetricUDPFirewall actually is not NAT at all, but means the machine has a global IP address and a UDP firewall in front (the RFC calls it Symmetric UDP Firewall). This is punchable fine, both theoretically and also practically in testing.
This changes the on disk format for new raw reports to be gzip
compressed. Also adds the ability to serve these reports in plain text,
to insulate web browsers from the change (previously we just served the
raw reports from disk using Caddy).
* lib/model: Don't panic on failed chmod-back on directory (fixes #5836)
This makes the "in writable dir"-wrapper log chmod-back errors instead
of panicking. To do that we need a logger so the function moved into the
model package which is also the only place it's used. The tests came
along.
(The test also exercised osutil.RenameOrCopy like some sort of
piggybacking. I removed that.)
This adds a set of magical environment variables that can be used to
tweak the database parameters. It's totally undocumented and not
intended to be a long term or supported thing.
It's ugly, but there is a backstory. I have a couple of large
installations where the database options are inefficient or otherwise
suboptimal (24/7 compaction going on and stuff like that). I don't
*know* the correct database parameters, nor yet the formula or method to
derive them by, so this requires experimentation. Experimentation needs
to happen partly in production, and rolling out new builds for every
tweak isn't practical. This provides override points for all reasonable
values, while not changing anything by default.
Ideally, at the end of such experimentation, we'll know which values are
relevant to change and in what manner, and can provide a more user
friendly knob to do so - or do it automatically based on the database
size.
* add skeleton for lib/syncthing
* copy syncthingMain to lib/syncthing (verbatim)
* Remove code to deduplicate copies of syncthingMain
* fix simple build errors
* move stuff from main to syncthing with minimal mod
* merge runtime options
* actually use syncthing.App
* pass io.writer to lib/syncthing for auditing
* get rid of env stuff in lib/syncthing
* add .Error() and comments
* review: Remove fs interactions from lib
* and go 1.13 happened
* utility functions
Per the sync/atomic bug note:
> On ARM, x86-32, and 32-bit MIPS, it is the caller's
> responsibility to arrange for 64-bit alignment of 64-bit words
> accessed atomically. The first word in a variable or in an
> allocated struct, array, or slice can be relied upon to be
> 64-bit aligned.
All atomic accesses of 64-bit variables in the syncthing code base are
currently OK (i.e. they are all 64-bit aligned).
Generally, the bug is triggered by incorrect alignment
of struct fields. Local variables (declared in a function) are
guaranteed to be 64-bit aligned by the Go compiler.
To ensure the code remains correct upon further addition/removal
of fields, which would change the currently correct alignment, I
added the following comment where required:
// atomic, must remain 64-bit aligned
See https://golang.org/pkg/sync/atomic/#pkg-note-BUG.
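A small sketch of the convention, with illustrative names, showing an
atomically accessed 64-bit field kept first in the struct together with
the comment mentioned above:

    package example

    import "sync/atomic"

    type counter struct {
        bytesTotal int64 // atomic, must remain 64-bit aligned

        name string // non-atomic fields follow the atomic ones
    }

    func (c *counter) add(n int64) {
        atomic.AddInt64(&c.bytesTotal, n)
    }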
* lib/versioner: Add placeholder to provide the absolute file path to external commands
This commit adds support for a new placeholder, %FILE_PATH_FULL%, to the
command of the external versioner. The placeholder will be replaced by
the absolute path of the file that should be deleted.
* Revert "lib/versioner: Add placeholder to provide the absolute file path to external commands"
This reverts commit fb48962b94.
* lib/versioner: Replace all placeholders in external command (fixes #5849)
Before this commit, only placeholders that spanned a whole word, for
example "%FOLDER_PATH%", were replaced. Words consisting of more than
one placeholder or of placeholders plus additional characters, for
example "%FOLDER_PATH%/%FILE_PATH%", were left untouched. (A sketch of
the fixed expansion follows after this list.)
* fixup! lib/versioner: Replace all placeholders in external command (fixes #5849)
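A minimal sketch of the fixed expansion, assuming strings.NewReplacer;
the function and its arguments are illustrative, only the placeholder
names come from the change itself:

    package example

    import "strings"

    // expandPlaceholders replaces every placeholder occurrence inside a
    // command word, even when combined with other text such as
    // "%FOLDER_PATH%/%FILE_PATH%".
    func expandPlaceholders(word, folderPath, filePath string) string {
        r := strings.NewReplacer(
            "%FOLDER_PATH%", folderPath,
            "%FILE_PATH%", filePath,
        )
        return r.Replace(word)
    }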
Use a global raven.Client because they allocate an http.Client for each,
with a separate CA bundle and infinite connection idle time. Infinite
connection idle time means that if the client is never used again it
will always keep the connection around, not verifying whether it's
closed server side or not. This leaks about a megabyte of memory for
each client ever created.
client.Close() doesn't help with this because the http.Client is still
around, retained by its own goroutines.
The thing with the map is just to retain the API on sendReport, even
though there will in practice only ever be one DSN per process
instance...
* lib/ur: Implement crash (panic) reporting (fixes #959)
This implements a simple crash reporting method. It piggybacks on the
panic log files created by the monitor process, picking these up and
uploading them from the usage reporting routine.
A new config value points to the crash receiver base URL, which defaults
to "https://crash.syncthing.net/newcrash" (following the pattern of
"https://data.syncthing.net/newdata" for usage reports, but allowing us
to separate the service as required).
* lib/fs, lib/model: Add error channel to Watch to avoid panics (fixes #5697)
* forgot unsupported watch
* and more non(-standard)-unixy fixes
* and windows test
* review
* lib/api, lib/connections, gui: Show connection error for disconnected devices (fixes #3345)
This adds functionality in the connections service to track the last
error per address. That is in turn exposed in the /rest/system/status
API method, as that is also where we already show the listener status
from the connection service.
The GUI uses this info where it lists addresses, showing errors (if any)
in red underneath each address.
I also slightly refactored the existing status method on the connection
service to have a better name and return typed information.
* ok
* review
* formatting
* review
The check in ClusterConfig() when iterating through announced devices
in a folder explicitly skips entries with a zero IndexID.
Therefore, the check for IndexID == 0 just below will never be true
and the intended cleanup of local index data will not happen.
Plainly remove that check to make the intended case distinction work.
* lib/protocol: Wait for reader/writer loops on close (fixes #4170)
* waitgroup
* lib/model: Don't hold lock while closing connection
* fix comments
* review (lock once, func argument) and naming
* lib/model: Send cluster config before releasing pmut
* reshuffle
* add model.connReady to track cluster-config status
* Corrected comments/strings
* do it in protocol
Using AngularJS markup (that is, AngularJS expressions enclosed in double curly braces) in HTML attributes that reference URLs is not recommended: the browser may attempt to fetch the URL before the AngularJS compiler evaluates the markup, resulting in a request for an invalid URL.
This constructs the map of hashes of zero blocks from constants instead
of calculating it at startup time. A new test verifies that the map is
correct.
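A sketch of the verifying test mentioned above; the table here contains
a placeholder instead of the real precomputed digest, and the block size
is illustrative:

    package example

    import (
        "crypto/sha256"
        "encoding/hex"
        "testing"
    )

    // sha256OfZeroBlock maps block size to the hex digest of an all-zero
    // block of that size. In the real change the digests are literal
    // constants; the value below is a placeholder.
    var sha256OfZeroBlock = map[int]string{
        128 << 10: "<precomputed sha256 hex digest>",
    }

    // TestZeroBlockHashes recomputes each hash and compares it with the
    // constant table, which is the role of the test described above.
    func TestZeroBlockHashes(t *testing.T) {
        for size, want := range sha256OfZeroBlock {
            sum := sha256.Sum256(make([]byte, size))
            if got := hex.EncodeToString(sum[:]); got != want {
                t.Errorf("size %d: got %s, want %s", size, got, want)
            }
        }
    }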
* cmd/syncthing, lib/gui: Separate gui into own package (ref #4085)
* fix tests
* Don't use main as interface name (make old go happy)
* gui->api
* don't leak state via locations and use in-tree config
* let api (un-)subscribe to config
* interface naming and exporting
* lib/ur
* fix tests and lib/foldersummary
* shorter URVersion and ur debug fix
* review
* model.JsonCompletion(FolderCompletion) -> FolderCompletion.Map()
* rename debug facility https -> api
* folder summaries in model
* disassociate unrelated constants
* fix merge fail
* missing id assignment
The change in 225c0dda80 (#5511) results in
conflicts being immediately scanned and synced, while the test expects just one
conflict on either side. Change it to expect the same conflict on both sides.
* lib/tlsutil: Enable TLS 1.3 when available, on test builds (fixes #5065)
This enables TLS 1.3 negotiation on Go 1.12 by setting the GODEBUG
variable. For now, this just gets enabled on test versions (those with a
dash in the version number).
Users wishing to enable this on production builds can set GODEBUG
manually.
The string representation of connections now includes the TLS version
and cipher suite. This becomes part of the log output on connections.
That is, when talking to an old client:
Established secure connection .../TLS1.2-TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
and now potentially:
Established secure connection .../TLS1.3-TLS_AES_128_GCM_SHA256
(The cipher suite was there previously in the log output, but not the
TLS version.)
I also added this info as a new Crypto() method on the connection, and
propagate this out to the API and GUI, where it can be seen in the
connection address hover (although with bad word wrapping sometimes).
* wip
* wip
Flush the batch when exceeding a certain size, instead of when reaching a number
of batched operations.
Move batch to lowlevel to be able to use it in NamespacedKV.
Increase the leveldb memory buffer from 4 to 16 MiB.
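A rough sketch of flushing by accumulated byte size rather than by
operation count; the batch type, commit callback and threshold are all
illustrative, not the actual lowlevel API:

    package example

    type op struct {
        key, value []byte
    }

    const flushThreshold = 128 << 10 // hypothetical: flush at ~128 KiB queued

    // batch accumulates operations and tracks their total byte size.
    type batch struct {
        ops  []op
        size int
    }

    func (b *batch) put(key, value []byte) {
        b.ops = append(b.ops, op{key, value})
        b.size += len(key) + len(value)
    }

    // checkFlush commits and resets the batch once the size threshold is
    // exceeded.
    func (b *batch) checkFlush(commit func([]op) error) error {
        if b.size < flushThreshold {
            return nil
        }
        if err := commit(b.ops); err != nil {
            return err
        }
        b.ops, b.size = nil, 0
        return nil
    }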
* cleanup Fatal in lib/config/config.go
* cleanup Fatal in lib/config/folderconfiguration.go
* cleanup Fatal in lib/model/model.go
* cleanup Fatal in cmd/syncthing/monitor.go
* cleanup Fatal in cmd/syncthing/main.go
* cleanup Fatal in lib/api
* remove Fatal methods from logger
* lowercase in errors.Wrap
* one less channel
I'm working through linter complaints, these are some fixes. Broad
categories:
1) Ignore errors where we can ignore errors: add "_ = ..." construct.
you can argue that this is annoying noise, but apart from silencing the
linter it *does* serve the purpose of highlighting that an error is
being ignored. I think this is OK, because the linter highlighted some
error cases I wasn't aware of (starting CPU profiles, for example).
2) Untyped constants where we thought we had set the type.
3) A real bug where we ineffectually assigned to a shadowed err.
4) Some dead code removed.
There'll be more of these, because not all packages are fixed, but the
diff was already large enough.
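As a small illustration of point 1, the "_ = ..." construct on an error
we choose to ignore; the helper function here is hypothetical:

    package example

    import "os"

    // cleanup removes a temp file; the error from Remove is intentionally
    // ignored, and the explicit assignment makes that visible to readers
    // and to the linter.
    func cleanup(tmpPath string) {
        _ = os.Remove(tmpPath)
    }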
This adds a folder option "CopyOwnershipFromParent" which, when set,
makes Syncthing attempt to retain the owner/group information when
syncing files. Specifically, at the finisher stage we look at the parent
dir to get owner/group and then attempt a Lchown call on the temp file.
For this to succeed Syncthing must be running with the appropriate
permissions. On Linux this is CAP_FOWNER, which can be granted by the
service manager on startup or set on the binary in the filesystem. Other
operating systems do other things, but often it's not required to run as
full "root". On Windows this patch does nothing - ownership works
differently there and is generally less of a deal, as permissions are
inherited as ACLs anyway.
There are unit tests on the Lchown functionality, which requires the
above permissions to run. There is also a unit test on the folder which
uses the fake filesystem and hence does not need special permissions.
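A Unix-only sketch of the finisher-stage idea, with the fs abstraction
and error handling of the real code omitted; the function name is
hypothetical:

    package example

    import (
        "os"
        "path/filepath"
        "syscall"
    )

    // copyOwnershipFromParent reads the owner/group of the parent directory
    // and applies them to the temp file with Lchown. This requires the
    // process to have the appropriate privileges (e.g. a capability granted
    // by the service manager).
    func copyOwnershipFromParent(tempFile string) error {
        info, err := os.Lstat(filepath.Dir(tempFile))
        if err != nil {
            return err
        }
        st, ok := info.Sys().(*syscall.Stat_t)
        if !ok {
            return nil // not a platform where this applies
        }
        return os.Lchown(tempFile, int(st.Uid), int(st.Gid))
    }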
This removes our vendored dependencies. They provide no value for our
own build or development processes. For our source releases, the build
job can accomplish the same thing by a "go mod vendor" to recreate the
vendor dir (from the cryptographically verified dependencies).
To do so the BlockMap struct has been removed. It behaves like any other prefixed
part of the database, but was not integrated in the recent keyer refactor. Now
the database is only flushed when files are in a consistent state.
There was a problem in iterating the sequence index that could result
in missing updates. The issue is that while the index was (correctly)
iterated in a snapshot, the actual file infos were read dirty outside of
the snapshot. This fixes this by doing the reads inside the snapshot,
and also updates a couple of other places that did the same thing more
or less harmfully (I didn't investigate).
To avoid similar issues in the future I did some renaming of the
getFile* methods - the ones in a transaction are just getFile, while the
ones directly on the database are variants of getFileDirty to highlight
what's going on.
This adds booleans to the /system/version response to advise the GUI
whether the running version is a candidate release or not. (We could
parse it from the version string, but why duplicate the logic.)
Additionally the settings dialog locks down the upgrade and usage
reporting options on candidate releases. This matches the current
behavior, it just makes it obvious what actually *can* be chosen.
* lib/fs, lib/model: Improve filesystem operations during tests (fixes #5422)
Introduces MustFilesystem that panics on errors and should be used for operations
during testing which must never fail.
Create temporary directories outside of testdata.
* don't do a filesystem, just a wrapper around os for testing
* fix copyright
This avoids waiting until the next ping and timeout before the connection is
actually closed, both by notifying the peer of the disconnect and by immediately
closing the local end of the connection after that. As a nice side effect, info
level logging about dropped connections now has the actual reason in it, not a
generic timeout error which looks like a real problem with the connection.
* go mod init; rm -rf vendor
* tweak proto files and generation
* go mod vendor
* clean up build.go
* protobuf literals in tests
* downgrade gogo/protobuf
Here the event Logger is rewritten as a service with a main loop instead
of mutexes. This loop has a select with essentially two legs: incoming
events, and subscription changes. When both are possible select will
choose one randomly, thus ensuring that in practice unsubscribes will
happen timely and not block the system.
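A stripped-down sketch of that loop structure, with illustrative types;
the real Logger handles more cases (unsubscribe, buffering, etc.):

    package example

    type event struct{ name string }

    type logger struct {
        events chan event
        subs   chan chan event
        stop   chan struct{}
    }

    // serve is the single goroutine owning the subscriber list. The select
    // has two main legs - incoming events and subscription changes - so
    // neither can starve the other.
    func (l *logger) serve() {
        var subscribers []chan event
        for {
            select {
            case ev := <-l.events:
                for _, s := range subscribers {
                    s <- ev
                }
            case s := <-l.subs:
                subscribers = append(subscribers, s)
            case <-l.stop:
                return
            }
        }
    }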
Updates the package and fixes a test that depended on the old behavior
of Write() being equivalent to Reset()+Write() which is no longer the
case. The scanner already did resets after each block write, so this is
fine.
This changes the TLS and certificate handling in a few ways:
- We always use TLS 1.2, both for sync connections (as previously) and
the GUI/REST/discovery stuff. This is a tightening of the requirements
on the GUI. As far as I can tell from caniusethis.com, every browser from
2013 and forward supports TLS 1.2, so I think we should be fine.
- We always create ECDSA certificates. Previously we'd create
ECDSA-with-RSA certificates for sync connections and pure RSA
certificates for the web stuff. The new default is more modern and the
same everywhere. These certificates are OK in TLS 1.2.
- We use the Go CPU detection stuff to choose the cipher suites to use,
indirectly. The TLS package uses CPU capabilities probing to select
either AES-GCM (fast if we have AES-NI) or ChaCha20 (faster if we
don't). These CPU detection things aren't exported though, so the tlsutil
package now does a quick TLS handshake with itself as part of init().
If the chosen cipher suite was AES-GCM we prioritize that, otherwise we
prefer ChaCha20. Some might call this ugly. I think it's awesome.
Fixes error "code in directory GOPATH/src/github.com/golang/lint/golint expects import golang.org/x/lint/golint" during the
go run build.go setup command.
See https://github.com/golang/lint/issues/415
In a recent change (#5201) this return disappeared. The effect is that
we first shortcut the file and then also treat it normally. This results
in two database updates after each other, which are bound to end up in
the same batch. This means we remove one sequence entry and add two.
Not marking the issues as fixed, because I need to do more testing and
there are other discrepancies...
This adds a thin type that holds the state associated with the
leveldb.DB, leaving the huge Instance type more or less stateless. Also
moves some keying stuff into the DB package so that other packages need
not know the keying specifics.
(This does not, yet, fix the cmd/stindex program, in order to keep the
diff size down. Hence the keying constants are still exported.)
* lib/model, cmd/syncthing: Wait for folder restarts to complete (fixes #5233)
This is the somewhat ugly - but on the other hand clear - fix for what
is really a somewhat thorny issue. To avoid zombie folder runners a new
mutex is introduced that protects the RestartFolder operation. I hate
adding more mutexes but the alternatives I can think of are worse.
The other part of it is that the POST /rest/system/config operation now
waits for the config commit to complete. The point of this is that until
the commit has completed we should not accept another config commit. If
we did, we could end up with two separate RestartFolders queued in the
background. While they are both correct, and will run without
interfering with each other, we can't guarantee the order in which they
will run. Thus it could happen that the newer config got committed
first, and the older config committed after that, leaving us with the
wrong config running.
* test
* wip
* hax
* hax
* unflake test
* per folder mutexes
* paranoia
* race
* lib/fs: Add fakefs
This adds a new fake filesystem type. It's described rather extensively
in fakefs.go, but the main point is that it's for testing: when you want
to spin up a Syncthing and have a terabyte or two of random files that
can be synced somewhere, or an infinitely large filesystem to sync files
into.
It has pseudorandom properties such that data read from one fakefs can
be written into another fakefs and read back and it will look
consistent, without any of the data actually being stored.
To use:
<folder id="default" path="whatever" ...>
<filesystemType>fake</filesystemType>
This will create an empty fake filesystem. You can also specify that it
should be prefilled with files:
<folder id="default" path="whatever?size=2000000" ...>
<filesystemType>fake</filesystemType>
This will create a filesystem filled with 2TB of random data that can be
scanned and synced. There are more options, see fakefs.go.
Prefilled data is based on a deterministic seed, so you can index the
data and restart Syncthing and the index is still correct for all the
stored data.
The previous "Bad Request" was really confusing as it implies it's
somethign wrong with the request, which there isn't - the problem is
that server configuration forbids the request.
This removes the user and group juggling, which would fail when given
for example a PGID that already existed as the "syncthing" group could
then not be created with that PGID. It's not reasonable to expect the
user to know which group/user names/IDs are already present in the
Docker image.
Instead we now just launch under the specified IDs, while manually
setting the HOME env var to give us a home directory - the only thing we
needed the user entry for anyway.
Also updates to Go 1.11 and building without upgrades instead of
disabling by env var.
* release:
lib/model: Fixes on receive-only test setup and pulling (#5136)
lib/fs: Don't add path separators at end of path (fixes #5144) (#5146)
lib/fs: Evaluate root when watching not on fs creation (fixes #5043) (#5105)
The problem here is that we would update the sequence index before
updating the FileInfos, which would result in a high sequence number
pointing to a low-sequence FileInfo. The index sender would pick up the
high sequence number, send the old file, and think everything was good.
On the receiving side the old file is a no-op and ignored. The file
remains out of sync until another update for it happens.
This fixes that by correcting the order of operations in the database
update: first we remove old sequence index entries, then we update the
FileInfos (which now don't have anything pointing to them) and then we
add the sequence indexes (which the index sender can see).
The other option is to add "proper" transactions where required at the
database layer. I actually have a branch for that, but it's literally
thousands of lines of diff and I'm putting that off for another day as
this solves the problem...
This removes the out of disk space check from CheckHealth. The disk space is now
only checked if there are files to pull, in which case pulling those files is
stopped, but everything else (dirs, links, deletes) keeps running -> can recover
disk space through pulling.
Removes the manual handling of the AUTHORS file, giving every committer automatic credit for their contribution.
Manual tweaks to the file are still accepted and retained by the scripts.
A dedicated user is necessary to create relative references via
~/<folder> or $HOME/<folder>. Having the syncthing process just running
under an unprivileged UID/GID removes the home folder relation and
therefore results in nonexistent shares after update.
Signed-off-by: Benedikt Heine <bebe@bebehei.de>
Adds a receive only folder type that does not send changes, and where the user can optionally revert local changes. Also changes some of the icons to make the three folder types distinguishable.
This is an improvement of PR #4493 and related to (and maybe fixing) #4961
and #4475. Maybe fixing, because there is no clear reproducer for that
problem.
The previous PR added a mechanism to resurrect missing parent directories,
if there is a valid child file to be pulled. The same mechanism does not
exist for dirs and symlinks, even though a missing parent can happen for
those items as well. Therefore this PR extends the resurrection to all types
of pulled items.
In addition I moved the IsDeleted branch while iterating over
processDirectly to the existing IsDeleted branch in the WithNeed iteration.
This saves one pointless assignment and IsDeleted query. Also
Allows for configuring the UID and GID Syncthing runs as in the container. Uses su-exec from the Alpine repos to accomplish this. Addition of su-exec results in <2MB increase in image size.
This makes the environment variables easier to read/change.
Also expand the list so it's not inline, more readable that way.
The new architecture syntax for snapcraft allows specifying both the building architecture and the running architecture of the snap, so if we specify the build-on architecture as the host architecture and the run-on architecture as the target architecture, then snapcraft shouldn't need to install any cross-compilers, etc.
We have the invalid bit to indicate that a file isn't good. That's enough for remote devices. For ourselves, it would be good to know sometimes why the file isn't good - because it's an unsupported type, because it matches an ignore pattern, or because we detected the data is bad and we need to rescan it.
Or, and this is the main future reason for the PR, because it's a change detected on a receive only device. We will want something like the invalid flag for those changes, but marking them as invalid today means the scanner will rehash them. Hence something more fine grained is required.
This introduces a LocalFlags field to the FileInfo where we can stash things that we care about locally. For example,
FlagLocalUnsupported = 1 << 0 // The kind is unsupported, e.g. symlinks on Windows
FlagLocalIgnored = 1 << 1 // Matches local ignore patterns
FlagLocalMustRescan = 1 << 2 // Doesn't match content on disk, must be rechecked fully
The LocalFlags field isn't sent over the wire; instead the Invalid attribute is calculated based on the flags at index sending time. It's on the FileInfo anyway because that's what we serialize to the database etc.
The actual Invalid flag should after this just be considered when building the global state and figuring out availability for remote devices. It is not used for local file index entries.
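A condensed sketch of the idea; the flag values mirror the ones listed
above, while the struct and the derivation helper are illustrative:

    package example

    const (
        FlagLocalUnsupported = 1 << 0 // kind unsupported, e.g. symlinks on Windows
        FlagLocalIgnored     = 1 << 1 // matches local ignore patterns
        FlagLocalMustRescan  = 1 << 2 // doesn't match content on disk, recheck fully
    )

    type fileInfo struct {
        LocalFlags uint32
        // other FileInfo fields omitted
    }

    // wireInvalid sketches how the wire-level Invalid attribute could be
    // derived from the local flags at index sending time instead of being
    // stored separately.
    func (f fileInfo) wireInvalid() bool {
        return f.LocalFlags != 0
    }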
It's pretty much the GitHub standard, says the obvious, and does it
better than our old one which was a little bit rambling.
Intentionally just doing it instead of discussing it, in the hope of
avoiding trolling. So sue me.
To optimize WithNeed, which is called for the local device whenever an index
update is received. No tracking for remote devices to conserve db space, as
WithNeed is only queried for completion.
I'm trying to slowly clean this up a bit, and moving functionality out
into the folder types and having those methods not reach into model is
part of it. That can mean taking some odd arguments in the meantime,
some of those should probably become interfaces or properties on folder
in the long term.
The functionality was anyway mostly implemented there and not isolated
in the folderScanner type. The attempt to refactor it out in the other
direction wouldn't work given that the event loop and stuff is on
`folder`.
This adds a couple of utilities for transporting databases in JSON and a
test to load a database and verify a couple of invalid bits. The test
itself is quite pointless at the moment, but it lays the groundwork for
testing the migration of this data in the next step (after the invalid
bit should be changed to local flags for local files).
When that happens we need to have a database in the old format already
there in order to be able to test the migration.
The directive lives in its own isolated scope (where we put the visible() function). The stuff transcluded into the notification directive lives in the root scope and doesn't have access to the directive scope. Hence we cannot call dismiss() from inside the directive.
Similarly, config does not exist by itself in the directive scope, we need $parent to access the root scope.
Reference: https://docs.angularjs.org/guide/directive#isolating-the-scope-of-a-directive
How this worked before is a mystery. My guess is an Angular bug with directive scope that was fixed in 1.3. One possibility is that transclude plus scope: true (which doesn't make sense, as that is supposed to be an object) resulted in the root scope being used in the directive itself. This would then "work" as long as there is only one notification, as visible() and dismiss() would then be registered on the root scope, thus accessible from within the notification but also overridden by any notification rendered.
To newer names better reflecting their types and yet sorting together
with folder.go. Doing it now without asking because there are no open
PRs that will get killed by it, and to avoid bikeshedding the names.
The actual pull method (which is really the only thing that differs
between them) is now an interface member which gets overridden by the
subclass.
"Subclass?!" Well, this is dynamic dispatch with overriding, I guess.
Added EXPOSE to the Dockerfile. This way these ports will show up in Docker GUIs like Cockpit.
Added VOLUME parameter, this renders creating the folder (/var/syncthing) obsolete.
Instead of walking and unmarshalling the entire db and sorting the resulting
file infos by sequence, additionally store device keys by sequence number in the
database. Thus only the required file infos need be unmarshalled and they are
already in sequence order.
Bumping the limit to 2 * the max block size (16 MiB) is a slight
increase compared to previously. Nonetheless I think it's good to allow
us to queue one request and have one on the way in, or conversely have
one large block on the way in and be able to ask for smaller blocks from
others at the same time.
We did this to minimize source size and make it compile faster. Nowadays
byte slice literals compile as fast as anything, and the source isn't
checked in so it doesn't matter how big it is.
Not using base64 avoids a decode step at startup and makes the binary
about 400k smaller.
Two small behavior changes: don't "charge" the data to the global rate
limit until it's been accepted by the device specific limiter, and fix
the send/recv direction in the log print on per device rate limits.
clearAddresses write locks the struct and then calls notify. notify in turn tries to obtain a read lock on the same mutex. The result was a deadlock. This change unlocks the struct before calling notify.
Copied over from settings. Moves ignore patterns from its own modal into a tab
of the folder edit modal. Advanced settings are in a tab instead of an expandable
section and versioning gets its own tab.
Also added modal hidden event handler to remove the anchor in the url.
Given that we've taken on the responsibility of maintaining this forked
package I've added it to the Syncthing organization. We still vendor it
like an external package, because it's convenient to keep it as a fork
of upstream to more easily merge and file pull requests towards them.
When dropping delta index IDs due to upgrade, only drop our local one.
Previously, when dropping all of them, we would trigger a full send in
both directions on first connect after upgrade. Then the other side
would upgrade, doing the same thing. Net effect is full index data gets
sent twice in both directions.
With this change we just drop our local ID, meaning we will send our
full index on first connect after upgrade. When the other side upgrades,
they will do the same. This is a bit less cruel.
Unignored files are marked as conflicting while scanning, which is then resolved
in the subsequent pull. Automatically reconciles needed items on send-only
folders, if they do not actually differ except for internal metadata.
This doesn't happen today, but it might in the future if the block size
were increased or made variable and we were talking to a client from the
future.
The current 500 "test failed" looks and sounds like a problem in the
relay pool server, while it actually indicates a problem on the
announcing side. Instead use 400 "connection test failed" to indicate
that the request was bad and which test failed.
* lib/db: Don't panic on negative counts (fixes #4659)
So, negative counts should never happen and hence the original idea to
panic. However, this sucks as the panic will happen in a folder runner,
be automatically swallowed by suture, and the runner gets restarted but
now we are in a bad state. (Related: #4758)
At the time of writing the global list is somewhat in flux (we've
changed how ignored files are handled, invalid bits, etc.) and I think
that can cause unusual conditions here. Hence just fixing up the numbers
instead until the next full recount.
* release:
lib/osutil: Fix priority lowering on Windows
lib/osutil: Don't attempt to reduce our niceness level (fixes #4681)
lib/osutil: Check PGID before trying to set it (fixes #4679)
When scanner.Walk detects a change, it now returns the new file info as well as the old file info. It also finds deleted and ignored files while scanning.
Also directory deletions are now always committed to db after their children to prevent temporary failure on remote due to non-empty directory.
This removes a number of timing related things, leaving just the total
test timeout now bumped to one minute. Normally we get the filesystem
events within a second or so, so this doesn't affect the test time in
the successful case. If we don't actually get the events we expect
within a minute I think we are legitimately in "failed" territory.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4715
LGTM: imsodin, AudriusButkevicius
Since #4340 pulls aren't happening every 10s anymore and may be delayed up to 1h.
This means that no folder error event reaches the web UI for a long time, thus no
failed items will show up for a long time. Now errors are populated when the
web UI is opened.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4650
LGTM: AudriusButkevicius
This is a new revision of the discovery server. Relevant changes and
non-changes:
- Protocol towards clients is unchanged.
- Recommended large scale design is still to be deployed behind nginx (I
tested, and it's still a lot faster at terminating TLS).
- Database backend is leveldb again, only. It scales enough, is easy to
set up, and there is no separate backend service to take care of.
- Server supports replication. This is a simple TCP channel - protect it
with a firewall when deploying over the internet. (We deploy this within
the same datacenter, and with firewall.) Any incoming client announces
are sent over the replication channel(s) to other peer discosrvs.
Incoming replication changes are applied to the database as if they came
from clients, but without the TLS/certificate overhead.
- Metrics are exposed using the prometheus library, when enabled.
- The database values and replication protocol are protobuf, because JSON
was quite CPU intensive when I tried that and benchmarked it.
- The "Retry-After" value for failed lookups gets slowly increased from
a default of 120 seconds, by 5 seconds for each failed lookup,
independently by each discosrv. This lowers the query load over time for
clients that are never seen. The Retry-After maxes out at 3600 after a
couple of weeks of this increase. The number of failed lookups is
stored in the database, now and then (avoiding making each lookup a
database put).
All in all this means clients can be pointed towards a cluster using
just multiple A / AAAA records to gain both load sharing and redundancy
(if one is down, clients will talk to the remaining ones).
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4648
Previous "reasonable resource limits" cause test failures with Go 1.9.
They were added to protect the previous generation of the build server
and are no longer needed.
Fixes issue #4653.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4656
It turns out that ZFS doesn't do any normalization when storing files,
but does do normalization "as part of any comparison process".
In practice, this seems to mean that if you LStat a normalized filename,
ZFS will return the FileInfo for the un-normalized version of that
filename.
This meant that our test to see whether a separate file with a
normalized version of the filename already exists was failing, as we
were detecting the same file.
The fix is to use os.SameFile, to see whether we're getting the same
FileInfo from the normalized and un-normalized versions of the same
filename.
One complication is that ZFS also seems to apply its magic to os.Rename,
meaning that we can't use it to rename an un-normalized file to its
normalized filename. Instead we have to move via a temporary object. If
the move to the temporary object fails, that's OK, we can skip it and
move on. If the move from the temporary object fails however, I'm not
sure of the best approach: the current one is to leave the temporary
file name as-is, and get Syncthing to synchronize it, so at least we
don't lose the file. I'm not sure if there are any implications of this
however.
As part of reworking normalizePath, I spotted that it appeared to be
returning the wrong thing: the doc and the surrounding code expecting it
to return the normalized filename, but it was returning the
un-normalized one. I fixed this, but it seems suspicious that, if the
previous behaviour was incorrect, no one ever ran afoul of it. Maybe all
filesystems will do some searching and give you a normalized filename if
you request an unnormalized one.
As part of this, I found that TestNormalization was broken: it was
passing, when in fact one of the files it should have verified was
present was missing. Maybe this was related to the above issue with
normalizePath's return value, I'm not sure. Fixed en route.
Kindly tested by @khinsen on the forum, and it appears to work.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4646
This adds one new feature, that discovery servers can have ?nolookup to
be used only for announces. The default set of discovery servers is
changed to:
- discovery.s.n used for lookups. This is dual stack load balanced over
all discovery servers, and returns both IPv4 and IPv6 results when they
exist.
- discovery-v4.s.n used for announces. This has IPv4 addresses only and
the discovery servers will update the unspecified address with the IPv4
source address, as usual.
- discovery-v6.s.n which is exactly the same for IPv6.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4647
This no longer pokes at model internals, and only touches the config.
As a result, model handles this in CommitConfiguration, which restarts
the folders if things change, which repopulate m.folderDevice, m.deviceFolder
and other internal mappings.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4639
These files always have the symlink bit set, because they are reparse
points. Nonetheless they are not symlinks, and Lstat reports a size for
them. We use this fact to disambiguate, and hope fervently that nothing
else matches this description and comes back to bite us...
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4622
Just because there are a ton of people struggling to set env vars.
Perhaps this should live in advanced settings, and perhaps we should have a button to view the log.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4604
LGTM: calmh, imsodin
Also attempt to handle this nicer by ignoring the truncate failure when
it doesn't matter, and recover by deleting the temp file when it does.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4594
This keeps the data we need about sequence numbers and object counts
persistently in the database. The sizeTracker is expanded into a
metadataTracker that handles multiple folders, and the Counts struct is
made protobuf serializable. It gains a Sequence field to assist in
tracking that as well, and a collection of Counts become a CountsSet
(for serialization purposes).
The initial database scan is also a consistency check of the global
entries. This shouldn't strictly be necessary. Nonetheless I added a
created timestamp to the metadata and set a variable to compare against
that. When the time since the metadata creation is old enough, we drop
the metadata and rebuild from scratch like we used to, while also
consistency checking.
A new environment variable STCHECKDBEVERY can override this interval,
and for example be set to zero to force the check immediately.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4547
LGTM: imsodin
So STDEADLOCK seems to do the same thing as STDEADLOCKTIMEOUT, except in
the other package. Consolidate?
STDEADLOCKTHRESHOLD is actually called STLOCKTHRESHOLD, correct the help
text.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4598
Two issues since the filesystem migration:
- filepath.Base() doesn't do what the code expected when the path ends
in a slash, and we make sure the path ends in a slash.
- the return should be fully qualified, not just the last component.
With this change it works as before, and passes the new test for it.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4591
LGTM: imsodin
Fix the folder restart behavior (ignore Label), improve the API for that
(imho).
Also removes the tab switch animation in the settings modal, because
annoying.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4577
* release:
lib/connections: Trust the model to tell us if we are connected
build: More signatures, more better (ref #3420)
lib/model: Trigger a pull when ignore patterns change
This should address issue as described in https://forum.syncthing.net/t/stun-nig-party-with-paused-devices/10942/13
Essentially the model and the connection service goes out of sync in terms of thinking if we are connected or not.
Resort to model as being the ultimate source of truth.
I can't immediately pin down how this happens, yet some ideas.
ConfigSaved happens in a separate routine, so it's possible that we have some sort of device removed yet connection comes in parallel kind of thing.
However, in this case the connection exists in the model, and does not exist in the connection service and the only way for the connection to be removed
in the connection service is device removal from the config.
Given the subject, this might also be related to the device being paused.
Also, adds more info to the logs
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4533
We need to reset prevSeq so that we force a full check when someone
reconnects - the sequence number may not have changed due to the
reconnect. (This is a regression; we did this before f6ea2a7.)
Also add an optimization: we schedule a pull after scanning, but there
is no need to do so if no changes were detected. This matters now
because the scheduled pull actually traverses the database which is
expensive.
This, however, makes the pull not happen on initial scan if there were
no changes during the initial scan. Compensate by always scheduling a
pull after initial scan in the rwfolder itself.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4508
LGTM: imsodin, AudriusButkevicius
This is step one of a hundred fifty on the path to case insensitivity.
It brings in the basic case folding mechanism and adds it to the
mtimefs, as this is something outside the fileset that touches stuff in
the database based on name. No effort to convert or handle existing
entries when the insensitivity is changed, I don't think we need it...
Useless by itself but includes tests and will reduce the review load
along the way.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4521
This makes it OK to not have any listeners working. Specifically,
- We don't complain about an empty listener address
- We don't complain about not having anything to announce to global
discovery servers
- We don't send local discovery packets when there is nothing to
announce.
The last point also fixes a thing where the list of addresses for local
discovery was set at startup time and never refreshed.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4517
Well Tested(TM)
Introduces a potential issue where we always pick some connectable but dodgy connection that breaks
soon after the TLS handshake.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4489
Diff is large due to comment reformatting and indentation but all it
does is wrap the file mtime/size/permissions check in an "if
stat.IsRegular()".
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4507
This removes a significant, complex chunk of database code. The
"replace" operation walked both the old and new in lockstep and made the
relevant changes to make the new situation correct. But since delta
indexes we pretty much never need this - we just used replace to drop
the existing data and start over.
This makes that explicit and removes the complexity.
(This is one of those things that would be annoying to make case
insensitive, while the actual "drop and then insert" that we do is
easier.)
This is fairly well unit tested...
The one change to the tests is to cover the fact that previously replace
with something identical didn't bump the sequence number, while
obviously removing everything and re-inserting does. This is not
behavior we depend on anywhere.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4500
LGTM: imsodin, AudriusButkevicius
With VPNs and stuff we can get a single failure on an interface that
supposedly supports broadcasts without it being fatal.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4415
The folder marker conversion forgot to hide the .stfolder. This adds
that, for those who have not yet been converted.
Also adds Hide() calls to the folder start, to mend historical
unhidedness. (I'm sure this will upset someone who is manually managing
their .stignores in the other direction...)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4384
When STHASHING is set, don't benchmark as it's already decided. If weak
hashing isn't set to "auto", don't benchmark that either.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4349
These functions were very naive and slow. We haven't done much about
them because they pretty much don't matter at all for Syncthing
performance. They are however called very often in the discovery server
and these optimizations have a huge effect on the CPU load on the
public discovery servers.
The code isn't exactly obvious, but we have good test coverage on all
these functions.
benchmark               old ns/op     new ns/op     delta
BenchmarkLuhnify-8      12458         1045          -91.61%
BenchmarkUnluhnify-8    12598         1074          -91.47%
BenchmarkChunkify-8     10792         104           -99.04%

benchmark               old allocs    new allocs    delta
BenchmarkLuhnify-8      18            1             -94.44%
BenchmarkUnluhnify-8    18            1             -94.44%
BenchmarkChunkify-8     44            2             -95.45%

benchmark               old bytes     new bytes     delta
BenchmarkLuhnify-8      1278          64            -94.99%
BenchmarkUnluhnify-8    1278          64            -94.99%
BenchmarkChunkify-8     42552         128           -99.70%
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4346
Currently all errors during pulling and the first of these errors again on
finishing are logged to info. Besides that the errors logged when finishing
are stored in f.errors. This PR moves all logging during pulling to the debug
channel (they might still be relevant in some obscure debugging case) and
uses the stored errors to log one main error per failed item once all
pulling iterations are done and have failed.
Additionally, instead of trying 11 times it now only tries 3 times.
This is the first part of what is discussed here:
https://forum.syncthing.net/t/reduce-verboseness-of-puller/10261
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4338
This updates kcp and uses our own fork which:
1. Keys sessions not just by remote address, but by remote address +
   conversation ID.
2. Allows not closing connections that were passed directly to the library.
3. Resets the cache key if the session gets terminated.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4339
LGTM: calmh
Prior to this, the following is possible:
- Create a symlink "foo -> /somewhere", it gets synced
- Delete "foo", it gets versioned
- Create "foo/bar", it gets synced
- Delete "foo/bar", it gets versioned in "/somewhere/bar"
With this change, versioners should never version symlinks.
As of Go 1.8 it's valid to not have a GOPATH set. We ask the Go tool
what the GOPATH seems to be, and if we find ourselves (or a valid copy
of ourselves...) in that location we do not fiddle with the GOPATH.
This adds support for building with the source placed anywhere and no
GOPATH set. The build script handles this by creating a temporary GOPATH
in the system temp dir (or another specified location) and mirroring the
source there before building. The resulting binaries etc still end up in
the same place as usual, meaning at least the "build", "install", "tar",
"zip", "deb", "snap", "test", "vet", "lint", "metalint" and "clean"
commands work without a GOPATH. To this end these commands internally
use fully qualified package paths like
"github.com/syncthing/syncthing/cmd/..." instead of "./cmd/..." like
before.
There is a new command "gopath" that prepares and echoes the directory
of the temporary GOPATH. This can be used to run other non-build go
commands:
export GOPATH=$(go run build.go gopath) // GOPATH is now set
go test -v -race github.com/syncthing/syncthing/cmd/...
There is a new option "-no-build-gopath" that prevents the
check-and-copy step, instead assuming the temporary GOPATH is already
created and up to date. This is a performance optimization for build
servers running multiple builds commands in sequence:
go run build.go gopath // creates a temporary GOPATH
go run build.go -no-build-gopath -goos=... tar // reuses GOPATH
go run build.go -no-build-gopath -goos=... tar // reuses GOPATH
The temporary GOPATH is placed in the system temporary directory
(os.TempDir()) unless overridden by the STTMPDIR variable. It is named
after the hash of the current directory where build.go is run. The
reason for this is that the name should be unique to a source checkout
without risk for conflict, but still persistent between runs of
build.go.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4253
LGTM: AudriusButkevicius, imsodin
This moves a few things from script/ to a new directory meta/, and makes
them real Go tests. These are the authors, copyright, metalint and gofmt
checks. That means that they can now be run by
go test -v ./meta
and optionally filtered by the usual -run thing to go test. Also -short
will cut down on the metalint stuff and exclude the authors check (which
is slow because it runs git lots of times).
Mainly this makes everything easier on things like build servers where
we can now just run tests instead of do a bunch of scripting.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4252
Otherwise all the lines from includes will be shown in the web UI instead of
just the #include ... line. This problem was introduced in #3996.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4248
LGTM: calmh
This removes the special handling of minor versions as major when the
actual major is zero, and adds the special case that upgrades from 0.x
to 1.x are considered minor. 0.x to 2.x or 1.x to 2.x etc are still
considered major.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4226
This solves the erratic test failures on model.TestIgnores by ensuring
that the ignore patterns are reloaded even in the face of unchanged
timestamps.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4208
Things like the package name "syncthing" were hardcoded, which is not
awesome. With this in place we can build debs called syncthing-discosrv,
syncthing-relaysrv and syncthing-relaypoolsrv. I don't actually intend
to build and publish the relaypoolsrv, but the others can be good. Using
the "syncthing-" prefix to make the obvious "apt-cache search syncthing"
actually show them etc.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4206
LGTM: AudriusButkevicius, calmh
Starting stuff from init() is an antipattern, and the innerProcess
variable isn't 100% reliable. We should sort out the other uses of it as
well in due time.
Also removing the hack on innerProcess as I happened to see it and the
affected versions are now <1% users.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4185
The folder already knew how to stop properly, but the fs.Walk() didn't
and can potentially take a very long time. This adds context support to
Walk and the underlying scanning stuff, and passes in an appropriate
context from above. The stop channel in model.folder is replaced with a
context for this purpose.
To test I added an infiniteFS that represents a large amount of data
(not actually infinite, but close) and verify that walking it is
properly stopped. For that to be implemented smoothly I moved out the
Walk function to its own type, as typically the implementer of a new
filesystem type might not need or want to reimplement Walk.
It's somewhat tricky to test that this actually works properly on the
actual sendReceiveFolder and so on, as those are started from inside the
model and the filesystem isn't easily pluggable etc. Instead I've tested
that part manually by adding a huge folder and verifying that pause,
resume and reconfig do the right things by looking at debug output.
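For illustration, the cancellation check amounts to something like the following sketch; the wrapper name and wiring are mine, not the actual scanner code.
```go
// Sketch: a filepath.Walk wrapper that aborts as soon as the context is
// cancelled, e.g. when the folder is paused or reconfigured.
package sketch

import (
	"context"
	"os"
	"path/filepath"
)

func walkWithContext(ctx context.Context, root string, fn filepath.WalkFunc) error {
	return filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		select {
		case <-ctx.Done():
			return ctx.Err() // stop the walk promptly
		default:
		}
		return fn(path, info, err)
	})
}
```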
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4117
So, when first implementing the database layer I added panics on every
unexpected error condition mostly to be sure to flush out bugs and
inconsistencies. Then it became sort of standard, and we don't seem to
have many bugs here any more so the panics are usually caused by things
like checksum errors on read. But it's not an optimal user experience to
crash all the time.
Here I've weeded out most of the panics, while retaining a few "can't
happen" ones like errors on marshalling and write that we really can't
recover from.
For the rest, I'm mostly treating any read error as "entry didn't
exist". This should mean we'll rescan the file and correct the info (if
scanning) or treat it as a new file and do conflict handling (when
pulling). In some cases things like our global stats may be slightly
incorrect until a restart, if a database entry goes suddenly missing
during runtime.
All in all, I think this makes us a bit more robust and friendly without
introducing too many risks for the user. If the database is truly toast,
probably many other things on the system will be toast as well...
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4118
This adds a pattern validator to the GUI listen port field that checks
for port numbers 1024 and above. Also adds a help link pointing to the
(new) page talking about GUI listen port numbers. That page has
information on how to work around the restriction, in general terms.
Also changes the header from "GUI Listen Addresses" to the singular
version, because we only support one listen address today.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4116
Given that saveConfig() is async, it might not have happened before we
try to save the ignores and unpause. Likewise we must wait for saving
ignores before unpausing, or the scan might start before the ignores are on
disk.
Javascript <3
Harmonize how we use batches in the model, using ProtoSize() to judge
the actual weight of the entire batch instead of estimating. Use smaller
batches in the block map - I think we might have thought that batch.Len()
in the leveldb was the batch size in bytes, but it's actually the number
of operations.
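As a sketch of the idea (the types, flush callback and byte budget below are illustrative, not the real constants):
```go
// Sketch: weigh a batch by the encoded protobuf size of its items rather
// than by item count, and flush when the byte budget is exceeded.
package sketch

type FileInfo interface{ ProtoSize() int } // stand-in for the generated type

const maxBatchSizeBytes = 1 << 20 // illustrative budget

func flushInBatches(files []FileInfo, flush func([]FileInfo)) {
	var batch []FileInfo
	bytes := 0
	for _, f := range files {
		batch = append(batch, f)
		bytes += f.ProtoSize()
		if bytes > maxBatchSizeBytes {
			flush(batch)
			batch, bytes = nil, 0
		}
	}
	if len(batch) > 0 {
		flush(batch)
	}
}
```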
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4114
The mechanism to disallow manual scans before the initial scan has completed
(#3996) had the side effect that if the initial scan failed, no further
scans were allowed. So this marks the initial scan as finished regardless of
whether it succeeded or not.
There was also redundant code in rofolder and a pointless check for folder
health in scanSubsIfHealthy (it happens in internalScanFolderSubdirs as well).
This also moves logging from folder.go to ro/rw-folder.go to include the
information about whether it is send-only or send-receive.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4104
This adds a parameter "events" to the /rest/events endpoint. It should
be a comma separated list of the events the consumer is interested in.
When not given it defaults to the current set of events, so it's
backwards compatible.
The API service then manages subscriptions, creating them as required
for each requested event mask. Old subscriptions are not "garbage
collected" - it's assumed that in normal usage the set of event
subscriptions will be small enough. Possibly lower than before, as we
will not set up the disk event subscription unless it's actually used.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4092
This deprecates the current minDiskFreePct setting and introduces
minDiskFree. The latter is, in its serialized form, a string with a
unit. We accept percentages ("2.35%") and absolute values ("250 k", "12.5
Gi"). Common suffixes are understood. The config editor lets the user
enter the string, and validates it.
We still default to "1 %", but the user can change that to an absolute
value at will.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4087
LGTM: AudriusButkevicius, imsodin
This adds a new config AllowedNetworks per device, which when set should
contain a list of network prefixes (192.168.0.0/24 etc) that are
allowed for the given device. The connection service will not attempt
connections to addresses outside of the given networks and incoming
connections will be rejected as well.
I've added the config to the normal device editor and shown it (when
set) in the device summary on the main screen.
There's a unit test for the IsAllowedNetwork method, I've done some
manual sanity testing on top of that.
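The check is conceptually along these lines (a sketch only; the actual IsAllowedNetwork implementation may differ):
```go
// Sketch: a device address is allowed if its IP falls within any of the
// configured network prefixes.
package sketch

import "net"

func isAllowedNetwork(host string, allowed []string) bool {
	ip := net.ParseIP(host)
	if ip == nil {
		return false
	}
	for _, prefix := range allowed {
		if _, cidr, err := net.ParseCIDR(prefix); err == nil && cidr.Contains(ip) {
			return true
		}
	}
	return false
}
```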
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4073
One more step on the path of the great refactoring. Touches rwfolder a
little bit since it uses the Lstat from fs as well, but mostly this is
just on the scanner as rwfolder is scheduled for a later refactor.
There are a couple of usages of fs.DefaultFilesystem that will in the
end become a filesystem injected from the top, but that comes later.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4070
LGTM: AudriusButkevicius, imsodin
Click the transfer rate to toggle between binary-exponent bytes (KiB/s,
MiB/s) and metric based bits (kb/s, Mb/s). The setting is persisted in
browser local storage (best effort).
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4074
The ng-list directive makes angular handle lists by doing comma
separation in the roundtrip. We could simplify our normal config dialog
this way as well...
Also adds devices that were inexplicably not available in the advanced
config.
This reveals some other ugliness, as the "devices" config of a folder
now shows as "[object Object], [object Object]" - previously it was
invisible. I think that's fine for now.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4071
This change sorts the language selection menu on Syncthing's home page so
that the languages are displayed alphabetically. The issue was discussed in
#3813. There were a few ways of doing this. Sorting the language names in the
Transifex file did not work due to access control. So I sorted the languages
directly in languageSelectdirective.js by inverting and sorting the language
info retrieved from localeService.js.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4052
Adds a unit test to ensure we don't scan symlinks on Windows. For the
rwfolder, trusts that the logic in the invalid check is correct and that
the check is actually called from the need loop.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4042
Basically, if we don't care about the sync status of the file we should
not tag someone else out of sync because they don't have the latest
version. This solves *my* "Syncing - 100%" scenario at least.
The reason this happens seems to be like this, in my situation. I have
three devices, connected in a "line": A-B-C. A is a Mac and litters
.DS_Store files everywhere. I've ignored these, but some escaped into
the folders before I did so. I've also ignored them on B and C but at
different stages. B was flagging C as out of sync, because at the point
the ignores were introduced C had a lower version of .DS_Store than A.
Now none of them are sending updates about it any more since it's
ignored...
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3981
Other routines use atomics, hence even if we are under a lock, we should
too.
We might atomically store with
Not sure how it happens, but it's between lines
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3974
After this change,
- Symlinks on Windows are always unsupported. Sorry.
- Symlinks are always enabled on other platforms. They are just a small
file like anything else. There is no need to special case them. If you
don't want to sync some symlinks, ignore them.
- The protocol doesn't differentiate between different "types" of
symlinks. If that distinction ever does become relevant the individual
devices can figure it out by looking at the destination when they
create the link.
It's backwards compatible in that all the old symlink types are still
understood to be symlinks, and the new SYMLINK type is equivalent to the
old SYMLINK_UNKNOWN which was always a valid way to do it.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3962
LGTM: AudriusButkevicius
Instead of just immediately dropping the event if the subscription isn't
ready to receive it, give it 15 ms to catch up. The value 15 ms is
grabbed out of thin air - it just seems reasonable to me.
The timer juggling makes the event send pretty much exactly twice as
slow as it was before, but we're still under a microsecond. I think it's
negligible compared to whatever event that just happened that we're
interested in logging (usually a file operation of some kind).
benchmark old ns/op new ns/op delta
BenchmarkBufferedSub-8 475 950 +100.00%
benchmark old allocs new allocs delta
BenchmarkBufferedSub-8 4 4 +0.00%
benchmark old bytes new bytes delta
BenchmarkBufferedSub-8 104 117 +12.50%
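The grace period boils down to something like this sketch (channel and event types are illustrative):
```go
// Sketch: instead of dropping immediately when the subscriber is behind,
// wait up to 15 ms for it to catch up, then drop.
package sketch

import "time"

type Event struct{}

func send(out chan<- Event, ev Event) {
	t := time.NewTimer(15 * time.Millisecond)
	defer t.Stop()
	select {
	case out <- ev:
	case <-t.C:
		// still not ready after the grace period; drop the event
	}
}
```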
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3960
The monitor process should not set STNORESTART as this indicates the
intention from the user. Setting STMONITORED is enough, as this tells
the next Syncthing instance that it is running under the monitor
process.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3932
The Ping event is important, as it means that requests complete within
a sensible time. The disk events API didn't have the Ping event, so
if there were no disk events, the request would keep taking forever.
Unless, of course, there's a reverse proxy which times the request out
after a suitably large interval (or something else aborts it), in which
case Syncthing isn't very happy.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3929
Instead of
[I6KAH] 19:05:56 INFO: Single thread hash performance is 359 MB/s using minio/sha256-simd (354 MB/s using crypto/sha256).
it now says
[I6KAH] 19:06:16 INFO: Single thread SHA256 performance is 359 MB/s using minio/sha256-simd (354 MB/s using crypto/sha256).
[I6KAH] 19:06:17 INFO: Actual hashing performance is 299.01 MB/s
which is more informative. This is also the number it reports in usage
reporting.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3918
Can't do what I did, as the rolling function is not the same as the
non-rolling one. Instead this uses an improved version of the rolling
adler32 to accomplish the same thing. (PR filed on upstream, so should
be able to use that directly in the future.)
The rolling version of adler32 is just a wrapper around the standard
hash/adler32 when used in a non-rolling fashion, but it's inefficient as
it allocates a new hash instance for every Write(). This uses the
default version instead in the block hasher, and adds a test to verify
the result is the same as they were before. It reduces allocations by
88% and increases speed about 5%.
benchmark old ns/op new ns/op delta
BenchmarkHashFile-8 64434698 61303647 -4.86%
benchmark old MB/s new MB/s speedup
BenchmarkHashFile-8 276.65 290.78 1.05x
benchmark old allocs new allocs delta
BenchmarkHashFile-8 1238 150 -87.88%
benchmark old bytes new bytes delta
BenchmarkHashFile-8 17877363 49292 -99.72%
Syncthing adds some hidden files when a folder is added, but there is currently
no equivalent cleanup procedure. This change is conservative so as not to
accidentally cause data loss.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3874
On Windows we would descend into SYMLINKD type links when we scanned
them successfully, as we would return nil from the walk function and the
filepath.Walk iterator apparently thought it OK to descend into the
symlinked directory.
With this change we always return filepath.SkipDir no matter what.
Tested on Windows 10 as admin, does what it should.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3875
Adds support for -auditfile=<file>, where <file> is "-" for stdout, "--" for
stderr, or a filename. It can be left blank (or left out entirely) for the
original behaviour of creating a timestamped filename.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3860
Also tweaks the proto definitions:
- [packed=false] on the block_indexes field to retain compat with
v0.14.16 and earlier.
- Uses the vendored protobuf package in include paths.
And, "build.go setup" will install the vendored protoc-gen-gogofast.
This should ensure that a proto rebuild isn't so dependent on whatever
version of the compiler and package the developer has installed...
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3864
The protobuf encoder now produces packed arrays for things like []int32,
which is actually correct according to the proto3 spec. However
Syncthing v0.14.16 and earlier doesn't support this. This reverts the
encoding change, but keeps the updated decoder so that we are both more
compatible with other proto3 implementations and can move to the updated
encoder in the future.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3856
This avoids unnecessary browser request failures and retries. Eg:
- Browser reuses existing HTTP connection for GUI refresh request
- Server closes connection with request in flight
- Browser retries GET request.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3854
Since we anyway need the folderConfig for this I'm skipping the copying
of all its attributes that rwfolder did and just keeping the original
around instead.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3825
This adds support for AES_256_GCM_SHA384 (in there since Go 1.5, a bit
of a shame we missed it) and ChaCha20-Poly1305 (if built with Go 1.8;
ignored on older Gos).
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3822
The test for the error string is fragile, and the error string changed
in Go 1.8 so the relevant part is no longer a prefix. This covers it
with a test though, so it should be fine in the future as well.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3818
Instead, trust (and test) that the temp file has appropriate permissions
from the start. The only place where this changes our behavior is for
ignores which go from 0644 to 0600. I'm OK with that.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3756
This changes the "seen" map that we're anyway keeping around to track
the modtimes of loaded files instead. When doing a Load() we check that
1) the file we are loading is in the modtime set, and 2) that none of
the files in the modtime set have changed modtimes. If that's the case
we do a quick return without parsing anything or clearing the cache.
This required adding two one-second sleeps in the tests to make sure
the modtimes were updated when we expect cache reloads, because I'm on a
crappy filesystem with one second timestamp granularity. That also
proves it works...
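The short-circuit is roughly as below; the cache layout here is a sketch, not the actual ignore code.
```go
// Sketch: remember the modtime of every file parsed last time (the top
// .stignore plus its includes) and skip reparsing if none have changed.
package sketch

import (
	"os"
	"time"
)

func unchanged(modtimes map[string]time.Time, file string) bool {
	if _, ok := modtimes[file]; !ok {
		return false // this file wasn't part of the last load
	}
	for name, mtime := range modtimes {
		info, err := os.Stat(name)
		if err != nil || !info.ModTime().Equal(mtime) {
			return false // something changed; reparse and rebuild the cache
		}
	}
	return true // quick return without parsing or clearing the cache
}
```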
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3754
The current way is quite confusing for new users - we create a default
folder, but it's not usable with the default folder created somewhere
else. Instead, when setting up for the first time with two devices, the
default folder must be removed and recreated on one of them. This comes
up on IRC and the forum now and then.
I think this matches expectations better.
Another alternative would be to remove it entirely (not create a default
folder), but then we should also add some guidance in the UI on how to
proceed.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3751
Fsyncing the file has a small performance penalty and seems unnecessary. The
file will be fsynced anyway when the changes are committed to the database.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3749
This makes the device ID a real type that can be used in the protobuf
schema. That avoids the juggling back and forth from []byte in a bunch
of places and simplifies the code.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3695
Since delta indexes it's perfectly normal for us to need files that are
currently unavailable due to devices being disconnected. This doesn't
imply a failure, so we should not show the "Failed Items" line and
corresponding eternal spinner (since it would never be filled in, since
there is no failure).
We still show state "Out of Sync" (correct) and the list of files we
need (correct).
The discrepancy between global and local sizes is fine and expected in
the presence of ignores. This just moves the "we have ignore patterns"
indication to the actual local size metric, as an explanation of why it
may differ from the global size...
The ```syncthing-resume.service``` will show as a failed service if there
are no syncthing processes running after resume, but this can be safely
ignored because it makes no difference.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3630
This adds autodetection of the fastest hashing library on startup, thus
handling the performance regression. It also adds an environment
variable to control the selection, STHASHING=standard (Go standard
library version, avoids SIGILL crash when the minio library has bugs on
odd CPUs), STHASHING=minio (to force using the minio version) or unset
for the default autodetection.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3617
When the GUI/API is bound to localhost, we enforce that the Host header
looks like localhost. This can be disabled by setting
insecureSkipHostCheck in the GUI config.
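Conceptually the enforcement looks like the sketch below; the handler wiring and accepted host strings are illustrative.
```go
// Sketch: reject requests whose Host header doesn't look like localhost
// when the GUI/API listens on a loopback address.
package sketch

import (
	"net"
	"net/http"
)

func localhostOnly(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		host := r.Host
		if h, _, err := net.SplitHostPort(host); err == nil {
			host = h
		}
		switch host {
		case "localhost", "127.0.0.1", "::1":
			next.ServeHTTP(w, r)
		default:
			http.Error(w, "Host check error", http.StatusForbidden)
		}
	})
}
```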
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3558
When files that were previously marked as deleted became ignored, we
used to do nothing at all. This changes that behavior to set the Invalid
bit (that we should rename to Ignored). This then becomes an update to
other devices that they should not trust our knowledge about the file in
question.
Read this diff without whitespace...
Tested by
- creating a bunch of files on s1
- letting them sync to s2
- shutting down s2
- deleting the files on s1 and rescanning
- adding the files to .stignore on s1 and rescanning
- starting up s2 and letting it sync
- observing the files are not deleted on s2, and it considers itself up
to date.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3557
We used to consider deleted files & directories 128 bytes large. After
the delta indexes change a bug slipped in where deleted files would be
weighted according to their old non-deleted size. Both ways are
incorrect (but the latest change made it worse), as if there are more
files deleted than remaining data in the repo the needSize can be
greater than the globalSize, resulting in a negative completion
percentage.
This change makes it so that deleted items are zero bytes large, which
makes more sense. Instead we expose the number of files that we need to
delete as a separate field in the Completion() result, and hack the
percentage down to 95% complete if it was 100% complete but we need to
delete files. This latter part is sort of ugly, but necessary to give
the user some sort of feedback.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3556
We previously set the mtime on the temp file, and then renamed it to the
real path. Unfortunately that means we'd save the real timestamp under
the temp name ".syncthing.foo.tmp" when the actual file that
we will look up on the next scan is "foo". This moves the Chtimes later,
ensuring that it gets recorded correctly under the right name.
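The corrected ordering is essentially this (variable names are illustrative):
```go
// Sketch: rename first, then set the mtime on the final name, so the
// timestamp is recorded for "foo" rather than ".syncthing.foo.tmp".
package sketch

import (
	"os"
	"time"
)

func finalize(tempName, realName string, mtime time.Time) error {
	if err := os.Rename(tempName, realName); err != nil {
		return err
	}
	return os.Chtimes(realName, mtime, mtime)
}
```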
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3519
These are no longer required with Go 1.7. Change made by removing the
functions, doing a global s/osutil.Remove/os.Remove/.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3514
So there were some issues here. The main problem was that
model.Close(deviceID) was overloaded to mean "the connection was closed
by the protocol layer" and "I want to close this connection". That meant
it could get called twice - once *to* close the connection and then once
more when the connection *was* closed.
After this refactor there is instead a Closed(conn) method that is the
callback. I didn't need to change the parameter in the end, but I think
it's clearer what it means when it takes the connection that was closed
instead of a device ID. To close a connection, the new close(deviceID)
method is used instead, which only closes the underlying connection and
leaves the cleanup to the Closed() callback.
I also changed how we do connection switching. Instead of the connection
service calling close and then adding the connection, it just adds the
new connection. The model knows that it already has a connection and
makes sure to close and clean out that one before adding the new
connection.
To make sure to sequence this properly I added a new map of channels
that get created on connection add and closed by Closed(), so that
AddConnection() can do the close and wait for the cleanup to happen
before proceeding.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3490
The completion of remote devices was based only on the average of the percentages of all folders, which is misleading when two folders have very different sizes.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3481
LGTM: calmh, AudriusButkevicius
Furthermore:
1. Cleans configs received, migrates them as we receive them.
2. Clears indexes of devices we no longer share the folder with
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3478
It seems that it would be impossible to drop down to a relay after establishing a direct connection.
Also, we should not drop the existing connection until after we've passed the validation steps,
and it seems it's being dropped in two places unnecessarily at the moment.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3480
This adds a new nanoseconds field to the FileInfo, populates it during
scans and sets the non-truncated time in Chtimes calls.
The actual file modification time is defined as modified_s seconds +
modified_ns nanoseconds. It's expected that the modified_ns field is <=
1e9 (that is, all whole seconds should go in the modified_s field) but
not really enforced. Given that it's an int32, the timestamp can be
adjusted by roughly ±2.1 seconds via the modified_ns field...
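How the two fields combine, as a sketch (the struct is a stand-in for the generated FileInfo):
```go
// Sketch: the actual modification time is modified_s whole seconds plus
// modified_ns nanoseconds.
package sketch

import "time"

type fileInfo struct {
	ModifiedS  int64 // whole seconds
	ModifiedNs int32 // fractional part, expected to stay below 1e9
}

func (f fileInfo) ModTime() time.Time {
	return time.Unix(f.ModifiedS, int64(f.ModifiedNs))
}
```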
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3431
This adds a config to enable debug functions on the API server, which is
by default disabled. When enabled, the /rest/debug things become
available and become available without requiring a CSRF token (although
authentication is required if configured).
We also add a new endpoint /rest/debug/cpuprof?duration=15s (with the
duration being configurable, defaulting to 30s). This runs a CPU profile
for the duration and returns it as a file. It sets headers so that a
browser will save the file with an informative name.
The same is done for heap profiles, /rest/debug/heapprof, which does not
take any parameters.
The purpose of this is that any user can enable debugging under
advanced, then point their browser to the endpoint above and get a file
that contains a CPU or heap profile we can use, with the filename
telling us what version and architecture the profile is from.
On the command line, this becomes
curl -O -J http://localhost:8082/rest/debug/cpuprof?duration=5s
curl: Saved to filename
'syncthing-cpu-darwin-amd64-v0.14.3+4-g935bcc0-110307.pprof'
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3467
Otherwise if the file grows during scanning the block list will be out
of sync with the stated size and things get confused. We could fixup the
size afterwards based on the block list, but then we might see other
inconsistencies as the mtime should have changed to reflect the new size
etc. Better stick to the original state and let the next scan pick up
the change.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3442
This used to happen by itself as the connecting device always sent an
Index message and we triggered on that. Nowadays there's no guarantee
for that, but we anyway need to send out one event to let listeners know
the state of folders shared with the device.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3438
We could have a file to sync with permissions rw------- but we'd create
the temp file with rw-rw-rw- minus umask, usually rw-r--r--. This
potentially exposes private data while the file is being synced.
Similarly, when ignorePerms was set and we were reusing a temp file we
would set the permissions to rw-r--r-- explicitly, potentially
overriding a strict umask that would otherwise have had the file be
rw-------.
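In other words, roughly (the flags here are a sketch, not the exact call site):
```go
// Sketch: create the temp file with restrictive permissions from the
// start, rather than loosening and tightening them afterwards.
package sketch

import "os"

func createTemp(name string) (*os.File, error) {
	// 0600 keeps the data private while the file is being synced; the
	// final permissions are applied when it is moved into place.
	return os.OpenFile(name, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
}
```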
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3437
The previous commit loosened the locking around database updates.
Apparently that was not fine - what happens is that parallel updates
to the same file for different devices stomp on each others updates to
the global index, leaving it missing one of the two devices.
This lets us add message types in the future, for authentication or
other purposes, without completely breaking old clients. I see this as
similar behavior to adding fields to messages - newer clients must
simply be aware that older ones may ignore the message and act
accordingly.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3390
This slightly changes the interface used for committing configuration
changes. The two parts are now:
- VerifyConfiguration, which runs synchronously and locked, and can
abort the config change. These callbacks shouldn't *do* anything
apart from looking at the config changes and saying yes or no. No
change from previously.
- CommitConfiguration, which runs asynchronously (one goroutine per
call) *after* replacing the config and releasing any locks. Returning
false from these methods sets the "requires restart" flag, which now
lives in the config.Wrapper.
This should be deadlock free as the CommitConfiguration calls can take
as long as they like and can wait for locks to be released when they
need to tweak things. I think this should be safe compared to before as
the CommitConfiguration calls were always made from a random background
goroutine (typically one from the HTTP server), so it was always
concurrent with everything else anyway.
Hence the CommitResponse type is gone, instead you get an error back on
verification failure only, and need to explicitly check
w.RequiresRestart() afterwards if you care.
As an added bonus this fixes a bug where we would reset the "requires
restart" indicator if a config that did not require restart was saved,
even if we already were in the requires-restart state.
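The resulting interface is roughly the following sketch; the exact signatures in lib/config may differ.
```go
// Sketch: the two-phase config commit described above.
package sketch

type Configuration struct{}

type Committer interface {
	// VerifyConfiguration runs synchronously and locked, and may veto
	// the change.
	VerifyConfiguration(from, to Configuration) error
	// CommitConfiguration runs in its own goroutine after the config has
	// been replaced; returning false sets the "requires restart" flag.
	CommitConfiguration(from, to Configuration) bool
}
```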
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3386
* relaypoolsrv/master: (32 commits)
Fetch deps of deps X_x
Here we go with gvt bugs
Screw godep
Add solaris support back in
Add font awesome
No value is less than zero
Screw solaris
Godeps
Refactor javascript, always show table, add sorting
Add local geoip
Update dependencies
Hey look, had to check all code out on linux to fix the deps
Update godeps, reduce amount of time spent testing a relay. Goddamit godeps.
Add timeouts, deal with overlapping markers, add a table, increase circle radiuses
Fix a couple of issues with the relays map (geoip, 'data unavailable')
Rate infos are in kbps, not kBps
Add support for header holding IP address
Update relay parameters even if it already exists (fixes #3)
Add missing space
Add homepage
...
A random "instance ID" is generated on each start of the local discovery
service. The instance ID is included in the announcement. When we see a
new instance ID we treat it as a new device and respond with an
announcement of our own. Hence devices get to know each other quickly on
restart.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3385
This changes the BEP protocol to use protocol buffer serialization
instead of XDR, and therefore also the database format. The local
discovery protocol is also updated to be protocol buffer format.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3276
LGTM: AudriusButkevicius
This is a supplement patch to commit a58f69b which only fixed global
discovery. This patch adds the missing parts for the local discovery.
If the listen address scheme is set to tcp4:// or tcp6:// and no
explicit host is specified, an address should not be considered if the
source address does not match this scheme.
This prevents invalid URIs like tcp4://<IPv6 address>:<port> or tcp6://<IPv4
address>:<port> for local discovery.
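The address family check amounts to something like this sketch (helper name and wiring are illustrative):
```go
// Sketch: only accept addresses whose family matches the configured
// listen scheme.
package sketch

import "net"

func matchesScheme(scheme string, ip net.IP) bool {
	switch scheme {
	case "tcp4":
		return ip.To4() != nil // IPv4 only
	case "tcp6":
		return ip.To4() == nil // IPv6 only
	default:
		return true // tcp:// accepts both
	}
}
```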
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3380
If the listen address scheme is set to tcp4:// or tcp6://, it needs to be
made sure that the remote address matches this scheme before it is added to
the database.
This prevents invalid URIs like tcp4://<IPv6 address>:<port> or tcp6://<IPv4
address>:<port>.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3378
This contains the following behavioral changes:
- Duplicate folder IDs are now fatal during startup
- Invalid folder flags in the ClusterConfig is fatal for the connection
(this will go away soon with the proto changes, as we won't have any
unknown flags any more then)
- Empty path is a folder error reported at runtime
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3370
Events API consumers rely on being able to detect that events were skipped
by the fact that the event ID has increased by more than 1. This is
documented, and is absolutely necessary when trying to maintain a local
model of Syncthing's state.
With the introduction of LocalChangeDetected, which is not exposed to the
Events API, this contract was broken.
This commit introduces separate concepts of a "Global ID" and a
"Subscription ID". The Global ID of an event is unique across all
subscriptions. The Subscription ID is local to a particular subscription,
and always increments by 1. They are both exposed over the Events API, but
the Subscription ID uses the key "id" for backwards compatibility, and
the "?since=xx" parameter refers to the Subscription ID (making the Global
ID for information only).
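An event then carries both IDs, roughly like this sketch (field names beyond "id" and "globalID" are illustrative):
```go
// Sketch: the Subscription ID keeps the old "id" key and always
// increments by 1; the Global ID is unique across all subscriptions.
package sketch

type Event struct {
	SubscriptionID int    `json:"id"`
	GlobalID       int    `json:"globalID"`
	Type           string `json:"type"`
}
```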
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3351
LGTM: calmh
Also fixes an issue where the discovery cache call would only return the
newest cache entry for a given device instead of the merged addresses
from all cache entries (which is more useful).
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3344
The various path cleaning operations done in cleanedPath() remove
it, so we make sure it's added again at the end. This makes adding the
slash in prepare() unnecessary, but keep it anyway for display purposes
(people looking at the config).
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3342
While attempting to fix #2782 I thought the problem was the
CheckFolderHealth method, so I cleaned it up. That turned out not to be
the case, but I think this is better anyhow.
It also moves the "create folder and marker if the folder was empty in
the index" code to StartFolder where I think it makes better sense.
This is covered by a number of existing tests.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3343
This adds a metric for "committed items" to the database instance that I
use in the test code, and a couple of tests that ensure that scans that
don't change anything also don't commit anything.
There was a case in the scanner where we set the invalid bit on files
that are ignored, even though they were already ignored and had the
invalid bit set. I had assumed this would result in an extra database
commit, but it was in fact filtered out by the Set... Anyway, I think we
can save some work on not pushing that change to the Set at all.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3298
This is in preparation for future changes, but also improves the
handling when talking to pre-v0.13 clients. It breaks out the Hello
message and magic from the rest of the protocol implementation, with the
intention that this small part of the protocol will survive future
changes.
To enable this, and future testing, the new ExchangeHello function takes
an interface that can be implemented by future Hello versions and
returns a version independent result type. It correctly detects pre-v0.13
protocols and returns a "too old" error message which gets logged to the
user at warning level:
[I6KAH] 09:21:36 WARNING: Connecting to [...]:
the remote device speaks an older version of the protocol (v0.12) not
compatible with this version
Conversely, something entirely unknown will generate:
[I6KAH] 09:40:27 WARNING: Connecting to [...]:
the remote device speaks an unknown (newer?) version of the protocol
The intention is that in future iterations the Hello exchange will
succeed on at least one side and ExchangeHello will return the actual
data from the Hello together with ErrTooOld and an even more precise
message can be generated.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3289
I run a lot of builds. They're quite slow now:
jb@syno:~/s/g/s/syncthing $ BUILDDEBUG=1 ./build.sh
... snipped commands ...
runError: gometalinter --disable-all --deadline=60s --enable=varcheck . ./cmd/... ./lib/...
... in 13.00592726s
... build completed in 15.392265235s
That's 15 s total build time, 13 s of which is the varcheck call. The
build server is welcome to run it, but I don't want to on each build. :)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3285
The purpose of this operation is to separate the serving of GUI assets a
bit from the serving of the REST API. It's by no means complete. The end
goal is something like a combined server type that embeds a statics
server and an API server and wraps it in authentication and HTTPS and
stuff, plus possibly a named pipe server that only provides the API and
does not wrap in the same authentication etc.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3273
This sacrifices the ability to return an error when creating the service
for being more persistent in keeping it running.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3270
LGTM: AudriusButkevicius, canton7
As noted in the ticket I no longer agree that dev builds should not auto
upgrade. The main reason is that we give dev builds to users to test
specific fixes, and no one is happier by them being inadvertently stuck
on that version when a newer version including the fix is released.
For developers, it's first of all probably unlikely that development is
happening on a build that's older than release, and secondly STNOUPGRADE
can be set in the environment once and for all if it is an issue.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3244
* relaysrv/master: (60 commits)
Add new dependencies
Add more logging in the case of relaypoolsrv internal server error
Dependency update
Update deps
Update packages, fix testutil. Goddamit godep.
Typo
Add signal handlers (fixes #15)
Update readme (fixes #16)
Limit number of connections (fixes #23)
Enable extra logging in pool.go even when -debug not specified
Add Antony Male to CONTRIBUTORS
Allow extAddress to be set from the command line
URLs should have Go units
Add CORS headers
Fix units
Expose provided by in status endpoint
Add ability to advertise provider
Change the URL
Rename relaysrv binary, see #11
Jail the whole thing a bit more
...
* discosrv/master: (64 commits)
Use atomics for statistics handling (fixes #45)
Lower case JSON fields are nicer
Change v13 to v2
Remove explicit relay handling
Update vendored github.com/cznic/ql (fixes #34)
Defer fd.Close() (fixes #37)
There is no "get dependencies" step
Add vendor/golang.org/x/net/context
Use Go 1.5 vendoring instead of Godeps
Add debug performance logging per request
Must close result sets
Set Retry-After header
Ignores
lru.Cache is not concurrency safe
We need a limit on the number of PostgreSQL connections
Correct example DSN (fixes #29)
Allow plain HTTP serving behind a proxy
Fix Query/Answer stats
Reduce our patience with slow clients somewhat
Discovery server should print device ID of certificate at startup
...
Git didn't really understand the multiple email addresses in the NICKS
file the same way I expected it to, and this fixes that. It also makes
AUTHORS the "master" file that everything else depends on, so it
now includes all of name, nickname and email addresses.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3243
Pull issue information from Github to show both the resolved issue
subject and the commit subject. Also show reviewer, when different from
author.
* #3201: api: /rest/system/browse behaves strangely on Windows
lib/osutil: Fix globbing at root (by @AudriusButkevicius, reviewed by
@calmh)
* #3174: Ignore patterns with non-ASCII characters causes out of memory
crash
vendor: Update github.com/gobwas/glob (by @calmh)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3228
With the previous setup, browsers were free to use a local cache for any
length of time they pleased: we didn't set an 'Expires' header (or max-age
directive), and Cache-Control just said "you're free to cache this".
Therefore be more explicit: we don't mind if browsers cache things, but they
MUST revalidate everything on every request.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3221
The log bar is hidden by CSS, but will appear briefly while the page is
loading (after the html is fetched, but before dev.css is fetched).
Hide it by using an inline style instead, so this does not happen.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3220
1. For the same internal port we ask for the same external port on all devices. This can be a problem if one device speaks over two protocols.
2. Always add a nil address even if we managed to get external address of the gateway, just because the gateway might be in DMZ behind another gateway.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3196
- Move to ipinfo.io for geoip, rather than Telize. Telize has been closed
down. ipinfo.io has apparently got decent availability, and allows
1,000 requests per day on the free tier. Since requests are made by the
client, this should be more than enough (and the total across all clients
should still be less than this).
- Fix issue where one nonresponsive relay would cause 'data unavailable'
to be shown for many relays. This was caused by the relay status
promise not being correctly added to the list of things being waited
for before the map was rendered. Any delayed relay status requests
would therefore occur after the map was rendered, which was too late.
Knowing why a relay server failed to join the pool can be important. This
is typically an issue which must be investigated after it occurred, so
having logs available is useful.
Running with -debug permanently enabled is impractical, due to the amount
of traffic that is generated, particularly when data is being transferred.
Logging is limited to at most one message per minute, although one message
per hour is more likely.
This allows relaysrv to listen on an unprivileged port, with port
forwarding directing traffic from 443, thus providing an alternative
to using setcap cap_net_bind_service=+ep
Add WorkingDirectory to create and use the certificates within
/var/lib/syncthing-relaysrv. Add RootDirectory to chroot(2) the whole
thing into that directory.
New node IDs contain four Luhn check digits and are grouped
differently. Code uses the NodeID type instead of string, so it's formatted
homogeneously everywhere.
### DO NOT REPORT SECURITY ISSUES IN THIS ISSUE TRACKER
Instead, contact security@syncthing.net directly - see
https://syncthing.net/security.html for more information.
### DO NOT POST SUPPORT REQUESTS OR GENERAL QUESTIONS IN THIS ISSUE TRACKER
Please use the forum at https://forum.syncthing.net/ where a large number of
helpful people hang out. This issue tracker is for reporting bugs or feature
requests directly to the developers. Worst case you might get a short
"that's a bug, please report it on GitHub" response on the forum, in which
case we thank you for your patience and following our advice. :)
### Please use the correct issue tracker
If your problem relates to a Syncthing wrapper or [sub-project](https://github.com/syncthing) such as [Syncthing for Android](https://github.com/syncthing/syncthing-android/issues), [SyncTrayzor](https://github.com/canton7/synctrayzor) or the [documentation](https://github.com/syncthing/docs/issues), please use their respective issue trackers.
### Does your log mention database corruption?
If your Syncthing log reports panics because of database corruption it is most likely a fault with your system's storage or memory. Affected log entries will contain lines starting with `panic: leveldb`. You will need to delete the index database to clear this, by running `syncthing -reset-database`.
### Please do post actual bug reports and feature requests.
If your issue is a bug report, replace this boilerplate with a description
of the problem, being sure to include at least:
- what happened,
- what you expected to happen instead, and
- any steps to reproduce the problem.
Also fill out the version information below and add log output or
screenshots as appropriate.
If your issue is a feature request, simply replace this template text with a
description of the feature you are requesting.
If you believe that you've found a Syncthing-related security vulnerability, please report it by sending email to the address security@syncthing.net. The PGP key for security@syncthing.net (B683AD7B76CAB013) below can be used to send encrypted mail or to verify responses received from that address.
apiRequestsTotal = makeCounter("api_requests_total", "Number of API requests.", "type", "result")
apiRequestsSeconds = makeSummary("api_requests_seconds", "Latency of API requests.", "type")
relayTestsTotal = makeCounter("tests_total", "Number of relay tests.", "result")
relayTestActionsSeconds = makeSummary("test_actions_seconds", "Latency of relay test actions.", "type")
locationLookupSeconds = makeSummary("location_lookup_seconds", "Latency of location lookups.").WithLabelValues()
metricsRequestsSeconds = makeSummary("metrics_requests_seconds", "Latency of metric requests.").WithLabelValues()
scrapeSeconds = makeSummary("relay_scrape_seconds", "Latency of metric scrapes from remote relays.", "result")
relayUptime = makeGauge("relay_uptime", "Uptime of relay", "relay")
relayPendingSessionKeys = makeGauge("relay_pending_session_keys", "Number of pending session keys (two keys per session, one per each side of the connection)", "relay")
relayActiveSessions = makeGauge("relay_active_sessions", "Number of sessions that are happening, a session contains two parties", "relay")
relayConnections = makeGauge("relay_connections", "Number of devices connected to the relay", "relay")
relayProxies = makeGauge("relay_proxies", "Number of active proxy routines sending data between peers (two proxies per session, one for each way)", "relay")
relayBytesProxied = makeGauge("relay_bytes_proxied", "Number of bytes proxied by the relay", "relay")
relayGoRoutines = makeGauge("relay_go_routines", "Number of Go routines in the process", "relay")
relaySessionRate = makeGauge("relay_session_rate", "Rate applied per session", "relay")
relayGlobalRate = makeGauge("relay_global_rate", "Global rate applied on the whole relay", "relay")
relayBuildInfo = makeGauge("relay_build_info", "Build information about a relay", "relay", "go_version", "go_os", "go_arch")
relayLocationInfo = makeGauge("relay_location_info", "Location information about a relay", "relay", "city", "country", "continent")
This is the relay server for the `syncthing` project.
:exclamation:Warnings:exclamation: - Read or regret
-----
By default, all relay servers will join to the default public relay pool, which means that the relay server will be available for public use, and **will consume your bandwidth** helping others to connect.
If you wish to disable this behaviour, please specify the `-pools=""` argument.
Please note that `strelaysrv` is only usable by `syncthing` **version v0.12 and onwards**.
To run `strelaysrv` you need to have port 22067 available to the internet, which means you might need to port forward it and/or allow it through your firewall.
Furthermore, by default `strelaysrv` will also expose a /status HTTP endpoint on port 22070, which is used by the pool servers to read metrics of the `strelaysrv`, such as the current transfer rates, how many clients are connected, etc. If you wish this information to be available you may need to port forward and allow it through your firewall. This is not mandatory for the `strelaysrv` to function, and is used only to gather metrics and present them in the overview page of the pool server.
At the time of writing the endpoint output looks as follows:
This URI contains a partial address of the relay server, as well as its options which in the future may be taken into account when choosing the most suitable relay.
Because the `-listen` option was not used `strelaysrv` does not know its external IP, therefore you should replace the host part of the URI with your public IP address on which the `strelaysrv` will be available:
If you do not care about certificate pinning (improved security) or do not care about passing verbose settings to the clients, you can shorten the URL to just the host part:
```bash
relay://192.0.2.1:22067
```
This URI can then be used in `syncthing` clients as one of the relay servers by adding the URI to the "Sync Protocol Listen Address" field, under Actions and Settings.
See `strelaysrv -help` for other options, such as rate limits, timeout intervals, etc.
Other items available in this repo
----
##### testutil
A test utility which can be used to test the connectivity of a relay server.
You need to generate two x509 key pairs (key.pem and cert.pem), one for the client and one for the server, in separate directories.
flag.StringVar(&dir, "keys", ".", "Directory where cert.pem and key.pem is stored")
flag.DurationVar(&networkTimeout, "network-timeout", networkTimeout, "Timeout for network operations between the client and the relay.\n\tIf no data is received between the client and the relay in this period of time, the connection is terminated.\n\tFurthermore, if no data is sent between either clients being relayed within this period of time, the session is also terminated.")
flag.DurationVar(&pingInterval, "ping-interval", pingInterval, "How often pings are sent")
flag.DurationVar(&messageTimeout, "message-timeout", messageTimeout, "Maximum amount of time we wait for relevant messages to arrive")
flag.IntVar(&sessionLimitBps, "per-session-rate", sessionLimitBps, "Per session rate limit, in bytes/s")
flag.IntVar(&globalLimitBps, "global-rate", globalLimitBps, "Global rate limit, in bytes/s")
flag.StringVar(&statusAddr, "status-srv", ":22070", "Listen address for status service (blank to disable)")
flag.StringVar(&poolAddrs, "pools", defaultPoolAddrs, "Comma separated list of relay pool addresses to join")
flag.StringVar(&providedBy, "provided-by", "", "An optional description about who provides the relay")
flag.StringVar(&extAddress, "ext-address", "", "An optional address to advertise as being available on.\n\tAllows listening on an unprivileged port with port forwarding from e.g. 443, and be connected to on port 443.")
flag.StringVar(&proto, "protocol", "tcp", "Protocol used for listening. 'tcp' for IPv4 and IPv6, 'tcp4' for IPv4, 'tcp6' for IPv6")
flag.BoolVar(&natEnabled, "nat", false, "Use UPnP/NAT-PMP to acquire external port mapping")
flag.IntVar(&natLease, "nat-lease", 60, "NAT lease length in minutes")
flag.IntVar(&natRenewal, "nat-renewal", 30, "NAT renewal frequency in minutes")
flag.IntVar(&natTimeout, "nat-timeout", 10, "NAT discovery timeout in seconds")
flag.BoolVar(&pprofEnabled, "pprof", false, "Enable the built in profiling on the status server")
flag.IntVar(&networkBufferSize, "network-buffer", 2048, "Network buffer size (two of these per proxied connection)")