This adds a new field to the file information we keep, the "previous
blocks hash". This is the hash of the file contents as it was in its
previous incarnation. That is, every scan that updates the blocks hash
will move the current hash to the "previous" field.
This enables an addition to the conflict detection algorithm: if the
file to be synced is in conflict with the current file on disk
(version-counter wise), but it indicates that it was based on the
precise contents we have (new.prevBlocksHash == current.blocksHash),
then it's not really a conflict.
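A minimal sketch of the extended check, with illustrative types (the `concurrent` helper and field names are stand-ins for the real Syncthing types):
```go
package sketch

import "bytes"

// vector stands in for Syncthing's version vector: device ID -> counter.
type vector map[uint64]uint64

// concurrent reports whether two vectors conflict (neither strictly newer).
func concurrent(a, b vector) bool {
	aNewer, bNewer := false, false
	for id, av := range a {
		if av > b[id] {
			aNewer = true
		}
	}
	for id, bv := range b {
		if bv > a[id] {
			bNewer = true
		}
	}
	return aNewer && bNewer
}

type fileInfo struct {
	version        vector
	blocksHash     []byte // hash of the current block list
	prevBlocksHash []byte // hash of the block list as of the previous scan
}

// isConflict applies the addition: concurrent versions normally mean a
// conflict, unless the incoming file was based on exactly the contents
// we already have on disk.
func isConflict(current, incoming fileInfo) bool {
	if !concurrent(current.version, incoming.version) {
		return false
	}
	return !bytes.Equal(incoming.prevBlocksHash, current.blocksHash)
}
```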
Signed-off-by: Jakob Borg <jakob@kastelo.net>
This updates our logging framework from legacy freetext strings using
the `log` package to structured log entries using `log/slog`. I have
updated all INFO or higher level entries, but not yet DEBUG (😓)... So,
at a high level:
There is a slight change in log levels, effectively adding a new warning
level:
- DEBUG is still debug (ideally not for users but developers, though
this is something we need to work on)
- INFO is still info, though I've added more data here, effectively
making Syncthing more verbose by default (more on this below)
- WARNING is a new log level that is different from the _old_ WARNING
(more below)
- ERROR is what was WARNING before -- problems that must be dealt with,
and which are also bubbled up as a popup in the GUI.
A new feature is that the logging level can be set per package to
something other than just debug or info, and hence I feel that we can
add a few more things to INFO while moving some (in fact, most)
current INFO level warnings into WARNING. For example, I think it's
justified to get a log of synced files at INFO and sync failures at
WARNING. These are things that have historically been tricky to debug
properly, and having more information by default will be useful to many,
while still making it possible to get close to the old level of
inscrutability by setting the log level to WARNING. I'd like to get to a
stage where DEBUG is never necessary to just figure out what's going on,
as opposed to trying to narrow down a likely bug.
Code-wise:
- Our logging object, generally known as `l` in each package, is now a
new adapter object that provides the old API on top of the newer one.
(This should go away once all old log entries are migrated.) This is
only for `l.Debugln` and `l.Debugf`.
- There is a new level tracker that keeps the log level for each
package. (A sketch of the idea follows after this list.)
- There is a nested setup of handlers, since the structure mandated by
`log/slog` is slightly convoluted (imho). We do this because we need to
do formatting at a "medium" level internally so we can buffer log lines
in text format but with separate timestamp and log level for the API/GUI
to consume.
- The `debug` API call becomes a `loglevels` API call, which can set the
log level to `DEBUG`, `INFO`, `WARNING` or `ERROR` per package. The GUI
is updated to handle this.
- Our custom `sync` package provided some debugging of mutexes quite
strongly integrated into the old logging framework, only turned on when
`STTRACE` was set to certain values at startup, etc. It's been a long
time since this has been useful; I removed it.
- The `STTRACE` env var remains and can be used the same way as before,
while additionally permitting specific log levels to be specified,
`STTRACE=model:WARN,scanner:DEBUG`.
- There is a new command line option `--log-level=INFO` to set the
default log level.
- The command line options `--log-flags` and `--verbose` go away, but
are currently retained as hidden & ignored options since we set them by
default in some of our startup examples and Syncthing would otherwise
fail to start.
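For illustration, here's a minimal sketch of the per-package level idea on top of `log/slog`; the names and wiring are made up for the example, and the real setup is more involved (nested handlers, line buffering for the API/GUI):
```go
package sketch

import (
	"context"
	"log/slog"
	"os"
	"sync"
)

// levels keeps one adjustable slog.LevelVar per package, so verbosity
// can be changed at runtime (from the loglevels API or STTRACE).
type levels struct {
	mut  sync.Mutex
	pkgs map[string]*slog.LevelVar
}

func (l *levels) get(pkg string) *slog.LevelVar {
	l.mut.Lock()
	defer l.mut.Unlock()
	lv, ok := l.pkgs[pkg]
	if !ok {
		lv = new(slog.LevelVar) // zero value is INFO
		l.pkgs[pkg] = lv
	}
	return lv
}

// pkgHandler wraps another handler and drops records below the
// package's current level.
type pkgHandler struct {
	slog.Handler
	lvl *slog.LevelVar
}

func (h pkgHandler) Enabled(_ context.Context, level slog.Level) bool {
	return level >= h.lvl.Level()
}

func (h pkgHandler) WithAttrs(attrs []slog.Attr) slog.Handler {
	return pkgHandler{Handler: h.Handler.WithAttrs(attrs), lvl: h.lvl}
}

func (h pkgHandler) WithGroup(name string) slog.Handler {
	return pkgHandler{Handler: h.Handler.WithGroup(name), lvl: h.lvl}
}

var pkgLevels = &levels{pkgs: make(map[string]*slog.LevelVar)}

// loggerFor returns a logger tagged with log.pkg, as in the samples below.
func loggerFor(pkg string) *slog.Logger {
	base := slog.NewTextHandler(os.Stderr, nil)
	return slog.New(pkgHandler{Handler: base, lvl: pkgLevels.get(pkg)}).With("log.pkg", pkg)
}
```
Raising a package to DEBUG at runtime is then just `pkgLevels.get("model").Set(slog.LevelDebug)`.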
Sample format messages:
```
2009-02-13 23:31:30 INF A basic info line (attr1="val with spaces" attr2=2 attr3="val\"quote" a=a log.pkg=slogutil)
2009-02-13 23:31:30 INF An info line with grouped values (attr1=val1 foo.attr2=2 foo.bar.attr3=3 a=a log.pkg=slogutil)
2009-02-13 23:31:30 INF An info line with grouped values via logger (foo.attr1=val1 foo.attr2=2 a=a log.pkg=slogutil)
2009-02-13 23:31:30 INF An info line with nested grouped values via logger (bar.foo.attr1=val1 bar.foo.attr2=2 a=a log.pkg=slogutil)
2009-02-13 23:31:30 WRN A warning entry (a=a log.pkg=slogutil)
2009-02-13 23:31:30 ERR An error (a=a log.pkg=slogutil)
```
---------
Co-authored-by: Ross Smith II <ross@smithii.com>
We've always, since the introduction of conflicts, had the policy that
deletes lose against any other change, for safety's sake. This is a
problem, however, because it means the sort order of versions is not a
total order.
That is, given two versions `A` and `B` that are currently in conflict,
we will sort them in a given order (let's say `A, B`, so `A < B` for
ordering purposes: we say "A wins over B" or "A is newer than B") and
consider the first in the list the winner. The loser (who has `B` on
disk) will process the conflict at some point and move the file to a
conflict copy and announce `A'` as the resolved conflict. The winner
(with `A` on disk) doesn't do anything.
However, if `A` is deleted the ordering changes. We still have `A < B`
and, of course, `Adel < A` (this is not even a conflict, just linear
order). In most sane systems this would imply the ordering `Adel < A <
B`, however in our case we in fact have `B < Adel` because any version
wins over a deleted one, so there is no logical ordering at all of the
files at this point. `Adel < A < B < Adel ???` In practice the deleted
version may end up at the head or the tail of the list, depending on the
order we do the compares.
Hence, at this point, "whatever" happens and it's not guaranteed to make
any sense. 😬
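To make the intransitivity concrete, here's a small Go sketch of the old rule; the shapes and names are illustrative, not the actual implementation:
```go
package sketch

// A tiny model of the old rule, just to show the cycle. A version
// vector maps device -> counter.
type vec map[string]int

func lessEq(a, b vec) bool { // a happened-before-or-equals b
	for id, av := range a {
		if av > b[id] {
			return false
		}
	}
	return true
}

type file struct {
	version  vec
	deleted  bool
	tiebreak int // stand-in for the usual tie breakers
}

// wins reports whether x beats y under the old rule.
func wins(x, y file) bool {
	switch {
	case lessEq(y.version, x.version): // linear: x is newer (or equal)
		return true
	case lessEq(x.version, y.version): // linear: y is newer
		return false
	case x.deleted != y.deleted: // conflict: the delete always loses
		return !x.deleted
	default: // conflict: fall back to tie breakers
		return x.tiebreak > y.tiebreak
	}
}

func demo() {
	a := file{version: vec{"a": 1}, tiebreak: 2}
	adel := file{version: vec{"a": 2}, deleted: true, tiebreak: 2}
	b := file{version: vec{"b": 1}, tiebreak: 1}
	_ = wins(adel, a) // true: Adel linearly succeeds A
	_ = wins(a, b)    // true: A beats B on the tie breaker
	_ = wins(b, adel) // true: B beats Adel (delete loses) -- a cycle
}
```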
I propose that we resolve this by simply letting deletes be versions
like anything else, maintaining a total ordering based on just version
vectors with the existing tie breakers as always. That means a delete
can win in a conflict situation, and the result should be that the file
is moved to a conflict copy on the losing device. I think this retains
data safety to almost the same degree as previously, while removing
what is probably an entire class of strange out-of-sync bugs...
---
(A potential wrinkle here is that, ideally, we wouldn't even create the
conflict copy when the delete and the losing version represent the same
data -- same as when we handle normal modification conflicts. However,
the deleted FileInfo doesn't carry any information on what the contents
were, so we can't do that right now. A possible future extension would
be to carry the block list hash of the deleted data in the deleted
FileInfo and use that for this purpose, but I don't want to complicate
this PR with that. The block list hash itself also isn't a
protocol-defined thing at the moment, it's something implementation
dependent that we just use locally.)
This makes a couple of backwards compatible changes to the
ClusterConfig:
- Remove the `ignore_permissions` and `ignore_delete` booleans which
we've never read or used for anything
- Remove the `disable_temp_indexes` boolean and option entirely. We did
use this one, and about 1% of users have set the option. The only thing
it does is inhibit the sending of periodic DownloadProgress messages
while downloading data, which is a minuscule bandwidth optimisation
given that we're already sending data at the time.
- Change the `read_only` boolean (which indicated send-only folders) to
an enum `FolderType`, where the values zero and one match the existing
usage. Again, we don't actually use this value, but I can see that we
might want to and then it makes more sense for it to be more
comprehensive.
- Change the `paused` boolean to an enum `StopReason`, where zero
indicates not stopped and one indicates paused, exactly the same wire
representation as previously but leaving space for additional stop
reasons (errors etc). (A sketch of both enum changes follows after this
list.)
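For illustration, the Go side of the two enums might look like this; the names here are hypothetical, with the real definitions living in the protobuf schema. Since protobuf encodes both bools and enums as varints, the values zero and one are bit-identical to the old booleans on the wire:
```go
package sketch

type FolderType int32

const (
	FolderTypeSendReceive FolderType = 0 // was read_only = false
	FolderTypeSendOnly    FolderType = 1 // was read_only = true
	// room for e.g. receive-only later
)

type StopReason int32

const (
	StopReasonRunning StopReason = 0 // was paused = false
	StopReasonPaused  StopReason = 1 // was paused = true
	// room for error states etc. later
)
```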
Only require either matching UID & GID or matching names.
If the two devices have a different Name => UID mapping, they can never
be totally equal. Therefore, when syncing, we try matching the name and
fall back to the UID. However, when scanning for changes we currently
require both the name and the UID to match. This leads to forever having
out-of-sync files back and forth, or local additions when receive-only.
This patch does not change the sending behaviour. It only changes what
we decide is equal for existing files with a mismapped Name => UID.
The added test cases show the change: tests 1, 5 and 6 are the same as
currently. Tests 2 and 3 are what changes with this patch (from false to
true). Test 4 is a subset of test 2 that is currently special-cased as
true, which does not change.
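A minimal sketch of the relaxed check, with illustrative names:
```go
package sketch

// platformData stands in for the ownership info kept per file.
type platformData struct {
	uid, gid             int
	ownerName, groupName string
}

// sameOwnership is the relaxed scan-time check: matching numeric IDs
// OR matching names is enough, instead of requiring both, so devices
// with different Name => UID mappings stop flapping out of sync.
func sameOwnership(a, b platformData) bool {
	idsMatch := a.uid == b.uid && a.gid == b.gid
	namesMatch := a.ownerName == b.ownerName && a.groupName == b.groupName
	return idsMatch || namesMatch
}
```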
Co-authored-by: Jakob Borg <jakob@kastelo.net>
* main:
feat(config): expose folder and device info as metrics (fixes #9519) (#10148)
chore: add issue types to GitHub issue templates
build: remove schedule from PR metadata job
chore(protocol): only allow enc. password changes on cluster config (#10145)
chore(protocol): don't start connection routines a second time (#10146)
In practice we already always call SetPassword and ClusterConfig
together. However it's not just "sensible" to do that, it's required: If
the passwords change, the remote device needs to know about that to
check that the enc. setup is valid/consistent (e.g. tokens match,
folder-type is appropriate, ...).
And with the passwords set later, there's no point in adding them as
part of creating a new connection.
This is a "followup" (if one can call it that 4 years later :)) to,
resp. a fix for, the following commit:
924b96856f
Co-authored-by: Jakob Borg <jakob@kastelo.net>
The copier routine refactor resulted in bad buffer pool handling,
putting a buffer back into the pool twice. This simplifies the handling
and removes the danger-prone `Upgrade()` method.
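For illustration, this is the class of bug (not the actual copier code): putting the same buffer into a `sync.Pool` twice means two later `Get`s can return aliasing memory:
```go
package sketch

import "sync"

var bufPool = sync.Pool{
	New: func() any { return make([]byte, 128<<10) },
}

func hazard() {
	buf := bufPool.Get().([]byte)
	bufPool.Put(buf)
	bufPool.Put(buf) // BUG: the same backing array is now in the pool twice
	a := bufPool.Get().([]byte)
	b := bufPool.Get().([]byte)
	// a and b may share memory: concurrent writers silently corrupt
	// each other's data.
	_, _ = a, b
}
```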
* main:
fix(gui): fix previous commit
fix(gui): mark unseen disconnected devices as inactive (#10048)
fix(strings): differentiate setup(n) and set(v) up (#10024)
chore(fs): changes to allow Filesystem to be implemented externally (#10040)
chore(config): resolve primary STUN servers via SRV record (fixes #10029) (#10031)
build: push artifacts to Azure (#10044)
chore(gui, man, authors): update docs, translations, and contributors
Switch the database from LevelDB to SQLite, for greater stability and
simpler code.
Co-authored-by: Tommy van der Vorst <tommy@pixelspark.nl>
Co-authored-by: bt90 <btom1990@googlemail.com>
We've had weak/rolling hashing in the code for quite a while. It was a
popular request at one point, based on the belief that rsync does this
and we should too. However, the benefit is quite small; we save on
average about 0.8% of transferred blocks over the population as a whole:
(Screenshot 2025-03-28 at 17 09 02: https://github.com/user-attachments/assets/bbe10dea-f85e-4043-9823-7cef1220b4a2)
This would be fine if the cost were comparably low; however, the
downside of attempting rolling hash matching is that we (by default) do
a complete file read on the destination in order to look for matches
before we start pulling blocks for the file. For any larger file this
means a sometimes long, I/O-intensive pause before the file starts
syncing, usually for no benefit.
I propose we simply rip off the bandaid and save the effort.
At a high level, this is what I've done and why:
- I'm moving the protobuf generation for the `protocol`, `discovery` and
`db` packages to the modern alternatives, and using `buf` to generate
because it's nice and simple.
- After trying various approaches on how to integrate the new types with
the existing code, I opted for splitting off our own data model types
from the on-the-wire generated types. This means we can have a
`FileInfo` type with nicer ergonomics and lots of methods, while the
protobuf generated type stays clean and close to the wire protocol. It
does mean copying between the two when required, which certainly adds a
small amount of inefficiency. If we want to walk this back in the future
and use the raw generated type throughout, that's possible, this however
makes the refactor smaller (!) as it doesn't change everything about the
type for everyone at the same time. (A sketch of the split follows
after this list.)
- I have simply removed in cold blood a significant number of old
database migrations. These depended on previous generations of generated
messages of various kinds and were annoying to support in the new
fashion. The oldest supported database version now is the one from
Syncthing 1.9.0 from Sep 7, 2020.
- I changed config structs to be regular manually defined structs.
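For the data model split, here's a minimal sketch with hypothetical types (the real ones have many more fields):
```go
package sketch

// wireFileInfo stands in for the protobuf-generated, wire-shaped
// struct; modelFileInfo for the ergonomic data model type.
type wireFileInfo struct {
	Name       string
	Size       int64
	BlocksHash []byte
}

type modelFileInfo struct {
	Name       string
	Size       int64
	BlocksHash []byte
}

// Copying happens at the protocol boundary: a small inefficiency, but
// the model type stays free to grow methods and nicer types without
// touching the wire format.
func fromWire(w *wireFileInfo) modelFileInfo {
	return modelFileInfo{Name: w.Name, Size: w.Size, BlocksHash: w.BlocksHash}
}

func (f modelFileInfo) toWire() *wireFileInfo {
	return &wireFileInfo{Name: f.Name, Size: f.Size, BlocksHash: f.BlocksHash}
}

// Methods attach to the model type; the generated type can't cleanly
// support this (it already has String(), getters, etc.).
func (f modelFileInfo) IsEmpty() bool { return f.Name == "" && f.Size == 0 }
```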
For the sake of discussion, some things I tried that turned out not to
work...
### Embedding / wrapping
Embedding the protobuf generated structs in our existing types as a data
container and keeping our methods and stuff:
```
package protocol

type FileInfo struct {
	*generated.FileInfo
}
```
This generates a lot of problems because the internal shape of the
generated struct is quite different (different names, different types,
more pointers), because initializing it doesn't work like you'd expect
(i.e., you end up with an embedded nil pointer and a panic), and because
the types of child types don't get wrapped. That is, even if we also
have a similar wrapper around a `Vector`, that's not the type you get
when accessing `someFileInfo.Version`, you get the `*generated.Vector`
that doesn't have methods, etc.
### Aliasing
```
package protocol

type FileInfo = generated.FileInfo
```
Doesn't help because you can't attach methods to it, plus all the above.
### Generating the types into the target package like we do now and attaching methods
This fails because of the different shape of the generated type (as in
the embedding case above) plus the generated struct already has a bunch
of methods that we can't necessarily override properly (like `String()`
and a bunch of getters).
### Methods to functions
I considered just moving all the methods we attach to functions in a
specific package, so that for example
```
package protocol

func (f FileInfo) Equal(other FileInfo) bool
```
would become
```
package fileinfos

func Equal(a, b *generated.FileInfo) bool
```
and this would mostly work, but it becomes quite verbose and cumbersome,
and somewhat limits discoverability (you can't see what methods are
available on the type in autocompletion, etc). In the end I did this
in some cases, like in the database layer, where a lot of things like
`func (fv *FileVersion) IsEmpty() bool` became `func fvIsEmpty(fv
*generated.FileVersion) bool` because they were anyway just internal
methods.
Fixes #8247
Encrypted-to-encrypted connections (i.e., ones where both sides set a
password) used to work but were broken in the 1.28.0 release. The
culprit is the 5342bec1b refactor which slightly changed how the request
was constructed, resulting in a bad block hash field.
Co-authored-by: Simon Frei <freisim93@gmail.com>
The read/write loops may keep going for a while on a closing connection
with lots of read/write activity, as it's random which select case is
chosen. And if the connection is slow or even broken, a single
read/write can take a long time, or until timeout. Add initial
non-blocking selects with only the cases relevant to closing, to
prioritize those.
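The pattern, sketched with illustrative names:
```go
package sketch

// writeLoop sketches the idea: a non-blocking select checking only
// the closing-related case runs first, so a busy loop notices shutdown
// even when the other cases are always ready to fire.
func writeLoop(closed <-chan struct{}, out <-chan []byte, send func([]byte) error) {
	for {
		// Priority check: if we're closing, stop now instead of
		// randomly winning the race in the select below.
		select {
		case <-closed:
			return
		default:
		}
		select {
		case <-closed:
			return
		case msg := <-out:
			if err := send(msg); err != nil {
				return
			}
		}
	}
}
```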
### Purpose
Adds a new metric `syncthing_connections_active` which equals the
number of active connections per device.
Fixes #9527
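Roughly, the metric might be wired up like this with the Prometheus Go client (sketch only; the registration in the actual code may differ):
```go
package sketch

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

// Full metric name becomes syncthing_connections_active.
var activeConnections = promauto.NewGaugeVec(prometheus.GaugeOpts{
	Namespace: "syncthing",
	Subsystem: "connections",
	Name:      "active",
	Help:      "Number of active connections per device.",
}, []string{"device"})

// Called from connection start/close handling.
func connectionStarted(deviceID string) { activeConnections.WithLabelValues(deviceID).Inc() }
func connectionClosed(deviceID string)  { activeConnections.WithLabelValues(deviceID).Dec() }
```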
### Testing
I've manually tested it by running Syncthing with these changes locally
and examining the metrics returned from `/metrics`.
I've done the following things:
- Connect & disconnect a device
- Increase & decrease the number of connections and verify that the
value of the metric matches the number displayed in the GUI.
### Documentation
https://github.com/syncthing/docs/blob/main/includes/metrics-list.rst
needs to be regenerated with
[find-metrics.go](https://github.com/syncthing/docs/blob/main/_script/find-metrics/find-metrics.go)
---------
Co-authored-by: Jakob Borg <jakob@kastelo.net>
This is a refactor of the protocol/model interface to take the actual
message as the parameter, instead of the broken-out fields:
```diff
type Model interface {
// An index was received from the peer device
- Index(conn Connection, folder string, files []FileInfo) error
+ Index(conn Connection, idx *Index) error
// An index update was received from the peer device
- IndexUpdate(conn Connection, folder string, files []FileInfo) error
+ IndexUpdate(conn Connection, idxUp *IndexUpdate) error
// A request was made by the peer device
- Request(conn Connection, folder, name string, blockNo, size int32, offset int64, hash []byte, weakHash uint32, fromTemporary bool) (RequestResponse, error)
+ Request(conn Connection, req *Request) (RequestResponse, error)
// A cluster configuration message was received
- ClusterConfig(conn Connection, config ClusterConfig) error
+ ClusterConfig(conn Connection, config *ClusterConfig) error
// The peer device closed the connection or an error occurred
Closed(conn Connection, err error)
// The peer device sent progress updates for the files it is currently downloading
- DownloadProgress(conn Connection, folder string, updates []FileDownloadProgressUpdate) error
+ DownloadProgress(conn Connection, p *DownloadProgress) error
}
```
(and changing the `ClusterConfig` to `*ClusterConfig` for symmetry;
we'll be forced to use all pointers everywhere at some point anyway...)
The reason for this is that I have another thing cooking which is a
small troubleshooting change to check index consistency during transfer.
This required adding a field or two to the index/indexupdate messages,
and plumbing the extra parameters in umpteen changes is almost as big a
diff as this is. I figured let's do it once and avoid having to do that
in the future again...
The rest of the diff falls out of the change above, much of it being in
test code where we run these methods manually...
If syncOwnership is enabled and the remote uses, for example, a
dockerized Syncthing, it can't fetch the owner name and group name of
the local instance. Without this patch, this led to an endless cycle of
detected changes on the remote and failing re-sync attempts.
This patch skips comparing the owner name and group name if they are
empty on one side.
See https://github.com/syncthing/syncthing/issues/9039 for details.
### Testing
Proposed by @calmh in
https://github.com/syncthing/syncthing/issues/9039#issuecomment-1870584783
and tested locally in my setup.
Setup PC 1:
- Syncthing is run in Docker as user `root` and has none of the users
configured that synchronize their files
Setup PC 2:
- this PC has all users locally setup
- Syncthing runs as `systemd` service as user `syncthing` and has
multiple capabilities set to set the correct owner and permissions
Setup PC 3:
- same as PC 2
Handling:
- `PC 1` is send & receive and uses just the `UID` and `GID` identifiers
to store the files
- `PC 2` and `PC 3` synchronize their files over `PC 1` but not directly
to each other
Outcome:
- `PC 2` and `PC 3` should send and receive their files with the correct
ownership and groups from `PC 1`
This adds our short device ID to the basic auth realm. This has at least
two consequences:
- It is different from what's presented by another device on the same
address (e.g., if I use SSH forwards to different devices on the same
local address), preventing credentials for one from being sent to
another.
- It is different from what we did previously, meaning we avoid cached
credentials from old versions interfering with the new login flow.
I don't *think* there should be things that depend on our precise realm
string, so this shouldn't break any existing setups...
Sneakily this also changes the session cookie and CSRF name, because I
think `id.Short().String()` is nicer than `id.String()[:5]` and the
short ID is two characters longer. That's also not a problem...
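For illustration, the realm header might be produced roughly like this (the exact realm wording here is an assumption; only the inclusion of the short device ID matters):
```go
package sketch

import (
	"fmt"
	"net/http"
)

// Because the realm now differs per device, browsers keep separate
// credentials per device rather than reusing them across forwards.
func unauthorized(w http.ResponseWriter, shortDeviceID string) {
	realm := fmt.Sprintf("Authorization Required (%s)", shortDeviceID)
	w.Header().Set("WWW-Authenticate", `Basic realm="`+realm+`"`)
	http.Error(w, "Not Authorized", http.StatusUnauthorized)
}
```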
In principle a connection can close while it's still in the process of
starting, and then it's undefined whether we wait for goroutines to
exit, etc. With this change, we wait for the start to complete before
starting to stop everything.
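A sketch of the ordering, with illustrative names rather than the actual connection code:
```go
package sketch

// Stop waits for Start to have finished its setup before tearing
// anything down, so goroutines started halfway through are always
// accounted for.
type service struct {
	started chan struct{} // closed once Start has finished setup
	stop    chan struct{} // closed to ask goroutines to exit
}

func (s *service) Start() {
	// ... spawn goroutines, register resources ...
	close(s.started)
}

func (s *service) Stop() {
	<-s.started // wait for start to complete before stopping
	close(s.stop)
	// ... wait for the goroutines to exit ...
}
```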
This adds the ability to have multiple concurrent connections to a single device. This is primarily useful when the network has multiple physical links for aggregated bandwidth. A single connection will never see a higher rate than a single link can give, but multiple connections are load-balanced over multiple links.
It is also incidentally useful for older multi-core CPUs, where bandwidth could be limited by the TLS performance of a single CPU core -- using multiple connections achieves concurrency in the required crypto calculations...
Co-authored-by: Simon Frei <freisim93@gmail.com>
Co-authored-by: tomasz1986 <twilczynski@naver.com>
Co-authored-by: bt90 <btom1990@googlemail.com>
This fixes various test issues with Go 1.20.
- Most tests rewritten to use fakefs where possible
- Some tests that were already skipped, or dubious (invasive,
unmaintainable, unclear what they even tested) have been removed
- Some actual code rewritten to better support testing in fakefs
Co-authored-by: Eric P <eric@kastelo.net>
The layout of the request differs based on whether it comes from an
untrusted device or a trusted device with encryption enabled. Handle
both.
Closes #8819.
This adds a cache to the expensive key generation operations. It's
fixed-size LRU/MRU stuff to keep memory usage bounded under absurd
conditions.
Also closes #8600.
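For illustration, a bounded cache in front of an expensive key derivation might look like this (hand-rolled sketch; the eviction policy is a simplified stand-in for the real LRU/MRU logic):
```go
package sketch

import "sync"

// keyCache bounds memory use: a real implementation tracks recency
// properly; this one evicts an arbitrary entry once full, just to
// show the idea.
type keyCache struct {
	mut    sync.Mutex
	max    int
	keys   map[string][]byte
	derive func(password string) []byte // the expensive operation
}

func (c *keyCache) get(password string) []byte {
	c.mut.Lock()
	defer c.mut.Unlock()
	if k, ok := c.keys[password]; ok {
		return k // cache hit: skip the expensive derivation
	}
	if len(c.keys) >= c.max {
		for p := range c.keys { // evict one entry; real code is LRU/MRU
			delete(c.keys, p)
			break
		}
	}
	k := c.derive(password)
	c.keys[password] = k
	return k
}
```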
This adds the BlocksHash field from the FileInfo to our API output. It
can be useful for debugging, or for external tools. I'm intentionally
leaving it as an opaque base64 string because no meaning should be
derived from it: it's just a string.
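For context, Go's `encoding/json` renders `[]byte` fields as base64 automatically, which gives exactly this kind of opaque string; a hypothetical sketch:
```go
package sketch

import (
	"encoding/json"
	"fmt"
)

// fileInfoOut is a hypothetical API output struct; encoding/json
// marshals the []byte field as a base64 string.
type fileInfoOut struct {
	Name       string `json:"name"`
	BlocksHash []byte `json:"blocksHash,omitempty"`
}

func demo() {
	out, _ := json.Marshal(fileInfoOut{Name: "a.txt", BlocksHash: []byte{1, 2, 3}})
	fmt.Println(string(out)) // {"name":"a.txt","blocksHash":"AQID"}
}
```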