This updates our logging framework from legacy freetext strings using
the `log` package to structured log entries using `log/slog`. I have
updated all INFO or higher level entries, but not yet DEBUG (😓)... So,
at a high level:
There is a slight change in log levels, effectively adding a new warning
level:
- DEBUG is still debug (ideally not for users but developers, though
this is something we need to work on)
- INFO is still info, though I've added more data here, effectively
making Syncthing more verbose by default (more on this below)
- WARNING is a new log level that is different from the _old_ WARNING
(more below)
- ERROR is what was WARNING before -- problems that must be dealt with, and which are also bubbled up as a popup in the GUI.
A new feature is that the logging level can be set per package to
something other than just debug or info, and hence I feel that we can
add a bit more into INFO while moving some (in fact, most) current INFO
level warnings into WARNING. For example, I think it's justified to get
a log of synced files at INFO and sync failures at WARNING. These are
things that have historically been tricky to debug properly, and having
more information by default will be useful to many, while still making
it possible to get close to the old level of inscrutability by setting
the log level to WARNING. I'd like to get to a stage where DEBUG is
never necessary just to figure out what's going on, as opposed to
trying to narrow down a likely bug.
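For a flavour of what per-package levels on top of `log/slog` can look like, here's a minimal sketch -- the names (`pkgLevels`, `loggerFor`) and structure are illustrative only, not the actual implementation:

```go
package main

import (
	"log/slog"
	"os"
)

// Each package gets its own LevelVar, so its verbosity can be changed at
// runtime without recreating loggers. (Illustrative only.)
var pkgLevels = map[string]*slog.LevelVar{}

func loggerFor(pkg string) *slog.Logger {
	lv, ok := pkgLevels[pkg]
	if !ok {
		lv = &slog.LevelVar{} // zero value means INFO
		pkgLevels[pkg] = lv
	}
	h := slog.NewTextHandler(os.Stdout, &slog.HandlerOptions{Level: lv})
	return slog.New(h).With("log.pkg", pkg)
}

func main() {
	l := loggerFor("scanner")
	l.Info("A basic info line", "attr1", "val with spaces", "attr2", 2)
	l.Debug("Not shown at the default level")

	pkgLevels["scanner"].Set(slog.LevelDebug) // e.g. via the loglevels API or STTRACE
	l.Debug("Shown after raising the scanner package to DEBUG")
}
```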
Code wise:
- Our logging object, generally known as `l` in each package, is now a
new adapter object that provides the old API on top of the newer one.
(This should go away once all old log entries are migrated.) This is
only for `l.Debugln` and `l.Debugf`.
- There is a new level tracker that keeps the log level for each
package.
- There is a nested setup of handlers, since the structure mandated by
`log/slog` is slightly convoluted (imho). We do this because we need to
do formatting at a "medium" level internally, so we can buffer log lines
in text format but with separate timestamp and log level for the API/GUI
to consume. (A rough sketch of the idea follows after this list.)
- The `debug` API call becomes a `loglevels` API call, which can set the
log level to `DEBUG`, `INFO`, `WARNING` or `ERROR` per package. The GUI
is updated to handle this.
- Our custom `sync` package provided some mutex debugging that was quite
strongly integrated into the old logging framework, only turned on when
`STTRACE` was set to certain values at startup, etc. It's been a long
time since this has been useful; I removed it.
- The `STTRACE` env var remains and can be used the same way as before,
while additionally permitting specific log levels to be specified,
`STTRACE=model:WARN,scanner:DEBUG`.
- There is a new command line option `--log-level=INFO` to set the
default log level.
- The command line options `--log-flags` and `--verbose` go away, but
are currently retained as hidden & ignored options since we set them by
default in some of our startup examples and Syncthing would otherwise
fail to start.
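As a rough sketch of the nested handler idea mentioned above (illustrative only -- `Line`, `bufferedHandler` and `NewBufferedHandler` are made-up names, and the real handler also formats the attributes into the buffered line):

```go
package slogutil

import (
	"context"
	"log/slog"
	"sync"
	"time"
)

// Line is one buffered entry with the parts the API/GUI needs separately.
type Line struct {
	When    time.Time
	Level   slog.Level
	Message string // the real implementation buffers the formatted line incl. attributes
}

// bufferedHandler wraps an inner handler (which does the normal text output)
// and additionally records entries into a shared in-memory buffer.
type bufferedHandler struct {
	inner slog.Handler
	mut   *sync.Mutex
	buf   *[]Line
}

func NewBufferedHandler(inner slog.Handler) slog.Handler {
	return &bufferedHandler{inner: inner, mut: new(sync.Mutex), buf: new([]Line)}
}

func (h *bufferedHandler) Enabled(ctx context.Context, level slog.Level) bool {
	return h.inner.Enabled(ctx, level)
}

func (h *bufferedHandler) Handle(ctx context.Context, rec slog.Record) error {
	h.mut.Lock()
	*h.buf = append(*h.buf, Line{When: rec.Time, Level: rec.Level, Message: rec.Message})
	h.mut.Unlock()
	return h.inner.Handle(ctx, rec)
}

func (h *bufferedHandler) WithAttrs(attrs []slog.Attr) slog.Handler {
	return &bufferedHandler{inner: h.inner.WithAttrs(attrs), mut: h.mut, buf: h.buf}
}

func (h *bufferedHandler) WithGroup(name string) slog.Handler {
	return &bufferedHandler{inner: h.inner.WithGroup(name), mut: h.mut, buf: h.buf}
}
```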
Sample format messages:
```
2009-02-13 23:31:30 INF A basic info line (attr1="val with spaces" attr2=2 attr3="val\"quote" a=a log.pkg=slogutil)
2009-02-13 23:31:30 INF An info line with grouped values (attr1=val1 foo.attr2=2 foo.bar.attr3=3 a=a log.pkg=slogutil)
2009-02-13 23:31:30 INF An info line with grouped values via logger (foo.attr1=val1 foo.attr2=2 a=a log.pkg=slogutil)
2009-02-13 23:31:30 INF An info line with nested grouped values via logger (bar.foo.attr1=val1 bar.foo.attr2=2 a=a log.pkg=slogutil)
2009-02-13 23:31:30 WRN A warning entry (a=a log.pkg=slogutil)
2009-02-13 23:31:30 ERR An error (a=a log.pkg=slogutil)
```
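For reference, roughly the kind of `log/slog` calls that produce such lines -- a sketch only, since the real handler's attribute ordering and timestamp format differ slightly:

```go
package main

import "log/slog"

func main() {
	l := slog.Default().With("a", "a", "log.pkg", "slogutil")
	l.Info("A basic info line", "attr1", "val with spaces", "attr2", 2, "attr3", `val"quote`)
	l.Info("An info line with grouped values",
		"attr1", "val1", slog.Group("foo", "attr2", 2, slog.Group("bar", "attr3", 3)))
	l.WithGroup("foo").Info("An info line with grouped values via logger", "attr1", "val1", "attr2", 2)
	l.Warn("A warning entry")
	l.Error("An error")
}
```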
---------
Co-authored-by: Ross Smith II <ross@smithii.com>
We've always, since the introduction of conflicts, had the policy that
deletes lose against any other change, for safety's sake. This is a
problem, however, because it means the sort order of versions is not a
total order.
That is, given two versions `A` and `B` that are currently in conflict,
we will sort them in a given order (let's say `A, B`, so `A < B` for
ordering purposes: we say "A wins over B" or "A is newer than B") and
consider the first in the list the winner. The loser (who has `B` on
disk) will process the conflict at some point and move the file to a
conflict copy and announce `A'` as the resolved conflict. The winner
(with `A` on disk) doesn't do anything.
However, if `A` is deleted the ordering changes. We still have `A < B`
and, of course, `Adel < A` (this is not even a conflict, just linear
order). In most sane systems this would imply the ordering `Adel < A <
B`, however in our case we in fact have `B < Adel` because any version
wins over a deleted one, so there is no logical ordering at all of the
files at this point. `Adel < A < B < Adel ???` In practice the deleted
version may end up at the head or the tail of the list, depending on the
order we do the compares.
Hence, at this point, "whatever" happens and it's not guaranteed to make
any sense. 😬
I propose that we resolve this by simply letting deletes be versions
like anything else and maintaining a total ordering based on just
version vectors with the existing tie breakers, like always. That means
a delete can win in a conflict situation, and the result should be that
the file is moved to a conflict copy on the losing device. I think this
retains the data safety to almost the same degree as previously, while
probably removing an entire class of strange out-of-sync bugs...
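To make the intent concrete, here's a hypothetical sketch of such a comparison -- version vector first, then a tie breaker, with no special case for deletes. The types and the tie breaker are simplified stand-ins, not the actual Syncthing code:

```go
package main

import "fmt"

// version is a simplified stand-in for a FileInfo's version information.
type version struct {
	counters map[string]uint64 // device ID -> counter, i.e. a version vector
	deleted  bool              // no longer consulted when ordering
	device   string            // last modifying device, used as a tie breaker
}

// wins reports whether a should be considered newer than b. Because deletes
// are ordered like any other version, the relation is a total order.
func wins(a, b version) bool {
	switch compareVectors(a.counters, b.counters) {
	case 1:
		return true // a strictly descends from b
	case -1:
		return false // b strictly descends from a
	default:
		return a.device < b.device // concurrent: tie breaker only
	}
}

func compareVectors(a, b map[string]uint64) int {
	aGreater, bGreater := false, false
	for id, av := range a {
		if av > b[id] {
			aGreater = true
		}
	}
	for id, bv := range b {
		if bv > a[id] {
			bGreater = true
		}
	}
	switch {
	case aGreater && !bGreater:
		return 1
	case bGreater && !aGreater:
		return -1
	default:
		return 0 // identical or concurrent
	}
}

func main() {
	A := version{counters: map[string]uint64{"dev1": 2}, device: "dev1"}
	B := version{counters: map[string]uint64{"dev2": 2}, device: "dev2"}
	Adel := version{counters: map[string]uint64{"dev1": 3}, deleted: true, device: "dev1"}
	fmt.Println(wins(Adel, A), wins(A, B), wins(Adel, B)) // true true true
}
```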
---
(A potential wrinkle here is that, ideally, we wouldn't even create the
conflict copy when the delete and the losing version represent the same
data -- same as when we handle normal modification conflicts. However,
the deleted FileInfo doesn't carry any information on what the contents
were, so we can't do that right now. A possible future extension would
be to carry the block list hash of the deleted data in the deleted
FileInfo and use that for this purpose, but I don't want to complicate
this PR with that. The block list hash itself also isn't a
protocol-defined thing at the moment, it's something implementation
dependent that we just use locally.)
This makes a couple of backwards compatible changes to the
ClusterConfig:
- Remove the `ignore_permissions` and `ignore_delete` booleans which
we've never read or used for anything
- Remove the `disable_temp_indexes` boolean and option entirely. We did
use this one, and about 1% of users have set the option. The only thing
it does is inhibit the sending of periodic DownloadProgress messages
while downloading data, which is a minuscule bandwidth optimisation
given that we're already sending data at the time.
- Change the `read_only` boolean (which indicated send-only folders) to
an enum `FolderType`, where the values zero and one match the existing
usage. Again, we don't actually use this value, but I can see that we
might want to and then it makes more sense for it to be more
comprehensive.
- Change the `paused` boolean to an enum `StopReason`, where zero
indicates not stopped and one indicates paused -- exactly the same wire
representation as previously, but leaving space for additional stop
reasons (errors etc).
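As a sketch of what the resulting enums might look like on the Go side (names follow the description above and may not match the generated code exactly; the point is that the wire values zero and one keep the meaning of the old booleans):

```go
package protocol

// FolderType replaces the read_only boolean in the ClusterConfig.
type FolderType int32

const (
	FolderTypeSendReceive FolderType = 0 // previously read_only = false
	FolderTypeSendOnly    FolderType = 1 // previously read_only = true
	// further values can describe other folder types later
)

// StopReason replaces the paused boolean in the ClusterConfig.
type StopReason int32

const (
	StopReasonRunning StopReason = 0 // previously paused = false
	StopReasonPaused  StopReason = 1 // previously paused = true
	// further values leave room for other stop reasons (errors etc.)
)
```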
* main:
feat(config): expose folder and device info as metrics (fixes #9519) (#10148)
chore: add issue types to GitHub issue templates
build: remove schedule from PR metadata job
chore(protocol): only allow enc. password changes on cluster config (#10145)
chore(protocol): don't start connection routines a second time (#10146)
In practice we already always call SetPassword and ClusterConfig
together. However it's not just "sensible" to do that, it's required: If
the passwords change, the remote device needs to know about that to
check that the enc. setup is valid/consistent (e.g. tokens match,
folder-type is appropriate, ...).
And with the passwords set later, there's no point in adding them as
part of creating a new connection.
This is a "followup" (if one can call it that 4 years later :) ) to
resp. fix for the following commit:
924b96856f
Co-authored-by: Jakob Borg <jakob@kastelo.net>
Switch the database from LevelDB to SQLite, for greater stability and
simpler code.
Co-authored-by: Tommy van der Vorst <tommy@pixelspark.nl>
Co-authored-by: bt90 <btom1990@googlemail.com>
We've had weak/rolling hashing in the code for quite a while. It was a
popular request, based on the belief that rsync does this and we should
too. However, the benefit is quite small; we save on average about 0.8%
of transferred blocks over the population as a whole:
<img width="974" alt="Screenshot 2025-03-28 at 17 09 02"
src="https://github.com/user-attachments/assets/bbe10dea-f85e-4043-9823-7cef1220b4a2"
/>
This would be fine if the cost were comparably low; however, the
downside of attempting rolling hash matching is that we (by default) do
a complete file read on the destination in order to look for matches
before we start pulling blocks for the file. For any larger file this
means a sometimes long, I/O-intensive pause before the file starts
syncing, usually for no benefit.
I propose we simply rip off the bandaid and save the effort.
This is a refactor of the protocol/model interface to take the actual
message as the parameter, instead of the broken-out fields:
```diff
type Model interface {
// An index was received from the peer device
- Index(conn Connection, folder string, files []FileInfo) error
+ Index(conn Connection, idx *Index) error
// An index update was received from the peer device
- IndexUpdate(conn Connection, folder string, files []FileInfo) error
+ IndexUpdate(conn Connection, idxUp *IndexUpdate) error
// A request was made by the peer device
- Request(conn Connection, folder, name string, blockNo, size int32, offset int64, hash []byte, weakHash uint32, fromTemporary bool) (RequestResponse, error)
+ Request(conn Connection, req *Request) (RequestResponse, error)
// A cluster configuration message was received
- ClusterConfig(conn Connection, config ClusterConfig) error
+ ClusterConfig(conn Connection, config *ClusterConfig) error
// The peer device closed the connection or an error occurred
Closed(conn Connection, err error)
// The peer device sent progress updates for the files it is currently downloading
- DownloadProgress(conn Connection, folder string, updates []FileDownloadProgressUpdate) error
+ DownloadProgress(conn Connection, p *DownloadProgress) error
}
```
(and changing the `ClusterConfig` to `*ClusterConfig` for symmetry;
we'll be forced to use all pointers everywhere at some point anyway...)
The reason for this is that I have another thing cooking which is a
small troubleshooting change to check index consistency during transfer.
This required adding a field or two to the index/indexupdate messages,
and plumbing the extra parameters through umpteen call sites is almost
as big a diff as this one. I figured let's do it once and avoid having
to do that again in the future...
The rest of the diff falls out of the change above, much of it being in
test code where we run these methods manually...
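For instance, a call site that previously passed the broken-out fields now constructs the message instead (a sketch; field names as in the BEP Index message):

```diff
-	err := m.Index(conn, folder, files)
+	err := m.Index(conn, &protocol.Index{Folder: folder, Files: files})
```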
Cleanup after #9275.
This renames `fmut` -> `mut`, removes the deadlock detector and
associated plumbing, renames some things from `...PRLocked` to
`...RLocked` and similar, and updates comments.
Apart from the removal of the deadlock detection machinery, no
functional code changes... i.e. almost 100% diff noise, have fun
reviewing.
This adds the ability to have multiple concurrent connections to a single device. This is primarily useful when the network has multiple physical links for aggregated bandwidth. A single connection will never see a higher rate than a single link can give, but multiple connections are load-balanced over multiple links.
It is also incidentally useful for older multi-core CPUs, where bandwidth could be limited by the TLS performance of a single CPU core -- using multiple connections achieves concurrency in the required crypto calculations...
Co-authored-by: Simon Frei <freisim93@gmail.com>
Co-authored-by: tomasz1986 <twilczynski@naver.com>
Co-authored-by: bt90 <btom1990@googlemail.com>
Instead of separately tracking the token.
It also changes serviceMap to have a channel version of RemoveAndWait,
so that it's possible to do the removal under a lock but wait outside of
the lock, and changes where we do that in connection close, reversing
the change that happened when I added the serviceMap in 40b3b9ad1.
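The pattern looks roughly like this -- a generic, self-contained sketch with made-up names, not the actual serviceMap code:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// removal happens while holding the lock; waiting for the service to actually
// terminate happens on the returned channel, outside the lock.
type services struct {
	mut     sync.Mutex
	running map[string]chan struct{} // name -> stop signal
}

func (s *services) removeAndWaitChan(name string) <-chan struct{} {
	stop, ok := s.running[name]
	delete(s.running, name) // the removal itself, done under the caller's lock
	done := make(chan struct{})
	go func() {
		if ok {
			close(stop)                       // tell the service to stop...
			time.Sleep(10 * time.Millisecond) // ...and pretend termination takes a moment
		}
		close(done)
	}()
	return done
}

func main() {
	s := &services{running: map[string]chan struct{}{"conn-1": make(chan struct{})}}

	s.mut.Lock()
	wait := s.removeAndWaitChan("conn-1") // remove under the lock
	s.mut.Unlock()

	<-wait // wait for termination outside the lock
	fmt.Println("service stopped")
}
```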
This fixes various test issues with Go 1.20.
- Most tests rewritten to use fakefs where possible
- Some tests that were already skipped, or dubious (invasive,
unmaintainable, unclear what they even tested) have been removed
- Some actual code rewritten to better support testing in fakefs
Co-authored-by: Eric P <eric@kastelo.net>
This adds a cache to the expensive key generation operations. It's
fixed-size LRU/MRU stuff to keep memory usage bounded under absurd
conditions. Also closes #8600.
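The idea, as a simplified sketch (the names and the stand-in derivation function are mine, not the actual implementation):

```go
package main

import (
	"container/list"
	"crypto/sha256"
	"fmt"
)

// keyCache caches the results of an expensive key derivation in a fixed-size
// LRU so memory stays bounded even with absurd numbers of distinct inputs.
type keyCache struct {
	max   int
	order *list.List               // front = most recently used
	items map[string]*list.Element // input -> element whose Value is a cacheEntry
}

type cacheEntry struct {
	input string
	key   [32]byte
}

func newKeyCache(max int) *keyCache {
	return &keyCache{max: max, order: list.New(), items: make(map[string]*list.Element)}
}

func (c *keyCache) get(input string) [32]byte {
	if el, ok := c.items[input]; ok {
		c.order.MoveToFront(el)
		return el.Value.(cacheEntry).key
	}
	key := expensiveDerive(input)
	el := c.order.PushFront(cacheEntry{input: input, key: key})
	c.items[input] = el
	if c.order.Len() > c.max { // evict the least recently used entry
		last := c.order.Back()
		c.order.Remove(last)
		delete(c.items, last.Value.(cacheEntry).input)
	}
	return key
}

// expensiveDerive stands in for the real (slow) key derivation, e.g. scrypt.
func expensiveDerive(input string) [32]byte {
	return sha256.Sum256([]byte(input))
}

func main() {
	c := newKeyCache(2)
	fmt.Printf("%x\n", c.get("folder1:password")[:4])
	fmt.Printf("%x\n", c.get("folder1:password")[:4]) // served from the cache
}
```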
all: Add package runtimeos for runtime.GOOS comparisons
I grew tired of hand written string comparisons. This adds generated
constants for the GOOS values, and predefined Is$OS constants that can
be iffed on. In a couple of places I rewrote trivial switches to ifs,
and added Illumos where we checked for Solaris (because they are
effectively the same, and if we're going to target one of them that
would be Illumos...).
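Roughly what such a package provides, as an illustration of the pattern described (the real file is generated and covers all GOOS values):

```go
package runtimeos

import "runtime"

// GOOS values as constants, instead of hand written strings.
const (
	Darwin  = "darwin"
	Illumos = "illumos"
	Linux   = "linux"
	Solaris = "solaris"
	Windows = "windows"
)

// Predefined comparisons that can be iffed on directly.
const (
	IsDarwin  = runtime.GOOS == Darwin
	IsIllumos = runtime.GOOS == Illumos
	IsLinux   = runtime.GOOS == Linux
	IsSolaris = runtime.GOOS == Solaris
	IsWindows = runtime.GOOS == Windows
)
```

So a check becomes `if runtimeos.IsWindows || runtimeos.IsDarwin { ... }` instead of comparing strings by hand.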
This commit replaces `os.MkdirTemp` with `t.TempDir` in tests. The
directory created by `t.TempDir` is automatically removed when the test
and all its subtests complete.
Prior to this commit, a temporary directory created using `os.MkdirTemp`
needed to be removed manually by calling `os.RemoveAll`, which was
omitted in some tests. The error handling boilerplate, e.g.

    defer func() {
        if err := os.RemoveAll(dir); err != nil {
            t.Fatal(err)
        }
    }()
is also tedious, but `t.TempDir` handles this for us nicely.
Reference: https://pkg.go.dev/testing#T.TempDir
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
With this change we emulate a case sensitive filesystem on top of
insensitive filesystems. This means we correctly pick up case-only renames
and throw a case conflict error when there would be multiple files differing
only in case.
This safety check has a small performance hit (about 20% more filesystem
operations when scanning for changes). The new advanced folder option
`caseSensitiveFS` can be used to disable the safety checks, retaining the
previous behavior on systems known to be fully case sensitive.
Co-authored-by: Jakob Borg <jakob@kastelo.net>