Compare commits

...

298 Commits
v1.27.8 ... v1

Author SHA1 Message Date
Jakob Borg
0945304a79 build: fix detection of next rc version 2025-06-20 11:17:23 +02:00
Jakob Borg
9703dd9f57 build: import release workflow changes from main 2025-06-20 11:12:05 +02:00
yparitcher
259e9ef08e fix(protocol): slightly loosen/correct ownership comparison criteria (fixes #9879) (#10176)
Only require either matching UID & GID, or matching names.

If the two devices have a different Name => UID mapping, they can never
be totally equal. Therefore, when syncing, we try matching the Name and
fall back to the UID. However, when scanning for changes we currently
require both the Name & UID to match. This leads to files forever going
out of sync back and forth, or to local additions on receive-only
folders.

This patch does not change the sending behaviour. It only changes what
we consider equal for existing files with a mismapped Name => UID.

The added test cases show the change: tests 1, 5 and 6 are the same as
before. Tests 2 and 3 are what changes with this patch (from false to
true). Test 4 is a subset of test 2 that is already special-cased as
true, and does not change.
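
As a rough sketch of the new criterion (types and field names here are
illustrative assumptions, not Syncthing's actual code):

```
package sketch

type ownership struct {
    UID, GID             int
    OwnerName, GroupName string
}

// ownershipEqual treats two sets of ownership data as equal when either
// the numeric IDs or the owner/group names match, rather than requiring
// both to match.
func ownershipEqual(a, b ownership) bool {
    idsEqual := a.UID == b.UID && a.GID == b.GID
    namesEqual := a.OwnerName == b.OwnerName && a.GroupName == b.GroupName
    return idsEqual || namesEqual
}
```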

Co-authored-by: Jakob Borg <jakob@kastelo.net>
2025-06-20 09:55:42 +02:00
Simon Frei
6a0c6128d8 fix(watchaggregator): properly handle sub-second watch durations (fixes #9927) (#10179)
I'll let Audrius' words from the ticket explain this :)

> I'm a bit lost, time.Duration is an int64, yet watcher delay is float,
> anything sub 1s gets rounded down to 0, so you just end up going into
> an infinite loop.

https://github.com/syncthing/syncthing/issues/9927#issuecomment-2967736106
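
A minimal illustration of the pitfall (not the actual watchaggregator
code): the fractional delay must be scaled to nanoseconds before the
conversion to time.Duration, otherwise anything below one second
truncates to zero.

```
package main

import (
    "fmt"
    "time"
)

func main() {
    delaySeconds := 0.5
    bad := time.Duration(delaySeconds) * time.Second // float truncated to int64 first: 0s
    good := time.Duration(delaySeconds * float64(time.Second)) // scale first: 500ms
    fmt.Println(bad, good)
}
```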
2025-06-15 10:29:33 +02:00
Jakob Borg
b05ece0681 build: more resilient pushes to releases 2025-06-07 13:18:58 +02:00
Jakob Borg
e9133ef82b docs: link to Docker image, APT, in release notes 2025-06-05 19:19:05 +02:00
Jakob Borg
67ba20d777 build: also create relaysrv and discosrv releases 2025-06-05 19:19:05 +02:00
Jakob Borg
21da0d7890 fix(stupgrades): return latest stable & pre for each major 2025-06-05 19:19:05 +02:00
ardevd
ebbe57d0ab fix(syncthing): avoid writing panic log to nil fd (#10154)
### Purpose

This change fixes a logical bug in the panic log writing where we could
end up writing to an uninitialized file descriptor.

On the very first iteration, `panicFd` is nil. We enter the
`if panicFd == nil { … }` block, check for "panic:" or "fatal error:",
and if neither matches, we skip instantiating `panicFd` altogether.
However, immediately after, still within the `if panicFd == nil { … }`
block, we call `panicFd.WriteString("Panic at ...")`. But `panicFd`
would in this case be `nil`, which will cause a run-time panic.

It's not clear to me why panicFd is only initialized if the lines start
with "panic:" or "fatal error:", so I've left that logic untouched. With
this change we at least avoid the risk of writing to a nil file
descriptor.
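
A sketch of the fixed flow (simplified; `panicLogPath` and the
surrounding read loop are assumptions for illustration):

```
// Only create panicFd when a panic header line is seen; the write is
// now guarded so it can never hit a nil file descriptor.
if panicFd == nil && (strings.HasPrefix(line, "panic:") || strings.HasPrefix(line, "fatal error:")) {
    panicFd, _ = os.Create(panicLogPath) // error handling elided
}
if panicFd != nil {
    panicFd.WriteString(line + "\n")
}
```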

---------

Co-authored-by: Jakob Borg <jakob@kastelo.net>
2025-06-03 07:39:21 +02:00
Jakob Borg
f4abc71dcc chore: copyright in next-version script 2025-06-02 20:41:42 +02:00
Jakob Borg
8aa02da93a build: include "v" prefix in version tags... 2025-06-02 19:59:45 +02:00
Jakob Borg
0e560486db build: use own script instead of svu
We use a slightly different handling of features between prereleases.
2025-06-02 19:49:23 +02:00
Syncthing Release Automation
57d413099d chore(gui, man, authors): update docs, translations, and contributors 2025-06-02 05:24:58 +00:00
Jakob Borg
1fdf07933c feat(config): expose folder and device info as metrics (fixes #9519) (#10148)
This makes it easier to use metrics based on device and folder labels,
names, and other attributes. Other metrics which are based on folder or
device ID can be joined with these info metrics to enrich their label
sets.

```
# HELP syncthing_config_device_info Provides additional information labels on devices
# TYPE syncthing_config_device_info gauge
syncthing_config_device_info{device="I6KAH76-66SLLLB-5PFXSOA-UFJCDZC-YAOMLEK-CP2GB32-BV5RQST-3PSROAU",introducer="false",name="s1",paused="false",untrusted="false"} 1

# HELP syncthing_config_folder_info Provides additional information labels on folders
# TYPE syncthing_config_folder_info gauge
syncthing_config_folder_info{folder="default",label="The default folder",path="s2",paused="false",type="sendreceive"} 1
```

With this you can e.g. query for

```
syncthing_connections_active * on(device) group_left syncthing_config_device_info
```

Fixes #9519 
Closes #10074 
Closes #10147
2025-05-31 17:09:23 +02:00
Jakob Borg
c50678618f chore: add issue types to GitHub issue templates 2025-05-30 11:57:27 +02:00
Jakob Borg
8094b459e4 build: remove schedule from PR metadata job
It shouldn't have touched non-PR issues, but it did
2025-05-30 11:57:27 +02:00
Simon Frei
6765867a2e chore(protocol): only allow enc. password changes on cluster config (#10145)
In practice we already always call SetPassword and ClusterConfig
together. However it's not just "sensible" to do that, it's required: If
the passwords change, the remote device needs to know about that to
check that the enc. setup is valid/consistent (e.g. tokens match,
folder-type is appropriate, ...).
And with the passwords set later, there's no point in adding them as
part of creating a new connection.

This is a "followup" (if one can call it that 4 years later :) ) to
resp. fix for the following commit:
924b96856f

Co-authored-by: Jakob Borg <jakob@kastelo.net>
2025-05-30 09:52:47 +02:00
Simon Frei
4fb8ee6a6f chore(protocol): don't start connection routines a second time (#10146) 2025-05-30 06:28:42 +00:00
Jakob Borg
674834ccf4 build: properly propagate build tags to Debian build (#10144)
Previously all build tags were ignored except noupgrade, which was hard-coded...
2025-05-29 15:06:57 +00:00
Jakob Borg
3bd2bff23b fix(protocol): avoid deadlock with concurrent connection start and close (#10140) 2025-05-29 14:56:58 +00:00
Jakob Borg
40660c5fb7 build: add labeler workflow for PRs (#10143)
Use labels to categorise release notes
2025-05-29 10:04:08 +02:00
Jakob Borg
d940d094a1 build(deps): update our notify package from upstream (#10142) 2025-05-28 15:04:24 +00:00
Jakob Borg
9d67727989 build(deps): update dependencies (#10141) 2025-05-28 13:52:08 +00:00
Jakob Borg
6f51700a7f docs: general notes about v2 coming (#10135)
This adds a file that will be prepended to release notes (tag messages,
GitHub releases, forum posts) for v1 releases. I'd like there to be
something there to flag that things are going to change.
2025-05-27 10:01:04 +02:00
Marcel Meyer
598915193a refactor: use slices package for sorting (#10136)
A few more complicated usages of the sort package remain.

### Purpose

Make progress towards replacing the sort package with slices package.
2025-05-26 20:37:49 +02:00
Jakob Borg
905e5ec07f build: handle multiple general release notes 2025-05-26 16:27:23 +02:00
Jakob Borg
4075b886d0 build: no need to build on the branches that just trigger tags 2025-05-26 15:21:21 +02:00
Jakob Borg
cade790198 build: use specific token for pushing release tags 2025-05-26 14:13:02 +02:00
Luke Hamburg
98555a9a80 fix(gui): update uncamel() to handle strings like 'IDs' (fixes #10128) (#10131)
> ⚠️ resubmission targeting `main` instead of `v2`

### Purpose

Updates `uncamel()` function in
[uncamelFilter.js](https://github.com/syncthing/syncthing/blob/v2/gui/default/syncthing/core/uncamelFilter.js)
to fix camelCase conversion edge cases, see #10128

This adds an array called `reservedStrings` which will be printed as-is,
e.g. `IDs`, `LAN` etc. I pre-populated this with what I believe makes
sense, but of course this is easily updated.

### Testing

I compiled all the config variables I could find in
`syncthing/lib/config/*configuration.go` and tested this new function
against them. Everything seemed to pass.

### Screenshot


![Image](https://github.com/user-attachments/assets/af8c9821-58b3-4a6a-8462-bead8a6d845a)
2025-05-26 11:43:38 +00:00
Marcel Meyer
48b757cac1 refactor: use slices package for sort (#10132)
The sort package is still used in places that were not trivial to
change. Since Go 1.21, the slices package can be used for sorting. See
https://go.dev/doc/go1.21#slices

### Purpose

Make some progress with the migration to a more up-to-date syntax.
2025-05-26 13:37:26 +02:00
Jakob Borg
58c85fc9db build: process for automatic release tags (#10133)
Make the release tagging consistent. Push to release branch to create a
stable release; push to release-rc to release a new candidate.
2025-05-26 13:33:53 +02:00
Syncthing Release Automation
ddd98a818a chore(gui, man, authors): update docs, translations, and contributors 2025-05-26 03:55:20 +00:00
Jakob Borg
64b5a1b738 fix(syncthing): ensure both config and data dirs exist at startup (fixes #10126) (#10127)
Previously we'd only ensure the config dir, which is often but not
always the same as the data dir.

Fixes #10126
2025-05-25 08:10:17 +02:00
Ashish Bhate
1a131a56f2 fix(versioner): fix perms of created folders (fixes #9626) (#10105)
As suggested in the linked issue, I've updated the versioner code to use
the permissions of the corresponding directory in the synced folder when
creating the folder in the versions directory.

### Testing
- Some tests are included with the PR. Happy to add more if you think
there are some edge-cases that we're missing.
- I've tested manually on Linux to confirm the permissions of the
created directories.
- I haven't tested on Windows or macOS (I don't have access to those
OSes).
2025-05-24 07:35:32 +02:00
pullmerge
beda37f28b refactor: use slices.Contains to simplify code (#10121)
There is a [new function](https://pkg.go.dev/slices@go1.21.0#Contains)
added in the Go 1.21 standard library, which can make the code more
concise and easier to read.
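
For illustration, a before/after of the pattern being replaced:

```
package main

import (
    "fmt"
    "slices"
)

func main() {
    ids := []string{"a", "b", "c"}

    // Before: a hand-rolled loop.
    found := false
    for _, id := range ids {
        if id == "b" {
            found = true
            break
        }
    }

    // After: the Go 1.21 standard library helper.
    fmt.Println(found, slices.Contains(ids, "b"))
}
```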
2025-05-23 10:36:06 +00:00
Jakob Borg
2532ac35cf build(deps): update dependency due to build breakage (#10120) 2025-05-21 06:52:29 +00:00
Jakob Borg
bcd30ceaec chore: move golangci-lint & meta to separate PR-only workflow (#10119)
For now. Existing code is not golangci-lint clean, but new PRs should
be, ideally.
2025-05-21 08:32:49 +02:00
Jakob Borg
9a3493c2f4 build: reactivate golangci-lint (#10118)
With DeepSource becoming (imho) less and less useful, let's get this one
back on track. It will likely require adjusting over time.
2025-05-20 14:03:43 +02:00
André Colomb
fa404d5a0d chore(gui): add Serbian (sr) translation template (#10116)
Based on user request from Weblate, user `@vlazic`.
2025-05-19 21:06:38 +00:00
Syncthing Release Automation
73ad18fbfb chore(gui, man, authors): update docs, translations, and contributors 2025-05-19 03:56:31 +00:00
Syncthing Release Automation
1dd264894a chore(gui, man, authors): update docs, translations, and contributors 2025-05-12 03:54:02 +00:00
Marcus B Spencer
8c3d2f3bc5 fix(config): mark audit log options as needing restart (fixes #10099) (#10100)
### Testing

Change the `auditEnabled` option and you should get a prompt in the Web
GUI.
Restart and change the `auditFile` option, and you should get that same
prompt.

The prompt you should get is shown in the screenshots below.

### Screenshots


![Screenshot_20250507_122546](https://github.com/user-attachments/assets/23ce7c42-5e60-4f88-ac58-f312a9a1f5cc)

Co-authored-by: Jakob Borg <jakob@kastelo.net>
2025-05-09 10:49:11 +00:00
Hazem Krimi
702ed8ecc1 fix(config): deep copy configuration defaults (fixes #9916) (#10101)
### Purpose

Setting the default configuration was not working properly because the
defaults struct was not deeply copied.

### Testing

Try running commands to change the default configuration and either
inspect `config.xml` or the `/rest/config` result to see the applied changes.
Example:
```
./syncthing cli config defaults folder versioning params set keep 5
```
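
A minimal sketch of the underlying issue, assuming a simplified defaults
shape (not the real config structs): plain struct assignment copies map
and slice headers, so the "copy" still shares backing data with the
original.

```
package sketch

type folderDefaults struct {
    VersioningParams map[string]string
}

func shallowCopy(d folderDefaults) folderDefaults {
    return d // shares the same map; edits leak back into the defaults
}

func deepCopy(d folderDefaults) folderDefaults {
    out := d
    out.VersioningParams = make(map[string]string, len(d.VersioningParams))
    for k, v := range d.VersioningParams {
        out.VersioningParams[k] = v
    }
    return out
}
```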
2025-05-09 07:40:32 +02:00
Syncthing Release Automation
b038650810 chore(gui, man, authors): update docs, translations, and contributors 2025-05-05 03:52:42 +00:00
Hazem Krimi
a16bf555c0 feat(gui): close a modal when pressing ESC after switching modal tabs (fixes #9489) (#10092)
### Purpose

As stated in #9489, after clicking a tab link to switch tabs in a modal,
you can no longer close the modal by pressing the ESC key unless you
first click somewhere on the modal to focus it again.

### Testing

- Open a modal that has tabs, like "Settings" or "Add Folder", switch
tabs, then press ESC.
- Check that clicking outside of the modal on the backdrop still closes
the modal.

### Demo


https://github.com/user-attachments/assets/a010db9a-72f7-4160-a7db-ddfebffb4834
2025-05-02 14:53:54 +00:00
Jakob Borg
cd6ea60fa1 build(deps): update dependencies (#10091)
Without bumping Go version
2025-05-01 18:15:37 +00:00
domain
0bf21d9db2 fix(strelaysrv): make the session limiter session-dependent (fixes #10072) (#10073)
### Purpose

Make the session limiter only apply to current session.

### Testing

Relay two or more sessions and check whether the sum of the connection
speeds can exceed the specified per-session rate.
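
A sketch of the distinction using golang.org/x/time/rate (the shape is
an assumption for illustration, not the actual strelaysrv code):

```
package relay

import "golang.org/x/time/rate"

const perSessionRate = 6_250_000 // bytes/s, matching the flag above

type session struct {
    limiter *rate.Limiter
}

// Each session gets its own limiter. Sharing one limiter across all
// sessions would cap the *sum* of their speeds at the per-session
// rate, which is the bug being fixed here.
func newSession() *session {
    return &session{limiter: rate.NewLimiter(rate.Limit(perSessionRate), perSessionRate)}
}
```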

2 sessions (-global-rate=50000000 and -per-session-rate=6250000):


![image](https://github.com/user-attachments/assets/133e531a-ed49-4890-aef7-821c628bcfc8)

1 session (-global-rate=50000000 and -per-session-rate=6250000):


![image](https://github.com/user-attachments/assets/ac89ea53-2d8e-4347-9bbc-4780d85e38d7)
2025-04-30 14:25:01 +00:00
Jakob Borg
f61843ef2e build: artifact uploads destination OCI 2025-04-29 14:01:25 -05:00
Syncthing Release Automation
23e8366f8d chore(gui, man, authors): update docs, translations, and contributors 2025-04-28 03:52:12 +00:00
Ross Smith II
93e72cc83f chore(gui): use go list --deps for dependency list (#10071) 2025-04-26 02:24:31 +00:00
Marcus B Spencer
190dff142c feat(config): add option for audit file (fixes #9481) (#10066) 2025-04-23 22:32:23 +07:00
bt90
c667ada63a chore(api): log X-Forwarded-For (#10035)
### Purpose

Fix https://github.com/syncthing/syncthing/issues/9336

The `emitLoginAttempt` function now checks for the presence of an
`X-Forwarded-For` header. The IP from this header is only used if the
connecting host is either on loopback or on the same LAN.

In the case of a host pretending to be a proxy, we'd still have both IPs
in the logs, which should make this much less critical from a security
standpoint.
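
A sketch of the trust check (simplified; `isLAN` is a hypothetical
helper):

```
host, _, _ := net.SplitHostPort(r.RemoteAddr)
ip := net.ParseIP(host)
remoteAddress, proxy := host, ""
// Honour X-Forwarded-For only when the directly connected peer is
// loopback or on the same LAN.
if fwd := r.Header.Get("X-Forwarded-For"); fwd != "" && ip != nil && (ip.IsLoopback() || isLAN(ip)) {
    remoteAddress, proxy = fwd, host
}
```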

### Testing

1. directly via localhost
2. via a proxy on localhost

#### Logs

```
[3JPXJ] 2025/04/11 15:00:40 INFO: Wrong credentials supplied during API authorization from 127.0.0.1
[3JPXJ] 2025/04/11 15:03:04 INFO: Wrong credentials supplied during API authorization from 192.168.178.5 proxied by 127.0.0.1
```

#### Event API

```
  {
    "id": 23,
    "globalID": 23,
    "time": "2025-04-11T15:00:40.578577402+02:00",
    "type": "LoginAttempt",
    "data": {
      "remoteAddress": "127.0.0.1",
      "success": false,
      "username": "sdfsd"
    }
  },
  {
    "id": 24,
    "globalID": 24,
    "time": "2025-04-11T15:03:04.423403976+02:00",
    "type": "LoginAttempt",
    "data": {
      "proxy": "127.0.0.1",
      "remoteAddress": "192.168.178.5",
      "success": false,
      "username": "sdfsd"
    }
  }
```

### Documentation

https://github.com/syncthing/docs/pull/907

---------

Co-authored-by: Jakob Borg <jakob@kastelo.net>
2025-04-23 06:01:13 +00:00
Ross Smith II
93ae30d889 chore(gui): update dependency copyrights, add script for periodic maintenance (#10067)
### Purpose

This PR parses the output of `go mod graph` and updates the copyright
list in our [about
modal](486eebc4ac/gui/default/syncthing/core/aboutModalView.html (L38)).

If there are no changes, the program is silent. Otherwise, it reports
what additions and deletions it made. It does not rewrite existing
copyright notices, but it does remove notices that we no longer use, as
well as add new ones.

It uses a GitHub API to try to determine the copyright string in the
license file. If one is not found, it defaults to `Copyright &copy;
<this_year> the <owner/repo> authors`. If a proper copyright is found,
simply update the notice in `aboutModalView.html`, and it will be used.
2025-04-23 12:41:05 +07:00
Syncthing Release Automation
486eebc4ac chore(gui, man, authors): update docs, translations, and contributors 2025-04-21 03:52:26 +00:00
Jakob Borg
ff33d976d1 chore(syncthing): remove support for TLS 1.2 sync connections (#10064)
This cleans up the option to allow old TLS 1.2 sync connections. The
flag existed for compatibility with old Syncthing versions that don't
support TLS 1.3, which is approximately Syncthing 1.2.2 (September 2019)
and older. ("Approximately" because it depends on the Go version it's
built with and that's when we switched to building with Go 1.13.)

Ref #10062 because it reminded me this exists.
2025-04-21 10:30:43 +07:00
TheCreeper
69890b4282 fix(osutil): give threads same I/O priority on Linux (#10063) 2025-04-21 02:30:52 +00:00
Jakob Borg
533c9a6ab0 chore(stun): switch lookup warning to debug level 2025-04-17 07:29:10 +07:00
Syncthing Release Automation
9521bb3931 chore(gui, man, authors): update docs, translations, and contributors 2025-04-14 03:51:01 +00:00
Jakob Borg
e46a0f99c3 chore: add missing copyright in new files from infra branch (#10055)
Let's see if it passes
2025-04-13 09:25:16 +00:00
Jakob Borg
ed97e365b2 Merge branch 'infrastructure'
* infrastructure:
  feat(stdiscosrv): configurable desired not-found rate
  chore(blobs): generalised blob storage
  chore(stdiscosrv): path style s3
  feat(ursv): add os/arch/distribution metric
  chore(strelaypoolsrv): limit number of returned relays
  build(infra): run in Docker environment for pushes
  chore(stupgrades): expose latest release as a metric
2025-04-13 09:41:45 +02:00
Jakob Borg
b4776ea4e0 feat(stdiscosrv): configurable desired not-found rate 2025-04-13 09:41:16 +02:00
Jakob Borg
b5ffd0a796 chore(blobs): generalised blob storage 2025-04-13 09:41:16 +02:00
Jakob Borg
c74299b59a chore(stdiscosrv): path style s3 2025-04-13 09:40:14 +02:00
Jakob Borg
8b6d837483 feat(ursv): add os/arch/distribution metric 2025-04-13 09:40:14 +02:00
Jakob Borg
3e74b3dee2 chore(strelaypoolsrv): limit number of returned relays
Avoid unnecessarily enormous responses by returning a random subset of
relays.
2025-04-13 09:40:14 +02:00
Jakob Borg
2902da996c build(infra): run in Docker environment for pushes 2025-04-13 09:40:14 +02:00
Jakob Borg
f6f144bf17 chore(stupgrades): expose latest release as a metric 2025-04-13 09:40:11 +02:00
Sébastien WENSKE
ab5c42f4a0 feat(api, gui): allow authentication bypass for metrics (#10045)
### Purpose

Give the ability to skip authentication for prometheus metrics
("/metrics").

### Testing

When authentication is enabled and "Metrics Without Auth" is checked
(not the default), the "/metrics" path remains accessible even when
logged out.

### Screenshots


![image](https://github.com/user-attachments/assets/144b696b-dd72-46f4-94d5-cd21848e4a4c)

### Documentation

https://github.com/syncthing/docs/pull/906
2025-04-13 07:35:57 +00:00
Jakob Borg
7db3f7eaac Merge branch 'release-1.29.5'
* release-1.29.5:
  build: push artifacts to Azure (#10044)
  fix(syncthing): use separate lock file instead of locking the certificate (fixes #10053) (#10054)
2025-04-12 14:57:04 +02:00
Jakob Borg
f0b666269b build: push artifacts to Azure (#10044)
Provider migration
2025-04-12 14:55:24 +02:00
Jakob Borg
190a59842c fix(syncthing): use separate lock file instead of locking the certificate (fixes #10053) (#10054)
Apparently that nukes the cert under some circumstances on some Windows
systems 🤷
2025-04-12 14:49:23 +02:00
Jakob Borg
40888c1a66 fix(syncthing): use separate lock file instead of locking the certificate (fixes #10053) (#10054)
Apparently that nukes the cert under some circumstances on some Windows
systems 🤷
2025-04-12 14:46:57 +02:00
Jakob Borg
fa0d933e49 fix(gui): fix previous commit 2025-04-09 15:39:09 +02:00
tomasz1986
8372c0288f fix(gui): mark unseen disconnected devices as inactive (#10048)
Currently, the "Disconnected (Inactive)" status is only given to devices
that have not been seen for 7 days or longer. However, this is not the
case when adding a new device, or after resetting the database. Those
devices are only marked as "Disconnected", and they will stay like that
even if a long time passes without any connectivity. Moreover, the lack
of an "Inactive" status may confuse the user to believe that their
disconnect is only temporary.

For this reason, always mark devices that have not been seen yet as
"Disconnected (Inactive)".

Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>
2025-04-08 22:08:00 +02:00
Paul Donald
5f5d672a7d fix(strings): differentiate setup(n) and set(v) up (#10024)
Correct GUI strings, translations and comments to use proper grammar.
2025-04-08 12:45:05 +00:00
Tommy van der Vorst
d23cd197e1 chore(fs): changes to allow Filesystem to be implemented externally (#10040)
### Purpose

The `fs.Filesystem` interface contains two parts that cannot be
implemented externally because they are private:

* `filesystemWrapperType`: this PR changes `unwrapFilesystem` to
downcast to a specific concrete type
* `underlying`: this PR simply moves it to an unexported interface

### Testing

Regular tests pass.
2025-04-08 12:39:39 +00:00
bt90
d7ca483df1 chore(config): resolve primary STUN servers via SRV record (fixes #10029) (#10031)
### Purpose

Fixes #10029
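
For reference, this is roughly what an SRV lookup looks like with the
standard library; the exact service/proto/domain triple here is an
assumption for illustration:

```
package main

import (
    "fmt"
    "log"
    "net"
)

func main() {
    // Queries _stun._udp.syncthing.net for SRV records.
    _, addrs, err := net.LookupSRV("stun", "udp", "syncthing.net")
    if err != nil {
        log.Fatal(err)
    }
    for _, srv := range addrs {
        fmt.Printf("%s:%d\n", srv.Target, srv.Port)
    }
}
```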

### Testing

```
[3JPXJ] 2025/04/03 14:36:44.601454 stun.go:146: DEBUG: Running stun for Stun@udp://[::]:22000 via fyc5mja4mz5s0vmz1txx.syncthing.net:9999
[3JPXJ] 2025/04/03 14:36:54.185157 stun.go:170: DEBUG: Stun@udp://[::]:22000 stun discovery on fyc5mja4mz5s0vmz1txx.syncthing.net:9999 resulted in no address
[3JPXJ] 2025/04/03 14:36:54.185204 stun.go:146: DEBUG: Running stun for Stun@udp://[::]:22000 via stun.internetcalls.com:3478
```

### Documentation

https://github.com/syncthing/docs/pull/904
2025-04-08 12:23:57 +00:00
Jakob Borg
e48be98cd5 build: push artifacts to Azure (#10044)
Provider migration
2025-04-08 09:43:19 +02:00
Syncthing Release Automation
e9a2ff3aa6 chore(gui, man, authors): update docs, translations, and contributors 2025-04-07 03:50:00 +00:00
Jakob Borg
2301f72c5b fix(config): zero filesystemtype is "basic" (#10038)
For legacy purposes
2025-04-04 19:28:39 +00:00
Tommy van der Vorst
f7c8efd93c fix(config): properly apply defaults when reading folder configuration (#10034) 2025-04-04 16:46:12 +00:00
Sébastien WENSKE
3e7ccf7c48 chore(model): add metric for total number of conflicts (#10037) 2025-04-04 09:24:04 -07:00
bt90
6bc2784e9a build: replace underscore in Debian version (#10032)
The workflow building Debian packages chokes on branches containing
underscores:

```
{:timestamp=>"2025-04-03T10:31:46.749835+0000", :message=>"Invalid package configuration: The version looks invalid for Debian packages. Debian version field must contain only alphanumerics and . (period), + (plus), - (hyphen) or ~ (tilde). I have '1.29.5~dev.13.ga38df11f~srv_stun' which which isn't valid.", :level=>:error}
```

This replaces the offending `_` with a `~` which should yield a valid
version.
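
The substitution itself is a one-liner; a sketch:

```
package main

import (
    "fmt"
    "strings"
)

func main() {
    version := "1.29.5~dev.13.ga38df11f~srv_stun"
    fmt.Println(strings.ReplaceAll(version, "_", "~"))
    // prints 1.29.5~dev.13.ga38df11f~srv~stun, which is valid for Debian
}
```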
2025-04-03 14:28:33 +02:00
Tommy van der Vorst
f15d50c2e8 feat(fs, config): add support for custom filesystem type construction (#9887)
For Synctrain I would like to create a virtual filesystem that exposes
iOS' photo library. This can only be accessed through APIs.
2025-04-03 10:12:23 +02:00
Jakob Borg
f9007ed106 build(deps): update dependencies (#10020)
deps deps deps
2025-04-02 08:51:37 +02:00
bt90
05cc6b0f43 chore(fs): speed up case normalization (#10013)
### Purpose

Resurrecting https://github.com/syncthing/syncthing/pull/9365

### Testing

Current benchmark results:

```
goos: linux
goarch: amd64
pkg: github.com/syncthing/syncthing/lib/fs
cpu: AMD EPYC 7763 64-Core Processor                
                                           │  ../old.txt  │             ../new.txt              │
                                           │    sec/op    │   sec/op     vs base                │
UnicodeLowercase/ASCII_lowercase-4            43.68n ± 4%   12.19n ± 1%  -72.09% (p=0.000 n=10)
UnicodeLowercase/ASCII_mixedcase_start-4     200.65n ± 2%   59.49n ± 3%  -70.35% (p=0.000 n=10)
UnicodeLowercase/ASCII_mixedcase_end-4        95.50n ± 2%   59.10n ± 2%  -38.12% (p=0.000 n=10)
UnicodeLowercase/Latin1_lowercase-4           122.5n ± 1%   131.4n ± 1%   +7.27% (p=0.000 n=10)
UnicodeLowercase/Latin1_mixedcase_start-4     339.9n ± 2%   309.2n ± 1%   -9.05% (p=0.000 n=10)
UnicodeLowercase/Latin1_mixedcase_end-4       183.6n ± 2%   174.3n ± 1%   -5.04% (p=0.000 n=10)
UnicodeLowercase/Unicode_lowercase-4          456.6n ± 1%   440.5n ± 1%   -3.53% (p=0.000 n=10)
UnicodeLowercase/Unicode_mixedcase_start-4    625.9n ± 1%   595.6n ± 1%   -4.83% (p=0.000 n=10)
UnicodeLowercase/Unicode_mixedcase_end-4      516.2n ± 1%   495.5n ± 1%   -4.02% (p=0.000 n=10)
geomean                                       214.1n        150.4n       -29.72%

                                           │  ../old.txt  │             ../new.txt              │
                                           │     B/op     │    B/op     vs base                 │
UnicodeLowercase/ASCII_lowercase-4           0.000 ± 0%     0.000 ± 0%       ~ (p=1.000 n=10) ¹
UnicodeLowercase/ASCII_mixedcase_start-4     24.00 ± 0%     24.00 ± 0%       ~ (p=1.000 n=10) ¹
UnicodeLowercase/ASCII_mixedcase_end-4       24.00 ± 0%     24.00 ± 0%       ~ (p=1.000 n=10) ¹
UnicodeLowercase/Latin1_lowercase-4          0.000 ± 0%     0.000 ± 0%       ~ (p=1.000 n=10) ¹
UnicodeLowercase/Latin1_mixedcase_start-4    32.00 ± 0%     32.00 ± 0%       ~ (p=1.000 n=10) ¹
UnicodeLowercase/Latin1_mixedcase_end-4      32.00 ± 0%     32.00 ± 0%       ~ (p=1.000 n=10) ¹
UnicodeLowercase/Unicode_lowercase-4         0.000 ± 0%     0.000 ± 0%       ~ (p=1.000 n=10) ¹
UnicodeLowercase/Unicode_mixedcase_start-4   48.00 ± 0%     48.00 ± 0%       ~ (p=1.000 n=10) ¹
UnicodeLowercase/Unicode_mixedcase_end-4     48.00 ± 0%     48.00 ± 0%       ~ (p=1.000 n=10) ¹
geomean                                                 ²               +0.00%                ²
¹ all samples are equal
² summaries must be >0 to compute geomean

                                           │  ../old.txt  │             ../new.txt              │
                                           │  allocs/op   │ allocs/op   vs base                 │
UnicodeLowercase/ASCII_lowercase-4           0.000 ± 0%     0.000 ± 0%       ~ (p=1.000 n=10) ¹
UnicodeLowercase/ASCII_mixedcase_start-4     1.000 ± 0%     1.000 ± 0%       ~ (p=1.000 n=10) ¹
UnicodeLowercase/ASCII_mixedcase_end-4       1.000 ± 0%     1.000 ± 0%       ~ (p=1.000 n=10) ¹
UnicodeLowercase/Latin1_lowercase-4          0.000 ± 0%     0.000 ± 0%       ~ (p=1.000 n=10) ¹
UnicodeLowercase/Latin1_mixedcase_start-4    1.000 ± 0%     1.000 ± 0%       ~ (p=1.000 n=10) ¹
UnicodeLowercase/Latin1_mixedcase_end-4      1.000 ± 0%     1.000 ± 0%       ~ (p=1.000 n=10) ¹
UnicodeLowercase/Unicode_lowercase-4         0.000 ± 0%     0.000 ± 0%       ~ (p=1.000 n=10) ¹
UnicodeLowercase/Unicode_mixedcase_start-4   1.000 ± 0%     1.000 ± 0%       ~ (p=1.000 n=10) ¹
UnicodeLowercase/Unicode_mixedcase_end-4     1.000 ± 0%     1.000 ± 0%       ~ (p=1.000 n=10) ¹
geomean                                                 ²               +0.00%                ²
¹ all samples are equal
² summaries must be >0 to compute geomean
```

I think the `+7%` for the lowercase Latin1 test case is easily
outweighed by the ASCII and Unicode improvements 🙂
2025-04-01 13:41:57 +02:00
Marcus B Spencer
1efcfeb3ad chore(config): remove discontinued secondary STUN servers (fixes #10011) (#10012)
Similarly to #10009, we will remove some discontinued STUN servers,
except these are unofficial secondary STUN servers rather than the
official primary one.

### Testing

Use a STUN client (like [`pystun3`](https://pypi.org/project/pystun3))
to verify that the removed STUN servers are inactive.

### Documentation

syncthing/docs#902
2025-03-31 06:41:33 +00:00
Syncthing Release Automation
93195911bd chore(gui, man, authors): update docs, translations, and contributors 2025-03-31 03:50:00 +00:00
Jakob Borg
6085e3a5eb fix(stun): better error handling (ref #10008) (#10010) 2025-03-30 11:56:29 -07:00
Marcus B Spencer
e5b72da607 fix(config): remove discontinued primary STUN server (fixes #10008) (#10009)
The mechanism for primary STUN servers is still intact, in case this
gets retried with a different domain.

### Purpose

As seen in [stun.syncthing.net doesn’t resolve
anymore](https://forum.syncthing.net/t/stun-syncthing-net-doesnt-resolve-anymore/24075/2?u=marbens)
on the forums, stun.syncthing.net has been shut down, so I think it's
probably a good idea to remove it.

### Testing

1. Have two or more devices
2. Disable Relaying
3. Have no Internet ports open on either end for incoming connections
(so that STUN is triggered)
4. Enable the `stun` debugging facility in the Actions -> Logs ->
Debugging Facilities
5. Verify that it doesn't output something like this within a few
seconds:
```
2025-03-30 05:51:32 Enabled debug data for "stun"
2025-03-30 05:51:47 Starting stun for Stun@udp://[::]:22000
2025-03-30 05:51:47 Running stun for Stun@udp://[::]:22000 via stun.syncthing.net:3478
2025-03-30 05:51:47 Stun@udp://[::]:22000 stun addr resolution on stun.syncthing.net:3478: lookup stun.syncthing.net: no such host
```

---------

Co-authored-by: Jakob Borg <jakob@kastelo.net>
2025-03-30 15:17:55 +02:00
mathias4833
0d6117d585 fix(gui): validate device ID in canonical form (fixes #7291) (#10006)
### Purpose

In the GUI, the device ID validation was case-sensitive and didn’t
account for dash variations, which allowed users to enter an existing
device ID without receiving proper feedback.

This fix ensures the ID is validated in its canonical form, thus
preventing the user from submitting the request if the device ID already
exists.
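
A hypothetical sketch of the canonicalization applied before comparing
(the real code uses Syncthing's device ID parsing; this just shows the
idea):

```
package sketch

import "strings"

// canonicalID uppercases the input and strips dashes and spaces, so
// variants like "abcd-123" and "ABCD 123" compare equal.
func canonicalID(s string) string {
    s = strings.ToUpper(s)
    s = strings.ReplaceAll(s, "-", "")
    return strings.ReplaceAll(s, " ", "")
}
```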

### Testing

To test this change, try adding a new device with an ID that matches an
existing device, but with a different case or dashes.
2025-03-29 17:52:02 +01:00
tomasz1986
629971687d feat(gui): explanation to options enabled or disabled per folder type (#9367)
Currently, some options are automatically enabled or disabled depending
on the folder type. However, there is no explanation in the GUI on why
the options are like that. Thus, add short explanatory notes to each
case, where the option is either disabled or enabled according to the
current folder type.
2025-03-28 15:17:08 +00:00
Tommy van der Vorst
3c955a9706 chore(lib): expose model methods to obtain progress (#9886)
### Purpose

This exposes four methods from `Model` through `Internals`. It allows
apps like Synctrain to obtain information about local/remote need and
sync progress.

### Testing

No testing seems necessary, functions are exported verbatim.

### Screenshots

N/a

### Documentation

Not public API, I am aware this interface may change at any time.

Co-authored-by: Ross Smith II <ross@smithii.com>
Co-authored-by: Jakob Borg <jakob@kastelo.net>
2025-03-28 13:44:01 +00:00
Ross Smith II
7f3c8dbff1 build: move nightly build schedule to separate workflow (#10000)
This allows users to easily disable nightly builds in their forks,
simply by disabling the
build-nightly action.

### Testing

I tested it in my fork, and it works.
2025-03-27 09:31:51 +00:00
Jakob Borg
7762e39fb3 chore(syncthing): use file lock on certificate to prevent multiple instances (#10003)
This adds the locking from the SQLite branch, in preparation, so that we
do not inadvertently permit running an instance of each.
2025-03-27 09:26:21 +00:00
Jakob Borg
4235b2c406 chore(ur): add RSS to reported stats (#10002)
For easier comparison in the future.
2025-03-27 10:17:10 +01:00
Syncthing Release Automation
3fd090bfa7 chore(gui, man, authors): update docs, translations, and contributors 2025-03-24 03:49:44 +00:00
Syncthing Release Automation
6dfa54efa6 chore(gui, man, authors): update docs, translations, and contributors 2025-03-17 03:49:35 +00:00
mathias4833
67575e1736 fix(api): prevent tilde expansion in path suggestions (fixes #9990) (#9992)
### Purpose

Path autocompletion wasn't working when using `~` as a shortcut for the
home directory. The issue occurred because the tilde was expanded to
`/home/user`, which caused the suggestion to no longer match the input
(thus preventing the autocompletion from appearing in the suggestion
list).

To fix this, I replaced the custom `parentAndBase` function, which
handled path splitting in a more complex way, with `filepath.Split` from
the standard `path/filepath` package. This prevents tilde expansion
while keeping the expected behavior for path splitting.
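
A quick demonstration of why `filepath.Split` leaves the tilde alone:

```
package main

import (
    "fmt"
    "path/filepath"
)

func main() {
    // Split cuts at the final separator and does not expand anything,
    // so the directory prefix used for matching stays verbatim.
    dir, base := filepath.Split("~/Docu")
    fmt.Printf("%q %q\n", dir, base) // "~/" "Docu"
}
```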

### Testing

The issue has been tested manually on Linux.

### Screenshots


![screenshot](https://github.com/user-attachments/assets/49dd96e2-6d75-4476-946d-0dfb2ac474ef)
2025-03-15 21:06:38 +01:00
Jakob Borg
65923fc255 fix(syncthing): don't auto upgrade to higher major on startup (#9989)
We avoided upgrading to newer major versions during normal auto upgrade
procedures, but currently not in the initial upgrade check on startup.
2025-03-13 07:59:19 +00:00
Jakob Borg
aea763868f build(deps): update dependencies (#9988) 2025-03-13 07:41:05 +00:00
Syncthing Release Automation
26b134ae7b chore(gui, man, authors): update docs, translations, and contributors 2025-03-10 03:45:18 +00:00
Emil Lundberg
893071d2ba refactor(api): extract method configMuxBuilder.postAdjustGui and add test coverage (#9979)
This is extracted from PR #9175. This deduplicates `SetPassword` calls
and makes `postAdjustGui` a single place where PR #9175 can add
another adjustment step for sanitizing changes to WebAuthn
credentials.

This also adds tests to validate that the refactored logic was not
broken.
2025-03-09 21:31:06 +01:00
Emil Lundberg
435f2d2178 refactor(api): make shutdown timeout configurable for tests (#9980)
This is extracted from PR #9175, which adds some tests that seem to
need a longer shutdown timeout when running on GitHub Actions.
2025-03-07 12:50:33 +01:00
Emil Lundberg
8461ca539b refactor(api): deduplicate HTTP test helpers and allow session cookie access (#9977)
These refactorizations were made in [PR #9175][1] to accommodate a few
new variants of authentication method and body content. On request from
reviewers, this PR extracts it as a smaller refactorization to review in
isolation.

### Purpose

This extracts a shared `httpRequest` base function from `httpGet` and
`httpPost`, which will be used in PR #9175 for new helper functions
`httpGetCsrf` (hiding all optional parameters except the CSRF token),
`httpPostCsrf` (same) and `httpPostCsrfAuth` (hiding basic auth
parameters). A `getSessionCookie` function is also extracted from
`hasSessionCookie` and will be used to test that concurrent WebAuthn
authentications result in separate sessions (indicated by different
session cookies).
2025-03-07 11:07:01 +01:00
Jakob Borg
fb977dc61d build: correct API call for Weblate statistics
Something changed...
2025-03-03 08:11:29 +01:00
Jakob Borg
ee7ab4ce25 build(deps): update dependencies (#9978) 2025-03-01 21:33:13 +00:00
polyfloyd
c3ce9713d9 chore(etc): remove /usr/bin prefix from Linux .desktop files (#9966)
Exec accepts just the program name and will look for it in $PATH. This
makes these files work on NixOS, which does not create a /usr/bin
directory.
2025-02-28 20:59:07 +01:00
Jakob Borg
6a147091c5 build: use Go 1.24, minimum is Go 1.23 (#9960) 2025-02-12 10:16:46 +01:00
Jakob Borg
6208c36417 fix(policy): do not require multiple maintainers for build changes 2025-02-12 09:47:09 +01:00
Syncthing Release Automation
453fd20eeb chore(gui, man, authors): update docs, translations, and contributors 2025-02-10 03:46:14 +00:00
Tommy van der Vorst
28f0cffdb6 chore(fs): build kqueue instead of fsevents watcher on iOS (#9950)
### Purpose

On iOS, the FSEvents API for watching files (also used on macOS) is not
available, but `kqueue` is. This PR ensures `kqueue` support is built on
iOS instead of the FSEvents based watcher implementation.

Before this PR, you could already use the `kqueue` build option to force
its usage. Unfortunately `gomobile`, the tool that I use to build
Syncthing for iOS and macOS for Synctrain, does not support setting
different build flags for iOS and macOS (unless I build separately for
each, which is a bit of a hassle because of Xcode nonsense). I am assuming
there are good reasons to support FSEvents even though `kqueue` is also
available on macOS (but I'm not sure why?). I do know FSEvents has been
working fine for me on macOS so it seems best to use FSEvents on macOS
and kqueue on iOS.

Note that this also requires https://github.com/syncthing/notify/pull/4
to be merged in `syncthing/notify` (until that is done, this PR will
fail to build on iOS due to `notify` still trying to link to `fsevents`
stuff when the `kqueue` build flag is not set).

### Testing

I compiled both `syncthing/notify` and syncthing with this PR applied,
and used that to successfully build the Synctrain iOS app, which now
works fine and should pick up file changes a bit quicker.

---------

Co-authored-by: Jakob Borg <jakob@kastelo.net>
2025-02-07 15:40:53 +00:00
Jakob Borg
87c16c6cf5 build(deps): update dependencies (#9951) 2025-02-07 08:44:26 +00:00
dashangcun
5495c98e63 refactor: using slices.Contains to simplify the code (#9918)
### Purpose

This is a [new function](https://pkg.go.dev/slices@go1.21.0#Contains)
added in the Go 1.21 standard library, which can make the code more
concise and easier to read.

Signed-off-by: dashangcun <907225865@qq.com>
Co-authored-by: Jakob Borg <jakob@kastelo.net>
2025-02-07 08:21:24 +00:00
Jakob Borg
da7d5ce608 build: switch to cloud code signing for Windows (#9948)
The requirements for Windows code signing changed in 2023, so that newly
generated certificates can only be stored in hardware modules. Luckily,
I managed to snag a three-year certificate before that, so it hasn't
affected us much. Now though, it does, because our cert is expiring
in March.

This changes the code signing process for Windows to use a cloud
service, Azure Trusted Signing. This appears to work equally well and
outsources the problem entirely, while also being cheaper than the
actual certificate was to begin with. 🤷

The signing entity will be Kastelo AB and not the Syncthing Foundation,
because the latter is almost impossible to get a certificate for as it's
not a normal corporate entity whose existence can be verified, etc. This
is also how it was prior to the latest certificate; it's not ideal, but
I think it's acceptable under the circumstances.
2025-02-06 10:43:23 +01:00
Syncthing Release Automation
b300c297c6 chore(gui, man, authors): update docs, translations, and contributors 2025-02-03 03:45:29 +00:00
Syncthing Release Automation
124673f7a8 chore(gui, man, authors): update docs, translations, and contributors 2025-01-27 03:45:14 +00:00
Jakob Borg
0395cf2bc0 fix(model): clarify errors on Windows user/group lookup (fixes #9929) (#9930)
Currently, this just results in a very ambiguous `setting metadata: lookup
failed` while it could report what it's looking up and why it failed
(not found, etc).
2025-01-20 09:59:05 +01:00
Syncthing Release Automation
36cd70040a chore(gui, man, authors): update docs, translations, and contributors 2025-01-20 03:45:10 +00:00
Jakob Borg
1fbd396ffa chore(scanner): don't warn about cancelled scan (#9920)
We expect a context cancellation when a folder is restarted during a
scan.
2025-01-13 17:35:55 +00:00
Syncthing Release Automation
2834bad85e chore(gui, man, authors): update docs, translations, and contributors 2025-01-13 03:47:27 +00:00
Jakob Borg
516f3e29e8 chore(proto): change symlinktarget to be byte sequence (fixes #9913) (#9914) 2025-01-11 17:38:29 +01:00
Jakob Borg
ab20c16982 fix(api): don't crash requests after failing to unmarshal tokens (fixes #9909) (#9912)
Unclear why this would happen, but apparently it does?
2025-01-09 21:47:10 +01:00
Jakob Borg
0c1df81ee9 fix(scanner): don't warn when legitimately skipping a directory (#9911)
Such as ignored directories etc.
2025-01-09 18:42:18 +00:00
Jakob Borg
d324b2ac86 fix(db): correct unsafe RLock order (fixes #9906) (#9910)
Marshal() was called with the read lock held, and in turn called
Created() which also takes the read lock. This is fine by itself, but
there is a risk of deadlock if another call to lock the mutex happens
concurrently, as the lock call will block the inner rlock and the outer
rlock can never become unlocked.

It's an easy fix as marshalling is guaranteed to be called with a read
lock and does not need to call any methods that read lock themselves.
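
A minimal sketch of the hazard (not the actual db code):

```
package sketch

import "sync"

var mut sync.RWMutex

func marshal() {
    mut.RLock()
    defer mut.RUnlock()
    created() // re-acquires the read lock below
}

func created() {
    // If another goroutine calls mut.Lock() between the two RLocks,
    // this blocks forever: the writer waits for the outer RUnlock,
    // and this RLock waits for the writer.
    mut.RLock()
    defer mut.RUnlock()
}
```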
2025-01-09 19:33:04 +01:00
Jakob Borg
0231089b99 fix(db): add JSON tags to types used in API returns (fixes #9907) (#9908)
These types are also used in the API, and hence need JSON tags.
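
A hypothetical illustration: without tags, encoding/json emits the Go
field names ("Files", "Bytes"), while with tags the API gets the stable
lowercase keys clients expect.

```
type Counts struct {
    Files   int   `json:"files"`
    Bytes   int64 `json:"bytes"`
    Deleted int   `json:"deleted"`
}
```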
2025-01-09 09:26:34 +00:00
Jakob Borg
74ffb85467 fix(model): correct type of fileInfoType in API browse response (fixes #9904) (#9905)
This was broken in the Protobuf refactor. The reason is that with the
previous library, generated types would have a JSON marshalling method
that would automatically return strings for enums. In the current
library you need to use the jsonpb marshaller for that, but these are
hand-crafted structs so we can't do that. The easy solution is to just
use strings directly, since this is an API-only type anyway.
2025-01-08 11:08:33 +01:00
Syncthing Release Automation
dcd280e6e2 chore(gui, man, authors): update docs, translations, and contributors 2025-01-06 03:47:27 +00:00
Jakob Borg
4e56dbd883 build(deps): update dependencies (#9900) 2025-01-01 19:49:52 +00:00
Syncthing Release Automation
13d7881b80 chore(gui, man, authors): update docs, translations, and contributors 2024-12-30 03:46:20 +00:00
Simon Frei
1d2c53bf3c chore(scanner): avoid scan failures and report if they happen (#9888) 2024-12-28 17:09:38 +01:00
Syncthing Release Automation
9c449c966b chore(gui, man, authors): update docs, translations, and contributors 2024-12-23 03:46:12 +00:00
Jakob Borg
ec2d4638e3 build: fix publish-nightly tools checkout 2024-12-22 09:02:11 +01:00
André Colomb
0ce92befc8 chore(gui): fix merge conflict on Weblate (#9882)
Commit dee920a840 was merged into
Weblate's repository, but later replaced on main by
2167ce9656. This led to a merge conflict,
which this commit fixes cleanly. After merging, we can unlock Weblate
again.
2024-12-21 21:54:17 +00:00
Jakob Borg
2167ce9656 build: reinstate docker push for main 2024-12-21 16:17:21 +01:00
Jakob Borg
0dc85d74aa build: also push to ghcr.io 2024-12-21 15:43:49 +01:00
Jakob Borg
b6e3f8037b build: workaround for tag builds
The GitHub checkout action does weird stuff with tags which breaks `git
describe`. This works around that so we get proper release builds on tag
pushes.
2024-12-21 14:35:04 +01:00
Jakob Borg
00e7161a8f build: compat.json should be in the signed packages bundle 2024-12-20 10:34:29 +01:00
Jakob Borg
b5a7879eca fix(stdiscosrv): handle announcements properly :p (#9881)
Further protobuf refactor damage, also adding some better debugging
2024-12-19 20:43:46 +00:00
Jakob Borg
4355dc69ea fix(discovery): properly unmarshal local discovery (#9880)
Damage from recent protobuf refactoring
2024-12-19 20:16:44 +00:00
Jakob Borg
371ba69447 build: slightly streamline build dependencies 2024-12-19 15:08:17 +01:00
Jakob Borg
79ef57d0ea fix(scanner): continue walk after special files (fixes #9872) (#9877)
We must skip Unix sockets, FIFOs, etc. when scanning, as these are not
file types we can handle. Currently we return a "bug" error, which
results in the walk being aborted and the rest of the tree being
essentially invisible to Syncthing. Instead, just ignore these files and
continue onwards.

This might well be #9859 as well but I can't confirm.
2024-12-19 08:27:53 +01:00
Jakob Borg
8bd6bdd397 chore(model): clarify log message (fixes #9875) (#9876) 2024-12-18 07:56:06 +00:00
Simon Frei
ce3248cea7 chore(fs): add debug logging for case cache registry (#9869)
Clearly something is up with it, but I still have no clue what. This
might give some clue when affected users enable debug logging.
2024-12-16 12:04:24 +01:00
Jakob Borg
99a6f3a5b6 docs: update section on code signing 2024-12-16 11:42:34 +01:00
Jakob Borg
c2e10dc156 build: skip Docker push for main, reserve for release 2024-12-16 11:39:13 +01:00
Jakob Borg
fc914f3237 build: sign asc files using ezapt
And the same keys as the APT archive
2024-12-16 11:35:41 +01:00
Jakob Borg
811d3752d0 build: loki vars are in secrets 2024-12-16 09:26:21 +01:00
Jakob Borg
00827dd5c1 build: consolidate release environment for actions
signing & docker -> release
2024-12-16 08:57:35 +01:00
Syncthing Release Automation
a981c21d27 chore(gui, man, authors): update docs, translations, and contributors 2024-12-16 03:50:41 +00:00
Jakob Borg
163dc122f3 build: update compat.yaml for Go 1.24 (rc1) (fixes #9870) 2024-12-15 11:09:14 +01:00
Jakob Borg
83727e0824 build(deps): update dependencies (#9866) 2024-12-10 14:33:47 +01:00
Jakob Borg
529d3ef764 ci: reduce frequency of dependabot nags 2024-12-10 09:53:46 +01:00
Jakob Borg
fefbf4dcc8 build: run release flows on tag pushes 2024-12-09 08:32:15 +01:00
André Colomb
b9c6d3ae09 fix(config): skip GUI port probing for UNIX sockets (fixes #9855) (#9858)
When creating an initial default config, we usually probe for a free
TCP port.  But when a UNIX socket is specified via the `STGUIADDRESS=`
override or the `--gui-address=unix:///...` command line syntax, parsing
that option will fail during port probing.

The solution is to just skip the port probing when the address is
determined to specify something other than a TCP socket.

### Testing

Start with a fresh home directory each time.
1. Specify a UNIX socket for the GUI (works with this PR):

       TMPHOME=$(mktemp -d); ./syncthing --home=$TMPHOME --gui-address=unix://$TMPHOME/socket

2. Specify no GUI address (probes for a free port if default is taken,
   as before):

       TMPHOME=$(mktemp -d); ./syncthing --home=$TMPHOME

3. Specify a TCP GUI address (probes whether the given port is taken,
   as before):

       TMPHOME=$(mktemp -d); ./syncthing --home=$TMPHOME --gui-address=127.0.0.1:8385
2024-12-09 07:24:42 +00:00
Syncthing Release Automation
7bea8c758a chore(gui, man, authors): update docs, translations, and contributors 2024-12-09 03:50:45 +00:00
Alex Ionescu
479c0d3f16 fix(gui): reflect folder password visibility by changing button icon (#9857)
Make the "toggle password visibility" button on encrypted
folders change its icon from `fa-eye` to `fa-eye-slash` while the
password is visible.
2024-12-08 21:20:31 +01:00
Jakob Borg
d9ce7c3166 build: build all the things (#9845)
Build packages for stdiscosrv and strelaysrv as well.
2024-12-05 10:09:33 -05:00
Jakob Borg
69979996d9 build(infra): also push Docker images to ghcr.io 2024-12-03 07:54:06 -05:00
Jakob Borg
da58e5c50c build(deps): update dependencies (#9852) 2024-12-03 12:48:10 +00:00
Syncthing Release Automation
44e259142f chore(gui, man, authors): update docs, translations, and contributors 2024-12-02 03:50:30 +00:00
Jakob Borg
77970d5113 refactor: use modern Protobuf encoder (#9817)
At a high level, this is what I've done and why:

- I'm moving the protobuf generation for the `protocol`, `discovery` and
`db` packages to the modern alternatives, and using `buf` to generate
because it's nice and simple.
- After trying various approaches on how to integrate the new types with
the existing code, I opted for splitting off our own data model types
from the on-the-wire generated types. This means we can have a
`FileInfo` type with nicer ergonomics and lots of methods, while the
protobuf generated type stays clean and close to the wire protocol. It
does mean copying between the two when required, which certainly adds a
small amount of inefficiency. If we want to walk this back in the future
and use the raw generated type throughout, that's possible, this however
makes the refactor smaller (!) as it doesn't change everything about the
type for everyone at the same time.
- I have simply removed in cold blood a significant number of old
database migrations. These depended on previous generations of generated
messages of various kinds and were annoying to support in the new
fashion. The oldest supported database version now is the one from
Syncthing 1.9.0 from Sep 7, 2020.
- I changed config structs to be regular manually defined structs.

For the sake of discussion, some things I tried that turned out not to
work...

### Embedding / wrapping

Embedding the protobuf generated structs in our existing types as a data
container and keeping our methods and stuff:

```
package protocol

type FileInfo struct {
  *generated.FileInfo
}
```

This generates a lot of problems because the internal shape of the
generated struct is quite different (different names, different types,
more pointers), because initializing it doesn't work like you'd expect
(i.e., you end up with an embedded nil pointer and a panic), and because
the types of child types don't get wrapped. That is, even if we also
have a similar wrapper around a `Vector`, that's not the type you get
when accessing `someFileInfo.Version`, you get the `*generated.Vector`
that doesn't have methods, etc.

### Aliasing

```
package protocol

type FileInfo = generated.FileInfo
```

Doesn't help because you can't attach methods to it, plus all the above.

### Generating the types into the target package like we do now and
attaching methods

This fails because of the different shape of the generated type (as in
the embedding case above) plus the generated struct already has a bunch
of methods that we can't necessarily override properly (like `String()`
and a bunch of getters).

### Methods to functions

I considered just moving all the methods we attach to functions in a
specific package, so that for example

```
package protocol

func (f FileInfo) Equal(other FileInfo) bool
```

would become

```
package fileinfos

func Equal(a, b *generated.FileInfo) bool
```

and this would mostly work, but becomes quite verbose and cumbersome,
and somewhat limits discoverability (you can't see what methods are
available on the type in auto completions, etc). In the end I did this
in some cases, like in the database layer where a lot of things like
`func (fv *FileVersion) IsEmpty() bool` becomes `func fvIsEmpty(fv
*generated.FileVersion)` because they were anyway just internal methods.

Fixes #8247
2024-12-01 16:50:17 +01:00
André Colomb
2b8ee4c7a5 chore(gui): cache input type in each advanced settings category (#9802)
Each section in the advanced settings dialog has similar code to insert
repeated input fields for each option. But only the first section (GUI
options) was adjusted in #9743 to avoid calling the `inputTypeFor()`
function repeatedly.

Apply the same caching to a locally scoped variable for each ng-repeat
entry by defining it in an ng-init directive.
2024-12-01 12:30:05 +00:00
bt90
be952e5f2d chore(config): add Chinese STUN servers (#9843) 2024-11-30 08:33:55 +01:00
Jakob Borg
43ebac4242 fix(model): create fileset under lock (#9840)
I came across this in another context and didn't investigate fully, but
literally ten lines above this code, in another method, we say that
filesets _must_ be created under the lock. It's either one or the other,
and I'm taking the safer route here.

---------

Co-authored-by: Simon Frei <freisim93@gmail.com>
2024-11-28 13:41:44 +00:00
Jakob Borg
f08a0ed01c build(deps): update dependencies (#9833) 2024-11-25 07:25:24 +00:00
Jakob Borg
612fdff377 build: automatically update APT repository on release
This uses https://github.com/kastelo/ezapt to generate and sign the
archive, and uploads it to blob storage.
2024-11-24 22:55:12 +01:00
Jakob Borg
8ccb7f1924 fix(protocol): allow encrypted-to-encrypted connections again
Encrypted-to-encrypted connections (i.e., ones where both sides set a
password) used to work but were broken in the 1.28.0 release. The
culprit is the 5342bec1b refactor which slightly changed how the request
was constructed, resulting in a bad block hash field.

Co-authored-by: Simon Frei <freisim93@gmail.com>
2024-11-24 22:55:12 +01:00
André Colomb
65d0ca8aa9 fix(config): respect GUI address override in fresh default config (fixes #9783) (#9675)
### Purpose

When generating a new `config.xml` file with default options, the GUI
address is populated with a hard-coded default value of
`127.0.0.1:8384`, except for a random free port if that default one is
occupied. This is independent of the GUI configuration default address
defined in the protobuf description. More importantly, it ignores any
`STGUIADDRESS` override given via environment variable or command-line
option, thus probing for the default port instead of the one specified
via override.

The `ProbeFreePorts()` function now respects the override, by reading
the `GUIConfiguration.Address()` method instead of using hard-coded
defaults.

When not calling `ProbeFreePorts()`, the override should still be
persisted rather than the default address. This happens only when
generating a fresh default `config.xml`, never on an existing one.
2024-11-19 11:01:43 +00:00
Jakob Borg
e82ed6e3d3 style: gofumpt all the things (#9829)
Literally `gofumpt -w .` from the top level dir. Guaranteed to be minor
style changes only and nothing else.

@imsodin per request?
2024-11-19 11:32:56 +01:00
Syncthing Release Automation
4b815fc086 chore(gui, man, authors): update docs, translations, and contributors 2024-11-18 03:49:35 +00:00
Jakob Borg
7eaf843de2 chore(api): add block and goroutine profiles to support bundle (#9824) 2024-11-16 09:43:17 +01:00
Jakob Borg
110e1ae6f9 fix(model): don't panic in index consistency print (fixes #9821) (#9823)
We try to compare to the last fileinfo, but apparently we can end up
here with an empty file list and crash on an out-of-range index.
2024-11-14 19:59:34 +00:00
Jakob Borg
896f9725ec chore: update policy to allow approvals by contributors (#9818)
This adds `allow_contributor: true` which allows approvals by
contributors to the PR (but still not the author themself, which is a
different thing). This allows things like pushing minor fixups while
also approving.

The `ignore_update_merges: true` option makes it so that someone is not
considered a "contributor" just because they push the merge button to
update the branch. In principle this is not needed given the above, but
I like it for clarity.
2024-11-12 10:50:19 +01:00
Tobias Frölich
1a529e9d5d fix(gui): expand tildes for subdir check (fixes #9400) (#9788)
### Purpose

This closes #9400 by always expanding tildes when parent/subdir checks
are done.

### Testing

I tested this by creating folders with paths to parent or subdirectories
of the default folder that include a tilde in their path as shown in the
attached screenshots.
With this change, overlap will be detected regardless of whether or not
tildes are used in other folder paths.

### Screenshots

Default Folder:

![2024-10-26-At-08h40m33s](https://github.com/user-attachments/assets/07df090c-4481-41ec-b741-d2785fc848d5)
Newly created folder (parent directory in this case)

![2024-10-26-At-08h40m13s](https://github.com/user-attachments/assets/636fa1fd-41dc-44d9-ac90-0a4937c9921c)

---------

Signed-off-by: tobifroe <froeltob@pm.me>
2024-11-12 09:24:00 +01:00
Hireworks
36ef17df8f fix(model): check remote folder state before pulling files (fixes #9686) (#9732)
### Purpose

As discussed in #9686,
Syncthing currently does not check the folder state on the remote device
before pulling. If no devices have a valid folder state (i.e. all
devices have the folder paused), it will still attempt to pull. On large
folders this will cause a hanging "Syncing" status.

This checks whether at least one connected device has the file available
and has a valid folder state.

### Testing
Tested locally on multiple devices.
We're new to Go (all our stuff is Python) so please bear with!
Interested if there may be a better place to slot this in.

Thanks,
Jon

---------

Co-authored-by: Simon Frei <freisim93@gmail.com>
2024-11-12 08:51:52 +01:00
Syncthing Release Automation
955ac7775e chore(gui, man, authors): update docs, translations, and contributors 2024-11-11 03:46:06 +00:00
tomasz1986
8f69e874c4 chore(gui): group logout/restart/shutdown buttons together in Actions menu (#9801)
Currently, the "Restart" and "Shutdown" buttons are displayed in the
middle of the Actions menu. On the other hand, the "Log Out" button is
displayed at the very bottom. However, in other cases, e.g. the menus in
operating systems like Windows or macOS, these kind of buttons are
usually grouped together.

Therefore, move the "Restart" and "Shutdown" buttons down, so that they
are listed together with the "Log Out" button. Also, change the order,
so that it goes from the least impactful ("Log Out") to the most
impactful ("Shutdown").

Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>

### Screenshots

#### Before


![image](https://github.com/user-attachments/assets/a51438ef-bb6f-4535-a972-8c1bc1dffa02)

#### After


![image](https://github.com/user-attachments/assets/535762d6-6f26-44ab-a402-db87bdcbfb36)

Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>
2024-11-07 16:47:00 +01:00
Syncthing Release Automation
ac06fd97e9 chore(gui, man, authors): update docs, translations, and contributors 2024-11-04 03:47:49 +00:00
Syncthing Release Automation
3726b7d112 chore(gui, man, authors): update docs, translations, and contributors 2024-10-28 03:48:15 +00:00
Ross Smith II
377200591e fix(fs): fix directory junction handling (fixes #9775) (#9786)
### Purpose

This fixes #9775. I also improved the comments as they were lacking.

My apologies for introducing this bug. In summary, the bug was
```
mode = mode ^ (ModeSymlink | ModeIrregular)
```
didn't correctly reset those bits. This correctly resets them:
```
mode = mode &^ ModeSymlink &^ ModeIrregular
```
Tested and working in Windows 11 version 10.0.22631.4317. I didn't test
in other versions, but I'm sure this is the only issue.
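
To make the operator difference concrete, a standalone snippet (using the
io/fs mode bits) showing that XOR toggles bits while AND NOT clears them:

```go
package main

import (
	"fmt"
	"io/fs"
)

func main() {
	mode := fs.ModeDir // ModeSymlink and ModeIrregular are not set here

	// XOR toggles: since the bits were clear, they are now *set*.
	buggy := mode ^ (fs.ModeSymlink | fs.ModeIrregular)
	fmt.Println(buggy&fs.ModeSymlink != 0) // true

	// AND NOT clears the bits unconditionally, whatever their state.
	fixed := mode &^ fs.ModeSymlink &^ fs.ModeIrregular
	fmt.Println(fixed&fs.ModeSymlink != 0) // false
}
```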
2024-10-27 16:08:38 +01:00
Kapil Sareen
4afc898c2f fix(model): don't sync symbolic links on Android (fixes #9725) (#9782) 2024-10-26 09:29:38 +00:00
Simon Frei
ff7e4fef55 chore(nat, upnp): Make failure logging less repetitive (ref #9324) (#9785)
Currently we log on every single one of 10 retries deep in the upnp
stack. However, we also return the failure as an error, which is bubbled
up a while until it's logged at debug level. Switch that around, such
that the repeat logging happens at debug level but the top-level logging
happens at info. There's some chance that this will newly log errors from
nat-pmp that were previously hidden at debug level; I hope those are
useful and not too numerous.

Also, this can potentially even close #9324: my (very limited)
understanding of the reports/discussion there is that there's likely no
problem with syncthing beyond the excessive logging, it's some weird
router behaviour.
2024-10-25 21:04:22 +00:00
Simon Frei
9ffddb1923 fix(gui): apply small screen CSS changes earlier (fixes #9590) (#9756)
These CSS overrides address issues that are already present on wider
screens, so apply them there. Some experiments show we might even want to
up the limit more, but I am chicken and lazy, so I propose to use the
existing 470px media block.

Supersedes another PR after not getting any reaction to feedback there:
https://github.com/syncthing/syncthing/pull/9591#issuecomment-2212586134

Co-authored-by: Jakob Borg <jakob@kastelo.net>
2024-10-25 22:49:22 +02:00
André Colomb
896b857fc4 fix(gui): hide lists with [object Object] in advanced settings (#9743)
As discussed in
https://github.com/syncthing/syncthing/pull/9175#discussion_r1730431703,
entries in advanced settings are unusable if they consist of a list of
objects. It just displays `[object Object], [object Object],
[object Object]`, e.g. for the devices a folder is shared with.

Filter out these config elements by detecting an array whose members are
not all strings or numbers, and setting them to `skip` type.

Fix some unnecessary repetition in calling `inputTypeFor()`, since it is
already cached in the `ng-init` directive.
2024-10-25 22:22:12 +02:00
Terrance
acc5d2675b fix(gui): add dark scheme styles for disabled checkboxes (fixes #9776) (#9777)
### Purpose

Fixes #9776 by tweaking the text/background colours of disabled checkbox
panels when dark mode is enabled.

It was [noted on that
issue](https://github.com/syncthing/syncthing/issues/9776#issuecomment-2424828520)
that there's a bigger issue around the correctness of using the
`disabled` attribute on a `<div>` in the first place, but this PR does
not attempt to change that.

### Testing

I've hooked up the GUI files against a release build as suggested below.

### Screenshots

Using the dark theme, or the default theme with a system dark scheme:


![image](https://github.com/user-attachments/assets/3c6bfa77-cc7a-4f3e-a5c2-83daf54dcc34)

Using the black theme:


![image](https://github.com/user-attachments/assets/768db657-aa52-4db0-8455-5194a00fc143)

These borrow the colours from dark theme text inputs and black theme
tabs for a consistent look (initially I tried the text colour of
disabled text inputs, but that produced some poor contrast).
2024-10-22 14:03:32 +02:00
Syncthing Release Automation
6ece4c1fd2 chore(gui, man, authors): update docs, translations, and contributors 2024-10-21 03:47:52 +00:00
Jakob Borg
cc09f0170d build(deps): update dependencies (#9773) 2024-10-17 13:05:53 +00:00
Syncthing Release Automation
bb234d6c0e chore(gui, man, authors): update docs, translations, and contributors 2024-10-14 03:47:49 +00:00
Syncthing Release Automation
e6acc64758 chore(gui, man, authors): update docs, translations, and contributors 2024-10-07 03:47:13 +00:00
André Colomb
f18cf545b9 fix(gui): improve device ID readability in black and dark themes (fixes #9757) (#9758) 2024-10-06 00:48:19 +02:00
Jakob Borg
6d64daaba3 chore(db): process "unchanged" files anyway (#9755)
Skipping these makes the sequence numbering inconsistent; we've received
a file and supposedly added it to the database, but if you check the
sequence number afterwards it didn't increase, i.e., we trigger [this
failure
condition](47f48faed7/lib/model/indexhandler.go (L447-L459))
and, similarly, a future update will look like there was a hole in the
numbering.

I propose to at least temporarily remove this optimisation in order for
things to make more sense. Is there a reason to keep this beyond saving
some database operations?
2024-10-04 19:47:57 +00:00
Jakob Borg
47f48faed7 fix(upgrades): avoid clobbering cache when filtering (#9752)
The slice is shared, so we can't overwrite elements of it. (This is an
upgrade-server-only thing.)
2024-10-02 18:56:39 +00:00
Jakob Borg
cfa834177b build(deps): update dependencies (#9751) 2024-10-02 12:53:14 +00:00
tomasz1986
c454fc8baa chore(build): use conventional commit title in update script (#9747) 2024-09-30 15:59:14 -05:00
Jakob Borg
dbe7fa9155 Merge branch 'infrastructure'
* infrastructure:
  feat(ursrv): new metrics based approach
2024-09-30 14:17:02 -05:00
Jakob Borg
4d842f7d3b feat(ursrv): new metrics based approach 2024-09-30 14:16:27 -05:00
Syncthing Release Automation
0e68221c91 gui, man, authors: Update docs, translations, and contributors 2024-09-30 03:48:06 +00:00
Jakob Borg
19f63c7ea3 chore(model): improve tracking sentPrevSeq for index debugging (#9740) 2024-09-29 22:18:24 +00:00
Emil Lundberg
fb939ec496 fix(model): prevent division by zero in numHashers (#9744)
This should prevent the panic that occurred in this test run:
https://github.com/syncthing/syncthing/actions/runs/11095876010/job/30825046810

```
2024-09-29T21:01:53.5425372Z === RUN   TestIssue4357
2024-09-29T21:01:53.5505943Z panic: runtime error: integer divide by zero [recovered]
2024-09-29T21:01:53.5512200Z 	panic: runtime error: integer divide by zero
2024-09-29T21:01:53.5516633Z
2024-09-29T21:01:53.5523018Z goroutine 2655 [running]:
2024-09-29T21:01:53.5524157Z github.com/thejerf/suture/v4.(*Supervisor).runService.func2.2()
2024-09-29T21:01:53.5527176Z 	/home/runner/go/pkg/mod/github.com/thejerf/suture/v4@v4.0.5/supervisor.go:563 +0xd0
2024-09-29T21:01:53.5530556Z panic({0x1080d20?, 0x1851290?})
2024-09-29T21:01:53.5564723Z 	/home/runner/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.23.1.linux-amd64/src/runtime/panic.go:785 +0x132
2024-09-29T21:01:53.5566616Z github.com/syncthing/syncthing/lib/model.(*model).numHashers(0xc0006f6180, {0x117dc1a, 0x7})
2024-09-29T21:01:53.5568061Z 	/home/runner/work/syncthing/syncthing/lib/model/model.go:2581 +0x210
2024-09-29T21:01:53.5569912Z github.com/syncthing/syncthing/lib/model.(*folder).scanSubdirsChangedAndNew(0xc00c38c808, {0x0, 0x0, 0x0}, 0xc0003fc060)
2024-09-29T21:01:53.5571612Z 	/home/runner/work/syncthing/syncthing/lib/model/folder.go:653 +0x250
2024-09-29T21:01:53.5573010Z github.com/syncthing/syncthing/lib/model.(*folder).scanSubdirs(0xc00c38c808, {0x0, 0x0, 0x0})
2024-09-29T21:01:53.5574447Z 	/home/runner/work/syncthing/syncthing/lib/model/folder.go:512 +0xd0f
2024-09-29T21:01:53.5576011Z github.com/syncthing/syncthing/lib/model.(*folder).scanTimerFired(0xc00c38c808)
2024-09-29T21:01:53.5577367Z 	/home/runner/work/syncthing/syncthing/lib/model/folder.go:916 +0x46
2024-09-29T21:01:53.5579010Z github.com/syncthing/syncthing/lib/model.(*folder).Serve(0xc00c38c808, {0x1307650, 0xc0006a0910})
2024-09-29T21:01:53.5580428Z 	/home/runner/work/syncthing/syncthing/lib/model/folder.go:205 +0xd7e
2024-09-29T21:01:53.5581624Z github.com/thejerf/suture/v4.(*Supervisor).runService.func2()
2024-09-29T21:01:53.5582978Z 	/home/runner/go/pkg/mod/github.com/thejerf/suture/v4@v4.0.5/supervisor.go:567 +0x249
2024-09-29T21:01:53.5584400Z created by github.com/thejerf/suture/v4.(*Supervisor).runService in goroutine 2651
2024-09-29T21:01:53.5585872Z 	/home/runner/go/pkg/mod/github.com/thejerf/suture/v4@v4.0.5/supervisor.go:541 +0x32a
2024-09-29T21:01:53.5661413Z FAIL	github.com/syncthing/syncthing/lib/model	5.510s
```

### Testing

I have not been able to reproduce the panic throughout a few minutes of
continuously running the test without this fix, but judging by the
traceback it seems to only happen if the test happens to delete the
folder from config at the same time `scanTimerFired` triggers.
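
A sketch of the kind of zero guard this implies (hypothetical shape, not
the actual patch):

```go
package main

import "fmt"

// numHashers divides available CPUs among folders; if the folder was just
// removed from the config, the folder count can be zero, and dividing
// would panic exactly as in the traceback above.
func numHashers(cpus, folders int) int {
	if folders == 0 {
		return 1 // folder vanished mid-scan; avoid dividing by zero
	}
	if n := cpus / folders; n > 0 {
		return n
	}
	return 1
}

func main() {
	fmt.Println(numHashers(8, 0)) // 1, instead of a runtime panic
}
```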
2024-09-30 00:01:52 +02:00
Jakob Borg
39df3173d4 chore(model): log sequence anomaly when update appears not to "take" (#9741)
I hope this doesn't fire, but 👻  I'm Seeing Things I Can't Explain 👻
2024-09-29 15:04:06 +00:00
maxice8
429672e0b4 docs(docker): add healthcheck to docker-compose (#9742)
### Purpose

Syncthing has had a healthcheck API for a while, and the example
Dockerfile uses it in the form of:

```
HEALTHCHECK --interval=1m --timeout=10s \
  CMD curl -fkLsS -m 2 127.0.0.1:8384/rest/noauth/health | grep -o --color=never OK || exit 1
```

Let's add it to the docker-compose file as well.
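
For reference, a sketch of what the equivalent stanza in the compose file
might look like (the service definition here is an assumption; only the
healthcheck itself mirrors the Dockerfile above):

```yml
services:
  syncthing:
    image: syncthing/syncthing
    healthcheck:
      test: curl -fkLsS -m 2 127.0.0.1:8384/rest/noauth/health | grep -o --color=never OK || exit 1
      interval: 1m
      timeout: 10s
```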

### Testing

I use this docker-compose.yml file to deploy via ansible (using
community.docker.docker_compose_v2) to my machine with success, using
`wait: true` in ansible for it to use `docker compose up --wait`.

```yml
- name: Enable syncthing docker
  community.docker.docker_compose_v2:
    project_src: /srv/syncthing
    wait: true
    wait_timeout: 90
```
2024-09-29 09:53:13 -05:00
Simon Frei
605fd6d726 fix(ignore): ensure normalization of patterns and paths match (fixes #9597) (#9717)
In ignores, normalize the input when parsing it.
When scanning, normalize earlier such that the path is already
normalized when checking ignores. This requires splitting normalization
of the string from normalization of the file, as we don't want to
attempt the latter if the file is ignored.
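
A sketch of the string-level step only, using the golang.org/x/text module
(Syncthing's actual handling is more involved): normalize both pattern and
path to the same Unicode form before comparing, so composed and decomposed
spellings match.

```go
package main

import (
	"fmt"

	"golang.org/x/text/unicode/norm"
)

// sameNormalized compares a pattern and a path after bringing both to NFC.
func sameNormalized(pattern, path string) bool {
	return norm.NFC.String(pattern) == norm.NFC.String(path)
}

func main() {
	composed := "caf\u00e9"    // é as a single code point
	decomposed := "cafe\u0301" // e plus a combining acute accent
	fmt.Println(composed == decomposed)               // false
	fmt.Println(sameNormalized(composed, decomposed)) // true
}
```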

Closes #9598

---------

Co-authored-by: Jakob Borg <jakob@kastelo.net>
2024-09-28 17:16:44 +02:00
Jakob Borg
3c476542d2 fix(ur): actually send usage report directly when enabled (#9736)
There was a bug where the unique ID was not set when reporting was
enabled, and thus the reports were rejected by the server. The unique
ID only got set on startup, i.e. the next time Syncthing restarted.

This makes sure to set the unique ID when it is blank.
2024-09-28 17:02:05 +02:00
Jakob Borg
31874f3ebb chore(model): remove GUI/log warning on sequence anomaly (#9738)
I can already see in our Sentry data that there are a fair number of
these warnings, mostly of the same shape. Asking users to report them
will likely cause a lot of reporting effort for fairly little additional
value. We can do that when/if we have something more targeted to ask
for.
2024-09-28 16:38:07 +02:00
Jakob Borg
77942747db Merge branch 'infrastructure'
* infrastructure:
  feat(stupgrades): filter returned releases per compatibility
2024-09-26 10:25:02 +02:00
Jakob Borg
fe01b396ba feat(stupgrades): filter returned releases per compatibility 2024-09-26 10:22:23 +02:00
Jakob Borg
3583949706 refactor(upgrade): rename insecureGet which is no longer insecure (#9735) 2024-09-25 15:50:22 +00:00
Jakob Borg
23fc22ebc5 chore: add more advanced policy configuration (#9726)
This codifies a review policy which is closer to what I always
envisioned, but which isn't expressible using the normal checks in the
GitHub GUI. It would move the commit approval check from GitHub into the
policy-bot check which is already present to enforce the
conventional-commits standard. Approvals in general would still work the
same -- it's just that the bot picks it up and toggles the status
accordingly. From a GitHub side when this is enabled we'd remove the
requires-review check from there and let the bot decide that part. We
would still require builds and tests to pass of course.

There are a couple of relaxations from the current policy, details in
the code but briefly:

- Changes to translations or dependencies by a trusted person don't
require review
- Trivial changes by a trusted person, explicitly marked as such, don't
require review

This enables less bureaucracy for things like adding new translated
languages and updating dependencies, and enables the trivial-change
workflow to a larger audience than, like, me, who could always just
bypass the rules by way of being admin.
2024-09-25 17:41:56 +02:00
Jakob Borg
cba163a1fd chore: enable TLS client cache for HTTPS where appropriate (#9721)
https://forum.syncthing.net/t/infrastructure-report-discovery-stuff/22819/4
2024-09-24 08:55:04 +02:00
Jakob Borg
a8e2c8edb6 fix(connections): announce PtP links again (fixes #9730) (#9731) 2024-09-23 14:32:19 +02:00
Syncthing Release Automation
3e501d9036 gui, man, authors: Update docs, translations, and contributors 2024-09-23 03:46:01 +00:00
bt90
9ca101756d chore(ursrv): add Nix detection (#9729)
Classify the builder `nix@nix` as [Nix](https://nixos.org/)

![369684243-172cab09-df6f-449a-a638-1f0a0c080ab3](https://github.com/user-attachments/assets/37a6e0a5-bdcb-4b31-8b36-eaaa42423382)
2024-09-22 14:03:40 +02:00
bt90
a873d12c65 chore(ursrv): extend F-Droid detection (#9728)
Our f-droid apps are currently built using `vagrant@bookworm`:

![grafik](https://github.com/user-attachments/assets/172cab09-df6f-449a-a638-1f0a0c080ab3)
2024-09-22 13:48:38 +02:00
Emil Lundberg
8ff670c564 fix(gui): get version from header when not authenticated (#9724)
### Purpose

Since #8757, the Syncthing GUI now has an unauthenticated state. One
consequence of this is that `$scope.versionBase()` is not initialized
while unauthenticated, which causes the `docsURL` function to truncate
links to just `https://docs.syncthing.net/`, discarding the section
path. This currently affects at least the "Help > Introduction" link
reachable both while logged in and not. The issue is exacerbated in
https://github.com/syncthing/syncthing/pull/9175 where we sometimes want
to show additional contextual help links from the login page to
particular sections of the docs.

I don't think it's any worse to try to preserve the section path even
without an explicit version tag, than to fall back to just the host and
lose all context the link was attempting to provide.

### Testing

- On commit b1ed2802fb (before):
  - Open the GUI, set a username and log out.
- Open the "Help" drop-down. The "Introduction" item links to:
https://docs.syncthing.net/
  - Log in.
- Open the "Help" drop-down. The "Introduction" item links to:
https://docs.syncthing.net/v1.27.10/intro/gui
- On commit 44fef31780 (after):
  - Open the GUI, set a username and log out.
- Open the "Help" drop-down. The "Introduction" item links to:
https://docs.syncthing.net/intro/gui
  - Log in.
- Open the "Help" drop-down. The "Introduction" item links to:
https://docs.syncthing.net/v1.27.10/intro/gui

### Screenshots

This is a GUI change, but affecting only URLs in the markup with no
visual changes.


### Drawbacks

If a `docsURL` call generates a versionless link to a docs page that
doesn't exist on https://docs.syncthing.net - presumably because
Syncthing is not the latest version and links to a deleted page? - then
this will lead to GitHub's generic 404 page with no link to the
Syncthing docs root. Before, any versionless link would also be a
pathless link, leading to the Syncthing docs root instead of a 404 page.
2024-09-22 09:47:02 +02:00
Jakob Borg
b1ed2802fb fix(connections): skip point-to-point interfaces when listing LANs (fixes #9719) (#9720)
Point-to-point interfaces are typically VPNs and similar which, for our
purposes, do not qualify as LANs.
2024-09-21 09:27:23 +02:00
Jakob Borg
b70cb580c8 build(deps): update all dependencies (#9723) 2024-09-21 09:25:27 +02:00
Sonu Kumar Saw
28be3ba788 chore(connections): lower log level from INFO to DEBUG for "already connected to this device" messages (fixes #9715) (#9722)
### Purpose

The primary aim of this change is to minimize log clutter in production
environments. There are many lines in the logs coming from an expected
race condition when two devices connect `already connected to this
device`. These messages do not indicate errors and can overwhelm the log
files with unnecessary noise.

By lowering the logging level, we enhance the usability of the logs,
making it easier for users and developers to identify actual issues
without being distracted by noise.

### Testing
1. Build syncthing locally
2. Start two Syncthing instances
```bash
./syncthing -no-browser -home=~/.config/syncthing1
./syncthing -no-browser -home=~/.config/syncthing2
```
3. Enable the DEBUG logs from UI for `connections` package
4. Connect the syncthing instances by adding remote devices from the UI
5. Observe the logs for the message `XXXX already connected to this
device`

### Screenshots


![image](https://github.com/user-attachments/assets/882ccb4c-d39d-463a-8f66-2aad97010700)

## Authorship

Your name and email will be added automatically to the AUTHORS file
based on the commit metadata.
2024-09-21 07:19:58 +00:00
Jakob Borg
d4770ddc77 chore(cmd): clean up commands (#9705)
Move infrastructure related commands to under `cmd/infra` and
development stuff to `cmd/dev`. The default build command builds the
regular user facing binaries: syncthing, stdiscosrv, and strelaysrv.
2024-09-21 09:04:22 +02:00
Simon Frei
cbe1220680 chore(fs): put the caseFS as the outermost layer again (#9716)
Reasoning in comments. The main motivation is to avoid all the case
checks when walking the filesystem.

"again" as we already tried once, but it caused a major issue ragarding
mtimefs layer. The root of this problem has been fixed in the meantime
in ac8b3342a
2024-09-18 20:31:19 +02:00
André Colomb
0b95c5fa76 fix(meta): return read error in forbidden_words_test (#9706)
When reading a file fails, the error is currently swallowed / hidden.
Probably just a typo.
2024-09-18 19:12:24 +02:00
Jakob Borg
0343bca257 Merge branch 'infrastructure'
* infrastructure:
  chore(stdiscosrv): ensure incoming addresses are sorted and unique
  chore(stdiscosrv): use zero-allocation merge in the common case
  chore(stdiscosrv): properly clean out old addresses from memory
  chore(stdiscosrv): calculate IPv6 GUA
2024-09-16 09:33:15 +02:00
Syncthing Release Automation
878016db39 gui, man, authors: Update docs, translations, and contributors 2024-09-16 03:47:01 +00:00
Simon Frei
1f4fde9525 chore(protocol): prioritize closing a connection (#9711)
The read/write loops may keep going for a while on a closing connection
with lots of read/write activity, as it's random which select case is
chosen. And if the connection is slow or even broken, a single read/write
can take a long time/until timeout. Add initial non-blocking selects
with only the cases relevant to closing, to prioritize those.
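
The pattern itself is simple; a minimal sketch (names are made up for
illustration):

```go
package main

import (
	"errors"
	"fmt"
)

var errClosed = errors.New("connection closed")

// writeLoop runs a non-blocking select containing only the close case
// first, so a pending close always wins over further write work, even
// when both channels happen to be ready at the same time.
func writeLoop(closed <-chan struct{}, outbox <-chan string) error {
	for {
		select {
		case <-closed:
			return errClosed
		default: // no close pending; fall through to the real work
		}
		select {
		case <-closed:
			return errClosed
		case msg := <-outbox:
			fmt.Println("sent:", msg)
		}
	}
}

func main() {
	closed := make(chan struct{})
	outbox := make(chan string, 1)
	outbox <- "hello"
	close(closed)
	// Even though outbox is ready, the close is taken first.
	fmt.Println(writeLoop(closed, outbox))
}
```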
2024-09-15 21:13:56 +02:00
Jakob Borg
5b9d8a838f chore(stdiscosrv): ensure incoming addresses are sorted and unique 2024-09-15 17:01:16 +02:00
Jakob Borg
8b19cb1e11 chore(stdiscosrv): use zero-allocation merge in the common case 2024-09-15 15:26:40 +02:00
Jakob Borg
ce1e259bb4 chore(stdiscosrv): properly clean out old addresses from memory 2024-09-15 14:20:59 +02:00
Jakob Borg
2238a288d9 fix(model): shut down index sender faster (#9704) 2024-09-15 11:37:49 +02:00
Jakob Borg
c8ee2a5cf6 chore(stdiscosrv): calculate IPv6 GUA 2024-09-15 10:48:16 +02:00
Ross Smith II
1704827d04 chore(gui): update HumanDuration.js (#9710)
Relevant changes:

ko: Use correct names for month and hour in Korean (465eaed)
Hide unit count if 2 in Arabic (f90d847)
2024-09-15 10:21:18 +02:00
Jakob Borg
a156e88eef Merge branch 'infrastructure'
* infrastructure:
  chore(stdiscosrv): hide internal/undocumented flags
  chore(stdiscosrv): remove legacy replication
  chore(stdiscosrv): clean up s3 handling
  chore(stdiscosrv): less garbage in statistics
  chore(stdiscosrv): improve expire, logging
  chore(stdiscosrv): sched in loop
  chore(stdiscosrv): database writing logging
  chore(stdiscosrv): use order-preserving expire
  chore(stdiscosrv): simplify sorting
  chore(stdiscosrv): reduce allocations in cert handling
  chore(stdiscosrv): reduce unnecessary allocations in merge
  feat(stdiscosrv): enable HTTP profiler
  feat(discosrv): in-memory storage with S3 backing
  feat(stdiscosrv): make compression optional (and faster)
2024-09-13 08:50:24 +02:00
Jakob Borg
94d0195b63 chore(stdiscosrv): hide internal/undocumented flags 2024-09-13 08:49:13 +02:00
Jakob Borg
1616edcee3 chore(stdiscosrv): remove legacy replication 2024-09-13 08:49:13 +02:00
Jakob Borg
6505e123bb chore(stdiscosrv): clean up s3 handling 2024-09-13 08:48:04 +02:00
Jakob Borg
63e4659282 chore(stdiscosrv): less garbage in statistics 2024-09-13 08:48:04 +02:00
Jakob Borg
f3f5557c8e chore(stdiscosrv): improve expire, logging 2024-09-13 08:48:04 +02:00
Jakob Borg
b794726e1f chore(stdiscosrv): sched in loop 2024-09-13 08:48:04 +02:00
Jakob Borg
3d59740a0a chore(stdiscosrv): database writing logging 2024-09-13 08:48:04 +02:00
Jakob Borg
66fb65b01f chore(stdiscosrv): use order-preserving expire 2024-09-13 08:48:04 +02:00
Jakob Borg
5c2fcbfd19 chore(stdiscosrv): simplify sorting 2024-09-13 08:48:03 +02:00
Jakob Borg
f9b72330a8 chore(stdiscosrv): reduce allocations in cert handling 2024-09-13 08:48:03 +02:00
Jakob Borg
822b6ac36b chore(stdiscosrv): reduce unnecessary allocations in merge 2024-09-13 08:48:03 +02:00
Jakob Borg
77f7778292 feat(stdiscosrv): enable HTTP profiler 2024-09-13 08:48:03 +02:00
Jakob Borg
aed2c66e52 feat(discosrv): in-memory storage with S3 backing 2024-09-13 08:48:03 +02:00
Jakob Borg
68a1fd010f feat(stdiscosrv): make compression optional (and faster) 2024-09-13 08:33:03 +02:00
Simon Frei
ac8b3342ac chore(fs): only cache the cache for case FS, not the entire FS (#9701)
This would have addressed a recent issue that arose when re-ordering our
"filesystem layers", specifically moving the caseFilesystem to the
outermost layer. The previous cache included the filesystem, and as such
all the layers below. This isn't desirable (to put it mildly), as you
can create different variants of filesystems with different layers for
the same path and options. Concretely this did happen with the mtime
layer, which isn't always present. A test for the mtime related breakage
was added in #9687, and I intend to redo the caseFilesystem reordering
after this.

Ref: #9677
Followup to: #9687
2024-09-12 20:35:21 +02:00
Jakob Borg
0ea90dd932 build: add generating compat.json (#9700)
This is to add the generation of `compat.json` as a release artifact. It
describes the runtime requirements of the release in question. The next
step is to have the upgrade server use this information to filter
releases provided to clients. This is per the discussion in #9656

---------

Co-authored-by: Ross Smith II <ross@smithii.com>
2024-09-11 09:29:49 +02:00
Jakob Borg
718b1ce2b7 chore(discovery,upgrade): use regular TLS certificate verification (#9673)
This changes the two remaining instances where we use insecure HTTPS to
use standard HTTPS certificate verification.

When we introduced these things, almost a decade ago, HTTPS certificates
were expensive and annoying to get, much of the web was still HTTP, and
many devices seemed to not have up-to-date CA bundles.

Nowadays _all_ of the web is HTTPS and I'm skeptical that any device can
work well without understanding LetsEncrypt certificates in particular.

Our current discovery servers use hardcoded certificates which has
several issues:
- Not great for security if it leaks as there is no way to rotate it
- Not great for infrastructure flexibility as we can't use many load
balancer or TLS termination services
- The certificate is a very oddball ECDSA-SHA384 type certificate, which
has higher CPU cost than a more regular certificate; this has real
effects on our infrastructure

Using normal TLS certificates here improves these things.

I expect there will be some very few devices out there for which this
doesn't work. For the foreseeable future they can simply change the
config to use the old URLs and parameters -- it'll be years before we
can retire those entirely.

For the upgrade client this simply seems like better hygiene. While our
releases are signed anyway, protecting the metadata exchange is _better_
and, again, I doubt many clients will fail this today.
2024-09-11 09:29:19 +02:00
Simon Frei
29f7510f5a lib/fs: Add test reproducing missing mtimefs issue (ref #9677) (#9687)
The test is quite odd and specific, but it does reproduce the issue that
caused #9677, so I'd propose to add it to have a simple regression test
for the basic scenario. Also the option to the fakefs might come in handy
for other scenarios where you want to quickly test some behaviour on a
filesystem without nanosecond precision, without actually needing access
to one.
2024-09-10 13:36:17 +02:00
Syncthing Release Automation
a7f9ed4a80 gui, man, authors: Update docs, translations, and contributors 2024-09-09 03:45:15 +00:00
tomasz1986
1baefea410 gui: Actually filter scope ID out of IPv6 address when using Remote GUI (ref #8084) (#9688)
The current code does not work, because it uses a string in the
replace() method instead of regex. Thus, change it to a proper regex.

Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>
2024-09-08 12:42:32 +02:00
Jakob Borg
563cec8923 Merge branch 'release'
* release:
  Revert "lib/fs: Put the caseFS as the outermost layer (#9648)"
2024-09-06 09:39:09 +02:00
Jakob Borg
a3c340ece9 Revert "lib/fs: Put the caseFS as the outermost layer (#9648)"
This reverts commit 7517d18fbb.

Fixes #9677
2024-09-06 09:15:45 +02:00
André Colomb
cb24638ec9 lib/api: Correct ordering of Accept-Language codes by weight (fixes #9670) (#9671)
The preference for languages in the Accept-Language header field
should not be deduced from the listed order, but from the passed
"quality values", according to the HTTP specification:
https://httpwg.org/specs/rfc9110.html#field.accept-language

This implements the parsing of q-values and ordering within the API
backend, to not complicate things further in the GUI code.  Entries
with invalid (unparseable) quality values are discarded completely.

* gui: Fix API endpoint in comment.
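
A self-contained sketch of the q-value parsing described above (not the
actual API code; names are made up):

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
	"strings"
)

// parseAcceptLanguage orders languages by their q parameter (default 1.0)
// rather than by list position, and drops entries whose quality value
// does not parse.
func parseAcceptLanguage(header string) []string {
	type langQ struct {
		lang string
		q    float64
	}
	var parsed []langQ
	for _, part := range strings.Split(header, ",") {
		lang, params, _ := strings.Cut(strings.TrimSpace(part), ";")
		q := 1.0
		if params != "" {
			v, ok := strings.CutPrefix(strings.TrimSpace(params), "q=")
			if !ok {
				continue
			}
			f, err := strconv.ParseFloat(v, 64)
			if err != nil {
				continue // invalid quality value: discard the entry
			}
			q = f
		}
		parsed = append(parsed, langQ{strings.TrimSpace(lang), q})
	}
	sort.SliceStable(parsed, func(i, j int) bool { return parsed[i].q > parsed[j].q })
	out := make([]string, 0, len(parsed))
	for _, p := range parsed {
		out = append(out, p.lang)
	}
	return out
}

func main() {
	fmt.Println(parseAcceptLanguage("fi;q=0.8, en-US, sv;q=0.9"))
	// [en-US sv fi]: order follows the weights, not the listing order
}
```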
2024-09-02 10:15:04 +02:00
Syncthing Release Automation
2fb24dc2cc gui, man, authors: Update docs, translations, and contributors 2024-09-02 03:45:15 +00:00
tomasz1986
9aa2d2c92f gui: Fix incorrect UI language auto detection (fixes #9668) (#9669)
Currently, the code only checks whether the detected language partially
matches one of the available languages. This means that if the detected
language is "fi" but among the available languages there is only "fil"
and no "fi", then it will match "fi" with "fil", even though the two are
completely different languages.

With this change, the matching is only done when there is a hyphen in
the language code, e.g. "en" will match with "en-US".
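
The rule boils down to a hyphen-boundary prefix match; a tiny sketch
(hypothetical function name):

```go
package main

import (
	"fmt"
	"strings"
)

// langMatches accepts an exact match, or a prefix match only across a
// hyphen boundary, so "en" matches "en-US" but "fi" no longer matches "fil".
func langMatches(available, detected string) bool {
	return available == detected ||
		strings.HasPrefix(available, detected+"-")
}

func main() {
	fmt.Println(langMatches("en-US", "en")) // true
	fmt.Println(langMatches("fil", "fi"))   // false
}
```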

Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>
2024-08-31 22:47:10 +02:00
Jakob Borg
d1c5100c98 Merge branch 'release'
* release:
  lib/upgrade: Send OS version header to upgrade server (#9663)
2024-08-30 11:32:49 +02:00
Jakob Borg
42e677c055 lib/model, lib/protocol: Index sending/receiving debugging (#9657)
This adds guardrails to the index sending and receiving, to verify that
what we thinks is happening is what actually happens.
2024-08-28 15:00:19 +02:00
Jakob Borg
27bba2c0c2 lib/upgrade: Send OS version header to upgrade server (#9663)
This adds a header with the operating system version, verbatim in
whatever format the operating system reports it, to the upgrade check.
The intention is that the upgrade server can use this information to
filter out (or maybe just mark) potentially unsupported upgrades.
2024-08-28 08:32:03 +02:00
Jakob Borg
feff334547 lib/upgrade: Send OS version header to upgrade server (#9663)
This adds a header with the operating system version, verbatim in
whatever format the operating system reports it, to the upgrade check.
The intention is that the upgrade server can use this information to
filter out (or maybe just mark) potentially unsupported upgrades.
2024-08-28 08:31:10 +02:00
Syncthing Release Automation
713cf357ce gui, man, authors: Update docs, translations, and contributors 2024-08-26 03:45:23 +00:00
Jakob Borg
5342bec1b7 lib/protocol: Further interface refactor (#9396)
This is a symmetric change to #9375 -- where that PR changed the
protocol->model interface, this changes the model->protocol one.
2024-08-24 12:45:10 +02:00
Emil Lundberg
7df75e681d gui: Replace global "Panel padding decrease" style with targeted class (#9659)
Transplanted from https://github.com/emlun/syncthing/pull/8 (meta-PR
into https://github.com/syncthing/syncthing/pull/9175) by request of
@acolomb (see:
https://github.com/emlun/syncthing/pull/8#discussion_r1724470574).

This padding decrease currently applies to _all_ collapsible panels, but
it may not be appropriate for all of them. In particular, it will not be
appropriate for the collapsible panels introduced in
https://github.com/emlun/syncthing/pull/8.
2024-08-21 15:02:45 +02:00
Jakob Borg
8dc826b234 build: use Go 1.23, require minimum 1.22 (#9651)
🥳

---------

Co-authored-by: Ross Smith II <ross@smithii.com>
2024-08-19 20:26:08 +02:00
Syncthing Release Automation
9ef37e1485 gui, man, authors: Update docs, translations, and contributors 2024-08-19 03:45:18 +00:00
Simon Frei
7517d18fbb lib/fs: Put the caseFS as the outermost layer (#9648)
Reasoning in comments. The main motivation is to avoid all the case
checks when walking the filesystem.
2024-08-13 10:59:31 +02:00
André Colomb
42d0fee536 gui: Add Irish (ga) translation template (#9646)
Based on user request from Weblate, user `@aindriu80`.

Looks promising based on the profile:
https://hosted.weblate.org/user/aindriu80/ Not sure whether almost
30,000 translations in about one month is realistic for a human though.
2024-08-12 13:32:29 +02:00
Syncthing Release Automation
2ca9d3b5c5 gui, man, authors: Update docs, translations, and contributors 2024-08-12 03:45:16 +00:00
Tommy van der Vorst
9cde068f2a lib/syncthing: Add wrapper for access to model (#9627)
### Purpose

Wrap access to Model for users that use the syncthing Go package. See
discussion:
https://github.com/syncthing/syncthing/pull/9619#pullrequestreview-2212484910

### Testing

It works with the iOS app. Other than that, there are no current users
of this API (to my knowledge) as Model was only exposed recently for
the iOS app.
2024-08-11 20:20:43 +02:00
Gusted
1243083831 cli: Remove go-shlex dependency (#9644) 2024-08-11 11:37:18 +02:00
Gusted
356c5055ad lib/sha256: Remove it (#9643)
### Purpose

Remove the `lib/sha256` package, because it's no longer necessary. Go's
standard library now has the same performance and has been on par with
`sha256-simd` since [Go
1.21](1a64574f42).
Therefore using `sha256-simd` has no benefits anymore.

ARM has had optimized sha256 assembly code since
7b8a7f8272;
`sha256-simd` published their results before that optimized assembly was
implemented
(f941fedda8).
The assembly looks very similar and the benchmarks in the Go commit
match those of `sha256-simd`.

This patch removes all of the related code of `lib/sha256` and makes
`crypto/sha256` the 'default'.

Benchmark of `sha256-simd` and `crypto/sha256`:
<details>

```
cpu: AMD Ryzen 5 3600X 6-Core Processor
                │  simd.txt   │               go.txt                │
                │   sec/op    │    sec/op     vs base               │
Hash/8Bytes-12    63.25n ± 1%    73.38n ± 1%  +16.02% (p=0.002 n=6)
Hash/64Bytes-12   98.73n ± 1%   105.30n ± 1%   +6.65% (p=0.002 n=6)
Hash/1K-12        567.2n ± 1%    572.8n ± 1%   +0.99% (p=0.002 n=6)
Hash/8K-12        4.062µ ± 1%    4.062µ ± 1%        ~ (p=0.396 n=6)
Hash/1M-12        512.1µ ± 0%    510.6µ ± 1%        ~ (p=0.485 n=6)
Hash/5M-12        2.556m ± 1%    2.564m ± 0%        ~ (p=0.093 n=6)
Hash/10M-12       5.112m ± 0%    5.127m ± 0%        ~ (p=0.093 n=6)
geomean           13.82µ         14.27µ        +3.28%

                │   simd.txt   │               go.txt                │
                │     B/s      │     B/s       vs base               │
Hash/8Bytes-12    120.6Mi ± 1%   104.0Mi ± 1%  -13.81% (p=0.002 n=6)
Hash/64Bytes-12   618.2Mi ± 1%   579.8Mi ± 1%   -6.22% (p=0.002 n=6)
Hash/1K-12        1.682Gi ± 1%   1.665Gi ± 1%   -0.98% (p=0.002 n=6)
Hash/8K-12        1.878Gi ± 1%   1.878Gi ± 1%        ~ (p=0.310 n=6)
Hash/1M-12        1.907Gi ± 0%   1.913Gi ± 1%        ~ (p=0.485 n=6)
Hash/5M-12        1.911Gi ± 1%   1.904Gi ± 0%        ~ (p=0.093 n=6)
Hash/10M-12       1.910Gi ± 0%   1.905Gi ± 0%        ~ (p=0.093 n=6)
geomean           1.066Gi        1.032Gi        -3.18%
```

</details>


### Testing

Compiled and tested on Linux.

### Documentation

https://github.com/syncthing/docs/pull/874
2024-08-10 12:58:20 +01:00
Jakob Borg
19693734a3 build: Update dependencies (#9640) 2024-08-09 16:04:51 +02:00
Ross Smith II
17e60b9e0c Chmod -x non-executable files (fixes #9629) (#9630)
Fixed via
```
git ls-files -s | grep 100755 | grep -E -v '(run|sh)$' | cut -f 2 | xargs git update-index --chmod -x
```
See #9629.
2024-08-05 18:32:23 +02:00
Syncthing Release Automation
ac22b2d00a gui, man, authors: Update docs, translations, and contributors 2024-08-05 03:45:25 +00:00
Tommy van der Vorst
de0b4270df all: minimal set of changes for iOS app (#9619)
### Purpose

This PR contains the set of changes needed to make Syncthing work on iOS
for [my iOS app for
Syncthing](https://github.com/pixelspark/sushitrain).

Most changes originate from [the Mobius Sync
fork](http://github.com/MobiusSync/syncthing/tree/ios). I have removed
the changes from their fork that are not strictly needed for my app
(i.e. their changes to the GUI and command line utilities, for instance)
and squashed it all in a single commit.

In summary, the changes are:

* Resolve non-absolute paths to the 'Documents' folder (basically the
only one an app can/should write user data to by default on iOS)
* Tweaking of build flags/conditions for iOS (i.e. determine which
basicfs_watch, ignoreresult variant to build for iOS)
* Disable upgrade mechanism on iOS
* Make `RequestGlobal` and `PullerProgress` public symbols
* Expose syncthing.app's Model instance (app.M)
* Add no-op stub for SetLowPriority on iOS

I would very much appreciate these changes to be (eventually) merged to
mainline syncthing, as this would allow my iOS app to track the mainline
source code directly and remove the need (for me at least) for
maintaining a separate fork. Perhaps the Mobius folks can also benefit
from this (although as noted this branch does not contain their changes
to e.g. the GUI).

### Testing

This branch has been tested with the iOS app and appears to work fine.
The full set of MobiusSync changes has been used before with success.

### Screenshots

n/a

### Documentation

There should be no visible changes for users due to this set of changes.

---------

Co-authored-by: Simon Pickup <simon@pickupinfinity.com>
2024-07-31 07:31:14 +02:00
Syncthing Release Automation
e738af7c56 gui, man, authors: Update docs, translations, and contributors 2024-07-29 03:45:25 +00:00
Syncthing Release Automation
a28441a9bf gui, man, authors: Update docs, translations, and contributors 2024-07-22 03:45:28 +00:00
Simon Frei
2e313716e5 etc: Remove restart on suspend systemd service (ref #8448) (#9611)
The option to do the same in our own monitor process was removed a
long time ago.
2024-07-17 09:29:49 +03:00
Syncthing Release Automation
0b5ff1f5f7 gui, man, authors: Update docs, translations, and contributors 2024-07-15 03:45:20 +00:00
Simon Frei
0fe6d97d3d lib/fs: Add missing locks to fakeFile methods (fixes #9499) (#9603)
fixes #9499
2024-07-09 10:33:30 +02:00
Simon Frei
0756e42a85 lib/api: Increase test request timeout (fixes #9455) (#9602)
Fixes #9455
2024-07-09 00:37:44 +02:00
Syncthing Release Automation
13ebe1c87f gui, man, authors: Update docs, translations, and contributors 2024-07-08 03:45:16 +00:00
Simon Frei
aea7fa5f22 lib/ignore: Remove unused patterns in cache (#9601)
Tiny cleanup I noticed while trying to fix/test another issue
(https://github.com/syncthing/syncthing/pull/9600). I briefly tried to
figure out what it was used for in the past, but gave up without
results.
2024-07-02 11:01:00 +00:00
Simon Frei
403ce7e597 lib/ignore: Fix caching of filenames with path separators on windows (#9600)
Previously we queried the cache with backslashes, and stored entries with
slashes. As in, no cache hits ever for non-toplevel files. I also
eventually remembered that the cache is disabled by default, so this is a bit
bit pointless, but still right :P
2024-07-02 10:58:06 +00:00
Syncthing Release Automation
4704d3bc48 gui, man, authors: Update docs, translations, and contributors 2024-07-01 03:45:16 +00:00
André Colomb
2794b04243 gui: Add Filipino (fil) translation template (#9599) 2024-06-29 17:53:32 +02:00
Syncthing Release Automation
1ce64971fd gui, man, authors: Update docs, translations, and contributors 2024-06-24 03:45:42 +00:00
Syncthing Release Automation
eb6d80eac4 gui, man, authors: Update docs, translations, and contributors 2024-06-17 03:45:27 +00:00
Syncthing Release Automation
a8db3351ae gui, man, authors: Update docs, translations, and contributors 2024-06-10 03:45:25 +00:00
Ross Smith II
23a900e096 gui: Use localised time in duration (#9552)
https://github.com/syncthing/syncthing/pull/8291 inspired me to develop
this. I tested it with all the languages Syncthing currently supports,
and they all work.

The only issue is that when you change the language in the GUI, you have
to either refresh the page, or wait a few seconds for the page to
refresh by itself, before the duration is translated into the new
language.

### Screenshots


![2024-05-20_21-47-58](https://github.com/syncthing/syncthing/assets/220772/7e3b371e-3495-4e3e-853a-b5a41215e6c7)

### Documentation

The documentation for the translation widget is at
https://github.com/EvanHahn/HumanizeDuration.js/blob/main/README.md
2024-06-05 06:05:51 -04:00
Jakob Borg
5a304cf295 build: Skip autoprocs for build script 2024-06-04 08:55:37 -04:00
Jakob Borg
136b3742bf build: Update dependencies (#9565) 2024-06-04 13:58:49 +02:00
Jakob Borg
21e0f98fe2 Merge branch 'infrastructure'
* infrastructure:
  cmd/stupgrades: Basic process metrics
  cmd/stcrashreceiver: Ignore patterns, improve metrics
  cmd/strelaypoolsrv: More compact response, improved metrics
  cmd/stdiscosrv: Add AMQP replication
2024-06-04 07:18:35 -04:00
Jakob Borg
2bb5b2244b cmd/stupgrades: Basic process metrics 2024-06-03 19:50:28 +02:00
Jakob Borg
2f281799c1 cmd/stcrashreceiver: Ignore patterns, improve metrics 2024-06-03 19:50:28 +02:00
Jakob Borg
18a58a2ddc cmd/strelaypoolsrv: More compact response, improved metrics 2024-06-03 19:50:28 +02:00
Jakob Borg
f283215fce cmd/stdiscosrv: Add AMQP replication 2024-06-03 19:50:28 +02:00
Syncthing Release Automation
495809ac9e gui, man, authors: Update docs, translations, and contributors 2024-06-03 03:45:18 +00:00
527 changed files with 19201 additions and 36088 deletions

.github/CODEOWNERS vendored (2 changes)

@@ -1,2 +0,0 @@
/AUTHORS @calmh
/*.md @calmh

@@ -1,6 +1,7 @@
name: Feature request
description: File a new feature request
labels: ["enhancement", "needs-triage"]
type: Feature
body:
- type: textarea

@@ -1,6 +1,7 @@
name: Bug report
description: If you're actually looking for support instead, see "I need help / I have a question".
labels: ["bug", "needs-triage"]
type: Bug
body:
- type: markdown
attributes:

@@ -3,11 +3,11 @@ updates:
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: weekly
interval: monthly
open-pull-requests-limit: 10
- package-ecosystem: "gomod"
directory: "/"
schedule:
interval: weekly
interval: monthly
open-pull-requests-limit: 10

.github/labeler.yml vendored (new file, 23 lines)

@@ -0,0 +1,23 @@
version: 1
labels:
- label: enhancement
title: ^feat\b
- label: bug
title: ^fix\b
- label: documentation
title: ^docs\b
- label: chore
title: ^chore\b
- label: chore
title: ^refactor\b
- label: build
title: ^build\b
- label: dependencies
title: ^build\(deps\)\b

.github/release.yml vendored (new file, 17 lines)

@@ -0,0 +1,17 @@
changelog:
exclude:
labels:
- dependencies
categories:
- title: Fixes
labels:
- bug
- title: Features
labels:
- enhancement
- title: Other
labels:
- '*'

@@ -7,11 +7,15 @@ on:
- infra-*
env:
GO_VERSION: "~1.22.3"
GO_VERSION: "~1.24.0"
CGO_ENABLED: "0"
BUILD_USER: docker
BUILD_HOST: github.syncthing.net
permissions:
contents: read
packages: write
jobs:
docker-syncthing:
name: Build and push Docker images
@@ -41,6 +45,13 @@ jobs:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GHCR
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build binaries
run: |
for arch in arm64 amd64; do
@@ -53,13 +64,13 @@ jobs:
- name: Set Docker tags (all branches)
run: |
tags=syncthing/${{ matrix.pkg }}:${{ github.sha }}
tags=docker.io/syncthing/${{ matrix.pkg }}:${{ github.sha }},ghcr.io/syncthing/infra/${{ matrix.pkg }}:${{ github.sha }}
echo "TAGS=$tags" >> $GITHUB_ENV
- name: Set Docker tags (latest)
if: github.ref == 'refs/heads/infrastructure'
run: |
tags=syncthing/${{ matrix.pkg }}:latest,${{ env.TAGS }}
tags=docker.io/syncthing/${{ matrix.pkg }}:latest,ghcr.io/syncthing/infra/${{ matrix.pkg }}:latest,${{ env.TAGS }}
echo "TAGS=$tags" >> $GITHUB_ENV
- name: Build and push

.github/workflows/build-nightly.yaml vendored (new file, 18 lines)

@@ -0,0 +1,18 @@
name: Build Syncthing (Nightly)
on:
schedule:
# Run nightly build at 05:00 UTC
- cron: '00 05 * * *'
workflow_dispatch:
permissions:
contents: write
packages: write
jobs:
build-syncthing:
uses: ./.github/workflows/build-syncthing.yaml
# if we only want nightlies to run for specific users:
# if: contains(fromJSON('["syncthing", "calmh"]'), github.repository_owner)
secrets: inherit

@@ -3,16 +3,17 @@ name: Build Syncthing
on:
pull_request:
push:
schedule:
# Run nightly build at 05:00 UTC
- cron: '00 05 * * *'
branches-ignore:
- release
- release-rc*
workflow_call:
workflow_dispatch:
env:
# The go version to use for builds. We set check-latest to true when
# installing, so we get the latest patch version that matches the
# expression.
GO_VERSION: "~1.22.3"
GO_VERSION: "~1.24.0"
# Optimize compatibility on the slow archictures.
GO386: softfloat
@@ -48,7 +49,7 @@ jobs:
runner: ["windows-latest", "ubuntu-latest", "macos-latest"]
# The oldest version in this list should match what we have in our go.mod.
# Variables don't seem to be supported here, or we could have done something nice.
go: ["~1.21.7", "~1.22.3"]
go: ["~1.23.0", "~1.24.0"]
runs-on: ${{ matrix.runner }}
steps:
- name: Set git to use LF
@@ -83,31 +84,11 @@ jobs:
go run build.go test | go-test-json-to-loki
env:
GOFLAGS: "-json"
LOKI_URL: ${{ vars.LOKI_URL }}
LOKI_USER: ${{ vars.LOKI_USER }}
LOKI_URL: ${{ secrets.LOKI_URL }}
LOKI_USER: ${{ secrets.LOKI_USER }}
LOKI_PASSWORD: ${{ secrets.LOKI_PASSWORD }}
LOKI_LABELS: "go=${{ matrix.go }},runner=${{ matrix.runner }},repo=${{ github.repository }},ref=${{ github.ref }}"
#
# Meta checks for formatting, copyright, etc
#
correctness:
name: Check correctness
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache: false
check-latest: true
- name: Check correctness
run: |
go test -v ./meta
#
# The basic checks job is a virtual one that depends on the matrix tests,
# the correctness checks, and various builds that we always do. This makes
@@ -122,11 +103,11 @@ jobs:
runs-on: ubuntu-latest
needs:
- build-test
- correctness
- package-linux
- package-cross
- package-source
- package-debian
- package-windows
- govulncheck
steps:
- uses: actions/checkout@v4
@@ -137,8 +118,6 @@ jobs:
package-windows:
name: Package for Windows
if: (github.event_name == 'push' || github.event_name == 'workflow_dispatch') && (github.ref == 'refs/heads/release' || startsWith(github.ref, 'refs/heads/release-'))
environment: signing
runs-on: windows-latest
steps:
- name: Set git to use LF
@@ -153,6 +132,7 @@ jobs:
- uses: actions/checkout@v4
with:
fetch-depth: 0
ref: ${{ github.ref }} # https://github.com/actions/checkout/issues/882
- uses: actions/setup-go@v5
with:
@@ -178,22 +158,74 @@ jobs:
- name: Create packages
run: |
go run build.go -goarch amd64 zip
go run build.go -goarch arm zip
go run build.go -goarch arm64 zip
go run build.go -goarch 386 zip
$targets = 'syncthing', 'stdiscosrv', 'strelaysrv'
$archs = 'amd64', 'arm', 'arm64', '386'
foreach ($arch in $archs) {
foreach ($tgt in $targets) {
go run build.go -goarch $arch zip $tgt
}
}
env:
CGO_ENABLED: "0"
CODESIGN_SIGNTOOL: ${{ secrets.CODESIGN_SIGNTOOL }}
CODESIGN_CERTIFICATE_BASE64: ${{ secrets.CODESIGN_CERTIFICATE_BASE64 }}
CODESIGN_CERTIFICATE_PASSWORD: ${{ secrets.CODESIGN_CERTIFICATE_PASSWORD }}
CODESIGN_TIMESTAMP_SERVER: ${{ secrets.CODESIGN_TIMESTAMP_SERVER }}
- name: Archive artifacts
uses: actions/upload-artifact@v4
with:
name: unsigned-packages-windows
path: "*.zip"
codesign-windows:
name: Codesign for Windows
if: (github.event_name == 'push' || github.event_name == 'workflow_dispatch') && (github.ref == 'refs/heads/release-nightly' || startsWith(github.ref, 'refs/tags/v'))
environment: release
runs-on: windows-latest
needs:
- package-windows
steps:
- name: Download artifacts
uses: actions/download-artifact@v4
with:
name: unsigned-packages-windows
path: packages
- name: Extract packages
working-directory: packages
run: |
$files = Get-ChildItem "." -Filter *.zip
foreach ($file in $files) {
7z x $file.Name
}
- name: Sign files with Trusted Signing
uses: azure/trusted-signing-action@v0.5.1
with:
azure-tenant-id: ${{ secrets.AZURE_TRUSTED_SIGNING_TENANT_ID }}
azure-client-id: ${{ secrets.AZURE_TRUSTED_SIGNING_CLIENT_ID }}
azure-client-secret: ${{ secrets.AZURE_TRUSTED_SIGNING_CLIENT_SECRET }}
endpoint: ${{ secrets.AZURE_TRUSTED_SIGNING_ENDPOINT }}
trusted-signing-account-name: ${{ secrets.AZURE_TRUSTED_SIGNING_ACCOUNT }}
certificate-profile-name: ${{ secrets.AZURE_TRUSTED_SIGNING_PROFILE }}
files-folder: ${{ github.workspace }}\packages
files-folder-filter: exe
files-folder-recurse: true
file-digest: SHA256
timestamp-rfc3161: http://timestamp.acs.microsoft.com
timestamp-digest: SHA256
- name: Repackage packages
working-directory: packages
run: |
$files = Get-ChildItem "." -Filter *.zip
foreach ($file in $files) {
Remove-Item $file.Name
7z a -tzip $file.Name $file.BaseName
}
- name: Archive artifacts
uses: actions/upload-artifact@v4
with:
name: packages-windows
path: syncthing-windows-*.zip
path: "packages/*.zip"
#
# Linux
@@ -206,6 +238,7 @@ jobs:
- uses: actions/checkout@v4
with:
fetch-depth: 0
ref: ${{ github.ref }} # https://github.com/actions/checkout/issues/882
- uses: actions/setup-go@v5
with:
@@ -229,7 +262,9 @@ jobs:
run: |
archs=$(go tool dist list | grep linux | sed 's#linux/##')
for goarch in $archs ; do
go run build.go -goarch "$goarch" tar
for tgt in syncthing stdiscosrv strelaysrv ; do
go run build.go -goarch "$goarch" tar "$tgt"
done
done
env:
CGO_ENABLED: "0"
@@ -238,7 +273,9 @@ jobs:
uses: actions/upload-artifact@v4
with:
name: packages-linux
path: syncthing-linux-*.tar.gz
path: |
*.tar.gz
compat.json
#
# macOS
@@ -246,13 +283,14 @@ jobs:
package-macos:
name: Package for macOS
if: (github.event_name == 'push' || github.event_name == 'workflow_dispatch') && (github.ref == 'refs/heads/release' || startsWith(github.ref, 'refs/heads/release-'))
environment: signing
if: (github.event_name == 'push' || github.event_name == 'workflow_dispatch') && (github.ref == 'refs/heads/release-nightly' || startsWith(github.ref, 'refs/tags/v'))
environment: release
runs-on: macos-latest
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
ref: ${{ github.ref }} # https://github.com/actions/checkout/issues/882
- uses: actions/setup-go@v5
with:
@@ -299,7 +337,9 @@ jobs:
- name: Create package (amd64)
run: |
go run build.go -goarch amd64 zip
for tgt in syncthing stdiscosrv strelaysrv ; do
go run build.go -goarch amd64 zip "$tgt"
done
env:
CGO_ENABLED: "1"
@@ -313,7 +353,9 @@ jobs:
go "\$@"
EOT
chmod 755 xgo.sh
go run build.go -gocmd ./xgo.sh -goarch arm64 zip
for tgt in syncthing stdiscosrv strelaysrv ; do
go run build.go -gocmd ./xgo.sh -goarch arm64 zip "$tgt"
done
env:
CGO_ENABLED: "1"
@@ -337,15 +379,14 @@ jobs:
uses: actions/upload-artifact@v4
with:
name: packages-macos
path: syncthing-*.zip
path: "*.zip"
notarize-macos:
name: Notarize for macOS
if: (github.event_name == 'push' || github.event_name == 'workflow_dispatch') && (github.ref == 'refs/heads/release' || startsWith(github.ref, 'refs/heads/release-'))
environment: signing
if: (github.event_name == 'push' || github.event_name == 'workflow_dispatch') && (github.ref == 'refs/heads/release-nightly' || startsWith(github.ref, 'refs/tags/v'))
environment: release
needs:
- package-macos
- basics
runs-on: macos-latest
steps:
- name: Download artifacts
@@ -357,7 +398,7 @@ jobs:
run: |
APPSTORECONNECT_API_KEY_PATH="$RUNNER_TEMP/apikey.p8"
echo "$APPSTORECONNECT_API_KEY" | base64 -d -o "$APPSTORECONNECT_API_KEY_PATH"
for file in syncthing-macos-*.zip ; do
for file in *-macos-*.zip ; do
xcrun notarytool submit \
-k "$APPSTORECONNECT_API_KEY_PATH" \
-d "$APPSTORECONNECT_API_KEY_ID" \
@@ -380,6 +421,7 @@ jobs:
- uses: actions/checkout@v4
with:
fetch-depth: 0
ref: ${{ github.ref }} # https://github.com/actions/checkout/issues/882
- uses: actions/setup-go@v5
with:
@@ -422,9 +464,11 @@ jobs:
goos="${plat%/*}"
goarch="${plat#*/}"
echo "::group ::$plat"
if ! go run build.go -goos "$goos" -goarch "$goarch" tar 2>/dev/null; then
echo "::warning ::Failed to build for $plat"
fi
for tgt in syncthing stdiscosrv strelaysrv ; do
if ! go run build.go -goos "$goos" -goarch "$goarch" tar "$tgt" 2>/dev/null; then
echo "::warning ::Failed to build $tgt for $plat"
fi
done
echo "::endgroup::"
done
env:
@@ -434,7 +478,7 @@ jobs:
uses: actions/upload-artifact@v4
with:
name: packages-other
path: syncthing-*.tar.gz
path: "*.tar.gz"
#
# Source
@@ -447,6 +491,7 @@ jobs:
- uses: actions/checkout@v4
with:
fetch-depth: 0
ref: ${{ github.ref }} # https://github.com/actions/checkout/issues/882
- uses: actions/setup-go@v5
with:
@@ -482,11 +527,10 @@ jobs:
sign-for-upgrade:
name: Sign for upgrade
if: (github.event_name == 'push' || github.event_name == 'workflow_dispatch') && (github.ref == 'refs/heads/release' || startsWith(github.ref, 'refs/heads/release-'))
environment: signing
if: (github.event_name == 'push' || github.event_name == 'workflow_dispatch') && (github.ref == 'refs/heads/release-nightly' || startsWith(github.ref, 'refs/tags/v'))
environment: release
needs:
- basics
- package-windows
- codesign-windows
- package-linux
- package-macos
- package-cross
@@ -496,6 +540,7 @@ jobs:
- uses: actions/checkout@v4
with:
fetch-depth: 0
ref: ${{ github.ref }} # https://github.com/actions/checkout/issues/882
- uses: actions/checkout@v4
with:
@@ -514,7 +559,7 @@ jobs:
- name: Install signing tool
run: |
go install ./cmd/stsigtool
go install ./cmd/dev/stsigtool
- name: Sign archives
run: |
@@ -529,30 +574,44 @@ jobs:
env:
STSIGTOOL_PRIVATE_KEY: ${{ secrets.STSIGTOOL_PRIVATE_KEY }}
- name: Create and sign .asc files
- name: Create shasum files
run: |
sudo apt update
sudo apt -y install gnupg
export SIGNING_KEY="$RUNNER_TEMP/gpg-secret.asc"
echo "$GNUPG_SIGNING_KEY_BASE64" | base64 -d > "$SIGNING_KEY"
gpg --import < "$SIGNING_KEY"
pushd packages
files=(*.tar.gz *.zip)
sha1sum "${files[@]}" | gpg --clearsign > sha1sum.txt.asc
sha256sum "${files[@]}" | gpg --clearsign > sha256sum.txt.asc
gpg --sign --armour --detach syncthing-source-*.tar.gz
sha1sum "${files[@]}" > sha1sum.txt
sha256sum "${files[@]}" > sha256sum.txt
popd
rm -f "$SIGNING_KEY" .gnupg
version=$(go run build.go version)
echo "VERSION=$version" >> $GITHUB_ENV
- name: Sign shasum files
uses: docker://ghcr.io/kastelo/ezapt:latest
with:
args:
sign
packages/sha1sum.txt packages/sha256sum.txt
env:
GNUPG_SIGNING_KEY_BASE64: ${{ secrets.GNUPG_SIGNING_KEY_BASE64 }}
EZAPT_KEYRING_BASE64: ${{ secrets.APT_GPG_KEYRING_BASE64 }}
- name: Sign source
uses: docker://ghcr.io/kastelo/ezapt:latest
with:
args:
sign --detach --ascii
packages/syncthing-source-${{ env.VERSION }}.tar.gz
env:
EZAPT_KEYRING_BASE64: ${{ secrets.APT_GPG_KEYRING_BASE64 }}
- name: Archive artifacts
uses: actions/upload-artifact@v4
with:
name: packages-signed
path: packages/*
path: |
packages/*.tar.gz
packages/*.zip
packages/*.asc
packages/*.json
#
# Debian
@@ -565,6 +624,7 @@ jobs:
- uses: actions/checkout@v4
with:
fetch-depth: 0
ref: ${{ github.ref }} # https://github.com/actions/checkout/issues/882
- uses: actions/setup-go@v5
with:
@@ -595,7 +655,9 @@ jobs:
- name: Package for Debian
run: |
for arch in amd64 i386 armhf armel arm64 ; do
go run build.go -no-upgrade -installsuffix=no-upgrade -goarch "$arch" deb
for tgt in syncthing stdiscosrv strelaysrv ; do
go run build.go -no-upgrade -installsuffix=no-upgrade -goarch "$arch" deb "$tgt"
done
done
env:
BUILD_USER: debian
@@ -613,10 +675,9 @@ jobs:
publish-nightly:
name: Publish nightly build
if: (github.event_name == 'push' || github.event_name == 'workflow_dispatch') && startsWith(github.ref, 'refs/heads/release-nightly')
environment: signing
environment: release
needs:
- sign-for-upgrade
- notarize-macos
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
@@ -655,7 +716,7 @@ jobs:
RCLONE_CONFIG_OBJSTORE_REGION: ${{ secrets.S3_REGION }}
RCLONE_CONFIG_OBJSTORE_ACL: public-read
with:
args: sync packages objstore:${{ secrets.S3_BUCKET }}/nightly
args: sync -v --no-update-modtime packages objstore:nightly
#
# Push release artifacts to Spaces
@@ -663,8 +724,10 @@ jobs:
publish-release-files:
name: Publish release files
if: (github.event_name == 'push' || github.event_name == 'workflow_dispatch') && github.ref == 'refs/heads/release'
environment: signing
if: (github.event_name == 'push' || github.event_name == 'workflow_dispatch') && (github.ref == 'refs/heads/release' || startsWith(github.ref, 'refs/tags/v'))
environment: release
permissions:
contents: write
needs:
- sign-for-upgrade
- package-debian
@@ -673,6 +736,7 @@ jobs:
- uses: actions/checkout@v4
with:
fetch-depth: 0
ref: ${{ github.ref }} # https://github.com/actions/checkout/issues/882
- name: Download signed packages
uses: actions/download-artifact@v4
@@ -708,7 +772,7 @@ jobs:
RCLONE_CONFIG_OBJSTORE_REGION: ${{ secrets.S3_REGION }}
RCLONE_CONFIG_OBJSTORE_ACL: public-read
with:
args: sync packages objstore:${{ secrets.S3_BUCKET }}/release/${{ env.VERSION }}
args: sync -v --no-update-modtime packages objstore:release/${{ env.VERSION }}
- name: Push to object store (latest)
uses: docker://docker.io/rclone/rclone:latest
@@ -721,7 +785,121 @@ jobs:
RCLONE_CONFIG_OBJSTORE_REGION: ${{ secrets.S3_REGION }}
RCLONE_CONFIG_OBJSTORE_ACL: public-read
with:
args: sync objstore:${{ secrets.S3_BUCKET }}/release/${{ env.VERSION }} objstore:${{ secrets.S3_BUCKET }}/release/latest
args: sync -v --no-update-modtime objstore:release/${{ env.VERSION }} objstore:release/latest
- name: Create GitHub releases and push binaries
run: |
maybePrerelease=""
if [[ $VERSION == *-* ]]; then
maybePrerelease="--prerelease"
fi
export GH_PROMPT_DISABLED=1
if ! gh release view --json name "$VERSION" >/dev/null 2>&1 ; then
gh release create "$VERSION" \
$maybePrerelease \
--title "$VERSION" \
--notes-from-tag
fi
gh release upload --clobber "$VERSION" \
packages/*.asc packages/*.json \
packages/syncthing-*.tar.gz \
packages/syncthing-*.zip \
packages/syncthing_*.deb
PKGS=$(pwd)/packages
cd /tmp # gh will not release for repo x while inside repo y
for repo in relaysrv discosrv ; do
export GH_REPO="syncthing/$repo"
if ! gh release view --json name "$VERSION" >/dev/null 2>&1 ; then
gh release create "$VERSION" \
$maybePrerelease \
--title "$VERSION" \
--notes "https://github.com/syncthing/syncthing/releases/tag/$VERSION"
fi
gh release upload --clobber "$VERSION" \
$PKGS/*.asc \
$PKGS/*${repo}*
done
env:
GH_TOKEN: ${{ secrets.ACTIONS_GITHUB_TOKEN }}
#
# Push Debian/APT archive
#
publish-apt:
name: Publish APT
if: (github.event_name == 'push' || github.event_name == 'workflow_dispatch') && (github.ref == 'refs/heads/release-nightly' || startsWith(github.ref, 'refs/tags/v'))
environment: release
needs:
- package-debian
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
ref: ${{ github.ref }} # https://github.com/actions/checkout/issues/882
- name: Download packages
uses: actions/download-artifact@v4
with:
name: debian-packages
path: packages
- name: Set version
run: |
version=$(go run build.go version)
echo "Version: $version"
echo "VERSION=$version" >> $GITHUB_ENV
# Decide whether packages should go to stable, candidate or nightly
- name: Prepare packages
run: |
kind=stable
if [[ $VERSION == *-rc.[0-9] ]] ; then
kind=candidate
elif [[ $VERSION == *-* ]] ; then
kind=nightly
fi
echo "Kind: $kind"
mkdir -p packages/syncthing/$kind
mv packages/*.deb packages/syncthing/$kind
- name: Pull archive
uses: docker://docker.io/rclone/rclone:latest
env:
RCLONE_CONFIG_OBJSTORE_TYPE: s3
RCLONE_CONFIG_OBJSTORE_PROVIDER: ${{ secrets.S3_PROVIDER }}
RCLONE_CONFIG_OBJSTORE_ACCESS_KEY_ID: ${{ secrets.S3_ACCESS_KEY_ID }}
RCLONE_CONFIG_OBJSTORE_SECRET_ACCESS_KEY: ${{ secrets.S3_SECRET_ACCESS_KEY }}
RCLONE_CONFIG_OBJSTORE_ENDPOINT: ${{ secrets.S3_ENDPOINT }}
RCLONE_CONFIG_OBJSTORE_REGION: ${{ secrets.S3_REGION }}
RCLONE_CONFIG_OBJSTORE_ACL: public-read
with:
args: sync objstore:apt/dists dists
- name: Update archive
uses: docker://ghcr.io/kastelo/ezapt:latest
with:
args:
publish
--add packages
--dists dists
env:
EZAPT_KEYRING_BASE64: ${{ secrets.APT_GPG_KEYRING_BASE64 }}
- name: Push archive
uses: docker://docker.io/rclone/rclone:latest
env:
RCLONE_CONFIG_OBJSTORE_TYPE: s3
RCLONE_CONFIG_OBJSTORE_PROVIDER: ${{ secrets.S3_PROVIDER }}
RCLONE_CONFIG_OBJSTORE_ACCESS_KEY_ID: ${{ secrets.S3_ACCESS_KEY_ID }}
RCLONE_CONFIG_OBJSTORE_SECRET_ACCESS_KEY: ${{ secrets.S3_SECRET_ACCESS_KEY }}
RCLONE_CONFIG_OBJSTORE_ENDPOINT: ${{ secrets.S3_ENDPOINT }}
RCLONE_CONFIG_OBJSTORE_REGION: ${{ secrets.S3_REGION }}
RCLONE_CONFIG_OBJSTORE_ACL: public-read
with:
args: sync -v --no-update-modtime dists objstore:apt/dists
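The channel selection in the "Prepare packages" step above maps a version string onto stable, candidate, or nightly. Below is a minimal Go sketch of the same decision, not code from the repository; it mirrors the shell glob, which only matches single-digit rc numbers:
```
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// kindFor mirrors the shell case above: "*-rc.[0-9]" is a release
// candidate, any other pre-release suffix is a nightly, and a bare
// version is stable.
func kindFor(version string) string {
	switch {
	case regexp.MustCompile(`-rc\.[0-9]$`).MatchString(version):
		return "candidate"
	case strings.Contains(version, "-"):
		return "nightly"
	default:
		return "stable"
	}
}

func main() {
	for _, v := range []string{"v1.28.0", "v1.28.0-rc.1", "v1.28.0-dev.12"} {
		fmt.Println(v, "=>", kindFor(v))
	}
}
```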
#
# Build and push to Docker Hub
@@ -730,8 +908,11 @@ jobs:
docker-syncthing:
name: Build and push Docker images
runs-on: ubuntu-latest
if: (github.event_name == 'push' || github.event_name == 'workflow_dispatch') && (github.ref == 'refs/heads/release' || github.ref == 'refs/heads/main' || github.ref == 'refs/heads/infrastructure' || startsWith(github.ref, 'refs/heads/release-'))
if: (github.event_name == 'push' || github.event_name == 'workflow_dispatch') && (github.ref == 'refs/heads/main' || github.ref == 'refs/heads/release-nightly' || github.ref == 'refs/heads/infrastructure' || startsWith(github.ref, 'refs/tags/v'))
environment: docker
permissions:
contents: read
packages: write
strategy:
matrix:
pkg:
@@ -752,6 +933,7 @@ jobs:
- uses: actions/checkout@v4
with:
fetch-depth: 0
ref: ${{ github.ref }} # https://github.com/actions/checkout/issues/882
- uses: actions/setup-go@v5
with:
@@ -791,9 +973,18 @@ jobs:
uses: docker/login-action@v3
if: env.DOCKER_PUSH == 'true'
with:
registry: docker.io
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GHCR
uses: docker/login-action@v3
if: env.DOCKER_PUSH == 'true'
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
@@ -805,13 +996,13 @@ jobs:
echo Release version, pushing to :latest and version tags
major=${version%.*.*}
minor=${version%.*}
tags=${{ matrix.image }}:$version,${{ matrix.image }}:$major,${{ matrix.image }}:$minor,${{ matrix.image }}:latest
tags=docker.io/${{ matrix.image }}:$version,ghcr.io/${{ matrix.image }}:$version,docker.io/${{ matrix.image }}:$major,ghcr.io/${{ matrix.image }}:$major,docker.io/${{ matrix.image }}:$minor,ghcr.io/${{ matrix.image }}:$minor,docker.io/${{ matrix.image }}:latest,ghcr.io/${{ matrix.image }}:latest
elif [[ $version == *-rc.@([0-9]|[0-9][0-9]) ]] ; then
echo Release candidate, pushing to :rc
tags=${{ matrix.image }}:rc
echo Release candidate, pushing to :rc and version tags
tags=docker.io/${{ matrix.image }}:$version,ghcr.io/${{ matrix.image }}:$version,docker.io/${{ matrix.image }}:rc,ghcr.io/${{ matrix.image }}:rc
else
echo Development version, pushing to :edge
tags=${{ matrix.image }}:edge
tags=docker.io/${{ matrix.image }}:edge,ghcr.io/${{ matrix.image }}:edge
fi
echo "DOCKER_TAGS=$tags" >> $GITHUB_ENV
echo "VERSION=$version" >> $GITHUB_ENV

.github/workflows/pr-linters.yaml

@@ -0,0 +1,49 @@
name: Run PR linters
on:
pull_request:
workflow_dispatch:
permissions:
contents: read
pull-requests: read
jobs:
#
# golangci-lint runs a suite of static analysis checks on the code
#
golangci:
runs-on: ubuntu-latest
name: Golangci-lint
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version: 'stable'
- name: ensure asset generation
run: go run build.go assets
- name: golangci-lint
uses: golangci/golangci-lint-action@v8
with:
only-new-issues: true
#
# Meta checks for formatting, copyright, etc
#
meta:
name: Meta checks
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version: 'stable'
- run: |
go run build.go assets
go test -v ./meta

.github/workflows/pr-metadata.yaml

@@ -0,0 +1,27 @@
name: PR metadata
on:
pull_request_target:
types:
- opened
- reopened
- edited
- synchronize
permissions:
contents: read
pull-requests: write
jobs:
#
# Set labels on PRs, which are then used to categorise release notes
#
labels:
name: Set labels
runs-on: ubuntu-latest
steps:
- uses: srvaroa/labeler@v1
env:
GITHUB_TOKEN: "${{ secrets.GITHUB_TOKEN }}"


@@ -0,0 +1,60 @@
name: Release Syncthing
on:
push:
branches:
- release
- release-rc*
permissions:
contents: write
jobs:
create-release-tag:
name: Create release tag
runs-on: ubuntu-latest
environment: release
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
ref: ${{ github.ref }} # https://github.com/actions/checkout/issues/882
token: ${{ secrets.ACTIONS_GITHUB_TOKEN }}
- uses: actions/setup-go@v5
with:
go-version: stable
- name: Determine version to release
run: |
if [[ "$GITHUB_REF_NAME" == "release" ]] ; then
next=$(go run ./script/next-version.go)
else
next=$(go run ./script/next-version.go --pre)
fi
echo "NEXT=$next" >> $GITHUB_ENV
echo "Next version is $next"
prev=$(git describe --exclude "*-*" --abbrev=0)
echo "PREV=$prev" >> $GITHUB_ENV
echo "Previous version is $prev"
- name: Determine release notes
run: |
go run ./script/relnotes.go --new-ver "$NEXT" --branch "$GITHUB_REF_NAME" --prev-ver "$PREV" > notes.md
env:
GITHUB_TOKEN: ${{ secrets.ACTIONS_GITHUB_TOKEN }}
- name: Create and push tag
run: |
git config --global user.name 'Syncthing Release Automation'
git config --global user.email 'release@syncthing.net'
git tag -a -F notes.md --cleanup=whitespace "$NEXT"
git push origin "$NEXT"
- name: Trigger the build
uses: benc-uk/workflow-dispatch@v1
with:
workflow: build-syncthing.yaml
ref: refs/tags/${{ env.NEXT }}
token: ${{ secrets.ACTIONS_GITHUB_TOKEN }}

.gitignore

@@ -18,3 +18,4 @@ deb
/repos
/proto/scripts/protoc-gen-gosyncthing
/gui/next-gen-gui
/compat.json


@@ -1,26 +1,67 @@
linters-settings:
maligned:
suggest-new: true
version: "2"
linters:
enable-all: true
default: all
disable:
- goimports
- cyclop
- depguard
- lll
- gochecknoinits
- gochecknoglobals
- gofmt
- scopelint
- gocyclo
- exhaustive
- exhaustruct
- forbidigo
- funlen
- wsl
- gochecknoglobals
- gochecknoinits
- gocognit
- goconst
- gocyclo
- godox
service:
golangci-lint-version: 1.21.x
prepare:
- rm -f go.sum # 1.12 -> 1.13 issues with QUIC-go
- GO111MODULE=on go mod vendor
- go run build.go assets
- gomoddirectives
- inamedparam
- interfacebloat
- ireturn
- lll
- maintidx
- mnd
- musttag
- nestif
- nlreturn
- nonamedreturns
- paralleltest
- prealloc
- predeclared
- protogetter
- recvcheck
- revive
- tagalign
- tagliatelle
- testpackage
- usetesting # go 1.24
- varnamelen
- whitespace
- wrapcheck
- wsl
exclusions:
generated: lax
presets:
- comments
- common-false-positives
- legacy
- std-error-handling
paths:
- internal/gen
- cmd/dev
- repos
- third_party$
- builtin$
- examples$
formatters:
enable:
- gofumpt
exclusions:
generated: lax
paths:
- internal/gen
- cmd/dev
- repos
- third_party$
- builtin$
- examples$

.policy.yml

@@ -0,0 +1,98 @@
# This is the policy-bot configuration for this repository. It controls
# which approvals are required for any given pull request. The format is
# described at https://github.com/palantir/policy-bot. The syntax of the
# policy can be verified by the bot:
# curl https://pb.syncthing.net/api/validate -X PUT -T .policy.yml
# The policy below is what is required for any pull request.
policy:
approval:
- subject is conventional commit
- project metadata requires maintainer approval
- or:
- is approved by a syncthing contributor
- is a translation or dependency update by a contributor
- is a trivial change by a contributor
# Additionally, contributors can disapprove of a PR
disapproval:
requires:
teams:
- syncthing/contributors
# The rules for the policy are described below.
approval_rules:
# All commits (PRs before squashing) should have a valid conventional
# commit type subject.
- name: subject is conventional commit
requires:
conditions:
title:
matches:
- '^(feat|fix|docs|chore|refactor|build): [a-z].+'
- '^(feat|fix|docs|chore|refactor|build)\(\w+(, \w+)*\): [a-z].+'
# Changes to important project metadata and documentation, including this
# policy, require signoff by a maintainer
- name: project metadata requires maintainer approval
if:
changed_files:
paths:
- ^[^/]+\.md
- ^\.policy\.yml
- ^LICENSE
requires:
count: 1
teams:
- syncthing/maintainers
options:
ignore_update_merges: true
allow_contributor: true
# Regular pull requests require approval by an active contributor
- name: is approved by a syncthing contributor
requires:
count: 1
teams:
- syncthing/contributors
options:
ignore_update_merges: true
allow_contributor: true
# Changes to some files (translations, dependencies, compatibility) do not
# require approval if they were proposed by a contributor and have a
# matching commit subject
- name: is a translation or dependency update by a contributor
if:
only_changed_files:
paths:
- ^gui/default/assets/lang/
- ^go\.mod$
- ^go\.sum$
- ^compat\.yaml$
title:
matches:
- '^chore\(gui\):'
- '^build\(deps\):'
- '^build\(compat\):'
has_author_in:
teams:
- syncthing/contributors
# If the change is small and the label "trivial" is added, we accept that
# on trust. These PRs can be audited after the fact as appropriate.
# Features are not trivial.
- name: is a trivial change by a contributor
if:
modified_lines:
total: "< 25"
title:
not_matches:
- '^feat'
has_labels:
- trivial
has_author_in:
teams:
- syncthing/contributors
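For illustration, the two title patterns in the "subject is conventional commit" rule above translate directly to Go regular expressions. This sketch is only a local way to test candidate PR titles, not part of the repository:
```
package main

import (
	"fmt"
	"regexp"
)

// The two patterns from the policy: a bare type, or a type with one
// or more comma-separated scopes, each followed by a lower-case
// subject.
var titlePatterns = []*regexp.Regexp{
	regexp.MustCompile(`^(feat|fix|docs|chore|refactor|build): [a-z].+`),
	regexp.MustCompile(`^(feat|fix|docs|chore|refactor|build)\(\w+(, \w+)*\): [a-z].+`),
}

func titleOK(title string) bool {
	for _, re := range titlePatterns {
		if re.MatchString(title) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(titleOK("fix(scanner): handle empty paths")) // true
	fmt.Println(titleOK("Fix the scanner"))                  // false: no type prefix
}
```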

AUTHORS

@@ -20,6 +20,7 @@ Alan Pope <alan@popey.com>
Alberto Donato <albertodonato@users.noreply.github.com>
Aleksey Vasenev <margtu-fivt@ya.ru>
Alessandro G. (alessandro.g89) <alessandro.g89@gmail.com>
Alex Ionescu <github@ionescu.sh>
Alex Lindeman <139387+aelindeman@users.noreply.github.com>
Alex Xu <alex.hello71@gmail.com>
Alexander Graf (alex2108) <register-github@alex-graf.de>
@@ -47,6 +48,7 @@ Arkadiusz Tymiński <gevleeog@gmail.com>
Aroun <login@b-vo.fr>
Arthur Axel fREW Schmidt (frioux) <frew@afoolishmanifesto.com> <frioux@gmail.com>
Artur Zubilewicz <AkaZecik@users.noreply.github.com>
Ashish Bhate <bhate.ashish@gmail.com>
Audrius Butkevicius (AudriusButkevicius) <audrius.butkevicius@gmail.com> <github@audrius.rocks>
Aurélien Rainone <476650+arl@users.noreply.github.com>
BAHADIR YILMAZ <bahadiryilmaz32@gmail.com>
@@ -96,6 +98,7 @@ Daniel Harte (norgeous) <daniel@harte.me> <daniel@danielharte.co.uk> <norgeous@u
Daniel Martí (mvdan) <mvdan@mvdan.cc>
Daniel Padrta <64928366+danpadcz@users.noreply.github.com>
Darshil Chanpura (dtchanpura) <dtchanpura@gmail.com> <dcprime314@gmail.com>
dashangcun <907225865@qq.com>
David Rimmer (dinosore) <dinosore@dbrsoftware.co.uk>
deepsource-autofix[bot] <62050782+deepsource-autofix[bot]@users.noreply.github.com>
DeflateAwning <11021263+DeflateAwning@users.noreply.github.com>
@@ -111,6 +114,7 @@ diemade <spamkill@posteo.ch>
digital <didev@dinid.net>
Dimitri Papadopoulos Orfanos <3234522+DimitriPapadopoulos@users.noreply.github.com>
Dmitry Saveliev (dsaveliev) <d.e.saveliev@gmail.com>
domain <32405309+szu17dmy@users.noreply.github.com>
Domenic Horner <domenic@tgxn.net>
Dominik Heidler (asdil12) <dominik@heidler.eu>
Elias Jarlebring (jarlebring) <jarlebring@gmail.com>
@@ -141,10 +145,13 @@ greatroar <61184462+greatroar@users.noreply.github.com>
Greg <gco@jazzhaiku.com>
guangwu <guoguangwu@magic-shield.com>
gudvinr <gudvinr@gmail.com>
Gusted <postmaster@gusted.xyz> <williamzijl7@hotmail.com>
Han Boetes <han@boetes.org>
HansK-p <42314815+HansK-p@users.noreply.github.com>
Harrison Jones (harrisonhjones) <harrisonhjones@users.noreply.github.com>
Hazem Krimi <me@hazemkrimi.tech>
Heiko Zuerker (Smiley73) <heiko@zuerker.org>
Hireworks <129852174+hireworksltd@users.noreply.github.com>
Hugo Locurcio <hugo.locurcio@hugo.pro>
Iain Barnett <iainspeed@gmail.com>
Ian Johnson (anonymouse64) <ian.johnson@canonical.com> <person.uwsome@gmail.com>
@@ -188,6 +195,7 @@ Jörg Thalheim <Mic92@users.noreply.github.com>
Jędrzej Kula <kula.jedrek@gmail.com>
K.B.Dharun Krishna <kbdharunkrishna@gmail.com>
Kalle Laine <pahakalle@protonmail.com>
Kapil Sareen <kapilsareen584@gmail.com>
Karol Różycki (krozycki) <rozycki.karol@gmail.com>
Kebin Liu <lkebin@gmail.com>
Keith Harrison <keithh@protonmail.com>
@@ -216,8 +224,10 @@ luzpaz <luzpaz@users.noreply.github.com>
Majed Abdulaziz (majedev) <majed.alhajry@gmail.com>
Marc Laporte (marclaporte) <marc@marclaporte.com> <marc@laporte.name>
Marc Pujol (kilburn) <kilburn@la3.org>
Marcel Meyer <mm.marcelmeyer@gmail.com>
Marcin Dziadus (marcindziadus) <dziadus.marcin@gmail.com>
marco-m <marco.molteni@laposte.net>
Marcus B Spencer <marcus@marcusspencer.xyz> <marcus@marcusspencer.us>
Marcus Legendre <marcus.legendre@gmail.com>
Mario Majila <mariustshipichik@gmail.com>
Mark Pulford (mpx) <mark@kyne.com.au>
@@ -225,6 +235,7 @@ Martchus <martchus@gmx.net>
Martin Polehla <p0l0us@users.noreply.github.com>
Mateusz Naściszewski (mateon1) <matin1111@wp.pl>
Mateusz Ż <thedead4fun@live.com>
mathias4833 <67101597+mathias4833@users.noreply.github.com>
Matic Potočnik <hairyfotr@gmail.com>
Matt Burke (burkemw3) <mburke@amplify.com> <burkemw3@gmail.com>
Matt Robenolt <matt@ydekproductions.com>
@@ -232,6 +243,7 @@ Matteo Ruina <matteo.ruina@gmail.com>
Maurizio Tomasi <ziotom78@gmail.com>
Max <github@germancoding.com>
Max Schulze (kralo) <max.schulze@online.de> <kralo@users.noreply.github.com>
maxice8 <30738253+maxice8@users.noreply.github.com>
MaximAL <almaximal@ya.ru>
Maxime Thirouin <m@moox.io>
Maximilian <maxi.rostock@outlook.de> <public@complexvector.space>
@@ -269,6 +281,7 @@ Oyebanji Jacob Mayowa <oyebanji05@gmail.com>
Pablo <pbaeyens31+github@gmail.com>
Pascal Jungblut (pascalj) <github@pascalj.com> <mail@pascal-jungblut.com>
Paul Brit <paulbrit44@gmail.com>
Paul Donald <newtwen+github@gmail.com>
Pawel Palenica (qepasa) <pawelpalenica11@gmail.com>
Paweł Rozlach <vespian@users.noreply.github.com>
perewa <cavalcante.ten@gmail.com>
@@ -282,7 +295,9 @@ Philippe Schommers (filoozoom) <philippe@schommers.be>
Phill Luby (pluby) <phill.luby@newredo.com>
Pier Paolo Ramon <ramonpierre@gmail.com>
Piotr Bejda (piobpl) <piotrb10@gmail.com>
polyfloyd <polyfloyd@users.noreply.github.com>
Pramodh KP (pramodhkp) <pramodh.p@directi.com> <1507241+pramodhkp@users.noreply.github.com>
pullmerge <166967364+pullmerge@users.noreply.github.com>
Quentin Hibon <qh.public@yahoo.com>
Rahmi Pruitt <rjpruitt16@gmail.com>
red_led <red-led@users.noreply.github.com>
@@ -305,7 +320,9 @@ Severin von Wnuck-Lipinski <ss7@live.de>
Shaarad Dalvi <60266155+shaaraddalvi@users.noreply.github.com> <shdalv@microsoft.com>
Simon Frei (imsodin) <freisim93@gmail.com>
Simon Mwepu <simonmwepu@gmail.com>
Simon Pickup <simon@pickupinfinity.com>
Sly_tom_cat <slytomcat@mail.ru>
Sonu Kumar Saw <31889738+dev-saw99@users.noreply.github.com>
Stefan Kuntz (Stefan-Code) <stefan.github@gmail.com> <Stefan.github@gmail.com>
Stefan Tatschner (rumpelsepp) <stefan@sevenbyte.org> <rumpelsepp@sevenbyte.org> <stefan@rumpelsepp.org>
Steven Eckhoff <steven.eckhoff.opensource@gmail.com>
@@ -313,18 +330,23 @@ Suhas Gundimeda (snugghash) <suhas.gundimeda@gmail.com> <snugghash@gmail.com>
Sven Bachmann <dev@mcbachmann.de>
Syncthing Automation <automation@syncthing.net>
Syncthing Release Automation <release@syncthing.net>
Sébastien WENSKE <sebastien@wenske.fr>
Taylor Khan (nelsonkhan) <nelsonkhan@gmail.com>
Terrance <git@terrance.allofti.me>
TheCreeper <TheCreeper@users.noreply.github.com>
Thomas <9749173+uhthomas@users.noreply.github.com>
Thomas Hipp <thomashipp@gmail.com>
Tim Abell (timabell) <tim@timwise.co.uk>
Tim Howes (timhowes) <timhowes@berkeley.edu>
Tim Nordenfur <tim@gurka.se>
Tobias Frölich <40638719+tobifroe@users.noreply.github.com>
Tobias Klauser <tobias.klauser@gmail.com>
Tobias Nygren (tnn2) <tnn@nygren.pp.se>
Tobias Tom (tobiastom) <t.tom@succont.de>
Tom Jakubowski <tom@crystae.net>
Tomasz Wilczyński <5626656+tomasz1986@users.noreply.github.com> <twilczynski@naver.com>
Tommy Thorn <tommy-github-email@thorn.ws>
Tommy van der Vorst <tommy-github@pixelspark.nl> <tommy@pixelspark.nl>
Tully Robinson (tojrobinson) <tully@tojr.org>
Tyler Brazier (tylerbrazier) <tyler@tylerbrazier.com>
Tyler Kropp <kropptyler@gmail.com>


@@ -49,6 +49,11 @@ services:
- 22000:22000/udp # QUIC file transfers
- 21027:21027/udp # Receive local discovery broadcasts
restart: unless-stopped
healthcheck:
test: curl -fkLsS -m 2 127.0.0.1:8384/rest/noauth/health | grep -o --color=never OK || exit 1
interval: 1m
timeout: 10s
retries: 3
```
## Discovery
@@ -84,6 +89,11 @@ services:
- /wherever/st-sync:/var/syncthing
network_mode: host
restart: unless-stopped
healthcheck:
test: curl -fkLsS -m 2 127.0.0.1:8384/rest/noauth/health | grep -o --color=never OK || exit 1
interval: 1m
timeout: 10s
retries: 3
```
Be aware that syncthing alone is now in control of what interfaces and ports it


@@ -82,13 +82,11 @@ build process.
## Signed Releases
As of v0.10.15 and onwards, release binaries are GPG signed with the key
D26E6ED000654A3E, available from https://syncthing.net/security/ and
most key servers.
There is also a built-in automatic upgrade mechanism (disabled in some
distribution channels) which uses a compiled in ECDSA signature. macOS
binaries are also properly code signed.
Release binaries are GPG signed with the key available from
https://syncthing.net/security/. There is also a built-in automatic
upgrade mechanism (disabled in some distribution channels) which uses a
compiled-in ECDSA signature. macOS and Windows binaries are also
code-signed.
## Documentation

buf.gen.yaml

@@ -0,0 +1,12 @@
version: v2
managed:
enabled: true
override:
- file_option: go_package_prefix
value: github.com/syncthing/syncthing/internal/gen
plugins:
- remote: buf.build/protocolbuffers/go:v1.35.1
out: .
opt: module=github.com/syncthing/syncthing
inputs:
- directory: proto

buf.yaml

@@ -0,0 +1,10 @@
version: v2
modules:
- path: proto
name: github.com/syncthing/syncthing
lint:
use:
- STANDARD
breaking:
use:
- WIRE_JSON

build.go

@@ -4,8 +4,8 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
//go:build ignore
// +build ignore
//go:build tools
// +build tools
package main
@@ -15,7 +15,6 @@ import (
"bytes"
"compress/flate"
"compress/gzip"
"encoding/base64"
"encoding/json"
"errors"
"flag"
@@ -33,8 +32,9 @@ import (
"text/template"
"time"
_ "github.com/syncthing/syncthing/lib/automaxprocs"
buildpkg "github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/upgrade"
"sigs.k8s.io/yaml"
)
var (
@@ -85,7 +85,6 @@ var targets = map[string]target{
"all": {
// Only valid for the "build" and "install" commands as it lacks all
// the archive creation stuff. buildPkgs gets filled out in init()
tags: []string{"purego"},
},
"syncthing": {
// The default target for "build", "install", "tar", "zip", "deb", etc.
@@ -96,41 +95,40 @@ var targets = map[string]target{
buildPkgs: []string{"github.com/syncthing/syncthing/cmd/syncthing"},
binaryName: "syncthing", // .exe will be added automatically for Windows builds
archiveFiles: []archiveFile{
{src: "{{binary}}", dst: "{{binary}}", perm: 0755},
{src: "README.md", dst: "README.txt", perm: 0644},
{src: "LICENSE", dst: "LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "AUTHORS.txt", perm: 0644},
{src: "{{binary}}", dst: "{{binary}}", perm: 0o755},
{src: "README.md", dst: "README.txt", perm: 0o644},
{src: "LICENSE", dst: "LICENSE.txt", perm: 0o644},
{src: "AUTHORS", dst: "AUTHORS.txt", perm: 0o644},
// All files from etc/ and extra/ added automatically in init().
},
systemdService: "syncthing@*.service",
installationFiles: []archiveFile{
{src: "{{binary}}", dst: "deb/usr/bin/{{binary}}", perm: 0755},
{src: "README.md", dst: "deb/usr/share/doc/syncthing/README.txt", perm: 0644},
{src: "LICENSE", dst: "deb/usr/share/doc/syncthing/LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "deb/usr/share/doc/syncthing/AUTHORS.txt", perm: 0644},
{src: "man/syncthing.1", dst: "deb/usr/share/man/man1/syncthing.1", perm: 0644},
{src: "man/syncthing-config.5", dst: "deb/usr/share/man/man5/syncthing-config.5", perm: 0644},
{src: "man/syncthing-stignore.5", dst: "deb/usr/share/man/man5/syncthing-stignore.5", perm: 0644},
{src: "man/syncthing-device-ids.7", dst: "deb/usr/share/man/man7/syncthing-device-ids.7", perm: 0644},
{src: "man/syncthing-event-api.7", dst: "deb/usr/share/man/man7/syncthing-event-api.7", perm: 0644},
{src: "man/syncthing-faq.7", dst: "deb/usr/share/man/man7/syncthing-faq.7", perm: 0644},
{src: "man/syncthing-networking.7", dst: "deb/usr/share/man/man7/syncthing-networking.7", perm: 0644},
{src: "man/syncthing-rest-api.7", dst: "deb/usr/share/man/man7/syncthing-rest-api.7", perm: 0644},
{src: "man/syncthing-security.7", dst: "deb/usr/share/man/man7/syncthing-security.7", perm: 0644},
{src: "man/syncthing-versioning.7", dst: "deb/usr/share/man/man7/syncthing-versioning.7", perm: 0644},
{src: "etc/linux-systemd/system/syncthing@.service", dst: "deb/lib/systemd/system/syncthing@.service", perm: 0644},
{src: "etc/linux-systemd/system/syncthing-resume.service", dst: "deb/lib/systemd/system/syncthing-resume.service", perm: 0644},
{src: "etc/linux-systemd/user/syncthing.service", dst: "deb/usr/lib/systemd/user/syncthing.service", perm: 0644},
{src: "etc/linux-sysctl/30-syncthing.conf", dst: "deb/usr/lib/sysctl.d/30-syncthing.conf", perm: 0644},
{src: "etc/firewall-ufw/syncthing", dst: "deb/etc/ufw/applications.d/syncthing", perm: 0644},
{src: "etc/linux-desktop/syncthing-start.desktop", dst: "deb/usr/share/applications/syncthing-start.desktop", perm: 0644},
{src: "etc/linux-desktop/syncthing-ui.desktop", dst: "deb/usr/share/applications/syncthing-ui.desktop", perm: 0644},
{src: "assets/logo-32.png", dst: "deb/usr/share/icons/hicolor/32x32/apps/syncthing.png", perm: 0644},
{src: "assets/logo-64.png", dst: "deb/usr/share/icons/hicolor/64x64/apps/syncthing.png", perm: 0644},
{src: "assets/logo-128.png", dst: "deb/usr/share/icons/hicolor/128x128/apps/syncthing.png", perm: 0644},
{src: "assets/logo-256.png", dst: "deb/usr/share/icons/hicolor/256x256/apps/syncthing.png", perm: 0644},
{src: "assets/logo-512.png", dst: "deb/usr/share/icons/hicolor/512x512/apps/syncthing.png", perm: 0644},
{src: "assets/logo-only.svg", dst: "deb/usr/share/icons/hicolor/scalable/apps/syncthing.svg", perm: 0644},
{src: "{{binary}}", dst: "deb/usr/bin/{{binary}}", perm: 0o755},
{src: "README.md", dst: "deb/usr/share/doc/syncthing/README.txt", perm: 0o644},
{src: "LICENSE", dst: "deb/usr/share/doc/syncthing/LICENSE.txt", perm: 0o644},
{src: "AUTHORS", dst: "deb/usr/share/doc/syncthing/AUTHORS.txt", perm: 0o644},
{src: "man/syncthing.1", dst: "deb/usr/share/man/man1/syncthing.1", perm: 0o644},
{src: "man/syncthing-config.5", dst: "deb/usr/share/man/man5/syncthing-config.5", perm: 0o644},
{src: "man/syncthing-stignore.5", dst: "deb/usr/share/man/man5/syncthing-stignore.5", perm: 0o644},
{src: "man/syncthing-device-ids.7", dst: "deb/usr/share/man/man7/syncthing-device-ids.7", perm: 0o644},
{src: "man/syncthing-event-api.7", dst: "deb/usr/share/man/man7/syncthing-event-api.7", perm: 0o644},
{src: "man/syncthing-faq.7", dst: "deb/usr/share/man/man7/syncthing-faq.7", perm: 0o644},
{src: "man/syncthing-networking.7", dst: "deb/usr/share/man/man7/syncthing-networking.7", perm: 0o644},
{src: "man/syncthing-rest-api.7", dst: "deb/usr/share/man/man7/syncthing-rest-api.7", perm: 0o644},
{src: "man/syncthing-security.7", dst: "deb/usr/share/man/man7/syncthing-security.7", perm: 0o644},
{src: "man/syncthing-versioning.7", dst: "deb/usr/share/man/man7/syncthing-versioning.7", perm: 0o644},
{src: "etc/linux-systemd/system/syncthing@.service", dst: "deb/lib/systemd/system/syncthing@.service", perm: 0o644},
{src: "etc/linux-systemd/user/syncthing.service", dst: "deb/usr/lib/systemd/user/syncthing.service", perm: 0o644},
{src: "etc/linux-sysctl/30-syncthing.conf", dst: "deb/usr/lib/sysctl.d/30-syncthing.conf", perm: 0o644},
{src: "etc/firewall-ufw/syncthing", dst: "deb/etc/ufw/applications.d/syncthing", perm: 0o644},
{src: "etc/linux-desktop/syncthing-start.desktop", dst: "deb/usr/share/applications/syncthing-start.desktop", perm: 0o644},
{src: "etc/linux-desktop/syncthing-ui.desktop", dst: "deb/usr/share/applications/syncthing-ui.desktop", perm: 0o644},
{src: "assets/logo-32.png", dst: "deb/usr/share/icons/hicolor/32x32/apps/syncthing.png", perm: 0o644},
{src: "assets/logo-64.png", dst: "deb/usr/share/icons/hicolor/64x64/apps/syncthing.png", perm: 0o644},
{src: "assets/logo-128.png", dst: "deb/usr/share/icons/hicolor/128x128/apps/syncthing.png", perm: 0o644},
{src: "assets/logo-256.png", dst: "deb/usr/share/icons/hicolor/256x256/apps/syncthing.png", perm: 0o644},
{src: "assets/logo-512.png", dst: "deb/usr/share/icons/hicolor/512x512/apps/syncthing.png", perm: 0o644},
{src: "assets/logo-only.svg", dst: "deb/usr/share/icons/hicolor/scalable/apps/syncthing.svg", perm: 0o644},
},
},
"stdiscosrv": {
@@ -142,23 +140,22 @@ var targets = map[string]target{
buildPkgs: []string{"github.com/syncthing/syncthing/cmd/stdiscosrv"},
binaryName: "stdiscosrv", // .exe will be added automatically for Windows builds
archiveFiles: []archiveFile{
{src: "{{binary}}", dst: "{{binary}}", perm: 0755},
{src: "cmd/stdiscosrv/README.md", dst: "README.txt", perm: 0644},
{src: "LICENSE", dst: "LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "AUTHORS.txt", perm: 0644},
{src: "{{binary}}", dst: "{{binary}}", perm: 0o755},
{src: "cmd/stdiscosrv/README.md", dst: "README.txt", perm: 0o644},
{src: "LICENSE", dst: "LICENSE.txt", perm: 0o644},
{src: "AUTHORS", dst: "AUTHORS.txt", perm: 0o644},
},
systemdService: "stdiscosrv.service",
installationFiles: []archiveFile{
{src: "{{binary}}", dst: "deb/usr/bin/{{binary}}", perm: 0755},
{src: "cmd/stdiscosrv/README.md", dst: "deb/usr/share/doc/syncthing-discosrv/README.txt", perm: 0644},
{src: "LICENSE", dst: "deb/usr/share/doc/syncthing-discosrv/LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "deb/usr/share/doc/syncthing-discosrv/AUTHORS.txt", perm: 0644},
{src: "man/stdiscosrv.1", dst: "deb/usr/share/man/man1/stdiscosrv.1", perm: 0644},
{src: "cmd/stdiscosrv/etc/linux-systemd/stdiscosrv.service", dst: "deb/lib/systemd/system/stdiscosrv.service", perm: 0644},
{src: "cmd/stdiscosrv/etc/linux-systemd/default", dst: "deb/etc/default/syncthing-discosrv", perm: 0644},
{src: "cmd/stdiscosrv/etc/firewall-ufw/stdiscosrv", dst: "deb/etc/ufw/applications.d/stdiscosrv", perm: 0644},
{src: "{{binary}}", dst: "deb/usr/bin/{{binary}}", perm: 0o755},
{src: "cmd/stdiscosrv/README.md", dst: "deb/usr/share/doc/syncthing-discosrv/README.txt", perm: 0o644},
{src: "LICENSE", dst: "deb/usr/share/doc/syncthing-discosrv/LICENSE.txt", perm: 0o644},
{src: "AUTHORS", dst: "deb/usr/share/doc/syncthing-discosrv/AUTHORS.txt", perm: 0o644},
{src: "man/stdiscosrv.1", dst: "deb/usr/share/man/man1/stdiscosrv.1", perm: 0o644},
{src: "cmd/stdiscosrv/etc/linux-systemd/stdiscosrv.service", dst: "deb/lib/systemd/system/stdiscosrv.service", perm: 0o644},
{src: "cmd/stdiscosrv/etc/linux-systemd/default", dst: "deb/etc/default/syncthing-discosrv", perm: 0o644},
{src: "cmd/stdiscosrv/etc/firewall-ufw/stdiscosrv", dst: "deb/etc/ufw/applications.d/stdiscosrv", perm: 0o644},
},
tags: []string{"purego"},
},
"strelaysrv": {
name: "strelaysrv",
@@ -169,61 +166,47 @@ var targets = map[string]target{
buildPkgs: []string{"github.com/syncthing/syncthing/cmd/strelaysrv"},
binaryName: "strelaysrv", // .exe will be added automatically for Windows builds
archiveFiles: []archiveFile{
{src: "{{binary}}", dst: "{{binary}}", perm: 0755},
{src: "cmd/strelaysrv/README.md", dst: "README.txt", perm: 0644},
{src: "cmd/strelaysrv/LICENSE", dst: "LICENSE.txt", perm: 0644},
{src: "LICENSE", dst: "LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "AUTHORS.txt", perm: 0644},
{src: "{{binary}}", dst: "{{binary}}", perm: 0o755},
{src: "cmd/strelaysrv/README.md", dst: "README.txt", perm: 0o644},
{src: "cmd/strelaysrv/LICENSE", dst: "LICENSE.txt", perm: 0o644},
{src: "LICENSE", dst: "LICENSE.txt", perm: 0o644},
{src: "AUTHORS", dst: "AUTHORS.txt", perm: 0o644},
},
systemdService: "strelaysrv.service",
installationFiles: []archiveFile{
{src: "{{binary}}", dst: "deb/usr/bin/{{binary}}", perm: 0755},
{src: "cmd/strelaysrv/README.md", dst: "deb/usr/share/doc/syncthing-relaysrv/README.txt", perm: 0644},
{src: "cmd/strelaysrv/LICENSE", dst: "deb/usr/share/doc/syncthing-relaysrv/LICENSE.txt", perm: 0644},
{src: "LICENSE", dst: "deb/usr/share/doc/syncthing-relaysrv/LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "deb/usr/share/doc/syncthing-relaysrv/AUTHORS.txt", perm: 0644},
{src: "man/strelaysrv.1", dst: "deb/usr/share/man/man1/strelaysrv.1", perm: 0644},
{src: "cmd/strelaysrv/etc/linux-systemd/strelaysrv.service", dst: "deb/lib/systemd/system/strelaysrv.service", perm: 0644},
{src: "cmd/strelaysrv/etc/linux-systemd/default", dst: "deb/etc/default/syncthing-relaysrv", perm: 0644},
{src: "cmd/strelaysrv/etc/firewall-ufw/strelaysrv", dst: "deb/etc/ufw/applications.d/strelaysrv", perm: 0644},
{src: "{{binary}}", dst: "deb/usr/bin/{{binary}}", perm: 0o755},
{src: "cmd/strelaysrv/README.md", dst: "deb/usr/share/doc/syncthing-relaysrv/README.txt", perm: 0o644},
{src: "cmd/strelaysrv/LICENSE", dst: "deb/usr/share/doc/syncthing-relaysrv/LICENSE.txt", perm: 0o644},
{src: "LICENSE", dst: "deb/usr/share/doc/syncthing-relaysrv/LICENSE.txt", perm: 0o644},
{src: "AUTHORS", dst: "deb/usr/share/doc/syncthing-relaysrv/AUTHORS.txt", perm: 0o644},
{src: "man/strelaysrv.1", dst: "deb/usr/share/man/man1/strelaysrv.1", perm: 0o644},
{src: "cmd/strelaysrv/etc/linux-systemd/strelaysrv.service", dst: "deb/lib/systemd/system/strelaysrv.service", perm: 0o644},
{src: "cmd/strelaysrv/etc/linux-systemd/default", dst: "deb/etc/default/syncthing-relaysrv", perm: 0o644},
{src: "cmd/strelaysrv/etc/firewall-ufw/strelaysrv", dst: "deb/etc/ufw/applications.d/strelaysrv", perm: 0o644},
},
},
"strelaypoolsrv": {
name: "strelaypoolsrv",
debname: "syncthing-relaypoolsrv",
debdeps: []string{"libc6"},
description: "Syncthing Relay Pool Server",
buildPkgs: []string{"github.com/syncthing/syncthing/cmd/strelaypoolsrv"},
binaryName: "strelaypoolsrv", // .exe will be added automatically for Windows builds
archiveFiles: []archiveFile{
{src: "{{binary}}", dst: "{{binary}}", perm: 0755},
{src: "cmd/strelaypoolsrv/README.md", dst: "README.txt", perm: 0644},
{src: "cmd/strelaypoolsrv/LICENSE", dst: "LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "AUTHORS.txt", perm: 0644},
},
installationFiles: []archiveFile{
{src: "{{binary}}", dst: "deb/usr/bin/{{binary}}", perm: 0755},
{src: "cmd/strelaypoolsrv/README.md", dst: "deb/usr/share/doc/syncthing-relaypoolsrv/README.txt", perm: 0644},
{src: "cmd/strelaypoolsrv/LICENSE", dst: "deb/usr/share/doc/syncthing-relaypoolsrv/LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "deb/usr/share/doc/syncthing-relaypoolsrv/AUTHORS.txt", perm: 0644},
},
buildPkgs: []string{"github.com/syncthing/syncthing/cmd/infra/strelaypoolsrv"},
binaryName: "strelaypoolsrv",
},
"stupgrades": {
name: "stupgrades",
description: "Syncthing Upgrade Check Server",
buildPkgs: []string{"github.com/syncthing/syncthing/cmd/stupgrades"},
buildPkgs: []string{"github.com/syncthing/syncthing/cmd/infra/stupgrades"},
binaryName: "stupgrades",
},
"stcrashreceiver": {
name: "stcrashreceiver",
description: "Syncthing Crash Server",
buildPkgs: []string{"github.com/syncthing/syncthing/cmd/stcrashreceiver"},
buildPkgs: []string{"github.com/syncthing/syncthing/cmd/infra/stcrashreceiver"},
binaryName: "stcrashreceiver",
},
"ursrv": {
name: "ursrv",
description: "Syncthing Usage Reporting Server",
buildPkgs: []string{"github.com/syncthing/syncthing/cmd/ursrv"},
buildPkgs: []string{"github.com/syncthing/syncthing/cmd/infra/ursrv"},
binaryName: "ursrv",
},
}
@@ -232,15 +215,11 @@ func initTargets() {
all := targets["all"]
pkgs, _ := filepath.Glob("cmd/*")
for _, pkg := range pkgs {
pkg = filepath.Base(pkg)
if strings.HasPrefix(pkg, ".") {
// ignore dotfiles
if files, err := filepath.Glob(pkg + "/*.go"); err != nil || len(files) == 0 {
// No go files in the directory
continue
}
if noupgrade && pkg == "stupgrades" {
continue
}
all.buildPkgs = append(all.buildPkgs, fmt.Sprintf("github.com/syncthing/syncthing/cmd/%s", pkg))
all.buildPkgs = append(all.buildPkgs, fmt.Sprintf("github.com/syncthing/syncthing/%s", pkg))
}
targets["all"] = all
@@ -248,13 +227,13 @@ func initTargets() {
// and "extra" dirs.
syncthingPkg := targets["syncthing"]
for _, file := range listFiles("etc") {
syncthingPkg.archiveFiles = append(syncthingPkg.archiveFiles, archiveFile{src: file, dst: file, perm: 0644})
syncthingPkg.archiveFiles = append(syncthingPkg.archiveFiles, archiveFile{src: file, dst: file, perm: 0o644})
}
for _, file := range listFiles("extra") {
syncthingPkg.archiveFiles = append(syncthingPkg.archiveFiles, archiveFile{src: file, dst: file, perm: 0644})
syncthingPkg.archiveFiles = append(syncthingPkg.archiveFiles, archiveFile{src: file, dst: file, perm: 0o644})
}
for _, file := range listFiles("extra") {
syncthingPkg.installationFiles = append(syncthingPkg.installationFiles, archiveFile{src: file, dst: "deb/usr/share/doc/syncthing/" + filepath.Base(file), perm: 0644})
syncthingPkg.installationFiles = append(syncthingPkg.installationFiles, archiveFile{src: file, dst: "deb/usr/share/doc/syncthing/" + filepath.Base(file), perm: 0o644})
}
targets["syncthing"] = syncthingPkg
}
@@ -344,12 +323,14 @@ func runCommand(cmd string, target target) {
case "tar":
buildTar(target, tags)
writeCompatJSON()
case "zip":
buildZip(target, tags)
writeCompatJSON()
case "deb":
buildDeb(target)
buildDeb(target, tags)
case "vet":
metalintShort()
@@ -407,7 +388,6 @@ func parseFlags() {
func test(tags []string, pkgs ...string) {
lazyRebuildAssets()
tags = append(tags, "purego")
args := []string{"test", "-tags", strings.Join(tags, " ")}
if long {
timeout = longTimeout
@@ -441,7 +421,7 @@ func bench(tags []string, pkgs ...string) {
func integration(bench bool) {
lazyRebuildAssets()
args := []string{"test", "-v", "-timeout", "60m", "-tags"}
tags := "purego,integration"
tags := "integration"
if bench {
tags += ",benchmark"
}
@@ -629,7 +609,7 @@ func buildZip(target target, tags []string) {
fmt.Println(filename)
}
func buildDeb(target target) {
func buildDeb(target target, tags []string) {
os.RemoveAll("deb")
// "goarch" here is set to whatever the Debian packages expect. We correct
@@ -643,7 +623,7 @@ func buildDeb(target target) {
goarch = "arm"
}
build(target, []string{"noupgrade"})
build(target, append(tags, "noupgrade"))
for i := range target.installationFiles {
target.installationFiles[i].src = strings.Replace(target.installationFiles[i].src, "{{binary}}", target.BinaryName(), 1)
@@ -665,6 +645,9 @@ func buildDeb(target target) {
// than just 0.14.26. This rectifies that.
debver = strings.Replace(debver, "-", "~", -1)
}
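// dpkg sorts "~" before anything else, including the empty string,
// so 1.27.9~rc.1 orders before 1.27.9; underscores get the same
// treatment below so such versions also upgrade cleanly.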
if strings.Contains(debver, "_") {
debver = strings.Replace(debver, "_", "~", -1)
}
args := []string{
"-t", "deb",
"-s", "dir",
@@ -752,7 +735,7 @@ func shouldBuildSyso(dir string) (string, error) {
}
jsonPath := filepath.Join(dir, "versioninfo.json")
err = os.WriteFile(jsonPath, bs, 0644)
err = os.WriteFile(jsonPath, bs, 0o644)
if err != nil {
return "", errors.New("failed to create " + jsonPath + ": " + err.Error())
}
@@ -811,7 +794,7 @@ func copyFile(src, dst string, perm os.FileMode) error {
}
copy:
os.MkdirAll(filepath.Dir(dst), 0777)
os.MkdirAll(filepath.Dir(dst), 0o777)
if err := os.WriteFile(dst, in, perm); err != nil {
return err
}
@@ -836,12 +819,12 @@ func listFiles(dir string) []string {
func rebuildAssets() {
os.Setenv("SOURCE_DATE_EPOCH", fmt.Sprint(buildStamp()))
runPrint(goCmd, "generate", "github.com/syncthing/syncthing/lib/api/auto", "github.com/syncthing/syncthing/cmd/strelaypoolsrv/auto")
runPrint(goCmd, "generate", "github.com/syncthing/syncthing/lib/api/auto", "github.com/syncthing/syncthing/cmd/infra/strelaypoolsrv/auto")
}
func lazyRebuildAssets() {
shouldRebuild := shouldRebuildAssets("lib/api/auto/gui.files.go", "gui") ||
shouldRebuildAssets("cmd/strelaypoolsrv/auto/gui.files.go", "cmd/strelaypoolsrv/gui")
shouldRebuildAssets("cmd/infra/strelaypoolsrv/auto/gui.files.go", "cmd/infra/strelaypoolsrv/gui")
if withNextGenGUI {
shouldRebuild = buildNextGenGUI() || shouldRebuild
@@ -871,7 +854,7 @@ func buildNextGenGUI() bool {
for _, src := range listFiles("next-gen-gui/dist") {
rel, _ := filepath.Rel("next-gen-gui/dist", src)
dst := filepath.Join("gui", rel)
if err := copyFile(src, dst, 0644); err != nil {
if err := copyFile(src, dst, 0o644); err != nil {
fmt.Println("copy:", err)
os.Exit(1)
}
@@ -927,22 +910,9 @@ func updateDependencies() {
}
func proto() {
pv := protobufVersion()
repo := "https://github.com/gogo/protobuf.git"
path := filepath.Join("repos", "protobuf")
runPrint(goCmd, "install", fmt.Sprintf("github.com/gogo/protobuf/protoc-gen-gogofast@%v", pv))
os.MkdirAll("repos", 0755)
if _, err := os.Stat(path); err != nil {
runPrint("git", "clone", repo, path)
} else {
runPrintInDir(path, "git", "fetch")
}
runPrintInDir(path, "git", "checkout", pv)
runPrint(goCmd, "generate", "github.com/syncthing/syncthing/cmd/stdiscosrv")
runPrint(goCmd, "generate", "proto/generate.go")
// buf needs to be installed
// https://buf.build/docs/installation/
runPrint("buf", "generate")
}
func testmocks() {
@@ -1377,10 +1347,7 @@ func zipFile(out string, files []archiveFile) {
}
func codesign(target target) {
switch goos {
case "windows":
windowsCodesign(target.BinaryName())
case "darwin":
if goos == "darwin" {
macosCodesign(target.BinaryName())
}
}
@@ -1404,70 +1371,6 @@ func macosCodesign(file string) {
}
}
func windowsCodesign(file string) {
st := "signtool.exe"
if path := os.Getenv("CODESIGN_SIGNTOOL"); path != "" {
st = path
}
for i, algo := range []string{"sha1", "sha256"} {
args := []string{"sign", "/fd", algo}
if f := os.Getenv("CODESIGN_CERTIFICATE_FILE"); f != "" {
args = append(args, "/f", f)
} else if b := os.Getenv("CODESIGN_CERTIFICATE_BASE64"); b != "" {
// Decode the PFX certificate from base64.
bs, err := base64.RawStdEncoding.DecodeString(b)
if err != nil {
log.Println("Codesign: signing failed: decoding base64:", err)
return
}
// Write it to a temporary file
f, err := os.CreateTemp("", "codesign-*.pfx")
if err != nil {
log.Println("Codesign: signing failed: creating temp file:", err)
return
}
_ = f.Chmod(0600) // best effort remove other users' access
defer os.Remove(f.Name())
if _, err := f.Write(bs); err != nil {
log.Println("Codesign: signing failed: writing temp file:", err)
return
}
if err := f.Close(); err != nil {
log.Println("Codesign: signing failed: closing temp file:", err)
return
}
// Use that when signing
args = append(args, "/f", f.Name())
}
if p := os.Getenv("CODESIGN_CERTIFICATE_PASSWORD"); p != "" {
args = append(args, "/p", p)
}
if tr := os.Getenv("CODESIGN_TIMESTAMP_SERVER"); tr != "" {
switch algo {
case "sha256":
args = append(args, "/tr", tr, "/td", algo)
default:
args = append(args, "/t", tr)
}
}
if i > 0 {
args = append(args, "/as")
}
args = append(args, file)
bs, err := runError(st, args...)
if err != nil {
log.Printf("Codesign: signing failed: %v: %s", err, string(bs))
return
}
log.Println("Codesign: successfully signed", file, "using", algo)
}
}
func metalint() {
lazyRebuildAssets()
runPrint(goCmd, "test", "-run", "Metalint", "./meta")
@@ -1485,14 +1388,6 @@ func (t target) BinaryName() string {
return t.binaryName
}
func protobufVersion() string {
bs, err := runError(goCmd, "list", "-f", "{{.Version}}", "-m", "github.com/gogo/protobuf")
if err != nil {
log.Fatal("Getting protobuf version:", err)
}
return string(bs)
}
func currentAndLatestVersions(n int) ([]string, error) {
bs, err := runError("git", "tag", "--sort", "taggerdate")
if err != nil {
@@ -1559,3 +1454,29 @@ func nextPatchVersion(ver string) string {
digits[len(digits)-1] = strconv.Itoa(n + 1)
return strings.Join(digits, ".")
}
func writeCompatJSON() {
bs, err := os.ReadFile("compat.yaml")
if err != nil {
log.Fatal("Reading compat.yaml:", err)
}
var entries []upgrade.ReleaseCompatibility
if err := yaml.Unmarshal(bs, &entries); err != nil {
log.Fatal("Parsing compat.yaml:", err)
}
rt := runtime.Version()
for _, e := range entries {
if !strings.HasPrefix(rt, e.Runtime) {
continue
}
bs, _ := json.MarshalIndent(e, "", " ")
if err := os.WriteFile("compat.json", bs, 0o644); err != nil {
log.Fatal("Writing compat.json:", err)
}
return
}
log.Fatalf("runtime %v not found in compat.yaml", rt)
}
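The matching in writeCompatJSON is a plain prefix test against the Go runtime version, so an entry declaring a runtime of "go1.22" covers every patch release built with that toolchain. A minimal illustration of that prefix semantics:
```
package main

import (
	"fmt"
	"strings"
)

func main() {
	// An entry for "go1.22" matches any go1.22.x toolchain but not
	// go1.23; writeCompatJSON stops at the first such match.
	fmt.Println(strings.HasPrefix("go1.22.4", "go1.22")) // true
	fmt.Println(strings.HasPrefix("go1.23.0", "go1.22")) // false
}
```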


@@ -23,10 +23,11 @@ case "${1:-default}" in
prerelease)
script authors
script copyrights
build weblate
pushd man ; ./refresh.sh ; popd
git add -A gui man AUTHORS
git commit -m 'gui, man, authors: Update docs, translations, and contributors'
git commit -m 'chore(gui, man, authors): update docs, translations, and contributors'
;;
*)


@@ -7,6 +7,7 @@
package main
import (
"crypto/sha256"
"errors"
"flag"
"fmt"
@@ -16,7 +17,6 @@ import (
"path/filepath"
_ "github.com/syncthing/syncthing/lib/automaxprocs"
"github.com/syncthing/syncthing/lib/sha256"
)
func main() {


@@ -15,6 +15,9 @@ import (
"strings"
"time"
"google.golang.org/protobuf/proto"
"github.com/syncthing/syncthing/internal/gen/discoproto"
_ "github.com/syncthing/syncthing/lib/automaxprocs"
"github.com/syncthing/syncthing/lib/beacon"
"github.com/syncthing/syncthing/lib/discover"
@@ -75,20 +78,21 @@ func recv(bc beacon.Interface) {
continue
}
var ann discover.Announce
ann.Unmarshal(data[4:])
var ann discoproto.Announce
proto.Unmarshal(data[4:], &ann)
if ann.ID == myID {
id, _ := protocol.DeviceIDFromBytes(ann.Id)
if id == myID {
// This is one of our own fake packets, don't print it.
continue
}
// Print announcement details for the first packet from a given
// device ID and source address, or if -all was given.
key := ann.ID.String() + src.String()
key := id.String() + src.String()
if all || !seen[key] {
log.Printf("Announcement from %v\n", src)
log.Printf(" %v at %s\n", ann.ID, strings.Join(ann.Addresses, ", "))
log.Printf(" %v at %s\n", id, strings.Join(ann.Addresses, ", "))
seen[key] = true
}
}
@@ -96,11 +100,11 @@ func recv(bc beacon.Interface) {
// sends fake discovery announcements once every second
func send(bc beacon.Interface) {
ann := discover.Announce{
ID: myID,
ann := &discoproto.Announce{
Id: myID[:],
Addresses: []string{"tcp://fake.example.com:12345"},
}
bs, _ := ann.Marshal()
bs, _ := proto.Marshal(ann)
for {
bc.Send(bs)


@@ -7,6 +7,7 @@
package main
import (
"crypto/sha256"
"flag"
"fmt"
"io"
@@ -14,7 +15,6 @@ import (
"time"
_ "github.com/syncthing/syncthing/lib/automaxprocs"
"github.com/syncthing/syncthing/lib/sha256"
)
func main() {


@@ -8,6 +8,7 @@ package main
import (
"bytes"
"cmp"
"compress/gzip"
"context"
"io"
@@ -15,7 +16,7 @@ import (
"math"
"os"
"path/filepath"
"sort"
"slices"
"time"
)
@@ -177,8 +178,8 @@ func (d *diskStore) inventory() error {
})
return nil
})
sort.Slice(d.currentFiles, func(i, j int) bool {
return d.currentFiles[i].mtime < d.currentFiles[j].mtime
slices.SortFunc(d.currentFiles, func(a, b currentFile) int {
return cmp.Compare(a.mtime, b.mtime)
})
var oldest time.Duration
if len(d.currentFiles) > 0 {


@@ -14,6 +14,7 @@ package main
import (
"context"
"crypto/sha256"
"encoding/json"
"fmt"
"io"
@@ -21,25 +22,29 @@ import (
"net/http"
"os"
"path/filepath"
"regexp"
"strings"
"github.com/alecthomas/kong"
raven "github.com/getsentry/raven-go"
"github.com/prometheus/client_golang/prometheus/promhttp"
_ "github.com/syncthing/syncthing/lib/automaxprocs"
"github.com/syncthing/syncthing/lib/sha256"
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/ur"
)
const maxRequestSize = 1 << 20 // 1 MiB
type cli struct {
Dir string `help:"Parent directory to store crash and failure reports in" env:"REPORTS_DIR" default:"."`
DSN string `help:"Sentry DSN" env:"SENTRY_DSN"`
Listen string `help:"HTTP listen address" default:":8080" env:"LISTEN_ADDRESS"`
MaxDiskFiles int `help:"Maximum number of reports on disk" default:"100000" env:"MAX_DISK_FILES"`
MaxDiskSizeMB int64 `help:"Maximum disk space to use for reports" default:"1024" env:"MAX_DISK_SIZE_MB"`
SentryQueue int `help:"Maximum number of reports to queue for sending to Sentry" default:"64" env:"SENTRY_QUEUE"`
DiskQueue int `help:"Maximum number of reports to queue for writing to disk" default:"64" env:"DISK_QUEUE"`
Dir string `help:"Parent directory to store crash and failure reports in" env:"REPORTS_DIR" default:"."`
DSN string `help:"Sentry DSN" env:"SENTRY_DSN"`
Listen string `help:"HTTP listen address" default:":8080" env:"LISTEN_ADDRESS"`
MaxDiskFiles int `help:"Maximum number of reports on disk" default:"100000" env:"MAX_DISK_FILES"`
MaxDiskSizeMB int64 `help:"Maximum disk space to use for reports" default:"1024" env:"MAX_DISK_SIZE_MB"`
SentryQueue int `help:"Maximum number of reports to queue for sending to Sentry" default:"64" env:"SENTRY_QUEUE"`
DiskQueue int `help:"Maximum number of reports to queue for writing to disk" default:"64" env:"DISK_QUEUE"`
MetricsListen string `help:"HTTP listen address for metrics" default:":8081" env:"METRICS_LISTEN_ADDRESS"`
IngorePatterns string `help:"File containing ignore patterns (regexp)" env:"IGNORE_PATTERNS" type:"existingfile"`
}
func main() {
@@ -62,19 +67,38 @@ func main() {
}
go ss.Serve(context.Background())
var ip *ignorePatterns
if params.IngorePatterns != "" {
var err error
ip, err = loadIgnorePatterns(params.IngorePatterns)
if err != nil {
log.Fatalf("Failed to load ignore patterns: %v", err)
}
}
cr := &crashReceiver{
store: ds,
sentry: ss,
ignore: ip,
}
mux.Handle("/", cr)
mux.HandleFunc("/ping", func(w http.ResponseWriter, req *http.Request) {
w.Write([]byte("OK"))
})
mux.Handle("/metrics", promhttp.Handler())
if params.MetricsListen != "" {
mmux := http.NewServeMux()
mmux.Handle("/metrics", promhttp.Handler())
go func() {
if err := http.ListenAndServe(params.MetricsListen, mmux); err != nil {
log.Fatalln("HTTP serve metrics:", err)
}
}()
}
if params.DSN != "" {
mux.HandleFunc("/newcrash/failure", handleFailureFn(params.DSN, filepath.Join(params.Dir, "failure_reports")))
mux.HandleFunc("/newcrash/failure", handleFailureFn(params.DSN, filepath.Join(params.Dir, "failure_reports"), ip))
}
log.SetOutput(os.Stdout)
@@ -83,7 +107,7 @@ func main() {
}
}
func handleFailureFn(dsn, failureDir string) func(w http.ResponseWriter, req *http.Request) {
func handleFailureFn(dsn, failureDir string, ignore *ignorePatterns) func(w http.ResponseWriter, req *http.Request) {
return func(w http.ResponseWriter, req *http.Request) {
result := "failure"
defer func() {
@@ -98,6 +122,11 @@ func handleFailureFn(dsn, failureDir string) func(w http.ResponseWriter, req *ht
return
}
if ignore.match(bs) {
result = "ignored"
return
}
var reports []ur.FailureReport
err = json.Unmarshal(bs, &reports)
if err != nil {
@@ -110,7 +139,7 @@ func handleFailureFn(dsn, failureDir string) func(w http.ResponseWriter, req *ht
return
}
version, err := parseVersion(reports[0].Version)
version, err := build.ParseVersion(reports[0].Version)
if err != nil {
http.Error(w, err.Error(), 400)
return
@@ -158,3 +187,42 @@ func saveFailureWithGoroutines(data ur.FailureData, failureDir string) (string,
}
return reportServer + path, nil
}
type ignorePatterns struct {
patterns []*regexp.Regexp
}
func loadIgnorePatterns(path string) (*ignorePatterns, error) {
bs, err := os.ReadFile(path)
if err != nil {
return nil, err
}
var patterns []*regexp.Regexp
for _, line := range strings.Split(string(bs), "\n") {
line = strings.TrimSpace(line)
if line == "" {
continue
}
re, err := regexp.Compile(line)
if err != nil {
return nil, err
}
patterns = append(patterns, re)
}
log.Printf("Loaded %d ignore patterns", len(patterns))
return &ignorePatterns{patterns: patterns}, nil
}
func (i *ignorePatterns) match(report []byte) bool {
if i == nil {
return false
}
for _, re := range i.patterns {
if re.Match(report) {
return true
}
}
return false
}
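A short usage sketch for the pattern loader above, assuming loadIgnorePatterns and match are in scope in the same package; the file name and report body here are hypothetical examples only:
```
package main

import (
	"fmt"
	"log"
)

func main() {
	// One regexp per line; blank lines are skipped. A nil receiver
	// (no patterns configured) simply matches nothing.
	ip, err := loadIgnorePatterns("ignore-patterns.txt")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(ip.match([]byte("panic: runtime error: index out of range")))
}
```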


@@ -18,6 +18,7 @@ import (
raven "github.com/getsentry/raven-go"
"github.com/maruel/panicparse/v2/stack"
"github.com/syncthing/syncthing/lib/build"
)
const reportServer = "https://crash.syncthing.net/report/"
@@ -105,7 +106,7 @@ func parseCrashReport(path string, report []byte) (*raven.Packet, error) {
return nil, errors.New("no first line")
}
version, err := parseVersion(string(parts[0]))
version, err := build.ParseVersion(string(parts[0]))
if err != nil {
return nil, err
}
@@ -143,12 +144,12 @@ func parseCrashReport(path string, report []byte) (*raven.Packet, error) {
}
// Lock the source code loader to the version we are processing here.
if version.commit != "" {
if version.Commit != "" {
// We have a commit hash, so we know exactly which source to use
loader.LockWithVersion(version.commit)
} else if strings.HasPrefix(version.tag, "v") {
loader.LockWithVersion(version.Commit)
} else if strings.HasPrefix(version.Tag, "v") {
// Lets hope the tag is close enough
loader.LockWithVersion(version.tag)
loader.LockWithVersion(version.Tag)
} else {
// Last resort
loader.LockWithVersion("main")
@@ -215,106 +216,26 @@ func crashReportFingerprint(message string) []string {
return []string{"{{ default }}", message}
}
// syncthing v1.1.4-rc.1+30-g6aaae618-dirty-crashrep "Erbium Earthworm" (go1.12.5 darwin-amd64) jb@kvin.kastelo.net 2019-05-23 16:08:14 UTC [foo, bar]
// or, somewhere along the way the "+" in the version tag disappeared:
// syncthing v1.23.7-dev.26.gdf7b56ae.dirty-stversionextra "Fermium Flea" (go1.20.5 darwin-arm64) jb@ok.kastelo.net 2023-07-12 06:55:26 UTC [Some Wrapper, purego, stnoupgrade]
var (
longVersionRE = regexp.MustCompile(`syncthing\s+(v[^\s]+)\s+"([^"]+)"\s\(([^\s]+)\s+([^-]+)-([^)]+)\)\s+([^\s]+)[^\[]*(?:\[(.+)\])?$`)
gitExtraRE = regexp.MustCompile(`\.\d+\.g[0-9a-f]+`) // ".1.g6aaae618"
gitExtraSepRE = regexp.MustCompile(`[.-]`) // dot or dash
)
type version struct {
version string // "v1.1.4-rc.1+30-g6aaae618-dirty-crashrep"
tag string // "v1.1.4-rc.1"
commit string // "6aaae618", blank when absent
codename string // "Erbium Earthworm"
runtime string // "go1.12.5"
goos string // "darwin"
goarch string // "amd64"
builder string // "jb@kvin.kastelo.net"
extra []string // "foo", "bar"
}
func (v version) environment() string {
if v.commit != "" {
return "Development"
}
if strings.Contains(v.tag, "-rc.") {
return "Candidate"
}
if strings.Contains(v.tag, "-") {
return "Beta"
}
return "Stable"
}
func parseVersion(line string) (version, error) {
m := longVersionRE.FindStringSubmatch(line)
if len(m) == 0 {
return version{}, errors.New("unintelligeble version string")
}
v := version{
version: m[1],
codename: m[2],
runtime: m[3],
goos: m[4],
goarch: m[5],
builder: m[6],
}
// Split the version tag into tag and commit. This is old style
// v1.2.3-something.4+11-g12345678 or newer with just dots
// v1.2.3-something.4.11.g12345678 or v1.2.3-dev.11.g12345678.
parts := []string{v.version}
if strings.Contains(v.version, "+") {
parts = strings.Split(v.version, "+")
} else {
idxs := gitExtraRE.FindStringIndex(v.version)
if len(idxs) > 0 {
parts = []string{v.version[:idxs[0]], v.version[idxs[0]+1:]}
}
}
v.tag = parts[0]
if len(parts) > 1 {
fields := gitExtraSepRE.Split(parts[1], -1)
if len(fields) >= 2 && strings.HasPrefix(fields[1], "g") {
v.commit = fields[1][1:]
}
}
if len(m) >= 8 && m[7] != "" {
tags := strings.Split(m[7], ",")
for i := range tags {
tags[i] = strings.TrimSpace(tags[i])
}
v.extra = tags
}
return v, nil
}
func packet(version version, reportType string) *raven.Packet {
func packet(version build.VersionParts, reportType string) *raven.Packet {
pkt := &raven.Packet{
Platform: "go",
Release: version.tag,
Environment: version.environment(),
Release: version.Tag,
Environment: version.Environment(),
Tags: raven.Tags{
raven.Tag{Key: "version", Value: version.version},
raven.Tag{Key: "tag", Value: version.tag},
raven.Tag{Key: "codename", Value: version.codename},
raven.Tag{Key: "runtime", Value: version.runtime},
raven.Tag{Key: "goos", Value: version.goos},
raven.Tag{Key: "goarch", Value: version.goarch},
raven.Tag{Key: "builder", Value: version.builder},
raven.Tag{Key: "version", Value: version.Version},
raven.Tag{Key: "tag", Value: version.Tag},
raven.Tag{Key: "codename", Value: version.Codename},
raven.Tag{Key: "runtime", Value: version.Runtime},
raven.Tag{Key: "goos", Value: version.GOOS},
raven.Tag{Key: "goarch", Value: version.GOARCH},
raven.Tag{Key: "builder", Value: version.Builder},
raven.Tag{Key: "report_type", Value: reportType},
},
}
if version.commit != "" {
pkt.Tags = append(pkt.Tags, raven.Tag{Key: "commit", Value: version.commit})
if version.Commit != "" {
pkt.Tags = append(pkt.Tags, raven.Tag{Key: "commit", Value: version.Commit})
}
for _, tag := range version.extra {
for _, tag := range version.Extra {
pkt.Tags = append(pkt.Tags, raven.Tag{Key: tag, Value: "1"})
}
return pkt
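Since parsing now lives in lib/build, a caller looks roughly like the sketch below. It uses only what this diff shows: build.ParseVersion returning a value with Tag, Commit, and an Environment() method; the long-version string is taken from the removed test data further down:
```
package main

import (
	"fmt"
	"log"

	"github.com/syncthing/syncthing/lib/build"
)

func main() {
	v, err := build.ParseVersion(`syncthing v1.1.4-rc.1+30-g6aaae618-dirty-crashrep "Erbium Earthworm" (go1.12.5 darwin-amd64) jb@kvin.kastelo.net 2019-05-23 16:08:14 UTC`)
	if err != nil {
		log.Fatal(err)
	}
	// Tag "v1.1.4-rc.1", commit "6aaae618"; per the environment logic
	// above, a present commit hash makes this "Development".
	fmt.Println(v.Tag, v.Commit, v.Environment())
}
```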


@@ -12,66 +12,6 @@ import (
"testing"
)
func TestParseVersion(t *testing.T) {
cases := []struct {
longVersion string
parsed version
}{
{
longVersion: `syncthing v1.1.4-rc.1+30-g6aaae618-dirty-crashrep "Erbium Earthworm" (go1.12.5 darwin-amd64) jb@kvin.kastelo.net 2019-05-23 16:08:14 UTC`,
parsed: version{
version: "v1.1.4-rc.1+30-g6aaae618-dirty-crashrep",
tag: "v1.1.4-rc.1",
commit: "6aaae618",
codename: "Erbium Earthworm",
runtime: "go1.12.5",
goos: "darwin",
goarch: "amd64",
builder: "jb@kvin.kastelo.net",
},
},
{
longVersion: `syncthing v1.1.4-rc.1+30-g6aaae618-dirty-crashrep "Erbium Earthworm" (go1.12.5 darwin-amd64) jb@kvin.kastelo.net 2019-05-23 16:08:14 UTC [foo, bar]`,
parsed: version{
version: "v1.1.4-rc.1+30-g6aaae618-dirty-crashrep",
tag: "v1.1.4-rc.1",
commit: "6aaae618",
codename: "Erbium Earthworm",
runtime: "go1.12.5",
goos: "darwin",
goarch: "amd64",
builder: "jb@kvin.kastelo.net",
extra: []string{"foo", "bar"},
},
},
{
longVersion: `syncthing v1.23.7-dev.26.gdf7b56ae-stversionextra "Fermium Flea" (go1.20.5 darwin-arm64) jb@ok.kastelo.net 2023-07-12 06:55:26 UTC [Some Wrapper, purego, stnoupgrade]`,
parsed: version{
version: "v1.23.7-dev.26.gdf7b56ae-stversionextra",
tag: "v1.23.7-dev",
commit: "df7b56ae",
codename: "Fermium Flea",
runtime: "go1.20.5",
goos: "darwin",
goarch: "arm64",
builder: "jb@ok.kastelo.net",
extra: []string{"Some Wrapper", "purego", "stnoupgrade"},
},
},
}
for _, tc := range cases {
v, err := parseVersion(tc.longVersion)
if err != nil {
t.Errorf("%s\nerror: %v\n", tc.longVersion, err)
continue
}
if fmt.Sprint(v) != fmt.Sprint(tc.parsed) {
t.Errorf("%s\nA: %v\nE: %v\n", tc.longVersion, v, tc.parsed)
}
}
}
func TestParseReport(t *testing.T) {
bs, err := os.ReadFile("_testdata/panic.log")
if err != nil {

View File

@@ -71,7 +71,6 @@ func (l *githubSourceCodeLoader) Load(filename string, line, context int) ([][]b
url := urlPrefix + l.version + filename[idx:]
resp, err := l.client.Get(url)
if err != nil {
fmt.Println("Loading source:", err)
return nil, 0

View File

@@ -12,11 +12,16 @@ import (
"net/http"
"path"
"strings"
"sync"
)
type crashReceiver struct {
store *diskStore
sentry *sentryService
ignore *ignorePatterns
ignoredMut sync.RWMutex
ignored map[string]struct{}
}
func (r *crashReceiver) ServeHTTP(w http.ResponseWriter, req *http.Request) {
@@ -64,6 +69,12 @@ func (r *crashReceiver) serveGet(reportID string, w http.ResponseWriter, _ *http
// serveHead responds to HEAD requests by checking if the named report
// already exists in the system.
func (r *crashReceiver) serveHead(reportID string, w http.ResponseWriter, _ *http.Request) {
r.ignoredMut.RLock()
_, ignored := r.ignored[reportID]
r.ignoredMut.RUnlock()
if ignored {
return // found
}
if !r.store.Exists(reportID) {
http.Error(w, "Not found", http.StatusNotFound)
}
@@ -76,6 +87,15 @@ func (r *crashReceiver) servePut(reportID string, w http.ResponseWriter, req *ht
metricCrashReportsTotal.WithLabelValues(result).Inc()
}()
r.ignoredMut.RLock()
_, ignored := r.ignored[reportID]
r.ignoredMut.RUnlock()
if ignored {
result = "ignored_cached"
io.Copy(io.Discard, req.Body)
return // found
}
// Read at most maxRequestSize of report data.
log.Println("Receiving report", reportID)
lr := io.LimitReader(req.Body, maxRequestSize)
@@ -86,6 +106,17 @@ func (r *crashReceiver) servePut(reportID string, w http.ResponseWriter, req *ht
return
}
if r.ignore.match(bs) {
r.ignoredMut.Lock()
if r.ignored == nil {
r.ignored = make(map[string]struct{})
}
r.ignored[reportID] = struct{}{}
r.ignoredMut.Unlock()
result = "ignored"
return
}
result = "success"
// Store the report

View File

@@ -9,14 +9,13 @@ package main
import (
"bytes"
"compress/gzip"
"crypto/sha256"
"fmt"
"net"
"net/http"
"os"
"path/filepath"
"time"
"github.com/syncthing/syncthing/lib/sha256"
)
// userIDFor returns a string we can use as the user ID for the purpose of
@@ -53,5 +52,5 @@ func compressAndWrite(bs []byte, fullPath string) error {
gw.Close()
// Create an output file with the compressed report
return os.WriteFile(fullPath, buf.Bytes(), 0o644)
}

View File

@@ -10,7 +10,7 @@ to NAT or firewall issues.
There is very little reason why you'd want to run this yourself, as
`relaypoolsrv` is just used for announcement and lookup of public relay
servers. If you are looking to set up a private or a public relay, please
check the documentation for
[relaysrv](https://github.com/syncthing/relaysrv), which also explains how
to join the default public pool.
@@ -21,4 +21,3 @@ See `relaypoolsrv -help` for configuration options.
[oschwald/geoip2-golang](https://github.com/oschwald/geoip2-golang), [oschwald/maxminddb-golang](https://github.com/oschwald/maxminddb-golang), Copyright (C) 2015 [Gregory J. Oschwald](mailto:oschwald@gmail.com).
[lib/pq](https://github.com/lib/pq), Copyright (C) 2011-2013 'pq' Contributors. Portions Copyright (C) 2011 Blake Mizerany.

View File

@@ -4,7 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
//go:generate go run ../../../../script/genassets.go -o gui.files.go ../gui
// Package auto contains auto generated files for web assets.
package auto

View File

@@ -259,7 +259,7 @@
return a.value > b.value ? 1 : -1;
}
$http.get("/endpoint").then(function(response) {
$http.get("/endpoint/full").then(function(response) {
$scope.relays = response.data.relays;
angular.forEach($scope.relays, function(relay) {
@@ -338,7 +338,7 @@
relay.showMarker = function() {
relay.marker.openPopup();
}
relay.hideMarker = function() {
relay.marker.closePopup();
}
@@ -347,7 +347,7 @@
function addCircleToMap(relay) {
console.log(relay.location.latitude)
L.circle([relay.location.latitude, relay.location.longitude],
{
radius: ((relay.stats.bytesProxied * 100) / $scope.totals.bytesProxied) * 10000,
color: "FF0000",

View File

@@ -23,11 +23,11 @@ import (
lru "github.com/hashicorp/golang-lru/v2"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
"github.com/syncthing/syncthing/cmd/strelaypoolsrv/auto"
"github.com/syncthing/syncthing/cmd/infra/strelaypoolsrv/auto"
"github.com/syncthing/syncthing/lib/assets"
_ "github.com/syncthing/syncthing/lib/automaxprocs"
"github.com/syncthing/syncthing/lib/geoip"
"github.com/syncthing/syncthing/lib/httpcache"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/rand"
"github.com/syncthing/syncthing/lib/relay/client"
@@ -51,6 +51,10 @@ type relay struct {
StatsRetrieved time.Time `json:"statsRetrieved"`
}
type relayShort struct {
URL string `json:"url"`
}
type stats struct {
StartTime time.Time `json:"startTime"`
UptimeSeconds int `json:"uptimeSeconds"`
@@ -95,6 +99,7 @@ var (
testCert tls.Certificate
knownRelaysFile = filepath.Join(os.TempDir(), "strelaypoolsrv_known_relays")
listen = ":80"
metricsListen = ":8081"
dir string
evictionTime = time.Hour
debug bool
@@ -106,6 +111,7 @@ var (
requestProcessors = 8
geoipLicenseKey = os.Getenv("GEOIP_LICENSE_KEY")
geoipAccountID, _ = strconv.Atoi(os.Getenv("GEOIP_ACCOUNT_ID"))
maxRelaysReturned = 100
requests chan request
@@ -125,6 +131,7 @@ func main() {
log.SetFlags(log.Lshortfile)
flag.StringVar(&listen, "listen", listen, "Listen address")
flag.StringVar(&metricsListen, "metrics-listen", metricsListen, "Metrics listen address")
flag.StringVar(&dir, "keys", dir, "Directory where http-cert.pem and http-key.pem is stored for TLS listening")
flag.BoolVar(&debug, "debug", debug, "Enable debug output")
flag.DurationVar(&evictionTime, "eviction", evictionTime, "After how long the relay is evicted")
@@ -136,6 +143,7 @@ func main() {
flag.IntVar(&requestQueueLen, "request-queue", requestQueueLen, "Queue length for incoming test requests")
flag.IntVar(&requestProcessors, "request-processors", requestProcessors, "Number of request processor routines")
flag.StringVar(&geoipLicenseKey, "geoip-license-key", geoipLicenseKey, "License key for GeoIP database")
flag.IntVar(&maxRelaysReturned, "max-relays-returned", maxRelaysReturned, "Maximum number of relays returned for a normal endpoint query")
flag.Parse()
@@ -218,15 +226,40 @@ func main() {
log.Fatalln("listen:", err)
}
if metricsListen != "" {
mmux := http.NewServeMux()
mmux.HandleFunc("/metrics", handleMetrics)
go func() {
if err := http.ListenAndServe(metricsListen, mmux); err != nil {
log.Fatalln("HTTP serve metrics:", err)
}
}()
}
getMux := http.NewServeMux()
getMux.HandleFunc("/", handleAssets)
getMux.HandleFunc("/endpoint", withAPIMetrics(handleEndpointShort))
getMux.HandleFunc("/endpoint/full", withAPIMetrics(handleEndpointFull))
postMux := http.NewServeMux()
postMux.HandleFunc("/endpoint", withAPIMetrics(handleRegister))
handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
switch r.Method {
case http.MethodGet, http.MethodHead, http.MethodOptions:
getMux.ServeHTTP(w, r)
case http.MethodPost:
postMux.ServeHTTP(w, r)
default:
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
}
})
srv := http.Server{
Handler: handler,
ReadTimeout: 10 * time.Second,
}
srv.SetKeepAlivesEnabled(false)
err = srv.Serve(listener)
if err != nil {
@@ -260,39 +293,24 @@ func handleAssets(w http.ResponseWriter, r *http.Request) {
assets.Serve(w, r, as)
}
func withAPIMetrics(next http.HandlerFunc) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
timer := prometheus.NewTimer(apiRequestsSeconds.WithLabelValues(r.Method))
w = NewLoggingResponseWriter(w)
defer func() {
timer.ObserveDuration()
lw := w.(*loggingResponseWriter)
apiRequestsTotal.WithLabelValues(r.Method, strconv.Itoa(lw.statusCode)).Inc()
}()
next(w, r)
}
}
// handleEndpointFull returns the relay list with full metadata and
// statistics. Large, and expensive.
func handleEndpointFull(rw http.ResponseWriter, r *http.Request) {
rw.Header().Set("Content-Type", "application/json; charset=utf-8")
rw.Header().Set("Access-Control-Allow-Origin", "*")
mut.RLock()
relays := make([]*relay, len(permanentRelays)+len(knownRelays))
@@ -300,17 +318,42 @@ func handleGetRequest(rw http.ResponseWriter, r *http.Request) {
copy(relays[n:], knownRelays)
mut.RUnlock()
// Shuffle
rand.Shuffle(relays)
_ = json.NewEncoder(rw).Encode(map[string][]*relay{
"relays": relays,
})
}
// handleEndpointShort returns the relay list with only the URL.
func handleEndpointShort(rw http.ResponseWriter, r *http.Request) {
rw.Header().Set("Content-Type", "application/json; charset=utf-8")
rw.Header().Set("Access-Control-Allow-Origin", "*")
mut.RLock()
relays := make([]relayShort, 0, len(permanentRelays)+len(knownRelays))
for _, r := range append(permanentRelays, knownRelays...) {
relays = append(relays, relayShort{URL: slimURL(r.URL)})
}
mut.RUnlock()
if len(relays) > maxRelaysReturned {
rand.Shuffle(relays)
relays = relays[:maxRelaysReturned]
}
_ = json.NewEncoder(rw).Encode(map[string][]relayShort{
"relays": relays,
})
}
func handleRegister(w http.ResponseWriter, r *http.Request) {
// Get the IP address of the client
rhost := r.RemoteAddr
if ipHeader != "" {
hdr := r.Header.Get(ipHeader)
fields := strings.Split(hdr, ",")
if len(fields) > 0 {
rhost = strings.TrimSpace(fields[len(fields)-1])
}
}
if host, _, err := net.SplitHostPort(rhost); err == nil {
rhost = host
}
@@ -660,3 +703,16 @@ func (b *errorTracker) IsBlocked(host string) bool {
}
return false
}
func slimURL(u string) string {
p, err := url.Parse(u)
if err != nil {
return u
}
newQuery := url.Values{}
if id := p.Query().Get("id"); id != "" {
newQuery.Set("id", id)
}
p.RawQuery = newQuery.Encode()
return p.String()
}

View File

@@ -42,7 +42,7 @@ func TestHandleGetRequest(t *testing.T) {
w := httptest.NewRecorder()
w.Body = new(bytes.Buffer)
handleEndpointFull(w, httptest.NewRequest("GET", "/", nil))
result := make(map[string][]*relay)
err := json.NewDecoder(w.Body).Decode(&result)
@@ -92,3 +92,18 @@ func TestCanonicalizeQueryValues(t *testing.T) {
t.Errorf("expected %q, got %q", exp, str)
}
}
func TestSlimURL(t *testing.T) {
cases := []struct {
in, out string
}{
{"http://example.com/", "http://example.com/"},
{"relay://192.0.2.42:22067/?globalLimitBps=0&id=EIC6B3M-EIC6B3M-EIC6B3M-EIC6B3M-EIC6B3M-EIC6B3M-EIC6B3M-EIC6B3M&networkTimeout=2m0s&pingInterval=1m0s&providedBy=Test&sessionLimitBps=0&statusAddr=%3A22070", "relay://192.0.2.42:22067/?id=EIC6B3M-EIC6B3M-EIC6B3M-EIC6B3M-EIC6B3M-EIC6B3M-EIC6B3M-EIC6B3M"},
}
for _, c := range cases {
if got := slimURL(c.in); got != c.out {
t.Errorf("expected %q, got %q", c.out, got)
}
}
}

View File

@@ -6,27 +6,12 @@ import (
"encoding/json"
"net"
"net/http"
"os"
"time"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/collectors"
"github.com/syncthing/syncthing/lib/sync"
)
func init() {
processCollectorOpts := collectors.ProcessCollectorOpts{
Namespace: "syncthing_relaypoolsrv",
PidFn: func() (int, error) {
return os.Getpid(), nil
},
}
prometheus.MustRegister(
collectors.NewProcessCollector(processCollectorOpts),
)
}
var (
statusClient = http.Client{
Timeout: 5 * time.Second,

View File

@@ -0,0 +1,374 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"context"
"encoding/json"
"fmt"
"io"
"log/slog"
"net"
"net/http"
"os"
"regexp"
"sort"
"strconv"
"strings"
"sync"
"time"
"github.com/alecthomas/kong"
"github.com/prometheus/client_golang/prometheus/promhttp"
_ "github.com/syncthing/syncthing/lib/automaxprocs"
"github.com/syncthing/syncthing/lib/httpcache"
"github.com/syncthing/syncthing/lib/upgrade"
)
type cli struct {
Listen string `default:":8080" help:"Listen address"`
MetricsListen string `default:":8082" help:"Listen address for metrics"`
URL string `short:"u" default:"https://api.github.com/repos/syncthing/syncthing/releases?per_page=25" help:"GitHub releases url"`
Forward []string `short:"f" help:"Forwarded pages, format: /path->https://example.com/url"`
CacheTime time.Duration `default:"15m" help:"Cache time"`
}
func main() {
var params cli
kong.Parse(&params)
slog.SetDefault(slog.New(slog.NewTextHandler(os.Stdout, &slog.HandlerOptions{
Level: slog.LevelInfo,
})))
if err := server(&params); err != nil {
fmt.Printf("Error: %v\n", err)
os.Exit(1)
}
}
func server(params *cli) error {
if params.MetricsListen != "" {
mux := http.NewServeMux()
mux.Handle("/metrics", promhttp.Handler())
metricsListen, err := net.Listen("tcp", params.MetricsListen)
if err != nil {
return fmt.Errorf("metrics: %w", err)
}
slog.Info("Metrics listener started", "addr", params.MetricsListen)
go func() {
if err := http.Serve(metricsListen, mux); err != nil {
slog.Warn("Metrics server returned", "error", err)
}
}()
}
cache := &cachedReleases{url: params.URL}
if err := cache.Update(context.Background()); err != nil {
return fmt.Errorf("initial cache update: %w", err)
} else {
slog.Info("Initial cache update done")
}
go func() {
for range time.NewTicker(params.CacheTime).C {
slog.Info("Refreshing cached releases", "url", params.URL)
if err := cache.Update(context.Background()); err != nil {
slog.Error("Failed to refresh cached releases", "url", params.URL, "error", err)
}
}
}()
ghRels := &githubReleases{cache: cache}
mux := http.NewServeMux()
mux.HandleFunc("/ping", ghRels.servePing)
mux.HandleFunc("/meta.json", ghRels.serveReleases)
for _, fwd := range params.Forward {
path, url, ok := strings.Cut(fwd, "->")
if !ok {
return fmt.Errorf("invalid forward: %q", fwd)
}
slog.Info("Forwarding", "from", path, "to", url)
name := strings.ReplaceAll(path, "/", "_")
mux.Handle(path, httpcache.SinglePath(&proxy{name: name, url: url}, params.CacheTime))
}
srv := &http.Server{
Addr: params.Listen,
Handler: mux,
ReadTimeout: 5 * time.Second,
WriteTimeout: 10 * time.Second,
}
srv.SetKeepAlivesEnabled(false)
srvListener, err := net.Listen("tcp", params.Listen)
if err != nil {
return fmt.Errorf("listen: %w", err)
}
slog.Info("Main listener started", "addr", params.Listen)
return srv.Serve(srvListener)
}
type githubReleases struct {
cache *cachedReleases
}
func (p *githubReleases) servePing(w http.ResponseWriter, req *http.Request) {
rels := p.cache.Releases()
if len(rels) == 0 {
http.Error(w, "No releases available", http.StatusServiceUnavailable)
return
}
w.Header().Set("Syncthing-Num-Releases", strconv.Itoa(len(rels)))
w.WriteHeader(http.StatusOK)
}
func (p *githubReleases) serveReleases(w http.ResponseWriter, req *http.Request) {
rels := p.cache.Releases()
ua := req.Header.Get("User-Agent")
osv := req.Header.Get("Syncthing-Os-Version")
if ua != "" && osv != "" {
// We should determine the compatibility of the releases.
rels = filterForCompabitility(rels, ua, osv)
} else {
metricFilterCalls.WithLabelValues("no-ua-or-osversion").Inc()
}
rels = filterForLatest(rels)
w.Header().Set("Content-Type", "application/json; charset=utf-8")
w.Header().Set("Access-Control-Allow-Origin", "*")
w.Header().Set("Access-Control-Allow-Methods", "GET")
w.Header().Set("Cache-Control", "public, max-age=900")
w.Header().Set("Vary", "User-Agent, Syncthing-Os-Version")
_ = json.NewEncoder(w).Encode(rels)
metricUpgradeChecks.Inc()
}
type proxy struct {
name string
url string
}
func (p *proxy) ServeHTTP(w http.ResponseWriter, req *http.Request) {
req, err := http.NewRequestWithContext(req.Context(), http.MethodGet, p.url, nil)
if err != nil {
metricHTTPRequests.WithLabelValues(p.name, "error").Inc()
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
resp, err := http.DefaultClient.Do(req)
if err != nil {
metricHTTPRequests.WithLabelValues(p.name, "error").Inc()
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
defer resp.Body.Close()
metricHTTPRequests.WithLabelValues(p.name, "success").Inc()
ct := resp.Header.Get("Content-Type")
w.Header().Set("Content-Type", ct)
if resp.StatusCode == http.StatusOK {
w.Header().Set("Cache-Control", "public, max-age=900")
w.Header().Set("Access-Control-Allow-Origin", "*")
w.Header().Set("Access-Control-Allow-Methods", "GET")
}
w.WriteHeader(resp.StatusCode)
if strings.HasPrefix(ct, "application/json") {
// Special JSON handling; clean it up a bit.
var v interface{}
if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
_ = json.NewEncoder(w).Encode(v)
} else {
_, _ = io.Copy(w, resp.Body)
}
}
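For context, a sketch of how a -f forward is used end to end (path, target and binary invocation invented for the example):
// Sketch: one forwarded page, split by server() on "->" into a path
// and a target URL:
//
//	stupgrades -f "/nightly.json->https://example.com/nightly/meta.json"
//
// GET /nightly.json is then answered by this proxy, wrapped in
// httpcache.SinglePath with the configured CacheTime (default 15m).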
// filterForLatest returns the latest stable and prerelease only. If the
// stable version is newer (comes first in the list) there is no need to go
// looking for a prerelease at all.
func filterForLatest(rels []upgrade.Release) []upgrade.Release {
var filtered []upgrade.Release
havePre := make(map[string]bool)
haveStable := make(map[string]bool)
for _, rel := range rels {
major, _, _ := strings.Cut(rel.Tag, ".")
if !rel.Prerelease && !haveStable[major] {
// Remember the first non-pre for each major
filtered = append(filtered, rel)
haveStable[major] = true
continue
}
if rel.Prerelease && !havePre[major] && !haveStable[major] {
// We remember the first prerelease we find, unless we've
// already found a non-pre of the same major.
filtered = append(filtered, rel)
havePre[major] = true
}
}
return filtered
}
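For readers following the filter logic, a minimal sketch (release tags invented for the example) of how filterForLatest treats a newest-first list:
// Sketch, not repository code: filterForLatest on an invented,
// newest-first release list.
rels := []upgrade.Release{
	{Tag: "v2.0.0-rc.1", Prerelease: true},
	{Tag: "v1.29.4-rc.2", Prerelease: true},
	{Tag: "v1.29.3"},
	{Tag: "v1.28.0"},
}
got := filterForLatest(rels)
// got keeps v2.0.0-rc.1 (first pre of major "v2", no stable v2 yet),
// v1.29.4-rc.2 (pre seen before the first v1 stable) and v1.29.3
// (first stable of major "v1"); v1.28.0 is dropped.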
var userAgentOSArchExp = regexp.MustCompile(`^syncthing.*\(.+ (\w+)-(\w+)\)$`)
func filterForCompabitility(rels []upgrade.Release, ua, osv string) []upgrade.Release {
osArch := userAgentOSArchExp.FindStringSubmatch(ua)
if len(osArch) != 3 {
metricFilterCalls.WithLabelValues("bad-os-arch").Inc()
return rels
}
os := osArch[1]
var filtered []upgrade.Release
for _, rel := range rels {
if rel.Compatibility == nil {
// No requirements means it's compatible with everything.
filtered = append(filtered, rel)
continue
}
req, ok := rel.Compatibility.Requirements[os]
if !ok {
// No entry for the current OS means it's compatible.
filtered = append(filtered, rel)
continue
}
if upgrade.CompareVersions(osv, req) >= 0 {
filtered = append(filtered, rel)
continue
}
}
if len(filtered) != len(rels) {
metricFilterCalls.WithLabelValues("filtered").Inc()
} else {
metricFilterCalls.WithLabelValues("unchanged").Inc()
}
return filtered
}
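For orientation, a small sketch of what the User-Agent regexp above extracts (the UA string, codename included, is invented):
// Sketch: userAgentOSArchExp against a typical client User-Agent.
m := userAgentOSArchExp.FindStringSubmatch(
	`syncthing v1.27.8 "Gold Grasshopper" (go1.22.3 linux-amd64)`)
// m[1] == "linux", m[2] == "amd64". Only the OS is used, to look up
// rel.Compatibility.Requirements["linux"] and compare it against the
// Syncthing-Os-Version header with upgrade.CompareVersions.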
type cachedReleases struct {
url string
mut sync.RWMutex
current []upgrade.Release
latestRel, latestPre string
}
func (c *cachedReleases) Releases() []upgrade.Release {
c.mut.RLock()
defer c.mut.RUnlock()
return c.current
}
func (c *cachedReleases) Update(ctx context.Context) error {
rels, err := fetchGithubReleases(ctx, c.url)
if err != nil {
return err
}
latestRel, latestPre := "", ""
for _, rel := range rels {
if !rel.Prerelease && latestRel == "" {
latestRel = rel.Tag
}
if rel.Prerelease && latestPre == "" {
latestPre = rel.Tag
}
if latestRel != "" && latestPre != "" {
break
}
}
c.mut.Lock()
c.current = rels
if latestRel != c.latestRel || latestPre != c.latestPre {
metricLatestReleaseInfo.DeleteLabelValues(c.latestRel, c.latestPre)
metricLatestReleaseInfo.WithLabelValues(latestRel, latestPre).Set(1)
c.latestRel = latestRel
c.latestPre = latestPre
}
c.mut.Unlock()
return nil
}
func fetchGithubReleases(ctx context.Context, url string) ([]upgrade.Release, error) {
req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
if err != nil {
metricHTTPRequests.WithLabelValues("github-releases", "error").Inc()
return nil, err
}
resp, err := http.DefaultClient.Do(req)
if err != nil {
metricHTTPRequests.WithLabelValues("github-releases", "error").Inc()
return nil, err
}
defer resp.Body.Close()
var rels []upgrade.Release
if err := json.NewDecoder(resp.Body).Decode(&rels); err != nil {
metricHTTPRequests.WithLabelValues("github-releases", "error").Inc()
return nil, err
}
metricHTTPRequests.WithLabelValues("github-releases", "success").Inc()
// Move the URL used for browser downloads to the URL field, and remove
// the browser URL field. This avoids going via the GitHub API for
// downloads, since Syncthing uses the URL field.
for _, rel := range rels {
for j, asset := range rel.Assets {
rel.Assets[j].URL = asset.BrowserURL
rel.Assets[j].BrowserURL = ""
}
}
addReleaseCompatibility(ctx, rels)
sort.Sort(upgrade.SortByRelease(rels))
return rels, nil
}
func addReleaseCompatibility(ctx context.Context, rels []upgrade.Release) {
for i := range rels {
rel := &rels[i]
for i, asset := range rel.Assets {
if asset.Name != "compat.json" {
continue
}
// Load compat.json into the Compatibility field
req, err := http.NewRequestWithContext(ctx, http.MethodGet, asset.URL, nil)
if err != nil {
metricHTTPRequests.WithLabelValues("compat-json", "error").Inc()
break
}
resp, err := http.DefaultClient.Do(req)
if err != nil {
metricHTTPRequests.WithLabelValues("compat-json", "error").Inc()
break
}
if resp.StatusCode != http.StatusOK {
metricHTTPRequests.WithLabelValues("compat-json", "error").Inc()
resp.Body.Close()
break
}
_ = json.NewDecoder(io.LimitReader(resp.Body, 10<<10)).Decode(&rel.Compatibility)
metricHTTPRequests.WithLabelValues("compat-json", "success").Inc()
resp.Body.Close()
// Remove compat.json from the asset list since it's been processed
rel.Assets = append(rel.Assets[:i], rel.Assets[i+1:]...)
break
}
}
}
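The shape of compat.json itself is not shown in this diff; as a hedged illustration only, the filter above would be satisfied by a payload along these lines (field names assumed, not confirmed by the source):
// Hypothetical compat.json body, shaped after what
// filterForCompabitility reads via rel.Compatibility.Requirements[os];
// the real file's field names may differ.
//
//	{"requirements": {"linux": "3.2.0", "windows": "10.0", "darwin": "20.0"}}
//
// A linux client reporting Syncthing-Os-Version "2.6.32" would then have
// this release filtered out, since CompareVersions("2.6.32", "3.2.0") < 0.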

View File

@@ -0,0 +1,36 @@
// Copyright (C) 2024 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
)
var (
metricUpgradeChecks = promauto.NewCounter(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "upgrade",
Name: "metadata_requests",
})
metricFilterCalls = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "upgrade",
Name: "filter_calls",
}, []string{"result"})
metricHTTPRequests = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "upgrade",
Name: "http_requests",
}, []string{"target", "result"})
metricLatestReleaseInfo = promauto.NewGaugeVec(prometheus.GaugeOpts{
Namespace: "syncthing",
Subsystem: "upgrade",
Name: "latest_release_info",
Help: "Release information",
}, []string{"latest_release", "latest_pre"})
)

View File

@@ -8,22 +8,22 @@ package main
import (
"log"
"log/slog"
"os"
"github.com/alecthomas/kong"
"github.com/syncthing/syncthing/cmd/ursrv/aggregate"
"github.com/syncthing/syncthing/cmd/ursrv/serve"
"github.com/syncthing/syncthing/cmd/infra/ursrv/serve"
_ "github.com/syncthing/syncthing/lib/automaxprocs"
)
type CLI struct {
Aggregate aggregate.CLI `cmd:""`
Serve serve.CLI `cmd:"" default:""`
}
func main() {
log.SetFlags(log.Ltime | log.Ldate | log.Lshortfile)
log.SetOutput(os.Stdout)
slog.SetDefault(slog.New(slog.NewTextHandler(os.Stdout, &slog.HandlerOptions{
Level: slog.LevelInfo,
})))
var cli CLI
ctx := kong.Parse(&cli)

View File

@@ -0,0 +1,46 @@
// Copyright (C) 2023 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package serve
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
)
var (
metricReportsTotal = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "ursrv_v2",
Name: "incoming_reports_total",
}, []string{"result"})
metricsCollectsTotal = promauto.NewCounter(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "ursrv_v2",
Name: "collects_total",
})
metricsCollectSecondsTotal = promauto.NewCounter(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "ursrv_v2",
Name: "collect_seconds_total",
})
metricsCollectSecondsLast = promauto.NewGauge(prometheus.GaugeOpts{
Namespace: "syncthing",
Subsystem: "ursrv_v2",
Name: "collect_seconds_last",
})
metricsWriteSecondsLast = promauto.NewGauge(prometheus.GaugeOpts{
Namespace: "syncthing",
Subsystem: "ursrv_v2",
Name: "write_seconds_last",
})
)
func init() {
metricReportsTotal.WithLabelValues("fail")
metricReportsTotal.WithLabelValues("replace")
metricReportsTotal.WithLabelValues("accept")
}

View File

@@ -0,0 +1,314 @@
// Copyright (C) 2024 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package serve
import (
"reflect"
"slices"
"strconv"
"strings"
"sync"
"time"
"github.com/prometheus/client_golang/prometheus"
"github.com/syncthing/syncthing/lib/ur/contract"
)
const namePrefix = "syncthing_usage_"
type metricsSet struct {
srv *server
gauges map[string]prometheus.Gauge
gaugeVecs map[string]*prometheus.GaugeVec
gaugeVecLabels map[string][]string
summaries map[string]*metricSummary
collectMut sync.Mutex
collectCutoff time.Duration
}
func newMetricsSet(srv *server) *metricsSet {
s := &metricsSet{
srv: srv,
gauges: make(map[string]prometheus.Gauge),
gaugeVecs: make(map[string]*prometheus.GaugeVec),
gaugeVecLabels: make(map[string][]string),
summaries: make(map[string]*metricSummary),
collectCutoff: -24 * time.Hour,
}
var initForType func(reflect.Type)
initForType = func(t reflect.Type) {
for i := 0; i < t.NumField(); i++ {
field := t.Field(i)
if field.Type.Kind() == reflect.Struct {
initForType(field.Type)
continue
}
name, typ, label := fieldNameTypeLabel(field)
sname, labels := nameConstLabels(name)
switch typ {
case "gauge":
s.gauges[name] = prometheus.NewGauge(prometheus.GaugeOpts{
Name: namePrefix + sname,
ConstLabels: labels,
})
case "summary":
s.summaries[name] = newMetricSummary(namePrefix+sname, nil, labels)
case "gaugeVec":
s.gaugeVecLabels[name] = append(s.gaugeVecLabels[name], label)
case "summaryVec":
s.summaries[name] = newMetricSummary(namePrefix+sname, []string{label}, labels)
}
}
}
initForType(reflect.ValueOf(contract.Report{}).Type())
for name, labels := range s.gaugeVecLabels {
s.gaugeVecs[name] = prometheus.NewGaugeVec(prometheus.GaugeOpts{
Name: namePrefix + name,
}, labels)
}
return s
}
func fieldNameTypeLabel(rf reflect.StructField) (string, string, string) {
metric := rf.Tag.Get("metric")
name, typ, ok := strings.Cut(metric, ",")
if !ok {
return "", "", ""
}
gv, label, ok := strings.Cut(typ, ":")
if ok {
typ = gv
}
return name, typ, label
}
func nameConstLabels(name string) (string, prometheus.Labels) {
if name == "-" {
return "", nil
}
name, labels, ok := strings.Cut(name, "{")
if !ok {
return name, nil
}
lls := strings.Split(labels[:len(labels)-1], ",")
m := make(map[string]string)
for _, l := range lls {
k, v, _ := strings.Cut(l, "=")
m[k] = v
}
return name, m
}
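To make the tag grammar concrete, a sketch (field and tag invented) of how these two helpers decompose a report field:
// Sketch: a hypothetical contract.Report field carrying a metrics tag.
//
//	FolderUses map[string]int `metric:"folder_uses,gaugeVec:type"`
//
// fieldNameTypeLabel returns name "folder_uses", typ "gaugeVec" and
// label "type". A name of the form "blocks{kind=total}" would further
// make nameConstLabels return base name "blocks" with the const label
// set {kind: "total"}.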
func (s *metricsSet) addReport(r *contract.Report) {
gaugeVecs := make(map[string][]string)
s.addReportStruct(reflect.ValueOf(r).Elem(), gaugeVecs)
for name, lv := range gaugeVecs {
s.gaugeVecs[name].WithLabelValues(lv...).Add(1)
}
}
func (s *metricsSet) addReportStruct(v reflect.Value, gaugeVecs map[string][]string) {
t := v.Type()
for i := 0; i < v.NumField(); i++ {
field := v.Field(i)
if field.Kind() == reflect.Struct {
s.addReportStruct(field, gaugeVecs)
continue
}
name, typ, label := fieldNameTypeLabel(t.Field(i))
switch typ {
case "gauge":
switch v := field.Interface().(type) {
case int:
s.gauges[name].Add(float64(v))
case string:
s.gaugeVecs[name].WithLabelValues(v).Add(1)
case bool:
if v {
s.gauges[name].Add(1)
}
}
case "gaugeVec":
var labelValue string
switch v := field.Interface().(type) {
case string:
labelValue = v
case int:
labelValue = strconv.Itoa(v)
case map[string]int:
for k, v := range v {
labelValue = k
field.SetInt(int64(v))
break
}
}
if _, ok := gaugeVecs[name]; !ok {
gaugeVecs[name] = make([]string, len(s.gaugeVecLabels[name]))
}
for i, l := range s.gaugeVecLabels[name] {
if l == label {
gaugeVecs[name][i] = labelValue
break
}
}
case "summary", "summaryVec":
switch v := field.Interface().(type) {
case int:
s.summaries[name].Observe("", float64(v))
case float64:
s.summaries[name].Observe("", v)
case []int:
for _, v := range v {
s.summaries[name].Observe("", float64(v))
}
case map[string]int:
for k, v := range v {
if k == "" {
// avoid empty string labels as those are the sign
// of a non-vec summary
k = "unknown"
}
s.summaries[name].Observe(k, float64(v))
}
}
}
}
}
func (s *metricsSet) Describe(c chan<- *prometheus.Desc) {
for _, g := range s.gauges {
g.Describe(c)
}
for _, g := range s.gaugeVecs {
g.Describe(c)
}
for _, g := range s.summaries {
g.Describe(c)
}
}
func (s *metricsSet) Collect(c chan<- prometheus.Metric) {
s.collectMut.Lock()
defer s.collectMut.Unlock()
t0 := time.Now()
defer func() {
dur := time.Since(t0).Seconds()
metricsCollectSecondsLast.Set(dur)
metricsCollectSecondsTotal.Add(dur)
metricsCollectsTotal.Inc()
}()
for _, g := range s.gauges {
g.Set(0)
}
for _, g := range s.gaugeVecs {
g.Reset()
}
for _, g := range s.summaries {
g.Reset()
}
cutoff := time.Now().Add(s.collectCutoff)
s.srv.reports.Range(func(key string, r *contract.Report) bool {
if s.collectCutoff < 0 && r.Received.Before(cutoff) {
s.srv.reports.Delete(key)
return true
}
s.addReport(r)
return true
})
for _, g := range s.gauges {
c <- g
}
for _, g := range s.gaugeVecs {
g.Collect(c)
}
for _, g := range s.summaries {
g.Collect(c)
}
}
type metricSummary struct {
name string
values map[string][]float64
zeroes map[string]int
qDesc *prometheus.Desc
countDesc *prometheus.Desc
sumDesc *prometheus.Desc
zDesc *prometheus.Desc
}
func newMetricSummary(name string, labels []string, constLabels prometheus.Labels) *metricSummary {
return &metricSummary{
name: name,
values: make(map[string][]float64),
zeroes: make(map[string]int),
qDesc: prometheus.NewDesc(name, "", append(labels, "quantile"), constLabels),
countDesc: prometheus.NewDesc(name+"_nonzero_count", "", labels, constLabels),
sumDesc: prometheus.NewDesc(name+"_sum", "", labels, constLabels),
zDesc: prometheus.NewDesc(name+"_zero_count", "", labels, constLabels),
}
}
func (q *metricSummary) Observe(labelValue string, v float64) {
if v == 0 {
q.zeroes[labelValue]++
return
}
q.values[labelValue] = append(q.values[labelValue], v)
}
func (q *metricSummary) Describe(c chan<- *prometheus.Desc) {
c <- q.qDesc
c <- q.countDesc
c <- q.sumDesc
c <- q.zDesc
}
func (q *metricSummary) Collect(c chan<- prometheus.Metric) {
for lv, vs := range q.values {
var labelVals []string
if lv != "" {
labelVals = []string{lv}
}
c <- prometheus.MustNewConstMetric(q.countDesc, prometheus.GaugeValue, float64(len(vs)), labelVals...)
c <- prometheus.MustNewConstMetric(q.zDesc, prometheus.GaugeValue, float64(q.zeroes[lv]), labelVals...)
var sum float64
for _, v := range vs {
sum += v
}
c <- prometheus.MustNewConstMetric(q.sumDesc, prometheus.GaugeValue, sum, labelVals...)
if len(vs) == 0 {
return
}
slices.Sort(vs)
c <- prometheus.MustNewConstMetric(q.qDesc, prometheus.GaugeValue, vs[0], append(labelVals, "0")...)
c <- prometheus.MustNewConstMetric(q.qDesc, prometheus.GaugeValue, vs[len(vs)*5/100], append(labelVals, "0.05")...)
c <- prometheus.MustNewConstMetric(q.qDesc, prometheus.GaugeValue, vs[len(vs)/2], append(labelVals, "0.5")...)
c <- prometheus.MustNewConstMetric(q.qDesc, prometheus.GaugeValue, vs[len(vs)*9/10], append(labelVals, "0.9")...)
c <- prometheus.MustNewConstMetric(q.qDesc, prometheus.GaugeValue, vs[len(vs)*95/100], append(labelVals, "0.95")...)
c <- prometheus.MustNewConstMetric(q.qDesc, prometheus.GaugeValue, vs[len(vs)-1], append(labelVals, "1")...)
}
}
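A quick worked example of the quantile indexing above (sample count invented):
// With 200 sorted samples: vs[0] is the minimum (quantile "0"),
// vs[200*5/100] = vs[10] the 5th percentile, vs[200/2] = vs[100] the
// (upper) median, vs[200*9/10] = vs[180] the 90th, vs[200*95/100] =
// vs[190] the 95th, and vs[199] the maximum (quantile "1").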
func (q *metricSummary) Reset() {
clear(q.values)
clear(q.zeroes)
}

View File

@@ -0,0 +1,422 @@
// Copyright (C) 2018 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package serve
import (
"bufio"
"compress/gzip"
"context"
"encoding/json"
"errors"
"fmt"
"io"
"log/slog"
"net"
"net/http"
_ "net/http/pprof"
"os"
"regexp"
"strings"
"time"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
"github.com/puzpuzpuz/xsync/v3"
"github.com/syncthing/syncthing/internal/blob"
"github.com/syncthing/syncthing/internal/blob/azureblob"
"github.com/syncthing/syncthing/internal/blob/s3"
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/geoip"
"github.com/syncthing/syncthing/lib/ur/contract"
)
type CLI struct {
Listen string `env:"UR_LISTEN" help:"Usage reporting & metrics endpoint listen address" default:"0.0.0.0:8080"`
ListenInternal string `env:"UR_LISTEN_INTERNAL" help:"Internal metrics endpoint listen address" default:"0.0.0.0:8082"`
GeoIPLicenseKey string `env:"UR_GEOIP_LICENSE_KEY"`
GeoIPAccountID int `env:"UR_GEOIP_ACCOUNT_ID"`
DumpFile string `env:"UR_DUMP_FILE" default:"reports.jsons.gz"`
DumpInterval time.Duration `env:"UR_DUMP_INTERVAL" default:"5m"`
S3Endpoint string `name:"s3-endpoint" env:"UR_S3_ENDPOINT"`
S3Region string `name:"s3-region" env:"UR_S3_REGION"`
S3Bucket string `name:"s3-bucket" env:"UR_S3_BUCKET"`
S3AccessKeyID string `name:"s3-access-key-id" env:"UR_S3_ACCESS_KEY_ID"`
S3SecretKey string `name:"s3-secret-key" env:"UR_S3_SECRET_KEY"`
AzureBlobAccount string `name:"azure-blob-account" env:"UR_AZUREBLOB_ACCOUNT"`
AzureBlobKey string `name:"azure-blob-key" env:"UR_AZUREBLOB_KEY"`
AzureBlobContainer string `name:"azure-blob-container" env:"UR_AZUREBLOB_CONTAINER"`
}
var (
compilerRe = regexp.MustCompile(`\(([A-Za-z0-9()., -]+) \w+-\w+(?:| android| default)\) ([\w@.-]+)`)
knownDistributions = []distributionMatch{
// Maps well known builders to the official distribution method that
// they represent
{regexp.MustCompile(`\steamcity@build\.syncthing\.net`), "GitHub"},
{regexp.MustCompile(`\sjenkins@build\.syncthing\.net`), "GitHub"},
{regexp.MustCompile(`\sbuilder@github\.syncthing\.net`), "GitHub"},
{regexp.MustCompile(`\sdeb@build\.syncthing\.net`), "APT"},
{regexp.MustCompile(`\sdebian@github\.syncthing\.net`), "APT"},
{regexp.MustCompile(`\sdocker@syncthing\.net`), "Docker Hub"},
{regexp.MustCompile(`\sdocker@build.syncthing\.net`), "Docker Hub"},
{regexp.MustCompile(`\sdocker@github.syncthing\.net`), "Docker Hub"},
{regexp.MustCompile(`\sandroid-builder@github\.syncthing\.net`), "Google Play"},
{regexp.MustCompile(`\sandroid-.*teamcity@build\.syncthing\.net`), "Google Play"},
{regexp.MustCompile(`\sandroid-.*vagrant@basebox-stretch64`), "F-Droid"},
{regexp.MustCompile(`\svagrant@bullseye`), "F-Droid"},
{regexp.MustCompile(`\svagrant@bookworm`), "F-Droid"},
{regexp.MustCompile(`Anwender@NET2017`), "Syncthing-Fork (3rd party)"},
{regexp.MustCompile(`\sbuilduser@(archlinux|svetlemodry)`), "Arch (3rd party)"},
{regexp.MustCompile(`\ssyncthing@archlinux`), "Arch (3rd party)"},
{regexp.MustCompile(`@debian`), "Debian (3rd party)"},
{regexp.MustCompile(`@fedora`), "Fedora (3rd party)"},
{regexp.MustCompile(`@openSUSE`), "openSUSE (3rd party)"},
{regexp.MustCompile(`\sbrew@`), "Homebrew (3rd party)"},
{regexp.MustCompile(`\sroot@buildkitsandbox`), "LinuxServer.io (3rd party)"},
{regexp.MustCompile(`\sports@freebsd`), "FreeBSD (3rd party)"},
{regexp.MustCompile(`\snix@nix`), "Nix (3rd party)"},
{regexp.MustCompile(`.`), "Others"},
}
)
type distributionMatch struct {
matcher *regexp.Regexp
distribution string
}
func (cli *CLI) Run() error {
slog.Info("Starting", "version", build.Version)
// Listening
urListener, err := net.Listen("tcp", cli.Listen)
if err != nil {
slog.Error("Failed to listen (usage reports)", "error", err)
return err
}
slog.Info("Listening (usage reports)", "address", urListener.Addr())
internalListener, err := net.Listen("tcp", cli.ListenInternal)
if err != nil {
slog.Error("Failed to listen (internal)", "error", err)
return err
}
slog.Info("Listening (internal)", "address", internalListener.Addr())
var geo *geoip.Provider
if cli.GeoIPAccountID != 0 && cli.GeoIPLicenseKey != "" {
geo, err = geoip.NewGeoLite2CityProvider(context.Background(), cli.GeoIPAccountID, cli.GeoIPLicenseKey, os.TempDir())
if err != nil {
slog.Error("Failed to load GeoIP", "error", err)
return err
}
go geo.Serve(context.TODO())
}
// Blob storage
var blobs blob.Store
if cli.S3Endpoint != "" {
blobs, err = s3.NewSession(cli.S3Endpoint, cli.S3Region, cli.S3Bucket, cli.S3AccessKeyID, cli.S3SecretKey)
if err != nil {
slog.Error("Failed to create S3 session", "error", err)
return err
}
} else if cli.AzureBlobAccount != "" {
blobs, err = azureblob.NewBlobStore(cli.AzureBlobAccount, cli.AzureBlobKey, cli.AzureBlobContainer)
if err != nil {
slog.Error("Failed to create Azure blob store", "error", err)
return err
}
}
if _, err := os.Stat(cli.DumpFile); err != nil && blobs != nil {
if err := cli.downloadDumpFile(blobs); err != nil {
slog.Error("Failed to download dump file", "error", err)
}
}
// server
srv := &server{
geo: geo,
reports: xsync.NewMapOf[string, *contract.Report](),
}
if fd, err := os.Open(cli.DumpFile); err == nil {
gr, err := gzip.NewReader(fd)
if err == nil {
srv.load(gr)
}
fd.Close()
}
go func() {
for range time.Tick(cli.DumpInterval) {
if err := cli.saveDumpFile(srv, blobs); err != nil {
slog.Error("Failed to write dump file", "error", err)
}
}
}()
// The internal metrics endpoint just serves metrics about what the
// server is doing.
http.Handle("/metrics", promhttp.Handler())
internalSrv := http.Server{
ReadTimeout: 5 * time.Second,
WriteTimeout: 15 * time.Second,
}
go internalSrv.Serve(internalListener)
// New external metrics endpoint accepts reports from clients and serves
// aggregated usage reporting metrics.
ms := newMetricsSet(srv)
reg := prometheus.NewRegistry()
reg.MustRegister(ms)
mux := http.NewServeMux()
mux.Handle("/metrics", promhttp.HandlerFor(reg, promhttp.HandlerOpts{}))
mux.HandleFunc("/newdata", srv.handleNewData)
mux.HandleFunc("/ping", srv.handlePing)
metricsSrv := http.Server{
ReadTimeout: 5 * time.Second,
WriteTimeout: 15 * time.Second,
Handler: mux,
}
slog.Info("Ready to serve")
return metricsSrv.Serve(urListener)
}
func (cli *CLI) downloadDumpFile(blobs blob.Store) error {
latestKey, err := blobs.LatestKey(context.Background())
if err != nil {
return fmt.Errorf("list latest S3 key: %w", err)
}
fd, err := os.Create(cli.DumpFile)
if err != nil {
return fmt.Errorf("create dump file: %w", err)
}
if err := blobs.Download(context.Background(), latestKey, fd); err != nil {
_ = fd.Close()
return fmt.Errorf("download dump file: %w", err)
}
if err := fd.Close(); err != nil {
return fmt.Errorf("close dump file: %w", err)
}
slog.Info("Dump file downloaded", "key", latestKey)
return nil
}
func (cli *CLI) saveDumpFile(srv *server, blobs blob.Store) error {
fd, err := os.Create(cli.DumpFile + ".tmp")
if err != nil {
return fmt.Errorf("creating dump file: %w", err)
}
gw := gzip.NewWriter(fd)
if err := srv.save(gw); err != nil {
return fmt.Errorf("saving dump file: %w", err)
}
if err := gw.Close(); err != nil {
fd.Close()
return fmt.Errorf("closing gzip writer: %w", err)
}
if err := fd.Close(); err != nil {
return fmt.Errorf("closing dump file: %w", err)
}
if err := os.Rename(cli.DumpFile+".tmp", cli.DumpFile); err != nil {
return fmt.Errorf("renaming dump file: %w", err)
}
slog.Info("Dump file saved")
if blobs != nil {
key := fmt.Sprintf("reports-%s.jsons.gz", time.Now().UTC().Format("2006-01-02"))
fd, err := os.Open(cli.DumpFile)
if err != nil {
return fmt.Errorf("opening dump file: %w", err)
}
if err := blobs.Upload(context.Background(), key, fd); err != nil {
return fmt.Errorf("uploading dump file: %w", err)
}
_ = fd.Close()
slog.Info("Dump file uploaded")
}
return nil
}
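A short note on the write path above, since it is easy to miss: the dump is written to a temp file and renamed into place, then re-uploaded under a per-day key.
// Sketch: saveDumpFile writes <DumpFile>.tmp, renames it over DumpFile
// (atomic on POSIX filesystems), and uploads the snapshot under a key
// like "reports-2025-06-20.jsons.gz"; one blob per UTC day, rewritten
// on every DumpInterval tick within that day.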
type server struct {
geo *geoip.Provider
reports *xsync.MapOf[string, *contract.Report]
}
func (s *server) handlePing(w http.ResponseWriter, r *http.Request) {
}
func (s *server) handleNewData(w http.ResponseWriter, r *http.Request) {
result := "fail"
defer func() {
// result is "accept" (new report), "replace" (existing report) or
// "fail"
metricReportsTotal.WithLabelValues(result).Inc()
}()
defer r.Body.Close()
if r.Method != http.MethodPost {
http.Error(w, "Method Not Allowed", http.StatusMethodNotAllowed)
return
}
addr := r.Header.Get("X-Forwarded-For")
if addr != "" {
addr = strings.Split(addr, ", ")[0]
} else {
addr = r.RemoteAddr
}
if host, _, err := net.SplitHostPort(addr); err == nil {
addr = host
}
log := slog.With("addr", addr)
if net.ParseIP(addr) == nil {
addr = ""
}
var rep contract.Report
lr := &io.LimitedReader{R: r.Body, N: 40 * 1024}
bs, _ := io.ReadAll(lr)
if err := json.Unmarshal(bs, &rep); err != nil {
log.Error("Failed to decode JSON", "error", err)
http.Error(w, "JSON Decode Error", http.StatusInternalServerError)
return
}
rep.Received = time.Now()
rep.Date = rep.Received.UTC().Format("20060102")
rep.Address = addr
if err := rep.Validate(); err != nil {
log.Error("Failed to validate report", "error", err)
http.Error(w, "Validation Error", http.StatusInternalServerError)
return
}
if s.addReport(&rep) {
result = "replace"
} else {
result = "accept"
}
}
func (s *server) addReport(rep *contract.Report) bool {
if s.geo != nil {
if ip := net.ParseIP(rep.Address); ip != nil {
if city, err := s.geo.City(ip); err == nil {
rep.Country = city.Country.Names["en"]
rep.CountryCode = city.Country.IsoCode
}
}
}
if rep.Country == "" {
rep.Country = "Unknown"
}
if rep.CountryCode == "" {
rep.CountryCode = "ZZ"
}
rep.Version = transformVersion(rep.Version)
if strings.Contains(rep.Version, ".") {
split := strings.SplitN(rep.Version, ".", 3)
if len(split) == 3 {
rep.MajorVersion = strings.Join(split[:2], ".")
}
}
rep.OS, rep.Arch, _ = strings.Cut(rep.Platform, "-")
if m := compilerRe.FindStringSubmatch(rep.LongVersion); len(m) == 3 {
rep.Compiler = m[1]
rep.Builder = m[2]
}
for _, d := range knownDistributions {
if d.matcher.MatchString(rep.LongVersion) {
rep.Distribution = d.distribution
break
}
}
rep.DistDist = rep.Distribution
rep.DistOS = rep.OS
rep.DistArch = rep.Arch
_, loaded := s.reports.LoadAndStore(rep.UniqueID, rep)
return loaded
}
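To trace the derived fields above with invented values:
// Sketch: for rep.Version "v1.27.8", SplitN yields ["v1" "27" "8"], so
// rep.MajorVersion becomes "v1.27". rep.Platform "linux-amd64" is cut
// into OS "linux" and Arch "amd64". A LongVersion containing
// " builder@github.syncthing.net" hits the GitHub rule first; the
// knownDistributions list is ordered, with the catch-all "Others" last.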
func (s *server) save(w io.Writer) error {
bw := bufio.NewWriter(w)
enc := json.NewEncoder(bw)
var err error
s.reports.Range(func(k string, v *contract.Report) bool {
err = enc.Encode(v)
return err == nil
})
if err != nil {
return err
}
return bw.Flush()
}
func (s *server) load(r io.Reader) {
dec := json.NewDecoder(r)
s.reports.Clear()
for {
var rep contract.Report
if err := dec.Decode(&rep); errors.Is(err, io.EOF) {
break
} else if err != nil {
slog.Error("Failed to load record", "error", err)
break
}
s.addReport(&rep)
}
slog.Info("Loaded reports", "count", s.reports.Size())
}
var (
plusRe = regexp.MustCompile(`(\+.*|[.-]dev\..*)$`)
plusStr = "-dev"
)
// transformVersion returns a version number formatted correctly, with all
// development versions aggregated into one.
func transformVersion(v string) string {
if v == "unknown-dev" {
return v
}
if !strings.HasPrefix(v, "v") {
v = "v" + v
}
v = plusRe.ReplaceAllString(v, plusStr)
return v
}
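A few representative inputs (invented) and what transformVersion does with them:
// transformVersion("1.27.8")                   == "v1.27.8"
// transformVersion("v1.28.0-rc.2+17-gabcdef0") == "v1.28.0-rc.2-dev"
// transformVersion("v1.27.0-dev.23.gdeadbee")  == "v1.27.0-dev"
// transformVersion("unknown-dev")              == "unknown-dev"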

cmd/stdiscosrv/amqp.go (new file)
View File

@@ -0,0 +1,261 @@
// Copyright (C) 2024 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"context"
"fmt"
"io"
"log"
amqp "github.com/rabbitmq/amqp091-go"
"github.com/thejerf/suture/v4"
"google.golang.org/protobuf/proto"
"github.com/syncthing/syncthing/internal/gen/discosrv"
"github.com/syncthing/syncthing/internal/protoutil"
"github.com/syncthing/syncthing/lib/protocol"
)
type amqpReplicator struct {
suture.Service
broker string
sender *amqpSender
receiver *amqpReceiver
outbox chan *discosrv.ReplicationRecord
}
func newAMQPReplicator(broker, clientID string, db database) *amqpReplicator {
svc := suture.New("amqpReplicator", suture.Spec{PassThroughPanics: true})
sender := &amqpSender{
broker: broker,
clientID: clientID,
outbox: make(chan *discosrv.ReplicationRecord, replicationOutboxSize),
}
svc.Add(sender)
receiver := &amqpReceiver{
broker: broker,
clientID: clientID,
db: db,
}
svc.Add(receiver)
return &amqpReplicator{
Service: svc,
broker: broker,
sender: sender,
receiver: receiver,
outbox: make(chan *discosrv.ReplicationRecord, replicationOutboxSize),
}
}
func (s *amqpReplicator) send(key *protocol.DeviceID, ps []*discosrv.DatabaseAddress, seen int64) {
s.sender.send(key, ps, seen)
}
type amqpSender struct {
broker string
clientID string
outbox chan *discosrv.ReplicationRecord
}
func (s *amqpSender) Serve(ctx context.Context) error {
conn, ch, err := amqpChannel(s.broker)
if err != nil {
return err
}
defer ch.Close()
defer conn.Close()
buf := make([]byte, 1024)
for {
select {
case rec := <-s.outbox:
size := proto.Size(rec)
if len(buf) < size {
buf = make([]byte, size)
}
n, err := protoutil.MarshalTo(buf, rec)
if err != nil {
replicationSendsTotal.WithLabelValues("error").Inc()
return fmt.Errorf("replication marshal: %w", err)
}
err = ch.PublishWithContext(ctx,
"discovery", // exchange
"", // routing key
false, // mandatory
false, // immediate
amqp.Publishing{
ContentType: "application/protobuf",
Body: buf[:n],
AppId: s.clientID,
})
if err != nil {
replicationSendsTotal.WithLabelValues("error").Inc()
return fmt.Errorf("replication publish: %w", err)
}
replicationSendsTotal.WithLabelValues("success").Inc()
case <-ctx.Done():
return nil
}
}
}
func (s *amqpSender) String() string {
return fmt.Sprintf("amqpSender(%q)", s.broker)
}
func (s *amqpSender) send(key *protocol.DeviceID, ps []*discosrv.DatabaseAddress, seen int64) {
item := &discosrv.ReplicationRecord{
Key: key[:],
Addresses: ps,
Seen: seen,
}
// The send should never block. The outbox is suitably buffered for at
// least a few seconds of stalls, which shouldn't happen in practice.
select {
case s.outbox <- item:
default:
replicationSendsTotal.WithLabelValues("drop").Inc()
}
}
type amqpReceiver struct {
broker string
clientID string
db database
}
func (s *amqpReceiver) Serve(ctx context.Context) error {
conn, ch, err := amqpChannel(s.broker)
if err != nil {
return err
}
defer ch.Close()
defer conn.Close()
msgs, err := amqpConsume(ch)
if err != nil {
return err
}
for {
select {
case msg, ok := <-msgs:
if !ok {
return fmt.Errorf("subscription closed: %w", io.EOF)
}
// ignore messages from ourselves
if msg.AppId == s.clientID {
continue
}
var rec discosrv.ReplicationRecord
if err := proto.Unmarshal(msg.Body, &rec); err != nil {
replicationRecvsTotal.WithLabelValues("error").Inc()
return fmt.Errorf("replication unmarshal: %w", err)
}
id, err := protocol.DeviceIDFromBytes(rec.Key)
if err != nil {
id, err = protocol.DeviceIDFromString(string(rec.Key))
}
if err != nil {
log.Println("Replication device ID:", err)
replicationRecvsTotal.WithLabelValues("error").Inc()
continue
}
if err := s.db.merge(&id, rec.Addresses, rec.Seen); err != nil {
return fmt.Errorf("replication database merge: %w", err)
}
replicationRecvsTotal.WithLabelValues("success").Inc()
case <-ctx.Done():
return nil
}
}
}
func (s *amqpReceiver) String() string {
return fmt.Sprintf("amqpReceiver(%q)", s.broker)
}
func amqpChannel(dst string) (*amqp.Connection, *amqp.Channel, error) {
conn, err := amqp.Dial(dst)
if err != nil {
return nil, nil, fmt.Errorf("AMQP dial: %w", err)
}
ch, err := conn.Channel()
if err != nil {
return nil, nil, fmt.Errorf("AMQP channel: %w", err)
}
err = ch.ExchangeDeclare(
"discovery", // name
"fanout", // type
false, // durable
false, // auto-deleted
false, // internal
false, // no-wait
nil, // arguments
)
if err != nil {
return nil, nil, fmt.Errorf("AMQP declare exchange: %w", err)
}
return conn, ch, nil
}
func amqpConsume(ch *amqp.Channel) (<-chan amqp.Delivery, error) {
q, err := ch.QueueDeclare(
"", // name
false, // durable
false, // delete when unused
true, // exclusive
false, // no-wait
nil, // arguments
)
if err != nil {
return nil, fmt.Errorf("AMQP declare queue: %w", err)
}
err = ch.QueueBind(
q.Name, // queue name
"", // routing key
"discovery", // exchange
false,
nil,
)
if err != nil {
return nil, fmt.Errorf("AMQP bind queue: %w", err)
}
msgs, err := ch.Consume(
q.Name, // queue
"", // consumer
true, // auto-ack
false, // exclusive
false, // no-local
false, // no-wait
nil, // args
)
if err != nil {
return nil, fmt.Errorf("AMQP consume: %w", err)
}
return msgs, nil
}
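For completeness, a hedged sketch of how this replicator is meant to be wired (broker URL and identifiers invented; the actual wiring lives elsewhere in the diff):
// Sketch, not repository code.
repl := newAMQPReplicator("amqp://user:pass@mq.example.com/", clientID, db)
sup := suture.NewSimple("main")
sup.Add(repl) // amqpReplicator embeds suture.Service
go sup.Serve(ctx)
// Each announce is then fanned out on the "discovery" fanout exchange;
// receivers drop messages whose AppId matches their own clientID.
repl.send(&deviceID, dbAddrs, time.Now().UnixNano())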

View File

@@ -22,12 +22,13 @@ import (
"net"
"net/http"
"net/url"
"sort"
"slices"
"strconv"
"strings"
"sync"
"time"
"github.com/syncthing/syncthing/internal/gen/discosrv"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/stringutil"
)
@@ -39,15 +40,20 @@ type announcement struct {
}
type apiSrv struct {
addr string
cert tls.Certificate
db database
listener net.Listener
repl replicator // optional
useHTTP bool
compression bool
gzipWriters sync.Pool
seenTracker *retryAfterTracker
notSeenTracker *retryAfterTracker
}
type replicator interface {
send(key *protocol.DeviceID, addrs []*discosrv.DatabaseAddress, seen int64)
}
type requestID int64
@@ -60,18 +66,30 @@ type contextKey int
const idKey contextKey = iota
func newAPISrv(addr string, cert tls.Certificate, db database, repl replicator, useHTTP, compression bool, desiredNotFoundRate float64) *apiSrv {
return &apiSrv{
addr: addr,
cert: cert,
db: db,
repl: repl,
useHTTP: useHTTP,
compression: compression,
seenTracker: &retryAfterTracker{
name: "seenTracker",
bucketStarts: time.Now(),
desiredRate: desiredNotFoundRate / 2,
currentDelay: notFoundRetryUnknownMinSeconds,
},
notSeenTracker: &retryAfterTracker{
name: "notSeenTracker",
bucketStarts: time.Now(),
desiredRate: desiredNotFoundRate / 2,
currentDelay: notFoundRetryUnknownMaxSeconds / 2,
},
}
}
func (s *apiSrv) Serve(ctx context.Context) error {
if s.useHTTP {
listener, err := net.Listen("tcp", s.addr)
if err != nil {
@@ -102,8 +120,15 @@ func (s *apiSrv) Serve(_ context.Context) error {
ReadTimeout: httpReadTimeout,
WriteTimeout: httpWriteTimeout,
MaxHeaderBytes: httpMaxHeaderBytes,
}
if !debug {
srv.ErrorLog = log.New(io.Discard, "", 0)
}
go func() {
<-ctx.Done()
srv.Shutdown(context.Background())
}()
err := srv.Serve(s.listener)
if err != nil {
@@ -173,7 +198,7 @@ func (s *apiSrv) handleGET(w http.ResponseWriter, req *http.Request) {
deviceID, err := protocol.DeviceIDFromString(req.URL.Query().Get("device"))
if err != nil {
if debug {
log.Println(reqID, "bad device param")
log.Println(reqID, "bad device param:", err)
}
lookupRequestsTotal.WithLabelValues("bad_request").Inc()
w.Header().Set("Retry-After", errorRetryAfterString())
@@ -181,8 +206,7 @@ func (s *apiSrv) handleGET(w http.ResponseWriter, req *http.Request) {
return
}
rec, err := s.db.get(&deviceID)
if err != nil {
// some sort of internal error
lookupRequestsTotal.WithLabelValues("internal_error").Inc()
@@ -192,28 +216,14 @@ func (s *apiSrv) handleGET(w http.ResponseWriter, req *http.Request) {
}
if len(rec.Addresses) == 0 {
lookupRequestsTotal.WithLabelValues("not_found").Inc()
var afterS int
if rec.Seen == 0 {
afterS = s.notSeenTracker.retryAfterS()
lookupRequestsTotal.WithLabelValues("not_found_ever").Inc()
} else {
afterS = s.seenTracker.retryAfterS()
lookupRequestsTotal.WithLabelValues("not_found_recent").Inc()
}
retryAfterHistogram.Observe(float64(afterS))
w.Header().Set("Retry-After", strconv.Itoa(afterS))
http.Error(w, "Not Found", http.StatusNotFound)
return
@@ -225,10 +235,16 @@ func (s *apiSrv) handleGET(w http.ResponseWriter, req *http.Request) {
var bw io.Writer = w
// Use compression if the client asks for it
if s.compression && strings.Contains(req.Header.Get("Accept-Encoding"), "gzip") {
gw, ok := s.gzipWriters.Get().(*gzip.Writer)
if ok {
gw.Reset(w)
} else {
gw = gzip.NewWriter(w)
}
w.Header().Set("Content-Encoding", "gzip")
defer gw.Close()
defer s.gzipWriters.Put(gw)
bw = gw
}
@@ -267,6 +283,9 @@ func (s *apiSrv) handlePOST(remoteAddr *net.TCPAddr, w http.ResponseWriter, req
addresses := fixupAddresses(remoteAddr, ann.Addresses)
if len(addresses) == 0 {
if debug {
log.Println(reqID, "no addresses")
}
announceRequestsTotal.WithLabelValues("bad_request").Inc()
w.Header().Set("Retry-After", errorRetryAfterString())
http.Error(w, "Bad Request", http.StatusBadRequest)
@@ -274,6 +293,9 @@ func (s *apiSrv) handlePOST(remoteAddr *net.TCPAddr, w http.ResponseWriter, req
}
if err := s.handleAnnounce(deviceID, addresses); err != nil {
if debug {
log.Println(reqID, "handle:", err)
}
announceRequestsTotal.WithLabelValues("internal_error").Inc()
w.Header().Set("Retry-After", errorRetryAfterString())
http.Error(w, "Internal Server Error", http.StatusInternalServerError)
@@ -284,6 +306,9 @@ func (s *apiSrv) handlePOST(remoteAddr *net.TCPAddr, w http.ResponseWriter, req
w.Header().Set("Reannounce-After", reannounceAfterString())
w.WriteHeader(http.StatusNoContent)
if debug {
log.Println(reqID, "announced", deviceID, addresses)
}
}
func (s *apiSrv) Stop() {
@@ -291,29 +316,31 @@ func (s *apiSrv) Stop() {
}
func (s *apiSrv) handleAnnounce(deviceID protocol.DeviceID, addresses []string) error {
now := time.Now()
expire := now.Add(addressExpiryTime).UnixNano()
slices.Sort(addresses)
addresses = slices.Compact(addresses)
dbAddrs := make([]*discosrv.DatabaseAddress, len(addresses))
for i := range addresses {
dbAddrs[i] = &discosrv.DatabaseAddress{
Address: addresses[i],
Expires: expire,
}
}
seen := now.UnixNano()
if s.repl != nil {
s.repl.send(&deviceID, dbAddrs, seen)
}
return s.db.merge(&deviceID, dbAddrs, seen)
}
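Editor's note: the announce path above dedupes addresses by sorting first, because slices.Compact only removes *adjacent* duplicates. A tiny sketch of that behavior:

package main

import (
	"fmt"
	"slices"
)

func main() {
	addrs := []string{"tcp://b:1", "tcp://a:1", "tcp://b:1"}
	slices.Sort(addrs)            // ["tcp://a:1" "tcp://b:1" "tcp://b:1"]
	addrs = slices.Compact(addrs) // duplicates are adjacent now, so Compact drops them
	fmt.Println(addrs)            // [tcp://a:1 tcp://b:1]
}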
func handlePing(w http.ResponseWriter, _ *http.Request) {
w.WriteHeader(204)
w.WriteHeader(http.StatusNoContent)
}
func certificateBytes(req *http.Request) ([]byte, error) {
@@ -359,7 +386,7 @@ func certificateBytes(req *http.Request) ([]byte, error) {
}
bs = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: hdr})
} else if hdr := req.Header.Get("X-Forwarded-Tls-Client-Cert"); hdr != "" {
} else if cert := req.Header.Get("X-Forwarded-Tls-Client-Cert"); cert != "" {
// Traefik 2 passtlsclientcert
//
// The certificate is in PEM format, maybe with URL encoding and
// without newlines or BEGIN/END statements. We need to decode,
// reinstate the newlines every 64 characters, and add the statements
// for the PEM decoder
if strings.Contains(hdr, "%") {
if unesc, err := url.QueryUnescape(hdr); err == nil {
hdr = unesc
if strings.Contains(cert, "%") {
if unesc, err := url.QueryUnescape(cert); err == nil {
cert = unesc
}
}
for i := 64; i < len(hdr); i += 65 {
hdr = hdr[:i] + "\n" + hdr[i:]
const (
header = "-----BEGIN CERTIFICATE-----"
footer = "-----END CERTIFICATE-----"
)
var b bytes.Buffer
b.Grow(len(header) + 1 + len(cert) + len(cert)/64 + 1 + len(footer) + 1)
b.WriteString(header)
b.WriteByte('\n')
for i := 0; i < len(cert); i += 64 {
end := i + 64
if end > len(cert) {
end = len(cert)
}
b.WriteString(cert[i:end])
b.WriteByte('\n')
}
hdr = "-----BEGIN CERTIFICATE-----\n" + hdr
hdr += "\n-----END CERTIFICATE-----\n"
bs = []byte(hdr)
b.WriteString(footer)
b.WriteByte('\n')
bs = b.Bytes()
}
if bs == nil {
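Editor's note on the reconstruction above: Traefik delivers the client certificate as one long, possibly URL-escaped base64 string, so wrapping it at 64 columns and adding the BEGIN/END lines yields parseable PEM again. A rough standalone sketch under that assumption (the sample input is fabricated, not a real certificate):

package main

import (
	"fmt"
	"net/url"
	"strings"
)

func rebuildPEM(cert string) string {
	if strings.Contains(cert, "%") {
		if unesc, err := url.QueryUnescape(cert); err == nil {
			cert = unesc
		}
	}
	var b strings.Builder
	b.WriteString("-----BEGIN CERTIFICATE-----\n")
	for i := 0; i < len(cert); i += 64 {
		end := min(i+64, len(cert)) // wrap at 64 columns
		b.WriteString(cert[i:end])
		b.WriteByte('\n')
	}
	b.WriteString("-----END CERTIFICATE-----\n")
	return b.String()
}

func main() {
	fmt.Print(rebuildPEM("MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA")) // not a real cert
}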
@@ -444,7 +488,6 @@ func fixupAddresses(remote *net.TCPAddr, addresses []string) []string {
// remote is nil, unable to determine host IP
continue
}
}
// If zero port was specified, use remote port.
@@ -482,7 +525,7 @@ func (lrw *loggingResponseWriter) WriteHeader(code int) {
lrw.ResponseWriter.WriteHeader(code)
}
func addressStrs(dbAddrs []DatabaseAddress) []string {
func addressStrs(dbAddrs []*discosrv.DatabaseAddress) []string {
res := make([]string, len(dbAddrs))
for i, a := range dbAddrs {
res[i] = a.Address
@@ -494,15 +537,44 @@ func errorRetryAfterString() string {
return strconv.Itoa(errorRetryAfterSeconds + rand.Intn(errorRetryFuzzSeconds))
}
func notFoundRetryAfterSeconds(misses int) int {
retryAfterS := notFoundRetryMinSeconds + notFoundRetryIncSeconds*misses
if retryAfterS > notFoundRetryMaxSeconds {
retryAfterS = notFoundRetryMaxSeconds
}
retryAfterS += rand.Intn(notFoundRetryFuzzSeconds)
return retryAfterS
}
func reannounceAfterString() string {
return strconv.Itoa(reannounceAfterSeconds + rand.Intn(reannounzeFuzzSeconds))
}
type retryAfterTracker struct {
name string
desiredRate float64 // requests per second
mut sync.Mutex
lastCount int // requests in the last bucket
curCount int // requests in the current bucket
bucketStarts time.Time // start of the current bucket
currentDelay int // current delay in seconds
}
func (t *retryAfterTracker) retryAfterS() int {
now := time.Now()
t.mut.Lock()
if durS := now.Sub(t.bucketStarts).Seconds(); durS > float64(t.currentDelay) {
t.bucketStarts = now
t.lastCount = t.curCount
lastRate := float64(t.lastCount) / durS
switch {
case t.currentDelay > notFoundRetryUnknownMinSeconds &&
lastRate < 0.75*t.desiredRate:
t.currentDelay = max(8*t.currentDelay/10, notFoundRetryUnknownMinSeconds)
case t.currentDelay < notFoundRetryUnknownMaxSeconds &&
lastRate > 1.25*t.desiredRate:
t.currentDelay = min(3*t.currentDelay/2, notFoundRetryUnknownMaxSeconds)
}
t.curCount = 0
}
if t.curCount == 0 {
retryAfterLevel.WithLabelValues(t.name).Set(float64(t.currentDelay))
}
t.curCount++
t.mut.Unlock()
return t.currentDelay + rand.Intn(t.currentDelay/4)
}
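Editor's note: the tracker above nudges the advertised Retry-After toward a delay that keeps the not-found reply rate near desiredRate. When a bucket closes, a rate above 1.25x the target grows the delay by half (capped at the max), and a rate below 0.75x decays it to 80% (floored at the min). A toy simulation of just that rule, not the server's code:

package main

import "fmt"

func main() {
	const minS, maxS = 60, 3600 // notFoundRetryUnknown{Min,Max}Seconds
	delay, target := 60.0, 1000.0
	for _, rate := range []float64{1500, 1500, 500, 500} {
		switch {
		case delay > minS && rate < 0.75*target:
			delay = max(8*delay/10, minS) // traffic low, back off the delay
		case delay < maxS && rate > 1.25*target:
			delay = min(3*delay/2, maxS) // traffic high, grow the delay
		}
		fmt.Printf("observed %v req/s -> delay %v s\n", rate, delay)
	}
	// prints delays 90, 135, 108, 86.4 for the rates above
}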

View File

@@ -7,9 +7,20 @@
package main
import (
"context"
"crypto/tls"
"fmt"
"io"
"net"
"net/http"
"net/http/httptest"
"os"
"regexp"
"strings"
"testing"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/tlsutil"
)
func TestFixupAddresses(t *testing.T) {
@@ -94,3 +105,79 @@ func addr(host string, port int) *net.TCPAddr {
Port: port,
}
}
func BenchmarkAPIRequests(b *testing.B) {
db := newInMemoryStore(b.TempDir(), 0, nil)
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
go db.Serve(ctx)
api := newAPISrv("127.0.0.1:0", tls.Certificate{}, db, nil, true, true, 1000)
srv := httptest.NewServer(http.HandlerFunc(api.handler))
kf := b.TempDir() + "/cert"
crt, err := tlsutil.NewCertificate(kf+".crt", kf+".key", "localhost", 7)
if err != nil {
b.Fatal(err)
}
certBs, err := os.ReadFile(kf + ".crt")
if err != nil {
b.Fatal(err)
}
certBs = regexp.MustCompile(`---[^\n]+---\n`).ReplaceAll(certBs, nil)
certString := strings.ReplaceAll(string(certBs), "\n", " ")
devID := protocol.NewDeviceID(crt.Certificate[0])
devIDString := devID.String()
b.Run("Announce", func(b *testing.B) {
b.ReportAllocs()
url := srv.URL + "/v2/?device=" + devIDString
for i := 0; i < b.N; i++ {
req, _ := http.NewRequest(http.MethodPost, url, strings.NewReader(`{"addresses":["tcp://10.10.10.10:42000"]}`))
req.Header.Set("X-Forwarded-Tls-Client-Cert", certString)
resp, err := http.DefaultClient.Do(req)
if err != nil {
b.Fatal(err)
}
resp.Body.Close()
if resp.StatusCode != http.StatusNoContent {
b.Fatalf("unexpected status %s", resp.Status)
}
}
})
b.Run("Lookup", func(b *testing.B) {
b.ReportAllocs()
url := srv.URL + "/v2/?device=" + devIDString
for i := 0; i < b.N; i++ {
req, _ := http.NewRequest(http.MethodGet, url, nil)
resp, err := http.DefaultClient.Do(req)
if err != nil {
b.Fatal(err)
}
io.Copy(io.Discard, resp.Body)
resp.Body.Close()
if resp.StatusCode != http.StatusOK {
b.Fatalf("unexpected status %s", resp.Status)
}
}
})
b.Run("LookupNoCompression", func(b *testing.B) {
b.ReportAllocs()
url := srv.URL + "/v2/?device=" + devIDString
for i := 0; i < b.N; i++ {
req, _ := http.NewRequest(http.MethodGet, url, nil)
req.Header.Set("Accept-Encoding", "identity") // disable compression
resp, err := http.DefaultClient.Do(req)
if err != nil {
b.Fatal(err)
}
io.Copy(io.Discard, resp.Body)
resp.Body.Close()
if resp.StatusCode != http.StatusOK {
b.Fatalf("unexpected status %s", resp.Status)
}
}
})
}

View File

@@ -4,23 +4,31 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
//go:generate go run ../../proto/scripts/protofmt.go database.proto
//go:generate protoc -I ../../ -I . --gogofast_out=. database.proto
package main
import (
"bufio"
"cmp"
"context"
"encoding/binary"
"errors"
"io"
"log"
"net"
"net/url"
"sort"
"os"
"path"
"runtime"
"slices"
"strings"
"time"
"github.com/syncthing/syncthing/lib/sliceutil"
"github.com/syndtr/goleveldb/leveldb"
"github.com/syndtr/goleveldb/leveldb/storage"
"github.com/syndtr/goleveldb/leveldb/util"
"github.com/puzpuzpuz/xsync/v3"
"google.golang.org/protobuf/proto"
"github.com/syncthing/syncthing/internal/blob"
"github.com/syncthing/syncthing/internal/gen/discosrv"
"github.com/syncthing/syncthing/internal/protoutil"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/rand"
)
type clock interface {
@@ -34,380 +42,401 @@ func (defaultClock) Now() time.Time {
}
type database interface {
put(key string, rec DatabaseRecord) error
merge(key string, addrs []DatabaseAddress, seen int64) error
get(key string) (DatabaseRecord, error)
put(key *protocol.DeviceID, rec *discosrv.DatabaseRecord) error
merge(key *protocol.DeviceID, addrs []*discosrv.DatabaseAddress, seen int64) error
get(key *protocol.DeviceID) (*discosrv.DatabaseRecord, error)
}
type levelDBStore struct {
db *leveldb.DB
inbox chan func()
clock clock
marshalBuf []byte
type inMemoryStore struct {
m *xsync.MapOf[protocol.DeviceID, *discosrv.DatabaseRecord]
dir string
flushInterval time.Duration
blobs blob.Store
objKey string
clock clock
}
func newLevelDBStore(dir string) (*levelDBStore, error) {
db, err := leveldb.OpenFile(dir, levelDBOptions)
func newInMemoryStore(dir string, flushInterval time.Duration, blobs blob.Store) *inMemoryStore {
hn, err := os.Hostname()
if err != nil {
return nil, err
hn = rand.String(8)
}
return &levelDBStore{
db: db,
inbox: make(chan func(), 16),
clock: defaultClock{},
}, nil
}
func newMemoryLevelDBStore() (*levelDBStore, error) {
db, err := leveldb.Open(storage.NewMemStorage(), nil)
if err != nil {
return nil, err
s := &inMemoryStore{
m: xsync.NewMapOf[protocol.DeviceID, *discosrv.DatabaseRecord](),
dir: dir,
flushInterval: flushInterval,
blobs: blobs,
objKey: hn + ".db",
clock: defaultClock{},
}
return &levelDBStore{
db: db,
inbox: make(chan func(), 16),
clock: defaultClock{},
}, nil
}
func (s *levelDBStore) put(key string, rec DatabaseRecord) error {
t0 := time.Now()
defer func() {
databaseOperationSeconds.WithLabelValues(dbOpPut).Observe(time.Since(t0).Seconds())
}()
rc := make(chan error)
s.inbox <- func() {
size := rec.Size()
if len(s.marshalBuf) < size {
s.marshalBuf = make([]byte, size)
nr, err := s.read()
if os.IsNotExist(err) && blobs != nil {
// Try to read from blob storage
latestKey, cerr := blobs.LatestKey(context.Background())
if cerr != nil {
log.Println("Error finding database from blob storage:", cerr)
return s
}
n, _ := rec.MarshalTo(s.marshalBuf)
rc <- s.db.Put([]byte(key), s.marshalBuf[:n], nil)
fd, cerr := os.Create(path.Join(s.dir, "records.db"))
if cerr != nil {
log.Println("Error creating database file:", cerr)
return s
}
if cerr := blobs.Download(context.Background(), latestKey, fd); cerr != nil {
log.Printf("Error downloading database from blob storage: %v", cerr)
}
_ = fd.Close()
nr, err = s.read()
}
err := <-rc
if err != nil {
databaseOperations.WithLabelValues(dbOpPut, dbResError).Inc()
} else {
databaseOperations.WithLabelValues(dbOpPut, dbResSuccess).Inc()
log.Println("Error reading database:", err)
}
return err
log.Printf("Read %d records from database", nr)
s.expireAndCalculateStatistics()
return s
}
func (s *levelDBStore) merge(key string, addrs []DatabaseAddress, seen int64) error {
func (s *inMemoryStore) put(key *protocol.DeviceID, rec *discosrv.DatabaseRecord) error {
t0 := time.Now()
defer func() {
databaseOperationSeconds.WithLabelValues(dbOpMerge).Observe(time.Since(t0).Seconds())
}()
s.m.Store(*key, rec)
databaseOperations.WithLabelValues(dbOpPut, dbResSuccess).Inc()
databaseOperationSeconds.WithLabelValues(dbOpPut).Observe(time.Since(t0).Seconds())
return nil
}
rc := make(chan error)
newRec := DatabaseRecord{
func (s *inMemoryStore) merge(key *protocol.DeviceID, addrs []*discosrv.DatabaseAddress, seen int64) error {
t0 := time.Now()
newRec := &discosrv.DatabaseRecord{
Addresses: addrs,
Seen: seen,
}
s.inbox <- func() {
// grab the existing record
oldRec, err := s.get(key)
if err != nil {
// "not found" is not an error from get, so this is serious
// stuff only
rc <- err
return
}
newRec = merge(newRec, oldRec)
// We replicate s.put() functionality here ourselves instead of
// calling it because we want to serialize our get above together
// with the put in the same function.
size := newRec.Size()
if len(s.marshalBuf) < size {
s.marshalBuf = make([]byte, size)
}
n, _ := newRec.MarshalTo(s.marshalBuf)
rc <- s.db.Put([]byte(key), s.marshalBuf[:n], nil)
if oldRec, ok := s.m.Load(*key); ok {
newRec = merge(oldRec, newRec)
}
s.m.Store(*key, newRec)
err := <-rc
if err != nil {
databaseOperations.WithLabelValues(dbOpMerge, dbResError).Inc()
} else {
databaseOperations.WithLabelValues(dbOpMerge, dbResSuccess).Inc()
}
databaseOperations.WithLabelValues(dbOpMerge, dbResSuccess).Inc()
databaseOperationSeconds.WithLabelValues(dbOpMerge).Observe(time.Since(t0).Seconds())
return err
return nil
}
func (s *levelDBStore) get(key string) (DatabaseRecord, error) {
func (s *inMemoryStore) get(key *protocol.DeviceID) (*discosrv.DatabaseRecord, error) {
t0 := time.Now()
defer func() {
databaseOperationSeconds.WithLabelValues(dbOpGet).Observe(time.Since(t0).Seconds())
}()
keyBs := []byte(key)
val, err := s.db.Get(keyBs, nil)
if err == leveldb.ErrNotFound {
rec, ok := s.m.Load(*key)
if !ok {
databaseOperations.WithLabelValues(dbOpGet, dbResNotFound).Inc()
return DatabaseRecord{}, nil
}
if err != nil {
databaseOperations.WithLabelValues(dbOpGet, dbResError).Inc()
return DatabaseRecord{}, err
return &discosrv.DatabaseRecord{}, nil
}
var rec DatabaseRecord
if err := rec.Unmarshal(val); err != nil {
databaseOperations.WithLabelValues(dbOpGet, dbResUnmarshalError).Inc()
return DatabaseRecord{}, nil
}
rec.Addresses = expire(rec.Addresses, s.clock.Now().UnixNano())
rec.Addresses = expire(rec.Addresses, s.clock.Now())
databaseOperations.WithLabelValues(dbOpGet, dbResSuccess).Inc()
return rec, nil
}
func (s *levelDBStore) Serve(ctx context.Context) error {
t := time.NewTimer(0)
defer t.Stop()
defer s.db.Close()
func (s *inMemoryStore) Serve(ctx context.Context) error {
if s.flushInterval <= 0 {
<-ctx.Done()
return nil
}
// Start the statistics serve routine. It will exit with us when
// statisticsTrigger is closed.
statisticsTrigger := make(chan struct{})
statisticsDone := make(chan struct{})
go s.statisticsServe(statisticsTrigger, statisticsDone)
t := time.NewTimer(s.flushInterval)
defer t.Stop()
loop:
for {
select {
case fn := <-s.inbox:
// Run function in serialized order.
fn()
case <-t.C:
// Trigger the statistics routine to do its thing in the
// background.
statisticsTrigger <- struct{}{}
case <-statisticsDone:
// The statistics routine is done with one iteration, schedule
// the next.
t.Reset(databaseStatisticsInterval)
log.Println("Calculating statistics")
s.expireAndCalculateStatistics()
log.Println("Flushing database")
if err := s.write(); err != nil {
log.Println("Error writing database:", err)
}
log.Println("Finished flushing database")
t.Reset(s.flushInterval)
case <-ctx.Done():
// We're done.
close(statisticsTrigger)
break loop
}
}
// Also wait for statisticsServe to return
<-statisticsDone
return s.write()
}
func (s *inMemoryStore) expireAndCalculateStatistics() {
now := s.clock.Now()
cutoff24h := now.Add(-24 * time.Hour).UnixNano()
cutoff1w := now.Add(-7 * 24 * time.Hour).UnixNano()
current, currentIPv4, currentIPv6, currentIPv6GUA, last24h, last1w := 0, 0, 0, 0, 0, 0
n := 0
s.m.Range(func(key protocol.DeviceID, rec *discosrv.DatabaseRecord) bool {
if n%1000 == 0 {
runtime.Gosched()
}
n++
addresses := expire(rec.Addresses, now)
if len(addresses) == 0 {
rec.Addresses = nil
s.m.Store(key, rec)
} else if len(addresses) != len(rec.Addresses) {
rec.Addresses = addresses
s.m.Store(key, rec)
}
switch {
case len(rec.Addresses) > 0:
current++
seenIPv4, seenIPv6, seenIPv6GUA := false, false, false
for _, addr := range rec.Addresses {
// We do fast and loose matching on strings here instead of
// parsing the address and the IP and doing "proper" checks,
// to keep things fast and generate less garbage.
if strings.Contains(addr.Address, "[") {
seenIPv6 = true
if strings.Contains(addr.Address, "[2") {
seenIPv6GUA = true
}
} else {
seenIPv4 = true
}
if seenIPv4 && seenIPv6 && seenIPv6GUA {
break
}
}
if seenIPv4 {
currentIPv4++
}
if seenIPv6 {
currentIPv6++
}
if seenIPv6GUA {
currentIPv6GUA++
}
case rec.Seen > cutoff24h:
last24h++
case rec.Seen > cutoff1w:
last1w++
default:
// drop the record if it's older than a week
s.m.Delete(key)
}
return true
})
databaseKeys.WithLabelValues("current").Set(float64(current))
databaseKeys.WithLabelValues("currentIPv4").Set(float64(currentIPv4))
databaseKeys.WithLabelValues("currentIPv6").Set(float64(currentIPv6))
databaseKeys.WithLabelValues("currentIPv6GUA").Set(float64(currentIPv6GUA))
databaseKeys.WithLabelValues("last24h").Set(float64(last24h))
databaseKeys.WithLabelValues("last1w").Set(float64(last1w))
databaseStatisticsSeconds.Set(time.Since(now).Seconds())
}
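Editor's note: the statistics pass above classifies addresses with cheap string checks, trading precision for speed: a bracketed host means IPv6, and "[2" is a heuristic for the common 2xxx: global unicast prefixes. Hand-picked examples of how those checks fall out:

package main

import (
	"fmt"
	"strings"
)

func classify(addr string) string {
	if strings.Contains(addr, "[") {
		if strings.Contains(addr, "[2") {
			return "IPv6 (global unicast)"
		}
		return "IPv6"
	}
	return "IPv4"
}

func main() {
	for _, a := range []string{
		"tcp://192.0.2.10:22000",          // no brackets: IPv4
		"quic://[fe80::1%25eth0]:22000",   // brackets, link-local: IPv6
		"tcp://[2001:db8::1]:22000",       // brackets starting "[2": GUA
	} {
		fmt.Println(a, "->", classify(a))
	}
}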
func (s *inMemoryStore) write() (err error) {
t0 := time.Now()
defer func() {
if err == nil {
databaseWriteSeconds.Set(time.Since(t0).Seconds())
databaseLastWritten.Set(float64(t0.Unix()))
}
}()
dbf := path.Join(s.dir, "records.db")
fd, err := os.Create(dbf + ".tmp")
if err != nil {
return err
}
bw := bufio.NewWriter(fd)
var buf []byte
var rangeErr error
now := s.clock.Now()
cutoff1w := now.Add(-7 * 24 * time.Hour).UnixNano()
n := 0
s.m.Range(func(key protocol.DeviceID, value *discosrv.DatabaseRecord) bool {
if n%1000 == 0 {
runtime.Gosched()
}
n++
if value.Seen < cutoff1w {
// drop the record if it's older than a week
return true
}
rec := &discosrv.ReplicationRecord{
Key: key[:],
Addresses: value.Addresses,
Seen: value.Seen,
}
size := proto.Size(rec)
if size+4 > len(buf) {
buf = make([]byte, size+4)
}
n, err := protoutil.MarshalTo(buf[4:], rec)
if err != nil {
rangeErr = err
return false
}
binary.BigEndian.PutUint32(buf, uint32(n))
if _, err := bw.Write(buf[:n+4]); err != nil {
rangeErr = err
return false
}
return true
})
if rangeErr != nil {
_ = fd.Close()
return rangeErr
}
if err := bw.Flush(); err != nil {
_ = fd.Close()
return err
}
if err := fd.Close(); err != nil {
return err
}
if err := os.Rename(dbf+".tmp", dbf); err != nil {
return err
}
// Upload to blob storage
if s.blobs != nil {
fd, err = os.Open(dbf)
if err != nil {
log.Printf("Error uploading database to blob storage: %v", err)
return nil
}
defer fd.Close()
if err := s.blobs.Upload(context.Background(), s.objKey, fd); err != nil {
log.Printf("Error uploading database to blob storage: %v", err)
}
log.Println("Finished uploading database")
}
return nil
}
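Editor's note: write() frames each surviving record as a 4-byte big-endian length followed by the marshalled ReplicationRecord, which is exactly what read() consumes on startup. A minimal sketch of that framing, with plain byte payloads standing in for the protobuf records:

package main

import (
	"bufio"
	"bytes"
	"encoding/binary"
	"fmt"
	"io"
)

func main() {
	var db bytes.Buffer
	w := bufio.NewWriter(&db)
	for _, rec := range [][]byte{[]byte("record-1"), []byte("record-2")} {
		binary.Write(w, binary.BigEndian, uint32(len(rec))) // 4-byte length prefix
		w.Write(rec)                                        // then the payload
	}
	w.Flush()

	r := bufio.NewReader(&db)
	for {
		var n uint32
		if err := binary.Read(r, binary.BigEndian, &n); err != nil {
			break // io.EOF ends the stream
		}
		buf := make([]byte, n)
		if _, err := io.ReadFull(r, buf); err != nil {
			break
		}
		fmt.Printf("%s\n", buf) // record-1, record-2
	}
}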
func (s *levelDBStore) statisticsServe(trigger <-chan struct{}, done chan<- struct{}) {
defer close(done)
func (s *inMemoryStore) read() (int, error) {
fd, err := os.Open(path.Join(s.dir, "records.db"))
if err != nil {
return 0, err
}
defer fd.Close()
for range trigger {
t0 := time.Now()
nowNanos := t0.UnixNano()
cutoff24h := t0.Add(-24 * time.Hour).UnixNano()
cutoff1w := t0.Add(-7 * 24 * time.Hour).UnixNano()
cutoff2Mon := t0.Add(-60 * 24 * time.Hour).UnixNano()
current, currentIPv4, currentIPv6, last24h, last1w, inactive, errors := 0, 0, 0, 0, 0, 0, 0
iter := s.db.NewIterator(&util.Range{}, nil)
for iter.Next() {
// Attempt to unmarshal the record and count the
// failure if there's something wrong with it.
var rec DatabaseRecord
if err := rec.Unmarshal(iter.Value()); err != nil {
errors++
continue
}
// If there are addresses that have not expired it's a current
// record, otherwise account it based on when it was last seen
// (last 24 hours or last week) or finally as inactive.
addrs := expire(rec.Addresses, nowNanos)
switch {
case len(addrs) > 0:
current++
seenIPv4, seenIPv6 := false, false
for _, addr := range addrs {
uri, err := url.Parse(addr.Address)
if err != nil {
continue
}
host, _, err := net.SplitHostPort(uri.Host)
if err != nil {
continue
}
if ip := net.ParseIP(host); ip != nil && ip.To4() != nil {
seenIPv4 = true
} else if ip != nil {
seenIPv6 = true
}
if seenIPv4 && seenIPv6 {
break
}
}
if seenIPv4 {
currentIPv4++
}
if seenIPv6 {
currentIPv6++
}
case rec.Seen > cutoff24h:
last24h++
case rec.Seen > cutoff1w:
last1w++
case rec.Seen > cutoff2Mon:
inactive++
case rec.Missed < cutoff2Mon:
// It hasn't been seen lately and we haven't recorded
// someone asking for this device in a long time either;
// delete the record.
if err := s.db.Delete(iter.Key(), nil); err != nil {
databaseOperations.WithLabelValues(dbOpDelete, dbResError).Inc()
} else {
databaseOperations.WithLabelValues(dbOpDelete, dbResSuccess).Inc()
}
default:
inactive++
br := bufio.NewReader(fd)
var buf []byte
nr := 0
for {
var n uint32
if err := binary.Read(br, binary.BigEndian, &n); err != nil {
if errors.Is(err, io.EOF) {
break
}
return nr, err
}
if int(n) > len(buf) {
buf = make([]byte, n)
}
if _, err := io.ReadFull(br, buf[:n]); err != nil {
return nr, err
}
rec := &discosrv.ReplicationRecord{}
if err := proto.Unmarshal(buf[:n], rec); err != nil {
return nr, err
}
key, err := protocol.DeviceIDFromBytes(rec.Key)
if err != nil {
key, err = protocol.DeviceIDFromString(string(rec.Key))
}
if err != nil {
log.Println("Bad device ID:", err)
continue
}
iter.Release()
databaseKeys.WithLabelValues("current").Set(float64(current))
databaseKeys.WithLabelValues("currentIPv4").Set(float64(currentIPv4))
databaseKeys.WithLabelValues("currentIPv6").Set(float64(currentIPv6))
databaseKeys.WithLabelValues("last24h").Set(float64(last24h))
databaseKeys.WithLabelValues("last1w").Set(float64(last1w))
databaseKeys.WithLabelValues("inactive").Set(float64(inactive))
databaseKeys.WithLabelValues("error").Set(float64(errors))
databaseStatisticsSeconds.Set(time.Since(t0).Seconds())
// Signal that we are done and can be scheduled again.
done <- struct{}{}
slices.SortFunc(rec.Addresses, Cmp)
rec.Addresses = slices.CompactFunc(rec.Addresses, Equal)
s.m.Store(key, &discosrv.DatabaseRecord{
Addresses: expire(rec.Addresses, s.clock.Now()),
Seen: rec.Seen,
})
nr++
}
return nr, nil
}
// merge returns the merged result of the two database records a and b. The
// result is the union of the two address sets, with the newer expiry time
// chosen for any duplicates.
func merge(a, b DatabaseRecord) DatabaseRecord {
// chosen for any duplicates. The address list in a is overwritten and
// reused for the result.
func merge(a, b *discosrv.DatabaseRecord) *discosrv.DatabaseRecord {
// Both lists must be sorted for this to work.
if !sort.IsSorted(databaseAddressOrder(a.Addresses)) {
log.Println("Warning: bug: addresses not correctly sorted in merge")
a.Addresses = sortedAddressCopy(a.Addresses)
}
if !sort.IsSorted(databaseAddressOrder(b.Addresses)) {
// no warning because this is the side we read from disk and it may
// legitimately predate correct sorting.
b.Addresses = sortedAddressCopy(b.Addresses)
}
res := DatabaseRecord{
Addresses: make([]DatabaseAddress, 0, len(a.Addresses)+len(b.Addresses)),
Seen: a.Seen,
}
if b.Seen > a.Seen {
res.Seen = b.Seen
}
a.Seen = max(a.Seen, b.Seen)
aIdx := 0
bIdx := 0
aAddrs := a.Addresses
bAddrs := b.Addresses
loop:
for {
switch {
case aIdx == len(aAddrs) && bIdx == len(bAddrs):
// both lists are exhausted, we are done
break loop
case aIdx == len(aAddrs):
// a is exhausted, pick from b and continue
res.Addresses = append(res.Addresses, bAddrs[bIdx])
bIdx++
continue
case bIdx == len(bAddrs):
// b is exhausted, pick from a and continue
res.Addresses = append(res.Addresses, aAddrs[aIdx])
aIdx++
continue
}
// We have values left on both sides.
aVal := aAddrs[aIdx]
bVal := bAddrs[bIdx]
switch {
case aVal.Address == bVal.Address:
// update for same address, pick newer
if aVal.Expires > bVal.Expires {
res.Addresses = append(res.Addresses, aVal)
} else {
res.Addresses = append(res.Addresses, bVal)
}
for aIdx < len(a.Addresses) && bIdx < len(b.Addresses) {
switch cmp.Compare(a.Addresses[aIdx].Address, b.Addresses[bIdx].Address) {
case 0:
// a == b, choose the newer expiry time
a.Addresses[aIdx].Expires = max(a.Addresses[aIdx].Expires, b.Addresses[bIdx].Expires)
aIdx++
bIdx++
case aVal.Address < bVal.Address:
// a is smallest, pick it and continue
res.Addresses = append(res.Addresses, aVal)
case -1:
// a < b, keep a and move on
aIdx++
default:
// b is smallest, pick it and continue
res.Addresses = append(res.Addresses, bVal)
case 1:
// a > b, insert b before a
a.Addresses = append(a.Addresses[:aIdx], append([]*discosrv.DatabaseAddress{b.Addresses[bIdx]}, a.Addresses[aIdx:]...)...)
bIdx++
}
}
return res
if bIdx < len(b.Addresses) {
a.Addresses = append(a.Addresses, b.Addresses[bIdx:]...)
}
return a
}
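Editor's note: for illustration, a standalone rendering of the merge contract above, using a stand-in struct rather than *discosrv.DatabaseAddress; the splice in the default branch mirrors the append-based insertion in the real code:

package main

import "fmt"

type addr struct {
	Address string
	Expires int64
}

// mergeAddrs takes two sorted lists, keeps the newer expiry on duplicates,
// and appends b's leftovers at the end.
func mergeAddrs(a, b []addr) []addr {
	ai, bi := 0, 0
	for ai < len(a) && bi < len(b) {
		switch {
		case a[ai].Address == b[bi].Address:
			if b[bi].Expires > a[ai].Expires {
				a[ai].Expires = b[bi].Expires
			}
			ai++
			bi++
		case a[ai].Address < b[bi].Address:
			ai++
		default: // b[bi] sorts first; splice it in before a[ai]
			a = append(a[:ai], append([]addr{b[bi]}, a[ai:]...)...)
			ai++
			bi++
		}
	}
	return append(a, b[bi:]...)
}

func main() {
	a := []addr{{"a", 10}, {"b", 15}}
	b := []addr{{"b", 5}, {"c", 20}}
	fmt.Println(mergeAddrs(a, b)) // [{a 10} {b 15} {c 20}]
}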
// expire returns the list of addresses after removing expired entries.
// Expiration happens in place, so the slice given as the parameter is
// destroyed. Internal order is not preserved.
func expire(addrs []DatabaseAddress, now int64) []DatabaseAddress {
i := 0
for i < len(addrs) {
if addrs[i].Expires < now {
addrs = sliceutil.RemoveAndZero(addrs, i)
// destroyed. Internal order is preserved.
func expire(addrs []*discosrv.DatabaseAddress, now time.Time) []*discosrv.DatabaseAddress {
cutoff := now.UnixNano()
naddrs := addrs[:0]
for i := range addrs {
if i > 0 && addrs[i].Address == addrs[i-1].Address {
// Skip duplicates
continue
}
i++
if addrs[i].Expires >= cutoff {
naddrs = append(naddrs, addrs[i])
}
}
return addrs
if len(naddrs) == 0 {
return nil
}
return naddrs
}
func sortedAddressCopy(addrs []DatabaseAddress) []DatabaseAddress {
sorted := make([]DatabaseAddress, len(addrs))
copy(sorted, addrs)
sort.Sort(databaseAddressOrder(sorted))
return sorted
func Cmp(d, other *discosrv.DatabaseAddress) (n int) {
if c := cmp.Compare(d.Address, other.Address); c != 0 {
return c
}
return cmp.Compare(d.Expires, other.Expires)
}
type databaseAddressOrder []DatabaseAddress
func (s databaseAddressOrder) Less(a, b int) bool {
return s[a].Address < s[b].Address
}
func (s databaseAddressOrder) Swap(a, b int) {
s[a], s[b] = s[b], s[a]
}
func (s databaseAddressOrder) Len() int {
return len(s)
func Equal(d, other *discosrv.DatabaseAddress) bool {
return d.Address == other.Address
}

View File

@@ -1,847 +0,0 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: database.proto
package main
import (
fmt "fmt"
_ "github.com/gogo/protobuf/gogoproto"
proto "github.com/gogo/protobuf/proto"
io "io"
math "math"
math_bits "math/bits"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
type DatabaseRecord struct {
Addresses []DatabaseAddress `protobuf:"bytes,1,rep,name=addresses,proto3" json:"addresses"`
Misses int32 `protobuf:"varint,2,opt,name=misses,proto3" json:"misses,omitempty"`
Seen int64 `protobuf:"varint,3,opt,name=seen,proto3" json:"seen,omitempty"`
Missed int64 `protobuf:"varint,4,opt,name=missed,proto3" json:"missed,omitempty"`
}
func (m *DatabaseRecord) Reset() { *m = DatabaseRecord{} }
func (m *DatabaseRecord) String() string { return proto.CompactTextString(m) }
func (*DatabaseRecord) ProtoMessage() {}
func (*DatabaseRecord) Descriptor() ([]byte, []int) {
return fileDescriptor_b90fe3356ea5df07, []int{0}
}
func (m *DatabaseRecord) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *DatabaseRecord) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_DatabaseRecord.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *DatabaseRecord) XXX_Merge(src proto.Message) {
xxx_messageInfo_DatabaseRecord.Merge(m, src)
}
func (m *DatabaseRecord) XXX_Size() int {
return m.Size()
}
func (m *DatabaseRecord) XXX_DiscardUnknown() {
xxx_messageInfo_DatabaseRecord.DiscardUnknown(m)
}
var xxx_messageInfo_DatabaseRecord proto.InternalMessageInfo
type ReplicationRecord struct {
Key string `protobuf:"bytes,1,opt,name=key,proto3" json:"key,omitempty"`
Addresses []DatabaseAddress `protobuf:"bytes,2,rep,name=addresses,proto3" json:"addresses"`
Seen int64 `protobuf:"varint,3,opt,name=seen,proto3" json:"seen,omitempty"`
}
func (m *ReplicationRecord) Reset() { *m = ReplicationRecord{} }
func (m *ReplicationRecord) String() string { return proto.CompactTextString(m) }
func (*ReplicationRecord) ProtoMessage() {}
func (*ReplicationRecord) Descriptor() ([]byte, []int) {
return fileDescriptor_b90fe3356ea5df07, []int{1}
}
func (m *ReplicationRecord) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *ReplicationRecord) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_ReplicationRecord.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *ReplicationRecord) XXX_Merge(src proto.Message) {
xxx_messageInfo_ReplicationRecord.Merge(m, src)
}
func (m *ReplicationRecord) XXX_Size() int {
return m.Size()
}
func (m *ReplicationRecord) XXX_DiscardUnknown() {
xxx_messageInfo_ReplicationRecord.DiscardUnknown(m)
}
var xxx_messageInfo_ReplicationRecord proto.InternalMessageInfo
type DatabaseAddress struct {
Address string `protobuf:"bytes,1,opt,name=address,proto3" json:"address,omitempty"`
Expires int64 `protobuf:"varint,2,opt,name=expires,proto3" json:"expires,omitempty"`
}
func (m *DatabaseAddress) Reset() { *m = DatabaseAddress{} }
func (m *DatabaseAddress) String() string { return proto.CompactTextString(m) }
func (*DatabaseAddress) ProtoMessage() {}
func (*DatabaseAddress) Descriptor() ([]byte, []int) {
return fileDescriptor_b90fe3356ea5df07, []int{2}
}
func (m *DatabaseAddress) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *DatabaseAddress) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_DatabaseAddress.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *DatabaseAddress) XXX_Merge(src proto.Message) {
xxx_messageInfo_DatabaseAddress.Merge(m, src)
}
func (m *DatabaseAddress) XXX_Size() int {
return m.Size()
}
func (m *DatabaseAddress) XXX_DiscardUnknown() {
xxx_messageInfo_DatabaseAddress.DiscardUnknown(m)
}
var xxx_messageInfo_DatabaseAddress proto.InternalMessageInfo
func init() {
proto.RegisterType((*DatabaseRecord)(nil), "main.DatabaseRecord")
proto.RegisterType((*ReplicationRecord)(nil), "main.ReplicationRecord")
proto.RegisterType((*DatabaseAddress)(nil), "main.DatabaseAddress")
}
func init() { proto.RegisterFile("database.proto", fileDescriptor_b90fe3356ea5df07) }
var fileDescriptor_b90fe3356ea5df07 = []byte{
// 270 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x94, 0x90, 0x41, 0x4a, 0xc4, 0x30,
0x18, 0x85, 0x9b, 0x49, 0x1d, 0x99, 0x08, 0xa3, 0x06, 0x94, 0x20, 0x12, 0x4b, 0xdd, 0x74, 0xd5,
0x01, 0x5d, 0xb9, 0x74, 0xd0, 0x0b, 0xe4, 0x06, 0xe9, 0xe4, 0x77, 0x08, 0x3a, 0x4d, 0x49, 0x2a,
0xe8, 0x29, 0xf4, 0x58, 0x5d, 0xce, 0xd2, 0x95, 0x68, 0x7b, 0x11, 0x69, 0x26, 0x55, 0x14, 0x37,
0xb3, 0x7b, 0xdf, 0xff, 0xbf, 0x97, 0xbc, 0x84, 0x4c, 0x95, 0xac, 0x65, 0x21, 0x1d, 0xe4, 0x95,
0x35, 0xb5, 0xa1, 0xf1, 0x4a, 0xea, 0xf2, 0xe4, 0xdc, 0x42, 0x65, 0xdc, 0xcc, 0x8f, 0x8a, 0xc7,
0xbb, 0xd9, 0xd2, 0x2c, 0x8d, 0x07, 0xaf, 0x36, 0xd6, 0xf4, 0x05, 0x91, 0xe9, 0x4d, 0x48, 0x0b,
0x58, 0x18, 0xab, 0xe8, 0x15, 0x99, 0x48, 0xa5, 0x2c, 0x38, 0x07, 0x8e, 0xa1, 0x04, 0x67, 0x7b,
0x17, 0x47, 0x79, 0x7f, 0x62, 0x3e, 0x18, 0xaf, 0x37, 0xeb, 0x79, 0xdc, 0xbc, 0x9f, 0x45, 0xe2,
0xc7, 0x4d, 0x8f, 0xc9, 0x78, 0xa5, 0x7d, 0x6e, 0x94, 0xa0, 0x6c, 0x47, 0x04, 0xa2, 0x94, 0xc4,
0x0e, 0xa0, 0x64, 0x38, 0x41, 0x19, 0x16, 0x5e, 0x7f, 0x7b, 0x15, 0x8b, 0xfd, 0x34, 0x50, 0x5a,
0x93, 0x43, 0x01, 0xd5, 0x83, 0x5e, 0xc8, 0x5a, 0x9b, 0x32, 0x74, 0x3a, 0x20, 0xf8, 0x1e, 0x9e,
0x19, 0x4a, 0x50, 0x36, 0x11, 0xbd, 0xfc, 0xdd, 0x72, 0xb4, 0x55, 0xcb, 0x7f, 0xda, 0xa4, 0xb7,
0x64, 0xff, 0x4f, 0x8e, 0x32, 0xb2, 0x1b, 0x32, 0xe1, 0xde, 0x01, 0xfb, 0x0d, 0x3c, 0x55, 0xda,
0x86, 0x77, 0x62, 0x31, 0xe0, 0xfc, 0xb4, 0xf9, 0xe4, 0x51, 0xd3, 0x72, 0xb4, 0x6e, 0x39, 0xfa,
0x68, 0x39, 0x7a, 0xed, 0x78, 0xb4, 0xee, 0x78, 0xf4, 0xd6, 0xf1, 0xa8, 0x18, 0xfb, 0x3f, 0xbf,
0xfc, 0x0a, 0x00, 0x00, 0xff, 0xff, 0x7a, 0xa2, 0xf6, 0x1e, 0xb0, 0x01, 0x00, 0x00,
}
func (m *DatabaseRecord) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *DatabaseRecord) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *DatabaseRecord) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
if m.Missed != 0 {
i = encodeVarintDatabase(dAtA, i, uint64(m.Missed))
i--
dAtA[i] = 0x20
}
if m.Seen != 0 {
i = encodeVarintDatabase(dAtA, i, uint64(m.Seen))
i--
dAtA[i] = 0x18
}
if m.Misses != 0 {
i = encodeVarintDatabase(dAtA, i, uint64(m.Misses))
i--
dAtA[i] = 0x10
}
if len(m.Addresses) > 0 {
for iNdEx := len(m.Addresses) - 1; iNdEx >= 0; iNdEx-- {
{
size, err := m.Addresses[iNdEx].MarshalToSizedBuffer(dAtA[:i])
if err != nil {
return 0, err
}
i -= size
i = encodeVarintDatabase(dAtA, i, uint64(size))
}
i--
dAtA[i] = 0xa
}
}
return len(dAtA) - i, nil
}
func (m *ReplicationRecord) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *ReplicationRecord) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *ReplicationRecord) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
if m.Seen != 0 {
i = encodeVarintDatabase(dAtA, i, uint64(m.Seen))
i--
dAtA[i] = 0x18
}
if len(m.Addresses) > 0 {
for iNdEx := len(m.Addresses) - 1; iNdEx >= 0; iNdEx-- {
{
size, err := m.Addresses[iNdEx].MarshalToSizedBuffer(dAtA[:i])
if err != nil {
return 0, err
}
i -= size
i = encodeVarintDatabase(dAtA, i, uint64(size))
}
i--
dAtA[i] = 0x12
}
}
if len(m.Key) > 0 {
i -= len(m.Key)
copy(dAtA[i:], m.Key)
i = encodeVarintDatabase(dAtA, i, uint64(len(m.Key)))
i--
dAtA[i] = 0xa
}
return len(dAtA) - i, nil
}
func (m *DatabaseAddress) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *DatabaseAddress) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *DatabaseAddress) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
if m.Expires != 0 {
i = encodeVarintDatabase(dAtA, i, uint64(m.Expires))
i--
dAtA[i] = 0x10
}
if len(m.Address) > 0 {
i -= len(m.Address)
copy(dAtA[i:], m.Address)
i = encodeVarintDatabase(dAtA, i, uint64(len(m.Address)))
i--
dAtA[i] = 0xa
}
return len(dAtA) - i, nil
}
func encodeVarintDatabase(dAtA []byte, offset int, v uint64) int {
offset -= sovDatabase(v)
base := offset
for v >= 1<<7 {
dAtA[offset] = uint8(v&0x7f | 0x80)
v >>= 7
offset++
}
dAtA[offset] = uint8(v)
return base
}
func (m *DatabaseRecord) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if len(m.Addresses) > 0 {
for _, e := range m.Addresses {
l = e.Size()
n += 1 + l + sovDatabase(uint64(l))
}
}
if m.Misses != 0 {
n += 1 + sovDatabase(uint64(m.Misses))
}
if m.Seen != 0 {
n += 1 + sovDatabase(uint64(m.Seen))
}
if m.Missed != 0 {
n += 1 + sovDatabase(uint64(m.Missed))
}
return n
}
func (m *ReplicationRecord) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
l = len(m.Key)
if l > 0 {
n += 1 + l + sovDatabase(uint64(l))
}
if len(m.Addresses) > 0 {
for _, e := range m.Addresses {
l = e.Size()
n += 1 + l + sovDatabase(uint64(l))
}
}
if m.Seen != 0 {
n += 1 + sovDatabase(uint64(m.Seen))
}
return n
}
func (m *DatabaseAddress) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
l = len(m.Address)
if l > 0 {
n += 1 + l + sovDatabase(uint64(l))
}
if m.Expires != 0 {
n += 1 + sovDatabase(uint64(m.Expires))
}
return n
}
func sovDatabase(x uint64) (n int) {
return (math_bits.Len64(x|1) + 6) / 7
}
func sozDatabase(x uint64) (n int) {
return sovDatabase(uint64((x << 1) ^ uint64((int64(x) >> 63))))
}
func (m *DatabaseRecord) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowDatabase
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: DatabaseRecord: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: DatabaseRecord: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Addresses", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowDatabase
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
if msglen < 0 {
return ErrInvalidLengthDatabase
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthDatabase
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Addresses = append(m.Addresses, DatabaseAddress{})
if err := m.Addresses[len(m.Addresses)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
case 2:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field Misses", wireType)
}
m.Misses = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowDatabase
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
m.Misses |= int32(b&0x7F) << shift
if b < 0x80 {
break
}
}
case 3:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field Seen", wireType)
}
m.Seen = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowDatabase
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
m.Seen |= int64(b&0x7F) << shift
if b < 0x80 {
break
}
}
case 4:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field Missed", wireType)
}
m.Missed = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowDatabase
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
m.Missed |= int64(b&0x7F) << shift
if b < 0x80 {
break
}
}
default:
iNdEx = preIndex
skippy, err := skipDatabase(dAtA[iNdEx:])
if err != nil {
return err
}
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthDatabase
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
func (m *ReplicationRecord) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowDatabase
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: ReplicationRecord: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: ReplicationRecord: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Key", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowDatabase
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthDatabase
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthDatabase
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Key = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 2:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Addresses", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowDatabase
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
if msglen < 0 {
return ErrInvalidLengthDatabase
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthDatabase
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Addresses = append(m.Addresses, DatabaseAddress{})
if err := m.Addresses[len(m.Addresses)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
case 3:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field Seen", wireType)
}
m.Seen = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowDatabase
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
m.Seen |= int64(b&0x7F) << shift
if b < 0x80 {
break
}
}
default:
iNdEx = preIndex
skippy, err := skipDatabase(dAtA[iNdEx:])
if err != nil {
return err
}
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthDatabase
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
func (m *DatabaseAddress) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowDatabase
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: DatabaseAddress: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: DatabaseAddress: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Address", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowDatabase
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthDatabase
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthDatabase
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Address = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 2:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field Expires", wireType)
}
m.Expires = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowDatabase
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
m.Expires |= int64(b&0x7F) << shift
if b < 0x80 {
break
}
}
default:
iNdEx = preIndex
skippy, err := skipDatabase(dAtA[iNdEx:])
if err != nil {
return err
}
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthDatabase
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
func skipDatabase(dAtA []byte) (n int, err error) {
l := len(dAtA)
iNdEx := 0
depth := 0
for iNdEx < l {
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowDatabase
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
wireType := int(wire & 0x7)
switch wireType {
case 0:
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowDatabase
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
iNdEx++
if dAtA[iNdEx-1] < 0x80 {
break
}
}
case 1:
iNdEx += 8
case 2:
var length int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowDatabase
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
length |= (int(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
if length < 0 {
return 0, ErrInvalidLengthDatabase
}
iNdEx += length
case 3:
depth++
case 4:
if depth == 0 {
return 0, ErrUnexpectedEndOfGroupDatabase
}
depth--
case 5:
iNdEx += 4
default:
return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
}
if iNdEx < 0 {
return 0, ErrInvalidLengthDatabase
}
if depth == 0 {
return iNdEx, nil
}
}
return 0, io.ErrUnexpectedEOF
}
var (
ErrInvalidLengthDatabase = fmt.Errorf("proto: negative length found during unmarshaling")
ErrIntOverflowDatabase = fmt.Errorf("proto: integer overflow")
ErrUnexpectedEndOfGroupDatabase = fmt.Errorf("proto: unexpected end of group")
)

View File

@@ -1,36 +0,0 @@
// Copyright (C) 2018 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
syntax = "proto3";
package main;
import "repos/protobuf/gogoproto/gogo.proto";
option (gogoproto.goproto_getters_all) = false;
option (gogoproto.goproto_unkeyed_all) = false;
option (gogoproto.goproto_unrecognized_all) = false;
option (gogoproto.goproto_sizecache_all) = false;
message DatabaseRecord {
repeated DatabaseAddress addresses = 1 [(gogoproto.nullable) = false];
int32 misses = 2; // Number of lookups* without hits
int64 seen = 3; // Unix nanos, last device announce
int64 missed = 4; // Unix nanos, last* failed lookup
}
// *) Not every lookup results in a write, so may not be completely accurate
message ReplicationRecord {
string key = 1;
repeated DatabaseAddress addresses = 2 [(gogoproto.nullable) = false];
int64 seen = 3; // Unix nanos, last device announce
}
message DatabaseAddress {
string address = 1;
int64 expires = 2; // Unix nanos
}

View File

@@ -11,29 +11,26 @@ import (
"fmt"
"testing"
"time"
"github.com/syncthing/syncthing/internal/gen/discosrv"
"github.com/syncthing/syncthing/lib/protocol"
)
func TestDatabaseGetSet(t *testing.T) {
db, err := newMemoryLevelDBStore()
if err != nil {
t.Fatal(err)
}
db := newInMemoryStore(t.TempDir(), 0, nil)
ctx, cancel := context.WithCancel(context.Background())
go db.Serve(ctx)
defer cancel()
// Check missing record
rec, err := db.get("abcd")
rec, err := db.get(&protocol.EmptyDeviceID)
if err != nil {
t.Error("not found should not be an error")
}
if len(rec.Addresses) != 0 {
t.Error("addresses should be empty")
}
if rec.Misses != 0 {
t.Error("missing should be zero")
}
// Set up a clock
@@ -43,16 +40,16 @@ func TestDatabaseGetSet(t *testing.T) {
// Put a record
rec.Addresses = []DatabaseAddress{
rec.Addresses = []*discosrv.DatabaseAddress{
{Address: "tcp://1.2.3.4:5", Expires: tc.Now().Add(time.Minute).UnixNano()},
}
if err := db.put("abcd", rec); err != nil {
if err := db.put(&protocol.EmptyDeviceID, rec); err != nil {
t.Fatal(err)
}
// Verify it
rec, err = db.get("abcd")
rec, err = db.get(&protocol.EmptyDeviceID)
if err != nil {
t.Fatal(err)
}
@@ -69,16 +66,16 @@ func TestDatabaseGetSet(t *testing.T) {
tc.wind(30 * time.Second)
addrs := []DatabaseAddress{
addrs := []*discosrv.DatabaseAddress{
{Address: "tcp://6.7.8.9:0", Expires: tc.Now().Add(time.Minute).UnixNano()},
}
if err := db.merge("abcd", addrs, tc.Now().UnixNano()); err != nil {
if err := db.merge(&protocol.EmptyDeviceID, addrs, tc.Now().UnixNano()); err != nil {
t.Fatal(err)
}
// Verify it
rec, err = db.get("abcd")
rec, err = db.get(&protocol.EmptyDeviceID)
if err != nil {
t.Fatal(err)
}
@@ -101,7 +98,7 @@ func TestDatabaseGetSet(t *testing.T) {
// Verify it
rec, err = db.get("abcd")
rec, err = db.get(&protocol.EmptyDeviceID)
if err != nil {
t.Fatal(err)
}
@@ -114,40 +111,18 @@ func TestDatabaseGetSet(t *testing.T) {
t.Error("incorrect address")
}
// Put a record with misses
rec = DatabaseRecord{Misses: 42, Missed: tc.Now().UnixNano()}
if err := db.put("efgh", rec); err != nil {
t.Fatal(err)
}
// Verify it
rec, err = db.get("efgh")
if err != nil {
t.Fatal(err)
}
if len(rec.Addresses) != 0 {
t.Log(rec.Addresses)
t.Fatal("should have no addresses")
}
if rec.Misses != 42 {
t.Log(rec.Misses)
t.Error("incorrect misses")
}
// Set an address
addrs = []DatabaseAddress{
addrs = []*discosrv.DatabaseAddress{
{Address: "tcp://6.7.8.9:0", Expires: tc.Now().Add(time.Minute).UnixNano()},
}
if err := db.merge("efgh", addrs, tc.Now().UnixNano()); err != nil {
if err := db.merge(&protocol.GlobalDeviceID, addrs, tc.Now().UnixNano()); err != nil {
t.Fatal(err)
}
// Verify it
rec, err = db.get("efgh")
rec, err = db.get(&protocol.GlobalDeviceID)
if err != nil {
t.Fatal(err)
}
@@ -155,48 +130,126 @@ func TestDatabaseGetSet(t *testing.T) {
t.Log(rec.Addresses)
t.Fatal("should have one address")
}
if rec.Misses != 0 {
t.Log(rec.Misses)
t.Error("should have no misses")
}
}
func TestFilter(t *testing.T) {
// expiry in all cases is evaluated at t=10
cases := []struct {
a []DatabaseAddress
b []DatabaseAddress
a []*discosrv.DatabaseAddress
b []*discosrv.DatabaseAddress
}{
{
a: nil,
b: nil,
},
{
a: []DatabaseAddress{{Address: "a", Expires: 9}, {Address: "b", Expires: 9}, {Address: "c", Expires: 9}},
b: []DatabaseAddress{},
a: []*discosrv.DatabaseAddress{{Address: "a", Expires: 9}, {Address: "b", Expires: 9}, {Address: "c", Expires: 9}},
b: []*discosrv.DatabaseAddress{},
},
{
a: []DatabaseAddress{{Address: "a", Expires: 10}},
b: []DatabaseAddress{{Address: "a", Expires: 10}},
a: []*discosrv.DatabaseAddress{{Address: "a", Expires: 10}},
b: []*discosrv.DatabaseAddress{{Address: "a", Expires: 10}},
},
{
a: []DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 10}, {Address: "c", Expires: 10}},
b: []DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 10}, {Address: "c", Expires: 10}},
a: []*discosrv.DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 10}, {Address: "c", Expires: 10}},
b: []*discosrv.DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 10}, {Address: "c", Expires: 10}},
},
{
a: []DatabaseAddress{{Address: "a", Expires: 5}, {Address: "b", Expires: 15}, {Address: "c", Expires: 5}, {Address: "d", Expires: 15}, {Address: "e", Expires: 5}},
b: []DatabaseAddress{{Address: "b", Expires: 15}, {Address: "d", Expires: 15}},
a: []*discosrv.DatabaseAddress{{Address: "a", Expires: 5}, {Address: "b", Expires: 15}, {Address: "c", Expires: 5}, {Address: "d", Expires: 15}, {Address: "e", Expires: 5}},
b: []*discosrv.DatabaseAddress{{Address: "b", Expires: 15}, {Address: "d", Expires: 15}},
},
}
for _, tc := range cases {
res := expire(tc.a, 10)
res := expire(tc.a, time.Unix(0, 10))
if fmt.Sprint(res) != fmt.Sprint(tc.b) {
t.Errorf("Incorrect result %v, expected %v", res, tc.b)
}
}
}
func TestMerge(t *testing.T) {
cases := []struct {
a, b, res []*discosrv.DatabaseAddress
}{
{nil, nil, nil},
{
nil,
[]*discosrv.DatabaseAddress{{Address: "a", Expires: 10}},
[]*discosrv.DatabaseAddress{{Address: "a", Expires: 10}},
},
{
nil,
[]*discosrv.DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 10}, {Address: "c", Expires: 10}},
[]*discosrv.DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 10}, {Address: "c", Expires: 10}},
},
{
[]*discosrv.DatabaseAddress{{Address: "a", Expires: 10}},
[]*discosrv.DatabaseAddress{{Address: "a", Expires: 15}},
[]*discosrv.DatabaseAddress{{Address: "a", Expires: 15}},
},
{
[]*discosrv.DatabaseAddress{{Address: "a", Expires: 10}},
[]*discosrv.DatabaseAddress{{Address: "b", Expires: 15}},
[]*discosrv.DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 15}},
},
{
[]*discosrv.DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 15}},
[]*discosrv.DatabaseAddress{{Address: "a", Expires: 15}, {Address: "b", Expires: 15}},
[]*discosrv.DatabaseAddress{{Address: "a", Expires: 15}, {Address: "b", Expires: 15}},
},
{
[]*discosrv.DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 15}},
[]*discosrv.DatabaseAddress{{Address: "b", Expires: 15}, {Address: "c", Expires: 20}},
[]*discosrv.DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 15}, {Address: "c", Expires: 20}},
},
{
[]*discosrv.DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 15}},
[]*discosrv.DatabaseAddress{{Address: "b", Expires: 5}, {Address: "c", Expires: 20}},
[]*discosrv.DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 15}, {Address: "c", Expires: 20}},
},
{
[]*discosrv.DatabaseAddress{{Address: "y", Expires: 10}, {Address: "z", Expires: 10}},
[]*discosrv.DatabaseAddress{{Address: "a", Expires: 5}, {Address: "b", Expires: 15}},
[]*discosrv.DatabaseAddress{{Address: "a", Expires: 5}, {Address: "b", Expires: 15}, {Address: "y", Expires: 10}, {Address: "z", Expires: 10}},
},
{
[]*discosrv.DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 15}, {Address: "d", Expires: 10}},
[]*discosrv.DatabaseAddress{{Address: "b", Expires: 5}, {Address: "c", Expires: 20}},
[]*discosrv.DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 15}, {Address: "c", Expires: 20}, {Address: "d", Expires: 10}},
},
}
for _, tc := range cases {
rec := merge(&discosrv.DatabaseRecord{Addresses: tc.a}, &discosrv.DatabaseRecord{Addresses: tc.b})
if fmt.Sprint(rec.Addresses) != fmt.Sprint(tc.res) {
t.Errorf("Incorrect result %v, expected %v", rec.Addresses, tc.res)
}
rec = merge(&discosrv.DatabaseRecord{Addresses: tc.b}, &discosrv.DatabaseRecord{Addresses: tc.a})
if fmt.Sprint(rec.Addresses) != fmt.Sprint(tc.res) {
t.Errorf("Incorrect result %v, expected %v", rec.Addresses, tc.res)
}
}
}
func BenchmarkMergeEqual(b *testing.B) {
for i := 0; i < b.N; i++ {
ar := []*discosrv.DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 15}}
br := []*discosrv.DatabaseAddress{{Address: "a", Expires: 15}, {Address: "b", Expires: 10}}
res := merge(&discosrv.DatabaseRecord{Addresses: ar}, &discosrv.DatabaseRecord{Addresses: br})
if len(res.Addresses) != 2 {
b.Fatal("wrong length")
}
if res.Addresses[0].Address != "a" || res.Addresses[1].Address != "b" {
b.Fatal("wrong address")
}
if res.Addresses[0].Expires != 15 || res.Addresses[1].Expires != 15 {
b.Fatal("wrong expiry")
}
}
b.ReportAllocs() // should be zero per operation
}
type testClock struct {
now time.Time
}

View File

@@ -9,22 +9,26 @@ package main
import (
"context"
"crypto/tls"
"flag"
"log"
"net"
"net/http"
_ "net/http/pprof"
"os"
"os/signal"
"runtime"
"strings"
"time"
"github.com/alecthomas/kong"
"github.com/prometheus/client_golang/prometheus/promhttp"
"github.com/thejerf/suture/v4"
"github.com/syncthing/syncthing/internal/blob"
"github.com/syncthing/syncthing/internal/blob/azureblob"
"github.com/syncthing/syncthing/internal/blob/s3"
_ "github.com/syncthing/syncthing/lib/automaxprocs"
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/rand"
"github.com/syncthing/syncthing/lib/tlsutil"
"github.com/syndtr/goleveldb/leveldb/opt"
"github.com/thejerf/suture/v4"
)
const (
@@ -38,17 +42,12 @@ const (
errorRetryAfterSeconds = 1500
errorRetryFuzzSeconds = 300
// Retry for not found is minSeconds + failures * incSeconds +
// random(fuzz), where failures is the number of consecutive lookups
// with no answer, up to maxSeconds. The fuzz is applied after capping
// to maxSeconds.
notFoundRetryMinSeconds = 60
notFoundRetryMaxSeconds = 3540
notFoundRetryIncSeconds = 10
notFoundRetryFuzzSeconds = 60
// How often (in requests) we serialize the missed counter to database.
notFoundMissesWriteInterval = 10
// Retry-After for not-found lookups adapts between the bounds below,
// tracked separately for records we have seen an announcement for at
// some point (but which are not active right now) and for records we
// have never seen (or not seen within the last week).
notFoundRetryUnknownMinSeconds = 60
notFoundRetryUnknownMaxSeconds = 3600
httpReadTimeout = 5 * time.Second
httpWriteTimeout = 5 * time.Second
@@ -58,164 +57,123 @@ const (
replicationOutboxSize = 10000
)
// These options make the database a little more optimized for writes, at
// the expense of some memory usage and risk of losing writes in a (system)
// crash.
var levelDBOptions = &opt.Options{
NoSync: true,
WriteBuffer: 32 << 20, // default 4<<20
}
var debug = false
type CLI struct {
Cert string `group:"Listen" help:"Certificate file" default:"./cert.pem" env:"DISCOVERY_CERT_FILE"`
Key string `group:"Listen" help:"Key file" default:"./key.pem" env:"DISCOVERY_KEY_FILE"`
HTTP bool `group:"Listen" help:"Listen on HTTP (behind an HTTPS proxy)" env:"DISCOVERY_HTTP"`
Compression bool `group:"Listen" help:"Enable GZIP compression of responses" env:"DISCOVERY_COMPRESSION"`
Listen string `group:"Listen" help:"Listen address" default:":8443" env:"DISCOVERY_LISTEN"`
MetricsListen string `group:"Listen" help:"Metrics listen address" env:"DISCOVERY_METRICS_LISTEN"`
DesiredNotFoundRate float64 `group:"Listen" help:"Desired maximum rate of not-found replies (/s)" default:"1000"`
DBDir string `group:"Database" help:"Database directory" default:"." env:"DISCOVERY_DB_DIR"`
DBFlushInterval time.Duration `group:"Database" help:"Interval between database flushes" default:"5m" env:"DISCOVERY_DB_FLUSH_INTERVAL"`
DBS3Endpoint string `name:"db-s3-endpoint" group:"Database (S3 backup)" hidden:"true" help:"S3 endpoint for database" env:"DISCOVERY_DB_S3_ENDPOINT"`
DBS3Region string `name:"db-s3-region" group:"Database (S3 backup)" hidden:"true" help:"S3 region for database" env:"DISCOVERY_DB_S3_REGION"`
DBS3Bucket string `name:"db-s3-bucket" group:"Database (S3 backup)" hidden:"true" help:"S3 bucket for database" env:"DISCOVERY_DB_S3_BUCKET"`
DBS3AccessKeyID string `name:"db-s3-access-key-id" group:"Database (S3 backup)" hidden:"true" help:"S3 access key ID for database" env:"DISCOVERY_DB_S3_ACCESS_KEY_ID"`
DBS3SecretKey string `name:"db-s3-secret-key" group:"Database (S3 backup)" hidden:"true" help:"S3 secret key for database" env:"DISCOVERY_DB_S3_SECRET_KEY"`
DBAzureBlobAccount string `name:"db-azure-blob-account" env:"DISCOVERY_DB_AZUREBLOB_ACCOUNT"`
DBAzureBlobKey string `name:"db-azure-blob-key" env:"DISCOVERY_DB_AZUREBLOB_KEY"`
DBAzureBlobContainer string `name:"db-azure-blob-container" env:"DISCOVERY_DB_AZUREBLOB_CONTAINER"`
AMQPAddress string `group:"AMQP replication" hidden:"true" help:"Address to AMQP broker" env:"DISCOVERY_AMQP_ADDRESS"`
Debug bool `short:"d" help:"Print debug output" env:"DISCOVERY_DEBUG"`
Version bool `short:"v" help:"Print version and exit"`
}
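
The struct above replaces the hand-rolled flag parsing below: defaults, help texts, and environment-variable fallbacks are declared once as kong struct tags. A trimmed, self-contained sketch of the same pattern (field set reduced for illustration):

package main

import (
	"fmt"

	"github.com/alecthomas/kong"
)

// miniCLI mirrors two fields of the CLI struct above; kong reads the
// default and env tags and fills the struct in a single Parse call.
type miniCLI struct {
	Listen string `help:"Listen address" default:":8443" env:"DISCOVERY_LISTEN"`
	Debug  bool   `short:"d" help:"Print debug output" env:"DISCOVERY_DEBUG"`
}

func main() {
	var cli miniCLI
	kong.Parse(&cli) // prints usage and exits on invalid flags
	fmt.Println("listen:", cli.Listen, "debug:", cli.Debug)
}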
func main() {
var listen string
var dir string
var metricsListen string
var replicationListen string
var replicationPeers string
var certFile string
var keyFile string
var replCertFile string
var replKeyFile string
var useHTTP bool
var largeDB bool
log.SetOutput(os.Stdout)
log.SetFlags(0)
flag.StringVar(&certFile, "cert", "./cert.pem", "Certificate file")
flag.StringVar(&keyFile, "key", "./key.pem", "Key file")
flag.StringVar(&dir, "db-dir", "./discovery.db", "Database directory")
flag.BoolVar(&debug, "debug", false, "Print debug output")
flag.BoolVar(&useHTTP, "http", false, "Listen on HTTP (behind an HTTPS proxy)")
flag.StringVar(&listen, "listen", ":8443", "Listen address")
flag.StringVar(&metricsListen, "metrics-listen", "", "Metrics listen address")
flag.StringVar(&replicationPeers, "replicate", "", "Replication peers, id@address, comma separated")
flag.StringVar(&replicationListen, "replication-listen", ":19200", "Replication listen address")
flag.StringVar(&replCertFile, "replication-cert", "", "Certificate file for replication")
flag.StringVar(&replKeyFile, "replication-key", "", "Key file for replication")
flag.BoolVar(&largeDB, "large-db", false, "Use larger database settings")
showVersion := flag.Bool("version", false, "Show version")
flag.Parse()
var cli CLI
kong.Parse(&cli)
debug = cli.Debug
log.Println(build.LongVersionFor("stdiscosrv"))
if *showVersion {
if cli.Version {
return
}
buildInfo.WithLabelValues(build.Version, runtime.Version(), build.User, build.Date.UTC().Format("2006-01-02T15:04:05Z")).Set(1)
if largeDB {
levelDBOptions.BlockCacheCapacity = 64 << 20
levelDBOptions.BlockSize = 64 << 10
levelDBOptions.CompactionTableSize = 16 << 20
levelDBOptions.CompactionTableSizeMultiplier = 2.0
levelDBOptions.WriteBuffer = 64 << 20
levelDBOptions.CompactionL0Trigger = 8
}
cert, err := tls.LoadX509KeyPair(certFile, keyFile)
if os.IsNotExist(err) {
log.Println("Failed to load keypair. Generating one, this might take a while...")
cert, err = tlsutil.NewCertificate(certFile, keyFile, "stdiscosrv", 20*365)
if err != nil {
log.Fatalln("Failed to generate X509 key pair:", err)
}
} else if err != nil {
log.Fatalln("Failed to load keypair:", err)
}
devID := protocol.NewDeviceID(cert.Certificate[0])
log.Println("Server device ID is", devID)
replCert := cert
if replCertFile != "" && replKeyFile != "" {
replCert, err = tls.LoadX509KeyPair(replCertFile, replKeyFile)
if err != nil {
log.Fatalln("Failed to load replication keypair:", err)
}
}
replDevID := protocol.NewDeviceID(replCert.Certificate[0])
log.Println("Replication device ID is", replDevID)
// Parse the replication specs, if any.
var allowedReplicationPeers []protocol.DeviceID
var replicationDestinations []string
parts := strings.Split(replicationPeers, ",")
for _, part := range parts {
if part == "" {
continue
}
fields := strings.Split(part, "@")
switch len(fields) {
case 2:
// This is an id@address specification. Grab the address for the
// destination list. Try to resolve it once to catch obvious
// syntax errors here rather than having the sender service fail
// repeatedly later.
_, err := net.ResolveTCPAddr("tcp", fields[1])
var cert tls.Certificate
if !cli.HTTP {
var err error
cert, err = tls.LoadX509KeyPair(cli.Cert, cli.Key)
if os.IsNotExist(err) {
log.Println("Failed to load keypair. Generating one, this might take a while...")
cert, err = tlsutil.NewCertificate(cli.Cert, cli.Key, "stdiscosrv", 20*365)
if err != nil {
log.Fatalln("Resolving address:", err)
log.Fatalln("Failed to generate X509 key pair:", err)
}
replicationDestinations = append(replicationDestinations, fields[1])
fallthrough // N.B.
case 1:
// The first part is always a device ID.
id, err := protocol.DeviceIDFromString(fields[0])
if err != nil {
log.Fatalln("Parsing device ID:", err)
}
if id == protocol.EmptyDeviceID {
log.Fatalf("Missing device ID for peer in %q", part)
}
allowedReplicationPeers = append(allowedReplicationPeers, id)
default:
log.Fatalln("Unrecognized replication spec:", part)
} else if err != nil {
log.Fatalln("Failed to load keypair:", err)
}
devID := protocol.NewDeviceID(cert.Certificate[0])
log.Println("Server device ID is", devID)
}
// Root of the service tree.
main := suture.New("main", suture.Spec{
PassThroughPanics: true,
Timeout: 2 * time.Minute,
})
// Start the database.
db, err := newLevelDBStore(dir)
if err != nil {
log.Fatalln("Open database:", err)
// If configured, use blob storage for database backups.
var blobs blob.Store
var err error
if cli.DBS3Endpoint != "" {
blobs, err = s3.NewSession(cli.DBS3Endpoint, cli.DBS3Region, cli.DBS3Bucket, cli.DBS3AccessKeyID, cli.DBS3SecretKey)
} else if cli.DBAzureBlobAccount != "" {
blobs, err = azureblob.NewBlobStore(cli.DBAzureBlobAccount, cli.DBAzureBlobKey, cli.DBAzureBlobContainer)
}
if err != nil {
log.Fatalf("Failed to create blob store: %v", err)
}
// Start the database.
db := newInMemoryStore(cli.DBDir, cli.DBFlushInterval, blobs)
main.Add(db)
// Start any replication senders.
var repl replicationMultiplexer
for _, dst := range replicationDestinations {
rs := newReplicationSender(dst, replCert, allowedReplicationPeers)
main.Add(rs)
repl = append(repl, rs)
}
// If we have replication configured, start the replication listener.
if len(allowedReplicationPeers) > 0 {
rl := newReplicationListener(replicationListen, replCert, allowedReplicationPeers, db)
main.Add(rl)
// If we have an AMQP broker for replication, start that
var repl replicator
if cli.AMQPAddress != "" {
clientID := rand.String(10)
kr := newAMQPReplicator(cli.AMQPAddress, clientID, db)
main.Add(kr)
repl = kr
}
// Start the main API server.
qs := newAPISrv(listen, cert, db, repl, useHTTP)
qs := newAPISrv(cli.Listen, cert, db, repl, cli.HTTP, cli.Compression, cli.DesiredNotFoundRate)
main.Add(qs)
// If we have a metrics port configured, start a metrics handler.
if metricsListen != "" {
if cli.MetricsListen != "" {
go func() {
mux := http.NewServeMux()
mux.Handle("/metrics", promhttp.Handler())
log.Fatal(http.ListenAndServe(metricsListen, mux))
log.Fatal(http.ListenAndServe(cli.MetricsListen, mux))
}()
}
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// Cancel on signal
signalChan := make(chan os.Signal, 1)
signal.Notify(signalChan, os.Interrupt)
go func() {
sig := <-signalChan
log.Printf("Received signal %s; shutting down", sig)
cancel()
}()
// Engage!
main.Serve(context.Background())
main.Serve(ctx)
}

View File

@@ -1,324 +0,0 @@
// Copyright (C) 2018 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"context"
"crypto/tls"
"encoding/binary"
"fmt"
io "io"
"log"
"net"
"time"
"github.com/syncthing/syncthing/lib/protocol"
)
const (
replicationReadTimeout = time.Minute
replicationWriteTimeout = 30 * time.Second
replicationHeartbeatInterval = time.Second * 30
)
type replicator interface {
send(key string, addrs []DatabaseAddress, seen int64)
}
// a replicationSender tries to connect to the remote address and provides
// it with a feed of replication updates.
type replicationSender struct {
dst string
cert tls.Certificate // our certificate
allowedIDs []protocol.DeviceID
outbox chan ReplicationRecord
}
func newReplicationSender(dst string, cert tls.Certificate, allowedIDs []protocol.DeviceID) *replicationSender {
return &replicationSender{
dst: dst,
cert: cert,
allowedIDs: allowedIDs,
outbox: make(chan ReplicationRecord, replicationOutboxSize),
}
}
func (s *replicationSender) Serve(ctx context.Context) error {
// Sleep a little at startup. Peers often restart at the same time, and
// this avoids the service failing and entering backoff state
// unnecessarily, while also reducing the reconnect rate to something
// reasonable by default.
time.Sleep(2 * time.Second)
tlsCfg := &tls.Config{
Certificates: []tls.Certificate{s.cert},
MinVersion: tls.VersionTLS12,
InsecureSkipVerify: true,
}
// Dial the TLS connection.
conn, err := tls.Dial("tcp", s.dst, tlsCfg)
if err != nil {
log.Println("Replication connect:", err)
return err
}
defer func() {
conn.SetWriteDeadline(time.Now().Add(time.Second))
conn.Close()
}()
// The replication stream is not especially latency sensitive, but it is
// quite a lot of data in small writes. Make it more efficient.
if tcpc, ok := conn.NetConn().(*net.TCPConn); ok {
_ = tcpc.SetNoDelay(false)
}
// Get the other side device ID.
remoteID, err := deviceID(conn)
if err != nil {
log.Println("Replication connect:", err)
return err
}
// Verify it's in the set of allowed device IDs.
if !deviceIDIn(remoteID, s.allowedIDs) {
log.Println("Replication connect: unexpected device ID:", remoteID)
return err
}
heartBeatTicker := time.NewTicker(replicationHeartbeatInterval)
defer heartBeatTicker.Stop()
// Send records.
buf := make([]byte, 1024)
for {
select {
case <-heartBeatTicker.C:
if len(s.outbox) > 0 {
// No need to send heartbeats if there are events/previous
// heartbeats to send; they will keep the connection alive.
continue
}
// Empty replication message is the heartbeat:
s.outbox <- ReplicationRecord{}
case rec := <-s.outbox:
// Buffer must hold record plus four bytes for size
size := rec.Size()
if len(buf) < size+4 {
buf = make([]byte, size+4)
}
// Record comes after the four bytes size
n, err := rec.MarshalTo(buf[4:])
if err != nil {
// odd to get an error here, but we haven't sent anything
// yet so it's not fatal
replicationSendsTotal.WithLabelValues("error").Inc()
log.Println("Replication marshal:", err)
continue
}
binary.BigEndian.PutUint32(buf, uint32(n))
// Send
conn.SetWriteDeadline(time.Now().Add(replicationWriteTimeout))
if _, err := conn.Write(buf[:4+n]); err != nil {
replicationSendsTotal.WithLabelValues("error").Inc()
log.Println("Replication write:", err)
// Yes, we are losing the replication event here.
return err
}
replicationSendsTotal.WithLabelValues("success").Inc()
case <-ctx.Done():
return nil
}
}
}
func (s *replicationSender) String() string {
return fmt.Sprintf("replicationSender(%q)", s.dst)
}
func (s *replicationSender) send(key string, ps []DatabaseAddress, _ int64) {
item := ReplicationRecord{
Key: key,
Addresses: ps,
}
// The send should never block. The outbox is suitably buffered for at
// least a few seconds of stalls, which shouldn't happen in practice.
select {
case s.outbox <- item:
default:
replicationSendsTotal.WithLabelValues("drop").Inc()
}
}
// a replicationMultiplexer sends to multiple replicators
type replicationMultiplexer []replicator
func (m replicationMultiplexer) send(key string, ps []DatabaseAddress, seen int64) {
for _, s := range m {
// each send is nonblocking
s.send(key, ps, seen)
}
}
// replicationListener accepts incoming connections and reads replication
// items from them. Incoming items are applied to the KV store.
type replicationListener struct {
addr string
cert tls.Certificate
allowedIDs []protocol.DeviceID
db database
}
func newReplicationListener(addr string, cert tls.Certificate, allowedIDs []protocol.DeviceID, db database) *replicationListener {
return &replicationListener{
addr: addr,
cert: cert,
allowedIDs: allowedIDs,
db: db,
}
}
func (l *replicationListener) Serve(ctx context.Context) error {
tlsCfg := &tls.Config{
Certificates: []tls.Certificate{l.cert},
ClientAuth: tls.RequestClientCert,
MinVersion: tls.VersionTLS12,
InsecureSkipVerify: true,
}
lst, err := tls.Listen("tcp", l.addr, tlsCfg)
if err != nil {
log.Println("Replication listen:", err)
return err
}
defer lst.Close()
for {
select {
case <-ctx.Done():
return nil
default:
}
// Accept a connection
conn, err := lst.Accept()
if err != nil {
log.Println("Replication accept:", err)
return err
}
// Figure out the other side device ID
remoteID, err := deviceID(conn.(*tls.Conn))
if err != nil {
log.Println("Replication accept:", err)
conn.SetWriteDeadline(time.Now().Add(time.Second))
conn.Close()
continue
}
// Verify it is in the set of allowed device IDs
if !deviceIDIn(remoteID, l.allowedIDs) {
log.Println("Replication accept: unexpected device ID:", remoteID)
conn.SetWriteDeadline(time.Now().Add(time.Second))
conn.Close()
continue
}
go l.handle(ctx, conn)
}
}
func (l *replicationListener) String() string {
return fmt.Sprintf("replicationListener(%q)", l.addr)
}
func (l *replicationListener) handle(ctx context.Context, conn net.Conn) {
defer func() {
conn.SetWriteDeadline(time.Now().Add(time.Second))
conn.Close()
}()
buf := make([]byte, 1024)
for {
select {
case <-ctx.Done():
return
default:
}
conn.SetReadDeadline(time.Now().Add(replicationReadTimeout))
// First four bytes are the size
if _, err := io.ReadFull(conn, buf[:4]); err != nil {
log.Println("Replication read size:", err)
replicationRecvsTotal.WithLabelValues("error").Inc()
return
}
// Read the rest of the record
size := int(binary.BigEndian.Uint32(buf[:4]))
if len(buf) < size {
buf = make([]byte, size)
}
if size == 0 {
// Heartbeat, ignore
continue
}
if _, err := io.ReadFull(conn, buf[:size]); err != nil {
log.Println("Replication read record:", err)
replicationRecvsTotal.WithLabelValues("error").Inc()
return
}
// Unmarshal
var rec ReplicationRecord
if err := rec.Unmarshal(buf[:size]); err != nil {
log.Println("Replication unmarshal:", err)
replicationRecvsTotal.WithLabelValues("error").Inc()
continue
}
// Store
l.db.merge(rec.Key, rec.Addresses, rec.Seen)
replicationRecvsTotal.WithLabelValues("success").Inc()
}
}
func deviceID(conn *tls.Conn) (protocol.DeviceID, error) {
// The handshake may not be complete on the server side yet, and we need
// it to be in order to get the client certificate.
if !conn.ConnectionState().HandshakeComplete {
if err := conn.Handshake(); err != nil {
return protocol.DeviceID{}, err
}
}
// We expect exactly one certificate.
certs := conn.ConnectionState().PeerCertificates
if len(certs) != 1 {
return protocol.DeviceID{}, fmt.Errorf("unexpected number of certificates (%d != 1)", len(certs))
}
return protocol.NewDeviceID(certs[0].Raw), nil
}
func deviceIDIn(id protocol.DeviceID, ids []protocol.DeviceID) bool {
for _, candidate := range ids {
if id == candidate {
return true
}
}
return false
}
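
The wire format used throughout this (now removed) file is a four-byte big-endian length prefix followed by the marshalled record, with a zero-length frame doubling as the heartbeat. A self-contained sketch of just the framing, with hypothetical helper names:

package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"io"
)

// writeFrame writes a four-byte big-endian length prefix followed by the
// payload; an empty payload is the heartbeat.
func writeFrame(w io.Writer, payload []byte) error {
	var size [4]byte
	binary.BigEndian.PutUint32(size[:], uint32(len(payload)))
	if _, err := w.Write(size[:]); err != nil {
		return err
	}
	_, err := w.Write(payload)
	return err
}

// readFrame reads one frame; a nil, nil result is a heartbeat.
func readFrame(r io.Reader) ([]byte, error) {
	var size [4]byte
	if _, err := io.ReadFull(r, size[:]); err != nil {
		return nil, err
	}
	n := binary.BigEndian.Uint32(size[:])
	if n == 0 {
		return nil, nil // heartbeat
	}
	buf := make([]byte, n)
	if _, err := io.ReadFull(r, buf); err != nil {
		return nil, err
	}
	return buf, nil
}

func main() {
	var buf bytes.Buffer
	_ = writeFrame(&buf, []byte("record"))
	rec, _ := readFrame(&buf)
	fmt.Printf("%s\n", rec) // record
}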

View File

@@ -7,10 +7,7 @@
package main
import (
"os"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/collectors"
)
var (
@@ -99,13 +96,28 @@ var (
Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001},
}, []string{"operation"})
retryAfterHistogram = prometheus.NewHistogram(prometheus.HistogramOpts{
Namespace: "syncthing",
Subsystem: "discovery",
Name: "retry_after_seconds",
Help: "Retry-After header value in seconds.",
Buckets: prometheus.ExponentialBuckets(60, 2, 7), // 60, 120, 240, 480, 960, 1920, 3840
})
databaseWriteSeconds = prometheus.NewGauge(
prometheus.GaugeOpts{
Namespace: "syncthing",
Subsystem: "discovery",
Name: "database_write_seconds",
Help: "Time spent writing the database.",
})
databaseLastWritten = prometheus.NewGauge(
prometheus.GaugeOpts{
Namespace: "syncthing",
Subsystem: "discovery",
Name: "database_last_written",
Help: "Timestamp of the last successful database write.",
})
retryAfterLevel = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Namespace: "syncthing",
Subsystem: "discovery",
Name: "retry_after_seconds",
Help: "Retry-After header value in seconds.",
}, []string{"name"})
)
const (
@@ -126,16 +138,6 @@ func init() {
replicationSendsTotal, replicationRecvsTotal,
databaseKeys, databaseStatisticsSeconds,
databaseOperations, databaseOperationSeconds,
retryAfterHistogram)
processCollectorOpts := collectors.ProcessCollectorOpts{
Namespace: "syncthing_discovery",
PidFn: func() (int, error) {
return os.Getpid(), nil
},
}
prometheus.MustRegister(
collectors.NewProcessCollector(processCollectorOpts),
)
databaseWriteSeconds, databaseLastWritten,
retryAfterLevel)
}

View File

@@ -12,9 +12,8 @@ import (
"time"
syncthingprotocol "github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/tlsutil"
"github.com/syncthing/syncthing/lib/relay/protocol"
"github.com/syncthing/syncthing/lib/tlsutil"
)
var (
@@ -185,7 +184,7 @@ func protocolConnectionHandler(tcpConn net.Conn, config *tls.Config, token strin
continue
}
// requestedPeer is the server, id is the client
ses := newSession(requestedPeer, id, sessionLimiter, globalLimiter)
ses := newSession(requestedPeer, id, sessionLimitBps, globalLimiter)
go ses.Serve()

View File

@@ -19,6 +19,8 @@ import (
"syscall"
"time"
"golang.org/x/time/rate"
_ "github.com/syncthing/syncthing/lib/automaxprocs"
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/config"
@@ -30,7 +32,6 @@ import (
"github.com/syncthing/syncthing/lib/relay/protocol"
"github.com/syncthing/syncthing/lib/tlsutil"
_ "github.com/syncthing/syncthing/lib/upnp"
"golang.org/x/time/rate"
)
var (
@@ -50,7 +51,6 @@ var (
globalLimitBps int
overLimit atomic.Bool
descriptorLimit int64
sessionLimiter *rate.Limiter
globalLimiter *rate.Limiter
networkBufferSize int
@@ -227,9 +227,6 @@ func main() {
}
}
if sessionLimitBps > 0 {
sessionLimiter = rate.NewLimiter(rate.Limit(sessionLimitBps), 2*sessionLimitBps)
}
if globalLimitBps > 0 {
globalLimiter = rate.NewLimiter(rate.Limit(globalLimitBps), 2*globalLimitBps)
}

View File

@@ -27,7 +27,7 @@ var (
bytesProxied atomic.Int64
)
func newSession(serverid, clientid syncthingprotocol.DeviceID, sessionRateLimit, globalRateLimit *rate.Limiter) *session {
func newSession(serverid, clientid syncthingprotocol.DeviceID, sessionLimitBps int, globalRateLimit *rate.Limiter) *session {
serverkey := make([]byte, 32)
_, err := rand.Read(serverkey)
if err != nil {
@@ -40,12 +40,17 @@ func newSession(serverid, clientid syncthingprotocol.DeviceID, sessionRateLimit,
return nil
}
var sessionRateLimit *rate.Limiter
if sessionLimitBps > 0 {
sessionRateLimit = rate.NewLimiter(rate.Limit(sessionLimitBps), 2*sessionLimitBps)
}
ses := &session{
serverkey: serverkey,
serverid: serverid,
clientkey: clientkey,
clientid: clientid,
rateLimit: makeRateLimitFunc(sessionRateLimit, globalRateLimit),
limiter: sessionRateLimit,
connsChan: make(chan net.Conn),
conns: make([]net.Conn, 0, 2),
}
@@ -68,7 +73,6 @@ func findSession(key string) *session {
ses, ok := pendingSessions[key]
if !ok {
return nil
}
delete(pendingSessions, key)
return ses
@@ -110,6 +114,7 @@ type session struct {
clientid syncthingprotocol.DeviceID
rateLimit func(bytes int)
limiter *rate.Limiter
connsChan chan net.Conn
conns []net.Conn

View File

@@ -1,149 +0,0 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"bytes"
"encoding/json"
"fmt"
"io"
"log"
"net/http"
"os"
"sort"
"strings"
"time"
"github.com/alecthomas/kong"
_ "github.com/syncthing/syncthing/lib/automaxprocs"
"github.com/syncthing/syncthing/lib/httpcache"
"github.com/syncthing/syncthing/lib/upgrade"
)
type cli struct {
Listen string `default:":8080" help:"Listen address"`
URL string `short:"u" default:"https://api.github.com/repos/syncthing/syncthing/releases?per_page=25" help:"GitHub releases url"`
Forward []string `short:"f" help:"Forwarded pages, format: /path->https://example/com/url"`
CacheTime time.Duration `default:"15m" help:"Cache time"`
}
func main() {
var params cli
kong.Parse(&params)
if err := server(&params); err != nil {
fmt.Printf("Error: %v\n", err)
os.Exit(1)
}
}
func server(params *cli) error {
http.Handle("/meta.json", httpcache.SinglePath(&githubReleases{url: params.URL}, params.CacheTime))
for _, fwd := range params.Forward {
path, url, ok := strings.Cut(fwd, "->")
if !ok {
return fmt.Errorf("invalid forward: %q", fwd)
}
http.Handle(path, httpcache.SinglePath(&proxy{url: url}, params.CacheTime))
}
return http.ListenAndServe(params.Listen, nil)
}
type githubReleases struct {
url string
}
func (p *githubReleases) ServeHTTP(w http.ResponseWriter, _ *http.Request) {
log.Println("Fetching", p.url)
rels := upgrade.FetchLatestReleases(p.url, "")
if rels == nil {
http.Error(w, "no releases", http.StatusInternalServerError)
return
}
sort.Sort(upgrade.SortByRelease(rels))
rels = filterForLatest(rels)
// Move the URL used for browser downloads to the URL field, and remove
// the browser URL field. This avoids going via the GitHub API for
// downloads, since Syncthing uses the URL field.
for _, rel := range rels {
for j, asset := range rel.Assets {
rel.Assets[j].URL = asset.BrowserURL
rel.Assets[j].BrowserURL = ""
}
}
buf := new(bytes.Buffer)
_ = json.NewEncoder(buf).Encode(rels)
w.Header().Set("Content-Type", "application/json; charset=utf-8")
w.Header().Set("Access-Control-Allow-Origin", "*")
w.Header().Set("Access-Control-Allow-Methods", "GET")
w.Write(buf.Bytes())
}
type proxy struct {
url string
}
func (p *proxy) ServeHTTP(w http.ResponseWriter, req *http.Request) {
log.Println("Fetching", p.url)
req, err := http.NewRequestWithContext(req.Context(), http.MethodGet, p.url, nil)
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
resp, err := http.DefaultClient.Do(req)
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
defer resp.Body.Close()
ct := resp.Header.Get("Content-Type")
w.Header().Set("Content-Type", ct)
if resp.StatusCode == http.StatusOK {
w.Header().Set("Cache-Control", "public, max-age=900")
w.Header().Set("Access-Control-Allow-Origin", "*")
w.Header().Set("Access-Control-Allow-Methods", "GET")
}
w.WriteHeader(resp.StatusCode)
if strings.HasPrefix(ct, "application/json") {
// Special JSON handling; clean it up a bit.
var v interface{}
if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
_ = json.NewEncoder(w).Encode(v)
} else {
_, _ = io.Copy(w, resp.Body)
}
}
// filterForLatest returns the latest stable and prerelease only. If the
// stable version is newer (comes first in the list) there is no need to go
// looking for a prerelease at all.
func filterForLatest(rels []upgrade.Release) []upgrade.Release {
var filtered []upgrade.Release
var havePre bool
for _, rel := range rels {
if !rel.Prerelease {
// We found a stable version, we're good now.
filtered = append(filtered, rel)
break
}
if rel.Prerelease && !havePre {
// We remember the first prerelease we find.
filtered = append(filtered, rel)
havePre = true
}
}
return filtered
}
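
Since the list is sorted newest-first before filterForLatest runs, a prerelease is only kept when it is newer than the newest stable release. A hedged illustration (tags invented for the example):

// Illustration only; assumes newest-first input as produced by the sort
// above. The prerelease precedes the first stable release, so both are kept.
rels := []upgrade.Release{
	{Tag: "v2.0.0-rc.1", Prerelease: true},
	{Tag: "v1.9.9"},
	{Tag: "v1.9.8"},
}
rels = filterForLatest(rels)
// rels now contains v2.0.0-rc.1 and v1.9.9; v1.9.8 is dropped.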

View File

@@ -11,6 +11,10 @@ import (
"fmt"
"time"
"google.golang.org/protobuf/proto"
"github.com/syncthing/syncthing/internal/gen/bep"
"github.com/syncthing/syncthing/internal/gen/dbproto"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/protocol"
)
@@ -33,19 +37,19 @@ func indexDump() error {
name := nulString(key[1+4+4:])
fmt.Printf("[device] F:%d D:%d N:%q", folder, device, name)
var f protocol.FileInfo
err := f.Unmarshal(it.Value())
var f bep.FileInfo
err := proto.Unmarshal(it.Value(), &f)
if err != nil {
return err
}
fmt.Printf(" V:%v\n", f)
fmt.Printf(" V:%v\n", &f)
case db.KeyTypeGlobal:
folder := binary.BigEndian.Uint32(key[1:])
name := nulString(key[1+4:])
var flv db.VersionList
flv.Unmarshal(it.Value())
fmt.Printf("[global] F:%d N:%q V:%s\n", folder, name, flv)
var flv dbproto.VersionList
proto.Unmarshal(it.Value(), &flv)
fmt.Printf("[global] F:%d N:%q V:%s\n", folder, name, &flv)
case db.KeyTypeBlock:
folder := binary.BigEndian.Uint32(key[1:])
@@ -94,11 +98,11 @@ func indexDump() error {
case db.KeyTypeFolderMeta:
folder := binary.BigEndian.Uint32(key[1:])
fmt.Printf("[foldermeta] F:%d", folder)
var cs db.CountsSet
if err := cs.Unmarshal(it.Value()); err != nil {
var cs dbproto.CountsSet
if err := proto.Unmarshal(it.Value(), &cs); err != nil {
fmt.Printf(" (invalid)\n")
} else {
fmt.Printf(" V:%v\n", cs)
fmt.Printf(" V:%v\n", &cs)
}
case db.KeyTypeMiscData:
@@ -125,20 +129,20 @@ func indexDump() error {
case db.KeyTypeVersion:
fmt.Printf("[version] H:%x", key[1:])
var v protocol.Vector
err := v.Unmarshal(it.Value())
var v bep.Vector
err := proto.Unmarshal(it.Value(), &v)
if err != nil {
fmt.Printf(" (invalid)\n")
} else {
fmt.Printf(" V:%v\n", v)
fmt.Printf(" V:%v\n", &v)
}
case db.KeyTypePendingFolder:
device := binary.BigEndian.Uint32(key[1:])
folder := string(key[5:])
var of db.ObservedFolder
of.Unmarshal(it.Value())
fmt.Printf("[pendingFolder] D:%d F:%s V:%v\n", device, folder, of)
var of dbproto.ObservedFolder
proto.Unmarshal(it.Value(), &of)
fmt.Printf("[pendingFolder] D:%d F:%s V:%v\n", device, folder, &of)
case db.KeyTypePendingDevice:
device := "<invalid>"
@@ -146,9 +150,9 @@ func indexDump() error {
if err == nil {
device = dev.String()
}
var od db.ObservedDevice
od.Unmarshal(it.Value())
fmt.Printf("[pendingDevice] D:%v V:%v\n", device, od)
var od dbproto.ObservedDevice
proto.Unmarshal(it.Value(), &od)
fmt.Printf("[pendingDevice] D:%v V:%v\n", device, &od)
default:
fmt.Printf("[??? %d]\n %x\n %x\n", key[0], key, it.Value())

View File

@@ -7,9 +7,10 @@
package cli
import (
"cmp"
"encoding/binary"
"fmt"
"sort"
"slices"
"github.com/syncthing/syncthing/lib/db"
)
@@ -77,8 +78,8 @@ func indexDumpSize() error {
elems = append(elems, ele)
}
sort.Slice(elems, func(i, j int) bool {
return elems[i].size > elems[j].size
slices.SortFunc(elems, func(a, b sizedElement) int {
return cmp.Compare(b.size, a.size)
})
for _, ele := range elems {
fmt.Println(ele.key, ele.size)
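
This hunk is part of a wider migration from sort.Slice with a boolean less-function to slices.SortFunc with a three-way comparator; descending order falls out of comparing the arguments in reverse. A minimal sketch of the pattern:

package main

import (
	"cmp"
	"fmt"
	"slices"
)

func main() {
	sizes := []int{3, 1, 2}
	// Passing b before a to cmp.Compare yields descending order, matching
	// the old `elems[i].size > elems[j].size` less-function.
	slices.SortFunc(sizes, func(a, b int) int { return cmp.Compare(b, a) })
	fmt.Println(sizes) // [3 2 1]
}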

View File

@@ -8,11 +8,16 @@ package cli
import (
"bytes"
"cmp"
"encoding/binary"
"errors"
"fmt"
"sort"
"slices"
"google.golang.org/protobuf/proto"
"github.com/syncthing/syncthing/internal/gen/bep"
"github.com/syncthing/syncthing/internal/gen/dbproto"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/protocol"
)
@@ -42,12 +47,12 @@ func indexCheck() (err error) {
folders := make(map[uint32]string)
devices := make(map[uint32]string)
deviceToIDs := make(map[string]uint32)
fileInfos := make(map[fileInfoKey]protocol.FileInfo)
globals := make(map[globalKey]db.VersionList)
fileInfos := make(map[fileInfoKey]*bep.FileInfo)
globals := make(map[globalKey]*dbproto.VersionList)
sequences := make(map[sequenceKey]string)
needs := make(map[globalKey]struct{})
blocklists := make(map[string]struct{})
versions := make(map[string]protocol.Vector)
versions := make(map[string]*bep.Vector)
usedBlocklists := make(map[string]struct{})
usedVersions := make(map[string]struct{})
var localDeviceKey uint32
@@ -74,26 +79,26 @@ func indexCheck() (err error) {
device := binary.BigEndian.Uint32(key[1+4:])
name := nulString(key[1+4+4:])
var f protocol.FileInfo
err := f.Unmarshal(it.Value())
var f bep.FileInfo
err := proto.Unmarshal(it.Value(), &f)
if err != nil {
fmt.Println("Unable to unmarshal FileInfo:", err)
success = false
continue
}
fileInfos[fileInfoKey{folder, device, name}] = f
fileInfos[fileInfoKey{folder, device, name}] = &f
case db.KeyTypeGlobal:
folder := binary.BigEndian.Uint32(key[1:])
name := nulString(key[1+4:])
var flv db.VersionList
if err := flv.Unmarshal(it.Value()); err != nil {
var flv dbproto.VersionList
if err := proto.Unmarshal(it.Value(), &flv); err != nil {
fmt.Println("Unable to unmarshal VersionList:", err)
success = false
continue
}
globals[globalKey{folder, name}] = flv
globals[globalKey{folder, name}] = &flv
case db.KeyTypeFolderIdx:
key := binary.BigEndian.Uint32(it.Key()[1:])
@@ -124,13 +129,13 @@ func indexCheck() (err error) {
case db.KeyTypeVersion:
hash := string(key[1:])
var v protocol.Vector
if err := v.Unmarshal(it.Value()); err != nil {
var v bep.Vector
if err := proto.Unmarshal(it.Value(), &v); err != nil {
fmt.Println("Unable to unmarshal Vector:", err)
success = false
continue
}
versions[hash] = v
versions[hash] = &v
}
}
@@ -203,11 +208,11 @@ func indexCheck() (err error) {
// Aggregate the ranges of missing sequence entries, print them
sort.Slice(missingSeq, func(a, b int) bool {
if missingSeq[a].folder != missingSeq[b].folder {
return missingSeq[a].folder < missingSeq[b].folder
slices.SortFunc(missingSeq, func(a, b sequenceKey) int {
if a.folder != b.folder {
return cmp.Compare(a.folder, b.folder)
}
return missingSeq[a].sequence < missingSeq[b].sequence
return cmp.Compare(a.sequence, b.sequence)
})
var folder uint32
@@ -248,25 +253,27 @@ func indexCheck() (err error) {
if fi.VersionHash != nil {
fiv = versions[string(fi.VersionHash)]
}
if !fiv.Equal(version) {
if !protocol.VectorFromWire(fiv).Equal(version) {
fmt.Printf("VersionList %q, folder %q, entry %d, FileInfo version mismatch, %v (VersionList) != %v (FileInfo)\n", gk.name, folder, i, version, fi.Version)
success = false
}
if fi.IsInvalid() != invalid {
fmt.Printf("VersionList %q, folder %q, entry %d, FileInfo invalid mismatch, %v (VersionList) != %v (FileInfo)\n", gk.name, folder, i, invalid, fi.IsInvalid())
ffi := protocol.FileInfoFromDB(fi)
if ffi.IsInvalid() != invalid {
fmt.Printf("VersionList %q, folder %q, entry %d, FileInfo invalid mismatch, %v (VersionList) != %v (FileInfo)\n", gk.name, folder, i, invalid, ffi.IsInvalid())
success = false
}
if fi.IsDeleted() != deleted {
fmt.Printf("VersionList %q, folder %q, entry %d, FileInfo deleted mismatch, %v (VersionList) != %v (FileInfo)\n", gk.name, folder, i, deleted, fi.IsDeleted())
if ffi.IsDeleted() != deleted {
fmt.Printf("VersionList %q, folder %q, entry %d, FileInfo deleted mismatch, %v (VersionList) != %v (FileInfo)\n", gk.name, folder, i, deleted, ffi.IsDeleted())
success = false
}
}
for i, fv := range vl.RawVersions {
for i, fv := range vl.Versions {
ver := protocol.VectorFromWire(fv.Version)
for _, device := range fv.Devices {
checkGlobal(i, device, fv.Version, false, fv.Deleted)
checkGlobal(i, device, ver, false, fv.Deleted)
}
for _, device := range fv.InvalidDevices {
checkGlobal(i, device, fv.Version, true, fv.Deleted)
checkGlobal(i, device, ver, true, fv.Deleted)
}
}
@@ -276,10 +283,10 @@ func indexCheck() (err error) {
if needsLocally(vl) {
_, ok := needs[gk]
if !ok {
fv, _ := vl.GetGlobal()
devB, _ := fv.FirstDevice()
fv, _ := vlGetGlobal(vl)
devB, _ := fvFirstDevice(fv)
dev := deviceToIDs[string(devB)]
fi := fileInfos[fileInfoKey{gk.folder, dev, gk.name}]
fi := protocol.FileInfoFromDB(fileInfos[fileInfoKey{gk.folder, dev, gk.name}])
if !fi.IsDeleted() && !fi.IsIgnored() {
fmt.Printf("Missing need entry for needed file %q, folder %q\n", gk.name, folder)
}
@@ -345,11 +352,84 @@ func indexCheck() (err error) {
return nil
}
func needsLocally(vl db.VersionList) bool {
gfv, gok := vl.GetGlobal()
func needsLocally(vl *dbproto.VersionList) bool {
gfv, gok := vlGetGlobal(vl)
if !gok { // That's weird, but we hardly need something non-existent
return false
}
fv, ok := vl.Get(protocol.LocalDeviceID[:])
return db.Need(gfv, ok, fv.Version)
fv, ok := vlGet(vl, protocol.LocalDeviceID[:])
return db.Need(gfv, ok, protocol.VectorFromWire(fv.Version))
}
// vlGet returns the FileVersion that contains the given device and whether it
// was found at all.
func vlGet(vl *dbproto.VersionList, device []byte) (*dbproto.FileVersion, bool) {
_, i, _, ok := vlFindDevice(vl, device)
if !ok {
return &dbproto.FileVersion{}, false
}
return vl.Versions[i], true
}
// vlGetGlobal returns the current global FileVersion. The returned FileVersion
// may be invalid, if all FileVersions are invalid. It returns false only if
// the VersionList is empty.
func vlGetGlobal(vl *dbproto.VersionList) (*dbproto.FileVersion, bool) {
i := vlFindGlobal(vl)
if i == -1 {
return nil, false
}
return vl.Versions[i], true
}
// vlFindGlobal returns the index of the first version that isn't invalid, or,
// if all versions are invalid, just the first version (i.e. 0), or -1 if
// there are no versions at all.
func vlFindGlobal(vl *dbproto.VersionList) int {
for i := range vl.Versions {
if !fvIsInvalid(vl.Versions[i]) {
return i
}
}
if len(vl.Versions) == 0 {
return -1
}
return 0
}
// vlFindDevice reports whether the device was found among the invalid devices
// (true) or the valid ones (false), the positions in the version and device
// slices, and whether it was found at all.
func vlFindDevice(vl *dbproto.VersionList, device []byte) (bool, int, int, bool) {
for i, v := range vl.Versions {
if j := deviceIndex(v.Devices, device); j != -1 {
return false, i, j, true
}
if j := deviceIndex(v.InvalidDevices, device); j != -1 {
return true, i, j, true
}
}
return false, -1, -1, false
}
func deviceIndex(devices [][]byte, device []byte) int {
for i, dev := range devices {
if bytes.Equal(device, dev) {
return i
}
}
return -1
}
func fvFirstDevice(fv *dbproto.FileVersion) ([]byte, bool) {
if len(fv.Devices) != 0 {
return fv.Devices[0], true
}
if len(fv.InvalidDevices) != 0 {
return fv.InvalidDevices[0], true
}
return nil, false
}
func fvIsInvalid(fv *dbproto.FileVersion) bool {
return fv == nil || len(fv.Devices) == 0
}

View File

@@ -12,7 +12,7 @@ import (
"os"
"github.com/alecthomas/kong"
"github.com/flynn-archive/go-shlex"
"github.com/kballard/go-shellquote"
"github.com/syncthing/syncthing/cmd/syncthing/cmdutil"
"github.com/syncthing/syncthing/lib/config"
@@ -67,7 +67,7 @@ func (*stdinCommand) Run() error {
fmt.Println("Reading commands from stdin...", args)
scanner := bufio.NewScanner(os.Stdin)
for scanner.Scan() {
input, err := shlex.Split(scanner.Text())
input, err := shellquote.Split(scanner.Text())
if err != nil {
return fmt.Errorf("parsing input: %w", err)
}

View File

@@ -9,15 +9,14 @@ package main
import (
"bytes"
"context"
"crypto/sha256"
"fmt"
"net/http"
"os"
"path/filepath"
"sort"
"slices"
"strings"
"time"
"github.com/syncthing/syncthing/lib/sha256"
)
const (
@@ -38,7 +37,9 @@ func uploadPanicLogs(ctx context.Context, urlBase, dir string) {
return
}
sort.Sort(sort.Reverse(sort.StringSlice(files)))
slices.SortFunc(files, func(a, b string) int {
return strings.Compare(b, a)
})
for _, file := range files {
if strings.Contains(file, ".reported.") {
// We've already sent this file. It'll be cleaned out at some

View File

@@ -10,6 +10,4 @@ import (
"github.com/syncthing/syncthing/lib/logger"
)
var (
l = logger.DefaultLogger.NewFacility("main", "Main package")
)
var l = logger.DefaultLogger.NewFacility("main", "Main package")

View File

@@ -17,6 +17,9 @@ import (
"os"
"path/filepath"
"google.golang.org/protobuf/proto"
"github.com/syncthing/syncthing/internal/gen/bep"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/osutil"
@@ -280,10 +283,11 @@ func loadEncryptedFileInfo(fd fs.File) (*protocol.FileInfo, error) {
return nil, err
}
var encFi protocol.FileInfo
if err := encFi.Unmarshal(trailer); err != nil {
var encFi bep.FileInfo
if err := proto.Unmarshal(trailer, &encFi); err != nil {
return nil, err
}
fi := protocol.FileInfoFromWire(&encFi)
return &encFi, nil
return &fi, nil
}

View File

@@ -23,14 +23,14 @@ import (
"path/filepath"
"regexp"
"runtime/pprof"
"sort"
"slices"
"strconv"
"strings"
"syscall"
"time"
"github.com/alecthomas/kong"
_ "github.com/syncthing/syncthing/lib/automaxprocs"
"github.com/gofrs/flock"
"github.com/thejerf/suture/v4"
"github.com/willabides/kongplete"
@@ -38,10 +38,10 @@ import (
"github.com/syncthing/syncthing/cmd/syncthing/cmdutil"
"github.com/syncthing/syncthing/cmd/syncthing/decrypt"
"github.com/syncthing/syncthing/cmd/syncthing/generate"
_ "github.com/syncthing/syncthing/lib/automaxprocs"
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/db/backend"
"github.com/syncthing/syncthing/lib/dialer"
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/fs"
@@ -92,11 +92,6 @@ above.
STLOCKTHRESHOLD Used for debugging internal deadlocks; sets debug
sensitivity. Use only under direction of a developer.
STHASHING Select the SHA256 hashing package to use. Possible values
are "standard" for the Go standard library implementation,
"minio" for the github.com/minio/sha256-simd implementation,
and blank (the default) for auto detection.
STVERSIONEXTRA Add extra information to the version string in logs and the
version line in the GUI. Can be set to the name of a wrapper
or tool controlling syncthing to communicate this to the end
@@ -354,10 +349,12 @@ func (options serveOptions) Run() error {
return nil
}
// Ensure that our home directory exists.
if err := syncthing.EnsureDir(locations.GetBaseDir(locations.ConfigBaseDir), 0o700); err != nil {
l.Warnln("Failure on home directory:", err)
os.Exit(svcutil.ExitError.AsInt())
// Ensure that our config and data directories exist.
for _, loc := range []locations.BaseDirEnum{locations.ConfigBaseDir, locations.DataBaseDir} {
if err := syncthing.EnsureDir(locations.GetBaseDir(loc), 0o700); err != nil {
l.Warnln("Failed to ensure directory exists:", err)
os.Exit(svcutil.ExitError.AsInt())
}
}
if options.UpgradeTo != "" {
@@ -381,15 +378,18 @@ func (options serveOptions) Run() error {
if options.Upgrade {
release, err := checkUpgrade()
if err == nil {
// Use leveldb database locks to protect against concurrent upgrades
var ldb backend.Backend
ldb, err = syncthing.OpenDBBackend(locations.Get(locations.Database), config.TuningAuto)
lf := flock.New(locations.Get(locations.LockFile))
locked, err := lf.TryLock()
if err != nil {
l.Warnln("Upgrade:", err)
os.Exit(1)
} else if locked {
err = upgradeViaRest()
} else {
_ = ldb.Close()
err = upgrade.To(release)
}
_ = lf.Unlock()
_ = os.Remove(locations.Get(locations.LockFile))
}
if err != nil {
l.Warnln("Upgrade:", err)
@@ -443,7 +443,7 @@ func debugFacilities() string {
maxLen = len(name)
}
}
sort.Strings(names)
slices.Sort(names)
// Format the choices
b := new(bytes.Buffer)
@@ -549,6 +549,17 @@ func syncthingMain(options serveOptions) {
os.Exit(1)
}
// Ensure we are the only running instance
lf := flock.New(locations.Get(locations.LockFile))
locked, err := lf.TryLock()
if err != nil {
l.Warnln("Failed to acquire lock:", err)
os.Exit(1)
} else if !locked {
l.Warnln("Failed to acquire lock: is another Syncthing instance already running?")
os.Exit(1)
}
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
@@ -567,6 +578,7 @@ func syncthingMain(options serveOptions) {
os.Exit(svcutil.ExitError.AsInt())
}
earlyService.Add(cfgWrapper)
config.RegisterInfoMetrics(cfgWrapper)
// Candidate builds should auto upgrade. Make sure the option is set,
// unless we are in a build where it's disabled or the STNOUPGRADE
@@ -629,9 +641,21 @@ func syncthingMain(options serveOptions) {
DBRecheckInterval: options.DebugDBRecheckInterval,
DBIndirectGCInterval: options.DebugDBIndirectGCInterval,
}
if options.Audit {
appOpts.AuditWriter = auditWriter(options.AuditFile)
if options.Audit || cfgWrapper.Options().AuditEnabled {
l.Infoln("Auditing is enabled.")
auditFile := cfgWrapper.Options().AuditFile
// Ignore config option if command-line option is set
if options.AuditFile != "" {
l.Debugln("Using the audit file from the command-line parameter.")
auditFile = options.AuditFile
}
appOpts.AuditWriter = auditWriter(auditFile)
}
if dur, err := time.ParseDuration(os.Getenv("STRECHECKDBEVERY")); err == nil {
appOpts.DBRecheckInterval = dur
}
@@ -685,6 +709,10 @@ func syncthingMain(options serveOptions) {
pprof.StopCPUProfile()
}
// Best effort remove lockfile, doesn't matter if it succeeds
_ = lf.Unlock()
_ = os.Remove(locations.Get(locations.LockFile))
os.Exit(int(status))
}
@@ -835,6 +863,10 @@ func initialAutoUpgradeCheck(misc *db.NamespacedKV) (upgrade.Release, error) {
if err != nil {
return upgrade.Release{}, err
}
if upgrade.CompareVersions(release.Tag, build.Version) == upgrade.MajorNewer {
return upgrade.Release{}, errors.New("higher major version")
}
if lastVersion, ok, err := misc.String(upgradeVersionKey); err == nil && ok && lastVersion == release.Tag {
// Only check time if we try to upgrade to the same release.
if lastTime, ok, err := misc.Time(upgradeTimeKey); err == nil && ok && time.Since(lastTime) < upgradeRetryInterval {

View File

@@ -64,7 +64,7 @@ func monitorMain(options serveOptions) {
fileDst, err = open(logFile)
}
if err != nil {
l.Warnln("Failed to setup logging to file, proceeding with logging to stdout only:", err)
l.Warnln("Failed to set up logging to file, proceeding with logging to stdout only:", err)
} else {
if build.IsWindows {
// Translate line breaks to Windows standard
@@ -238,35 +238,18 @@ func copyStderr(stderr io.Reader, dst io.Writer) {
return
}
if panicFd == nil {
dst.Write([]byte(line))
dst.Write([]byte(line))
if strings.Contains(line, "SIGILL") {
l.Warnln(`
*******************************************************************************
* Crash due to illegal instruction detected. This is most likely due to a CPU *
* incompatibility with the high performance hashing package. Switching to the *
* standard hashing package instead. Please report this issue at: *
* *
* https://github.com/syncthing/syncthing/issues *
* *
* Include the details of your CPU. *
*******************************************************************************
`)
os.Setenv("STHASHING", "standard")
return
if panicFd == nil && (strings.HasPrefix(line, "panic:") || strings.HasPrefix(line, "fatal error:")) {
panicFd, err = os.Create(locations.GetTimestamped(locations.PanicLog))
if err != nil {
l.Warnln("Create panic log:", err)
continue
}
if strings.HasPrefix(line, "panic:") || strings.HasPrefix(line, "fatal error:") {
panicFd, err = os.Create(locations.GetTimestamped(locations.PanicLog))
if err != nil {
l.Warnln("Create panic log:", err)
continue
}
l.Warnf("Panic detected, writing to \"%s\"", panicFd.Name())
if strings.Contains(line, "leveldb") && strings.Contains(line, "corrupt") {
l.Warnln(`
l.Warnf("Panic detected, writing to \"%s\"", panicFd.Name())
if strings.Contains(line, "leveldb") && strings.Contains(line, "corrupt") {
l.Warnln(`
*********************************************************************************
* Crash due to corrupt database. *
* *
@@ -279,22 +262,21 @@ func copyStderr(stderr io.Reader, dst io.Writer) {
* https://docs.syncthing.net/users/faq.html#my-syncthing-database-is-corrupt *
*********************************************************************************
`)
} else {
l.Warnln("Please check for existing issues with similar panic message at https://github.com/syncthing/syncthing/issues/")
l.Warnln("If no issue with similar panic message exists, please create a new issue with the panic log attached")
}
stdoutMut.Lock()
for _, line := range stdoutFirstLines {
panicFd.WriteString(line)
}
panicFd.WriteString("...\n")
for _, line := range stdoutLastLines {
panicFd.WriteString(line)
}
stdoutMut.Unlock()
} else {
l.Warnln("Please check for existing issues with similar panic message at https://github.com/syncthing/syncthing/issues/")
l.Warnln("If no issue with similar panic message exists, please create a new issue with the panic log attached")
}
stdoutMut.Lock()
for _, line := range stdoutFirstLines {
panicFd.WriteString(line)
}
panicFd.WriteString("...\n")
for _, line := range stdoutLastLines {
panicFd.WriteString(line)
}
stdoutMut.Unlock()
panicFd.WriteString("Panic at " + time.Now().Format(time.RFC3339) + "\n")
}

View File

@@ -140,7 +140,7 @@ func checkNotExist(t *testing.T, name string) {
func TestAutoClosedFile(t *testing.T) {
os.RemoveAll("_autoclose")
defer os.RemoveAll("_autoclose")
os.Mkdir("_autoclose", 0755)
os.Mkdir("_autoclose", 0o755)
file := filepath.FromSlash("_autoclose/tmp")
data := []byte("hello, world\n")

View File

@@ -1,226 +0,0 @@
// Copyright (C) 2018 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package aggregate
import (
"database/sql"
"fmt"
"log"
"os"
"time"
_ "github.com/lib/pq"
)
type CLI struct {
DBConn string `env:"UR_DB_URL" default:"postgres://user:password@localhost/ur?sslmode=disable"`
}
func (cli *CLI) Run() error {
log.SetFlags(log.Ltime | log.Ldate)
log.SetOutput(os.Stdout)
db, err := sql.Open("postgres", cli.DBConn)
if err != nil {
return fmt.Errorf("database: %w", err)
}
err = setupDB(db)
if err != nil {
return fmt.Errorf("database: %w", err)
}
for {
runAggregation(db)
// Sleep until one minute past next midnight
sleepUntilNext(24*time.Hour, 1*time.Minute)
}
}
func runAggregation(db *sql.DB) {
since := maxIndexedDay(db, "VersionSummary")
log.Println("Aggregating VersionSummary data since", since)
rows, err := aggregateVersionSummary(db, since.Add(24*time.Hour))
if err != nil {
log.Println("aggregate:", err)
}
log.Println("Inserted", rows, "rows")
since = maxIndexedDay(db, "Performance")
log.Println("Aggregating Performance data since", since)
rows, err = aggregatePerformance(db, since.Add(24*time.Hour))
if err != nil {
log.Println("aggregate:", err)
}
log.Println("Inserted", rows, "rows")
since = maxIndexedDay(db, "BlockStats")
log.Println("Aggregating BlockStats data since", since)
rows, err = aggregateBlockStats(db, since.Add(24*time.Hour))
if err != nil {
log.Println("aggregate:", err)
}
log.Println("Inserted", rows, "rows")
}
func sleepUntilNext(intv, margin time.Duration) {
now := time.Now().UTC()
next := now.Truncate(intv).Add(intv).Add(margin)
log.Println("Sleeping until", next)
time.Sleep(next.Sub(now))
}
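
With the arguments used above (a 24-hour interval and a one-minute margin), a run at any time of day truncates to midnight UTC and sleeps until 00:01 UTC the next day. The same arithmetic, shown for a fixed instant (date invented for the example):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Same calculation as sleepUntilNext above, for a fixed "now".
	now := time.Date(2018, 6, 20, 13, 0, 0, 0, time.UTC)
	next := now.Truncate(24 * time.Hour).Add(24*time.Hour + time.Minute)
	fmt.Println(next) // 2018-06-21 00:01:00 +0000 UTC
}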
func setupDB(db *sql.DB) error {
_, err := db.Exec(`CREATE TABLE IF NOT EXISTS VersionSummary (
Day TIMESTAMP NOT NULL,
Version VARCHAR(8) NOT NULL,
Count INTEGER NOT NULL
)`)
if err != nil {
return err
}
_, err = db.Exec(`CREATE TABLE IF NOT EXISTS Performance (
Day TIMESTAMP NOT NULL,
TotFiles INTEGER NOT NULL,
TotMiB INTEGER NOT NULL,
SHA256Perf DOUBLE PRECISION NOT NULL,
MemorySize INTEGER NOT NULL,
MemoryUsageMiB INTEGER NOT NULL
)`)
if err != nil {
return err
}
_, err = db.Exec(`CREATE TABLE IF NOT EXISTS BlockStats (
Day TIMESTAMP NOT NULL,
Reports INTEGER NOT NULL,
Total BIGINT NOT NULL,
Renamed BIGINT NOT NULL,
Reused BIGINT NOT NULL,
Pulled BIGINT NOT NULL,
CopyOrigin BIGINT NOT NULL,
CopyOriginShifted BIGINT NOT NULL,
CopyElsewhere BIGINT NOT NULL
)`)
if err != nil {
return err
}
var t string
row := db.QueryRow(`SELECT 'UniqueDayVersionIndex'::regclass`)
if err := row.Scan(&t); err != nil {
_, _ = db.Exec(`CREATE UNIQUE INDEX UniqueDayVersionIndex ON VersionSummary (Day, Version)`)
}
row = db.QueryRow(`SELECT 'VersionDayIndex'::regclass`)
if err := row.Scan(&t); err != nil {
_, _ = db.Exec(`CREATE INDEX VersionDayIndex ON VersionSummary (Day)`)
}
row = db.QueryRow(`SELECT 'PerformanceDayIndex'::regclass`)
if err := row.Scan(&t); err != nil {
_, _ = db.Exec(`CREATE INDEX PerformanceDayIndex ON Performance (Day)`)
}
row = db.QueryRow(`SELECT 'BlockStatsDayIndex'::regclass`)
if err := row.Scan(&t); err != nil {
_, _ = db.Exec(`CREATE INDEX BlockStatsDayIndex ON BlockStats (Day)`)
}
return nil
}
func maxIndexedDay(db *sql.DB, table string) time.Time {
var t time.Time
row := db.QueryRow("SELECT MAX(DATE_TRUNC('day', Day)) FROM " + table)
err := row.Scan(&t)
if err != nil {
return time.Time{}
}
return t
}
func aggregateVersionSummary(db *sql.DB, since time.Time) (int64, error) {
res, err := db.Exec(`INSERT INTO VersionSummary (
SELECT
DATE_TRUNC('day', Received) AS Day,
SUBSTRING(Report->>'version' FROM '^v\d.\d+') AS Ver,
COUNT(*) AS Count
FROM ReportsJson
WHERE
Received > $1
AND Received < DATE_TRUNC('day', NOW())
AND Report->>'version' like 'v_.%'
GROUP BY Day, Ver
);
`, since)
if err != nil {
return 0, err
}
return res.RowsAffected()
}
func aggregatePerformance(db *sql.DB, since time.Time) (int64, error) {
res, err := db.Exec(`INSERT INTO Performance (
SELECT
DATE_TRUNC('day', Received) AS Day,
AVG((Report->>'totFiles')::numeric) As TotFiles,
AVG((Report->>'totMiB')::numeric) As TotMiB,
AVG((Report->>'sha256Perf')::numeric) As SHA256Perf,
AVG((Report->>'memorySize')::numeric) As MemorySize,
AVG((Report->>'memoryUsageMiB')::numeric) As MemoryUsageMiB
FROM ReportsJson
WHERE
Received > $1
AND Received < DATE_TRUNC('day', NOW())
AND Report->>'version' like 'v_.%'
/* Some custom implementations reported bytes where we expect megabytes; cap at a petabyte */
AND (Report->>'memorySize')::numeric < 1073741824
GROUP BY Day
);
`, since)
if err != nil {
return 0, err
}
return res.RowsAffected()
}
func aggregateBlockStats(db *sql.DB, since time.Time) (int64, error) {
// Filter out anything prior to 0.14.41, as those versions had sum
// aggregations which made no sense.
res, err := db.Exec(`INSERT INTO BlockStats (
SELECT
DATE_TRUNC('day', Received) AS Day,
COUNT(1) As Reports,
SUM((Report->'blockStats'->>'total')::numeric)::bigint AS Total,
SUM((Report->'blockStats'->>'renamed')::numeric)::bigint AS Renamed,
SUM((Report->'blockStats'->>'reused')::numeric)::bigint AS Reused,
SUM((Report->'blockStats'->>'pulled')::numeric)::bigint AS Pulled,
SUM((Report->'blockStats'->>'copyOrigin')::numeric)::bigint AS CopyOrigin,
SUM((Report->'blockStats'->>'copyOriginShifted')::numeric)::bigint AS CopyOriginShifted,
SUM((Report->'blockStats'->>'copyElsewhere')::numeric)::bigint AS CopyElsewhere
FROM ReportsJson
WHERE
Received > $1
AND Received < DATE_TRUNC('day', NOW())
AND (Report->>'urVersion')::numeric >= 3
AND Report->>'version' like 'v_.%'
AND Report->>'version' NOT LIKE 'v0.14.40%'
AND Report->>'version' NOT LIKE 'v0.14.39%'
AND Report->>'version' NOT LIKE 'v0.14.38%'
GROUP BY Day
);
`, since)
if err != nil {
return 0, err
}
return res.RowsAffected()
}

View File

@@ -1,276 +0,0 @@
// Copyright (C) 2018 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package serve
import (
"regexp"
"sort"
"strconv"
"strings"
)
type analytic struct {
Key string
Count int
Percentage float64
Items []analytic `json:",omitempty"`
}
type analyticList []analytic
func (l analyticList) Less(a, b int) bool {
if l[a].Key == "Others" {
return false
}
if l[b].Key == "Others" {
return true
}
return l[b].Count < l[a].Count // inverse
}
func (l analyticList) Swap(a, b int) {
l[a], l[b] = l[b], l[a]
}
func (l analyticList) Len() int {
return len(l)
}
// analyticsFor returns a list of frequency analytics for a given list of strings.
func analyticsFor(ss []string, cutoff int) []analytic {
m := make(map[string]int)
t := 0
for _, s := range ss {
m[s]++
t++
}
l := make([]analytic, 0, len(m))
for k, c := range m {
l = append(l, analytic{
Key: k,
Count: c,
Percentage: 100 * float64(c) / float64(t),
})
}
sort.Sort(analyticList(l))
if cutoff > 0 && len(l) > cutoff {
c := 0
for _, i := range l[cutoff:] {
c += i.Count
}
l = append(l[:cutoff], analytic{
Key: "Others",
Count: c,
Percentage: 100 * float64(c) / float64(t),
})
}
return l
}
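
For instance, with a cutoff of 1, only the most frequent key survives and the remaining counts are folded into a single "Others" bucket. An in-package illustration (values invented):

func exampleAnalyticsFor() []analytic {
	// Yields {Key: "a", Count: 2, Percentage: 50} followed by
	// {Key: "Others", Count: 2, Percentage: 50}.
	return analyticsFor([]string{"a", "a", "b", "c"}, 1)
}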
// penetrationLevels finds the points at which certain penetration levels are met.
func penetrationLevels(as []analytic, points []float64) []analytic {
sort.Slice(as, func(a, b int) bool {
return versionLess(as[b].Key, as[a].Key)
})
var res []analytic
idx := 0
sum := 0.0
for _, a := range as {
sum += a.Percentage
if sum >= points[idx] {
a.Count = int(points[idx])
a.Percentage = sum
res = append(res, a)
idx++
if idx == len(points) {
break
}
}
}
return res
}
func statsForInts(data []int) [4]float64 {
var res [4]float64
if len(data) == 0 {
return res
}
sort.Ints(data)
res[0] = float64(data[int(float64(len(data))*0.05)])
res[1] = float64(data[len(data)/2])
res[2] = float64(data[int(float64(len(data))*0.95)])
res[3] = float64(data[len(data)-1])
return res
}
func statsForInt64s(data []int64) [4]float64 {
var res [4]float64
if len(data) == 0 {
return res
}
sort.Slice(data, func(a, b int) bool {
return data[a] < data[b]
})
res[0] = float64(data[int(float64(len(data))*0.05)])
res[1] = float64(data[len(data)/2])
res[2] = float64(data[int(float64(len(data))*0.95)])
res[3] = float64(data[len(data)-1])
return res
}
func statsForFloats(data []float64) [4]float64 {
var res [4]float64
if len(data) == 0 {
return res
}
sort.Float64s(data)
res[0] = data[int(float64(len(data))*0.05)]
res[1] = data[len(data)/2]
res[2] = data[int(float64(len(data))*0.95)]
res[3] = data[len(data)-1]
return res
}
func group(by func(string) string, as []analytic, perGroup int, otherPct float64) []analytic {
var res []analytic
next:
for _, a := range as {
group := by(a.Key)
for i := range res {
if res[i].Key == group {
res[i].Count += a.Count
res[i].Percentage += a.Percentage
if len(res[i].Items) < perGroup {
res[i].Items = append(res[i].Items, a)
}
continue next
}
}
res = append(res, analytic{
Key: group,
Count: a.Count,
Percentage: a.Percentage,
Items: []analytic{a},
})
}
sort.Sort(analyticList(res))
if otherPct > 0 {
// Groups with less than otherPct go into "Other"
other := analytic{
Key: "Other",
}
for i := 0; i < len(res); i++ {
if res[i].Percentage < otherPct || res[i].Key == "Other" {
other.Count += res[i].Count
other.Percentage += res[i].Percentage
res = append(res[:i], res[i+1:]...)
i--
}
}
if other.Count > 0 {
res = append(res, other)
}
}
return res
}
func byVersion(s string) string {
parts := strings.Split(s, ".")
if len(parts) >= 2 {
return strings.Join(parts[:2], ".")
}
return s
}
func byPlatform(s string) string {
parts := strings.Split(s, "-")
if len(parts) >= 2 {
return parts[0]
}
return s
}
var numericGoVersion = regexp.MustCompile(`^go[0-9]\.[0-9]+`)
func byCompiler(s string) string {
if m := numericGoVersion.FindString(s); m != "" {
return m
}
return "Other"
}
func versionLess(a, b string) bool {
arel, apre := versionParts(a)
brel, bpre := versionParts(b)
minlen := len(arel)
if l := len(brel); l < minlen {
minlen = l
}
for i := 0; i < minlen; i++ {
if arel[i] != brel[i] {
return arel[i] < brel[i]
}
}
// Longer version is newer, when the preceding parts are equal
if len(arel) != len(brel) {
return len(arel) < len(brel)
}
if apre != bpre {
// "(+dev)" versions are ahead
if apre == plusStr {
return false
}
if bpre == plusStr {
return true
}
return apre < bpre
}
// We don't actually care how the prerelease parts compare for our purposes.
return false
}
// versionParts splits a version as returned from transformVersion into parts:
// "v1.2.3-beta.2" -> ([]int{1, 2, 3}, "beta.2")
func versionParts(v string) ([]int, string) {
parts := strings.SplitN(v[1:], " ", 2) // " (+dev)" versions
if len(parts) == 1 {
parts = strings.SplitN(parts[0], "-", 2) // "-rc.1" type versions
}
fields := strings.Split(parts[0], ".")
release := make([]int, len(fields))
for i, s := range fields {
v, _ := strconv.Atoi(s)
release[i] = v
}
var prerelease string
if len(parts) > 1 {
prerelease = parts[1]
}
return release, prerelease
}
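
As the code shows, the leading "v" is stripped (v[1:]) before splitting on spaces and dashes. A small in-package illustration (inputs invented):

func exampleVersionParts() {
	rel, pre := versionParts("v1.2.3-rc.1") // rel = []int{1, 2, 3}, pre = "rc.1"
	rel, pre = versionParts("v1.2.3 (+dev)") // rel = []int{1, 2, 3}, pre = "(+dev)"
	_, _ = rel, pre
}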

View File

@@ -1,131 +0,0 @@
// Copyright (C) 2018 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package serve
import (
"bytes"
"fmt"
"strings"
)
type NumberType int
const (
NumberMetric NumberType = iota
NumberBinary
NumberDuration
)
func number(ntype NumberType, v float64) string {
switch ntype {
case NumberMetric:
return metric(v)
case NumberDuration:
return duration(v)
case NumberBinary:
return binary(v)
default:
return metric(v)
}
}
type suffix struct {
Suffix string
Multiplier float64
}
var metricSuffixes = []suffix{
{"G", 1e9},
{"M", 1e6},
{"k", 1e3},
}
var binarySuffixes = []suffix{
{"Gi", 1 << 30},
{"Mi", 1 << 20},
{"Ki", 1 << 10},
}
var durationSuffix = []suffix{
{"year", 365 * 24 * 60 * 60},
{"month", 30 * 24 * 60 * 60},
{"day", 24 * 60 * 60},
{"hour", 60 * 60},
{"minute", 60},
{"second", 1},
}
func metric(v float64) string {
return withSuffix(v, metricSuffixes, false)
}
func binary(v float64) string {
return withSuffix(v, binarySuffixes, false)
}
func duration(v float64) string {
return withSuffix(v, durationSuffix, true)
}
func withSuffix(v float64, ps []suffix, pluralize bool) string {
for _, p := range ps {
if v >= p.Multiplier {
suffix := p.Suffix
if pluralize && v/p.Multiplier != 1.0 {
suffix += "s"
}
// If the number only has trailing decimal zeroes, strip them off.
num := strings.TrimRight(strings.TrimRight(fmt.Sprintf("%.1f", v/p.Multiplier), "0"), ".")
return fmt.Sprintf("%s %s", num, suffix)
}
}
return strings.TrimRight(strings.TrimRight(fmt.Sprintf("%.1f", v), "0"), ".")
}
// commatize returns a number with sep as the thousands separator. Handles
// plain floats; strings without a decimal point pass through unchanged.
func commatize(sep, s string) string {
// If no dot, don't do anything.
if !strings.ContainsRune(s, '.') {
return s
}
var b bytes.Buffer
fs := strings.SplitN(s, ".", 2)
l := len(fs[0])
for i := range fs[0] {
b.Write([]byte{s[i]})
if i < l-1 && (l-i)%3 == 1 {
b.WriteString(sep)
}
}
if len(fs) > 1 && len(fs[1]) > 0 {
b.WriteString(".")
b.WriteString(fs[1])
}
return b.String()
}
func proportion(m map[string]int, count int) float64 {
total := 0
isMax := true
for _, n := range m {
total += n
if n > count {
isMax = false
}
}
pct := (100 * float64(count)) / float64(total)
// To avoid rounding errors in the template, surpassing 100% and breaking
// the progress bars.
if isMax && len(m) > 1 && count != total {
pct -= 0.01
}
return pct
}
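
For reference, a few worked examples of what these helpers produce. The values are evaluated by hand from the code above; the snippet assumes it sits alongside that file in the serve package and is not part of the original.

package serve

import "fmt"

// Worked examples, derived from the formatting helpers above.
func ExampleFormattingSketch() {
	fmt.Println(number(NumberMetric, 1234))  // 1.2 k
	fmt.Println(number(NumberBinary, 1536))  // 1.5 Ki
	fmt.Println(number(NumberDuration, 90))  // 1.5 minutes
	fmt.Println(commatize(" ", "12345.67"))  // 12 345.67
	fmt.Println(commatize(" ", "12345"))     // 12345 (no decimal point, unchanged)
	// The largest of several values is shaved by 0.01 so template rounding
	// cannot push the bar total past 100%:
	fmt.Println(proportion(map[string]int{"a": 60, "b": 40}, 60)) // 59.99
}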


@@ -1,26 +0,0 @@
// Copyright (C) 2023 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.

package serve

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

var metricReportsTotal = promauto.NewCounterVec(prometheus.CounterOpts{
	Namespace: "syncthing",
	Subsystem: "ursrv",
	Name:      "reports_total",
}, []string{"version"})

// Pre-create the expected label values so all series exist (at zero)
// from startup.
func init() {
	metricReportsTotal.WithLabelValues("fail")
	metricReportsTotal.WithLabelValues("duplicate")
	metricReportsTotal.WithLabelValues("v1")
	metricReportsTotal.WithLabelValues("v2")
	metricReportsTotal.WithLabelValues("v3")
}
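
Incrementing one of these counters at a call site is then a one-liner; a brief sketch (the actual call sites live elsewhere in this package):

// After accepting a version 3 report:
metricReportsTotal.WithLabelValues("v3").Inc()
// After rejecting a malformed report:
metricReportsTotal.WithLabelValues("fail").Inc()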


File diff suppressed because it is too large.


Binary file not shown (deleted image, 4.8 KiB).


File diff suppressed because one or more lines are too long


File diff suppressed because one or more lines are too long


File diff suppressed because one or more lines are too long


Binary file not shown (deleted image, 61 KiB).


@@ -1,623 +0,0 @@
<!DOCTYPE html>
<!--
Copyright (C) 2014 Jakob Borg and other contributors. All rights reserved.
Use of this source code is governed by an MIT-style license that can be
found in the LICENSE file.
-->
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="">
<meta name="author" content="">
<link rel="shortcut icon" href="static/assets/img/favicon.png">
<title>Syncthing Usage Reports</title>
<link href="static/bootstrap/css/bootstrap.min.css" rel="stylesheet">
<script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script>
<script type="text/javascript" src="static/bootstrap/js/bootstrap.min.js"></script>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/leaflet/0.7.7/leaflet.css">
<script src="https://cdnjs.cloudflare.com/ajax/libs/leaflet/0.7.7/leaflet.js"></script>
<script src="https://cdn.jsdelivr.net/npm/heatmapjs@2.0.2/heatmap.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/leaflet-heatmap@1.0.0/leaflet-heatmap.js"></script>
<style type="text/css">
body {
margin: 40px;
font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, "Noto Sans", sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol", "Noto Color Emoji";
}
tr.main td {
font-weight: bold;
}
tr.child td.first {
padding-left: 2em;
}
.progress-bar {
overflow:hidden;
white-space:nowrap;
text-overflow: ellipsis;
}
</style>
<script type="text/javascript"
src='https://www.google.com/jsapi?autoload={
"modules":[{
"name":"visualization",
"version":"1",
"packages":["corechart"]
}]
}'></script>
<script type="text/javascript">
google.setOnLoadCallback(drawVersionChart);
google.setOnLoadCallback(drawBlockStatsChart);
google.setOnLoadCallback(drawPerformanceCharts);
function drawVersionChart() {
// Summary version chart for versions that at some point in the chart
// reach 250 devices. This filters out versions that are old and
// uninteresting yet linger forever with like four users.
var jsonData = $.ajax({url: "summary.json?min=250", dataType:"json", async: false}).responseText;
var rows = JSON.parse(jsonData);
var data = new google.visualization.DataTable();
data.addColumn('date', 'Day');
for (var i = 1; i < rows[0].length; i++){
data.addColumn('number', rows[0][i]);
}
for (var i = 1; i < rows.length; i++){
rows[i][0] = new Date(rows[i][0]);
data.addRow(rows[i]);
};
var options = {
legend: { position: 'bottom', alignment: 'center' },
isStacked: true,
colors: ['rgb(102,194,165)','rgb(252,141,98)','rgb(141,160,203)','rgb(231,138,195)','rgb(166,216,84)','rgb(255,217,47)'],
chartArea: {left: 80, top: 20, width: '1020', height: '300'},
};
var chart = new google.visualization.AreaChart(document.getElementById('versionChart'));
chart.draw(data, options);
}
function formatGibibytes(gibibytes, decimals) {
if(gibibytes == 0) return '0 GiB';
var k = 1024,
dm = decimals || 2,
sizes = ['GiB', 'TiB', 'PiB', 'EiB', 'ZiB', 'YiB'],
i = Math.floor(Math.log(gibibytes) / Math.log(k));
if (i < 0) {
sizes = 'MiB';
} else {
sizes = sizes[i];
}
return parseFloat((gibibytes / Math.pow(k, i)).toFixed(dm)) + ' ' + sizes;
}
function drawBlockStatsChart() {
var jsonData = $.ajax({url: "blockstats.json", dataType:"json", async: false}).responseText;
var rows = JSON.parse(jsonData);
var data = new google.visualization.DataTable();
data.addColumn('date', 'Day');
for (var i = 1; i < rows[0].length; i++){
data.addColumn('number', rows[0][i]);
}
var totals = [0, 0, 0, 0, 0, 0];
for (var i = 1; i < rows.length; i++){
rows[i][0] = new Date(rows[i][0]);
for (var j = 2; j < rows[i].length; j++) {
totals[j-2] += rows[i][j];
}
data.addRow(rows[i]);
};
var totalTotals = totals.reduce(function(a, b) { return a + b; }, 0);
if (totalTotals > 0) {
var content = "<table class='table'>\n"
for (var j = 2; j < rows[0].length; j++) {
content += "<tr><td><b>" + rows[0][j].replace(' (GiB)', '') + "</b></td><td>" + formatGibibytes(totals[j-2].toFixed(2)) + " (" + ((100*totals[j-2])/totalTotals).toFixed(2) +"%)</td></tr>\n";
}
content += "</table>";
document.getElementById("data-to-date").innerHTML = content;
} else {
// No data, hide it.
document.getElementById("block-stats").outerHTML = "";
return;
}
var options = {
focusTarget: 'category',
vAxes: {0: {}, 1: {}},
series: {0: {type: 'line', targetAxisIndex:1}},
isStacked: true,
legend: {position: 'none'},
colors: ['rgb(102,194,165)','rgb(252,141,98)','rgb(141,160,203)','rgb(231,138,195)','rgb(166,216,84)','rgb(255,217,47)'],
chartArea: {left: 80, top: 20, width: '1020', height: '300'},
};
var chart = new google.visualization.AreaChart(document.getElementById('blockStatsChart'));
chart.draw(data, options);
}
function drawPerformanceCharts() {
var jsonData = $.ajax({url: "/performance.json", dataType:"json", async: false}).responseText;
var rows = JSON.parse(jsonData);
for (var i = 1; i < rows.length; i++){
rows[i][0] = new Date(rows[i][0]);
}
drawChart(rows, 1, 'Total Number of Files', 'totFilesChart', 1e6, 1);
drawChart(rows, 2, 'Total Folder Size (GiB)', 'totMiBChart', 1e6, 1024);
drawChart(rows, 3, 'Hash Performance (MiB/s)', 'hashPerfChart', 1000, 1);
drawChart(rows, 4, 'System RAM Size (GiB)', 'memSizeChart', 1e6, 1024);
drawChart(rows, 5, 'Memory Usage (MiB)', 'memUsageChart', 250, 1);
}
function drawChart(rows, index, title, id, cutoff, divisor) {
var data = new google.visualization.DataTable();
data.addColumn('date', 'Day');
data.addColumn('number', title);
var row;
for (var i = 1; i < rows.length; i++){
row = [rows[i][0], rows[i][index] / divisor];
if (row[1] > cutoff) {
row[1] = null;
}
data.addRow(row);
}
var options = {
legend: { position: 'bottom', alignment: 'center' },
colors: ['rgb(102,194,165)','rgb(252,141,98)','rgb(141,160,203)','rgb(231,138,195)','rgb(166,216,84)','rgb(255,217,47)'],
chartArea: {left: 80, top: 20, width: '1020', height: '300'},
vAxes: {0: {minValue: 0}},
};
var chart = new google.visualization.LineChart(document.getElementById(id));
chart.draw(data, options);
}
var locations = [];
{{range $location, $weight := .locations}}
locations.push({lat:{{- $location.Latitude -}},lng:{{- $location.Longitude -}},count:Math.min(100, {{- $weight -}})});
{{- end}}
function drawHeatMap() {
if (locations.length == 0) {
return;
}
var testData = {
data: locations
};
var baseLayer = L.tileLayer(
'https://tile.openstreetmap.org/{z}/{x}/{y}.png',{
attribution: '...',
maxZoom: 18
}
);
var cfg = {
"radius": 1,
"minOpacity": .25,
"maxOpacity": .8,
"scaleRadius": true,
"useLocalExtrema": true,
latField: 'lat',
lngField: 'lng',
valueField: 'count',
gradient: {
'.1': 'cyan',
'.8': 'blue',
'.95': 'red'
}
};
var heatmapLayer = new HeatmapOverlay(cfg);
var map = new L.Map('map', {
center: new L.LatLng(25, 0),
zoom: 1,
layers: [baseLayer, heatmapLayer]
});
heatmapLayer.setData(testData);
}
</script>
</head>
<body>
<div class="container">
<div class="row">
<div class="col-md-12">
<h1>Syncthing Usage Data</h1>
<h4 id="active-users">Active Users per Day and Version</h4>
<p>
This is the total number of unique users with reporting enabled, per day. Area color represents the major version.
</p>
<div class="img-thumbnail" id="versionChart" style="width: 1130px; height: 400px; padding: 10px;"></div>
<div id="block-stats">
<h4>Data Transfers per Day</h4>
<p>
This is the total amount of data transferred per day. It also shows how much data was saved (not transferred) by each of the methods syncthing uses.
</p>
<div class="img-thumbnail" id="blockStatsChart" style="width: 1130px; height: 400px; padding: 10px;"></div>
<h4 id="totals-to-date">Totals to date</h4>
<p id="data-to-date">
No data
</p>
</div>
<h4 id="metrics">Usage Metrics</h4>
<p>
This is the aggregated usage report data for the last 24 hours. Data based on <b>{{.nodes}}</b> devices that have reported in.
</p>
{{if .locations}}
<div class="img-thumbnail" id="map" style="width: 1130px; height: 400px; padding: 10px;"></div>
<p class="text-muted">
Heatmap max intensity is capped at 100 reports within a location.
</p>
<div class="panel panel-default">
<div class="panel-heading">
<h4 class="panel-title">
<a data-toggle="collapse" href="#collapseTwo">Break down per country</a>
</h4>
</div>
<div id="collapseTwo" class="panel-collapse collapse">
<div class="panel-body">
<div class="row">
<div class="col-md-6">
<table class="table table-striped">
<tbody>
{{range .countries | slice 2 1}}
<tr>
<td style="width: 45%">{{.Key}}</td>
<td style="width: 5%" class="text-right">{{if ge .Pct 10.0}}{{.Pct | printf "%.0f"}}{{else if ge .Pct 1.0}}{{.Pct | printf "%.01f"}}{{else}}{{.Pct | printf "%.02f"}}{{end}}%</td>
<td style="width: 5%" class="text-right">{{.Count}}</td>
<td>
<div class="progress-bar" role="progressbar" aria-valuenow="{{.Pct | printf "%.02f"}}" aria-valuemin="0" aria-valuemax="100" style="width: {{.Pct | printf "%.02f"}}%; height:20px"></div>
</td>
</tr>
{{end}}
</tbody>
</table>
</div>
<div class="col-md-6">
<table class="table table-striped">
<tbody>
{{range .countries | slice 2 2}}
<tr>
<td style="width: 45%">{{.Key}}</td>
<td style="width: 5%" class="text-right">{{if ge .Pct 10.0}}{{.Pct | printf "%.0f"}}{{else if ge .Pct 1.0}}{{.Pct | printf "%.01f"}}{{else}}{{.Pct | printf "%.02f"}}{{end}}%</td>
<td style="width: 5%" class="text-right">{{.Count}}</td>
<td>
<div class="progress-bar" role="progressbar" aria-valuenow="{{.Pct | printf "%.02f"}}" aria-valuemin="0" aria-valuemax="100" style="width: {{.Pct | printf "%.02f"}}%; height:20px"></div>
</td>
</tr>
{{end}}
</tbody>
</table>
</div>
</div>
</div>
</div>
</div>
{{end}}
<table class="table table-striped">
<thead>
<tr>
<th></th>
<th colspan="4" class="text-center">
<a href="https://en.wikipedia.org/wiki/Percentile">Percentile</a>
</th>
</tr>
<tr>
<th></th>
<th class="text-right">5%</th>
<th class="text-right">50%</th>
<th class="text-right">95%</th>
<th class="text-right">100%</th>
</tr>
</thead>
<tbody>
{{range .categories}}
<tr>
<td>{{.Descr}}</td>
<td class="text-right">{{index .Values 0 | number .Type | commatize " "}}{{.Unit}}</td>
<td class="text-right">{{index .Values 1 | number .Type | commatize " "}}{{.Unit}}</td>
<td class="text-right">{{index .Values 2 | number .Type | commatize " "}}{{.Unit}}</td>
<td class="text-right">{{index .Values 3 | number .Type | commatize " "}}{{.Unit}}</td>
</tr>
{{end}}
</tbody>
</table>
</div>
</div>
<div class="row">
<div class="col-md-6">
<table class="table table-striped">
<thead>
<tr>
<th>Version</th><th class="text-right">Devices</th><th class="text-right">Share</th>
</tr>
</thead>
<tbody>
{{range .versions}}
{{if gt .Percentage 0.1}}
<tr class="main">
<td>{{.Key}}</td>
<td class="text-right">{{.Count}}</td>
<td class="text-right">{{.Percentage | printf "%.01f"}}%</td>
</tr>
{{range .Items}}
{{if gt .Percentage 0.1}}
<tr class="child">
<td class="first">{{.Key}}</td>
<td class="text-right">{{.Count}}</td>
<td class="text-right">{{.Percentage | printf "%.01f"}}%</td>
</tr>
{{end}}
{{end}}
{{end}}
{{end}}
</tbody>
</table>
<table class="table table-striped">
<thead>
<tr>
<th>Penetration Level</th>
<th>Version</th>
<th class="text-right">Actual</th>
</tr>
</thead>
<tbody>
{{range .versionPenetrations}}
<tr>
<td>{{.Count}}%</td>
<td>&ge; {{.Key}}</td>
<td class="text-right">{{.Percentage | printf "%.01f"}}%</td>
</tr>
{{end}}
</tbody>
</table>
</div>
<div class="col-md-6">
<table class="table table-striped">
<thead>
<tr>
<th>Platform</th>
<th class="text-right">Devices</th>
<th class="text-right">Share</th>
</tr>
</thead>
<tbody>
{{range .platforms}}
<tr class="main">
<td>{{.Key}}</td>
<td class="text-right">{{.Count}}</td>
<td class="text-right">{{.Percentage | printf "%.01f"}}%</td>
</tr>
{{range .Items}}
<tr class="child">
<td class="first">{{.Key}}</td>
<td class="text-right">{{.Count}}</td>
<td class="text-right">{{.Percentage | printf "%.01f"}}%</td>
</tr>
{{end}}
{{end}}
</tbody>
</table>
</div>
</div>
<div class="row">
<div class="col-md-6">
<table class="table table-striped">
<thead>
<tr>
<th>Compiler</th>
<th class="text-right">Devices</th>
<th class="text-right">Share</th>
</tr>
</thead>
<tbody>
{{range .compilers}}
<tr class="main">
<td>{{.Key}}</td>
<td class="text-right">{{.Count}}</td>
<td class="text-right">{{.Percentage | printf "%.01f"}}%</td>
</tr>
{{range .Items}}
{{if or (gt .Percentage 0.1) (eq .Key "Others")}}
<tr class="child">
<td class="first">{{.Key}}</td>
<td class="text-right">{{.Count}}</td>
<td class="text-right">{{.Percentage | printf "%.01f"}}%</td>
</tr>
{{end}}
{{end}}
{{end}}
</tbody>
</table>
</div>
<div class="col-md-6">
<table class="table table-striped">
<thead>
<tr>
<th>Distribution Channel</th>
<th class="text-right">Devices</th>
<th class="text-right">Share</th>
</tr>
</thead>
<tbody>
{{range .distributions}}
<tr>
<td>{{.Key}}</td>
<td class="text-right">{{.Count}}</td>
<td class="text-right">{{.Percentage | printf "%.01f"}}%</td>
</tr>
{{end}}
</tbody>
</table>
</div>
<div class="col-md-6">
<table class="table table-striped">
<thead>
<tr>
<th>Builder</th>
<th class="text-right">Devices</th>
<th class="text-right">Share</th>
</tr>
</thead>
<tbody>
{{range .builders}}
<tr>
<td>{{.Key}}</td>
<td class="text-right">{{.Count}}</td>
<td class="text-right">{{.Percentage | printf "%.01f"}}%</td>
</tr>
{{end}}
</tbody>
</table>
</div>
</div>
<div class="row">
<div class="col-md-12">
<h4 id="features">Feature Usage</h4>
<p>
The following lists feature usage. Some features are counted once per report, others as a sum of units within a report (e.g. devices with static addresses among all known devices in a report).
Currently there are <b>{{.versionNodes.v2}}</b> devices reporting for version 2 and <b>{{.versionNodes.v3}}</b> for version 3.
</p>
</div>
</div>
<div class="row">
{{$i := counter}}
{{range $featureName := .featureOrder}}
{{$featureValues := index $.features $featureName }}
{{if $i.DrawTwoDivider}}
</div>
<div class="row">
{{end}}
{{ $i.Increment }}
<div class="col-md-6">
<table class="table table-striped">
<thead><tr>
<th>{{$featureName}} Features</th><th colspan="2" class="text-center">Usage</th>
</tr></thead>
<tbody>
{{range $featureValues}}
<tr>
<td style="width: 50%">{{.Key}} ({{.Version}})</td>
<td style="width: 10%" class="text-right">{{if ge .Pct 10.0}}{{.Pct | printf "%.0f"}}{{else if ge .Pct 1.0}}{{.Pct | printf "%.01f"}}{{else}}{{.Pct | printf "%.02f"}}{{end}}%</td>
<td style="width: 40%" {{if lt .Pct 5.0}}data-toggle="tooltip" title='{{.Count}}'{{end}}>
<div class="progress-bar" role="progressbar" aria-valuenow="{{.Pct | printf "%.02f"}}" aria-valuemin="0" aria-valuemax="100" style="width: {{.Pct | printf "%.02f"}}%; height:20px" {{if ge .Pct 5.0}}data-toggle="tooltip" title='{{.Count}}'{{end}}></div>
</td>
</tr>
{{end}}
</tbody>
</table>
</div>
{{end}}
</div>
<div class="row">
<div class="col-md-12">
<h4 id="features">Feature Group Usage</h4>
<p>
The following lists feature usage groups, which may include multiple occurrences of a feature per report.
</p>
</div>
</div>
<div class="row">
{{$i := counter}}
{{range $featureName := .featureOrder}}
{{$featureValues := index $.featureGroups $featureName }}
{{if $i.DrawTwoDivider}}
</div>
<div class="row">
{{end}}
{{ $i.Increment }}
<div class="col-md-6">
<table class="table table-striped">
<thead><tr>
<th>{{$featureName}} Group Features</th><th colspan="2" class="text-center">Usage</th>
</tr></thead>
<tbody>
{{range $featureValues}}
{{$counts := .Counts}}
<tr>
<td style="width: 50%">
<div data-toggle="tooltip" title='{{range $key, $value := .Counts}}{{$key}} ({{$value | proportion $counts | printf "%.02f"}}% - {{$value}})</br>{{end}}'>
{{.Key}} ({{.Version}})
</div>
</td>
<td style="width: 50%">
<div class="progress" role="progressbar" style="width: 100%">
{{$j := counter}}
{{range $key, $value := .Counts}}
{{with $valuePct := $value | proportion $counts}}
<div class="progress-bar {{ $j.Current | progressBarClassByIndex }}" style='width: {{$valuePct | printf "%.02f"}}%' data-toggle="tooltip" title='{{$key}} ({{$valuePct | printf "%.02f"}}% - {{$value}})'>
{{if ge $valuePct 30.0}}{{$key}}{{end}}
</div>
{{end}}
{{ $j.Increment }}
{{end}}
</div>
</td>
</tr>
{{end}}
</tbody>
</table>
</div>
{{end}}
</div>
<div class="row">
<div class="col-md-12">
<h1 id="performance-charts">Historical Performance Data</h1>
<p>Each chart shows the average of the corresponding metric across the entire reporting population, per day.</p>
<h4 id="hash-performance">Hash Performance (MiB/s)</h4>
<div class="img-thumbnail" id="hashPerfChart" style="width: 1130px; height: 400px; padding: 10px;"></div>
<h4 id="memory-usage">Memory Usage (MiB)</h4>
<div class="img-thumbnail" id="memUsageChart" style="width: 1130px; height: 400px; padding: 10px;"></div>
<h4 id="total-files">Total Number of Files</h4>
<div class="img-thumbnail" id="totFilesChart" style="width: 1130px; height: 400px; padding: 10px;"></div>
<h4 id="total-size">Total Folder Size (GiB)</h4>
<div class="img-thumbnail" id="totMiBChart" style="width: 1130px; height: 400px; padding: 10px;"></div>
<h4 id="system-ram">System RAM Size (GiB)</h4>
<div class="img-thumbnail" id="memSizeChart" style="width: 1130px; height: 400px; padding: 10px;"></div>
</div>
</div>
</div>
<hr>
<p>
<a href="https://github.com/syncthing/syncthing/tree/main/cmd/ursrv">Source code</a>.
This product includes GeoLite2 data created by MaxMind, available from
<a href="http://www.maxmind.com">http://www.maxmind.com</a>.
</p>
<script type="text/javascript">
$('[data-toggle="tooltip"]').tooltip({html:true});
drawHeatMap();
</script>
</body>
</html>
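
The template above leans on a few custom helpers ({{counter}}, DrawTwoDivider, Increment, Current) to lay features out two columns per row. Their implementations are not shown in this diff; the following is an illustrative reconstruction of semantics consistent with the template's usage, not the actual serve package code.

package serve

// counter is a per-template iteration counter; a "counter" entry in the
// template func map would return a fresh instance on each use.
type counter struct{ n int }

func newCounter() *counter { return &counter{} }

// Current returns the number of items rendered so far.
func (c *counter) Current() int { return c.n }

// Increment advances the counter; it returns an empty string so that
// {{ $i.Increment }} renders as nothing.
func (c *counter) Increment() string { c.n++; return "" }

// DrawTwoDivider reports whether to close the current row and start a
// new one, i.e. before every third, fifth, ... item.
func (c *counter) DrawTwoDivider() bool { return c.n > 0 && c.n%2 == 0 }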

compat.yaml (new file)

@@ -0,0 +1,34 @@
- runtime: go1.21
  requirements:
    # See https://en.wikipedia.org/wiki/MacOS_version_history#Releases
    #
    # macOS 10.15 (Catalina) per https://go.dev/doc/go1.22#darwin
    darwin: "19"
    # Per https://go.dev/doc/go1.23#linux
    linux: "2.6.32"
    # Windows 10's initial release was 10.0.10240.16405, per
    # https://learn.microsoft.com/en-us/windows/release-health/release-information
    # and Windows 11's initial release was 10.0.22000.194 per
    # https://learn.microsoft.com/en-us/windows/release-health/windows11-release-information
    #
    # Windows 10/Windows Server 2016 per https://go.dev/doc/go1.21#windows
    windows: "10.0"

- runtime: go1.22
  requirements:
    darwin: "19"
    linux: "2.6.32"
    windows: "10.0"

- runtime: go1.23
  requirements:
    # macOS 11 (Big Sur) per https://tip.golang.org/doc/go1.23#darwin
    darwin: "20"
    linux: "2.6.32"
    windows: "10.0"

- runtime: go1.24
  requirements:
    darwin: "20"
    linux: "3.2"
    windows: "10.0"


@@ -2,7 +2,7 @@
Name=Start Syncthing
GenericName=File synchronization
Comment=Starts the main syncthing process in the background.
-Exec=/usr/bin/syncthing serve --no-browser --logfile=default
+Exec=syncthing serve --no-browser --logfile=default
Icon=syncthing
Terminal=false
Type=Application


@@ -2,7 +2,7 @@
Name=Syncthing Web UI
GenericName=File synchronization UI
Comment=Opens Syncthing's Web UI in the default browser (Syncthing must already be started).
-Exec=/usr/bin/syncthing --browser-only
+Exec=syncthing --browser-only
Icon=syncthing
Terminal=false
Type=Application


@@ -1,11 +0,0 @@
[Unit]
Description=Restart Syncthing after resume
Documentation=man:syncthing(1)
After=sleep.target
[Service]
Type=oneshot
ExecStart=-/usr/bin/pkill -HUP -x syncthing
[Install]
WantedBy=sleep.target

Some files were not shown because too many files have changed in this diff.