Compare commits


223 Commits

Author SHA1 Message Date
Jakob Borg
612fdff377 build: automatically update APT repository on release
This uses https://github.com/kastelo/ezapt to generate and sign the
archive, and uploads it to blob storage.
2024-11-24 22:55:12 +01:00
Jakob Borg
8ccb7f1924 fix(protocol): allow encrypted-to-encrypted connections again
Encrypted-to-encrypted connections (i.e., ones where both sides set a
password) used to work but were broken in the 1.28.0 release. The
culprit is the 5342bec1b refactor which slightly changed how the request
was constructed, resulting in a bad block hash field.

Co-authored-by: Simon Frei <freisim93@gmail.com>
2024-11-24 22:55:12 +01:00
André Colomb
65d0ca8aa9 fix(config): respect GUI address override in fresh default config (fixes #9783) (#9675)
### Purpose

When generating a new `config.xml` file with default options, the GUI
address is populated with a hard-coded default value of
`127.0.0.1:8384`, except for a random free port if that default one is
occupied. This is independent of the GUI configuration default address
defined in the protobuf description. More importantly, it ignores any
`STGUIADDRESS` override given via environment variable or command-line
option, thus probing for the default port instead of the one specified
via override.

The `ProbeFreePorts()` function now respects the override, by reading
the `GUIConfiguration.Address()` method instead of using hard-coded
defaults.

When not calling `ProbeFreePorts()`, the override should still be
persisted rather than the default address. This happens only when
generating a fresh default `config.xml`, never on an existing one.
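As a rough illustration of the idea (not the actual `ProbeFreePorts()` code; the function name, retry count, and error handling below are assumptions), probing can start from whatever address the configuration reports instead of a hardcoded default:

```go
package config

import (
	"fmt"
	"net"
)

// probeFreePort is a simplified sketch: start from the configured (possibly
// overridden) GUI address and only walk to nearby ports if it is occupied.
func probeFreePort(guiAddress string) (string, error) {
	host, portStr, err := net.SplitHostPort(guiAddress)
	if err != nil {
		return "", err
	}
	port, err := net.LookupPort("tcp", portStr)
	if err != nil {
		return "", err
	}
	for i := 0; i < 10; i++ {
		addr := net.JoinHostPort(host, fmt.Sprint(port+i))
		l, err := net.Listen("tcp", addr)
		if err != nil {
			continue // port occupied, try the next one
		}
		l.Close()
		return addr, nil
	}
	return "", fmt.Errorf("no free port near %s", guiAddress)
}
```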
2024-11-19 11:01:43 +00:00
Jakob Borg
e82ed6e3d3 style: gofumpt all the things (#9829)
Literally `gofumpt -w .` from the top level dir. Guaranteed to be minor
style changes only and nothing else.

@imsodin per request?
2024-11-19 11:32:56 +01:00
Syncthing Release Automation
4b815fc086 chore(gui, man, authors): update docs, translations, and contributors 2024-11-18 03:49:35 +00:00
Jakob Borg
7eaf843de2 chore(api): add block and goroutine profiles to support bundle (#9824) 2024-11-16 09:43:17 +01:00
Jakob Borg
110e1ae6f9 fix(model): don't panic in index consistency print (fixes #9821) (#9823)
We try to compare to the last fileinfo, but apparently we can end up
here with an empty file list and crash on an out-of-range index.
2024-11-14 19:59:34 +00:00
Jakob Borg
896f9725ec chore: update policy to allow approvals by contributors (#9818)
This adds `allow_contributor: true` which allows approvals by
contributors to the PR (but still not the author themself, which is a
different thing). This allows things like pushing minor fixups while
also approving.

The `ignore_update_merges: true` option makes it so that someone is not
considered a "contributor" just because they push the merge button to
update the branch. In principle this is not needed given the above, but
I like it for clarity.
2024-11-12 10:50:19 +01:00
Tobias Frölich
1a529e9d5d fix(gui): expand tildes for subdir check (fixes #9400) (#9788)
### Purpose

This closes #9400 by always expanding tildes when parent/subdir checks
are done.

### Testing

I tested this by creating folders with paths to parent or subdirectories
of the default folder that include a tilde in their path as shown in the
attached screenshots.
With this change, overlap will be detected regardless of whether or not
tildes are used in other folder paths.

### Screenshots

Default Folder:

![2024-10-26-At-08h40m33s](https://github.com/user-attachments/assets/07df090c-4481-41ec-b741-d2785fc848d5)
Newly created folder (parent directory in this case)

![2024-10-26-At-08h40m13s](https://github.com/user-attachments/assets/636fa1fd-41dc-44d9-ac90-0a4937c9921c)

---------

Signed-off-by: tobifroe <froeltob@pm.me>
2024-11-12 09:24:00 +01:00
Hireworks
36ef17df8f fix(model): check remote folder state before pulling files (fixes #9686) (#9732)
### Purpose

As discussed in #9686, Syncthing currently does not check the folder state
on remote devices before pulling. If no device has a valid folder state
(i.e., all devices have the folder paused), it will still attempt to pull.
On large folders this causes a hanging "Syncing" status.

This change checks whether at least one connected device has the file
available and a valid folder state.
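A minimal sketch of that check, with made-up names (the real code integrates with the model's connection and folder state tracking):

```go
package model

// remoteState is a hypothetical summary of what we know about one remote.
type remoteState struct {
	connected     bool
	folderRunning bool // folder not paused on the remote
	hasFile       bool // remote announces the needed file
}

// shouldPull reports whether pulling makes sense: at least one connected
// remote must have the folder in a valid state and the file available.
func shouldPull(remotes []remoteState) bool {
	for _, r := range remotes {
		if r.connected && r.folderRunning && r.hasFile {
			return true
		}
	}
	return false
}
```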

### Testing
Tested locally on multiple devices.
We're new to Go (all our stuff is Python) so please bear with!
Interested if there may be a better place to slot this in.

Thanks,
Jon

---------

Co-authored-by: Simon Frei <freisim93@gmail.com>
2024-11-12 08:51:52 +01:00
Syncthing Release Automation
955ac7775e chore(gui, man, authors): update docs, translations, and contributors 2024-11-11 03:46:06 +00:00
tomasz1986
8f69e874c4 chore(gui): group logout/restart/shutdown buttons together in Actions menu (#9801)
Currently, the "Restart" and "Shutdown" buttons are displayed in the
middle of the Actions menu. On the other hand, the "Log Out" button is
displayed at the very bottom. However, in other cases, e.g. the menus in
operating systems like Windows or macOS, these kinds of buttons are
usually grouped together.

Therefore, move the "Restart" and "Shutdown" buttons down, so that they
are listed together with the "Log Out" button. Also, change the order,
so that it goes from the least impactful ("Log Out") to the most
impactful ("Shutdown").

### Screenshots

#### Before


![image](https://github.com/user-attachments/assets/a51438ef-bb6f-4535-a972-8c1bc1dffa02)

#### After


![image](https://github.com/user-attachments/assets/535762d6-6f26-44ab-a402-db87bdcbfb36)

Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>
2024-11-07 16:47:00 +01:00
Syncthing Release Automation
ac06fd97e9 chore(gui, man, authors): update docs, translations, and contributors 2024-11-04 03:47:49 +00:00
Syncthing Release Automation
3726b7d112 chore(gui, man, authors): update docs, translations, and contributors 2024-10-28 03:48:15 +00:00
Ross Smith II
377200591e fix(fs): fix directory junction handling (fixes #9775) (#9786)
### Purpose

This fixes #9775. I also improved the comments as they were lacking.

My apologies for introducing this bug. In summary, the bug was that
```
mode = mode ^ (ModeSymlink | ModeIrregular)
```
didn't correctly reset those bits. This correctly resets them:
```
mode = mode &^ ModeSymlink &^ ModeIrregular
```
Tested and working in Windows 11 version 10.0.22631.4317. I didn't test
in other versions, but I'm sure this is the only issue.
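For illustration, a standalone example (not Syncthing code) showing why the XOR version misbehaves when the bits are already clear:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// A mode where the symlink/irregular bits are not set to begin with.
	mode := os.ModeDir | 0o755

	// XOR toggles bits: since they were clear, this wrongly turns them on.
	toggled := mode ^ (os.ModeSymlink | os.ModeIrregular)
	fmt.Println(toggled&os.ModeSymlink != 0) // true: the bug

	// AND NOT clears the bits regardless of their previous state.
	cleared := mode &^ os.ModeSymlink &^ os.ModeIrregular
	fmt.Println(cleared&os.ModeSymlink != 0) // false: the fix
}
```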
2024-10-27 16:08:38 +01:00
Kapil Sareen
4afc898c2f fix(model): don't sync symbolic links on Android (fixes #9725) (#9782) 2024-10-26 09:29:38 +00:00
Simon Frei
ff7e4fef55 chore(nat, upnp): Make failure logging less repetitive (ref #9324) (#9785)
Currently we log on every single one of 10 retries deep in the upnp
stack. However we also return the failure as an error, which is bubbled
up a while until it's logged at debug level. Switch that around, such
that the repeat logging happens at debug level but the top-level happens
at info. There's some chance that this will newly log errors from
nat-pmp that were previously hidden in debug level - I hope those are
useful and not too numerous.

Also potentially this can even close #9324, my (very limited)
understanding of the reports/discussion there is that there's likely no
problem with syncthing beyond the excessive logging, it's some weird
router behaviour.
2024-10-25 21:04:22 +00:00
Simon Frei
9ffddb1923 fix(gui): apply small screen CSS changes earlier (fixes #9590) (#9756)
These CSS overrides address issues that are already present on wider
screens, so apply them there. Some experiments show we might even want to
up the limit more, but I am chicken and lazy, so I propose to use the
existing 470px media block.

Supersedes another PR after not getting any reaction to feedback there:
https://github.com/syncthing/syncthing/pull/9591#issuecomment-2212586134

Co-authored-by: Jakob Borg <jakob@kastelo.net>
2024-10-25 22:49:22 +02:00
André Colomb
896b857fc4 fix(gui): hide lists with [object Object] in advanced settings (#9743)
As discussed in
https://github.com/syncthing/syncthing/pull/9175#discussion_r1730431703,
entries in advanced settings are unusable if they consist of a
list of objects. It just displays `[object Object], [object Object],
[object Object]`, e.g. for the devices a folder is shared with.

Filter out these config elements by detecting an array whose members are
not all strings or numbers, and setting them to `skip` type.

Fix some unnecessary repetition in calling `inputTypeFor()`, since it is
already cached in the `ng-init` directive.
2024-10-25 22:22:12 +02:00
Terrance
acc5d2675b fix(gui): add dark scheme styles for disabled checkboxes (fixes #9776) (#9777)
### Purpose

Fixes #9776 by tweaking the text/background colours of disabled checkbox
panels when dark mode is enabled.

It was [noted on that
issue](https://github.com/syncthing/syncthing/issues/9776#issuecomment-2424828520)
that there's a bigger issue around the correctness of using the
`disabled` attribute on a `<div>` in the first place, but this PR does
not attempt to change that.

### Testing

I've hooked up the GUI files against a release build as suggested below.

### Screenshots

Using the dark theme, or the default theme with a system dark scheme:


![image](https://github.com/user-attachments/assets/3c6bfa77-cc7a-4f3e-a5c2-83daf54dcc34)

Using the black theme:


![image](https://github.com/user-attachments/assets/768db657-aa52-4db0-8455-5194a00fc143)

These borrow the colours from dark theme text inputs and black theme
tabs for a consistent look (initially I tried the text colour of
disabled text inputs, but that produced some poor contrast).
2024-10-22 14:03:32 +02:00
Syncthing Release Automation
6ece4c1fd2 chore(gui, man, authors): update docs, translations, and contributors 2024-10-21 03:47:52 +00:00
Jakob Borg
cc09f0170d build(deps): update dependencies (#9773) 2024-10-17 13:05:53 +00:00
Syncthing Release Automation
bb234d6c0e chore(gui, man, authors): update docs, translations, and contributors 2024-10-14 03:47:49 +00:00
Syncthing Release Automation
e6acc64758 chore(gui, man, authors): update docs, translations, and contributors 2024-10-07 03:47:13 +00:00
André Colomb
f18cf545b9 fix(gui): improve device ID readability in black and dark themes (fixes #9757) (#9758) 2024-10-06 00:48:19 +02:00
Jakob Borg
6d64daaba3 chore(db): process "unchanged" files anyway (#9755)
Skipping these makes the sequence numbering inconsistent; we've received
a file and supposedly added it to the database, but if you check the
sequence number afterwards it didn't increase, i.e., we trigger [this
failure
condition](47f48faed7/lib/model/indexhandler.go (L447-L459))
and, similarly, a future update will look like there was a hole in the
numbering.

I propose to at least temporarily remove this optimisation in order for
things to make more sense. Is there a reason to keep this beyond saving
some database operations?
2024-10-04 19:47:57 +00:00
Jakob Borg
47f48faed7 fix(upgrades): avoid clobbering cache when filtering (#9752)
The slice is shared, so we can't overwrite elements of it. (This affects
the upgrade server only.)
2024-10-02 18:56:39 +00:00
Jakob Borg
cfa834177b build(deps): update dependencies (#9751) 2024-10-02 12:53:14 +00:00
tomasz1986
c454fc8baa chore(build): use conventional commit title in update script (#9747) 2024-09-30 15:59:14 -05:00
Jakob Borg
dbe7fa9155 Merge branch 'infrastructure'
* infrastructure:
  feat(ursrv): new metrics based approach
2024-09-30 14:17:02 -05:00
Jakob Borg
4d842f7d3b feat(ursrv): new metrics based approach 2024-09-30 14:16:27 -05:00
Syncthing Release Automation
0e68221c91 gui, man, authors: Update docs, translations, and contributors 2024-09-30 03:48:06 +00:00
Jakob Borg
19f63c7ea3 chore(model): improve tracking sentPrevSeq for index debugging (#9740) 2024-09-29 22:18:24 +00:00
Emil Lundberg
fb939ec496 fix(model): prevent division by zero in numHashers (#9744)
This should prevent the panic that occurred in this test run:
https://github.com/syncthing/syncthing/actions/runs/11095876010/job/30825046810

```
2024-09-29T21:01:53.5425372Z === RUN   TestIssue4357
2024-09-29T21:01:53.5505943Z panic: runtime error: integer divide by zero [recovered]
2024-09-29T21:01:53.5512200Z 	panic: runtime error: integer divide by zero
2024-09-29T21:01:53.5516633Z
2024-09-29T21:01:53.5523018Z goroutine 2655 [running]:
2024-09-29T21:01:53.5524157Z github.com/thejerf/suture/v4.(*Supervisor).runService.func2.2()
2024-09-29T21:01:53.5527176Z 	/home/runner/go/pkg/mod/github.com/thejerf/suture/v4@v4.0.5/supervisor.go:563 +0xd0
2024-09-29T21:01:53.5530556Z panic({0x1080d20?, 0x1851290?})
2024-09-29T21:01:53.5564723Z 	/home/runner/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.23.1.linux-amd64/src/runtime/panic.go:785 +0x132
2024-09-29T21:01:53.5566616Z github.com/syncthing/syncthing/lib/model.(*model).numHashers(0xc0006f6180, {0x117dc1a, 0x7})
2024-09-29T21:01:53.5568061Z 	/home/runner/work/syncthing/syncthing/lib/model/model.go:2581 +0x210
2024-09-29T21:01:53.5569912Z github.com/syncthing/syncthing/lib/model.(*folder).scanSubdirsChangedAndNew(0xc00c38c808, {0x0, 0x0, 0x0}, 0xc0003fc060)
2024-09-29T21:01:53.5571612Z 	/home/runner/work/syncthing/syncthing/lib/model/folder.go:653 +0x250
2024-09-29T21:01:53.5573010Z github.com/syncthing/syncthing/lib/model.(*folder).scanSubdirs(0xc00c38c808, {0x0, 0x0, 0x0})
2024-09-29T21:01:53.5574447Z 	/home/runner/work/syncthing/syncthing/lib/model/folder.go:512 +0xd0f
2024-09-29T21:01:53.5576011Z github.com/syncthing/syncthing/lib/model.(*folder).scanTimerFired(0xc00c38c808)
2024-09-29T21:01:53.5577367Z 	/home/runner/work/syncthing/syncthing/lib/model/folder.go:916 +0x46
2024-09-29T21:01:53.5579010Z github.com/syncthing/syncthing/lib/model.(*folder).Serve(0xc00c38c808, {0x1307650, 0xc0006a0910})
2024-09-29T21:01:53.5580428Z 	/home/runner/work/syncthing/syncthing/lib/model/folder.go:205 +0xd7e
2024-09-29T21:01:53.5581624Z github.com/thejerf/suture/v4.(*Supervisor).runService.func2()
2024-09-29T21:01:53.5582978Z 	/home/runner/go/pkg/mod/github.com/thejerf/suture/v4@v4.0.5/supervisor.go:567 +0x249
2024-09-29T21:01:53.5584400Z created by github.com/thejerf/suture/v4.(*Supervisor).runService in goroutine 2651
2024-09-29T21:01:53.5585872Z 	/home/runner/go/pkg/mod/github.com/thejerf/suture/v4@v4.0.5/supervisor.go:541 +0x32a
2024-09-29T21:01:53.5661413Z FAIL	github.com/syncthing/syncthing/lib/model	5.510s
```

### Testing

I have not been able to reproduce the panic throughout a few minutes of
continuously running the test without this fix, but judging by the
traceback it seems to only happen if the test happens to delete the
folder from config at the same time `scanTimerFired` triggers.
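A hedged sketch of the kind of guard involved (names and structure assumed; the real logic lives in lib/model/model.go and differs in detail):

```go
package model

import "runtime"

// safeNumHashers picks a hasher count without dividing by zero. If the
// folder was removed from the config concurrently (as the test seems to do
// while scanTimerFired runs), the device count can be zero.
func safeNumHashers(configured, folderDeviceCount int) int {
	if configured > 0 {
		return configured // explicit setting wins
	}
	if folderDeviceCount == 0 {
		return 1 // folder going away; any sane value avoids the panic
	}
	if h := runtime.NumCPU() / folderDeviceCount; h > 1 {
		return h
	}
	return 1
}
```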
2024-09-30 00:01:52 +02:00
Jakob Borg
39df3173d4 chore(model): log sequence anomaly when update appears not to "take" (#9741)
I hope this doesn't fire, but 👻  I'm Seeing Things I Can't Explain 👻
2024-09-29 15:04:06 +00:00
maxice8
429672e0b4 docs(docker): add healthcheck to docker-compose (#9742)
### Purpose

Syncthing has had a health check API for a while, and the example Dockerfile
uses it in the form of:

HEALTHCHECK --interval=1m --timeout=10s \
  CMD curl -fkLsS -m 2 127.0.0.1:8384/rest/noauth/health | grep -o --color=never OK || exit 1

Let's add it to the docker-compose file as well.

### Testing

I use this docker-compose.yml file to deploy via Ansible (using
community.docker.docker_compose_v2) to my machine with success, using
`wait: true` in Ansible so that it runs `docker compose up --wait`.

```yml
- name: Enable syncthing docker
  community.docker.docker_compose_v2:
    project_src: /srv/syncthing
    wait: true
    wait_timeout: 90
```
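For reference, a compose-level health check mirroring the Dockerfile values might look roughly like this (the service name, image, and retries count are assumptions; the actual docker-compose.yml in the repo may differ):

```yml
services:
  syncthing:
    image: syncthing/syncthing
    healthcheck:
      test: curl -fkLsS -m 2 127.0.0.1:8384/rest/noauth/health | grep -o --color=never OK || exit 1
      interval: 1m
      timeout: 10s
      retries: 3
```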
2024-09-29 09:53:13 -05:00
Simon Frei
605fd6d726 fix(ignore): ensure normalization of patterns and paths match (fixes #9597) (#9717)
In ignores, normalize the input when parsing it.
When scanning, normalize earlier such that the path is already
normalized when checking ignores. This requires splitting normalization
of the string from normalization of the file, as we don't want to
attempt the latter if the file is ignored.
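A sketch of the idea only (assuming NFC here; the normalization form Syncthing actually applies may differ by platform):

```go
package ignore

import "golang.org/x/text/unicode/norm"

// Normalize the pattern at parse time and the scanned name before the
// ignore check, so composed and decomposed spellings of the same character
// compare equal.
func normalizePattern(pattern string) string {
	return norm.NFC.String(pattern)
}

func normalizeName(name string) string {
	return norm.NFC.String(name)
}
```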

Closes #9598

---------

Co-authored-by: Jakob Borg <jakob@kastelo.net>
2024-09-28 17:16:44 +02:00
Jakob Borg
3c476542d2 fix(ur): actually send usage report directly when enabled (#9736)
There was a bug where the unique ID was not set when reporting was
enabled, and thus the reports were rejected by the server. The unique
ID only got set on startup, i.e. the next time Syncthing restarted.

This makes sure to set the unique ID when blank.
2024-09-28 17:02:05 +02:00
Jakob Borg
31874f3ebb chore(model): remove GUI/log warning on sequence anomaly (#9738)
I can already see in our Sentry data that there are a fair number of
these warnings, mostly of the same shape. Asking users to report them
will likely cause a lot of reporting effort for fairly little additional
value. We can do that when/if we have something more targeted to ask
for.
2024-09-28 16:38:07 +02:00
Jakob Borg
77942747db Merge branch 'infrastructure'
* infrastructure:
  feat(stupgrades): filter returned releases per compatibility
2024-09-26 10:25:02 +02:00
Jakob Borg
fe01b396ba feat(stupgrades): filter returned releases per compatibility 2024-09-26 10:22:23 +02:00
Jakob Borg
3583949706 refactor(upgrade): rename insecureGet which is no longer insecure (#9735) 2024-09-25 15:50:22 +00:00
Jakob Borg
23fc22ebc5 chore: add more advanced policy configuration (#9726)
This codifies a review policy which is closer to what I always
envisioned, but which isn't expressible using the normal checks in the
GitHub GUI. It would move the commit approval check from GitHub into the
policy-bot check which is already present to enforce the
conventional-commits standard. Approvals in general would still work the
same -- it's just that the bot picks it up and toggles the status
accordingly. From a GitHub side when this is enabled we'd remove the
requires-review check from there and let the bot decide that part. We
would still require builds and tests to pass of course.

There are a couple of relaxations from the current policy, details in
the code but briefly:

- Changes to translations or dependencies by a trusted person don't
require review
- Trivial changes by a trusted person, explicitly marked as such, don't
require review

This means less bureaucracy for things like adding new translated
languages and updating dependencies, and extends the trivial-change
workflow to a larger audience than, like, me, who could always just
bypass the rules by way of being admin.
2024-09-25 17:41:56 +02:00
Jakob Borg
cba163a1fd chore: enable TLS client cache for HTTPS where appropriate (#9721)
https://forum.syncthing.net/t/infrastructure-report-discovery-stuff/22819/4
2024-09-24 08:55:04 +02:00
Jakob Borg
a8e2c8edb6 fix(connections): announce PtP links again (fixes #9730) (#9731) 2024-09-23 14:32:19 +02:00
Syncthing Release Automation
3e501d9036 gui, man, authors: Update docs, translations, and contributors 2024-09-23 03:46:01 +00:00
bt90
9ca101756d chore(ursrv): add Nix detection (#9729)
Classify the builder `nix@nix` as [Nix](https://nixos.org/)

![369684243-172cab09-df6f-449a-a638-1f0a0c080ab3](https://github.com/user-attachments/assets/37a6e0a5-bdcb-4b31-8b36-eaaa42423382)
2024-09-22 14:03:40 +02:00
bt90
a873d12c65 chore(ursrv): extend F-Droid detection (#9728)
Our f-droid apps are currently built using `vagrant@bookworm`:

![grafik](https://github.com/user-attachments/assets/172cab09-df6f-449a-a638-1f0a0c080ab3)
2024-09-22 13:48:38 +02:00
Emil Lundberg
8ff670c564 fix(gui): get version from header when not authenticated (#9724)
### Purpose

Since #8757, the Syncthing GUI now has an unauthenticated state. One
consequence of this is that `$scope.versionBase()` is not initialized
while unauthenticated, which causes the `docsURL` function to truncate
links to just `https://docs.syncthing.net/`, discarding the section
path. This currently affects at least the "Help > Introduction" link
reachable both while logged in and not. The issue is exacerbated in
https://github.com/syncthing/syncthing/pull/9175 where we sometimes want
to show additional contextual help links from the login page to
particular sections of the docs.

I don't think it's any worse to try to preserve the section path even
without an explicit version tag, than to fall back to just the host and
lose all context the link was attempting to provide.

### Testing

- On commit b1ed2802fb (before):
  - Open the GUI, set a username and log out.
- Open the "Help" drop-down. The "Introduction" item links to:
https://docs.syncthing.net/
  - Log in.
- Open the "Help" drop-down. The "Introduction" item links to:
https://docs.syncthing.net/v1.27.10/intro/gui
- On commit 44fef31780 (after):
  - Open the GUI, set a username and log out.
- Open the "Help" drop-down. The "Introduction" item links to:
https://docs.syncthing.net/intro/gui
  - Log in.
- Open the "Help" drop-down. The "Introduction" item links to:
https://docs.syncthing.net/v1.27.10/intro/gui

### Screenshots

This is a GUI change, but affecting only URLs in the markup with no
visual changes.


### Drawbacks

If a `docsURL` call generates a versionless link to a docs page that
doesn't exist on https://docs.syncthing.net - presumably because
Syncthing is not the latest version and links to a deleted page? - then
this will lead to GitHub's generic 404 page with no link to the
Syncthing docs root. Before, any versionless link would also be a
pathless link, leading to the Syncthing docs root instead of a 404 page.
2024-09-22 09:47:02 +02:00
Jakob Borg
b1ed2802fb fix(connections): skip point-to-point interfaces when listing LANs (fixes #9719) (#9720)
Point-to-point interfaces are typically VPNs and similar which, for our
purposes, do not qualify as LANs.
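A minimal sketch of the interface filtering (not the actual lib/connections code):

```go
package connections

import "net"

// lanAddrs collects addresses of local interfaces, skipping point-to-point
// interfaces (typically VPN tunnels) and interfaces that are down.
func lanAddrs() ([]net.Addr, error) {
	ifs, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	var addrs []net.Addr
	for _, intf := range ifs {
		if intf.Flags&net.FlagPointToPoint != 0 {
			continue // does not qualify as a LAN for our purposes
		}
		if intf.Flags&net.FlagUp == 0 {
			continue
		}
		ifAddrs, err := intf.Addrs()
		if err != nil {
			continue
		}
		addrs = append(addrs, ifAddrs...)
	}
	return addrs, nil
}
```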
2024-09-21 09:27:23 +02:00
Jakob Borg
b70cb580c8 build(deps): update all dependencies (#9723) 2024-09-21 09:25:27 +02:00
Sonu Kumar Saw
28be3ba788 chore(connections): lower log level from INFO to DEBUG for "already connected to this device" messages (fixes #9715) (#9722)
### Purpose

The primary aim of this change is to minimize log clutter in production
environments. The logs contain many `already connected to this device`
lines, coming from an expected race condition when two devices connect to
each other. These messages do not indicate errors and can overwhelm the
log files with unnecessary noise.

By lowering the logging level, we enhance the usability of the logs,
making it easier for users and developers to identify actual issues
without being distracted by noise.

### Testing
1. Build syncthing locally
2. Start two Syncthing instances
```bash
./syncthing -no-browser -home=~/.config/syncthing1
./syncthing -no-browser -home=~/.config/syncthing2
```
3. Enable the DEBUG logs from UI for `connections` package
4. Connect the syncthing instances by adding remote devices from the UI
5. Observe the logs for the message `XXXX already connected to this
device`

### Screenshots


![image](https://github.com/user-attachments/assets/882ccb4c-d39d-463a-8f66-2aad97010700)

## Authorship

Your name and email will be added automatically to the AUTHORS file
based on the commit metadata.
2024-09-21 07:19:58 +00:00
Jakob Borg
d4770ddc77 chore(cmd): clean up commands (#9705)
Move infrastructure-related commands under `cmd/infra` and
development stuff under `cmd/dev`. The default build command builds the
regular user facing binaries: syncthing, stdiscosrv, and strelaysrv.
2024-09-21 09:04:22 +02:00
Simon Frei
cbe1220680 chore(fs): put the caseFS as the outermost layer again (#9716)
Reasoning in comments. The main motivation is to avoid all the case
checks when walking the filesystem.

"again" as we already tried this once, but it caused a major issue regarding
the mtimefs layer. The root of this problem has been fixed in the meantime
in ac8b3342a
2024-09-18 20:31:19 +02:00
André Colomb
0b95c5fa76 fix(meta): return read error in forbidden_words_test (#9706)
When reading a file fails, the error is currently swallowed / hidden.
Probably just a typo.
2024-09-18 19:12:24 +02:00
Jakob Borg
0343bca257 Merge branch 'infrastructure'
* infrastructure:
  chore(stdiscosrv): ensure incoming addresses are sorted and unique
  chore(stdiscosrv): use zero-allocation merge in the common case
  chore(stdiscosrv): properly clean out old addresses from memory
  chore(stdiscosrv): calculate IPv6 GUA
2024-09-16 09:33:15 +02:00
Syncthing Release Automation
878016db39 gui, man, authors: Update docs, translations, and contributors 2024-09-16 03:47:01 +00:00
Simon Frei
1f4fde9525 chore(protocol): prioritize closing a connection (#9711)
The read/write loops may keep going for a while on a closing connection
with lots of read/write activity, as it's random which select case is
chosen. And if the connection is slow or even broken, a single read/write
can take a long time or run until timeout. Add initial non-blocking selects
with only the cases relevant to closing, to prioritize those.
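The pattern, sketched as a minimal writer loop (not the actual protocol code; channel and function names are placeholders):

```go
package protocol

// writerLoop checks the closing case in a non-blocking select first, so a
// close is acted on promptly even when the other cases are always ready.
func writerLoop(closed <-chan struct{}, outbox <-chan []byte, send func([]byte)) {
	for {
		// Prioritize closing.
		select {
		case <-closed:
			return
		default:
		}

		select {
		case <-closed:
			return
		case msg := <-outbox:
			send(msg)
		}
	}
}
```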
2024-09-15 21:13:56 +02:00
Jakob Borg
5b9d8a838f chore(stdiscosrv): ensure incoming addresses are sorted and unique 2024-09-15 17:01:16 +02:00
Jakob Borg
8b19cb1e11 chore(stdiscosrv): use zero-allocation merge in the common case 2024-09-15 15:26:40 +02:00
Jakob Borg
ce1e259bb4 chore(stdiscosrv): properly clean out old addresses from memory 2024-09-15 14:20:59 +02:00
Jakob Borg
2238a288d9 fix(model): shut down index sender faster (#9704) 2024-09-15 11:37:49 +02:00
Jakob Borg
c8ee2a5cf6 chore(stdiscosrv): calculate IPv6 GUA 2024-09-15 10:48:16 +02:00
Ross Smith II
1704827d04 chore(gui): update HumanDuration.js (#9710)
Relevant changes:

- ko: Use correct names for month and hour in Korean (465eaed)
- Hide unit count if 2 in Arabic (f90d847)
2024-09-15 10:21:18 +02:00
Jakob Borg
a156e88eef Merge branch 'infrastructure'
* infrastructure:
  chore(stdiscosrv): hide internal/undocumented flags
  chore(stdiscosrv): remove legacy replication
  chore(stdiscosrv): clean up s3 handling
  chore(stdiscosrv): less garbage in statistics
  chore(stdiscosrv): improve expire, logging
  chore(stdiscosrv): sched in loop
  chore(stdiscosrv): database writing logging
  chore(stdiscosrv): use order-preserving expire
  chore(stdiscosrv): simplify sorting
  chore(stdiscosrv): reduce allocations in cert handling
  chore(stdiscosrv): reduce unnecessary allocations in merge
  feat(stdiscosrv): enable HTTP profiler
  feat(discosrv): in-memory storage with S3 backing
  feat(stdiscosrv): make compression optional (and faster)
2024-09-13 08:50:24 +02:00
Jakob Borg
94d0195b63 chore(stdiscosrv): hide internal/undocumented flags 2024-09-13 08:49:13 +02:00
Jakob Borg
1616edcee3 chore(stdiscosrv): remove legacy replication 2024-09-13 08:49:13 +02:00
Jakob Borg
6505e123bb chore(stdiscosrv): clean up s3 handling 2024-09-13 08:48:04 +02:00
Jakob Borg
63e4659282 chore(stdiscosrv): less garbage in statistics 2024-09-13 08:48:04 +02:00
Jakob Borg
f3f5557c8e chore(stdiscosrv): improve expire, logging 2024-09-13 08:48:04 +02:00
Jakob Borg
b794726e1f chore(stdiscosrv): sched in loop 2024-09-13 08:48:04 +02:00
Jakob Borg
3d59740a0a chore(stdiscosrv): database writing logging 2024-09-13 08:48:04 +02:00
Jakob Borg
66fb65b01f chore(stdiscosrv): use order-preserving expire 2024-09-13 08:48:04 +02:00
Jakob Borg
5c2fcbfd19 chore(stdiscosrv): simplify sorting 2024-09-13 08:48:03 +02:00
Jakob Borg
f9b72330a8 chore(stdiscosrv): reduce allocations in cert handling 2024-09-13 08:48:03 +02:00
Jakob Borg
822b6ac36b chore(stdiscosrv): reduce unnecessary allocations in merge 2024-09-13 08:48:03 +02:00
Jakob Borg
77f7778292 feat(stdiscosrv): enable HTTP profiler 2024-09-13 08:48:03 +02:00
Jakob Borg
aed2c66e52 feat(discosrv): in-memory storage with S3 backing 2024-09-13 08:48:03 +02:00
Jakob Borg
68a1fd010f feat(stdiscosrv): make compression optional (and faster) 2024-09-13 08:33:03 +02:00
Simon Frei
ac8b3342ac chore(fs): only cache the cache for case FS, not the entire FS (#9701)
This would have addressed a recent issue that arose when re-ordering our
"filesystem layers". Specifically moving the caseFilesystem to the
outermost layer. The previous cache included the filesystem, and as such
all the layers below. This isn't desirable (to put it mildly), as you
can create different variants of filesystems with different layers for
the same path and options. Concretely this did happen with the mtime
layer, which isn't always present. A test for the mtime related breakage
was added in #9687, and I intend to redo the caseFilesystem reordering
after this.

Ref: #9677
Followup to: #9687
2024-09-12 20:35:21 +02:00
Jakob Borg
0ea90dd932 build: add generating compat.json (#9700)
This is to add the generation of `compat.json` as a release artifact. It
describes the runtime requirements of the release in question. The next
step is to have the upgrade server use this information to filter
releases provided to clients. This is per the discussion in #9656

---------

Co-authored-by: Ross Smith II <ross@smithii.com>
2024-09-11 09:29:49 +02:00
Jakob Borg
718b1ce2b7 chore(discovery,upgrade): use regular TLS certificate verification (#9673)
This changes the two remaining instances where we use insecure HTTPS to
use standard HTTPS certificate verification.

When we introduced these things, almost a decade ago, HTTPS certificates
were expensive and annoying to get, much of the web was still HTTP, and
many devices seemed to not have up-to-date CA bundles.

Nowadays _all_ of the web is HTTPS and I'm skeptical that any device can
work well without understanding LetsEncrypt certificates in particular.

Our current discovery servers use hardcoded certificates which has
several issues:
- Not great for security if it leaks as there is no way to rotate it
- Not great for infrastructure flexibility as we can't use many load
balancer or TLS termination services
- The certificate is a very oddball ECDSA-SHA384 type certificate with a
higher CPU cost than a more regular certificate, which has real
effects on our infrastructure

Using normal TLS certificates here improves these things.

I expect there will be some very few devices out there for which this
doesn't work. For the foreseeable future they can simply change the
config to use the old URLs and parameters -- it'll be years before we
can retire those entirely.

For the upgrade client this simply seems like better hygiene. While our
releases are signed anyway, protecting the metadata exchange is _better_
and, again, I doubt many clients will fail this today.
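Heavily simplified, the change amounts to something like the following (a sketch, not the actual discovery/upgrade client code; the old code pinned a specific certificate rather than skipping verification outright):

```go
package upgrade

import (
	"crypto/tls"
	"net/http"
)

// oldStyleClient stands in for the previous custom TLS handling.
func oldStyleClient() *http.Client {
	return &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
}

// newStyleClient relies on standard certificate and hostname verification
// against the system CA bundle (a nil TLSClientConfig means the default).
func newStyleClient() *http.Client {
	return &http.Client{Transport: &http.Transport{}}
}
```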
2024-09-11 09:29:19 +02:00
Simon Frei
29f7510f5a lib/fs: Add test reproducing missing mtimefs issue (ref #9677) (#9687)
The test is quite odd and specific, but it does reproduce the issue that
caused #9677, so I'd propose to add it to have a simple regression test
for the basic scenario. Also, the option to the fakefs might come in handy
for other scenarios where you want to quickly test some behaviour on a
filesystem without nanosecond precision, without actually needing access
to one.
2024-09-10 13:36:17 +02:00
Syncthing Release Automation
a7f9ed4a80 gui, man, authors: Update docs, translations, and contributors 2024-09-09 03:45:15 +00:00
tomasz1986
1baefea410 gui: Actually filter scope ID out of IPv6 address when using Remote GUI (ref #8084) (#9688)

The current code does not work, because it uses a string in the
replace() method instead of a regex. Thus, change it to a proper regex.

Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>
2024-09-08 12:42:32 +02:00
Jakob Borg
563cec8923 Merge branch 'release'
* release:
  Revert "lib/fs: Put the caseFS as the outermost layer (#9648)"
2024-09-06 09:39:09 +02:00
Jakob Borg
a3c340ece9 Revert "lib/fs: Put the caseFS as the outermost layer (#9648)"
This reverts commit 7517d18fbb.

Fixes #9677
2024-09-06 09:15:45 +02:00
André Colomb
cb24638ec9 lib/api: Correct ordering of Accept-Language codes by weight (fixes #9670) (#9671)
The preference for languages in the Accept-Language header field
should not be deduced from the listed order, but from the passed
"quality values", according to the HTTP specification:
https://httpwg.org/specs/rfc9110.html#field.accept-language

This implements the parsing of q=values and ordering within the API
backend, to not complicate things further in the GUI code.  Entries
with invalid (unparseable) quality values are discarded completely.

* gui: Fix API endpoint in comment.
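A condensed sketch of the kind of parsing involved (not the actual lib/api code; it ignores parameters other than q):

```go
package api

import (
	"sort"
	"strconv"
	"strings"
)

// preferredLanguages orders Accept-Language entries by their q values,
// highest first, discarding entries with unparseable quality values.
func preferredLanguages(header string) []string {
	type langQ struct {
		lang string
		q    float64
	}
	var parsed []langQ
	for _, part := range strings.Split(header, ",") {
		fields := strings.Split(strings.TrimSpace(part), ";")
		lang := strings.TrimSpace(fields[0])
		if lang == "" {
			continue
		}
		q := 1.0 // no q parameter means full preference
		if len(fields) > 1 {
			v, ok := strings.CutPrefix(strings.TrimSpace(fields[1]), "q=")
			if !ok {
				continue
			}
			f, err := strconv.ParseFloat(v, 64)
			if err != nil {
				continue // discard invalid quality values
			}
			q = f
		}
		parsed = append(parsed, langQ{lang, q})
	}
	sort.SliceStable(parsed, func(i, j int) bool { return parsed[i].q > parsed[j].q })
	langs := make([]string, 0, len(parsed))
	for _, lq := range parsed {
		langs = append(langs, lq.lang)
	}
	return langs
}
```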
2024-09-02 10:15:04 +02:00
Syncthing Release Automation
2fb24dc2cc gui, man, authors: Update docs, translations, and contributors 2024-09-02 03:45:15 +00:00
tomasz1986
9aa2d2c92f gui: Fix incorrect UI language auto detection (fixes #9668) (#9669)

Currently, the code only checks whether the detected language partially
matches one of the available languages. This means that if the detected
language is "fi" but among the available languages there is only "fil"
and no "fi", then it will match "fi" with "fil", even though the two are
completely different languages.

With this change, the matching is only done when there is a hyphen in
the language code, e.g. "en" will match with "en-US".

Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>
2024-08-31 22:47:10 +02:00
Jakob Borg
d1c5100c98 Merge branch 'release'
* release:
  lib/upgrade: Send OS version header to upgrade server (#9663)
2024-08-30 11:32:49 +02:00
Jakob Borg
42e677c055 lib/model, lib/protocol: Index sending/receiving debugging (#9657)
This adds guardrails to the index sending and receiving, to verify that
what we think is happening is what actually happens.
2024-08-28 15:00:19 +02:00
Jakob Borg
27bba2c0c2 lib/upgrade: Send OS version header to upgrade server (#9663)
This adds a header with the operating system version, verbatim in
whatever format the operating system reports it, to the upgrade check.
The intention is that the upgrade server can use this information to
filter out (or maybe just mark) potentially unsupported upgrades.
2024-08-28 08:32:03 +02:00
Jakob Borg
feff334547 lib/upgrade: Send OS version header to upgrade server (#9663)
This adds a header with the operating system version, verbatim in
whatever format the operating system reports it, to the upgrade check.
The intention is that the upgrade server can use this information to
filter out (or maybe just mark) potentially unsupported upgrades.
2024-08-28 08:31:10 +02:00
Syncthing Release Automation
713cf357ce gui, man, authors: Update docs, translations, and contributors 2024-08-26 03:45:23 +00:00
Jakob Borg
5342bec1b7 lib/protocol: Further interface refactor (#9396)
This is a symmetric change to #9375 -- where that PR changed the
protocol->model interface, this changes the model->protocol one.
2024-08-24 12:45:10 +02:00
Emil Lundberg
7df75e681d gui: Replace global "Panel padding decrease" style with targeted class (#9659)
Transplanted from https://github.com/emlun/syncthing/pull/8 (meta-PR
into https://github.com/syncthing/syncthing/pull/9175) by request of
@acolomb (see:
https://github.com/emlun/syncthing/pull/8#discussion_r1724470574).

This padding decrease currently applies to _all_ collapsible panels, but
it may not be appropriate for all of them.
In particular, it will not be appropriate for the collapsible panels
introduced in https://github.com/emlun/syncthing/pull/8.
2024-08-21 15:02:45 +02:00
Jakob Borg
8dc826b234 build: use Go 1.23, require minimum 1.22 (#9651)
🥳

---------

Co-authored-by: Ross Smith II <ross@smithii.com>
2024-08-19 20:26:08 +02:00
Syncthing Release Automation
9ef37e1485 gui, man, authors: Update docs, translations, and contributors 2024-08-19 03:45:18 +00:00
Simon Frei
7517d18fbb lib/fs: Put the caseFS as the outermost layer (#9648)
Reasoning in comments. The main motivation is to avoid all the case
checks when walking the filesystem.
2024-08-13 10:59:31 +02:00
André Colomb
42d0fee536 gui: Add Irish (ga) translation template (#9646)
Based on user request from Weblate, user `@aindriu80`.

Looks promising based on the profile:
https://hosted.weblate.org/user/aindriu80/ Not sure whether almost
30,000 translations in about one month is realistic for a human though.
2024-08-12 13:32:29 +02:00
Syncthing Release Automation
2ca9d3b5c5 gui, man, authors: Update docs, translations, and contributors 2024-08-12 03:45:16 +00:00
Tommy van der Vorst
9cde068f2a lib/syncthing: Add wrapper for access to model (#9627)
### Purpose

Wrap access to Model for users that use the syncthing Go package. See
discussion:
https://github.com/syncthing/syncthing/pull/9619#pullrequestreview-2212484910

### Testing

It works with the iOS app. Other than that, there are no current users
of this API (to my knowledge), as Model was only exposed recently for
the iOS app.
2024-08-11 20:20:43 +02:00
Gusted
1243083831 cli: Remove go-shlex dependency (#9644) 2024-08-11 11:37:18 +02:00
Gusted
356c5055ad lib/sha256: Remove it (#9643)
### Purpose

Remove the `lib/sha256` package, because it's no longer necessary. Go's
standard library now has the same performance and has been on par with
`sha256-simd` [since Go 1.21](1a64574f42).
Therefore using `sha256-simd` has no benefits anymore.

ARM has already had optimized sha256 assembly code since
7b8a7f8272;
`sha256-simd` published their results before that optimized assembly was
implemented
(f941fedda8).
The assembly looks very similar, and the benchmarks in the Go commit
match those of `sha256-simd`.

This patch removes all of the related code of `lib/sha256` and makes
`crypto/sha256` the 'default'.
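Callers then hash via the standard library directly, for example:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	h := sha256.New()
	h.Write([]byte("block data"))
	fmt.Printf("%x\n", h.Sum(nil))
}
```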

Benchmark of `sha256-simd` and `crypto/sha256`:
<details>

```
cpu: AMD Ryzen 5 3600X 6-Core Processor
                │  simd.txt   │               go.txt                │
                │   sec/op    │    sec/op     vs base               │
Hash/8Bytes-12    63.25n ± 1%    73.38n ± 1%  +16.02% (p=0.002 n=6)
Hash/64Bytes-12   98.73n ± 1%   105.30n ± 1%   +6.65% (p=0.002 n=6)
Hash/1K-12        567.2n ± 1%    572.8n ± 1%   +0.99% (p=0.002 n=6)
Hash/8K-12        4.062µ ± 1%    4.062µ ± 1%        ~ (p=0.396 n=6)
Hash/1M-12        512.1µ ± 0%    510.6µ ± 1%        ~ (p=0.485 n=6)
Hash/5M-12        2.556m ± 1%    2.564m ± 0%        ~ (p=0.093 n=6)
Hash/10M-12       5.112m ± 0%    5.127m ± 0%        ~ (p=0.093 n=6)
geomean           13.82µ         14.27µ        +3.28%

                │   simd.txt   │               go.txt                │
                │     B/s      │     B/s       vs base               │
Hash/8Bytes-12    120.6Mi ± 1%   104.0Mi ± 1%  -13.81% (p=0.002 n=6)
Hash/64Bytes-12   618.2Mi ± 1%   579.8Mi ± 1%   -6.22% (p=0.002 n=6)
Hash/1K-12        1.682Gi ± 1%   1.665Gi ± 1%   -0.98% (p=0.002 n=6)
Hash/8K-12        1.878Gi ± 1%   1.878Gi ± 1%        ~ (p=0.310 n=6)
Hash/1M-12        1.907Gi ± 0%   1.913Gi ± 1%        ~ (p=0.485 n=6)
Hash/5M-12        1.911Gi ± 1%   1.904Gi ± 0%        ~ (p=0.093 n=6)
Hash/10M-12       1.910Gi ± 0%   1.905Gi ± 0%        ~ (p=0.093 n=6)
geomean           1.066Gi        1.032Gi        -3.18%
```

</details>


### Testing

Compiled and tested on Linux.

### Documentation

https://github.com/syncthing/docs/pull/874
2024-08-10 12:58:20 +01:00
Jakob Borg
19693734a3 build: Update dependencies (#9640) 2024-08-09 16:04:51 +02:00
Ross Smith II
17e60b9e0c Chmod -x non-executable files (fixes #9629) (#9630)
Fixed via
```
git ls-files -s | grep 100755 | grep -E -v '(run|sh)$' | cut -f 2 | xargs git update-index --chmod -x
```
See #9629.
2024-08-05 18:32:23 +02:00
Syncthing Release Automation
ac22b2d00a gui, man, authors: Update docs, translations, and contributors 2024-08-05 03:45:25 +00:00
Tommy van der Vorst
de0b4270df all: minimal set of changes for iOS app (#9619)
### Purpose

This PR contains the set of changes needed to make Syncthing work on iOS
for [my iOS app for
Syncthing](https://github.com/pixelspark/sushitrain).

Most changes originate from [the Mobius Sync
fork](http://github.com/MobiusSync/syncthing/tree/ios). I have removed
the changes from their fork that are not strictly needed for my app
(i.e. their changes to the GUI and command line utilities, for instance)
and squashed it all into a single commit.

In summary, the changes are:

* Resolve non-absolute paths to the 'Documents' folder (basically the
only one an app can/should write user data to by default on iOS)
* Tweaking of build flags/conditions for iOS (i.e. determine which
basicfs_watch, ignoreresult variant to build for iOS)
* Disable upgrade mechanism on iOS
* Make `RequestGlobal` and `PullerProgress` public symbols
* Expose syncthing.app's Model instance (app.M)
* Add no-op stub for SetLowPriority on iOS

I would very much appreciate these changes being (eventually) merged into
mainline Syncthing, as this would allow my iOS app to track the mainline
source code directly and removes the need (for me at least) for
maintaining a separate fork. Perhaps the Mobius folks can also benefit
from this (although as noted this branch does not contain their changes
to e.g. the GUI).

### Testing

This branch has been tested with the iOS app and appears to work fine.
The full set of MobiusSync changes has been used before with success.

### Screenshots

n/a

### Documentation

There should be no visible changes for users due to this set of changes.

---------

Co-authored-by: Simon Pickup <simon@pickupinfinity.com>
2024-07-31 07:31:14 +02:00
Syncthing Release Automation
e738af7c56 gui, man, authors: Update docs, translations, and contributors 2024-07-29 03:45:25 +00:00
Syncthing Release Automation
a28441a9bf gui, man, authors: Update docs, translations, and contributors 2024-07-22 03:45:28 +00:00
Simon Frei
2e313716e5 etc: Remove restart on suspend systemd service (ref #8448) (#9611)
The option to do the same in our own monitor process was removed a
long time ago.
2024-07-17 09:29:49 +03:00
Syncthing Release Automation
0b5ff1f5f7 gui, man, authors: Update docs, translations, and contributors 2024-07-15 03:45:20 +00:00
Simon Frei
0fe6d97d3d lib/fs: Add missing locks to fakeFile methods (fixes #9499) (#9603)
fixes #9499
2024-07-09 10:33:30 +02:00
Simon Frei
0756e42a85 lib/api: Increase test request timeout (fixes #9455) (#9602)
Fixes #9455
2024-07-09 00:37:44 +02:00
Syncthing Release Automation
13ebe1c87f gui, man, authors: Update docs, translations, and contributors 2024-07-08 03:45:16 +00:00
Simon Frei
aea7fa5f22 lib/ignore: Remove unused patterns in cache (#9601)
Tiny cleanup I noticed while trying to fix/test another issue
(https://github.com/syncthing/syncthing/pull/9600). I briefly tried to
figure out what it was used for in the past, but gave up without
results.
2024-07-02 11:01:00 +00:00
Simon Frei
403ce7e597 lib/ignore: Fix caching of filenames with path separators on windows (#9600)
Previously we queried the cache with backslashes, but stored entries with
slashes, meaning there were never any cache hits for non-toplevel files. I
also eventually remembered that the cache is disabled by default, so this
is a bit pointless, but still right :P
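The gist of the fix, sketched (not the actual code):

```go
package ignore

import "path/filepath"

// cacheKey converts a name to forward slashes so that lookups and stored
// entries use the same separator on Windows.
func cacheKey(name string) string {
	return filepath.ToSlash(name)
}
```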
2024-07-02 10:58:06 +00:00
Syncthing Release Automation
4704d3bc48 gui, man, authors: Update docs, translations, and contributors 2024-07-01 03:45:16 +00:00
André Colomb
2794b04243 gui: Add Filipino (fil) translation template (#9599) 2024-06-29 17:53:32 +02:00
Syncthing Release Automation
1ce64971fd gui, man, authors: Update docs, translations, and contributors 2024-06-24 03:45:42 +00:00
Syncthing Release Automation
eb6d80eac4 gui, man, authors: Update docs, translations, and contributors 2024-06-17 03:45:27 +00:00
Syncthing Release Automation
a8db3351ae gui, man, authors: Update docs, translations, and contributors 2024-06-10 03:45:25 +00:00
Ross Smith II
23a900e096 gui: Use localised time in duration (#9552)
https://github.com/syncthing/syncthing/pull/8291 inspired me to develop
this. I tested it with all the languages Syncthing currently supports,
and they all work.

The only issue is that when you change the language in the GUI, you have
to either refresh the page, or wait a few seconds for the page to
refresh by itself, before the duration is translated into the new
language.

### Screenshots


![2024-05-20_21-47-58](https://github.com/syncthing/syncthing/assets/220772/7e3b371e-3495-4e3e-853a-b5a41215e6c7)

### Documentation

The documentation for the translation widget is at
https://github.com/EvanHahn/HumanizeDuration.js/blob/main/README.md
2024-06-05 06:05:51 -04:00
Jakob Borg
5a304cf295 build: Skip autoprocs for build script 2024-06-04 08:55:37 -04:00
Jakob Borg
136b3742bf build: Update dependencies (#9565) 2024-06-04 13:58:49 +02:00
Jakob Borg
21e0f98fe2 Merge branch 'infrastructure'
* infrastructure:
  cmd/stupgrades: Basic process metrics
  cmd/stcrashreceiver: Ignore patterns, improve metrics
  cmd/strelaypoolsrv: More compact response, improved metrics
  cmd/stdiscosrv: Add AMQP replication
2024-06-04 07:18:35 -04:00
Jakob Borg
2bb5b2244b cmd/stupgrades: Basic process metrics 2024-06-03 19:50:28 +02:00
Jakob Borg
2f281799c1 cmd/stcrashreceiver: Ignore patterns, improve metrics 2024-06-03 19:50:28 +02:00
Jakob Borg
18a58a2ddc cmd/strelaypoolsrv: More compact response, improved metrics 2024-06-03 19:50:28 +02:00
Jakob Borg
f283215fce cmd/stdiscosrv: Add AMQP replication 2024-06-03 19:50:28 +02:00
Syncthing Release Automation
495809ac9e gui, man, authors: Update docs, translations, and contributors 2024-06-03 03:45:18 +00:00
Jakob Borg
9ca8addcf7 build: Add missing region attribute for uploads 2024-05-30 10:49:22 +02:00
Jakob Borg
94181ade23 build: Generalise S3 push options 2024-05-27 13:56:32 +02:00
Syncthing Release Automation
e50933433e gui, man, authors: Update docs, translations, and contributors 2024-05-27 03:45:21 +00:00
Jakob Borg
a2b8f2361e lib/config: Add file inside folder marker directory (#9525)
### Purpose

Avoid the issue where the folder marker is deleted by overzealous
cleanup tools because it's just a useless, empty directory.

We create a small file containing an admonishment not to delete the
directory, and some metadata that is just for human consumption at the
moment. (But it would parse as a valid yaml file if we wanted to read
this, at some point.)

This will only apply when _creating_ a folder marker, that is, existing
setups will not gain the file automatically. Obviously, when using a
custom folder marker none of this applies.

Also, slightly adjust the permission bits for the folder marker directory and file on Unixes, making sure the group & write bits are unset.

### Testing

I've created and deleted a few folders and it appears to behave as I
expect.

### Screenshots

```
jb@ok:~/somefolder % ls -la
total 0
drwxr-xr-x   3 jb  staff   96 May  1 08:52 ./
drwx------  12 jb  staff  384 May  1 08:52 ../
drwxr-xr-x   3 jb  staff   96 May  1 08:52 .stfolder/
jb@ok:~/somefolder % ls -l .stfolder
total 8
-rw-r--r--  1 jb  staff  122 May  1 08:52 syncthing-folder-39a4b0.txt
jb@ok:~/somefolder % cat .stfolder/syncthing-folder-39a4b0.txt
# This directory is a Syncthing folder marker.
# Do not delete.

folderID: xtdca-cudyf
created: 2024-05-01T08:52:49+02:00
```
2024-05-24 08:51:02 +02:00
Jakob Borg
4b60e86d02 lib/config, lib/watchaggregator: Add config for max FS watcher delay (#9558)
Currently the maximum delay is always derived automatically from the
initial delay. This is fine in most cases, but for some use cases (large
files that take a long time to write) we need to be able to set a longer
max delay than the computed value (e.g., 15s delay with 10min timeout).
2024-05-23 16:21:00 +02:00
Jakob Borg
d6b5676603 lib/fs: Watcher should react to xattr-only events on Darwin 2024-05-23 09:39:10 +02:00
Jakob Borg
3821b6ceee build: Update dependencies (#9553) 2024-05-21 12:37:43 +02:00
Syncthing Release Automation
973585e97d gui, man, authors: Update docs, translations, and contributors 2024-05-20 03:45:15 +00:00
Jakob Borg
ba6ac2f604 lib/geoip, cmd/relaypoolsrv, cmd/ursrv: Automatically manage GeoIP updates (#9342)
This adds a small package `geoip` which knows how to download and manage
the Maxmind GeoLite2 database we use. This removes the need for various
scripts to download and manage the geoip database, something that today
happens on Docker startup for the relay pool server and using various
hand written hacks for the usage reporting server.

The database is downloaded when needed and then refreshed on a
best-effort basis weekly.
2024-05-18 20:31:49 +03:00
luchenhan
57d399317e lib/db: Correct function name in comments (#9520) 2024-05-16 07:02:57 +00:00
WangXi
f2d6722348 lib/connections: Use proper errors.Is check (#9538) 2024-05-16 07:01:16 +00:00
dependabot[bot]
7b1b77e50d build(deps): bump github.com/quic-go/quic-go from 0.42.0 to 0.43.0 (#9522)
Bumps [github.com/quic-go/quic-go](https://github.com/quic-go/quic-go)
from 0.42.0 to 0.43.0.

Release notes (sourced from github.com/quic-go/quic-go's releases, v0.43.0):

- *quic-go.net*: a new documentation site for the quic-go projects (quic-go
itself, HTTP/3, webtransport-go and, soon, masque-go), replacing the wiki
and the ever-growing README files. A lot of work has gone into the
documentation already, but it is by no means done yet; the entire source is
public at https://github.com/quic-go/docs/ and community contributions are
welcome.
- HTTP Datagrams (RFC 9297) are now supported on both the client and the
server side (#4452). HTTP Datagrams are used in WebTransport and in
CONNECT-UDP (RFC 9298), among others. The new API is described at
https://quic-go.net/docs/http3/datagrams/. The integration necessitated a
comprehensive refactor of the HTTP/3 package, resulting in several breaking
API changes listed below.

Breaking changes:

- quicvarint: functions now return an `int` instead of the internal
`protocol.ByteCount` (#4365)
- http3: `Server.SetQuicHeaders` was renamed to `SetQUICHeaders` (#4377)
- http3: `Server.QuicConfig` was renamed to `QUICConfig` (#4384)
- http3: `RoundTripper.QuicConfig` was renamed to `QUICConfig` (#4385)
- http3: `RoundTripOpt.CheckSettings` was removed (#4416); use the new
`SingleDestinationRoundTripper` API instead
- http3: the `HTTPStreamer` interface is now implemented by the
`http.ResponseWriter` (and not the `http.Request.Body`) (#4469)
- the maximum payload size is included in the `DatagramTooLargeError` (#4470)

Other notable changes:

- GSO and ECN are disabled on kernel versions older than 5 (#4456)
- http3: logging can be controlled using an `slog.Logger` (#4449)
- http3: HEAD requests can now be sent in 0-RTT (#4378)
- http3: duplicate QPACK encoder and decoder streams are now rejected as
required by the RFC (#4388)
- http3: Extended CONNECT is blocked until the server's SETTINGS are
received, as required by the RFC (#4450)
- http3: HTTP/3 client connections aren't removed if `RoundTrip` errors due
to a cancelled context (#4448)
- http3: sniff Content-Type when flushing the ResponseWriter (#4412)
- the `Context` exposed on the `quic.Stream` is now derived from the
connection's context (#4414)
- the UDP send and receive buffer size was increased to 7 MiB (#4455)

Clarifications on the QUIC stream state machine:

- Calling `CancelWrite` after `Close` on a `SendStream` (or a bidirectional
stream) causes a state transition from the "Data Sent" to the "Reset Sent"
state, as described in section 3.1 of RFC 9000. This matches the current
behaviour of quic-go but didn't match the API documentation (fixed in
#4419). Stream data will therefore not be delivered reliably if
`CancelWrite` is called, even if `Close` was called before.
- The way streams are garbage-collected (and the peer is granted additional
limit to open a new stream) once they're no longer needed changed in a
subtle way for the send direction of streams (#4445).

... (truncated)

Commits:

- 93c4785 http3: sniff Content-Type when flushing the ResponseWriter (#4412)
- c0250ce include the maximum payload size in the DatagramTooLargeError (#4470)
- 34f4d14 http3: implement the HTTPStreamer on the ResponseWriter, flush header (#4469)
- 083ceb4 http3: rename Settings.EnableDatagram to EnableDatagrams (#4466)
- e1e5b62 README: link to the new documentation site (#4464)
- 2a37c53 http3: add support for HTTP Datagrams (RFC 9297) (#4452)
- 11b1159 http3: fix race condition in client unit test (#4463)
- 4b87539 delay completion of the receive stream until the reset error was read (#4460)
- bff131e delay completion of the send stream until the reset error was delivered (#4445)
- 12aa638 disable GSO and ECN on kernels older than version 5 (#4456)
- Additional commits viewable in the compare view:
https://github.com/quic-go/quic-go/compare/v0.42.0...v0.43.0
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/quic-go/quic-go&package-manager=go_modules&previous-version=0.42.0&new-version=0.43.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.


---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-16 08:56:44 +02:00
Syncthing Release Automation
06914b872b gui, man, authors: Update docs, translations, and contributors 2024-05-13 03:45:18 +00:00
Jakob Borg
f6df8b40b4 build: Use Go 1.22.3 at minimum 2024-05-08 08:01:46 +02:00
André Colomb
e057f5ee9a gui: Add Hindi (hi) translation template (#9530)
Based on user request from Weblate, user @Scrambled777.

Looks promising based on the profile:
https://hosted.weblate.org/user/Scrambled777/
2024-05-07 10:29:42 +02:00
Syncthing Release Automation
a5bf110d90 gui, man, authors: Update docs, translations, and contributors 2024-05-06 03:45:24 +00:00
DerRockWolf
debbe726e0 lib/connections: Add syncthing_connections_active metric (fixes #9527) (#9528)
### Purpose

Adds a new metric `syncthing_connections_active` which equals the number
of active connections per device.

Fixes #9527 


### Testing

I've manually tested it by running syncthing with these changes locally
and examining the returned metrics from `/metrics`.
I've done the following things:
- Connect & disconnect a device
- Increase & decrease the number of connections and verify that the
value of the metric matches the number displayed in the GUI.

### Documentation

https://github.com/syncthing/docs/blob/main/includes/metrics-list.rst
needs to be regenerated with
[find-metrics.go](https://github.com/syncthing/docs/blob/main/_script/find-metrics/find-metrics.go)


---------

Co-authored-by: Jakob Borg <jakob@kastelo.net>
2024-05-04 22:31:37 +02:00
otbutz
ec3e474a53 etc: Use 7MiB buffer size (#9524)
### Purpose

In preparation for quic-go v0.43.0; see
https://github.com/quic-go/quic-go/pull/4455
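
The actual change in this commit is to the example configuration under `etc/`; as an aside, a minimal sketch of what the larger buffers correspond to when using Go's standard library directly (the 7 MiB figure comes from the quic-go release notes quoted above):

```go
package main

import (
    "log"
    "net"
)

func main() {
    conn, err := net.ListenUDP("udp", &net.UDPAddr{})
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    // quic-go asks the OS for roughly this much UDP buffer space and logs a
    // warning when the system limits (net.core.rmem_max / wmem_max on Linux)
    // are lower, which is what the etc/ examples are being adjusted for.
    const bufSize = 7 * 1024 * 1024 // 7 MiB
    if err := conn.SetReadBuffer(bufSize); err != nil {
        log.Printf("could not set read buffer: %v", err)
    }
    if err := conn.SetWriteBuffer(bufSize); err != nil {
        log.Printf("could not set write buffer: %v", err)
    }
}
```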
2024-05-01 10:03:07 +02:00
Severin von Wnuck-Lipinski
ebb1edc652 gui: Fix Firefox bookmark favicon (fixes #9506) (#9507)
### Purpose

Firefox uses the last specified favicon link for bookmarks, but only if
it is available on initial page load.
Remove the second link and use ng-href to change the icon instead.

I'm not really familiar with AngularJS, so feel free to offer suggestions
for improvements.

### Testing

Briefly tested on Firefox 124.0.2 and Chrome 123.0.6312.105.
2024-05-01 09:00:36 +02:00
Syncthing Release Automation
6204670c66 gui, man, authors: Update docs, translations, and contributors 2024-04-29 03:45:25 +00:00
Syncthing Release Automation
ff9b24f388 gui, man, authors: Update docs, translations, and contributors 2024-04-22 03:45:18 +00:00
Syncthing Release Automation
01b820dc78 gui, man, authors: Update docs, translations, and contributors 2024-04-15 04:10:05 +00:00
Jakob Borg
79ae24df76 lib/nat: Don't crash on empty address list (fixes #9503) (#9504) 2024-04-11 13:23:29 +02:00
Jakob Borg
61b94b9ea5 lib/db: Drop indexes for outgoing data to force refresh (ref #9496) (#9502)
### Purpose

Resend our indexes since we fixed that index-sending issue.

I made a new thing to only drop the non-local-device index IDs, i.e.,
those for other devices. This means we will see a mismatch and resend
all indexes, but they will not. This is somewhat cleaner as it avoids
resending everything twice when two devices are upgraded, and in any
case, we have no reason to force a resend of incoming indexes here.

### Testing

It happens on my computer...
2024-04-08 11:14:27 +02:00
tomasz1986
6fb3c5ccf2 gui: Fix missing link to device editor for names with superscript (ref #9472) (#9494)
The commit 7e4e65ebf5 added links to
devices listed in the Shared With list in the folder info. However, it
only added them to those that had no superscript next to them.

With this change, the links are added to all devices regardless of
whether they have the superscript next to their names or not. The commit
also simplifies the code by using anchors directly instead of wrapping
them in spans.

Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>
2024-04-07 21:37:31 +02:00
Jakob Borg
2e7c03420f lib/db: Hold update lock while taking snapshot (#9496) 2024-04-05 21:32:43 +02:00
Jakob Borg
faa56b4bb7 build: Update dependencies (#9497) 2024-04-05 16:13:20 +02:00
Syncthing Release Automation
d7ba5316b8 gui, man, authors: Update docs, translations, and contributors 2024-04-01 03:45:41 +00:00
Syncthing Release Automation
bdfd0f0548 gui, man, authors: Update docs, translations, and contributors 2024-03-25 03:45:14 +00:00
Tim Nordenfur
5d27185083 Removed no longer relevant Bountysource link (#9480)
### Purpose

Bountysource no longer exists and the readme link 404s. This removes the
Bountysource link and corresponding readme section.

Perhaps the section should instead be replaced by other instructions for
voting on features.
2024-03-23 00:23:02 +01:00
Jakob Borg
4dfb9d7c83 lib/api: Missing return after HTTP error 2024-03-21 08:57:43 -04:00
Emil Lundberg
2f15670094 lib/api: Extract session store (#9425)
This is an extract from PR #9175, which can be reviewed in isolation to
reduce the volume of changes to review all at once in #9175. There are
about to be several services and API handlers that read and set cookies
and session state, so this abstraction will prove helpful.

In particular, a motivating cause for this is that with the current
architecture in PR #9175, in `api.go` the [`webauthnService` needs to
access the
session](https://github.com/syncthing/syncthing/pull/9175/files#diff-e2e14f22d818b8e635572ef0ee7718dee875c365e07225d760a6faae8be7772dR309-R310)
for authentication purposes, but needs to be instantiated before the
`configMuxBuilder` for config purposes, because the WebAuthn additions
to config management need to perform WebAuthn registration ceremonies.
However, the session management is currently embedded in
`basicAuthAndSessionMiddleware`, which is [instantiated much
later](https://github.com/syncthing/syncthing/pull/9175/files#diff-e2e14f22d818b8e635572ef0ee7718dee875c365e07225d760a6faae8be7772dL371-R380)
and only if authentication is enabled in `guiCfg`. This refactorization
extracts the session management from `basicAuthAndSessionMiddleware`
so that both it and `webauthnService` can use the same shared session
management service.

### Testing

This is a refactorization intended to not change any externally
observable behaviour, so existing tests (e.g., `api_auth_test.go`)
should cover this where appropriate. I have manually verified that:

- Appending `+ "foo"` to the cookie name in `createSession` causes
`TestHtmlFormLogin/invalid_URL_returns_403_before_auth_and_404_after_auth`
and `TestHtmlFormLogin/UTF-8_auth_works` to fail
- Inverting the return value of `hasValidSession` causes a whole bunch of
tests in `TestHTTPLogin` and `TestHtmlFormLogin` to fail
- (Fixed) Changing the cookie to `MaxAge: 1000` in `destroySession` does
NOT cause any tests to fail!
- Added tests `TestHtmlFormLogin/Logout_removes_the_session_cookie`,
`TestHTTPLogin/*/Logout_removes_the_session_cookie`,
`TestHtmlFormLogin/Session_cookie_is_invalid_after_logout` and
`TestHTTPLogin/200_path#01/Session_cookie_is_invalid_after_logout` to
cover this.
- Manually verified that these tests pass both before and after the
changes in this PR, and that changing the cookie to `MaxAge: 1000` or
not calling `m.tokens.Delete(cookie.Value)` in `destroySession` makes
the respective pair of tests fail.
2024-03-21 08:09:47 -04:00
Syncthing Release Automation
b49137ce36 gui, man, authors: Update docs, translations, and contributors 2024-03-18 03:45:22 +00:00
Jaspitta
7e4e65ebf5 gui: Open devices on click in Shared With list in folder info (fixes #8972) (#9472) 2024-03-17 20:13:09 +01:00
Simon Frei
8c8167a4ab lib/model: Don't bump seq on error in index handler (#9459) 2024-03-11 07:30:21 +01:00
Simon Frei
73cc5553b6 lib/model: Prevent infinite index sending loop (fixes #9407) (#9458)
Explanation of what/why in a code comment.

Fixes https://github.com/syncthing/syncthing/issues/9407
2024-03-10 22:28:40 +01:00
Simon Frei
2ab2488274 lib/scanner: Fix ticker leak in scanner (fixes #9417) (#9451)
Move the ticker closer to where it's used and stop it with a defer to
avoid missing a branch.
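
A minimal sketch of that pattern, not the scanner code itself:

```go
package main

import (
    "fmt"
    "time"
)

// poll creates the ticker right where it is used and stops it with a
// defer, so no early return or branch can leak it.
func poll(done <-chan struct{}) {
    ticker := time.NewTicker(100 * time.Millisecond)
    defer ticker.Stop() // runs on every exit path

    for {
        select {
        case <-done:
            return
        case t := <-ticker.C:
            fmt.Println("tick at", t)
        }
    }
}

func main() {
    done := make(chan struct{})
    go poll(done)
    time.Sleep(350 * time.Millisecond)
    close(done)
}
```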

Fixes regression introduced in
2f3eacdb6c

Fixes https://github.com/syncthing/syncthing/issues/9417
2024-03-05 19:04:26 +01:00
Jakob Borg
eb9cd363d0 build: Update dependencies (#9448) 2024-03-04 20:39:43 +01:00
Syncthing Release Automation
7fe3906534 gui, man, authors: Update docs, translations, and contributors 2024-03-04 03:54:27 +00:00
André Colomb
5fdab1bf11 gui: Show encryption status for devices sharing folder (ref #8808) (#9355)
This re-implements the stalled enhancement from #8808. Thanks @Craeckie
for the idea and first implementation draft!

If a folder is shared to a device with encryption, add a lock icon in
front of the device name under "Shared With" in the folder details
panel. Be careful not to add whitespace caused by line wraps in HTML
source code, which would defeat the purpose of keeping the icon glued to
the name by a non-breaking space.

Apply the same lock icon for the list of folders shared with a device.
2024-03-03 21:09:57 +01:00
André Colomb
13a6d43f0b gui: Fix wrapping in "Shared With" / "Folders" lists. (#9438)
This change was split off from #9355 as an independent clean-up / fix.
See that PR for review discussion, testing, and screenshots.

Improve the wrapping of folder labels / device names by going back to
word-wrapping, but making sure other spans, such as the trailing comma,
do not get separated from the label span.

* Avoid adding whitespace caused by line wraps in HTML source code.

The different cases within the ng-switch block are separated by
newlines for readability, but that gets parsed as whitespace.  For
wrapping purposes, this should not happen, because then there is no
way to keep other HTML parts glued to the name / label in each list
entry.

* Simplify redundant conditional comma code.

The separating comma after a device name or folder label (all but the
last) should always stick to it.  Use the HTML comment trick to avoid
whitespace and therefore a wrapping opportunity caused by the code
formatting newline.  Thus the conditional comma only needs to be
defined once, not in each ng-switch case.

* Wrap at word boundaries and only break up words if necessary.

Use the overflow-wrap: break-word; style instead of word-break:
break-all;.  While the latter is suitable for longish paths, breaking
device names or folder labels arbitrarily within words is ugly.

This also makes the <sup> numbers actually stay glued to their
respective neighboring words.

Include legacy CSS alias "word-wrap" in the class definition.

* Fix indentation (unrelated).
2024-03-03 20:55:09 +01:00
Jakob Borg
ac942e2481 github: Convert issue templates into forms (fixes #9442) 2024-03-02 16:27:57 +01:00
Luke Hamburg
bbd2a7fbc5 lib/model: Ignore difference in extended attributes & ownership when deleting (fixes #9371) (#9430)
Adds a bool flag to `scanIfItemChanged()` to indicate when the scan was initiated from a delete function, and if so, tell `IsEquivalentOptional()` to ignore Xattrs and Ownership regardless of the global setting.

I tested this with my sledgehammer and it seems to pass.
2024-03-02 14:55:18 +00:00
Jakob Borg
07a9fa2dbd all: Use own automaxprocs package that doesn't log (ref #9436) (#9437)
### Purpose

🤫
2024-02-27 13:05:19 +01:00
Thomas
aa559bf496 all: Use Linux container CPU quota (fixes #9357, fixes #9435) (#9436)
Go is not cgroup aware and by default will set GOMAXPROCS to the number
of available threads, regardless of whether it is within the allocated
quota. This behaviour causes high amount of CPU throttling and degraded
application performance.
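
A sketch of the general technique using the upstream `go.uber.org/automaxprocs` package (per the adjacent commit above, Syncthing uses its own variant that doesn't log):

```go
package main

import (
    "log"
    "runtime"

    "go.uber.org/automaxprocs/maxprocs"
)

func main() {
    // Align GOMAXPROCS with the container's CPU quota instead of the
    // host's thread count, avoiding heavy CPU throttling under cgroups.
    undo, err := maxprocs.Set(maxprocs.Logger(log.Printf))
    if err != nil {
        log.Printf("failed to set GOMAXPROCS from CPU quota: %v", err)
    }
    defer undo()

    log.Printf("GOMAXPROCS is now %d", runtime.GOMAXPROCS(0))
}
```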
2024-02-26 12:23:14 +00:00
Jakob Borg
2d968d46b7 cmd/syncthing: Remove legacy GOMAXPROCS handling (ref #9436) 2024-02-26 13:12:57 +01:00
Syncthing Release Automation
86c4cafc96 gui, man, authors: Update docs, translations, and contributors 2024-02-26 03:45:28 +00:00
Beat Reichenbach
c4dfb66d84 docker: Add support for setting umask (#9429)
Add support for setting umask value in the Docker `entrypoint.sh`
script. This is useful when
not syncing permissions and working with groups, and needing umask
values like `002` instead of `022`.
2024-02-22 08:47:43 +00:00
Syncthing Release Automation
f4d160684b gui, man, authors: Update docs, translations, and contributors 2024-02-19 03:45:23 +00:00
Syncthing Release Automation
b76e6ce70d gui, man, authors: Update docs, translations, and contributors 2024-02-12 03:45:25 +00:00
Jakob Borg
6b4028eede build: Use correct Go version also for script runs (#9414)
The changes to go.mod in latest Go 1.21/1.22 are not fully understood by
older Go that might be pre-installed on builds, so make sure we always
have a modern one in place even for running small release scripts etc.
2024-02-11 09:24:00 +01:00
Jakob Borg
ad81ac8da7 lib/api: Deflake TestAPIServiceRequests (#9413)
Somewhere along the way, the non-parallel test became parallel, and at
that point, timeouts occurred. Parallel is better, so increase the
timeout on the offending call a bit...
2024-02-11 09:20:29 +01:00
Jakob Borg
7ebeaefe77 lib/model: Deflake new IndexHandlerTest (#9412)
The new test has a flakiness factor on slow platforms, where the close
on the sending connection races with the last index message, potentially
messing up the count. This adds a wait to ensure that all sent messages
are received, or the test will eventually fail with a timeout.
2024-02-11 09:03:12 +01:00
Jakob Borg
e1dd36561d all: Use some Go 1.21 features (#9409) 2024-02-10 21:02:42 +01:00
Jakob Borg
96c30f8387 lib/model, lib/protocol: Remove FileInfoBatch reuse behavior (#9399) 2024-02-10 19:16:27 +01:00
Jakob Borg
fc8b353011 build: Use Go 1.22, minimum is Go 1.21 (#9408)
Also updated dependencies, and an adjustment to build tags for how those
are handled and how irrelevant go1.15 is nowadays...
2024-02-09 16:35:29 +01:00
Jakob Borg
416b9e8924 lib/logger: Reduce API surface (#9404)
There is no need to expose the IsTraced() thing; it's just used in
initialisation, and thereafter ShouldDebug() is the corresponding
correct call.
2024-02-09 11:17:44 +01:00
gudvinr
9f6d732587 lib/logger: Split STTRACE into list of strings (#9402)
Currently `IsTraced("xyz")` will return true for
any occurrence of "xyz" anywhere in the string.

This change splits `STTRACE` into separate facility names using `','`,
`' '` and `';'` as delimiters, which makes the separation of facilities
clearer.
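
A minimal sketch of the splitting described, not the actual logger code:

```go
package main

import (
    "fmt"
    "strings"
)

// traceFacilities breaks STTRACE into facility names on ',', ' ' and ';',
// so a facility only matches as a whole entry rather than as a substring.
func traceFacilities(sttrace string) map[string]bool {
    fields := strings.FieldsFunc(sttrace, func(r rune) bool {
        return r == ',' || r == ' ' || r == ';'
    })
    set := make(map[string]bool, len(fields))
    for _, f := range fields {
        set[f] = true
    }
    return set
}

func main() {
    set := traceFacilities("model,scanner;xyzzy")
    fmt.Println(set["model"]) // true
    fmt.Println(set["ode"])   // false: no substring matching
}
```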
2024-02-06 14:07:59 +01:00
Syncthing Release Automation
f2f5786b33 gui, man, authors: Update docs, translations, and contributors 2024-02-05 03:45:33 +00:00
Jakob Borg
eb617865d2 lib/model: Typo in method name (fixes #9389) 2024-02-01 15:13:33 +01:00
Jakob Borg
a49e318d25 lib/model: Typo in debug print (fixes #9386) 2024-02-01 15:11:09 +01:00
Jakob Borg
e74674a019 build: Update dependencies (#9379) 2024-01-31 09:11:04 +01:00
dependabot[bot]
d98fa474ae build(deps): bump actions/cache from 3 to 4 (#9363)
Bumps [actions/cache](https://github.com/actions/cache) from 3 to 4.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/cache/releases">actions/cache's
releases</a>.</em></p>
<blockquote>
<h2>v4.0.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Update action to node20 by <a
href="https://github.com/takost"><code>@​takost</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1284">actions/cache#1284</a></li>
<li>feat: save-always flag by <a
href="https://github.com/to-s"><code>@​to-s</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1242">actions/cache#1242</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/takost"><code>@​takost</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/cache/pull/1284">actions/cache#1284</a></li>
<li><a href="https://github.com/to-s"><code>@​to-s</code></a> made their
first contribution in <a
href="https://redirect.github.com/actions/cache/pull/1242">actions/cache#1242</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/cache/compare/v3...v4.0.0">https://github.com/actions/cache/compare/v3...v4.0.0</a></p>
<h2>v3.3.3</h2>
<h2>What's Changed</h2>
<ul>
<li>Cache v3.3.3 by <a
href="https://github.com/robherley"><code>@​robherley</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1302">actions/cache#1302</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/robherley"><code>@​robherley</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/cache/pull/1302">actions/cache#1302</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/cache/compare/v3...v3.3.3">https://github.com/actions/cache/compare/v3...v3.3.3</a></p>
<h2>v3.3.2</h2>
<h2>What's Changed</h2>
<ul>
<li>Fixed readme with new segment timeout values by <a
href="https://github.com/kotewar"><code>@​kotewar</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1133">actions/cache#1133</a></li>
<li>Readme fixes by <a
href="https://github.com/kotewar"><code>@​kotewar</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1134">actions/cache#1134</a></li>
<li>Updated description of the lookup-only input for main action by <a
href="https://github.com/kotewar"><code>@​kotewar</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1130">actions/cache#1130</a></li>
<li>Change two new actions mention as quoted text by <a
href="https://github.com/bishal-pdMSFT"><code>@​bishal-pdMSFT</code></a>
in <a
href="https://redirect.github.com/actions/cache/pull/1131">actions/cache#1131</a></li>
<li>Update Cross-OS Caching tips by <a
href="https://github.com/pdotl"><code>@​pdotl</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1122">actions/cache#1122</a></li>
<li>Bazel example (Take <a
href="https://redirect.github.com/actions/cache/issues/2">#2</a>️⃣) by
<a href="https://github.com/vorburger"><code>@​vorburger</code></a> in
<a
href="https://redirect.github.com/actions/cache/pull/1132">actions/cache#1132</a></li>
<li>Remove actions to add new PRs and issues to a project board by <a
href="https://github.com/jorendorff"><code>@​jorendorff</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1187">actions/cache#1187</a></li>
<li>Consume latest toolkit and fix dangling promise bug by <a
href="https://github.com/chkimes"><code>@​chkimes</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1217">actions/cache#1217</a></li>
<li>Bump action version to 3.3.2 by <a
href="https://github.com/bethanyj28"><code>@​bethanyj28</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1236">actions/cache#1236</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/vorburger"><code>@​vorburger</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/cache/pull/1132">actions/cache#1132</a></li>
<li><a
href="https://github.com/jorendorff"><code>@​jorendorff</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/cache/pull/1187">actions/cache#1187</a></li>
<li><a href="https://github.com/chkimes"><code>@​chkimes</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/cache/pull/1217">actions/cache#1217</a></li>
<li><a
href="https://github.com/bethanyj28"><code>@​bethanyj28</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/cache/pull/1236">actions/cache#1236</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/cache/compare/v3...v3.3.2">https://github.com/actions/cache/compare/v3...v3.3.2</a></p>
<h2>v3.3.1</h2>
<h2>What's Changed</h2>
<ul>
<li>Reduced download segment size to 128 MB and timeout to 10 minutes by
<a href="https://github.com/kotewar"><code>@​kotewar</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1129">actions/cache#1129</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/cache/compare/v3...v3.3.1">https://github.com/actions/cache/compare/v3...v3.3.1</a></p>
<h2>v3.3.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Bug: Permission is missing in cache delete example by <a
href="https://github.com/kotokaze"><code>@​kotokaze</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1123">actions/cache#1123</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/actions/cache/blob/main/RELEASES.md">actions/cache's
changelog</a>.</em></p>
<blockquote>
<h1>Releases</h1>
<h3>3.0.0</h3>
<ul>
<li>Updated minimum runner version support from node 12 -&gt; node
16</li>
</ul>
<h3>3.0.1</h3>
<ul>
<li>Added support for caching from GHES 3.5.</li>
<li>Fixed download issue for files &gt; 2GB during restore.</li>
</ul>
<h3>3.0.2</h3>
<ul>
<li>Added support for dynamic cache size cap on GHES.</li>
</ul>
<h3>3.0.3</h3>
<ul>
<li>Fixed avoiding empty cache save when no files are available for
caching. (<a
href="https://redirect.github.com/actions/cache/issues/624">issue</a>)</li>
</ul>
<h3>3.0.4</h3>
<ul>
<li>Fixed tar creation error while trying to create tar with path as
<code>~/</code> home folder on <code>ubuntu-latest</code>. (<a
href="https://redirect.github.com/actions/cache/issues/689">issue</a>)</li>
</ul>
<h3>3.0.5</h3>
<ul>
<li>Removed error handling by consuming actions/cache 3.0 toolkit, Now
cache server error handling will be done by toolkit. (<a
href="https://redirect.github.com/actions/cache/pull/834">PR</a>)</li>
</ul>
<h3>3.0.6</h3>
<ul>
<li>Fixed <a
href="https://redirect.github.com/actions/cache/issues/809">#809</a> -
zstd -d: no such file or directory error</li>
<li>Fixed <a
href="https://redirect.github.com/actions/cache/issues/833">#833</a> -
cache doesn't work with github workspace directory</li>
</ul>
<h3>3.0.7</h3>
<ul>
<li>Fixed <a
href="https://redirect.github.com/actions/cache/issues/810">#810</a> -
download stuck issue. A new timeout is introduced in the download
process to abort the download if it gets stuck and doesn't finish within
an hour.</li>
</ul>
<h3>3.0.8</h3>
<ul>
<li>Fix zstd not working for windows on gnu tar in issues <a
href="https://redirect.github.com/actions/cache/issues/888">#888</a> and
<a
href="https://redirect.github.com/actions/cache/issues/891">#891</a>.</li>
<li>Allowing users to provide a custom timeout as input for aborting
download of a cache segment using an environment variable
<code>SEGMENT_DOWNLOAD_TIMEOUT_MINS</code>. Default is 60 minutes.</li>
</ul>
<h3>3.0.9</h3>
<ul>
<li>Enhanced the warning message for cache unavailablity in case of
GHES.</li>
</ul>
<h3>3.0.10</h3>
<ul>
<li>Fix a bug with sorting inputs.</li>
<li>Update definition for restore-keys in README.md</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="13aacd865c"><code>13aacd8</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/cache/issues/1242">#1242</a>
from to-s/main</li>
<li><a
href="53b35c5439"><code>53b35c5</code></a>
Merge branch 'main' into main</li>
<li><a
href="65b8989fab"><code>65b8989</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/cache/issues/1284">#1284</a>
from takost/update-to-node-20</li>
<li><a
href="d0be34d544"><code>d0be34d</code></a>
Fix dist</li>
<li><a
href="66cf064d47"><code>66cf064</code></a>
Merge branch 'main' into update-to-node-20</li>
<li><a
href="1326563738"><code>1326563</code></a>
Merge branch 'main' into main</li>
<li><a
href="e71876755e"><code>e718767</code></a>
Fix format</li>
<li><a
href="01229828ff"><code>0122982</code></a>
Apply workaround for earlyExit</li>
<li><a
href="3185ecfd61"><code>3185ecf</code></a>
Update &quot;only-&quot; actions to node20</li>
<li><a
href="25618a0a67"><code>25618a0</code></a>
Bump version</li>
<li>Additional commits viewable in <a
href="https://github.com/actions/cache/compare/v3...v4">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/cache&package-manager=github_actions&previous-version=3&new-version=4)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-31 08:33:02 +01:00
Jakob Borg
f16817632f lib/api: Improve folder summary event, verbose service (#9370)
This makes a couple of small improvements to the folder summary
mechanism:

- The folder summary includes the local and remote sequence numbers in
clear text, rather than some odd sum that I'm not sure what it was
intended to represent.
- The folder summary event is generated when appropriate, regardless of
whether there is an event listener. We did this before because
generating it was expensive, and we wanted to avoid doing it
unnecessarily. Nowadays, however, it's mostly just reading out
pre-calculated metadata, and anyway, it's nice if it shows up reliably
when running with -verbose.

The point of all this is to make it easier to use these events to judge
when devices are, in fact, in sync. As-is, if I'm looking at two
devices, it's very difficult to reliably determine if they are in sync
or not. The reason is that while we can ask device A if it thinks it's
in sync, we can't see if the answer is "yes" because it has processed
all changes from B, or if it just doesn't know about the changes from B
yet. With proper sequence numbers in the event we can compare the two
and determine the truth. This makes testing a lot easier.
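
A sketch of that comparison with hypothetical field names (the real event payload differs):

```go
package main

import "fmt"

// summary stands in for a folder summary event carrying explicit
// sequence numbers (field names here are hypothetical).
type summary struct {
    LocalSequence  int64            // highest sequence this device has announced
    RemoteSequence map[string]int64 // highest sequence processed per remote device
}

// inSync holds when each side has processed everything the other side
// has announced.
func inSync(a, b summary, aID, bID string) bool {
    return a.RemoteSequence[bID] >= b.LocalSequence &&
        b.RemoteSequence[aID] >= a.LocalSequence
}

func main() {
    a := summary{LocalSequence: 10, RemoteSequence: map[string]int64{"B": 7}}
    b := summary{LocalSequence: 8, RemoteSequence: map[string]int64{"A": 10}}
    // false: A has only processed B up to sequence 7, but B has announced 8.
    fmt.Println(inSync(a, b, "A", "B"))
}
```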
2024-01-31 08:24:39 +01:00
Jakob Borg
bda4016109 lib/protocol: Refactor interface (#9375)
This is a refactor of the protocol/model interface to take the actual
message as the parameter, instead of the broken-out fields:

```diff
type Model interface {
        // An index was received from the peer device
-       Index(conn Connection, folder string, files []FileInfo) error
+       Index(conn Connection, idx *Index) error
        // An index update was received from the peer device
-       IndexUpdate(conn Connection, folder string, files []FileInfo) error
+       IndexUpdate(conn Connection, idxUp *IndexUpdate) error
        // A request was made by the peer device
-       Request(conn Connection, folder, name string, blockNo, size int32, offset int64, hash []byte, weakHash uint32, fromTemporary bool) (RequestResponse, error)
+       Request(conn Connection, req *Request) (RequestResponse, error)
        // A cluster configuration message was received
-       ClusterConfig(conn Connection, config ClusterConfig) error
+       ClusterConfig(conn Connection, config *ClusterConfig) error
        // The peer device closed the connection or an error occurred
        Closed(conn Connection, err error)
        // The peer device sent progress updates for the files it is currently downloading
-       DownloadProgress(conn Connection, folder string, updates []FileDownloadProgressUpdate) error
+       DownloadProgress(conn Connection, p *DownloadProgress) error
 }
```

(and changing the `ClusterConfig` to `*ClusterConfig` for symmetry;
we'll be forced to use all pointers everywhere at some point anyway...)

The reason for this is that I have another thing cooking which is a
small troubleshooting change to check index consistency during transfer.
This required adding a field or two to the index/indexupdate messages,
and plumbing the extra parameters in umpteen changes is almost as big a
diff as this is. I figured let's do it once and avoid having to do that
in the future again...

The rest of the diff falls out of the change above, much of it being in
test code where we run these methods manually...
2024-01-31 08:18:27 +01:00
Syncthing Release Automation
8f5d07bd09 gui, man, authors: Update docs, translations, and contributors 2024-01-29 03:45:37 +00:00
kylosus
302b352d78 lib/fs: Add invalid UTF-8 guards to watcher (fixes #9369) (#9372)
Add invalid UTF-8 guards to fix #9369. Probably not a permanent fix, but
putting it up here in case someone else encounters the same panic.
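
A minimal sketch of the kind of guard described, not the actual watcher code:

```go
package main

import (
    "fmt"
    "unicode/utf8"
)

// handleEvent checks event paths for valid UTF-8 before further
// processing, so a malformed name can no longer cause a panic downstream.
func handleEvent(name string) {
    if !utf8.ValidString(name) {
        fmt.Printf("watcher: skipping event with invalid UTF-8 name %q\n", name)
        return
    }
    fmt.Println("processing", name)
}

func main() {
    handleEvent("notes.txt")
    handleEvent("bad\xffname")
}
```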
2024-01-28 19:50:26 +01:00
Jakob Borg
45beb28fa5 lib/api: Remove remnants of CSRF tokens file mentions (ref #9284) 2024-01-23 12:07:58 +01:00
Syncthing Release Automation
ee9b20e47a gui, man, authors: Update docs, translations, and contributors 2024-01-22 03:45:30 +00:00
tomasz1986
0f55d5fc3e gui: Remove non-functional HTML from External Versioning tooltip (ref #8923) (#9358)

Since [1], it is no longer possible to use HTML in tooltips. This was
addressed in [2]; however, the commit missed one instance of HTML that
was used to change the font type of the External versioning command
tooltip. This remaining HTML is removed in this commit.

[1] f5e5af391a
[2] 73c52eafb6

Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>

### Screenshots


![image](https://github.com/syncthing/syncthing/assets/5626656/d5f6c553-35cb-48c2-b654-809d8bbe93b8)

Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>
2024-01-20 22:11:42 +01:00
bt90
35e153625c cmd/ursrv: Add FreeBSD detection (#9351)
### Purpose

Classify `ports@freebsd` as `FreeBSD (3rd party)`
2024-01-16 17:14:12 +01:00
bt90
d5e1b99e6c cmd/ursrv: Fix Arch detection (#9350)
### Purpose

Classify `syncthing@archlinux` as `Arch (3rd party)`
2024-01-16 17:13:34 +01:00
Jakob Borg
3297624037 lib/ignore: Optimise ignoring directories for filesystem watcher (fixes #9339) (#9340)
This improves the ignore handling so that directories can be fully
ignored (skipped in the watcher) in more cases. Specifically, where the
previous rule was that any complex `!`-pattern would disable skipping
directories, the new rule is that only matches on patterns *after* such
a `!`-pattern disable skipping. That is, the following now does the
intuitive thing:

```
/foo
/bar
!whatever
*
```

- `/foo/**` and `/bar/**` are completely skipped, since there is no
chance anything underneath them could ever be not-ignored
- `!whatever` toggles the "can't skip directories any more" flag
- Anything that matches `*` can't skip directories, because it's
possible we can have `whatever` match something deeper.

To enable this, some refactoring was necessary:

- The "can skip dirs" flag is now a property of the match result, not of
the pattern set as a whole.
- That meant returning a boolean is not good enough, we need to actually
return the entire `Result` (or, like, two booleans but that seemed
uglier and more annoying to use)
- `ShouldIgnore(string) boolean` went away with
`Match(string).IsIgnored()` being the obvious replacement (API
simplification!)
- The watcher then needed to import the `ignore` package (for the
`Result` type), but `fs` imports the watcher and `ignore` imports `fs`.
That's a cycle, so I broke out `Result` into a package of its own so
that it can be safely imported everywhere in things like `type Matcher
interface { Match(string) result.Result }`. There's a fair amount of
stuttering in `result.Result` and maybe we should go with something like
`ignoreresult.R` or so, leaving this open for discussion.

Tests refactored to suit, I think this change is in fact quite well
covered by the existing ones...

Also some noise because a few of the changed files were quite old and
got the `gofumpt` treatment by my editor. Sorry not sorry.

---------

Co-authored-by: Simon Frei <freisim93@gmail.com>
2024-01-15 10:13:22 +00:00
Syncthing Release Automation
445e8cc532 gui, man, authors: Update docs, translations, and contributors 2024-01-15 03:45:19 +00:00
Jakob Borg
e041877488 lib/ignore: Refactor out result type (#9343) 2024-01-13 18:58:23 +01:00
Jakob Borg
36e08f8eee build: Testing infra images for infra-* branches 2024-01-13 11:24:59 +01:00
nf
8b321387c0 lib/versioner: Expand tildes in version directory (fixes #9241) (#9327)
### Purpose

Fix #9241 by expanding tildes in version paths.

When creating the versioner file system, first try to expand any leading
tildes to the user's home directory before handling relative paths. This
makes a version path `"~/p"` expand to `"$HOME/p"` instead of
`"/folder/~/p"`.

### Testing

Added a test to lib/versioner that exercises this code path. Also
manually tested with local syncthing instances.
2024-01-12 10:46:18 +01:00
Julian Lehrhuber
8edd67a569 lib/scanner: Prevent sync-conflict for receive-only local modifications (#9323)
### Purpose

This PR changes the behaviour of Syncthing related to `receive-only`
folders, which I believe to be a bug since I wouldn't expect the current
behaviour. With the current Syncthing codebase, a file in a
`receive-only` folder that is only modified locally can cause the
creation of a `.sync-conflict` file.

### Testing

Consider this scenario: set up two paired clients that sync a folder with
a given file (e.g. `Test.txt`). One of the clients configures the folder
to be `receive-only`. Now, change the contents of the file on the
receive-only client **_twice_**.

With the current syncthing codebase, this leads to the creation of a
`.sync-conflict` file that contains the modified contents, while the
regular `Test.txt` file is reset to the cluster's provided contents.
This is due to a `protocol.FileInfo#ShouldConflict` check, that is
succeeding on the locally modified file.

This PR changes this behaviour to not reset the file and not cause the
creation of a `.sync-conflict`. Instead, the second content update is
treated the same as the first content update.

This PR also contains a test that fails on the current codebase and
succeeds with the changes introduced in this PR.

### Screenshots

This is not a GUI change

### Documentation

This is not a user visible change.


#### Thanks to all the syncthing folks for this awesome piece of
software!
2024-01-08 10:29:20 +01:00
Syncthing Release Automation
e829a63295 gui, man, authors: Update docs, translations, and contributors 2024-01-08 03:45:23 +00:00
diemade
6881a6897a Fix website security link in README.md (#9325)
https://syncthing.net/security/
html file does not exist

### Purpose
Fixing link that is supposed to link to https://syncthing.net/security/
2024-01-06 23:58:56 +01:00
Daniel Padrta
9387e107b3 cmd/syncthing: Add CLI completion functionality (fixes #8616) (#9226)
### Purpose

This implements CLI completion using the Kongplete module. As a side
effect a CLI structure for syncthing/cli was created for kongplete to be
able to parse and implement CLI completion.

### Testing

I've tested the autocompletion manually and it worked, but I haven't
added any automated tests. Additionally, I ran `go
run build.go test` with all tests passing.
2024-01-04 10:20:53 +00:00
Jakob Borg
aa901790b9 lib/api: Save session & CSRF tokens to database, add option to stay logged in (fixes #9151) (#9284)
This adds a "token manager" which handles storing and checking expired
tokens, used for both sessions and CSRF tokens. It removes the old,
corresponding functionality for CSRFs which saved things in a file. The
result is less crap in the state directory, and active login sessions
now survive a Syncthing restart (this really annoyed me).

It also adds a boolean on login to create a longer-lived session cookie,
which is now possible and useful. Thus we can remain logged in over
browser restarts, which was also annoying... :)
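
A sketch of the idea, not Syncthing's actual implementation:

```go
package main

import (
    "crypto/rand"
    "encoding/base64"
    "fmt"
    "sync"
    "time"
)

// tokenManager hands out random tokens and remembers when they were
// created, so the same mechanism can back both CSRF tokens and
// (optionally longer-lived) login sessions, and its state can be
// persisted instead of kept in a file.
type tokenManager struct {
    mut      sync.Mutex
    lifetime time.Duration
    tokens   map[string]time.Time
}

func newTokenManager(lifetime time.Duration) *tokenManager {
    return &tokenManager{lifetime: lifetime, tokens: make(map[string]time.Time)}
}

func (m *tokenManager) New() string {
    buf := make([]byte, 32)
    _, _ = rand.Read(buf)
    token := base64.RawURLEncoding.EncodeToString(buf)
    m.mut.Lock()
    m.tokens[token] = time.Now()
    m.mut.Unlock()
    return token
}

func (m *tokenManager) Valid(token string) bool {
    m.mut.Lock()
    defer m.mut.Unlock()
    created, ok := m.tokens[token]
    if !ok || time.Since(created) > m.lifetime {
        delete(m.tokens, token) // expired or unknown
        return false
    }
    return true
}

func main() {
    sessions := newTokenManager(24 * time.Hour)
    t := sessions.New()
    fmt.Println(sessions.Valid(t))      // true
    fmt.Println(sessions.Valid("nope")) // false
}
```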

<img width="1001" alt="Screenshot 2023-12-12 at 09 56 34"
src="https://github.com/syncthing/syncthing/assets/125426/55cb20c8-78fc-453e-825d-655b94c8623b">

Best viewed with whitespace-insensitive diff, as a bunch of the auth
functions became methods instead of closures which changed indentation.
2024-01-04 10:07:12 +00:00
Jakob Borg
17df4b8634 Update dependencies (#9321)
```
% export GOTOOLCHAIN=go1.20.7
% go list -m all | cut -d ' ' -f 1 | xargs go get -u
% go mod tidy
```

Except:

- github.com/jackpal/gateway now requires Go 1.21
- github.com/shirou/gopsutil breaks linux-mips in latest version
2024-01-04 10:56:11 +01:00
tomasz1986
34ef30dd5c gui: Always inform about loading data in Restore Versions modal (#9317)
Currently, with a large number of versioned files, there is a delay
between the Restore Versions modal showing up on the screen and
initialisation of the actual versions tree. This leads to a situation
where the modal is initially empty, which confuses the user, making
them think that something is not working correctly.

To avoid the above, always show the loading data information. The string
is displayed using static HTML first, and then replaced with the exact
same content once the tree has been initialised. Both elements use the
same style and position, so there is no visual shift between the two.

Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>
2024-01-03 12:37:51 +01:00
Peter Badida
fc1c7a3c49 lib/build: Allow semver build in version regex (fixes #9267) (#9316) 2024-01-02 20:43:22 +01:00
Peter Badida
2abfefc18c gui: Keep short deviceID length consistent + xrefs (fixes #9313) (#9314)
Make the short deviceID length consistent and reference the protocol file
for future-proof edits. Closes #9313.
2024-01-02 17:31:57 +01:00
dependabot[bot]
86a08eb87d build(deps): bump actions/download-artifact from 3 to 4 (#9294)
Bumps
[actions/download-artifact](https://github.com/actions/download-artifact)
from 3 to 4.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/download-artifact/releases">actions/download-artifact's
releases</a>.</em></p>
<blockquote>
<h2>v4.0.0</h2>
<h2>What's Changed</h2>
<p>The release of upload-artifact@v4 and download-artifact@v4 are major
changes to the backend architecture of Artifacts. They have numerous
performance and behavioral improvements.</p>
<p>For more information, see the <a
href="https://github.com/actions/toolkit/tree/main/packages/artifact"><code>@​actions/artifact</code></a>
documentation.</p>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/bflad"><code>@​bflad</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/download-artifact/pull/194">actions/download-artifact#194</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/download-artifact/compare/v3...v4.0.0">https://github.com/actions/download-artifact/compare/v3...v4.0.0</a></p>
<h2>v3.0.2</h2>
<ul>
<li>Bump <code>@actions/artifact</code> to v1.1.1 - <a
href="https://redirect.github.com/actions/download-artifact/pull/195">actions/download-artifact#195</a></li>
<li>Fixed a bug in Node16 where if an HTTP download finished too quickly
(&lt;1ms, e.g. when it's mocked) we attempt to delete a temp file that
has not been created yet <a
href="hhttps://redirect.github.com/actions/toolkit/pull/1278">actions/toolkit#1278</a></li>
</ul>
<h2>v3.0.1</h2>
<ul>
<li><a
href="https://redirect.github.com/actions/download-artifact/pull/178">Bump
<code>@​actions/core</code> to 1.10.0</a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="7a1cd3216c"><code>7a1cd32</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/download-artifact/issues/246">#246</a>
from actions/v4-beta</li>
<li><a
href="8f32874a49"><code>8f32874</code></a>
licensed cache</li>
<li><a
href="b5ff8444b1"><code>b5ff844</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/download-artifact/issues/245">#245</a>
from actions/robherley/v4-documentation</li>
<li><a
href="f07a0f73f5"><code>f07a0f7</code></a>
Update README.md</li>
<li><a
href="7226129829"><code>7226129</code></a>
update test workflow to use different artifact names for matrix</li>
<li><a
href="ada9446619"><code>ada9446</code></a>
update docs and bump <code>@​actions/artifact</code></li>
<li><a
href="7eafc8b729"><code>7eafc8b</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/download-artifact/issues/244">#244</a>
from actions/robherley/bump-toolkit</li>
<li><a
href="3132d12662"><code>3132d12</code></a>
consume latest toolkit</li>
<li><a
href="5be1d38671"><code>5be1d38</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/download-artifact/issues/243">#243</a>
from actions/robherley/v4-beta-updates</li>
<li><a
href="465b526e63"><code>465b526</code></a>
consume latest <code>@​actions/toolkit</code></li>
<li>Additional commits viewable in <a
href="https://github.com/actions/download-artifact/compare/v3...v4">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/download-artifact&package-manager=github_actions&previous-version=3&new-version=4)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-01 11:09:23 +01:00
dependabot[bot]
d330d65859 build(deps): bump actions/upload-artifact from 3 to 4 (#9293)
Bumps
[actions/upload-artifact](https://github.com/actions/upload-artifact)
from 3 to 4.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/upload-artifact/releases">actions/upload-artifact's
releases</a>.</em></p>
<blockquote>
<h2>v4.0.0</h2>
<h2>What's Changed</h2>
<p>The release of upload-artifact@v4 and download-artifact@v4 are major
changes to the backend architecture of Artifacts. They have numerous
performance and behavioral improvements.</p>
<p>For more information, see the <a
href="https://github.com/actions/toolkit/tree/main/packages/artifact"><code>@​actions/artifact</code></a>
documentation.</p>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/vmjoseph"><code>@​vmjoseph</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/upload-artifact/pull/464">actions/upload-artifact#464</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/upload-artifact/compare/v3...v4.0.0">https://github.com/actions/upload-artifact/compare/v3...v4.0.0</a></p>
<h2>v3.1.3</h2>
<h2>What's Changed</h2>
<ul>
<li>chore(github): remove trailing whitespaces by <a
href="https://github.com/ljmf00"><code>@​ljmf00</code></a> in <a
href="https://redirect.github.com/actions/upload-artifact/pull/313">actions/upload-artifact#313</a></li>
<li>Bump <code>@​actions/artifact</code> version to v1.1.2 by <a
href="https://github.com/bethanyj28"><code>@​bethanyj28</code></a> in <a
href="https://redirect.github.com/actions/upload-artifact/pull/436">actions/upload-artifact#436</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/upload-artifact/compare/v3...v3.1.3">https://github.com/actions/upload-artifact/compare/v3...v3.1.3</a></p>
<h2>v3.1.2</h2>
<ul>
<li>Update all <code>@actions/*</code> NPM packages to their latest
versions- <a
href="https://redirect.github.com/actions/upload-artifact/issues/374">#374</a></li>
<li>Update all dev dependencies to their most recent versions - <a
href="https://redirect.github.com/actions/upload-artifact/issues/375">#375</a></li>
</ul>
<h2>v3.1.1</h2>
<ul>
<li>Update actions/core package to latest version to remove
<code>set-output</code> deprecation warning <a
href="https://redirect.github.com/actions/upload-artifact/issues/351">#351</a></li>
</ul>
<h2>v3.1.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Bump <code>@​actions/artifact</code> to v1.1.0 (<a
href="https://redirect.github.com/actions/upload-artifact/pull/327">actions/upload-artifact#327</a>)
<ul>
<li>Adds checksum headers on artifact upload (<a
href="https://redirect.github.com/actions/toolkit/pull/1095">actions/toolkit#1095</a>)
(<a
href="https://redirect.github.com/actions/toolkit/pull/1063">actions/toolkit#1063</a>)</li>
</ul>
</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="c7d193f32e"><code>c7d193f</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/upload-artifact/issues/466">#466</a>
from actions/v4-beta</li>
<li><a
href="13131bb095"><code>13131bb</code></a>
licensed cache</li>
<li><a
href="4a6c273b98"><code>4a6c273</code></a>
Merge branch 'main' into v4-beta</li>
<li><a
href="f391bb91a3"><code>f391bb9</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/upload-artifact/issues/465">#465</a>
from actions/robherley/v4-documentation</li>
<li><a
href="9653d03c4b"><code>9653d03</code></a>
Apply suggestions from code review</li>
<li><a
href="875b630764"><code>875b630</code></a>
add limitations section</li>
<li><a
href="ecb21463e9"><code>ecb2146</code></a>
add compression example</li>
<li><a
href="5e7604f84a"><code>5e7604f</code></a>
trim some repeated info</li>
<li><a
href="d6437d0758"><code>d6437d0</code></a>
naming</li>
<li><a
href="1b56155703"><code>1b56155</code></a>
s/v4-beta/v4/g</li>
<li>Additional commits viewable in <a
href="https://github.com/actions/upload-artifact/compare/v3...v4">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/upload-artifact&package-manager=github_actions&previous-version=3&new-version=4)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-01 11:08:55 +01:00
Syncthing Release Automation
4e1831fa3a gui, man, authors: Update docs, translations, and contributors 2024-01-01 03:45:30 +00:00
Simon Frei
2f3eacdb6c gui, lib/scanner: Improve scan progress indication (ref #8331) (#9308) 2023-12-31 23:01:16 +01:00
Sven Bachmann
1ce2af1238 lib/protocol: handle empty names in unixOwnershipEqual (fixes #9039) (#9306)
If syncOwnership is enabled and the remote uses, for example, a dockerized
Syncthing, it can't fetch the owner name and group name of the local
instance. Without this patch, this led to an endless cycle of detected
changes on the remote and failing re-sync attempts.

This patch skips comparing the owner name and group name if they are empty
on one side.
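
A minimal sketch of that comparison rule, using a hypothetical helper rather than the actual `unixOwnershipEqual`:

```go
package main

import "fmt"

// namesEqual treats an empty owner or group name on either side as
// "unknown" rather than a difference; only two non-empty names are
// actually compared.
func namesEqual(a, b string) bool {
    if a == "" || b == "" {
        return true
    }
    return a == b
}

func main() {
    fmt.Println(namesEqual("", "syncthing"))      // true: one side unknown
    fmt.Println(namesEqual("alice", "syncthing")) // false: real mismatch
}
```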

See https://github.com/syncthing/syncthing/issues/9039 for details.

### Testing

Proposed by @calmh in
https://github.com/syncthing/syncthing/issues/9039#issuecomment-1870584783
and tested locally in my setup:

Setup PC 1:
- Syncthing is run in Docker as user `root` and has none of the users
configured that synchronize their files

Setup PC 2:
  - this PC has all users locally setup
- Syncthing runs as `systemd` service as user `syncthing` and has
multiple capabilities set to set the correct owner and permissions

Setup PC 3:
  - same as PC 2

Handling:
- `PC 1` is send & receive and uses just the `UID` and `GID` identifiers
to store the files
- `PC 2` and `PC 3` synchronize their files over `PC 1` but not directly
to each other

Outcome:
- `PC 2` and `PC 3` should send and receive their files with the correct
ownership and groups from `PC 1`
2023-12-29 09:16:33 +01:00
372 changed files with 12984 additions and 8714 deletions

2
.github/CODEOWNERS vendored

@@ -1,2 +0,0 @@
/AUTHORS @calmh
/*.md @calmh


@@ -1,13 +0,0 @@
---
name: Feature request
about: If you're just not sure how to do something, see "ask a question".
labels: enhancement, needs-triage
---
### Include required information
Please be sure to include at least:
- what problem your new feature would solve
- how or why you think it is generally useful (i.e., not just for you)
- what alternatives or workarounds you considered

28
.github/ISSUE_TEMPLATE/01-feature.yml vendored Normal file

@@ -0,0 +1,28 @@
name: Feature request
description: File a new feature request
labels: ["enhancement", "needs-triage"]
body:
- type: textarea
id: feature
attributes:
label: Feature description
description: Please describe the behavior you'd like to see.
validations:
required: true
- type: textarea
id: problem-usecase
attributes:
label: Problem or use case
description: Please explain which problem this would solve, or what the use case is for the feature. Keep in mind that it's more likely to be implemented if it's generally useful for a larger number of users.
validations:
required: true
- type: textarea
id: alternatives
attributes:
label: Alternatives or workarounds
description: Please describe any alternatives or workarounds you have considered and, possibly, rejected.
validations:
required: true


@@ -1,23 +0,0 @@
---
name: Bug report
about: If you're actually looking for support, see "ask a question".
labels: bug, needs-triage
---
### Does your log mention database corruption?
If your Syncthing log reports panics because of database corruption it is
most likely a fault with your system's storage or memory. Affected log
entries will contain lines starting with `panic: leveldb`. You will need to
delete the index database to clear this, by running `syncthing
-reset-database`.
### Include required information
Please be sure to include at least:
- which version of Syncthing and what operating system you are using
- browser and version, if applicable
- what happened,
- what you expected to happen instead, and
- any steps to reproduce the problem.

.github/ISSUE_TEMPLATE/02-bug.yml (vendored, new file, 51 changed lines)

@@ -0,0 +1,51 @@
name: Bug report
description: If you're actually looking for support instead, see "I need help / I have a question".
labels: ["bug", "needs-triage"]
body:
- type: markdown
attributes:
value: |
:no_entry_sign: If you want to report a security issue, please see [our Security Policy](https://syncthing.net/security/) and do not report the issue here.
:interrobang: If you are not sure if there is a bug, but something isn't working right and you need help, please [use the forum](https://forum.syncthing.net/).
- type: textarea
id: what-happened
attributes:
label: What happened?
description: Also tell us, what did you expect to happen, and any steps we might use to reproduce the problem.
placeholder: Tell us what you see!
validations:
required: true
- type: input
id: version
attributes:
label: Syncthing version
description: What version of Syncthing are you running?
placeholder: v1.27.4
validations:
required: true
- type: input
id: platform
attributes:
label: Platform & operating system
description: On what platform(s) are you seeing the problem?
placeholder: Linux arm64
validations:
required: true
- type: input
id: browser
attributes:
label: Browser version
description: If the problem is related to the GUI, describe your browser and version.
placeholder: Safari 17.3.1
- type: textarea
id: logs
attributes:
label: Relevant log output
description: Please copy and paste any relevant log output or crash backtrace. This will be automatically formatted into code, so no need for backticks.
render: shell


@@ -4,9 +4,10 @@ on:
push:
branches:
- infrastructure
- infra-*
env:
GO_VERSION: "~1.21.5"
GO_VERSION: "~1.23.0"
CGO_ENABLED: "0"
BUILD_USER: docker
BUILD_HOST: github.syncthing.net
@@ -14,6 +15,7 @@ env:
jobs:
docker-syncthing:
name: Build and push Docker images
if: github.repository == 'syncthing/syncthing'
runs-on: ubuntu-latest
environment: docker
strategy:
@@ -49,6 +51,17 @@ jobs:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Set Docker tags (all branches)
run: |
tags=syncthing/${{ matrix.pkg }}:${{ github.sha }}
echo "TAGS=$tags" >> $GITHUB_ENV
- name: Set Docker tags (latest)
if: github.ref == 'refs/heads/infrastructure'
run: |
tags=syncthing/${{ matrix.pkg }}:latest,${{ env.TAGS }}
echo "TAGS=$tags" >> $GITHUB_ENV
- name: Build and push
uses: docker/build-push-action@v5
with:
@@ -56,6 +69,6 @@ jobs:
file: ./Dockerfile.${{ matrix.pkg }}
platforms: linux/amd64,linux/arm64
push: true
tags: syncthing/${{ matrix.pkg }}:latest,syncthing/${{ matrix.pkg }}:${{ github.sha }}
tags: ${{ env.TAGS }}
labels: |
org.opencontainers.image.revision=${{ github.sha }}


@@ -12,7 +12,7 @@ env:
# The go version to use for builds. We set check-latest to true when
# installing, so we get the latest patch version that matches the
# expression.
GO_VERSION: "~1.21.5"
GO_VERSION: "~1.23.0"
# Optimize compatibility on the slow archictures.
GO386: softfloat
@@ -48,7 +48,7 @@ jobs:
runner: ["windows-latest", "ubuntu-latest", "macos-latest"]
# The oldest version in this list should match what we have in our go.mod.
# Variables don't seem to be supported here, or we could have done something nice.
go: ["~1.20.12", "~1.21.5"]
go: ["~1.22.6", "~1.23.0"]
runs-on: ${{ matrix.runner }}
steps:
- name: Set git to use LF
@@ -165,7 +165,7 @@ jobs:
go version
echo "GO_VERSION=$(go version | sed 's#^.*go##;s# .*##')" >> $GITHUB_ENV
- uses: actions/cache@v3
- uses: actions/cache@v4
with:
path: |
~\AppData\Local\go-build
@@ -190,7 +190,7 @@ jobs:
CODESIGN_TIMESTAMP_SERVER: ${{ secrets.CODESIGN_TIMESTAMP_SERVER }}
- name: Archive artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: packages-windows
path: syncthing-windows-*.zip
@@ -218,7 +218,7 @@ jobs:
go version
echo "GO_VERSION=$(go version | sed 's#^.*go##;s# .*##')" >> $GITHUB_ENV
- uses: actions/cache@v3
- uses: actions/cache@v4
with:
path: |
~/.cache/go-build
@@ -235,10 +235,12 @@ jobs:
CGO_ENABLED: "0"
- name: Archive artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: packages-linux
path: syncthing-linux-*.tar.gz
path: |
syncthing-linux-*.tar.gz
compat.json
#
# macOS
@@ -265,7 +267,7 @@ jobs:
go version
echo "GO_VERSION=$(go version | sed 's#^.*go##;s# .*##')" >> $GITHUB_ENV
- uses: actions/cache@v3
- uses: actions/cache@v4
with:
path: |
~/.cache/go-build
@@ -334,7 +336,7 @@ jobs:
zip -r "../$universal.zip" "$universal"
- name: Archive artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: packages-macos
path: syncthing-*.zip
@@ -349,7 +351,7 @@ jobs:
runs-on: macos-latest
steps:
- name: Download artifacts
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
with:
name: packages-macos
@@ -392,7 +394,7 @@ jobs:
go version
echo "GO_VERSION=$(go version | sed 's#^.*go##;s# .*##')" >> $GITHUB_ENV
- uses: actions/cache@v3
- uses: actions/cache@v4
with:
path: |
~/.cache/go-build
@@ -431,7 +433,7 @@ jobs:
CGO_ENABLED: "0"
- name: Archive artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: packages-other
path: syncthing-*.tar.gz
@@ -471,7 +473,7 @@ jobs:
mv "syncthing-source-$version.tar.gz" syncthing
- name: Archive artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: packages-source
path: syncthing-source-*.tar.gz
@@ -504,11 +506,17 @@ jobs:
fetch-depth: 0
- name: Download artifacts
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
- uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache: false
check-latest: true
- name: Install signing tool
run: |
go install ./cmd/stsigtool
go install ./cmd/dev/stsigtool
- name: Sign archives
run: |
@@ -543,7 +551,7 @@ jobs:
GNUPG_SIGNING_KEY_BASE64: ${{ secrets.GNUPG_SIGNING_KEY_BASE64 }}
- name: Archive artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: packages-signed
path: packages/*
@@ -579,7 +587,7 @@ jobs:
run: |
gem install fpm
- uses: actions/cache@v3
- uses: actions/cache@v4
with:
path: |
~/.cache/go-build
@@ -595,7 +603,7 @@ jobs:
BUILD_USER: debian
- name: Archive artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: debian-packages
path: "*.deb"
@@ -620,29 +628,36 @@ jobs:
fetch-depth: 0
- name: Download artifacts
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
with:
name: packages-signed
path: packages
- uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache: false
check-latest: true
- name: Create release json
run: |
cd packages
"$GITHUB_WORKSPACE/tools/generate-release-json" "$BASE_URL" > nightly.json
env:
BASE_URL: https://syncthing.ams3.digitaloceanspaces.com/nightly/
BASE_URL: ${{ secrets.NIGHTLY_BASE_URL }}
- name: Push artifacts
uses: docker://docker.io/rclone/rclone:latest
env:
RCLONE_CONFIG_SPACES_TYPE: s3
RCLONE_CONFIG_SPACES_PROVIDER: DigitalOcean
RCLONE_CONFIG_SPACES_ACCESS_KEY_ID: ${{ secrets.SPACES_KEY }}
RCLONE_CONFIG_SPACES_SECRET_ACCESS_KEY: ${{ secrets.SPACES_SECRET }}
RCLONE_CONFIG_SPACES_ENDPOINT: ams3.digitaloceanspaces.com
RCLONE_CONFIG_SPACES_ACL: public-read
RCLONE_CONFIG_OBJSTORE_TYPE: s3
RCLONE_CONFIG_OBJSTORE_PROVIDER: ${{ secrets.S3_PROVIDER }}
RCLONE_CONFIG_OBJSTORE_ACCESS_KEY_ID: ${{ secrets.S3_ACCESS_KEY_ID }}
RCLONE_CONFIG_OBJSTORE_SECRET_ACCESS_KEY: ${{ secrets.S3_SECRET_ACCESS_KEY }}
RCLONE_CONFIG_OBJSTORE_ENDPOINT: ${{ secrets.S3_ENDPOINT }}
RCLONE_CONFIG_OBJSTORE_REGION: ${{ secrets.S3_REGION }}
RCLONE_CONFIG_OBJSTORE_ACL: public-read
with:
args: sync packages spaces:syncthing/nightly
args: sync packages objstore:${{ secrets.S3_BUCKET }}/nightly
#
# Push release artifacts to Spaces
@@ -662,13 +677,73 @@ jobs:
fetch-depth: 0
- name: Download signed packages
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
with:
name: packages-signed
path: packages
- name: Download debian packages
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
with:
name: debian-packages
path: packages
- uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache: false
check-latest: true
- name: Set version
run: |
version=$(go run build.go version)
echo "VERSION=$version" >> $GITHUB_ENV
- name: Push to object store (${{ env.VERSION }})
uses: docker://docker.io/rclone/rclone:latest
env:
RCLONE_CONFIG_OBJSTORE_TYPE: s3
RCLONE_CONFIG_OBJSTORE_PROVIDER: ${{ secrets.S3_PROVIDER }}
RCLONE_CONFIG_OBJSTORE_ACCESS_KEY_ID: ${{ secrets.S3_ACCESS_KEY_ID }}
RCLONE_CONFIG_OBJSTORE_SECRET_ACCESS_KEY: ${{ secrets.S3_SECRET_ACCESS_KEY }}
RCLONE_CONFIG_OBJSTORE_ENDPOINT: ${{ secrets.S3_ENDPOINT }}
RCLONE_CONFIG_OBJSTORE_REGION: ${{ secrets.S3_REGION }}
RCLONE_CONFIG_OBJSTORE_ACL: public-read
with:
args: sync packages objstore:${{ secrets.S3_BUCKET }}/release/${{ env.VERSION }}
- name: Push to object store (latest)
uses: docker://docker.io/rclone/rclone:latest
env:
RCLONE_CONFIG_OBJSTORE_TYPE: s3
RCLONE_CONFIG_OBJSTORE_PROVIDER: ${{ secrets.S3_PROVIDER }}
RCLONE_CONFIG_OBJSTORE_ACCESS_KEY_ID: ${{ secrets.S3_ACCESS_KEY_ID }}
RCLONE_CONFIG_OBJSTORE_SECRET_ACCESS_KEY: ${{ secrets.S3_SECRET_ACCESS_KEY }}
RCLONE_CONFIG_OBJSTORE_ENDPOINT: ${{ secrets.S3_ENDPOINT }}
RCLONE_CONFIG_OBJSTORE_REGION: ${{ secrets.S3_REGION }}
RCLONE_CONFIG_OBJSTORE_ACL: public-read
with:
args: sync objstore:${{ secrets.S3_BUCKET }}/release/${{ env.VERSION }} objstore:${{ secrets.S3_BUCKET }}/release/latest
#
# Push Debian/APT archive
#
publish-apt:
name: Publish APT
if: (github.event_name == 'push' || github.event_name == 'workflow_dispatch') && (github.ref == 'refs/heads/release' || startsWith(github.ref, 'refs/heads/release-'))
environment: signing
needs:
- basics
- package-debian
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Download packages
uses: actions/download-artifact@v4
with:
name: debian-packages
path: packages
@@ -676,31 +751,61 @@ jobs:
- name: Set version
run: |
version=$(go run build.go version)
echo "Version: $version"
echo "VERSION=$version" >> $GITHUB_ENV
- name: Push to Spaces (${{ env.VERSION }})
uses: docker://docker.io/rclone/rclone:latest
env:
RCLONE_CONFIG_SPACES_TYPE: s3
RCLONE_CONFIG_SPACES_PROVIDER: DigitalOcean
RCLONE_CONFIG_SPACES_ACCESS_KEY_ID: ${{ secrets.SPACES_KEY }}
RCLONE_CONFIG_SPACES_SECRET_ACCESS_KEY: ${{ secrets.SPACES_SECRET }}
RCLONE_CONFIG_SPACES_ENDPOINT: ams3.digitaloceanspaces.com
RCLONE_CONFIG_SPACES_ACL: public-read
with:
args: sync packages spaces:syncthing/release/${{ env.VERSION }}
# Decide whether packages should go to stable, candidate or nightly
- name: Prepare packages
run: |
kind=stable
if [[ $VERSION == *-rc.[0-9] ]] ; then
kind=candidate
elif [[ $VERSION == *-* ]] ; then
kind=nightly
fi
echo "Kind: $kind"
mkdir -p packages/syncthing/$kind
mv packages/*.deb packages/syncthing/$kind
- name: Push to Spaces (latest)
- name: Pull archive
uses: docker://docker.io/rclone/rclone:latest
env:
RCLONE_CONFIG_SPACES_TYPE: s3
RCLONE_CONFIG_SPACES_PROVIDER: DigitalOcean
RCLONE_CONFIG_SPACES_ACCESS_KEY_ID: ${{ secrets.SPACES_KEY }}
RCLONE_CONFIG_SPACES_SECRET_ACCESS_KEY: ${{ secrets.SPACES_SECRET }}
RCLONE_CONFIG_SPACES_ENDPOINT: ams3.digitaloceanspaces.com
RCLONE_CONFIG_SPACES_ACL: public-read
RCLONE_CONFIG_OBJSTORE_TYPE: s3
RCLONE_CONFIG_OBJSTORE_PROVIDER: ${{ secrets.S3_PROVIDER }}
RCLONE_CONFIG_OBJSTORE_ACCESS_KEY_ID: ${{ secrets.S3_ACCESS_KEY_ID }}
RCLONE_CONFIG_OBJSTORE_SECRET_ACCESS_KEY: ${{ secrets.S3_SECRET_ACCESS_KEY }}
RCLONE_CONFIG_OBJSTORE_ENDPOINT: ${{ secrets.S3_ENDPOINT }}
RCLONE_CONFIG_OBJSTORE_REGION: ${{ secrets.S3_REGION }}
RCLONE_CONFIG_OBJSTORE_ACL: public-read
with:
args: sync spaces:syncthing/release/${{ env.VERSION }} spaces:syncthing/release/latest
args: sync objstore:syncthing-apt/dists dists
- name: Prepare signing key
run: |
echo "$APT_GPG_KEYRING_BASE64" | base64 -d > keyring.pgp
env:
APT_GPG_KEYRING_BASE64: ${{ secrets.APT_GPG_KEYRING_BASE64 }}
- name: Update archive
uses: docker://ghcr.io/kastelo/ezapt:latest
with:
args:
--add packages
--dists dists
--keyring keyring.pgp
- name: Push archive
uses: docker://docker.io/rclone/rclone:latest
env:
RCLONE_CONFIG_OBJSTORE_TYPE: s3
RCLONE_CONFIG_OBJSTORE_PROVIDER: ${{ secrets.S3_PROVIDER }}
RCLONE_CONFIG_OBJSTORE_ACCESS_KEY_ID: ${{ secrets.S3_ACCESS_KEY_ID }}
RCLONE_CONFIG_OBJSTORE_SECRET_ACCESS_KEY: ${{ secrets.S3_SECRET_ACCESS_KEY }}
RCLONE_CONFIG_OBJSTORE_ENDPOINT: ${{ secrets.S3_ENDPOINT }}
RCLONE_CONFIG_OBJSTORE_REGION: ${{ secrets.S3_REGION }}
RCLONE_CONFIG_OBJSTORE_ACL: public-read
with:
args: sync dists -v objstore:syncthing-apt/dists
#
# Build and push to Docker Hub
@@ -743,7 +848,7 @@ jobs:
go version
echo "GO_VERSION=$(go version | sed 's#^.*go##;s# .*##')" >> $GITHUB_ENV
- uses: actions/cache@v3
- uses: actions/cache@v4
with:
path: |
~/.cache/go-build


@@ -16,7 +16,7 @@ jobs:
token: ${{ secrets.ACTIONS_GITHUB_TOKEN }}
- uses: actions/setup-go@v5
with:
go-version: ^1.19.6
go-version: stable
- run: |
set -euo pipefail
git config --global user.name 'Syncthing Release Automation'

.gitignore (vendored, 1 changed line)

@@ -18,3 +18,4 @@ deb
/repos
/proto/scripts/protoc-gen-gosyncthing
/gui/next-gen-gui
/compat.json

.policy.yml (new file, 99 changed lines)

@@ -0,0 +1,99 @@
# This is the policy-bot configuration for this repository. It controls
# which approvals are required for any given pull request. The format is
# described at https://github.com/palantir/policy-bot. The syntax of the
# policy can be verified by the bot:
# curl https://pb.syncthing.net/api/validate -X PUT -T .policy.yml
# The policy below is what is required for any pull request.
policy:
approval:
- subject is conventional commit
- project metadata requires maintainer approval
- or:
- is approved by a syncthing contributor
- is a translation or dependency update by a contributor
- is a trivial change by a contributor
# Additionally, contributors can disapprove of a PR
disapproval:
requires:
teams:
- syncthing/contributors
# The rules for the policy are described below.
approval_rules:
# All commits (PRs before squashing) should have a valid conventional
# commit type subject.
- name: subject is conventional commit
requires:
conditions:
title:
matches:
- '^(feat|fix|docs|chore|refactor|build): [a-z].+'
- '^(feat|fix|docs|chore|refactor|build)\(\w+(, \w+)*\): [a-z].+'
# Changes to important project metadata and documentation, including this
# policy, require signoff by a maintainer
- name: project metadata requires maintainer approval
if:
changed_files:
paths:
- ^[^/]+\.md
- ^\.policy\.yml
- ^\.github/
- ^LICENSE
requires:
count: 1
teams:
- syncthing/maintainers
options:
ignore_update_merges: true
allow_contributor: true
# Regular pull requests require approval by an active contributor
- name: is approved by a syncthing contributor
requires:
count: 1
teams:
- syncthing/contributors
options:
ignore_update_merges: true
allow_contributor: true
# Changes to some files (translations, dependencies, compatibility) do not
# require approval if they were proposed by a contributor and have a
# matching commit subject
- name: is a translation or dependency update by a contributor
if:
only_changed_files:
paths:
- ^gui/default/assets/lang/
- ^go\.mod$
- ^go\.sum$
- ^compat\.yaml$
title:
matches:
- '^chore\(gui\):'
- '^build\(deps\):'
- '^build\(compat\):'
has_author_in:
teams:
- syncthing/contributors
# If the change is small and the label "trivial" is added, we accept that
# on trust. These PRs can be audited after the fact as appropriate.
# Features are not trivial.
- name: is a trivial change by a contributor
if:
modified_lines:
total: "< 25"
title:
not_matches:
- '^feat'
has_labels:
- trivial
has_author_in:
teams:
- syncthing/contributors
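
The conventional-commit requirement above boils down to two regular expressions applied to the pull request title. For a quick local sanity check before opening a PR, a standalone sketch like the following (not a tool that exists in the repository) reproduces the same match:

```go
package main

import (
	"fmt"
	"regexp"
)

// The two title patterns from the "subject is conventional commit" rule in
// .policy.yml, copied here verbatim.
var titlePatterns = []*regexp.Regexp{
	regexp.MustCompile(`^(feat|fix|docs|chore|refactor|build): [a-z].+`),
	regexp.MustCompile(`^(feat|fix|docs|chore|refactor|build)\(\w+(, \w+)*\): [a-z].+`),
}

// isConventional reports whether a PR title would satisfy the policy rule.
func isConventional(title string) bool {
	for _, re := range titlePatterns {
		if re.MatchString(title) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isConventional("fix(gui): expand tildes for subdir check")) // true
	fmt.Println(isConventional("Fix GUI bug"))                              // false
}
```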

AUTHORS (25 changed lines)

@@ -51,6 +51,7 @@ Audrius Butkevicius (AudriusButkevicius) <audrius.butkevicius@gmail.com> <github
Aurélien Rainone <476650+arl@users.noreply.github.com>
BAHADIR YILMAZ <bahadiryilmaz32@gmail.com>
Bart De Vries (mogwa1) <devriesb@gmail.com>
Beat Reichenbach <44111292+beatreichenbach@users.noreply.github.com>
Ben Curthoys (bencurthoys) <ben@bencurthoys.com>
Ben Schulz (uok) <ueomkail@gmail.com> <uok@users.noreply.github.com>
Ben Shepherd (benshep) <bjashepherd@gmail.com>
@@ -93,6 +94,7 @@ Daniel Barczyk <46358936+DanielBarczyk@users.noreply.github.com>
Daniel Bergmann (brgmnn) <dan.arne.bergmann@gmail.com> <brgmnn@users.noreply.github.com>
Daniel Harte (norgeous) <daniel@harte.me> <daniel@danielharte.co.uk> <norgeous@users.noreply.github.com>
Daniel Martí (mvdan) <mvdan@mvdan.cc>
Daniel Padrta <64928366+danpadcz@users.noreply.github.com>
Darshil Chanpura (dtchanpura) <dtchanpura@gmail.com> <dcprime314@gmail.com>
David Rimmer (dinosore) <dinosore@dbrsoftware.co.uk>
deepsource-autofix[bot] <62050782+deepsource-autofix[bot]@users.noreply.github.com>
@@ -105,6 +107,7 @@ derekriemer <derek.riemer@colorado.edu>
DerRockWolf <50499906+DerRockWolf@users.noreply.github.com>
desbma <desbma@users.noreply.github.com>
Devon G. Redekopp <devon@redekopp.com>
diemade <spamkill@posteo.ch>
digital <didev@dinid.net>
Dimitri Papadopoulos Orfanos <3234522+DimitriPapadopoulos@users.noreply.github.com>
Dmitry Saveliev (dsaveliev) <d.e.saveliev@gmail.com>
@@ -138,10 +141,12 @@ greatroar <61184462+greatroar@users.noreply.github.com>
Greg <gco@jazzhaiku.com>
guangwu <guoguangwu@magic-shield.com>
gudvinr <gudvinr@gmail.com>
Gusted <postmaster@gusted.xyz> <williamzijl7@hotmail.com>
Han Boetes <han@boetes.org>
HansK-p <42314815+HansK-p@users.noreply.github.com>
Harrison Jones (harrisonhjones) <harrisonhjones@users.noreply.github.com>
Heiko Zuerker (Smiley73) <heiko@zuerker.org>
Hireworks <129852174+hireworksltd@users.noreply.github.com>
Hugo Locurcio <hugo.locurcio@hugo.pro>
Iain Barnett <iainspeed@gmail.com>
Ian Johnson (anonymouse64) <ian.johnson@canonical.com> <person.uwsome@gmail.com>
@@ -154,13 +159,14 @@ Jacek Szafarkiewicz (hadogenes) <szafar@linux.pl>
Jack Croft <jccroft1@users.noreply.github.com>
Jacob <jyundt@gmail.com>
Jake Peterson (acogdev) <jake@acogdev.com>
Jakob Borg (calmh) <jakob@nym.se> <jakob@kastelo.net>
Jakob Borg (calmh) <jakob@nym.se> <jakob@kastelo.net> <jborg@coreweave.com>
James O'Beirne <wild-github@au92.org>
James Patterson (jpjp) <jamespatterson@operamail.com> <jpjp@users.noreply.github.com>
janost <janost@tuta.io>
Jaroslav Lichtblau <svetlemodry@users.noreply.github.com>
Jaroslav Malec (dzarda) <dzardacz@gmail.com>
jaseg <githubaccount@jaseg.net>
Jaspitta <ste.scarpitta@gmail.com>
Jauder Ho <jauderho@users.noreply.github.com>
Jaya Chithra (jayachithra) <s.k.jayachithra@gmail.com>
Jaya Kumar <jaya.kumar@ict.nl>
@@ -179,10 +185,12 @@ Jonathan Cross <jcross@gmail.com>
Jonta <359397+Jonta@users.noreply.github.com>
Jose Manuel Delicado (jmdaweb) <jmdaweb@hotmail.com> <jmdaweb@users.noreply.github.com>
jtagcat <git-514635f7@jtag.cat> <git-12dbd862@jtag.cat>
Julian Lehrhuber <jul13579@users.noreply.github.com>
Jörg Thalheim <Mic92@users.noreply.github.com>
Jędrzej Kula <kula.jedrek@gmail.com>
K.B.Dharun Krishna <kbdharunkrishna@gmail.com>
Kalle Laine <pahakalle@protonmail.com>
Kapil Sareen <kapilsareen584@gmail.com>
Karol Różycki (krozycki) <rozycki.karol@gmail.com>
Kebin Liu <lkebin@gmail.com>
Keith Harrison <keithh@protonmail.com>
@@ -194,6 +202,7 @@ Kevin Bushiri (keevBush) <keevbush@gmail.com> <36192217+keevBush@users.noreply.g
Kevin White, Jr. (kwhite17) <kevinwhite1710@gmail.com>
klemens <ka7@github.com>
Kurt Fitzner (Kudalufi) <kurt@va1der.ca> <kurt.fitzner@gmail.com>
kylosus <33132401+kylosus@users.noreply.github.com>
Lars K.W. Gohlke (lkwg82) <lkwg82@gmx.de>
Lars Lehtonen <lars.lehtonen@gmail.com>
Laurent Arnoud <laurent@spkdev.net>
@@ -203,7 +212,9 @@ Liu Siyuan (liusy182) <liusy182@gmail.com> <liusy182@hotmail.com>
Lode Hoste (Zillode) <zillode@zillode.be>
Lord Landon Agahnim (LordLandon) <lordlandon@gmail.com>
LSmithx2 <42276854+lsmithx2@users.noreply.github.com>
luchenhan <168071714+luchenhan@users.noreply.github.com>
Lukas Lihotzki <lukas@lihotzki.de>
Luke Hamburg <1992842+luckman212@users.noreply.github.com>
luzpaz <luzpaz@users.noreply.github.com>
Majed Abdulaziz (majedev) <majed.alhajry@gmail.com>
Marc Laporte (marclaporte) <marc@marclaporte.com> <marc@laporte.name>
@@ -224,6 +235,7 @@ Matteo Ruina <matteo.ruina@gmail.com>
Maurizio Tomasi <ziotom78@gmail.com>
Max <github@germancoding.com>
Max Schulze (kralo) <max.schulze@online.de> <kralo@users.noreply.github.com>
maxice8 <30738253+maxice8@users.noreply.github.com>
MaximAL <almaximal@ya.ru>
Maxime Thirouin <m@moox.io>
Maximilian <maxi.rostock@outlook.de> <public@complexvector.space>
@@ -241,6 +253,7 @@ Mingxuan Lin <gdlmx@users.noreply.github.com>
mv1005 <49659413+mv1005@users.noreply.github.com>
Nate Morrison (nrm21) <natemorrison@gmail.com>
Naveen <172697+naveensrinivasan@users.noreply.github.com>
nf <nf@wh3rd.net>
Nicholas Rishel (PrototypeNM1) <rishel.nick@gmail.com> <PrototypeNM1@users.noreply.github.com>
Nick Busey <NickBusey@users.noreply.github.com>
Nico Stapelbroek <3368018+nstapelbroek@users.noreply.github.com>
@@ -292,26 +305,35 @@ Scott Klupfel (kluppy) <kluppy@going2blue.com>
sec65 <106604020+sec65@users.noreply.github.com>
Sergey Mishin (ralder) <ralder@yandex.ru>
Sertonix <83883937+Sertonix@users.noreply.github.com>
Severin von Wnuck-Lipinski <ss7@live.de>
Shaarad Dalvi <60266155+shaaraddalvi@users.noreply.github.com> <shdalv@microsoft.com>
Simon Frei (imsodin) <freisim93@gmail.com>
Simon Mwepu <simonmwepu@gmail.com>
Simon Pickup <simon@pickupinfinity.com>
Sly_tom_cat <slytomcat@mail.ru>
Sonu Kumar Saw <31889738+dev-saw99@users.noreply.github.com>
Stefan Kuntz (Stefan-Code) <stefan.github@gmail.com> <Stefan.github@gmail.com>
Stefan Tatschner (rumpelsepp) <stefan@sevenbyte.org> <rumpelsepp@sevenbyte.org> <stefan@rumpelsepp.org>
Steven Eckhoff <steven.eckhoff.opensource@gmail.com>
Suhas Gundimeda (snugghash) <suhas.gundimeda@gmail.com> <snugghash@gmail.com>
Sven Bachmann <dev@mcbachmann.de>
Syncthing Automation <automation@syncthing.net>
Syncthing Release Automation <release@syncthing.net>
Taylor Khan (nelsonkhan) <nelsonkhan@gmail.com>
Terrance <git@terrance.allofti.me>
Thomas <9749173+uhthomas@users.noreply.github.com>
Thomas Hipp <thomashipp@gmail.com>
Tim Abell (timabell) <tim@timwise.co.uk>
Tim Howes (timhowes) <timhowes@berkeley.edu>
Tim Nordenfur <tim@gurka.se>
Tobias Frölich <40638719+tobifroe@users.noreply.github.com>
Tobias Klauser <tobias.klauser@gmail.com>
Tobias Nygren (tnn2) <tnn@nygren.pp.se>
Tobias Tom (tobiastom) <t.tom@succont.de>
Tom Jakubowski <tom@crystae.net>
Tomasz Wilczyński <5626656+tomasz1986@users.noreply.github.com> <twilczynski@naver.com>
Tommy Thorn <tommy-github-email@thorn.ws>
Tommy van der Vorst <tommy-github@pixelspark.nl> <tommy@pixelspark.nl>
Tully Robinson (tojrobinson) <tully@tojr.org>
Tyler Brazier (tylerbrazier) <tyler@tylerbrazier.com>
Tyler Kropp <kropptyler@gmail.com>
@@ -324,6 +346,7 @@ Vil Brekin (Vilbrekin) <vilbrekin@gmail.com>
villekalliomaki <53118179+villekalliomaki@users.noreply.github.com>
Vladimir Rusinov <vrusinov@google.com> <vladimir.rusinov@gmail.com>
wangguoliang <liangcszzu@163.com>
WangXi <xib1102@icloud.com>
Will Rouesnel <wrouesnel@wrouesnel.com>
William A. Kennington III (wkennington) <william@wkennington.com>
wouter bolsterlee <wouter@bolsterl.ee>


@@ -11,14 +11,6 @@ LABEL org.opencontainers.image.authors="The Syncthing Project" \
EXPOSE 8080
RUN apk add --no-cache ca-certificates su-exec curl
ENV PUID=1000 PGID=1000 MAXMIND_KEY=
RUN mkdir /var/strelaypoolsrv && chown 1000 /var/strelaypoolsrv
USER 1000
COPY strelaypoolsrv-linux-${TARGETARCH} /bin/strelaypoolsrv
COPY script/strelaypoolsrv-entrypoint.sh /bin/entrypoint.sh
WORKDIR /var/strelaypoolsrv
ENTRYPOINT ["/bin/entrypoint.sh", "/bin/strelaypoolsrv", "-listen", ":8080"]
ENTRYPOINT ["/bin/strelaypoolsrv", "-listen", ":8080"]


@@ -15,6 +15,9 @@ To grant Syncthing additional capabilities without running as root, use the
`PCAP` environment variable with the same syntax as that for `setcap(8)`.
For example, `PCAP=cap_chown,cap_fowner+ep`.
To set a different umask value, use the `UMASK` environment variable. For
example `UMASK=002`.
## Example Usage
**Docker cli**
@@ -46,6 +49,11 @@ services:
- 22000:22000/udp # QUIC file transfers
- 21027:21027/udp # Receive local discovery broadcasts
restart: unless-stopped
healthcheck:
test: curl -fkLsS -m 2 127.0.0.1:8384/rest/noauth/health | grep -o --color=never OK || exit 1
interval: 1m
timeout: 10s
retries: 3
```
## Discovery
@@ -81,6 +89,11 @@ services:
- /wherever/st-sync:/var/syncthing
network_mode: host
restart: unless-stopped
healthcheck:
test: curl -fkLsS -m 2 127.0.0.1:8384/rest/noauth/health | grep -o --color=never OK || exit 1
interval: 1m
timeout: 10s
retries: 3
```
Be aware that syncthing alone is now in control of what interfaces and ports it


@@ -63,12 +63,6 @@ implementations][11] for Windows, Mac, and Linux.
To run Syncthing in Docker, see [the Docker README][16].
## Vote on features/bugs
We'd like to encourage you to [vote][12] on issues that matter to you.
This helps the team understand what are the biggest pain points for our
users, and could potentially influence what is being worked on next.
## Getting in Touch
The first and best point of contact is the [Forum][8].
@@ -89,7 +83,7 @@ build process.
## Signed Releases
As of v0.10.15 and onwards, release binaries are GPG signed with the key
D26E6ED000654A3E, available from https://syncthing.net/security.html and
D26E6ED000654A3E, available from https://syncthing.net/security/ and
most key servers.
There is also a built-in automatic upgrade mechanism (disabled in some
@@ -111,7 +105,6 @@ All code is licensed under the [MPLv2 License][7].
[8]: https://forum.syncthing.net/
[10]: https://github.com/syncthing/syncthing/issues
[11]: https://docs.syncthing.net/users/contrib.html#gui-wrappers
[12]: https://www.bountysource.com/teams/syncthing/issues
[13]: https://github.com/syncthing/syncthing/blob/main/GOALS.md
[14]: assets/logo-text-128.png
[15]: https://syncthing.net/

build.go (201 changed lines)

@@ -4,8 +4,8 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
//go:build ignore
// +build ignore
//go:build tools
// +build tools
package main
@@ -34,6 +34,8 @@ import (
"time"
buildpkg "github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/upgrade"
"sigs.k8s.io/yaml"
)
var (
@@ -95,41 +97,40 @@ var targets = map[string]target{
buildPkgs: []string{"github.com/syncthing/syncthing/cmd/syncthing"},
binaryName: "syncthing", // .exe will be added automatically for Windows builds
archiveFiles: []archiveFile{
{src: "{{binary}}", dst: "{{binary}}", perm: 0755},
{src: "README.md", dst: "README.txt", perm: 0644},
{src: "LICENSE", dst: "LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "AUTHORS.txt", perm: 0644},
{src: "{{binary}}", dst: "{{binary}}", perm: 0o755},
{src: "README.md", dst: "README.txt", perm: 0o644},
{src: "LICENSE", dst: "LICENSE.txt", perm: 0o644},
{src: "AUTHORS", dst: "AUTHORS.txt", perm: 0o644},
// All files from etc/ and extra/ added automatically in init().
},
systemdService: "syncthing@*.service",
installationFiles: []archiveFile{
{src: "{{binary}}", dst: "deb/usr/bin/{{binary}}", perm: 0755},
{src: "README.md", dst: "deb/usr/share/doc/syncthing/README.txt", perm: 0644},
{src: "LICENSE", dst: "deb/usr/share/doc/syncthing/LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "deb/usr/share/doc/syncthing/AUTHORS.txt", perm: 0644},
{src: "man/syncthing.1", dst: "deb/usr/share/man/man1/syncthing.1", perm: 0644},
{src: "man/syncthing-config.5", dst: "deb/usr/share/man/man5/syncthing-config.5", perm: 0644},
{src: "man/syncthing-stignore.5", dst: "deb/usr/share/man/man5/syncthing-stignore.5", perm: 0644},
{src: "man/syncthing-device-ids.7", dst: "deb/usr/share/man/man7/syncthing-device-ids.7", perm: 0644},
{src: "man/syncthing-event-api.7", dst: "deb/usr/share/man/man7/syncthing-event-api.7", perm: 0644},
{src: "man/syncthing-faq.7", dst: "deb/usr/share/man/man7/syncthing-faq.7", perm: 0644},
{src: "man/syncthing-networking.7", dst: "deb/usr/share/man/man7/syncthing-networking.7", perm: 0644},
{src: "man/syncthing-rest-api.7", dst: "deb/usr/share/man/man7/syncthing-rest-api.7", perm: 0644},
{src: "man/syncthing-security.7", dst: "deb/usr/share/man/man7/syncthing-security.7", perm: 0644},
{src: "man/syncthing-versioning.7", dst: "deb/usr/share/man/man7/syncthing-versioning.7", perm: 0644},
{src: "etc/linux-systemd/system/syncthing@.service", dst: "deb/lib/systemd/system/syncthing@.service", perm: 0644},
{src: "etc/linux-systemd/system/syncthing-resume.service", dst: "deb/lib/systemd/system/syncthing-resume.service", perm: 0644},
{src: "etc/linux-systemd/user/syncthing.service", dst: "deb/usr/lib/systemd/user/syncthing.service", perm: 0644},
{src: "etc/linux-sysctl/30-syncthing.conf", dst: "deb/usr/lib/sysctl.d/30-syncthing.conf", perm: 0644},
{src: "etc/firewall-ufw/syncthing", dst: "deb/etc/ufw/applications.d/syncthing", perm: 0644},
{src: "etc/linux-desktop/syncthing-start.desktop", dst: "deb/usr/share/applications/syncthing-start.desktop", perm: 0644},
{src: "etc/linux-desktop/syncthing-ui.desktop", dst: "deb/usr/share/applications/syncthing-ui.desktop", perm: 0644},
{src: "assets/logo-32.png", dst: "deb/usr/share/icons/hicolor/32x32/apps/syncthing.png", perm: 0644},
{src: "assets/logo-64.png", dst: "deb/usr/share/icons/hicolor/64x64/apps/syncthing.png", perm: 0644},
{src: "assets/logo-128.png", dst: "deb/usr/share/icons/hicolor/128x128/apps/syncthing.png", perm: 0644},
{src: "assets/logo-256.png", dst: "deb/usr/share/icons/hicolor/256x256/apps/syncthing.png", perm: 0644},
{src: "assets/logo-512.png", dst: "deb/usr/share/icons/hicolor/512x512/apps/syncthing.png", perm: 0644},
{src: "assets/logo-only.svg", dst: "deb/usr/share/icons/hicolor/scalable/apps/syncthing.svg", perm: 0644},
{src: "{{binary}}", dst: "deb/usr/bin/{{binary}}", perm: 0o755},
{src: "README.md", dst: "deb/usr/share/doc/syncthing/README.txt", perm: 0o644},
{src: "LICENSE", dst: "deb/usr/share/doc/syncthing/LICENSE.txt", perm: 0o644},
{src: "AUTHORS", dst: "deb/usr/share/doc/syncthing/AUTHORS.txt", perm: 0o644},
{src: "man/syncthing.1", dst: "deb/usr/share/man/man1/syncthing.1", perm: 0o644},
{src: "man/syncthing-config.5", dst: "deb/usr/share/man/man5/syncthing-config.5", perm: 0o644},
{src: "man/syncthing-stignore.5", dst: "deb/usr/share/man/man5/syncthing-stignore.5", perm: 0o644},
{src: "man/syncthing-device-ids.7", dst: "deb/usr/share/man/man7/syncthing-device-ids.7", perm: 0o644},
{src: "man/syncthing-event-api.7", dst: "deb/usr/share/man/man7/syncthing-event-api.7", perm: 0o644},
{src: "man/syncthing-faq.7", dst: "deb/usr/share/man/man7/syncthing-faq.7", perm: 0o644},
{src: "man/syncthing-networking.7", dst: "deb/usr/share/man/man7/syncthing-networking.7", perm: 0o644},
{src: "man/syncthing-rest-api.7", dst: "deb/usr/share/man/man7/syncthing-rest-api.7", perm: 0o644},
{src: "man/syncthing-security.7", dst: "deb/usr/share/man/man7/syncthing-security.7", perm: 0o644},
{src: "man/syncthing-versioning.7", dst: "deb/usr/share/man/man7/syncthing-versioning.7", perm: 0o644},
{src: "etc/linux-systemd/system/syncthing@.service", dst: "deb/lib/systemd/system/syncthing@.service", perm: 0o644},
{src: "etc/linux-systemd/user/syncthing.service", dst: "deb/usr/lib/systemd/user/syncthing.service", perm: 0o644},
{src: "etc/linux-sysctl/30-syncthing.conf", dst: "deb/usr/lib/sysctl.d/30-syncthing.conf", perm: 0o644},
{src: "etc/firewall-ufw/syncthing", dst: "deb/etc/ufw/applications.d/syncthing", perm: 0o644},
{src: "etc/linux-desktop/syncthing-start.desktop", dst: "deb/usr/share/applications/syncthing-start.desktop", perm: 0o644},
{src: "etc/linux-desktop/syncthing-ui.desktop", dst: "deb/usr/share/applications/syncthing-ui.desktop", perm: 0o644},
{src: "assets/logo-32.png", dst: "deb/usr/share/icons/hicolor/32x32/apps/syncthing.png", perm: 0o644},
{src: "assets/logo-64.png", dst: "deb/usr/share/icons/hicolor/64x64/apps/syncthing.png", perm: 0o644},
{src: "assets/logo-128.png", dst: "deb/usr/share/icons/hicolor/128x128/apps/syncthing.png", perm: 0o644},
{src: "assets/logo-256.png", dst: "deb/usr/share/icons/hicolor/256x256/apps/syncthing.png", perm: 0o644},
{src: "assets/logo-512.png", dst: "deb/usr/share/icons/hicolor/512x512/apps/syncthing.png", perm: 0o644},
{src: "assets/logo-only.svg", dst: "deb/usr/share/icons/hicolor/scalable/apps/syncthing.svg", perm: 0o644},
},
},
"stdiscosrv": {
@@ -141,21 +142,21 @@ var targets = map[string]target{
buildPkgs: []string{"github.com/syncthing/syncthing/cmd/stdiscosrv"},
binaryName: "stdiscosrv", // .exe will be added automatically for Windows builds
archiveFiles: []archiveFile{
{src: "{{binary}}", dst: "{{binary}}", perm: 0755},
{src: "cmd/stdiscosrv/README.md", dst: "README.txt", perm: 0644},
{src: "LICENSE", dst: "LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "AUTHORS.txt", perm: 0644},
{src: "{{binary}}", dst: "{{binary}}", perm: 0o755},
{src: "cmd/stdiscosrv/README.md", dst: "README.txt", perm: 0o644},
{src: "LICENSE", dst: "LICENSE.txt", perm: 0o644},
{src: "AUTHORS", dst: "AUTHORS.txt", perm: 0o644},
},
systemdService: "stdiscosrv.service",
installationFiles: []archiveFile{
{src: "{{binary}}", dst: "deb/usr/bin/{{binary}}", perm: 0755},
{src: "cmd/stdiscosrv/README.md", dst: "deb/usr/share/doc/syncthing-discosrv/README.txt", perm: 0644},
{src: "LICENSE", dst: "deb/usr/share/doc/syncthing-discosrv/LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "deb/usr/share/doc/syncthing-discosrv/AUTHORS.txt", perm: 0644},
{src: "man/stdiscosrv.1", dst: "deb/usr/share/man/man1/stdiscosrv.1", perm: 0644},
{src: "cmd/stdiscosrv/etc/linux-systemd/stdiscosrv.service", dst: "deb/lib/systemd/system/stdiscosrv.service", perm: 0644},
{src: "cmd/stdiscosrv/etc/linux-systemd/default", dst: "deb/etc/default/syncthing-discosrv", perm: 0644},
{src: "cmd/stdiscosrv/etc/firewall-ufw/stdiscosrv", dst: "deb/etc/ufw/applications.d/stdiscosrv", perm: 0644},
{src: "{{binary}}", dst: "deb/usr/bin/{{binary}}", perm: 0o755},
{src: "cmd/stdiscosrv/README.md", dst: "deb/usr/share/doc/syncthing-discosrv/README.txt", perm: 0o644},
{src: "LICENSE", dst: "deb/usr/share/doc/syncthing-discosrv/LICENSE.txt", perm: 0o644},
{src: "AUTHORS", dst: "deb/usr/share/doc/syncthing-discosrv/AUTHORS.txt", perm: 0o644},
{src: "man/stdiscosrv.1", dst: "deb/usr/share/man/man1/stdiscosrv.1", perm: 0o644},
{src: "cmd/stdiscosrv/etc/linux-systemd/stdiscosrv.service", dst: "deb/lib/systemd/system/stdiscosrv.service", perm: 0o644},
{src: "cmd/stdiscosrv/etc/linux-systemd/default", dst: "deb/etc/default/syncthing-discosrv", perm: 0o644},
{src: "cmd/stdiscosrv/etc/firewall-ufw/stdiscosrv", dst: "deb/etc/ufw/applications.d/stdiscosrv", perm: 0o644},
},
tags: []string{"purego"},
},
@@ -168,23 +169,23 @@ var targets = map[string]target{
buildPkgs: []string{"github.com/syncthing/syncthing/cmd/strelaysrv"},
binaryName: "strelaysrv", // .exe will be added automatically for Windows builds
archiveFiles: []archiveFile{
{src: "{{binary}}", dst: "{{binary}}", perm: 0755},
{src: "cmd/strelaysrv/README.md", dst: "README.txt", perm: 0644},
{src: "cmd/strelaysrv/LICENSE", dst: "LICENSE.txt", perm: 0644},
{src: "LICENSE", dst: "LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "AUTHORS.txt", perm: 0644},
{src: "{{binary}}", dst: "{{binary}}", perm: 0o755},
{src: "cmd/strelaysrv/README.md", dst: "README.txt", perm: 0o644},
{src: "cmd/strelaysrv/LICENSE", dst: "LICENSE.txt", perm: 0o644},
{src: "LICENSE", dst: "LICENSE.txt", perm: 0o644},
{src: "AUTHORS", dst: "AUTHORS.txt", perm: 0o644},
},
systemdService: "strelaysrv.service",
installationFiles: []archiveFile{
{src: "{{binary}}", dst: "deb/usr/bin/{{binary}}", perm: 0755},
{src: "cmd/strelaysrv/README.md", dst: "deb/usr/share/doc/syncthing-relaysrv/README.txt", perm: 0644},
{src: "cmd/strelaysrv/LICENSE", dst: "deb/usr/share/doc/syncthing-relaysrv/LICENSE.txt", perm: 0644},
{src: "LICENSE", dst: "deb/usr/share/doc/syncthing-relaysrv/LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "deb/usr/share/doc/syncthing-relaysrv/AUTHORS.txt", perm: 0644},
{src: "man/strelaysrv.1", dst: "deb/usr/share/man/man1/strelaysrv.1", perm: 0644},
{src: "cmd/strelaysrv/etc/linux-systemd/strelaysrv.service", dst: "deb/lib/systemd/system/strelaysrv.service", perm: 0644},
{src: "cmd/strelaysrv/etc/linux-systemd/default", dst: "deb/etc/default/syncthing-relaysrv", perm: 0644},
{src: "cmd/strelaysrv/etc/firewall-ufw/strelaysrv", dst: "deb/etc/ufw/applications.d/strelaysrv", perm: 0644},
{src: "{{binary}}", dst: "deb/usr/bin/{{binary}}", perm: 0o755},
{src: "cmd/strelaysrv/README.md", dst: "deb/usr/share/doc/syncthing-relaysrv/README.txt", perm: 0o644},
{src: "cmd/strelaysrv/LICENSE", dst: "deb/usr/share/doc/syncthing-relaysrv/LICENSE.txt", perm: 0o644},
{src: "LICENSE", dst: "deb/usr/share/doc/syncthing-relaysrv/LICENSE.txt", perm: 0o644},
{src: "AUTHORS", dst: "deb/usr/share/doc/syncthing-relaysrv/AUTHORS.txt", perm: 0o644},
{src: "man/strelaysrv.1", dst: "deb/usr/share/man/man1/strelaysrv.1", perm: 0o644},
{src: "cmd/strelaysrv/etc/linux-systemd/strelaysrv.service", dst: "deb/lib/systemd/system/strelaysrv.service", perm: 0o644},
{src: "cmd/strelaysrv/etc/linux-systemd/default", dst: "deb/etc/default/syncthing-relaysrv", perm: 0o644},
{src: "cmd/strelaysrv/etc/firewall-ufw/strelaysrv", dst: "deb/etc/ufw/applications.d/strelaysrv", perm: 0o644},
},
},
"strelaypoolsrv": {
@@ -192,37 +193,37 @@ var targets = map[string]target{
debname: "syncthing-relaypoolsrv",
debdeps: []string{"libc6"},
description: "Syncthing Relay Pool Server",
buildPkgs: []string{"github.com/syncthing/syncthing/cmd/strelaypoolsrv"},
buildPkgs: []string{"github.com/syncthing/syncthing/cmd/infra/strelaypoolsrv"},
binaryName: "strelaypoolsrv", // .exe will be added automatically for Windows builds
archiveFiles: []archiveFile{
{src: "{{binary}}", dst: "{{binary}}", perm: 0755},
{src: "cmd/strelaypoolsrv/README.md", dst: "README.txt", perm: 0644},
{src: "cmd/strelaypoolsrv/LICENSE", dst: "LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "AUTHORS.txt", perm: 0644},
{src: "{{binary}}", dst: "{{binary}}", perm: 0o755},
{src: "cmd/infra/strelaypoolsrv/README.md", dst: "README.txt", perm: 0o644},
{src: "cmd/infra/strelaypoolsrv/LICENSE", dst: "LICENSE.txt", perm: 0o644},
{src: "AUTHORS", dst: "AUTHORS.txt", perm: 0o644},
},
installationFiles: []archiveFile{
{src: "{{binary}}", dst: "deb/usr/bin/{{binary}}", perm: 0755},
{src: "cmd/strelaypoolsrv/README.md", dst: "deb/usr/share/doc/syncthing-relaypoolsrv/README.txt", perm: 0644},
{src: "cmd/strelaypoolsrv/LICENSE", dst: "deb/usr/share/doc/syncthing-relaypoolsrv/LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "deb/usr/share/doc/syncthing-relaypoolsrv/AUTHORS.txt", perm: 0644},
{src: "{{binary}}", dst: "deb/usr/bin/{{binary}}", perm: 0o755},
{src: "cmd/infra/strelaypoolsrv/README.md", dst: "deb/usr/share/doc/syncthing-relaypoolsrv/README.txt", perm: 0o644},
{src: "cmd/infra/strelaypoolsrv/LICENSE", dst: "deb/usr/share/doc/syncthing-relaypoolsrv/LICENSE.txt", perm: 0o644},
{src: "AUTHORS", dst: "deb/usr/share/doc/syncthing-relaypoolsrv/AUTHORS.txt", perm: 0o644},
},
},
"stupgrades": {
name: "stupgrades",
description: "Syncthing Upgrade Check Server",
buildPkgs: []string{"github.com/syncthing/syncthing/cmd/stupgrades"},
buildPkgs: []string{"github.com/syncthing/syncthing/cmd/infra/stupgrades"},
binaryName: "stupgrades",
},
"stcrashreceiver": {
name: "stcrashreceiver",
description: "Syncthing Crash Server",
buildPkgs: []string{"github.com/syncthing/syncthing/cmd/stcrashreceiver"},
buildPkgs: []string{"github.com/syncthing/syncthing/cmd/infra/stcrashreceiver"},
binaryName: "stcrashreceiver",
},
"ursrv": {
name: "ursrv",
description: "Syncthing Usage Reporting Server",
buildPkgs: []string{"github.com/syncthing/syncthing/cmd/ursrv"},
buildPkgs: []string{"github.com/syncthing/syncthing/cmd/infra/ursrv"},
binaryName: "ursrv",
},
}
@@ -231,15 +232,11 @@ func initTargets() {
all := targets["all"]
pkgs, _ := filepath.Glob("cmd/*")
for _, pkg := range pkgs {
pkg = filepath.Base(pkg)
if strings.HasPrefix(pkg, ".") {
// ignore dotfiles
if files, err := filepath.Glob(pkg + "/*.go"); err != nil || len(files) == 0 {
// No go files in the directory
continue
}
if noupgrade && pkg == "stupgrades" {
continue
}
all.buildPkgs = append(all.buildPkgs, fmt.Sprintf("github.com/syncthing/syncthing/cmd/%s", pkg))
all.buildPkgs = append(all.buildPkgs, fmt.Sprintf("github.com/syncthing/syncthing/%s", pkg))
}
targets["all"] = all
@@ -247,13 +244,13 @@ func initTargets() {
// and "extra" dirs.
syncthingPkg := targets["syncthing"]
for _, file := range listFiles("etc") {
syncthingPkg.archiveFiles = append(syncthingPkg.archiveFiles, archiveFile{src: file, dst: file, perm: 0644})
syncthingPkg.archiveFiles = append(syncthingPkg.archiveFiles, archiveFile{src: file, dst: file, perm: 0o644})
}
for _, file := range listFiles("extra") {
syncthingPkg.archiveFiles = append(syncthingPkg.archiveFiles, archiveFile{src: file, dst: file, perm: 0644})
syncthingPkg.archiveFiles = append(syncthingPkg.archiveFiles, archiveFile{src: file, dst: file, perm: 0o644})
}
for _, file := range listFiles("extra") {
syncthingPkg.installationFiles = append(syncthingPkg.installationFiles, archiveFile{src: file, dst: "deb/usr/share/doc/syncthing/" + filepath.Base(file), perm: 0644})
syncthingPkg.installationFiles = append(syncthingPkg.installationFiles, archiveFile{src: file, dst: "deb/usr/share/doc/syncthing/" + filepath.Base(file), perm: 0o644})
}
targets["syncthing"] = syncthingPkg
}
@@ -343,9 +340,11 @@ func runCommand(cmd string, target target) {
case "tar":
buildTar(target, tags)
writeCompatJSON()
case "zip":
buildZip(target, tags)
writeCompatJSON()
case "deb":
buildDeb(target)
@@ -751,7 +750,7 @@ func shouldBuildSyso(dir string) (string, error) {
}
jsonPath := filepath.Join(dir, "versioninfo.json")
err = os.WriteFile(jsonPath, bs, 0644)
err = os.WriteFile(jsonPath, bs, 0o644)
if err != nil {
return "", errors.New("failed to create " + jsonPath + ": " + err.Error())
}
@@ -810,7 +809,7 @@ func copyFile(src, dst string, perm os.FileMode) error {
}
copy:
os.MkdirAll(filepath.Dir(dst), 0777)
os.MkdirAll(filepath.Dir(dst), 0o777)
if err := os.WriteFile(dst, in, perm); err != nil {
return err
}
@@ -835,12 +834,12 @@ func listFiles(dir string) []string {
func rebuildAssets() {
os.Setenv("SOURCE_DATE_EPOCH", fmt.Sprint(buildStamp()))
runPrint(goCmd, "generate", "github.com/syncthing/syncthing/lib/api/auto", "github.com/syncthing/syncthing/cmd/strelaypoolsrv/auto")
runPrint(goCmd, "generate", "github.com/syncthing/syncthing/lib/api/auto", "github.com/syncthing/syncthing/cmd/infra/strelaypoolsrv/auto")
}
func lazyRebuildAssets() {
shouldRebuild := shouldRebuildAssets("lib/api/auto/gui.files.go", "gui") ||
shouldRebuildAssets("cmd/strelaypoolsrv/auto/gui.files.go", "cmd/strelaypoolsrv/gui")
shouldRebuildAssets("cmd/infra/strelaypoolsrv/auto/gui.files.go", "cmd/infra/strelaypoolsrv/gui")
if withNextGenGUI {
shouldRebuild = buildNextGenGUI() || shouldRebuild
@@ -870,7 +869,7 @@ func buildNextGenGUI() bool {
for _, src := range listFiles("next-gen-gui/dist") {
rel, _ := filepath.Rel("next-gen-gui/dist", src)
dst := filepath.Join("gui", rel)
if err := copyFile(src, dst, 0644); err != nil {
if err := copyFile(src, dst, 0o644); err != nil {
fmt.Println("copy:", err)
os.Exit(1)
}
@@ -931,7 +930,7 @@ func proto() {
path := filepath.Join("repos", "protobuf")
runPrint(goCmd, "install", fmt.Sprintf("github.com/gogo/protobuf/protoc-gen-gogofast@%v", pv))
os.MkdirAll("repos", 0755)
os.MkdirAll("repos", 0o755)
if _, err := os.Stat(path); err != nil {
runPrint("git", "clone", repo, path)
@@ -1428,7 +1427,7 @@ func windowsCodesign(file string) {
log.Println("Codesign: signing failed: creating temp file:", err)
return
}
_ = f.Chmod(0600) // best effort remove other users' access
_ = f.Chmod(0o600) // best effort remove other users' access
defer os.Remove(f.Name())
if _, err := f.Write(bs); err != nil {
log.Println("Codesign: signing failed: writing temp file:", err)
@@ -1558,3 +1557,29 @@ func nextPatchVersion(ver string) string {
digits[len(digits)-1] = strconv.Itoa(n + 1)
return strings.Join(digits, ".")
}
func writeCompatJSON() {
bs, err := os.ReadFile("compat.yaml")
if err != nil {
log.Fatal("Reading compat.yaml:", err)
}
var entries []upgrade.ReleaseCompatibility
if err := yaml.Unmarshal(bs, &entries); err != nil {
log.Fatal("Parsing compat.yaml:", err)
}
rt := runtime.Version()
for _, e := range entries {
if !strings.HasPrefix(rt, e.Runtime) {
continue
}
bs, _ := json.MarshalIndent(e, "", " ")
if err := os.WriteFile("compat.json", bs, 0o644); err != nil {
log.Fatal("Writing compat.json:", err)
}
return
}
log.Fatalf("runtime %v not found in compat.yaml", rt)
}


@@ -26,7 +26,7 @@ case "${1:-default}" in
build weblate
pushd man ; ./refresh.sh ; popd
git add -A gui man AUTHORS
git commit -m 'gui, man, authors: Update docs, translations, and contributors'
git commit -m 'chore(gui, man, authors): update docs, translations, and contributors'
;;
*)


@@ -7,6 +7,7 @@
package main
import (
"crypto/sha256"
"errors"
"flag"
"fmt"
@@ -15,7 +16,7 @@ import (
"os"
"path/filepath"
"github.com/syncthing/syncthing/lib/sha256"
_ "github.com/syncthing/syncthing/lib/automaxprocs"
)
func main() {


@@ -15,6 +15,7 @@ import (
"strings"
"time"
_ "github.com/syncthing/syncthing/lib/automaxprocs"
"github.com/syncthing/syncthing/lib/beacon"
"github.com/syncthing/syncthing/lib/discover"
"github.com/syncthing/syncthing/lib/protocol"


@@ -14,6 +14,8 @@ import (
"net/http"
"os"
"time"
_ "github.com/syncthing/syncthing/lib/automaxprocs"
)
type event struct {


@@ -13,6 +13,7 @@ import (
"os"
"path/filepath"
_ "github.com/syncthing/syncthing/lib/automaxprocs"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/scanner"
)


@@ -16,6 +16,7 @@ import (
"os"
"time"
_ "github.com/syncthing/syncthing/lib/automaxprocs"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/discover"
"github.com/syncthing/syncthing/lib/events"


@@ -12,6 +12,7 @@ import (
"fmt"
"os"
_ "github.com/syncthing/syncthing/lib/automaxprocs"
"github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/ignore"
)


@@ -15,6 +15,8 @@ import (
"os"
"path/filepath"
"time"
_ "github.com/syncthing/syncthing/lib/automaxprocs"
)
func main() {
@@ -43,7 +45,7 @@ func generateFiles(dir string, files, maxexp int, srcname string) error {
}
p0 := filepath.Join(dir, string(n[0]), n[0:2])
err = os.MkdirAll(p0, 0755)
err = os.MkdirAll(p0, 0o755)
if err != nil {
log.Fatal(err)
}
@@ -82,7 +84,7 @@ func generateOneFile(fd io.ReadSeeker, p1 string, s int64) error {
return err
}
os.Chmod(p1, os.FileMode(rand.Intn(0777)|0400))
os.Chmod(p1, os.FileMode(rand.Intn(0o777)|0o400))
t := time.Now().Add(-time.Duration(rand.Intn(30*86400)) * time.Second)
return os.Chtimes(p1, t, t)


@@ -12,6 +12,7 @@ import (
"log"
"os"
_ "github.com/syncthing/syncthing/lib/automaxprocs"
"github.com/syncthing/syncthing/lib/signature"
"github.com/syncthing/syncthing/lib/upgrade"
)


@@ -26,6 +26,7 @@ import (
"sync/atomic"
"time"
_ "github.com/syncthing/syncthing/lib/automaxprocs"
"github.com/syncthing/syncthing/lib/protocol"
)
@@ -157,7 +158,7 @@ func saveCert(priv interface{}, derBytes []byte) {
os.Exit(1)
}
keyOut, err := os.OpenFile("key.pem", os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
keyOut, err := os.OpenFile("key.pem", os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o600)
if err != nil {
fmt.Println(err)
os.Exit(1)


@@ -7,13 +7,14 @@
package main
import (
"crypto/sha256"
"flag"
"fmt"
"io"
"os"
"time"
"github.com/syncthing/syncthing/lib/sha256"
_ "github.com/syncthing/syncthing/lib/automaxprocs"
)
func main() {


@@ -14,6 +14,7 @@ package main
import (
"context"
"crypto/sha256"
"encoding/json"
"fmt"
"io"
@@ -21,25 +22,29 @@ import (
"net/http"
"os"
"path/filepath"
"regexp"
"strings"
"github.com/alecthomas/kong"
"github.com/prometheus/client_golang/prometheus/promhttp"
"github.com/syncthing/syncthing/lib/sha256"
"github.com/syncthing/syncthing/lib/ur"
raven "github.com/getsentry/raven-go"
"github.com/prometheus/client_golang/prometheus/promhttp"
_ "github.com/syncthing/syncthing/lib/automaxprocs"
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/ur"
)
const maxRequestSize = 1 << 20 // 1 MiB
type cli struct {
Dir string `help:"Parent directory to store crash and failure reports in" env:"REPORTS_DIR" default:"."`
DSN string `help:"Sentry DSN" env:"SENTRY_DSN"`
Listen string `help:"HTTP listen address" default:":8080" env:"LISTEN_ADDRESS"`
MaxDiskFiles int `help:"Maximum number of reports on disk" default:"100000" env:"MAX_DISK_FILES"`
MaxDiskSizeMB int64 `help:"Maximum disk space to use for reports" default:"1024" env:"MAX_DISK_SIZE_MB"`
SentryQueue int `help:"Maximum number of reports to queue for sending to Sentry" default:"64" env:"SENTRY_QUEUE"`
DiskQueue int `help:"Maximum number of reports to queue for writing to disk" default:"64" env:"DISK_QUEUE"`
Dir string `help:"Parent directory to store crash and failure reports in" env:"REPORTS_DIR" default:"."`
DSN string `help:"Sentry DSN" env:"SENTRY_DSN"`
Listen string `help:"HTTP listen address" default:":8080" env:"LISTEN_ADDRESS"`
MaxDiskFiles int `help:"Maximum number of reports on disk" default:"100000" env:"MAX_DISK_FILES"`
MaxDiskSizeMB int64 `help:"Maximum disk space to use for reports" default:"1024" env:"MAX_DISK_SIZE_MB"`
SentryQueue int `help:"Maximum number of reports to queue for sending to Sentry" default:"64" env:"SENTRY_QUEUE"`
DiskQueue int `help:"Maximum number of reports to queue for writing to disk" default:"64" env:"DISK_QUEUE"`
MetricsListen string `help:"HTTP listen address for metrics" default:":8081" env:"METRICS_LISTEN_ADDRESS"`
IngorePatterns string `help:"File containing ignore patterns (regexp)" env:"IGNORE_PATTERNS" type:"existingfile"`
}
func main() {
@@ -62,19 +67,38 @@ func main() {
}
go ss.Serve(context.Background())
var ip *ignorePatterns
if params.IngorePatterns != "" {
var err error
ip, err = loadIgnorePatterns(params.IngorePatterns)
if err != nil {
log.Fatalf("Failed to load ignore patterns: %v", err)
}
}
cr := &crashReceiver{
store: ds,
sentry: ss,
ignore: ip,
}
mux.Handle("/", cr)
mux.HandleFunc("/ping", func(w http.ResponseWriter, req *http.Request) {
w.Write([]byte("OK"))
})
mux.Handle("/metrics", promhttp.Handler())
if params.MetricsListen != "" {
mmux := http.NewServeMux()
mmux.Handle("/metrics", promhttp.Handler())
go func() {
if err := http.ListenAndServe(params.MetricsListen, mmux); err != nil {
log.Fatalln("HTTP serve metrics:", err)
}
}()
}
if params.DSN != "" {
mux.HandleFunc("/newcrash/failure", handleFailureFn(params.DSN, filepath.Join(params.Dir, "failure_reports")))
mux.HandleFunc("/newcrash/failure", handleFailureFn(params.DSN, filepath.Join(params.Dir, "failure_reports"), ip))
}
log.SetOutput(os.Stdout)
@@ -83,7 +107,7 @@ func main() {
}
}
func handleFailureFn(dsn, failureDir string) func(w http.ResponseWriter, req *http.Request) {
func handleFailureFn(dsn, failureDir string, ignore *ignorePatterns) func(w http.ResponseWriter, req *http.Request) {
return func(w http.ResponseWriter, req *http.Request) {
result := "failure"
defer func() {
@@ -98,6 +122,11 @@ func handleFailureFn(dsn, failureDir string) func(w http.ResponseWriter, req *ht
return
}
if ignore.match(bs) {
result = "ignored"
return
}
var reports []ur.FailureReport
err = json.Unmarshal(bs, &reports)
if err != nil {
@@ -110,7 +139,7 @@ func handleFailureFn(dsn, failureDir string) func(w http.ResponseWriter, req *ht
return
}
version, err := parseVersion(reports[0].Version)
version, err := build.ParseVersion(reports[0].Version)
if err != nil {
http.Error(w, err.Error(), 400)
return
@@ -158,3 +187,42 @@ func saveFailureWithGoroutines(data ur.FailureData, failureDir string) (string,
}
return reportServer + path, nil
}
type ignorePatterns struct {
patterns []*regexp.Regexp
}
func loadIgnorePatterns(path string) (*ignorePatterns, error) {
bs, err := os.ReadFile(path)
if err != nil {
return nil, err
}
var patterns []*regexp.Regexp
for _, line := range strings.Split(string(bs), "\n") {
line = strings.TrimSpace(line)
if line == "" {
continue
}
re, err := regexp.Compile(line)
if err != nil {
return nil, err
}
patterns = append(patterns, re)
}
log.Printf("Loaded %d ignore patterns", len(patterns))
return &ignorePatterns{patterns: patterns}, nil
}
func (i *ignorePatterns) match(report []byte) bool {
if i == nil {
return false
}
for _, re := range i.patterns {
if re.Match(report) {
return true
}
}
return false
}


@@ -18,6 +18,7 @@ import (
raven "github.com/getsentry/raven-go"
"github.com/maruel/panicparse/v2/stack"
"github.com/syncthing/syncthing/lib/build"
)
const reportServer = "https://crash.syncthing.net/report/"
@@ -105,7 +106,7 @@ func parseCrashReport(path string, report []byte) (*raven.Packet, error) {
return nil, errors.New("no first line")
}
version, err := parseVersion(string(parts[0]))
version, err := build.ParseVersion(string(parts[0]))
if err != nil {
return nil, err
}
@@ -143,12 +144,12 @@ func parseCrashReport(path string, report []byte) (*raven.Packet, error) {
}
// Lock the source code loader to the version we are processing here.
if version.commit != "" {
if version.Commit != "" {
// We have a commit hash, so we know exactly which source to use
loader.LockWithVersion(version.commit)
} else if strings.HasPrefix(version.tag, "v") {
loader.LockWithVersion(version.Commit)
} else if strings.HasPrefix(version.Tag, "v") {
// Lets hope the tag is close enough
loader.LockWithVersion(version.tag)
loader.LockWithVersion(version.Tag)
} else {
// Last resort
loader.LockWithVersion("main")
@@ -215,106 +216,26 @@ func crashReportFingerprint(message string) []string {
return []string{"{{ default }}", message}
}
// syncthing v1.1.4-rc.1+30-g6aaae618-dirty-crashrep "Erbium Earthworm" (go1.12.5 darwin-amd64) jb@kvin.kastelo.net 2019-05-23 16:08:14 UTC [foo, bar]
// or, somewhere along the way the "+" in the version tag disappeared:
// syncthing v1.23.7-dev.26.gdf7b56ae.dirty-stversionextra "Fermium Flea" (go1.20.5 darwin-arm64) jb@ok.kastelo.net 2023-07-12 06:55:26 UTC [Some Wrapper, purego, stnoupgrade]
var (
longVersionRE = regexp.MustCompile(`syncthing\s+(v[^\s]+)\s+"([^"]+)"\s\(([^\s]+)\s+([^-]+)-([^)]+)\)\s+([^\s]+)[^\[]*(?:\[(.+)\])?$`)
gitExtraRE = regexp.MustCompile(`\.\d+\.g[0-9a-f]+`) // ".1.g6aaae618"
gitExtraSepRE = regexp.MustCompile(`[.-]`) // dot or dash
)
type version struct {
version string // "v1.1.4-rc.1+30-g6aaae618-dirty-crashrep"
tag string // "v1.1.4-rc.1"
commit string // "6aaae618", blank when absent
codename string // "Erbium Earthworm"
runtime string // "go1.12.5"
goos string // "darwin"
goarch string // "amd64"
builder string // "jb@kvin.kastelo.net"
extra []string // "foo", "bar"
}
func (v version) environment() string {
if v.commit != "" {
return "Development"
}
if strings.Contains(v.tag, "-rc.") {
return "Candidate"
}
if strings.Contains(v.tag, "-") {
return "Beta"
}
return "Stable"
}
func parseVersion(line string) (version, error) {
m := longVersionRE.FindStringSubmatch(line)
if len(m) == 0 {
return version{}, errors.New("unintelligeble version string")
}
v := version{
version: m[1],
codename: m[2],
runtime: m[3],
goos: m[4],
goarch: m[5],
builder: m[6],
}
// Split the version tag into tag and commit. This is old style
// v1.2.3-something.4+11-g12345678 or newer with just dots
// v1.2.3-something.4.11.g12345678 or v1.2.3-dev.11.g12345678.
parts := []string{v.version}
if strings.Contains(v.version, "+") {
parts = strings.Split(v.version, "+")
} else {
idxs := gitExtraRE.FindStringIndex(v.version)
if len(idxs) > 0 {
parts = []string{v.version[:idxs[0]], v.version[idxs[0]+1:]}
}
}
v.tag = parts[0]
if len(parts) > 1 {
fields := gitExtraSepRE.Split(parts[1], -1)
if len(fields) >= 2 && strings.HasPrefix(fields[1], "g") {
v.commit = fields[1][1:]
}
}
if len(m) >= 8 && m[7] != "" {
tags := strings.Split(m[7], ",")
for i := range tags {
tags[i] = strings.TrimSpace(tags[i])
}
v.extra = tags
}
return v, nil
}
func packet(version version, reportType string) *raven.Packet {
func packet(version build.VersionParts, reportType string) *raven.Packet {
pkt := &raven.Packet{
Platform: "go",
Release: version.tag,
Environment: version.environment(),
Release: version.Tag,
Environment: version.Environment(),
Tags: raven.Tags{
raven.Tag{Key: "version", Value: version.version},
raven.Tag{Key: "tag", Value: version.tag},
raven.Tag{Key: "codename", Value: version.codename},
raven.Tag{Key: "runtime", Value: version.runtime},
raven.Tag{Key: "goos", Value: version.goos},
raven.Tag{Key: "goarch", Value: version.goarch},
raven.Tag{Key: "builder", Value: version.builder},
raven.Tag{Key: "version", Value: version.Version},
raven.Tag{Key: "tag", Value: version.Tag},
raven.Tag{Key: "codename", Value: version.Codename},
raven.Tag{Key: "runtime", Value: version.Runtime},
raven.Tag{Key: "goos", Value: version.GOOS},
raven.Tag{Key: "goarch", Value: version.GOARCH},
raven.Tag{Key: "builder", Value: version.Builder},
raven.Tag{Key: "report_type", Value: reportType},
},
}
if version.commit != "" {
pkt.Tags = append(pkt.Tags, raven.Tag{Key: "commit", Value: version.commit})
if version.Commit != "" {
pkt.Tags = append(pkt.Tags, raven.Tag{Key: "commit", Value: version.Commit})
}
for _, tag := range version.extra {
for _, tag := range version.Extra {
pkt.Tags = append(pkt.Tags, raven.Tag{Key: tag, Value: "1"})
}
return pkt


@@ -12,66 +12,6 @@ import (
"testing"
)
func TestParseVersion(t *testing.T) {
cases := []struct {
longVersion string
parsed version
}{
{
longVersion: `syncthing v1.1.4-rc.1+30-g6aaae618-dirty-crashrep "Erbium Earthworm" (go1.12.5 darwin-amd64) jb@kvin.kastelo.net 2019-05-23 16:08:14 UTC`,
parsed: version{
version: "v1.1.4-rc.1+30-g6aaae618-dirty-crashrep",
tag: "v1.1.4-rc.1",
commit: "6aaae618",
codename: "Erbium Earthworm",
runtime: "go1.12.5",
goos: "darwin",
goarch: "amd64",
builder: "jb@kvin.kastelo.net",
},
},
{
longVersion: `syncthing v1.1.4-rc.1+30-g6aaae618-dirty-crashrep "Erbium Earthworm" (go1.12.5 darwin-amd64) jb@kvin.kastelo.net 2019-05-23 16:08:14 UTC [foo, bar]`,
parsed: version{
version: "v1.1.4-rc.1+30-g6aaae618-dirty-crashrep",
tag: "v1.1.4-rc.1",
commit: "6aaae618",
codename: "Erbium Earthworm",
runtime: "go1.12.5",
goos: "darwin",
goarch: "amd64",
builder: "jb@kvin.kastelo.net",
extra: []string{"foo", "bar"},
},
},
{
longVersion: `syncthing v1.23.7-dev.26.gdf7b56ae-stversionextra "Fermium Flea" (go1.20.5 darwin-arm64) jb@ok.kastelo.net 2023-07-12 06:55:26 UTC [Some Wrapper, purego, stnoupgrade]`,
parsed: version{
version: "v1.23.7-dev.26.gdf7b56ae-stversionextra",
tag: "v1.23.7-dev",
commit: "df7b56ae",
codename: "Fermium Flea",
runtime: "go1.20.5",
goos: "darwin",
goarch: "arm64",
builder: "jb@ok.kastelo.net",
extra: []string{"Some Wrapper", "purego", "stnoupgrade"},
},
},
}
for _, tc := range cases {
v, err := parseVersion(tc.longVersion)
if err != nil {
t.Errorf("%s\nerror: %v\n", tc.longVersion, err)
continue
}
if fmt.Sprint(v) != fmt.Sprint(tc.parsed) {
t.Errorf("%s\nA: %v\nE: %v\n", tc.longVersion, v, tc.parsed)
}
}
}
func TestParseReport(t *testing.T) {
bs, err := os.ReadFile("_testdata/panic.log")
if err != nil {


@@ -71,7 +71,6 @@ func (l *githubSourceCodeLoader) Load(filename string, line, context int) ([][]b
url := urlPrefix + l.version + filename[idx:]
resp, err := l.client.Get(url)
if err != nil {
fmt.Println("Loading source:", err)
return nil, 0


@@ -12,11 +12,16 @@ import (
"net/http"
"path"
"strings"
"sync"
)
type crashReceiver struct {
store *diskStore
sentry *sentryService
ignore *ignorePatterns
ignoredMut sync.RWMutex
ignored map[string]struct{}
}
func (r *crashReceiver) ServeHTTP(w http.ResponseWriter, req *http.Request) {
@@ -64,6 +69,12 @@ func (r *crashReceiver) serveGet(reportID string, w http.ResponseWriter, _ *http
// serveHead responds to HEAD requests by checking if the named report
// already exists in the system.
func (r *crashReceiver) serveHead(reportID string, w http.ResponseWriter, _ *http.Request) {
r.ignoredMut.RLock()
_, ignored := r.ignored[reportID]
r.ignoredMut.RUnlock()
if ignored {
return // found
}
if !r.store.Exists(reportID) {
http.Error(w, "Not found", http.StatusNotFound)
}
@@ -76,6 +87,15 @@ func (r *crashReceiver) servePut(reportID string, w http.ResponseWriter, req *ht
metricCrashReportsTotal.WithLabelValues(result).Inc()
}()
r.ignoredMut.RLock()
_, ignored := r.ignored[reportID]
r.ignoredMut.RUnlock()
if ignored {
result = "ignored_cached"
io.Copy(io.Discard, req.Body)
return // found
}
// Read at most maxRequestSize of report data.
log.Println("Receiving report", reportID)
lr := io.LimitReader(req.Body, maxRequestSize)
@@ -86,6 +106,17 @@ func (r *crashReceiver) servePut(reportID string, w http.ResponseWriter, req *ht
return
}
if r.ignore.match(bs) {
r.ignoredMut.Lock()
if r.ignored == nil {
r.ignored = make(map[string]struct{})
}
r.ignored[reportID] = struct{}{}
r.ignoredMut.Unlock()
result = "ignored"
return
}
result = "success"
// Store the report
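The additions to the crash receiver above follow a simple pattern: check a lazily built cache of ignored report IDs under a read lock, and populate it under the write lock once a report body matches an ignore pattern. A self-contained sketch of that pattern, with the type simplified for illustration:

```go
package main

import (
	"fmt"
	"sync"
)

// ignoreCache remembers report IDs whose content matched an ignore
// pattern, so repeated HEADs and PUTs can be answered without re-reading
// the body. It mirrors the fields added to crashReceiver above.
type ignoreCache struct {
	mut     sync.RWMutex
	ignored map[string]struct{}
}

func (c *ignoreCache) isIgnored(id string) bool {
	c.mut.RLock()
	defer c.mut.RUnlock()
	_, ok := c.ignored[id]
	return ok
}

func (c *ignoreCache) markIgnored(id string) {
	c.mut.Lock()
	defer c.mut.Unlock()
	if c.ignored == nil { // lazily initialised, as in the diff
		c.ignored = make(map[string]struct{})
	}
	c.ignored[id] = struct{}{}
}

func main() {
	var c ignoreCache
	fmt.Println(c.isIgnored("abc")) // false
	c.markIgnored("abc")
	fmt.Println(c.isIgnored("abc")) // true
}
```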


@@ -9,14 +9,13 @@ package main
import (
"bytes"
"compress/gzip"
"crypto/sha256"
"fmt"
"net"
"net/http"
"os"
"path/filepath"
"time"
"github.com/syncthing/syncthing/lib/sha256"
)
// userIDFor returns a string we can use as the user ID for the purpose of
@@ -53,5 +52,5 @@ func compressAndWrite(bs []byte, fullPath string) error {
gw.Close()
// Create an output file with the compressed report
return os.WriteFile(fullPath, buf.Bytes(), 0644)
return os.WriteFile(fullPath, buf.Bytes(), 0o644)
}


@@ -21,4 +21,3 @@ See `relaypoolsrv -help` for configuration options.
[oschwald/geoip2-golang](https://github.com/oschwald/geoip2-golang), [oschwald/maxminddb-golang](https://github.com/oschwald/maxminddb-golang), Copyright (C) 2015 [Gregory J. Oschwald](mailto:oschwald@gmail.com).
[lib/pq](https://github.com/lib/pq), Copyright (C) 2011-2013 'pq' Contributors. Portions Copyright (C) 2011 Blake Mizerany.


@@ -4,7 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
//go:generate go run ../../../script/genassets.go -o gui.files.go ../gui
//go:generate go run ../../../../script/genassets.go -o gui.files.go ../gui
// Package auto contains auto generated files for web assets.
package auto


@@ -259,7 +259,7 @@
return a.value > b.value ? 1 : -1;
}
$http.get("/endpoint").then(function(response) {
$http.get("/endpoint/full").then(function(response) {
$scope.relays = response.data.relays;
angular.forEach($scope.relays, function(relay) {
@@ -338,7 +338,7 @@
relay.showMarker = function() {
relay.marker.openPopup();
}
relay.hideMarker = function() {
relay.marker.closePopup();
}
@@ -347,7 +347,7 @@
function addCircleToMap(relay) {
console.log(relay.location.latitude)
L.circle([relay.location.latitude, relay.location.longitude],
L.circle([relay.location.latitude, relay.location.longitude],
{
radius: ((relay.stats.bytesProxied * 100) / $scope.totals.bytesProxied) * 10000,
color: "FF0000",


@@ -21,15 +21,13 @@ import (
"time"
lru "github.com/hashicorp/golang-lru/v2"
"github.com/syncthing/syncthing/lib/httpcache"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/oschwald/geoip2-golang"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
"github.com/syncthing/syncthing/cmd/strelaypoolsrv/auto"
"github.com/syncthing/syncthing/cmd/infra/strelaypoolsrv/auto"
"github.com/syncthing/syncthing/lib/assets"
"github.com/syncthing/syncthing/lib/rand"
_ "github.com/syncthing/syncthing/lib/automaxprocs"
"github.com/syncthing/syncthing/lib/geoip"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/relay/client"
"github.com/syncthing/syncthing/lib/sync"
"github.com/syncthing/syncthing/lib/tlsutil"
@@ -51,6 +49,10 @@ type relay struct {
StatsRetrieved time.Time `json:"statsRetrieved"`
}
type relayShort struct {
URL string `json:"url"`
}
type stats struct {
StartTime time.Time `json:"startTime"`
UptimeSeconds int `json:"uptimeSeconds"`
@@ -95,16 +97,18 @@ var (
testCert tls.Certificate
knownRelaysFile = filepath.Join(os.TempDir(), "strelaypoolsrv_known_relays")
listen = ":80"
metricsListen = ":8081"
dir string
evictionTime = time.Hour
debug bool
permRelaysFile string
ipHeader string
geoipPath string
proto string
statsRefresh = time.Minute
requestQueueLen = 64
requestProcessors = 8
geoipLicenseKey = os.Getenv("GEOIP_LICENSE_KEY")
geoipAccountID, _ = strconv.Atoi(os.Getenv("GEOIP_ACCOUNT_ID"))
requests chan request
@@ -124,40 +128,45 @@ func main() {
log.SetFlags(log.Lshortfile)
flag.StringVar(&listen, "listen", listen, "Listen address")
flag.StringVar(&metricsListen, "metrics-listen", metricsListen, "Metrics listen address")
flag.StringVar(&dir, "keys", dir, "Directory where http-cert.pem and http-key.pem is stored for TLS listening")
flag.BoolVar(&debug, "debug", debug, "Enable debug output")
flag.DurationVar(&evictionTime, "eviction", evictionTime, "After how long the relay is evicted")
flag.StringVar(&permRelaysFile, "perm-relays", "", "Path to list of permanent relays")
flag.StringVar(&knownRelaysFile, "known-relays", knownRelaysFile, "Path to list of current relays")
flag.StringVar(&ipHeader, "ip-header", "", "Name of header which holds the client's ip:port. Only meaningful when running behind a reverse proxy.")
flag.StringVar(&geoipPath, "geoip", "GeoLite2-City.mmdb", "Path to GeoLite2-City database")
flag.StringVar(&proto, "protocol", "tcp", "Protocol used for listening. 'tcp' for IPv4 and IPv6, 'tcp4' for IPv4, 'tcp6' for IPv6")
flag.DurationVar(&statsRefresh, "stats-refresh", statsRefresh, "Interval at which to refresh relay stats")
flag.IntVar(&requestQueueLen, "request-queue", requestQueueLen, "Queue length for incoming test requests")
flag.IntVar(&requestProcessors, "request-processors", requestProcessors, "Number of request processor routines")
flag.StringVar(&geoipLicenseKey, "geoip-license-key", geoipLicenseKey, "License key for GeoIP database")
flag.Parse()
requests = make(chan request, requestQueueLen)
geoip, err := geoip.NewGeoLite2CityProvider(context.Background(), geoipAccountID, geoipLicenseKey, os.TempDir())
if err != nil {
log.Fatalln("Failed to create GeoIP provider:", err)
}
go geoip.Serve(context.TODO())
var listener net.Listener
var err error
if permRelaysFile != "" {
permanentRelays = loadRelays(permRelaysFile)
permanentRelays = loadRelays(permRelaysFile, geoip)
}
testCert = createTestCertificate()
for i := 0; i < requestProcessors; i++ {
go requestProcessor()
go requestProcessor(geoip)
}
// Load relays from cache in the background.
// Load them in a serial fashion to make sure any genuine requests
// are not dropped.
go func() {
for _, relay := range loadRelays(knownRelaysFile) {
for _, relay := range loadRelays(knownRelaysFile, geoip) {
resultChan := make(chan result)
requests <- request{relay, resultChan, nil}
result := <-resultChan
@@ -213,15 +222,40 @@ func main() {
log.Fatalln("listen:", err)
}
handler := http.NewServeMux()
handler.HandleFunc("/", handleAssets)
handler.Handle("/endpoint", httpcache.SinglePath(http.HandlerFunc(handleRequest), 15*time.Second))
handler.HandleFunc("/metrics", handleMetrics)
if metricsListen != "" {
mmux := http.NewServeMux()
mmux.HandleFunc("/metrics", handleMetrics)
go func() {
if err := http.ListenAndServe(metricsListen, mmux); err != nil {
log.Fatalln("HTTP serve metrics:", err)
}
}()
}
getMux := http.NewServeMux()
getMux.HandleFunc("/", handleAssets)
getMux.HandleFunc("/endpoint", withAPIMetrics(handleEndpointShort))
getMux.HandleFunc("/endpoint/full", withAPIMetrics(handleEndpointFull))
postMux := http.NewServeMux()
postMux.HandleFunc("/endpoint", withAPIMetrics(handleRegister))
handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
switch r.Method {
case http.MethodGet, http.MethodHead, http.MethodOptions:
getMux.ServeHTTP(w, r)
case http.MethodPost:
postMux.ServeHTTP(w, r)
default:
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
}
})
srv := http.Server{
Handler: handler,
ReadTimeout: 10 * time.Second,
}
srv.SetKeepAlivesEnabled(false)
err = srv.Serve(listener)
if err != nil {
@@ -255,39 +289,24 @@ func handleAssets(w http.ResponseWriter, r *http.Request) {
assets.Serve(w, r, as)
}
func handleRequest(w http.ResponseWriter, r *http.Request) {
timer := prometheus.NewTimer(apiRequestsSeconds.WithLabelValues(r.Method))
w = NewLoggingResponseWriter(w)
defer func() {
timer.ObserveDuration()
lw := w.(*loggingResponseWriter)
apiRequestsTotal.WithLabelValues(r.Method, strconv.Itoa(lw.statusCode)).Inc()
}()
if ipHeader != "" {
hdr := r.Header.Get(ipHeader)
fields := strings.Split(hdr, ",")
if len(fields) > 0 {
r.RemoteAddr = strings.TrimSpace(fields[len(fields)-1])
}
}
w.Header().Set("Access-Control-Allow-Origin", "*")
switch r.Method {
case "GET":
handleGetRequest(w, r)
case "POST":
handlePostRequest(w, r)
default:
if debug {
log.Println("Unhandled HTTP method", r.Method)
}
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
func withAPIMetrics(next http.HandlerFunc) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
timer := prometheus.NewTimer(apiRequestsSeconds.WithLabelValues(r.Method))
w = NewLoggingResponseWriter(w)
defer func() {
timer.ObserveDuration()
lw := w.(*loggingResponseWriter)
apiRequestsTotal.WithLabelValues(r.Method, strconv.Itoa(lw.statusCode)).Inc()
}()
next(w, r)
}
}
func handleGetRequest(rw http.ResponseWriter, r *http.Request) {
// handleEndpointFull returns the relay list with full metadata and
// statistics. Large, and expensive.
func handleEndpointFull(rw http.ResponseWriter, r *http.Request) {
rw.Header().Set("Content-Type", "application/json; charset=utf-8")
rw.Header().Set("Access-Control-Allow-Origin", "*")
mut.RLock()
relays := make([]*relay, len(permanentRelays)+len(knownRelays))
@@ -295,17 +314,38 @@ func handleGetRequest(rw http.ResponseWriter, r *http.Request) {
copy(relays[n:], knownRelays)
mut.RUnlock()
// Shuffle
rand.Shuffle(relays)
_ = json.NewEncoder(rw).Encode(map[string][]*relay{
"relays": relays,
})
}
func handlePostRequest(w http.ResponseWriter, r *http.Request) {
// handleEndpointShort returns the relay list with only the URL.
func handleEndpointShort(rw http.ResponseWriter, r *http.Request) {
rw.Header().Set("Content-Type", "application/json; charset=utf-8")
rw.Header().Set("Access-Control-Allow-Origin", "*")
mut.RLock()
relays := make([]relayShort, 0, len(permanentRelays)+len(knownRelays))
for _, r := range append(permanentRelays, knownRelays...) {
relays = append(relays, relayShort{URL: slimURL(r.URL)})
}
mut.RUnlock()
_ = json.NewEncoder(rw).Encode(map[string][]relayShort{
"relays": relays,
})
}
func handleRegister(w http.ResponseWriter, r *http.Request) {
// Get the IP address of the client
rhost := r.RemoteAddr
if ipHeader != "" {
hdr := r.Header.Get(ipHeader)
fields := strings.Split(hdr, ",")
if len(fields) > 0 {
rhost = strings.TrimSpace(fields[len(fields)-1])
}
}
if host, _, err := net.SplitHostPort(rhost); err == nil {
rhost = host
}
@@ -425,19 +465,19 @@ func handlePostRequest(w http.ResponseWriter, r *http.Request) {
}
}
func requestProcessor() {
func requestProcessor(geoip *geoip.Provider) {
for request := range requests {
if request.queueTimer != nil {
request.queueTimer.ObserveDuration()
}
timer := prometheus.NewTimer(relayTestActionsSeconds.WithLabelValues("test"))
handleRelayTest(request)
handleRelayTest(request, geoip)
timer.ObserveDuration()
}
}
func handleRelayTest(request request) {
func handleRelayTest(request request, geoip *geoip.Provider) {
if debug {
log.Println("Request for", request.relay)
}
@@ -450,7 +490,7 @@ func handleRelayTest(request request) {
}
stats := fetchStats(request.relay)
location := getLocation(request.relay.uri.Host)
location := getLocation(request.relay.uri.Host, geoip)
mut.Lock()
if stats != nil {
@@ -523,7 +563,7 @@ func evict(relay *relay) func() {
}
}
func loadRelays(file string) []*relay {
func loadRelays(file string, geoip *geoip.Provider) []*relay {
content, err := os.ReadFile(file)
if err != nil {
log.Println("Failed to load relays: " + err.Error())
@@ -547,7 +587,7 @@ func loadRelays(file string) []*relay {
relays = append(relays, &relay{
URL: line,
Location: getLocation(uri.Host),
Location: getLocation(uri.Host, geoip),
uri: uri,
})
if debug {
@@ -580,21 +620,16 @@ func createTestCertificate() tls.Certificate {
return cert
}
func getLocation(host string) location {
func getLocation(host string, geoip *geoip.Provider) location {
timer := prometheus.NewTimer(locationLookupSeconds)
defer timer.ObserveDuration()
db, err := geoip2.Open(geoipPath)
if err != nil {
return location{}
}
defer db.Close()
addr, err := net.ResolveTCPAddr("tcp", host)
if err != nil {
return location{}
}
city, err := db.City(addr.IP)
city, err := geoip.City(addr.IP)
if err != nil {
return location{}
}
@@ -660,3 +695,16 @@ func (b *errorTracker) IsBlocked(host string) bool {
}
return false
}
func slimURL(u string) string {
p, err := url.Parse(u)
if err != nil {
return u
}
newQuery := url.Values{}
if id := p.Query().Get("id"); id != "" {
newQuery.Set("id", id)
}
p.RawQuery = newQuery.Encode()
return p.String()
}


@@ -42,7 +42,7 @@ func TestHandleGetRequest(t *testing.T) {
w := httptest.NewRecorder()
w.Body = new(bytes.Buffer)
handleGetRequest(w, httptest.NewRequest("GET", "/", nil))
handleEndpointFull(w, httptest.NewRequest("GET", "/", nil))
result := make(map[string][]*relay)
err := json.NewDecoder(w.Body).Decode(&result)
@@ -92,3 +92,18 @@ func TestCanonicalizeQueryValues(t *testing.T) {
t.Errorf("expected %q, got %q", exp, str)
}
}
func TestSlimURL(t *testing.T) {
cases := []struct {
in, out string
}{
{"http://example.com/", "http://example.com/"},
{"relay://192.0.2.42:22067/?globalLimitBps=0&id=EIC6B3M-EIC6B3M-EIC6B3M-EIC6B3M-EIC6B3M-EIC6B3M-EIC6B3M-EIC6B3M&networkTimeout=2m0s&pingInterval=1m0s&providedBy=Test&sessionLimitBps=0&statusAddr=%3A22070", "relay://192.0.2.42:22067/?id=EIC6B3M-EIC6B3M-EIC6B3M-EIC6B3M-EIC6B3M-EIC6B3M-EIC6B3M-EIC6B3M"},
}
for _, c := range cases {
if got := slimURL(c.in); got != c.out {
t.Errorf("expected %q, got %q", c.out, got)
}
}
}


@@ -6,27 +6,12 @@ import (
"encoding/json"
"net"
"net/http"
"os"
"time"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/collectors"
"github.com/syncthing/syncthing/lib/sync"
)
func init() {
processCollectorOpts := collectors.ProcessCollectorOpts{
Namespace: "syncthing_relaypoolsrv",
PidFn: func() (int, error) {
return os.Getpid(), nil
},
}
prometheus.MustRegister(
collectors.NewProcessCollector(processCollectorOpts),
)
}
var (
statusClient = http.Client{
Timeout: 5 * time.Second,


@@ -0,0 +1,351 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"context"
"encoding/json"
"fmt"
"io"
"log/slog"
"net"
"net/http"
"os"
"regexp"
"sort"
"strconv"
"strings"
"sync"
"time"
"github.com/alecthomas/kong"
"github.com/prometheus/client_golang/prometheus/promhttp"
_ "github.com/syncthing/syncthing/lib/automaxprocs"
"github.com/syncthing/syncthing/lib/httpcache"
"github.com/syncthing/syncthing/lib/upgrade"
)
type cli struct {
Listen string `default:":8080" help:"Listen address"`
MetricsListen string `default:":8082" help:"Listen address for metrics"`
URL string `short:"u" default:"https://api.github.com/repos/syncthing/syncthing/releases?per_page=25" help:"GitHub releases url"`
Forward []string `short:"f" help:"Forwarded pages, format: /path->https://example/com/url"`
CacheTime time.Duration `default:"15m" help:"Cache time"`
}
func main() {
var params cli
kong.Parse(&params)
slog.SetDefault(slog.New(slog.NewTextHandler(os.Stdout, &slog.HandlerOptions{
Level: slog.LevelInfo,
})))
if err := server(&params); err != nil {
fmt.Printf("Error: %v\n", err)
os.Exit(1)
}
}
func server(params *cli) error {
if params.MetricsListen != "" {
mux := http.NewServeMux()
mux.Handle("/metrics", promhttp.Handler())
metricsListen, err := net.Listen("tcp", params.MetricsListen)
if err != nil {
return fmt.Errorf("metrics: %w", err)
}
slog.Info("Metrics listener started", "addr", params.MetricsListen)
go func() {
if err := http.Serve(metricsListen, mux); err != nil {
slog.Warn("Metrics server returned", "error", err)
}
}()
}
cache := &cachedReleases{url: params.URL}
if err := cache.Update(context.Background()); err != nil {
return fmt.Errorf("initial cache update: %w", err)
} else {
slog.Info("Initial cache update done")
}
go func() {
for range time.NewTicker(params.CacheTime).C {
slog.Info("Refreshing cached releases", "url", params.URL)
if err := cache.Update(context.Background()); err != nil {
slog.Error("Failed to refresh cached releases", "url", params.URL, "error", err)
}
}
}()
ghRels := &githubReleases{cache: cache}
mux := http.NewServeMux()
mux.HandleFunc("/ping", ghRels.servePing)
mux.HandleFunc("/meta.json", ghRels.serveReleases)
for _, fwd := range params.Forward {
path, url, ok := strings.Cut(fwd, "->")
if !ok {
return fmt.Errorf("invalid forward: %q", fwd)
}
slog.Info("Forwarding", "from", path, "to", url)
name := strings.ReplaceAll(path, "/", "_")
mux.Handle(path, httpcache.SinglePath(&proxy{name: name, url: url}, params.CacheTime))
}
srv := &http.Server{
Addr: params.Listen,
Handler: mux,
ReadTimeout: 5 * time.Second,
WriteTimeout: 10 * time.Second,
}
srv.SetKeepAlivesEnabled(false)
srvListener, err := net.Listen("tcp", params.Listen)
if err != nil {
return fmt.Errorf("listen: %w", err)
}
slog.Info("Main listener started", "addr", params.Listen)
return srv.Serve(srvListener)
}
type githubReleases struct {
cache *cachedReleases
}
func (p *githubReleases) servePing(w http.ResponseWriter, req *http.Request) {
rels := p.cache.Releases()
if len(rels) == 0 {
http.Error(w, "No releases available", http.StatusServiceUnavailable)
return
}
w.Header().Set("Syncthing-Num-Releases", strconv.Itoa(len(rels)))
w.WriteHeader(http.StatusOK)
}
func (p *githubReleases) serveReleases(w http.ResponseWriter, req *http.Request) {
rels := p.cache.Releases()
ua := req.Header.Get("User-Agent")
osv := req.Header.Get("Syncthing-Os-Version")
if ua != "" && osv != "" {
// We should determine the compatibility of the releases.
rels = filterForCompabitility(rels, ua, osv)
} else {
metricFilterCalls.WithLabelValues("no-ua-or-osversion").Inc()
}
rels = filterForLatest(rels)
w.Header().Set("Content-Type", "application/json; charset=utf-8")
w.Header().Set("Access-Control-Allow-Origin", "*")
w.Header().Set("Access-Control-Allow-Methods", "GET")
w.Header().Set("Cache-Control", "public, max-age=900")
w.Header().Set("Vary", "User-Agent, Syncthing-Os-Version")
_ = json.NewEncoder(w).Encode(rels)
metricUpgradeChecks.Inc()
}
type proxy struct {
name string
url string
}
func (p *proxy) ServeHTTP(w http.ResponseWriter, req *http.Request) {
req, err := http.NewRequestWithContext(req.Context(), http.MethodGet, p.url, nil)
if err != nil {
metricHTTPRequests.WithLabelValues(p.name, "error").Inc()
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
resp, err := http.DefaultClient.Do(req)
if err != nil {
metricHTTPRequests.WithLabelValues(p.name, "error").Inc()
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
defer resp.Body.Close()
metricHTTPRequests.WithLabelValues(p.name, "success").Inc()
ct := resp.Header.Get("Content-Type")
w.Header().Set("Content-Type", ct)
if resp.StatusCode == http.StatusOK {
w.Header().Set("Cache-Control", "public, max-age=900")
w.Header().Set("Access-Control-Allow-Origin", "*")
w.Header().Set("Access-Control-Allow-Methods", "GET")
}
w.WriteHeader(resp.StatusCode)
if strings.HasPrefix(ct, "application/json") {
// Special JSON handling; clean it up a bit.
var v interface{}
if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
_ = json.NewEncoder(w).Encode(v)
} else {
_, _ = io.Copy(w, resp.Body)
}
}
// filterForLatest returns the latest stable and prerelease only. If the
// stable version is newer (comes first in the list) there is no need to go
// looking for a prerelease at all.
func filterForLatest(rels []upgrade.Release) []upgrade.Release {
var filtered []upgrade.Release
var havePre bool
for _, rel := range rels {
if !rel.Prerelease {
// We found a stable version, we're good now.
filtered = append(filtered, rel)
break
}
if rel.Prerelease && !havePre {
// We remember the first prerelease we find.
filtered = append(filtered, rel)
havePre = true
}
}
return filtered
}
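A minimal sketch of the selection rule implemented by filterForLatest, using a stripped-down stand-in for upgrade.Release (only the two fields the filter looks at; the version numbers are made up):

```go
package main

import "fmt"

// release is an illustrative stand-in for upgrade.Release.
type release struct {
	Tag        string
	Prerelease bool
}

// latest mirrors filterForLatest above: the input is newest-first, and we
// keep the first stable release plus, optionally, one newer prerelease.
func latest(rels []release) []release {
	var out []release
	havePre := false
	for _, r := range rels {
		if !r.Prerelease {
			out = append(out, r)
			break // a stable release ends the search
		}
		if !havePre {
			out = append(out, r)
			havePre = true
		}
	}
	return out
}

func main() {
	rels := []release{
		{"v1.28.1-rc.2", true},
		{"v1.28.1-rc.1", true},
		{"v1.28.0", false},
		{"v1.27.12", false},
	}
	fmt.Println(latest(rels)) // [{v1.28.1-rc.2 true} {v1.28.0 false}]
}
```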
var userAgentOSArchExp = regexp.MustCompile(`^syncthing.*\(.+ (\w+)-(\w+)\)$`)
func filterForCompabitility(rels []upgrade.Release, ua, osv string) []upgrade.Release {
osArch := userAgentOSArchExp.FindStringSubmatch(ua)
if len(osArch) != 3 {
metricFilterCalls.WithLabelValues("bad-os-arch").Inc()
return rels
}
os := osArch[1]
var filtered []upgrade.Release
for _, rel := range rels {
if rel.Compatibility == nil {
// No requirements means it's compatible with everything.
filtered = append(filtered, rel)
continue
}
req, ok := rel.Compatibility.Requirements[os]
if !ok {
// No entry for the current OS means it's compatible.
filtered = append(filtered, rel)
continue
}
if upgrade.CompareVersions(osv, req) >= 0 {
filtered = append(filtered, rel)
continue
}
}
if len(filtered) != len(rels) {
metricFilterCalls.WithLabelValues("filtered").Inc()
} else {
metricFilterCalls.WithLabelValues("unchanged").Inc()
}
return filtered
}
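The OS/arch extraction relies on the User-Agent ending in an "os-arch)" pair. A standalone sketch with a made-up User-Agent string shaped the way the expression expects:

```go
package main

import (
	"fmt"
	"regexp"
)

// Same expression as userAgentOSArchExp above, shown standalone. The
// User-Agent below is invented for illustration.
var uaExp = regexp.MustCompile(`^syncthing.*\(.+ (\w+)-(\w+)\)$`)

func main() {
	ua := "syncthing v1.27.12 (go1.22.7 linux-amd64)"
	if m := uaExp.FindStringSubmatch(ua); len(m) == 3 {
		fmt.Println("os:", m[1], "arch:", m[2]) // os: linux arch: amd64
	}
}
```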
type cachedReleases struct {
url string
mut sync.RWMutex
current []upgrade.Release
}
func (c *cachedReleases) Releases() []upgrade.Release {
c.mut.RLock()
defer c.mut.RUnlock()
return c.current
}
func (c *cachedReleases) Update(ctx context.Context) error {
rels, err := fetchGithubReleases(ctx, c.url)
if err != nil {
return err
}
c.mut.Lock()
c.current = rels
c.mut.Unlock()
return nil
}
func fetchGithubReleases(ctx context.Context, url string) ([]upgrade.Release, error) {
req, err := http.NewRequestWithContext(context.TODO(), http.MethodGet, url, nil)
if err != nil {
metricHTTPRequests.WithLabelValues("github-releases", "error").Inc()
return nil, err
}
resp, err := http.DefaultClient.Do(req)
if err != nil {
metricHTTPRequests.WithLabelValues("github-releases", "error").Inc()
return nil, err
}
defer resp.Body.Close()
var rels []upgrade.Release
if err := json.NewDecoder(resp.Body).Decode(&rels); err != nil {
metricHTTPRequests.WithLabelValues("github-releases", "error").Inc()
return nil, err
}
metricHTTPRequests.WithLabelValues("github-releases", "success").Inc()
// Move the URL used for browser downloads to the URL field, and remove
// the browser URL field. This avoids going via the GitHub API for
// downloads, since Syncthing uses the URL field.
for _, rel := range rels {
for j, asset := range rel.Assets {
rel.Assets[j].URL = asset.BrowserURL
rel.Assets[j].BrowserURL = ""
}
}
addReleaseCompatibility(ctx, rels)
sort.Sort(upgrade.SortByRelease(rels))
return rels, nil
}
func addReleaseCompatibility(ctx context.Context, rels []upgrade.Release) {
for i := range rels {
rel := &rels[i]
for i, asset := range rel.Assets {
if asset.Name != "compat.json" {
continue
}
// Load compat.json into the Compatibility field
req, err := http.NewRequestWithContext(ctx, http.MethodGet, asset.URL, nil)
if err != nil {
metricHTTPRequests.WithLabelValues("compat-json", "error").Inc()
break
}
resp, err := http.DefaultClient.Do(req)
if err != nil {
metricHTTPRequests.WithLabelValues("compat-json", "error").Inc()
break
}
if resp.StatusCode != http.StatusOK {
metricHTTPRequests.WithLabelValues("compat-json", "error").Inc()
resp.Body.Close()
break
}
_ = json.NewDecoder(io.LimitReader(resp.Body, 10<<10)).Decode(&rel.Compatibility)
metricHTTPRequests.WithLabelValues("compat-json", "success").Inc()
resp.Body.Close()
// Remove compat.json from the asset list since it's been processed
rel.Assets = append(rel.Assets[:i], rel.Assets[i+1:]...)
break
}
}
}


@@ -0,0 +1,30 @@
// Copyright (C) 2024 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
)
var (
metricUpgradeChecks = promauto.NewCounter(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "upgrade",
Name: "metadata_requests",
})
metricFilterCalls = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "upgrade",
Name: "filter_calls",
}, []string{"result"})
metricHTTPRequests = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "upgrade",
Name: "http_requests",
}, []string{"target", "result"})
)


@@ -8,21 +8,22 @@ package main
import (
"log"
"log/slog"
"os"
"github.com/alecthomas/kong"
"github.com/syncthing/syncthing/cmd/ursrv/aggregate"
"github.com/syncthing/syncthing/cmd/ursrv/serve"
"github.com/syncthing/syncthing/cmd/infra/ursrv/serve"
_ "github.com/syncthing/syncthing/lib/automaxprocs"
)
type CLI struct {
Serve serve.CLI `cmd:"" default:""`
Aggregate aggregate.CLI `cmd:""`
Serve serve.CLI `cmd:"" default:""`
}
func main() {
log.SetFlags(log.Ltime | log.Ldate | log.Lshortfile)
log.SetOutput(os.Stdout)
slog.SetDefault(slog.New(slog.NewTextHandler(os.Stdout, &slog.HandlerOptions{
Level: slog.LevelInfo,
})))
var cli CLI
ctx := kong.Parse(&cli)


@@ -0,0 +1,46 @@
// Copyright (C) 2023 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package serve
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
)
var (
metricReportsTotal = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "ursrv_v2",
Name: "incoming_reports_total",
}, []string{"result"})
metricsCollectsTotal = promauto.NewCounter(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "ursrv_v2",
Name: "collects_total",
})
metricsCollectSecondsTotal = promauto.NewCounter(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "ursrv_v2",
Name: "collect_seconds_total",
})
metricsCollectSecondsLast = promauto.NewGauge(prometheus.GaugeOpts{
Namespace: "syncthing",
Subsystem: "ursrv_v2",
Name: "collect_seconds_last",
})
metricsWriteSecondsLast = promauto.NewGauge(prometheus.GaugeOpts{
Namespace: "syncthing",
Subsystem: "ursrv_v2",
Name: "write_seconds_last",
})
)
func init() {
metricReportsTotal.WithLabelValues("fail")
metricReportsTotal.WithLabelValues("replace")
metricReportsTotal.WithLabelValues("accept")
}


@@ -0,0 +1,314 @@
// Copyright (C) 2024 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package serve
import (
"reflect"
"slices"
"strconv"
"strings"
"sync"
"time"
"github.com/prometheus/client_golang/prometheus"
"github.com/syncthing/syncthing/lib/ur/contract"
)
const namePrefix = "syncthing_usage_"
type metricsSet struct {
srv *server
gauges map[string]prometheus.Gauge
gaugeVecs map[string]*prometheus.GaugeVec
gaugeVecLabels map[string][]string
summaries map[string]*metricSummary
collectMut sync.Mutex
collectCutoff time.Duration
}
func newMetricsSet(srv *server) *metricsSet {
s := &metricsSet{
srv: srv,
gauges: make(map[string]prometheus.Gauge),
gaugeVecs: make(map[string]*prometheus.GaugeVec),
gaugeVecLabels: make(map[string][]string),
summaries: make(map[string]*metricSummary),
collectCutoff: -24 * time.Hour,
}
var initForType func(reflect.Type)
initForType = func(t reflect.Type) {
for i := 0; i < t.NumField(); i++ {
field := t.Field(i)
if field.Type.Kind() == reflect.Struct {
initForType(field.Type)
continue
}
name, typ, label := fieldNameTypeLabel(field)
sname, labels := nameConstLabels(name)
switch typ {
case "gauge":
s.gauges[name] = prometheus.NewGauge(prometheus.GaugeOpts{
Name: namePrefix + sname,
ConstLabels: labels,
})
case "summary":
s.summaries[name] = newMetricSummary(namePrefix+sname, nil, labels)
case "gaugeVec":
s.gaugeVecLabels[name] = append(s.gaugeVecLabels[name], label)
case "summaryVec":
s.summaries[name] = newMetricSummary(namePrefix+sname, []string{label}, labels)
}
}
}
initForType(reflect.ValueOf(contract.Report{}).Type())
for name, labels := range s.gaugeVecLabels {
s.gaugeVecs[name] = prometheus.NewGaugeVec(prometheus.GaugeOpts{
Name: namePrefix + name,
}, labels)
}
return s
}
func fieldNameTypeLabel(rf reflect.StructField) (string, string, string) {
metric := rf.Tag.Get("metric")
name, typ, ok := strings.Cut(metric, ",")
if !ok {
return "", "", ""
}
gv, label, ok := strings.Cut(typ, ":")
if ok {
typ = gv
}
return name, typ, label
}
func nameConstLabels(name string) (string, prometheus.Labels) {
if name == "-" {
return "", nil
}
name, labels, ok := strings.Cut(name, "{")
if !ok {
return name, nil
}
lls := strings.Split(labels[:len(labels)-1], ",")
m := make(map[string]string)
for _, l := range lls {
k, v, _ := strings.Cut(l, "=")
m[k] = v
}
return name, m
}
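The two helpers above turn a `metric:"..."` struct tag into a metric name, a metric type, an optional vec label, and optional constant labels. A standalone sketch operating on the raw tag string, using a hypothetical tag value (not one taken from contract.Report):

```go
package main

import (
	"fmt"
	"strings"
)

// Standalone copies of the parsers above for a hypothetical tag.
func nameTypeLabel(metric string) (name, typ, label string) {
	name, typ, ok := strings.Cut(metric, ",")
	if !ok {
		return "", "", ""
	}
	if gv, l, ok := strings.Cut(typ, ":"); ok {
		typ, label = gv, l
	}
	return name, typ, label
}

func constLabels(name string) (string, map[string]string) {
	name, labels, ok := strings.Cut(name, "{")
	if !ok {
		return name, nil
	}
	m := make(map[string]string)
	for _, l := range strings.Split(labels[:len(labels)-1], ",") {
		k, v, _ := strings.Cut(l, "=")
		m[k] = v
	}
	return name, m
}

func main() {
	name, typ, label := nameTypeLabel("uptime{unit=seconds},summary")
	fmt.Printf("%q %q %q\n", name, typ, label) // "uptime{unit=seconds}" "summary" ""
	sname, consts := constLabels(name)
	fmt.Println(sname, consts) // uptime map[unit:seconds]
}
```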
func (s *metricsSet) addReport(r *contract.Report) {
gaugeVecs := make(map[string][]string)
s.addReportStruct(reflect.ValueOf(r).Elem(), gaugeVecs)
for name, lv := range gaugeVecs {
s.gaugeVecs[name].WithLabelValues(lv...).Add(1)
}
}
func (s *metricsSet) addReportStruct(v reflect.Value, gaugeVecs map[string][]string) {
t := v.Type()
for i := 0; i < v.NumField(); i++ {
field := v.Field(i)
if field.Kind() == reflect.Struct {
s.addReportStruct(field, gaugeVecs)
continue
}
name, typ, label := fieldNameTypeLabel(t.Field(i))
switch typ {
case "gauge":
switch v := field.Interface().(type) {
case int:
s.gauges[name].Add(float64(v))
case string:
s.gaugeVecs[name].WithLabelValues(v).Add(1)
case bool:
if v {
s.gauges[name].Add(1)
}
}
case "gaugeVec":
var labelValue string
switch v := field.Interface().(type) {
case string:
labelValue = v
case int:
labelValue = strconv.Itoa(v)
case map[string]int:
for k, v := range v {
labelValue = k
field.SetInt(int64(v))
break
}
}
if _, ok := gaugeVecs[name]; !ok {
gaugeVecs[name] = make([]string, len(s.gaugeVecLabels[name]))
}
for i, l := range s.gaugeVecLabels[name] {
if l == label {
gaugeVecs[name][i] = labelValue
break
}
}
case "summary", "summaryVec":
switch v := field.Interface().(type) {
case int:
s.summaries[name].Observe("", float64(v))
case float64:
s.summaries[name].Observe("", v)
case []int:
for _, v := range v {
s.summaries[name].Observe("", float64(v))
}
case map[string]int:
for k, v := range v {
if k == "" {
// avoid empty string labels as those are the sign
// of a non-vec summary
k = "unknown"
}
s.summaries[name].Observe(k, float64(v))
}
}
}
}
}
func (s *metricsSet) Describe(c chan<- *prometheus.Desc) {
for _, g := range s.gauges {
g.Describe(c)
}
for _, g := range s.gaugeVecs {
g.Describe(c)
}
for _, g := range s.summaries {
g.Describe(c)
}
}
func (s *metricsSet) Collect(c chan<- prometheus.Metric) {
s.collectMut.Lock()
defer s.collectMut.Unlock()
t0 := time.Now()
defer func() {
dur := time.Since(t0).Seconds()
metricsCollectSecondsLast.Set(dur)
metricsCollectSecondsTotal.Add(dur)
metricsCollectsTotal.Inc()
}()
for _, g := range s.gauges {
g.Set(0)
}
for _, g := range s.gaugeVecs {
g.Reset()
}
for _, g := range s.summaries {
g.Reset()
}
cutoff := time.Now().Add(s.collectCutoff)
s.srv.reports.Range(func(key string, r *contract.Report) bool {
if s.collectCutoff < 0 && r.Received.Before(cutoff) {
s.srv.reports.Delete(key)
return true
}
s.addReport(r)
return true
})
for _, g := range s.gauges {
c <- g
}
for _, g := range s.gaugeVecs {
g.Collect(c)
}
for _, g := range s.summaries {
g.Collect(c)
}
}
type metricSummary struct {
name string
values map[string][]float64
zeroes map[string]int
qDesc *prometheus.Desc
countDesc *prometheus.Desc
sumDesc *prometheus.Desc
zDesc *prometheus.Desc
}
func newMetricSummary(name string, labels []string, constLabels prometheus.Labels) *metricSummary {
return &metricSummary{
name: name,
values: make(map[string][]float64),
zeroes: make(map[string]int),
qDesc: prometheus.NewDesc(name, "", append(labels, "quantile"), constLabels),
countDesc: prometheus.NewDesc(name+"_nonzero_count", "", labels, constLabels),
sumDesc: prometheus.NewDesc(name+"_sum", "", labels, constLabels),
zDesc: prometheus.NewDesc(name+"_zero_count", "", labels, constLabels),
}
}
func (q *metricSummary) Observe(labelValue string, v float64) {
if v == 0 {
q.zeroes[labelValue]++
return
}
q.values[labelValue] = append(q.values[labelValue], v)
}
func (q *metricSummary) Describe(c chan<- *prometheus.Desc) {
c <- q.qDesc
c <- q.countDesc
c <- q.sumDesc
c <- q.zDesc
}
func (q *metricSummary) Collect(c chan<- prometheus.Metric) {
for lv, vs := range q.values {
var labelVals []string
if lv != "" {
labelVals = []string{lv}
}
c <- prometheus.MustNewConstMetric(q.countDesc, prometheus.GaugeValue, float64(len(vs)), labelVals...)
c <- prometheus.MustNewConstMetric(q.zDesc, prometheus.GaugeValue, float64(q.zeroes[lv]), labelVals...)
var sum float64
for _, v := range vs {
sum += v
}
c <- prometheus.MustNewConstMetric(q.sumDesc, prometheus.GaugeValue, sum, labelVals...)
if len(vs) == 0 {
return
}
slices.Sort(vs)
c <- prometheus.MustNewConstMetric(q.qDesc, prometheus.GaugeValue, vs[0], append(labelVals, "0")...)
c <- prometheus.MustNewConstMetric(q.qDesc, prometheus.GaugeValue, vs[len(vs)*5/100], append(labelVals, "0.05")...)
c <- prometheus.MustNewConstMetric(q.qDesc, prometheus.GaugeValue, vs[len(vs)/2], append(labelVals, "0.5")...)
c <- prometheus.MustNewConstMetric(q.qDesc, prometheus.GaugeValue, vs[len(vs)*9/10], append(labelVals, "0.9")...)
c <- prometheus.MustNewConstMetric(q.qDesc, prometheus.GaugeValue, vs[len(vs)*95/100], append(labelVals, "0.95")...)
c <- prometheus.MustNewConstMetric(q.qDesc, prometheus.GaugeValue, vs[len(vs)-1], append(labelVals, "1")...)
}
}
func (q *metricSummary) Reset() {
clear(q.values)
clear(q.zeroes)
}
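The summary's Collect method reports fixed quantiles by indexing into the sorted sample slice rather than keeping a streaming estimator. A small sketch of which indices the reported quantiles map to for a sample of 20 observations:

```go
package main

import "fmt"

// For a sorted sample vs of n observations, Collect above emits these
// fixed quantiles by direct indexing (no interpolation).
func main() {
	n := 20
	quantiles := []struct {
		label string
		idx   int
	}{
		{"0", 0},
		{"0.05", n * 5 / 100},
		{"0.5", n / 2},
		{"0.9", n * 9 / 10},
		{"0.95", n * 95 / 100},
		{"1", n - 1},
	}
	for _, q := range quantiles {
		fmt.Printf("quantile %s -> vs[%d]\n", q.label, q.idx)
	}
	// 0 -> vs[0], 0.05 -> vs[1], 0.5 -> vs[10], 0.9 -> vs[18],
	// 0.95 -> vs[19], 1 -> vs[19]
}
```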


@@ -0,0 +1,408 @@
// Copyright (C) 2018 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package serve
import (
"bufio"
"compress/gzip"
"context"
"encoding/json"
"errors"
"fmt"
"io"
"log/slog"
"net"
"net/http"
"os"
"regexp"
"strings"
"time"
_ "net/http/pprof"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
"github.com/puzpuzpuz/xsync/v3"
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/geoip"
"github.com/syncthing/syncthing/lib/s3"
"github.com/syncthing/syncthing/lib/ur/contract"
)
type CLI struct {
Listen string `env:"UR_LISTEN" help:"Usage reporting & metrics endpoint listen address" default:"0.0.0.0:8080"`
ListenInternal string `env:"UR_LISTEN_INTERNAL" help:"Internal metrics endpoint listen address" default:"0.0.0.0:8082"`
GeoIPLicenseKey string `env:"UR_GEOIP_LICENSE_KEY"`
GeoIPAccountID int `env:"UR_GEOIP_ACCOUNT_ID"`
DumpFile string `env:"UR_DUMP_FILE" default:"reports.jsons.gz"`
DumpInterval time.Duration `env:"UR_DUMP_INTERVAL" default:"5m"`
S3Endpoint string `name:"s3-endpoint" hidden:"true" env:"UR_S3_ENDPOINT"`
S3Region string `name:"s3-region" hidden:"true" env:"UR_S3_REGION"`
S3Bucket string `name:"s3-bucket" hidden:"true" env:"UR_S3_BUCKET"`
S3AccessKeyID string `name:"s3-access-key-id" hidden:"true" env:"UR_S3_ACCESS_KEY_ID"`
S3SecretKey string `name:"s3-secret-key" hidden:"true" env:"UR_S3_SECRET_KEY"`
}
var (
compilerRe = regexp.MustCompile(`\(([A-Za-z0-9()., -]+) \w+-\w+(?:| android| default)\) ([\w@.-]+)`)
knownDistributions = []distributionMatch{
// Maps well known builders to the official distribution method that
// they represent
{regexp.MustCompile(`\steamcity@build\.syncthing\.net`), "GitHub"},
{regexp.MustCompile(`\sjenkins@build\.syncthing\.net`), "GitHub"},
{regexp.MustCompile(`\sbuilder@github\.syncthing\.net`), "GitHub"},
{regexp.MustCompile(`\sdeb@build\.syncthing\.net`), "APT"},
{regexp.MustCompile(`\sdebian@github\.syncthing\.net`), "APT"},
{regexp.MustCompile(`\sdocker@syncthing\.net`), "Docker Hub"},
{regexp.MustCompile(`\sdocker@build.syncthing\.net`), "Docker Hub"},
{regexp.MustCompile(`\sdocker@github.syncthing\.net`), "Docker Hub"},
{regexp.MustCompile(`\sandroid-builder@github\.syncthing\.net`), "Google Play"},
{regexp.MustCompile(`\sandroid-.*teamcity@build\.syncthing\.net`), "Google Play"},
{regexp.MustCompile(`\sandroid-.*vagrant@basebox-stretch64`), "F-Droid"},
{regexp.MustCompile(`\svagrant@bullseye`), "F-Droid"},
{regexp.MustCompile(`\svagrant@bookworm`), "F-Droid"},
{regexp.MustCompile(`Anwender@NET2017`), "Syncthing-Fork (3rd party)"},
{regexp.MustCompile(`\sbuilduser@(archlinux|svetlemodry)`), "Arch (3rd party)"},
{regexp.MustCompile(`\ssyncthing@archlinux`), "Arch (3rd party)"},
{regexp.MustCompile(`@debian`), "Debian (3rd party)"},
{regexp.MustCompile(`@fedora`), "Fedora (3rd party)"},
{regexp.MustCompile(`\sbrew@`), "Homebrew (3rd party)"},
{regexp.MustCompile(`\sroot@buildkitsandbox`), "LinuxServer.io (3rd party)"},
{regexp.MustCompile(`\sports@freebsd`), "FreeBSD (3rd party)"},
{regexp.MustCompile(`\snix@nix`), "Nix (3rd party)"},
{regexp.MustCompile(`.`), "Others"},
}
)
type distributionMatch struct {
matcher *regexp.Regexp
distribution string
}
func (cli *CLI) Run() error {
slog.Info("Starting", "version", build.Version)
// Listening
urListener, err := net.Listen("tcp", cli.Listen)
if err != nil {
slog.Error("Failed to listen (usage reports)", "error", err)
return err
}
slog.Info("Listening (usage reports)", "address", urListener.Addr())
internalListener, err := net.Listen("tcp", cli.ListenInternal)
if err != nil {
slog.Error("Failed to listen (internal)", "error", err)
return err
}
slog.Info("Listening (internal)", "address", internalListener.Addr())
var geo *geoip.Provider
if cli.GeoIPAccountID != 0 && cli.GeoIPLicenseKey != "" {
geo, err = geoip.NewGeoLite2CityProvider(context.Background(), cli.GeoIPAccountID, cli.GeoIPLicenseKey, os.TempDir())
if err != nil {
slog.Error("Failed to load GeoIP", "error", err)
return err
}
go geo.Serve(context.TODO())
}
// s3
var s3sess *s3.Session
if cli.S3Endpoint != "" {
s3sess, err = s3.NewSession(cli.S3Endpoint, cli.S3Region, cli.S3Bucket, cli.S3AccessKeyID, cli.S3SecretKey)
if err != nil {
slog.Error("Failed to create S3 session", "error", err)
return err
}
}
if _, err := os.Stat(cli.DumpFile); err != nil && s3sess != nil {
if err := cli.downloadDumpFile(s3sess); err != nil {
slog.Error("Failed to download dump file", "error", err)
}
}
// server
srv := &server{
geo: geo,
reports: xsync.NewMapOf[string, *contract.Report](),
}
if fd, err := os.Open(cli.DumpFile); err == nil {
gr, err := gzip.NewReader(fd)
if err == nil {
srv.load(gr)
}
fd.Close()
}
go func() {
for range time.Tick(cli.DumpInterval) {
if err := cli.saveDumpFile(srv, s3sess); err != nil {
slog.Error("Failed to write dump file", "error", err)
}
}
}()
// The internal metrics endpoint just serves metrics about what the
// server is doing.
http.Handle("/metrics", promhttp.Handler())
internalSrv := http.Server{
ReadTimeout: 5 * time.Second,
WriteTimeout: 15 * time.Second,
}
go internalSrv.Serve(internalListener)
// New external metrics endpoint accepts reports from clients and serves
// aggregated usage reporting metrics.
ms := newMetricsSet(srv)
reg := prometheus.NewRegistry()
reg.MustRegister(ms)
mux := http.NewServeMux()
mux.Handle("/metrics", promhttp.HandlerFor(reg, promhttp.HandlerOpts{}))
mux.HandleFunc("/newdata", srv.handleNewData)
mux.HandleFunc("/ping", srv.handlePing)
metricsSrv := http.Server{
ReadTimeout: 5 * time.Second,
WriteTimeout: 15 * time.Second,
Handler: mux,
}
slog.Info("Ready to serve")
return metricsSrv.Serve(urListener)
}
func (cli *CLI) downloadDumpFile(s3sess *s3.Session) error {
latestKey, err := s3sess.LatestKey()
if err != nil {
return fmt.Errorf("list latest S3 key: %w", err)
}
fd, err := os.Create(cli.DumpFile)
if err != nil {
return fmt.Errorf("create dump file: %w", err)
}
if err := s3sess.Download(fd, latestKey); err != nil {
_ = fd.Close()
return fmt.Errorf("download dump file: %w", err)
}
if err := fd.Close(); err != nil {
return fmt.Errorf("close dump file: %w", err)
}
slog.Info("Dump file downloaded", "key", latestKey)
return nil
}
func (cli *CLI) saveDumpFile(srv *server, s3sess *s3.Session) error {
fd, err := os.Create(cli.DumpFile + ".tmp")
if err != nil {
return fmt.Errorf("creating dump file: %w", err)
}
gw := gzip.NewWriter(fd)
if err := srv.save(gw); err != nil {
return fmt.Errorf("saving dump file: %w", err)
}
if err := gw.Close(); err != nil {
fd.Close()
return fmt.Errorf("closing gzip writer: %w", err)
}
if err := fd.Close(); err != nil {
return fmt.Errorf("closing dump file: %w", err)
}
if err := os.Rename(cli.DumpFile+".tmp", cli.DumpFile); err != nil {
return fmt.Errorf("renaming dump file: %w", err)
}
slog.Info("Dump file saved")
if s3sess != nil {
key := fmt.Sprintf("reports-%s.jsons.gz", time.Now().UTC().Format("2006-01-02"))
fd, err := os.Open(cli.DumpFile)
if err != nil {
return fmt.Errorf("opening dump file: %w", err)
}
if err := s3sess.Upload(fd, key); err != nil {
return fmt.Errorf("uploading dump file: %w", err)
}
_ = fd.Close()
slog.Info("Dump file uploaded")
}
return nil
}
type server struct {
geo *geoip.Provider
reports *xsync.MapOf[string, *contract.Report]
}
func (s *server) handlePing(w http.ResponseWriter, r *http.Request) {
}
func (s *server) handleNewData(w http.ResponseWriter, r *http.Request) {
result := "fail"
defer func() {
// result is "accept" (new report), "replace" (existing report) or
// "fail"
metricReportsTotal.WithLabelValues(result).Inc()
}()
defer r.Body.Close()
if r.Method != http.MethodPost {
http.Error(w, "Method Not Allowed", http.StatusMethodNotAllowed)
return
}
addr := r.Header.Get("X-Forwarded-For")
if addr != "" {
addr = strings.Split(addr, ", ")[0]
} else {
addr = r.RemoteAddr
}
if host, _, err := net.SplitHostPort(addr); err == nil {
addr = host
}
log := slog.With("addr", addr)
if net.ParseIP(addr) == nil {
addr = ""
}
var rep contract.Report
lr := &io.LimitedReader{R: r.Body, N: 40 * 1024}
bs, _ := io.ReadAll(lr)
if err := json.Unmarshal(bs, &rep); err != nil {
log.Error("Failed to decode JSON", "error", err)
http.Error(w, "JSON Decode Error", http.StatusInternalServerError)
return
}
rep.Received = time.Now()
rep.Date = rep.Received.UTC().Format("20060102")
rep.Address = addr
if err := rep.Validate(); err != nil {
log.Error("Failed to validate report", "error", err)
http.Error(w, "Validation Error", http.StatusInternalServerError)
return
}
if s.addReport(&rep) {
result = "replace"
} else {
result = "accept"
}
}
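A minimal sketch of a client posting a report to the /newdata handler above. The host, port, and the heavily truncated JSON body are placeholders; real reports are produced by Syncthing itself and carry many more fields, and the server only returns 200 when Validate() accepts the payload:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Made-up, heavily truncated report body for illustration only.
	report := []byte(`{"uniqueID":"abc123","version":"v1.28.0","longVersion":"syncthing v1.28.0 ...","platform":"linux-amd64"}`)
	resp, err := http.Post("http://localhost:8080/newdata", "application/json", bytes.NewReader(report))
	if err != nil {
		fmt.Println("post:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status) // 200 OK if the report validates
}
```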
func (s *server) addReport(rep *contract.Report) bool {
if s.geo != nil {
if ip := net.ParseIP(rep.Address); ip != nil {
if city, err := s.geo.City(ip); err == nil {
rep.Country = city.Country.Names["en"]
rep.CountryCode = city.Country.IsoCode
}
}
}
if rep.Country == "" {
rep.Country = "Unknown"
}
if rep.CountryCode == "" {
rep.CountryCode = "ZZ"
}
rep.Version = transformVersion(rep.Version)
if strings.Contains(rep.Version, ".") {
split := strings.SplitN(rep.Version, ".", 3)
if len(split) == 3 {
rep.MajorVersion = strings.Join(split[:2], ".")
}
}
rep.OS, rep.Arch, _ = strings.Cut(rep.Platform, "-")
if m := compilerRe.FindStringSubmatch(rep.LongVersion); len(m) == 3 {
rep.Compiler = m[1]
rep.Builder = m[2]
}
for _, d := range knownDistributions {
if d.matcher.MatchString(rep.LongVersion) {
rep.Distribution = d.distribution
break
}
}
_, loaded := s.reports.LoadAndStore(rep.UniqueID, rep)
return loaded
}
func (s *server) save(w io.Writer) error {
bw := bufio.NewWriter(w)
enc := json.NewEncoder(bw)
var err error
s.reports.Range(func(k string, v *contract.Report) bool {
err = enc.Encode(v)
return err == nil
})
if err != nil {
return err
}
return bw.Flush()
}
func (s *server) load(r io.Reader) {
dec := json.NewDecoder(r)
s.reports.Clear()
for {
var rep contract.Report
if err := dec.Decode(&rep); errors.Is(err, io.EOF) {
break
} else if err != nil {
slog.Error("Failed to load record", "error", err)
break
}
s.addReport(&rep)
}
slog.Info("Loaded reports", "count", s.reports.Size())
}
var (
plusRe = regexp.MustCompile(`(\+.*|[.-]dev\..*)$`)
plusStr = "-dev"
)
// transformVersion returns a version number formatted correctly, with all
// development versions aggregated into one.
func transformVersion(v string) string {
if v == "unknown-dev" {
return v
}
if !strings.HasPrefix(v, "v") {
v = "v" + v
}
v = plusRe.ReplaceAllString(v, plusStr)
return v
}
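A standalone sketch of the aggregation transformVersion performs, using made-up version strings:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Same expression as plusRe above: anything after a "+" or after a
// ".dev."/"-dev." marker is collapsed into a single "-dev" suffix, so all
// development builds aggregate into one version bucket.
var devRe = regexp.MustCompile(`(\+.*|[.-]dev\..*)$`)

func transform(v string) string {
	if v == "unknown-dev" {
		return v
	}
	if !strings.HasPrefix(v, "v") {
		v = "v" + v
	}
	return devRe.ReplaceAllString(v, "-dev")
}

func main() {
	fmt.Println(transform("v1.28.1+7-g00aa00aa"))    // v1.28.1-dev
	fmt.Println(transform("1.28.1-dev.7.g00aa00aa")) // v1.28.1-dev
	fmt.Println(transform("v1.28.0"))                // v1.28.0
}
```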

cmd/stdiscosrv/amqp.go

@@ -0,0 +1,257 @@
// Copyright (C) 2024 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"context"
"fmt"
"io"
"log"
amqp "github.com/rabbitmq/amqp091-go"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/thejerf/suture/v4"
)
type amqpReplicator struct {
suture.Service
broker string
sender *amqpSender
receiver *amqpReceiver
outbox chan ReplicationRecord
}
func newAMQPReplicator(broker, clientID string, db database) *amqpReplicator {
svc := suture.New("amqpReplicator", suture.Spec{PassThroughPanics: true})
sender := &amqpSender{
broker: broker,
clientID: clientID,
outbox: make(chan ReplicationRecord, replicationOutboxSize),
}
svc.Add(sender)
receiver := &amqpReceiver{
broker: broker,
clientID: clientID,
db: db,
}
svc.Add(receiver)
return &amqpReplicator{
Service: svc,
broker: broker,
sender: sender,
receiver: receiver,
outbox: make(chan ReplicationRecord, replicationOutboxSize),
}
}
func (s *amqpReplicator) send(key *protocol.DeviceID, ps []DatabaseAddress, seen int64) {
s.sender.send(key, ps, seen)
}
type amqpSender struct {
broker string
clientID string
outbox chan ReplicationRecord
}
func (s *amqpSender) Serve(ctx context.Context) error {
conn, ch, err := amqpChannel(s.broker)
if err != nil {
return err
}
defer ch.Close()
defer conn.Close()
buf := make([]byte, 1024)
for {
select {
case rec := <-s.outbox:
size := rec.Size()
if len(buf) < size {
buf = make([]byte, size)
}
n, err := rec.MarshalTo(buf)
if err != nil {
replicationSendsTotal.WithLabelValues("error").Inc()
return fmt.Errorf("replication marshal: %w", err)
}
err = ch.PublishWithContext(ctx,
"discovery", // exchange
"", // routing key
false, // mandatory
false, // immediate
amqp.Publishing{
ContentType: "application/protobuf",
Body: buf[:n],
AppId: s.clientID,
})
if err != nil {
replicationSendsTotal.WithLabelValues("error").Inc()
return fmt.Errorf("replication publish: %w", err)
}
replicationSendsTotal.WithLabelValues("success").Inc()
case <-ctx.Done():
return nil
}
}
}
func (s *amqpSender) String() string {
return fmt.Sprintf("amqpSender(%q)", s.broker)
}
func (s *amqpSender) send(key *protocol.DeviceID, ps []DatabaseAddress, seen int64) {
item := ReplicationRecord{
Key: key[:],
Addresses: ps,
Seen: seen,
}
// The send should never block. The inbox is suitably buffered for at
// least a few seconds of stalls, which shouldn't happen in practice.
select {
case s.outbox <- item:
default:
replicationSendsTotal.WithLabelValues("drop").Inc()
}
}
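The send above deliberately never blocks: if the buffered outbox is full, the record is dropped (and counted) instead of stalling the caller. The same select/default pattern in isolation:

```go
package main

import "fmt"

// Non-blocking enqueue, as in amqpSender.send above: a full outbox means
// the record is dropped rather than delaying the announcement that
// triggered replication.
func trySend(outbox chan string, rec string) bool {
	select {
	case outbox <- rec:
		return true
	default:
		return false // would block: treated as a drop
	}
}

func main() {
	outbox := make(chan string, 1)
	fmt.Println(trySend(outbox, "a")) // true
	fmt.Println(trySend(outbox, "b")) // false: buffer full, no consumer running
}
```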
type amqpReceiver struct {
broker string
clientID string
db database
}
func (s *amqpReceiver) Serve(ctx context.Context) error {
conn, ch, err := amqpChannel(s.broker)
if err != nil {
return err
}
defer ch.Close()
defer conn.Close()
msgs, err := amqpConsume(ch)
if err != nil {
return err
}
for {
select {
case msg, ok := <-msgs:
if !ok {
return fmt.Errorf("subscription closed: %w", io.EOF)
}
// ignore messages from ourself
if msg.AppId == s.clientID {
continue
}
var rec ReplicationRecord
if err := rec.Unmarshal(msg.Body); err != nil {
replicationRecvsTotal.WithLabelValues("error").Inc()
return fmt.Errorf("replication unmarshal: %w", err)
}
id, err := protocol.DeviceIDFromBytes(rec.Key)
if err != nil {
id, err = protocol.DeviceIDFromString(string(rec.Key))
}
if err != nil {
log.Println("Replication device ID:", err)
replicationRecvsTotal.WithLabelValues("error").Inc()
continue
}
if err := s.db.merge(&id, rec.Addresses, rec.Seen); err != nil {
return fmt.Errorf("replication database merge: %w", err)
}
replicationRecvsTotal.WithLabelValues("success").Inc()
case <-ctx.Done():
return nil
}
}
}
func (s *amqpReceiver) String() string {
return fmt.Sprintf("amqpReceiver(%q)", s.broker)
}
func amqpChannel(dst string) (*amqp.Connection, *amqp.Channel, error) {
conn, err := amqp.Dial(dst)
if err != nil {
return nil, nil, fmt.Errorf("AMQP dial: %w", err)
}
ch, err := conn.Channel()
if err != nil {
return nil, nil, fmt.Errorf("AMQP channel: %w", err)
}
err = ch.ExchangeDeclare(
"discovery", // name
"fanout", // type
false, // durable
false, // auto-deleted
false, // internal
false, // no-wait
nil, // arguments
)
if err != nil {
return nil, nil, fmt.Errorf("AMQP declare exchange: %w", err)
}
return conn, ch, nil
}
func amqpConsume(ch *amqp.Channel) (<-chan amqp.Delivery, error) {
q, err := ch.QueueDeclare(
"", // name
false, // durable
false, // delete when unused
true, // exclusive
false, // no-wait
nil, // arguments
)
if err != nil {
return nil, fmt.Errorf("AMQP declare queue: %w", err)
}
err = ch.QueueBind(
q.Name, // queue name
"", // routing key
"discovery", // exchange
false,
nil,
)
if err != nil {
return nil, fmt.Errorf("AMQP bind queue: %w", err)
}
msgs, err := ch.Consume(
q.Name, // queue
"", // consumer
true, // auto-ack
false, // exclusive
false, // no-local
false, // no-wait
nil, // args
)
if err != nil {
return nil, fmt.Errorf("AMQP consume: %w", err)
}
return msgs, nil
}


@@ -22,7 +22,7 @@ import (
"net"
"net/http"
"net/url"
"sort"
"slices"
"strconv"
"strings"
"sync"
@@ -39,15 +39,20 @@ type announcement struct {
}
type apiSrv struct {
addr string
cert tls.Certificate
db database
listener net.Listener
repl replicator // optional
useHTTP bool
addr string
cert tls.Certificate
db database
listener net.Listener
repl replicator // optional
useHTTP bool
compression bool
gzipWriters sync.Pool
seenTracker *retryAfterTracker
notSeenTracker *retryAfterTracker
}
mapsMut sync.Mutex
misses map[string]int32
type replicator interface {
send(key *protocol.DeviceID, addrs []DatabaseAddress, seen int64)
}
type requestID int64
@@ -60,18 +65,30 @@ type contextKey int
const idKey contextKey = iota
func newAPISrv(addr string, cert tls.Certificate, db database, repl replicator, useHTTP bool) *apiSrv {
func newAPISrv(addr string, cert tls.Certificate, db database, repl replicator, useHTTP, compression bool) *apiSrv {
return &apiSrv{
addr: addr,
cert: cert,
db: db,
repl: repl,
useHTTP: useHTTP,
misses: make(map[string]int32),
addr: addr,
cert: cert,
db: db,
repl: repl,
useHTTP: useHTTP,
compression: compression,
seenTracker: &retryAfterTracker{
name: "seenTracker",
bucketStarts: time.Now(),
desiredRate: 250,
currentDelay: notFoundRetryUnknownMinSeconds,
},
notSeenTracker: &retryAfterTracker{
name: "notSeenTracker",
bucketStarts: time.Now(),
desiredRate: 250,
currentDelay: notFoundRetryUnknownMaxSeconds / 2,
},
}
}
func (s *apiSrv) Serve(_ context.Context) error {
func (s *apiSrv) Serve(ctx context.Context) error {
if s.useHTTP {
listener, err := net.Listen("tcp", s.addr)
if err != nil {
@@ -105,6 +122,11 @@ func (s *apiSrv) Serve(_ context.Context) error {
ErrorLog: log.New(io.Discard, "", 0),
}
go func() {
<-ctx.Done()
srv.Shutdown(context.Background())
}()
err := srv.Serve(s.listener)
if err != nil {
log.Println("Serve:", err)
@@ -181,8 +203,7 @@ func (s *apiSrv) handleGET(w http.ResponseWriter, req *http.Request) {
return
}
key := deviceID.String()
rec, err := s.db.get(key)
rec, err := s.db.get(&deviceID)
if err != nil {
// some sort of internal error
lookupRequestsTotal.WithLabelValues("internal_error").Inc()
@@ -192,28 +213,14 @@ func (s *apiSrv) handleGET(w http.ResponseWriter, req *http.Request) {
}
if len(rec.Addresses) == 0 {
lookupRequestsTotal.WithLabelValues("not_found").Inc()
s.mapsMut.Lock()
misses := s.misses[key]
if misses < rec.Misses {
misses = rec.Misses + 1
var afterS int
if rec.Seen == 0 {
afterS = s.notSeenTracker.retryAfterS()
lookupRequestsTotal.WithLabelValues("not_found_ever").Inc()
} else {
misses++
afterS = s.seenTracker.retryAfterS()
lookupRequestsTotal.WithLabelValues("not_found_recent").Inc()
}
s.misses[key] = misses
s.mapsMut.Unlock()
if misses%notFoundMissesWriteInterval == 0 {
rec.Misses = misses
rec.Missed = time.Now().UnixNano()
rec.Addresses = nil
// rec.Seen retained from get
s.db.put(key, rec)
}
afterS := notFoundRetryAfterSeconds(int(misses))
retryAfterHistogram.Observe(float64(afterS))
w.Header().Set("Retry-After", strconv.Itoa(afterS))
http.Error(w, "Not Found", http.StatusNotFound)
return
@@ -225,10 +232,16 @@ func (s *apiSrv) handleGET(w http.ResponseWriter, req *http.Request) {
var bw io.Writer = w
// Use compression if the client asks for it
if strings.Contains(req.Header.Get("Accept-Encoding"), "gzip") {
if s.compression && strings.Contains(req.Header.Get("Accept-Encoding"), "gzip") {
gw, ok := s.gzipWriters.Get().(*gzip.Writer)
if ok {
gw.Reset(w)
} else {
gw = gzip.NewWriter(w)
}
w.Header().Set("Content-Encoding", "gzip")
gw := gzip.NewWriter(bw)
defer gw.Close()
defer s.gzipWriters.Put(gw)
bw = gw
}
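
The hunk above introduces pooling of gzip writers via sync.Pool. A minimal, self-contained sketch of the same pattern (a generic HTTP handler, not the stdiscosrv one) might look like this:

package main

import (
	"compress/gzip"
	"log"
	"net/http"
	"strings"
	"sync"
)

// gzipWriters pools *gzip.Writer values so each compressed response reuses a
// writer via Reset instead of allocating a new one.
var gzipWriters sync.Pool

func handler(w http.ResponseWriter, r *http.Request) {
	if !strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
		w.Write([]byte("hello\n"))
		return
	}
	gw, ok := gzipWriters.Get().(*gzip.Writer)
	if ok {
		gw.Reset(w)
	} else {
		gw = gzip.NewWriter(w)
	}
	defer gzipWriters.Put(gw)
	defer gw.Close() // LIFO: Close flushes before the writer returns to the pool
	w.Header().Set("Content-Encoding", "gzip")
	gw.Write([]byte("hello\n"))
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
}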
@@ -291,25 +304,25 @@ func (s *apiSrv) Stop() {
}
func (s *apiSrv) handleAnnounce(deviceID protocol.DeviceID, addresses []string) error {
key := deviceID.String()
now := time.Now()
expire := now.Add(addressExpiryTime).UnixNano()
// The address slice must always be sorted for database merges to work
// properly.
slices.Sort(addresses)
addresses = slices.Compact(addresses)
dbAddrs := make([]DatabaseAddress, len(addresses))
for i := range addresses {
dbAddrs[i].Address = addresses[i]
dbAddrs[i].Expires = expire
}
// The address slice must always be sorted for database merges to work
// properly.
sort.Sort(databaseAddressOrder(dbAddrs))
seen := now.UnixNano()
if s.repl != nil {
s.repl.send(key, dbAddrs, seen)
s.repl.send(&deviceID, dbAddrs, seen)
}
return s.db.merge(key, dbAddrs, seen)
return s.db.merge(&deviceID, dbAddrs, seen)
}
func handlePing(w http.ResponseWriter, _ *http.Request) {
@@ -359,7 +372,7 @@ func certificateBytes(req *http.Request) ([]byte, error) {
}
bs = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: hdr})
} else if hdr := req.Header.Get("X-Forwarded-Tls-Client-Cert"); hdr != "" {
} else if cert := req.Header.Get("X-Forwarded-Tls-Client-Cert"); cert != "" {
// Traefik 2 passtlsclientcert
//
// The certificate is in PEM format, maybe with URL encoding
@@ -367,19 +380,36 @@ func certificateBytes(req *http.Request) ([]byte, error) {
// statements. We need to decode, reinstate the newlines every 64
// characters and add statements for the PEM decoder
if strings.Contains(hdr, "%") {
if unesc, err := url.QueryUnescape(hdr); err == nil {
hdr = unesc
if strings.Contains(cert, "%") {
if unesc, err := url.QueryUnescape(cert); err == nil {
cert = unesc
}
}
for i := 64; i < len(hdr); i += 65 {
hdr = hdr[:i] + "\n" + hdr[i:]
const (
header = "-----BEGIN CERTIFICATE-----"
footer = "-----END CERTIFICATE-----"
)
var b bytes.Buffer
b.Grow(len(header) + 1 + len(cert) + len(cert)/64 + 1 + len(footer) + 1)
b.WriteString(header)
b.WriteByte('\n')
for i := 0; i < len(cert); i += 64 {
end := i + 64
if end > len(cert) {
end = len(cert)
}
b.WriteString(cert[i:end])
b.WriteByte('\n')
}
hdr = "-----BEGIN CERTIFICATE-----\n" + hdr
hdr += "\n-----END CERTIFICATE-----\n"
bs = []byte(hdr)
b.WriteString(footer)
b.WriteByte('\n')
bs = b.Bytes()
}
if bs == nil {
@@ -444,7 +474,6 @@ func fixupAddresses(remote *net.TCPAddr, addresses []string) []string {
// remote is nil, unable to determine host IP
continue
}
}
// If zero port was specified, use remote port.
@@ -494,15 +523,44 @@ func errorRetryAfterString() string {
return strconv.Itoa(errorRetryAfterSeconds + rand.Intn(errorRetryFuzzSeconds))
}
func notFoundRetryAfterSeconds(misses int) int {
retryAfterS := notFoundRetryMinSeconds + notFoundRetryIncSeconds*misses
if retryAfterS > notFoundRetryMaxSeconds {
retryAfterS = notFoundRetryMaxSeconds
}
retryAfterS += rand.Intn(notFoundRetryFuzzSeconds)
return retryAfterS
}
func reannounceAfterString() string {
return strconv.Itoa(reannounceAfterSeconds + rand.Intn(reannounzeFuzzSeconds))
}
type retryAfterTracker struct {
name string
desiredRate float64 // requests per second
mut sync.Mutex
lastCount int // requests in the last bucket
curCount int // requests in the current bucket
bucketStarts time.Time // start of the current bucket
currentDelay int // current delay in seconds
}
func (t *retryAfterTracker) retryAfterS() int {
now := time.Now()
t.mut.Lock()
if durS := now.Sub(t.bucketStarts).Seconds(); durS > float64(t.currentDelay) {
t.bucketStarts = now
t.lastCount = t.curCount
lastRate := float64(t.lastCount) / durS
switch {
case t.currentDelay > notFoundRetryUnknownMinSeconds &&
lastRate < 0.75*t.desiredRate:
t.currentDelay = max(8*t.currentDelay/10, notFoundRetryUnknownMinSeconds)
case t.currentDelay < notFoundRetryUnknownMaxSeconds &&
lastRate > 1.25*t.desiredRate:
t.currentDelay = min(3*t.currentDelay/2, notFoundRetryUnknownMaxSeconds)
}
t.curCount = 0
}
if t.curCount == 0 {
retryAfterLevel.WithLabelValues(t.name).Set(float64(t.currentDelay))
}
t.curCount++
t.mut.Unlock()
return t.currentDelay + rand.Intn(t.currentDelay/4)
}
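
The adjustment rule in retryAfterS can be played through in isolation: shrink the delay when the observed not-found rate falls well below the target, grow it when the rate is well above, always staying within the min/max bounds. The sketch below reuses the constants from the diff, but the simulated request rates are invented for illustration.

package main

import "fmt"

func main() {
	const (
		minS        = 60
		maxS        = 3600
		desiredRate = 250.0 // target not-found requests per second
	)
	delay := minS
	for _, lastRate := range []float64{400, 400, 400, 100, 100} {
		switch {
		case delay > minS && lastRate < 0.75*desiredRate:
			delay = max(8*delay/10, minS) // rate is low, back off less
		case delay < maxS && lastRate > 1.25*desiredRate:
			delay = min(3*delay/2, maxS) // rate is high, back off more
		}
		fmt.Println("Retry-After would be about", delay, "seconds")
	}
}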


@@ -7,9 +7,20 @@
package main
import (
"context"
"crypto/tls"
"fmt"
"io"
"net"
"net/http"
"net/http/httptest"
"os"
"regexp"
"strings"
"testing"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/tlsutil"
)
func TestFixupAddresses(t *testing.T) {
@@ -94,3 +105,79 @@ func addr(host string, port int) *net.TCPAddr {
Port: port,
}
}
func BenchmarkAPIRequests(b *testing.B) {
db := newInMemoryStore(b.TempDir(), 0, nil)
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
go db.Serve(ctx)
api := newAPISrv("127.0.0.1:0", tls.Certificate{}, db, nil, true, true)
srv := httptest.NewServer(http.HandlerFunc(api.handler))
kf := b.TempDir() + "/cert"
crt, err := tlsutil.NewCertificate(kf+".crt", kf+".key", "localhost", 7)
if err != nil {
b.Fatal(err)
}
certBs, err := os.ReadFile(kf + ".crt")
if err != nil {
b.Fatal(err)
}
certBs = regexp.MustCompile(`---[^\n]+---\n`).ReplaceAll(certBs, nil)
certString := string(strings.ReplaceAll(string(certBs), "\n", " "))
devID := protocol.NewDeviceID(crt.Certificate[0])
devIDString := devID.String()
b.Run("Announce", func(b *testing.B) {
b.ReportAllocs()
url := srv.URL + "/v2/?device=" + devIDString
for i := 0; i < b.N; i++ {
req, _ := http.NewRequest(http.MethodPost, url, strings.NewReader(`{"addresses":["tcp://10.10.10.10:42000"]}`))
req.Header.Set("X-Forwarded-Tls-Client-Cert", certString)
resp, err := http.DefaultClient.Do(req)
if err != nil {
b.Fatal(err)
}
resp.Body.Close()
if resp.StatusCode != http.StatusNoContent {
b.Fatalf("unexpected status %s", resp.Status)
}
}
})
b.Run("Lookup", func(b *testing.B) {
b.ReportAllocs()
url := srv.URL + "/v2/?device=" + devIDString
for i := 0; i < b.N; i++ {
req, _ := http.NewRequest(http.MethodGet, url, nil)
resp, err := http.DefaultClient.Do(req)
if err != nil {
b.Fatal(err)
}
io.Copy(io.Discard, resp.Body)
resp.Body.Close()
if resp.StatusCode != http.StatusOK {
b.Fatalf("unexpected status %s", resp.Status)
}
}
})
b.Run("LookupNoCompression", func(b *testing.B) {
b.ReportAllocs()
url := srv.URL + "/v2/?device=" + devIDString
for i := 0; i < b.N; i++ {
req, _ := http.NewRequest(http.MethodGet, url, nil)
req.Header.Set("Accept-Encoding", "identity") // disable compression
resp, err := http.DefaultClient.Do(req)
if err != nil {
b.Fatal(err)
}
io.Copy(io.Discard, resp.Body)
resp.Body.Close()
if resp.StatusCode != http.StatusOK {
b.Fatalf("unexpected status %s", resp.Status)
}
}
})
}


@@ -10,17 +10,24 @@
package main
import (
"bufio"
"cmp"
"context"
"encoding/binary"
"errors"
"io"
"log"
"net"
"net/url"
"sort"
"os"
"path"
"runtime"
"slices"
"strings"
"time"
"github.com/syncthing/syncthing/lib/sliceutil"
"github.com/syndtr/goleveldb/leveldb"
"github.com/syndtr/goleveldb/leveldb/storage"
"github.com/syndtr/goleveldb/leveldb/util"
"github.com/puzpuzpuz/xsync/v3"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/rand"
"github.com/syncthing/syncthing/lib/s3"
)
type clock interface {
@@ -34,380 +41,400 @@ func (defaultClock) Now() time.Time {
}
type database interface {
put(key string, rec DatabaseRecord) error
merge(key string, addrs []DatabaseAddress, seen int64) error
get(key string) (DatabaseRecord, error)
put(key *protocol.DeviceID, rec DatabaseRecord) error
merge(key *protocol.DeviceID, addrs []DatabaseAddress, seen int64) error
get(key *protocol.DeviceID) (DatabaseRecord, error)
}
type levelDBStore struct {
db *leveldb.DB
inbox chan func()
clock clock
marshalBuf []byte
type inMemoryStore struct {
m *xsync.MapOf[protocol.DeviceID, DatabaseRecord]
dir string
flushInterval time.Duration
s3 *s3.Session
objKey string
clock clock
}
func newLevelDBStore(dir string) (*levelDBStore, error) {
db, err := leveldb.OpenFile(dir, levelDBOptions)
func newInMemoryStore(dir string, flushInterval time.Duration, s3sess *s3.Session) *inMemoryStore {
hn, err := os.Hostname()
if err != nil {
return nil, err
hn = rand.String(8)
}
return &levelDBStore{
db: db,
inbox: make(chan func(), 16),
clock: defaultClock{},
}, nil
}
func newMemoryLevelDBStore() (*levelDBStore, error) {
db, err := leveldb.Open(storage.NewMemStorage(), nil)
if err != nil {
return nil, err
s := &inMemoryStore{
m: xsync.NewMapOf[protocol.DeviceID, DatabaseRecord](),
dir: dir,
flushInterval: flushInterval,
s3: s3sess,
objKey: hn + ".db",
clock: defaultClock{},
}
return &levelDBStore{
db: db,
inbox: make(chan func(), 16),
clock: defaultClock{},
}, nil
}
func (s *levelDBStore) put(key string, rec DatabaseRecord) error {
t0 := time.Now()
defer func() {
databaseOperationSeconds.WithLabelValues(dbOpPut).Observe(time.Since(t0).Seconds())
}()
rc := make(chan error)
s.inbox <- func() {
size := rec.Size()
if len(s.marshalBuf) < size {
s.marshalBuf = make([]byte, size)
nr, err := s.read()
if os.IsNotExist(err) && s3sess != nil {
// Try to read from AWS
latestKey, cerr := s3sess.LatestKey()
if cerr != nil {
log.Println("Error reading database from S3:", err)
return s
}
n, _ := rec.MarshalTo(s.marshalBuf)
rc <- s.db.Put([]byte(key), s.marshalBuf[:n], nil)
fd, cerr := os.Create(path.Join(s.dir, "records.db"))
if cerr != nil {
log.Println("Error creating database file:", err)
return s
}
if cerr := s3sess.Download(fd, latestKey); cerr != nil {
log.Printf("Error reading database from S3: %v", err)
}
_ = fd.Close()
nr, err = s.read()
}
err := <-rc
if err != nil {
databaseOperations.WithLabelValues(dbOpPut, dbResError).Inc()
} else {
databaseOperations.WithLabelValues(dbOpPut, dbResSuccess).Inc()
log.Println("Error reading database:", err)
}
return err
log.Printf("Read %d records from database", nr)
s.expireAndCalculateStatistics()
return s
}
func (s *levelDBStore) merge(key string, addrs []DatabaseAddress, seen int64) error {
func (s *inMemoryStore) put(key *protocol.DeviceID, rec DatabaseRecord) error {
t0 := time.Now()
s.m.Store(*key, rec)
databaseOperations.WithLabelValues(dbOpPut, dbResSuccess).Inc()
databaseOperationSeconds.WithLabelValues(dbOpPut).Observe(time.Since(t0).Seconds())
return nil
}
func (s *inMemoryStore) merge(key *protocol.DeviceID, addrs []DatabaseAddress, seen int64) error {
t0 := time.Now()
defer func() {
databaseOperationSeconds.WithLabelValues(dbOpMerge).Observe(time.Since(t0).Seconds())
}()
rc := make(chan error)
newRec := DatabaseRecord{
Addresses: addrs,
Seen: seen,
}
s.inbox <- func() {
// grab the existing record
oldRec, err := s.get(key)
if err != nil {
// "not found" is not an error from get, so this is serious
// stuff only
rc <- err
return
}
newRec = merge(newRec, oldRec)
oldRec, _ := s.m.Load(*key)
newRec = merge(oldRec, newRec)
s.m.Store(*key, newRec)
// We replicate s.put() functionality here ourselves instead of
// calling it because we want to serialize our get above together
// with the put in the same function.
size := newRec.Size()
if len(s.marshalBuf) < size {
s.marshalBuf = make([]byte, size)
}
n, _ := newRec.MarshalTo(s.marshalBuf)
rc <- s.db.Put([]byte(key), s.marshalBuf[:n], nil)
}
databaseOperations.WithLabelValues(dbOpMerge, dbResSuccess).Inc()
databaseOperationSeconds.WithLabelValues(dbOpMerge).Observe(time.Since(t0).Seconds())
err := <-rc
if err != nil {
databaseOperations.WithLabelValues(dbOpMerge, dbResError).Inc()
} else {
databaseOperations.WithLabelValues(dbOpMerge, dbResSuccess).Inc()
}
return err
return nil
}
func (s *levelDBStore) get(key string) (DatabaseRecord, error) {
func (s *inMemoryStore) get(key *protocol.DeviceID) (DatabaseRecord, error) {
t0 := time.Now()
defer func() {
databaseOperationSeconds.WithLabelValues(dbOpGet).Observe(time.Since(t0).Seconds())
}()
keyBs := []byte(key)
val, err := s.db.Get(keyBs, nil)
if err == leveldb.ErrNotFound {
rec, ok := s.m.Load(*key)
if !ok {
databaseOperations.WithLabelValues(dbOpGet, dbResNotFound).Inc()
return DatabaseRecord{}, nil
}
if err != nil {
databaseOperations.WithLabelValues(dbOpGet, dbResError).Inc()
return DatabaseRecord{}, err
}
var rec DatabaseRecord
if err := rec.Unmarshal(val); err != nil {
databaseOperations.WithLabelValues(dbOpGet, dbResUnmarshalError).Inc()
return DatabaseRecord{}, nil
}
rec.Addresses = expire(rec.Addresses, s.clock.Now().UnixNano())
rec.Addresses = expire(rec.Addresses, s.clock.Now())
databaseOperations.WithLabelValues(dbOpGet, dbResSuccess).Inc()
return rec, nil
}
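
The new store leans on the typed concurrent map from puzpuzpuz/xsync. A tiny standalone sketch of the three operations it uses (Store, Load, Range), assuming the v3 API from the imports above, with string/int standing in for the real protocol.DeviceID and DatabaseRecord types:

package main

import (
	"fmt"

	"github.com/puzpuzpuz/xsync/v3"
)

func main() {
	// Typed, concurrency-safe map with no external locking required.
	m := xsync.NewMapOf[string, int]()
	m.Store("device-a", 2)
	if v, ok := m.Load("device-a"); ok {
		fmt.Println("device-a has", v, "addresses")
	}
	m.Range(func(k string, v int) bool {
		fmt.Println(k, v)
		return true // returning false would stop the iteration
	})
}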
func (s *levelDBStore) Serve(ctx context.Context) error {
t := time.NewTimer(0)
defer t.Stop()
defer s.db.Close()
func (s *inMemoryStore) Serve(ctx context.Context) error {
if s.flushInterval <= 0 {
<-ctx.Done()
return nil
}
// Start the statistics serve routine. It will exit with us when
// statisticsTrigger is closed.
statisticsTrigger := make(chan struct{})
statisticsDone := make(chan struct{})
go s.statisticsServe(statisticsTrigger, statisticsDone)
t := time.NewTimer(s.flushInterval)
defer t.Stop()
loop:
for {
select {
case fn := <-s.inbox:
// Run function in serialized order.
fn()
case <-t.C:
// Trigger the statistics routine to do its thing in the
// background.
statisticsTrigger <- struct{}{}
case <-statisticsDone:
// The statistics routine is done with one iteration, schedule
// the next.
t.Reset(databaseStatisticsInterval)
log.Println("Calculating statistics")
s.expireAndCalculateStatistics()
log.Println("Flushing database")
if err := s.write(); err != nil {
log.Println("Error writing database:", err)
}
log.Println("Finished flushing database")
t.Reset(s.flushInterval)
case <-ctx.Done():
// We're done.
close(statisticsTrigger)
break loop
}
}
// Also wait for statisticsServe to return
<-statisticsDone
return s.write()
}
func (s *inMemoryStore) expireAndCalculateStatistics() {
now := s.clock.Now()
cutoff24h := now.Add(-24 * time.Hour).UnixNano()
cutoff1w := now.Add(-7 * 24 * time.Hour).UnixNano()
current, currentIPv4, currentIPv6, currentIPv6GUA, last24h, last1w := 0, 0, 0, 0, 0, 0
n := 0
s.m.Range(func(key protocol.DeviceID, rec DatabaseRecord) bool {
if n%1000 == 0 {
runtime.Gosched()
}
n++
addresses := expire(rec.Addresses, now)
if len(addresses) == 0 {
rec.Addresses = nil
s.m.Store(key, rec)
} else if len(addresses) != len(rec.Addresses) {
rec.Addresses = addresses
s.m.Store(key, rec)
}
switch {
case len(rec.Addresses) > 0:
current++
seenIPv4, seenIPv6, seenIPv6GUA := false, false, false
for _, addr := range rec.Addresses {
// We do fast and loose matching on strings here instead of
// parsing the address and the IP and doing "proper" checks,
// to keep things fast and generate less garbage.
if strings.Contains(addr.Address, "[") {
seenIPv6 = true
if strings.Contains(addr.Address, "[2") {
seenIPv6GUA = true
}
} else {
seenIPv4 = true
}
if seenIPv4 && seenIPv6 && seenIPv6GUA {
break
}
}
if seenIPv4 {
currentIPv4++
}
if seenIPv6 {
currentIPv6++
}
if seenIPv6GUA {
currentIPv6GUA++
}
case rec.Seen > cutoff24h:
last24h++
case rec.Seen > cutoff1w:
last1w++
default:
// drop the record if it's older than a week
s.m.Delete(key)
}
return true
})
databaseKeys.WithLabelValues("current").Set(float64(current))
databaseKeys.WithLabelValues("currentIPv4").Set(float64(currentIPv4))
databaseKeys.WithLabelValues("currentIPv6").Set(float64(currentIPv6))
databaseKeys.WithLabelValues("currentIPv6GUA").Set(float64(currentIPv6GUA))
databaseKeys.WithLabelValues("last24h").Set(float64(last24h))
databaseKeys.WithLabelValues("last1w").Set(float64(last1w))
databaseStatisticsSeconds.Set(time.Since(now).Seconds())
}
func (s *inMemoryStore) write() (err error) {
t0 := time.Now()
defer func() {
if err == nil {
databaseWriteSeconds.Set(time.Since(t0).Seconds())
databaseLastWritten.Set(float64(t0.Unix()))
}
}()
dbf := path.Join(s.dir, "records.db")
fd, err := os.Create(dbf + ".tmp")
if err != nil {
return err
}
bw := bufio.NewWriter(fd)
var buf []byte
var rangeErr error
now := s.clock.Now()
cutoff1w := now.Add(-7 * 24 * time.Hour).UnixNano()
n := 0
s.m.Range(func(key protocol.DeviceID, value DatabaseRecord) bool {
if n%1000 == 0 {
runtime.Gosched()
}
n++
if value.Seen < cutoff1w {
// drop the record if it's older than a week
return true
}
rec := ReplicationRecord{
Key: key[:],
Addresses: value.Addresses,
Seen: value.Seen,
}
s := rec.Size()
if s+4 > len(buf) {
buf = make([]byte, s+4)
}
n, err := rec.MarshalTo(buf[4:])
if err != nil {
rangeErr = err
return false
}
binary.BigEndian.PutUint32(buf, uint32(n))
if _, err := bw.Write(buf[:n+4]); err != nil {
rangeErr = err
return false
}
return true
})
if rangeErr != nil {
_ = fd.Close()
return rangeErr
}
if err := bw.Flush(); err != nil {
_ = fd.Close()
return err
}
if err := fd.Close(); err != nil {
return err
}
if err := os.Rename(dbf+".tmp", dbf); err != nil {
return err
}
// Upload to S3
if s.s3 != nil {
fd, err = os.Open(dbf)
if err != nil {
log.Printf("Error uploading database to S3: %v", err)
return nil
}
defer fd.Close()
if err := s.s3.Upload(fd, s.objKey); err != nil {
log.Printf("Error uploading database to S3: %v", err)
}
log.Println("Finished uploading database")
}
return nil
}
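
write() above follows the usual atomic-replace pattern: write the new contents to a temporary file in the same directory, flush and close it, then rename it over the old file so readers never observe a half-written database. A minimal standalone sketch of just that pattern (file name is an example only):

package sketch

import (
	"os"
	"path/filepath"
)

// atomicWrite replaces dir/records.db with data without exposing a partially
// written file.
func atomicWrite(dir string, data []byte) error {
	dst := filepath.Join(dir, "records.db")
	tmp := dst + ".tmp"
	if err := os.WriteFile(tmp, data, 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, dst) // atomic on POSIX filesystems
}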
func (s *levelDBStore) statisticsServe(trigger <-chan struct{}, done chan<- struct{}) {
defer close(done)
func (s *inMemoryStore) read() (int, error) {
fd, err := os.Open(path.Join(s.dir, "records.db"))
if err != nil {
return 0, err
}
defer fd.Close()
for range trigger {
t0 := time.Now()
nowNanos := t0.UnixNano()
cutoff24h := t0.Add(-24 * time.Hour).UnixNano()
cutoff1w := t0.Add(-7 * 24 * time.Hour).UnixNano()
cutoff2Mon := t0.Add(-60 * 24 * time.Hour).UnixNano()
current, currentIPv4, currentIPv6, last24h, last1w, inactive, errors := 0, 0, 0, 0, 0, 0, 0
iter := s.db.NewIterator(&util.Range{}, nil)
for iter.Next() {
// Attempt to unmarshal the record and count the
// failure if there's something wrong with it.
var rec DatabaseRecord
if err := rec.Unmarshal(iter.Value()); err != nil {
errors++
continue
}
// If there are addresses that have not expired it's a current
// record, otherwise count it based on when it was last seen
// (last 24 hours or last week) or finally as inactive.
addrs := expire(rec.Addresses, nowNanos)
switch {
case len(addrs) > 0:
current++
seenIPv4, seenIPv6 := false, false
for _, addr := range addrs {
uri, err := url.Parse(addr.Address)
if err != nil {
continue
}
host, _, err := net.SplitHostPort(uri.Host)
if err != nil {
continue
}
if ip := net.ParseIP(host); ip != nil && ip.To4() != nil {
seenIPv4 = true
} else if ip != nil {
seenIPv6 = true
}
if seenIPv4 && seenIPv6 {
break
}
}
if seenIPv4 {
currentIPv4++
}
if seenIPv6 {
currentIPv6++
}
case rec.Seen > cutoff24h:
last24h++
case rec.Seen > cutoff1w:
last1w++
case rec.Seen > cutoff2Mon:
inactive++
case rec.Missed < cutoff2Mon:
// It hasn't been seen lately and we haven't recorded
// someone asking for this device in a long time either;
// delete the record.
if err := s.db.Delete(iter.Key(), nil); err != nil {
databaseOperations.WithLabelValues(dbOpDelete, dbResError).Inc()
} else {
databaseOperations.WithLabelValues(dbOpDelete, dbResSuccess).Inc()
}
default:
inactive++
br := bufio.NewReader(fd)
var buf []byte
nr := 0
for {
var n uint32
if err := binary.Read(br, binary.BigEndian, &n); err != nil {
if errors.Is(err, io.EOF) {
break
}
return nr, err
}
if int(n) > len(buf) {
buf = make([]byte, n)
}
if _, err := io.ReadFull(br, buf[:n]); err != nil {
return nr, err
}
rec := ReplicationRecord{}
if err := rec.Unmarshal(buf[:n]); err != nil {
return nr, err
}
key, err := protocol.DeviceIDFromBytes(rec.Key)
if err != nil {
key, err = protocol.DeviceIDFromString(string(rec.Key))
}
if err != nil {
log.Println("Bad device ID:", err)
continue
}
iter.Release()
databaseKeys.WithLabelValues("current").Set(float64(current))
databaseKeys.WithLabelValues("currentIPv4").Set(float64(currentIPv4))
databaseKeys.WithLabelValues("currentIPv6").Set(float64(currentIPv6))
databaseKeys.WithLabelValues("last24h").Set(float64(last24h))
databaseKeys.WithLabelValues("last1w").Set(float64(last1w))
databaseKeys.WithLabelValues("inactive").Set(float64(inactive))
databaseKeys.WithLabelValues("error").Set(float64(errors))
databaseStatisticsSeconds.Set(time.Since(t0).Seconds())
// Signal that we are done and can be scheduled again.
done <- struct{}{}
slices.SortFunc(rec.Addresses, DatabaseAddress.Cmp)
rec.Addresses = slices.CompactFunc(rec.Addresses, DatabaseAddress.Equal)
s.m.Store(key, DatabaseRecord{
Addresses: expire(rec.Addresses, s.clock.Now()),
Seen: rec.Seen,
})
nr++
}
return nr, nil
}
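
write() and read() above share a simple on-disk framing: a 4-byte big-endian length followed by that many bytes of marshalled ReplicationRecord. The helpers below sketch only the framing; the protobuf marshalling is assumed to happen elsewhere.

package sketch

import (
	"bufio"
	"encoding/binary"
	"io"
)

// writeFrame emits one length-prefixed record.
func writeFrame(w *bufio.Writer, rec []byte) error {
	var hdr [4]byte
	binary.BigEndian.PutUint32(hdr[:], uint32(len(rec)))
	if _, err := w.Write(hdr[:]); err != nil {
		return err
	}
	_, err := w.Write(rec)
	return err
}

// readFrame reads one length-prefixed record; io.EOF on the length read marks
// a clean end of stream.
func readFrame(r *bufio.Reader) ([]byte, error) {
	var hdr [4]byte
	if _, err := io.ReadFull(r, hdr[:]); err != nil {
		return nil, err
	}
	buf := make([]byte, binary.BigEndian.Uint32(hdr[:]))
	if _, err := io.ReadFull(r, buf); err != nil {
		return nil, err
	}
	return buf, nil
}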
// merge returns the merged result of the two database records a and b. The
// result is the union of the two address sets, with the newer expiry time
// chosen for any duplicates.
// chosen for any duplicates. The address list in a is overwritten and
// reused for the result.
func merge(a, b DatabaseRecord) DatabaseRecord {
// Both lists must be sorted for this to work.
if !sort.IsSorted(databaseAddressOrder(a.Addresses)) {
log.Println("Warning: bug: addresses not correctly sorted in merge")
a.Addresses = sortedAddressCopy(a.Addresses)
}
if !sort.IsSorted(databaseAddressOrder(b.Addresses)) {
// no warning because this is the side we read from disk and it may
// legitimately predate correct sorting.
b.Addresses = sortedAddressCopy(b.Addresses)
}
res := DatabaseRecord{
Addresses: make([]DatabaseAddress, 0, len(a.Addresses)+len(b.Addresses)),
Seen: a.Seen,
}
if b.Seen > a.Seen {
res.Seen = b.Seen
}
a.Seen = max(a.Seen, b.Seen)
aIdx := 0
bIdx := 0
aAddrs := a.Addresses
bAddrs := b.Addresses
loop:
for {
switch {
case aIdx == len(aAddrs) && bIdx == len(bAddrs):
// both lists are exhausted, we are done
break loop
case aIdx == len(aAddrs):
// a is exhausted, pick from b and continue
res.Addresses = append(res.Addresses, bAddrs[bIdx])
bIdx++
continue
case bIdx == len(bAddrs):
// b is exhausted, pick from a and continue
res.Addresses = append(res.Addresses, aAddrs[aIdx])
aIdx++
continue
}
// We have values left on both sides.
aVal := aAddrs[aIdx]
bVal := bAddrs[bIdx]
switch {
case aVal.Address == bVal.Address:
// update for same address, pick newer
if aVal.Expires > bVal.Expires {
res.Addresses = append(res.Addresses, aVal)
} else {
res.Addresses = append(res.Addresses, bVal)
}
for aIdx < len(a.Addresses) && bIdx < len(b.Addresses) {
switch cmp.Compare(a.Addresses[aIdx].Address, b.Addresses[bIdx].Address) {
case 0:
// a == b, choose the newer expiry time
a.Addresses[aIdx].Expires = max(a.Addresses[aIdx].Expires, b.Addresses[bIdx].Expires)
aIdx++
bIdx++
case aVal.Address < bVal.Address:
// a is smallest, pick it and continue
res.Addresses = append(res.Addresses, aVal)
case -1:
// a < b, keep a and move on
aIdx++
default:
// b is smallest, pick it and continue
res.Addresses = append(res.Addresses, bVal)
case 1:
// a > b, insert b before a
a.Addresses = append(a.Addresses[:aIdx], append([]DatabaseAddress{b.Addresses[bIdx]}, a.Addresses[aIdx:]...)...)
bIdx++
}
}
return res
if bIdx < len(b.Addresses) {
a.Addresses = append(a.Addresses, b.Addresses[bIdx:]...)
}
return a
}
// expire returns the list of addresses after removing expired entries.
// Expiration happens in place, so the slice given as the parameter is
// destroyed. Internal order is not preserved.
func expire(addrs []DatabaseAddress, now int64) []DatabaseAddress {
i := 0
for i < len(addrs) {
if addrs[i].Expires < now {
addrs = sliceutil.RemoveAndZero(addrs, i)
// destroyed. Internal order is preserved.
func expire(addrs []DatabaseAddress, now time.Time) []DatabaseAddress {
cutoff := now.UnixNano()
naddrs := addrs[:0]
for i := range addrs {
if i > 0 && addrs[i].Address == addrs[i-1].Address {
// Skip duplicates
continue
}
i++
if addrs[i].Expires >= cutoff {
naddrs = append(naddrs, addrs[i])
}
}
return addrs
if len(naddrs) == 0 {
return nil
}
return naddrs
}
func sortedAddressCopy(addrs []DatabaseAddress) []DatabaseAddress {
sorted := make([]DatabaseAddress, len(addrs))
copy(sorted, addrs)
sort.Sort(databaseAddressOrder(sorted))
return sorted
func (d DatabaseAddress) Cmp(other DatabaseAddress) (n int) {
if c := cmp.Compare(d.Address, other.Address); c != 0 {
return c
}
return cmp.Compare(d.Expires, other.Expires)
}
type databaseAddressOrder []DatabaseAddress
func (s databaseAddressOrder) Less(a, b int) bool {
return s[a].Address < s[b].Address
}
func (s databaseAddressOrder) Swap(a, b int) {
s[a], s[b] = s[b], s[a]
}
func (s databaseAddressOrder) Len() int {
return len(s)
func (d DatabaseAddress) Equal(other DatabaseAddress) bool {
return d.Address == other.Address
}


@@ -25,9 +25,7 @@ const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
type DatabaseRecord struct {
Addresses []DatabaseAddress `protobuf:"bytes,1,rep,name=addresses,proto3" json:"addresses"`
Misses int32 `protobuf:"varint,2,opt,name=misses,proto3" json:"misses,omitempty"`
Seen int64 `protobuf:"varint,3,opt,name=seen,proto3" json:"seen,omitempty"`
Missed int64 `protobuf:"varint,4,opt,name=missed,proto3" json:"missed,omitempty"`
}
func (m *DatabaseRecord) Reset() { *m = DatabaseRecord{} }
@@ -64,7 +62,7 @@ func (m *DatabaseRecord) XXX_DiscardUnknown() {
var xxx_messageInfo_DatabaseRecord proto.InternalMessageInfo
type ReplicationRecord struct {
Key string `protobuf:"bytes,1,opt,name=key,proto3" json:"key,omitempty"`
Key []byte `protobuf:"bytes,1,opt,name=key,proto3" json:"key,omitempty"`
Addresses []DatabaseAddress `protobuf:"bytes,2,rep,name=addresses,proto3" json:"addresses"`
Seen int64 `protobuf:"varint,3,opt,name=seen,proto3" json:"seen,omitempty"`
}
@@ -149,24 +147,23 @@ func init() {
func init() { proto.RegisterFile("database.proto", fileDescriptor_b90fe3356ea5df07) }
var fileDescriptor_b90fe3356ea5df07 = []byte{
// 270 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x94, 0x90, 0x41, 0x4a, 0xc4, 0x30,
0x18, 0x85, 0x9b, 0x49, 0x1d, 0x99, 0x08, 0xa3, 0x06, 0x94, 0x20, 0x12, 0x4b, 0xdd, 0x74, 0xd5,
0x01, 0x5d, 0xb9, 0x74, 0xd0, 0x0b, 0xe4, 0x06, 0xe9, 0xe4, 0x77, 0x08, 0x3a, 0x4d, 0x49, 0x2a,
0xe8, 0x29, 0xf4, 0x58, 0x5d, 0xce, 0xd2, 0x95, 0x68, 0x7b, 0x11, 0x69, 0x26, 0x55, 0x14, 0x37,
0xb3, 0x7b, 0xdf, 0xff, 0xbf, 0x97, 0xbc, 0x84, 0x4c, 0x95, 0xac, 0x65, 0x21, 0x1d, 0xe4, 0x95,
0x35, 0xb5, 0xa1, 0xf1, 0x4a, 0xea, 0xf2, 0xe4, 0xdc, 0x42, 0x65, 0xdc, 0xcc, 0x8f, 0x8a, 0xc7,
0xbb, 0xd9, 0xd2, 0x2c, 0x8d, 0x07, 0xaf, 0x36, 0xd6, 0xf4, 0x05, 0x91, 0xe9, 0x4d, 0x48, 0x0b,
0x58, 0x18, 0xab, 0xe8, 0x15, 0x99, 0x48, 0xa5, 0x2c, 0x38, 0x07, 0x8e, 0xa1, 0x04, 0x67, 0x7b,
0x17, 0x47, 0x79, 0x7f, 0x62, 0x3e, 0x18, 0xaf, 0x37, 0xeb, 0x79, 0xdc, 0xbc, 0x9f, 0x45, 0xe2,
0xc7, 0x4d, 0x8f, 0xc9, 0x78, 0xa5, 0x7d, 0x6e, 0x94, 0xa0, 0x6c, 0x47, 0x04, 0xa2, 0x94, 0xc4,
0x0e, 0xa0, 0x64, 0x38, 0x41, 0x19, 0x16, 0x5e, 0x7f, 0x7b, 0x15, 0x8b, 0xfd, 0x34, 0x50, 0x5a,
0x93, 0x43, 0x01, 0xd5, 0x83, 0x5e, 0xc8, 0x5a, 0x9b, 0x32, 0x74, 0x3a, 0x20, 0xf8, 0x1e, 0x9e,
0x19, 0x4a, 0x50, 0x36, 0x11, 0xbd, 0xfc, 0xdd, 0x72, 0xb4, 0x55, 0xcb, 0x7f, 0xda, 0xa4, 0xb7,
0x64, 0xff, 0x4f, 0x8e, 0x32, 0xb2, 0x1b, 0x32, 0xe1, 0xde, 0x01, 0xfb, 0x0d, 0x3c, 0x55, 0xda,
0x86, 0x77, 0x62, 0x31, 0xe0, 0xfc, 0xb4, 0xf9, 0xe4, 0x51, 0xd3, 0x72, 0xb4, 0x6e, 0x39, 0xfa,
0x68, 0x39, 0x7a, 0xed, 0x78, 0xb4, 0xee, 0x78, 0xf4, 0xd6, 0xf1, 0xa8, 0x18, 0xfb, 0x3f, 0xbf,
0xfc, 0x0a, 0x00, 0x00, 0xff, 0xff, 0x7a, 0xa2, 0xf6, 0x1e, 0xb0, 0x01, 0x00, 0x00,
// 243 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0xe2, 0x4b, 0x49, 0x2c, 0x49,
0x4c, 0x4a, 0x2c, 0x4e, 0xd5, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0x62, 0xc9, 0x4d, 0xcc, 0xcc,
0x93, 0x52, 0x2e, 0x4a, 0x2d, 0xc8, 0x2f, 0xd6, 0x07, 0x0b, 0x25, 0x95, 0xa6, 0xe9, 0xa7, 0xe7,
0xa7, 0xe7, 0x83, 0x39, 0x60, 0x16, 0x44, 0xa9, 0x52, 0x3c, 0x17, 0x9f, 0x0b, 0x54, 0x73, 0x50,
0x6a, 0x72, 0x7e, 0x51, 0x8a, 0x90, 0x25, 0x17, 0x67, 0x62, 0x4a, 0x4a, 0x51, 0x6a, 0x71, 0x71,
0x6a, 0xb1, 0x04, 0xa3, 0x02, 0xb3, 0x06, 0xb7, 0x91, 0xa8, 0x1e, 0xc8, 0x40, 0x3d, 0x98, 0x42,
0x47, 0x88, 0xb4, 0x13, 0xcb, 0x89, 0x7b, 0xf2, 0x0c, 0x41, 0x08, 0xd5, 0x42, 0x42, 0x5c, 0x2c,
0xc5, 0xa9, 0xa9, 0x79, 0x12, 0xcc, 0x0a, 0x8c, 0x1a, 0xcc, 0x41, 0x60, 0xb6, 0x52, 0x09, 0x97,
0x60, 0x50, 0x6a, 0x41, 0x4e, 0x66, 0x72, 0x62, 0x49, 0x66, 0x7e, 0x1e, 0xd4, 0x0e, 0x01, 0x2e,
0xe6, 0xec, 0xd4, 0x4a, 0x09, 0x46, 0x05, 0x46, 0x0d, 0x9e, 0x20, 0x10, 0x13, 0xd5, 0x56, 0x26,
0x8a, 0x6d, 0x75, 0xe5, 0xe2, 0x47, 0xd3, 0x27, 0x24, 0xc1, 0xc5, 0x0e, 0xd5, 0x03, 0xb6, 0x97,
0x33, 0x08, 0xc6, 0x05, 0xc9, 0xa4, 0x56, 0x14, 0x64, 0x16, 0x81, 0x6d, 0x06, 0x99, 0x01, 0xe3,
0x3a, 0xc9, 0x9c, 0x78, 0x28, 0xc7, 0x70, 0xe2, 0x91, 0x1c, 0xe3, 0x85, 0x47, 0x72, 0x8c, 0x0f,
0x1e, 0xc9, 0x31, 0x4e, 0x78, 0x2c, 0xc7, 0x70, 0xe1, 0xb1, 0x1c, 0xc3, 0x8d, 0xc7, 0x72, 0x0c,
0x49, 0x6c, 0xe0, 0x20, 0x34, 0x06, 0x04, 0x00, 0x00, 0xff, 0xff, 0xc6, 0x0b, 0x9b, 0x77, 0x7f,
0x01, 0x00, 0x00,
}
func (m *DatabaseRecord) Marshal() (dAtA []byte, err error) {
@@ -189,21 +186,11 @@ func (m *DatabaseRecord) MarshalToSizedBuffer(dAtA []byte) (int, error) {
_ = i
var l int
_ = l
if m.Missed != 0 {
i = encodeVarintDatabase(dAtA, i, uint64(m.Missed))
i--
dAtA[i] = 0x20
}
if m.Seen != 0 {
i = encodeVarintDatabase(dAtA, i, uint64(m.Seen))
i--
dAtA[i] = 0x18
}
if m.Misses != 0 {
i = encodeVarintDatabase(dAtA, i, uint64(m.Misses))
i--
dAtA[i] = 0x10
}
if len(m.Addresses) > 0 {
for iNdEx := len(m.Addresses) - 1; iNdEx >= 0; iNdEx-- {
{
@@ -328,15 +315,9 @@ func (m *DatabaseRecord) Size() (n int) {
n += 1 + l + sovDatabase(uint64(l))
}
}
if m.Misses != 0 {
n += 1 + sovDatabase(uint64(m.Misses))
}
if m.Seen != 0 {
n += 1 + sovDatabase(uint64(m.Seen))
}
if m.Missed != 0 {
n += 1 + sovDatabase(uint64(m.Missed))
}
return n
}
@@ -447,25 +428,6 @@ func (m *DatabaseRecord) Unmarshal(dAtA []byte) error {
return err
}
iNdEx = postIndex
case 2:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field Misses", wireType)
}
m.Misses = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowDatabase
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
m.Misses |= int32(b&0x7F) << shift
if b < 0x80 {
break
}
}
case 3:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field Seen", wireType)
@@ -485,25 +447,6 @@ func (m *DatabaseRecord) Unmarshal(dAtA []byte) error {
break
}
}
case 4:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field Missed", wireType)
}
m.Missed = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowDatabase
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
m.Missed |= int64(b&0x7F) << shift
if b < 0x80 {
break
}
}
default:
iNdEx = preIndex
skippy, err := skipDatabase(dAtA[iNdEx:])
@@ -558,7 +501,7 @@ func (m *ReplicationRecord) Unmarshal(dAtA []byte) error {
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Key", wireType)
}
var stringLen uint64
var byteLen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowDatabase
@@ -568,23 +511,25 @@ func (m *ReplicationRecord) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
byteLen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
if byteLen < 0 {
return ErrInvalidLengthDatabase
}
postIndex := iNdEx + intStringLen
postIndex := iNdEx + byteLen
if postIndex < 0 {
return ErrInvalidLengthDatabase
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Key = string(dAtA[iNdEx:postIndex])
m.Key = append(m.Key[:0], dAtA[iNdEx:postIndex]...)
if m.Key == nil {
m.Key = []byte{}
}
iNdEx = postIndex
case 2:
if wireType != 2 {


@@ -17,15 +17,11 @@ option (gogoproto.goproto_sizecache_all) = false;
message DatabaseRecord {
repeated DatabaseAddress addresses = 1 [(gogoproto.nullable) = false];
int32 misses = 2; // Number of lookups* without hits
int64 seen = 3; // Unix nanos, last device announce
int64 missed = 4; // Unix nanos, last* failed lookup
}
// *) Not every lookup results in a write, so may not be completely accurate
message ReplicationRecord {
string key = 1;
bytes key = 1; // raw 32 byte device ID
repeated DatabaseAddress addresses = 2 [(gogoproto.nullable) = false];
int64 seen = 3; // Unix nanos, last device announce
}


@@ -11,29 +11,25 @@ import (
"fmt"
"testing"
"time"
"github.com/syncthing/syncthing/lib/protocol"
)
func TestDatabaseGetSet(t *testing.T) {
db, err := newMemoryLevelDBStore()
if err != nil {
t.Fatal(err)
}
db := newInMemoryStore(t.TempDir(), 0, nil)
ctx, cancel := context.WithCancel(context.Background())
go db.Serve(ctx)
defer cancel()
// Check missing record
rec, err := db.get("abcd")
rec, err := db.get(&protocol.EmptyDeviceID)
if err != nil {
t.Error("not found should not be an error")
}
if len(rec.Addresses) != 0 {
t.Error("addresses should be empty")
}
if rec.Misses != 0 {
t.Error("missing should be zero")
}
// Set up a clock
@@ -46,13 +42,13 @@ func TestDatabaseGetSet(t *testing.T) {
rec.Addresses = []DatabaseAddress{
{Address: "tcp://1.2.3.4:5", Expires: tc.Now().Add(time.Minute).UnixNano()},
}
if err := db.put("abcd", rec); err != nil {
if err := db.put(&protocol.EmptyDeviceID, rec); err != nil {
t.Fatal(err)
}
// Verify it
rec, err = db.get("abcd")
rec, err = db.get(&protocol.EmptyDeviceID)
if err != nil {
t.Fatal(err)
}
@@ -72,13 +68,13 @@ func TestDatabaseGetSet(t *testing.T) {
addrs := []DatabaseAddress{
{Address: "tcp://6.7.8.9:0", Expires: tc.Now().Add(time.Minute).UnixNano()},
}
if err := db.merge("abcd", addrs, tc.Now().UnixNano()); err != nil {
if err := db.merge(&protocol.EmptyDeviceID, addrs, tc.Now().UnixNano()); err != nil {
t.Fatal(err)
}
// Verify it
rec, err = db.get("abcd")
rec, err = db.get(&protocol.EmptyDeviceID)
if err != nil {
t.Fatal(err)
}
@@ -101,7 +97,7 @@ func TestDatabaseGetSet(t *testing.T) {
// Verify it
rec, err = db.get("abcd")
rec, err = db.get(&protocol.EmptyDeviceID)
if err != nil {
t.Fatal(err)
}
@@ -114,40 +110,18 @@ func TestDatabaseGetSet(t *testing.T) {
t.Error("incorrect address")
}
// Put a record with misses
rec = DatabaseRecord{Misses: 42, Missed: tc.Now().UnixNano()}
if err := db.put("efgh", rec); err != nil {
t.Fatal(err)
}
// Verify it
rec, err = db.get("efgh")
if err != nil {
t.Fatal(err)
}
if len(rec.Addresses) != 0 {
t.Log(rec.Addresses)
t.Fatal("should have no addresses")
}
if rec.Misses != 42 {
t.Log(rec.Misses)
t.Error("incorrect misses")
}
// Set an address
addrs = []DatabaseAddress{
{Address: "tcp://6.7.8.9:0", Expires: tc.Now().Add(time.Minute).UnixNano()},
}
if err := db.merge("efgh", addrs, tc.Now().UnixNano()); err != nil {
if err := db.merge(&protocol.GlobalDeviceID, addrs, tc.Now().UnixNano()); err != nil {
t.Fatal(err)
}
// Verify it
rec, err = db.get("efgh")
rec, err = db.get(&protocol.GlobalDeviceID)
if err != nil {
t.Fatal(err)
}
@@ -155,10 +129,6 @@ func TestDatabaseGetSet(t *testing.T) {
t.Log(rec.Addresses)
t.Fatal("should have one address")
}
if rec.Misses != 0 {
t.Log(rec.Misses)
t.Error("should have no misses")
}
}
func TestFilter(t *testing.T) {
@@ -190,13 +160,95 @@ func TestFilter(t *testing.T) {
}
for _, tc := range cases {
res := expire(tc.a, 10)
res := expire(tc.a, time.Unix(0, 10))
if fmt.Sprint(res) != fmt.Sprint(tc.b) {
t.Errorf("Incorrect result %v, expected %v", res, tc.b)
}
}
}
func TestMerge(t *testing.T) {
cases := []struct {
a, b, res []DatabaseAddress
}{
{nil, nil, nil},
{
nil,
[]DatabaseAddress{{Address: "a", Expires: 10}},
[]DatabaseAddress{{Address: "a", Expires: 10}},
},
{
nil,
[]DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 10}, {Address: "c", Expires: 10}},
[]DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 10}, {Address: "c", Expires: 10}},
},
{
[]DatabaseAddress{{Address: "a", Expires: 10}},
[]DatabaseAddress{{Address: "a", Expires: 15}},
[]DatabaseAddress{{Address: "a", Expires: 15}},
},
{
[]DatabaseAddress{{Address: "a", Expires: 10}},
[]DatabaseAddress{{Address: "b", Expires: 15}},
[]DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 15}},
},
{
[]DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 15}},
[]DatabaseAddress{{Address: "a", Expires: 15}, {Address: "b", Expires: 15}},
[]DatabaseAddress{{Address: "a", Expires: 15}, {Address: "b", Expires: 15}},
},
{
[]DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 15}},
[]DatabaseAddress{{Address: "b", Expires: 15}, {Address: "c", Expires: 20}},
[]DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 15}, {Address: "c", Expires: 20}},
},
{
[]DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 15}},
[]DatabaseAddress{{Address: "b", Expires: 5}, {Address: "c", Expires: 20}},
[]DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 15}, {Address: "c", Expires: 20}},
},
{
[]DatabaseAddress{{Address: "y", Expires: 10}, {Address: "z", Expires: 10}},
[]DatabaseAddress{{Address: "a", Expires: 5}, {Address: "b", Expires: 15}},
[]DatabaseAddress{{Address: "a", Expires: 5}, {Address: "b", Expires: 15}, {Address: "y", Expires: 10}, {Address: "z", Expires: 10}},
},
{
[]DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 15}, {Address: "d", Expires: 10}},
[]DatabaseAddress{{Address: "b", Expires: 5}, {Address: "c", Expires: 20}},
[]DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 15}, {Address: "c", Expires: 20}, {Address: "d", Expires: 10}},
},
}
for _, tc := range cases {
rec := merge(DatabaseRecord{Addresses: tc.a}, DatabaseRecord{Addresses: tc.b})
if fmt.Sprint(rec.Addresses) != fmt.Sprint(tc.res) {
t.Errorf("Incorrect result %v, expected %v", rec.Addresses, tc.res)
}
rec = merge(DatabaseRecord{Addresses: tc.b}, DatabaseRecord{Addresses: tc.a})
if fmt.Sprint(rec.Addresses) != fmt.Sprint(tc.res) {
t.Errorf("Incorrect result %v, expected %v", rec.Addresses, tc.res)
}
}
}
func BenchmarkMergeEqual(b *testing.B) {
for i := 0; i < b.N; i++ {
ar := []DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 15}}
br := []DatabaseAddress{{Address: "a", Expires: 15}, {Address: "b", Expires: 10}}
res := merge(DatabaseRecord{Addresses: ar}, DatabaseRecord{Addresses: br})
if len(res.Addresses) != 2 {
b.Fatal("wrong length")
}
if res.Addresses[0].Address != "a" || res.Addresses[1].Address != "b" {
b.Fatal("wrong address")
}
if res.Addresses[0].Expires != 15 || res.Addresses[1].Expires != 15 {
b.Fatal("wrong expiry")
}
}
b.ReportAllocs() // should be zero per operation
}
type testClock struct {
now time.Time
}


@@ -9,20 +9,23 @@ package main
import (
"context"
"crypto/tls"
"flag"
"log"
"net"
"net/http"
"os"
"os/signal"
"runtime"
"strings"
"time"
_ "net/http/pprof"
"github.com/alecthomas/kong"
"github.com/prometheus/client_golang/prometheus/promhttp"
_ "github.com/syncthing/syncthing/lib/automaxprocs"
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/rand"
"github.com/syncthing/syncthing/lib/s3"
"github.com/syncthing/syncthing/lib/tlsutil"
"github.com/syndtr/goleveldb/leveldb/opt"
"github.com/thejerf/suture/v4"
)
@@ -37,17 +40,12 @@ const (
errorRetryAfterSeconds = 1500
errorRetryFuzzSeconds = 300
// Retry for not found is minSeconds + failures * incSeconds +
// random(fuzz), where failures is the number of consecutive lookups
// with no answer, up to maxSeconds. The fuzz is applied after capping
// to maxSeconds.
notFoundRetryMinSeconds = 60
notFoundRetryMaxSeconds = 3540
notFoundRetryIncSeconds = 10
notFoundRetryFuzzSeconds = 60
// How often (in requests) we serialize the missed counter to database.
notFoundMissesWriteInterval = 10
// Retry-After for not-found responses is adaptive, tracked separately
// for records we have seen an announcement for (but which are not
// active right now) and for records we have never seen (or not seen
// within the last week). The delay stays within the bounds below.
notFoundRetryUnknownMinSeconds = 60
notFoundRetryUnknownMaxSeconds = 3600
httpReadTimeout = 5 * time.Second
httpWriteTimeout = 5 * time.Second
@@ -57,164 +55,116 @@ const (
replicationOutboxSize = 10000
)
// These options make the database a little more optimized for writes, at
// the expense of some memory usage and risk of losing writes in a (system)
// crash.
var levelDBOptions = &opt.Options{
NoSync: true,
WriteBuffer: 32 << 20, // default 4<<20
}
var debug = false
type CLI struct {
Cert string `group:"Listen" help:"Certificate file" default:"./cert.pem" env:"DISCOVERY_CERT_FILE"`
Key string `group:"Listen" help:"Key file" default:"./key.pem" env:"DISCOVERY_KEY_FILE"`
HTTP bool `group:"Listen" help:"Listen on HTTP (behind an HTTPS proxy)" env:"DISCOVERY_HTTP"`
Compression bool `group:"Listen" help:"Enable GZIP compression of responses" env:"DISCOVERY_COMPRESSION"`
Listen string `group:"Listen" help:"Listen address" default:":8443" env:"DISCOVERY_LISTEN"`
MetricsListen string `group:"Listen" help:"Metrics listen address" env:"DISCOVERY_METRICS_LISTEN"`
DBDir string `group:"Database" help:"Database directory" default:"." env:"DISCOVERY_DB_DIR"`
DBFlushInterval time.Duration `group:"Database" help:"Interval between database flushes" default:"5m" env:"DISCOVERY_DB_FLUSH_INTERVAL"`
DBS3Endpoint string `name:"db-s3-endpoint" group:"Database (S3 backup)" hidden:"true" help:"S3 endpoint for database" env:"DISCOVERY_DB_S3_ENDPOINT"`
DBS3Region string `name:"db-s3-region" group:"Database (S3 backup)" hidden:"true" help:"S3 region for database" env:"DISCOVERY_DB_S3_REGION"`
DBS3Bucket string `name:"db-s3-bucket" group:"Database (S3 backup)" hidden:"true" help:"S3 bucket for database" env:"DISCOVERY_DB_S3_BUCKET"`
DBS3AccessKeyID string `name:"db-s3-access-key-id" group:"Database (S3 backup)" hidden:"true" help:"S3 access key ID for database" env:"DISCOVERY_DB_S3_ACCESS_KEY_ID"`
DBS3SecretKey string `name:"db-s3-secret-key" group:"Database (S3 backup)" hidden:"true" help:"S3 secret key for database" env:"DISCOVERY_DB_S3_SECRET_KEY"`
AMQPAddress string `group:"AMQP replication" hidden:"true" help:"Address to AMQP broker" env:"DISCOVERY_AMQP_ADDRESS"`
Debug bool `short:"d" help:"Print debug output" env:"DISCOVERY_DEBUG"`
Version bool `short:"v" help:"Print version and exit"`
}
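
The CLI struct above is turned into flags and environment variables by kong. A much smaller, self-contained sketch of the same mechanism follows; the flag names and env variable here are invented for the example.

package main

import (
	"fmt"

	"github.com/alecthomas/kong"
)

type miniCLI struct {
	Listen string `help:"Listen address" default:":8443" env:"EXAMPLE_LISTEN"`
	Debug  bool   `short:"d" help:"Print debug output"`
}

func main() {
	var cli miniCLI
	kong.Parse(&cli) // fills cli from args and the environment
	fmt.Println("listening on", cli.Listen, "debug:", cli.Debug)
}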
func main() {
var listen string
var dir string
var metricsListen string
var replicationListen string
var replicationPeers string
var certFile string
var keyFile string
var replCertFile string
var replKeyFile string
var useHTTP bool
var largeDB bool
log.SetOutput(os.Stdout)
log.SetFlags(0)
flag.StringVar(&certFile, "cert", "./cert.pem", "Certificate file")
flag.StringVar(&keyFile, "key", "./key.pem", "Key file")
flag.StringVar(&dir, "db-dir", "./discovery.db", "Database directory")
flag.BoolVar(&debug, "debug", false, "Print debug output")
flag.BoolVar(&useHTTP, "http", false, "Listen on HTTP (behind an HTTPS proxy)")
flag.StringVar(&listen, "listen", ":8443", "Listen address")
flag.StringVar(&metricsListen, "metrics-listen", "", "Metrics listen address")
flag.StringVar(&replicationPeers, "replicate", "", "Replication peers, id@address, comma separated")
flag.StringVar(&replicationListen, "replication-listen", ":19200", "Replication listen address")
flag.StringVar(&replCertFile, "replication-cert", "", "Certificate file for replication")
flag.StringVar(&replKeyFile, "replication-key", "", "Key file for replication")
flag.BoolVar(&largeDB, "large-db", false, "Use larger database settings")
showVersion := flag.Bool("version", false, "Show version")
flag.Parse()
var cli CLI
kong.Parse(&cli)
debug = cli.Debug
log.Println(build.LongVersionFor("stdiscosrv"))
if *showVersion {
if cli.Version {
return
}
buildInfo.WithLabelValues(build.Version, runtime.Version(), build.User, build.Date.UTC().Format("2006-01-02T15:04:05Z")).Set(1)
if largeDB {
levelDBOptions.BlockCacheCapacity = 64 << 20
levelDBOptions.BlockSize = 64 << 10
levelDBOptions.CompactionTableSize = 16 << 20
levelDBOptions.CompactionTableSizeMultiplier = 2.0
levelDBOptions.WriteBuffer = 64 << 20
levelDBOptions.CompactionL0Trigger = 8
}
cert, err := tls.LoadX509KeyPair(certFile, keyFile)
if os.IsNotExist(err) {
log.Println("Failed to load keypair. Generating one, this might take a while...")
cert, err = tlsutil.NewCertificate(certFile, keyFile, "stdiscosrv", 20*365)
if err != nil {
log.Fatalln("Failed to generate X509 key pair:", err)
}
} else if err != nil {
log.Fatalln("Failed to load keypair:", err)
}
devID := protocol.NewDeviceID(cert.Certificate[0])
log.Println("Server device ID is", devID)
replCert := cert
if replCertFile != "" && replKeyFile != "" {
replCert, err = tls.LoadX509KeyPair(replCertFile, replKeyFile)
if err != nil {
log.Fatalln("Failed to load replication keypair:", err)
}
}
replDevID := protocol.NewDeviceID(replCert.Certificate[0])
log.Println("Replication device ID is", replDevID)
// Parse the replication specs, if any.
var allowedReplicationPeers []protocol.DeviceID
var replicationDestinations []string
parts := strings.Split(replicationPeers, ",")
for _, part := range parts {
if part == "" {
continue
}
fields := strings.Split(part, "@")
switch len(fields) {
case 2:
// This is an id@address specification. Grab the address for the
// destination list. Try to resolve it once to catch obvious
// syntax errors here rather than having the sender service fail
// repeatedly later.
_, err := net.ResolveTCPAddr("tcp", fields[1])
var cert tls.Certificate
if !cli.HTTP {
var err error
cert, err = tls.LoadX509KeyPair(cli.Cert, cli.Key)
if os.IsNotExist(err) {
log.Println("Failed to load keypair. Generating one, this might take a while...")
cert, err = tlsutil.NewCertificate(cli.Cert, cli.Key, "stdiscosrv", 20*365)
if err != nil {
log.Fatalln("Resolving address:", err)
log.Fatalln("Failed to generate X509 key pair:", err)
}
replicationDestinations = append(replicationDestinations, fields[1])
fallthrough // N.B.
case 1:
// The first part is always a device ID.
id, err := protocol.DeviceIDFromString(fields[0])
if err != nil {
log.Fatalln("Parsing device ID:", err)
}
if id == protocol.EmptyDeviceID {
log.Fatalf("Missing device ID for peer in %q", part)
}
allowedReplicationPeers = append(allowedReplicationPeers, id)
default:
log.Fatalln("Unrecognized replication spec:", part)
} else if err != nil {
log.Fatalln("Failed to load keypair:", err)
}
devID := protocol.NewDeviceID(cert.Certificate[0])
log.Println("Server device ID is", devID)
}
// Root of the service tree.
main := suture.New("main", suture.Spec{
PassThroughPanics: true,
Timeout: 2 * time.Minute,
})
// Start the database.
db, err := newLevelDBStore(dir)
if err != nil {
log.Fatalln("Open database:", err)
// If configured, use S3 for database backups.
var s3c *s3.Session
if cli.DBS3Endpoint != "" {
var err error
s3c, err = s3.NewSession(cli.DBS3Endpoint, cli.DBS3Region, cli.DBS3Bucket, cli.DBS3AccessKeyID, cli.DBS3SecretKey)
if err != nil {
log.Fatalf("Failed to create S3 session: %v", err)
}
}
// Start the database.
db := newInMemoryStore(cli.DBDir, cli.DBFlushInterval, s3c)
main.Add(db)
// Start any replication senders.
var repl replicationMultiplexer
for _, dst := range replicationDestinations {
rs := newReplicationSender(dst, replCert, allowedReplicationPeers)
main.Add(rs)
repl = append(repl, rs)
}
// If we have replication configured, start the replication listener.
if len(allowedReplicationPeers) > 0 {
rl := newReplicationListener(replicationListen, replCert, allowedReplicationPeers, db)
main.Add(rl)
// If we have an AMQP broker for replication, start that
var repl replicator
if cli.AMQPAddress != "" {
clientID := rand.String(10)
kr := newAMQPReplicator(cli.AMQPAddress, clientID, db)
main.Add(kr)
repl = kr
}
// Start the main API server.
qs := newAPISrv(listen, cert, db, repl, useHTTP)
qs := newAPISrv(cli.Listen, cert, db, repl, cli.HTTP, cli.Compression)
main.Add(qs)
// If we have a metrics port configured, start a metrics handler.
if metricsListen != "" {
if cli.MetricsListen != "" {
go func() {
mux := http.NewServeMux()
mux.Handle("/metrics", promhttp.Handler())
log.Fatal(http.ListenAndServe(metricsListen, mux))
log.Fatal(http.ListenAndServe(cli.MetricsListen, mux))
}()
}
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// Cancel on signal
signalChan := make(chan os.Signal, 1)
signal.Notify(signalChan, os.Interrupt)
go func() {
sig := <-signalChan
log.Printf("Received signal %s; shutting down", sig)
cancel()
}()
// Engage!
main.Serve(context.Background())
main.Serve(ctx)
}
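
The signal handling added to main above can equivalently be written with signal.NotifyContext (standard library, Go 1.16+). This standalone sketch is an illustration of that alternative, not a proposed change to the diff.

package main

import (
	"context"
	"log"
	"os"
	"os/signal"
)

func main() {
	ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt)
	defer stop()
	<-ctx.Done() // a service tree would be served with this ctx instead
	log.Println("received interrupt; shutting down")
}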


@@ -1,324 +0,0 @@
// Copyright (C) 2018 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"context"
"crypto/tls"
"encoding/binary"
"fmt"
io "io"
"log"
"net"
"time"
"github.com/syncthing/syncthing/lib/protocol"
)
const (
replicationReadTimeout = time.Minute
replicationWriteTimeout = 30 * time.Second
replicationHeartbeatInterval = time.Second * 30
)
type replicator interface {
send(key string, addrs []DatabaseAddress, seen int64)
}
// a replicationSender tries to connect to the remote address and provide
// them with a feed of replication updates.
type replicationSender struct {
dst string
cert tls.Certificate // our certificate
allowedIDs []protocol.DeviceID
outbox chan ReplicationRecord
}
func newReplicationSender(dst string, cert tls.Certificate, allowedIDs []protocol.DeviceID) *replicationSender {
return &replicationSender{
dst: dst,
cert: cert,
allowedIDs: allowedIDs,
outbox: make(chan ReplicationRecord, replicationOutboxSize),
}
}
func (s *replicationSender) Serve(ctx context.Context) error {
// Sleep a little at startup. Peers often restart at the same time, and
// this avoids the service failing and entering backoff state
// unnecessarily, while also reducing the reconnect rate to something
// reasonable by default.
time.Sleep(2 * time.Second)
tlsCfg := &tls.Config{
Certificates: []tls.Certificate{s.cert},
MinVersion: tls.VersionTLS12,
InsecureSkipVerify: true,
}
// Dial the TLS connection.
conn, err := tls.Dial("tcp", s.dst, tlsCfg)
if err != nil {
log.Println("Replication connect:", err)
return err
}
defer func() {
conn.SetWriteDeadline(time.Now().Add(time.Second))
conn.Close()
}()
// The replication stream is not especially latency sensitive, but it is
// quite a lot of data in small writes. Make it more efficient.
if tcpc, ok := conn.NetConn().(*net.TCPConn); ok {
_ = tcpc.SetNoDelay(false)
}
// Get the other side device ID.
remoteID, err := deviceID(conn)
if err != nil {
log.Println("Replication connect:", err)
return err
}
// Verify it's in the set of allowed device IDs.
if !deviceIDIn(remoteID, s.allowedIDs) {
log.Println("Replication connect: unexpected device ID:", remoteID)
return err
}
heartBeatTicker := time.NewTicker(replicationHeartbeatInterval)
defer heartBeatTicker.Stop()
// Send records.
buf := make([]byte, 1024)
for {
select {
case <-heartBeatTicker.C:
if len(s.outbox) > 0 {
// No need to send heartbeats if there are events/previous
// heartbeats to send; they will keep the connection alive.
continue
}
// Empty replication message is the heartbeat:
s.outbox <- ReplicationRecord{}
case rec := <-s.outbox:
// Buffer must hold record plus four bytes for size
size := rec.Size()
if len(buf) < size+4 {
buf = make([]byte, size+4)
}
// Record comes after the four bytes size
n, err := rec.MarshalTo(buf[4:])
if err != nil {
// odd to get an error here, but we haven't sent anything
// yet so it's not fatal
replicationSendsTotal.WithLabelValues("error").Inc()
log.Println("Replication marshal:", err)
continue
}
binary.BigEndian.PutUint32(buf, uint32(n))
// Send
conn.SetWriteDeadline(time.Now().Add(replicationWriteTimeout))
if _, err := conn.Write(buf[:4+n]); err != nil {
replicationSendsTotal.WithLabelValues("error").Inc()
log.Println("Replication write:", err)
// Yes, we are losing the replication event here.
return err
}
replicationSendsTotal.WithLabelValues("success").Inc()
case <-ctx.Done():
return nil
}
}
}
func (s *replicationSender) String() string {
return fmt.Sprintf("replicationSender(%q)", s.dst)
}
func (s *replicationSender) send(key string, ps []DatabaseAddress, _ int64) {
item := ReplicationRecord{
Key: key,
Addresses: ps,
}
// The send should never block. The inbox is suitably buffered for at
// least a few seconds of stalls, which shouldn't happen in practice.
select {
case s.outbox <- item:
default:
replicationSendsTotal.WithLabelValues("drop").Inc()
}
}
// a replicationMultiplexer sends to multiple replicators
type replicationMultiplexer []replicator
func (m replicationMultiplexer) send(key string, ps []DatabaseAddress, seen int64) {
for _, s := range m {
// each send is nonblocking
s.send(key, ps, seen)
}
}
// replicationListener accepts incoming connections and reads replication
// items from them. Incoming items are applied to the KV store.
type replicationListener struct {
addr string
cert tls.Certificate
allowedIDs []protocol.DeviceID
db database
}
func newReplicationListener(addr string, cert tls.Certificate, allowedIDs []protocol.DeviceID, db database) *replicationListener {
return &replicationListener{
addr: addr,
cert: cert,
allowedIDs: allowedIDs,
db: db,
}
}
func (l *replicationListener) Serve(ctx context.Context) error {
tlsCfg := &tls.Config{
Certificates: []tls.Certificate{l.cert},
ClientAuth: tls.RequestClientCert,
MinVersion: tls.VersionTLS12,
InsecureSkipVerify: true,
}
lst, err := tls.Listen("tcp", l.addr, tlsCfg)
if err != nil {
log.Println("Replication listen:", err)
return err
}
defer lst.Close()
for {
select {
case <-ctx.Done():
return nil
default:
}
// Accept a connection
conn, err := lst.Accept()
if err != nil {
log.Println("Replication accept:", err)
return err
}
// Figure out the other side device ID
remoteID, err := deviceID(conn.(*tls.Conn))
if err != nil {
log.Println("Replication accept:", err)
conn.SetWriteDeadline(time.Now().Add(time.Second))
conn.Close()
continue
}
// Verify it is in the set of allowed device IDs
if !deviceIDIn(remoteID, l.allowedIDs) {
log.Println("Replication accept: unexpected device ID:", remoteID)
conn.SetWriteDeadline(time.Now().Add(time.Second))
conn.Close()
continue
}
go l.handle(ctx, conn)
}
}
func (l *replicationListener) String() string {
return fmt.Sprintf("replicationListener(%q)", l.addr)
}
func (l *replicationListener) handle(ctx context.Context, conn net.Conn) {
defer func() {
conn.SetWriteDeadline(time.Now().Add(time.Second))
conn.Close()
}()
buf := make([]byte, 1024)
for {
select {
case <-ctx.Done():
return
default:
}
conn.SetReadDeadline(time.Now().Add(replicationReadTimeout))
// First four bytes are the size
if _, err := io.ReadFull(conn, buf[:4]); err != nil {
log.Println("Replication read size:", err)
replicationRecvsTotal.WithLabelValues("error").Inc()
return
}
// Read the rest of the record
size := int(binary.BigEndian.Uint32(buf[:4]))
if len(buf) < size {
buf = make([]byte, size)
}
if size == 0 {
// Heartbeat, ignore
continue
}
if _, err := io.ReadFull(conn, buf[:size]); err != nil {
log.Println("Replication read record:", err)
replicationRecvsTotal.WithLabelValues("error").Inc()
return
}
// Unmarshal
var rec ReplicationRecord
if err := rec.Unmarshal(buf[:size]); err != nil {
log.Println("Replication unmarshal:", err)
replicationRecvsTotal.WithLabelValues("error").Inc()
continue
}
// Store
l.db.merge(rec.Key, rec.Addresses, rec.Seen)
replicationRecvsTotal.WithLabelValues("success").Inc()
}
}
func deviceID(conn *tls.Conn) (protocol.DeviceID, error) {
// Handshake may not be complete on the server side yet, which we need
// to get the client certificate.
if !conn.ConnectionState().HandshakeComplete {
if err := conn.Handshake(); err != nil {
return protocol.DeviceID{}, err
}
}
// We expect exactly one certificate.
certs := conn.ConnectionState().PeerCertificates
if len(certs) != 1 {
return protocol.DeviceID{}, fmt.Errorf("unexpected number of certificates (%d != 1)", len(certs))
}
return protocol.NewDeviceID(certs[0].Raw), nil
}
func deviceIDIn(id protocol.DeviceID, ids []protocol.DeviceID) bool {
for _, candidate := range ids {
if id == candidate {
return true
}
}
return false
}


@@ -7,10 +7,7 @@
package main
import (
"os"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/collectors"
)
var (
@@ -99,13 +96,28 @@ var (
Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001},
}, []string{"operation"})
retryAfterHistogram = prometheus.NewHistogram(prometheus.HistogramOpts{
Namespace: "syncthing",
Subsystem: "discovery",
Name: "retry_after_seconds",
Help: "Retry-After header value in seconds.",
Buckets: prometheus.ExponentialBuckets(60, 2, 7), // 60, 120, 240, 480, 960, 1920, 3840
})
databaseWriteSeconds = prometheus.NewGauge(
prometheus.GaugeOpts{
Namespace: "syncthing",
Subsystem: "discovery",
Name: "database_write_seconds",
Help: "Time spent writing the database.",
})
databaseLastWritten = prometheus.NewGauge(
prometheus.GaugeOpts{
Namespace: "syncthing",
Subsystem: "discovery",
Name: "database_last_written",
Help: "Timestamp of the last successful database write.",
})
retryAfterLevel = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Namespace: "syncthing",
Subsystem: "discovery",
Name: "retry_after_seconds",
Help: "Retry-After header value in seconds.",
}, []string{"name"})
)
const (
@@ -126,16 +138,6 @@ func init() {
replicationSendsTotal, replicationRecvsTotal,
databaseKeys, databaseStatisticsSeconds,
databaseOperations, databaseOperationSeconds,
retryAfterHistogram)
processCollectorOpts := collectors.ProcessCollectorOpts{
Namespace: "syncthing_discovery",
PidFn: func() (int, error) {
return os.Getpid(), nil
},
}
prometheus.MustRegister(
collectors.NewProcessCollector(processCollectorOpts),
)
databaseWriteSeconds, databaseLastWritten,
retryAfterLevel)
}
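As a rough illustration (not code from this change), the two new gauges could be fed from the database write path along these lines; `write` stands in for the actual persistence call, and the gauges are re-declared here only to keep the sketch self-contained.

```go
package main

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

var (
	databaseWriteSeconds = prometheus.NewGauge(prometheus.GaugeOpts{
		Namespace: "syncthing", Subsystem: "discovery", Name: "database_write_seconds",
	})
	databaseLastWritten = prometheus.NewGauge(prometheus.GaugeOpts{
		Namespace: "syncthing", Subsystem: "discovery", Name: "database_last_written",
	})
)

// timedWrite runs write and, on success, records its duration and the
// completion timestamp in the two gauges.
func timedWrite(write func() error) error {
	start := time.Now()
	if err := write(); err != nil {
		return err
	}
	databaseWriteSeconds.Set(time.Since(start).Seconds())
	databaseLastWritten.Set(float64(time.Now().Unix()))
	return nil
}
```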

View File

@@ -19,19 +19,18 @@ import (
"syscall"
"time"
_ "github.com/syncthing/syncthing/lib/automaxprocs"
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/nat"
"github.com/syncthing/syncthing/lib/osutil"
_ "github.com/syncthing/syncthing/lib/pmp"
syncthingprotocol "github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/relay/protocol"
"github.com/syncthing/syncthing/lib/tlsutil"
"golang.org/x/time/rate"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/nat"
_ "github.com/syncthing/syncthing/lib/pmp"
_ "github.com/syncthing/syncthing/lib/upnp"
syncthingprotocol "github.com/syncthing/syncthing/lib/protocol"
"golang.org/x/time/rate"
)
var (

View File

@@ -68,7 +68,6 @@ func findSession(key string) *session {
ses, ok := pendingSessions[key]
if !ok {
return nil
}
delete(pendingSessions, key)
return ses

View File

@@ -14,6 +14,7 @@ import (
"path/filepath"
"time"
_ "github.com/syncthing/syncthing/lib/automaxprocs"
syncthingprotocol "github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/relay/client"
"github.com/syncthing/syncthing/lib/relay/protocol"

View File

@@ -1,148 +0,0 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"bytes"
"encoding/json"
"fmt"
"io"
"log"
"net/http"
"os"
"sort"
"strings"
"time"
"github.com/alecthomas/kong"
"github.com/syncthing/syncthing/lib/httpcache"
"github.com/syncthing/syncthing/lib/upgrade"
)
type cli struct {
Listen string `default:":8080" help:"Listen address"`
URL string `short:"u" default:"https://api.github.com/repos/syncthing/syncthing/releases?per_page=25" help:"GitHub releases url"`
Forward []string `short:"f" help:"Forwarded pages, format: /path->https://example/com/url"`
CacheTime time.Duration `default:"15m" help:"Cache time"`
}
func main() {
var params cli
kong.Parse(&params)
if err := server(&params); err != nil {
fmt.Printf("Error: %v\n", err)
os.Exit(1)
}
}
func server(params *cli) error {
http.Handle("/meta.json", httpcache.SinglePath(&githubReleases{url: params.URL}, params.CacheTime))
for _, fwd := range params.Forward {
path, url, ok := strings.Cut(fwd, "->")
if !ok {
return fmt.Errorf("invalid forward: %q", fwd)
}
http.Handle(path, httpcache.SinglePath(&proxy{url: url}, params.CacheTime))
}
return http.ListenAndServe(params.Listen, nil)
}
type githubReleases struct {
url string
}
func (p *githubReleases) ServeHTTP(w http.ResponseWriter, _ *http.Request) {
log.Println("Fetching", p.url)
rels := upgrade.FetchLatestReleases(p.url, "")
if rels == nil {
http.Error(w, "no releases", http.StatusInternalServerError)
return
}
sort.Sort(upgrade.SortByRelease(rels))
rels = filterForLatest(rels)
// Move the URL used for browser downloads to the URL field, and remove
// the browser URL field. This avoids going via the GitHub API for
// downloads, since Syncthing uses the URL field.
for _, rel := range rels {
for j, asset := range rel.Assets {
rel.Assets[j].URL = asset.BrowserURL
rel.Assets[j].BrowserURL = ""
}
}
buf := new(bytes.Buffer)
_ = json.NewEncoder(buf).Encode(rels)
w.Header().Set("Content-Type", "application/json; charset=utf-8")
w.Header().Set("Access-Control-Allow-Origin", "*")
w.Header().Set("Access-Control-Allow-Methods", "GET")
w.Write(buf.Bytes())
}
type proxy struct {
url string
}
func (p *proxy) ServeHTTP(w http.ResponseWriter, req *http.Request) {
log.Println("Fetching", p.url)
req, err := http.NewRequestWithContext(req.Context(), http.MethodGet, p.url, nil)
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
resp, err := http.DefaultClient.Do(req)
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
defer resp.Body.Close()
ct := resp.Header.Get("Content-Type")
w.Header().Set("Content-Type", ct)
if resp.StatusCode == http.StatusOK {
w.Header().Set("Cache-Control", "public, max-age=900")
w.Header().Set("Access-Control-Allow-Origin", "*")
w.Header().Set("Access-Control-Allow-Methods", "GET")
}
w.WriteHeader(resp.StatusCode)
if strings.HasPrefix(ct, "application/json") {
// Special JSON handling; clean it up a bit.
var v interface{}
if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
_ = json.NewEncoder(w).Encode(v)
} else {
_, _ = io.Copy(w, resp.Body)
}
}
// filterForLatest returns the latest stable and prerelease only. If the
// stable version is newer (comes first in the list) there is no need to go
// looking for a prerelease at all.
func filterForLatest(rels []upgrade.Release) []upgrade.Release {
var filtered []upgrade.Release
var havePre bool
for _, rel := range rels {
if !rel.Prerelease {
// We found a stable version, we're good now.
filtered = append(filtered, rel)
break
}
if rel.Prerelease && !havePre {
// We remember the first prerelease we find.
filtered = append(filtered, rel)
havePre = true
}
}
return filtered
}
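A hypothetical check of that behaviour (not a test from the repository): given a newest-first list whose head is a prerelease, the filter keeps that prerelease plus the first stable release and drops everything after it.

```go
package main

import (
	"testing"

	"github.com/syncthing/syncthing/lib/upgrade"
)

func TestFilterForLatest(t *testing.T) {
	rels := []upgrade.Release{
		{Prerelease: true}, // newest, a prerelease
		{},                 // first stable release
		{},                 // older stable, should be dropped
	}
	got := filterForLatest(rels)
	if len(got) != 2 || !got[0].Prerelease || got[1].Prerelease {
		t.Fatalf("unexpected result: %+v", got)
	}
}
```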

View File

@@ -12,7 +12,7 @@ import (
"os"
"github.com/alecthomas/kong"
"github.com/flynn-archive/go-shlex"
"github.com/kballard/go-shellquote"
"github.com/syncthing/syncthing/cmd/syncthing/cmdutil"
"github.com/syncthing/syncthing/lib/config"
@@ -67,7 +67,7 @@ func (*stdinCommand) Run() error {
fmt.Println("Reading commands from stdin...", args)
scanner := bufio.NewScanner(os.Stdin)
for scanner.Scan() {
input, err := shlex.Split(scanner.Text())
input, err := shellquote.Split(scanner.Text())
if err != nil {
return fmt.Errorf("parsing input: %w", err)
}
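The swap from go-shlex to go-shellquote keeps the same call shape; a minimal sketch of the replacement parser, which splits a line into words using /bin/sh-style quoting rules (the example command line is made up for illustration):

```go
package main

import (
	"fmt"

	"github.com/kballard/go-shellquote"
)

func main() {
	// Quoted arguments survive as single words.
	words, err := shellquote.Split(`config devices add --name "my laptop"`)
	if err != nil {
		panic(err)
	}
	fmt.Println(words) // [config devices add --name my laptop]
}
```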

View File

@@ -9,6 +9,7 @@ package main
import (
"bytes"
"context"
"crypto/sha256"
"fmt"
"net/http"
"os"
@@ -16,8 +17,6 @@ import (
"sort"
"strings"
"time"
"github.com/syncthing/syncthing/lib/sha256"
)
const (

View File

@@ -10,6 +10,4 @@ import (
"github.com/syncthing/syncthing/lib/logger"
)
var (
l = logger.DefaultLogger.NewFacility("main", "Main package")
)
var l = logger.DefaultLogger.NewFacility("main", "Main package")

View File

@@ -22,7 +22,6 @@ import (
"path"
"path/filepath"
"regexp"
"runtime"
"runtime/pprof"
"sort"
"strconv"
@@ -31,7 +30,9 @@ import (
"time"
"github.com/alecthomas/kong"
_ "github.com/syncthing/syncthing/lib/automaxprocs"
"github.com/thejerf/suture/v4"
"github.com/willabides/kongplete"
"github.com/syncthing/syncthing/cmd/syncthing/cli"
"github.com/syncthing/syncthing/cmd/syncthing/cmdutil"
@@ -91,11 +92,6 @@ above.
STLOCKTHRESHOLD Used for debugging internal deadlocks; sets debug
sensitivity. Use only under direction of a developer.
STHASHING Select the SHA256 hashing package to use. Possible values
are "standard" for the Go standard library implementation,
"minio" for the github.com/minio/sha256-simd implementation,
and blank (the default) for auto detection.
STVERSIONEXTRA Add extra information to the version string in logs and the
version line in the GUI. Can be set to the name of a wrapper
or tool controlling syncthing to communicate this to the end
@@ -133,10 +129,11 @@ var (
// commands and options here are top level commands to syncthing.
// Cli is just a placeholder for the help text (see main).
var entrypoint struct {
Serve serveOptions `cmd:"" help:"Run Syncthing"`
Generate generate.CLI `cmd:"" help:"Generate key and config, then exit"`
Decrypt decrypt.CLI `cmd:"" help:"Decrypt or verify an encrypted folder"`
Cli cli.CLI `cmd:"" help:"Command line interface for Syncthing"`
Serve serveOptions `cmd:"" help:"Run Syncthing"`
Generate generate.CLI `cmd:"" help:"Generate key and config, then exit"`
Decrypt decrypt.CLI `cmd:"" help:"Decrypt or verify an encrypted folder"`
Cli cli.CLI `cmd:"" help:"Command line interface for Syncthing"`
InstallCompletions kongplete.InstallCompletions `cmd:"" help:"Print commands to install shell completions"`
}
// serveOptions are the options for the `syncthing serve` command.
@@ -247,6 +244,7 @@ func main() {
log.Fatal(err)
}
kongplete.Complete(parser)
ctx, err := parser.Parse(args)
parser.FatalIfErrorf(err)
ctx.BindTo(l, (*logger.Logger)(nil)) // main logger available to subcommands
@@ -648,10 +646,6 @@ func syncthingMain(options serveOptions) {
setupSignalHandling(app)
if os.Getenv("GOMAXPROCS") == "" {
runtime.GOMAXPROCS(runtime.NumCPU())
}
if options.DebugProfileCPU {
f, err := os.Create(fmt.Sprintf("cpu-%d.pprof", os.Getpid()))
if err != nil {
@@ -862,6 +856,7 @@ func cleanConfigDirectory() {
"backup-of-v0.8": 30 * 24 * time.Hour, // these neither
"tmp-index-sorter.*": time.Minute, // these should never exist on startup
"support-bundle-*": 30 * 24 * time.Hour, // keep old support bundle zip or folder for a month
"csrftokens.txt": 0, // deprecated, remove immediately
}
for pat, dur := range patterns {

View File

@@ -241,22 +241,6 @@ func copyStderr(stderr io.Reader, dst io.Writer) {
if panicFd == nil {
dst.Write([]byte(line))
if strings.Contains(line, "SIGILL") {
l.Warnln(`
*******************************************************************************
* Crash due to illegal instruction detected. This is most likely due to a CPU *
* incompatibility with the high performance hashing package. Switching to the *
* standard hashing package instead. Please report this issue at: *
* *
* https://github.com/syncthing/syncthing/issues *
* *
* Include the details of your CPU. *
*******************************************************************************
`)
os.Setenv("STHASHING", "standard")
return
}
if strings.HasPrefix(line, "panic:") || strings.HasPrefix(line, "fatal error:") {
panicFd, err = os.Create(locations.GetTimestamped(locations.PanicLog))
if err != nil {

View File

@@ -140,7 +140,7 @@ func checkNotExist(t *testing.T, name string) {
func TestAutoClosedFile(t *testing.T) {
os.RemoveAll("_autoclose")
defer os.RemoveAll("_autoclose")
os.Mkdir("_autoclose", 0755)
os.Mkdir("_autoclose", 0o755)
file := filepath.FromSlash("_autoclose/tmp")
data := []byte("hello, world\n")

View File

@@ -1,226 +0,0 @@
// Copyright (C) 2018 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package aggregate
import (
"database/sql"
"fmt"
"log"
"os"
"time"
_ "github.com/lib/pq"
)
type CLI struct {
DBConn string `env:"UR_DB_URL" default:"postgres://user:password@localhost/ur?sslmode=disable"`
}
func (cli *CLI) Run() error {
log.SetFlags(log.Ltime | log.Ldate)
log.SetOutput(os.Stdout)
db, err := sql.Open("postgres", cli.DBConn)
if err != nil {
return fmt.Errorf("database: %w", err)
}
err = setupDB(db)
if err != nil {
return fmt.Errorf("database: %w", err)
}
for {
runAggregation(db)
// Sleep until one minute past next midnight
sleepUntilNext(24*time.Hour, 1*time.Minute)
}
}
func runAggregation(db *sql.DB) {
since := maxIndexedDay(db, "VersionSummary")
log.Println("Aggregating VersionSummary data since", since)
rows, err := aggregateVersionSummary(db, since.Add(24*time.Hour))
if err != nil {
log.Println("aggregate:", err)
}
log.Println("Inserted", rows, "rows")
since = maxIndexedDay(db, "Performance")
log.Println("Aggregating Performance data since", since)
rows, err = aggregatePerformance(db, since.Add(24*time.Hour))
if err != nil {
log.Println("aggregate:", err)
}
log.Println("Inserted", rows, "rows")
since = maxIndexedDay(db, "BlockStats")
log.Println("Aggregating BlockStats data since", since)
rows, err = aggregateBlockStats(db, since.Add(24*time.Hour))
if err != nil {
log.Println("aggregate:", err)
}
log.Println("Inserted", rows, "rows")
}
func sleepUntilNext(intv, margin time.Duration) {
now := time.Now().UTC()
next := now.Truncate(intv).Add(intv).Add(margin)
log.Println("Sleeping until", next)
time.Sleep(next.Sub(now))
}
func setupDB(db *sql.DB) error {
_, err := db.Exec(`CREATE TABLE IF NOT EXISTS VersionSummary (
Day TIMESTAMP NOT NULL,
Version VARCHAR(8) NOT NULL,
Count INTEGER NOT NULL
)`)
if err != nil {
return err
}
_, err = db.Exec(`CREATE TABLE IF NOT EXISTS Performance (
Day TIMESTAMP NOT NULL,
TotFiles INTEGER NOT NULL,
TotMiB INTEGER NOT NULL,
SHA256Perf DOUBLE PRECISION NOT NULL,
MemorySize INTEGER NOT NULL,
MemoryUsageMiB INTEGER NOT NULL
)`)
if err != nil {
return err
}
_, err = db.Exec(`CREATE TABLE IF NOT EXISTS BlockStats (
Day TIMESTAMP NOT NULL,
Reports INTEGER NOT NULL,
Total BIGINT NOT NULL,
Renamed BIGINT NOT NULL,
Reused BIGINT NOT NULL,
Pulled BIGINT NOT NULL,
CopyOrigin BIGINT NOT NULL,
CopyOriginShifted BIGINT NOT NULL,
CopyElsewhere BIGINT NOT NULL
)`)
if err != nil {
return err
}
var t string
row := db.QueryRow(`SELECT 'UniqueDayVersionIndex'::regclass`)
if err := row.Scan(&t); err != nil {
_, _ = db.Exec(`CREATE UNIQUE INDEX UniqueDayVersionIndex ON VersionSummary (Day, Version)`)
}
row = db.QueryRow(`SELECT 'VersionDayIndex'::regclass`)
if err := row.Scan(&t); err != nil {
_, _ = db.Exec(`CREATE INDEX VersionDayIndex ON VersionSummary (Day)`)
}
row = db.QueryRow(`SELECT 'PerformanceDayIndex'::regclass`)
if err := row.Scan(&t); err != nil {
_, _ = db.Exec(`CREATE INDEX PerformanceDayIndex ON Performance (Day)`)
}
row = db.QueryRow(`SELECT 'BlockStatsDayIndex'::regclass`)
if err := row.Scan(&t); err != nil {
_, _ = db.Exec(`CREATE INDEX BlockStatsDayIndex ON BlockStats (Day)`)
}
return nil
}
func maxIndexedDay(db *sql.DB, table string) time.Time {
var t time.Time
row := db.QueryRow("SELECT MAX(DATE_TRUNC('day', Day)) FROM " + table)
err := row.Scan(&t)
if err != nil {
return time.Time{}
}
return t
}
func aggregateVersionSummary(db *sql.DB, since time.Time) (int64, error) {
res, err := db.Exec(`INSERT INTO VersionSummary (
SELECT
DATE_TRUNC('day', Received) AS Day,
SUBSTRING(Report->>'version' FROM '^v\d.\d+') AS Ver,
COUNT(*) AS Count
FROM ReportsJson
WHERE
Received > $1
AND Received < DATE_TRUNC('day', NOW())
AND Report->>'version' like 'v_.%'
GROUP BY Day, Ver
);
`, since)
if err != nil {
return 0, err
}
return res.RowsAffected()
}
func aggregatePerformance(db *sql.DB, since time.Time) (int64, error) {
res, err := db.Exec(`INSERT INTO Performance (
SELECT
DATE_TRUNC('day', Received) AS Day,
AVG((Report->>'totFiles')::numeric) As TotFiles,
AVG((Report->>'totMiB')::numeric) As TotMiB,
AVG((Report->>'sha256Perf')::numeric) As SHA256Perf,
AVG((Report->>'memorySize')::numeric) As MemorySize,
AVG((Report->>'memoryUsageMiB')::numeric) As MemoryUsageMiB
FROM ReportsJson
WHERE
Received > $1
AND Received < DATE_TRUNC('day', NOW())
AND Report->>'version' like 'v_.%'
/* Some custom implementations reported bytes where we expect megabytes; cap at a petabyte */
AND (Report->>'memorySize')::numeric < 1073741824
GROUP BY Day
);
`, since)
if err != nil {
return 0, err
}
return res.RowsAffected()
}
func aggregateBlockStats(db *sql.DB, since time.Time) (int64, error) {
// Filter out anything prior to 0.14.41, as those versions had sum
// aggregations which made no sense.
res, err := db.Exec(`INSERT INTO BlockStats (
SELECT
DATE_TRUNC('day', Received) AS Day,
COUNT(1) As Reports,
SUM((Report->'blockStats'->>'total')::numeric)::bigint AS Total,
SUM((Report->'blockStats'->>'renamed')::numeric)::bigint AS Renamed,
SUM((Report->'blockStats'->>'reused')::numeric)::bigint AS Reused,
SUM((Report->'blockStats'->>'pulled')::numeric)::bigint AS Pulled,
SUM((Report->'blockStats'->>'copyOrigin')::numeric)::bigint AS CopyOrigin,
SUM((Report->'blockStats'->>'copyOriginShifted')::numeric)::bigint AS CopyOriginShifted,
SUM((Report->'blockStats'->>'copyElsewhere')::numeric)::bigint AS CopyElsewhere
FROM ReportsJson
WHERE
Received > $1
AND Received < DATE_TRUNC('day', NOW())
AND (Report->>'urVersion')::numeric >= 3
AND Report->>'version' like 'v_.%'
AND Report->>'version' NOT LIKE 'v0.14.40%'
AND Report->>'version' NOT LIKE 'v0.14.39%'
AND Report->>'version' NOT LIKE 'v0.14.38%'
GROUP BY Day
);
`, since)
if err != nil {
return 0, err
}
return res.RowsAffected()
}
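A small worked example of the sleepUntilNext arithmetic above, as a sketch: truncating to the 24 h interval and adding the interval plus the margin yields one minute past the next UTC midnight.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	now := time.Date(2024, 11, 24, 15, 4, 5, 0, time.UTC)
	// Mirrors sleepUntilNext(24*time.Hour, 1*time.Minute).
	next := now.Truncate(24 * time.Hour).Add(24*time.Hour + time.Minute)
	fmt.Println(next) // 2024-11-25 00:01:00 +0000 UTC
}
```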

View File

@@ -1,276 +0,0 @@
// Copyright (C) 2018 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package serve
import (
"regexp"
"sort"
"strconv"
"strings"
)
type analytic struct {
Key string
Count int
Percentage float64
Items []analytic `json:",omitempty"`
}
type analyticList []analytic
func (l analyticList) Less(a, b int) bool {
if l[a].Key == "Others" {
return false
}
if l[b].Key == "Others" {
return true
}
return l[b].Count < l[a].Count // inverse
}
func (l analyticList) Swap(a, b int) {
l[a], l[b] = l[b], l[a]
}
func (l analyticList) Len() int {
return len(l)
}
// Returns a list of frequency analytics for a given list of strings.
func analyticsFor(ss []string, cutoff int) []analytic {
m := make(map[string]int)
t := 0
for _, s := range ss {
m[s]++
t++
}
l := make([]analytic, 0, len(m))
for k, c := range m {
l = append(l, analytic{
Key: k,
Count: c,
Percentage: 100 * float64(c) / float64(t),
})
}
sort.Sort(analyticList(l))
if cutoff > 0 && len(l) > cutoff {
c := 0
for _, i := range l[cutoff:] {
c += i.Count
}
l = append(l[:cutoff], analytic{
Key: "Others",
Count: c,
Percentage: 100 * float64(c) / float64(t),
})
}
return l
}
// Find the points at which certain penetration levels are met
func penetrationLevels(as []analytic, points []float64) []analytic {
sort.Slice(as, func(a, b int) bool {
return versionLess(as[b].Key, as[a].Key)
})
var res []analytic
idx := 0
sum := 0.0
for _, a := range as {
sum += a.Percentage
if sum >= points[idx] {
a.Count = int(points[idx])
a.Percentage = sum
res = append(res, a)
idx++
if idx == len(points) {
break
}
}
}
return res
}
func statsForInts(data []int) [4]float64 {
var res [4]float64
if len(data) == 0 {
return res
}
sort.Ints(data)
res[0] = float64(data[int(float64(len(data))*0.05)])
res[1] = float64(data[len(data)/2])
res[2] = float64(data[int(float64(len(data))*0.95)])
res[3] = float64(data[len(data)-1])
return res
}
func statsForInt64s(data []int64) [4]float64 {
var res [4]float64
if len(data) == 0 {
return res
}
sort.Slice(data, func(a, b int) bool {
return data[a] < data[b]
})
res[0] = float64(data[int(float64(len(data))*0.05)])
res[1] = float64(data[len(data)/2])
res[2] = float64(data[int(float64(len(data))*0.95)])
res[3] = float64(data[len(data)-1])
return res
}
func statsForFloats(data []float64) [4]float64 {
var res [4]float64
if len(data) == 0 {
return res
}
sort.Float64s(data)
res[0] = data[int(float64(len(data))*0.05)]
res[1] = data[len(data)/2]
res[2] = data[int(float64(len(data))*0.95)]
res[3] = data[len(data)-1]
return res
}
func group(by func(string) string, as []analytic, perGroup int, otherPct float64) []analytic {
var res []analytic
next:
for _, a := range as {
group := by(a.Key)
for i := range res {
if res[i].Key == group {
res[i].Count += a.Count
res[i].Percentage += a.Percentage
if len(res[i].Items) < perGroup {
res[i].Items = append(res[i].Items, a)
}
continue next
}
}
res = append(res, analytic{
Key: group,
Count: a.Count,
Percentage: a.Percentage,
Items: []analytic{a},
})
}
sort.Sort(analyticList(res))
if otherPct > 0 {
// Groups with less than otherPct go into "Other"
other := analytic{
Key: "Other",
}
for i := 0; i < len(res); i++ {
if res[i].Percentage < otherPct || res[i].Key == "Other" {
other.Count += res[i].Count
other.Percentage += res[i].Percentage
res = append(res[:i], res[i+1:]...)
i--
}
}
if other.Count > 0 {
res = append(res, other)
}
}
return res
}
func byVersion(s string) string {
parts := strings.Split(s, ".")
if len(parts) >= 2 {
return strings.Join(parts[:2], ".")
}
return s
}
func byPlatform(s string) string {
parts := strings.Split(s, "-")
if len(parts) >= 2 {
return parts[0]
}
return s
}
var numericGoVersion = regexp.MustCompile(`^go[0-9]\.[0-9]+`)
func byCompiler(s string) string {
if m := numericGoVersion.FindString(s); m != "" {
return m
}
return "Other"
}
func versionLess(a, b string) bool {
arel, apre := versionParts(a)
brel, bpre := versionParts(b)
minlen := len(arel)
if l := len(brel); l < minlen {
minlen = l
}
for i := 0; i < minlen; i++ {
if arel[i] != brel[i] {
return arel[i] < brel[i]
}
}
// Longer version is newer, when the preceding parts are equal
if len(arel) != len(brel) {
return len(arel) < len(brel)
}
if apre != bpre {
// "(+dev)" versions are ahead
if apre == plusStr {
return false
}
if bpre == plusStr {
return true
}
return apre < bpre
}
// We don't actually care how the prereleases compare, for our purposes
return false
}
// Split a version as returned from transformVersion into parts.
// "1.2.3-beta.2" -> []int{1, 2, 3}, "beta.2"
func versionParts(v string) ([]int, string) {
parts := strings.SplitN(v[1:], " ", 2) // " (+dev)" versions
if len(parts) == 1 {
parts = strings.SplitN(parts[0], "-", 2) // "-rc.1" type versions
}
fields := strings.Split(parts[0], ".")
release := make([]int, len(fields))
for i, s := range fields {
v, _ := strconv.Atoi(s)
release[i] = v
}
var prerelease string
if len(parts) > 1 {
prerelease = parts[1]
}
return release, prerelease
}
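A sketch of how these helpers compose, with made-up input data: count raw platform strings with analyticsFor, then fold them into per-OS groups with group and byPlatform.

```go
package serve

import (
	"fmt"
	"testing"
)

func TestGroupByPlatform(t *testing.T) {
	raw := []string{"linux-amd64", "linux-arm64", "windows-amd64", "linux-amd64"}
	perItem := analyticsFor(raw, 0)           // counts per exact platform string
	perOS := group(byPlatform, perItem, 2, 0) // folded into "linux", "windows"
	for _, g := range perOS {
		fmt.Printf("%s: %d (%.0f%%)\n", g.Key, g.Count, g.Percentage)
	}
	if perOS[0].Key != "linux" || perOS[0].Count != 3 {
		t.Fatalf("unexpected grouping: %+v", perOS)
	}
}
```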

View File

@@ -1,131 +0,0 @@
// Copyright (C) 2018 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package serve
import (
"bytes"
"fmt"
"strings"
)
type NumberType int
const (
NumberMetric NumberType = iota
NumberBinary
NumberDuration
)
func number(ntype NumberType, v float64) string {
switch ntype {
case NumberMetric:
return metric(v)
case NumberDuration:
return duration(v)
case NumberBinary:
return binary(v)
default:
return metric(v)
}
}
type suffix struct {
Suffix string
Multiplier float64
}
var metricSuffixes = []suffix{
{"G", 1e9},
{"M", 1e6},
{"k", 1e3},
}
var binarySuffixes = []suffix{
{"Gi", 1 << 30},
{"Mi", 1 << 20},
{"Ki", 1 << 10},
}
var durationSuffix = []suffix{
{"year", 365 * 24 * 60 * 60},
{"month", 30 * 24 * 60 * 60},
{"day", 24 * 60 * 60},
{"hour", 60 * 60},
{"minute", 60},
{"second", 1},
}
func metric(v float64) string {
return withSuffix(v, metricSuffixes, false)
}
func binary(v float64) string {
return withSuffix(v, binarySuffixes, false)
}
func duration(v float64) string {
return withSuffix(v, durationSuffix, true)
}
func withSuffix(v float64, ps []suffix, pluralize bool) string {
for _, p := range ps {
if v >= p.Multiplier {
suffix := p.Suffix
if pluralize && v/p.Multiplier != 1.0 {
suffix += "s"
}
// If the number only has decimal zeroes, strip them off.
num := strings.TrimRight(strings.TrimRight(fmt.Sprintf("%.1f", v/p.Multiplier), "0"), ".")
return fmt.Sprintf("%s %s", num, suffix)
}
}
return strings.TrimRight(strings.TrimRight(fmt.Sprintf("%.1f", v), "0"), ".")
}
// commatize returns the string s with sep inserted as the thousands
// separator. Only plain floats are processed; strings without a decimal
// point are returned unchanged.
func commatize(sep, s string) string {
// If no dot, don't do anything.
if !strings.ContainsRune(s, '.') {
return s
}
var b bytes.Buffer
fs := strings.SplitN(s, ".", 2)
l := len(fs[0])
for i := range fs[0] {
b.Write([]byte{s[i]})
if i < l-1 && (l-i)%3 == 1 {
b.WriteString(sep)
}
}
if len(fs) > 1 && len(fs[1]) > 0 {
b.WriteString(".")
b.WriteString(fs[1])
}
return b.String()
}
func proportion(m map[string]int, count int) float64 {
total := 0
isMax := true
for _, n := range m {
total += n
if n > count {
isMax = false
}
}
pct := (100 * float64(count)) / float64(total)
// Shave a tiny amount off the largest value, to avoid rounding errors in
// the template pushing the total past 100% and breaking the progress bars.
if isMax && len(m) > 1 && count != total {
pct -= 0.01
}
return pct
}
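A few illustrative calls to the helpers above, with the expected results in the comments; the snippet assumes it lives in the same serve package.

```go
package serve

import "fmt"

// formatExamples exercises number() and commatize() on hand-picked values.
func formatExamples() {
	fmt.Println(number(NumberBinary, 3*(1<<30))) // "3 Gi"
	fmt.Println(number(NumberDuration, 2*86400)) // "2 days"
	fmt.Println(commatize(" ", "1234567.89"))    // "1 234 567.89"
}
```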

View File

@@ -1,26 +0,0 @@
// Copyright (C) 2023 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package serve
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
)
var metricReportsTotal = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "ursrv",
Name: "reports_total",
}, []string{"version"})
func init() {
metricReportsTotal.WithLabelValues("fail")
metricReportsTotal.WithLabelValues("duplicate")
metricReportsTotal.WithLabelValues("v1")
metricReportsTotal.WithLabelValues("v2")
metricReportsTotal.WithLabelValues("v3")
}
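A hypothetical handler-side use of this counter, as a sketch; the label values match those pre-registered in init above, and recordReport is not a function that exists in the repository.

```go
package serve

// recordReport accounts for an incoming usage report: "v3" for an accepted
// version 3 report, "fail" for one that failed validation.
func recordReport(ok bool) {
	if ok {
		metricReportsTotal.WithLabelValues("v3").Inc()
	} else {
		metricReportsTotal.WithLabelValues("fail").Inc()
	}
}
```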

View File

File diff suppressed because it is too large

View File

Binary file not shown (deleted image, previously 4.8 KiB)

View File

File diff suppressed because one or more lines are too long

View File

File diff suppressed because one or more lines are too long

View File

File diff suppressed because one or more lines are too long

View File

Binary file not shown (deleted image, previously 61 KiB)

View File

@@ -1,623 +0,0 @@
<!DOCTYPE html>
<!--
Copyright (C) 2014 Jakob Borg and other contributors. All rights reserved.
Use of this source code is governed by an MIT-style license that can be
found in the LICENSE file.
-->
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="">
<meta name="author" content="">
<link rel="shortcut icon" href="static/assets/img/favicon.png">
<title>Syncthing Usage Reports</title>
<link href="static/bootstrap/css/bootstrap.min.css" rel="stylesheet">
<script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script>
<script type="text/javascript" src="static/bootstrap/js/bootstrap.min.js"></script>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/leaflet/0.7.7/leaflet.css">
<script src="https://cdnjs.cloudflare.com/ajax/libs/leaflet/0.7.7/leaflet.js"></script>
<script src="https://cdn.jsdelivr.net/npm/heatmapjs@2.0.2/heatmap.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/leaflet-heatmap@1.0.0/leaflet-heatmap.js"></script>
<style type="text/css">
body {
margin: 40px;
font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, "Noto Sans", sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol", "Noto Color Emoji";
}
tr.main td {
font-weight: bold;
}
tr.child td.first {
padding-left: 2em;
}
.progress-bar {
overflow:hidden;
white-space:nowrap;
text-overflow: ellipsis;
}
</style>
<script type="text/javascript"
src='https://www.google.com/jsapi?autoload={
"modules":[{
"name":"visualization",
"version":"1",
"packages":["corechart"]
}]
}'></script>
<script type="text/javascript">
google.setOnLoadCallback(drawVersionChart);
google.setOnLoadCallback(drawBlockStatsChart);
google.setOnLoadCallback(drawPerformanceCharts);
function drawVersionChart() {
// Summary version chart for versions that at some point in the chart
// reach 250 devices. This filters out versions that are old and
// uninteresting yet linger forever with just a handful of users.
var jsonData = $.ajax({url: "summary.json?min=250", dataType:"json", async: false}).responseText;
var rows = JSON.parse(jsonData);
var data = new google.visualization.DataTable();
data.addColumn('date', 'Day');
for (var i = 1; i < rows[0].length; i++){
data.addColumn('number', rows[0][i]);
}
for (var i = 1; i < rows.length; i++){
rows[i][0] = new Date(rows[i][0]);
data.addRow(rows[i]);
};
var options = {
legend: { position: 'bottom', alignment: 'center' },
isStacked: true,
colors: ['rgb(102,194,165)','rgb(252,141,98)','rgb(141,160,203)','rgb(231,138,195)','rgb(166,216,84)','rgb(255,217,47)'],
chartArea: {left: 80, top: 20, width: '1020', height: '300'},
};
var chart = new google.visualization.AreaChart(document.getElementById('versionChart'));
chart.draw(data, options);
}
function formatGibibytes(gibibytes, decimals) {
if(gibibytes == 0) return '0 GiB';
var k = 1024,
dm = decimals || 2,
sizes = ['GiB', 'TiB', 'PiB', 'EiB', 'ZiB', 'YiB'],
i = Math.floor(Math.log(gibibytes) / Math.log(k));
if (i < 0) {
sizes = 'MiB';
} else {
sizes = sizes[i];
}
return parseFloat((gibibytes / Math.pow(k, i)).toFixed(dm)) + ' ' + sizes;
}
function drawBlockStatsChart() {
var jsonData = $.ajax({url: "blockstats.json", dataType:"json", async: false}).responseText;
var rows = JSON.parse(jsonData);
var data = new google.visualization.DataTable();
data.addColumn('date', 'Day');
for (var i = 1; i < rows[0].length; i++){
data.addColumn('number', rows[0][i]);
}
var totals = [0, 0, 0, 0, 0, 0];
for (var i = 1; i < rows.length; i++){
rows[i][0] = new Date(rows[i][0]);
for (var j = 2; j < rows[i].length; j++) {
totals[j-2] += rows[i][j];
}
data.addRow(rows[i]);
};
var totalTotals = totals.reduce(function(a, b) { return a + b; }, 0);
if (totalTotals > 0) {
var content = "<table class='table'>\n"
for (var j = 2; j < rows[0].length; j++) {
content += "<tr><td><b>" + rows[0][j].replace(' (GiB)', '') + "</b></td><td>" + formatGibibytes(totals[j-2].toFixed(2)) + " (" + ((100*totals[j-2])/totalTotals).toFixed(2) +"%)</td></tr>\n";
}
content += "</table>";
document.getElementById("data-to-date").innerHTML = content;
} else {
// No data, hide it.
document.getElementById("block-stats").outerHTML = "";
return;
}
var options = {
focusTarget: 'category',
vAxes: {0: {}, 1: {}},
series: {0: {type: 'line', targetAxisIndex:1}},
isStacked: true,
legend: {position: 'none'},
colors: ['rgb(102,194,165)','rgb(252,141,98)','rgb(141,160,203)','rgb(231,138,195)','rgb(166,216,84)','rgb(255,217,47)'],
chartArea: {left: 80, top: 20, width: '1020', height: '300'},
};
var chart = new google.visualization.AreaChart(document.getElementById('blockStatsChart'));
chart.draw(data, options);
}
function drawPerformanceCharts() {
var jsonData = $.ajax({url: "/performance.json", dataType:"json", async: false}).responseText;
var rows = JSON.parse(jsonData);
for (var i = 1; i < rows.length; i++){
rows[i][0] = new Date(rows[i][0]);
}
drawChart(rows, 1, 'Total Number of Files', 'totFilesChart', 1e6, 1);
drawChart(rows, 2, 'Total Folder Size (GiB)', 'totMiBChart', 1e6, 1024);
drawChart(rows, 3, 'Hash Performance (MiB/s)', 'hashPerfChart', 1000, 1);
drawChart(rows, 4, 'System RAM Size (GiB)', 'memSizeChart', 1e6, 1024);
drawChart(rows, 5, 'Memory Usage (MiB)', 'memUsageChart', 250, 1);
}
function drawChart(rows, index, title, id, cutoff, divisor) {
var data = new google.visualization.DataTable();
data.addColumn('date', 'Day');
data.addColumn('number', title);
var row;
for (var i = 1; i < rows.length; i++){
row = [rows[i][0], rows[i][index] / divisor];
if (row[1] > cutoff) {
row[1] = null;
}
data.addRow(row);
}
var options = {
legend: { position: 'bottom', alignment: 'center' },
colors: ['rgb(102,194,165)','rgb(252,141,98)','rgb(141,160,203)','rgb(231,138,195)','rgb(166,216,84)','rgb(255,217,47)'],
chartArea: {left: 80, top: 20, width: '1020', height: '300'},
vAxes: {0: {minValue: 0}},
};
var chart = new google.visualization.LineChart(document.getElementById(id));
chart.draw(data, options);
}
var locations = [];
{{range $location, $weight := .locations}}
locations.push({lat:{{- $location.Latitude -}},lng:{{- $location.Longitude -}},count:Math.min(100, {{- $weight -}})});
{{- end}}
function drawHeatMap() {
if (locations.length == 0) {
return;
}
var testData = {
data: locations
};
var baseLayer = L.tileLayer(
'https://tile.openstreetmap.org/{z}/{x}/{y}.png',{
attribution: '...',
maxZoom: 18
}
);
var cfg = {
"radius": 1,
"minOpacity": .25,
"maxOpacity": .8,
"scaleRadius": true,
"useLocalExtrema": true,
latField: 'lat',
lngField: 'lng',
valueField: 'count',
gradient: {
'.1': 'cyan',
'.8': 'blue',
'.95': 'red'
}
};
var heatmapLayer = new HeatmapOverlay(cfg);
var map = new L.Map('map', {
center: new L.LatLng(25, 0),
zoom: 1,
layers: [baseLayer, heatmapLayer]
});
heatmapLayer.setData(testData);
}
</script>
</head>
<body>
<div class="container">
<div class="row">
<div class="col-md-12">
<h1>Syncthing Usage Data</h1>
<h4 id="active-users">Active Users per Day and Version</h4>
<p>
This is the total number of unique users with reporting enabled, per day. Area color represents the major version.
</p>
<div class="img-thumbnail" id="versionChart" style="width: 1130px; height: 400px; padding: 10px;"></div>
<div id="block-stats">
<h4>Data Transfers per Day</h4>
<p>
This is the total amount of data transferred per day. It also shows how much data was saved (not transferred) by each of the methods Syncthing uses.
</p>
<div class="img-thumbnail" id="blockStatsChart" style="width: 1130px; height: 400px; padding: 10px;"></div>
<h4 id="totals-to-date">Totals to date</h4>
<p id="data-to-date">
No data
</p>
</div>
<h4 id="metrics">Usage Metrics</h4>
<p>
This is the aggregated usage report data for the last 24 hours. Data based on <b>{{.nodes}}</b> devices that have reported in.
</p>
{{if .locations}}
<div class="img-thumbnail" id="map" style="width: 1130px; height: 400px; padding: 10px;"></div>
<p class="text-muted">
Heatmap max intensity is capped at 100 reports within a location.
</p>
<div class="panel panel-default">
<div class="panel-heading">
<h4 class="panel-title">
<a data-toggle="collapse" href="#collapseTwo">Break down per country</a>
</h4>
</div>
<div id="collapseTwo" class="panel-collapse collapse">
<div class="panel-body">
<div class="row">
<div class="col-md-6">
<table class="table table-striped">
<tbody>
{{range .countries | slice 2 1}}
<tr>
<td style="width: 45%">{{.Key}}</td>
<td style="width: 5%" class="text-right">{{if ge .Pct 10.0}}{{.Pct | printf "%.0f"}}{{else if ge .Pct 1.0}}{{.Pct | printf "%.01f"}}{{else}}{{.Pct | printf "%.02f"}}{{end}}%</td>
<td style="width: 5%" class="text-right">{{.Count}}</td>
<td>
<div class="progress-bar" role="progressbar" aria-valuenow="{{.Pct | printf "%.02f"}}" aria-valuemin="0" aria-valuemax="100" style="width: {{.Pct | printf "%.02f"}}%; height:20px"></div>
</td>
</tr>
{{end}}
</tbody>
</table>
</div>
<div class="col-md-6">
<table class="table table-striped">
<tbody>
{{range .countries | slice 2 2}}
<tr>
<td style="width: 45%">{{.Key}}</td>
<td style="width: 5%" class="text-right">{{if ge .Pct 10.0}}{{.Pct | printf "%.0f"}}{{else if ge .Pct 1.0}}{{.Pct | printf "%.01f"}}{{else}}{{.Pct | printf "%.02f"}}{{end}}%</td>
<td style="width: 5%" class="text-right">{{.Count}}</td>
<td>
<div class="progress-bar" role="progressbar" aria-valuenow="{{.Pct | printf "%.02f"}}" aria-valuemin="0" aria-valuemax="100" style="width: {{.Pct | printf "%.02f"}}%; height:20px"></div>
</td>
</tr>
{{end}}
</tbody>
</table>
</div>
</div>
</div>
</div>
</div>
{{end}}
<table class="table table-striped">
<thead>
<tr>
<th></th>
<th colspan="4" class="text-center">
<a href="https://en.wikipedia.org/wiki/Percentile">Percentile</a>
</th>
</tr>
<tr>
<th></th>
<th class="text-right">5%</th>
<th class="text-right">50%</th>
<th class="text-right">95%</th>
<th class="text-right">100%</th>
</tr>
</thead>
<tbody>
{{range .categories}}
<tr>
<td>{{.Descr}}</td>
<td class="text-right">{{index .Values 0 | number .Type | commatize " "}}{{.Unit}}</td>
<td class="text-right">{{index .Values 1 | number .Type | commatize " "}}{{.Unit}}</td>
<td class="text-right">{{index .Values 2 | number .Type | commatize " "}}{{.Unit}}</td>
<td class="text-right">{{index .Values 3 | number .Type | commatize " "}}{{.Unit}}</td>
</tr>
{{end}}
</tbody>
</table>
</div>
</div>
<div class="row">
<div class="col-md-6">
<table class="table table-striped">
<thead>
<tr>
<th>Version</th><th class="text-right">Devices</th><th class="text-right">Share</th>
</tr>
</thead>
<tbody>
{{range .versions}}
{{if gt .Percentage 0.1}}
<tr class="main">
<td>{{.Key}}</td>
<td class="text-right">{{.Count}}</td>
<td class="text-right">{{.Percentage | printf "%.01f"}}%</td>
</tr>
{{range .Items}}
{{if gt .Percentage 0.1}}
<tr class="child">
<td class="first">{{.Key}}</td>
<td class="text-right">{{.Count}}</td>
<td class="text-right">{{.Percentage | printf "%.01f"}}%</td>
</tr>
{{end}}
{{end}}
{{end}}
{{end}}
</tbody>
</table>
<table class="table table-striped">
<thead>
<tr>
<th>Penetration Level</th>
<th>Version</th>
<th class="text-right">Actual</th>
</tr>
</thead>
<tbody>
{{range .versionPenetrations}}
<tr>
<td>{{.Count}}%</td>
<td>&ge; {{.Key}}</td>
<td class="text-right">{{.Percentage | printf "%.01f"}}%</td>
</tr>
{{end}}
</tbody>
</table>
</div>
<div class="col-md-6">
<table class="table table-striped">
<thead>
<tr>
<th>Platform</th>
<th class="text-right">Devices</th>
<th class="text-right">Share</th>
</tr>
</thead>
<tbody>
{{range .platforms}}
<tr class="main">
<td>{{.Key}}</td>
<td class="text-right">{{.Count}}</td>
<td class="text-right">{{.Percentage | printf "%.01f"}}%</td>
</tr>
{{range .Items}}
<tr class="child">
<td class="first">{{.Key}}</td>
<td class="text-right">{{.Count}}</td>
<td class="text-right">{{.Percentage | printf "%.01f"}}%</td>
</tr>
{{end}}
{{end}}
</tbody>
</table>
</div>
</div>
<div class="row">
<div class="col-md-6">
<table class="table table-striped">
<thead>
<tr>
<th>Compiler</th>
<th class="text-right">Devices</th>
<th class="text-right">Share</th>
</tr>
</thead>
<tbody>
{{range .compilers}}
<tr class="main">
<td>{{.Key}}</td>
<td class="text-right">{{.Count}}</td>
<td class="text-right">{{.Percentage | printf "%.01f"}}%</td>
</tr>
{{range .Items}}
{{if or (gt .Percentage 0.1) (eq .Key "Others")}}
<tr class="child">
<td class="first">{{.Key}}</td>
<td class="text-right">{{.Count}}</td>
<td class="text-right">{{.Percentage | printf "%.01f"}}%</td>
</tr>
{{end}}
{{end}}
{{end}}
</tbody>
</table>
</div>
<div class="col-md-6">
<table class="table table-striped">
<thead>
<tr>
<th>Distribution Channel</th>
<th class="text-right">Devices</th>
<th class="text-right">Share</th>
</tr>
</thead>
<tbody>
{{range .distributions}}
<tr>
<td>{{.Key}}</td>
<td class="text-right">{{.Count}}</td>
<td class="text-right">{{.Percentage | printf "%.01f"}}%</td>
</tr>
{{end}}
</tbody>
</table>
</div>
<div class="col-md-6">
<table class="table table-striped">
<thead>
<tr>
<th>Builder</th>
<th class="text-right">Devices</th>
<th class="text-right">Share</th>
</tr>
</thead>
<tbody>
{{range .builders}}
<tr>
<td>{{.Key}}</td>
<td class="text-right">{{.Count}}</td>
<td class="text-right">{{.Percentage | printf "%.01f"}}%</td>
</tr>
{{end}}
</tbody>
</table>
</div>
</div>
<div class="row">
<div class="col-md-12">
<h4 id="features">Feature Usage</h4>
<p>
The following lists feature usage. Some features are reported once per report, others as a sum of units within a report (e.g. devices with static addresses among all known devices in a report).
Currently there are <b>{{.versionNodes.v2}}</b> devices reporting for version 2 and <b>{{.versionNodes.v3}}</b> for version 3.
</p>
</div>
</div>
<div class="row">
{{$i := counter}}
{{range $featureName := .featureOrder}}
{{$featureValues := index $.features $featureName }}
{{if $i.DrawTwoDivider}}
</div>
<div class="row">
{{end}}
{{ $i.Increment }}
<div class="col-md-6">
<table class="table table-striped">
<thead><tr>
<th>{{$featureName}} Features</th><th colspan="2" class="text-center">Usage</th>
</tr></thead>
<tbody>
{{range $featureValues}}
<tr>
<td style="width: 50%">{{.Key}} ({{.Version}})</td>
<td style="width: 10%" class="text-right">{{if ge .Pct 10.0}}{{.Pct | printf "%.0f"}}{{else if ge .Pct 1.0}}{{.Pct | printf "%.01f"}}{{else}}{{.Pct | printf "%.02f"}}{{end}}%</td>
<td style="width: 40%" {{if lt .Pct 5.0}}data-toggle="tooltip" title='{{.Count}}'{{end}}>
<div class="progress-bar" role="progressbar" aria-valuenow="{{.Pct | printf "%.02f"}}" aria-valuemin="0" aria-valuemax="100" style="width: {{.Pct | printf "%.02f"}}%; height:20px" {{if ge .Pct 5.0}}data-toggle="tooltip" title='{{.Count}}'{{end}}></div>
</td>
</tr>
{{end}}
</tbody>
</table>
</div>
{{end}}
</div>
<div class="row">
<div class="col-md-12">
<h4 id="features">Feature Group Usage</h4>
<p>
The following lists feature usage groups, which may include multiple occurrences of a feature per report.
</p>
</div>
</div>
<div class="row">
{{$i := counter}}
{{range $featureName := .featureOrder}}
{{$featureValues := index $.featureGroups $featureName }}
{{if $i.DrawTwoDivider}}
</div>
<div class="row">
{{end}}
{{ $i.Increment }}
<div class="col-md-6">
<table class="table table-striped">
<thead><tr>
<th>{{$featureName}} Group Features</th><th colspan="2" class="text-center">Usage</th>
</tr></thead>
<tbody>
{{range $featureValues}}
{{$counts := .Counts}}
<tr>
<td style="width: 50%">
<div data-toggle="tooltip" title='{{range $key, $value := .Counts}}{{$key}} ({{$value | proportion $counts | printf "%.02f"}}% - {{$value}})</br>{{end}}'>
{{.Key}} ({{.Version}})
</div>
</td>
<td style="width: 50%">
<div class="progress" role="progressbar" style="width: 100%">
{{$j := counter}}
{{range $key, $value := .Counts}}
{{with $valuePct := $value | proportion $counts}}
<div class="progress-bar {{ $j.Current | progressBarClassByIndex }}" style='width: {{$valuePct | printf "%.02f"}}%' data-toggle="tooltip" title='{{$key}} ({{$valuePct | printf "%.02f"}}% - {{$value}})'>
{{if ge $valuePct 30.0}}{{$key}}{{end}}
</div>
{{end}}
{{ $j.Increment }}
{{end}}
</div>
</td>
</tr>
{{end}}
</tbody>
</table>
</div>
{{end}}
</div>
<div class="row">
<div class="col-md-12">
<h1 id="performance-charts">Historical Performance Data</h1>
<p>Each chart shows the average of the corresponding metric across the entire reporting population for a given day.</p>
<h4 id="hash-performance">Hash Performance (MiB/s)</h4>
<div class="img-thumbnail" id="hashPerfChart" style="width: 1130px; height: 400px; padding: 10px;"></div>
<h4 id="memory-usage">Memory Usage (MiB)</h4>
<div class="img-thumbnail" id="memUsageChart" style="width: 1130px; height: 400px; padding: 10px;"></div>
<h4 id="total-files">Total Number of Files</h4>
<div class="img-thumbnail" id="totFilesChart" style="width: 1130px; height: 400px; padding: 10px;"></div>
<h4 id="total-size">Total Folder Size (GiB)</h4>
<div class="img-thumbnail" id="totMiBChart" style="width: 1130px; height: 400px; padding: 10px;"></div>
<h4 id="system-ram">System RAM Size (GiB)</h4>
<div class="img-thumbnail" id="memSizeChart" style="width: 1130px; height: 400px; padding: 10px;"></div>
</div>
</div>
</div>
<hr>
<p>
<a href="https://github.com/syncthing/syncthing/tree/main/cmd/ursrv">Source code</a>.
This product includes GeoLite2 data created by MaxMind, available from
<a href="http://www.maxmind.com">http://www.maxmind.com</a>.
</p>
<script type="text/javascript">
$('[data-toggle="tooltip"]').tooltip({html:true});
drawHeatMap();
</script>
</body>
</html>

28
compat.yaml Normal file
View File

@@ -0,0 +1,28 @@
- runtime: go1.21
requirements:
# See https://en.wikipedia.org/wiki/MacOS_version_history#Releases
#
# macOS 10.15 (Catalina) per https://go.dev/doc/go1.22#darwin
darwin: "19"
# Per https://go.dev/doc/go1.23#linux
linux: "2.6.32"
# Windows 10's initial release was 10.0.10240.16405, per
# https://learn.microsoft.com/en-us/windows/release-health/release-information
# and Windows 11's initial release was 10.0.22000.194 per
# https://learn.microsoft.com/en-us/windows/release-health/windows11-release-information
#
# Windows 10/Windows Server 2016 per https://go.dev/doc/go1.21#windows
windows: "10.0"
- runtime: go1.22
requirements:
darwin: "19"
linux: "2.6.32"
windows: "10.0"
- runtime: go1.23
requirements:
# macOS 11 (Big Sur) per https://tip.golang.org/doc/go1.23#darwin
darwin: "20"
linux: "2.6.32"
windows: "10.0"
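A sketch of how a file in this shape could be decoded in Go using sigs.k8s.io/yaml (which appears in go.mod further down); the type and field names here are illustrative only, not the ones Syncthing actually uses.

```go
package main

import (
	"fmt"
	"os"

	"sigs.k8s.io/yaml"
)

// compatEntry mirrors one list element of compat.yaml: a Go runtime and the
// minimum kernel/OS versions it requires, keyed by GOOS.
type compatEntry struct {
	Runtime      string            `json:"runtime"`
	Requirements map[string]string `json:"requirements"`
}

func main() {
	data, err := os.ReadFile("compat.yaml")
	if err != nil {
		panic(err)
	}
	var entries []compatEntry
	if err := yaml.Unmarshal(data, &entries); err != nil {
		panic(err)
	}
	for _, e := range entries {
		fmt.Println(e.Runtime, "requires at least", e.Requirements)
	}
}
```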

View File

@@ -1,4 +1,4 @@
# Increase maximum socket buffer sizes to 2.5MiB for QUIC connections
# Increase maximum socket buffer sizes to 7MiB for QUIC connections
# see https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes
net.core.rmem_max = 2621440
net.core.wmem_max = 2621440
net.core.rmem_max = 7340032
net.core.wmem_max = 7340032

View File

@@ -1,11 +0,0 @@
[Unit]
Description=Restart Syncthing after resume
Documentation=man:syncthing(1)
After=sleep.target
[Service]
Type=oneshot
ExecStart=-/usr/bin/pkill -HUP -x syncthing
[Install]
WantedBy=sleep.target

118
go.mod
View File

@@ -1,78 +1,100 @@
module github.com/syncthing/syncthing
go 1.20
go 1.22.0
require (
github.com/AudriusButkevicius/recli v0.0.7-0.20220911121932-d000ce8fbf0f
github.com/alecthomas/kong v0.8.1
github.com/calmh/incontainer v0.0.0-20221224152218-b3e71b103d7a
github.com/calmh/xdr v1.1.0
github.com/ccding/go-stun v0.1.4
github.com/certifi/gocertifi v0.0.0-20210507211836-431795d63e8d // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/alecthomas/kong v1.2.1
github.com/aws/aws-sdk-go v1.55.5
github.com/calmh/incontainer v1.0.0
github.com/calmh/xdr v1.2.0
github.com/ccding/go-stun v0.1.5
github.com/chmduquesne/rollinghash v4.0.0+incompatible
github.com/cpuguy83/go-md2man/v2 v2.0.3 // indirect
github.com/d4l3k/messagediff v1.2.1
github.com/flynn-archive/go-shlex v0.0.0-20150515145356-3f9db97f8568
github.com/getsentry/raven-go v0.2.0
github.com/go-asn1-ber/asn1-ber v1.5.5 // indirect
github.com/go-ldap/ldap/v3 v3.4.6
github.com/go-ldap/ldap/v3 v3.4.8
github.com/gobwas/glob v0.2.3
github.com/gogo/protobuf v1.3.2
github.com/golang/snappy v0.0.4 // indirect
github.com/greatroar/blobloom v0.7.2
github.com/greatroar/blobloom v0.8.0
github.com/hashicorp/golang-lru/v2 v2.0.7
github.com/jackpal/gateway v1.0.10
github.com/jackpal/gateway v1.0.15
github.com/jackpal/go-nat-pmp v1.0.2
github.com/julienschmidt/httprouter v1.3.0
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51
github.com/klauspost/cpuid/v2 v2.2.6 // indirect
github.com/lib/pq v1.10.9
github.com/maruel/panicparse/v2 v2.3.1
github.com/maxbrunsfeld/counterfeiter/v6 v6.5.0
github.com/minio/sha256-simd v1.0.1
github.com/maxbrunsfeld/counterfeiter/v6 v6.8.1
github.com/maxmind/geoipupdate/v6 v6.1.0
github.com/miscreant/miscreant.go v0.0.0-20200214223636-26d376326b75
github.com/oschwald/geoip2-golang v1.9.0
github.com/pierrec/lz4/v4 v4.1.18
github.com/pkg/errors v0.9.1 // indirect
github.com/prometheus/client_golang v1.17.0
github.com/prometheus/common v0.45.0 // indirect
github.com/prometheus/procfs v0.12.0 // indirect
github.com/quic-go/quic-go v0.40.1
github.com/oschwald/geoip2-golang v1.11.0
github.com/pierrec/lz4/v4 v4.1.21
github.com/prometheus/client_golang v1.20.5
github.com/puzpuzpuz/xsync/v3 v3.4.0
github.com/quic-go/quic-go v0.48.0
github.com/rabbitmq/amqp091-go v1.10.0
github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475
github.com/shirou/gopsutil/v3 v3.23.11
github.com/shirou/gopsutil/v4 v4.24.9
github.com/syncthing/notify v0.0.0-20210616190510-c6b7342338d2
github.com/syndtr/goleveldb v1.0.1-0.20220721030215-126854af5e6d
github.com/thejerf/suture/v4 v4.0.2
github.com/urfave/cli v1.22.14
github.com/thejerf/suture/v4 v4.0.5
github.com/urfave/cli v1.22.16
github.com/vitrun/qart v0.0.0-20160531060029-bf64b92db6b0
golang.org/x/crypto v0.16.0
golang.org/x/exp v0.0.0-20231206192017-f3f8817b8deb
golang.org/x/mod v0.14.0 // indirect
golang.org/x/net v0.19.0
golang.org/x/sys v0.15.0
golang.org/x/text v0.14.0
golang.org/x/time v0.5.0
golang.org/x/tools v0.16.1
google.golang.org/protobuf v1.31.0
github.com/willabides/kongplete v0.4.0
go.uber.org/automaxprocs v1.6.0
golang.org/x/crypto v0.28.0
golang.org/x/net v0.30.0
golang.org/x/sys v0.26.0
golang.org/x/text v0.19.0
golang.org/x/time v0.7.0
golang.org/x/tools v0.26.0
google.golang.org/protobuf v1.35.1
sigs.k8s.io/yaml v1.4.0
)
require (
github.com/Azure/go-ntlmssp v0.0.0-20221128193559-754e69321358 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/cenkalti/backoff/v4 v4.3.0 // indirect
github.com/certifi/gocertifi v0.0.0-20210507211836-431795d63e8d // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.5 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/ebitengine/purego v0.8.0 // indirect
github.com/fsnotify/fsnotify v1.7.0 // indirect
github.com/go-asn1-ber/asn1-ber v1.5.7 // indirect
github.com/go-ole/go-ole v1.3.0 // indirect
github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572 // indirect
github.com/google/pprof v0.0.0-20231212022811-ec68065c825e // indirect
github.com/google/uuid v1.4.0 // indirect
github.com/matttproud/golang_protobuf_extensions/v2 v2.0.0 // indirect
github.com/onsi/ginkgo/v2 v2.13.2 // indirect
github.com/oschwald/maxminddb-golang v1.12.0 // indirect
github.com/power-devops/perfstat v0.0.0-20221212215047-62379fc7944b // indirect
github.com/prometheus/client_model v0.5.0 // indirect
github.com/quic-go/qtls-go1-20 v0.4.1 // indirect
github.com/go-task/slim-sprig/v3 v3.0.0 // indirect
github.com/gofrs/flock v0.12.1 // indirect
github.com/golang/snappy v0.0.4 // indirect
github.com/google/pprof v0.0.0-20241009165004-a3522334989c // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/jmespath/go-jmespath v0.4.0 // indirect
github.com/klauspost/compress v1.17.11 // indirect
github.com/lufia/plan9stats v0.0.0-20240909124753-873cd0166683 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/nxadm/tail v1.4.11 // indirect
github.com/onsi/ginkgo/v2 v2.20.2 // indirect
github.com/oschwald/maxminddb-golang v1.13.1 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/posener/complete v1.2.3 // indirect
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 // indirect
github.com/prometheus/client_model v0.6.1 // indirect
github.com/prometheus/common v0.60.0 // indirect
github.com/prometheus/procfs v0.15.1 // indirect
github.com/riywo/loginshell v0.0.0-20200815045211-7d26008be1ab // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/yusufpapurcu/wmi v1.2.3 // indirect
go.uber.org/mock v0.3.0 // indirect
github.com/stretchr/objx v0.5.2 // indirect
github.com/stretchr/testify v1.9.0 // indirect
github.com/tklauser/go-sysconf v0.3.14 // indirect
github.com/tklauser/numcpus v0.9.0 // indirect
github.com/yusufpapurcu/wmi v1.2.4 // indirect
go.uber.org/mock v0.4.0 // indirect
golang.org/x/exp v0.0.0-20241009180824-f66d83c29e7c // indirect
golang.org/x/mod v0.21.0 // indirect
golang.org/x/sync v0.8.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)
// https://github.com/gobwas/glob/pull/55

304
go.sum
View File

@@ -2,59 +2,70 @@ github.com/AudriusButkevicius/recli v0.0.7-0.20220911121932-d000ce8fbf0f h1:GmH5
github.com/AudriusButkevicius/recli v0.0.7-0.20220911121932-d000ce8fbf0f/go.mod h1:Nhfib1j/VFnLrXL9cHgA+/n2O6P5THuWelOnbfPNd78=
github.com/Azure/go-ntlmssp v0.0.0-20221128193559-754e69321358 h1:mFRzDkZVAjdal+s7s0MwaRv9igoPqLRdzOLzw/8Xvq8=
github.com/Azure/go-ntlmssp v0.0.0-20221128193559-754e69321358/go.mod h1:chxPXzSsl7ZWRAuOIE23GDNzjWuZquvFlgA8xmpunjU=
github.com/BurntSushi/toml v1.3.2/go.mod h1:CxXYINrC8qIiEnFrOxCa7Jy5BFHlXnUU2pbicEuybxQ=
github.com/alecthomas/assert/v2 v2.1.0 h1:tbredtNcQnoSd3QBhQWI7QZ3XHOVkw1Moklp2ojoH/0=
github.com/alecthomas/kong v0.8.1 h1:acZdn3m4lLRobeh3Zi2S2EpnXTd1mOL6U7xVml+vfkY=
github.com/alecthomas/kong v0.8.1/go.mod h1:n1iCIO2xS46oE8ZfYCNDqdR0b0wZNrXAIAqro/2132U=
github.com/alecthomas/repr v0.1.0 h1:ENn2e1+J3k09gyj2shc0dHr/yjaWSHRlrJ4DPMevDqE=
github.com/alexbrainman/sspi v0.0.0-20210105120005-909beea2cc74 h1:Kk6a4nehpJ3UuJRqlA3JxYxBZEqCeOmATOvrbT4p9RA=
github.com/alexbrainman/sspi v0.0.0-20210105120005-909beea2cc74/go.mod h1:cEWa1LVoE5KvSD9ONXsZrj0z6KqySlCCNKHlLzbqAt4=
github.com/BurntSushi/toml v1.4.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho=
github.com/alecthomas/assert/v2 v2.10.0 h1:jjRCHsj6hBJhkmhznrCzoNpbA3zqy0fYiUcYZP/GkPY=
github.com/alecthomas/assert/v2 v2.10.0/go.mod h1:Bze95FyfUr7x34QZrjL+XP+0qgp/zg8yS+TtBj1WA3k=
github.com/alecthomas/kong v1.2.1 h1:E8jH4Tsgv6wCRX2nGrdPyHDUCSG83WH2qE4XLACD33Q=
github.com/alecthomas/kong v1.2.1/go.mod h1:rKTSFhbdp3Ryefn8x5MOEprnRFQ7nlmMC01GKhehhBM=
github.com/alecthomas/repr v0.4.0 h1:GhI2A8MACjfegCPVq9f1FLvIBS+DrQ2KQBFZP1iFzXc=
github.com/alecthomas/repr v0.4.0/go.mod h1:Fr0507jx4eOXV7AlPV6AVZLYrLIuIeSOWtW57eE/O/4=
github.com/alexbrainman/sspi v0.0.0-20231016080023-1a75b4708caa h1:LHTHcTQiSGT7VVbI0o4wBRNQIgn917usHWOd6VAffYI=
github.com/alexbrainman/sspi v0.0.0-20231016080023-1a75b4708caa/go.mod h1:cEWa1LVoE5KvSD9ONXsZrj0z6KqySlCCNKHlLzbqAt4=
github.com/aws/aws-sdk-go v1.55.5 h1:KKUZBfBoyqy5d3swXyiC7Q76ic40rYcbqH7qjh59kzU=
github.com/aws/aws-sdk-go v1.55.5/go.mod h1:eRwEWoyTWFMVYVQzKMNHWP5/RV4xIUGMQfXQHfHkpNU=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/calmh/glob v0.0.0-20220615080505-1d823af5017b h1:Fjm4GuJ+TGMgqfGHN42IQArJb77CfD/mAwLbDUoJe6g=
github.com/calmh/glob v0.0.0-20220615080505-1d823af5017b/go.mod h1:91K7jfEsgJSyfSrX+gmrRfZMtntx6JsHolWubGXDopg=
github.com/calmh/incontainer v0.0.0-20221224152218-b3e71b103d7a h1:CjrQbpvnV4BMzPHf0r8p2FAvzEp/bp761CmBBeNIHXI=
github.com/calmh/incontainer v0.0.0-20221224152218-b3e71b103d7a/go.mod h1:eOhqnw15c9X+4RNBe0W3HlUZFfX16O0EDsCOInTndHY=
github.com/calmh/xdr v1.1.0 h1:U/Dd4CXNLoo8EiQ4ulJUXkgO1/EyQLgDKLgpY1SOoJE=
github.com/calmh/xdr v1.1.0/go.mod h1:E8sz2ByAdXC8MbANf1LCRYzedSnnc+/sXXJs/PVqoeg=
github.com/ccding/go-stun v0.1.4 h1:lC0co3Q3vjAuu2Jz098WivVPBPbemYFqbwE1syoka4M=
github.com/ccding/go-stun v0.1.4/go.mod h1:cCZjJ1J3WFSJV6Wj8Y9Di8JMTsEXh6uv2eNmLzKaUeM=
github.com/calmh/incontainer v1.0.0 h1:g2cTUtZuFGmMGX8GoykPkN1Judj2uw8/3/aEtq4Z/rg=
github.com/calmh/incontainer v1.0.0/go.mod h1:eOhqnw15c9X+4RNBe0W3HlUZFfX16O0EDsCOInTndHY=
github.com/calmh/xdr v1.2.0 h1:GaGSNH4ZDw9kNdYqle6+RcAENiaQ8/611Ok+jQbBEeU=
github.com/calmh/xdr v1.2.0/go.mod h1:vO5+lCx/8xz7Ekd/ZLf+xuy7c1x6FMO1pBJyjDebwyg=
github.com/ccding/go-stun v0.1.5 h1:qEM367nnezmj7dv+SdT52prv5x6HUTG3nlrjX5aitlo=
github.com/ccding/go-stun v0.1.5/go.mod h1:cCZjJ1J3WFSJV6Wj8Y9Di8JMTsEXh6uv2eNmLzKaUeM=
github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8=
github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE=
github.com/certifi/gocertifi v0.0.0-20210507211836-431795d63e8d h1:S2NE3iHSwP0XV47EEXL8mWmRdEfGscSJ+7EgePNgt0s=
github.com/certifi/gocertifi v0.0.0-20210507211836-431795d63e8d/go.mod h1:sGbDF6GwGcLpkNXPUTkMRoywsNa/ol15pxFe6ERfguA=
github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44=
github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/chmduquesne/rollinghash v4.0.0+incompatible h1:hnREQO+DXjqIw3rUTzWN7/+Dpw+N5Um8zpKV0JOEgbo=
github.com/chmduquesne/rollinghash v4.0.0+incompatible/go.mod h1:Uc2I36RRfTAf7Dge82bi3RU0OQUmXT9iweIcPqvr8A0=
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/cpuguy83/go-md2man/v2 v2.0.2/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/cpuguy83/go-md2man/v2 v2.0.3 h1:qMCsGGgs+MAzDFyp9LpAe1Lqy/fY/qCovCm0qnXZOBM=
github.com/cpuguy83/go-md2man/v2 v2.0.3/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/cpuguy83/go-md2man/v2 v2.0.5 h1:ZtcqGrnekaHpVLArFSe4HK5DoKx1T0rq2DwVB0alcyc=
github.com/cpuguy83/go-md2man/v2 v2.0.5/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/d4l3k/messagediff v1.2.1 h1:ZcAIMYsUg0EAp9X+tt8/enBE/Q8Yd5kzPynLyKptt9U=
github.com/d4l3k/messagediff v1.2.1/go.mod h1:Oozbb1TVXFac9FtSIxHBMnBCq2qeH/2KkEQxENCrlLo=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/flynn-archive/go-shlex v0.0.0-20150515145356-3f9db97f8568 h1:BMXYYRWTLOJKlh+lOBt6nUQgXAfB7oVIQt5cNreqSLI=
github.com/flynn-archive/go-shlex v0.0.0-20150515145356-3f9db97f8568/go.mod h1:rZfgFAXFS/z/lEd6LJmf9HVZ1LkgYiHx5pHhV5DR16M=
github.com/ebitengine/purego v0.8.0 h1:JbqvnEzRvPpxhCJzJJ2y0RbiZ8nyjccVUrSM3q+GvvE=
github.com/ebitengine/purego v0.8.0/go.mod h1:iIjxzd6CiRiOG0UyXP+V1+jWqUXVjPKLAI0mRfJZTmQ=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
github.com/fsnotify/fsnotify v1.5.4 h1:jRbGcIw6P2Meqdwuo0H1p6JVLbL5DHKAKlYndzMwVZI=
github.com/fsnotify/fsnotify v1.5.4/go.mod h1:OVB6XrOHzAwXMpEM7uPOzcehqUV2UqJxmVXmkdnm1bU=
github.com/fsnotify/fsnotify v1.6.0/go.mod h1:sl3t1tCWJFWoRz9R8WJCbQihKKwmorjAbSClcnxKAGw=
github.com/fsnotify/fsnotify v1.7.0 h1:8JEhPFa5W2WU7YfeZzPNqzMP6Lwt7L2715Ggo0nosvA=
github.com/fsnotify/fsnotify v1.7.0/go.mod h1:40Bi/Hjc2AVfZrqy+aj+yEI+/bRxZnMJyTJwOpGvigM=
github.com/getsentry/raven-go v0.2.0 h1:no+xWJRb5ZI7eE8TWgIq1jLulQiIoLG0IfYxv5JYMGs=
github.com/getsentry/raven-go v0.2.0/go.mod h1:KungGk8q33+aIAZUIVWZDr2OfAEBsO49PX4NzFV5kcQ=
github.com/go-asn1-ber/asn1-ber v1.5.5 h1:MNHlNMBDgEKD4TcKr36vQN68BA00aDfjIt3/bD50WnA=
github.com/go-asn1-ber/asn1-ber v1.5.5/go.mod h1:hEBeB/ic+5LoWskz+yKT7vGhhPYkProFKoKdwZRWMe0=
github.com/go-ldap/ldap/v3 v3.4.6 h1:ert95MdbiG7aWo/oPYp9btL3KJlMPKnP58r09rI8T+A=
github.com/go-ldap/ldap/v3 v3.4.6/go.mod h1:IGMQANNtxpsOzj7uUAMjpGBaOVTC4DYyIy8VsTdxmtc=
github.com/go-logr/logr v1.3.0 h1:2y3SDp0ZXuc6/cjLSZ+Q3ir+QB9T/iG5yYRXqsagWSY=
github.com/go-asn1-ber/asn1-ber v1.5.7 h1:DTX+lbVTWaTw1hQ+PbZPlnDZPEIs0SS/GCZAl535dDk=
github.com/go-asn1-ber/asn1-ber v1.5.7/go.mod h1:hEBeB/ic+5LoWskz+yKT7vGhhPYkProFKoKdwZRWMe0=
github.com/go-ldap/ldap/v3 v3.4.8 h1:loKJyspcRezt2Q3ZRMq2p/0v8iOurlmeXDPw6fikSvQ=
github.com/go-ldap/ldap/v3 v3.4.8/go.mod h1:qS3Sjlu76eHfHGpUdWkAXQTw4beih+cHsco2jXlIXrk=
github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY=
github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
github.com/go-ole/go-ole v1.3.0 h1:Dt6ye7+vXGIKZ7Xtk4s6/xVdGDQynvom7xCFEdWr6uE=
github.com/go-ole/go-ole v1.3.0/go.mod h1:5LS6F96DhAwUc7C+1HLexzMXY1xGRSryjyPPKW6zv78=
github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0/go.mod h1:fyg7847qk6SyHyPtNmDHnmrv/HOrqktSC+C9fM+CJOE=
github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572 h1:tfuBGBXKqDEevZMzYi5KSi8KkcZtzBcTgAUUtapy0OI=
github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572/go.mod h1:9Pwr4B2jHnOSGXyyzV8ROjYa2ojvAY6HCGYYfMoC3Ls=
github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI=
github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8=
github.com/gofrs/flock v0.12.1 h1:MTLVXXHf8ekldpJk3AKicLij9MdwOWkZ+a/jHHZby9E=
github.com/gofrs/flock v0.12.1/go.mod h1:9zxTsyu5xtJ9DK+1tFZyibEV7y3uwDxPPfbxeeHCoD0=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
@@ -66,158 +77,213 @@ github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvq
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
github.com/golang/snappy v0.0.4 h1:yAGX7huGHXlcLOEtBnF4w7FQwA26wojNCwOYAEhLjQM=
github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.7/go.mod h1:n+brtR0CgQNWTVd5ZUFpTBC8YFBDLK/h/bpaJ8/DtOE=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/pprof v0.0.0-20210407192527-94a9f03dee38/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20231212022811-ec68065c825e h1:bwOy7hAFd0C91URzMIEBfr6BAz29yk7Qj0cy6S7DJlU=
github.com/google/pprof v0.0.0-20231212022811-ec68065c825e/go.mod h1:czg5+yv1E0ZGTi6S6vVK1mke0fV+FaUhNGcd6VRS9Ik=
github.com/google/uuid v1.3.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.4.0 h1:MtMxsa51/r9yyhkyLsVeVt0B+BGQZzpQiTQ4eHZ8bc4=
github.com/google/uuid v1.4.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/greatroar/blobloom v0.7.2 h1:F30MGLHOcb4zr0pwCPTcKdlTM70rEgkf+LzdUPc5ss8=
github.com/greatroar/blobloom v0.7.2/go.mod h1:mjMJ1hh1wjGVfr93QIHJ6FfDNVrA0IELv8OvMHJxHKs=
github.com/google/pprof v0.0.0-20241009165004-a3522334989c h1:NDovD0SMpBYXlE1zJmS1q55vWB/fUQBcPAqAboZSccA=
github.com/google/pprof v0.0.0-20241009165004-a3522334989c/go.mod h1:vavhavw2zAxS5dIdcRluK6cSGGPlZynqzFM8NdvU144=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/securecookie v1.1.1/go.mod h1:ra0sb63/xPlUeL+yeDciTfxMRAA+MP+HVt/4epWDjd4=
github.com/gorilla/sessions v1.2.1/go.mod h1:dk2InVEVJ0sfLlnXv9EAgkf6ecYs/i80K/zI+bUmuGM=
github.com/greatroar/blobloom v0.8.0 h1:I9RlEkfqK9/6f1v9mFmDYegDQ/x0mISCpiNpAm23Pt4=
github.com/greatroar/blobloom v0.8.0/go.mod h1:mjMJ1hh1wjGVfr93QIHJ6FfDNVrA0IELv8OvMHJxHKs=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk=
github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
github.com/hashicorp/go-uuid v1.0.2/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/go-uuid v1.0.3 h1:2gKiV6YVmrJ1i2CKKa9obLvRieoRGviZFL26PcT/Co8=
github.com/hashicorp/go-uuid v1.0.3/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k=
github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
github.com/hexops/gotextdiff v1.0.3 h1:gitA9+qJrrTCsiCl7+kh75nPqQt1cx4ZkudSTLoUqJM=
github.com/hexops/gotextdiff v1.0.3/go.mod h1:pSWU5MAI3yDq+fZBTazCSJysOMbxWL1BSow5/V2vxeg=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/jackpal/gateway v1.0.10 h1:7g3fDo4Cd3RnTu6PzAfw6poO4Y81uNxrxFQFsBFSzJM=
github.com/jackpal/gateway v1.0.10/go.mod h1:+uPBgIllrbkwYCAoDkGSZbjvpre/bGYAFCYIcrH+LHs=
github.com/jackpal/gateway v1.0.15 h1:yb4Gltgr8ApHWWnSyybnDL1vURbqw7ooo7IIL5VZSeg=
github.com/jackpal/gateway v1.0.15/go.mod h1:dbyEDcDhHUh9EmjB9ung81elMUZfG0SoNc2TfTbcj4c=
github.com/jackpal/go-nat-pmp v1.0.2 h1:KzKSgb7qkJvOUTqYl9/Hg/me3pWgBmERKrTGD7BdWus=
github.com/jackpal/go-nat-pmp v1.0.2/go.mod h1:QPH045xvCAeXUZOxsnwmrtiCoxIr9eob+4orBN1SBKc=
github.com/jcmturner/aescts/v2 v2.0.0 h1:9YKLH6ey7H4eDBXW8khjYslgyqG2xZikXP0EQFKrle8=
github.com/jcmturner/aescts/v2 v2.0.0/go.mod h1:AiaICIRyfYg35RUkr8yESTqvSy7csK90qZ5xfvvsoNs=
github.com/jcmturner/dnsutils/v2 v2.0.0 h1:lltnkeZGL0wILNvrNiVCR6Ro5PGU/SeBvVO/8c/iPbo=
github.com/jcmturner/dnsutils/v2 v2.0.0/go.mod h1:b0TnjGOvI/n42bZa+hmXL+kFJZsFT7G4t3HTlQ184QM=
github.com/jcmturner/gofork v1.7.6 h1:QH0l3hzAU1tfT3rZCnW5zXl+orbkNMMRGJfdJjHVETg=
github.com/jcmturner/gofork v1.7.6/go.mod h1:1622LH6i/EZqLloHfE7IeZ0uEJwMSUyQ/nDd82IeqRo=
github.com/jcmturner/goidentity/v6 v6.0.1 h1:VKnZd2oEIMorCTsFBnJWbExfNN7yZr3EhJAxwOkZg6o=
github.com/jcmturner/goidentity/v6 v6.0.1/go.mod h1:X1YW3bgtvwAXju7V3LCIMpY0Gbxyjn/mY9zx4tFonSg=
github.com/jcmturner/gokrb5/v8 v8.4.4 h1:x1Sv4HaTpepFkXbt2IkL29DXRf8sOfZXo8eRKh687T8=
github.com/jcmturner/gokrb5/v8 v8.4.4/go.mod h1:1btQEpgT6k+unzCwX1KdWMEwPPkkgBtP+F6aCACiMrs=
github.com/jcmturner/rpc/v2 v2.0.3 h1:7FXXj8Ti1IaVFpSAziCZWNzbNuZmnvw/i6CqLNdWfZY=
github.com/jcmturner/rpc/v2 v2.0.3/go.mod h1:VUJYCIDm3PVOEHw8sgt091/20OJjskO/YJki3ELg/Hc=
github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg=
github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
github.com/jmespath/go-jmespath/internal/testify v1.5.1 h1:shLQSRRSCCPj3f2gpwzGwWFoC7ycTf1rcQZHOlsJ6N8=
github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U=
github.com/julienschmidt/httprouter v1.3.0 h1:U0609e9tgbseu3rBINet9P48AI/D3oJs4dN7jwJOQ1U=
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 h1:Z9n2FFNUXsshfwJMBgNA0RU6/i7WVaAegv3PtuIHPMs=
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51/go.mod h1:CzGEWj7cYgsdH8dAjBGEr58BoE7ScuLd+fwFZ44+/x8=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/cpuid/v2 v2.2.6 h1:ndNyv040zDGIDh8thGkXYjnFtiN02M1PVVF+JE/48xc=
github.com/klauspost/cpuid/v2 v2.2.6/go.mod h1:Lcz8mBdAVJIBVzewtcLocK12l3Y+JytZYpaMropDUws=
github.com/lib/pq v1.10.9 h1:YXG7RB+JIjhP29X+OtkiDnYaXQwpS4JEWq7dtCCRUEw=
github.com/lib/pq v1.10.9/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0/go.mod h1:zJYVVT2jmtg6P3p1VtQj7WsuWi/y4VnjVBn7F8KPB3I=
github.com/klauspost/compress v1.17.11 h1:In6xLpyWOi1+C7tXUUWv2ot1QvBjxevKAaI6IXrJmUc=
github.com/klauspost/compress v1.17.11/go.mod h1:pMDklpSncoRMuLFrf1W9Ss9KT+0rH90U12bZKk7uwG0=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/lufia/plan9stats v0.0.0-20240909124753-873cd0166683 h1:7UMa6KCCMjZEMDtTVdcGu0B1GmmC7QJKiCCjyTAWQy0=
github.com/lufia/plan9stats v0.0.0-20240909124753-873cd0166683/go.mod h1:ilwx/Dta8jXAgpFYFvSWEMwxmbWXyiUHkd5FwyKhb5k=
github.com/maruel/panicparse/v2 v2.3.1 h1:NtJavmbMn0DyzmmSStE8yUsmPZrZmudPH7kplxBinOA=
github.com/maruel/panicparse/v2 v2.3.1/go.mod h1:s3UmQB9Fm/n7n/prcD2xBGDkwXD6y2LeZnhbEXvs9Dg=
github.com/mattn/go-colorable v0.1.12/go.mod h1:u5H1YNBxpqRaxsYJYSkiCWKzEfiAb1Gb520KVy5xxl4=
github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94=
github.com/matttproud/golang_protobuf_extensions/v2 v2.0.0 h1:jWpvCLoY8Z/e3VKvlsiIGKtc+UG6U5vzxaoagmhXfyg=
github.com/matttproud/golang_protobuf_extensions/v2 v2.0.0/go.mod h1:QUyp042oQthUoa9bqDv0ER0wrtXnBruoNd7aNjkbP+k=
github.com/maxbrunsfeld/counterfeiter/v6 v6.5.0 h1:rBhB9Rls+yb8kA4x5a/cWxOufWfXt24E+kq4YlbGj3g=
github.com/maxbrunsfeld/counterfeiter/v6 v6.5.0/go.mod h1:fJ0UAZc1fx3xZhU4eSHQDJ1ApFmTVhp5VTpV9tm2ogg=
github.com/maxbrunsfeld/counterfeiter/v6 v6.8.1 h1:NicmruxkeqHjDv03SfSxqmaLuisddudfP3h5wdXFbhM=
github.com/maxbrunsfeld/counterfeiter/v6 v6.8.1/go.mod h1:eyp4DdUJAKkr9tvxR3jWhw2mDK7CWABMG5r9uyaKC7I=
github.com/maxmind/geoipupdate/v6 v6.1.0 h1:sdtTHzzQNJlXF5+fd/EoPTucRHyMonYt/Cok8xzzfqA=
github.com/maxmind/geoipupdate/v6 v6.1.0/go.mod h1:cZYCDzfMzTY4v6dKRdV7KTB6SStxtn3yFkiJ1btTGGc=
github.com/mgutz/ansi v0.0.0-20200706080929-d51e80ef957d/go.mod h1:01TrycV0kFyexm33Z7vhZRXopbI8J3TDReVlkTgMUxE=
github.com/minio/sha256-simd v1.0.1 h1:6kaan5IFmwTNynnKKpDHe6FWHohJOHhCPchzK49dzMM=
github.com/minio/sha256-simd v1.0.1/go.mod h1:Pz6AKMiUdngCLpeTL/RJY1M9rUuPMYujV5xJjtbRSN8=
github.com/miscreant/miscreant.go v0.0.0-20200214223636-26d376326b75 h1:cUVxyR+UfmdEAZGJ8IiKld1O0dbGotEnkMolG5hfMSY=
github.com/miscreant/miscreant.go v0.0.0-20200214223636-26d376326b75/go.mod h1:pBbZyGwC5i16IBkjVKoy/sznA8jPD/K9iedwe1ESE6w=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
github.com/nxadm/tail v1.4.8 h1:nPr65rt6Y5JFSKQO7qToXr7pePgD6Gwiw05lkbyAQTE=
github.com/nxadm/tail v1.4.8/go.mod h1:+ncqLTQzXmGhMZNUePPaPqPvBxHAIsmXswZKocGu+AU=
github.com/nxadm/tail v1.4.11 h1:8feyoE3OzPrcshW5/MJ4sGESc5cqmGkGCWlco4l0bqY=
github.com/nxadm/tail v1.4.11/go.mod h1:OTaG3NK980DZzxbRq6lEuzgU+mug70nY11sMd4JXXHc=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
github.com/onsi/ginkgo v1.16.4/go.mod h1:dX+/inL/fNMqNlz0e9LfyB9TswhZpCVdJM/Z6Vvnwo0=
github.com/onsi/ginkgo v1.16.5 h1:8xi0RTUf59SOSfEtZMvwTvXYMzG4gV23XVHOZiXNtnE=
github.com/onsi/ginkgo v1.16.5/go.mod h1:+E8gABHa3K6zRBolWtd+ROzc/U5bkGt0FwiG042wbpU=
github.com/onsi/ginkgo/v2 v2.1.3/go.mod h1:vw5CSIxN1JObi/U8gcbwft7ZxR2dgaR70JSE3/PpL4c=
github.com/onsi/ginkgo/v2 v2.13.2 h1:Bi2gGVkfn6gQcjNjZJVO8Gf0FHzMPf2phUei9tejVMs=
github.com/onsi/ginkgo/v2 v2.13.2/go.mod h1:XStQ8QcGwLyF4HdfcZB8SFOS/MWCgDuXMSBe6zrvLgM=
github.com/onsi/ginkgo/v2 v2.20.2 h1:7NVCeyIWROIAheY21RLS+3j2bb52W0W82tkberYytp4=
github.com/onsi/ginkgo/v2 v2.20.2/go.mod h1:K9gyxPIlb+aIvnZ8bd9Ak+YP18w3APlR+5coaZoE2ag=
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
github.com/onsi/gomega v1.17.0/go.mod h1:HnhC7FXeEQY45zxNK3PPoIUhzk/80Xly9PcubAlGdZY=
github.com/onsi/gomega v1.19.0/go.mod h1:LY+I3pBVzYsTBU1AnDwOSxaYi9WoWiqgwooUqq9yPro=
github.com/onsi/gomega v1.29.0 h1:KIA/t2t5UBzoirT4H9tsML45GEbo3ouUnBHsCfD2tVg=
github.com/oschwald/geoip2-golang v1.9.0 h1:uvD3O6fXAXs+usU+UGExshpdP13GAqp4GBrzN7IgKZc=
github.com/oschwald/geoip2-golang v1.9.0/go.mod h1:BHK6TvDyATVQhKNbQBdrj9eAvuwOMi2zSFXizL3K81Y=
github.com/oschwald/maxminddb-golang v1.12.0 h1:9FnTOD0YOhP7DGxGsq4glzpGy5+w7pq50AS6wALUMYs=
github.com/oschwald/maxminddb-golang v1.12.0/go.mod h1:q0Nob5lTCqyQ8WT6FYgS1L7PXKVVbgiymefNwIjPzgY=
github.com/pierrec/lz4/v4 v4.1.18 h1:xaKrnTkyoqfh1YItXl56+6KJNVYWlEEPuAQW9xsplYQ=
github.com/pierrec/lz4/v4 v4.1.18/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
github.com/onsi/gomega v1.34.1 h1:EUMJIKUjM8sKjYbtxQI9A4z2o+rruxnzNvpknOXie6k=
github.com/onsi/gomega v1.34.1/go.mod h1:kU1QgUvBDLXBJq618Xvm2LUX6rSAfRaFRTcdOeDLwwY=
github.com/oschwald/geoip2-golang v1.11.0 h1:hNENhCn1Uyzhf9PTmquXENiWS6AlxAEnBII6r8krA3w=
github.com/oschwald/geoip2-golang v1.11.0/go.mod h1:P9zG+54KPEFOliZ29i7SeYZ/GM6tfEL+rgSn03hYuUo=
github.com/oschwald/maxminddb-golang v1.13.1 h1:G3wwjdN9JmIK2o/ermkHM+98oX5fS+k5MbwsmL4MRQE=
github.com/oschwald/maxminddb-golang v1.13.1/go.mod h1:K4pgV9N/GcK694KSTmVSDTODk4IsCNThNdTmnaBZ/F8=
github.com/pierrec/lz4/v4 v4.1.21 h1:yOVMLb6qSIDP67pl/5F7RepeKYu/VmTyEXvuMI5d9mQ=
github.com/pierrec/lz4/v4 v4.1.21/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
github.com/power-devops/perfstat v0.0.0-20221212215047-62379fc7944b h1:0LFwY6Q3gMACTjAbMZBjXAqTOzOwFaj2Ld6cjeQ7Rig=
github.com/power-devops/perfstat v0.0.0-20221212215047-62379fc7944b/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
github.com/prometheus/client_golang v1.17.0 h1:rl2sfwZMtSthVU752MqfjQozy7blglC+1SOtjMAMh+Q=
github.com/prometheus/client_golang v1.17.0/go.mod h1:VeL+gMmOAxkS2IqfCq0ZmHSL+LjWfWDUmp1mBz9JgUY=
github.com/prometheus/client_model v0.5.0 h1:VQw1hfvPvk3Uv6Qf29VrPF32JB6rtbgI6cYPYQjL0Qw=
github.com/prometheus/client_model v0.5.0/go.mod h1:dTiFglRmd66nLR9Pv9f0mZi7B7fk5Pm3gvsjB5tr+kI=
github.com/prometheus/common v0.45.0 h1:2BGz0eBc2hdMDLnO/8n0jeB3oPrt2D08CekT0lneoxM=
github.com/prometheus/common v0.45.0/go.mod h1:YJmSTw9BoKxJplESWWxlbyttQR4uaEcGyv9MZjVOJsY=
github.com/prometheus/procfs v0.12.0 h1:jluTpSng7V9hY0O2R9DzzJHYb2xULk9VTR1V1R/k6Bo=
github.com/prometheus/procfs v0.12.0/go.mod h1:pcuDEFsWDnvcgNzo4EEweacyhjeA9Zk3cnaOZAZEfOo=
github.com/quic-go/qtls-go1-20 v0.4.1 h1:D33340mCNDAIKBqXuAvexTNMUByrYmFYVfKfDN5nfFs=
github.com/quic-go/qtls-go1-20 v0.4.1/go.mod h1:X9Nh97ZL80Z+bX/gUXMbipO6OxdiDi58b/fMC9mAL+k=
github.com/quic-go/quic-go v0.40.1 h1:X3AGzUNFs0jVuO3esAGnTfvdgvL4fq655WaOi1snv1Q=
github.com/quic-go/quic-go v0.40.1/go.mod h1:PeN7kuVJ4xZbxSv/4OX6S1USOX8MJvydwpTx31vx60c=
github.com/posener/complete v1.2.3 h1:NP0eAhjcjImqslEwo/1hq7gpajME0fTLTezBKDqfXqo=
github.com/posener/complete v1.2.3/go.mod h1:WZIdtGGp+qx0sLrYKtIRAruyNpv6hFCicSgv7Sy7s/s=
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 h1:o4JXh1EVt9k/+g42oCprj/FisM4qX9L3sZB3upGN2ZU=
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
github.com/prashantv/gostub v1.1.0 h1:BTyx3RfQjRHnUWaGF9oQos79AlQ5k8WNktv7VGvVH4g=
github.com/prashantv/gostub v1.1.0/go.mod h1:A5zLQHz7ieHGG7is6LLXLz7I8+3LZzsrV0P1IAHhP5U=
github.com/prometheus/client_golang v1.20.5 h1:cxppBPuYhUnsO6yo/aoRol4L7q7UFfdm+bR9r+8l63Y=
github.com/prometheus/client_golang v1.20.5/go.mod h1:PIEt8X02hGcP8JWbeHyeZ53Y/jReSnHgO035n//V5WE=
github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
github.com/prometheus/common v0.60.0 h1:+V9PAREWNvJMAuJ1x1BaWl9dewMW4YrHZQbx0sJNllA=
github.com/prometheus/common v0.60.0/go.mod h1:h0LYf1R1deLSKtD4Vdg8gy4RuOvENW2J/h19V5NADQw=
github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc=
github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
github.com/puzpuzpuz/xsync/v3 v3.4.0 h1:DuVBAdXuGFHv8adVXjWWZ63pJq+NRXOWVXlKDBZ+mJ4=
github.com/puzpuzpuz/xsync/v3 v3.4.0/go.mod h1:VjzYrABPabuM4KyBh1Ftq6u8nhwY5tBPKP9jpmh0nnA=
github.com/quic-go/quic-go v0.48.0 h1:2TCyvBrMu1Z25rvIAlnp2dPT4lgh/uTqLqiXVpp5AeU=
github.com/quic-go/quic-go v0.48.0/go.mod h1:yBgs3rWBOADpga7F+jJsb6Ybg1LSYiQvwWlLX+/6HMs=
github.com/rabbitmq/amqp091-go v1.10.0 h1:STpn5XsHlHGcecLmMFCtg7mqq0RnD+zFr4uzukfVhBw=
github.com/rabbitmq/amqp091-go v1.10.0/go.mod h1:Hy4jKW5kQART1u+JkDTF9YYOQUHXqMuhrgxOEeS7G4o=
github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475 h1:N/ElC8H3+5XpJzTSTfLsJV/mx9Q9g7kxmchpfZyxgzM=
github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/riywo/loginshell v0.0.0-20200815045211-7d26008be1ab h1:ZjX6I48eZSFetPb41dHudEyVr5v953N15TsNZXlkcWY=
github.com/riywo/loginshell v0.0.0-20200815045211-7d26008be1ab/go.mod h1:/PfPXh0EntGc3QAAyUaviy4S9tzy4Zp0e2ilq4voC6E=
github.com/rogpeppe/go-internal v1.10.0 h1:TMyTOH3F/DB16zRVcYyreMH6GnZZrwQVAoYjRBZyWFQ=
github.com/rogpeppe/go-internal v1.10.0/go.mod h1:UQnix2H7Ngw/k4C5ijL5+65zddjncjaFoBhdsK/akog=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/sclevine/spec v1.4.0 h1:z/Q9idDcay5m5irkZ28M7PtQM4aOISzOpj4bUPkDee8=
github.com/shirou/gopsutil/v3 v3.23.11 h1:i3jP9NjCPUz7FiZKxlMnODZkdSIp2gnzfrvsu9CuWEQ=
github.com/shirou/gopsutil/v3 v3.23.11/go.mod h1:1FrWgea594Jp7qmjHUUPlJDTPgcsb9mGnXDxavtikzM=
github.com/shoenig/go-m1cpu v0.1.6/go.mod h1:1JJMcUBvfNwpq05QDQVAnx3gUHr9IYF7GNg9SUEw2VQ=
github.com/shoenig/test v0.6.4/go.mod h1:byHiCGXqrVaflBLAMq/srcZIHynQPQgeyvkvXnjqq0k=
github.com/sclevine/spec v1.4.0/go.mod h1:LvpgJaFyvQzRvc1kaDs0bulYwzC70PbiYjC4QnFHkOM=
github.com/shirou/gopsutil/v4 v4.24.9 h1:KIV+/HaHD5ka5f570RZq+2SaeFsb/pq+fp2DGNWYoOI=
github.com/shirou/gopsutil/v4 v4.24.9/go.mod h1:3fkaHNeYsUFCGZ8+9vZVWtbyM1k2eRnlL+bWO8Bxa/Q=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.2/go.mod h1:R6va5+xMeoiuVRoj+gSkQ7d3FALtqAAGI1FQKckRals=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/syncthing/notify v0.0.0-20210616190510-c6b7342338d2 h1:F4snRP//nIuTTW9LYEzVH4HVwDG9T3M4t8y/2nqMbiY=
github.com/syncthing/notify v0.0.0-20210616190510-c6b7342338d2/go.mod h1:J0q59IWjLtpRIJulohwqEZvjzwOfTEPp8SVhDJl+y0Y=
github.com/syndtr/goleveldb v1.0.1-0.20220721030215-126854af5e6d h1:vfofYNRScrDdvS342BElfbETmL1Aiz3i2t0zfRj16Hs=
github.com/syndtr/goleveldb v1.0.1-0.20220721030215-126854af5e6d/go.mod h1:RRCYJbIwD5jmqPI9XoAFR0OcDxqUctll6zUj/+B4S48=
github.com/thejerf/suture/v4 v4.0.2 h1:VxIH/J8uYvqJY1+9fxi5GBfGRkRZ/jlSOP6x9HijFQc=
github.com/thejerf/suture/v4 v4.0.2/go.mod h1:g0e8vwskm9tI0jRjxrnA6lSr0q6OfPdWJVX7G5bVWRs=
github.com/tklauser/go-sysconf v0.3.12/go.mod h1:Ho14jnntGE1fpdOqQEEaiKRpvIavV0hSfmBq8nJbHYI=
github.com/tklauser/numcpus v0.6.1/go.mod h1:1XfjsgE2zo8GVw7POkMbHENHzVg3GzmoZ9fESEdAacY=
github.com/thejerf/suture/v4 v4.0.5 h1:F1E/4FZwXWqvlWDKEUo6/ndLtxGAUzMmNqkrMknZbAA=
github.com/thejerf/suture/v4 v4.0.5/go.mod h1:gu9Y4dXNUWFrByqRt30Rm9/UZ0wzRSt9AJS6xu/ZGxU=
github.com/tklauser/go-sysconf v0.3.14 h1:g5vzr9iPFFz24v2KZXs/pvpvh8/V9Fw6vQK5ZZb78yU=
github.com/tklauser/go-sysconf v0.3.14/go.mod h1:1ym4lWMLUOhuBOPGtRcJm7tEGX4SCYNEEEtghGG/8uY=
github.com/tklauser/numcpus v0.9.0 h1:lmyCHtANi8aRUgkckBgoDk1nHCux3n2cgkJLXdQGPDo=
github.com/tklauser/numcpus v0.9.0/go.mod h1:SN6Nq1O3VychhC1npsWostA+oW+VOQTxZrS604NSRyI=
github.com/urfave/cli v1.20.0/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
github.com/urfave/cli v1.22.14 h1:ebbhrRiGK2i4naQJr+1Xj92HXZCrK7MsyTS/ob3HnAk=
github.com/urfave/cli v1.22.14/go.mod h1:X0eDS6pD6Exaclxm99NJ3FiCDRED7vIHpx2mDOHLvkA=
github.com/urfave/cli v1.22.16 h1:MH0k6uJxdwdeWQTwhSO42Pwr4YLrNLwBtg1MRgTqPdQ=
github.com/urfave/cli v1.22.16/go.mod h1:EeJR6BKodywf4zciqrdw6hpCPk68JO9z5LazXZMn5Po=
github.com/vitrun/qart v0.0.0-20160531060029-bf64b92db6b0 h1:okhMind4q9H1OxF44gNegWkiP4H/gsTFLalHFa4OOUI=
github.com/vitrun/qart v0.0.0-20160531060029-bf64b92db6b0/go.mod h1:TTbGUfE+cXXceWtbTHq6lqcTvYPBKLNejBEbnUsQJtU=
github.com/willabides/kongplete v0.4.0 h1:eivXxkp5ud5+4+NVN9e4goxC5mSh3n1RHov+gsblM2g=
github.com/willabides/kongplete v0.4.0/go.mod h1:0P0jtWD9aTsqPSUAl4de35DLghrr57XcayPyvqSi2X8=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
github.com/yusufpapurcu/wmi v1.2.3 h1:E1ctvB7uKFMOJw3fdOW32DwGE9I7t++CRUEMKvFoFiw=
github.com/yusufpapurcu/wmi v1.2.3/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0=
go.uber.org/mock v0.3.0 h1:3mUxI1No2/60yUYax92Pt8eNOEecx2D3lcXZh2NEZJo=
go.uber.org/mock v0.3.0/go.mod h1:a6FSlNadKUHUa9IP5Vyt1zh4fC7uAwxMutEAscFbkZc=
github.com/yusufpapurcu/wmi v1.2.4 h1:zFUKzehAFReQwLys1b/iSMl+JQGSCSjtVqQn9bBrPo0=
github.com/yusufpapurcu/wmi v1.2.4/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0=
go.uber.org/automaxprocs v1.6.0 h1:O3y2/QNTOdbF+e/dpXNNW7Rx2hZ4sTIPyybbxyNqTUs=
go.uber.org/automaxprocs v1.6.0/go.mod h1:ifeIMSnPZuznNm6jmdzmU3/bfk01Fe2fotchwEFJ8r8=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.uber.org/mock v0.4.0 h1:VcM4ZOtdbR4f6VXfiOpwpVJDL6lCReaZ6mw31wqh7KU=
go.uber.org/mock v0.4.0/go.mod h1:a6FSlNadKUHUa9IP5Vyt1zh4fC7uAwxMutEAscFbkZc=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliYc=
golang.org/x/crypto v0.16.0 h1:mMMrFzRSCF0GvB7Ne27XVtVAaXLrPmgPC7/v0tkwHaY=
golang.org/x/crypto v0.16.0/go.mod h1:gCAAfMLgwOJRpTjQ2zCCt2OcSfYMTeZVSRtQlPC7Nq4=
golang.org/x/exp v0.0.0-20231206192017-f3f8817b8deb h1:c0vyKkb6yr3KR7jEfJaOSv4lG7xPkbN6r52aJz1d8a8=
golang.org/x/exp v0.0.0-20231206192017-f3f8817b8deb/go.mod h1:iRJReGqOEeBhDZGkGbynYwcHlctCvnjTYIamk7uXpHI=
golang.org/x/crypto v0.6.0/go.mod h1:OFC/31mSvZgRz0V1QTNCzfAI1aIRzbiufJtkMIlEp58=
golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
golang.org/x/crypto v0.21.0/go.mod h1:0BP7YvVV9gBbVKyeTG0Gyn+gZm94bibOW5BjDEYAOMs=
golang.org/x/crypto v0.28.0 h1:GBDwsMXVQi34v5CCYUm2jkJvu4cbtru2U4TN2PSyQnw=
golang.org/x/crypto v0.28.0/go.mod h1:rmgy+3RHxRZMyY0jjAJShp2zgEdOqj2AO7U0pYmeQ7U=
golang.org/x/exp v0.0.0-20241009180824-f66d83c29e7c h1:7dEasQXItcW1xKJ2+gg5VOiBnqWrJc+rq0DPKyvvdbY=
golang.org/x/exp v0.0.0-20241009180824-f66d83c29e7c/go.mod h1:NQtJDoLvd6faHhE7m4T/1IY708gDefGGjR/iUW8yQQ8=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.14.0 h1:dGoOF9QVLYng8IHTm7BAyWqCqSheQ5pYWGhzW00YJr0=
golang.org/x/mod v0.14.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.21.0 h1:vvrHzRwRfVKSiLrG+d4FMl/Qi4ukBCE6kZlTUkDYRT0=
golang.org/x/mod v0.21.0/go.mod h1:6SkKJ3Xj0I0BrPOZoBy3bdMptDDU9oJrpohJ3eWZ1fY=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
@@ -227,16 +293,20 @@ golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su
golang.org/x/net v0.0.0-20220607020251-c690dde0001d/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.19.0 h1:zTwKpTd2XuCqf8huc7Fo2iSy+4RHPd10s4KzeTnVr1c=
golang.org/x/net v0.19.0/go.mod h1:CfAk/cbD4CthTvqiEl8NpboMuiuOYsAr/7NOjZJtv1U=
golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44=
golang.org/x/net v0.22.0/go.mod h1:JKghWKKOSdJwpW2GEx0Ja7fmaKnMsbu+MWVZTokSYmg=
golang.org/x/net v0.30.0 h1:AcW1SDZMkb8IpzCdQUaIq2sP4sZ4zw+55h6ynffypl4=
golang.org/x/net v0.30.0/go.mod h1:2wGyMJ5iFasEhkwi13ChkO/t1ECNC4X4eBKkVFyYFlU=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.5.0 h1:60k92dhOjHxJkrqnwsfl8KuaHbn/5dl0lUPUklKo3qE=
golang.org/x/sync v0.8.0 h1:3NFvSEYkUoMifnESzZl15y791HH1qU2xm6eCJU5ZPXQ=
golang.org/x/sync v0.8.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180926160741-c2ed4eda69e7/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -260,29 +330,31 @@ golang.org/x/sys v0.0.0-20220408201424-a24fb2fb8a0f/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20220412211240-33da011f77ad/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220908164124-27713097b956/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.15.0 h1:h48lPFYpsTvQJZF4EKyI4aLHaev3CxivZmv7yZig9pc=
golang.org/x/sys v0.15.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.18.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.26.0 h1:KHjCJyddX0LoSTb3J+vWpupP9p0oznkqVk/IfjymZbo=
golang.org/x/sys v0.26.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
golang.org/x/term v0.12.0/go.mod h1:owVbMEjm3cBLCHdkQu9b1opXd4ETQWc3BhuQGKgXgvU=
golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=
golang.org/x/term v0.18.0/go.mod h1:ILwASektA3OnRv7amZ1xhE/KTR+u50pbXfZ03+6Nx58=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.14.0 h1:ScX5w1eTa3QqT8oi6+ziP7dTV1S2+ALU0bI+0zXKWiQ=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/time v0.5.0 h1:o7cqy6amK/52YcAKIPlM3a+Fpj35zvRj2TP+e1xFSfk=
golang.org/x/time v0.5.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/text v0.19.0 h1:kTxAhCbGbxhK0IwgSKiMO5awPoDQ0RpfiVYBfK860YM=
golang.org/x/text v0.19.0/go.mod h1:BuEKDfySbSR4drPmRPG/7iBdf8hvFMuRexcpahXilzY=
golang.org/x/time v0.7.0 h1:ntUhktv3OPE6TgYxXWv9vKvUSJyIFJlyohwbkEwPrKQ=
golang.org/x/time v0.7.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
@@ -290,8 +362,8 @@ golang.org/x/tools v0.0.0-20201224043029-2b0845dc783e/go.mod h1:emZCQorbCU4vsT4f
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/tools v0.16.1 h1:TLyB3WofjdOEepBHAU20JdNC1Zbg87elYofWYAY5oZA=
golang.org/x/tools v0.16.1/go.mod h1:kYVVN6I1mBNoB1OX+noeBjbRk4IUEPa7JJ+TJMEooJ0=
golang.org/x/tools v0.26.0 h1:v/60pFQmzmT9ExmjDv2gGIfi3OqfKoEP6I5+umXlbnQ=
golang.org/x/tools v0.26.0/go.mod h1:TPVVj70c7JJ3WCazhD8OdXcZg/og+b9+tH/KxylGwH0=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@@ -305,16 +377,22 @@ google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzi
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.31.0 h1:g0LDEJHgrBl9N9r17Ru3sqWhkIx2NB67okBHPwC7hs8=
google.golang.org/protobuf v1.31.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
google.golang.org/protobuf v1.35.1 h1:m3LfL6/Ca+fqnjnlqQXNpFPABW1UD7mjh8KO2mKFytA=
google.golang.org/protobuf v1.35.1/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
sigs.k8s.io/yaml v1.4.0 h1:Mk1wCc2gy/F0THH0TAp1QYyJNzRm2KCLy3o5ASXVI5E=
sigs.k8s.io/yaml v1.4.0/go.mod h1:Ejl7/uTz7PSA4eKMyQCUTnhZYNmLIl+5c2lQPGR2BPY=

View File

@@ -224,7 +224,6 @@ code.ng-binding{
}
.well, .form-control[readonly="readonly"], .popover { /* read-only fields*/
color: #666 !important;
border-color: #444 !important;
background-color: #111 !important;
}
@@ -278,3 +277,17 @@ code.ng-binding{
.reception {
filter: invert(77%) sepia(0%) saturate(724%) hue-rotate(146deg) brightness(91%) contrast(85%);
}
/* Disabled checkbox panels */
.checkbox[disabled] {
background-color: #222222;
}
.checkbox[disabled] * {
color: #666666;
}
.checkbox[disabled] .help-block {
color: #666666 !important;
}

View File

@@ -228,7 +228,6 @@ code.ng-binding{
}
.well, .form-control[readonly="readonly"], .popover { /* read-only fields*/
color: #666 !important;
border-color: #424242 !important;
background-color: #3B3B3B !important;
}
@@ -289,4 +288,18 @@ code.ng-binding{
/* Remote Devices 'connection type'-icon color set to #aaa */
.reception {
filter: invert(77%) sepia(0%) saturate(724%) hue-rotate(146deg) brightness(91%) contrast(85%);
}
}
/* Disabled checkbox panels */
.checkbox[disabled] {
background-color: #3B3B3B;
}
.checkbox[disabled] * {
color: #999999;
}
.checkbox[disabled] .help-block {
color: #999999 !important;
}

View File

@@ -189,6 +189,13 @@ input[type="checkbox"].extended-attributes-filter-rule-checkbox {
word-break: break-all;
}
/* Break up long words in paragraphs only if necessary to prevent text overflow. */
.overflow-break-word {
overflow-wrap: break-word;
/* Legacy name alias */
word-wrap: break-word;
}
.folder-advanced {
padding: 1rem;
margin-bottom: 15px;
@@ -297,11 +304,7 @@ a.toggler:hover {
text-decoration: none;
}
/**
* Panel padding decrease
*/
.panel-collapse .panel-body {
.panel-body.less-padding {
padding: 5px;
}
@@ -445,7 +448,6 @@ ul.three-columns li, ul.two-columns li {
}
@media (max-width:479px) {
nav .dropdown-toggle {
font-size: 1em;
}
@@ -453,13 +455,7 @@ ul.three-columns li, ul.two-columns li {
.navbar-nav .open .dropdown-menu > li > a {
padding: 12px 15px 12px 25px;
}
}
.tab-content {
padding-top: 10px;
}
@media (max-width: 419px) {
/* The selectors are build to target only the content of folder and device
panels as it would "destroy" e.g. out of sync or recent changes listings.
The !important is needed to override .visible-xs that sets display to a
@@ -510,6 +506,10 @@ ul.three-columns li, ul.two-columns li {
}
}
.tab-content {
padding-top: 10px;
}
.form-horizontal .form-group {
margin-bottom: 5px;
}

View File

@@ -11,30 +11,30 @@
"Add Device": "إضافة جهاز",
"Add Folder": "إضافة مجلد",
"Add Remote Device": "إضافة جهاز بعيد",
"Add devices from the introducer to our device list, for mutually shared folders.": "اضف أجهزة من المعرف/المقدم إلى قائمة الأجهزة الخاصة بنا، للمجلدات المشتركة بشكل متبادل.",
"Add devices from the introducer to our device list, for mutually shared folders.": "أضف أجهزة من الوسيط إلى قائمة الأجهزة الخاصة بنا، للمجلدات المشتركة بشكل متبادل.",
"Add filter entry": "إضافة عامل التصفية",
"Add ignore patterns": "أضف أنماط التجاهل",
"Add new folder?": "إضافة مجلد جديد؟",
"Additionally the full rescan interval will be increased (times 60, i.e. new default of 1h). You can also configure it manually for every folder later after choosing No.": "بالإضافة إلى ذلك ، سيتم زيادة الفاصل الزمني لإعادة الفحص الكامل (60 مرة، وهو الافتراضي الجديد من 1H). يمكنك أيضًا التحكم بالإعدادات وتعديلها يدويًا لكل مجلد لاحقًا بعد اختيار \"لا\".",
"Additionally the full rescan interval will be increased (times 60, i.e. new default of 1h). You can also configure it manually for every folder later after choosing No.": "بالإضافة إلى ذلك ، سيُزاد الفاصل الزمني لإعادة الفحص الكامل (60 مرة، وهو الافتراضي الجديد من 1H). يمكنك أيضًا التحكم بالإعدادات وتعديلها يدويًا لكل مجلد لاحقًا بعد اختيار \"لا\".",
"Address": "العنوان",
"Addresses": "العناوين",
"Advanced": "متقدم",
"Advanced Configuration": "ضبط متقدم",
"All Data": "كل البيانات",
"All Time": "كل الوقت",
"All folders shared with this device must be protected by a password, such that all sent data is unreadable without the given password.": "يجب حماية جميع المجلدات التي تمت مشاركتها مع هذا الجهاز بكلمة مرور ، بحيث تكون جميع البيانات المرسلة غير قابلة للقراءة بدون كلمة المرور المقدمة.",
"All folders shared with this device must be protected by a password, such that all sent data is unreadable without the given password.": "يجب حماية جميع المجلدات التي شاركتها مع هذا الجهاز بكلمة مرور ، بحيث تكون جميع البيانات المرسلة غير قابلة للقراءة بدون كلمة المرور المقدمة.",
"Allow Anonymous Usage Reporting?": "السماح بإرسال تقارير الإستخدام المجهولة؟",
"Allowed Networks": "الشبكات المسموح بها",
"Alphabetic": "أبجدية",
"Altered by ignoring deletes.": م التغيير بتجاهل عمليات الحذف.",
"An external command handles the versioning. It has to remove the file from the shared folder. If the path to the application contains spaces, it should be quoted.": "الإصدار يتم معالجته بواسطة أمر خارجي. يجب إزالة الملف من المجلدات المشتركة. إذا كان المسار للتطبيق يحتوي على مسافات، يجب وضعها بين علامتي تنصيص دلالة على الاقتباس.",
"Altered by ignoring deletes.": غير بتجاهل عمليات الحذف.",
"An external command handles the versioning. It has to remove the file from the shared folder. If the path to the application contains spaces, it should be quoted.": "الإصدار يعالج بواسطة أمر خارجي. يجب إزالة الملف من المجلدات المشتركة. إذا كان المسار للتطبيق يحتوي على مسافات، يجب وضعها بين علامتي تنصيص دلالة على الاقتباس.",
"Anonymous Usage Reporting": "تقارير الإستخدام المجهولة",
"Anonymous usage report format has changed. Would you like to move to the new format?": "هل تريد الانتقال الى التصميم الجديد لتقرير الاستخدام المجهول ؟",
"Applied to LAN": "الشبكة المحلية",
"Apply": "تقدم",
"Are you sure you want to override all remote changes?": "هل أنت متأكد أنك تريد تجاوز كافة التغييرات عن بُعد؟",
"Are you sure you want to permanently delete all these files?": "هل أنت متأكد أنك تريد حذف كل هذه الملفات بشكل دائم؟",
"Are you sure you want to remove device {%name%}?": " هل انت متاكد من حذف الجهاز {{name}}? ",
"Are you sure you want to remove device {%name%}?": "هل أنت متيقِّن من حذف هذا الجهاز {{name}}؟",
"Are you sure you want to remove folder {%label%}?": "هل انت متاكد من حذف المجلد {{label}}؟",
"Are you sure you want to restore {%count%} files?": "هل انت متاكد من استعادة {{count}} ملف؟",
"Are you sure you want to revert all local changes?": "هل أنت متأكد أنك تريد التراجع عن كافة التغييرات المحلية؟",
@@ -45,8 +45,8 @@
"Automatic Crash Reporting": "التبليغ التلقائي للاخطاء",
"Automatic upgrade now offers the choice between stable releases and release candidates.": "توفر الترقية التلقائية الاختيار بين النسخ الثابتة أو النسخ المرشحة.",
"Automatic upgrades": "تحديث تلقائي",
"Automatic upgrades are always enabled for candidate releases.": "الترقية التلقائية مفعلة دائمًا للنسخ المرشحة. ",
"Automatically create or share folders that this device advertises at the default path.": "تلقائيا أنشئ وشارك المجلدات الموجودة في المسار الافتراضي.",
"Automatic upgrades are always enabled for candidate releases.": "الترقية التلقائية مفعلة دائمًا للنسخ المرشحة.",
"Automatically create or share folders that this device advertises at the default path.": "أنشئ وشارك المجلدات الموجودة في المسار الافتراضي تلقائيا.",
"Available debug logging facilities:": "خدمات سجلات تدقيق البرمجيات المتوفرة:",
"Be careful!": "احذر!",
"Body:": "جسم:",
@@ -70,13 +70,13 @@
"Connection Type": "نوع الاتصال",
"Connections": "اتصالات",
"Connections via relays might be rate limited by the relay": "قد يكون معدل التوصيلات عبر المرحلات محدودًا بواسطة المرحل",
"Continuously watching for changes is now available within Syncthing. This will detect changes on disk and issue a scan on only the modified paths. The benefits are that changes are propagated quicker and that less full scans are required.": "مراقبة الملفات بشكل مستمر متوفر في Syncthing. يتم فحص الملفات التي تم تغييرها في المسار فقط. هذا يساعد على تجنب فحص كامل المسار لأداء اسرع. ",
"Continuously watching for changes is now available within Syncthing. This will detect changes on disk and issue a scan on only the modified paths. The benefits are that changes are propagated quicker and that less full scans are required.": "مراقبة الملفات بشكل مستمر متوفر في Syncthing. تفحص الملفات التي تغيرت في المسار فقط. هذا يساعد على تجنب فحص كامل المسار لأداء اسرع.",
"Copied from elsewhere": "منسوخ من مكان أخر",
"Copied from original": "منسوخ من الأصل",
"Copied!": "تم النسخ",
"Copied!": "نُسِخَ!",
"Copy": "نسخ",
"Copy failed! Try to select and copy manually.": "فشل النسخ! حاول التحديد والنسخ يدويًا.",
"Currently Shared With Devices": "حاليًا تم مشاركته مع الأجهزة",
"Currently Shared With Devices": "مُشارَك مع الأجهزة حاليا",
"Custom Range": "نطاق مخصص",
"Danger!": "خطر!",
"Database Location": "موقع قاعدة البيانات",
@@ -94,15 +94,15 @@
"Deselect devices to stop sharing this folder with.": "قم بإلغاء تحديد الأجهزة لإيقاف مشاركة هذا المجلد معها.",
"Deselect folders to stop sharing with this device.": "قم بإلغاء تحديد المجلدات لإيقاف المشاركة مع هذا الجهاز.",
"Device": "جهاز",
"Device \"{%name%}\" ({%device%} at {%address%}) wants to connect. Add new device?": "الجهاز \"{{الاسم}}\" ({{الجهاز}} في {{العنوان}}) يرغب في الاتصال، إضافة جهاز جديد؟",
"Device \"{%name%}\" ({%device%} at {%address%}) wants to connect. Add new device?": "الجهاز \"{{name}}\" ({{device}} في {{address}}) يرغب في الاتصال، إضافة جهاز جديد؟",
"Device Certificate": "شهادة الجهاز",
"Device ID": "هوية الجهاز",
"Device Identification": "هوية الجهاز",
"Device Name": "أسم الجهاز",
"Device ID": "معرف الجهاز",
"Device Identification": "معرف الجهاز",
"Device Name": "اسم الجهاز",
"Device Status": "حالة الجهاز",
"Device is untrusted, enter encryption password": "الجهاز غير موثوق به، أدخل كلمة مرور التشفير",
"Device rate limits": "حدود معدل نقل البيانات",
"Device that last modified the item": "اخر جهاز جهاز عدل على العنصر",
"Device that last modified the item": "آخر من عدل على العنصر",
"Devices": "الأجهزة",
"Disable Crash Reporting": "تعطيل ميزة التبليغ عن الأخطاء",
"Disabled": "معطل",
@@ -158,15 +158,15 @@
"Failure to connect to IPv6 servers is expected if there is no IPv6 connectivity.": "يُتوقع فشل الاتصال بخوادم IPv6، إذا لم يكن IPv6 متاحا.",
"File Pull Order": "ترتيب استيراد الملفات",
"File Versioning": "إصدارات الملف",
"Files are moved to .stversions directory when replaced or deleted by Syncthing.": "الملفات يتم نقلها إلى مجلد `.stversions` عند الاستبدال أو الحذف بواسطة البرنامج.",
"Files are moved to date stamped versions in a .stversions directory when replaced or deleted by Syncthing.": "يتم نقل الملفات إلى الإصدارات المؤرخة المختومة في مجلد `.stversions` عند استبدالها أو حذفها بواسطة Syncthing.",
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "الملفات محمية من التغييرات التي تم إجراؤها على الأجهزة الأخرى ، ولكن سيتم إرسال التغييرات التي تم إجراؤها على هذا الجهاز إلى بقية الأجهزة.",
"Files are moved to .stversions directory when replaced or deleted by Syncthing.": "تنقل الملفات إلى مجلد `.stversions` عند الاستبدال أو الحذف بواسطة البرنامج.",
"Files are moved to date stamped versions in a .stversions directory when replaced or deleted by Syncthing.": "تنقل الملفات إلى الإصدارات المؤرخة المختومة في مجلد `.stversions` عند استبدالها أو حذفها بواسطة Syncthing.",
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "الملفات محمية من التغييرات التي أجريت على الأجهزة الأخرى ، ولكن سترسل التغييرات التي أجريت على هذا الجهاز إلى بقية الأجهزة.",
"Files are synchronized from the cluster, but any changes made locally will not be sent to other devices.": "تُزامَنُ الملفات من العنقود، لكن التغيرات المحلية على هذا الجهاز لاتُطَبَّقُ على غيره من الأجهزة.",
"Filesystem Watcher Errors": "أخطاء مراقب نظام الملفات",
"Filter by date": "فلتره بالتاريخ ",
"Filter by date": "فلترة بالتاريخ",
"Filter by name": "فلتر باستخدام الاسم",
"Folder": "مجلد",
"Folder ID": "هوية المجلد",
"Folder ID": "مُعرِّف المجلد",
"Folder Label": "تسمية المجلد",
"Folder Path": "مسار المجلد",
"Folder Status": "حالة المجلد",
@@ -174,27 +174,27 @@
"Folder type \"{%receiveEncrypted%}\" can only be set when adding a new folder.": "نوع المجلد \"{{receiveEncrypted}}\" لا يمكن إعداده إلا أثناء إنشاء الملف.",
"Folder type \"{%receiveEncrypted%}\" cannot be changed after adding the folder. You need to remove the folder, delete or decrypt the data on disk, and add the folder again.": "نوع المجلد \"{{receiveEncrypted}}\" لا يمكن إعداده بعد إضافة المجلد. يجب أن تحذف المجلد وأن تفك تشفير البيانات على قرص التخزين أو تحذفها، بعدها تنشئ المجلد مجددا.",
"Folders": "المجلدات",
"For the following folders an error occurred while starting to watch for changes. It will be retried every minute, so the errors might go away soon. If they persist, try to fix the underlying issue and ask for help if you can't.": "للمجلدات التالية، حدث خطأ قبل بدء مشاهدة التغييرات. ستتم إعادة المحاولة كل دقيقة، نظرًا لذلك قد تختفي الأخطاء قريبًا. لكن إذا استمرت، فحاول حل المشكلة واطلب المساعدة إذا لم تستطع حل المشكلة.",
"For the following folders an error occurred while starting to watch for changes. It will be retried every minute, so the errors might go away soon. If they persist, try to fix the underlying issue and ask for help if you can't.": "للمجلدات التالية، حدث خطأ قبل بدء مشاهدة التغييرات. ستعاد المحاولة كل دقيقة، نظرًا لذلك قد تختفي الأخطاء قريبًا. لكن إذا استمرت، فحاول حل المشكلة واطلب المساعدة إذا لم تستطع حل المشكلة.",
"Forever": "للأبد",
"Full Rescan Interval (s)": "مدة أعاده الفحص الكامل (ثانية)",
"Full Rescan Interval (s)": "مدة إعادة الفحص الكامل (ثانية)",
"GUI": "واجهة المستخدم الرسومية",
"GUI / API HTTPS Certificate": "الواجهة / API وثيقة HTTPS",
"GUI Authentication Password": "كلمة السر لتوثيق الواجهة",
"GUI Authentication User": "أسم المستخدم لدخول واجهة الرسومية",
"GUI Authentication User": "اسم المستخدم لدخول واجهة الرسومية",
"GUI Authentication: Set User and Password": "توثيق الواجهة: أنشئ كلمة مرور للمستخدم",
"GUI Listen Address": "واجهة الرسومية الاستماع الى العنوان",
"GUI Listen Address": "عنوان ترقب الواجهة الرسومية",
"GUI Override Directory": "مجلد إحلال الواجهة",
"GUI Theme": "شكل الواجه",
"GUI Theme": "شكل الواجهة",
"General": "عام",
"Generate": "توليد",
"Global Discovery": "الاكتشاف العالمي",
"Global Discovery Servers": "الاكتشاف العالمي",
"Global State": "الحالة العامة ",
"Global State": "الحالة العامة",
"Help": "مساعدة",
"Hint: only deny-rules detected while the default is deny. Consider adding \"permit any\" as last rule.": "ملحوظة: إذا كان الإعداد الافتراضي هو الرفض، وحدها قواعد الرفض تُرصد. جرب إضافة \"السماح للكل\" كخيار أخير.",
"Home page": "الصفحة الرئيسية",
"However, your current settings indicate you might not want it enabled. We have disabled automatic crash reporting for you.": "ومع ذلك، تشير إعداداتك الحالية إلى أنك قد لا ترغب في تمكينه. لذلك تم تعطيل الإبلاغ التلقائي عن الأعطال.",
"Identification": "التعرف",
"However, your current settings indicate you might not want it enabled. We have disabled automatic crash reporting for you.": "ومع ذلك، تشير إعداداتك الحالية إلى أنك قد لا ترغب في تمكينه. لذلك توقف الإبلاغ التلقائي عن الأعطال.",
"Identification": "المُعرِّف",
"If untrusted, enter encryption password": "في حالة الرِّيبة، أدخل كلمة سر التشفير",
"If you want to prevent other users on this computer from accessing Syncthing and through it your files, consider setting up authentication.": "إذا أردت منع المستخدمين الآخرين على هذا الحاسب من الوصول لملفاتك من خلال Syncthing، يُنصَح بإعداد وثائق الملكية.",
"Ignore": "تجاهل",
@@ -206,11 +206,11 @@
"Ignored at": "تجاهل عند",
"Included Software": "البرامج المُضمَّنة",
"Incoming Rate Limit (KiB/s)": "الحد الأقصى البيانات الواردة (KiB/s)",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "الإعدادات الغير صحيحه قد تدمر بيانات المجلد وتجعل المزامنة غير صالحه للعمل",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "الإعدادات الغير صحيحة قد تدمر بيانات المجلد وتُفْشِلُ المزامنة.",
"Incorrect user name or password.": "رُصِدَ خطأ في اسم المستخدم أو كلمة المرور.",
"Internally used paths:": "المسار المستخدم محليّا:",
"Introduced By": "عرف بواسطة",
"Introducer": "المعرف",
"Introducer": "الوسيط",
"Introduction": "تقديم",
"Inversion of the given condition (i.e. do not exclude)": "عكس الحالة المذكورة (مثلا: لا تستثنِ)",
"Keep Versions": "احتفظ بالاصدارات",
@@ -219,15 +219,15 @@
"Last 30 Days": "الثلاثون يومًا السابقة",
"Last 7 Days": "الأيام السبعة السابقة",
"Last Month": "الشهر الماضي",
"Last Scan": "اخر فحص",
"Last Scan": "آخر فحص",
"Last seen": "اخر ظهور",
"Latest Change": "اخر التغييرات",
"Learn more": "اعرف اكثر ",
"Learn more": "اعرف أكثر",
"Learn more at {%url%}": "اطلع على المزيد في {{url}}",
"Limit": "الحد",
"Listener Failures": "فشل المستمع",
"Listener Status": "حالة المستمع",
"Listeners": "المستمعين",
"Listener Failures": "أعطال المنصت",
"Listener Status": "حالة المنصت",
"Listeners": "المنصتين",
"Loading data...": "تحميل بيانات...",
"Loading...": "تحميل...",
"Local Additions": "الإضافات المحلِّيَّة",
@@ -264,7 +264,7 @@
"Newest First": "الأحدث أولا",
"No": "لا",
"No File Versioning": "لا تقسيم لإصدارات الملفات",
"No files will be deleted as a result of this operation.": "لن يتم حذف اي ملفات بسبب هذا العملية",
"No files will be deleted as a result of this operation.": "لن يحذف أي ملف بسبب هذه العملية.",
"No rules set": "لم تحدد قواعد",
"No upgrades": "لا يوجد ترقيات",
"Not shared": "لم يُشارَك",
@@ -273,7 +273,7 @@
"OK": "موافق",
"Off": "اطفئ",
"Oldest First": "الأقدم أولا",
"Optional descriptive label for the folder. Can be different on each device.": "تسمية وصفية اختيارية للمجلد . يمكن أن تكون مختلفة على كل جهاز. ",
"Optional descriptive label for the folder. Can be different on each device.": "تسمية وصفية اختيارية للمجلد. يمكن أن تكون مختلفة على كل جهاز.",
"Options": "خيارات",
"Out of Sync": "خارج التزامن",
"Out of Sync Items": "عناصر خارج التزامن",
@@ -287,13 +287,13 @@
"Path where versions should be stored (leave empty for the default .stversions directory in the shared folder).": "المسار حيث تخزن الإصدارات (يترك فارغًا لدليل .vversions الافتراضي في المجلد المشترك).",
"Paths": "المسارات",
"Pause": "إيقاف",
"Pause All": "أيقاف الكل ",
"Pause All": "إيقاف الكل",
"Paused": "توقف",
"Paused (Unused)": "متوقف (مهمل)",
"Pending changes": "التغييرات المعلقة",
"Periodic scanning at given interval and disabled watching for changes": "المسح الدوري خلال فترة زمنية معينة وتعطيل مشاهدة التغييرات.",
"Periodic scanning at given interval and enabled watching for changes": "المسح الدوري خلال فترة زمنية معينة وتفعيل مشاهدة التغييرات.",
"Periodic scanning at given interval and failed setting up watching for changes, retrying every 1m:": "المسح الدوري خلال فترة زمنية معينة وفشل اعداد مشاهدة التغييرات، اعادة المحاولة كل 1 دقيقة.",
"Periodic scanning at given interval and disabled watching for changes": "الفحص الدوري خلال فترة زمنية معينة وتعطيل تَرقُّب التغييرات",
"Periodic scanning at given interval and enabled watching for changes": "المسح الدوري خلال فترة زمنية معينة وتفعيل تَرقُّب التغييرات",
"Periodic scanning at given interval and failed setting up watching for changes, retrying every 1m:": "المسح الدوري خلال فترة زمنية معينة وفشل إعداد تَرقُّب التغييرات، إعادة المحاولة كل 1 دقيقة:",
"Permanently add it to the ignore list, suppressing further notifications.": "أدرجها أبداً على قائمة التجاهل، اكتم الإشعارات مستقبلا.",
"Please consult the release notes before performing a major upgrade.": "يرجى العودة إلى ملاحظات الإصدار قبل تنفيذ ترقية رئيسية.",
"Please set a GUI Authentication User and Password in the Settings dialog.": "تفضل بإنشاء مستخدما موثقا للواجهة وكلمة مرور من خلال قائمة الإعدادات.",
@@ -306,13 +306,13 @@
"QR code": "الصورة المشفرة (QR)",
"QUIC LAN": "اتصال QUIC للشبكة المحلية (LAN)",
"QUIC WAN": "QUIC الشبكة العامة",
"Quick guide to supported patterns": "الدليل مختصر للأنماط المدعومة ",
"Quick guide to supported patterns": "دليل مختصر للأنماط المدعومة",
"Random": "عشوائي",
"Receive Encrypted": "استلام المشفَّر",
"Receive Only": "استقبال فقط",
"Received data is already encrypted": "البيانات المستوردة مشفرة بالفعل",
"Recent Changes": "اخر التغييرات",
"Reduced by ignore patterns": "تقليص بواسطة تجاهل الأنماط. ",
"Reduced by ignore patterns": "تقليص بواسطة تجاهل الأنماط",
"Relay LAN": "ترحيل الشبكة المحلية (LAN)",
"Relay WAN": "ترحيل الشبكة العامة (WAN)",
"Release Notes": "ملاحظات الإصدار",
@@ -322,32 +322,32 @@
"Remove": "إزالة",
"Remove Device": "حذف جهاز",
"Remove Folder": "حذف مجلد",
"Required identifier for the folder. Must be the same on all cluster devices.": "يتطلب معرفًا للمجلد. يجب أن يستخدم نفس المعرف لبقية الأجهزة. ",
"Required identifier for the folder. Must be the same on all cluster devices.": "يتطلب معرفًا للمجلد. يجب أن يستخدم نفس المعرف لبقية الأجهزة.",
"Rescan": "إعادة فحص",
"Rescan All": "أعادة فحص الكل",
"Rescan All": "إعادة فحص الكل",
"Rescans": "يعيد الفحص",
"Restart": "إعادة تشغيل",
"Restart Needed": "مطلوب أعادة تشغيل",
"Restarting": تم إعادة التشغيل",
"Restarting": ُعاد التشغيل",
"Restore": "استعادة",
"Restore Versions": "استعادة أصدارات ",
"Restore Versions": "استعادة إصدارات",
"Resume": "استرد",
"Resume All": "استعادة الكل ",
"Reused": "إعادة الاستخدام",
"Resume All": "استئناف الجميع",
"Reused": "مُعادة الاستخدام",
"Revert": "الرجوع عن التغيير",
"Revert Local Changes": "التراجع عن التغييرات",
"Save": "حفظ",
"Saving changes": "تُحفَظ التعديلات",
"Scan Time Remaining": "فحص الوقت المتبقي",
"Scanning": "يتم الفحص",
"See external versioning help for supported templated command line parameters.": "راجع تعليمات الإصدارات الخارجية لمعرفة القيم المدعومة في سطر الأوامر. ",
"Scanning": "جار الفحص",
"See external versioning help for supported templated command line parameters.": "راجع تعليمات الإصدارات الخارجية لمعرفة القيم المدعومة في سطر الأوامر.",
"Select All": "تحديد الكل",
"Select a version": "اختار أصدار ",
"Select a version": "اختر إصداراً",
"Select additional devices to share this folder with.": "اختيار المزيد من الأجهزة التي ترغب في مشاركة هذا المجلد معها.",
"Select additional folders to share with this device.": "اختيار المزيد من المجلدات لمشاركتها مع هذا الجهاز.",
"Select latest version": "اختار اخر أصدار ",
"Select latest version": "اختر آخر إصدار",
"Select oldest version": "اختيار أقدم إصدار",
"Send & Receive": "إرسال واستقبال ",
"Send & Receive": "إرسال واستقبال",
"Send Extended Attributes": "أرسل البيانات الثانوية",
"Send Only": "إرسال فقط",
"Send Ownership": "أرسل الملكية",
@@ -361,15 +361,15 @@
"Shared Folders": "المجلدات المُشارَكة",
"Shared With": "مشاركة مع",
"Sharing": "مشاركه",
"Show ID": "عرض الهوية",
"Show ID": "عرض المُعرِّف",
"Show QR": "اظهار QR",
"Show detailed discovery status": "اعرض حالة الاكتشاف تفصيليا",
"Show detailed listener status": "اعرض حالة الاستماع تفصيليا",
"Show diff with previous version": "اظهر الفرق مع النسخة السابقة ",
"Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.": "يُعرَض بدلا من الهوية في حالة العنقود. سيُروَّج للأجهزة الأخرى على انه اسم اساسي محتمل.",
"Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.": "يُعرَض بدلا من الهوية في حالة العنقود. إذا تُرك فارغا، سيُحدَّث إلى الاسم المختار من قِبَل الجهاز.",
"Show detailed listener status": "اعرض حالة الإنصات تفصيليا",
"Show diff with previous version": "أظهر الفرق مقارنةً بالنسخة السابقة",
"Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.": "يُعرَض بدلا من المُعرِّف ضمن العناقيد. سيُروَّج للأجهزة الأخرى على أنه اسم أساسي محتمل.",
"Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.": "يُعرَض بدلا من المُعرِّف ضمن العناقيد. إذا تُرك فارغا، سيُحدَّث إلى الاسم المختار من قِبَل الجهاز.",
"Shutdown": "إغلاق",
"Shutdown Complete": "تم الإغلاق",
"Shutdown Complete": "أُغلِق",
"Simple": "بسيط",
"Simple File Versioning": "التقسيم البسيط لإصدارات الملفات",
"Single level wildcard (matches within a directory only)": "المقارنة على مستوى واحد (المقارنة مع الملفات في المجلد الحالي فقط)",
@@ -377,15 +377,16 @@
"Smallest First": "الأصغر أولا",
"Some discovery methods could not be established for finding other devices or announcing this device:": "بعض أساليب الاستكشاف يمكن استخدامها للبحث عن أجهزة أخرى أو الإعلان عن هذا الجهاز:",
"Some items could not be restored:": "بعض العناصر لا يمكن استرجاعها:",
"Some listening addresses could not be enabled to accept connections:": "بعض عناوين الاستماع لا يمكن تفعيلها لقبول الاتصالات:",
"Some listening addresses could not be enabled to accept connections:": "بعض عناوين الإنصات لا يمكن تفعيلها لقبول الاتصالات:",
"Source Code": "مصدر الشفرة",
"Stable releases and release candidates": "الإصدارات المستقرة والإصدارات المرشحة.",
"Stable releases are delayed by about two weeks. During this time they go through testing as release candidates.": "الإصدارات المستقرة تأخرت بنحو أسبوعين. خلال هذه الفترة يتم إجراء الاختبارات كإصدارات مرشحة.",
"Stable releases and release candidates": "الإصدارات المستقرة والإصدارات المرشحة",
"Stable releases are delayed by about two weeks. During this time they go through testing as release candidates.": "الإصدارات المستقرة تأخرت نحو أسبوعين. خلال هذه الفترة نُجري الاختبارات عن طريق الإصدارات المرشحة.",
"Stable releases only": "الإصدارات المستقرة فقط",
"Staggered": "مترنِّح",
"Staggered File Versioning": "تقسمات إصدارات الملف مهترئة",
"Start Browser": "تشغيل المتصفح",
"Statistics": "إحصائيات",
"Stay logged in": "ابقِ مُسجل الدخول",
"Stopped": "متوقف",
"Stores and syncs only encrypted data. Folders on all connected devices need to be set up with the same password or be of type \"{%receiveEncrypted%}\" too.": "يُزامن ويخزن البيانات المشفرة فقط. يجب أن تكون المجلدات على جميع الأجهزة مُجهزَّة بكلمة المرور نفسها، أو أن تكون من نوع \"{{receiveEncrypted}}\".",
"Subject:": "الموضوع:",
@@ -395,17 +396,17 @@
"Sync Ownership": "زامن الملكية",
"Sync Protocol Listen Addresses": "عناوين بروتوكول استقبال المزامنة",
"Sync Status": "وضع المزامنة",
"Syncing": تم التزامن",
"Syncthing device ID for \"{%devicename%}\"": "هوية Syncthing الجهاز لـ {{devicename}}",
"Syncthing has been shut down.": "تم إيقاف Syncthing.",
"Syncing": ُزامَن",
"Syncthing device ID for \"{%devicename%}\"": "مُعرِّف Syncthing للجهاز {{devicename}}",
"Syncthing has been shut down.": "أُوقِف Syncthing.",
"Syncthing includes the following software or portions thereof:": "المزامنة تتضمن البرامج التالية أو أجزائها:",
"Syncthing is Free and Open Source Software licensed as MPL v2.0.": "Syncthing هو برنامج حر مفتوح المصدر تحت ترخيص MPL v2.0 (ترخيص موزيلا العام النسخة الثانية).",
"Syncthing is a continuous file synchronization program. It synchronizes files between two or more computers in real time, safely protected from prying eyes. Your data is your data alone and you deserve to choose where it is stored, whether it is shared with some third party, and how it's transmitted over the internet.": "Syncthing هو تطبيق للمزامنة المستمرة للملفات. يزامن الملفات بين جهازين أو أكثر بشكل لحظي، آمن من الأعين المتربصة. بياناتك ملك لك وحدك، من حقك أن تختار أين تُخَزَّن، وهل يطلع عليها طرف ثالث أم لا، وكيف تتنقل عبر الشبكة.",
"Syncthing is a continuous file synchronization program. It synchronizes files between two or more computers in real time, safely protected from prying eyes. Your data is your data alone and you deserve to choose where it is stored, whether it is shared with some third party, and how it's transmitted over the internet.": "Syncthing هو تطبيق للمزامنة المستمرة للملفات. يزامن الملفات بين جهازين أو أكثر بشكل لحظي، آمن من الأعين المتربصة. بياناتك ملك لك وحدك، من حقك أن تختار أين تُخَزَّن، وهل يطلع عليها غيرك أم لا، وكيف تتنقل عبر الشبكة.",
"Syncthing is listening on the following network addresses for connection attempts from other devices:": "Syncthing يترقب محاولات الاتصال على العنوان التالي:",
"Syncthing is not listening for connection attempts from other devices on any address. Only outgoing connections from this device may work.": "Syncthing لا يترقب أي محاولة للاتصال على أي من عناوين الشبكة. الاتصالات الصادرة فقط هي التي يمكن أن تعمل.",
"Syncthing is restarting.": تم إعادة تشغيل Syncthing.",
"Syncthing is restarting.": "يعاد تشغيل Syncthing.",
"Syncthing is saving changes.": "Syncthing يحفظ التعديلات.",
"Syncthing is upgrading.": "يتم تطوير Syncthing.",
"Syncthing is upgrading.": "نطور Syncthing.",
"Syncthing now supports automatically reporting crashes to the developers. This feature is enabled by default.": "Syncthing يدعم إعادة التشغيل التلقائي للمطورين. هذه الخاصية مفعلة بشكل افتراضي.",
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing معطل على ما يبدو، ربما يكون العطل في شبكتك. إعادة المحاولة…",
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing يواجه مشكلة في معالجة طلبك. إذا استعصت المشكلة، أعد تحميل الصفحة رجاء.",
@@ -417,19 +418,19 @@
"The Syncthing admin interface is configured to allow remote access without a password.": "واجهة مدير Syncthing معدة للسماح بالوصول بغير كلمة مرور.",
"The aggregated statistics are publicly available at the URL below.": "الإحصاءات المجمعة متاحة للجميع على العنوان التالي.",
"The cleanup interval cannot be blank.": "المدة بين عمليات التنظيف لا يمكن تركها فارغة.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "تم حفظ الإعدادات ولكن لم يتم تفعيلها بعد. يجب أعادة تشغيل Syncthing حتى تم تفعيل الإعدادات.",
"The device ID cannot be blank.": "هوية الجهاز لا يمكن أن تكون فارغة.",
"The device ID to enter here can be found in the \"Actions > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "يمكنك أن تجد هوية الجهاز الذي ينبغي استخدامه هنا في قائمة \"الإجراءات > عرض الهوية\" على الجهاز الآخر. المسافات والخطوط الاعتراضية تُتجاهَل.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "حفظت الإعدادات ولكن لم تفعَّل بعد. يجب أعادة تشغيل Syncthing حتى تفعَّل الإعدادات.",
"The device ID cannot be blank.": "مُعرِّف الجهاز لا يمكن أن يكون فارغاً.",
"The device ID to enter here can be found in the \"Actions > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "يمكنك أن تجد مُعرِّف الجهاز الذي ينبغي استخدامه هنا في قائمة \"الإجراءات > عرض المُعرِّف\" على الجهاز الآخر. المسافات والخطوط الاعتراضية تُتجاهَل.",
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes, and app versions. If the reported data set is changed you will be prompted with this dialog again.": "تقارير الاستخدام المشفرة ترسل يوميا. تُستخدم هذه التقارير لتتبع المنصات الشائعة، أحجام المجلدات، إصدارات التطبيق. إذا تغيرت بنود هذا التقرير، ستواجَهُ بهذه النافذة مرة أخرى.",
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "الهوية المُقَدَّمَة ناقصة على ما يبدو. الهوية مكوَّنة من 52 أو 56 رمزا بين حروف وأرقام، مع تجاهل المسافات الفارغة وخطوط الاعتراض.",
"The folder ID cannot be blank.": "هوية المجلد لا يمكن أن تكون فارغة.",
"The folder ID must be unique.": "يجب أن يكون عنوان المجلد فريد ",
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "المُعرِّف المُقَدَّم ناقص على ما يبدو. المُعرِّف مكوَّن من 52 أو 56 رمزاً بين حروف وأرقام، مع تجاهل المسافات الفارغة والخطوط الفاصلة.",
"The folder ID cannot be blank.": "معرف المجلد لا يمكن أن يكون فارغاً.",
"The folder ID must be unique.": "يجب أن يكون عنوان المجلد فريداً.",
"The folder content on other devices will be overwritten to become identical with this device. Files not present here will be deleted on other devices.": "ستحل محتويات هذا المجلد محل محتويات مجلدات الأجهزة الأخرى. الملفات التي ليس لها وجود هنا ستحذف عن الأجهزة الأخرى.",
"The folder content on this device will be overwritten to become identical with other devices. Files newly added here will be deleted.": "ستحل البيانات في الأجهزة الأخرى محل البيانات في هذا المجلد. ستحذف الملفات التي ستُنشأ هنا.",
"The folder path cannot be blank.": "مسار المجلد لا يمكن أن يكون فارغ",
"The folder path cannot be blank.": "مسار المجلد لا يمكن أن يكون فارغاً.",
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "المُدَد التالية مُستخدَمة: تُحفظ نسخة كل 30 ثانية في الساعة الأولى، ونسخة كل ساعة لبقية اليوم الأول، ونسخة يومية لأول ثلاثين يوما، ونسخة أسبوعية للأمد.",
"The following items could not be synchronized.": "فشل مزامنة العناصر التالية",
"The following items were changed locally.": "تم تغيير العناصر التالية محليا",
"The following items could not be synchronized.": "فشلت مزامنة العناصر التالية.",
"The following items were changed locally.": "غُيِّرت العناصر التالية محليا.",
"The following methods are used to discover other devices on the network and announce this device to be found by others:": "الطرق التالية مستخدمة للإعلان عن هذا الجهاز، وإيجاد الأجهزة الأخرى أيضا:",
"The following text will automatically be inserted into a new message.": "النص التالي سيُضمَّن تلقائيا في رسالة جديدة.",
"The following unexpected items were found.": "عُثِر على المحتوى غير المتوقع التالي.",
@@ -440,7 +441,7 @@
"The number of connections must be a non-negative number.": "عدد الاتصالات يجب ألا يكون سالبا.",
"The number of days must be a number and cannot be blank.": "حقل عدد الأيام يجب أن يكون رقم ولا يمكن تركه فارغ.",
"The number of days to keep files in the trash can. Zero means forever.": "عدد أيام حفظ الملفات في سلة المهملات. الصفر يعني إلى الأبد.",
"The number of old versions to keep, per file.": "عدد النسخ القديمة المحفوظة، لكل ملف. ",
"The number of old versions to keep, per file.": "عدد النسخ القديمة المحفوظة، لكل ملف.",
"The number of versions must be a number and cannot be blank.": "حقل عدد النسخ يجب أن يكون رقم ولا يمكن أن تركة فارغا.",
"The path cannot be blank.": "المسار لا يمكن أن يكون فارغ.",
"The rate limit is applied to the accumulated traffic of all connections to this device.": "السرعة القصوى المختارة لنقل البيانات المتراكمة من جميع الاتصالات على هذا الجهاز.",
@@ -451,12 +452,12 @@
"There are no devices to share this folder with.": "لا توجد أجهزة أخرى لتشاركها هذا المجلد.",
"There are no file versions to restore.": "لا توجد إصدارات يمكن استعادتها لهذا الملف.",
"There are no folders to share with this device.": "لا توجد مجلدات لمشاركتها مع هذا الجهاز.",
"They are retried automatically and will be synced when the error is resolved.": تم إعادة المحاولة تلقائيًا وسيتم مزامنتها عند إصلاح الخطأ.",
"They are retried automatically and will be synced when the error is resolved.": ُعاد المحاولة تلقائيًا وستزامن عند إصلاح الخطأ.",
"This Device": "هذا الجهاز",
"This Month": "هذا الشهر",
"This can easily give hackers access to read and change any files on your computer.": "هذا قد يسبب في اختراق جهازك.",
"This device cannot automatically discover other devices or announce its own address to be found by others. Only devices with statically configured addresses can connect.": "لا يمكن لهذا الجهاز أن يرصد الأجهزة الأخرى تلقائيا، ولا أن يعلن عنوانه ليمكن الأجهزة الأخرى من إيجاده. الأجهزة ثابتة العناوين فقط يمكن أن تتصل.",
"This is a major version upgrade.": "ترقية أساسية ",
"This is a major version upgrade.": "ترقية أساسية.",
"This setting controls the free space required on the home (i.e., index database) disk.": "هذا الخيار يتحكم في المساحة الفارغة المطلوبة من القرص الرئيسي.",
"Time": "الوقت",
"Time the item was last modified": "توقيت اخر تعديل للعنصر",
@@ -472,22 +473,22 @@
"Undecided (will prompt)": "غير محدد ( ستظهر نافذة للتحديد لاحقًا )",
"Unexpected Items": "المحتويات المفاجِئة",
"Unexpected items have been found in this folder.": "عُثِر على محتويات غير متوقعة في هذا المجلد.",
"Unignore": "لا يتم التجاهل",
"Unignore": "لا تتجاهل",
"Unknown": "غير معرف",
"Unshared": "غير مشترك",
"Unshared Devices": "الأجهزة غير المُشَارَكة",
"Unshared Folders": "المجلدات غير المُشارَكة",
"Untrusted": "غير موثوق",
"Up to Date": "اخز أصدار ",
"Up to Date": "مُزَامَن",
"Updated {%file%}": "مُحَدَّث {{file}}",
"Upgrade": "ترقية",
"Upgrade To {%version%}": "ترقية الى {{version}} ",
"Upgrade To {%version%}": "ترقية إلى النسخة {{version}}",
"Upgrading": "جاري الترقية",
"Upload Rate": "معدل الرفع",
"Uptime": "وقت التشغيل",
"Usage reporting is always enabled for candidate releases.": "تقارير الاستخدام مفعلة دائمًا للنسخ المرشحة.",
"Use HTTPS for GUI": "استخدام HTTPS مع الواجه الرسومية ",
"Use notifications from the filesystem to detect changed items.": "استخدم أشغارات نظام الملفات لمعرفة الملفات المتغيرة",
"Use HTTPS for GUI": "استخدم HTTPS لتأمين واجهة المستخدم",
"Use notifications from the filesystem to detect changed items.": "استخدم إشعارات نظام الملفات لمعرفة الملفات المتغيرة.",
"User": "مستخدِم",
"User Home": "منزل المستخدم",
"Username/Password has not been set for the GUI authentication. Please consider setting it up.": "اسم المستخدم/كلمة المرور لم يُنشَآ لتوثيق الواجهة. يُرجى إنشاؤهما من فضلك.",
@@ -498,7 +499,7 @@
"Version": "الإصدار",
"Versions": "نسخ",
"Versions Path": "مسار النسخ",
"Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.": "يتم حذف الإصدارات تلقائيًا إذا تجاوزت العمر الأقصى أو تجاوزت عدد الملفات المسموح بها خلال فاصل زمني محدد.",
"Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.": "تحذف الإصدارات تلقائيًا إذا تجاوزت العمر الأقصى أو تجاوزت عدد الملفات المسموح بها خلال فاصل زمني محدد.",
"Waiting to Clean": "في انتظار التنظيف",
"Waiting to Scan": "في انتظار الفحص",
"Waiting to Sync": "في انتظار المزامنة",
@@ -511,19 +512,19 @@
"Watch for Changes": "راقب التغييرات",
"Watching for Changes": "جاري مراقبة التغيرات",
"Watching for changes discovers most changes without periodic scanning.": "مراقبة التغييرات تكشف معظم التغييرات دون إجراء المسح الدوري.",
"When adding a new device, keep in mind that this device must be added on the other side too.": "يجب أضافه الأجهزة الجديدة في الطرفين",
"When adding a new device, keep in mind that this device must be added on the other side too.": "يجب إضافة الأجهزة الجديدة في الطرفين.",
"When adding a new folder, keep in mind that the Folder ID is used to tie folders together between devices. They are case sensitive and must match exactly between all devices.": "عند إضافة مجلد جديد ، ضع في الاعتبار أن معرف المجلد يُستخدم لربط المجلدات معًا بين الأجهزة المختلفة. وهي حساسة لحالة الأحرف لذا يجب أن تتطابق تمامًا بين جميع الأجهزة.",
"When set to more than one on both devices, Syncthing will attempt to establish multiple concurrent connections. If the values differ, the highest will be used. Set to zero to let Syncthing decide.": "إذا عُرفَّ Syncthing بأنه أكثر من واحد على كلا الجهازين، فإنه سيحاول إقامة عدة اتصالات متوازية. إذا اختلفت القِيَم، أعلاها ستُستخدَم. صَفِّرها لتترك القرار لـ Syncthing.",
"Yes": "نعم",
"Yesterday": "أمس",
"You can also copy and paste the text into a new message manually.": "يكنك نسخ النص لتدرجه في رسالة جديدة بنفسك.",
"You can also select one of these nearby devices:": "يمكنك أيضا اختيار واحد من الأجهزة القريبة ",
"You can also select one of these nearby devices:": "يمكنك أيضا اختيار واحدة من الأجهزة القريبة:",
"You can change your choice at any time in the Settings dialog.": "يمكنك تغيير اختيارك في أي وقت بواسطة الاعدادات.",
"You can read more about the two release channels at the link below.": "يمكنك قراءة المزيد عن إصداريّ القناتين عبر الرابط بالأسفل.",
"You have no ignored devices.": "لا يوجد أجهزة في قائمة التجاهل ",
"You have no ignored folders.": "لا يوجد مجلدات في قائمه التجاهل ",
"You have unsaved changes. Do you really want to discard them?": "الإعدادات لم تحفظ. هل انت متأكد من الإلغاء؟ ",
"You must keep at least one version.": "يجب الاحتفاظ بنسخة واحده على الاقل",
"You have no ignored devices.": "لا أجهزة مُتجاهَلَةٌ.",
"You have no ignored folders.": "لا مجلدات مُتجاهَلَةٌ.",
"You have unsaved changes. Do you really want to discard them?": "الإعدادات لم تُحفظ. هل أنت متأكد من الإلغاء؟",
"You must keep at least one version.": "يجب الاحتفاظ بنسخة واحدة على الأقل.",
"You should never add or change anything locally in a \"{%receiveEncrypted%}\" folder.": "ينبغي ألا تغير شيئا في المجلد المحلي في حالة كان \"{{receiveEncrypted}}\".",
"Your SMS app should open to let you choose the recipient and send it from your own number.": "ينبغي أن يسمح تطبيق SMS لديك بأن تختار مستلما ويرسلها من رقمك.",
"Your email app should open to let you choose the recipient and send it from your own address.": "ينبغي أن يسمح تطبيق البريد الإلكتروني الخاص بك باختيار مستلم و أن يرسلها من عنوانك.",
@@ -544,11 +545,11 @@
"black": "أسوَد",
"dark": "داكن",
"default": "افتراضي",
"light": "خفيف"
"light": "أبيض"
}
},
"unknown device": "جهاز مجهول",
"{%device%} wants to share folder \"{%folder%}\".": "{{device}} يريد مشاركة مجلد \"{{folder}}\". ",
"{%device%} wants to share folder \"{%folderlabel%}\" ({%folder%}).": "{{device}} يريد مشاركة مجلد \"{{folderlabel}}\" ({{folder}}). ",
"{%device%} wants to share folder \"{%folder%}\".": "{{device}} يريد مشاركة هذا المجلد \"{{folder}}\".",
"{%device%} wants to share folder \"{%folderlabel%}\" ({%folder%}).": "{{device}} يريد مشاركة هذا المجلد \"{{folderlabel}}\" ({{folder}}).",
"{%reintroducer%} might reintroduce this device.": "{{reintroducer}} يمكن أن يعيد تقديم هذا الجهاز."
}

View File

@@ -39,6 +39,7 @@
"Are you sure you want to restore {%count%} files?": "Вы сапраўды жадаеце аднавіць {{count}} файл(ы)?",
"Are you sure you want to revert all local changes?": "Вы сапраўды жадаеце адмяніць усе лакальныя змены?",
"Are you sure you want to upgrade?": "Вы сапраўды жадаеце абнавіць?",
"Authentication Required": "Патрабуецца Аутэнтыфікацыя",
"Authors": "Аўтары",
"Auto Accept": "Прынімаць Аўтаматычна",
"Automatic Crash Reporting": "Аўтаматычныя Спрадвыздачы Пра Памылкі",
@@ -69,12 +70,28 @@
"Connection Type": "Тып Падлучэння",
"Connections": "Падлучэнні",
"Connections via relays might be rate limited by the relay": "Падлучэнні праз рэтранслятар могуць быць абмежаванымі самім рэтранслятарам",
"Continuously watching for changes is now available within Syncthing. This will detect changes on disk and issue a scan on only the modified paths. The benefits are that changes are propagated quicker and that less full scans are required.": "Працяглы прагляд змен цяпер даступны без Syncthing. Ён выяўляе змены на дыску і скануе толькі мадыфікаваныя шляхі. Перавага метада ў тым, што змены распаўсюджваюцца хучэй і патарабуецца меньшая колькасць поўных сканаванняў.",
"Copied from elsewhere": "Скапіявана з іншага месца",
"Copied from original": "Скапіявана з арыгіналу",
"Copied!": "Скапіявана!",
"Copy": "Скапіяваць",
"Copy failed! Try to select and copy manually.": "Капіяванне не адбылося! Паспрабуйце вылучыць і скапіяваць уласнаручна.",
"Currently Shared With Devices": "Ужо Абагулена З Прыладамі",
"Custom Range": "Уласны Дыяпазон",
"Danger!": "Небязпечна!",
"Database Location": "Шлях Да Базы Даных",
"Debugging Facilities": "Сродкі Адладкі",
"Default": "Стандартныя",
"Default Configuration": "Стандартныя Налады",
"Default Device": "Стандартная Прылада",
"Default Folder": "Стандартная Тэчка",
"Default Ignore Patterns": "Шаблоны ігнаравання па змаўчанні",
"Defaults": "Па змаўчанні",
"Delete": "Выдаліць",
"Delete Unexpected Items": "Выдаліць Нечаканыя Элементы",
"Deleted {%file%}": "Выдалены {{file}}",
"Deselect All": "Зняць выбар з усіх",
"Device": "Прылада",
"Device ID": "ID прылады",
"Device Identification": "Ідэнтыфікацыя прылады",
"Device Name": "Назва прылады",

View File

@@ -22,7 +22,7 @@
"Advanced Configuration": "Разширени настройки",
"All Data": "Всички данни",
"All Time": "През цялото време",
"All folders shared with this device must be protected by a password, such that all sent data is unreadable without the given password.": "Всички папки, споделени с устройството трябва да бъдат защитени с парола, така че данните да са недостъпни без нея.",
"All folders shared with this device must be protected by a password, such that all sent data is unreadable without the given password.": "Папките, споделени с устройството трябва да бъдат защитени с парола, така че данните да са недостъпни без нея.",
"Allow Anonymous Usage Reporting?": "Разрешавате ли анонимно отчитане на употребата?",
"Allowed Networks": "Разрешени мрежи",
"Alphabetic": "Азбучен ред",
@@ -97,7 +97,7 @@
"Device \"{%name%}\" ({%device%} at {%address%}) wants to connect. Add new device?": "Устройство \"{{name}}\" ({{device}}) с адрес {{address}} желае да се свърже. Да бъде ли добавено?",
"Device Certificate": "Сертификат на устройството",
"Device ID": "Идентификатор на устройство",
"Device Identification": "Идентификация на устройство",
"Device Identification": "Идентификатор на устройство",
"Device Name": "Име на устройството",
"Device Status": "Състояние на устройството",
"Device is untrusted, enter encryption password": "Устройството е недоверено, въведете парола за шифроване",
@@ -111,7 +111,7 @@
"Disabled periodic scanning and failed setting up watching for changes, retrying every 1m:": "Изключено периодично обхождане и грешка при започване на наблюдението за промени, прави се опит всяка минута:",
"Disables comparing and syncing file permissions. Useful on systems with nonexistent or custom permissions (e.g. FAT, exFAT, Synology, Android).": "Изключва сравняването и синхронизацията на правата на файловете. Полезно за системи с липсващи или специфични права (като FAT, exFAT, Synology, Android).",
"Discard": "Отказване",
"Disconnected": "Не е свързано",
"Disconnected": "Няма връзка",
"Disconnected (Inactive)": "Не е свързано (неизползвано)",
"Disconnected (Unused)": "Не е свързано (неизползвано)",
"Discovered": "Открит",
@@ -134,8 +134,8 @@
"Edit Folder Defaults": "За нови папки",
"Editing {%path%}.": "Променяне на {{path}}.",
"Enable Crash Reporting": "Включване на доклад за срив",
"Enable NAT traversal": "Преминаване през NAT",
"Enable Relaying": "Препращане",
"Enable NAT traversal": "Обхождане на NAT",
"Enable Relaying": "Разрешаване на ретранслатори",
"Enabled": "Включено",
"Enables sending extended attributes to other devices, and applying incoming extended attributes. May require running with elevated privileges.": "Когато е отметнато разширените атрибути се изпращат към другите устройства, а получените разширени атрибути се прилагат. Обикновено изисква съответните за целта права.",
"Enables sending extended attributes to other devices, but not applying incoming extended attributes. This can have a significant performance impact. Always enabled when \"Sync Extended Attributes\" is enabled.": "Когато е отметнато разширените атрибути се изпращат към другите устройства, но получените разширени атрибути не се прилагат. Може да има значително неблагоприятно влияние върху производителността. Винаги е отметнато когато „Синхронизиране на разширени атрибути“ е отметнато.",
@@ -158,8 +158,8 @@
"Failure to connect to IPv6 servers is expected if there is no IPv6 connectivity.": "Неуспешна връзка към сървъри по IPv6 може да се очаква ако няма свързаност по IPv6.",
"File Pull Order": "Ред на изтегляне",
"File Versioning": "Версии на файловете",
"Files are moved to .stversions directory when replaced or deleted by Syncthing.": "Файловете биват преместени в папка .stversions при заменяне или изтриване от Syncthing.",
"Files are moved to date stamped versions in a .stversions directory when replaced or deleted by Syncthing.": "Когато Syncthing замени или изтрие файл той бива преместен в папката .stversions и преименуван чрез добавяне на датата и часа.",
"Files are moved to .stversions directory when replaced or deleted by Syncthing.": "Когато Syncthing замени или премахне файл, негова версия се копира в папка .stversions.",
"Files are moved to date stamped versions in a .stversions directory when replaced or deleted by Syncthing.": "Когато Syncthing замени или премахне файл, негова версия се копира в папка .stversions, като в името му се добавят датата и часът.",
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Предпазва местните файлове от промени, идващи от другите устройства, но местните промени се изпращат.",
"Files are synchronized from the cluster, but any changes made locally will not be sent to other devices.": "Файловете се синхронизират от другите устройства, но местните промени не се изпращат.",
"Filesystem Watcher Errors": "Грешка при наблюдаване на файловата система",
@@ -189,12 +189,12 @@
"Generate": "Подновяване",
"Global Discovery": "Глобално откриване",
"Global Discovery Servers": "Сървъри за глобално откриване",
"Global State": "Глобално състояние",
"Global State": "Общо състояние",
"Help": "Помощ",
"Hint: only deny-rules detected while the default is deny. Consider adding \"permit any\" as last rule.": "Подсказка: има само забрабяващи правила, а по подразбиране е „забранено“. Помислете дали да не добавите „permit any“ като последно правило.",
"Home page": "Страница",
"However, your current settings indicate you might not want it enabled. We have disabled automatic crash reporting for you.": "Текущите настройки обаче показват, че може би не искате да бъде включена. За това автоматичното докладване на сривове е изключено.",
"Identification": "Идентификация",
"Identification": "Идентифициране",
"If untrusted, enter encryption password": "При недоверено задайте парола за шифроване",
"If you want to prevent other users on this computer from accessing Syncthing and through it your files, consider setting up authentication.": "Ако желаете да предотвратите достъпа на другите потребители на устройството до Syncthing, а чрез него и до файловете ви, помислете за удостоверяване на графичния интерфейс.",
"Ignore": "Пренебрегване",
@@ -205,7 +205,7 @@
"Ignored Folders": "Пренебрегнати папки",
"Ignored at": "Пренебрегнато на",
"Included Software": "Използван софтуер",
"Incoming Rate Limit (KiB/s)": "Ограничение при изтегляне (KiB/s)",
"Incoming Rate Limit (KiB/s)": "Ограничение при изтегляне (КиБ/с)",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "Неправилни настройки могат да повредят файлове и да попречат на синхронизирането.",
"Incorrect user name or password.": "Грешно потребителско име или парола.",
"Internally used paths:": "Вътрешно използвани пътища:",
@@ -215,7 +215,7 @@
"Inversion of the given condition (i.e. do not exclude)": "Обръща значението на условието (напр. да не се отхвърля)",
"Keep Versions": "Пазени версии",
"LDAP": "LDAP",
"Largest First": "Първо най-големи",
"Largest First": "Първо най-големите",
"Last 30 Days": "Последните 30 дена",
"Last 7 Days": "Последните 7 дена",
"Last Month": "Миналия месец",
@@ -261,7 +261,7 @@
"Never": "никога",
"New Device": "Ново устройство",
"New Folder": "Нова папка",
"Newest First": "Първо най-нови",
"Newest First": "Първо най-новите",
"No": "Не",
"No File Versioning": "Без пазене на версии",
"No files will be deleted as a result of this operation.": "В резултат на операцията няма да бъдат премахнати файлове.",
@@ -272,12 +272,12 @@
"Number of Connections": "Брой на връзките",
"OK": "Добре",
"Off": "Изключено",
"Oldest First": "Първо най-стари",
"Oldest First": "Първо най-старите",
"Optional descriptive label for the folder. Can be different on each device.": "Незадължително име на папката. Може да бъде различно на всяко устройство.",
"Options": "Настройки",
"Out of Sync": "Несинхронизирано",
"Out of Sync Items": "Несинхронизирани елементи",
"Outgoing Rate Limit (KiB/s)": "Ограничение при качване (KiB/s)",
"Outgoing Rate Limit (KiB/s)": "Ограничение при качване (КиБ/с)",
"Override": "Налагане",
"Override Changes": "Налагане на местни промени",
"Ownership": "Собственост",
@@ -307,11 +307,11 @@
"QUIC LAN": "QUIC LAN",
"QUIC WAN": "QUIC WAN",
"Quick guide to supported patterns": "Кратък наръчник на поддържаните шаблони",
"Random": "Произволен",
"Receive Encrypted": риема шифровани данни",
"Random": "В случаен ред",
"Receive Encrypted": олучава шифровани данни",
"Receive Only": "Само получава",
"Received data is already encrypted": "Получените данни вече са шифровани",
"Recent Changes": "Последни промени",
"Recent Changes": "Скорошни промени",
"Reduced by ignore patterns": "Наложени са шаблони за пренебрегване",
"Relay LAN": "Препращане по LAN",
"Relay WAN": "Препращане по WAN",
@@ -374,7 +374,7 @@
"Simple File Versioning": "Обикновени версии",
"Single level wildcard (matches within a directory only)": "Заместващ символ за едно ниво (съвпада само с папка)",
"Size": "Размер",
"Smallest First": "Първо най-малки",
"Smallest First": "Първо най-малките",
"Some discovery methods could not be established for finding other devices or announcing this device:": "Следните методи за откриване не могат да бъдат използвани за намиране на други устройства или за обявяване на това устройство, за да бъде открито от останалите:",
"Some items could not be restored:": "Някои елементи не могат да бъдат възстановени:",
"Some listening addresses could not be enabled to accept connections:": "Някои от адресите, на които Syncthing очаква връзка не могат да бъдат настроени да получават входящи връзки:",
@@ -384,8 +384,9 @@
"Stable releases only": "Само стабилни версии",
"Staggered": "Разпределени",
"Staggered File Versioning": "Разпределени версии",
"Start Browser": "Отваряне в мрежов четец",
"Start Browser": "Отваряне на мрежов четец",
"Statistics": "Статистика",
"Stay logged in": "Оставане в системата",
"Stopped": "Спряна",
"Stores and syncs only encrypted data. Folders on all connected devices need to be set up with the same password or be of type \"{%receiveEncrypted%}\" too.": "Съхранява и синхронизира само шифровани данни. Папките на всички свързани устройства трябва да бъдат настроени със същата парола или също да са от вида „{{receiveEncrypted}}“.",
"Subject:": "Относно:",
@@ -419,9 +420,9 @@
"The cleanup interval cannot be blank.": "Интервалът на почистване не може да бъде празен.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "Настройките са запазени, но не са приложени. За да влязат в сила Syncthing трябва да се рестартира.",
"The device ID cannot be blank.": "Полето идентификатор на устройство не може да бъде празно.",
"The device ID to enter here can be found in the \"Actions > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "Идентификаторът на устройството, който да въведете се намира в „Действия > Идентификатор“. Интервалите и дефисите са незадължителни.",
"The device ID to enter here can be found in the \"Actions > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "Идентификаторът на устройството, който трябва да въведете се намира в „Действия > Идентификатор“. Интервалите и дефисите са незадължителни.",
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes, and app versions. If the reported data set is changed you will be prompted with this dialog again.": "Шифрованият отчет за употреба се изпраща ежедневно. Използва се за отчитане на най-често срещаните платформи, размери на папки и издания на приложението. При промяна в събираните данни отново ще бъде поискано вашето съгласие.",
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "Въведеният идентификатор на устройство не е валиден. Трябва да бъде 52 или 56 символа и да се състои от букви и цифри, като интервалите и дефисите са незадължителни.",
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "Въведеният идентификатор на устройство не е приемлив. Трябва да бъде 52 или 56 символа и да се състои от букви и цифри, като интервалите и дефисите са незадължителни.",
"The folder ID cannot be blank.": "Полето идентификатор на папка не може да бъде празно.",
"The folder ID must be unique.": "Идентификаторът на папката трябва да бъде уникален.",
"The folder content on other devices will be overwritten to become identical with this device. Files not present here will be deleted on other devices.": "Съдържанието на папката в другите устройства ще бъде презаписано, за да стане еднакво със съдържанието на това устройство. Файловете, които ги няма тук, но съществуват на другите устройства ще бъдат премахнати.",
@@ -436,11 +437,11 @@
"The interval must be a positive number of seconds.": "Интервалът трябва да е положителен брой секунди.",
"The interval, in seconds, for running cleanup in the versions directory. Zero to disable periodic cleaning.": "Интервал, в секунди, на почистване на папката с версии. Нула изключва периодичното почистване.",
"The maximum age must be a number and cannot be blank.": "Максималната възраст трябва да е число, полето не може да бъде празно.",
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "Максимална продължителност за пазене на версия (в дни, за да не бъдат изтривани версии задайте 0).",
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "Максимален срок за пазене на версия (в дни, 0 без ограничение).",
"The number of connections must be a non-negative number.": "Броят на връзките трябва да бъде положително число.",
"The number of days must be a number and cannot be blank.": "Броят дни трябва да бъде число и не може да бъде празно.",
"The number of days to keep files in the trash can. Zero means forever.": "Брой дни за пазене на файловете в кошчето. Нула значи завинаги.",
"The number of old versions to keep, per file.": "Брой стари версии, които да бъдат пазени за всеки файл.",
"The number of old versions to keep, per file.": "Брой пазени стари версии на файл.",
"The number of versions must be a number and cannot be blank.": "Броят версии трябва да бъде число и не може да бъде празно.",
"The path cannot be blank.": "Пътят не може да бъде празен.",
"The rate limit is applied to the accumulated traffic of all connections to this device.": "Ограничението се прилага към общия трафик от всички връзки към това устройство.",
@@ -473,7 +474,7 @@
"Unexpected Items": "Неочаквани елементи",
"Unexpected items have been found in this folder.": "В папката са намерени неочаквани елементи.",
"Unignore": "Отменяне на пренебрегване",
"Unknown": "Неясно",
"Unknown": "Неизвестно",
"Unshared": "Несподелена",
"Unshared Devices": "Устройства, с които не е споделена",
"Unshared Folders": "Несподелени папки",
@@ -497,7 +498,7 @@
"Using a direct TCP connection over WAN": "Използване на директна свързаност с TCP през широкодостъпна мрежа",
"Version": "Издание",
"Versions": "Версии",
"Versions Path": ът до версиите",
"Versions Path": апка с версиите",
"Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.": "Версиите биват изтривани автоматично ако са по-стари от максималната възраст или надминават броя разрешени версии за определено време.",
"Waiting to Clean": "Изчаква за почистване",
"Waiting to Scan": "Изчаква за обхождане",

View File

@@ -14,7 +14,7 @@
"Add devices from the introducer to our device list, for mutually shared folders.": "Afegiu dispositius de l'introductor a la nostra llista de dispositius, per a carpetes compartides mútuament.",
"Add filter entry": "Afegeix una entrada de filtre",
"Add ignore patterns": "Afegiu patrons per ignorar",
"Add new folder?": "Vols afegir una carpeta nova?",
"Add new folder?": "Voleu afegir una carpeta nova?",
"Additionally the full rescan interval will be increased (times 60, i.e. new default of 1h). You can also configure it manually for every folder later after choosing No.": "A més, s'augmentarà l'interval de reexploració complet (vegades 60, és a dir, el nou predeterminat d'1 h). També podeu configurar-lo manualment per a cada carpeta més tard després de triar No.",
"Address": "Adreça",
"Addresses": "Adreces",
@@ -29,15 +29,17 @@
"Altered by ignoring deletes.": "S'ha alterat ignorant les supressions.",
"An external command handles the versioning. It has to remove the file from the shared folder. If the path to the application contains spaces, it should be quoted.": "Una ordre externa gestiona la versió. Ha d'eliminar el fitxer de la carpeta compartida. Si el camí a l'aplicació conté espais, s'ha de citar.",
"Anonymous Usage Reporting": "Informe anònim d'ús",
"Anonymous usage report format has changed. Would you like to move to the new format?": "El format de l'informe d'ús anònim ha canviat. Vols canviar a aquest nou format?",
"Anonymous usage report format has changed. Would you like to move to the new format?": "El format de l'informe d'ús anònim ha canviat. Voleu canviar a aquest nou format?",
"Applied to LAN": "Aplicat a LAN",
"Apply": "Aplica",
"Are you sure you want to override all remote changes?": "Esteu segur que voleu anul·lar tots els canvis remots?",
"Are you sure you want to permanently delete all these files?": "Estàs segur que vols esborrar tots aquests fitxers permanentment?",
"Are you sure you want to remove device {%name%}?": "Estàs segur que vols esborrar el dispositiu {{name}}?",
"Are you sure you want to remove folder {%label%}?": "Estàs segur que vols esborrar la carpeta {{label}}?",
"Are you sure you want to restore {%count%} files?": "Estàs segur que vols restaurar {{count}} fitxers?",
"Are you sure you want to permanently delete all these files?": "Segur que voleu esborrar tots aquests fitxers permanentment?",
"Are you sure you want to remove device {%name%}?": "Segur que voleu eliminar el dispositiu {{name}}?",
"Are you sure you want to remove folder {%label%}?": "Segur que voleu eliminar la carpeta {{label}}?",
"Are you sure you want to restore {%count%} files?": "Segur que voleu restaurar {{count}} fitxers?",
"Are you sure you want to revert all local changes?": "Esteu segur que voleu revertir tots els canvis locals?",
"Are you sure you want to upgrade?": "Esteu segur que voleu actualitzar?",
"Authentication Required": "Autenticació necessària",
"Authors": "Autors",
"Auto Accept": "Auto Acceptar",
"Automatic Crash Reporting": "Informe automàtic d'incidències",
@@ -55,7 +57,7 @@
"Cleaning Versions": "Netejant versions",
"Cleanup Interval": "Interval de neteja",
"Click to see full identification string and QR code.": "Feu clic per veure la cadena d'identificació completa i el codi QR.",
"Close": "Tancar",
"Close": "Tanca",
"Command": "Comando",
"Comment, when used at the start of a line": "Comentari quan és usat al principi d'una línia",
"Compression": "Compressió",
@@ -64,6 +66,7 @@
"Configured": "Configurat",
"Connected (Unused)": "Connectat (no utilitzat)",
"Connection Error": "Error de connexió",
"Connection Management": "Gestió de connexions",
"Connection Type": "Tipus de connexió",
"Connections": "Connexions",
"Connections via relays might be rate limited by the relay": "Les connexions mitjançant relés poden estar limitades pel relé",
@@ -91,11 +94,12 @@
"Deselect devices to stop sharing this folder with.": "Desseleccioneu els dispositius amb els quals deixar de compartir aquesta carpeta.",
"Deselect folders to stop sharing with this device.": "Desseleccioneu les carpetes per deixar de compartir-les amb aquest dispositiu.",
"Device": "Dispositiu",
"Device \"{%name%}\" ({%device%} at {%address%}) wants to connect. Add new device?": "El dispositiu \"{{name}}\" ({{device}} a {{address}}) vol connectar-se. Vols afegir un dispositiu nou?",
"Device \"{%name%}\" ({%device%} at {%address%}) wants to connect. Add new device?": "El dispositiu «{{name}}⁣» ({{device}} a {{address}}) vol connectar-se. Voleu afegir un dispositiu nou?",
"Device Certificate": "Certificat del dispositiu",
"Device ID": "ID del dispositiu",
"Device Identification": "Identificació del dispositiu",
"Device Name": "Nom del dispositiu",
"Device Status": "Estat del dispositiu",
"Device is untrusted, enter encryption password": "El dispositiu no és de confiança, introduïu la contrasenya d'encriptació",
"Device rate limits": "Límits de velocitat del dispositiu",
"Device that last modified the item": "Dispositiu que ha modificat el fitxer per última vegada",
@@ -118,13 +122,13 @@
"Do not add it to the ignore list, so this notification may recur.": "No l'afegiu a la llista d'ignorar, de manera que aquesta notificació pot repetir-se.",
"Do not restore": "No restaurar",
"Do not restore all": "No restaurar-ho tot",
"Do you want to enable watching for changes for all your folders?": "Vols activar la cerca de canvis a totes les teves carpetes?",
"Do you want to enable watching for changes for all your folders?": "Voleu activar el control de canvis a totes les carpetes?",
"Documentation": "Documentació",
"Download Rate": "Tasca de descarrega",
"Downloaded": "Descarregat",
"Downloading": "Descarregant",
"Edit": "Editar",
"Edit Device": "Editar dispositiu",
"Edit": "Edita",
"Edit Device": "Edita el dispositiu",
"Edit Device Defaults": "Edita els valors predeterminats del dispositiu",
"Edit Folder": "Modificar carpeta",
"Edit Folder Defaults": "Edita els valors per defecte de la carpeta",
@@ -165,8 +169,9 @@
"Folder ID": "ID de carpeta",
"Folder Label": "Etiqueta de la carpeta",
"Folder Path": "Camí de carpeta",
"Folder Status": "Estat de la carpeta",
"Folder Type": "Tipus de carpeta",
"Folder type \"{%receiveEncrypted%}\" can only be set when adding a new folder.": "El tipus de carpeta \"{{receiveEncrypted}}\" només es pot definir quan s'afegeix una carpeta nova.",
"Folder type \"{%receiveEncrypted%}\" can only be set when adding a new folder.": "El tipus de carpeta «⁣{{receiveEncrypted}}» només es pot definir quan s'afegeix una carpeta nova.",
"Folder type \"{%receiveEncrypted%}\" cannot be changed after adding the folder. You need to remove the folder, delete or decrypt the data on disk, and add the folder again.": "El tipus de carpeta \"{{receiveEncrypted}}\" no es pot canviar després d'afegir la carpeta. Heu d'eliminar la carpeta, suprimir o desxifrar les dades del disc i tornar a afegir la carpeta.",
"Folders": "Carpetes",
"For the following folders an error occurred while starting to watch for changes. It will be retried every minute, so the errors might go away soon. If they persist, try to fix the underlying issue and ask for help if you can't.": "A les carpetes següents s'ha produït un error en començar a buscar canvis. Es tornarà a provar cada minut, de manera que els errors poden desaparèixer aviat. Si persisteixen, intenteu solucionar el problema subjacent i demaneu ajuda si no podeu.",
@@ -182,8 +187,8 @@
"GUI Theme": "Tema de la GUI",
"General": "General",
"Generate": "Generar",
"Global Discovery": "Descobriment Global",
"Global Discovery Servers": "Servidors de Descobriment Global",
"Global Discovery": "Descobriment global",
"Global Discovery Servers": "Servidors de descobriment global",
"Global State": "Estat global",
"Help": "Ajuda",
"Hint: only deny-rules detected while the default is deny. Consider adding \"permit any\" as last rule.": "Suggeriment: només s'han detectat regles de denegació mentre el valor predeterminat és denegació. Penseu a afegir \"permet qualsevol\" com a darrera regla.",
@@ -202,9 +207,11 @@
"Included Software": "Programari inclòs",
"Incoming Rate Limit (KiB/s)": "Límit de velocitat d'entrada (KiB/s)",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "Una configuració incorrecta pot malmetre els continguts de la teva carpeta i que Syncthing esdevingui inoperatiu.",
"Incorrect user name or password.": "Nom d'usuari o contrasenya incorrecta.",
"Internally used paths:": "Camins utilitzats internament:",
"Introduced By": "Introduït per",
"Introducer": "Introductor",
"Introduction": "Introducció",
"Inversion of the given condition (i.e. do not exclude)": "Inversió del patrò introduït",
"Keep Versions": "Mantenir Versions",
"LDAP": "LDAP",
@@ -224,13 +231,18 @@
"Loading data...": "Carregant dades...",
"Loading...": "Carregant...",
"Local Additions": "Addicions locals",
"Local Discovery": "Descobriment Local",
"Local Discovery": "Descobriment local",
"Local State": "Estat local",
"Local State (Total)": "Estat local (Total)",
"Locally Changed Items": "Elements canviats localment",
"Log": "Registre",
"Log File": "Fitxer de registre",
"Log In": "Inicia la sessió",
"Log Out": "Tanca la sessió",
"Log in to see paths information.": "Inicieu sessió per veure la informació dels camins.",
"Log in to see version information.": "Inicieu sessió per veure la informació de la versió.",
"Log tailing paused. Scroll to the bottom to continue.": "S'ha posat en pausa el seguiment del registre. Desplaceu-vos cap a la part inferior per continuar.",
"Login failed, see Syncthing logs for details.": "No s'ha pogut iniciar la sessió; consulteu els registres de Syncthing per obtenir més informació.",
"Logs": "Registres",
"Major Upgrade": "Actualització major",
"Mass actions": "Accions massives",
@@ -255,8 +267,9 @@
"No files will be deleted as a result of this operation.": "No se suprimirà cap fitxer com a resultat d'aquesta operació.",
"No rules set": "No hi ha regles establertes",
"No upgrades": "No hi ha actualitzacions",
"Not shared": "No compartit",
"Not shared": "Sense compartir",
"Notice": "Avís",
"Number of Connections": "Nombre de connexions",
"OK": "D'acord",
"Off": "Desactivar",
"Oldest First": "Més antic primer",
@@ -268,13 +281,14 @@
"Override": "Sobreescriu",
"Override Changes": "Sobreescriure Canvis",
"Ownership": "Propietat",
"Password": "Contrasenya",
"Path": "Ruta",
"Path to the folder on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Ruta de la carpeta a l'equip local. Si no existeix serà creada. El caràcter (~) es pot fer servir com a drecera de",
"Path where versions should be stored (leave empty for the default .stversions directory in the shared folder).": "Ruta on s'han d'emmagatzemar les versions (deixeu buit per al directori predeterminat .stversions a la carpeta compartida).",
"Paths": "Rutes",
"Pause": "Pausa",
"Pause": "Posa-ho en pausa",
"Pause All": "Posa-ho tot en pausa",
"Paused": "Pausat",
"Paused": "En pausa",
"Paused (Unused)": "En pausa (no utilitzat)",
"Pending changes": "Canvis pendents",
"Periodic scanning at given interval and disabled watching for changes": "Escaneig periòdic a un interval determinat i vigilància desactivada dels canvis",
@@ -294,8 +308,8 @@
"QUIC WAN": "Connexió QUIC WAN",
"Quick guide to supported patterns": "Guia ràpida per als possibles patrons",
"Random": "Aleatori",
"Receive Encrypted": "Rebre xifrat",
"Receive Only": "Només rebre",
"Receive Encrypted": "Rep xifrat",
"Receive Only": "Només rep",
"Received data is already encrypted": "Les dades rebudes ja estan xifrades",
"Recent Changes": "Canvis recents",
"Reduced by ignore patterns": "Reduït per ignorar patrons",
@@ -309,20 +323,21 @@
"Remove Device": "Elimina el dispositiu",
"Remove Folder": "Elimina la carpeta",
"Required identifier for the folder. Must be the same on all cluster devices.": "Identificador obligatori per a la carpeta. Ha de ser el mateix en tots els dispositius del clúster.",
"Rescan": "Re-escanejar",
"Rescan All": "Re-escanejar tot",
"Rescan": "Reescaneja",
"Rescan All": "Reescaneja-ho tot",
"Rescans": "Escaneja de nou",
"Restart": "Reiniciar",
"Restart Needed": "És Necessari Reiniciar",
"Restarting": "Reiniciant",
"Restart": "Reinicia",
"Restart Needed": "Cal un reinici",
"Restarting": "S'està reiniciant",
"Restore": "Restaura",
"Restore Versions": "Restaura versions",
"Resume": "Reprendre",
"Resume All": "Reprèn tot",
"Resume": "Reprèn-ho",
"Resume All": "Reprèn-ho tot",
"Reused": "Reutilitzat",
"Revert": "Reverteix",
"Revert Local Changes": "Reverteix els canvis locals",
"Save": "Guardar",
"Save": "Desa",
"Saving changes": "S'estan desant els canvis",
"Scan Time Remaining": "Temps d'escanejat restant",
"Scanning": "Escanejant",
"See external versioning help for supported templated command line parameters.": "Consulteu l'ajuda de versions externa per als paràmetres de línia d'ordres de plantilla compatibles.",
@@ -332,28 +347,28 @@
"Select additional folders to share with this device.": "Seleccioneu carpetes addicionals per compartir amb aquest dispositiu.",
"Select latest version": "Seleccioneu la darrera versió",
"Select oldest version": "Seleccioneu la versió més antiga",
"Send & Receive": "Enviar i rebre",
"Send & Receive": "Envia i rep",
"Send Extended Attributes": "Envia atributs ampliats",
"Send Only": "Només enviar",
"Send Only": "Només envia",
"Send Ownership": "Envia la propietat",
"Set Ignores on Added Folder": "Estableix filtres per ignorar a la carpeta afegida",
"Settings": "Preferències",
"Share": "Compartir",
"Share Folder": "Compartir carpeta",
"Share": "Comparteix",
"Share Folder": "Comparteix la carpeta",
"Share by Email": "Comparteix per correu electrònic",
"Share by SMS": "Comparteix per SMS",
"Share this folder?": "Compartir aquesta carpeta?",
"Share this folder?": "Voleu compartir la carpeta?",
"Shared Folders": "Carpetes compartides",
"Shared With": "Compartir Amb",
"Shared With": "Compartida amb",
"Sharing": "Compartint",
"Show ID": "Mostrar ID",
"Show ID": "Mostra l'ID",
"Show QR": "Mostra QR",
"Show detailed discovery status": "Mostra l'estat detallat del descobriment",
"Show detailed listener status": "Mostra l'estat detallat de l'oient",
"Show diff with previous version": "Mostra la diferència amb la versió anterior",
"Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.": "Mostrat en comptes del ID del Node en l'estat del cluster. Serà advertit als altres dispositius com un nom opcional per defecte.",
"Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.": "Mostrat en comptes del ID del Node en l'estat del cluster. S'actualitzarà al nom del dispositiu si es deixa buit.",
"Shutdown": "Apagar",
"Shutdown": "Apaga",
"Shutdown Complete": "Apagat complet",
"Simple": "Simple",
"Simple File Versioning": "Versionat de Fitxers Senzill",
@@ -369,8 +384,9 @@
"Stable releases only": "Només versions estables",
"Staggered": "Esglaonat",
"Staggered File Versioning": "Versionat de Fitxers Esglaonat",
"Start Browser": "Arrancar Navegador",
"Start Browser": "Inicia el navegador",
"Statistics": "Estadístiques",
"Stay logged in": "Mantén la sessió iniciada",
"Stopped": "Aturat",
"Stores and syncs only encrypted data. Folders on all connected devices need to be set up with the same password or be of type \"{%receiveEncrypted%}\" too.": "Emmagatzema i sincronitza només dades encriptades. Les carpetes de tots els dispositius connectats s'han de configurar amb la mateixa contrasenya o també ser del tipus \"{{receiveEncrypted}}\".",
"Subject:": "Assumpte:",
@@ -382,17 +398,18 @@
"Sync Status": "Estat de sincronització",
"Syncing": "Synthing",
"Syncthing device ID for \"{%devicename%}\"": "ID del dispositiu Syncthing amb el nom \"{{devicename}}\"",
"Syncthing has been shut down.": "S'ha aturat el synthing.",
"Syncthing has been shut down.": "Synthing s'ha aturat.",
"Syncthing includes the following software or portions thereof:": "Syncthing inclou el següent programari o parts dels mateixos:",
"Syncthing is Free and Open Source Software licensed as MPL v2.0.": "Syncthing és un programari lliure i de codi obert amb llicència MPL v2.0.",
"Syncthing is a continuous file synchronization program. It synchronizes files between two or more computers in real time, safely protected from prying eyes. Your data is your data alone and you deserve to choose where it is stored, whether it is shared with some third party, and how it's transmitted over the internet.": "Syncthing és un programa de sincronització contínua de fitxers. Sincronitza fitxers entre dos o més ordinadors en temps real, protegit de manera segura de mirades indiscretes. Les vostres dades són només les vostres i mereixeu triar on s'emmagatzemen, si es comparteixen amb un tercer i com es transmeten per Internet.",
"Syncthing is listening on the following network addresses for connection attempts from other devices:": "La sincronització està escoltant a les adreces de xarxa següents els intents de connexió des d'altres dispositius:",
"Syncthing is not listening for connection attempts from other devices on any address. Only outgoing connections from this device may work.": "La sincronització no és escoltar els intents de connexió d'altres dispositius a cap adreça. Només poden funcionar les connexions sortints d'aquest dispositiu.",
"Syncthing is restarting.": "Reiniciant syncthing.",
"Syncthing is restarting.": "Syncthing s'està reiniciant.",
"Syncthing is saving changes.": "Syncthing està desant els canvis.",
"Syncthing is upgrading.": "Actualitzant syncthing.",
"Syncthing now supports automatically reporting crashes to the developers. This feature is enabled by default.": "Syncthing ara admet la notificació automàtica d'errors als desenvolupadors. Aquesta funció està activada per defecte.",
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Sembla que Syncthing no està funcionant o hi ha un problema amb la connexió a Internet. S'està tornant a provar…",
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Sembla ser que Syncthing està tinguent problemes per processar la teva petició. Si us plau, refresca la pàgina o reinicia Syncthing si el problema persisteix.",
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Sembla que Syncthing està tenint problemes per a processar la petició. Recarregueu la pàgina o reinicieu Syncthing si el problema persisteix.",
"TCP LAN": "Connexió TCP LAN",
"TCP WAN": "Connexió TCP WAN",
"Take me back": "Porta'm enrere",
@@ -401,7 +418,7 @@
"The Syncthing admin interface is configured to allow remote access without a password.": "La interfície d'administració de Syncthing està configurada per permetre l'accés remot sense contrasenya.",
"The aggregated statistics are publicly available at the URL below.": "Les estadístiques agregades estan disponibles públicament a l'URL següent.",
"The cleanup interval cannot be blank.": "L'interval de neteja no pot estar en blanc.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "La configuració s'ha guardar però no s'ha activat. S'ha de reiniciar el synthing per activar la nova configuració.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "La configuració s'ha desat, però no s'ha activat. S'ha de reiniciar el Synthing per a activar la configuració nova.",
"The device ID cannot be blank.": "El ID del dispositiu no pot estar en blanc.",
"The device ID to enter here can be found in the \"Actions > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "L'identificador del dispositiu que cal introduir aquí es pot trobar al diàleg \"Accions > Mostra l'ID\" de l'altre dispositiu. Els espais i els guions són opcionals (ignorats).",
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes, and app versions. If the reported data set is changed you will be prompted with this dialog again.": "L'informe d'ús encriptat s'envia diàriament. Es fa servir per rastrejar plataformes habituals, mides de carpetes i versions de l'aplicació. Si es canvia el conjunt de dades reportades es demanarà amb aquest diàleg de nou.",
@@ -421,11 +438,13 @@
"The interval, in seconds, for running cleanup in the versions directory. Zero to disable periodic cleaning.": "L'interval, en segons, per executar la neteja al directori de versions. Zero per desactivar la neteja periòdica.",
"The maximum age must be a number and cannot be blank.": "La màxima antiguitat ha de ser un número i no pot estar en blanc.",
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "Temps màxim en mantenir una versió (en dies, si es deixa en 0 es mantenen les versions per sempre).",
"The number of connections must be a non-negative number.": "El nombre de connexions ha de ser un nombre no negatiu.",
"The number of days must be a number and cannot be blank.": "El nombre de dies ha de ser un número i no pot estar en blanc.",
"The number of days to keep files in the trash can. Zero means forever.": "El nombre de dies per guardar els fitxers a la paperera. Zero significa per sempre.",
"The number of old versions to keep, per file.": "El nombre de versions antigues que es mantenen per fitxer.",
"The number of versions must be a number and cannot be blank.": "El nombre de versions ha de ser un número i no es pot deixar en blanc.",
"The path cannot be blank.": "El camí no pot estar en blanc.",
"The rate limit is applied to the accumulated traffic of all connections to this device.": "El límit de velocitat s'aplica al trànsit acumulat de totes les connexions a aquest dispositiu.",
"The rate limit must be a non-negative number (0: no limit)": "El límit de velocitat ha de ser un nombre positiu (0: sense límit)",
"The remote device has not accepted sharing this folder.": "El dispositiu remot no ha acceptat compartir aquesta carpeta.",
"The remote device has paused this folder.": "El dispositiu remot ha posat en pausa aquesta carpeta.",
@@ -456,7 +475,7 @@
"Unexpected items have been found in this folder.": "S'han trobat elements inesperats en aquesta carpeta.",
"Unignore": "No ignorar",
"Unknown": "Desconegut",
"Unshared": "No compartit",
"Unshared": "Sense compartir",
"Unshared Devices": "Dispositius no compartits",
"Unshared Folders": "Carpetes no compartides",
"Untrusted": "No fiable",
@@ -470,8 +489,11 @@
"Usage reporting is always enabled for candidate releases.": "Els informes d'ús sempre estan activats per a les versions candidates.",
"Use HTTPS for GUI": "Utilitzar HTTPS pel GUI",
"Use notifications from the filesystem to detect changed items.": "Utilitzeu les notificacions del sistema de fitxers per detectar elements canviats.",
"User": "Usuari",
"User Home": "Carpeta d'inici de l'usuari",
"Username/Password has not been set for the GUI authentication. Please consider setting it up.": "El nom d'usuari/contrasenya no s'ha establert per a l'autenticació de la GUI. Penseu en configurar-lo.",
"Username/Password has not been set for the GUI authentication. Please consider setting it up.": "El nom d'usuari/contrasenya no s'ha establert per a l'autenticació de la GUI. Penseu a configurar-lo.",
"Using a QUIC connection over LAN": "Utilitzant una connexió QUIC per LAN",
"Using a QUIC connection over WAN": "Utilitzant una connexió QUIC a través de WAN",
"Using a direct TCP connection over LAN": "Utilitzant una connexió TCP directa per LAN",
"Using a direct TCP connection over WAN": "Utilitzant una connexió TCP directa a través de WAN",
"Version": "Versió",
@@ -492,6 +514,7 @@
"Watching for changes discovers most changes without periodic scanning.": "Observant els canvis descobreix la majoria dels canvis sense escanejar periòdicament.",
"When adding a new device, keep in mind that this device must be added on the other side too.": "Quan s'afegeix un nou dispositiu, recorda que aquest dispositiu tambè s'ha d'afegir a l'altre banda.",
"When adding a new folder, keep in mind that the Folder ID is used to tie folders together between devices. They are case sensitive and must match exactly between all devices.": "Quan s'afegeix una nova carpeta recorda que el ID d'aquesta s'utilitza per lligar repositoris entre els dispositius. Es distingeix entre majúscules i minúscules i ha de ser exactament iguals entre tots els dispositius.",
"When set to more than one on both devices, Syncthing will attempt to establish multiple concurrent connections. If the values differ, the highest will be used. Set to zero to let Syncthing decide.": "Quan s'estableix en més d'un als dos dispositius, Syncthing intentarà establir diverses connexions simultànies. Si els valors són diferents, s'utilitzarà el més alt. Establiu a zero per deixar que Sincronització decideixi.",
"Yes": "Si",
"Yesterday": "Ahir",
"You can also copy and paste the text into a new message manually.": "També podeu copiar i enganxar el text en un missatge nou manualment.",
@@ -499,8 +522,8 @@
"You can change your choice at any time in the Settings dialog.": "Pots canviar la teva elecció en qualsevol moment al quadre de preferències.",
"You can read more about the two release channels at the link below.": "Podeu llegir més sobre els dos canals de llançament a l'enllaç següent.",
"You have no ignored devices.": "No teniu cap dispositiu ignorat.",
"You have no ignored folders.": "No tens carpetes compartides.",
"You have unsaved changes. Do you really want to discard them?": "Tens canvis no desats. Realment les voleu descartar?",
"You have no ignored folders.": "No teniu carpetes ignorades.",
"You have unsaved changes. Do you really want to discard them?": "Teniu canvis no desats. Realment els voleu descartar?",
"You must keep at least one version.": "Has de mantenir com a mínim una versió.",
"You should never add or change anything locally in a \"{%receiveEncrypted%}\" folder.": "Mai no hauríeu d'afegir ni canviar res localment a una carpeta \"{{receiveEncrypted}}\".",
"Your SMS app should open to let you choose the recipient and send it from your own number.": "La vostra aplicació SMS s'hauria d'obrir per permetre't triar el destinatari i enviar-lo des del teu propi número.",
@@ -525,6 +548,7 @@
"light": "Clar"
}
},
"unknown device": "dispositiu desconegut",
"{%device%} wants to share folder \"{%folder%}\".": "{{device}} vol compartir la carpeta \"{{folder}}\".",
"{%device%} wants to share folder \"{%folderlabel%}\" ({%folder%}).": "{{device}} vol compartir la carpeta \"{{folderlabel}}\" ({{folder}}).",
"{%reintroducer%} might reintroduce this device.": "{{reintroducer}} podria tornar a introduir aquest dispositiu."

View File

@@ -1,5 +1,5 @@
{
"A device with that ID is already added.": "Zařízení s takovým identifikátorem už je přidáno.",
"A device with that ID is already added.": "Zařízení s takovým identifikátorem je již přidáno.",
"A negative number of days doesn't make sense.": "Záporný počet dní nedává smysl.",
"A new major version may not be compatible with previous versions.": "Nová hlavní verze nemusí být kompatibilní s předchozími verzemi.",
"API Key": "Klíč k API",
@@ -11,93 +11,95 @@
"Add Device": "Přidat zařízení",
"Add Folder": "Přidat složku",
"Add Remote Device": "Přidat vzdálené zařízení",
"Add devices from the introducer to our device list, for mutually shared folders.": "Přidat zařízení z uvaděče do místního seznamu zařízení a získat tak vzájemně sdílené složky.",
"Add filter entry": "Přidej filtr",
"Add ignore patterns": "Přidat vzory ignorovaného",
"Add devices from the introducer to our device list, for mutually shared folders.": "Přidat zařízení od zavaděče do našeho seznamu zařízení pro vzájemné sdílení složek.",
"Add filter entry": "Přidat položku filtru",
"Add ignore patterns": "Přidat vzory ignorování",
"Add new folder?": "Přidat novou složku?",
"Additionally the full rescan interval will be increased (times 60, i.e. new default of 1h). You can also configure it manually for every folder later after choosing No.": "Dále bude prodloužen interval mezi plnými skeny (60krát, t.j. nová výchozí hodnota 1h). V případě, že nyní zvolíte Ne, stále ještě toto později můžete u každé složky jednotlivě ručně upravit.",
"Additionally the full rescan interval will be increased (times 60, i.e. new default of 1h). You can also configure it manually for every folder later after choosing No.": "Kromě toho se prodlouží interval úplného opětovného skenování (60krát, t.j. nová výchozí hodnota 1h). Také to můžete později nastavit ručně pro každou složku, pokud zvolíte Ne.",
"Address": "Adresa",
"Addresses": "Adresy",
"Advanced": "Pokročilé",
"Advanced Configuration": "Pokročilá nastavení",
"Advanced": "Rozšířené",
"Advanced Configuration": "Rozšířená nastavení",
"All Data": "Všechna data",
"All Time": "Celou dobu",
"All folders shared with this device must be protected by a password, such that all sent data is unreadable without the given password.": "Všechny složky sdílené s tímto zařízením musí být chráněna heslem, aby byla odesílaná data bez hesla nečitelná.",
"All folders shared with this device must be protected by a password, such that all sent data is unreadable without the given password.": "Všechny složky sdílené s tímto zařízením musí být chráněny heslem, aby všechna odesílaná data byla bez zadaného hesla nečitelná.",
"Allow Anonymous Usage Reporting?": "Povolit anonymní hlášení o používání?",
"Allowed Networks": "Sítě, ze kterých je umožněn přístup",
"Alphabetic": "Abecední",
"Altered by ignoring deletes.": "Změněno pomocí vzorů ignorovaného.",
"An external command handles the versioning. It has to remove the file from the shared folder. If the path to the application contains spaces, it should be quoted.": "Správu verzí obstarává externí příkaz. U toho je třeba, aby neaktuální soubory jím byly odsouvány pryč ze sdílené složky. Pokud popis umístění tohoto příkazu obsahuje mezeru, je třeba popis umístění uzavřít do uvozovek.",
"Allowed Networks": "Povolené sítě",
"Alphabetic": "Abecedně",
"Altered by ignoring deletes.": "Změněno ignorováním odstraněných.",
"An external command handles the versioning. It has to remove the file from the shared folder. If the path to the application contains spaces, it should be quoted.": "Verzování obstarává externí příkaz. Ten musí odebrat soubor ze sdílené složky. Pokud cesta k aplikaci obsahuje mezery, je potřeba ji uvést v uvozovkách.",
"Anonymous Usage Reporting": "Anonymní hlášení o používání",
"Anonymous usage report format has changed. Would you like to move to the new format?": "Formát anonymního hlášení o používání byl změněn. Chcete přejít na nový formát?",
"Applied to LAN": "Použité pro místní síť",
"Apply": "Aplikovat",
"Are you sure you want to override all remote changes?": "Skutečně si přejete přet všechny vzdálené změny?",
"Are you sure you want to permanently delete all these files?": "Skutečně chcete smazat všechny tyto soubory?",
"Anonymous usage report format has changed. Would you like to move to the new format?": "Formát anonymního hlášení o používání se změnil. Chcete přejít na nový formát?",
"Applied to LAN": "Použité pro LAN",
"Apply": "Použít",
"Are you sure you want to override all remote changes?": "Opravdu chcete přepsat všechny vzdálené změny?",
"Are you sure you want to permanently delete all these files?": "Opravdu chcete trvale odstranit všechny tyto soubory?",
"Are you sure you want to remove device {%name%}?": "Opravdu chcete odebrat zařízení {{name}}?",
"Are you sure you want to remove folder {%label%}?": "Opravdu chcete odebrat složku {{label}}?",
"Are you sure you want to restore {%count%} files?": "Opravdu chcete obnovit {{count}} souborů?",
"Are you sure you want to revert all local changes?": "Skutečně si přejete vrátit všechny lokální změny?",
"Are you sure you want to upgrade?": "Skutečně chcete provést aktualizaci?",
"Authentication Required": "Autentizace vyžadována",
"Are you sure you want to revert all local changes?": "Opravdu chcete vrátit všechny místní změny?",
"Are you sure you want to upgrade?": "Opravdu chcete provést aktualizaci?",
"Authentication Required": "Požadováno ověření",
"Authors": "Autoři",
"Auto Accept": "Přijmout automaticky",
"Automatic Crash Reporting": "Automatické hlášení pádů",
"Automatic upgrade now offers the choice between stable releases and release candidates.": "Automatická aktualizace nyní nabízí volbu mezi stabilními vydáními a kandidáty na .",
"Automatic upgrade now offers the choice between stable releases and release candidates.": "Automatická aktualizace nyní nabízí výběr mezi stabilními vydáními a kandidáty na vydání.",
"Automatic upgrades": "Automatické aktualizace",
"Automatic upgrades are always enabled for candidate releases.": "Automatické aktualizace jsou vždy povolené u kandidátů na vydání.",
"Automatically create or share folders that this device advertises at the default path.": "Automaticky vytvářet nebo sdílet složky, které toto zařízení propaguje ve výchozím popisu umístě.",
"Available debug logging facilities:": "Dostupná logovací zařízení pro ladění:",
"Automatically create or share folders that this device advertises at the default path.": "Automaticky vytvářet nebo sdílet složky, které toto zařízení propaguje ve výchozí cestě.",
"Available debug logging facilities:": "Dostupné možnosti protokolování ladění:",
"Be careful!": "Buďte opatrní!",
"Body:": "Obsah:",
"Bugs": "Chyby",
"Cancel": "Zrušit",
"Changelog": "Seznam změn",
"Clean out after": "Vyčistit po",
"Cleaning Versions": "Mazání verzí",
"Cleanup Interval": "Interval mazání",
"Click to see full identification string and QR code.": "Kliknutím zobrazíte úplnou identifikaci a QR kód.",
"Cleaning Versions": "Čištění verzí",
"Cleanup Interval": "Interval čištění",
"Click to see full identification string and QR code.": "Kliknutím zobrazíte celý identifikační řetězec a QR kód.",
"Close": "Zavřít",
"Command": "Příkaz",
"Comment, when used at the start of a line": "Pokud použito na jeho začátku, je řádek považován za komentář",
"Comment, when used at the start of a line": "Považováno za komentář, pokud je použito na začátku řádku",
"Compression": "Komprese",
"Configuration Directory": "Konfigurační složka",
"Configuration Directory": "Konfigurační adresář",
"Configuration File": "Konfigurační soubor",
"Configured": "Nastaveno",
"Connected (Unused)": "Připojeno (nepoužité)",
"Connected (Unused)": "Připojeno (nepoužito)",
"Connection Error": "Chyba připojení",
"Connection Management": "Správa připojení",
"Connection Type": "Typ připojení",
"Connections": "Spojení",
"Connections via relays might be rate limited by the relay": "Připojení přes přenašeč může být přenašečem omezena rychlost",
"Continuously watching for changes is now available within Syncthing. This will detect changes on disk and issue a scan on only the modified paths. The benefits are that changes are propagated quicker and that less full scans are required.": "Syncthing nyní umožňuje nepřetržité sledování změn. To zachytí změny na úložišti a spustí sken pouze pro umístění, ve kterých se něco změnilo. Výhodami jsou rychlejší propagace změn a méně plných skenů.",
"Connections": "Připojení",
"Connections via relays might be rate limited by the relay": "Připojení přes relé může být omezeno rychlostí relé",
"Continuously watching for changes is now available within Syncthing. This will detect changes on disk and issue a scan on only the modified paths. The benefits are that changes are propagated quicker and that less full scans are required.": "Syncthing nyní umožňuje průběžné sledování změn. To zachytí změny na disku a provede skenování pouze změněných cest. Výhodou je rychlejší šíření změn a menší počet úplných skenování.",
"Copied from elsewhere": "Zkopírováno odjinud",
"Copied from original": "Zkopírováno z originálu",
"Copied!": "Zkopírováno!",
"Copy": "Kopírovat",
"Copy failed! Try to select and copy manually.": "Kopírování selhalo! Zkuste vybrat a zkopírovat manuálně.",
"Copy failed! Try to select and copy manually.": "Kopírování selhalo! Zkuste vybrat a zkopírovat ručně.",
"Currently Shared With Devices": "Aktuálně sdíleno se zařízeními",
"Custom Range": "Přesný rozsah",
"Custom Range": "Vlastní rozsah",
"Danger!": "Nebezpečí!",
"Database Location": "Umístění databáze",
"Debugging Facilities": "Nástroje pro ladění",
"Default": "Výchozí",
"Default Configuration": "Výchozí nastavení",
"Default Configuration": "Výchozí konfigurace",
"Default Device": "Výchozí zařízení",
"Default Folder": "Výchozí složka",
"Default Ignore Patterns": "Výchozí vzory ignorovaného",
"Default Ignore Patterns": "Výchozí vzory ignorování",
"Defaults": "Výchozí hodnoty",
"Delete": "Smazat",
"Delete Unexpected Items": "Smazat neočekávané položky",
"Deleted {%file%}": "Smazáno {{file}}",
"Delete": "Odstranit",
"Delete Unexpected Items": "Odstranit neočekávané položky",
"Deleted {%file%}": "Odstraněn {{file}}",
"Deselect All": "Zrušit výběr všeho",
"Deselect devices to stop sharing this folder with.": "Zrušte výběr zařízení, se kterými již nemá být tato složka sdílena.",
"Deselect folders to stop sharing with this device.": "Zrušte výběr složek, které se mají přestat sdílet s tímto zařízením.",
"Deselect devices to stop sharing this folder with.": "Zrušením výběru zařízení s ním přestanete tuto složku sdílet.",
"Deselect folders to stop sharing with this device.": "Zrušením výběru složek zastavíte sdíle s tímto zařízením.",
"Device": "Zařízení",
"Device \"{%name%}\" ({%device%} at {%address%}) wants to connect. Add new device?": "Zařízení „{{name}}“ ({{device}} na {{address}}) se chce připojit. Přidat nové zařízení?",
"Device Certificate": "Certifikát zařízení",
"Device ID": "Identifikátor zařízení",
"Device Identification": "Identifikace zařízení",
"Device Name": "Název zařízení",
"Device Status": "Stav zařízení",
"Device is untrusted, enter encryption password": "Zařízení nemá důvěru, zadejte šifrovací heslo.",
"Device rate limits": "Omezení přenosové rychlosti pro zařízení",
"Device that last modified the item": "Zařízení, které položku změnilo naposledy",
@@ -153,7 +155,7 @@
"Failed to load file versions.": "Nepodařilo se nahrát verze souboru.",
"Failed to load ignore patterns.": "Načtení vzorů ignorovaného se nezdařilo.",
"Failed to setup, retrying": "Nastavování se nezdařilo, zkouší se znovu",
"Failure to connect to IPv6 servers is expected if there is no IPv6 connectivity.": "Je v pořádku, když připojení k IPv6 serverům nezdaří, pokud není k dispozici IPv6 konektivita.",
"Failure to connect to IPv6 servers is expected if there is no IPv6 connectivity.": "Je v pořádku, když se připojení k IPv6 serverům nezdaří, pokud není k dispozici IPv6 konektivita.",
"File Pull Order": "Pořadí stahování souborů",
"File Versioning": "Správa verzí souborů",
"Files are moved to .stversions directory when replaced or deleted by Syncthing.": "Při nahrazování nebo mazání aplikací Syncthing jsou původní soubory přesunuty do složky .stversions.",
@@ -219,13 +221,14 @@
"Last seen": "Naposledy spatřen",
"Latest Change": "Poslední změna",
"Learn more": "Zjistěte více",
"Learn more at {%url%}": "Více na {{url}}",
"Limit": "Limit",
"Listener Failures": "Selhání při naslouchání",
"Listener Status": "Stav naslouchání",
"Listeners": "Naslouchající",
"Loading data...": "Načítání dat…",
"Loading...": "Načítání…",
"Local Additions": "Místní příbytky",
"Local Additions": "Místní přebytky",
"Local Discovery": "Místní objevování",
"Local State": "Místní status",
"Local State (Total)": "Místní status (Celkem)",
@@ -234,11 +237,16 @@
"Log File": "Soubor logů",
"Log In": "Přihlásit se",
"Log Out": "Odhlásit se",
"Log in to see paths information.": "Pro zobrazení informací o cestě se přihlaste.",
"Log in to see version information.": "Pro zobrazení informací o verzi se přihlaste.",
"Log tailing paused. Scroll to the bottom to continue.": "Zaznamenávání událostí pozastaveno. Sjeďte dolů pro pokračování.",
"Login failed, see Syncthing logs for details.": "Přihlášení selhalo, detaily najdete v Syncthing logu.",
"Logs": "Záznamy událostí",
"Major Upgrade": "Aktualizace hlavní verze",
"Mass actions": "Hromadné akce",
"Maximum Age": "Maximální časový limit",
"Maximum single entry size": "Maximální velikost jedné položky",
"Maximum total size": "Maximální celková velikost",
"Metadata Only": "Pouze metadata",
"Minimum Free Disk Space": "Minimální velikost volného místa na úložišti",
"Mod. Device": "Zařízení, které provedlo změnu",
@@ -282,7 +290,7 @@
"Pending changes": "Čekající změny",
"Periodic scanning at given interval and disabled watching for changes": "Periodické skenování podle zadaného intervalu; sledování změn vypnuto",
"Periodic scanning at given interval and enabled watching for changes": "Periodické skenování podle zadaného intervalu; sledování změn zapnuto",
"Periodic scanning at given interval and failed setting up watching for changes, retrying every 1m:": "Periodické skenování podle zadaného intervalu; nastavení sledování změn se nezdařilo, opětovný pokus každou 1 min: ",
"Periodic scanning at given interval and failed setting up watching for changes, retrying every 1m:": "Periodické skenování podle zadaného intervalu; nastavení sledování změn se nezdařilo, opětovný pokus každou 1 min:",
"Permanently add it to the ignore list, suppressing further notifications.": "Natrvalo ignorovat, takže oznámení již nebudou přicházet.",
"Please consult the release notes before performing a major upgrade.": "Před přechodem na novější hlavní verzi si nejdříve přečtěte poznámky k vydání nové verze.",
"Please set a GUI Authentication User and Password in the Settings dialog.": "V dialogu Nastavení zadejte uživatelské jméno a heslo pro ověření se v GUI.",
@@ -408,15 +416,17 @@
"The folder path cannot be blank.": "Popis umístění složky nemůže zůstat nevyplněný.",
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "Jsou použity následující intervaly: za první hodinu jsou ponechány verze pro každých 30 sekund, za první den jsou ponechány verze pro každou hodinu, za prvních 30 dní jsou ponechány verze pro každý den a do nejvyššího nastaveného stáří jsou ponechány verze pro každý týden.",
"The following items could not be synchronized.": "Následující položky nemohly být synchronizovány.",
"The following items were changed locally.": "Tyto položky byly změněny lokálně",
"The following items were changed locally.": "Tyto položky byly změněny lokálně.",
"The following methods are used to discover other devices on the network and announce this device to be found by others:": "K objevování ostatních zařízení a oznamování tohoto zařízení se používají následující metody:",
"The following text will automatically be inserted into a new message.": "Následující text bude automaticky vložen do nové zprávy.",
"The following unexpected items were found.": "Byly nalezeny tyto neočekávané položky.",
"The interval must be a positive number of seconds.": "Interval musí být kladný počet sekund.",
"The interval, in seconds, for running cleanup in the versions directory. Zero to disable periodic cleaning.": "Interval (v sekundách) pro spouštění čištění ve složce s verzemi. Nula pravidelné čištění vypíná.",
"The maximum age must be a number and cannot be blank.": "Nejvyšší stáří je třeba zadat v podobě čísla a nemůže být prázdné.",
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "Maximální doba pro zachování verze (dny, zapsáním hodnoty 0 bude ponecháno navždy).",
"The number of days must be a number and cannot be blank.": "Je třeba, aby počet dní bylo číslo a nemůže zůstat nevyplněné.",
"The number of days to keep files in the trash can. Zero means forever.": "Počet dní, po který budou soubory uchovány v koši. Nula znamená navždy.",
"The number of connections must be a non-negative number.": "Počet spojení musí být nezáporné číslo.",
"The number of days must be a number and cannot be blank.": "Počet dní musí být číslo a nemůže zůstat nevyplněné.",
"The number of days to keep files in the trash can. Zero means forever.": "Počet dní, po které budou soubory uchovány v koši. Nula znamená navždy.",
"The number of old versions to keep, per file.": "Počet uchovávaných starších verzí každého ze souborů.",
"The number of versions must be a number and cannot be blank.": "Je třeba, aby počet verzí bylo číslo a nemůže zůstat nevyplněné.",
"The path cannot be blank.": "Popis umístění nemůže zůstat nevyplněný.",
@@ -475,8 +485,8 @@
"Warning, this path is a parent directory of an existing folder \"{%otherFolder%}\".": "Varování, tento popis umístění je nadřazenou složkou existující „{{otherFolder}}“.",
"Warning, this path is a parent directory of an existing folder \"{%otherFolderLabel%}\" ({%otherFolder%}).": "Varování, tento popis umístění je nadřazenou složkou existující „{{otherFolderLabel}}“ ({{otherFolder}}).",
"Warning, this path is a subdirectory of an existing folder \"{%otherFolder%}\".": "Varování: toto umístění je podsložkou existující „{{otherFolder}}“.",
"Warning, this path is a subdirectory of an existing folder \"{%otherFolderLabel%}\" ({%otherFolder%}).": "Varování, toto umístění e podsložkou existující „{{otherFolderLabel}}“ ({{otherFolder}}).",
"Warning: If you are using an external watcher like {%syncthingInotify%}, you should make sure it is deactivated.": "Pozor: Pokud používáte externí sledování změn jako {{syncthingInotify}}, měly byste se ujistit, že je toto sledování vypnuto.",
"Warning, this path is a subdirectory of an existing folder \"{%otherFolderLabel%}\" ({%otherFolder%}).": "Varování, toto umístění je podsložkou existující „{{otherFolderLabel}}“ ({{otherFolder}}).",
"Warning: If you are using an external watcher like {%syncthingInotify%}, you should make sure it is deactivated.": "Pozor: Pokud používáte externí sledování změn jako {{syncthingInotify}}, měli byste se ujistit, že je toto sledování vypnuto.",
"Watch for Changes": "Sledovat změny",
"Watching for Changes": "Sledování změn",
"Watching for changes discovers most changes without periodic scanning.": "Sledování změn odhalí většinu změn ještě před periodickým skenováním.",
@@ -494,9 +504,13 @@
"You should never add or change anything locally in a \"{%receiveEncrypted%}\" folder.": "Ve složce typu „{{receiveEncrypted}}“ byste neměli lokálně nic měnit ani vytvářet.",
"days": "dní",
"directories": "složky",
"file": "soubor",
"files": "souborů",
"folder": "složka",
"full documentation": "úplná dokumentace",
"items": "položky",
"modified": "změněno",
"permit": "povolit",
"seconds": "sekund",
"theme": {
"name": {

View File

@@ -187,7 +187,7 @@
"GUI Theme": "GUI-tema",
"General": "Generelt",
"Generate": "Opret",
"Global Discovery": "Globalt opslag",
"Global Discovery": "Globalt opdagelse",
"Global Discovery Servers": "Globale opslagsservere",
"Global State": "Global tilstand",
"Help": "Hjælp",
@@ -386,6 +386,7 @@
"Staggered File Versioning": "Forskudt filversionering",
"Start Browser": "Start browser",
"Statistics": "Statistikker",
"Stay logged in": "Forbliv logget ind",
"Stopped": "Stoppet",
"Stores and syncs only encrypted data. Folders on all connected devices need to be set up with the same password or be of type \"{%receiveEncrypted%}\" too.": "Gemmer og synkroniserer kun krypterede data. Mapper på alle tilsluttede enheder skal være oprettet med samme adgangskode eller også være af typen \"{{receiveEncrypted}}\".",
"Subject:": "Emne:",

Some files were not shown because too many files have changed in this diff.