Compare commits

72 Commits

Author SHA1 Message Date
Jakob Borg
0945304a79 build: fix detection of next rc version 2025-06-20 11:17:23 +02:00
Jakob Borg
9703dd9f57 build: import release workflow changes from main 2025-06-20 11:12:05 +02:00
yparitcher
259e9ef08e fix(protocol): slightly loosen/correct ownership comparison criteria (fixes #9879) (#10176)
Only require either matching UID & GID, OR matching names.

If the two devices have different Name => UID mappings, they can never be
totally equal. Therefore, when syncing, we try matching the name and fall
back to the UID. However, when scanning for changes we currently require
both the name and the UID to match. This leads to files being forever out
of sync back and forth, or to local additions on receive-only folders.

This patch does not change the sending behavior. It only changes what we
consider equal for existing files with a mismapped Name => UID.

The added test cases show the change: tests 1, 5 and 6 behave the same as
before. Tests 2 and 3 are what change with this patch (from false to
true). Test 4 is a subset of test 2 that is already special-cased as
true, and does not change.
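
A minimal Go sketch of the loosened rule, with hypothetical field names
(the real types live in lib/protocol) and ignoring the empty-value
special cases the real comparison handles:

```
// Hypothetical ownership record; field names are illustrative only.
type ownership struct {
	UID, GID             int
	OwnerName, GroupName string
}

// equal implements the loosened criteria: matching numeric IDs OR
// matching names is each sufficient on its own, instead of requiring
// both to match at once.
func (a ownership) equal(b ownership) bool {
	idsMatch := a.UID == b.UID && a.GID == b.GID
	namesMatch := a.OwnerName == b.OwnerName && a.GroupName == b.GroupName
	return idsMatch || namesMatch
}
```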

Co-authored-by: Jakob Borg <jakob@kastelo.net>
2025-06-20 09:55:42 +02:00
Simon Frei
6a0c6128d8 fix(watchaggregator): properly handle sub-second watch durations (fixes #9927) (#10179)
I'll let Audrius' words from the ticket explain this :)

> I'm a bit lost, time.Duration is an int64, yet watcher delay is float,
> anything sub 1s gets rounded down to 0, so you just end up going into
> an infinite loop.


https://github.com/syncthing/syncthing/issues/9927#issuecomment-2967736106
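
A minimal, self-contained illustration of the bug class (not the actual
watchaggregator code): `time.Duration` counts nanoseconds in an int64,
so a fractional seconds value must be scaled before the conversion, not
after:

```
package main

import (
	"fmt"
	"time"
)

func main() {
	delaySeconds := 0.5 // a sub-second watch delay

	// Buggy pattern: 0.5 truncates to 0 before scaling, yielding 0s.
	bad := time.Duration(delaySeconds) * time.Second

	// Correct: scale to nanoseconds first, then convert.
	good := time.Duration(delaySeconds * float64(time.Second))

	fmt.Println(bad, good) // prints "0s 500ms"
}
```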
2025-06-15 10:29:33 +02:00
Jakob Borg
b05ece0681 build: more resilient pushes to releases 2025-06-07 13:18:58 +02:00
Jakob Borg
e9133ef82b docs: link to Docker image, APT, in release notes 2025-06-05 19:19:05 +02:00
Jakob Borg
67ba20d777 build: also create relaysrv and discosrv releases 2025-06-05 19:19:05 +02:00
Jakob Borg
21da0d7890 fix(stupgrades): return latest stable & pre for each major 2025-06-05 19:19:05 +02:00
ardevd
ebbe57d0ab fix(syncthing): avoid writing panic log to nil fd (#10154)
### Purpose

This change fixes a logical bug in the panic log writing where we could
end up writing to an uninitialized file descriptor.

On the very first iteration, `panicFd` is nil. We enter the
`if panicFd == nil { … }` block, check for "panic:" or "fatal error:",
and if neither matches, we skip instantiating `panicFd` altogether.
However, immediately after, still within the `if panicFd == nil { … }`
block, we call `panicFd.WriteString("Panic at ...")`. But `panicFd`
would in this case be `nil`, which causes a run-time panic.

It's not clear to me why `panicFd` is only initialized if the lines
start with "panic:" or "fatal error:", so I've left that logic
untouched. With this change we at least avoid the risk of writing to a
nil file descriptor.
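
A hedged sketch of the fixed control flow (simplified, with placeholder
names; not the actual Syncthing code): the write only happens once the
file has demonstrably been opened, so `panicFd` can never be nil at the
`WriteString` call.

```
import (
	"bufio"
	"io"
	"os"
	"strings"
)

func copyToPanicLog(stderr io.Reader, panicLogPath string) {
	var panicFd *os.File
	sc := bufio.NewScanner(stderr)
	for sc.Scan() {
		line := sc.Text()
		if panicFd == nil {
			if !strings.HasPrefix(line, "panic:") && !strings.HasPrefix(line, "fatal error:") {
				continue // not a panic; nothing to log yet
			}
			fd, err := os.Create(panicLogPath)
			if err != nil {
				return
			}
			panicFd = fd // only now is writing safe
		}
		panicFd.WriteString(line + "\n")
	}
}
```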

---------

Co-authored-by: Jakob Borg <jakob@kastelo.net>
2025-06-03 07:39:21 +02:00
Jakob Borg
f4abc71dcc chore: copyright in next-version script 2025-06-02 20:41:42 +02:00
Jakob Borg
8aa02da93a build: include "v" prefix in version tags... 2025-06-02 19:59:45 +02:00
Jakob Borg
0e560486db build: use own script instead of svu
We use a slightly different handling of features between prereleases.
2025-06-02 19:49:23 +02:00
Syncthing Release Automation
57d413099d chore(gui, man, authors): update docs, translations, and contributors 2025-06-02 05:24:58 +00:00
Jakob Borg
1fdf07933c feat(config): expose folder and device info as metrics (fixes #9519) (#10148)
This makes it easier to use metrics based on device and folder labels,
names, and other attributes. Other metrics which are based on folder or
device ID can be joined with these info metrics to enrich their label
sets.

```
# HELP syncthing_config_device_info Provides additional information labels on devices
# TYPE syncthing_config_device_info gauge
syncthing_config_device_info{device="I6KAH76-66SLLLB-5PFXSOA-UFJCDZC-YAOMLEK-CP2GB32-BV5RQST-3PSROAU",introducer="false",name="s1",paused="false",untrusted="false"} 1

# HELP syncthing_config_folder_info Provides additional information labels on folders
# TYPE syncthing_config_folder_info gauge
syncthing_config_folder_info{folder="default",label="The default folder",path="s2",paused="false",type="sendreceive"} 1
```

With this you can e.g. query for

```
syncthing_connections_active * on(device) group_left syncthing_config_device_info
```

Fixes #9519 
Closes #10074 
Closes #10147
2025-05-31 17:09:23 +02:00
Jakob Borg
c50678618f chore: add issue types to GitHub issue templates 2025-05-30 11:57:27 +02:00
Jakob Borg
8094b459e4 build: remove schedule from PR metadata job
It shouldn't have touched non-PR issues, but it did
2025-05-30 11:57:27 +02:00
Simon Frei
6765867a2e chore(protocol): only allow enc. password changes on cluster config (#10145)
In practice we already always call SetPassword and ClusterConfig
together. However it's not just "sensible" to do that, it's required: If
the passwords change, the remote device needs to know about that to
check that the enc. setup is valid/consistent (e.g. tokens match,
folder-type is appropriate, ...).
And with the passwords set later, there's no point in adding them as
part of creating a new connection.

This is a "followup" (if one can call it that 4 years later :) ) to
resp. fix for the following commit:
924b96856f

Co-authored-by: Jakob Borg <jakob@kastelo.net>
2025-05-30 09:52:47 +02:00
Simon Frei
4fb8ee6a6f chore(protocol): don't start connection routines a second time (#10146) 2025-05-30 06:28:42 +00:00
Jakob Borg
674834ccf4 build: properly propagate build tags to Debian build (#10144)
Previously all were ignored except noupgrade, which was hard-coded...
2025-05-29 15:06:57 +00:00
Jakob Borg
3bd2bff23b fix(protocol): avoid deadlock with concurrent connection start and close (#10140) 2025-05-29 14:56:58 +00:00
Jakob Borg
40660c5fb7 build: add labeler workflow for PRs (#10143)
Use labels to categorise release notes
2025-05-29 10:04:08 +02:00
Jakob Borg
d940d094a1 build(deps): update our notify package from upstream (#10142) 2025-05-28 15:04:24 +00:00
Jakob Borg
9d67727989 build(deps): update dependencies (#10141) 2025-05-28 13:52:08 +00:00
Jakob Borg
6f51700a7f docs: general notes about v2 coming (#10135)
This adds a file that will be prepended to release notes (tag messages,
GitHub releases, forum posts) for v1 releases. I'd like there to be
something there to flag that things are going to change.
2025-05-27 10:01:04 +02:00
Marcel Meyer
598915193a refactor: use slices package for sorting (#10136)
A few more complicated usages of the sort package are left.

### Purpose

Make progress towards replacing the sort package with slices package.
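
A typical mechanical change looks like this (illustrative only):

```
package main

import (
	"cmp"
	"fmt"
	"slices"
)

func main() {
	names := []string{"charlie", "alice", "bob"}

	// Before (sort package):
	//   sort.Slice(names, func(i, j int) bool { return names[i] < names[j] })

	// After (slices package, Go 1.21+):
	slices.SortFunc(names, func(a, b string) int { return cmp.Compare(a, b) })

	fmt.Println(names) // [alice bob charlie]
}
```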
2025-05-26 20:37:49 +02:00
Jakob Borg
905e5ec07f build: handle multiple general release notes 2025-05-26 16:27:23 +02:00
Jakob Borg
4075b886d0 build: no need to build on the branches that just trigger tags 2025-05-26 15:21:21 +02:00
Jakob Borg
cade790198 build: use specific token for pushing release tags 2025-05-26 14:13:02 +02:00
Luke Hamburg
98555a9a80 fix(gui): update uncamel() to handle strings like 'IDs' (fixes #10128) (#10131)
> ⚠️ resubmission targeting `main` instead of `v2`

### Purpose

Updates `uncamel()` function in
[uncamelFilter.js](https://github.com/syncthing/syncthing/blob/v2/gui/default/syncthing/core/uncamelFilter.js)
to fix camelCase conversion edge cases, see #10128

This adds an array called `reservedStrings` which will be printed as-is,
e.g. `IDs`, `LAN` etc. I pre-populated this with what I believe makes
sense, but of course this is easily updated.

### Testing

I compiled all the config variables I could find in
`syncthing/lib/config/*configuration.go` and tested this new function
against them. Everything seemed to pass.

### Screenshot


![Image](https://github.com/user-attachments/assets/af8c9821-58b3-4a6a-8462-bead8a6d845a)
2025-05-26 11:43:38 +00:00
Marcel Meyer
48b757cac1 refactor: use slices package for sort (#10132)
The sort package is still used in places that were not trivial to
change. Since Go 1.21 the slices package can be used for sorting. See
https://go.dev/doc/go1.21#slices

### Purpose

Make some progress with the migration to a more up-to-date syntax.
2025-05-26 13:37:26 +02:00
Jakob Borg
58c85fc9db build: process for automatic release tags (#10133)
Make the release tagging consistent. Push to release branch to create a
stable release; push to release-rc to release a new candidate.
2025-05-26 13:33:53 +02:00
Syncthing Release Automation
ddd98a818a chore(gui, man, authors): update docs, translations, and contributors 2025-05-26 03:55:20 +00:00
Jakob Borg
64b5a1b738 fix(syncthing): ensure both config and data dirs exist at startup (fixes #10126) (#10127)
Previously we'd only ensure the config dir, which is often but not
always the same as the data dir.
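
A minimal sketch of the startup check described above (hypothetical
helper; the 0o700 mode is an assumption, not the actual default):

```
// ensureDirs creates both locations if missing; they may legitimately
// point to different places depending on configuration.
func ensureDirs(configDir, dataDir string) error {
	for _, dir := range []string{configDir, dataDir} {
		if err := os.MkdirAll(dir, 0o700); err != nil {
			return err
		}
	}
	return nil
}
```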

Fixes #10126
2025-05-25 08:10:17 +02:00
Ashish Bhate
1a131a56f2 fix(versioner): fix perms of created folders (fixes #9626) (#10105)
As suggested in the linked issue, I've updated the versioner code to use
the permissions of the corresponding directory in the synced folder when
creating the folder in the versions directory.
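
A hedged sketch of that approach (hypothetical helper; the real code is
in the versioner package):

```
import (
	"os"
	"path/filepath"
)

// mkdirLikeSource creates rel under versionsDir, mirroring the
// permissions of the corresponding directory under folderDir. The
// 0o755 fallback is an assumption for when the source is unreadable.
func mkdirLikeSource(folderDir, versionsDir, rel string) error {
	mode := os.FileMode(0o755)
	if info, err := os.Stat(filepath.Join(folderDir, rel)); err == nil && info.IsDir() {
		mode = info.Mode().Perm()
	}
	return os.MkdirAll(filepath.Join(versionsDir, rel), mode)
}
```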

### Testing
- Some tests are included with the PR. Happy to add more if you think
there are some edge-cases that we're missing.
- I've tested manually on Linux to confirm the permissions of the
created directories.
- I haven't tested on Windows or macOS (I don't have access to these
OSes).
2025-05-24 07:35:32 +02:00
pullmerge
beda37f28b refactor: use slices.Contains to simplify code (#10121)
There is a [new function](https://pkg.go.dev/slices@go1.21.0#Contains)
in the Go 1.21 standard library which can make the code more concise
and easier to read.
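
A before/after illustration:

```
package main

import (
	"fmt"
	"slices"
)

func main() {
	folders := []string{"default", "photos", "music"}

	// Before Go 1.21, a hand-written loop:
	found := false
	for _, f := range folders {
		if f == "photos" {
			found = true
			break
		}
	}

	// Since Go 1.21, a single standard library call:
	fmt.Println(found, slices.Contains(folders, "photos")) // true true
}
```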
2025-05-23 10:36:06 +00:00
Jakob Borg
2532ac35cf build(deps): update dependency due to build breakage (#10120) 2025-05-21 06:52:29 +00:00
Jakob Borg
bcd30ceaec chore: move golangci-lint & meta to separate PR-only workflow (#10119)
For now. Existing code is not golangci-lint clean, but new PRs should
be, ideally.
2025-05-21 08:32:49 +02:00
Jakob Borg
9a3493c2f4 build: reactivate golangci-lint (#10118)
With DeepSource becoming (imho) less and less useful, let's get this one
back on track. It will likely require adjusting over time.
2025-05-20 14:03:43 +02:00
André Colomb
fa404d5a0d chore(gui): add Serbian (sr) translation template (#10116)
Based on user request from Weblate, user `@vlazic`.
2025-05-19 21:06:38 +00:00
Syncthing Release Automation
73ad18fbfb chore(gui, man, authors): update docs, translations, and contributors 2025-05-19 03:56:31 +00:00
Syncthing Release Automation
1dd264894a chore(gui, man, authors): update docs, translations, and contributors 2025-05-12 03:54:02 +00:00
Marcus B Spencer
8c3d2f3bc5 fix(config): mark audit log options as needing restart (fixes #10099) (#10100)
### Testing

Change the `auditEnabled` option and you should get a prompt in the Web
GUI.
Restart and change the `auditFile` option, and you should get that same
prompt.

The prompt you should get is shown in the screenshots below.

### Screenshots


![Screenshot_20250507_122546](https://github.com/user-attachments/assets/23ce7c42-5e60-4f88-ac58-f312a9a1f5cc)

Co-authored-by: Jakob Borg <jakob@kastelo.net>
2025-05-09 10:49:11 +00:00
Hazem Krimi
702ed8ecc1 fix(config): deep copy configuration defaults (fixes #9916) (#10101)
### Purpose

Setting the default configuration was not working properly since the
defaults struct was not deeply copied.

### Testing

Try running commands to change the default configuration and either
inspect `config.xml` or the `/rest/config` result to see the applied
changes.
Example:
```
./syncthing cli config defaults folder versioning params set keep 5
```
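
The underlying pitfall, in a self-contained sketch (the struct is
illustrative, not Syncthing's actual config types): a plain struct
assignment shares any maps and slices with the original, so mutating
the "copy" silently mutates the defaults too.

```
package main

import "fmt"

type versioning struct {
	Params map[string]string // reference type: shared by shallow copies
}

func main() {
	defaults := versioning{Params: map[string]string{"keep": "3"}}

	shallow := defaults // copies the struct, shares the map
	shallow.Params["keep"] = "5"
	fmt.Println(defaults.Params["keep"]) // "5": the defaults changed too

	deep := versioning{Params: make(map[string]string, len(defaults.Params))}
	for k, v := range defaults.Params {
		deep.Params[k] = v // clone reference types for a true deep copy
	}
	deep.Params["keep"] = "10"
	fmt.Println(defaults.Params["keep"]) // still "5"
}
```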
2025-05-09 07:40:32 +02:00
Syncthing Release Automation
b038650810 chore(gui, man, authors): update docs, translations, and contributors 2025-05-05 03:52:42 +00:00
Hazem Krimi
a16bf555c0 feat(gui): close a modal when pressing ESC after switching modal tabs (fixes #9489) (#10092)
### Purpose

As stated in #9489, after clicking on a tab link to switch tabs in a
modal, you can no longer close the modal by pressing the ESC key unless
you click anywhere on the modal to focus it again.

### Testing

- Click on a modal that has tabs, like "Settings" or "Add Folder",
switch tabs, then press ESC.
- Check that clicking outside of the modal, on the backdrop, still
closes the modal.

### Demo


https://github.com/user-attachments/assets/a010db9a-72f7-4160-a7db-ddfebffb4834
2025-05-02 14:53:54 +00:00
Jakob Borg
cd6ea60fa1 build(deps): update dependencies (#10091)
Without bumping Go version
2025-05-01 18:15:37 +00:00
domain
0bf21d9db2 fix(strelaysrv): make the session limiter session-dependent (fixes #10072) (#10073)
### Purpose

Make the session limiter only apply to current session.
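
A hedged sketch of the distinction, assuming a token-bucket limiter
like golang.org/x/time/rate (the real limiter lives in strelaysrv): the
per-session limiter must be constructed fresh for each session; a
single shared instance would just act as a second global limit.

```
import "golang.org/x/time/rate"

// One limiter shared by all sessions (the -global-rate flag)...
var globalLimiter = rate.NewLimiter(50_000_000, 50_000_000)

// ...and a fresh limiter per session (the -per-session-rate flag).
// Burst sizes here are arbitrary values for the sketch.
func newSessionLimiter() *rate.Limiter {
	return rate.NewLimiter(6_250_000, 6_250_000)
}
```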

### Testing

Relay 2 or more sessions and check whether the sum of the connection
speeds can exceed the specified per-session rate.

2 sessions (-global-rate=50000000 and -per-session-rate=6250000):


![图片](https://github.com/user-attachments/assets/133e531a-ed49-4890-aef7-821c628bcfc8)

1 session (-global-rate=50000000 and -per-session-rate=6250000):


![图片](https://github.com/user-attachments/assets/ac89ea53-2d8e-4347-9bbc-4780d85e38d7)
2025-04-30 14:25:01 +00:00
Jakob Borg
f61843ef2e build: artifact uploads destination OCI 2025-04-29 14:01:25 -05:00
Syncthing Release Automation
23e8366f8d chore(gui, man, authors): update docs, translations, and contributors 2025-04-28 03:52:12 +00:00
Ross Smith II
93e72cc83f chore(gui): use go list --deps for dependency list (#10071) 2025-04-26 02:24:31 +00:00
Marcus B Spencer
190dff142c feat(config): add option for audit file (fixes #9481) (#10066) 2025-04-23 22:32:23 +07:00
bt90
c667ada63a chore(api): log X-Forwarded-For (#10035)
### Purpose

Fix https://github.com/syncthing/syncthing/issues/9336

The `emitLoginAttempt` function now checks for the presence of an
`X-Forwarded-For` header. The IP from this header is only used if the
connecting host is either on loopback or on the same LAN.

In the case of a host pretending to be a proxy, we'd still have both IPs
in the logs, which should make this much less critical from a security
standpoint.
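
A hedged sketch of that trust rule (simplified; `ip.IsPrivate` stands
in for Syncthing's own LAN classification):

```
import (
	"net"
	"net/http"
)

// forwardedFor returns the address to log and, when trusted, the proxy
// that supplied it. X-Forwarded-For is honored only if the direct peer
// is loopback or on a private network.
func forwardedFor(r *http.Request) (remote, proxy string) {
	host, _, err := net.SplitHostPort(r.RemoteAddr)
	if err != nil {
		return r.RemoteAddr, ""
	}
	xff := r.Header.Get("X-Forwarded-For")
	ip := net.ParseIP(host)
	if xff == "" || ip == nil || !(ip.IsLoopback() || ip.IsPrivate()) {
		return host, ""
	}
	return xff, host
}
```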

### Testing

1. directly via localhost
2. via proxy on localhost

#### Logs

```
[3JPXJ] 2025/04/11 15:00:40 INFO: Wrong credentials supplied during API authorization from 127.0.0.1
[3JPXJ] 2025/04/11 15:03:04 INFO: Wrong credentials supplied during API authorization from 192.168.178.5 proxied by 127.0.0.1
```

#### Event API

```
  {
    "id": 23,
    "globalID": 23,
    "time": "2025-04-11T15:00:40.578577402+02:00",
    "type": "LoginAttempt",
    "data": {
      "remoteAddress": "127.0.0.1",
      "success": false,
      "username": "sdfsd"
    }
  },
  {
    "id": 24,
    "globalID": 24,
    "time": "2025-04-11T15:03:04.423403976+02:00",
    "type": "LoginAttempt",
    "data": {
      "proxy": "127.0.0.1",
      "remoteAddress": "192.168.178.5",
      "success": false,
      "username": "sdfsd"
    }
  }
```

### Documentation

https://github.com/syncthing/docs/pull/907

---------

Co-authored-by: Jakob Borg <jakob@kastelo.net>
2025-04-23 06:01:13 +00:00
Ross Smith II
93ae30d889 chore(gui): update dependency copyrights, add script for periodic maintenance (#10067)
### Purpose

This PR parses the output of `go mod graph` and updates the copyright
list in our [about
modal](486eebc4ac/gui/default/syncthing/core/aboutModalView.html (L38)).

If there are no changes, the program is silent. Otherwise, it reports
what additions and deletions it made. It does not rewrite existing
copyright notices, but it does remove notices that we no longer use, as
well as add new ones.

It uses a GitHub API to try to determine the copyright string from the
license file. If one is not found, it defaults to `Copyright &copy;
<this_year> the <owner/repo> authors`. If a proper copyright is found
later, simply update the notice in `aboutModalView.html` manually, and
it will be used.
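
A minimal sketch of the first step, collecting unique module paths from
`go mod graph` (each output line is a "parent child" pair of
`path@version` entries):

```
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("go", "mod", "graph").Output()
	if err != nil {
		panic(err)
	}
	mods := map[string]bool{}
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		for _, field := range strings.Fields(sc.Text()) {
			path, _, _ := strings.Cut(field, "@") // strip the version suffix
			mods[path] = true
		}
	}
	fmt.Println(len(mods), "unique modules")
}
```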
2025-04-23 12:41:05 +07:00
Syncthing Release Automation
486eebc4ac chore(gui, man, authors): update docs, translations, and contributors 2025-04-21 03:52:26 +00:00
Jakob Borg
ff33d976d1 chore(syncthing): remove support for TLS 1.2 sync connections (#10064)
This cleans up the option to allow old TLS 1.2 sync connections. The
flag existed for compatibility with old Syncthing versions that don't
support TLS 1.3, which is approximately Syncthing 1.2.2 (September 2019)
and older. ("Approximately" because it depends on the Go version it's
built with and that's when we switched to building with Go 1.13.)

Ref #10062 because it reminded me this exists.
2025-04-21 10:30:43 +07:00
TheCreeper
69890b4282 fix(osutil): give threads same I/O priority on Linux (#10063) 2025-04-21 02:30:52 +00:00
Jakob Borg
533c9a6ab0 chore(stun): switch lookup warning to debug level 2025-04-17 07:29:10 +07:00
Syncthing Release Automation
9521bb3931 chore(gui, man, authors): update docs, translations, and contributors 2025-04-14 03:51:01 +00:00
Jakob Borg
e46a0f99c3 chore: add missing copyright in new files from infra branch (#10055)
Let's see if it passes
2025-04-13 09:25:16 +00:00
Jakob Borg
ed97e365b2 Merge branch 'infrastructure'
* infrastructure:
  feat(stdiscosrv): configurable desired not-found rate
  chore(blobs): generalised blob storage
  chore(stdiscosrv): path style s3
  feat(ursv): add os/arch/distribution metric
  chore(strelaypoolsrv): limit number of returned relays
  build(infra): run in Docker environment for pushes
  chore(stupgrades): expose latest release as a metric
2025-04-13 09:41:45 +02:00
Jakob Borg
b4776ea4e0 feat(stdiscosrv): configurable desired not-found rate 2025-04-13 09:41:16 +02:00
Jakob Borg
b5ffd0a796 chore(blobs): generalised blob storage 2025-04-13 09:41:16 +02:00
Jakob Borg
c74299b59a chore(stdiscosrv): path style s3 2025-04-13 09:40:14 +02:00
Jakob Borg
8b6d837483 feat(ursv): add os/arch/distribution metric 2025-04-13 09:40:14 +02:00
Jakob Borg
3e74b3dee2 chore(strelaypoolsrv): limit number of returned relays
Avoid unnecessarily enormous responses by returning a random subset of
relays.
2025-04-13 09:40:14 +02:00
Jakob Borg
2902da996c build(infra): run in Docker environment for pushes 2025-04-13 09:40:14 +02:00
Jakob Borg
f6f144bf17 chore(stupgrades): expose latest release as a metric 2025-04-13 09:40:11 +02:00
Sébastien WENSKE
ab5c42f4a0 feat(api, gui): allow authentication bypass for metrics (#10045)
### Purpose

Give the ability to skip authentication for Prometheus metrics
("/metrics").

### Testing

When authentication is enabled and "Metrics Without Auth" is checked
(not the default), the "/metrics" path remains accessible even when
disconnected.

### Screenshots


![image](https://github.com/user-attachments/assets/144b696b-dd72-46f4-94d5-cd21848e4a4c)

### Documentation

https://github.com/syncthing/docs/pull/906
2025-04-13 07:35:57 +00:00
Jakob Borg
7db3f7eaac Merge branch 'release-1.29.5'
* release-1.29.5:
  build: push artifacts to Azure (#10044)
  fix(syncthing): use separate lock file instead of locking the certificate (fixes #10053) (#10054)
2025-04-12 14:57:04 +02:00
Jakob Borg
f0b666269b build: push artifacts to Azure (#10044)
Provider migration
2025-04-12 14:55:24 +02:00
Jakob Borg
190a59842c fix(syncthing): use separate lock file instead of locking the certificate (fixes #10053) (#10054)
Apparently that nukes the cert under some circumstances on some Windows
🤷
2025-04-12 14:49:23 +02:00
Jakob Borg
40888c1a66 fix(syncthing): use separate lock file instead of locking the certificate (fixes #10053) (#10054)
Apparently that nukes the cert under some circumstances on some Windows
🤷
2025-04-12 14:46:57 +02:00
391 changed files with 58145 additions and 10601 deletions


@@ -1,6 +1,7 @@
name: Feature request
description: File a new feature request
labels: ["enhancement", "needs-triage"]
type: Feature
body:
- type: textarea


@@ -1,6 +1,7 @@
name: Bug report
description: If you're actually looking for support instead, see "I need help / I have a question".
labels: ["bug", "needs-triage"]
type: Bug
body:
- type: markdown
attributes:

.github/labeler.yml (new file)

@@ -0,0 +1,23 @@
version: 1
labels:
- label: enhancement
title: ^feat\b
- label: bug
title: ^fix\b
- label: documentation
title: ^docs\b
- label: chore
title: ^chore\b
- label: chore
title: ^refactor\b
- label: build
title: ^build\b
- label: dependencies
title: ^build\(deps\)\b

.github/release.yml (new file)

@@ -0,0 +1,17 @@
changelog:
exclude:
labels:
- dependencies
categories:
- title: Fixes
labels:
- bug
- title: Features
labels:
- enhancement
- title: Other
labels:
- '*'


@@ -21,7 +21,7 @@ jobs:
name: Build and push Docker images
if: github.repository == 'syncthing/syncthing'
runs-on: ubuntu-latest
environment: release
environment: docker
strategy:
matrix:
pkg:


@@ -3,6 +3,9 @@ name: Build Syncthing
on:
pull_request:
push:
branches-ignore:
- release
- release-rc*
workflow_call:
workflow_dispatch:
@@ -13,6 +16,8 @@ env:
GO_VERSION: "~1.24.0"
# Optimize compatibility on the slow archictures.
GO386: softfloat
GOARM: "5"
GOMIPS: softfloat
# Avoid hilarious amounts of obscuring log output when running tests.
@@ -22,8 +27,6 @@ env:
BUILD_USER: builder
BUILD_HOST: github.syncthing.net
TAGS: "netgo osusergo sqlite_omit_load_extension"
# A note on actions and third party code... The actions under actions/ (like
# `uses: actions/checkout`) are maintained by GitHub, and we need to trust
# GitHub to maintain their code and infrastructure or we're in deep shit in
@@ -85,27 +88,6 @@ jobs:
LOKI_USER: ${{ secrets.LOKI_USER }}
LOKI_PASSWORD: ${{ secrets.LOKI_PASSWORD }}
LOKI_LABELS: "go=${{ matrix.go }},runner=${{ matrix.runner }},repo=${{ github.repository }},ref=${{ github.ref }}"
CGO_ENABLED: "1"
#
# Meta checks for formatting, copyright, etc
#
correctness:
name: Check correctness
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache: false
check-latest: true
- name: Check correctness
run: |
go test -v ./meta
#
# The basic checks job is a virtual one that depends on the matrix tests,
@@ -121,7 +103,6 @@ jobs:
runs-on: ubuntu-latest
needs:
- build-test
- correctness
- package-linux
- package-cross
- package-source
@@ -137,8 +118,17 @@ jobs:
package-windows:
name: Package for Windows
runs-on: ubuntu-latest
runs-on: windows-latest
steps:
- name: Set git to use LF
# Without this, the checkout will happen with CRLF line endings,
# which is fine for the source code but messes up tests that depend
# on data on disk being as expected. Ideally, those tests should be
# fixed, but not today.
run: |
git config --global core.autocrlf false
git config --global core.eol lf
- uses: actions/checkout@v4
with:
fetch-depth: 0
@@ -150,14 +140,17 @@ jobs:
cache: false
check-latest: true
- uses: mlugg/setup-zig@v1
- name: Get actual Go version
run: |
go version
echo "GO_VERSION=$(go version | sed 's#^.*go##;s# .*##')" >> $GITHUB_ENV
- uses: actions/cache@v4
with:
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ runner.os }}-go-${{ env.GO_VERSION }}-package-windows-${{ hashFiles('**/go.sum') }}
~\AppData\Local\go-build
~\go\pkg\mod
key: ${{ runner.os }}-go-${{ env.GO_VERSION }}-package-${{ hashFiles('**/go.sum') }}
- name: Install dependencies
run: |
@@ -165,14 +158,15 @@ jobs:
- name: Create packages
run: |
for tgt in syncthing stdiscosrv strelaysrv ; do
go run build.go -tags "${{env.TAGS}}" -goos windows -goarch amd64 -cc "zig cc -target x86_64-windows" zip $tgt
go run build.go -tags "${{env.TAGS}}" -goos windows -goarch 386 -cc "zig cc -target x86-windows" zip $tgt
go run build.go -tags "${{env.TAGS}}" -goos windows -goarch arm64 -cc "zig cc -target aarch64-windows" zip $tgt
# go run build.go -tags "${{env.TAGS}}" -goos windows -goarch arm -cc "zig cc -target thumb-windows" zip $tgt # failes with linker errors
done
$targets = 'syncthing', 'stdiscosrv', 'strelaysrv'
$archs = 'amd64', 'arm', 'arm64', '386'
foreach ($arch in $archs) {
foreach ($tgt in $targets) {
go run build.go -goarch $arch zip $tgt
}
}
env:
CGO_ENABLED: "1"
CGO_ENABLED: "0"
- name: Archive artifacts
uses: actions/upload-artifact@v4
@@ -182,7 +176,7 @@ jobs:
codesign-windows:
name: Codesign for Windows
if: github.repository_owner == 'syncthing' && (github.event_name == 'push' || github.event_name == 'workflow_dispatch') && (github.ref == 'refs/heads/release' || startsWith(github.ref, 'refs/heads/release-') || startsWith(github.ref, 'refs/tags/v'))
if: (github.event_name == 'push' || github.event_name == 'workflow_dispatch') && (github.ref == 'refs/heads/release-nightly' || startsWith(github.ref, 'refs/tags/v'))
environment: release
runs-on: windows-latest
needs:
@@ -257,8 +251,6 @@ jobs:
go version
echo "GO_VERSION=$(go version | sed 's#^.*go##;s# .*##')" >> $GITHUB_ENV
- uses: mlugg/setup-zig@v1
- uses: actions/cache@v4
with:
path: |
@@ -268,25 +260,14 @@ jobs:
- name: Create packages
run: |
sudo apt-get install -y gcc-mips64-linux-gnuabi64 gcc-mips64el-linux-gnuabi64
for tgt in syncthing stdiscosrv strelaysrv ; do
go run build.go -tags "${{env.TAGS}}" -goos linux -goarch amd64 -cc "zig cc -target x86_64-linux-musl" tar "$tgt"
go run build.go -tags "${{env.TAGS}}" -goos linux -goarch 386 -cc "zig cc -target x86-linux-musl" tar "$tgt"
go run build.go -tags "${{env.TAGS}}" -goos linux -goarch arm -cc "zig cc -target arm-linux-musleabi" tar "$tgt"
go run build.go -tags "${{env.TAGS}}" -goos linux -goarch arm64 -cc "zig cc -target aarch64-linux-musl" tar "$tgt"
go run build.go -tags "${{env.TAGS}}" -goos linux -goarch mips -cc "zig cc -target mips-linux-musleabi" tar "$tgt"
go run build.go -tags "${{env.TAGS}}" -goos linux -goarch mipsle -cc "zig cc -target mipsel-linux-musleabi" tar "$tgt"
go run build.go -tags "${{env.TAGS}}" -goos linux -goarch mips64 -cc mips64-linux-gnuabi64-gcc tar "$tgt"
go run build.go -tags "${{env.TAGS}}" -goos linux -goarch mips64le -cc mips64el-linux-gnuabi64-gcc tar "$tgt"
go run build.go -tags "${{env.TAGS}}" -goos linux -goarch riscv64 -cc "zig cc -target riscv64-linux-musl" tar "$tgt"
go run build.go -tags "${{env.TAGS}}" -goos linux -goarch s390x -cc "zig cc -target s390x-linux-musl" tar "$tgt"
go run build.go -tags "${{env.TAGS}}" -goos linux -goarch loong64 -cc "zig cc -target loongarch64-linux-musl" tar "$tgt"
# go run build.go -tags "${{env.TAGS}}" -goos linux -goarch ppc64 -cc "zig cc -target powerpc64-linux-musl" tar "$tgt" # fails with linkmode not supported
go run build.go -tags "${{env.TAGS}}" -goos linux -goarch ppc64le -cc "zig cc -target powerpc64le-linux-musl" tar "$tgt"
archs=$(go tool dist list | grep linux | sed 's#linux/##')
for goarch in $archs ; do
for tgt in syncthing stdiscosrv strelaysrv ; do
go run build.go -goarch "$goarch" tar "$tgt"
done
done
env:
CGO_ENABLED: "1"
EXTRA_LDFLAGS: "-linkmode=external -extldflags=-static"
CGO_ENABLED: "0"
- name: Archive artifacts
uses: actions/upload-artifact@v4
@@ -302,10 +283,8 @@ jobs:
package-macos:
name: Package for macOS
if: (github.event_name == 'push' || github.event_name == 'workflow_dispatch') && (github.ref == 'refs/heads/release' || startsWith(github.ref, 'refs/heads/release-') || startsWith(github.ref, 'refs/tags/v'))
if: (github.event_name == 'push' || github.event_name == 'workflow_dispatch') && (github.ref == 'refs/heads/release-nightly' || startsWith(github.ref, 'refs/tags/v'))
environment: release
env:
CODESIGN_IDENTITY: ${{ secrets.CODESIGN_IDENTITY }}
runs-on: macos-latest
steps:
- uses: actions/checkout@v4
@@ -332,7 +311,6 @@ jobs:
key: ${{ runner.os }}-go-${{ env.GO_VERSION }}-package-${{ hashFiles('**/go.sum') }}
- name: Import signing certificate
if: env.CODESIGN_IDENTITY != ''
run: |
# Set up a run-specific keychain, making it available for the
# `codesign` tool.
@@ -360,7 +338,7 @@ jobs:
- name: Create package (amd64)
run: |
for tgt in syncthing stdiscosrv strelaysrv ; do
go run build.go -tags "${{env.TAGS}}" -goarch amd64 zip "$tgt"
go run build.go -goarch amd64 zip "$tgt"
done
env:
CGO_ENABLED: "1"
@@ -376,7 +354,7 @@ jobs:
EOT
chmod 755 xgo.sh
for tgt in syncthing stdiscosrv strelaysrv ; do
go run build.go -tags "${{env.TAGS}}" -gocmd ./xgo.sh -goarch arm64 zip "$tgt"
go run build.go -gocmd ./xgo.sh -goarch arm64 zip "$tgt"
done
env:
CGO_ENABLED: "1"
@@ -405,7 +383,7 @@ jobs:
notarize-macos:
name: Notarize for macOS
if: github.repository_owner == 'syncthing' && (github.event_name == 'push' || github.event_name == 'workflow_dispatch') && (github.ref == 'refs/heads/release' || startsWith(github.ref, 'refs/heads/release-') || startsWith(github.ref, 'refs/tags/v'))
if: (github.event_name == 'push' || github.event_name == 'workflow_dispatch') && (github.ref == 'refs/heads/release-nightly' || startsWith(github.ref, 'refs/tags/v'))
environment: release
needs:
- package-macos
@@ -487,7 +465,7 @@ jobs:
goarch="${plat#*/}"
echo "::group ::$plat"
for tgt in syncthing stdiscosrv strelaysrv ; do
if ! go run build.go -goos "$goos" -goarch "$goarch" tar "$tgt" ; then
if ! go run build.go -goos "$goos" -goarch "$goarch" tar "$tgt" 2>/dev/null; then
echo "::warning ::Failed to build $tgt for $plat"
fi
done
@@ -549,7 +527,7 @@ jobs:
sign-for-upgrade:
name: Sign for upgrade
if: github.repository_owner == 'syncthing' && (github.event_name == 'push' || github.event_name == 'workflow_dispatch') && (github.ref == 'refs/heads/release' || startsWith(github.ref, 'refs/heads/release-') || startsWith(github.ref, 'refs/tags/v'))
if: (github.event_name == 'push' || github.event_name == 'workflow_dispatch') && (github.ref == 'refs/heads/release-nightly' || startsWith(github.ref, 'refs/tags/v'))
environment: release
needs:
- codesign-windows
@@ -667,8 +645,6 @@ jobs:
run: |
gem install fpm
- uses: mlugg/setup-zig@v1
- uses: actions/cache@v4
with:
path: |
@@ -676,17 +652,15 @@ jobs:
~/go/pkg/mod
key: ${{ runner.os }}-go-${{ env.GO_VERSION }}-debian-${{ hashFiles('**/go.sum') }}
- name: Package for Debian (CGO)
- name: Package for Debian
run: |
for tgt in syncthing stdiscosrv strelaysrv ; do
go run build.go -no-upgrade -installsuffix=no-upgrade -tags "${{env.TAGS}}" -goos linux -goarch amd64 -cc "zig cc -target x86_64-linux-musl" deb "$tgt"
go run build.go -no-upgrade -installsuffix=no-upgrade -tags "${{env.TAGS}}" -goos linux -goarch arm -cc "zig cc -target arm-linux-musleabi" deb "$tgt"
go run build.go -no-upgrade -installsuffix=no-upgrade -tags "${{env.TAGS}}" -goos linux -goarch arm64 -cc "zig cc -target aarch64-linux-musl" deb "$tgt"
for arch in amd64 i386 armhf armel arm64 ; do
for tgt in syncthing stdiscosrv strelaysrv ; do
go run build.go -no-upgrade -installsuffix=no-upgrade -goarch "$arch" deb "$tgt"
done
done
env:
BUILD_USER: debian
CGO_ENABLED: "1"
EXTRA_LDFLAGS: "-linkmode=external -extldflags=-static"
- name: Archive artifacts
uses: actions/upload-artifact@v4
@@ -700,7 +674,7 @@ jobs:
publish-nightly:
name: Publish nightly build
if: github.repository_owner == 'syncthing' && (github.event_name == 'push' || github.event_name == 'workflow_dispatch') && startsWith(github.ref, 'refs/heads/release-nightly')
if: (github.event_name == 'push' || github.event_name == 'workflow_dispatch') && startsWith(github.ref, 'refs/heads/release-nightly')
environment: release
needs:
- sign-for-upgrade
@@ -734,12 +708,15 @@ jobs:
- name: Push artifacts
uses: docker://docker.io/rclone/rclone:latest
env:
RCLONE_CONFIG_OBJSTORE_TYPE: ${{ secrets.AZUREBLOB_TYPE }}
RCLONE_CONFIG_OBJSTORE_ACCOUNT: ${{ secrets.AZUREBLOB_ACCOUNT }}
RCLONE_CONFIG_OBJSTORE_KEY: ${{ secrets.AZUREBLOB_KEY }}
RCLONE_AZUREBLOB_ACCESS_TIER: hot
RCLONE_CONFIG_OBJSTORE_TYPE: s3
RCLONE_CONFIG_OBJSTORE_PROVIDER: ${{ secrets.S3_PROVIDER }}
RCLONE_CONFIG_OBJSTORE_ACCESS_KEY_ID: ${{ secrets.S3_ACCESS_KEY_ID }}
RCLONE_CONFIG_OBJSTORE_SECRET_ACCESS_KEY: ${{ secrets.S3_SECRET_ACCESS_KEY }}
RCLONE_CONFIG_OBJSTORE_ENDPOINT: ${{ secrets.S3_ENDPOINT }}
RCLONE_CONFIG_OBJSTORE_REGION: ${{ secrets.S3_REGION }}
RCLONE_CONFIG_OBJSTORE_ACL: public-read
with:
args: sync -v packages objstore:nightly
args: sync -v --no-update-modtime packages objstore:nightly
#
# Push release artifacts to Spaces
@@ -747,8 +724,10 @@ jobs:
publish-release-files:
name: Publish release files
if: github.repository_owner == 'syncthing' && (github.event_name == 'push' || github.event_name == 'workflow_dispatch') && (github.ref == 'refs/heads/release' || startsWith(github.ref, 'refs/tags/v'))
if: (github.event_name == 'push' || github.event_name == 'workflow_dispatch') && (github.ref == 'refs/heads/release' || startsWith(github.ref, 'refs/tags/v'))
environment: release
permissions:
contents: write
needs:
- sign-for-upgrade
- package-debian
@@ -785,22 +764,64 @@ jobs:
- name: Push to object store (${{ env.VERSION }})
uses: docker://docker.io/rclone/rclone:latest
env:
RCLONE_CONFIG_OBJSTORE_TYPE: ${{ secrets.AZUREBLOB_TYPE }}
RCLONE_CONFIG_OBJSTORE_ACCOUNT: ${{ secrets.AZUREBLOB_ACCOUNT }}
RCLONE_CONFIG_OBJSTORE_KEY: ${{ secrets.AZUREBLOB_KEY }}
RCLONE_AZUREBLOB_ACCESS_TIER: cool
RCLONE_CONFIG_OBJSTORE_TYPE: s3
RCLONE_CONFIG_OBJSTORE_PROVIDER: ${{ secrets.S3_PROVIDER }}
RCLONE_CONFIG_OBJSTORE_ACCESS_KEY_ID: ${{ secrets.S3_ACCESS_KEY_ID }}
RCLONE_CONFIG_OBJSTORE_SECRET_ACCESS_KEY: ${{ secrets.S3_SECRET_ACCESS_KEY }}
RCLONE_CONFIG_OBJSTORE_ENDPOINT: ${{ secrets.S3_ENDPOINT }}
RCLONE_CONFIG_OBJSTORE_REGION: ${{ secrets.S3_REGION }}
RCLONE_CONFIG_OBJSTORE_ACL: public-read
with:
args: sync -v packages objstore:release/${{ env.VERSION }}
args: sync -v --no-update-modtime packages objstore:release/${{ env.VERSION }}
- name: Push to object store (latest)
uses: docker://docker.io/rclone/rclone:latest
env:
RCLONE_CONFIG_OBJSTORE_TYPE: ${{ secrets.AZUREBLOB_TYPE }}
RCLONE_CONFIG_OBJSTORE_ACCOUNT: ${{ secrets.AZUREBLOB_ACCOUNT }}
RCLONE_CONFIG_OBJSTORE_KEY: ${{ secrets.AZUREBLOB_KEY }}
RCLONE_AZUREBLOB_ACCESS_TIER: hot
RCLONE_CONFIG_OBJSTORE_TYPE: s3
RCLONE_CONFIG_OBJSTORE_PROVIDER: ${{ secrets.S3_PROVIDER }}
RCLONE_CONFIG_OBJSTORE_ACCESS_KEY_ID: ${{ secrets.S3_ACCESS_KEY_ID }}
RCLONE_CONFIG_OBJSTORE_SECRET_ACCESS_KEY: ${{ secrets.S3_SECRET_ACCESS_KEY }}
RCLONE_CONFIG_OBJSTORE_ENDPOINT: ${{ secrets.S3_ENDPOINT }}
RCLONE_CONFIG_OBJSTORE_REGION: ${{ secrets.S3_REGION }}
RCLONE_CONFIG_OBJSTORE_ACL: public-read
with:
args: sync -v objstore:release/${{ env.VERSION }} objstore:release/latest
args: sync -v --no-update-modtime objstore:release/${{ env.VERSION }} objstore:release/latest
- name: Create GitHub releases and push binaries
run: |
maybePrerelease=""
if [[ $VERSION == *-* ]]; then
maybePrerelease="--prerelease"
fi
export GH_PROMPT_DISABLED=1
if ! gh release view --json name "$VERSION" >/dev/null 2>&1 ; then
gh release create "$VERSION" \
$maybePrerelease \
--title "$VERSION" \
--notes-from-tag
fi
gh release upload --clobber "$VERSION" \
packages/*.asc packages/*.json \
packages/syncthing-*.tar.gz \
packages/syncthing-*.zip \
packages/syncthing_*.deb
PKGS=$(pwd)/packages
cd /tmp # gh will not release for repo x while inside repo y
for repo in relaysrv discosrv ; do
export GH_REPO="syncthing/$repo"
if ! gh release view --json name "$VERSION" >/dev/null 2>&1 ; then
gh release create "$VERSION" \
$maybePrerelease \
--title "$VERSION" \
--notes "https://github.com/syncthing/syncthing/releases/tag/$VERSION"
fi
gh release upload --clobber "$VERSION" \
$PKGS/*.asc \
$PKGS/*${repo}*
done
env:
GH_TOKEN: ${{ secrets.ACTIONS_GITHUB_TOKEN }}
#
# Push Debian/APT archive
@@ -808,7 +829,7 @@ jobs:
publish-apt:
name: Publish APT
if: github.repository_owner == 'syncthing' && (github.event_name == 'push' || github.event_name == 'workflow_dispatch') && (github.ref == 'refs/heads/release' || startsWith(github.ref, 'refs/heads/release-') || startsWith(github.ref, 'refs/tags/v'))
if: (github.event_name == 'push' || github.event_name == 'workflow_dispatch') && (github.ref == 'refs/heads/release-nightly' || startsWith(github.ref, 'refs/tags/v'))
environment: release
needs:
- package-debian
@@ -835,9 +856,7 @@ jobs:
- name: Prepare packages
run: |
kind=stable
if [[ $VERSION == v2* ]] ; then
kind=v2
elif [[ $VERSION == *-rc.[0-9] ]] ; then
if [[ $VERSION == *-rc.[0-9] ]] ; then
kind=candidate
elif [[ $VERSION == *-* ]] ; then
kind=nightly
@@ -849,9 +868,13 @@ jobs:
- name: Pull archive
uses: docker://docker.io/rclone/rclone:latest
env:
RCLONE_CONFIG_OBJSTORE_TYPE: ${{ secrets.AZUREBLOB_TYPE }}
RCLONE_CONFIG_OBJSTORE_ACCOUNT: ${{ secrets.AZUREBLOB_ACCOUNT }}
RCLONE_CONFIG_OBJSTORE_KEY: ${{ secrets.AZUREBLOB_KEY }}
RCLONE_CONFIG_OBJSTORE_TYPE: s3
RCLONE_CONFIG_OBJSTORE_PROVIDER: ${{ secrets.S3_PROVIDER }}
RCLONE_CONFIG_OBJSTORE_ACCESS_KEY_ID: ${{ secrets.S3_ACCESS_KEY_ID }}
RCLONE_CONFIG_OBJSTORE_SECRET_ACCESS_KEY: ${{ secrets.S3_SECRET_ACCESS_KEY }}
RCLONE_CONFIG_OBJSTORE_ENDPOINT: ${{ secrets.S3_ENDPOINT }}
RCLONE_CONFIG_OBJSTORE_REGION: ${{ secrets.S3_REGION }}
RCLONE_CONFIG_OBJSTORE_ACL: public-read
with:
args: sync objstore:apt/dists dists
@@ -868,12 +891,15 @@ jobs:
- name: Push archive
uses: docker://docker.io/rclone/rclone:latest
env:
RCLONE_CONFIG_OBJSTORE_TYPE: ${{ secrets.AZUREBLOB_TYPE }}
RCLONE_CONFIG_OBJSTORE_ACCOUNT: ${{ secrets.AZUREBLOB_ACCOUNT }}
RCLONE_CONFIG_OBJSTORE_KEY: ${{ secrets.AZUREBLOB_KEY }}
RCLONE_AZUREBLOB_ACCESS_TIER: hot
RCLONE_CONFIG_OBJSTORE_TYPE: s3
RCLONE_CONFIG_OBJSTORE_PROVIDER: ${{ secrets.S3_PROVIDER }}
RCLONE_CONFIG_OBJSTORE_ACCESS_KEY_ID: ${{ secrets.S3_ACCESS_KEY_ID }}
RCLONE_CONFIG_OBJSTORE_SECRET_ACCESS_KEY: ${{ secrets.S3_SECRET_ACCESS_KEY }}
RCLONE_CONFIG_OBJSTORE_ENDPOINT: ${{ secrets.S3_ENDPOINT }}
RCLONE_CONFIG_OBJSTORE_REGION: ${{ secrets.S3_REGION }}
RCLONE_CONFIG_OBJSTORE_ACL: public-read
with:
args: sync -v dists objstore:apt/dists
args: sync -v --no-update-modtime dists objstore:apt/dists
#
# Build and push to Docker Hub
@@ -882,10 +908,8 @@ jobs:
docker-syncthing:
name: Build and push Docker images
runs-on: ubuntu-latest
if: github.event_name == 'push' || github.event_name == 'workflow_dispatch'
if: (github.event_name == 'push' || github.event_name == 'workflow_dispatch') && (github.ref == 'refs/heads/main' || github.ref == 'refs/heads/release-nightly' || github.ref == 'refs/heads/infrastructure' || startsWith(github.ref, 'refs/tags/v'))
environment: docker
env:
DOCKERHUB_USERNAME: ${{ secrets.DOCKERHUB_USERNAME }}
permissions:
contents: read
packages: write
@@ -898,13 +922,13 @@ jobs:
include:
- pkg: syncthing
dockerfile: Dockerfile
image: syncthing
image: syncthing/syncthing
- pkg: strelaysrv
dockerfile: Dockerfile.strelaysrv
image: relaysrv
image: syncthing/relaysrv
- pkg: stdiscosrv
dockerfile: Dockerfile.stdiscosrv
image: discosrv
image: syncthing/discosrv
steps:
- uses: actions/checkout@v4
with:
@@ -922,8 +946,6 @@ jobs:
go version
echo "GO_VERSION=$(go version | sed 's#^.*go##;s# .*##')" >> $GITHUB_ENV
- uses: mlugg/setup-zig@v1
- uses: actions/cache@v4
with:
path: |
@@ -931,34 +953,33 @@ jobs:
~/go/pkg/mod
key: ${{ runner.os }}-go-${{ env.GO_VERSION }}-docker-${{ matrix.pkg }}-${{ hashFiles('**/go.sum') }}
- name: Build binaries (CGO)
- name: Build binaries
run: |
# amd64
go run build.go -goos linux -goarch amd64 -tags "${{env.TAGS}}" -cc "zig cc -target x86_64-linux-musl" -no-upgrade build ${{ matrix.pkg }}
mv ${{ matrix.pkg }} ${{ matrix.pkg }}-linux-amd64
# arm64
go run build.go -goos linux -goarch arm64 -tags "${{env.TAGS}}" -cc "zig cc -target aarch64-linux-musl" -no-upgrade build ${{ matrix.pkg }}
mv ${{ matrix.pkg }} ${{ matrix.pkg }}-linux-arm64
# arm
go run build.go -goos linux -goarch arm -tags "${{env.TAGS}}" -cc "zig cc -target arm-linux-musleabi" -no-upgrade build ${{ matrix.pkg }}
mv ${{ matrix.pkg }} ${{ matrix.pkg }}-linux-arm
for arch in amd64 arm64 arm; do
go run build.go -goos linux -goarch "$arch" -no-upgrade build ${{ matrix.pkg }}
mv ${{ matrix.pkg }} ${{ matrix.pkg }}-linux-"$arch"
done
env:
CGO_ENABLED: "1"
CGO_ENABLED: "0"
BUILD_USER: docker
EXTRA_LDFLAGS: "-linkmode=external -extldflags=-static"
- name: Check if we will be able to push images
run: |
if [[ "${{ secrets.DOCKERHUB_TOKEN }}" != "" ]]; then
echo "DOCKER_PUSH=true" >> $GITHUB_ENV;
fi
- name: Login to Docker Hub
uses: docker/login-action@v3
if: env.DOCKERHUB_USERNAME != ''
if: env.DOCKER_PUSH == 'true'
with:
registry: docker.io
username: ${{ env.DOCKERHUB_USERNAME }}
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GHCR
uses: docker/login-action@v3
if: env.DOCKER_PUSH == 'true'
with:
registry: ghcr.io
username: ${{ github.actor }}
@@ -971,31 +992,18 @@ jobs:
run: |
version=$(go run build.go version)
version=${version#v}
repo=ghcr.io/${{ github.repository_owner }}/${{ matrix.image }}
ref="${{github.ref_name}}"
ref=${ref//\//-} # slashes to dashes
# List of tags for ghcr.io
if [[ $version == @([0-9]|[0-9][0-9]).@([0-9]|[0-9][0-9]).@([0-9]|[0-9][0-9]) ]] ; then
echo Release version, pushing to :latest and version tags
major=${version%.*.*}
minor=${version%.*}
tags=$repo:$version,$repo:$major,$repo:$minor,$repo:latest
tags=docker.io/${{ matrix.image }}:$version,ghcr.io/${{ matrix.image }}:$version,docker.io/${{ matrix.image }}:$major,ghcr.io/${{ matrix.image }}:$major,docker.io/${{ matrix.image }}:$minor,ghcr.io/${{ matrix.image }}:$minor,docker.io/${{ matrix.image }}:latest,ghcr.io/${{ matrix.image }}:latest
elif [[ $version == *-rc.@([0-9]|[0-9][0-9]) ]] ; then
tags=$repo:$version,$repo:rc
elif [[ $ref == "main" ]] ; then
tags=$repo:edge
echo Release candidate, pushing to :rc and version tags
tags=docker.io/${{ matrix.image }}:$version,ghcr.io/${{ matrix.image }}:$version,docker.io/${{ matrix.image }}:rc,ghcr.io/${{ matrix.image }}:rc
else
tags=$repo:$ref
echo Development version, pushing to :edge
tags=docker.io/${{ matrix.image }}:edge,ghcr.io/${{ matrix.image }}:edge
fi
# If we have a Docker Hub secret, also push to there.
if [[ $DOCKERHUB_USERNAME != "" ]] ; then
dockerhubtags="${tags//ghcr.io\/syncthing/docker.io\/syncthing}"
tags="$tags,$dockerhubtags"
fi
echo Pushing to $tags
echo "DOCKER_TAGS=$tags" >> $GITHUB_ENV
echo "VERSION=$version" >> $GITHUB_ENV
@@ -1005,8 +1013,8 @@ jobs:
context: .
file: ${{ matrix.dockerfile }}
platforms: linux/amd64,linux/arm64,linux/arm/7
push: ${{ env.DOCKER_PUSH == 'true' }}
tags: ${{ env.DOCKER_TAGS }}
push: true
labels: |
org.opencontainers.image.version=${{ env.VERSION }}
org.opencontainers.image.revision=${{ github.sha }}

.github/workflows/pr-linters.yaml (new file)

@@ -0,0 +1,49 @@
name: Run PR linters
on:
pull_request:
workflow_dispatch:
permissions:
contents: read
pull-requests: read
jobs:
#
# golangci-lint runs a suite of static analysis checks on the code
#
golangci:
runs-on: ubuntu-latest
name: Golangci-lint
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version: 'stable'
- name: ensure asset generation
run: go run build.go assets
- name: golangci-lint
uses: golangci/golangci-lint-action@v8
with:
only-new-issues: true
#
# Meta checks for formatting, copyright, etc
#
meta:
name: Meta checks
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version: 'stable'
- run: |
go run build.go assets
go test -v ./meta

.github/workflows/pr-metadata.yaml (new file)

@@ -0,0 +1,27 @@
name: PR metadata
on:
pull_request_target:
types:
- opened
- reopened
- edited
- synchronize
permissions:
contents: read
pull-requests: write
jobs:
#
# Set labels on PRs, which are then used to categorise release notes
#
labels:
name: Set labels
runs-on: ubuntu-latest
steps:
- uses: srvaroa/labeler@v1
env:
GITHUB_TOKEN: "${{ secrets.GITHUB_TOKEN }}"


@@ -0,0 +1,60 @@
name: Release Syncthing
on:
push:
branches:
- release
- release-rc*
permissions:
contents: write
jobs:
create-release-tag:
name: Create release tag
runs-on: ubuntu-latest
environment: release
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
ref: ${{ github.ref }} # https://github.com/actions/checkout/issues/882
token: ${{ secrets.ACTIONS_GITHUB_TOKEN }}
- uses: actions/setup-go@v5
with:
go-version: stable
- name: Determine version to release
run: |
if [[ "$GITHUB_REF_NAME" == "release" ]] ; then
next=$(go run ./script/next-version.go)
else
next=$(go run ./script/next-version.go --pre)
fi
echo "NEXT=$next" >> $GITHUB_ENV
echo "Next version is $next"
prev=$(git describe --exclude "*-*" --abbrev=0)
echo "PREV=$prev" >> $GITHUB_ENV
echo "Previous version is $prev"
- name: Determine release notes
run: |
go run ./script/relnotes.go --new-ver "$NEXT" --branch "$GITHUB_REF_NAME" --prev-ver "$PREV" > notes.md
env:
GITHUB_TOKEN: ${{ secrets.ACTIONS_GITHUB_TOKEN }}
- name: Create and push tag
run: |
git config --global user.name 'Syncthing Release Automation'
git config --global user.email 'release@syncthing.net'
git tag -a -F notes.md --cleanup=whitespace "$NEXT"
git push origin "$NEXT"
- name: Trigger the build
uses: benc-uk/workflow-dispatch@v1
with:
workflow: build-syncthing.yaml
ref: refs/tags/${{ env.NEXT }}
token: ${{ secrets.ACTIONS_GITHUB_TOKEN }}

.gitignore

@@ -17,4 +17,5 @@ deb
*.bz2
/repos
/proto/scripts/protoc-gen-gosyncthing
/gui/next-gen-gui
/compat.json


@@ -1,45 +1,67 @@
version: "2"
linters:
enable-all: true
default: all
disable:
- cyclop
- depguard
- err113
- exhaustive
- exhaustruct
- forbidigo
- funlen
- gci
- gochecknoglobals
- gochecknoinits
- gocognit
- goconst
- gocyclo
- godot
- godox
- gofmt
- goimports
- gomoddirectives
- inamedparam
- interfacebloat
- ireturn
- lll
- maintidx
- mnd
- musttag
- nestif
- nlreturn
- nonamedreturns
- paralleltest
- prealloc
- predeclared
- protogetter
- scopelint
- recvcheck
- revive
- tagalign
- tagliatelle
- testpackage
- usetesting # go 1.24
- varnamelen
- whitespace
- wrapcheck
- wsl
issues:
exclude-dirs:
- internal/gen
- cmd/dev
- repos
exclusions:
generated: lax
presets:
- comments
- common-false-positives
- legacy
- std-error-handling
paths:
- internal/gen
- cmd/dev
- repos
- third_party$
- builtin$
- examples$
formatters:
enable:
- gofumpt
exclusions:
generated: lax
paths:
- internal/gen
- cmd/dev
- repos
- third_party$
- builtin$
- examples$


@@ -48,6 +48,7 @@ Arkadiusz Tymiński <gevleeog@gmail.com>
Aroun <login@b-vo.fr>
Arthur Axel fREW Schmidt (frioux) <frew@afoolishmanifesto.com> <frioux@gmail.com>
Artur Zubilewicz <AkaZecik@users.noreply.github.com>
Ashish Bhate <bhate.ashish@gmail.com>
Audrius Butkevicius (AudriusButkevicius) <audrius.butkevicius@gmail.com> <github@audrius.rocks>
Aurélien Rainone <476650+arl@users.noreply.github.com>
BAHADIR YILMAZ <bahadiryilmaz32@gmail.com>
@@ -113,6 +114,7 @@ diemade <spamkill@posteo.ch>
digital <didev@dinid.net>
Dimitri Papadopoulos Orfanos <3234522+DimitriPapadopoulos@users.noreply.github.com>
Dmitry Saveliev (dsaveliev) <d.e.saveliev@gmail.com>
domain <32405309+szu17dmy@users.noreply.github.com>
Domenic Horner <domenic@tgxn.net>
Dominik Heidler (asdil12) <dominik@heidler.eu>
Elias Jarlebring (jarlebring) <jarlebring@gmail.com>
@@ -147,6 +149,7 @@ Gusted <postmaster@gusted.xyz> <williamzijl7@hotmail.com>
Han Boetes <han@boetes.org>
HansK-p <42314815+HansK-p@users.noreply.github.com>
Harrison Jones (harrisonhjones) <harrisonhjones@users.noreply.github.com>
Hazem Krimi <me@hazemkrimi.tech>
Heiko Zuerker (Smiley73) <heiko@zuerker.org>
Hireworks <129852174+hireworksltd@users.noreply.github.com>
Hugo Locurcio <hugo.locurcio@hugo.pro>
@@ -221,9 +224,10 @@ luzpaz <luzpaz@users.noreply.github.com>
Majed Abdulaziz (majedev) <majed.alhajry@gmail.com>
Marc Laporte (marclaporte) <marc@marclaporte.com> <marc@laporte.name>
Marc Pujol (kilburn) <kilburn@la3.org>
Marcel Meyer <mm.marcelmeyer@gmail.com>
Marcin Dziadus (marcindziadus) <dziadus.marcin@gmail.com>
marco-m <marco.molteni@laposte.net>
Marcus B Spencer <marcus@marcusspencer.xyz>
Marcus B Spencer <marcus@marcusspencer.xyz> <marcus@marcusspencer.us>
Marcus Legendre <marcus.legendre@gmail.com>
Mario Majila <mariustshipichik@gmail.com>
Mark Pulford (mpx) <mark@kyne.com.au>
@@ -277,6 +281,7 @@ Oyebanji Jacob Mayowa <oyebanji05@gmail.com>
Pablo <pbaeyens31+github@gmail.com>
Pascal Jungblut (pascalj) <github@pascalj.com> <mail@pascal-jungblut.com>
Paul Brit <paulbrit44@gmail.com>
Paul Donald <newtwen+github@gmail.com>
Pawel Palenica (qepasa) <pawelpalenica11@gmail.com>
Paweł Rozlach <vespian@users.noreply.github.com>
perewa <cavalcante.ten@gmail.com>
@@ -292,6 +297,7 @@ Pier Paolo Ramon <ramonpierre@gmail.com>
Piotr Bejda (piobpl) <piotrb10@gmail.com>
polyfloyd <polyfloyd@users.noreply.github.com>
Pramodh KP (pramodhkp) <pramodh.p@directi.com> <1507241+pramodhkp@users.noreply.github.com>
pullmerge <166967364+pullmerge@users.noreply.github.com>
Quentin Hibon <qh.public@yahoo.com>
Rahmi Pruitt <rjpruitt16@gmail.com>
red_led <red-led@users.noreply.github.com>
@@ -327,6 +333,7 @@ Syncthing Release Automation <release@syncthing.net>
Sébastien WENSKE <sebastien@wenske.fr>
Taylor Khan (nelsonkhan) <nelsonkhan@gmail.com>
Terrance <git@terrance.allofti.me>
TheCreeper <TheCreeper@users.noreply.github.com>
Thomas <9749173+uhthomas@users.noreply.github.com>
Thomas Hipp <thomashipp@gmail.com>
Tim Abell (timabell) <tim@timwise.co.uk>

build.go

@@ -38,26 +38,27 @@ import (
)
var (
goarch string
goos string
noupgrade bool
version string
goCmd string
race bool
debug = os.Getenv("BUILDDEBUG") != ""
extraTags string
installSuffix string
pkgdir string
cc string
run string
benchRun string
buildOut string
debugBinary bool
coverage bool
long bool
timeout = "120s"
longTimeout = "600s"
numVersions = 5
goarch string
goos string
noupgrade bool
version string
goCmd string
race bool
debug = os.Getenv("BUILDDEBUG") != ""
extraTags string
installSuffix string
pkgdir string
cc string
run string
benchRun string
buildOut string
debugBinary bool
coverage bool
long bool
timeout = "120s"
longTimeout = "600s"
numVersions = 5
withNextGenGUI = os.Getenv("BUILD_NEXT_GEN_GUI") != ""
)
type target struct {
@@ -288,10 +289,10 @@ func runCommand(cmd string, target target) {
build(target, tags)
case "test":
test(strings.Fields(extraTags), "github.com/syncthing/syncthing/internal/...", "github.com/syncthing/syncthing/lib/...", "github.com/syncthing/syncthing/cmd/...")
test(strings.Fields(extraTags), "github.com/syncthing/syncthing/lib/...", "github.com/syncthing/syncthing/cmd/...")
case "bench":
bench(strings.Fields(extraTags), "github.com/syncthing/syncthing/internal/...", "github.com/syncthing/syncthing/lib/...", "github.com/syncthing/syncthing/cmd/...")
bench(strings.Fields(extraTags), "github.com/syncthing/syncthing/lib/...", "github.com/syncthing/syncthing/cmd/...")
case "integration":
integration(false)
@@ -329,7 +330,7 @@ func runCommand(cmd string, target target) {
writeCompatJSON()
case "deb":
buildDeb(target)
buildDeb(target, tags)
case "vet":
metalintShort()
@@ -379,6 +380,7 @@ func parseFlags() {
flag.IntVar(&numVersions, "num-versions", numVersions, "Number of versions for changelog command")
flag.StringVar(&run, "run", "", "Specify which tests to run")
flag.StringVar(&benchRun, "bench", "", "Specify which benchmarks to run")
flag.BoolVar(&withNextGenGUI, "with-next-gen-gui", withNextGenGUI, "Also build 'newgui'")
flag.StringVar(&buildOut, "build-out", "", "Set the '-o' value for 'go build'")
flag.Parse()
}
@@ -451,6 +453,10 @@ func benchArgs() []string {
}
func install(target target, tags []string) {
if (target.name == "syncthing" || target.name == "") && !withNextGenGUI {
log.Println("Notice: Next generation GUI will not be built; see --with-next-gen-gui.")
}
lazyRebuildAssets()
tags = append(target.tags, tags...)
@@ -474,12 +480,16 @@ func install(target target, tags []string) {
defer shouldCleanupSyso(sysoPath)
}
args := []string{"install"}
args := []string{"install", "-v"}
args = appendParameters(args, tags, target.buildPkgs...)
runPrint(goCmd, args...)
}
func build(target target, tags []string) {
if (target.name == "syncthing" || target.name == "") && !withNextGenGUI {
log.Println("Notice: Next generation GUI will not be built; see --with-next-gen-gui.")
}
lazyRebuildAssets()
tags = append(target.tags, tags...)
@@ -502,7 +512,7 @@ func build(target target, tags []string) {
defer shouldCleanupSyso(sysoPath)
}
args := []string{"build"}
args := []string{"build", "-v"}
if buildOut != "" {
args = append(args, "-o", buildOut)
}
@@ -514,6 +524,13 @@ func setBuildEnvVars() {
os.Setenv("GOOS", goos)
os.Setenv("GOARCH", goarch)
os.Setenv("CC", cc)
if os.Getenv("CGO_ENABLED") == "" {
switch goos {
case "darwin", "solaris":
default:
os.Setenv("CGO_ENABLED", "0")
}
}
}
func appendParameters(args []string, tags []string, pkgs ...string) []string {
@@ -592,7 +609,7 @@ func buildZip(target target, tags []string) {
fmt.Println(filename)
}
func buildDeb(target target) {
func buildDeb(target target, tags []string) {
os.RemoveAll("deb")
// "goarch" here is set to whatever the Debian packages expect. We correct
@@ -606,7 +623,7 @@ func buildDeb(target target) {
goarch = "arm"
}
build(target, []string{"noupgrade"})
build(target, append(tags, "noupgrade"))
for i := range target.installationFiles {
target.installationFiles[i].src = strings.Replace(target.installationFiles[i].src, "{{binary}}", target.BinaryName(), 1)
@@ -732,9 +749,12 @@ func shouldBuildSyso(dir string) (string, error) {
sysoPath := filepath.Join(dir, "cmd", "syncthing", "resource.syso")
// See https://github.com/josephspurrier/goversioninfo#command-line-flags
arm := strings.HasPrefix(goarch, "arm")
a64 := strings.Contains(goarch, "64")
if _, err := runError("goversioninfo", "-o", sysoPath, fmt.Sprintf("-arm=%v", arm), fmt.Sprintf("-64=%v", a64)); err != nil {
armOption := ""
if strings.Contains(goarch, "arm") {
armOption = "-arm=true"
}
if _, err := runError("goversioninfo", "-o", sysoPath, armOption); err != nil {
return "", errors.New("failed to create " + sysoPath + ": " + err.Error())
}
@@ -806,11 +826,43 @@ func lazyRebuildAssets() {
shouldRebuild := shouldRebuildAssets("lib/api/auto/gui.files.go", "gui") ||
shouldRebuildAssets("cmd/infra/strelaypoolsrv/auto/gui.files.go", "cmd/infra/strelaypoolsrv/gui")
if withNextGenGUI {
shouldRebuild = buildNextGenGUI() || shouldRebuild
}
if shouldRebuild {
rebuildAssets()
}
}
func buildNextGenGUI() bool {
// Check if we need to run the npm process, and if so also set the flag
// to rebuild Go assets afterwards. The index.html is regenerated every
// time by the build process. This assumes the new GUI ends up in
// next-gen-gui/dist/next-gen-gui.
if !shouldRebuildAssets("gui/next-gen-gui/index.html", "next-gen-gui") {
// The GUI is up to date.
return false
}
runPrintInDir("next-gen-gui", "npm", "install")
runPrintInDir("next-gen-gui", "npm", "run", "build", "--", "--prod", "--subresource-integrity")
rmr("gui/tech-ui")
for _, src := range listFiles("next-gen-gui/dist") {
rel, _ := filepath.Rel("next-gen-gui/dist", src)
dst := filepath.Join("gui", rel)
if err := copyFile(src, dst, 0o644); err != nil {
fmt.Println("copy:", err)
os.Exit(1)
}
}
return true
}
func shouldRebuildAssets(target, srcdir string) bool {
info, err := os.Stat(target)
if err != nil {


@@ -23,6 +23,7 @@ case "${1:-default}" in
prerelease)
script authors
script copyrights
build weblate
pushd man ; ./refresh.sh ; popd
git add -A gui man AUTHORS


@@ -72,7 +72,7 @@ func main() {
if *standardBlocks || blockSize < protocol.MinBlockSize {
blockSize = protocol.BlockSize(fi.Size())
}
bs, err := scanner.Blocks(context.TODO(), fd, blockSize, fi.Size(), nil)
bs, err := scanner.Blocks(context.TODO(), fd, blockSize, fi.Size(), nil, true)
if err != nil {
log.Fatal(err)
}


@@ -8,6 +8,7 @@ package main
import (
"bytes"
"cmp"
"compress/gzip"
"context"
"io"
@@ -15,7 +16,7 @@ import (
"math"
"os"
"path/filepath"
"sort"
"slices"
"time"
)
@@ -177,8 +178,8 @@ func (d *diskStore) inventory() error {
})
return nil
})
sort.Slice(d.currentFiles, func(i, j int) bool {
return d.currentFiles[i].mtime < d.currentFiles[j].mtime
slices.SortFunc(d.currentFiles, func(a, b currentFile) int {
return cmp.Compare(a.mtime, b.mtime)
})
var oldest time.Duration
if len(d.currentFiles) > 0 {


@@ -29,6 +29,7 @@ import (
_ "github.com/syncthing/syncthing/lib/automaxprocs"
"github.com/syncthing/syncthing/lib/geoip"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/rand"
"github.com/syncthing/syncthing/lib/relay/client"
"github.com/syncthing/syncthing/lib/sync"
"github.com/syncthing/syncthing/lib/tlsutil"
@@ -110,6 +111,7 @@ var (
requestProcessors = 8
geoipLicenseKey = os.Getenv("GEOIP_LICENSE_KEY")
geoipAccountID, _ = strconv.Atoi(os.Getenv("GEOIP_ACCOUNT_ID"))
maxRelaysReturned = 100
requests chan request
@@ -141,6 +143,7 @@ func main() {
flag.IntVar(&requestQueueLen, "request-queue", requestQueueLen, "Queue length for incoming test requests")
flag.IntVar(&requestProcessors, "request-processors", requestProcessors, "Number of request processor routines")
flag.StringVar(&geoipLicenseKey, "geoip-license-key", geoipLicenseKey, "License key for GeoIP database")
flag.IntVar(&maxRelaysReturned, "max-relays-returned", maxRelaysReturned, "Maximum number of relays returned for a normal endpoint query")
flag.Parse()
@@ -331,6 +334,10 @@ func handleEndpointShort(rw http.ResponseWriter, r *http.Request) {
relays = append(relays, relayShort{URL: slimURL(r.URL)})
}
mut.RUnlock()
if len(relays) > maxRelaysReturned {
rand.Shuffle(relays)
relays = relays[:maxRelaysReturned]
}
_ = json.NewEncoder(rw).Encode(map[string][]relayShort{
"relays": relays,


@@ -201,17 +201,21 @@ func (p *proxy) ServeHTTP(w http.ResponseWriter, req *http.Request) {
// looking for a prerelease at all.
func filterForLatest(rels []upgrade.Release) []upgrade.Release {
var filtered []upgrade.Release
var havePre bool
havePre := make(map[string]bool)
haveStable := make(map[string]bool)
for _, rel := range rels {
if !rel.Prerelease {
// We found a stable version, we're good now.
major, _, _ := strings.Cut(rel.Tag, ".")
if !rel.Prerelease && !haveStable[major] {
// Remember the first non-pre for each major
filtered = append(filtered, rel)
break
haveStable[major] = true
continue
}
if rel.Prerelease && !havePre {
// We remember the first prerelease we find.
if rel.Prerelease && !havePre[major] && !haveStable[major] {
// We remember the first prerelease we find, unless we've
// already found a non-pre of the same major.
filtered = append(filtered, rel)
havePre = true
havePre[major] = true
}
}
return filtered
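The selection logic now tracks stable and prerelease versions per major instead of stopping at the first stable release. A self-contained sketch of the new behavior, assuming the input is sorted newest-first as the surrounding code implies (the release type and tags here are illustrative):

package main

import (
	"fmt"
	"strings"
)

type release struct {
	Tag        string
	Prerelease bool
}

// filterLatest keeps the first stable per major, and the first
// prerelease per major only if no stable of that major came first.
func filterLatest(rels []release) []release {
	havePre := map[string]bool{}
	haveStable := map[string]bool{}
	var out []release
	for _, r := range rels {
		major, _, _ := strings.Cut(r.Tag, ".")
		switch {
		case !r.Prerelease && !haveStable[major]:
			out = append(out, r)
			haveStable[major] = true
		case r.Prerelease && !havePre[major] && !haveStable[major]:
			out = append(out, r)
			havePre[major] = true
		}
	}
	return out
}

func main() {
	in := []release{
		{"v2.0.0-rc.1", true}, {"v1.29.3", false},
		{"v1.29.3-rc.2", true}, {"v1.29.2", false},
	}
	fmt.Println(filterLatest(in))
	// [{v2.0.0-rc.1 true} {v1.29.3 false}]: the v2 prerelease survives
	// because no v2 stable exists yet; older v1 entries are dropped.
}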
@@ -258,9 +262,10 @@ func filterForCompabitility(rels []upgrade.Release, ua, osv string) []upgrade.Re
}
type cachedReleases struct {
url string
mut sync.RWMutex
current []upgrade.Release
url string
mut sync.RWMutex
current []upgrade.Release
latestRel, latestPre string
}
func (c *cachedReleases) Releases() []upgrade.Release {
@@ -274,8 +279,26 @@ func (c *cachedReleases) Update(ctx context.Context) error {
if err != nil {
return err
}
latestRel, latestPre := "", ""
for _, rel := range rels {
if !rel.Prerelease && latestRel == "" {
latestRel = rel.Tag
}
if rel.Prerelease && latestPre == "" {
latestPre = rel.Tag
}
if latestRel != "" && latestPre != "" {
break
}
}
c.mut.Lock()
c.current = rels
if latestRel != c.latestRel || latestPre != c.latestPre {
metricLatestReleaseInfo.DeleteLabelValues(c.latestRel, c.latestPre)
metricLatestReleaseInfo.WithLabelValues(latestRel, latestPre).Set(1)
c.latestRel = latestRel
c.latestPre = latestPre
}
c.mut.Unlock()
return nil
}
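The metric bookkeeping here follows the usual Prometheus info-metric pattern: the gauge always has the value 1 and carries its payload in the latest_release and latest_pre labels, so when either tag changes the old labeled series is deleted before the new one is set, keeping exactly one live series instead of accumulating stale label combinations.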


@@ -27,4 +27,10 @@ var (
Subsystem: "upgrade",
Name: "http_requests",
}, []string{"target", "result"})
metricLatestReleaseInfo = promauto.NewGaugeVec(prometheus.GaugeOpts{
Namespace: "syncthing",
Subsystem: "upgrade",
Name: "latest_release_info",
Help: "Release information",
}, []string{"latest_release", "latest_pre"})
)


@@ -26,9 +26,11 @@ import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
"github.com/puzpuzpuz/xsync/v3"
"github.com/syncthing/syncthing/internal/blob"
"github.com/syncthing/syncthing/internal/blob/azureblob"
"github.com/syncthing/syncthing/internal/blob/s3"
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/geoip"
"github.com/syncthing/syncthing/lib/s3"
"github.com/syncthing/syncthing/lib/ur/contract"
)
@@ -40,11 +42,15 @@ type CLI struct {
DumpFile string `env:"UR_DUMP_FILE" default:"reports.jsons.gz"`
DumpInterval time.Duration `env:"UR_DUMP_INTERVAL" default:"5m"`
S3Endpoint string `name:"s3-endpoint" hidden:"true" env:"UR_S3_ENDPOINT"`
S3Region string `name:"s3-region" hidden:"true" env:"UR_S3_REGION"`
S3Bucket string `name:"s3-bucket" hidden:"true" env:"UR_S3_BUCKET"`
S3AccessKeyID string `name:"s3-access-key-id" hidden:"true" env:"UR_S3_ACCESS_KEY_ID"`
S3SecretKey string `name:"s3-secret-key" hidden:"true" env:"UR_S3_SECRET_KEY"`
S3Endpoint string `name:"s3-endpoint" env:"UR_S3_ENDPOINT"`
S3Region string `name:"s3-region" env:"UR_S3_REGION"`
S3Bucket string `name:"s3-bucket" env:"UR_S3_BUCKET"`
S3AccessKeyID string `name:"s3-access-key-id" env:"UR_S3_ACCESS_KEY_ID"`
S3SecretKey string `name:"s3-secret-key" env:"UR_S3_SECRET_KEY"`
AzureBlobAccount string `name:"azure-blob-account" env:"UR_AZUREBLOB_ACCOUNT"`
AzureBlobKey string `name:"azure-blob-key" env:"UR_AZUREBLOB_KEY"`
AzureBlobContainer string `name:"azure-blob-container" env:"UR_AZUREBLOB_CONTAINER"`
}
var (
@@ -77,6 +83,7 @@ var (
{regexp.MustCompile(`\ssyncthing@archlinux`), "Arch (3rd party)"},
{regexp.MustCompile(`@debian`), "Debian (3rd party)"},
{regexp.MustCompile(`@fedora`), "Fedora (3rd party)"},
{regexp.MustCompile(`@openSUSE`), "openSUSE (3rd party)"},
{regexp.MustCompile(`\sbrew@`), "Homebrew (3rd party)"},
{regexp.MustCompile(`\sroot@buildkitsandbox`), "LinuxServer.io (3rd party)"},
{regexp.MustCompile(`\sports@freebsd`), "FreeBSD (3rd party)"},
@@ -119,19 +126,25 @@ func (cli *CLI) Run() error {
go geo.Serve(context.TODO())
}
// s3
// Blob storage
var s3sess *s3.Session
var blobs blob.Store
if cli.S3Endpoint != "" {
s3sess, err = s3.NewSession(cli.S3Endpoint, cli.S3Region, cli.S3Bucket, cli.S3AccessKeyID, cli.S3SecretKey)
blobs, err = s3.NewSession(cli.S3Endpoint, cli.S3Region, cli.S3Bucket, cli.S3AccessKeyID, cli.S3SecretKey)
if err != nil {
slog.Error("Failed to create S3 session", "error", err)
return err
}
} else if cli.AzureBlobAccount != "" {
blobs, err = azureblob.NewBlobStore(cli.AzureBlobAccount, cli.AzureBlobKey, cli.AzureBlobContainer)
if err != nil {
slog.Error("Failed to create Azure blob store", "error", err)
return err
}
}
if _, err := os.Stat(cli.DumpFile); err != nil && s3sess != nil {
if err := cli.downloadDumpFile(s3sess); err != nil {
if _, err := os.Stat(cli.DumpFile); err != nil && blobs != nil {
if err := cli.downloadDumpFile(blobs); err != nil {
slog.Error("Failed to download dump file", "error", err)
}
}
@@ -153,7 +166,7 @@ func (cli *CLI) Run() error {
go func() {
for range time.Tick(cli.DumpInterval) {
if err := cli.saveDumpFile(srv, s3sess); err != nil {
if err := cli.saveDumpFile(srv, blobs); err != nil {
slog.Error("Failed to write dump file", "error", err)
}
}
@@ -192,8 +205,8 @@ func (cli *CLI) Run() error {
return metricsSrv.Serve(urListener)
}
func (cli *CLI) downloadDumpFile(s3sess *s3.Session) error {
latestKey, err := s3sess.LatestKey()
func (cli *CLI) downloadDumpFile(blobs blob.Store) error {
latestKey, err := blobs.LatestKey(context.Background())
if err != nil {
return fmt.Errorf("list latest S3 key: %w", err)
}
@@ -201,7 +214,7 @@ func (cli *CLI) downloadDumpFile(s3sess *s3.Session) error {
if err != nil {
return fmt.Errorf("create dump file: %w", err)
}
if err := s3sess.Download(fd, latestKey); err != nil {
if err := blobs.Download(context.Background(), latestKey, fd); err != nil {
_ = fd.Close()
return fmt.Errorf("download dump file: %w", err)
}
@@ -212,7 +225,7 @@ func (cli *CLI) downloadDumpFile(s3sess *s3.Session) error {
return nil
}
func (cli *CLI) saveDumpFile(srv *server, s3sess *s3.Session) error {
func (cli *CLI) saveDumpFile(srv *server, blobs blob.Store) error {
fd, err := os.Create(cli.DumpFile + ".tmp")
if err != nil {
return fmt.Errorf("creating dump file: %w", err)
@@ -233,13 +246,13 @@ func (cli *CLI) saveDumpFile(srv *server, s3sess *s3.Session) error {
}
slog.Info("Dump file saved")
if s3sess != nil {
if blobs != nil {
key := fmt.Sprintf("reports-%s.jsons.gz", time.Now().UTC().Format("2006-01-02"))
fd, err := os.Open(cli.DumpFile)
if err != nil {
return fmt.Errorf("opening dump file: %w", err)
}
if err := s3sess.Upload(fd, key); err != nil {
if err := blobs.Upload(context.Background(), key, fd); err != nil {
return fmt.Errorf("uploading dump file: %w", err)
}
_ = fd.Close()
@@ -351,6 +364,9 @@ func (s *server) addReport(rep *contract.Report) bool {
break
}
}
rep.DistDist = rep.Distribution
rep.DistOS = rep.OS
rep.DistArch = rep.Arch
_, loaded := s.reports.LoadAndStore(rep.UniqueID, rep)
return loaded
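The call sites above and below (LatestKey, Download, Upload) outline the shape of the new storage abstraction that replaces direct *s3.Session use. Roughly, and inferred purely from usage rather than from the real definition in internal/blob:

package blob

import (
	"context"
	"io"
)

// Store is an approximation of the interface implied by the call sites
// in this changeset; the S3 and Azure backends would both satisfy it.
type Store interface {
	Upload(ctx context.Context, key string, r io.Reader) error
	Download(ctx context.Context, key string, w io.Writer) error
	LatestKey(ctx context.Context) (string, error)
}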


@@ -66,7 +66,7 @@ type contextKey int
const idKey contextKey = iota
func newAPISrv(addr string, cert tls.Certificate, db database, repl replicator, useHTTP, compression bool) *apiSrv {
func newAPISrv(addr string, cert tls.Certificate, db database, repl replicator, useHTTP, compression bool, desiredNotFoundRate float64) *apiSrv {
return &apiSrv{
addr: addr,
cert: cert,
@@ -77,13 +77,13 @@ func newAPISrv(addr string, cert tls.Certificate, db database, repl replicator,
seenTracker: &retryAfterTracker{
name: "seenTracker",
bucketStarts: time.Now(),
desiredRate: 250,
desiredRate: desiredNotFoundRate / 2,
currentDelay: notFoundRetryUnknownMinSeconds,
},
notSeenTracker: &retryAfterTracker{
name: "notSeenTracker",
bucketStarts: time.Now(),
desiredRate: 250,
desiredRate: desiredNotFoundRate / 2,
currentDelay: notFoundRetryUnknownMaxSeconds / 2,
},
}
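A small worked example of the new knob: the default --desired-not-found-rate of 1000 replaces the two hardcoded budgets of 250/s, and it is split evenly, so the seen and not-seen trackers each get a desired rate of 500 not-found replies per second.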


@@ -111,7 +111,7 @@ func BenchmarkAPIRequests(b *testing.B) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
go db.Serve(ctx)
api := newAPISrv("127.0.0.1:0", tls.Certificate{}, db, nil, true, true)
api := newAPISrv("127.0.0.1:0", tls.Certificate{}, db, nil, true, true, 1000)
srv := httptest.NewServer(http.HandlerFunc(api.handler))
kf := b.TempDir() + "/cert"


@@ -24,11 +24,11 @@ import (
"github.com/puzpuzpuz/xsync/v3"
"google.golang.org/protobuf/proto"
"github.com/syncthing/syncthing/internal/blob"
"github.com/syncthing/syncthing/internal/gen/discosrv"
"github.com/syncthing/syncthing/internal/protoutil"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/rand"
"github.com/syncthing/syncthing/lib/s3"
)
type clock interface {
@@ -51,12 +51,12 @@ type inMemoryStore struct {
m *xsync.MapOf[protocol.DeviceID, *discosrv.DatabaseRecord]
dir string
flushInterval time.Duration
s3 *s3.Session
blobs blob.Store
objKey string
clock clock
}
func newInMemoryStore(dir string, flushInterval time.Duration, s3sess *s3.Session) *inMemoryStore {
func newInMemoryStore(dir string, flushInterval time.Duration, blobs blob.Store) *inMemoryStore {
hn, err := os.Hostname()
if err != nil {
hn = rand.String(8)
@@ -65,25 +65,25 @@ func newInMemoryStore(dir string, flushInterval time.Duration, s3sess *s3.Sessio
m: xsync.NewMapOf[protocol.DeviceID, *discosrv.DatabaseRecord](),
dir: dir,
flushInterval: flushInterval,
s3: s3sess,
blobs: blobs,
objKey: hn + ".db",
clock: defaultClock{},
}
nr, err := s.read()
if os.IsNotExist(err) && s3sess != nil {
// Try to read from AWS
latestKey, cerr := s3sess.LatestKey()
if os.IsNotExist(err) && blobs != nil {
// Try to read from blob storage
latestKey, cerr := blobs.LatestKey(context.Background())
if cerr != nil {
log.Println("Error reading database from S3:", err)
log.Println("Error finding database from blob storage:", cerr)
return s
}
fd, cerr := os.Create(path.Join(s.dir, "records.db"))
if cerr != nil {
log.Println("Error creating database file:", err)
log.Println("Error creating database file:", cerr)
return s
}
if cerr := s3sess.Download(fd, latestKey); cerr != nil {
log.Printf("Error reading database from S3: %v", err)
if cerr := blobs.Download(context.Background(), latestKey, fd); cerr != nil {
log.Printf("Error downloading database from blob storage: %v", cerr)
}
_ = fd.Close()
nr, err = s.read()
@@ -310,16 +310,16 @@ func (s *inMemoryStore) write() (err error) {
return err
}
// Upload to S3
if s.s3 != nil {
// Upload to blob storage
if s.blobs != nil {
fd, err = os.Open(dbf)
if err != nil {
log.Printf("Error uploading database to S3: %v", err)
log.Printf("Error uploading database to blob storage: %v", err)
return nil
}
defer fd.Close()
if err := s.s3.Upload(fd, s.objKey); err != nil {
log.Printf("Error uploading database to S3: %v", err)
if err := s.blobs.Upload(context.Background(), s.objKey, fd); err != nil {
log.Printf("Error uploading database to blob storage: %v", err)
}
log.Println("Finished uploading database")
}


@@ -21,11 +21,13 @@ import (
"github.com/prometheus/client_golang/prometheus/promhttp"
"github.com/thejerf/suture/v4"
"github.com/syncthing/syncthing/internal/blob"
"github.com/syncthing/syncthing/internal/blob/azureblob"
"github.com/syncthing/syncthing/internal/blob/s3"
_ "github.com/syncthing/syncthing/lib/automaxprocs"
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/rand"
"github.com/syncthing/syncthing/lib/s3"
"github.com/syncthing/syncthing/lib/tlsutil"
)
@@ -58,12 +60,13 @@ const (
var debug = false
type CLI struct {
Cert string `group:"Listen" help:"Certificate file" default:"./cert.pem" env:"DISCOVERY_CERT_FILE"`
Key string `group:"Listen" help:"Key file" default:"./key.pem" env:"DISCOVERY_KEY_FILE"`
HTTP bool `group:"Listen" help:"Listen on HTTP (behind an HTTPS proxy)" env:"DISCOVERY_HTTP"`
Compression bool `group:"Listen" help:"Enable GZIP compression of responses" env:"DISCOVERY_COMPRESSION"`
Listen string `group:"Listen" help:"Listen address" default:":8443" env:"DISCOVERY_LISTEN"`
MetricsListen string `group:"Listen" help:"Metrics listen address" env:"DISCOVERY_METRICS_LISTEN"`
Cert string `group:"Listen" help:"Certificate file" default:"./cert.pem" env:"DISCOVERY_CERT_FILE"`
Key string `group:"Listen" help:"Key file" default:"./key.pem" env:"DISCOVERY_KEY_FILE"`
HTTP bool `group:"Listen" help:"Listen on HTTP (behind an HTTPS proxy)" env:"DISCOVERY_HTTP"`
Compression bool `group:"Listen" help:"Enable GZIP compression of responses" env:"DISCOVERY_COMPRESSION"`
Listen string `group:"Listen" help:"Listen address" default:":8443" env:"DISCOVERY_LISTEN"`
MetricsListen string `group:"Listen" help:"Metrics listen address" env:"DISCOVERY_METRICS_LISTEN"`
DesiredNotFoundRate float64 `group:"Listen" help:"Desired maximum rate of not-found replies (/s)" default:"1000"`
DBDir string `group:"Database" help:"Database directory" default:"." env:"DISCOVERY_DB_DIR"`
DBFlushInterval time.Duration `group:"Database" help:"Interval between database flushes" default:"5m" env:"DISCOVERY_DB_FLUSH_INTERVAL"`
@@ -74,6 +77,10 @@ type CLI struct {
DBS3AccessKeyID string `name:"db-s3-access-key-id" group:"Database (S3 backup)" hidden:"true" help:"S3 access key ID for database" env:"DISCOVERY_DB_S3_ACCESS_KEY_ID"`
DBS3SecretKey string `name:"db-s3-secret-key" group:"Database (S3 backup)" hidden:"true" help:"S3 secret key for database" env:"DISCOVERY_DB_S3_SECRET_KEY"`
DBAzureBlobAccount string `name:"db-azure-blob-account" env:"DISCOVERY_DB_AZUREBLOB_ACCOUNT"`
DBAzureBlobKey string `name:"db-azure-blob-key" env:"DISCOVERY_DB_AZUREBLOB_KEY"`
DBAzureBlobContainer string `name:"db-azure-blob-container" env:"DISCOVERY_DB_AZUREBLOB_CONTAINER"`
AMQPAddress string `group:"AMQP replication" hidden:"true" help:"Address to AMQP broker" env:"DISCOVERY_AMQP_ADDRESS"`
Debug bool `short:"d" help:"Print debug output" env:"DISCOVERY_DEBUG"`
@@ -117,18 +124,20 @@ func main() {
Timeout: 2 * time.Minute,
})
// If configured, use S3 for database backups.
var s3c *s3.Session
// If configured, use blob storage for database backups.
var blobs blob.Store
var err error
if cli.DBS3Endpoint != "" {
var err error
s3c, err = s3.NewSession(cli.DBS3Endpoint, cli.DBS3Region, cli.DBS3Bucket, cli.DBS3AccessKeyID, cli.DBS3SecretKey)
if err != nil {
log.Fatalf("Failed to create S3 session: %v", err)
}
blobs, err = s3.NewSession(cli.DBS3Endpoint, cli.DBS3Region, cli.DBS3Bucket, cli.DBS3AccessKeyID, cli.DBS3SecretKey)
} else if cli.DBAzureBlobAccount != "" {
blobs, err = azureblob.NewBlobStore(cli.DBAzureBlobAccount, cli.DBAzureBlobKey, cli.DBAzureBlobContainer)
}
if err != nil {
log.Fatalf("Failed to create blob store: %v", err)
}
// Start the database.
db := newInMemoryStore(cli.DBDir, cli.DBFlushInterval, s3c)
db := newInMemoryStore(cli.DBDir, cli.DBFlushInterval, blobs)
main.Add(db)
// If we have an AMQP broker for replication, start that
@@ -141,7 +150,7 @@ func main() {
}
// Start the main API server.
qs := newAPISrv(cli.Listen, cert, db, repl, cli.HTTP, cli.Compression)
qs := newAPISrv(cli.Listen, cert, db, repl, cli.HTTP, cli.Compression, cli.DesiredNotFoundRate)
main.Add(qs)
// If we have a metrics port configured, start a metrics handler.


@@ -184,7 +184,7 @@ func protocolConnectionHandler(tcpConn net.Conn, config *tls.Config, token strin
continue
}
// requestedPeer is the server, id is the client
ses := newSession(requestedPeer, id, sessionLimiter, globalLimiter)
ses := newSession(requestedPeer, id, sessionLimitBps, globalLimiter)
go ses.Serve()


@@ -51,7 +51,6 @@ var (
globalLimitBps int
overLimit atomic.Bool
descriptorLimit int64
sessionLimiter *rate.Limiter
globalLimiter *rate.Limiter
networkBufferSize int
@@ -228,9 +227,6 @@ func main() {
}
}
if sessionLimitBps > 0 {
sessionLimiter = rate.NewLimiter(rate.Limit(sessionLimitBps), 2*sessionLimitBps)
}
if globalLimitBps > 0 {
globalLimiter = rate.NewLimiter(rate.Limit(globalLimitBps), 2*globalLimitBps)
}


@@ -27,7 +27,7 @@ var (
bytesProxied atomic.Int64
)
func newSession(serverid, clientid syncthingprotocol.DeviceID, sessionRateLimit, globalRateLimit *rate.Limiter) *session {
func newSession(serverid, clientid syncthingprotocol.DeviceID, sessionLimitBps int, globalRateLimit *rate.Limiter) *session {
serverkey := make([]byte, 32)
_, err := rand.Read(serverkey)
if err != nil {
@@ -40,12 +40,17 @@ func newSession(serverid, clientid syncthingprotocol.DeviceID, sessionRateLimit,
return nil
}
var sessionRateLimit *rate.Limiter
if sessionLimitBps > 0 {
sessionRateLimit = rate.NewLimiter(rate.Limit(sessionLimitBps), 2*sessionLimitBps)
}
ses := &session{
serverkey: serverkey,
serverid: serverid,
clientkey: clientkey,
clientid: clientid,
rateLimit: makeRateLimitFunc(sessionRateLimit, globalRateLimit),
limiter: sessionRateLimit,
connsChan: make(chan net.Conn),
conns: make([]net.Conn, 0, 2),
}
@@ -109,6 +114,7 @@ type session struct {
clientid syncthingprotocol.DeviceID
rateLimit func(bytes int)
limiter *rate.Limiter
connsChan chan net.Conn
conns []net.Conn
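The session limiter used to be a single *rate.Limiter created once in main and shared by every session; newSession now builds one per session from the bps figure, so the cap applies to each session individually. A minimal sketch of the construction using golang.org/x/time/rate (helper name hypothetical):

package main

import (
	"context"
	"fmt"

	"golang.org/x/time/rate"
)

// newSessionLimiter mirrors the hunk above: rate sessionLimitBps with a
// burst of twice that; nil means no per-session limit at the call sites.
func newSessionLimiter(sessionLimitBps int) *rate.Limiter {
	if sessionLimitBps <= 0 {
		return nil
	}
	return rate.NewLimiter(rate.Limit(sessionLimitBps), 2*sessionLimitBps)
}

func main() {
	lim := newSessionLimiter(1024)
	_ = lim.WaitN(context.Background(), 512) // charge 512 bytes against the bucket
	fmt.Println("limit:", lim.Limit(), "burst:", lim.Burst())
}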


@@ -41,4 +41,5 @@ func (p *profileCommand) Run(ctx Context) error {
type debugCommand struct {
File fileCommand `cmd:"" help:"Show information about a file (or directory/symlink)"`
Profile profileCommand `cmd:"" help:"Save a profile to help figuring out what Syncthing does"`
Index indexCommand `cmd:"" help:"Show information about the index (database)"`
}


@@ -0,0 +1,32 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package cli
import (
"github.com/alecthomas/kong"
)
type indexCommand struct {
Dump struct{} `cmd:"" help:"Print the entire db"`
DumpSize struct{} `cmd:"" help:"Print the db size of different categories of information"`
Check struct{} `cmd:"" help:"Check the database for inconsistencies"`
Account struct{} `cmd:"" help:"Print key and value size statistics per key type"`
}
func (*indexCommand) Run(kongCtx *kong.Context) error {
switch kongCtx.Selected().Name {
case "dump":
return indexDump()
case "dump-size":
return indexDumpSize()
case "check":
return indexCheck()
case "account":
return indexAccount()
}
return nil
}
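One note on the dispatch style: rather than giving each subcommand its own Run method, indexCommand implements a single Run and switches on the name kong assigns to the selected command; the struct fields Dump and DumpSize become the commands dump and dump-size, matching the case labels above.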


@@ -0,0 +1,62 @@
// Copyright (C) 2020 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package cli
import (
"fmt"
"os"
"text/tabwriter"
)
// indexAccount prints key and data size statistics per class
func indexAccount() error {
ldb, err := getDB()
if err != nil {
return err
}
it, err := ldb.NewPrefixIterator(nil)
if err != nil {
return err
}
var ksizes [256]int
var dsizes [256]int
var counts [256]int
var max [256]int
for it.Next() {
key := it.Key()
t := key[0]
ds := len(it.Value())
ks := len(key)
s := ks + ds
counts[t]++
ksizes[t] += ks
dsizes[t] += ds
if s > max[t] {
max[t] = s
}
}
tw := tabwriter.NewWriter(os.Stdout, 1, 1, 1, ' ', tabwriter.AlignRight)
toti, totds, totks := 0, 0, 0
for t := range ksizes {
if ksizes[t] > 0 {
// yes metric kilobytes 🤘
fmt.Fprintf(tw, "0x%02x:\t%d items,\t%d KB keys +\t%d KB data,\t%d B +\t%d B avg,\t%d B max\t\n", t, counts[t], ksizes[t]/1000, dsizes[t]/1000, ksizes[t]/counts[t], dsizes[t]/counts[t], max[t])
toti += counts[t]
totds += dsizes[t]
totks += ksizes[t]
}
}
fmt.Fprintf(tw, "Total\t%d items,\t%d KB keys +\t%d KB data.\t\n", toti, totks/1000, totds/1000)
tw.Flush()
return nil
}


@@ -0,0 +1,162 @@
// Copyright (C) 2015 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package cli
import (
"encoding/binary"
"fmt"
"time"
"google.golang.org/protobuf/proto"
"github.com/syncthing/syncthing/internal/gen/bep"
"github.com/syncthing/syncthing/internal/gen/dbproto"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/protocol"
)
func indexDump() error {
ldb, err := getDB()
if err != nil {
return err
}
it, err := ldb.NewPrefixIterator(nil)
if err != nil {
return err
}
for it.Next() {
key := it.Key()
switch key[0] {
case db.KeyTypeDevice:
folder := binary.BigEndian.Uint32(key[1:])
device := binary.BigEndian.Uint32(key[1+4:])
name := nulString(key[1+4+4:])
fmt.Printf("[device] F:%d D:%d N:%q", folder, device, name)
var f bep.FileInfo
err := proto.Unmarshal(it.Value(), &f)
if err != nil {
return err
}
fmt.Printf(" V:%v\n", &f)
case db.KeyTypeGlobal:
folder := binary.BigEndian.Uint32(key[1:])
name := nulString(key[1+4:])
var flv dbproto.VersionList
proto.Unmarshal(it.Value(), &flv)
fmt.Printf("[global] F:%d N:%q V:%s\n", folder, name, &flv)
case db.KeyTypeBlock:
folder := binary.BigEndian.Uint32(key[1:])
hash := key[1+4 : 1+4+32]
name := nulString(key[1+4+32:])
fmt.Printf("[block] F:%d H:%x N:%q I:%d\n", folder, hash, name, binary.BigEndian.Uint32(it.Value()))
case db.KeyTypeDeviceStatistic:
fmt.Printf("[dstat] K:%x V:%x\n", key, it.Value())
case db.KeyTypeFolderStatistic:
fmt.Printf("[fstat] K:%x V:%x\n", key, it.Value())
case db.KeyTypeVirtualMtime:
folder := binary.BigEndian.Uint32(key[1:])
name := nulString(key[1+4:])
val := it.Value()
var realTime, virtualTime time.Time
realTime.UnmarshalBinary(val[:len(val)/2])
virtualTime.UnmarshalBinary(val[len(val)/2:])
fmt.Printf("[mtime] F:%d N:%q R:%v V:%v\n", folder, name, realTime, virtualTime)
case db.KeyTypeFolderIdx:
key := binary.BigEndian.Uint32(key[1:])
fmt.Printf("[folderidx] K:%d V:%q\n", key, it.Value())
case db.KeyTypeDeviceIdx:
key := binary.BigEndian.Uint32(key[1:])
val := it.Value()
device := "<nil>"
if len(val) > 0 {
dev, err := protocol.DeviceIDFromBytes(val)
if err != nil {
device = fmt.Sprintf("<invalid %d bytes>", len(val))
} else {
device = dev.String()
}
}
fmt.Printf("[deviceidx] K:%d V:%s\n", key, device)
case db.KeyTypeIndexID:
device := binary.BigEndian.Uint32(key[1:])
folder := binary.BigEndian.Uint32(key[5:])
fmt.Printf("[indexid] D:%d F:%d I:%x\n", device, folder, it.Value())
case db.KeyTypeFolderMeta:
folder := binary.BigEndian.Uint32(key[1:])
fmt.Printf("[foldermeta] F:%d", folder)
var cs dbproto.CountsSet
if err := proto.Unmarshal(it.Value(), &cs); err != nil {
fmt.Printf(" (invalid)\n")
} else {
fmt.Printf(" V:%v\n", &cs)
}
case db.KeyTypeMiscData:
fmt.Printf("[miscdata] K:%q V:%q\n", key[1:], it.Value())
case db.KeyTypeSequence:
folder := binary.BigEndian.Uint32(key[1:])
seq := binary.BigEndian.Uint64(key[5:])
fmt.Printf("[sequence] F:%d S:%d V:%q\n", folder, seq, it.Value())
case db.KeyTypeNeed:
folder := binary.BigEndian.Uint32(key[1:])
file := string(key[5:])
fmt.Printf("[need] F:%d V:%q\n", folder, file)
case db.KeyTypeBlockList:
fmt.Printf("[blocklist] H:%x\n", key[1:])
case db.KeyTypeBlockListMap:
folder := binary.BigEndian.Uint32(key[1:])
hash := key[5:37]
fileName := string(key[37:])
fmt.Printf("[blocklistmap] F:%d H:%x N:%s\n", folder, hash, fileName)
case db.KeyTypeVersion:
fmt.Printf("[version] H:%x", key[1:])
var v bep.Vector
err := proto.Unmarshal(it.Value(), &v)
if err != nil {
fmt.Printf(" (invalid)\n")
} else {
fmt.Printf(" V:%v\n", &v)
}
case db.KeyTypePendingFolder:
device := binary.BigEndian.Uint32(key[1:])
folder := string(key[5:])
var of dbproto.ObservedFolder
proto.Unmarshal(it.Value(), &of)
fmt.Printf("[pendingFolder] D:%d F:%s V:%v\n", device, folder, &of)
case db.KeyTypePendingDevice:
device := "<invalid>"
dev, err := protocol.DeviceIDFromBytes(key[1:])
if err == nil {
device = dev.String()
}
var od dbproto.ObservedDevice
proto.Unmarshal(it.Value(), &od)
fmt.Printf("[pendingDevice] D:%v V:%v\n", device, &od)
default:
fmt.Printf("[??? %d]\n %x\n %x\n", key[0], key, it.Value())
}
}
return nil
}
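For reading the dump output: every key in the index database starts with a single type byte (the constants switched on above), and for most types that byte is followed by a big-endian uint32 folder index and then, depending on the type, a device index, a hash, or a NUL-terminated name. That layout is exactly what the decoder above unpacks with binary.BigEndian.Uint32 and nulString.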


@@ -0,0 +1,89 @@
// Copyright (C) 2015 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package cli
import (
"cmp"
"encoding/binary"
"fmt"
"slices"
"github.com/syncthing/syncthing/lib/db"
)
func indexDumpSize() error {
type sizedElement struct {
key string
size int
}
ldb, err := getDB()
if err != nil {
return err
}
it, err := ldb.NewPrefixIterator(nil)
if err != nil {
return err
}
var elems []sizedElement
for it.Next() {
var ele sizedElement
key := it.Key()
switch key[0] {
case db.KeyTypeDevice:
folder := binary.BigEndian.Uint32(key[1:])
device := binary.BigEndian.Uint32(key[1+4:])
name := nulString(key[1+4+4:])
ele.key = fmt.Sprintf("DEVICE:%d:%d:%s", folder, device, name)
case db.KeyTypeGlobal:
folder := binary.BigEndian.Uint32(key[1:])
name := nulString(key[1+4:])
ele.key = fmt.Sprintf("GLOBAL:%d:%s", folder, name)
case db.KeyTypeBlock:
folder := binary.BigEndian.Uint32(key[1:])
hash := key[1+4 : 1+4+32]
name := nulString(key[1+4+32:])
ele.key = fmt.Sprintf("BLOCK:%d:%x:%s", folder, hash, name)
case db.KeyTypeDeviceStatistic:
ele.key = fmt.Sprintf("DEVICESTATS:%s", key[1:])
case db.KeyTypeFolderStatistic:
ele.key = fmt.Sprintf("FOLDERSTATS:%s", key[1:])
case db.KeyTypeVirtualMtime:
ele.key = fmt.Sprintf("MTIME:%s", key[1:])
case db.KeyTypeFolderIdx:
id := binary.BigEndian.Uint32(key[1:])
ele.key = fmt.Sprintf("FOLDERIDX:%d", id)
case db.KeyTypeDeviceIdx:
id := binary.BigEndian.Uint32(key[1:])
ele.key = fmt.Sprintf("DEVICEIDX:%d", id)
default:
ele.key = fmt.Sprintf("UNKNOWN:%x", key)
}
ele.size = len(it.Value())
elems = append(elems, ele)
}
slices.SortFunc(elems, func(a, b sizedElement) int {
return cmp.Compare(b.size, a.size)
})
for _, ele := range elems {
fmt.Println(ele.key, ele.size)
}
return nil
}


@@ -0,0 +1,435 @@
// Copyright (C) 2018 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package cli
import (
"bytes"
"cmp"
"encoding/binary"
"errors"
"fmt"
"slices"
"google.golang.org/protobuf/proto"
"github.com/syncthing/syncthing/internal/gen/bep"
"github.com/syncthing/syncthing/internal/gen/dbproto"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/protocol"
)
type fileInfoKey struct {
folder uint32
device uint32
name string
}
type globalKey struct {
folder uint32
name string
}
type sequenceKey struct {
folder uint32
sequence uint64
}
func indexCheck() (err error) {
ldb, err := getDB()
if err != nil {
return err
}
folders := make(map[uint32]string)
devices := make(map[uint32]string)
deviceToIDs := make(map[string]uint32)
fileInfos := make(map[fileInfoKey]*bep.FileInfo)
globals := make(map[globalKey]*dbproto.VersionList)
sequences := make(map[sequenceKey]string)
needs := make(map[globalKey]struct{})
blocklists := make(map[string]struct{})
versions := make(map[string]*bep.Vector)
usedBlocklists := make(map[string]struct{})
usedVersions := make(map[string]struct{})
var localDeviceKey uint32
success := true
defer func() {
if err == nil {
if success {
fmt.Println("Index check completed successfully.")
} else {
err = errors.New("Inconsistencies found in the index")
}
}
}()
it, err := ldb.NewPrefixIterator(nil)
if err != nil {
return err
}
for it.Next() {
key := it.Key()
switch key[0] {
case db.KeyTypeDevice:
folder := binary.BigEndian.Uint32(key[1:])
device := binary.BigEndian.Uint32(key[1+4:])
name := nulString(key[1+4+4:])
var f bep.FileInfo
err := proto.Unmarshal(it.Value(), &f)
if err != nil {
fmt.Println("Unable to unmarshal FileInfo:", err)
success = false
continue
}
fileInfos[fileInfoKey{folder, device, name}] = &f
case db.KeyTypeGlobal:
folder := binary.BigEndian.Uint32(key[1:])
name := nulString(key[1+4:])
var flv dbproto.VersionList
if err := proto.Unmarshal(it.Value(), &flv); err != nil {
fmt.Println("Unable to unmarshal VersionList:", err)
success = false
continue
}
globals[globalKey{folder, name}] = &flv
case db.KeyTypeFolderIdx:
key := binary.BigEndian.Uint32(it.Key()[1:])
folders[key] = string(it.Value())
case db.KeyTypeDeviceIdx:
key := binary.BigEndian.Uint32(it.Key()[1:])
devices[key] = string(it.Value())
deviceToIDs[string(it.Value())] = key
if bytes.Equal(it.Value(), protocol.LocalDeviceID[:]) {
localDeviceKey = key
}
case db.KeyTypeSequence:
folder := binary.BigEndian.Uint32(key[1:])
seq := binary.BigEndian.Uint64(key[5:])
val := it.Value()
sequences[sequenceKey{folder, seq}] = string(val[9:])
case db.KeyTypeNeed:
folder := binary.BigEndian.Uint32(key[1:])
name := nulString(key[1+4:])
needs[globalKey{folder, name}] = struct{}{}
case db.KeyTypeBlockList:
hash := string(key[1:])
blocklists[hash] = struct{}{}
case db.KeyTypeVersion:
hash := string(key[1:])
var v bep.Vector
if err := proto.Unmarshal(it.Value(), &v); err != nil {
fmt.Println("Unable to unmarshal Vector:", err)
success = false
continue
}
versions[hash] = &v
}
}
if localDeviceKey == 0 {
fmt.Println("Missing key for local device in device index (bailing out)")
success = false
return
}
var missingSeq []sequenceKey
for fk, fi := range fileInfos {
if fk.name != fi.Name {
fmt.Printf("Mismatching FileInfo name, %q (key) != %q (actual)\n", fk.name, fi.Name)
success = false
}
folder := folders[fk.folder]
if folder == "" {
fmt.Printf("Unknown folder ID %d for FileInfo %q\n", fk.folder, fk.name)
success = false
continue
}
if devices[fk.device] == "" {
fmt.Printf("Unknown device ID %d for FileInfo %q, folder %q\n", fk.folder, fk.name, folder)
success = false
}
if fk.device == localDeviceKey {
sk := sequenceKey{fk.folder, uint64(fi.Sequence)}
name, ok := sequences[sk]
if !ok {
fmt.Printf("Sequence entry missing for FileInfo %q, folder %q, seq %d\n", fi.Name, folder, fi.Sequence)
missingSeq = append(missingSeq, sk)
success = false
continue
}
if name != fi.Name {
fmt.Printf("Sequence entry refers to wrong name, %q (seq) != %q (FileInfo), folder %q, seq %d\n", name, fi.Name, folder, fi.Sequence)
success = false
}
}
if len(fi.Blocks) == 0 && len(fi.BlocksHash) != 0 {
key := string(fi.BlocksHash)
if _, ok := blocklists[key]; !ok {
fmt.Printf("Missing block list for file %q, block list hash %x\n", fi.Name, fi.BlocksHash)
success = false
} else {
usedBlocklists[key] = struct{}{}
}
}
if fi.VersionHash != nil {
key := string(fi.VersionHash)
if _, ok := versions[key]; !ok {
fmt.Printf("Missing version vector for file %q, version hash %x\n", fi.Name, fi.VersionHash)
success = false
} else {
usedVersions[key] = struct{}{}
}
}
_, ok := globals[globalKey{fk.folder, fk.name}]
if !ok {
fmt.Printf("Missing global for file %q\n", fi.Name)
success = false
continue
}
}
// Aggregate the ranges of missing sequence entries, print them
slices.SortFunc(missingSeq, func(a, b sequenceKey) int {
if a.folder != b.folder {
return cmp.Compare(a.folder, b.folder)
}
return cmp.Compare(a.sequence, b.sequence)
})
var folder uint32
var startSeq, prevSeq uint64
for _, sk := range missingSeq {
if folder != sk.folder || sk.sequence != prevSeq+1 {
if folder != 0 {
fmt.Printf("Folder %d missing %d sequence entries: #%d - #%d\n", folder, prevSeq-startSeq+1, startSeq, prevSeq)
}
startSeq = sk.sequence
folder = sk.folder
}
prevSeq = sk.sequence
}
if folder != 0 {
fmt.Printf("Folder %d missing %d sequence entries: #%d - #%d\n", folder, prevSeq-startSeq+1, startSeq, prevSeq)
}
for gk, vl := range globals {
folder := folders[gk.folder]
if folder == "" {
fmt.Printf("Unknown folder ID %d for VersionList %q\n", gk.folder, gk.name)
success = false
}
checkGlobal := func(i int, device []byte, version protocol.Vector, invalid, deleted bool) {
dev, ok := deviceToIDs[string(device)]
if !ok {
fmt.Printf("VersionList %q, folder %q refers to unknown device %q\n", gk.name, folder, device)
success = false
}
fi, ok := fileInfos[fileInfoKey{gk.folder, dev, gk.name}]
if !ok {
fmt.Printf("VersionList %q, folder %q, entry %d refers to unknown FileInfo\n", gk.name, folder, i)
success = false
}
fiv := fi.Version
if fi.VersionHash != nil {
fiv = versions[string(fi.VersionHash)]
}
if !protocol.VectorFromWire(fiv).Equal(version) {
fmt.Printf("VersionList %q, folder %q, entry %d, FileInfo version mismatch, %v (VersionList) != %v (FileInfo)\n", gk.name, folder, i, version, fi.Version)
success = false
}
ffi := protocol.FileInfoFromDB(fi)
if ffi.IsInvalid() != invalid {
fmt.Printf("VersionList %q, folder %q, entry %d, FileInfo invalid mismatch, %v (VersionList) != %v (FileInfo)\n", gk.name, folder, i, invalid, ffi.IsInvalid())
success = false
}
if ffi.IsDeleted() != deleted {
fmt.Printf("VersionList %q, folder %q, entry %d, FileInfo deleted mismatch, %v (VersionList) != %v (FileInfo)\n", gk.name, folder, i, deleted, ffi.IsDeleted())
success = false
}
}
for i, fv := range vl.Versions {
ver := protocol.VectorFromWire(fv.Version)
for _, device := range fv.Devices {
checkGlobal(i, device, ver, false, fv.Deleted)
}
for _, device := range fv.InvalidDevices {
checkGlobal(i, device, ver, true, fv.Deleted)
}
}
// If we need this file we should have a need entry for it. needsLocally
// can give false positives for deleted files, where we might legitimately
// lack an entry if we never had the file, and for ignored files.
if needsLocally(vl) {
_, ok := needs[gk]
if !ok {
fv, _ := vlGetGlobal(vl)
devB, _ := fvFirstDevice(fv)
dev := deviceToIDs[string(devB)]
fi := protocol.FileInfoFromDB(fileInfos[fileInfoKey{gk.folder, dev, gk.name}])
if !fi.IsDeleted() && !fi.IsIgnored() {
fmt.Printf("Missing need entry for needed file %q, folder %q\n", gk.name, folder)
}
}
}
}
seenSeq := make(map[fileInfoKey]uint64)
for sk, name := range sequences {
folder := folders[sk.folder]
if folder == "" {
fmt.Printf("Unknown folder ID %d for sequence entry %d, %q\n", sk.folder, sk.sequence, name)
success = false
continue
}
if prev, ok := seenSeq[fileInfoKey{folder: sk.folder, name: name}]; ok {
fmt.Printf("Duplicate sequence entry for %q, folder %q, seq %d (prev %d)\n", name, folder, sk.sequence, prev)
success = false
}
seenSeq[fileInfoKey{folder: sk.folder, name: name}] = sk.sequence
fi, ok := fileInfos[fileInfoKey{sk.folder, localDeviceKey, name}]
if !ok {
fmt.Printf("Missing FileInfo for sequence entry %d, folder %q, %q\n", sk.sequence, folder, name)
success = false
continue
}
if fi.Sequence != int64(sk.sequence) {
fmt.Printf("Sequence mismatch for %q, folder %q, %d (key) != %d (FileInfo)\n", name, folder, sk.sequence, fi.Sequence)
success = false
}
}
for nk := range needs {
folder := folders[nk.folder]
if folder == "" {
fmt.Printf("Unknown folder ID %d for need entry %q\n", nk.folder, nk.name)
success = false
continue
}
vl, ok := globals[nk]
if !ok {
fmt.Printf("Missing global for need entry %q, folder %q\n", nk.name, folder)
success = false
continue
}
if !needsLocally(vl) {
fmt.Printf("Need entry for file we don't need, %q, folder %q\n", nk.name, folder)
success = false
}
}
if d := len(blocklists) - len(usedBlocklists); d > 0 {
fmt.Printf("%d block list entries out of %d needs GC\n", d, len(blocklists))
}
if d := len(versions) - len(usedVersions); d > 0 {
fmt.Printf("%d version entries out of %d needs GC\n", d, len(versions))
}
return nil
}
func needsLocally(vl *dbproto.VersionList) bool {
gfv, gok := vlGetGlobal(vl)
if !gok { // That's weird, but we hardly need something non-existent
return false
}
fv, ok := vlGet(vl, protocol.LocalDeviceID[:])
return db.Need(gfv, ok, protocol.VectorFromWire(fv.Version))
}
// vlGet returns the FileVersion that contains the given device, and whether
// it was found at all.
func vlGet(vl *dbproto.VersionList, device []byte) (*dbproto.FileVersion, bool) {
_, i, _, ok := vlFindDevice(vl, device)
if !ok {
return &dbproto.FileVersion{}, false
}
return vl.Versions[i], true
}
// vlGetGlobal returns the current global FileVersion. The returned FileVersion
// may be invalid, if all FileVersions are invalid. Returns false only if the
// VersionList is empty.
func vlGetGlobal(vl *dbproto.VersionList) (*dbproto.FileVersion, bool) {
i := vlFindGlobal(vl)
if i == -1 {
return nil, false
}
return vl.Versions[i], true
}
// vlFindGlobal returns the index of the first version that isn't invalid, or
// if all versions are invalid just the first version (i.e. 0), or -1 if there
// are no versions at all.
func vlFindGlobal(vl *dbproto.VersionList) int {
for i := range vl.Versions {
if !fvIsInvalid(vl.Versions[i]) {
return i
}
}
if len(vl.Versions) == 0 {
return -1
}
return 0
}
// vlFindDevice returns whether the device is in InvalidDevices or Devices
// (true for invalid), the positions in the version and device slices, and
// whether it was found at all.
func vlFindDevice(vl *dbproto.VersionList, device []byte) (bool, int, int, bool) {
for i, v := range vl.Versions {
if j := deviceIndex(v.Devices, device); j != -1 {
return false, i, j, true
}
if j := deviceIndex(v.InvalidDevices, device); j != -1 {
return true, i, j, true
}
}
return false, -1, -1, false
}
func deviceIndex(devices [][]byte, device []byte) int {
for i, dev := range devices {
if bytes.Equal(device, dev) {
return i
}
}
return -1
}
func fvFirstDevice(fv *dbproto.FileVersion) ([]byte, bool) {
if len(fv.Devices) != 0 {
return fv.Devices[0], true
}
if len(fv.InvalidDevices) != 0 {
return fv.InvalidDevices[0], true
}
return nil, false
}
func fvIsInvalid(fv *dbproto.FileVersion) bool {
return fv == nil || len(fv.Devices) == 0
}
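For orientation, the checker above works in two phases. A single pass over the database loads every key type into in-memory maps (FileInfos, globals, sequences, needs, block lists, version vectors); the rest of the function then cross-validates them: every local FileInfo must have a matching sequence entry, every referenced block list and version vector must exist, every global entry must be backed by FileInfos with matching version, invalid, and deleted state, and need entries must agree with needsLocally. Missing sequence numbers are aggregated into contiguous ranges before printing to keep the output readable.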


@@ -14,12 +14,15 @@ import (
"github.com/alecthomas/kong"
"github.com/kballard/go-shellquote"
"github.com/syncthing/syncthing/cmd/syncthing/cmdutil"
"github.com/syncthing/syncthing/lib/config"
)
type CLI struct {
GUIAddress string `name:"gui-address" env:"STGUIADDRESS"`
GUIAPIKey string `name:"gui-apikey" env:"STGUIAPIKEY"`
cmdutil.CommonOptions
DataDir string `name:"data" placeholder:"PATH" env:"STDATADIR" help:"Set data directory (database and logs)"`
GUIAddress string `name:"gui-address"`
GUIAPIKey string `name:"gui-apikey"`
Show showCommand `cmd:"" help:"Show command group"`
Debug debugCommand `cmd:"" help:"Debug command group"`
@@ -34,6 +37,11 @@ type Context struct {
}
func (cli CLI) AfterApply(kongCtx *kong.Context) error {
err := cmdutil.SetConfigDataLocationsFromFlags(cli.HomeDir, cli.ConfDir, cli.DataDir)
if err != nil {
return fmt.Errorf("command line options: %w", err)
}
clientFactory := &apiClientFactory{
cfg: config.GUIConfiguration{
RawAddress: cli.GUIAddress,


@@ -17,6 +17,8 @@ import (
"path/filepath"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/db/backend"
"github.com/syncthing/syncthing/lib/locations"
)
func responseToBArray(response *http.Response) ([]byte, error) {
@@ -131,6 +133,10 @@ func prettyPrintResponse(response *http.Response) error {
return prettyPrintJSON(data)
}
func getDB() (backend.Backend, error) {
return backend.OpenLevelDBRO(locations.Get(locations.Database))
}
func nulString(bs []byte) string {
for i := range bs {
if bs[i] == 0 {


@@ -0,0 +1,16 @@
// Copyright (C) 2021 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package cmdutil
// CommonOptions are reused among several subcommands
type CommonOptions struct {
buildCommonOptions
ConfDir string `name:"config" placeholder:"PATH" env:"STCONFDIR" help:"Set configuration directory (config and keys)"`
HomeDir string `name:"home" placeholder:"PATH" env:"STHOMEDIR" help:"Set configuration and data directory"`
NoDefaultFolder bool `env:"STNODEFAULTFOLDER" help:"Don't create the \"default\" folder on first startup"`
SkipPortProbing bool `help:"Don't try to find free ports for GUI and listen addresses on first startup"`
}


@@ -7,8 +7,8 @@
//go:build !windows
// +build !windows
package main
package cmdutil
type buildSpecificOptions struct {
type buildCommonOptions struct {
HideConsole bool `hidden:""`
}


@@ -4,8 +4,8 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
package cmdutil
type buildSpecificOptions struct {
HideConsole bool `name:"no-console" help:"Hide console window" env:"STHIDECONSOLE"`
type buildCommonOptions struct {
HideConsole bool `name:"no-console" help:"Hide console window"`
}


@@ -0,0 +1,35 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package cmdutil
import (
"errors"
"github.com/syncthing/syncthing/lib/locations"
)
func SetConfigDataLocationsFromFlags(homeDir, confDir, dataDir string) error {
homeSet := homeDir != ""
confSet := confDir != ""
dataSet := dataDir != ""
switch {
case dataSet != confSet:
return errors.New("either both or none of --config and --data must be given, use --home to set both at once")
case homeSet && dataSet:
return errors.New("--home must not be used together with --config and --data")
case homeSet:
confDir = homeDir
dataDir = homeDir
fallthrough
case dataSet:
if err := locations.SetBaseDir(locations.ConfigBaseDir, confDir); err != nil {
return err
}
return locations.SetBaseDir(locations.DataBaseDir, dataDir)
}
return nil
}
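The switch is terser than it looks. The accepted combinations are: --home X alone (equivalent to --config X --data X, via the fallthrough), --config and --data together, or none of the three. Giving only one of --config and --data is rejected by the first case, and --home combined with either of them is rejected as well.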


@@ -14,7 +14,7 @@ import (
"net/http"
"os"
"path/filepath"
"sort"
"slices"
"strings"
"time"
)
@@ -37,7 +37,9 @@ func uploadPanicLogs(ctx context.Context, urlBase, dir string) {
return
}
sort.Sort(sort.Reverse(sort.StringSlice(files)))
slices.SortFunc(files, func(a, b string) int {
return strings.Compare(b, a)
})
for _, file := range files {
if strings.Contains(file, ".reported.") {
// We've already sent this file. It'll be cleaned out at some
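slices.SortFunc with strings.Compare(b, a) is the descending-order replacement for sort.Sort(sort.Reverse(sort.StringSlice(files))): swapping the operands inverts the comparison, so the lexicographically greatest names, and with timestamped file names presumably the newest panic logs, are uploaded first.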


@@ -238,7 +238,7 @@ func (c *CLI) decryptFile(encFi *protocol.FileInfo, plainFi *protocol.FileInfo,
}
// Verify the hash against the plaintext block info
if !scanner.Validate(dec, plainBlock.Hash) {
if !scanner.Validate(dec, plainBlock.Hash, 0) {
// The block decrypted correctly but fails the hash check. This
// is odd and unexpected, but it's still a valid block from
// the source. The file might have changed while we pulled it?
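As with scanner.Blocks earlier, Validate gains one argument. The trailing 0 lines up with a weak-hash parameter where zero means no weak hash is available, leaving only the strong hash comparison in effect; again an inference from the call site rather than a confirmed signature.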


@@ -11,26 +11,42 @@ import (
"bufio"
"context"
"crypto/tls"
"errors"
"fmt"
"os"
"github.com/syncthing/syncthing/cmd/syncthing/cmdutil"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/locations"
"github.com/syncthing/syncthing/lib/logger"
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/syncthing"
)
type CLI struct {
GUIUser string `placeholder:"STRING" help:"Specify new GUI authentication user name"`
GUIPassword string `placeholder:"STRING" help:"Specify new GUI authentication password (use - to read from standard input)"`
NoDefaultFolder bool `help:"Don't create the \"default\" folder on first startup" env:"STNODEFAULTFOLDER"`
NoPortProbing bool `help:"Don't try to find free ports for GUI and listen addresses on first startup" env:"STNOPORTPROBING"`
cmdutil.CommonOptions
GUIUser string `placeholder:"STRING" help:"Specify new GUI authentication user name"`
GUIPassword string `placeholder:"STRING" help:"Specify new GUI authentication password (use - to read from standard input)"`
}
func (c *CLI) Run(l logger.Logger) error {
if c.HideConsole {
osutil.HideConsole()
}
if c.HomeDir != "" {
if c.ConfDir != "" {
return errors.New("--home must not be used together with --config")
}
c.ConfDir = c.HomeDir
}
if c.ConfDir == "" {
c.ConfDir = locations.GetBaseDir(locations.ConfigBaseDir)
}
// Support reading the password from a pipe or similar
if c.GUIPassword == "-" {
reader := bufio.NewReader(os.Stdin)
@@ -41,7 +57,7 @@ func (c *CLI) Run(l logger.Logger) error {
c.GUIPassword = string(password)
}
if err := Generate(l, locations.GetBaseDir(locations.ConfigBaseDir), c.GUIUser, c.GUIPassword, c.NoDefaultFolder, c.NoPortProbing); err != nil {
if err := Generate(l, c.ConfDir, c.GUIUser, c.GUIPassword, c.NoDefaultFolder, c.SkipPortProbing); err != nil {
return fmt.Errorf("failed to generate config and keys: %w", err)
}
return nil
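In short, generate now pulls in cmdutil.CommonOptions, so --home and --config behave as they do elsewhere (with --home rejected in combination with --config), NoPortProbing becomes SkipPortProbing, and the target directory comes from the resolved ConfDir rather than always the default config location.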


@@ -22,10 +22,10 @@ import (
"path"
"path/filepath"
"regexp"
"runtime"
"runtime/pprof"
"sort"
"slices"
"strconv"
"strings"
"syscall"
"time"
@@ -35,12 +35,13 @@ import (
"github.com/willabides/kongplete"
"github.com/syncthing/syncthing/cmd/syncthing/cli"
"github.com/syncthing/syncthing/cmd/syncthing/cmdutil"
"github.com/syncthing/syncthing/cmd/syncthing/decrypt"
"github.com/syncthing/syncthing/cmd/syncthing/generate"
"github.com/syncthing/syncthing/internal/db"
_ "github.com/syncthing/syncthing/lib/automaxprocs"
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/dialer"
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/fs"
@@ -127,68 +128,53 @@ var (
// The entrypoint struct is the main entry point for the command line parser. The
// commands and options here are top level commands to syncthing.
// Cli is just a placeholder for the help text (see main).
type CLI struct {
// The directory options are defined at top level and available for all
// subcommands. Their settings take effect on the `locations` package by
// way of the command line parser, so anything using `locations.Get` etc
// will be doing the right thing.
ConfDir string `name:"config" short:"C" placeholder:"PATH" env:"STCONFDIR" help:"Set configuration directory (config and keys)"`
DataDir string `name:"data" short:"D" placeholder:"PATH" env:"STDATADIR" help:"Set data directory (database and logs)"`
HomeDir string `name:"home" short:"H" placeholder:"PATH" env:"STHOMEDIR" help:"Set configuration and data directory"`
Serve serveCmd `cmd:"" help:"Run Syncthing (default)" default:"withargs"`
CLI cli.CLI `cmd:"" help:"Command line interface for Syncthing"`
Browser browserCmd `cmd:"" help:"Open GUI in browser, then exit"`
Decrypt decrypt.CLI `cmd:"" help:"Decrypt or verify an encrypted folder"`
DeviceID deviceIDCmd `cmd:"" help:"Show device ID, then exit"`
Generate generate.CLI `cmd:"" help:"Generate key and config, then exit"`
Paths pathsCmd `cmd:"" help:"Show configuration paths, then exit"`
Upgrade upgradeCmd `cmd:"" help:"Perform or check for upgrade, then exit"`
Version versionCmd `cmd:"" help:"Show current version, then exit"`
Debug debugCmd `cmd:"" help:"Various debugging commands"`
var entrypoint struct {
Serve serveOptions `cmd:"" help:"Run Syncthing"`
Generate generate.CLI `cmd:"" help:"Generate key and config, then exit"`
Decrypt decrypt.CLI `cmd:"" help:"Decrypt or verify an encrypted folder"`
Cli cli.CLI `cmd:"" help:"Command line interface for Syncthing"`
InstallCompletions kongplete.InstallCompletions `cmd:"" help:"Print commands to install shell completions"`
}
func (c *CLI) AfterApply() error {
// Executed after parsing command line options but before running actual
// subcommands
return setConfigDataLocationsFromFlags(c.HomeDir, c.ConfDir, c.DataDir)
}
// serveCmd are the options for the `syncthing serve` command.
type serveCmd struct {
buildSpecificOptions
AllowNewerConfig bool `help:"Allow loading newer than current config version" env:"STALLOWNEWERCONFIG"`
Audit bool `help:"Write events to audit file" env:"STAUDIT"`
AuditFile string `name:"auditfile" help:"Specify audit file (use \"-\" for stdout, \"--\" for stderr)" placeholder:"PATH" env:"STAUDITFILE"`
DBMaintenanceInterval time.Duration `help:"Database maintenance interval" default:"8h" env:"STDBMAINTENANCEINTERVAL"`
DBDeleteRetentionInterval time.Duration `help:"Database deleted item retention interval" default:"4320h" env:"STDBDELETERETENTIONINTERVAL"`
GUIAddress string `name:"gui-address" help:"Override GUI address (e.g. \"http://192.0.2.42:8443\")" placeholder:"URL" env:"STGUIADDRESS"`
GUIAPIKey string `name:"gui-apikey" help:"Override GUI API key" placeholder:"API-KEY" env:"STGUIAPIKEY"`
LogFile string `name:"logfile" help:"Log file name (see below)" default:"${logFile}" placeholder:"PATH" env:"STLOGFILE"`
LogFlags int `name:"logflags" help:"Select information in log line prefix (see below)" default:"${logFlags}" placeholder:"BITS" env:"STLOGFLAGS"`
LogMaxFiles int `name:"log-max-old-files" help:"Number of old files to keep (zero to keep only current)" default:"${logMaxFiles}" placeholder:"N" env:"STLOGMAXOLDFILES"`
LogMaxSize int `help:"Maximum size of any file (zero to disable log rotation)" default:"${logMaxSize}" placeholder:"BYTES" env:"STLOGMAXSIZE"`
NoBrowser bool `help:"Do not start browser" env:"STNOBROWSER"`
NoDefaultFolder bool `help:"Don't create the \"default\" folder on first startup" env:"STNODEFAULTFOLDER"`
NoPortProbing bool `help:"Don't try to find free ports for GUI and listen addresses on first startup" env:"STNOPORTPROBING"`
NoRestart bool `help:"Do not restart Syncthing when exiting due to API/GUI command, upgrade, or crash" env:"STNORESTART"`
NoUpgrade bool `help:"Disable automatic upgrades" env:"STNOUPGRADE"`
Paused bool `help:"Start with all devices and folders paused" env:"STPAUSED"`
Unpaused bool `help:"Start with all devices and folders unpaused" env:"STUNPAUSED"`
Verbose bool `help:"Print verbose log output" env:"STVERBOSE"`
// serveOptions are the options for the `syncthing serve` command.
type serveOptions struct {
cmdutil.CommonOptions
AllowNewerConfig bool `help:"Allow loading newer than current config version"`
Audit bool `help:"Write events to audit file"`
AuditFile string `name:"auditfile" placeholder:"PATH" help:"Specify audit file (use \"-\" for stdout, \"--\" for stderr)"`
BrowserOnly bool `help:"Open GUI in browser"`
DataDir string `name:"data" placeholder:"PATH" env:"STDATADIR" help:"Set data directory (database and logs)"`
DeviceID bool `help:"Show the device ID"`
GenerateDir string `name:"generate" placeholder:"PATH" help:"Generate key and config in specified dir, then exit"` // DEPRECATED: replaced by subcommand!
GUIAddress string `name:"gui-address" placeholder:"URL" help:"Override GUI address (e.g. \"http://192.0.2.42:8443\")"`
GUIAPIKey string `name:"gui-apikey" placeholder:"API-KEY" help:"Override GUI API key"`
LogFile string `name:"logfile" default:"${logFile}" placeholder:"PATH" help:"Log file name (see below)"`
LogFlags int `name:"logflags" default:"${logFlags}" placeholder:"BITS" help:"Select information in log line prefix (see below)"`
LogMaxFiles int `placeholder:"N" default:"${logMaxFiles}" name:"log-max-old-files" help:"Number of old files to keep (zero to keep only current)"`
LogMaxSize int `placeholder:"BYTES" default:"${logMaxSize}" help:"Maximum size of any file (zero to disable log rotation)"`
NoBrowser bool `help:"Do not start browser"`
NoRestart bool `env:"STNORESTART" help:"Do not restart Syncthing when exiting due to API/GUI command, upgrade, or crash"`
NoUpgrade bool `env:"STNOUPGRADE" help:"Disable automatic upgrades"`
Paths bool `help:"Show configuration paths"`
Paused bool `help:"Start with all devices and folders paused"`
Unpaused bool `help:"Start with all devices and folders unpaused"`
Upgrade bool `help:"Perform upgrade"`
UpgradeCheck bool `help:"Check for available upgrade"`
UpgradeTo string `placeholder:"URL" help:"Force upgrade directly from specified URL"`
Verbose bool `help:"Print verbose log output"`
Version bool `help:"Show version"`
// Debug options below
DebugGUIAssetsDir string `help:"Directory to load GUI assets from" placeholder:"PATH" env:"STGUIASSETS"`
DebugPerfStats bool `help:"Write running performance statistics to perf-$pid.csv (Unix only)" env:"STPERFSTATS"`
DebugProfileBlock bool `help:"Write block profiles to block-$pid-$timestamp.pprof every 20 seconds" env:"STBLOCKPROFILE"`
DebugProfileCPU bool `help:"Write a CPU profile to cpu-$pid.pprof on exit" env:"STCPUPROFILE"`
DebugProfileHeap bool `help:"Write heap profiles to heap-$pid-$timestamp.pprof each time heap usage increases" env:"STHEAPPROFILE"`
DebugProfilerListen string `help:"Network profiler listen address" placeholder:"ADDR" env:"STPROFILER" `
DebugResetDeltaIdxs bool `help:"Reset delta index IDs, forcing a full index exchange"`
DebugDBIndirectGCInterval time.Duration `env:"STGCINDIRECTEVERY" help:"Database indirection GC interval"`
DebugDBRecheckInterval time.Duration `env:"STRECHECKDBEVERY" help:"Database metadata recalculation interval"`
DebugGUIAssetsDir string `placeholder:"PATH" help:"Directory to load GUI assets from" env:"STGUIASSETS"`
DebugPerfStats bool `env:"STPERFSTATS" help:"Write running performance statistics to perf-$pid.csv (Unix only)"`
DebugProfileBlock bool `env:"STBLOCKPROFILE" help:"Write block profiles to block-$pid-$timestamp.pprof every 20 seconds"`
DebugProfileCPU bool `help:"Write a CPU profile to cpu-$pid.pprof on exit" env:"STCPUPROFILE"`
DebugProfileHeap bool `env:"STHEAPPROFILE" help:"Write heap profiles to heap-$pid-$timestamp.pprof each time heap usage increases"`
DebugProfilerListen string `placeholder:"ADDR" env:"STPROFILER" help:"Network profiler listen address"`
DebugResetDatabase bool `name:"reset-database" help:"Reset the database, forcing a full rescan and resync"`
DebugResetDeltaIdxs bool `name:"reset-deltas" help:"Reset delta index IDs, forcing a full index exchange"`
// Internal options, not shown to users
InternalRestarting bool `env:"STRESTART" hidden:"1"`
@@ -220,9 +206,31 @@ func defaultVars() kong.Vars {
}
func main() {
// First some massaging of the raw command line to fit the new model.
// Basically this means adding the default command at the front, and
// converting -options to --options.
args := os.Args[1:]
switch {
case len(args) == 0:
// Empty command line is equivalent to just calling serve
args = []string{"serve"}
case args[0] == "-help":
// For consistency, we consider this equivalent with --help even
// though kong would otherwise consider it a bad flag.
args[0] = "--help"
case args[0] == "-h", args[0] == "--help":
// Top level request for help, let it pass as-is to be handled by
// kong to list commands.
case strings.HasPrefix(args[0], "-"):
// There are flags not preceded by a command, so we tack on the
// "serve" command and convert the old style arguments (single dash)
// to new style (double dash).
args = append([]string{"serve"}, convertLegacyArgs(args)...)
}
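convertLegacyArgs is referenced here but its body lies outside this hunk. A purely illustrative sketch consistent with the comment (single-dash options become double-dash); this is an assumption about its behavior, not the actual implementation:

package main

import (
	"fmt"
	"strings"
)

// convertLegacyArgs rewrites old-style single-dash flags to the
// double-dash form kong expects; hypothetical stand-in only.
func convertLegacyArgs(args []string) []string {
	out := make([]string, len(args))
	for i, a := range args {
		if strings.HasPrefix(a, "-") && !strings.HasPrefix(a, "--") {
			out[i] = "-" + a
		} else {
			out[i] = a
		}
	}
	return out
}

func main() {
	fmt.Println(convertLegacyArgs([]string{"-no-browser", "--verbose"}))
	// [--no-browser --verbose]
}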
// Create a parser with an overridden help function to print our extra
// help info.
var entrypoint CLI
parser, err := kong.New(
&entrypoint,
kong.ConfigureHelp(kong.HelpOptions{
@@ -237,7 +245,7 @@ func main() {
}
kongplete.Complete(parser)
ctx, err := parser.Parse(os.Args[1:])
ctx, err := parser.Parse(args)
parser.FatalIfErrorf(err)
ctx.BindTo(l, (*logger.Logger)(nil)) // main logger available to subcommands
err = ctx.Run()
@@ -256,54 +264,154 @@ func helpHandler(options kong.HelpOptions, ctx *kong.Context) error {
return nil
}
// serveCmd.Run() is the entrypoint for `syncthing serve`
func (c *serveCmd) Run() error {
l.SetFlags(c.LogFlags)
// serveOptions.Run() is the entrypoint for `syncthing serve`
func (options serveOptions) Run() error {
l.SetFlags(options.LogFlags)
if c.GUIAddress != "" {
if options.GUIAddress != "" {
// The config picks this up from the environment.
os.Setenv("STGUIADDRESS", c.GUIAddress)
os.Setenv("STGUIADDRESS", options.GUIAddress)
}
if c.GUIAPIKey != "" {
if options.GUIAPIKey != "" {
// The config picks this up from the environment.
os.Setenv("STGUIAPIKEY", c.GUIAPIKey)
os.Setenv("STGUIAPIKEY", options.GUIAPIKey)
}
if c.HideConsole {
if options.HideConsole {
osutil.HideConsole()
}
// Treat an explicitly empty log file name as no log file
if c.LogFile == "" {
c.LogFile = "-"
// Not set as default above because the strings can be really long.
err := cmdutil.SetConfigDataLocationsFromFlags(options.HomeDir, options.ConfDir, options.DataDir)
if err != nil {
l.Warnln("Command line options:", err)
os.Exit(svcutil.ExitError.AsInt())
}
if c.LogFile != "default" {
// Treat an explicitly empty log file name as no log file
if options.LogFile == "" {
options.LogFile = "-"
}
if options.LogFile != "default" {
// We must set this *after* expandLocations above.
if err := locations.Set(locations.LogFile, c.LogFile); err != nil {
if err := locations.Set(locations.LogFile, options.LogFile); err != nil {
l.Warnln("Setting log file path:", err)
os.Exit(svcutil.ExitError.AsInt())
}
}
if c.DebugGUIAssetsDir != "" {
if options.DebugGUIAssetsDir != "" {
// The asset dir is blank if STGUIASSETS wasn't set, in which case we
// should look for extra assets in the default place.
if err := locations.Set(locations.GUIAssets, c.DebugGUIAssetsDir); err != nil {
if err := locations.Set(locations.GUIAssets, options.DebugGUIAssetsDir); err != nil {
l.Warnln("Setting GUI assets path:", err)
os.Exit(svcutil.ExitError.AsInt())
}
}
// Ensure that our home directory exists.
if err := syncthing.EnsureDir(locations.GetBaseDir(locations.ConfigBaseDir), 0o700); err != nil {
l.Warnln("Failure on home directory:", err)
os.Exit(svcutil.ExitError.AsInt())
if options.Version {
fmt.Println(build.LongVersion)
return nil
}
if c.InternalInnerProcess {
c.syncthingMain()
if options.Paths {
fmt.Print(locations.PrettyPaths())
return nil
}
if options.DeviceID {
cert, err := tls.LoadX509KeyPair(
locations.Get(locations.CertFile),
locations.Get(locations.KeyFile),
)
if err != nil {
l.Warnln("Error reading device ID:", err)
os.Exit(svcutil.ExitError.AsInt())
}
fmt.Println(protocol.NewDeviceID(cert.Certificate[0]))
return nil
}
if options.BrowserOnly {
if err := openGUI(); err != nil {
l.Warnln("Failed to open web UI:", err)
os.Exit(svcutil.ExitError.AsInt())
}
return nil
}
if options.GenerateDir != "" {
if err := generate.Generate(l, options.GenerateDir, "", "", options.NoDefaultFolder, options.SkipPortProbing); err != nil {
l.Warnln("Failed to generate config and keys:", err)
os.Exit(svcutil.ExitError.AsInt())
}
return nil
}
// Ensure that our config and data directories exist.
for _, loc := range []locations.BaseDirEnum{locations.ConfigBaseDir, locations.DataBaseDir} {
if err := syncthing.EnsureDir(locations.GetBaseDir(loc), 0o700); err != nil {
l.Warnln("Failed to ensure directory exists:", err)
os.Exit(svcutil.ExitError.AsInt())
}
}
if options.UpgradeTo != "" {
err := upgrade.ToURL(options.UpgradeTo)
if err != nil {
l.Warnln("Error while Upgrading:", err)
os.Exit(svcutil.ExitError.AsInt())
}
l.Infoln("Upgraded from", options.UpgradeTo)
return nil
}
if options.UpgradeCheck {
if _, err := checkUpgrade(); err != nil {
l.Warnln("Checking for upgrade:", err)
os.Exit(exitCodeForUpgrade(err))
}
return nil
}
if options.Upgrade {
release, err := checkUpgrade()
if err == nil {
lf := flock.New(locations.Get(locations.LockFile))
locked, err := lf.TryLock()
if err != nil {
l.Warnln("Upgrade:", err)
os.Exit(1)
} else if locked {
err = upgradeViaRest()
} else {
err = upgrade.To(release)
}
_ = lf.Unlock()
_ = os.Remove(locations.Get(locations.LockFile))
}
if err != nil {
l.Warnln("Upgrade:", err)
os.Exit(exitCodeForUpgrade(err))
}
l.Infof("Upgraded to %q", release.Tag)
os.Exit(svcutil.ExitUpgrade.AsInt())
}
if options.DebugResetDatabase {
if err := resetDB(); err != nil {
l.Warnln("Resetting database:", err)
os.Exit(svcutil.ExitError.AsInt())
}
l.Infoln("Successfully reset database - it will be rebuilt after next start.")
return nil
}
if options.InternalInnerProcess {
syncthingMain(options)
} else {
c.monitorMain()
monitorMain(options)
}
return nil
}
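`Run` ends by choosing between the two halves of the process pair: the monitor, which supervises and restarts, and the inner process, which does the real work. A generic sketch of that re-exec pattern, assuming a marker environment variable; `EXAMPLE_INNER` is a stand-in name, since the variable kong binds to `InternalInnerProcess` is not shown in this diff:

```go
package main

import (
	"os"
	"os/exec"
)

func main() {
	if os.Getenv("EXAMPLE_INNER") != "" {
		runService() // corresponds to syncthingMain(options)
		return
	}
	// Corresponds to monitorMain(options): re-exec ourselves with the
	// marker set and restart on failure. The real monitor also inspects
	// exit codes and backs off between restarts.
	for {
		cmd := exec.Command(os.Args[0], os.Args[1:]...)
		cmd.Env = append(os.Environ(), "EXAMPLE_INNER=1")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err == nil {
			return // clean exit: stop restarting
		}
	}
}

func runService() { /* ... the actual long-running work ... */ }
```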
@@ -335,7 +443,7 @@ func debugFacilities() string {
maxLen = len(name)
}
}
sort.Strings(names)
slices.Sort(names)
// Format the choices
b := new(bytes.Buffer)
@@ -412,14 +520,14 @@ func upgradeViaRest() error {
return err
}
func (c *serveCmd) syncthingMain() {
if c.DebugProfileBlock {
func syncthingMain(options serveOptions) {
if options.DebugProfileBlock {
startBlockProfiler()
}
if c.DebugProfileHeap {
if options.DebugProfileHeap {
startHeapProfiler()
}
if c.DebugPerfStats {
if options.DebugPerfStats {
startPerfStats()
}
@@ -442,7 +550,7 @@ func (c *serveCmd) syncthingMain() {
}
// Ensure we are the only running instance
lf := flock.New(locations.Get(locations.CertFile))
lf := flock.New(locations.Get(locations.LockFile))
locked, err := lf.TryLock()
if err != nil {
l.Warnln("Failed to acquire lock:", err)
@@ -464,18 +572,19 @@ func (c *serveCmd) syncthingMain() {
evLogger := events.NewLogger()
earlyService.Add(evLogger)
cfgWrapper, err := syncthing.LoadConfigAtStartup(locations.Get(locations.ConfigFile), cert, evLogger, c.AllowNewerConfig, c.NoDefaultFolder, c.NoPortProbing)
cfgWrapper, err := syncthing.LoadConfigAtStartup(locations.Get(locations.ConfigFile), cert, evLogger, options.AllowNewerConfig, options.NoDefaultFolder, options.SkipPortProbing)
if err != nil {
l.Warnln("Failed to initialize config:", err)
os.Exit(svcutil.ExitError.AsInt())
}
earlyService.Add(cfgWrapper)
config.RegisterInfoMetrics(cfgWrapper)
// Candidate builds should auto upgrade. Make sure the option is set,
// unless we are in a build where it's disabled or the STNOUPGRADE
// environment variable is set.
if build.IsCandidate && !upgrade.DisabledByCompilation && !c.NoUpgrade {
if build.IsCandidate && !upgrade.DisabledByCompilation && !options.NoUpgrade {
cfgWrapper.Modify(func(cfg *config.Configuration) {
l.Infoln("Automatic upgrade is always enabled for candidate releases.")
if cfg.Options.AutoUpgradeIntervalH == 0 || cfg.Options.AutoUpgradeIntervalH > 24 {
@@ -488,12 +597,8 @@ func (c *serveCmd) syncthingMain() {
})
}
if err := syncthing.TryMigrateDatabase(c.DBDeleteRetentionInterval); err != nil {
l.Warnln("Failed to migrate old-style database:", err)
os.Exit(1)
}
sdb, err := syncthing.OpenDatabase(locations.Get(locations.Database), c.DBDeleteRetentionInterval)
dbFile := locations.Get(locations.Database)
ldb, err := syncthing.OpenDBBackend(dbFile, cfgWrapper.Options().DatabaseTuning)
if err != nil {
l.Warnln("Error opening database:", err)
os.Exit(1)
@@ -502,11 +607,11 @@ func (c *serveCmd) syncthingMain() {
// Check if auto-upgrades is possible, and if yes, and it's enabled do an initial
// upgrade immediately. The auto-upgrade routine can only be started
// later after App is initialised.
autoUpgradePossible := c.autoUpgradePossible()
autoUpgradePossible := autoUpgradePossible(options)
if autoUpgradePossible && cfgWrapper.Options().AutoUpgradeEnabled() {
// try to do upgrade directly and log the error if relevant.
miscDB := db.NewMiscDB(sdb)
release, err := initialAutoUpgradeCheck(miscDB)
release, err := initialAutoUpgradeCheck(db.NewMiscDataNamespace(ldb))
if err == nil {
err = upgrade.To(release)
}
@@ -517,29 +622,48 @@ func (c *serveCmd) syncthingMain() {
l.Infoln("Initial automatic upgrade:", err)
}
} else {
l.Infof("Upgraded to %q, should exit now.", release.Tag)
l.Infof("Upgraded to %q, exiting now.", release.Tag)
os.Exit(svcutil.ExitUpgrade.AsInt())
}
}
if c.Unpaused {
if options.Unpaused {
setPauseState(cfgWrapper, false)
} else if c.Paused {
} else if options.Paused {
setPauseState(cfgWrapper, true)
}
appOpts := syncthing.Options{
NoUpgrade: c.NoUpgrade,
ProfilerAddr: c.DebugProfilerListen,
ResetDeltaIdxs: c.DebugResetDeltaIdxs,
Verbose: c.Verbose,
DBMaintenanceInterval: c.DBMaintenanceInterval,
}
if c.Audit {
appOpts.AuditWriter = auditWriter(c.AuditFile)
NoUpgrade: options.NoUpgrade,
ProfilerAddr: options.DebugProfilerListen,
ResetDeltaIdxs: options.DebugResetDeltaIdxs,
Verbose: options.Verbose,
DBRecheckInterval: options.DebugDBRecheckInterval,
DBIndirectGCInterval: options.DebugDBIndirectGCInterval,
}
app, err := syncthing.New(cfgWrapper, sdb, evLogger, cert, appOpts)
if options.Audit || cfgWrapper.Options().AuditEnabled {
l.Infoln("Auditing is enabled.")
auditFile := cfgWrapper.Options().AuditFile
// Ignore config option if command-line option is set
if options.AuditFile != "" {
l.Debugln("Using the audit file from the command-line parameter.")
auditFile = options.AuditFile
}
appOpts.AuditWriter = auditWriter(auditFile)
}
if dur, err := time.ParseDuration(os.Getenv("STRECHECKDBEVERY")); err == nil {
appOpts.DBRecheckInterval = dur
}
if dur, err := time.ParseDuration(os.Getenv("STGCINDIRECTEVERY")); err == nil {
appOpts.DBIndirectGCInterval = dur
}
app, err := syncthing.New(cfgWrapper, ldb, evLogger, cert, appOpts)
if err != nil {
l.Warnln("Failed to start Syncthing:", err)
os.Exit(svcutil.ExitError.AsInt())
@@ -551,7 +675,7 @@ func (c *serveCmd) syncthingMain() {
setupSignalHandling(app)
if c.DebugProfileCPU {
if options.DebugProfileCPU {
f, err := os.Create(fmt.Sprintf("cpu-%d.pprof", os.Getpid()))
if err != nil {
l.Warnln("Creating profile:", err)
@@ -569,7 +693,7 @@ func (c *serveCmd) syncthingMain() {
cleanConfigDirectory()
if cfgWrapper.Options().StartBrowser && !c.NoBrowser && !c.InternalRestarting {
if cfgWrapper.Options().StartBrowser && !options.NoBrowser && !options.InternalRestarting {
// Can potentially block if the utility we are invoking doesn't
// fork, and just execs, hence keep it in its own routine.
go func() { _ = openURL(cfgWrapper.GUI().URL()) }()
@@ -581,11 +705,14 @@ func (c *serveCmd) syncthingMain() {
l.Warnln("Syncthing stopped with error:", app.Error())
}
if c.DebugProfileCPU {
if options.DebugProfileCPU {
pprof.StopCPUProfile()
}
runtime.KeepAlive(lf) // ensure lock is still held to this point
// Best effort remove lockfile, doesn't matter if it succeeds
_ = lf.Unlock()
_ = os.Remove(locations.Get(locations.LockFile))
os.Exit(int(status))
}
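Note the lock handling at the end: the diff moves the single-instance lock from the certificate file to a dedicated `locations.LockFile`, released and removed best-effort on the way out. The pattern with `github.com/gofrs/flock`, using an illustrative path:

```go
package main

import (
	"fmt"
	"os"

	"github.com/gofrs/flock"
)

func main() {
	const lockPath = "/tmp/example-syncthing.lock" // illustrative
	lf := flock.New(lockPath)
	locked, err := lf.TryLock()
	if err != nil {
		fmt.Fprintln(os.Stderr, "acquiring lock:", err)
		os.Exit(1)
	}
	if !locked {
		fmt.Fprintln(os.Stderr, "another instance is already running")
		os.Exit(1)
	}
	defer func() {
		// Best effort release; it doesn't matter if removal fails.
		_ = lf.Unlock()
		_ = os.Remove(lockPath)
	}()
	// ... run ...
}
```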
@@ -655,11 +782,15 @@ func auditWriter(auditFile string) io.Writer {
return fd
}
func (c *serveCmd) autoUpgradePossible() bool {
func resetDB() error {
return os.RemoveAll(locations.Get(locations.Database))
}
func autoUpgradePossible(options serveOptions) bool {
if upgrade.DisabledByCompilation {
return false
}
if c.NoUpgrade {
if options.NoUpgrade {
l.Infof("No automatic upgrades; STNOUPGRADE environment variable defined.")
return false
}
@@ -723,7 +854,7 @@ func autoUpgrade(cfg config.Wrapper, app *syncthing.App, evLogger events.Logger)
}
}
func initialAutoUpgradeCheck(misc *db.Typed) (upgrade.Release, error) {
func initialAutoUpgradeCheck(misc *db.NamespacedKV) (upgrade.Release, error) {
if last, ok, err := misc.Time(upgradeCheckKey); err == nil && ok && time.Since(last) < upgradeCheckInterval {
return upgrade.Release{}, errTooEarlyUpgradeCheck
}
@@ -830,127 +961,3 @@ func convertLegacyArgs(args []string) []string {
return res
}
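`convertLegacyArgs` is called from `main` above but its body is elided from this diff. A plausible sketch of the single-dash to double-dash conversion it performs; treat this as an assumption about its shape, not the actual implementation:

```go
package main

import "fmt"

func convertLegacyArgsSketch(args []string) []string {
	res := make([]string, len(args))
	for i, arg := range args {
		if len(arg) > 1 && arg[0] == '-' && arg[1] != '-' {
			res[i] = "-" + arg // -paused becomes --paused
		} else {
			res[i] = arg
		}
	}
	return res
}

func main() {
	fmt.Println(convertLegacyArgsSketch([]string{"-paused", "--verbose", "value"}))
	// [--paused --verbose value]
}
```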
type versionCmd struct{}
func (versionCmd) Run() error {
fmt.Println(build.LongVersion)
return nil
}
type deviceIDCmd struct{}
func (deviceIDCmd) Run() error {
cert, err := tls.LoadX509KeyPair(
locations.Get(locations.CertFile),
locations.Get(locations.KeyFile),
)
if err != nil {
l.Warnln("Error reading device ID:", err)
os.Exit(svcutil.ExitError.AsInt())
}
fmt.Println(protocol.NewDeviceID(cert.Certificate[0]))
return nil
}
type pathsCmd struct{}
func (pathsCmd) Run() error {
fmt.Print(locations.PrettyPaths())
return nil
}
type upgradeCmd struct {
CheckOnly bool `short:"c" help:"Check for available upgrade, then exit"`
From string `short:"u" placeholder:"URL" help:"Force upgrade directly from specified URL"`
}
func (u upgradeCmd) Run() error {
if u.CheckOnly {
if _, err := checkUpgrade(); err != nil {
l.Warnln("Checking for upgrade:", err)
os.Exit(exitCodeForUpgrade(err))
}
return nil
}
if u.From != "" {
err := upgrade.ToURL(u.From)
if err != nil {
l.Warnln("Error while Upgrading:", err)
os.Exit(svcutil.ExitError.AsInt())
}
l.Infoln("Upgraded from", u.From)
return nil
}
release, err := checkUpgrade()
if err == nil {
lf := flock.New(locations.Get(locations.CertFile))
locked, err := lf.TryLock()
if err != nil {
l.Warnln("Upgrade:", err)
os.Exit(1)
} else if locked {
err = upgradeViaRest()
} else {
err = upgrade.To(release)
}
}
if err != nil {
l.Warnln("Upgrade:", err)
os.Exit(exitCodeForUpgrade(err))
}
l.Infof("Upgraded to %q", release.Tag)
os.Exit(svcutil.ExitUpgrade.AsInt())
return nil
}
type browserCmd struct{}
func (browserCmd) Run() error {
if err := openGUI(); err != nil {
l.Warnln("Failed to open web UI:", err)
os.Exit(svcutil.ExitError.AsInt())
}
return nil
}
type debugCmd struct {
ResetDatabase resetDatabaseCmd `cmd:"" help:"Reset the database, forcing a full rescan and resync"`
}
type resetDatabaseCmd struct{}
func (resetDatabaseCmd) Run() error {
l.Infoln("Removing database in", locations.Get(locations.Database))
if err := os.RemoveAll(locations.Get(locations.Database)); err != nil {
l.Warnln("Resetting database:", err)
os.Exit(svcutil.ExitError.AsInt())
}
l.Infoln("Successfully reset database - it will be rebuilt after next start.")
return nil
}
func setConfigDataLocationsFromFlags(homeDir, confDir, dataDir string) error {
homeSet := homeDir != ""
confSet := confDir != ""
dataSet := dataDir != ""
switch {
case dataSet != confSet:
return errors.New("either both or none of --config and --data must be given, use --home to set both at once")
case homeSet && dataSet:
return errors.New("--home must not be used together with --config and --data")
case homeSet:
confDir = homeDir
dataDir = homeDir
fallthrough
case dataSet:
if err := locations.SetBaseDir(locations.ConfigBaseDir, confDir); err != nil {
return err
}
return locations.SetBaseDir(locations.DataBaseDir, dataDir)
}
return nil
}
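The helper above (superseded by `cmdutil.SetConfigDataLocationsFromFlags`) enforces a small truth table over the three directory flags: `--config` and `--data` must be given together, and `--home` is shorthand for both and cannot be combined with them. A self-contained sketch of just the validation rules:

```go
package main

import (
	"errors"
	"fmt"
)

// validateDirFlags reproduces only the validation part of the helper
// above, as a standalone illustration of its truth table.
func validateDirFlags(home, conf, data string) error {
	homeSet, confSet, dataSet := home != "", conf != "", data != ""
	switch {
	case dataSet != confSet:
		return errors.New("either both or none of --config and --data must be given")
	case homeSet && dataSet:
		return errors.New("--home must not be used together with --config and --data")
	}
	return nil
}

func main() {
	fmt.Println(validateDirFlags("", "", ""))       // <nil>: platform defaults
	fmt.Println(validateDirFlags("/p", "", ""))     // <nil>: --home covers both
	fmt.Println(validateDirFlags("", "/c", "/d"))   // <nil>: explicit pair
	fmt.Println(validateDirFlags("", "/c", ""))     // error: --config without --data
	fmt.Println(validateDirFlags("/p", "/c", "/d")) // error: --home plus the pair
}
```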


@@ -43,7 +43,7 @@ const (
panicUploadNoticeWait = 10 * time.Second
)
func (c *serveCmd) monitorMain() {
func monitorMain(options serveOptions) {
l.SetPrefix("[monitor] ")
var dst io.Writer = os.Stdout
@@ -58,8 +58,8 @@ func (c *serveCmd) monitorMain() {
open := func(name string) (io.WriteCloser, error) {
return newAutoclosedFile(name, logFileAutoCloseDelay, logFileMaxOpenTime)
}
if c.LogMaxSize > 0 {
fileDst, err = newRotatedFile(logFile, open, int64(c.LogMaxSize), c.LogMaxFiles)
if options.LogMaxSize > 0 {
fileDst, err = newRotatedFile(logFile, open, int64(options.LogMaxSize), options.LogMaxFiles)
} else {
fileDst, err = open(logFile)
}
@@ -178,7 +178,7 @@ func (c *serveCmd) monitorMain() {
if exiterr, ok := err.(*exec.ExitError); ok {
exitCode := exiterr.ExitCode()
if stopped || c.NoRestart {
if stopped || options.NoRestart {
os.Exit(exitCode)
}
if exitCode == svcutil.ExitUpgrade.AsInt() {
@@ -192,7 +192,7 @@ func (c *serveCmd) monitorMain() {
}
}
if c.NoRestart {
if options.NoRestart {
os.Exit(svcutil.ExitError.AsInt())
}
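These fragments show the monitor's restart policy keyed off the child's exit code: a deliberate stop or `--no-restart` always wins, an upgrade exit restarts to pick up the new binary, and otherwise only failures trigger a restart. A condensed sketch of that decision, with assumed exit-code values standing in for `svcutil`'s:

```go
package main

import "fmt"

// Stand-in exit codes; the real values come from svcutil and are not
// shown in this diff, so treat these as assumptions.
const (
	exitSuccess = 0
	exitUpgrade = 3 // assumed: svcutil.ExitUpgrade in the real code
)

func shouldRestart(exitCode int, stopped, noRestart bool) bool {
	if stopped || noRestart {
		return false // deliberate stop, or restarts disabled
	}
	if exitCode == exitUpgrade {
		return true // binary was replaced; restart to pick it up
	}
	return exitCode != exitSuccess // otherwise restart only after failures
}

func main() {
	fmt.Println(shouldRestart(exitUpgrade, false, false)) // true
	fmt.Println(shouldRestart(exitSuccess, false, false)) // false
	fmt.Println(shouldRestart(1, false, true))            // false: --no-restart
}
```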
@@ -238,19 +238,18 @@ func copyStderr(stderr io.Reader, dst io.Writer) {
return
}
if panicFd == nil {
dst.Write([]byte(line))
dst.Write([]byte(line))
if strings.HasPrefix(line, "panic:") || strings.HasPrefix(line, "fatal error:") {
panicFd, err = os.Create(locations.GetTimestamped(locations.PanicLog))
if err != nil {
l.Warnln("Create panic log:", err)
continue
}
if panicFd == nil && (strings.HasPrefix(line, "panic:") || strings.HasPrefix(line, "fatal error:")) {
panicFd, err = os.Create(locations.GetTimestamped(locations.PanicLog))
if err != nil {
l.Warnln("Create panic log:", err)
continue
}
l.Warnf("Panic detected, writing to \"%s\"", panicFd.Name())
if strings.Contains(line, "leveldb") && strings.Contains(line, "corrupt") {
l.Warnln(`
l.Warnf("Panic detected, writing to \"%s\"", panicFd.Name())
if strings.Contains(line, "leveldb") && strings.Contains(line, "corrupt") {
l.Warnln(`
*********************************************************************************
* Crash due to corrupt database. *
* *
@@ -263,22 +262,21 @@ func copyStderr(stderr io.Reader, dst io.Writer) {
* https://docs.syncthing.net/users/faq.html#my-syncthing-database-is-corrupt *
*********************************************************************************
`)
} else {
l.Warnln("Please check for existing issues with similar panic message at https://github.com/syncthing/syncthing/issues/")
l.Warnln("If no issue with similar panic message exists, please create a new issue with the panic log attached")
}
stdoutMut.Lock()
for _, line := range stdoutFirstLines {
panicFd.WriteString(line)
}
panicFd.WriteString("...\n")
for _, line := range stdoutLastLines {
panicFd.WriteString(line)
}
stdoutMut.Unlock()
} else {
l.Warnln("Please check for existing issues with similar panic message at https://github.com/syncthing/syncthing/issues/")
l.Warnln("If no issue with similar panic message exists, please create a new issue with the panic log attached")
}
stdoutMut.Lock()
for _, line := range stdoutFirstLines {
panicFd.WriteString(line)
}
panicFd.WriteString("...\n")
for _, line := range stdoutLastLines {
panicFd.WriteString(line)
}
stdoutMut.Unlock()
panicFd.WriteString("Panic at " + time.Now().Format(time.RFC3339) + "\n")
}
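The reshuffled block above removes the nil-descriptor hazard: the line is echoed to `dst` first, and `panicFd` is only dereferenced in the branch where `os.Create` succeeded. A condensed, runnable sketch of the fixed flow; the file name and surrounding plumbing are illustrative:

```go
package main

import (
	"bufio"
	"io"
	"os"
	"strings"
	"time"
)

func copyStderrSketch(stderr io.Reader, dst io.Writer) {
	br := bufio.NewScanner(stderr)
	var panicFd *os.File
	for br.Scan() {
		line := br.Text() + "\n"
		if panicFd != nil {
			panicFd.WriteString(line) // already in panic mode
			continue
		}
		dst.Write([]byte(line))
		if strings.HasPrefix(line, "panic:") || strings.HasPrefix(line, "fatal error:") {
			fd, err := os.Create("panic.log") // illustrative path
			if err != nil {
				continue // keep echoing to dst; never touch a nil fd
			}
			panicFd = fd
			panicFd.WriteString("Panic at " + time.Now().Format(time.RFC3339) + "\n")
		}
	}
}

func main() {
	copyStderrSketch(strings.NewReader("ok\npanic: boom\nstack...\n"), os.Stdout)
}
```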


@@ -16,9 +16,7 @@ import (
"syscall"
"time"
"github.com/syncthing/syncthing/lib/locations"
"github.com/syncthing/syncthing/lib/protocol"
"golang.org/x/exp/constraints"
)
func startPerfStats() {
@@ -31,68 +29,37 @@ func savePerfStats(file string) {
panic(err)
}
var prevTime time.Time
var curRus, prevRus syscall.Rusage
var curMem, prevMem runtime.MemStats
var prevUsage int64
var prevTime int64
var rusage syscall.Rusage
var memstats runtime.MemStats
var prevIn, prevOut int64
t0 := time.Now()
syscall.Getrusage(syscall.RUSAGE_SELF, &prevRus)
runtime.ReadMemStats(&prevMem)
fmt.Fprintf(fd, "TIME_S\tCPU_S\tHEAP_KIB\tRSS_KIB\tNETIN_KBPS\tNETOUT_KBPS\tDBSIZE_KIB\n")
for t := range time.NewTicker(250 * time.Millisecond).C {
syscall.Getrusage(syscall.RUSAGE_SELF, &curRus)
runtime.ReadMemStats(&curMem)
in, out := protocol.TotalInOut()
timeDiff := t.Sub(prevTime)
fmt.Fprintf(fd, "%.03f\t%f\t%d\t%d\t%.0f\t%.0f\t%d\n",
t.Sub(t0).Seconds(),
rate(cpusec(&prevRus), cpusec(&curRus), timeDiff, 1),
(curMem.Sys-curMem.HeapReleased)/1024,
curRus.Maxrss/1024,
rate(prevIn, in, timeDiff, 1e3),
rate(prevOut, out, timeDiff, 1e3),
dirsize(locations.Get(locations.Database))/1024,
)
prevTime = t
prevRus = curRus
prevMem = curMem
prevIn, prevOut = in, out
}
}
func cpusec(r *syscall.Rusage) float64 {
return float64(r.Utime.Nano()+r.Stime.Nano()) / float64(time.Second)
}
type number interface {
constraints.Float | constraints.Integer
}
func rate[T number](prev, cur T, d time.Duration, div float64) float64 {
diff := cur - prev
rate := float64(diff) / d.Seconds() / div
return rate
}
func dirsize(location string) int64 {
entries, err := os.ReadDir(location)
if err != nil {
return 0
}
var size int64
for _, entry := range entries {
fi, err := entry.Info()
if err != nil {
if err := syscall.Getrusage(syscall.RUSAGE_SELF, &rusage); err != nil {
continue
}
size += fi.Size()
}
return size
curTime := time.Now().UnixNano()
timeDiff := curTime - prevTime
curUsage := rusage.Utime.Nano() + rusage.Stime.Nano()
usageDiff := curUsage - prevUsage
cpuUsagePercent := 100 * float64(usageDiff) / float64(timeDiff)
prevTime = curTime
prevUsage = curUsage
in, out := protocol.TotalInOut()
var inRate, outRate float64
if timeDiff > 0 {
inRate = float64(in-prevIn) / (float64(timeDiff) / 1e9) // bytes per second
outRate = float64(out-prevOut) / (float64(timeDiff) / 1e9) // bytes per second
}
prevIn, prevOut = in, out
runtime.ReadMemStats(&memstats)
startms := int(t.Sub(t0).Seconds() * 1000)
fmt.Fprintf(fd, "%d\t%f\t%d\t%d\t%.0f\t%.0f\n", startms, cpuUsagePercent, memstats.Alloc, memstats.Sys-memstats.HeapReleased, inRate, outRate)
}
}
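The rewritten sampler derives rates from raw deltas divided by the elapsed wall-clock time in nanoseconds. A worked example of that arithmetic, with made-up numbers:

```go
package main

import "fmt"

func main() {
	var (
		timeDiff  int64 = 250_000_000 // 250 ms between samples, in ns
		usageDiff int64 = 50_000_000  // 50 ms of CPU time consumed
		inDiff    int64 = 1_000_000   // bytes received since last sample
	)
	cpuPercent := 100 * float64(usageDiff) / float64(timeDiff)
	inRate := float64(inDiff) / (float64(timeDiff) / 1e9) // bytes per second
	fmt.Printf("cpu=%.0f%% in=%.0f B/s\n", cpuPercent, inRate)
	// cpu=20% in=4000000 B/s
}
```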

go.mod

@@ -4,80 +4,79 @@ go 1.23.0
require (
github.com/AudriusButkevicius/recli v0.0.7-0.20220911121932-d000ce8fbf0f
github.com/alecthomas/kong v1.10.0
github.com/aws/aws-sdk-go v1.55.6
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.1
github.com/alecthomas/kong v1.11.0
github.com/aws/aws-sdk-go v1.55.7
github.com/calmh/incontainer v1.0.0
github.com/calmh/xdr v1.2.0
github.com/ccding/go-stun v0.1.5
github.com/chmduquesne/rollinghash v4.0.0+incompatible
github.com/d4l3k/messagediff v1.2.1
github.com/getsentry/raven-go v0.2.0
github.com/go-ldap/ldap/v3 v3.4.10
github.com/go-ldap/ldap/v3 v3.4.11
github.com/gobwas/glob v0.2.3
github.com/gofrs/flock v0.12.1
github.com/greatroar/blobloom v0.8.0
github.com/hashicorp/golang-lru/v2 v2.0.7
github.com/jackpal/gateway v1.0.16
github.com/jackpal/go-nat-pmp v1.0.2
github.com/jmoiron/sqlx v1.4.0
github.com/julienschmidt/httprouter v1.3.0
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51
github.com/maruel/panicparse/v2 v2.5.0
github.com/mattn/go-sqlite3 v1.14.27
github.com/maxbrunsfeld/counterfeiter/v6 v6.11.2
github.com/maxmind/geoipupdate/v6 v6.1.0
github.com/miscreant/miscreant.go v0.0.0-20200214223636-26d376326b75
github.com/oschwald/geoip2-golang v1.11.0
github.com/pierrec/lz4/v4 v4.1.22
github.com/prometheus/client_golang v1.21.1
github.com/prometheus/client_golang v1.22.0
github.com/puzpuzpuz/xsync/v3 v3.5.1
github.com/quic-go/quic-go v0.50.1
github.com/quic-go/quic-go v0.52.0
github.com/rabbitmq/amqp091-go v1.10.0
github.com/rcrowley/go-metrics v0.0.0-20250401214520-65e299d6c5c9
github.com/shirou/gopsutil/v4 v4.25.3
github.com/syncthing/notify v0.0.0-20250207082249-f0fa8f99c2bc
github.com/shirou/gopsutil/v4 v4.25.4
github.com/syncthing/notify v0.0.0-20250528144937-c7027d4f7465
github.com/syndtr/goleveldb v1.0.1-0.20220721030215-126854af5e6d
github.com/thejerf/suture/v4 v4.0.6
github.com/urfave/cli v1.22.16
github.com/vitrun/qart v0.0.0-20160531060029-bf64b92db6b0
github.com/willabides/kongplete v0.4.0
go.uber.org/automaxprocs v1.6.0
golang.org/x/crypto v0.36.0
golang.org/x/net v0.38.0
golang.org/x/sys v0.31.0
golang.org/x/text v0.23.0
golang.org/x/crypto v0.38.0
golang.org/x/net v0.40.0
golang.org/x/sys v0.33.0
golang.org/x/text v0.25.0
golang.org/x/time v0.11.0
golang.org/x/tools v0.31.0
golang.org/x/tools v0.33.0
google.golang.org/protobuf v1.36.6
modernc.org/sqlite v1.37.0
sigs.k8s.io/yaml v1.4.0
)
require (
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.18.0 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.1 // indirect
github.com/Azure/go-ntlmssp v0.0.0-20221128193559-754e69321358 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/cenkalti/backoff/v4 v4.3.0 // indirect
github.com/certifi/gocertifi v0.0.0-20210507211836-431795d63e8d // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/coreos/go-semver v0.3.1 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.5 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/ebitengine/purego v0.8.2 // indirect
github.com/ebitengine/purego v0.8.3 // indirect
github.com/fsnotify/fsnotify v1.7.0 // indirect
github.com/go-asn1-ber/asn1-ber v1.5.7 // indirect
github.com/go-asn1-ber/asn1-ber v1.5.8-0.20250403174932-29230038a667 // indirect
github.com/go-ole/go-ole v1.3.0 // indirect
github.com/go-task/slim-sprig/v3 v3.0.0 // indirect
github.com/golang/snappy v0.0.4 // indirect
github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e // indirect
github.com/google/pprof v0.0.0-20250423184734-337e5dd93bb4 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/jmespath/go-jmespath v0.4.0 // indirect
github.com/klauspost/compress v1.17.11 // indirect
github.com/lufia/plan9stats v0.0.0-20240909124753-873cd0166683 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/ncruces/go-strftime v0.1.9 // indirect
github.com/nxadm/tail v1.4.11 // indirect
github.com/onsi/ginkgo/v2 v2.20.2 // indirect
github.com/onsi/ginkgo/v2 v2.23.4 // indirect
github.com/oschwald/maxminddb-golang v1.13.1 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
@@ -86,7 +85,6 @@ require (
github.com/prometheus/client_model v0.6.1 // indirect
github.com/prometheus/common v0.62.0 // indirect
github.com/prometheus/procfs v0.15.1 // indirect
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
github.com/riywo/loginshell v0.0.0-20200815045211-7d26008be1ab // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/stretchr/objx v0.5.2 // indirect
@@ -94,14 +92,10 @@ require (
github.com/tklauser/go-sysconf v0.3.14 // indirect
github.com/tklauser/numcpus v0.9.0 // indirect
github.com/yusufpapurcu/wmi v1.2.4 // indirect
go.uber.org/mock v0.5.0 // indirect
golang.org/x/exp v0.0.0-20250305212735-054e65f0b394 // indirect
go.uber.org/mock v0.5.2 // indirect
golang.org/x/mod v0.24.0 // indirect
golang.org/x/sync v0.12.0 // indirect
golang.org/x/sync v0.14.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
modernc.org/libc v1.62.1 // indirect
modernc.org/mathutil v1.7.1 // indirect
modernc.org/memory v1.9.1 // indirect
)
// https://github.com/gobwas/glob/pull/55

go.sum

@@ -1,20 +1,30 @@
filippo.io/edwards25519 v1.1.0 h1:FNf4tywRC1HmFuKW5xopWpigGjJKiJSV0Cqo0cJWDaA=
filippo.io/edwards25519 v1.1.0/go.mod h1:BxyFTGdWcka3PhytdK4V28tE5sGfRvvvRV7EaN4VDT4=
github.com/AudriusButkevicius/recli v0.0.7-0.20220911121932-d000ce8fbf0f h1:GmH5lT+moM7PbAJFBq57nH9WJ+wRnBXr/tyaYWbSAx8=
github.com/AudriusButkevicius/recli v0.0.7-0.20220911121932-d000ce8fbf0f/go.mod h1:Nhfib1j/VFnLrXL9cHgA+/n2O6P5THuWelOnbfPNd78=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.18.0 h1:Gt0j3wceWMwPmiazCa8MzMA0MfhmPIz0Qp0FJ6qcM0U=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.18.0/go.mod h1:Ot/6aikWnKWi4l9QB7qVSwa8iMphQNqkWALMoNT3rzM=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.9.0 h1:OVoM452qUFBrX+URdH3VpR299ma4kfom0yB0URYky9g=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.9.0/go.mod h1:kUjrAo8bgEwLeZ/CmHqNl3Z/kPm7y6FKfxxK0izYUg4=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.1 h1:FPKJS1T+clwv+OLGt13a8UjqeRuh0O4SJ3lUriThc+4=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.1/go.mod h1:j2chePtV91HrC22tGoRX3sGY42uF13WzmmV80/OdVAA=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.8.0 h1:LR0kAX9ykz8G4YgLCaRDVJ3+n43R8MneB5dTy2konZo=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.8.0/go.mod h1:DWAciXemNf++PQJLeXUB4HHH5OpsAh12HZnu2wXE1jA=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.1 h1:lhZdRq7TIx0GJQvSyX2Si406vrYsov2FXGp/RnSEtcs=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.1/go.mod h1:8cl44BDmi+effbARHMQjgOKA2AYvcohNm7KEt42mSV8=
github.com/Azure/go-ntlmssp v0.0.0-20221128193559-754e69321358 h1:mFRzDkZVAjdal+s7s0MwaRv9igoPqLRdzOLzw/8Xvq8=
github.com/Azure/go-ntlmssp v0.0.0-20221128193559-754e69321358/go.mod h1:chxPXzSsl7ZWRAuOIE23GDNzjWuZquvFlgA8xmpunjU=
github.com/AzureAD/microsoft-authentication-library-for-go v1.4.2 h1:oygO0locgZJe7PpYPXT5A29ZkwJaPqcva7BVeemZOZs=
github.com/AzureAD/microsoft-authentication-library-for-go v1.4.2/go.mod h1:wP83P5OoQ5p6ip3ScPr0BAq0BvuPAvacpEuSzyouqAI=
github.com/BurntSushi/toml v1.4.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho=
github.com/alecthomas/assert/v2 v2.11.0 h1:2Q9r3ki8+JYXvGsDyBXwH3LcJ+WK5D0gc5E8vS6K3D0=
github.com/alecthomas/assert/v2 v2.11.0/go.mod h1:Bze95FyfUr7x34QZrjL+XP+0qgp/zg8yS+TtBj1WA3k=
github.com/alecthomas/kong v1.10.0 h1:8K4rGDpT7Iu+jEXCIJUeKqvpwZHbsFRoebLbnzlmrpw=
github.com/alecthomas/kong v1.10.0/go.mod h1:p2vqieVMeTAnaC83txKtXe8FLke2X07aruPWXyMPQrU=
github.com/alecthomas/kong v1.11.0 h1:y++1gI7jf8O7G7l4LZo5ASFhrhJvzc+WgF/arranEmM=
github.com/alecthomas/kong v1.11.0/go.mod h1:p2vqieVMeTAnaC83txKtXe8FLke2X07aruPWXyMPQrU=
github.com/alecthomas/repr v0.4.0 h1:GhI2A8MACjfegCPVq9f1FLvIBS+DrQ2KQBFZP1iFzXc=
github.com/alecthomas/repr v0.4.0/go.mod h1:Fr0507jx4eOXV7AlPV6AVZLYrLIuIeSOWtW57eE/O/4=
github.com/alexbrainman/sspi v0.0.0-20231016080023-1a75b4708caa h1:LHTHcTQiSGT7VVbI0o4wBRNQIgn917usHWOd6VAffYI=
github.com/alexbrainman/sspi v0.0.0-20231016080023-1a75b4708caa/go.mod h1:cEWa1LVoE5KvSD9ONXsZrj0z6KqySlCCNKHlLzbqAt4=
github.com/aws/aws-sdk-go v1.55.6 h1:cSg4pvZ3m8dgYcgqB97MrcdjUmZ1BeMYKUxMMB89IPk=
github.com/aws/aws-sdk-go v1.55.6/go.mod h1:eRwEWoyTWFMVYVQzKMNHWP5/RV4xIUGMQfXQHfHkpNU=
github.com/aws/aws-sdk-go v1.55.7 h1:UJrkFq7es5CShfBwlWAC8DA077vp8PyVbQd3lqLiztE=
github.com/aws/aws-sdk-go v1.55.7/go.mod h1:eRwEWoyTWFMVYVQzKMNHWP5/RV4xIUGMQfXQHfHkpNU=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/calmh/glob v0.0.0-20220615080505-1d823af5017b h1:Fjm4GuJ+TGMgqfGHN42IQArJb77CfD/mAwLbDUoJe6g=
@@ -31,9 +41,13 @@ github.com/certifi/gocertifi v0.0.0-20210507211836-431795d63e8d h1:S2NE3iHSwP0XV
github.com/certifi/gocertifi v0.0.0-20210507211836-431795d63e8d/go.mod h1:sGbDF6GwGcLpkNXPUTkMRoywsNa/ol15pxFe6ERfguA=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/chmduquesne/rollinghash v4.0.0+incompatible h1:hnREQO+DXjqIw3rUTzWN7/+Dpw+N5Um8zpKV0JOEgbo=
github.com/chmduquesne/rollinghash v4.0.0+incompatible/go.mod h1:Uc2I36RRfTAf7Dge82bi3RU0OQUmXT9iweIcPqvr8A0=
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/coreos/go-semver v0.3.1 h1:yi21YpKnrx1gt5R+la8n5WgS0kCrsPp33dmEyHReZr4=
github.com/coreos/go-semver v0.3.1/go.mod h1:irMmmIw/7yzSRPWryHsK7EYSg09caPQL03VsM8rvUec=
github.com/cpuguy83/go-md2man/v2 v2.0.5 h1:ZtcqGrnekaHpVLArFSe4HK5DoKx1T0rq2DwVB0alcyc=
github.com/cpuguy83/go-md2man/v2 v2.0.5/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/d4l3k/messagediff v1.2.1 h1:ZcAIMYsUg0EAp9X+tt8/enBE/Q8Yd5kzPynLyKptt9U=
@@ -41,10 +55,8 @@ github.com/d4l3k/messagediff v1.2.1/go.mod h1:Oozbb1TVXFac9FtSIxHBMnBCq2qeH/2KkE
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/ebitengine/purego v0.8.2 h1:jPPGWs2sZ1UgOSgD2bClL0MJIqu58nOmIcBuXr62z1I=
github.com/ebitengine/purego v0.8.2/go.mod h1:iIjxzd6CiRiOG0UyXP+V1+jWqUXVjPKLAI0mRfJZTmQ=
github.com/ebitengine/purego v0.8.3 h1:K+0AjQp63JEZTEMZiwsI9g0+hAMNohwUOtY0RPGexmc=
github.com/ebitengine/purego v0.8.3/go.mod h1:iIjxzd6CiRiOG0UyXP+V1+jWqUXVjPKLAI0mRfJZTmQ=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
github.com/fsnotify/fsnotify v1.5.4/go.mod h1:OVB6XrOHzAwXMpEM7uPOzcehqUV2UqJxmVXmkdnm1bU=
@@ -53,22 +65,22 @@ github.com/fsnotify/fsnotify v1.7.0 h1:8JEhPFa5W2WU7YfeZzPNqzMP6Lwt7L2715Ggo0nos
github.com/fsnotify/fsnotify v1.7.0/go.mod h1:40Bi/Hjc2AVfZrqy+aj+yEI+/bRxZnMJyTJwOpGvigM=
github.com/getsentry/raven-go v0.2.0 h1:no+xWJRb5ZI7eE8TWgIq1jLulQiIoLG0IfYxv5JYMGs=
github.com/getsentry/raven-go v0.2.0/go.mod h1:KungGk8q33+aIAZUIVWZDr2OfAEBsO49PX4NzFV5kcQ=
github.com/go-asn1-ber/asn1-ber v1.5.7 h1:DTX+lbVTWaTw1hQ+PbZPlnDZPEIs0SS/GCZAl535dDk=
github.com/go-asn1-ber/asn1-ber v1.5.7/go.mod h1:hEBeB/ic+5LoWskz+yKT7vGhhPYkProFKoKdwZRWMe0=
github.com/go-ldap/ldap/v3 v3.4.10 h1:ot/iwPOhfpNVgB1o+AVXljizWZ9JTp7YF5oeyONmcJU=
github.com/go-ldap/ldap/v3 v3.4.10/go.mod h1:JXh4Uxgi40P6E9rdsYqpUtbW46D9UTjJ9QSwGRznplY=
github.com/go-asn1-ber/asn1-ber v1.5.8-0.20250403174932-29230038a667 h1:BP4M0CvQ4S3TGls2FvczZtj5Re/2ZzkV9VwqPHH/3Bo=
github.com/go-asn1-ber/asn1-ber v1.5.8-0.20250403174932-29230038a667/go.mod h1:hEBeB/ic+5LoWskz+yKT7vGhhPYkProFKoKdwZRWMe0=
github.com/go-ldap/ldap/v3 v3.4.11 h1:4k0Yxweg+a3OyBLjdYn5OKglv18JNvfDykSoI8bW0gU=
github.com/go-ldap/ldap/v3 v3.4.11/go.mod h1:bY7t0FLK8OAVpp/vV6sSlpz3EQDGcQwc8pF0ujLgKvM=
github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY=
github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
github.com/go-ole/go-ole v1.3.0 h1:Dt6ye7+vXGIKZ7Xtk4s6/xVdGDQynvom7xCFEdWr6uE=
github.com/go-ole/go-ole v1.3.0/go.mod h1:5LS6F96DhAwUc7C+1HLexzMXY1xGRSryjyPPKW6zv78=
github.com/go-sql-driver/mysql v1.8.1 h1:LedoTUt/eveggdHS9qUFC1EFSa8bU2+1pZjSRpvNJ1Y=
github.com/go-sql-driver/mysql v1.8.1/go.mod h1:wEBSXgmK//2ZFJyE+qWnIsVGmvmEKlqwuVSjsCm7DZg=
github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0/go.mod h1:fyg7847qk6SyHyPtNmDHnmrv/HOrqktSC+C9fM+CJOE=
github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI=
github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8=
github.com/gofrs/flock v0.12.1 h1:MTLVXXHf8ekldpJk3AKicLij9MdwOWkZ+a/jHHZby9E=
github.com/gofrs/flock v0.12.1/go.mod h1:9zxTsyu5xtJ9DK+1tFZyibEV7y3uwDxPPfbxeeHCoD0=
github.com/golang-jwt/jwt/v5 v5.2.2 h1:Rl4B7itRWVtYIHFrSNd7vhTiz9UpLdi6gZhZ3wEeDy8=
github.com/golang-jwt/jwt/v5 v5.2.2/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
@@ -89,19 +101,18 @@ github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeN
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/pprof v0.0.0-20210407192527-94a9f03dee38/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e h1:ijClszYn+mADRFY17kjQEVQ1XRhq2/JR1M3sGqeJoxs=
github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e/go.mod h1:boTsfXsheKC2y+lKOCMpSfarhxDeIzfZG1jqGcPl3cA=
github.com/google/pprof v0.0.0-20250423184734-337e5dd93bb4 h1:gD0vax+4I+mAj+jEChEf25Ia07Jq7kYOFO5PPhAxFl4=
github.com/google/pprof v0.0.0-20250423184734-337e5dd93bb4/go.mod h1:5hDyRhoBCxViHszMt12TnOpEI4VVi+U8Gm9iphldiMA=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/securecookie v1.1.1/go.mod h1:ra0sb63/xPlUeL+yeDciTfxMRAA+MP+HVt/4epWDjd4=
github.com/gorilla/sessions v1.2.1/go.mod h1:dk2InVEVJ0sfLlnXv9EAgkf6ecYs/i80K/zI+bUmuGM=
github.com/greatroar/blobloom v0.8.0 h1:I9RlEkfqK9/6f1v9mFmDYegDQ/x0mISCpiNpAm23Pt4=
github.com/greatroar/blobloom v0.8.0/go.mod h1:mjMJ1hh1wjGVfr93QIHJ6FfDNVrA0IELv8OvMHJxHKs=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk=
github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
github.com/hashicorp/go-uuid v1.0.2/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/go-uuid v1.0.3 h1:2gKiV6YVmrJ1i2CKKa9obLvRieoRGviZFL26PcT/Co8=
github.com/hashicorp/go-uuid v1.0.3/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k=
@@ -130,31 +141,22 @@ github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9Y
github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
github.com/jmespath/go-jmespath/internal/testify v1.5.1 h1:shLQSRRSCCPj3f2gpwzGwWFoC7ycTf1rcQZHOlsJ6N8=
github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U=
github.com/jmoiron/sqlx v1.4.0 h1:1PLqN7S1UYp5t4SrVVnt4nUVNemrDAtxlulVe+Qgm3o=
github.com/jmoiron/sqlx v1.4.0/go.mod h1:ZrZ7UsYB/weZdl2Bxg6jCRO9c3YHl8r3ahlKmRT4JLY=
github.com/julienschmidt/httprouter v1.3.0 h1:U0609e9tgbseu3rBINet9P48AI/D3oJs4dN7jwJOQ1U=
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 h1:Z9n2FFNUXsshfwJMBgNA0RU6/i7WVaAegv3PtuIHPMs=
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51/go.mod h1:CzGEWj7cYgsdH8dAjBGEr58BoE7ScuLd+fwFZ44+/x8=
github.com/klauspost/compress v1.17.11 h1:In6xLpyWOi1+C7tXUUWv2ot1QvBjxevKAaI6IXrJmUc=
github.com/klauspost/compress v1.17.11/go.mod h1:pMDklpSncoRMuLFrf1W9Ss9KT+0rH90U12bZKk7uwG0=
github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/lib/pq v1.10.9 h1:YXG7RB+JIjhP29X+OtkiDnYaXQwpS4JEWq7dtCCRUEw=
github.com/lib/pq v1.10.9/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/lufia/plan9stats v0.0.0-20240909124753-873cd0166683 h1:7UMa6KCCMjZEMDtTVdcGu0B1GmmC7QJKiCCjyTAWQy0=
github.com/lufia/plan9stats v0.0.0-20240909124753-873cd0166683/go.mod h1:ilwx/Dta8jXAgpFYFvSWEMwxmbWXyiUHkd5FwyKhb5k=
github.com/maruel/panicparse/v2 v2.5.0 h1:yCtuS0FWjfd0RTYMXGpDvWcb0kINm8xJGu18/xMUh00=
github.com/maruel/panicparse/v2 v2.5.0/go.mod h1:DA2fDiBk63bKfBf4CVZP9gb4fuvzdPbLDsSI873hweQ=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-sqlite3 v1.14.22/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y=
github.com/mattn/go-sqlite3 v1.14.27 h1:drZCnuvf37yPfs95E5jd9s3XhdVWLal+6BOK6qrv6IU=
github.com/mattn/go-sqlite3 v1.14.27/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y=
github.com/maxbrunsfeld/counterfeiter/v6 v6.11.2 h1:yVCLo4+ACVroOEr4iFU1iH46Ldlzz2rTuu18Ra7M8sU=
github.com/maxbrunsfeld/counterfeiter/v6 v6.11.2/go.mod h1:VzB2VoMh1Y32/QqDfg9ZJYHj99oM4LiGtqPZydTiQSQ=
github.com/maxmind/geoipupdate/v6 v6.1.0 h1:sdtTHzzQNJlXF5+fd/EoPTucRHyMonYt/Cok8xzzfqA=
@@ -163,8 +165,6 @@ github.com/miscreant/miscreant.go v0.0.0-20200214223636-26d376326b75 h1:cUVxyR+U
github.com/miscreant/miscreant.go v0.0.0-20200214223636-26d376326b75/go.mod h1:pBbZyGwC5i16IBkjVKoy/sznA8jPD/K9iedwe1ESE6w=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/ncruces/go-strftime v0.1.9 h1:bY0MQC28UADQmHmaF5dgpLmImcShSi2kHU9XLdhx/f4=
github.com/ncruces/go-strftime v0.1.9/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
github.com/nxadm/tail v1.4.8/go.mod h1:+ncqLTQzXmGhMZNUePPaPqPvBxHAIsmXswZKocGu+AU=
github.com/nxadm/tail v1.4.11 h1:8feyoE3OzPrcshW5/MJ4sGESc5cqmGkGCWlco4l0bqY=
@@ -175,20 +175,22 @@ github.com/onsi/ginkgo v1.16.4/go.mod h1:dX+/inL/fNMqNlz0e9LfyB9TswhZpCVdJM/Z6Vv
github.com/onsi/ginkgo v1.16.5 h1:8xi0RTUf59SOSfEtZMvwTvXYMzG4gV23XVHOZiXNtnE=
github.com/onsi/ginkgo v1.16.5/go.mod h1:+E8gABHa3K6zRBolWtd+ROzc/U5bkGt0FwiG042wbpU=
github.com/onsi/ginkgo/v2 v2.1.3/go.mod h1:vw5CSIxN1JObi/U8gcbwft7ZxR2dgaR70JSE3/PpL4c=
github.com/onsi/ginkgo/v2 v2.20.2 h1:7NVCeyIWROIAheY21RLS+3j2bb52W0W82tkberYytp4=
github.com/onsi/ginkgo/v2 v2.20.2/go.mod h1:K9gyxPIlb+aIvnZ8bd9Ak+YP18w3APlR+5coaZoE2ag=
github.com/onsi/ginkgo/v2 v2.23.4 h1:ktYTpKJAVZnDT4VjxSbiBenUjmlL/5QkBEocaWXiQus=
github.com/onsi/ginkgo/v2 v2.23.4/go.mod h1:Bt66ApGPBFzHyR+JO10Zbt0Gsp4uWxu5mIOTusL46e8=
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
github.com/onsi/gomega v1.17.0/go.mod h1:HnhC7FXeEQY45zxNK3PPoIUhzk/80Xly9PcubAlGdZY=
github.com/onsi/gomega v1.19.0/go.mod h1:LY+I3pBVzYsTBU1AnDwOSxaYi9WoWiqgwooUqq9yPro=
github.com/onsi/gomega v1.36.1 h1:bJDPBO7ibjxcbHMgSCoo4Yj18UWbKDlLwX1x9sybDcw=
github.com/onsi/gomega v1.36.1/go.mod h1:PvZbdDc8J6XJEpDK4HCuRBm8a6Fzp9/DmhC9C7yFlog=
github.com/onsi/gomega v1.36.3 h1:hID7cr8t3Wp26+cYnfcjR6HpJ00fdogN6dqZ1t6IylU=
github.com/onsi/gomega v1.36.3/go.mod h1:8D9+Txp43QWKhM24yyOBEdpkzN8FvJyAwecBgsU4KU0=
github.com/oschwald/geoip2-golang v1.11.0 h1:hNENhCn1Uyzhf9PTmquXENiWS6AlxAEnBII6r8krA3w=
github.com/oschwald/geoip2-golang v1.11.0/go.mod h1:P9zG+54KPEFOliZ29i7SeYZ/GM6tfEL+rgSn03hYuUo=
github.com/oschwald/maxminddb-golang v1.13.1 h1:G3wwjdN9JmIK2o/ermkHM+98oX5fS+k5MbwsmL4MRQE=
github.com/oschwald/maxminddb-golang v1.13.1/go.mod h1:K4pgV9N/GcK694KSTmVSDTODk4IsCNThNdTmnaBZ/F8=
github.com/pierrec/lz4/v4 v4.1.22 h1:cKFw6uJDK+/gfw5BcDL0JL5aBsAFdsIT18eRtLj7VIU=
github.com/pierrec/lz4/v4 v4.1.22/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c h1:+mdjkGKdHQG3305AYmdv1U2eRNDiU2ErMBj1gwrq8eQ=
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c/go.mod h1:7rwL4CYBLnjLxUqIJNnCWiEdr3bn6IUYi15bNlnbCCU=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
@@ -200,8 +202,8 @@ github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 h1:o4JXh1EVt
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
github.com/prashantv/gostub v1.1.0 h1:BTyx3RfQjRHnUWaGF9oQos79AlQ5k8WNktv7VGvVH4g=
github.com/prashantv/gostub v1.1.0/go.mod h1:A5zLQHz7ieHGG7is6LLXLz7I8+3LZzsrV0P1IAHhP5U=
github.com/prometheus/client_golang v1.21.1 h1:DOvXXTqVzvkIewV/CDPFdejpMCGeMcbGCQ8YOmu+Ibk=
github.com/prometheus/client_golang v1.21.1/go.mod h1:U9NM32ykUErtVBxdvD3zfi+EuFkkaBvMb09mIfe0Zgg=
github.com/prometheus/client_golang v1.22.0 h1:rb93p9lokFEsctTys46VnV1kLCDpVZ0a/Y92Vm0Zc6Q=
github.com/prometheus/client_golang v1.22.0/go.mod h1:R7ljNsLXhuQXYZYtw6GAE9AZg8Y7vEW5scdCXrWRXC0=
github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
github.com/prometheus/common v0.62.0 h1:xasJaQlnWAeyHdUBeGjXmutelfJHWMRr+Fg4QszZ2Io=
@@ -210,24 +212,22 @@ github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0leargg
github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
github.com/puzpuzpuz/xsync/v3 v3.5.1 h1:GJYJZwO6IdxN/IKbneznS6yPkVC+c3zyY/j19c++5Fg=
github.com/puzpuzpuz/xsync/v3 v3.5.1/go.mod h1:VjzYrABPabuM4KyBh1Ftq6u8nhwY5tBPKP9jpmh0nnA=
github.com/quic-go/quic-go v0.50.1 h1:unsgjFIUqW8a2oopkY7YNONpV1gYND6Nt9hnt1PN94Q=
github.com/quic-go/quic-go v0.50.1/go.mod h1:Vim6OmUvlYdwBhXP9ZVrtGmCMWa3wEqhq3NgYrI8b4E=
github.com/quic-go/quic-go v0.52.0 h1:/SlHrCRElyaU6MaEPKqKr9z83sBg2v4FLLvWM+Z47pA=
github.com/quic-go/quic-go v0.52.0/go.mod h1:MFlGGpcpJqRAfmYi6NC2cptDPSxRWTOGNuP4wqrWmzQ=
github.com/rabbitmq/amqp091-go v1.10.0 h1:STpn5XsHlHGcecLmMFCtg7mqq0RnD+zFr4uzukfVhBw=
github.com/rabbitmq/amqp091-go v1.10.0/go.mod h1:Hy4jKW5kQART1u+JkDTF9YYOQUHXqMuhrgxOEeS7G4o=
github.com/rcrowley/go-metrics v0.0.0-20250401214520-65e299d6c5c9 h1:bsUq1dX0N8AOIL7EB/X911+m4EHsnWEHeJ0c+3TTBrg=
github.com/rcrowley/go-metrics v0.0.0-20250401214520-65e299d6c5c9/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
github.com/riywo/loginshell v0.0.0-20200815045211-7d26008be1ab h1:ZjX6I48eZSFetPb41dHudEyVr5v953N15TsNZXlkcWY=
github.com/riywo/loginshell v0.0.0-20200815045211-7d26008be1ab/go.mod h1:/PfPXh0EntGc3QAAyUaviy4S9tzy4Zp0e2ilq4voC6E=
github.com/rogpeppe/go-internal v1.10.0 h1:TMyTOH3F/DB16zRVcYyreMH6GnZZrwQVAoYjRBZyWFQ=
github.com/rogpeppe/go-internal v1.10.0/go.mod h1:UQnix2H7Ngw/k4C5ijL5+65zddjncjaFoBhdsK/akog=
github.com/rogpeppe/go-internal v1.12.0 h1:exVL4IDcn6na9z1rAb56Vxr+CgyK3nn3O+epU5NdKM8=
github.com/rogpeppe/go-internal v1.12.0/go.mod h1:E+RYuTGaKKdloAfM02xzb0FW3Paa99yedzYV+kq4uf4=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/sclevine/spec v1.4.0 h1:z/Q9idDcay5m5irkZ28M7PtQM4aOISzOpj4bUPkDee8=
github.com/sclevine/spec v1.4.0/go.mod h1:LvpgJaFyvQzRvc1kaDs0bulYwzC70PbiYjC4QnFHkOM=
github.com/shirou/gopsutil/v4 v4.25.3 h1:SeA68lsu8gLggyMbmCn8cmp97V1TI9ld9sVzAUcKcKE=
github.com/shirou/gopsutil/v4 v4.25.3/go.mod h1:xbuxyoZj+UsgnZrENu3lQivsngRR5BdjbJwf2fv4szA=
github.com/shirou/gopsutil/v4 v4.25.4 h1:cdtFO363VEOOFrUCjZRh4XVJkb548lyF0q0uTeMqYPw=
github.com/shirou/gopsutil/v4 v4.25.4/go.mod h1:xbuxyoZj+UsgnZrENu3lQivsngRR5BdjbJwf2fv4szA=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
@@ -238,13 +238,12 @@ github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.2/go.mod h1:R6va5+xMeoiuVRoj+gSkQ7d3FALtqAAGI1FQKckRals=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/syncthing/notify v0.0.0-20250207082249-f0fa8f99c2bc h1:xc3UfSFlH/X5hRw3h21RF6WXnRUYKmGRx06FEaVxfkM=
github.com/syncthing/notify v0.0.0-20250207082249-f0fa8f99c2bc/go.mod h1:J0q59IWjLtpRIJulohwqEZvjzwOfTEPp8SVhDJl+y0Y=
github.com/syncthing/notify v0.0.0-20250528144937-c7027d4f7465 h1:yhxdTGmFkAM2TFA65c3NgGwpnIkUM8oVqPX2e9S7IVg=
github.com/syncthing/notify v0.0.0-20250528144937-c7027d4f7465/go.mod h1:J0q59IWjLtpRIJulohwqEZvjzwOfTEPp8SVhDJl+y0Y=
github.com/syndtr/goleveldb v1.0.1-0.20220721030215-126854af5e6d h1:vfofYNRScrDdvS342BElfbETmL1Aiz3i2t0zfRj16Hs=
github.com/syndtr/goleveldb v1.0.1-0.20220721030215-126854af5e6d/go.mod h1:RRCYJbIwD5jmqPI9XoAFR0OcDxqUctll6zUj/+B4S48=
github.com/thejerf/suture/v4 v4.0.6 h1:QsuCEsCqb03xF9tPAsWAj8QOAJBgQI1c0VqJNaingg8=
@@ -261,67 +260,37 @@ github.com/vitrun/qart v0.0.0-20160531060029-bf64b92db6b0/go.mod h1:TTbGUfE+cXXc
github.com/willabides/kongplete v0.4.0 h1:eivXxkp5ud5+4+NVN9e4goxC5mSh3n1RHov+gsblM2g=
github.com/willabides/kongplete v0.4.0/go.mod h1:0P0jtWD9aTsqPSUAl4de35DLghrr57XcayPyvqSi2X8=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
github.com/yusufpapurcu/wmi v1.2.4 h1:zFUKzehAFReQwLys1b/iSMl+JQGSCSjtVqQn9bBrPo0=
github.com/yusufpapurcu/wmi v1.2.4/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0=
go.uber.org/automaxprocs v1.6.0 h1:O3y2/QNTOdbF+e/dpXNNW7Rx2hZ4sTIPyybbxyNqTUs=
go.uber.org/automaxprocs v1.6.0/go.mod h1:ifeIMSnPZuznNm6jmdzmU3/bfk01Fe2fotchwEFJ8r8=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.uber.org/mock v0.5.0 h1:KAMbZvZPyBPWgD14IrIQ38QCyjwpvVVV6K/bHl1IwQU=
go.uber.org/mock v0.5.0/go.mod h1:ge71pBPLYDk7QIi1LupWxdAykm7KIEFchiOqd6z7qMM=
go.uber.org/mock v0.5.2 h1:LbtPTcP8A5k9WPXj54PPPbjcI4Y6lhyOZXn+VS7wNko=
go.uber.org/mock v0.5.2/go.mod h1:wLlUxC2vVTPTaE3UD51E0BGOAElKrILxhVSDYQLld5o=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.6.0/go.mod h1:OFC/31mSvZgRz0V1QTNCzfAI1aIRzbiufJtkMIlEp58=
golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliYc=
golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8=
golang.org/x/crypto v0.31.0/go.mod h1:kDsLvtWBEx7MV9tJOj9bnXsPbxwJQ6csT/x4KIN4Ssk=
golang.org/x/crypto v0.36.0 h1:AnAEvhDddvBdpY+uR+MyHmuZzzNqXSe/GvuDeob5L34=
golang.org/x/crypto v0.36.0/go.mod h1:Y4J0ReaxCR1IMaabaSMugxJES1EpwhBHhv2bDHklZvc=
golang.org/x/exp v0.0.0-20250305212735-054e65f0b394 h1:nDVHiLt8aIbd/VzvPWN6kSOPE7+F/fNFDSXLVYkE/Iw=
golang.org/x/exp v0.0.0-20250305212735-054e65f0b394/go.mod h1:sIifuuw/Yco/y6yb6+bDNfyeQ/MdPUy/hKEMYQV17cM=
golang.org/x/crypto v0.38.0 h1:jt+WWG8IZlBnVbomuhg2Mdq0+BBQaHbtqHEFEigjUV8=
golang.org/x/crypto v0.38.0/go.mod h1:MvrbAqul58NNYPKnOra203SB9vpuZW0e+RRZV+Ggqjw=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.15.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.17.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.24.0 h1:ZfthKaKaT4NrhGVZHO1/WDTwGES4De8KtWO0SIbNJMU=
golang.org/x/mod v0.24.0/go.mod h1:IXM97Txy2VM4PJ3gI61r1YEk/gAj6zAHN3AdZt6S9Ww=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210428140749-89ef3d95e781/go.mod h1:OJAsFXCWl8Ukc7SiCT/9KSuxbyM7479/AVlXFRxuMCk=
golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220607020251-c690dde0001d/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk=
golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44=
golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM=
golang.org/x/net v0.33.0/go.mod h1:HXLR5J+9DxmrqMwG9qjGCxZ+zKXxBru04zlTvWlWuN4=
golang.org/x/net v0.38.0 h1:vRMAPTMaeGqVhG5QyLJHqNDwecKTomGeqbnfZyKlBI8=
golang.org/x/net v0.38.0/go.mod h1:ivrbrMbzFq5J41QOQh0siUuly180yBYtLp+CKbEaFx8=
golang.org/x/net v0.40.0 h1:79Xs7wF06Gbdcg4kdCCIQArK11Z1hr5POQ6+fIYHNuY=
golang.org/x/net v0.40.0/go.mod h1:y0hY0exeL2Pku80/zKK7tpntoX23cqL3Oa6njdgRtds=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y=
golang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.7.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.12.0 h1:MHc5BpPuC30uJk597Ri8TV3CNZcTLu6B6z4lJy+g6Jw=
golang.org/x/sync v0.12.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sync v0.14.0 h1:woo0S4Yywslg6hp4eUFjTVOyKt0RookbpAHG4c1HmhQ=
golang.org/x/sync v0.14.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180926160741-c2ed4eda69e7/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -341,50 +310,25 @@ golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220412211240-33da011f77ad/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220908164124-27713097b956/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.31.0 h1:ioabZlmFYtWhL+TRYpcnNlLwhyxaM9kWTDEmfnprqik=
golang.org/x/sys v0.31.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/telemetry v0.0.0-20240228155512-f48c80bd79b2/go.mod h1:TeRTkGYfJXctD9OcfyVLyj2J3IxLnKwHJR8f4D8a3YE=
golang.org/x/sys v0.33.0 h1:q3i8TbbEz+JRD9ywIRlyRAQbM0qF7hu24q3teo2hbuw=
golang.org/x/sys v0.33.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
golang.org/x/term v0.12.0/go.mod h1:owVbMEjm3cBLCHdkQu9b1opXd4ETQWc3BhuQGKgXgvU=
golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=
golang.org/x/term v0.20.0/go.mod h1:8UkIAJTvZgivsXaD6/pH6U9ecQzZ45awqEOzuCvwpFY=
golang.org/x/term v0.27.0/go.mod h1:iMsnZpn0cago0GOrHO2+Y7u7JPn5AylBrcoWkElMTSM=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
golang.org/x/text v0.23.0 h1:D71I7dUrlY+VX0gQShAThNGHFxZ13dGLBHQLVl1mJlY=
golang.org/x/text v0.23.0/go.mod h1:/BLNzu4aZCJ1+kcD0DNRotWKage4q2rGVAg4o22unh4=
golang.org/x/text v0.25.0 h1:qVyWApTSYLk/drJRO5mDlNYskwQznZmkpV2c8q9zls4=
golang.org/x/text v0.25.0/go.mod h1:WEdwpYrmk1qmdHvhkSTNPm3app7v4rsT8F2UD6+VHIA=
golang.org/x/time v0.11.0 h1:/bpjEDfN9tkoN/ryeYHnv5hcMlc8ncjMcM4XBk5NWV0=
golang.org/x/time v0.11.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20201224043029-2b0845dc783e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/tools v0.13.0/go.mod h1:HvlwmtVNQAhOuCjW7xxvovg8wbNq7LwfXh/k7wXUl58=
golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d/go.mod h1:aiJjzUbINMkxbQROHiO6hDPo2LHcIPhhQsa9DLh0yGk=
golang.org/x/tools v0.31.0 h1:0EedkvKDbh+qistFTd0Bcwe/YLh4vHwWEkiI0toFIBU=
golang.org/x/tools v0.31.0/go.mod h1:naFTU+Cev749tSJRXJlna0T3WxKvb1kWEx15xA4SdmQ=
golang.org/x/tools v0.33.0 h1:4qz2S3zmRxbGIhDIAgjxvFutSvH5EfnsYrRBj0UI0bc=
golang.org/x/tools v0.33.0/go.mod h1:CIJMaWEY88juyUfo7UbgPqbC8rU2OqfAV1h2Qp0oMYI=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@@ -415,29 +359,5 @@ gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
modernc.org/cc/v4 v4.25.2 h1:T2oH7sZdGvTaie0BRNFbIYsabzCxUQg8nLqCdQ2i0ic=
modernc.org/cc/v4 v4.25.2/go.mod h1:uVtb5OGqUKpoLWhqwNQo/8LwvoiEBLvZXIQ/SmO6mL0=
modernc.org/ccgo/v4 v4.25.1 h1:TFSzPrAGmDsdnhT9X2UrcPMI3N/mJ9/X9ykKXwLhDsU=
modernc.org/ccgo/v4 v4.25.1/go.mod h1:njjuAYiPflywOOrm3B7kCB444ONP5pAVr8PIEoE0uDw=
modernc.org/fileutil v1.3.0 h1:gQ5SIzK3H9kdfai/5x41oQiKValumqNTDXMvKo62HvE=
modernc.org/fileutil v1.3.0/go.mod h1:XatxS8fZi3pS8/hKG2GH/ArUogfxjpEKs3Ku3aK4JyQ=
modernc.org/gc/v2 v2.6.5 h1:nyqdV8q46KvTpZlsw66kWqwXRHdjIlJOhG6kxiV/9xI=
modernc.org/gc/v2 v2.6.5/go.mod h1:YgIahr1ypgfe7chRuJi2gD7DBQiKSLMPgBQe9oIiito=
modernc.org/libc v1.62.1 h1:s0+fv5E3FymN8eJVmnk0llBe6rOxCu/DEU+XygRbS8s=
modernc.org/libc v1.62.1/go.mod h1:iXhATfJQLjG3NWy56a6WVU73lWOcdYVxsvwCgoPljuo=
modernc.org/mathutil v1.7.1 h1:GCZVGXdaN8gTqB1Mf/usp1Y/hSqgI2vAGGP4jZMCxOU=
modernc.org/mathutil v1.7.1/go.mod h1:4p5IwJITfppl0G4sUEDtCr4DthTaT47/N3aT6MhfgJg=
modernc.org/memory v1.9.1 h1:V/Z1solwAVmMW1yttq3nDdZPJqV1rM05Ccq6KMSZ34g=
modernc.org/memory v1.9.1/go.mod h1:/JP4VbVC+K5sU2wZi9bHoq2MAkCnrt2r98UGeSK7Mjw=
modernc.org/opt v0.1.4 h1:2kNGMRiUjrp4LcaPuLY2PzUfqM/w9N23quVwhKt5Qm8=
modernc.org/opt v0.1.4/go.mod h1:03fq9lsNfvkYSfxrfUhZCWPk1lm4cq4N+Bh//bEtgns=
modernc.org/sortutil v1.2.1 h1:+xyoGf15mM3NMlPDnFqrteY07klSFxLElE2PVuWIJ7w=
modernc.org/sortutil v1.2.1/go.mod h1:7ZI3a3REbai7gzCLcotuw9AC4VZVpYMjDzETGsSMqJE=
modernc.org/sqlite v1.37.0 h1:s1TMe7T3Q3ovQiK2Ouz4Jwh7dw4ZDqbebSDTlSJdfjI=
modernc.org/sqlite v1.37.0/go.mod h1:5YiWv+YviqGMuGw4V+PNplcyaJ5v+vQd7TQOgkACoJM=
modernc.org/strutil v1.2.1 h1:UneZBkQA+DX2Rp35KcM69cSsNES9ly8mQWD71HKlOA0=
modernc.org/strutil v1.2.1/go.mod h1:EHkiggD70koQxjVdSBM3JKM7k6L0FbGE5eymy9i3B9A=
modernc.org/token v1.1.0 h1:Xl7Ap9dKaEs5kLoOQeQmPWevfnk/DM5qcLcYlA8ys6Y=
modernc.org/token v1.1.0/go.mod h1:UGzOrNV1mAFSEB63lOFHIpNRUVMvYTc6yu1SMY/XTDM=
sigs.k8s.io/yaml v1.4.0 h1:Mk1wCc2gy/F0THH0TAp1QYyJNzRm2KCLy3o5ASXVI5E=
sigs.k8s.io/yaml v1.4.0/go.mod h1:Ejl7/uTz7PSA4eKMyQCUTnhZYNmLIl+5c2lQPGR2BPY=

View File

@@ -2,7 +2,7 @@
"A device with that ID is already added.": "أضيف هذا الجهاز بالفعل.",
"A negative number of days doesn't make sense.": "لا يمكن استخدام قيمة سالبة لعدد الأيام.",
"A new major version may not be compatible with previous versions.": "الإصدار الجديد قد لا يتوافق مع الإصدارات السابقة.",
"API Key": "مفتاح API",
"API Key": "مفتاح واجهة برمجة التطبيقات \"API\"",
"About": "حول",
"Action": "إجراء",
"Actions": "الإجراءات",
@@ -27,6 +27,7 @@
"Allowed Networks": "الشبكات المسموح بها",
"Alphabetic": "أبجدية",
"Altered by ignoring deletes.": "تغير بتجاهل عمليات الحذف.",
"Always turned on when the folder type is \"{%foldertype%}\".": "مفعل دائمًا عندما يكون نوع المجلد هو \"{{foldertype}}\".",
"An external command handles the versioning. It has to remove the file from the shared folder. If the path to the application contains spaces, it should be quoted.": "الإصدار يعالج بواسطة أمر خارجي. يجب إزالة الملف من المجلدات المشتركة. إذا كان المسار للتطبيق يحتوي على مسافات، يجب وضعها بين علامتي تنصيص دلالة على الاقتباس.",
"Anonymous Usage Reporting": "تقارير الإستخدام المجهولة",
"Anonymous usage report format has changed. Would you like to move to the new format?": "هل تريد الانتقال الى التصميم الجديد لتقرير الاستخدام المجهول ؟",
@@ -52,6 +53,7 @@
"Body:": "جسم:",
"Bugs": "أخطاء برمجية",
"Cancel": "إلغاء",
"Cannot be enabled when the folder type is \"{%foldertype%}\".": "لا يمكن تفعيله عندما يكون نوع المجلد هو \"{{foldertype}}\".",
"Changelog": "سجل التغيير",
"Clean out after": "نظف بعد",
"Cleaning Versions": "إصدارات نظيفة",

View File

@@ -27,6 +27,7 @@
"Allowed Networks": "Allowed Networks",
"Alphabetic": "Alphabetic",
"Altered by ignoring deletes.": "Altered by ignoring deletes.",
"Always turned on when the folder type is \"{%foldertype%}\".": "Always turned on when the folder type is \"{{foldertype}}\".",
"An external command handles the versioning. It has to remove the file from the shared folder. If the path to the application contains spaces, it should be quoted.": "An external command handles the versioning. It has to remove the file from the shared folder. If the path to the application contains spaces, it should be quoted.",
"Anonymous Usage Reporting": "Anonymous Usage Reporting",
"Anonymous usage report format has changed. Would you like to move to the new format?": "Anonymous usage report format has changed. Would you like to move to the new format?",
@@ -52,6 +53,7 @@
"Body:": "Body:",
"Bugs": "Bugs",
"Cancel": "Cancel",
"Cannot be enabled when the folder type is \"{%foldertype%}\".": "Cannot be enabled when the folder type is \"{{foldertype}}\".",
"Changelog": "Changelog",
"Clean out after": "Clean out after",
"Cleaning Versions": "Cleaning Versions",

View File

@@ -9,9 +9,15 @@
"Add Folder": "Lisa kaust",
"Add new folder?": "Lisa uus kaust?",
"Address": "Aadress",
"Addresses": "Aadressid",
"All Data": "Kõik andmed",
"All Time": "Kõik ajad",
"Allowed Networks": "Lubatud võrgud",
"Alphabetic": "Tähestikuline",
"Automatic upgrades": "Automaatsed uuendused",
"Be careful!": "Ettevaatust!",
"Cancel": "Loobu",
"Changelog": "Muudatuste nimekiri",
"Close": "Sulge",
"Configured": "Seadistatud",
"Connection Error": "Ühenduse viga",

View File

@@ -27,6 +27,7 @@
"Allowed Networks": "Mga Pinapayagang Network",
"Alphabetic": "Alpabetiko",
"Altered by ignoring deletes.": "Binago sa pamamagitan ng hindi pagpansin sa mga pagtanggal.",
"Always turned on when the folder type is \"{%foldertype%}\".": "Palaging nakabukas kung ang uri ng folder ay nakatakda bilang \"{{foldertype}}\".",
"An external command handles the versioning. It has to remove the file from the shared folder. If the path to the application contains spaces, it should be quoted.": "Pinapamahala ng external na command ang file versioning. Kailangan nitong tanggalin ang file mula sa binabahaging folder. Kung may mga space ang path sa application, kailangan itong i-quote.",
"Anonymous Usage Reporting": "Anonymous na Pag-uulat ng Paggamit",
"Anonymous usage report format has changed. Would you like to move to the new format?": "Nagbago ang pormat ng anonymous na ulat ng paggamit. Gusto mo bang lumipat sa bagong pormat?",
@@ -52,6 +53,7 @@
"Body:": "Body:",
"Bugs": "Mga Bug",
"Cancel": "Kanselahin",
"Cannot be enabled when the folder type is \"{%foldertype%}\".": "Hindi maaaring paganahin kapag ang uri ng folder ay \"{{foldertype}}\".",
"Changelog": "Mga Pagbabago",
"Clean out after": "Linisin pagkatapos",
"Cleaning Versions": "Mga Bersyon ng Paglinis",
@@ -311,7 +313,7 @@
"Receive Encrypted": "Makatanggap Naka-Encrypt",
"Receive Only": "Makatanggap Lamang",
"Received data is already encrypted": "Naka-encrypt na ang natanggap na data",
"Recent Changes": "Mga Kamakilang Pagbabago",
"Recent Changes": "Mga Kamakailang Pagbabago",
"Reduced by ignore patterns": "Binabawasan ng mga ignore pattern",
"Relay LAN": "Relay na LAN",
"Relay WAN": "Relay na WAN",

View File

@@ -26,6 +26,7 @@
"Allow Anonymous Usage Reporting?": "Permiteţi raportarea anonimă de folosire a aplicaţiei?",
"Allowed Networks": "Rețele permise",
"Alphabetic": "Alfabetic",
"Altered by ignoring deletes.": "Modificat prin ignorarea ștergerilor.",
"An external command handles the versioning. It has to remove the file from the shared folder. If the path to the application contains spaces, it should be quoted.": "O comandă externă gestionează versiunea. Trebuie să elimine fișierul din mapa partajat. Dacă calea către aplicație conține spații, ar trebui să fie pusă între ghilimele.",
"Anonymous Usage Reporting": "Raport Anonim despre Folosirea Aplicației",
"Anonymous usage report format has changed. Would you like to move to the new format?": "Formatul raportului de utilizare anonim s-a schimbat. Doriți să vă mutați în noul format?",

View File

@@ -27,6 +27,7 @@
"Allowed Networks": "Разрешённые сети",
"Alphabetic": "По алфавиту",
"Altered by ignoring deletes.": "Изменено, игнорируя удаления.",
"Always turned on when the folder type is \"{%foldertype%}\".": "Всегда включено для папок с типом «{{foldertype}}».",
"An external command handles the versioning. It has to remove the file from the shared folder. If the path to the application contains spaces, it should be quoted.": "Для версионирования используется внешняя программа. Ей нужно удалить файл из общей папки. Если путь к приложению содержит пробелы, его нужно взять в кавычки.",
"Anonymous Usage Reporting": "Анонимный отчет об использовании",
"Anonymous usage report format has changed. Would you like to move to the new format?": "Формат анонимных отчётов изменился. Хотите переключиться на новый формат?",
@@ -52,6 +53,7 @@
"Body:": "Тело:",
"Bugs": "Ошибки",
"Cancel": "Отмена",
"Cannot be enabled when the folder type is \"{%foldertype%}\".": "Не может быть включено для папок с типом «{{foldertype}}».",
"Changelog": "Журнал изменений",
"Clean out after": "Очистить после",
"Cleaning Versions": "Очистка версий",
@@ -171,8 +173,8 @@
"Folder Path": "Путь к папке",
"Folder Status": "Статус папки",
"Folder Type": "Тип папки",
"Folder type \"{%receiveEncrypted%}\" can only be set when adding a new folder.": "Тип папки «{{receiveEncrypted}}» может быть указан только при создании новой папки.",
"Folder type \"{%receiveEncrypted%}\" cannot be changed after adding the folder. You need to remove the folder, delete or decrypt the data on disk, and add the folder again.": "Тип папки «{{receiveEncrypted}}» не может быть изменён после добавления. Вам необходимо убрать папку, удалить или дешифровать данные на диске, а затем снова добавить папку.",
"Folder type \"{%receiveEncrypted%}\" can only be set when adding a new folder.": "Тип папки «{{receiveEncrypted}}» может быть выбран только при добавлении новой папки.",
"Folder type \"{%receiveEncrypted%}\" cannot be changed after adding the folder. You need to remove the folder, delete or decrypt the data on disk, and add the folder again.": "Тип папки «{{receiveEncrypted}}» не может быть изменён после добавления. Вам необходимо убрать папку, удалить или дешифровать данные на диске и затем снова её добавить.",
"Folders": "Папки",
"For the following folders an error occurred while starting to watch for changes. It will be retried every minute, so the errors might go away soon. If they persist, try to fix the underlying issue and ask for help if you can't.": "Для следующих папок произошла ошибка при запуске отслеживания изменений. Попытки будут повторяться раз в минуту, и ошибки скоро могут быть устранены. Если этого не произойдёт, попробуйте разобраться в причинах и попросите поддержки, если у вас не получится.",
"Forever": "Вечно",

View File

@@ -0,0 +1,36 @@
{
"A device with that ID is already added.": "Уређај са тим идентификатором је већ додат.",
"A negative number of days doesn't make sense.": "Негативан број дана нема смисла.",
"A new major version may not be compatible with previous versions.": "Нова верзија можда неће радити са претходним верзијама.",
"API Key": "АПИ кључ",
"About": "Информације",
"Action": "Радња",
"Actions": "Радње",
"Active filter rules": "Активна правила филтера",
"Add": "Додај",
"Add Device": "Додај уређај",
"Add Folder": "Додај фасциклу",
"Add Remote Device": "Додаај удаљени уређај",
"Add devices from the introducer to our device list, for mutually shared folders.": "Додај уређаје од иницијатора на нашу листу уређаја, за међусобно дељене фасцикле.",
"Add filter entry": "Додај ставку филтера",
"Add ignore patterns": "Додај правила за игнорисање",
"Add new folder?": "Додај нову фасциклу?",
"Additionally the full rescan interval will be increased (times 60, i.e. new default of 1h). You can also configure it manually for every folder later after choosing No.": "Додатно, интервал потпуног поновног скенирања ће бити повећан (60 пута, тј. нови подразумевани интервал од 1 сат). Такође можете ручно да га подесите за сваку фасциклу касније након што изаберете Не.",
"Address": "Адреса",
"Addresses": "Адресе",
"Advanced": "Напредно",
"Advanced Configuration": "Напредна конфигурација",
"All Data": "Сви подаци",
"All Time": "Све време",
"All folders shared with this device must be protected by a password, such that all sent data is unreadable without the given password.": "Све фасцикле које се деле са овим уређајем морају бити заштићене лозинком, тако да сви послати подаци не могу бити прочитани без дате лозинке.",
"Allow Anonymous Usage Reporting?": "Дозволити анонимно слање података о коришћењу?",
"Allowed Networks": "Дозвољене мреже",
"Alphabetic": "Абецедним редом",
"Altered by ignoring deletes.": "Промењено због игнорисања брисања.",
"Always turned on when the folder type is \"{%foldertype%}\".": "Увек укључено када је тип фасцикле „{{foldertype}}\".",
"An external command handles the versioning. It has to remove the file from the shared folder. If the path to the application contains spaces, it should be quoted.": "Екстерна команда управља верзионирањем. Она мора да уклони фајл из дељене фасцикле. Ако путања до апликације садржи размаке, треба да буде под наводницима.",
"Anonymous Usage Reporting": "Анонимно слање података о употреби",
"Anonymous usage report format has changed. Would you like to move to the new format?": "Формат анонимног слања података о коришћењу је промењен. Желите ли да пређете на нови формат?",
"Applied to LAN": "Важи за локалну мрежу",
"Apply": "Примени"
}

View File

@@ -27,6 +27,7 @@
"Allowed Networks": "Tillåtna nätverk",
"Alphabetic": "Alfabetisk",
"Altered by ignoring deletes.": "Ändrad genom att ignorera borttagningar.",
"Always turned on when the folder type is \"{%foldertype%}\".": "Alltid på när mapptypen är \"{{foldertype}}\".",
"An external command handles the versioning. It has to remove the file from the shared folder. If the path to the application contains spaces, it should be quoted.": "Ett externt kommando hanterar versionen. Det måste ta bort filen från den delade mappen. Om sökvägen till applikationen innehåller mellanslag bör den citeras.",
"Anonymous Usage Reporting": "Anonym användarstatistiksrapportering",
"Anonymous usage report format has changed. Would you like to move to the new format?": "Anonymt användningsrapportformat har ändrats. Vill du flytta till det nya formatet?",
@@ -52,6 +53,7 @@
"Body:": "Meddelande:",
"Bugs": "Felrapporter",
"Cancel": "Avbryt",
"Cannot be enabled when the folder type is \"{%foldertype%}\".": "Kan inte aktiveras när mapptypen är \"{{foldertype}}\".",
"Changelog": "Ändringslogg",
"Clean out after": "Rensa efteråt",
"Cleaning Versions": "Rensningsversioner",

View File

@@ -30,7 +30,7 @@
<h4 class="text-center" translate>The Syncthing Authors</h4>
<div class="row">
<div class="col-md-12" id="contributor-list">
Jakob Borg, Audrius Butkevicius, Jesse Lucas, Simon Frei, Tomasz Wilczyński, Alexander Graf, Alexandre Viau, Anderson Mesquita, André Colomb, Antony Male, Ben Schulz, Caleb Callaway, Daniel Harte, Emil Lundberg, Eric P, Evgeny Kuznetsov, Lars K.W. Gohlke, Lode Hoste, Michael Ploujnikov, Nate Morrison, Philippe Schommers, Ross Smith II, Ryan Sullivan, Sergey Mishin, Stefan Tatschner, Wulf Weich, bt90, greatroar, Aaron Bieber, Adam Piggott, Adel Qalieh, Alan Pope, Alberto Donato, Aleksey Vasenev, Alessandro G., Alex Ionescu, Alex Lindeman, Alex Xu, Alexander Seiler, Alexandre Alves, Aman Gupta, Anatoli Babenia, Andreas Sommer, Andrew Dunham, Andrew Meyer, Andrew Rabert, Andrey D, Anjan Momi, Anthony Goeckner, Antoine Lamielle, Anur, Aranjedeath, Arkadiusz Tymiński, Aroun, Arthur Axel fREW Schmidt, Artur Zubilewicz, Aurélien Rainone, BAHADIR YILMAZ, Bart De Vries, Beat Reichenbach, Ben Curthoys, Ben Shepherd, Ben Sidhom, Benedikt Heine, Benedikt Morbach, Benjamin Nater, Benno Fünfstück, Benny Ng, Boqin Qin, Boris Rybalkin, Brandon Philips, Brendan Long, Brian R. Becker, Carsten Hagemann, Catfriend1, Cathryne Linenweaver, Cedric Staniewski, Chih-Hsuan Yen, Choongkyu, Chris Howie, Chris Joel, Chris Tonkinson, Christian Kujau, Christian Prescott, Colin Kennedy, Cromefire_, Cyprien Devillez, Dale Visser, Dan, Daniel Barczyk, Daniel Bergmann, Daniel Martí, Daniel Padrta, Darshil Chanpura, David Rimmer, DeflateAwning, Denis A., Dennis Wilson, DerRockWolf, Devon G. Redekopp, Dimitri Papadopoulos Orfanos, Dmitry Saveliev, Domenic Horner, Dominik Heidler, Elias Jarlebring, Elliot Huffman, Emil Hessman, Eng Zer Jun, Eric Lesiuta, Erik Meitner, Evan Spensley, Federico Castagnini, Felix, Felix Ableitner, Felix Lampe, Felix Unterpaintner, Francois-Xavier Gsell, Frank Isemann, Gahl Saraf, Gilli Sigurdsson, Gleb Sinyavskiy, Graham Miln, Greg, Gusted, Han Boetes, HansK-p, Harrison Jones, Heiko Zuerker, Hireworks, Hugo Locurcio, Iain Barnett, Ian Johnson, Ikko Ashimine, Ilya Brin, Iskander Sharipov, Jaakko Hannikainen, Jacek Szafarkiewicz, Jack Croft, Jacob, Jake Peterson, James O'Beirne, James Patterson, Jaroslav Lichtblau, Jaroslav Malec, Jaspitta, Jauder Ho, Jaya Chithra, Jaya Kumar, Jeffery To, Jens Diemer, Jerry Jacobs, Jochen Voss, Johan Andersson, Johan Vromans, John Rinehart, Jonas Thelemann, Jonathan, Jonathan Cross, Jonta, Jose Manuel Delicado, Julian Lehrhuber, Jörg Thalheim, Jędrzej Kula, K.B.Dharun Krishna, Kalle Laine, Kapil Sareen, Karol Różycki, Kebin Liu, Keith Harrison, Keith Turner, Kelong Cong, Ken'ichi Kamada, Kevin Allen, Kevin Bushiri, Kevin White, Jr., Kurt Fitzner, LSmithx2, Lars Lehtonen, Laurent Arnoud, Laurent Etiemble, Leo Arias, Liu Siyuan, Lord Landon Agahnim, Lukas Lihotzki, Luke Hamburg, Majed Abdulaziz, Marc Laporte, Marc Pujol, Marcin Dziadus, Marcus B Spencer, Marcus Legendre, Mario Majila, Mark Pulford, Martchus, Martin Polehla, Mateusz Naściszewski, Mateusz Ż, Matic Potočnik, Matt Burke, Matt Robenolt, Matteo Ruina, Maurizio Tomasi, Max, Max Schulze, MaximAL, Maxime Thirouin, Maximilian, MichaIng, Michael Jephcote, Michael Rienstra, Michael Tilli, Migelo, Mike Boone, MikeLund, MikolajTwarog, Mingxuan Lin, Naveen, Nicholas Rishel, Nick Busey, Nico Stapelbroek, Nicolas Braud-Santoni, Nicolas Perraut, Niels Peter Roest, Nils Jakobi, NinoM4ster, Nitroretro, NoLooseEnds, Oliver Freyermuth, Otiel, Oyebanji Jacob Mayowa, Pablo, Pascal Jungblut, Paul Brit, Pawel Palenica, Paweł Rozlach, Peter Badida, Peter Dave Hello, Peter Hoeg, Peter Marquardt, Phani Rithvij, Phil Davis, Phill 
Luby, Pier Paolo Ramon, Piotr Bejda, Pramodh KP, Quentin Hibon, Rahmi Pruitt, Richard Hartmann, Robert Carosi, Roberto Santalla, Robin Schoonover, Roman Zaynetdinov, Ruslan Yevdokymov, Ryan Qian, Sacheendra Talluri, Scott Klupfel, Sertonix, Severin von Wnuck-Lipinski, Shaarad Dalvi, Simon Mwepu, Simon Pickup, Sly_tom_cat, Sonu Kumar Saw, Stefan Kuntz, Steven Eckhoff, Suhas Gundimeda, Sven Bachmann, Sébastien WENSKE, Taylor Khan, Terrance, Thomas, Thomas Hipp, Tim Abell, Tim Howes, Tim Nordenfur, Tobias Frölich, Tobias Klauser, Tobias Nygren, Tobias Tom, Tom Jakubowski, Tommy Thorn, Tommy van der Vorst, Tully Robinson, Tyler Brazier, Tyler Kropp, Unrud, Veeti Paananen, Victor Buinsky, Vik, Vil Brekin, Vladimir Rusinov, WangXi, Will Rouesnel, William A. Kennington III, Xavier O., Yannic A., andresvia, andyleap, boomsquared, chenrui, chucic, cjc7373, cui fliter, d-volution, dashangcun, derekriemer, desbma, diemade, digital, entity0xfe, georgespatton, ghjklw, guangwu, gudvinr, ignacy123, janost, jaseg, jelle van der Waa, jtagcat, klemens, kylosus, luchenhan, luzpaz, marco-m, mathias4833, maxice8, mclang, mv1005, nf, orangekame3, otbutz, overkill, perewa, polyfloyd, red_led, rubenbe, sec65, vapatel2, villekalliomaki, wangguoliang, wouter bolsterlee, xarx00, xjtdy888, 佛跳墙, 落心
Jakob Borg, Audrius Butkevicius, Jesse Lucas, Simon Frei, Tomasz Wilczyński, Alexander Graf, Alexandre Viau, Anderson Mesquita, André Colomb, Antony Male, Ben Schulz, Caleb Callaway, Daniel Harte, Emil Lundberg, Eric P, Evgeny Kuznetsov, Lars K.W. Gohlke, Lode Hoste, Michael Ploujnikov, Nate Morrison, Philippe Schommers, Ross Smith II, Ryan Sullivan, Sergey Mishin, Stefan Tatschner, Wulf Weich, bt90, greatroar, Aaron Bieber, Adam Piggott, Adel Qalieh, Alan Pope, Alberto Donato, Aleksey Vasenev, Alessandro G., Alex Ionescu, Alex Lindeman, Alex Xu, Alexander Seiler, Alexandre Alves, Aman Gupta, Anatoli Babenia, Andreas Sommer, Andrew Dunham, Andrew Meyer, Andrew Rabert, Andrey D, Anjan Momi, Anthony Goeckner, Antoine Lamielle, Anur, Aranjedeath, Arkadiusz Tymiński, Aroun, Arthur Axel fREW Schmidt, Artur Zubilewicz, Ashish Bhate, Aurélien Rainone, BAHADIR YILMAZ, Bart De Vries, Beat Reichenbach, Ben Curthoys, Ben Shepherd, Ben Sidhom, Benedikt Heine, Benedikt Morbach, Benjamin Nater, Benno Fünfstück, Benny Ng, Boqin Qin, Boris Rybalkin, Brandon Philips, Brendan Long, Brian R. Becker, Carsten Hagemann, Catfriend1, Cathryne Linenweaver, Cedric Staniewski, Chih-Hsuan Yen, Choongkyu, Chris Howie, Chris Joel, Chris Tonkinson, Christian Kujau, Christian Prescott, Colin Kennedy, Cromefire_, Cyprien Devillez, Dale Visser, Dan, Daniel Barczyk, Daniel Bergmann, Daniel Martí, Daniel Padrta, Darshil Chanpura, David Rimmer, DeflateAwning, Denis A., Dennis Wilson, DerRockWolf, Devon G. Redekopp, Dimitri Papadopoulos Orfanos, Dmitry Saveliev, Domenic Horner, Dominik Heidler, Elias Jarlebring, Elliot Huffman, Emil Hessman, Eng Zer Jun, Eric Lesiuta, Erik Meitner, Evan Spensley, Federico Castagnini, Felix, Felix Ableitner, Felix Lampe, Felix Unterpaintner, Francois-Xavier Gsell, Frank Isemann, Gahl Saraf, Gilli Sigurdsson, Gleb Sinyavskiy, Graham Miln, Greg, Gusted, Han Boetes, HansK-p, Harrison Jones, Hazem Krimi, Heiko Zuerker, Hireworks, Hugo Locurcio, Iain Barnett, Ian Johnson, Ikko Ashimine, Ilya Brin, Iskander Sharipov, Jaakko Hannikainen, Jacek Szafarkiewicz, Jack Croft, Jacob, Jake Peterson, James O'Beirne, James Patterson, Jaroslav Lichtblau, Jaroslav Malec, Jaspitta, Jauder Ho, Jaya Chithra, Jaya Kumar, Jeffery To, Jens Diemer, Jerry Jacobs, Jochen Voss, Johan Andersson, Johan Vromans, John Rinehart, Jonas Thelemann, Jonathan, Jonathan Cross, Jonta, Jose Manuel Delicado, Julian Lehrhuber, Jörg Thalheim, Jędrzej Kula, K.B.Dharun Krishna, Kalle Laine, Kapil Sareen, Karol Różycki, Kebin Liu, Keith Harrison, Keith Turner, Kelong Cong, Ken'ichi Kamada, Kevin Allen, Kevin Bushiri, Kevin White, Jr., Kurt Fitzner, LSmithx2, Lars Lehtonen, Laurent Arnoud, Laurent Etiemble, Leo Arias, Liu Siyuan, Lord Landon Agahnim, Lukas Lihotzki, Luke Hamburg, Majed Abdulaziz, Marc Laporte, Marc Pujol, Marcel Meyer, Marcin Dziadus, Marcus B Spencer, Marcus Legendre, Mario Majila, Mark Pulford, Martchus, Martin Polehla, Mateusz Naściszewski, Mateusz Ż, Matic Potočnik, Matt Burke, Matt Robenolt, Matteo Ruina, Maurizio Tomasi, Max, Max Schulze, MaximAL, Maxime Thirouin, Maximilian, MichaIng, Michael Jephcote, Michael Rienstra, Michael Tilli, Migelo, Mike Boone, MikeLund, MikolajTwarog, Mingxuan Lin, Naveen, Nicholas Rishel, Nick Busey, Nico Stapelbroek, Nicolas Braud-Santoni, Nicolas Perraut, Niels Peter Roest, Nils Jakobi, NinoM4ster, Nitroretro, NoLooseEnds, Oliver Freyermuth, Otiel, Oyebanji Jacob Mayowa, Pablo, Pascal Jungblut, Paul Brit, Paul Donald, Pawel Palenica, Paweł Rozlach, Peter Badida, Peter Dave Hello, Peter 
Hoeg, Peter Marquardt, Phani Rithvij, Phil Davis, Phill Luby, Pier Paolo Ramon, Piotr Bejda, Pramodh KP, Quentin Hibon, Rahmi Pruitt, Richard Hartmann, Robert Carosi, Roberto Santalla, Robin Schoonover, Roman Zaynetdinov, Ruslan Yevdokymov, Ryan Qian, Sacheendra Talluri, Scott Klupfel, Sertonix, Severin von Wnuck-Lipinski, Shaarad Dalvi, Simon Mwepu, Simon Pickup, Sly_tom_cat, Sonu Kumar Saw, Stefan Kuntz, Steven Eckhoff, Suhas Gundimeda, Sven Bachmann, Sébastien WENSKE, Taylor Khan, Terrance, TheCreeper, Thomas, Thomas Hipp, Tim Abell, Tim Howes, Tim Nordenfur, Tobias Frölich, Tobias Klauser, Tobias Nygren, Tobias Tom, Tom Jakubowski, Tommy Thorn, Tommy van der Vorst, Tully Robinson, Tyler Brazier, Tyler Kropp, Unrud, Veeti Paananen, Victor Buinsky, Vik, Vil Brekin, Vladimir Rusinov, WangXi, Will Rouesnel, William A. Kennington III, Xavier O., Yannic A., andresvia, andyleap, boomsquared, chenrui, chucic, cjc7373, cui fliter, d-volution, dashangcun, derekriemer, desbma, diemade, digital, domain, entity0xfe, georgespatton, ghjklw, guangwu, gudvinr, ignacy123, janost, jaseg, jelle van der Waa, jtagcat, klemens, kylosus, luchenhan, luzpaz, marco-m, mathias4833, maxice8, mclang, mv1005, nf, orangekame3, otbutz, overkill, perewa, polyfloyd, pullmerge, red_led, rubenbe, sec65, vapatel2, villekalliomaki, wangguoliang, wouter bolsterlee, xarx00, xjtdy888, 佛跳墙, 落心
</div>
</div>
</div>
@@ -38,48 +38,70 @@ Jakob Borg, Audrius Butkevicius, Jesse Lucas, Simon Frei, Tomasz Wilczyński, Al
<div id="about-includes" class="tab-pane">
<p translate>Syncthing includes the following software or portions thereof:</p>
<ul class="list-unstyled two-columns" id="copyright-notices">
<li><a href="http://getbootstrap.com/">Bootstrap</a>, Copyright &copy; 2011-2016 Twitter, Inc.</li>
<li><a href="https://getbootstrap.com/">Bootstrap</a>, Copyright &copy; 2011-2016 Twitter, Inc.</li>
<li><a href="https://angularjs.org/">AngularJS</a>, Copyright &copy; 2010-2014, 2016 Google, Inc.</li>
<li><a href="http://www.daterangepicker.com/">Date Range Picker</a>, Copyright &copy; 2012-2018 Dan Grossman.</li>
<li><a href="https://www.daterangepicker.com/">Date Range Picker</a>, Copyright &copy; 2012-2018 Dan Grossman.</li>
<li><a href="https://github.com/mar10/fancytree">JQuery Fancytree Plugin</a>, Copyright &copy; 2008-2018 Martin Wendt.</li>
<li><a href="https://fontawesome.com/">Font Awesome</a>Copyright &copy; 2024 Fonticons, Inc.</li>
<li><a href="https://forkaweso.me/Fork-Awesome/">Fork Awesome</a>, Copyright &copy; 2018 Dave Gandy &amp; Fork Awesome.</li>
<li><a href="http://jquery.com/">jQuery JavaScript Library</a>, Copyright &copy; jQuery Foundation and other contributors.</li>
<li><a href="http://momentjs.com/">moment.js</a>, Copyright &copy; JS Foundation and other contributors.</li>
<li><a href="https://evanhahn.github.io/HumanizeDuration.js/">HumanDuration.js</a>, Copyright &copy; 2013-2024 Evan Hahn, portions copyright &copy; 2024 Ross Smith II.</li>
<li><a href="https://jquery.com/">jQuery JavaScript Library</a>, Copyright &copy; jQuery Foundation and other contributors.</li>
<li><a href="https://leafletjs.com/">leaflet.js</a>, Copyright &copy; 2010-2025 Volodymyr Agafonkin, Copyright &copy; 2010-2011 CloudMade.</li>
<li><a href="https://momentjs.com/">moment.js</a>, Copyright &copy; JS Foundation and other contributors.</li>
<li><a href="https://golang.org/">The Go Programming Language</a>, Copyright &copy; 2009 The Go Authors.</li>
<li><a href="https://prometheus.io/">Prometheus</a>, Copyright &copy; 2012-2015 The Prometheus Authors.</li>
<li><a href="https://github.com/AudriusButkevicius/go-nat-pmp">AudriusButkevicius/go-nat-pmp</a>, Copyright &copy; 2013 John Howard Palevich.</li>
<li><a href="https://github.com/AudriusButkevicius/recli">AudriusButkevicius/recli</a>, Copyright &copy; 2019 Audrius Butkevicius.</li>
<li><a href="https://github.com/Azure/go-ntlmssp">Azure/go-ntlmssp</a>, Copyright &copy; 2016 Microsoft.</li>
<li><a href="https://github.com/alecthomas/kong">alecthomas/kong</a>, Copyright &copy; 2018 Alec Thomas.</li>
<li><a href="https://github.com/beorn7/perks">beorn7/perks</a>, Copyright &copy; 2013 Blake Mizerany.</li>
<li><a href="https://github.com/pierrec/lz4">pierrec/lz4</a>, Copyright &copy; 2015 Pierre Curto.</li>
<li><a href="https://github.com/calmh/du">calmh/du</a>, Public domain.</li>
<li><a href="https://github.com/calmh/incontainer">calmh/incontainer</a>, Copyright &copy; 2022 calmh.</li>
<li><a href="https://github.com/calmh/xdr">calmh/xdr</a>, Copyright &copy; 2014 Jakob Borg.</li>
<li><a href="https://github.com/ccding/go-stun">ccding/go-stun</a>, Copyright &copy; 2016 Cong Ding.</li>
<li><a href="https://github.com/cespare/xxhash/v2">cespare/xxhash/v2</a>, Copyright &copy; 2016 Caleb Spare.</li>
<li><a href="https://github.com/chmduquesne/rollinghash">chmduquesne/rollinghash</a>, Copyright &copy; 2015 Christophe-Marie Duquesne.</li>
<li><a href="https://github.com/d4l3k/messagediff">d4l3k/messagediff</a>, Copyright &copy; 2015 Tristan Rice.</li>
<li><a href="https://github.com/cpuguy83/go-md2man/v2">cpuguy83/go-md2man/v2</a>, Copyright &copy; 2014 Brian Goff.</li>
<li><a href="https://github.com/davecgh/go-spew">davecgh/go-spew</a>, Copyright &copy; 2012-2016 Dave Collins.</li>
<li><a href="https://github.com/go-asn1-ber/asn1-ber">go-asn1-ber/asn1-ber</a>, Copyright &copy; 2011-2015 Michael Mitton (mmitton@gmail.com).</li>
<li><a href="https://github.com/go-ldap/ldap">go-ldap/ldap</a>, Copyright &copy; 2011-2015 Michael Mitton (mmitton@gmail.com).</li>
<li><a href="https://github.com/uber-go/automaxprocs">go.uber.org/automaxprocs</a>, Copyright &copy; 2017 Uber Technologies, Inc.</li>
<li><a href="https://github.com/gobwas/glob">gobwas/glob</a>, Copyright &copy; 2016 Sergey Kamardin.</li>
<li><a href="https://github.com/golang/groupcache">golang/groupcache</a>, Copyright &copy; 2013 Google Inc.</li>
<li><a href="https://github.com/golang/protobuf">golang/protobuf</a>, Copyright &copy; 2010 The Go Authors.</li>
<li><a href="https://github.com/gofrs/flock">gofrs/flock</a>, Copyright &copy; 2018-2025, The Gofrs.</li>
<li><a href="https://github.com/golang/snappy">golang/snappy</a>, Copyright &copy; 2011 The Snappy-Go Authors.</li>
<li><a href="https://github.com/protocolbuffers/protobuf-go">google.golang.org/protobuf</a>, Copyright &copy; 2018 The Go Authors.</li>
<li><a href="https://github.com/google/uuid">google/uuid</a>, Copyright &copy; 2009,2014 Google Inc.</li>
<li><a href="https://gopkg.in/yaml.v3">gopkg.in/yaml.v3</a>, Copyright &copy; 2025, the gopkg.in/yaml.v3 authors.</li>
<li><a href="https://github.com/greatroar/blobloom">greatroar/blobloom</a>, Copyright &copy; 2020-2024 the Blobloom authors.</li>
<li><a href="https://github.com/hashicorp/errwrap">hashicorp/errwrap</a>, Copyright &copy; 2014 HashiCorp, Inc.</li>
<li><a href="https://github.com/hashicorp/go-multierror">hashicorp/go-multierror</a>, Copyright &copy; 2014 HashiCorp, Inc.</li>
<li><a href="https://github.com/hashicorp/golang-lru">hashicorp/golang-lru</a>, Copyright &copy; 2014 HashiCorp, Inc.</li>
<li><a href="https://github.com/jackpal/gateway">jackpal/gateway</a>, Copyright &copy; 2010 Jack Palevich.</li>
<li><a href="https://github.com/jmoiron/sqlx">jmoiron/sqlx</a>, Copyright &copy; 2013 Jason Moiron.</li>
<li><a href="https://github.com/jackpal/go-nat-pmp">jackpal/go-nat-pmp</a>, Copyright 2013 John Howard Palevich.</li>
<li><a href="https://github.com/julienschmidt/httprouter">julienschmidt/httprouter</a>, Copyright &copy; 2013, Julien Schmidt.</li>
<li><a href="https://github.com/kballard/go-shellquote">kballard/go-shellquote</a>, Copyright &copy; 2014 Kevin Ballard.</li>
<li><a href="https://github.com/mattn/go-isatty">mattn/go-isatty</a>, Copyright &copy; Yasuhiro MATSUMOTO.</li>
<li><a href="https://github.com/mattn/go-sqlite3">mattn/go-sqlite3</a>, Copyright &copy; 2014 Yasuhiro Matsumoto</li>
<li><a href="https://github.com/matttproud/golang_protobuf_extensions">matttproud/golang_protobuf_extensions</a>, Copyright &copy; 2012 Matt T. Proud.</li>
<li><a href="https://modernc.org/sqlite">modernc.org/sqlite</a>, Copyright &copy; 2017 The Sqlite Authors</li>
<li><a href="https://github.com/oschwald/geoip2-golang">oschwald/geoip2-golang</a>, Copyright &copy; 2015, Gregory J. Oschwald.</li>
<li><a href="https://github.com/oschwald/maxminddb-golang">oschwald/maxminddb-golang</a>, Copyright &copy; 2015, Gregory J. Oschwald.</li>
<li><a href="https://github.com/petermattis/goid">petermattis/goid</a>, Copyright &copy; 2015-2016 Peter Mattis.</li>
<li><a href="https://github.com/miscreant/miscreant.go">miscreant/miscreant.go</a>, Copyright &copy; 2017-2019 The Miscreant Developers.</li>
<li><a href="https://github.com/munnerz/goautoneg">munnerz/goautoneg</a>, Copyright &copy; 2011, Open Knowledge Foundation Ltd.</li>
<li><a href="https://github.com/pierrec/lz4">pierrec/lz4</a>, Copyright &copy; 2015 Pierre Curto.</li>
<li><a href="https://github.com/pkg/errors">pkg/errors</a>, Copyright &copy; 2015, Dave Cheney.</li>
<li><a href="https://github.com/pmezard/go-difflib">pmezard/go-difflib</a>, Copyright &copy; 2013, Patrick Mezard.</li>
<li><a href="https://github.com/posener/complete">posener/complete</a>, Copyright &copy; 2017 Eyal Posener.</li>
<li><a href="https://github.com/prometheus/client_golang">prometheus/client_golang</a>, Copyright 2012-2015 The Prometheus Authors.</li>
<li><a href="https://github.com/prometheus/client_model">prometheus/client_model</a>, Copyright &copy; 2025, the prometheus/client_model authors.</li>
<li><a href="https://github.com/prometheus/common">prometheus/common</a>, Copyright &copy; 2025, the prometheus/common authors.</li>
<li><a href="https://github.com/prometheus/procfs">prometheus/procfs</a>, Copyright &copy; 2025, the prometheus/procfs authors.</li>
<li><a href="https://github.com/quic-go/quic-go">quic-go/quic-go</a>, Copyright &copy; 2016 the quic-go authors & Google, Inc.</li>
<li><a href="https://github.com/rcrowley/go-metrics">rcrowley/go-metrics</a>, Copyright &copy; 2012 Richard Crowley.</li>
<li><a href="https://github.com/sasha-s/go-deadlock">sasha-s/go-deadlock</a>, Copyright &copy; 2016 sasha-s.</li>
<li><a href="https://github.com/syncthing/notify">syncthing/notify</a>, Copyright &copy; 2014-2015 The Notify Authors.</li>
<li><a href="https://github.com/riywo/loginshell">riywo/loginshell</a>, Copyright &copy; 2019 Ryosuke IWANAGA.</li>
<li><a href="https://github.com/russross/blackfriday/v2">russross/blackfriday/v2</a>, Copyright &copy; 2011 Russ Ross.</li>
<li><a href="https://github.com/shirou/gopsutil">shirou/gopsutil</a>, Copyright &copy; 2014, WAKAYAMA Shirou.</li>
<li><a href="https://github.com/stretchr/objx">stretchr/objx</a>, Copyright &copy; 2014 Stretchr, Inc.</li>
<li><a href="https://github.com/stretchr/testify">stretchr/testify</a>, Copyright &copy; 2012-2020 Mat Ryer, Tyler Bunnell and contributors.</li>
<li><a href="https://github.com/syndtr/goleveldb">syndtr/goleveldb</a>, Copyright &copy; 2012 Suryandaru Triandana.</li>
<li><a href="https://github.com/thejerf/suture">thejerf/suture</a>, Copyright &copy; 2014-2015 Barracuda Networks, Inc.</li>
<li><a href="https://github.com/urfave/cli">urfave/cli</a>, Copyright &copy; 2016 Jeremy Saenz &amp; Contributors.</li>
<li><a href="https://github.com/tklauser/go-sysconf">tklauser/go-sysconf</a>, Copyright &copy; 2018-2022, Tobias Klauser.</li>
<li><a href="https://github.com/tklauser/numcpus">tklauser/numcpus</a>, Copyright &copy; 2018-2024 Tobias Klauser.</li>
<li><a href="https://github.com/urfave/cli">urfave/cli</a>, Copyright &copy; 2016 Jeremy Saenz & Contributors.</li>
<li><a href="https://github.com/vitrun/qart">vitrun/qart</a>, Copyright &copy; 2010-2011 The Go Authors.</li>
<li><a href="https://gopkg.in/asn1-ber.v1">gopkg.in/asn1-ber.v1</a>, Copyright &copy; 2011-2015 Michael Mitton, portions Copyright &copy; 2015-2016 go-asn1-ber Authors.</li>
<li><a href="https://gopkg.in/ldap.v2">gopkg.in/ldap.v2</a>, Copyright &copy; 2011-2015 Michael Mitton, portions Copyright &copy; 2015-2016 go-ldap Authors.</li>
<li><a href="https://golang.org">The Go Programming Language</a>, Copyright &copy; 2009 The Go Authors.</li>
<li>Font Awesome by Dave Gandy - <a href="http://fontawesome.io/">http://fontawesome.io</a></li>
<li><a href="https://github.com/willabides/kongplete">willabides/kongplete</a>, Copyright &copy; 2020 WillAbides.</li>
</ul>
</div>

View File

@@ -16,6 +16,14 @@ angular.module('syncthing.core')
},
link: function (scope, element, attrs) {
$(element).on('click', function (event) {
const closestTabAnchor = event.target.closest('a[data-toggle="tab"]');
if (closestTabAnchor && closestTabAnchor.href.includes('#')) {
event.preventDefault();
}
});
// before modal show animation
$(element).on('show.bs.modal', function () {

View File

@@ -1,27 +1,39 @@
angular.module('syncthing.core')
.filter('uncamel', function () {
const reservedStrings = [
'IDs', 'ID', // substrings must come AFTER longer keywords containing them
'URL', 'UR',
'API', 'QUIC', 'TCP', 'UDP', 'NAT', 'LAN', 'WAN',
'KiB', 'MiB', 'GiB', 'TiB'
];
return function (input) {
input = input.replace(/(.)([A-Z][a-z]+)/g, '$1 $2').replace(/([a-z0-9])([A-Z])/g, '$1 $2');
var parts = input.split(' ');
var lastPart = parts.splice(-1)[0];
if (!input || typeof input !== 'string') return '';
const placeholders = {};
let counter = 0;
reservedStrings.forEach(word => {
const placeholder = `__RSV${counter}__`;
const re = new RegExp(word, 'g');
input = input.replace(re, placeholder);
placeholders[placeholder] = word;
counter++;
});
input = input.replace(/([a-z0-9])([A-Z])/g, '$1 $2');
Object.entries(placeholders).forEach(([ph, word]) => {
input = input.replace(new RegExp(ph, 'g'), ` ${word} `);
});
let parts = input.split(' ');
const lastPart = parts.pop();
switch (lastPart) {
case "S":
parts.push('(seconds)');
break;
case "M":
parts.push('(minutes)');
break;
case "H":
parts.push('(hours)');
break;
case "Ms":
parts.push('(milliseconds)');
break;
default:
parts.push(lastPart);
break;
case 'S': parts.push('(seconds)'); break;
case 'M': parts.push('(minutes)'); break;
case 'H': parts.push('(hours)'); break;
case 'Ms': parts.push('(milliseconds)'); break;
default: parts.push(lastPart); break;
}
input = parts.join(' ');
return input.charAt(0).toUpperCase() + input.slice(1);
parts = parts.map(part => {
const match = reservedStrings.find(w => w.toUpperCase() === part.toUpperCase());
return match || part.charAt(0).toUpperCase() + part.slice(1);
});
return parts.join(' ').replace(/\s+/g, ' ').trim();
};
});
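
Worked example (key name invented for illustration): with the reserved-word placeholders above, a camel-cased config key such as "minDiskFreeKiB" now renders as "Min Disk Free KiB", where the old single-regex split yielded "Min Disk Free Ki B". Trailing unit suffixes still expand, so "rescanIntervalS" becomes "Rescan Interval (seconds)".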

View File

@@ -0,0 +1,80 @@
// Copyright (C) 2025 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package azureblob

import (
    "context"
    "io"
    "time"

    stblob "github.com/syncthing/syncthing/internal/blob"

    "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
    "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob"
    "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blockblob"
    "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/container"
)

var _ stblob.Store = (*BlobStore)(nil)

type BlobStore struct {
    client    *azblob.Client
    container string
}

func NewBlobStore(accountName, accountKey, containerName string) (*BlobStore, error) {
    credential, err := azblob.NewSharedKeyCredential(accountName, accountKey)
    if err != nil {
        return nil, err
    }
    url := "https://" + accountName + ".blob.core.windows.net/"
    sc, err := azblob.NewClientWithSharedKeyCredential(url, credential, &azblob.ClientOptions{})
    if err != nil {
        return nil, err
    }
    // Creating the container fails if it already exists; that error is ignored.
    _, _ = sc.CreateContainer(context.Background(), containerName, &container.CreateOptions{})
    return &BlobStore{
        client:    sc,
        container: containerName,
    }, nil
}

func (a *BlobStore) Upload(ctx context.Context, key string, data io.Reader) error {
    _, err := a.client.UploadStream(ctx, a.container, key, data, &blockblob.UploadStreamOptions{})
    return err
}

func (a *BlobStore) Download(ctx context.Context, key string, w stblob.Writer) error {
    resp, err := a.client.DownloadStream(ctx, a.container, key, &blob.DownloadStreamOptions{})
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    _, err = io.Copy(w, resp.Body)
    return err
}

func (a *BlobStore) LatestKey(ctx context.Context) (string, error) {
    opts := &azblob.ListBlobsFlatOptions{}
    pager := a.client.NewListBlobsFlatPager(a.container, opts)
    var latest string
    var lastModified time.Time
    for pager.More() {
        page, err := pager.NextPage(ctx)
        if err != nil {
            return "", err
        }
        for _, blob := range page.Segment.BlobItems {
            if latest == "" || blob.Properties.LastModified.After(lastModified) {
                latest = *blob.Name
                lastModified = *blob.Properties.LastModified
            }
        }
    }
    return latest, nil
}
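
A minimal usage sketch, not part of this diff: construct a store and upload a snapshot under a timestamped key. The account name, key, container, and the internal/blob/azureblob import path are assumptions based on the package clause and imports above.

package main

import (
    "context"
    "log"
    "strings"
    "time"

    "github.com/syncthing/syncthing/internal/blob/azureblob"
)

func main() {
    // Placeholder credentials; real values would come from configuration.
    store, err := azureblob.NewBlobStore("myaccount", "bXkta2V5", "backups")
    if err != nil {
        log.Fatal(err)
    }
    // Key the upload by timestamp so LatestKey finds the newest snapshot.
    key := time.Now().UTC().Format("20060102-150405") + ".db"
    if err := store.Upload(context.Background(), key, strings.NewReader("snapshot data")); err != nil {
        log.Fatal(err)
    }
}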

View File

@@ -0,0 +1,23 @@
// Copyright (C) 2025 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package blob

import (
    "context"
    "io"
)

type Store interface {
    Upload(ctx context.Context, key string, r io.Reader) error
    Download(ctx context.Context, key string, w Writer) error
    LatestKey(ctx context.Context) (string, error)
}

type Writer interface {
    io.Writer
    io.WriterAt
}
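
A hypothetical consumer of this interface, sketching the restore flow it enables; restoreLatest is not part of the diff. An *os.File satisfies Writer because it implements both io.Writer and io.WriterAt.

package example

import (
    "context"
    "os"

    "github.com/syncthing/syncthing/internal/blob"
)

// restoreLatest streams the most recently modified object into a local file.
func restoreLatest(ctx context.Context, s blob.Store, path string) error {
    key, err := s.LatestKey(ctx)
    if err != nil {
        return err
    }
    f, err := os.Create(path) // implements io.Writer and io.WriterAt
    if err != nil {
        return err
    }
    defer f.Close()
    return s.Download(ctx, key, f)
}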

View File

@@ -7,6 +7,7 @@
package s3
import (
"context"
"io"
"time"
@@ -15,8 +16,11 @@ import (
"github.com/aws/aws-sdk-go/aws/session"
"github.com/aws/aws-sdk-go/service/s3"
"github.com/aws/aws-sdk-go/service/s3/s3manager"
"github.com/syncthing/syncthing/internal/blob"
)
var _ blob.Store = (*Session)(nil)
type Session struct {
bucket string
s3sess *session.Session
@@ -26,9 +30,10 @@ type Object = s3.Object
func NewSession(endpoint, region, bucket, accessKeyID, secretKey string) (*Session, error) {
sess, err := session.NewSession(&aws.Config{
Region: aws.String(region),
Endpoint: aws.String(endpoint),
Credentials: credentials.NewStaticCredentials(accessKeyID, secretKey, ""),
Region: aws.String(region),
Endpoint: aws.String(endpoint),
Credentials: credentials.NewStaticCredentials(accessKeyID, secretKey, ""),
S3ForcePathStyle: aws.Bool(true),
})
if err != nil {
return nil, err
@@ -39,7 +44,7 @@ func NewSession(endpoint, region, bucket, accessKeyID, secretKey string) (*Sessi
}, nil
}
func (s *Session) Upload(r io.Reader, key string) error {
func (s *Session) Upload(_ context.Context, key string, r io.Reader) error {
uploader := s3manager.NewUploader(s.s3sess)
_, err := uploader.Upload(&s3manager.UploadInput{
Bucket: aws.String(s.bucket),
@@ -49,7 +54,31 @@ func (s *Session) Upload(r io.Reader, key string) error {
return err
}
func (s *Session) List(fn func(*Object) bool) error {
func (s *Session) Download(_ context.Context, key string, w blob.Writer) error {
downloader := s3manager.NewDownloader(s.s3sess)
_, err := downloader.Download(w, &s3.GetObjectInput{
Bucket: aws.String(s.bucket),
Key: aws.String(key),
})
return err
}
func (s *Session) LatestKey(_ context.Context) (string, error) {
var latestKey string
var lastModified time.Time
if err := s.list(func(obj *Object) bool {
if latestKey == "" || obj.LastModified.After(lastModified) {
latestKey = *obj.Key
lastModified = *obj.LastModified
}
return true
}); err != nil {
return "", err
}
return latestKey, nil
}
func (s *Session) list(fn func(*Object) bool) error {
svc := s3.New(s.s3sess)
opts := &s3.ListObjectsV2Input{
@@ -75,27 +104,3 @@ func (s *Session) List(fn func(*Object) bool) error {
return nil
}
func (s *Session) LatestKey() (string, error) {
var latestKey string
var lastModified time.Time
if err := s.List(func(obj *Object) bool {
if latestKey == "" || obj.LastModified.After(lastModified) {
latestKey = *obj.Key
lastModified = *obj.LastModified
}
return true
}); err != nil {
return "", err
}
return latestKey, nil
}
func (s *Session) Download(w io.WriterAt, key string) error {
downloader := s3manager.NewDownloader(s.s3sess)
_, err := downloader.Download(w, &s3.GetObjectInput{
Bucket: aws.String(s.bucket),
Key: aws.String(key),
})
return err
}
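
With the signatures aligned to blob.Store (note the var _ blob.Store = (*Session)(nil) assertion above), an S3 session becomes interchangeable with the Azure store; for instance, it could back the hypothetical restoreLatest helper sketched earlier. Endpoint and credentials below are placeholders.

sess, err := s3.NewSession("https://s3.example.com", "us-east-1", "backups", accessKeyID, secretKey)
if err != nil {
    return err
}
return restoreLatest(ctx, sess, "backup.db")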

View File

@@ -1,73 +0,0 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package db
import (
"fmt"
"strings"
"github.com/syncthing/syncthing/lib/protocol"
)
type Counts struct {
Files int
Directories int
Symlinks int
Deleted int
Bytes int64
Sequence int64 // zero for the global state
DeviceID protocol.DeviceID // device ID for remote devices, or special values for local/global
LocalFlags uint32 // the local flag for this count bucket
}
func (c Counts) Add(other Counts) Counts {
return Counts{
Files: c.Files + other.Files,
Directories: c.Directories + other.Directories,
Symlinks: c.Symlinks + other.Symlinks,
Deleted: c.Deleted + other.Deleted,
Bytes: c.Bytes + other.Bytes,
Sequence: c.Sequence + other.Sequence,
DeviceID: protocol.EmptyDeviceID,
LocalFlags: c.LocalFlags | other.LocalFlags,
}
}
func (c Counts) TotalItems() int {
return c.Files + c.Directories + c.Symlinks + c.Deleted
}
func (c Counts) String() string {
var flags strings.Builder
if c.LocalFlags&protocol.FlagLocalNeeded != 0 {
flags.WriteString("Need")
}
if c.LocalFlags&protocol.FlagLocalIgnored != 0 {
flags.WriteString("Ignored")
}
if c.LocalFlags&protocol.FlagLocalMustRescan != 0 {
flags.WriteString("Rescan")
}
if c.LocalFlags&protocol.FlagLocalReceiveOnly != 0 {
flags.WriteString("Recvonly")
}
if c.LocalFlags&protocol.FlagLocalUnsupported != 0 {
flags.WriteString("Unsupported")
}
if c.LocalFlags != 0 {
flags.WriteString(fmt.Sprintf("(%x)", c.LocalFlags))
}
if flags.Len() == 0 {
flags.WriteString("---")
}
return fmt.Sprintf("{Device:%v, Files:%d, Dirs:%d, Symlinks:%d, Del:%d, Bytes:%d, Seq:%d, Flags:%s}", c.DeviceID, c.Files, c.Directories, c.Symlinks, c.Deleted, c.Bytes, c.Sequence, flags.String())
}
// Equal compares the numbers only, not sequence/dev/flags.
func (c Counts) Equal(o Counts) bool {
return c.Files == o.Files && c.Directories == o.Directories && c.Symlinks == o.Symlinks && c.Deleted == o.Deleted && c.Bytes == o.Bytes
}
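
A small worked example of the Add semantics above, with invented values and an assumed lib/db import path: the numeric fields sum, the local flags OR together, and DeviceID deliberately resets to EmptyDeviceID, since an aggregate belongs to no single device.

package main

import (
    "fmt"

    "github.com/syncthing/syncthing/lib/db"
    "github.com/syncthing/syncthing/lib/protocol"
)

func main() {
    a := db.Counts{Files: 2, Bytes: 100, LocalFlags: protocol.FlagLocalNeeded}
    b := db.Counts{Files: 3, Bytes: 50, LocalFlags: protocol.FlagLocalIgnored}
    sum := a.Add(b)
    fmt.Println(sum.TotalItems(), sum.Bytes) // 5 150
}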

View File

@@ -1,123 +0,0 @@
// Copyright (C) 2025 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package db // import "github.com/syncthing/syncthing/internal/db/sqlite"
import (
"iter"
"time"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/thejerf/suture/v4"
)
type DB interface {
Service(maintenanceInterval time.Duration) suture.Service
// Basics
Update(folder string, device protocol.DeviceID, fs []protocol.FileInfo) error
Close() error
// Single files
GetDeviceFile(folder string, device protocol.DeviceID, file string) (protocol.FileInfo, bool, error)
GetGlobalAvailability(folder, file string) ([]protocol.DeviceID, error)
GetGlobalFile(folder string, file string) (protocol.FileInfo, bool, error)
// File iterators
//
// n.b. there is a slight inconsistency in the return types where some
// return a FileInfo iterator and some a FileMetadata iterator. The
// latter is more lightweight, and the discrepancy depends on how the
// functions tend to be used. We can introduce more variations as
// required.
AllGlobalFiles(folder string) (iter.Seq[FileMetadata], func() error)
AllGlobalFilesPrefix(folder string, prefix string) (iter.Seq[FileMetadata], func() error)
AllLocalFiles(folder string, device protocol.DeviceID) (iter.Seq[protocol.FileInfo], func() error)
AllLocalFilesBySequence(folder string, device protocol.DeviceID, startSeq int64, limit int) (iter.Seq[protocol.FileInfo], func() error)
AllLocalFilesWithPrefix(folder string, device protocol.DeviceID, prefix string) (iter.Seq[protocol.FileInfo], func() error)
AllLocalFilesWithBlocksHash(folder string, h []byte) (iter.Seq[FileMetadata], func() error)
AllNeededGlobalFiles(folder string, device protocol.DeviceID, order config.PullOrder, limit, offset int) (iter.Seq[protocol.FileInfo], func() error)
AllLocalBlocksWithHash(hash []byte) ([]BlockMapEntry, error)
AllLocalFilesWithBlocksHashAnyFolder(hash []byte) (map[string][]FileMetadata, error)
// Cleanup
DropAllFiles(folder string, device protocol.DeviceID) error
DropDevice(device protocol.DeviceID) error
DropFilesNamed(folder string, device protocol.DeviceID, names []string) error
DropFolder(folder string) error
// Various metadata
GetDeviceSequence(folder string, device protocol.DeviceID) (int64, error)
ListFolders() ([]string, error)
ListDevicesForFolder(folder string) ([]protocol.DeviceID, error)
RemoteSequences(folder string) (map[protocol.DeviceID]int64, error)
// Counts
CountGlobal(folder string) (Counts, error)
CountLocal(folder string, device protocol.DeviceID) (Counts, error)
CountNeed(folder string, device protocol.DeviceID) (Counts, error)
CountReceiveOnlyChanged(folder string) (Counts, error)
// Index IDs
DropAllIndexIDs() error
GetIndexID(folder string, device protocol.DeviceID) (protocol.IndexID, error)
SetIndexID(folder string, device protocol.DeviceID, id protocol.IndexID) error
// MtimeFS
DeleteMtime(folder, name string) error
GetMtime(folder, name string) (ondisk, virtual time.Time)
PutMtime(folder, name string, ondisk, virtual time.Time) error
KV
}
// Generic KV store
type KV interface {
GetKV(key string) ([]byte, error)
PutKV(key string, val []byte) error
DeleteKV(key string) error
PrefixKV(prefix string) (iter.Seq[KeyValue], func() error)
}
type BlockMapEntry struct {
BlocklistHash []byte
Offset int64
BlockIndex int
Size int
}
type KeyValue struct {
Key string
Value []byte
}
type FileMetadata struct {
Name string
Sequence int64
ModNanos int64
Size int64
LocalFlags int64
Type protocol.FileInfoType
Deleted bool
Invalid bool
}
func (f *FileMetadata) ModTime() time.Time {
return time.Unix(0, f.ModNanos)
}
func (f *FileMetadata) IsReceiveOnlyChanged() bool {
return f.LocalFlags&protocol.FlagLocalReceiveOnly != 0
}
func (f *FileMetadata) IsDirectory() bool {
return f.Type == protocol.FileInfoTypeDirectory
}
func (f *FileMetadata) ShouldConflict() bool {
return f.LocalFlags&protocol.LocalConflictFlags != 0
}
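
A sketch of the (iter.Seq, func() error) convention the comment above describes: range over the sequence first, then call the error function to learn whether iteration stopped early. printGlobals is an invented helper for illustration, using the import path from the package clause above.

package example

import (
    "fmt"

    db "github.com/syncthing/syncthing/internal/db/sqlite"
)

func printGlobals(d db.DB, folder string) error {
    files, errFn := d.AllGlobalFiles(folder)
    for f := range files {
        fmt.Printf("%s (%d bytes)\n", f.Name, f.Size)
    }
    // Non-nil only if the iteration above terminated early.
    return errFn()
}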

View File

@@ -1,229 +0,0 @@
// Copyright (C) 2025 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package db
import (
"iter"
"time"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/protocol"
)
var (
metricCurrentOperations = promauto.NewGaugeVec(prometheus.GaugeOpts{
Namespace: "syncthing",
Subsystem: "db",
Name: "operations_current",
}, []string{"folder", "operation"})
metricTotalOperationSeconds = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "db",
Name: "operation_seconds_total",
Help: "Total time spent in database operations, per folder and operation",
}, []string{"folder", "operation"})
metricTotalOperationsCount = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "db",
Name: "operations_total",
Help: "Total number of database operations, per folder and operation",
}, []string{"folder", "operation"})
metricTotalFilesUpdatedCount = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "db",
Name: "files_updated_total",
Help: "Total number of files updated",
}, []string{"folder"})
)
func MetricsWrap(db DB) DB {
return metricsDB{db}
}
type metricsDB struct {
DB
}
func (m metricsDB) account(folder, op string) func() {
t0 := time.Now()
metricCurrentOperations.WithLabelValues(folder, op).Inc()
return func() {
if dur := time.Since(t0).Seconds(); dur > 0 {
metricTotalOperationSeconds.WithLabelValues(folder, op).Add(dur)
}
metricTotalOperationsCount.WithLabelValues(folder, op).Inc()
metricCurrentOperations.WithLabelValues(folder, op).Dec()
}
}
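// Note the idiom used by every wrapper method below: `defer m.account(folder, op)()`
// calls account immediately, starting the timer and incrementing the in-flight
// gauge, and defers only the returned closure, which records the duration and
// decrements the gauge again when the wrapped method returns.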
func (m metricsDB) AllLocalFilesWithBlocksHash(folder string, h []byte) (iter.Seq[FileMetadata], func() error) {
defer m.account(folder, "AllLocalFilesWithBlocksHash")()
return m.DB.AllLocalFilesWithBlocksHash(folder, h)
}
func (m metricsDB) AllLocalFilesWithBlocksHashAnyFolder(hash []byte) (map[string][]FileMetadata, error) {
defer m.account("-", "AllLocalFilesWithBlocksHashAnyFolder")()
return m.DB.AllLocalFilesWithBlocksHashAnyFolder(hash)
}
func (m metricsDB) AllGlobalFiles(folder string) (iter.Seq[FileMetadata], func() error) {
defer m.account(folder, "AllGlobalFiles")()
return m.DB.AllGlobalFiles(folder)
}
func (m metricsDB) AllGlobalFilesPrefix(folder string, prefix string) (iter.Seq[FileMetadata], func() error) {
defer m.account(folder, "AllGlobalFilesPrefix")()
return m.DB.AllGlobalFilesPrefix(folder, prefix)
}
func (m metricsDB) AllLocalFiles(folder string, device protocol.DeviceID) (iter.Seq[protocol.FileInfo], func() error) {
defer m.account(folder, "AllLocalFiles")()
return m.DB.AllLocalFiles(folder, device)
}
func (m metricsDB) AllLocalFilesWithPrefix(folder string, device protocol.DeviceID, prefix string) (iter.Seq[protocol.FileInfo], func() error) {
defer m.account(folder, "AllLocalFilesPrefix")()
return m.DB.AllLocalFilesWithPrefix(folder, device, prefix)
}
func (m metricsDB) AllLocalFilesBySequence(folder string, device protocol.DeviceID, startSeq int64, limit int) (iter.Seq[protocol.FileInfo], func() error) {
defer m.account(folder, "AllLocalFilesBySequence")()
return m.DB.AllLocalFilesBySequence(folder, device, startSeq, limit)
}
func (m metricsDB) AllNeededGlobalFiles(folder string, device protocol.DeviceID, order config.PullOrder, limit, offset int) (iter.Seq[protocol.FileInfo], func() error) {
defer m.account(folder, "AllNeededGlobalFiles")()
return m.DB.AllNeededGlobalFiles(folder, device, order, limit, offset)
}
func (m metricsDB) GetGlobalAvailability(folder, file string) ([]protocol.DeviceID, error) {
defer m.account(folder, "GetGlobalAvailability")()
return m.DB.GetGlobalAvailability(folder, file)
}
func (m metricsDB) AllLocalBlocksWithHash(hash []byte) ([]BlockMapEntry, error) {
defer m.account("-", "AllLocalBlocksWithHash")()
return m.DB.AllLocalBlocksWithHash(hash)
}
func (m metricsDB) Close() error {
defer m.account("-", "Close")()
return m.DB.Close()
}
func (m metricsDB) ListDevicesForFolder(folder string) ([]protocol.DeviceID, error) {
defer m.account(folder, "ListDevicesForFolder")()
return m.DB.ListDevicesForFolder(folder)
}
func (m metricsDB) RemoteSequences(folder string) (map[protocol.DeviceID]int64, error) {
defer m.account(folder, "RemoteSequences")()
return m.DB.RemoteSequences(folder)
}
func (m metricsDB) DropAllFiles(folder string, device protocol.DeviceID) error {
defer m.account(folder, "DropAllFiles")()
return m.DB.DropAllFiles(folder, device)
}
func (m metricsDB) DropDevice(device protocol.DeviceID) error {
defer m.account("-", "DropDevice")()
return m.DB.DropDevice(device)
}
func (m metricsDB) DropFilesNamed(folder string, device protocol.DeviceID, names []string) error {
defer m.account(folder, "DropFilesNamed")()
return m.DB.DropFilesNamed(folder, device, names)
}
func (m metricsDB) DropFolder(folder string) error {
defer m.account(folder, "DropFolder")()
return m.DB.DropFolder(folder)
}
func (m metricsDB) DropAllIndexIDs() error {
defer m.account("-", "IndexIDDropAll")()
return m.DB.DropAllIndexIDs()
}
func (m metricsDB) ListFolders() ([]string, error) {
defer m.account("-", "ListFolders")()
return m.DB.ListFolders()
}
func (m metricsDB) GetGlobalFile(folder string, file string) (protocol.FileInfo, bool, error) {
defer m.account(folder, "GetGlobalFile")()
return m.DB.GetGlobalFile(folder, file)
}
func (m metricsDB) CountGlobal(folder string) (Counts, error) {
defer m.account(folder, "CountGlobal")()
return m.DB.CountGlobal(folder)
}
func (m metricsDB) GetIndexID(folder string, device protocol.DeviceID) (protocol.IndexID, error) {
defer m.account(folder, "IndexIDGet")()
return m.DB.GetIndexID(folder, device)
}
func (m metricsDB) GetDeviceFile(folder string, device protocol.DeviceID, file string) (protocol.FileInfo, bool, error) {
defer m.account(folder, "GetDeviceFile")()
return m.DB.GetDeviceFile(folder, device, file)
}
func (m metricsDB) CountLocal(folder string, device protocol.DeviceID) (Counts, error) {
defer m.account(folder, "CountLocal")()
return m.DB.CountLocal(folder, device)
}
func (m metricsDB) CountNeed(folder string, device protocol.DeviceID) (Counts, error) {
defer m.account(folder, "CountNeed")()
return m.DB.CountNeed(folder, device)
}
func (m metricsDB) CountReceiveOnlyChanged(folder string) (Counts, error) {
defer m.account(folder, "CountReceiveOnlyChanged")()
return m.DB.CountReceiveOnlyChanged(folder)
}
func (m metricsDB) GetDeviceSequence(folder string, device protocol.DeviceID) (int64, error) {
defer m.account(folder, "GetDeviceSequence")()
return m.DB.GetDeviceSequence(folder, device)
}
func (m metricsDB) SetIndexID(folder string, device protocol.DeviceID, id protocol.IndexID) error {
defer m.account(folder, "IndexIDSet")()
return m.DB.SetIndexID(folder, device, id)
}
func (m metricsDB) Update(folder string, device protocol.DeviceID, fs []protocol.FileInfo) error {
defer m.account(folder, "Update")()
defer metricTotalFilesUpdatedCount.WithLabelValues(folder).Add(float64(len(fs)))
return m.DB.Update(folder, device, fs)
}
func (m metricsDB) GetKV(key string) ([]byte, error) {
defer m.account("-", "GetKV")()
return m.DB.GetKV(key)
}
func (m metricsDB) PutKV(key string, val []byte) error {
defer m.account("-", "PutKV")()
return m.DB.PutKV(key, val)
}
func (m metricsDB) DeleteKV(key string) error {
defer m.account("-", "DeleteKV")()
return m.DB.DeleteKV(key)
}
func (m metricsDB) PrefixKV(prefix string) (iter.Seq[KeyValue], func() error) {
defer m.account("-", "PrefixKV")()
return m.DB.PrefixKV(prefix)
}
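
Taken together, MetricsWrap is a plain decorator: promauto registers the collectors with the default Prometheus registry at package load time, so instrumenting the store is a one-line wrap. A minimal sketch of the wiring, assuming the sqlite store from later in this changeset and the standard promhttp handler (the path and port are illustrative):

package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus/promhttp"

	"github.com/syncthing/syncthing/internal/db"
	"github.com/syncthing/syncthing/internal/db/sqlite"
)

func main() {
	raw, err := sqlite.Open("/var/lib/syncthing/index-v2") // illustrative path
	if err != nil {
		panic(err)
	}
	store := db.MetricsWrap(raw) // every DB call is now counted and timed
	defer store.Close()

	// promauto already registered the collectors; just expose them.
	http.Handle("/metrics", promhttp.Handler())
	_ = http.ListenAndServe(":2112", nil)
}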


@@ -1,113 +0,0 @@
// Copyright (C) 2018 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package backend
import (
"github.com/syndtr/goleveldb/leveldb"
"github.com/syndtr/goleveldb/leveldb/iterator"
"github.com/syndtr/goleveldb/leveldb/util"
)
// leveldbBackend implements Backend on top of a leveldb
type leveldbBackend struct {
ldb *leveldb.DB
closeWG *closeWaitGroup
location string
}
func newLeveldbBackend(ldb *leveldb.DB, location string) *leveldbBackend {
return &leveldbBackend{
ldb: ldb,
closeWG: &closeWaitGroup{},
location: location,
}
}
func (b *leveldbBackend) NewReadTransaction() (ReadTransaction, error) {
return b.newSnapshot()
}
func (b *leveldbBackend) newSnapshot() (leveldbSnapshot, error) {
rel, err := newReleaser(b.closeWG)
if err != nil {
return leveldbSnapshot{}, err
}
snap, err := b.ldb.GetSnapshot()
if err != nil {
rel.Release()
return leveldbSnapshot{}, wrapLeveldbErr(err)
}
return leveldbSnapshot{
snap: snap,
rel: rel,
}, nil
}
func (b *leveldbBackend) Close() error {
b.closeWG.CloseWait()
return wrapLeveldbErr(b.ldb.Close())
}
func (b *leveldbBackend) Get(key []byte) ([]byte, error) {
val, err := b.ldb.Get(key, nil)
return val, wrapLeveldbErr(err)
}
func (b *leveldbBackend) NewPrefixIterator(prefix []byte) (Iterator, error) {
return &leveldbIterator{b.ldb.NewIterator(util.BytesPrefix(prefix), nil)}, nil
}
func (b *leveldbBackend) NewRangeIterator(first, last []byte) (Iterator, error) {
return &leveldbIterator{b.ldb.NewIterator(&util.Range{Start: first, Limit: last}, nil)}, nil
}
func (b *leveldbBackend) Location() string {
return b.location
}
// leveldbSnapshot implements backend.ReadTransaction
type leveldbSnapshot struct {
snap *leveldb.Snapshot
rel *releaser
}
func (l leveldbSnapshot) Get(key []byte) ([]byte, error) {
val, err := l.snap.Get(key, nil)
return val, wrapLeveldbErr(err)
}
func (l leveldbSnapshot) NewPrefixIterator(prefix []byte) (Iterator, error) {
return l.snap.NewIterator(util.BytesPrefix(prefix), nil), nil
}
func (l leveldbSnapshot) NewRangeIterator(first, last []byte) (Iterator, error) {
return l.snap.NewIterator(&util.Range{Start: first, Limit: last}, nil), nil
}
func (l leveldbSnapshot) Release() {
l.snap.Release()
l.rel.Release()
}
type leveldbIterator struct {
iterator.Iterator
}
func (it *leveldbIterator) Error() error {
return wrapLeveldbErr(it.Iterator.Error())
}
// wrapLeveldbErr wraps errors so that the backend package can recognize them
func wrapLeveldbErr(err error) error {
switch err {
case leveldb.ErrClosed:
return errClosed
case leveldb.ErrNotFound:
return errNotFound
}
return err
}
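
The wrapping matters because callers never see goleveldb's sentinel errors directly; they test against the backend package's own predicates instead, as the transaction code further down does with backend.IsNotFound. A hedged sketch of a typical read, written as if it lived in this package:

// lookup reads a key, treating absence as a non-error. It assumes
// IsNotFound matches the errNotFound sentinel returned above.
func lookup(b Backend, key []byte) ([]byte, bool, error) {
	val, err := b.Get(key)
	if IsNotFound(err) {
		return nil, false, nil // absent keys are not errors for most callers
	}
	if err != nil {
		return nil, false, err // errClosed or a genuine I/O failure
	}
	return val, true, nil
}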


@@ -1,32 +0,0 @@
// Copyright (C) 2018 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package backend
import (
"github.com/syndtr/goleveldb/leveldb"
"github.com/syndtr/goleveldb/leveldb/opt"
)
const dbMaxOpenFiles = 100
// OpenLevelDBRO attempts to open the database at the given location, read
// only.
func OpenLevelDBRO(location string) (Backend, error) {
opts := &opt.Options{
OpenFilesCacheCapacity: dbMaxOpenFiles,
ReadOnly: true,
}
ldb, err := open(location, opts)
if err != nil {
return nil, err
}
return newLeveldbBackend(ldb, location), nil
}
func open(location string, opts *opt.Options) (*leveldb.DB, error) {
return leveldb.OpenFile(location, opts)
}
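
Read-only open exists for migration: the legacy database is traversed once and never written. A sketch of such a traversal under that assumption (process is a hypothetical consumer; the key prefix is illustrative):

func dumpPrefix(location string, prefix byte) error {
	be, err := OpenLevelDBRO(location)
	if err != nil {
		return err
	}
	defer be.Close()
	it, err := be.NewPrefixIterator([]byte{prefix})
	if err != nil {
		return err
	}
	defer it.Release()
	for it.Next() {
		// Key/Value are only valid until the next call to Next.
		process(it.Key(), it.Value())
	}
	return it.Error()
}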


@@ -1,70 +0,0 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package olddb
import (
"encoding/binary"
"time"
"github.com/syncthing/syncthing/internal/db/olddb/backend"
)
// deprecatedLowlevel is the lowest-level database interface. It has a very simple
// purpose: hold the actual backend database and the in-memory state
// that belongs to that database. In the same way that a single on-disk
// database can only be opened once, there should be only one deprecatedLowlevel for
// any given backend.
type deprecatedLowlevel struct {
backend.Backend
folderIdx *smallIndex
deviceIdx *smallIndex
keyer keyer
}
func NewLowlevel(backend backend.Backend) (*deprecatedLowlevel, error) {
// Only log restarts in debug mode.
db := &deprecatedLowlevel{
Backend: backend,
folderIdx: newSmallIndex(backend, []byte{KeyTypeFolderIdx}),
deviceIdx: newSmallIndex(backend, []byte{KeyTypeDeviceIdx}),
}
db.keyer = newDefaultKeyer(db.folderIdx, db.deviceIdx)
return db, nil
}
// ListFolders returns the list of folders currently in the database
func (db *deprecatedLowlevel) ListFolders() []string {
return db.folderIdx.Values()
}
func (db *deprecatedLowlevel) IterateMtimes(fn func(folder, name string, ondisk, virtual time.Time) error) error {
it, err := db.NewPrefixIterator([]byte{KeyTypeVirtualMtime})
if err != nil {
return err
}
defer it.Release()
for it.Next() {
key := it.Key()[1:]
folderID, ok := db.folderIdx.Val(binary.BigEndian.Uint32(key))
if !ok {
continue
}
name := key[4:]
val := it.Value()
var ondisk, virtual time.Time
if err := ondisk.UnmarshalBinary(val[:len(val)/2]); err != nil {
continue
}
if err := virtual.UnmarshalBinary(val[len(val)/2:]); err != nil {
continue
}
if err := fn(string(folderID), string(name), ondisk, virtual); err != nil {
return err
}
}
return it.Error()
}
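
IterateMtimes is the read side of the old virtual-mtime store and exists so the pairs can be carried over into the new database (PutMtime on the sqlite DB further down). A sketch of that hand-off, with oldDB and newDB assumed to be already-open instances of the two stores:

err := oldDB.IterateMtimes(func(folder, name string, ondisk, virtual time.Time) error {
	return newDB.PutMtime(folder, name, ondisk, virtual)
})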


@@ -1,67 +0,0 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
// Package db provides a set type to track local/remote files with newness
// checks. We must do a certain amount of normalization in here. We will get
// fed paths with either native or wire-format separators and encodings
// depending on who calls us. We transform paths to wire-format (NFC and
// slashes) on the way to the database, and transform to native format
// (varying separator and encoding) on the way back out.
package olddb
import (
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/protocol"
)
type deprecatedFileSet struct {
folder string
db *deprecatedLowlevel
}
// The Iterator is called with either a protocol.FileInfo or a
// FileInfoTruncated (depending on the method) and returns true to
// continue iteration, false to stop.
type Iterator func(f protocol.FileInfo) bool
func NewFileSet(folder string, db *deprecatedLowlevel) (*deprecatedFileSet, error) {
s := &deprecatedFileSet{
folder: folder,
db: db,
}
return s, nil
}
type Snapshot struct {
folder string
t readOnlyTransaction
}
func (s *deprecatedFileSet) Snapshot() (*Snapshot, error) {
t, err := s.db.newReadOnlyTransaction()
if err != nil {
return nil, err
}
return &Snapshot{
folder: s.folder,
t: t,
}, nil
}
func (s *Snapshot) Release() {
s.t.close()
}
func (s *Snapshot) WithHaveSequence(startSeq int64, fn Iterator) error {
return s.t.withHaveSequence([]byte(s.folder), startSeq, nativeFileIterator(fn))
}
func nativeFileIterator(fn Iterator) Iterator {
return func(fi protocol.FileInfo) bool {
fi.Name = osutil.NativeFilename(fi.Name)
return fn(fi)
}
}
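
The only operation left on this deprecated set is sequence-ordered iteration, used when draining the old database. A sketch, assuming an open deprecatedLowlevel ll and a hypothetical handle callback (returning false stops the iteration):

fs, _ := NewFileSet("default", ll) // the constructor shown above never errors
snap, err := fs.Snapshot()
if err != nil {
	return err
}
defer snap.Release()
err = snap.WithHaveSequence(1, func(f protocol.FileInfo) bool {
	// f.Name has already been converted back to native format here.
	return handle(f)
})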


@@ -1,193 +0,0 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package olddb
import (
"fmt"
"google.golang.org/protobuf/proto"
"github.com/syncthing/syncthing/internal/db/olddb/backend"
"github.com/syncthing/syncthing/internal/gen/bep"
"github.com/syncthing/syncthing/internal/gen/dbproto"
"github.com/syncthing/syncthing/lib/protocol"
)
// A readOnlyTransaction represents a database snapshot.
type readOnlyTransaction struct {
backend.ReadTransaction
keyer keyer
}
func (db *deprecatedLowlevel) newReadOnlyTransaction() (readOnlyTransaction, error) {
tran, err := db.NewReadTransaction()
if err != nil {
return readOnlyTransaction{}, err
}
return db.readOnlyTransactionFromBackendTransaction(tran), nil
}
func (db *deprecatedLowlevel) readOnlyTransactionFromBackendTransaction(tran backend.ReadTransaction) readOnlyTransaction {
return readOnlyTransaction{
ReadTransaction: tran,
keyer: db.keyer,
}
}
func (t readOnlyTransaction) close() {
t.Release()
}
func (t readOnlyTransaction) getFileByKey(key []byte) (protocol.FileInfo, bool, error) {
f, ok, err := t.getFileTrunc(key, false)
if err != nil || !ok {
return protocol.FileInfo{}, false, err
}
return f, true, nil
}
func (t readOnlyTransaction) getFileTrunc(key []byte, trunc bool) (protocol.FileInfo, bool, error) {
bs, err := t.Get(key)
if backend.IsNotFound(err) {
return protocol.FileInfo{}, false, nil
}
if err != nil {
return protocol.FileInfo{}, false, err
}
f, err := t.unmarshalTrunc(bs, trunc)
if backend.IsNotFound(err) {
return protocol.FileInfo{}, false, nil
}
if err != nil {
return protocol.FileInfo{}, false, err
}
return f, true, nil
}
func (t readOnlyTransaction) unmarshalTrunc(bs []byte, trunc bool) (protocol.FileInfo, error) {
if trunc {
var bfi dbproto.FileInfoTruncated
err := proto.Unmarshal(bs, &bfi)
if err != nil {
return protocol.FileInfo{}, err
}
if err := t.fillTruncated(&bfi); err != nil {
return protocol.FileInfo{}, err
}
return protocol.FileInfoFromDBTruncated(&bfi), nil
}
var bfi bep.FileInfo
err := proto.Unmarshal(bs, &bfi)
if err != nil {
return protocol.FileInfo{}, err
}
if err := t.fillFileInfo(&bfi); err != nil {
return protocol.FileInfo{}, err
}
return protocol.FileInfoFromDB(&bfi), nil
}
type blocksIndirectionError struct {
err error
}
func (e *blocksIndirectionError) Error() string {
return fmt.Sprintf("filling Blocks: %v", e.err)
}
func (e *blocksIndirectionError) Unwrap() error {
return e.err
}
// fillFileInfo follows the (possible) indirection of blocks and version
// vector and fills it out.
func (t readOnlyTransaction) fillFileInfo(fi *bep.FileInfo) error {
var key []byte
if len(fi.Blocks) == 0 && len(fi.BlocksHash) != 0 {
// The blocks list is indirected and we need to load it.
key = t.keyer.GenerateBlockListKey(key, fi.BlocksHash)
bs, err := t.Get(key)
if err != nil {
return &blocksIndirectionError{err}
}
var bl dbproto.BlockList
if err := proto.Unmarshal(bs, &bl); err != nil {
return err
}
fi.Blocks = bl.Blocks
}
if len(fi.VersionHash) != 0 {
key = t.keyer.GenerateVersionKey(key, fi.VersionHash)
bs, err := t.Get(key)
if err != nil {
return fmt.Errorf("filling Version: %w", err)
}
var v bep.Vector
if err := proto.Unmarshal(bs, &v); err != nil {
return err
}
fi.Version = &v
}
return nil
}
// fillTruncated follows the (possible) indirection of version vector and
// fills it.
func (t readOnlyTransaction) fillTruncated(fi *dbproto.FileInfoTruncated) error {
var key []byte
if len(fi.VersionHash) == 0 {
return nil
}
key = t.keyer.GenerateVersionKey(key, fi.VersionHash)
bs, err := t.Get(key)
if err != nil {
return err
}
var v bep.Vector
if err := proto.Unmarshal(bs, &v); err != nil {
return err
}
fi.Version = &v
return nil
}
func (t *readOnlyTransaction) withHaveSequence(folder []byte, startSeq int64, fn Iterator) error {
first, err := t.keyer.GenerateSequenceKey(nil, folder, startSeq)
if err != nil {
return err
}
last, err := t.keyer.GenerateSequenceKey(nil, folder, maxInt64)
if err != nil {
return err
}
dbi, err := t.NewRangeIterator(first, last)
if err != nil {
return err
}
defer dbi.Release()
for dbi.Next() {
f, ok, err := t.getFileByKey(dbi.Value())
if err != nil {
return err
}
if !ok {
continue
}
if !fn(f) {
return nil
}
}
return dbi.Error()
}


@@ -1,249 +0,0 @@
// Copyright (C) 2025 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package sqlite
import (
"database/sql"
"embed"
"io/fs"
"path/filepath"
"strconv"
"strings"
"sync"
"text/template"
"time"
"github.com/jmoiron/sqlx"
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/protocol"
)
const currentSchemaVersion = 1
//go:embed sql/**
var embedded embed.FS
type baseDB struct {
path string
baseName string
sql *sqlx.DB
updateLock sync.Mutex
updatePoints int
checkpointsCount int
statementsMut sync.RWMutex
statements map[string]*sqlx.Stmt
tplInput map[string]any
}
func openBase(path string, maxConns int, pragmas, schemaScripts, migrationScripts []string) (*baseDB, error) {
// Open the database with options to enable foreign keys and recursive
// triggers (needed for the delete+insert triggers on row replace).
sqlDB, err := sqlx.Open(dbDriver, "file:"+path+"?"+commonOptions)
if err != nil {
return nil, wrap(err)
}
sqlDB.SetMaxOpenConns(maxConns)
for _, pragma := range pragmas {
if _, err := sqlDB.Exec("PRAGMA " + pragma); err != nil {
return nil, wrap(err, "PRAGMA "+pragma)
}
}
db := &baseDB{
path: path,
baseName: filepath.Base(path),
sql: sqlDB,
statements: make(map[string]*sqlx.Stmt),
}
for _, script := range schemaScripts {
if err := db.runScripts(script); err != nil {
return nil, wrap(err)
}
}
ver, _ := db.getAppliedSchemaVersion()
if ver.SchemaVersion > 0 {
filter := func(scr string) bool {
scr = filepath.Base(scr)
nstr, _, ok := strings.Cut(scr, "-")
if !ok {
return false
}
n, err := strconv.ParseInt(nstr, 10, 32)
if err != nil {
return false
}
return int(n) > ver.SchemaVersion
}
for _, script := range migrationScripts {
if err := db.runScripts(script, filter); err != nil {
return nil, wrap(err)
}
}
}
// Set the current schema version, if not already set
if err := db.setAppliedSchemaVersion(currentSchemaVersion); err != nil {
return nil, wrap(err)
}
db.tplInput = map[string]any{
"FlagLocalUnsupported": protocol.FlagLocalUnsupported,
"FlagLocalIgnored": protocol.FlagLocalIgnored,
"FlagLocalMustRescan": protocol.FlagLocalMustRescan,
"FlagLocalReceiveOnly": protocol.FlagLocalReceiveOnly,
"FlagLocalGlobal": protocol.FlagLocalGlobal,
"FlagLocalNeeded": protocol.FlagLocalNeeded,
"SyncthingVersion": build.LongVersion,
}
return db, nil
}
func (s *baseDB) Close() error {
s.updateLock.Lock()
s.statementsMut.Lock()
defer s.updateLock.Unlock()
defer s.statementsMut.Unlock()
for _, stmt := range s.statements {
stmt.Close()
}
return wrap(s.sql.Close())
}
var tplFuncs = template.FuncMap{
"or": func(vs ...int) int {
v := vs[0]
for _, ov := range vs[1:] {
v |= ov
}
return v
},
}
// stmt returns a prepared statement for the given SQL string, after
// applying local template expansions. The statement is cached.
func (s *baseDB) stmt(tpl string) stmt {
tpl = strings.TrimSpace(tpl)
// Fast concurrent lookup of cached statement
s.statementsMut.RLock()
stmt, ok := s.statements[tpl]
s.statementsMut.RUnlock()
if ok {
return stmt
}
// On miss, take the full lock, check again
s.statementsMut.Lock()
defer s.statementsMut.Unlock()
stmt, ok = s.statements[tpl]
if ok {
return stmt
}
// Apply template expansions
var sb strings.Builder
compTpl := template.Must(template.New("tpl").Funcs(tplFuncs).Parse(tpl))
if err := compTpl.Execute(&sb, s.tplInput); err != nil {
panic("bug: bad template: " + err.Error())
}
// Prepare and cache
stmt, err := s.sql.Preparex(sb.String())
if err != nil {
return failedStmt{err}
}
s.statements[tpl] = stmt
return stmt
}
type stmt interface {
Exec(args ...any) (sql.Result, error)
Get(dest any, args ...any) error
Queryx(args ...any) (*sqlx.Rows, error)
Select(dest any, args ...any) error
}
type failedStmt struct {
err error
}
func (f failedStmt) Exec(_ ...any) (sql.Result, error) { return nil, f.err }
func (f failedStmt) Get(_ any, _ ...any) error { return f.err }
func (f failedStmt) Queryx(_ ...any) (*sqlx.Rows, error) { return nil, f.err }
func (f failedStmt) Select(_ any, _ ...any) error { return f.err }
func (s *baseDB) runScripts(glob string, filter ...func(s string) bool) error {
scripts, err := fs.Glob(embedded, glob)
if err != nil {
return wrap(err)
}
tx, err := s.sql.Begin()
if err != nil {
return wrap(err)
}
defer tx.Rollback() //nolint:errcheck
nextScript:
for _, scr := range scripts {
for _, fn := range filter {
if !fn(scr) {
continue nextScript
}
}
bs, err := fs.ReadFile(embedded, scr)
if err != nil {
return wrap(err, scr)
}
// SQLite requires one statement per exec, so we split the init
// files on lines containing only a semicolon and execute them
// separately. We require it on a separate line because there are
// also statement-internal semicolons in the triggers.
for _, stmt := range strings.Split(string(bs), "\n;") {
if _, err := tx.Exec(stmt); err != nil {
return wrap(err, stmt)
}
}
}
return wrap(tx.Commit())
}
type schemaVersion struct {
SchemaVersion int
AppliedAt int64
SyncthingVersion string
}
func (s *schemaVersion) AppliedTime() time.Time {
return time.Unix(0, s.AppliedAt)
}
func (s *baseDB) setAppliedSchemaVersion(ver int) error {
_, err := s.stmt(`
INSERT OR IGNORE INTO schemamigrations (schema_version, applied_at, syncthing_version)
VALUES (?, ?, ?)
`).Exec(ver, time.Now().UnixNano(), build.LongVersion)
return wrap(err)
}
func (s *baseDB) getAppliedSchemaVersion() (schemaVersion, error) {
var v schemaVersion
err := s.stmt(`
SELECT schema_version as schemaversion, applied_at as appliedat, syncthing_version as syncthingversion FROM schemamigrations
ORDER BY schema_version DESC
LIMIT 1
`).Get(&v)
return v, wrap(err)
}
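
The stmt helper means the query text doubles as the cache key, and the template input lets SQL reference the protocol flag constants without per-call string formatting. A hedged example of the pattern; the files table and local_flags column here are illustrative only:

// {{.FlagLocalIgnored}} expands once, at preparation time; the prepared
// statement is then cached under the raw template text.
var n int
err := s.stmt(`
	SELECT count(*) FROM files
	WHERE local_flags & {{.FlagLocalIgnored}} != 0
`).Get(&n)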


@@ -1,243 +0,0 @@
// Copyright (C) 2025 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package sqlite
import (
"context"
"fmt"
"testing"
"time"
"github.com/syncthing/syncthing/internal/timeutil"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/rand"
)
var globalFi protocol.FileInfo
func BenchmarkUpdate(b *testing.B) {
db, err := OpenTemp()
if err != nil {
b.Fatal(err)
}
b.Cleanup(func() {
if err := db.Close(); err != nil {
b.Fatal(err)
}
})
svc := db.Service(time.Hour).(*Service)
fs := make([]protocol.FileInfo, 100)
seed := 0
size := 10000
for size < 200_000 {
t0 := time.Now()
if err := svc.periodic(context.Background()); err != nil {
b.Fatal(err)
}
b.Log("garbage collect in", time.Since(t0))
for {
local, err := db.CountLocal(folderID, protocol.LocalDeviceID)
if err != nil {
b.Fatal(err)
}
if local.Files >= size {
break
}
fs := make([]protocol.FileInfo, 1000)
for i := range fs {
fs[i] = genFile(rand.String(24), 64, 0)
}
if err := db.Update(folderID, protocol.LocalDeviceID, fs); err != nil {
b.Fatal(err)
}
}
b.Run(fmt.Sprintf("Insert100Loc@%d", size), func(b *testing.B) {
for range b.N {
for i := range fs {
fs[i] = genFile(rand.String(24), 64, 0)
}
if err := db.Update(folderID, protocol.LocalDeviceID, fs); err != nil {
b.Fatal(err)
}
}
b.ReportMetric(float64(b.N)*100.0/b.Elapsed().Seconds(), "files/s")
})
b.Run(fmt.Sprintf("RepBlocks100@%d", size), func(b *testing.B) {
for range b.N {
for i := range fs {
fs[i].Blocks = genBlocks(fs[i].Name, seed, 64)
fs[i].Version = fs[i].Version.Update(42)
}
seed++
if err := db.Update(folderID, protocol.LocalDeviceID, fs); err != nil {
b.Fatal(err)
}
}
b.ReportMetric(float64(b.N)*100.0/b.Elapsed().Seconds(), "files/s")
})
b.Run(fmt.Sprintf("RepSame100@%d", size), func(b *testing.B) {
for range b.N {
for i := range fs {
fs[i].Version = fs[i].Version.Update(42)
}
if err := db.Update(folderID, protocol.LocalDeviceID, fs); err != nil {
b.Fatal(err)
}
}
b.ReportMetric(float64(b.N)*100.0/b.Elapsed().Seconds(), "files/s")
})
b.Run(fmt.Sprintf("Insert100Rem@%d", size), func(b *testing.B) {
for range b.N {
for i := range fs {
fs[i].Blocks = genBlocks(fs[i].Name, seed, 64)
fs[i].Version = fs[i].Version.Update(42)
fs[i].Sequence = timeutil.StrictlyMonotonicNanos()
}
if err := db.Update(folderID, protocol.DeviceID{42}, fs); err != nil {
b.Fatal(err)
}
}
b.ReportMetric(float64(b.N)*100.0/b.Elapsed().Seconds(), "files/s")
})
b.Run(fmt.Sprintf("GetGlobal100@%d", size), func(b *testing.B) {
for range b.N {
for i := range fs {
_, ok, err := db.GetGlobalFile(folderID, fs[i].Name)
if err != nil {
b.Fatal(err)
}
if !ok {
b.Fatal("should exist")
}
}
}
b.ReportMetric(float64(b.N)*100.0/b.Elapsed().Seconds(), "files/s")
})
b.Run(fmt.Sprintf("LocalSequenced@%d", size), func(b *testing.B) {
count := 0
for range b.N {
cur, err := db.GetDeviceSequence(folderID, protocol.LocalDeviceID)
if err != nil {
b.Fatal(err)
}
it, errFn := db.AllLocalFilesBySequence(folderID, protocol.LocalDeviceID, cur-100, 0)
for f := range it {
count++
globalFi = f
}
if err := errFn(); err != nil {
b.Fatal(err)
}
}
b.ReportMetric(float64(count)/b.Elapsed().Seconds(), "files/s")
})
b.Run(fmt.Sprintf("GetDeviceSequenceLoc@%d", size), func(b *testing.B) {
for range b.N {
_, err := db.GetDeviceSequence(folderID, protocol.LocalDeviceID)
if err != nil {
b.Fatal(err)
}
}
})
b.Run(fmt.Sprintf("GetDeviceSequenceRem@%d", size), func(b *testing.B) {
for range b.N {
_, err := db.GetDeviceSequence(folderID, protocol.DeviceID{42})
if err != nil {
b.Fatal(err)
}
}
})
b.Run(fmt.Sprintf("RemoteNeed@%d", size), func(b *testing.B) {
count := 0
for range b.N {
it, errFn := db.AllNeededGlobalFiles(folderID, protocol.DeviceID{42}, config.PullOrderAlphabetic, 0, 0)
for f := range it {
count++
globalFi = f
}
if err := errFn(); err != nil {
b.Fatal(err)
}
}
b.ReportMetric(float64(count)/b.Elapsed().Seconds(), "files/s")
})
b.Run(fmt.Sprintf("LocalNeed100Largest@%d", size), func(b *testing.B) {
count := 0
for range b.N {
it, errFn := db.AllNeededGlobalFiles(folderID, protocol.LocalDeviceID, config.PullOrderLargestFirst, 100, 0)
for f := range it {
globalFi = f
count++
}
if err := errFn(); err != nil {
b.Fatal(err)
}
}
b.ReportMetric(float64(count)/b.Elapsed().Seconds(), "files/s")
})
size <<= 1
}
}
func TestBenchmarkDropAllRemote(t *testing.T) {
if testing.Short() {
t.Skip("slow test")
}
db, err := OpenTemp()
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
if err := db.Close(); err != nil {
t.Fatal(err)
}
})
fs := make([]protocol.FileInfo, 1000)
seq := 0
for {
local, err := db.CountLocal(folderID, protocol.LocalDeviceID)
if err != nil {
t.Fatal(err)
}
if local.Files >= 15_000 {
break
}
for i := range fs {
seq++
fs[i] = genFile(rand.String(24), 64, seq)
}
if err := db.Update(folderID, protocol.DeviceID{42}, fs); err != nil {
t.Fatal(err)
}
if err := db.Update(folderID, protocol.LocalDeviceID, fs); err != nil {
t.Fatal(err)
}
}
t0 := time.Now()
if err := db.DropAllFiles(folderID, protocol.DeviceID{42}); err != nil {
t.Fatal(err)
}
d := time.Since(t0)
t.Log("drop all took", d)
}
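
Two conventions above are worth noting: assigning each iterated file to the package-level globalFi sink keeps the compiler from optimizing the read loops away, and the outer size loop doubles the database between passes so scaling regressions show up within a single run. A typical invocation with the standard Go tooling (package path as in this tree):

go test -run '^$' -bench BenchmarkUpdate ./internal/db/sqlite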


@@ -1,402 +0,0 @@
// Copyright (C) 2025 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package sqlite
import (
"database/sql"
"errors"
"fmt"
"iter"
"path/filepath"
"strings"
"time"
"github.com/syncthing/syncthing/internal/db"
"github.com/syncthing/syncthing/internal/itererr"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/rand"
)
var errNoSuchFolder = errors.New("no such folder")
func (s *DB) getFolderDB(folder string, create bool) (*folderDB, error) {
// Check for an already open database
s.folderDBsMut.RLock()
fdb, ok := s.folderDBs[folder]
s.folderDBsMut.RUnlock()
if ok {
return fdb, nil
}
// Check for an existing database. If we're not supposed to create the
// folder, we stop here unless it already has a database name.
var dbName string
if err := s.stmt(`
SELECT database_name FROM folders
WHERE folder_id = ?
`).Get(&dbName, folder); err != nil && !errors.Is(err, sql.ErrNoRows) {
return nil, wrap(err)
}
if dbName == "" && !create {
return nil, errNoSuchFolder
}
// Create a folder ID and database if it does not already exist
s.folderDBsMut.Lock()
defer s.folderDBsMut.Unlock()
if fdb, ok := s.folderDBs[folder]; ok {
return fdb, nil
}
if dbName == "" {
// First time we want to access this folder, need to create a new
// folder ID
idx, err := s.folderIdxLocked(folder)
if err != nil {
return nil, wrap(err)
}
// The database name is the folder index ID and a random slug.
slug := strings.ToLower(rand.String(8))
dbName = fmt.Sprintf("folder.%04x-%s.db", idx, slug)
if _, err := s.stmt(`UPDATE folders SET database_name = ? WHERE idx = ?`).Exec(dbName, idx); err != nil {
return nil, wrap(err, "set name")
}
}
l.Debugf("Folder %s in database %s", folder, dbName)
path := dbName
if !filepath.IsAbs(path) {
path = filepath.Join(s.pathBase, dbName)
}
fdb, err := s.folderDBOpener(folder, path, s.deleteRetention)
if err != nil {
return nil, wrap(err)
}
s.folderDBs[folder] = fdb
return fdb, nil
}
func (s *DB) Update(folder string, device protocol.DeviceID, fs []protocol.FileInfo) error {
fdb, err := s.getFolderDB(folder, true)
if err != nil {
return err
}
return fdb.Update(device, fs)
}
func (s *DB) GetDeviceFile(folder string, device protocol.DeviceID, file string) (protocol.FileInfo, bool, error) {
fdb, err := s.getFolderDB(folder, false)
if errors.Is(err, errNoSuchFolder) {
return protocol.FileInfo{}, false, nil
}
if err != nil {
return protocol.FileInfo{}, false, err
}
return fdb.GetDeviceFile(device, file)
}
func (s *DB) GetGlobalAvailability(folder, file string) ([]protocol.DeviceID, error) {
fdb, err := s.getFolderDB(folder, false)
if errors.Is(err, errNoSuchFolder) {
return nil, nil
}
if err != nil {
return nil, err
}
return fdb.GetGlobalAvailability(file)
}
func (s *DB) GetGlobalFile(folder string, file string) (protocol.FileInfo, bool, error) {
fdb, err := s.getFolderDB(folder, false)
if errors.Is(err, errNoSuchFolder) {
return protocol.FileInfo{}, false, nil
}
if err != nil {
return protocol.FileInfo{}, false, err
}
return fdb.GetGlobalFile(file)
}
func (s *DB) AllGlobalFiles(folder string) (iter.Seq[db.FileMetadata], func() error) {
fdb, err := s.getFolderDB(folder, false)
if errors.Is(err, errNoSuchFolder) {
return func(yield func(db.FileMetadata) bool) {}, func() error { return nil }
}
if err != nil {
return func(yield func(db.FileMetadata) bool) {}, func() error { return err }
}
return fdb.AllGlobalFiles()
}
func (s *DB) AllGlobalFilesPrefix(folder string, prefix string) (iter.Seq[db.FileMetadata], func() error) {
fdb, err := s.getFolderDB(folder, false)
if errors.Is(err, errNoSuchFolder) {
return func(yield func(db.FileMetadata) bool) {}, func() error { return nil }
}
if err != nil {
return func(yield func(db.FileMetadata) bool) {}, func() error { return err }
}
return fdb.AllGlobalFilesPrefix(prefix)
}
func (s *DB) AllLocalBlocksWithHash(hash []byte) ([]db.BlockMapEntry, error) {
var entries []db.BlockMapEntry
err := s.forEachFolder(func(fdb *folderDB) error {
es, err := itererr.Collect(fdb.AllLocalBlocksWithHash(hash))
entries = append(entries, es...)
return err
})
return entries, err
}
func (s *DB) AllLocalFilesWithBlocksHashAnyFolder(hash []byte) (map[string][]db.FileMetadata, error) {
res := make(map[string][]db.FileMetadata)
err := s.forEachFolder(func(fdb *folderDB) error {
files, err := itererr.Collect(fdb.AllLocalFilesWithBlocksHash(hash))
res[fdb.folderID] = files
return err
})
return res, err
}
func (s *DB) AllLocalFiles(folder string, device protocol.DeviceID) (iter.Seq[protocol.FileInfo], func() error) {
fdb, err := s.getFolderDB(folder, false)
if errors.Is(err, errNoSuchFolder) {
return func(yield func(protocol.FileInfo) bool) {}, func() error { return nil }
}
if err != nil {
return func(yield func(protocol.FileInfo) bool) {}, func() error { return err }
}
return fdb.AllLocalFiles(device)
}
func (s *DB) AllLocalFilesBySequence(folder string, device protocol.DeviceID, startSeq int64, limit int) (iter.Seq[protocol.FileInfo], func() error) {
fdb, err := s.getFolderDB(folder, false)
if errors.Is(err, errNoSuchFolder) {
return func(yield func(protocol.FileInfo) bool) {}, func() error { return nil }
}
if err != nil {
return func(yield func(protocol.FileInfo) bool) {}, func() error { return err }
}
return fdb.AllLocalFilesBySequence(device, startSeq, limit)
}
func (s *DB) AllLocalFilesWithPrefix(folder string, device protocol.DeviceID, prefix string) (iter.Seq[protocol.FileInfo], func() error) {
fdb, err := s.getFolderDB(folder, false)
if errors.Is(err, errNoSuchFolder) {
return func(yield func(protocol.FileInfo) bool) {}, func() error { return nil }
}
if err != nil {
return func(yield func(protocol.FileInfo) bool) {}, func() error { return err }
}
return fdb.AllLocalFilesWithPrefix(device, prefix)
}
func (s *DB) AllLocalFilesWithBlocksHash(folder string, h []byte) (iter.Seq[db.FileMetadata], func() error) {
fdb, err := s.getFolderDB(folder, false)
if errors.Is(err, errNoSuchFolder) {
return func(yield func(db.FileMetadata) bool) {}, func() error { return nil }
}
if err != nil {
return func(yield func(db.FileMetadata) bool) {}, func() error { return err }
}
return fdb.AllLocalFilesWithBlocksHash(h)
}
func (s *DB) AllNeededGlobalFiles(folder string, device protocol.DeviceID, order config.PullOrder, limit, offset int) (iter.Seq[protocol.FileInfo], func() error) {
fdb, err := s.getFolderDB(folder, false)
if errors.Is(err, errNoSuchFolder) {
return func(yield func(protocol.FileInfo) bool) {}, func() error { return nil }
}
if err != nil {
return func(yield func(protocol.FileInfo) bool) {}, func() error { return err }
}
return fdb.AllNeededGlobalFiles(device, order, limit, offset)
}
func (s *DB) DropAllFiles(folder string, device protocol.DeviceID) error {
fdb, err := s.getFolderDB(folder, false)
if errors.Is(err, errNoSuchFolder) {
return nil
}
if err != nil {
return err
}
return fdb.DropAllFiles(device)
}
func (s *DB) DropFilesNamed(folder string, device protocol.DeviceID, names []string) error {
fdb, err := s.getFolderDB(folder, false)
if errors.Is(err, errNoSuchFolder) {
return nil
}
if err != nil {
return err
}
return fdb.DropFilesNamed(device, names)
}
func (s *DB) ListDevicesForFolder(folder string) ([]protocol.DeviceID, error) {
fdb, err := s.getFolderDB(folder, false)
if errors.Is(err, errNoSuchFolder) {
return nil, nil
}
if err != nil {
return nil, err
}
return fdb.ListDevicesForFolder()
}
func (s *DB) RemoteSequences(folder string) (map[protocol.DeviceID]int64, error) {
fdb, err := s.getFolderDB(folder, false)
if errors.Is(err, errNoSuchFolder) {
return nil, nil
}
if err != nil {
return nil, err
}
return fdb.RemoteSequences()
}
func (s *DB) CountGlobal(folder string) (db.Counts, error) {
fdb, err := s.getFolderDB(folder, false)
if errors.Is(err, errNoSuchFolder) {
return db.Counts{}, nil
}
if err != nil {
return db.Counts{}, err
}
return fdb.CountGlobal()
}
func (s *DB) CountLocal(folder string, device protocol.DeviceID) (db.Counts, error) {
fdb, err := s.getFolderDB(folder, false)
if errors.Is(err, errNoSuchFolder) {
return db.Counts{}, nil
}
if err != nil {
return db.Counts{}, err
}
return fdb.CountLocal(device)
}
func (s *DB) CountNeed(folder string, device protocol.DeviceID) (db.Counts, error) {
fdb, err := s.getFolderDB(folder, false)
if errors.Is(err, errNoSuchFolder) {
return db.Counts{}, nil
}
if err != nil {
return db.Counts{}, err
}
return fdb.CountNeed(device)
}
func (s *DB) CountReceiveOnlyChanged(folder string) (db.Counts, error) {
fdb, err := s.getFolderDB(folder, false)
if errors.Is(err, errNoSuchFolder) {
return db.Counts{}, nil
}
if err != nil {
return db.Counts{}, err
}
return fdb.CountReceiveOnlyChanged()
}
func (s *DB) DropAllIndexIDs() error {
return s.forEachFolder(func(fdb *folderDB) error {
return fdb.DropAllIndexIDs()
})
}
func (s *DB) GetIndexID(folder string, device protocol.DeviceID) (protocol.IndexID, error) {
fdb, err := s.getFolderDB(folder, true)
if err != nil {
return 0, err
}
return fdb.GetIndexID(device)
}
func (s *DB) SetIndexID(folder string, device protocol.DeviceID, id protocol.IndexID) error {
fdb, err := s.getFolderDB(folder, true)
if err != nil {
return err
}
return fdb.SetIndexID(device, id)
}
func (s *DB) GetDeviceSequence(folder string, device protocol.DeviceID) (int64, error) {
fdb, err := s.getFolderDB(folder, false)
if errors.Is(err, errNoSuchFolder) {
return 0, nil
}
if err != nil {
return 0, err
}
return fdb.GetDeviceSequence(device)
}
func (s *DB) DeleteMtime(folder, name string) error {
fdb, err := s.getFolderDB(folder, false)
if errors.Is(err, errNoSuchFolder) {
return nil
}
if err != nil {
return err
}
return fdb.DeleteMtime(name)
}
func (s *DB) GetMtime(folder, name string) (ondisk, virtual time.Time) {
fdb, err := s.getFolderDB(folder, false)
if errors.Is(err, errNoSuchFolder) {
return time.Time{}, time.Time{}
}
if err != nil {
return time.Time{}, time.Time{}
}
return fdb.GetMtime(name)
}
func (s *DB) PutMtime(folder, name string, ondisk, virtual time.Time) error {
fdb, err := s.getFolderDB(folder, true)
if err != nil {
return err
}
return fdb.PutMtime(name, ondisk, virtual)
}
func (s *DB) DropDevice(device protocol.DeviceID) error {
return s.forEachFolder(func(fdb *folderDB) error {
return fdb.DropDevice(device)
})
}
// forEachFolder runs the function for each existing folder database,
// returning the first error that was encountered.
func (s *DB) forEachFolder(fn func(fdb *folderDB) error) error {
folders, err := s.ListFolders()
if err != nil {
return err
}
var firstError error
for _, folder := range folders {
fdb, err := s.getFolderDB(folder, false)
if err != nil {
if firstError == nil {
firstError = err
}
continue
}
if err := fn(fdb); err != nil && firstError == nil {
firstError = err
}
}
return firstError
}
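
Note the asymmetry encoded in getFolderDB's create flag: read paths treat an unknown folder as empty, returning zero values and a nil error, while Update, GetIndexID, SetIndexID and PutMtime create the per-folder database on first use. A sketch of that behavior, as seen from package sqlite:

func createOnWrite(sdb *DB, files []protocol.FileInfo) error {
	// Reads on a never-written folder return zero values, not errors.
	if _, err := sdb.CountGlobal("nonexistent"); err != nil {
		return err
	}
	// The first write creates folder.<idx>-<slug>.db and its schema.
	return sdb.Update("nonexistent", protocol.LocalDeviceID, files)
}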


@@ -1,519 +0,0 @@
// Copyright (C) 2025 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package sqlite
import (
"slices"
"testing"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/protocol"
)
func TestNeed(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
if err := db.Close(); err != nil {
t.Fatal(err)
}
})
// Some local files
var v protocol.Vector
baseV := v.Update(1)
newerV := baseV.Update(42)
files := []protocol.FileInfo{
genFile("test1", 1, 0), // remote need
genFile("test2", 2, 0), // local need
genFile("test3", 3, 0), // global
}
files[0].Version = baseV
files[1].Version = baseV
files[2].Version = newerV
err = db.Update(folderID, protocol.LocalDeviceID, files)
if err != nil {
t.Fatal(err)
}
// Some remote files
remote := []protocol.FileInfo{
genFile("test2", 2, 100), // global
genFile("test3", 3, 101), // remote need
genFile("test4", 4, 102), // local need
}
remote[0].Version = newerV
remote[1].Version = baseV
remote[2].Version = newerV
err = db.Update(folderID, protocol.DeviceID{42}, remote)
if err != nil {
t.Fatal(err)
}
// A couple are needed locally
localNeed := fiNames(mustCollect[protocol.FileInfo](t)(db.AllNeededGlobalFiles(folderID, protocol.LocalDeviceID, config.PullOrderAlphabetic, 0, 0)))
if !slices.Equal(localNeed, []string{"test2", "test4"}) {
t.Log(localNeed)
t.Fatal("bad local need")
}
// Another couple are needed remotely
remoteNeed := fiNames(mustCollect[protocol.FileInfo](t)(db.AllNeededGlobalFiles(folderID, protocol.DeviceID{42}, config.PullOrderAlphabetic, 0, 0)))
if !slices.Equal(remoteNeed, []string{"test1", "test3"}) {
t.Log(remoteNeed)
t.Fatal("bad remote need")
}
}
func TestDropRecalcsGlobal(t *testing.T) {
// When we drop a device we may get a new global
t.Parallel()
t.Run("DropAllFiles", func(t *testing.T) {
t.Parallel()
testDropWithDropper(t, func(t *testing.T, db *DB) {
t.Helper()
if err := db.DropAllFiles(folderID, protocol.DeviceID{42}); err != nil {
t.Fatal(err)
}
})
})
t.Run("DropDevice", func(t *testing.T) {
t.Parallel()
testDropWithDropper(t, func(t *testing.T, db *DB) {
t.Helper()
if err := db.DropDevice(protocol.DeviceID{42}); err != nil {
t.Fatal(err)
}
})
})
t.Run("DropFilesNamed", func(t *testing.T) {
t.Parallel()
testDropWithDropper(t, func(t *testing.T, db *DB) {
t.Helper()
if err := db.DropFilesNamed(folderID, protocol.DeviceID{42}, []string{"test1", "test42"}); err != nil {
t.Fatal(err)
}
})
})
}
func testDropWithDropper(t *testing.T, dropper func(t *testing.T, db *DB)) {
t.Helper()
db, err := OpenTemp()
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
if err := db.Close(); err != nil {
t.Fatal(err)
}
})
// Some local files
err = db.Update(folderID, protocol.LocalDeviceID, []protocol.FileInfo{
genFile("test1", 1, 0),
genFile("test2", 2, 0),
})
if err != nil {
t.Fatal(err)
}
// Some remote files
remote := []protocol.FileInfo{
genFile("test1", 3, 0),
}
remote[0].Version = remote[0].Version.Update(42)
err = db.Update(folderID, protocol.DeviceID{42}, remote)
if err != nil {
t.Fatal(err)
}
// Remote test1 wins as the global, verify.
count, err := db.CountGlobal(folderID)
if err != nil {
t.Fatal(err)
}
if count.Bytes != (2+3)*128<<10 {
t.Log(count)
t.Fatal("bad global size to begin with")
}
if g, ok, err := db.GetGlobalFile(folderID, "test1"); err != nil || !ok {
t.Fatal("missing global to begin with")
} else if g.Size != 3*128<<10 {
t.Fatal("remote test1 should be the global")
}
// Now remove that remote device
dropper(t, db)
// Our test1 should now be the global
count, err = db.CountGlobal(folderID)
if err != nil {
t.Fatal(err)
}
if count.Bytes != (1+2)*128<<10 {
t.Log(count)
t.Fatal("bad global size after drop")
}
if g, ok, err := db.GetGlobalFile(folderID, "test1"); err != nil || !ok {
t.Fatal("missing global after drop")
} else if g.Size != 1*128<<10 {
t.Fatal("local test1 should be the global")
}
}
func TestNeedDeleted(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
if err := db.Close(); err != nil {
t.Fatal(err)
}
})
// Some local files
err = db.Update(folderID, protocol.LocalDeviceID, []protocol.FileInfo{
genFile("test1", 1, 0),
genFile("test2", 2, 0),
})
if err != nil {
t.Fatal(err)
}
// A remote deleted file
remote := []protocol.FileInfo{
genFile("test1", 1, 101),
}
remote[0].SetDeleted(42)
err = db.Update(folderID, protocol.DeviceID{42}, remote)
if err != nil {
t.Fatal(err)
}
// We need the one deleted file
s, err := db.CountNeed(folderID, protocol.LocalDeviceID)
if err != nil {
t.Fatal(err)
}
if s.Bytes != 0 || s.Deleted != 1 {
t.Log(s)
t.Error("bad need")
}
}
func TestDontNeedIgnored(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
if err := db.Close(); err != nil {
t.Fatal(err)
}
})
// A remote file
files := []protocol.FileInfo{
genFile("test1", 1, 103),
}
err = db.Update(folderID, protocol.DeviceID{42}, files)
if err != nil {
t.Fatal(err)
}
// Which we've ignored locally
files[0].SetIgnored()
err = db.Update(folderID, protocol.LocalDeviceID, files)
if err != nil {
t.Fatal(err)
}
// We don't need it
s, err := db.CountNeed(folderID, protocol.LocalDeviceID)
if err != nil {
t.Fatal(err)
}
if s.Bytes != 0 || s.Files != 0 {
t.Log(s)
t.Error("bad need")
}
// It shouldn't show up in the need list
names := mustCollect[protocol.FileInfo](t)(db.AllNeededGlobalFiles(folderID, protocol.LocalDeviceID, config.PullOrderAlphabetic, 0, 0))
if len(names) != 0 {
t.Log(names)
t.Error("need no files")
}
}
func TestRemoteDontNeedLocalIgnored(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
if err := db.Close(); err != nil {
t.Fatal(err)
}
})
// A local ignored file
file := genFile("test1", 1, 103)
file.SetIgnored()
files := []protocol.FileInfo{file}
err = db.Update(folderID, protocol.LocalDeviceID, files)
if err != nil {
t.Fatal(err)
}
// Which the remote doesn't have (no update)
// They don't need it
s, err := db.CountNeed(folderID, protocol.DeviceID{42})
if err != nil {
t.Fatal(err)
}
if s.Bytes != 0 || s.Files != 0 {
t.Log(s)
t.Error("bad need")
}
// It shouldn't show up in their need list
names := mustCollect[protocol.FileInfo](t)(db.AllNeededGlobalFiles(folderID, protocol.DeviceID{42}, config.PullOrderAlphabetic, 0, 0))
if len(names) != 0 {
t.Log(names)
t.Error("need no files")
}
}
func TestLocalDontNeedDeletedMissing(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
if err := db.Close(); err != nil {
t.Fatal(err)
}
})
// A remote deleted file
file := genFile("test1", 1, 103)
file.SetDeleted(42)
files := []protocol.FileInfo{file}
err = db.Update(folderID, protocol.DeviceID{42}, files)
if err != nil {
t.Fatal(err)
}
// Which we don't have (no local update)
// We don't need it
s, err := db.CountNeed(folderID, protocol.LocalDeviceID)
if err != nil {
t.Fatal(err)
}
if s.Bytes != 0 || s.Files != 0 || s.Deleted != 0 {
t.Log(s)
t.Error("bad need")
}
// It shouldn't show up in the need list
names := mustCollect[protocol.FileInfo](t)(db.AllNeededGlobalFiles(folderID, protocol.LocalDeviceID, config.PullOrderAlphabetic, 0, 0))
if len(names) != 0 {
t.Log(names)
t.Error("need no files")
}
}
func TestRemoteDontNeedDeletedMissing(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
if err := db.Close(); err != nil {
t.Fatal(err)
}
})
// A local deleted file
file := genFile("test1", 1, 103)
file.SetDeleted(42)
files := []protocol.FileInfo{file}
err = db.Update(folderID, protocol.LocalDeviceID, files)
if err != nil {
t.Fatal(err)
}
// Which the remote doesn't have (no local update)
// They don't need it
s, err := db.CountNeed(folderID, protocol.DeviceID{42})
if err != nil {
t.Fatal(err)
}
if s.Bytes != 0 || s.Files != 0 || s.Deleted != 0 {
t.Log(s)
t.Error("bad need")
}
// It shouldn't show up in their need list
names := mustCollect[protocol.FileInfo](t)(db.AllNeededGlobalFiles(folderID, protocol.DeviceID{42}, config.PullOrderAlphabetic, 0, 0))
if len(names) != 0 {
t.Log(names)
t.Error("need no files")
}
// Another remote has announced it, but has set the invalid bit;
// presumably it's being ignored there.
file = genFile("test1", 1, 103)
file.SetIgnored()
err = db.Update(folderID, protocol.DeviceID{43}, []protocol.FileInfo{file})
if err != nil {
t.Fatal(err)
}
// They don't need it, either
s, err = db.CountNeed(folderID, protocol.DeviceID{43})
if err != nil {
t.Fatal(err)
}
if s.Bytes != 0 || s.Files != 0 || s.Deleted != 0 {
t.Log(s)
t.Error("bad need")
}
// It shouldn't show up in their need list
names = mustCollect[protocol.FileInfo](t)(db.AllNeededGlobalFiles(folderID, protocol.DeviceID{43}, config.PullOrderAlphabetic, 0, 0))
if len(names) != 0 {
t.Log(names)
t.Error("need no files")
}
}
func TestNeedRemoteSymlinkAndDir(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
if err := db.Close(); err != nil {
t.Fatal(err)
}
})
// Two remote "specials", a symlink and a directory
var v protocol.Vector
v = v.Update(1)
files := []protocol.FileInfo{
{Name: "sym", Type: protocol.FileInfoTypeSymlink, Sequence: 100, Version: v, Blocks: genBlocks("symlink", 0, 1)},
{Name: "dir", Type: protocol.FileInfoTypeDirectory, Sequence: 101, Version: v},
}
err = db.Update(folderID, protocol.DeviceID{42}, files)
if err != nil {
t.Fatal(err)
}
// We need them
s, err := db.CountNeed(folderID, protocol.LocalDeviceID)
if err != nil {
t.Fatal(err)
}
if s.Directories != 1 || s.Symlinks != 1 {
t.Log(s)
t.Error("bad need")
}
// They should be in the need list
names := mustCollect[protocol.FileInfo](t)(db.AllNeededGlobalFiles(folderID, protocol.LocalDeviceID, config.PullOrderAlphabetic, 0, 0))
if len(names) != 2 {
t.Log(names)
t.Error("bad need")
}
}
func TestNeedPagination(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
if err := db.Close(); err != nil {
t.Fatal(err)
}
})
// Several remote files
var v protocol.Vector
v = v.Update(1)
files := []protocol.FileInfo{
genFile("test0", 1, 100),
genFile("test1", 1, 101),
genFile("test2", 1, 102),
genFile("test3", 1, 103),
genFile("test4", 1, 104),
genFile("test5", 1, 105),
genFile("test6", 1, 106),
genFile("test7", 1, 107),
genFile("test8", 1, 108),
genFile("test9", 1, 109),
}
err = db.Update(folderID, protocol.DeviceID{42}, files)
if err != nil {
t.Fatal(err)
}
// We should get the first two
names := fiNames(mustCollect[protocol.FileInfo](t)(db.AllNeededGlobalFiles(folderID, protocol.LocalDeviceID, config.PullOrderAlphabetic, 2, 0)))
if !slices.Equal(names, []string{"test0", "test1"}) {
t.Log(names)
t.Error("bad need")
}
// We should get the next three
names = fiNames(mustCollect[protocol.FileInfo](t)(db.AllNeededGlobalFiles(folderID, protocol.LocalDeviceID, config.PullOrderAlphabetic, 3, 2)))
if !slices.Equal(names, []string{"test2", "test3", "test4"}) {
t.Log(names)
t.Error("bad need")
}
// We should get the last five
names = fiNames(mustCollect[protocol.FileInfo](t)(db.AllNeededGlobalFiles(folderID, protocol.LocalDeviceID, config.PullOrderAlphabetic, 5, 5)))
if !slices.Equal(names, []string{"test5", "test6", "test7", "test8", "test9"}) {
t.Log(names)
t.Error("bad need")
}
}


@@ -1,81 +0,0 @@
// Copyright (C) 2025 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package sqlite
import (
"testing"
"github.com/syncthing/syncthing/lib/protocol"
)
func TestIndexIDs(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
if err := db.Close(); err != nil {
t.Fatal(err)
}
})
t.Run("LocalDeviceID", func(t *testing.T) {
t.Parallel()
localID, err := db.GetIndexID("foo", protocol.LocalDeviceID)
if err != nil {
t.Fatal(err)
}
if localID == 0 {
t.Fatal("should have been generated")
}
again, err := db.GetIndexID("foo", protocol.LocalDeviceID)
if err != nil {
t.Fatal(err)
}
if again != localID {
t.Fatal("should get same again")
}
other, err := db.GetIndexID("bar", protocol.LocalDeviceID)
if err != nil {
t.Fatal(err)
}
if other == localID {
t.Fatal("should not get same for other folder")
}
})
t.Run("OtherDeviceID", func(t *testing.T) {
t.Parallel()
localID, err := db.GetIndexID("foo", protocol.DeviceID{42})
if err != nil {
t.Fatal(err)
}
if localID != 0 {
t.Fatal("should have been zero")
}
newID := protocol.NewIndexID()
if err := db.SetIndexID("foo", protocol.DeviceID{42}, newID); err != nil {
t.Fatal(err)
}
again, err := db.GetIndexID("foo", protocol.DeviceID{42})
if err != nil {
t.Fatal(err)
}
if again != newID {
t.Log(again, newID)
t.Fatal("should get the ID we set")
}
})
}


@@ -1,78 +0,0 @@
// Copyright (C) 2025 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package sqlite
import (
"iter"
"github.com/jmoiron/sqlx"
"github.com/syncthing/syncthing/internal/db"
)
func (s *baseDB) GetKV(key string) ([]byte, error) {
var val []byte
if err := s.stmt(`
SELECT value FROM kv
WHERE key = ?
`).Get(&val, key); err != nil {
return nil, wrap(err)
}
return val, nil
}
func (s *baseDB) PutKV(key string, val []byte) error {
s.updateLock.Lock()
defer s.updateLock.Unlock()
_, err := s.stmt(`
INSERT OR REPLACE INTO kv (key, value)
VALUES (?, ?)
`).Exec(key, val)
return wrap(err)
}
func (s *baseDB) DeleteKV(key string) error {
s.updateLock.Lock()
defer s.updateLock.Unlock()
_, err := s.stmt(`
DELETE FROM kv WHERE key = ?
`).Exec(key)
return wrap(err)
}
func (s *baseDB) PrefixKV(prefix string) (iter.Seq[db.KeyValue], func() error) {
var rows *sqlx.Rows
var err error
if prefix == "" {
rows, err = s.stmt(`SELECT key, value FROM kv`).Queryx()
} else {
end := prefixEnd(prefix)
rows, err = s.stmt(`
SELECT key, value FROM kv
WHERE key >= ? AND key < ?
`).Queryx(prefix, end)
}
if err != nil {
return func(_ func(db.KeyValue) bool) {}, func() error { return err }
}
return func(yield func(db.KeyValue) bool) {
defer rows.Close()
for rows.Next() {
var key string
var val []byte
if err = rows.Scan(&key, &val); err != nil {
return
}
if !yield(db.KeyValue{Key: key, Value: val}) {
return
}
}
err = rows.Err()
}, func() error {
return err
}
}
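
PrefixKV returns the same iterator-plus-error-closure pair used by every listing method in this changeset: range over the sequence first, then call the second return value to pick up any error deferred from the scan. For example:

func dumpKV(sdb *DB) error {
	it, errFn := sdb.PrefixKV("folder/") // prefix is illustrative
	for kv := range it {
		fmt.Printf("%s: %d bytes\n", kv.Key, len(kv.Value))
	}
	return errFn() // scan/query errors surface here, after the loop
}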


@@ -1,203 +0,0 @@
// Copyright (C) 2025 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package sqlite
import (
"testing"
"github.com/syncthing/syncthing/lib/protocol"
)
func TestBlocks(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
if err := db.Close(); err != nil {
t.Fatal(err)
}
})
files := []protocol.FileInfo{
{
Name: "file1",
Blocks: []protocol.BlockInfo{
{Hash: []byte{1, 2, 3}, Offset: 0, Size: 42},
{Hash: []byte{2, 3, 4}, Offset: 42, Size: 42},
{Hash: []byte{3, 4, 5}, Offset: 84, Size: 42},
},
},
{
Name: "file2",
Blocks: []protocol.BlockInfo{
{Hash: []byte{2, 3, 4}, Offset: 0, Size: 42},
{Hash: []byte{3, 4, 5}, Offset: 42, Size: 42},
{Hash: []byte{4, 5, 6}, Offset: 84, Size: 42},
},
},
}
if err := db.Update("test", protocol.LocalDeviceID, files); err != nil {
t.Fatal(err)
}
// Search for blocks
vals, err := db.AllLocalBlocksWithHash([]byte{1, 2, 3})
if err != nil {
t.Fatal(err)
}
if len(vals) != 1 {
t.Log(vals)
t.Fatal("expected one hit")
} else if vals[0].BlockIndex != 0 || vals[0].Offset != 0 || vals[0].Size != 42 {
t.Log(vals[0])
t.Fatal("bad entry")
}
// Get FileInfos for those blocks
res, err := db.AllLocalFilesWithBlocksHashAnyFolder(vals[0].BlocklistHash)
if err != nil {
t.Fatal(err)
}
if len(res) != 1 {
t.Fatal("should return one folder")
}
if len(res[folderID]) != 1 {
t.Fatal("should find one file")
}
if res[folderID][0].Name != "file1" {
t.Fatal("should be file1")
}
// Get the other blocks
vals, err = db.AllLocalBlocksWithHash([]byte{3, 4, 5})
if err != nil {
t.Fatal(err)
}
if len(vals) != 2 {
t.Log(vals)
t.Fatal("expected two hits")
}
// if vals[0].Index != 2 || vals[0].Offset != 84 || vals[0].Size != 42 {
// t.Log(vals[0])
// t.Fatal("bad entry 1")
// }
// if vals[1].Index != 1 || vals[1].Offset != 42 || vals[1].Size != 42 {
// t.Log(vals[1])
// t.Fatal("bad entry 2")
// }
}
func TestBlocksDeleted(t *testing.T) {
t.Parallel()
sdb, err := OpenTemp()
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
if err := sdb.Close(); err != nil {
t.Fatal(err)
}
})
// Insert a file
file := genFile("foo", 1, 0)
if err := sdb.Update(folderID, protocol.LocalDeviceID, []protocol.FileInfo{file}); err != nil {
t.Fatal(err)
}
// We should find one entry for the block hash
search := file.Blocks[0].Hash
es, err := sdb.AllLocalBlocksWithHash(search)
if err != nil {
t.Fatal(err)
}
if len(es) != 1 {
t.Fatal("expected one hit")
}
// Update the file with a new block hash
file.Blocks = genBlocks("foo", 42, 1)
if err := sdb.Update(folderID, protocol.LocalDeviceID, []protocol.FileInfo{file}); err != nil {
t.Fatal(err)
}
// Searching for the old hash should yield no hits
if hits, err := sdb.AllLocalBlocksWithHash(search); err != nil {
t.Fatal(err)
} else if len(hits) != 0 {
t.Log(hits)
t.Error("expected no hits")
}
// Searching for the new hash should yield one hit
if hits, err := sdb.AllLocalBlocksWithHash(file.Blocks[0].Hash); err != nil {
t.Fatal(err)
} else if len(hits) != 1 {
t.Log(hits)
t.Error("expected one hit")
}
}
func TestRemoteSequence(t *testing.T) {
t.Parallel()
sdb, err := OpenTemp()
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
if err := sdb.Close(); err != nil {
t.Fatal(err)
}
})
// Insert a local file
file := genFile("foo", 1, 0)
if err := sdb.Update(folderID, protocol.LocalDeviceID, []protocol.FileInfo{file}); err != nil {
t.Fatal(err)
}
// Insert several remote files
file = genFile("foo1", 1, 42)
if err := sdb.Update(folderID, protocol.DeviceID{42}, []protocol.FileInfo{file}); err != nil {
t.Fatal(err)
}
if err := sdb.Update(folderID, protocol.DeviceID{43}, []protocol.FileInfo{file}); err != nil {
t.Fatal(err)
}
file = genFile("foo2", 1, 43)
if err := sdb.Update(folderID, protocol.DeviceID{43}, []protocol.FileInfo{file}); err != nil {
t.Fatal(err)
}
if err := sdb.Update(folderID, protocol.DeviceID{44}, []protocol.FileInfo{file}); err != nil {
t.Fatal(err)
}
file = genFile("foo3", 1, 44)
if err := sdb.Update(folderID, protocol.DeviceID{44}, []protocol.FileInfo{file}); err != nil {
t.Fatal(err)
}
// Verify remote sequences
seqs, err := sdb.RemoteSequences(folderID)
if err != nil {
t.Fatal(err)
}
if len(seqs) != 3 || seqs[protocol.DeviceID{42}] != 42 ||
seqs[protocol.DeviceID{43}] != 43 ||
seqs[protocol.DeviceID{44}] != 44 {
t.Log(seqs)
t.Error("bad seqs")
}
}


@@ -1,54 +0,0 @@
// Copyright (C) 2025 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package sqlite
import (
"testing"
"time"
)
func TestMtimePairs(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
if err := db.Close(); err != nil {
t.Fatal(err)
}
})
t0 := time.Now().Truncate(time.Second)
t1 := t0.Add(1234567890)
// Set a pair
if err := db.PutMtime("foo", "bar", t0, t1); err != nil {
t.Fatal(err)
}
// Check it
gt0, gt1 := db.GetMtime("foo", "bar")
if !gt0.Equal(t0) || !gt1.Equal(t1) {
t.Log(t0, gt0)
t.Log(t1, gt1)
t.Log("bad times")
}
// Delete it
if err := db.DeleteMtime("foo", "bar"); err != nil {
t.Fatal(err)
}
// Check it
gt0, gt1 = db.GetMtime("foo", "bar")
if !gt0.IsZero() || !gt1.IsZero() {
t.Log(gt0, gt1)
t.Log("bad times")
}
}


@@ -1,135 +0,0 @@
// Copyright (C) 2025 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package sqlite
import (
"os"
"path/filepath"
"sync"
"time"
"github.com/syncthing/syncthing/internal/db"
)
const maxDBConns = 16
type DB struct {
pathBase string
deleteRetention time.Duration
*baseDB
folderDBsMut sync.RWMutex
folderDBs map[string]*folderDB
folderDBOpener func(folder, path string, deleteRetention time.Duration) (*folderDB, error)
}
var _ db.DB = (*DB)(nil)
type Option func(*DB)
func WithDeleteRetention(d time.Duration) Option {
return func(s *DB) {
s.deleteRetention = d
}
}
func Open(path string, opts ...Option) (*DB, error) {
pragmas := []string{
"journal_mode = WAL",
"optimize = 0x10002",
"auto_vacuum = INCREMENTAL",
"default_temp_store = MEMORY",
"temp_store = MEMORY",
}
schemas := []string{
"sql/schema/common/*",
"sql/schema/main/*",
}
os.MkdirAll(path, 0o700)
mainPath := filepath.Join(path, "main.db")
mainBase, err := openBase(mainPath, maxDBConns, pragmas, schemas, nil)
if err != nil {
return nil, err
}
db := &DB{
pathBase: path,
baseDB: mainBase,
folderDBs: make(map[string]*folderDB),
folderDBOpener: openFolderDB,
}
for _, opt := range opts {
opt(db)
}
return db, nil
}
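// Illustrative usage sketch: open the main database with a 30-day delete
// retention and close it again. The path, retention value, and function
// name here are hypothetical, not part of the package API.
func exampleOpenUsage() error {
sdb, err := Open("/var/lib/syncthing/index-v2", WithDeleteRetention(30*24*time.Hour))
if err != nil {
return err
}
return sdb.Close()
}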
// Open the database with options suitable for the migration inserts. This
// is not a safe mode of operation for normal processing; use it only for
// bulk inserts followed by a close.
func OpenForMigration(path string) (*DB, error) {
pragmas := []string{
"journal_mode = OFF",
"default_temp_store = MEMORY",
"temp_store = MEMORY",
"foreign_keys = 0",
"synchronous = 0",
"locking_mode = EXCLUSIVE",
}
schemas := []string{
"sql/schema/common/*",
"sql/schema/main/*",
}
os.MkdirAll(path, 0o700)
mainPath := filepath.Join(path, "main.db")
mainBase, err := openBase(mainPath, 1, pragmas, schemas, nil)
if err != nil {
return nil, err
}
db := &DB{
pathBase: path,
baseDB: mainBase,
folderDBs: make(map[string]*folderDB),
folderDBOpener: openFolderDBForMigration,
}
// // Touch device IDs that should always exist and have a low index
// // numbers, and will never change
// db.localDeviceIdx, _ = db.deviceIdxLocked(protocol.LocalDeviceID)
// db.tplInput["LocalDeviceIdx"] = db.localDeviceIdx
return db, nil
}
func OpenTemp() (*DB, error) {
// SQLite has a memory mode, but it works differently with concurrency
// compared to what we need with the WAL mode. So, no memory databases
// for now.
dir, err := os.MkdirTemp("", "syncthing-db")
if err != nil {
return nil, wrap(err)
}
path := filepath.Join(dir, "db")
l.Debugln("Test DB in", path)
return Open(path)
}
func (s *DB) Close() error {
s.folderDBsMut.Lock()
defer s.folderDBsMut.Unlock()
for folder, fdb := range s.folderDBs {
fdb.Close()
delete(s.folderDBs, folder)
}
return wrap(s.baseDB.Close())
}


@@ -1,18 +0,0 @@
// Copyright (C) 2025 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
//go:build cgo
package sqlite
import (
_ "github.com/mattn/go-sqlite3" // register sqlite3 database driver
)
const (
dbDriver = "sqlite3"
commonOptions = "_fk=true&_rt=true&_cache_size=-65536&_sync=1&_txlock=immediate"
)


@@ -1,23 +0,0 @@
// Copyright (C) 2025 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
//go:build !cgo && !wazero
package sqlite
import (
"github.com/syncthing/syncthing/lib/build"
_ "modernc.org/sqlite" // register sqlite database driver
)
const (
dbDriver = "sqlite"
commonOptions = "_pragma=foreign_keys(1)&_pragma=recursive_triggers(1)&_pragma=cache_size(-65536)&_pragma=synchronous(1)"
)
func init() {
build.AddTag("modernc-sqlite")
}


@@ -1,44 +0,0 @@
// Copyright (C) 2025 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package sqlite
import "github.com/jmoiron/sqlx"
type txPreparedStmts struct {
*sqlx.Tx
stmts map[string]*sqlx.Stmt
}
func (p *txPreparedStmts) Preparex(query string) (*sqlx.Stmt, error) {
if p.stmts == nil {
p.stmts = make(map[string]*sqlx.Stmt)
}
stmt, ok := p.stmts[query]
if ok {
return stmt, nil
}
stmt, err := p.Tx.Preparex(query)
if err != nil {
return nil, wrap(err)
}
p.stmts[query] = stmt
return stmt, nil
}
func (p *txPreparedStmts) Commit() error {
for _, s := range p.stmts {
s.Close()
}
return p.Tx.Commit()
}
func (p *txPreparedStmts) Rollback() error {
for _, s := range p.stmts {
s.Close()
}
return p.Tx.Rollback()
}
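// Illustrative usage sketch: repeated Preparex calls with the same query
// reuse a single prepared statement within the transaction. The function
// name, keys, and values are hypothetical; kv is the simple key-value
// table from the schema.
func exampleTxPreparedStmts(tx *sqlx.Tx) error {
txp := &txPreparedStmts{Tx: tx}
for _, k := range []string{"a", "b", "c"} {
// After the first iteration, the cached statement is returned
// instead of being re-prepared.
stmt, err := txp.Preparex(`INSERT OR REPLACE INTO kv (key, value) VALUES (?, ?)`)
if err != nil {
return err
}
if _, err := stmt.Exec(k, []byte("v")); err != nil {
return err
}
}
return txp.Commit() // closes all cached statements, then commits
}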


@@ -1,195 +0,0 @@
// Copyright (C) 2025 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package sqlite
import (
"context"
"fmt"
"time"
"github.com/jmoiron/sqlx"
"github.com/syncthing/syncthing/internal/db"
"github.com/thejerf/suture/v4"
)
const (
internalMetaPrefix = "dbsvc"
lastMaintKey = "lastMaint"
defaultDeleteRetention = 180 * 24 * time.Hour
minDeleteRetention = 24 * time.Hour
)
func (s *DB) Service(maintenanceInterval time.Duration) suture.Service {
return newService(s, maintenanceInterval)
}
type Service struct {
sdb *DB
maintenanceInterval time.Duration
internalMeta *db.Typed
}
func (s *Service) String() string {
return fmt.Sprintf("sqlite.service@%p", s)
}
func newService(sdb *DB, maintenanceInterval time.Duration) *Service {
return &Service{
sdb: sdb,
maintenanceInterval: maintenanceInterval,
internalMeta: db.NewTyped(sdb, internalMetaPrefix),
}
}
func (s *Service) Serve(ctx context.Context) error {
// Run periodic maintenance
// Figure out when we last ran maintenance and schedule accordingly. If
// it has never run, or is overdue, run it shortly after startup.
lastMaint, _, _ := s.internalMeta.Time(lastMaintKey)
nextMaint := lastMaint.Add(s.maintenanceInterval)
wait := time.Until(nextMaint)
if wait < 0 {
wait = time.Minute
}
l.Debugln("Next periodic run in", wait)
timer := time.NewTimer(wait)
for {
select {
case <-ctx.Done():
return ctx.Err()
case <-timer.C:
}
if err := s.periodic(ctx); err != nil {
return wrap(err)
}
timer.Reset(s.maintenanceInterval)
l.Debugln("Next periodic run in", s.maintenanceInterval)
_ = s.internalMeta.PutTime(lastMaintKey, time.Now())
}
}
func (s *Service) periodic(ctx context.Context) error {
t0 := time.Now()
l.Debugln("Periodic start")
s.sdb.updateLock.Lock()
defer s.sdb.updateLock.Unlock()
t1 := time.Now()
defer func() { l.Debugln("Periodic done in", time.Since(t1), "+", t1.Sub(t0)) }()
tidy(ctx, s.sdb.sql)
return wrap(s.sdb.forEachFolder(func(fdb *folderDB) error {
fdb.updateLock.Lock()
defer fdb.updateLock.Unlock()
if err := garbageCollectOldDeletedLocked(fdb); err != nil {
return wrap(err)
}
if err := garbageCollectBlocklistsAndBlocksLocked(ctx, fdb); err != nil {
return wrap(err)
}
tidy(ctx, fdb.sql)
return nil
}))
}
func tidy(ctx context.Context, db *sqlx.DB) error {
conn, err := db.Conn(ctx)
if err != nil {
return wrap(err)
}
defer conn.Close()
_, _ = conn.ExecContext(ctx, `ANALYZE`)
_, _ = conn.ExecContext(ctx, `PRAGMA optimize`)
_, _ = conn.ExecContext(ctx, `PRAGMA incremental_vacuum`)
_, _ = conn.ExecContext(ctx, `PRAGMA journal_size_limit = 8388608`)
_, _ = conn.ExecContext(ctx, `PRAGMA wal_checkpoint(TRUNCATE)`)
return nil
}
func garbageCollectOldDeletedLocked(fdb *folderDB) error {
if fdb.deleteRetention <= 0 {
l.Debugln(fdb.baseName, "delete retention is infinite, skipping cleanup")
return nil
}
// Remove deleted files that are marked as not needed (we have processed
// them) and were deleted more than the configured delete retention ago.
l.Debugln(fdb.baseName, "forgetting deleted files older than", fdb.deleteRetention)
res, err := fdb.stmt(`
DELETE FROM files
WHERE deleted AND modified < ? AND local_flags & {{.FlagLocalNeeded}} == 0
`).Exec(time.Now().Add(-fdb.deleteRetention).UnixNano())
if err != nil {
return wrap(err)
}
if aff, err := res.RowsAffected(); err == nil {
l.Debugln(fdb.baseName, "removed old deleted file records:", aff)
}
return nil
}
func garbageCollectBlocklistsAndBlocksLocked(ctx context.Context, fdb *folderDB) error {
// Remove all blocklists not referred to by any files and, by extension,
// any blocks not referred to by a blocklist. This is an expensive
// operation when run normally, especially if there are a lot of blocks
// to collect.
//
// We make this orders of magnitude faster by disabling foreign keys for
// the transaction and doing the cleanup manually. This requires using
// an explicit connection and disabling foreign keys before starting the
// transaction. We make sure to clean up on the way out.
conn, err := fdb.sql.Connx(ctx)
if err != nil {
return wrap(err)
}
defer conn.Close()
if _, err := conn.ExecContext(ctx, `PRAGMA foreign_keys = 0`); err != nil {
return wrap(err)
}
defer func() { //nolint:contextcheck
_, _ = conn.ExecContext(context.Background(), `PRAGMA foreign_keys = 1`)
}()
tx, err := conn.BeginTxx(ctx, nil)
if err != nil {
return wrap(err)
}
defer tx.Rollback() //nolint:errcheck
if res, err := tx.ExecContext(ctx, `
DELETE FROM blocklists
WHERE NOT EXISTS (
SELECT 1 FROM files WHERE files.blocklist_hash = blocklists.blocklist_hash
)`); err != nil {
return wrap(err, "delete blocklists")
} else if shouldDebug() {
rows, err := res.RowsAffected()
l.Debugln(fdb.baseName, "blocklist GC:", rows, err)
}
if res, err := tx.ExecContext(ctx, `
DELETE FROM blocks
WHERE NOT EXISTS (
SELECT 1 FROM blocklists WHERE blocklists.blocklist_hash = blocks.blocklist_hash
)`); err != nil {
return wrap(err, "delete blocks")
} else if shouldDebug() {
rows, err := res.RowsAffected()
l.Debugln(fdb.baseName, "blocks GC:", rows, err)
}
return wrap(tx.Commit())
}


File diff suppressed because it is too large.


@@ -1,70 +0,0 @@
// Copyright (C) 2025 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package sqlite
import (
"fmt"
"os"
"runtime"
"strings"
)
func (s *DB) DropFolder(folder string) error {
s.folderDBsMut.Lock()
defer s.folderDBsMut.Unlock()
s.updateLock.Lock()
defer s.updateLock.Unlock()
_, err := s.stmt(`
DELETE FROM folders
WHERE folder_id = ?
`).Exec(folder)
if fdb, ok := s.folderDBs[folder]; ok {
fdb.Close()
_ = os.Remove(fdb.path)
_ = os.Remove(fdb.path + "-wal")
_ = os.Remove(fdb.path + "-shm")
delete(s.folderDBs, folder)
}
return wrap(err)
}
func (s *DB) ListFolders() ([]string, error) {
var res []string
err := s.stmt(`
SELECT folder_id FROM folders
ORDER BY folder_id
`).Select(&res)
return res, wrap(err)
}
// wrap returns the error wrapped with the calling function name and
// optional extra context strings as prefix. A nil error wraps to nil.
func wrap(err error, context ...string) error {
if err == nil {
return nil
}
prefix := "error"
pc, _, _, ok := runtime.Caller(1)
details := runtime.FuncForPC(pc)
if ok && details != nil {
prefix = strings.ToLower(details.Name())
if dotIdx := strings.LastIndex(prefix, "."); dotIdx > 0 {
prefix = prefix[dotIdx+1:]
}
}
if len(context) > 0 {
for i := range context {
context[i] = strings.TrimSpace(context[i])
}
extra := strings.Join(context, ", ")
return fmt.Errorf("%s (%s): %w", prefix, extra, err)
}
return fmt.Errorf("%s: %w", prefix, err)
}
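// Illustration: wrap called from ListFolders above with no extra context
// yields errors like "listfolders: <cause>"; with a context string, as in
// wrap(err, "select"), it yields "listfolders (select): <cause>". A nil
// error passes through as nil.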


@@ -1,131 +0,0 @@
// Copyright (C) 2025 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package sqlite
import (
"github.com/syncthing/syncthing/internal/db"
"github.com/syncthing/syncthing/lib/protocol"
)
type countsRow struct {
Type protocol.FileInfoType
Count int
Size int64
Deleted bool
LocalFlags int64 `db:"local_flags"`
}
func (s *folderDB) CountLocal(device protocol.DeviceID) (db.Counts, error) {
var res []countsRow
if err := s.stmt(`
SELECT s.type, s.count, s.size, s.local_flags, s.deleted FROM counts s
INNER JOIN devices d ON d.idx = s.device_idx
WHERE d.device_id = ? AND s.local_flags & {{.FlagLocalIgnored}} = 0
`).Select(&res, device.String()); err != nil {
return db.Counts{}, wrap(err)
}
return summarizeCounts(res), nil
}
func (s *folderDB) CountNeed(device protocol.DeviceID) (db.Counts, error) {
if device == protocol.LocalDeviceID {
return s.needSizeLocal()
}
return s.needSizeRemote(device)
}
func (s *folderDB) CountGlobal() (db.Counts, error) {
// Exclude ignored and receive-only changed files from the global count
// (legacy expectation? it's a bit weird since those files can in fact
// be global and you can get them with GetGlobal etc.)
var res []countsRow
err := s.stmt(`
SELECT s.type, s.count, s.size, s.local_flags, s.deleted FROM counts s
WHERE s.local_flags & {{.FlagLocalGlobal}} != 0 AND s.local_flags & {{or .FlagLocalReceiveOnly .FlagLocalIgnored}} = 0
`).Select(&res)
if err != nil {
return db.Counts{}, wrap(err)
}
return summarizeCounts(res), nil
}
func (s *folderDB) CountReceiveOnlyChanged() (db.Counts, error) {
var res []countsRow
err := s.stmt(`
SELECT s.type, s.count, s.size, s.local_flags, s.deleted FROM counts s
WHERE local_flags & {{.FlagLocalReceiveOnly}} != 0
`).Select(&res)
if err != nil {
return db.Counts{}, wrap(err)
}
return summarizeCounts(res), nil
}
func (s *folderDB) needSizeLocal() (db.Counts, error) {
// The need size for the local device is the sum of entries with the
// need bit set.
var res []countsRow
err := s.stmt(`
SELECT s.type, s.count, s.size, s.local_flags, s.deleted FROM counts s
WHERE s.local_flags & {{.FlagLocalNeeded}} != 0
`).Select(&res)
if err != nil {
return db.Counts{}, wrap(err)
}
return summarizeCounts(res), nil
}
func (s *folderDB) needSizeRemote(device protocol.DeviceID) (db.Counts, error) {
var res []countsRow
// See neededGlobalFilesRemote for commentary as that is the same query without summing
if err := s.stmt(`
SELECT g.type, count(*) as count, sum(g.size) as size, g.local_flags, g.deleted FROM files g
WHERE g.local_flags & {{.FlagLocalGlobal}} != 0 AND NOT g.deleted AND NOT g.invalid AND NOT EXISTS (
SELECT 1 FROM files f
INNER JOIN devices d ON d.idx = f.device_idx
WHERE f.name = g.name AND f.version = g.version AND d.device_id = ?
)
GROUP BY g.type, g.local_flags, g.deleted
UNION ALL
SELECT g.type, count(*) as count, sum(g.size) as size, g.local_flags, g.deleted FROM files g
WHERE g.local_flags & {{.FlagLocalGlobal}} != 0 AND g.deleted AND NOT g.invalid AND EXISTS (
SELECT 1 FROM files f
INNER JOIN devices d ON d.idx = f.device_idx
WHERE f.name = g.name AND d.device_id = ? AND NOT f.deleted AND NOT f.invalid
)
GROUP BY g.type, g.local_flags, g.deleted
`).Select(&res, device.String(),
device.String()); err != nil {
return db.Counts{}, wrap(err)
}
return summarizeCounts(res), nil
}
func summarizeCounts(res []countsRow) db.Counts {
c := db.Counts{
DeviceID: protocol.LocalDeviceID,
}
for _, r := range res {
switch {
case r.Deleted:
c.Deleted += r.Count
case r.Type == protocol.FileInfoTypeFile:
c.Files += r.Count
c.Bytes += r.Size
case r.Type == protocol.FileInfoTypeDirectory:
c.Directories += r.Count
c.Bytes += r.Size
case r.Type == protocol.FileInfoTypeSymlink:
c.Symlinks += r.Count
c.Bytes += r.Size
}
}
return c
}


@@ -1,182 +0,0 @@
// Copyright (C) 2025 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package sqlite
import (
"database/sql"
"errors"
"fmt"
"iter"
"github.com/syncthing/syncthing/internal/db"
"github.com/syncthing/syncthing/internal/itererr"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/protocol"
)
func (s *folderDB) GetGlobalFile(file string) (protocol.FileInfo, bool, error) {
file = osutil.NormalizedFilename(file)
var ind indirectFI
err := s.stmt(`
SELECT fi.fiprotobuf, bl.blprotobuf FROM fileinfos fi
INNER JOIN files f on fi.sequence = f.sequence
LEFT JOIN blocklists bl ON bl.blocklist_hash = f.blocklist_hash
WHERE f.name = ? AND f.local_flags & {{.FlagLocalGlobal}} != 0
`).Get(&ind, file)
if errors.Is(err, sql.ErrNoRows) {
return protocol.FileInfo{}, false, nil
}
if err != nil {
return protocol.FileInfo{}, false, wrap(err)
}
fi, err := ind.FileInfo()
if err != nil {
return protocol.FileInfo{}, false, wrap(err)
}
return fi, true, nil
}
func (s *folderDB) GetGlobalAvailability(file string) ([]protocol.DeviceID, error) {
file = osutil.NormalizedFilename(file)
var devStrs []string
err := s.stmt(`
SELECT d.device_id FROM files f
INNER JOIN devices d ON d.idx = f.device_idx
INNER JOIN files g ON g.version = f.version AND g.name = f.name
WHERE g.name = ? AND g.local_flags & {{.FlagLocalGlobal}} != 0 AND f.device_idx != {{.LocalDeviceIdx}}
ORDER BY d.device_id
`).Select(&devStrs, file)
if errors.Is(err, sql.ErrNoRows) {
return nil, nil
}
if err != nil {
return nil, wrap(err)
}
devs := make([]protocol.DeviceID, 0, len(devStrs))
for _, s := range devStrs {
d, err := protocol.DeviceIDFromString(s)
if err != nil {
return nil, wrap(err)
}
devs = append(devs, d)
}
return devs, nil
}
func (s *folderDB) AllGlobalFiles() (iter.Seq[db.FileMetadata], func() error) {
it, errFn := iterStructs[db.FileMetadata](s.stmt(`
SELECT f.sequence, f.name, f.type, f.modified as modnanos, f.size, f.deleted, f.invalid, f.local_flags as localflags FROM files f
WHERE f.local_flags & {{.FlagLocalGlobal}} != 0
ORDER BY f.name
`).Queryx())
return itererr.Map(it, errFn, func(m db.FileMetadata) (db.FileMetadata, error) {
m.Name = osutil.NativeFilename(m.Name)
return m, nil
})
}
func (s *folderDB) AllGlobalFilesPrefix(prefix string) (iter.Seq[db.FileMetadata], func() error) {
if prefix == "" {
return s.AllGlobalFiles()
}
prefix = osutil.NormalizedFilename(prefix)
end := prefixEnd(prefix)
it, errFn := iterStructs[db.FileMetadata](s.stmt(`
SELECT f.sequence, f.name, f.type, f.modified as modnanos, f.size, f.deleted, f.invalid, f.local_flags as localflags FROM files f
WHERE f.name >= ? AND f.name < ? AND f.local_flags & {{.FlagLocalGlobal}} != 0
ORDER BY f.name
`).Queryx(prefix, end))
return itererr.Map(it, errFn, func(m db.FileMetadata) (db.FileMetadata, error) {
m.Name = osutil.NativeFilename(m.Name)
return m, nil
})
}
func (s *folderDB) AllNeededGlobalFiles(device protocol.DeviceID, order config.PullOrder, limit, offset int) (iter.Seq[protocol.FileInfo], func() error) {
var selectOpts string
switch order {
case config.PullOrderRandom:
selectOpts = "ORDER BY RANDOM()"
case config.PullOrderAlphabetic:
selectOpts = "ORDER BY g.name ASC"
case config.PullOrderSmallestFirst:
selectOpts = "ORDER BY g.size ASC"
case config.PullOrderLargestFirst:
selectOpts = "ORDER BY g.size DESC"
case config.PullOrderOldestFirst:
selectOpts = "ORDER BY g.modified ASC"
case config.PullOrderNewestFirst:
selectOpts = "ORDER BY g.modified DESC"
}
if limit > 0 {
selectOpts += fmt.Sprintf(" LIMIT %d", limit)
}
if offset > 0 {
selectOpts += fmt.Sprintf(" OFFSET %d", offset)
}
if device == protocol.LocalDeviceID {
return s.neededGlobalFilesLocal(selectOpts)
}
return s.neededGlobalFilesRemote(device, selectOpts)
}
func (s *folderDB) neededGlobalFilesLocal(selectOpts string) (iter.Seq[protocol.FileInfo], func() error) {
// Select all the non-ignored files with the need bit set.
it, errFn := iterStructs[indirectFI](s.stmt(`
SELECT fi.fiprotobuf, bl.blprotobuf, g.name, g.size, g.modified FROM fileinfos fi
INNER JOIN files g on fi.sequence = g.sequence
LEFT JOIN blocklists bl ON bl.blocklist_hash = g.blocklist_hash
WHERE g.local_flags & {{.FlagLocalIgnored}} = 0 AND g.local_flags & {{.FlagLocalNeeded}} != 0
` + selectOpts).Queryx())
return itererr.Map(it, errFn, indirectFI.FileInfo)
}
func (s *folderDB) neededGlobalFilesRemote(device protocol.DeviceID, selectOpts string) (iter.Seq[protocol.FileInfo], func() error) {
// Select:
//
// - all the valid, non-deleted global files that don't have a
// corresponding remote file with the same version.
//
// - all the valid, deleted global files that have a corresponding
// non-deleted and valid remote file (of any version)
it, errFn := iterStructs[indirectFI](s.stmt(`
SELECT fi.fiprotobuf, bl.blprotobuf, g.name, g.size, g.modified FROM fileinfos fi
INNER JOIN files g on fi.sequence = g.sequence
LEFT JOIN blocklists bl ON bl.blocklist_hash = g.blocklist_hash
WHERE g.local_flags & {{.FlagLocalGlobal}} != 0 AND NOT g.deleted AND NOT g.invalid AND NOT EXISTS (
SELECT 1 FROM files f
INNER JOIN devices d ON d.idx = f.device_idx
WHERE f.name = g.name AND f.version = g.version AND d.device_id = ?
)
UNION ALL
SELECT fi.fiprotobuf, bl.blprotobuf, g.name, g.size, g.modified FROM fileinfos fi
INNER JOIN files g on fi.sequence = g.sequence
LEFT JOIN blocklists bl ON bl.blocklist_hash = g.blocklist_hash
WHERE g.local_flags & {{.FlagLocalGlobal}} != 0 AND g.deleted AND NOT g.invalid AND EXISTS (
SELECT 1 FROM files f
INNER JOIN devices d ON d.idx = f.device_idx
WHERE f.name = g.name AND d.device_id = ? AND NOT f.deleted AND NOT f.invalid
)
`+selectOpts).Queryx(
device.String(),
device.String(),
))
return itererr.Map(it, errFn, indirectFI.FileInfo)
}


@@ -1,151 +0,0 @@
// Copyright (C) 2025 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package sqlite
import (
"database/sql"
"encoding/hex"
"errors"
"fmt"
"github.com/syncthing/syncthing/internal/itererr"
"github.com/syncthing/syncthing/lib/protocol"
)
func (s *folderDB) GetIndexID(device protocol.DeviceID) (protocol.IndexID, error) {
// Try a fast read-only query to begin with. If it does not find the ID
// we'll do the full thing under a lock.
var indexID string
if err := s.stmt(`
SELECT i.index_id FROM indexids i
INNER JOIN devices d ON d.idx = i.device_idx
WHERE d.device_id = ?
`).Get(&indexID, device.String()); err == nil && indexID != "" {
idx, err := indexIDFromHex(indexID)
return idx, wrap(err, "select")
}
if device != protocol.LocalDeviceID {
// For non-local devices we do not create the index ID, so return
// zero anyway if we don't have one.
return 0, nil
}
s.updateLock.Lock()
defer s.updateLock.Unlock()
// We are now operating only for the local device ID
if err := s.stmt(`
SELECT index_id FROM indexids WHERE device_idx = {{.LocalDeviceIdx}}
`).Get(&indexID); err != nil && !errors.Is(err, sql.ErrNoRows) {
return 0, wrap(err, "select local")
}
if indexID == "" {
// Generate a new index ID. Some trickiness in the query as we need
// to find the max sequence of local files, if any already exist.
id := protocol.NewIndexID()
if _, err := s.stmt(`
INSERT INTO indexids (device_idx, index_id, sequence)
SELECT {{.LocalDeviceIdx}}, ?, COALESCE(MAX(sequence), 0) FROM files
WHERE device_idx = {{.LocalDeviceIdx}}
ON CONFLICT DO UPDATE SET index_id = ?
`).Exec(indexIDToHex(id), indexIDToHex(id)); err != nil {
return 0, wrap(err, "insert")
}
return id, nil
}
return indexIDFromHex(indexID)
}
func (s *folderDB) SetIndexID(device protocol.DeviceID, id protocol.IndexID) error {
s.updateLock.Lock()
defer s.updateLock.Unlock()
deviceIdx, err := s.deviceIdxLocked(device)
if err != nil {
return wrap(err, "device idx")
}
if _, err := s.stmt(`
INSERT OR REPLACE INTO indexids (device_idx, index_id, sequence) values (?, ?, 0)
`).Exec(deviceIdx, indexIDToHex(id)); err != nil {
return wrap(err, "insert")
}
return nil
}
func (s *folderDB) DropAllIndexIDs() error {
s.updateLock.Lock()
defer s.updateLock.Unlock()
_, err := s.stmt(`DELETE FROM indexids`).Exec()
return wrap(err)
}
func (s *folderDB) GetDeviceSequence(device protocol.DeviceID) (int64, error) {
var res sql.NullInt64
err := s.stmt(`
SELECT sequence FROM indexids i
INNER JOIN devices d ON d.idx = i.device_idx
WHERE d.device_id = ?
`).Get(&res, device.String())
if errors.Is(err, sql.ErrNoRows) {
return 0, nil
}
if err != nil {
return 0, wrap(err)
}
if !res.Valid {
return 0, nil
}
return res.Int64, nil
}
func (s *folderDB) RemoteSequences() (map[protocol.DeviceID]int64, error) {
type row struct {
Device string
Seq int64
}
it, errFn := iterStructs[row](s.stmt(`
SELECT d.device_id AS device, i.sequence AS seq FROM indexids i
INNER JOIN devices d ON d.idx = i.device_idx
WHERE i.device_idx != {{.LocalDeviceIdx}}
`).Queryx())
res := make(map[protocol.DeviceID]int64)
for row, err := range itererr.Zip(it, errFn) {
if err != nil {
return nil, wrap(err)
}
dev, err := protocol.DeviceIDFromString(row.Device)
if err != nil {
return nil, wrap(err, "device ID")
}
res[dev] = row.Seq
}
return res, nil
}
func indexIDFromHex(s string) (protocol.IndexID, error) {
bs, err := hex.DecodeString(s)
if err != nil {
return 0, fmt.Errorf("indexIDFromHex: %q: %w", s, err)
}
var id protocol.IndexID
if err := id.Unmarshal(bs); err != nil {
return 0, fmt.Errorf("indexIDFromHex: %q: %w", s, err)
}
return id, nil
}
func indexIDToHex(i protocol.IndexID) string {
bs, _ := i.Marshal()
return hex.EncodeToString(bs)
}


@@ -1,128 +0,0 @@
// Copyright (C) 2025 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package sqlite
import (
"database/sql"
"errors"
"fmt"
"iter"
"github.com/syncthing/syncthing/internal/db"
"github.com/syncthing/syncthing/internal/itererr"
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/protocol"
)
func (s *folderDB) GetDeviceFile(device protocol.DeviceID, file string) (protocol.FileInfo, bool, error) {
file = osutil.NormalizedFilename(file)
var ind indirectFI
err := s.stmt(`
SELECT fi.fiprotobuf, bl.blprotobuf FROM fileinfos fi
INNER JOIN files f on fi.sequence = f.sequence
LEFT JOIN blocklists bl ON bl.blocklist_hash = f.blocklist_hash
INNER JOIN devices d ON f.device_idx = d.idx
WHERE d.device_id = ? AND f.name = ?
`).Get(&ind, device.String(), file)
if errors.Is(err, sql.ErrNoRows) {
return protocol.FileInfo{}, false, nil
}
if err != nil {
return protocol.FileInfo{}, false, wrap(err)
}
fi, err := ind.FileInfo()
if err != nil {
return protocol.FileInfo{}, false, wrap(err, "indirect")
}
return fi, true, nil
}
func (s *folderDB) AllLocalFiles(device protocol.DeviceID) (iter.Seq[protocol.FileInfo], func() error) {
it, errFn := iterStructs[indirectFI](s.stmt(`
SELECT fi.fiprotobuf, bl.blprotobuf FROM fileinfos fi
INNER JOIN files f on fi.sequence = f.sequence
LEFT JOIN blocklists bl ON bl.blocklist_hash = f.blocklist_hash
INNER JOIN devices d ON d.idx = f.device_idx
WHERE d.device_id = ?
`).Queryx(device.String()))
return itererr.Map(it, errFn, indirectFI.FileInfo)
}
func (s *folderDB) AllLocalFilesBySequence(device protocol.DeviceID, startSeq int64, limit int) (iter.Seq[protocol.FileInfo], func() error) {
var limitStr string
if limit > 0 {
limitStr = fmt.Sprintf(" LIMIT %d", limit)
}
it, errFn := iterStructs[indirectFI](s.stmt(`
SELECT fi.fiprotobuf, bl.blprotobuf FROM fileinfos fi
INNER JOIN files f on fi.sequence = f.sequence
LEFT JOIN blocklists bl ON bl.blocklist_hash = f.blocklist_hash
INNER JOIN devices d ON d.idx = f.device_idx
WHERE d.device_id = ? AND f.sequence >= ?
ORDER BY f.sequence`+limitStr).Queryx(
device.String(), startSeq))
return itererr.Map(it, errFn, indirectFI.FileInfo)
}
func (s *folderDB) AllLocalFilesWithPrefix(device protocol.DeviceID, prefix string) (iter.Seq[protocol.FileInfo], func() error) {
if prefix == "" {
return s.AllLocalFiles(device)
}
prefix = osutil.NormalizedFilename(prefix)
end := prefixEnd(prefix)
it, errFn := iterStructs[indirectFI](s.sql.Queryx(`
SELECT fi.fiprotobuf, bl.blprotobuf FROM fileinfos fi
INNER JOIN files f on fi.sequence = f.sequence
LEFT JOIN blocklists bl ON bl.blocklist_hash = f.blocklist_hash
INNER JOIN devices d ON d.idx = f.device_idx
WHERE d.device_id = ? AND f.name >= ? AND f.name < ?
`, device.String(), prefix, end))
return itererr.Map(it, errFn, indirectFI.FileInfo)
}
func (s *folderDB) AllLocalFilesWithBlocksHash(h []byte) (iter.Seq[db.FileMetadata], func() error) {
return iterStructs[db.FileMetadata](s.stmt(`
SELECT f.sequence, f.name, f.type, f.modified as modnanos, f.size, f.deleted, f.invalid, f.local_flags as localflags FROM files f
WHERE f.device_idx = {{.LocalDeviceIdx}} AND f.blocklist_hash = ?
`).Queryx(h))
}
func (s *folderDB) AllLocalBlocksWithHash(hash []byte) (iter.Seq[db.BlockMapEntry], func() error) {
// We involve the files table in this select because deletion of blocks
// & blocklists is deferred (garbage collected) while the files list is
// not. This filters out blocks that are in fact deleted.
return iterStructs[db.BlockMapEntry](s.stmt(`
SELECT f.blocklist_hash as blocklisthash, b.idx as blockindex, b.offset, b.size FROM files f
LEFT JOIN blocks b ON f.blocklist_hash = b.blocklist_hash
WHERE f.device_idx = {{.LocalDeviceIdx}} AND b.hash = ?
`).Queryx(hash))
}
func (s *folderDB) ListDevicesForFolder() ([]protocol.DeviceID, error) {
var res []string
err := s.stmt(`
SELECT DISTINCT d.device_id FROM counts s
INNER JOIN devices d ON d.idx = s.device_idx
WHERE s.count > 0 AND s.device_idx != {{.LocalDeviceIdx}}
ORDER BY d.device_id
`).Select(&res)
if err != nil {
return nil, wrap(err)
}
devs := make([]protocol.DeviceID, len(res))
for i, s := range res {
devs[i], err = protocol.DeviceIDFromString(s)
if err != nil {
return nil, wrap(err)
}
}
return devs, nil
}


@@ -1,45 +0,0 @@
// Copyright (C) 2025 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package sqlite
import (
"time"
)
func (s *folderDB) GetMtime(name string) (ondisk, virtual time.Time) {
var res struct {
Ondisk int64
Virtual int64
}
if err := s.stmt(`
SELECT m.ondisk, m.virtual FROM mtimes m
WHERE m.name = ?
`).Get(&res, name); err != nil {
return time.Time{}, time.Time{}
}
return time.Unix(0, res.Ondisk), time.Unix(0, res.Virtual)
}
func (s *folderDB) PutMtime(name string, ondisk, virtual time.Time) error {
s.updateLock.Lock()
defer s.updateLock.Unlock()
_, err := s.stmt(`
INSERT OR REPLACE INTO mtimes (name, ondisk, virtual)
VALUES (?, ?, ?)
`).Exec(name, ondisk.UnixNano(), virtual.UnixNano())
return wrap(err)
}
func (s *folderDB) DeleteMtime(name string) error {
s.updateLock.Lock()
defer s.updateLock.Unlock()
_, err := s.stmt(`
DELETE FROM mtimes
WHERE name = ?
`).Exec(name)
return wrap(err)
}
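// Illustrative usage sketch (hypothetical function and file name): store
// the on-disk/virtual mtime pair for a file, read it back, then delete it.
func exampleMtimeUsage(fdb *folderDB) error {
ondisk := time.Now()
virtual := ondisk.Add(-2 * time.Second)
if err := fdb.PutMtime("some/file", ondisk, virtual); err != nil {
return err
}
// GetMtime returns zero times when no pair is stored.
gotOndisk, gotVirtual := fdb.GetMtime("some/file")
_, _ = gotOndisk, gotVirtual
return fdb.DeleteMtime("some/file")
}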


@@ -1,110 +0,0 @@
// Copyright (C) 2025 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package sqlite
import (
"time"
"github.com/syncthing/syncthing/lib/protocol"
)
type folderDB struct {
folderID string
*baseDB
localDeviceIdx int64
deleteRetention time.Duration
}
func openFolderDB(folder, path string, deleteRetention time.Duration) (*folderDB, error) {
pragmas := []string{
"journal_mode = WAL",
"optimize = 0x10002",
"auto_vacuum = INCREMENTAL",
"default_temp_store = MEMORY",
"temp_store = MEMORY",
}
schemas := []string{
"sql/schema/common/*",
"sql/schema/folder/*",
}
base, err := openBase(path, maxDBConns, pragmas, schemas, nil)
if err != nil {
return nil, err
}
fdb := &folderDB{
folderID: folder,
baseDB: base,
deleteRetention: deleteRetention,
}
_ = fdb.PutKV("folderID", []byte(folder))
// Touch device IDs that should always exist, have low index
// numbers, and will never change
fdb.localDeviceIdx, _ = fdb.deviceIdxLocked(protocol.LocalDeviceID)
fdb.tplInput["LocalDeviceIdx"] = fdb.localDeviceIdx
return fdb, nil
}
// Open the database with options suitable for the migration inserts. This
// is not a safe mode of operation for normal processing; use it only for
// bulk inserts followed by a close.
func openFolderDBForMigration(folder, path string, deleteRetention time.Duration) (*folderDB, error) {
pragmas := []string{
"journal_mode = OFF",
"default_temp_store = MEMORY",
"temp_store = MEMORY",
"foreign_keys = 0",
"synchronous = 0",
"locking_mode = EXCLUSIVE",
}
schemas := []string{
"sql/schema/common/*",
"sql/schema/folder/*",
}
base, err := openBase(path, 1, pragmas, schemas, nil)
if err != nil {
return nil, err
}
fdb := &folderDB{
folderID: folder,
baseDB: base,
deleteRetention: deleteRetention,
}
// Touch device IDs that should always exist, have low index
// numbers, and will never change
fdb.localDeviceIdx, _ = fdb.deviceIdxLocked(protocol.LocalDeviceID)
fdb.tplInput["LocalDeviceIdx"] = fdb.localDeviceIdx
return fdb, nil
}
func (s *folderDB) deviceIdxLocked(deviceID protocol.DeviceID) (int64, error) {
devStr := deviceID.String()
if _, err := s.stmt(`
INSERT OR IGNORE INTO devices(device_id)
VALUES (?)
`).Exec(devStr); err != nil {
return 0, wrap(err)
}
var idx int64
if err := s.stmt(`
SELECT idx FROM devices
WHERE device_id = ?
`).Get(&idx, devStr); err != nil {
return 0, wrap(err)
}
return idx, nil
}


@@ -1,531 +0,0 @@
// Copyright (C) 2025 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package sqlite
import (
"cmp"
"context"
"fmt"
"slices"
"github.com/jmoiron/sqlx"
"github.com/syncthing/syncthing/internal/gen/dbproto"
"github.com/syncthing/syncthing/internal/itererr"
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/sliceutil"
"google.golang.org/protobuf/proto"
)
const (
// Arbitrarily chosen values for checkpoint frequency...
updatePointsPerFile = 100
updatePointsPerBlock = 1
updatePointsThreshold = 250_000
)
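// Illustrative arithmetic: with the values above, a batch of roughly
// 2,500 block-less files (2,500 * 100 points), or a single update
// carrying 250,000 blocks, exceeds updatePointsThreshold and triggers a
// WAL checkpoint in periodicCheckpointLocked below.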
func (s *folderDB) Update(device protocol.DeviceID, fs []protocol.FileInfo) error {
s.updateLock.Lock()
defer s.updateLock.Unlock()
deviceIdx, err := s.deviceIdxLocked(device)
if err != nil {
return wrap(err)
}
tx, err := s.sql.BeginTxx(context.Background(), nil)
if err != nil {
return wrap(err)
}
defer tx.Rollback() //nolint:errcheck
txp := &txPreparedStmts{Tx: tx}
//nolint:sqlclosecheck
insertFileStmt, err := txp.Preparex(`
INSERT OR REPLACE INTO files (device_idx, remote_sequence, name, type, modified, size, version, deleted, invalid, local_flags, blocklist_hash)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
RETURNING sequence
`)
if err != nil {
return wrap(err, "prepare insert file")
}
//nolint:sqlclosecheck
insertFileInfoStmt, err := txp.Preparex(`
INSERT INTO fileinfos (sequence, fiprotobuf)
VALUES (?, ?)
`)
if err != nil {
return wrap(err, "prepare insert fileinfo")
}
//nolint:sqlclosecheck
insertBlockListStmt, err := txp.Preparex(`
INSERT OR IGNORE INTO blocklists (blocklist_hash, blprotobuf)
VALUES (?, ?)
`)
if err != nil {
return wrap(err, "prepare insert blocklist")
}
var prevRemoteSeq int64
for i, f := range fs {
f.Name = osutil.NormalizedFilename(f.Name)
var blockshash *[]byte
if len(f.Blocks) > 0 {
f.BlocksHash = protocol.BlocksHash(f.Blocks)
blockshash = &f.BlocksHash
} else {
f.BlocksHash = nil
}
if f.Type == protocol.FileInfoTypeDirectory {
f.Size = 128 // synthetic directory size
}
// Insert the file.
//
// If it is a remote file, set remote_sequence otherwise leave it at
// null. Returns the new local sequence.
var remoteSeq *int64
if device != protocol.LocalDeviceID {
if i > 0 && f.Sequence == prevRemoteSeq {
return fmt.Errorf("duplicate remote sequence number %d", prevRemoteSeq)
}
prevRemoteSeq = f.Sequence
remoteSeq = &f.Sequence
}
var localSeq int64
if err := insertFileStmt.Get(&localSeq, deviceIdx, remoteSeq, f.Name, f.Type, f.ModTime().UnixNano(), f.Size, f.Version.String(), f.IsDeleted(), f.IsInvalid(), f.LocalFlags, blockshash); err != nil {
return wrap(err, "insert file")
}
if len(f.Blocks) > 0 {
// Indirect the block list
blocks := sliceutil.Map(f.Blocks, protocol.BlockInfo.ToWire)
bs, err := proto.Marshal(&dbproto.BlockList{Blocks: blocks})
if err != nil {
return wrap(err, "marshal blocklist")
}
if _, err := insertBlockListStmt.Exec(f.BlocksHash, bs); err != nil {
return wrap(err, "insert blocklist")
}
if device == protocol.LocalDeviceID {
// Insert all blocks
if err := s.insertBlocksLocked(txp, f.BlocksHash, f.Blocks); err != nil {
return wrap(err, "insert blocks")
}
}
f.Blocks = nil
}
// Insert the fileinfo
if device == protocol.LocalDeviceID {
f.Sequence = localSeq
}
bs, err := proto.Marshal(f.ToWire(true))
if err != nil {
return wrap(err, "marshal fileinfo")
}
if _, err := insertFileInfoStmt.Exec(localSeq, bs); err != nil {
return wrap(err, "insert fileinfo")
}
// Update global and need
if err := s.recalcGlobalForFile(txp, f.Name); err != nil {
return wrap(err)
}
}
if err := tx.Commit(); err != nil {
return wrap(err)
}
s.periodicCheckpointLocked(fs)
return nil
}
func (s *folderDB) DropDevice(device protocol.DeviceID) error {
if device == protocol.LocalDeviceID {
panic("bug: cannot drop local device")
}
s.updateLock.Lock()
defer s.updateLock.Unlock()
tx, err := s.sql.BeginTxx(context.Background(), nil)
if err != nil {
return wrap(err)
}
defer tx.Rollback() //nolint:errcheck
txp := &txPreparedStmts{Tx: tx}
// Drop the device, which cascades to delete all files etc for it
if _, err := tx.Exec(`DELETE FROM devices WHERE device_id = ?`, device.String()); err != nil {
return wrap(err)
}
// Recalc the globals for all affected folders
if err := s.recalcGlobalForFolder(txp); err != nil {
return wrap(err)
}
return wrap(tx.Commit())
}
func (s *folderDB) DropAllFiles(device protocol.DeviceID) error {
s.updateLock.Lock()
defer s.updateLock.Unlock()
// This is a two part operation, first dropping all the files and then
// recalculating the global state for the entire folder.
deviceIdx, err := s.deviceIdxLocked(device)
if err != nil {
return wrap(err)
}
tx, err := s.sql.BeginTxx(context.Background(), nil)
if err != nil {
return wrap(err)
}
defer tx.Rollback() //nolint:errcheck
txp := &txPreparedStmts{Tx: tx}
// Drop all the file entries
result, err := tx.Exec(`
DELETE FROM files
WHERE device_idx = ?
`, deviceIdx)
if err != nil {
return wrap(err)
}
if n, err := result.RowsAffected(); err == nil && n == 0 {
// The delete affected no rows, so we don't need to redo the entire
// global/need calculation.
return wrap(tx.Commit())
}
// Recalc global for the entire folder
if err := s.recalcGlobalForFolder(txp); err != nil {
return wrap(err)
}
return wrap(tx.Commit())
}
func (s *folderDB) DropFilesNamed(device protocol.DeviceID, names []string) error {
for i := range names {
names[i] = osutil.NormalizedFilename(names[i])
}
s.updateLock.Lock()
defer s.updateLock.Unlock()
deviceIdx, err := s.deviceIdxLocked(device)
if err != nil {
return wrap(err)
}
tx, err := s.sql.BeginTxx(context.Background(), nil)
if err != nil {
return wrap(err)
}
defer tx.Rollback() //nolint:errcheck
txp := &txPreparedStmts{Tx: tx}
// Drop the named files
query, args, err := sqlx.In(`
DELETE FROM files
WHERE device_idx = ? AND name IN (?)
`, deviceIdx, names)
if err != nil {
return wrap(err)
}
if _, err := tx.Exec(query, args...); err != nil {
return wrap(err)
}
// Recalc globals for the named files
for _, name := range names {
if err := s.recalcGlobalForFile(txp, name); err != nil {
return wrap(err)
}
}
return wrap(tx.Commit())
}
func (*folderDB) insertBlocksLocked(tx *txPreparedStmts, blocklistHash []byte, blocks []protocol.BlockInfo) error {
if len(blocks) == 0 {
return nil
}
bs := make([]map[string]any, len(blocks))
for i, b := range blocks {
bs[i] = map[string]any{
"hash": b.Hash,
"blocklist_hash": blocklistHash,
"idx": i,
"offset": b.Offset,
"size": b.Size,
}
}
// Very large block lists (>8000 blocks) result in a "too many variables"
// error. Chunk it to a reasonable size.
for chunk := range slices.Chunk(bs, 1000) {
if _, err := tx.NamedExec(`
INSERT OR IGNORE INTO blocks (hash, blocklist_hash, idx, offset, size)
VALUES (:hash, :blocklist_hash, :idx, :offset, :size)
`, chunk); err != nil {
return wrap(err)
}
}
return nil
}
func (s *folderDB) recalcGlobalForFolder(txp *txPreparedStmts) error {
// Select files where there is no global, those are the ones we need to
// recalculate.
//nolint:sqlclosecheck
namesStmt, err := txp.Preparex(`
SELECT f.name FROM files f
WHERE NOT EXISTS (
SELECT 1 FROM files g
WHERE g.name = f.name AND g.local_flags & ? != 0
)
GROUP BY name
`)
if err != nil {
return wrap(err)
}
rows, err := namesStmt.Queryx(protocol.FlagLocalGlobal)
if err != nil {
return wrap(err)
}
defer rows.Close()
for rows.Next() {
var name string
if err := rows.Scan(&name); err != nil {
return wrap(err)
}
if err := s.recalcGlobalForFile(txp, name); err != nil {
return wrap(err)
}
}
return wrap(rows.Err())
}
func (s *folderDB) recalcGlobalForFile(txp *txPreparedStmts, file string) error {
//nolint:sqlclosecheck
selStmt, err := txp.Preparex(`
SELECT name, device_idx, sequence, modified, version, deleted, invalid, local_flags FROM files
WHERE name = ?
`)
if err != nil {
return wrap(err)
}
es, err := itererr.Collect(iterStructs[fileRow](selStmt.Queryx(file)))
if err != nil {
return wrap(err)
}
if len(es) == 0 {
// shouldn't happen
return nil
}
// Sort the entries; the global entry is at the head of the list
slices.SortFunc(es, fileRow.Compare)
// The global version is the first one in the list that is not invalid,
// or just the first one in the list if all are invalid.
var global fileRow
globIdx := slices.IndexFunc(es, func(e fileRow) bool { return !e.Invalid })
if globIdx < 0 {
globIdx = 0
}
global = es[globIdx]
// We "have" the file if the position in the list of versions is at the
// global version or better, or if the version is the same as the global
// file (we might be further down the list due to invalid flags), or if
// the global is deleted and we don't have it at all...
localIdx := slices.IndexFunc(es, func(e fileRow) bool { return e.DeviceIdx == s.localDeviceIdx })
hasLocal := localIdx >= 0 && localIdx <= globIdx || // have a better or equal version
localIdx >= 0 && es[localIdx].Version.Equal(global.Version.Vector) || // have an equal version but invalid/ignored
localIdx < 0 && global.Deleted // missing it, but the global is also deleted
// Set the global flag on the global entry. Set the need flag if the
// local device needs this file, unless it's invalid.
global.LocalFlags |= protocol.FlagLocalGlobal
if hasLocal || global.Invalid {
global.LocalFlags &= ^protocol.FlagLocalNeeded
} else {
global.LocalFlags |= protocol.FlagLocalNeeded
}
//nolint:sqlclosecheck
upStmt, err := txp.Preparex(`
UPDATE files SET local_flags = ?
WHERE device_idx = ? AND sequence = ?
`)
if err != nil {
return wrap(err)
}
if _, err := upStmt.Exec(global.LocalFlags, global.DeviceIdx, global.Sequence); err != nil {
return wrap(err)
}
// Clear the need and global flags on all other entries
//nolint:sqlclosecheck
upStmt, err = txp.Preparex(`
UPDATE files SET local_flags = local_flags & ?
WHERE name = ? AND sequence != ? AND local_flags & ? != 0
`)
if err != nil {
return wrap(err)
}
if _, err := upStmt.Exec(^(protocol.FlagLocalNeeded | protocol.FlagLocalGlobal), global.Name, global.Sequence, protocol.FlagLocalNeeded|protocol.FlagLocalGlobal); err != nil {
return wrap(err)
}
return nil
}
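// Worked example (hypothetical versions): suppose three announcements of
// a file sort as [remote v3, local v2, remote v2-invalid]. The head entry
// (remote v3) is valid, so it becomes the global and gets FlagLocalGlobal.
// The local entry sits below the global with a lesser, non-equal version,
// so hasLocal is false and the global entry also gets FlagLocalNeeded;
// both flags are cleared from all other entries.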
func (s *DB) folderIdxLocked(folderID string) (int64, error) {
if _, err := s.stmt(`
INSERT OR IGNORE INTO folders(folder_id)
VALUES (?)
`).Exec(folderID); err != nil {
return 0, wrap(err)
}
var idx int64
if err := s.stmt(`
SELECT idx FROM folders
WHERE folder_id = ?
`).Get(&idx, folderID); err != nil {
return 0, wrap(err)
}
return idx, nil
}
type fileRow struct {
Name string
Version dbVector
DeviceIdx int64 `db:"device_idx"`
Sequence int64
Modified int64
Size int64
LocalFlags int64 `db:"local_flags"`
Deleted bool
Invalid bool
}
func (e fileRow) Compare(other fileRow) int {
// From FileInfo.WinsConflict
vc := e.Version.Vector.Compare(other.Version.Vector)
switch vc {
case protocol.Equal:
if e.Invalid != other.Invalid {
if e.Invalid {
return 1
}
return -1
}
// Compare the device ID index, lower is better. This is only
// deterministic to the extent that LocalDeviceID will always be the
// lowest one, order between remote devices is random (and
// irrelevant).
return cmp.Compare(e.DeviceIdx, other.DeviceIdx)
case protocol.Greater: // we are newer
return -1
case protocol.Lesser: // we are older
return 1
case protocol.ConcurrentGreater, protocol.ConcurrentLesser: // there is a conflict
if e.Invalid != other.Invalid {
if e.Invalid { // we are invalid, we lose
return 1
}
return -1 // they are invalid, we win
}
if e.Deleted != other.Deleted {
if e.Deleted { // we are deleted, we lose
return 1
}
return -1 // they are deleted, we win
}
if d := cmp.Compare(e.Modified, other.Modified); d != 0 {
return -d // positive d means we were newer, so we win (negative return)
}
if vc == protocol.ConcurrentGreater {
return -1 // we have a better device ID, we win
}
return 1 // they win
default:
return 0
}
}
func (s *folderDB) periodicCheckpointLocked(fs []protocol.FileInfo) {
// Induce periodic checkpoints. We add points for each file and block,
// and checkpoint when we've written more than a threshold of points.
// This ensures we do not go too long without a checkpoint, while also
// not doing it incessantly for every update.
s.updatePoints += updatePointsPerFile * len(fs)
for _, f := range fs {
s.updatePoints += len(f.Blocks) * updatePointsPerBlock
}
if s.updatePoints > updatePointsThreshold {
conn, err := s.sql.Conn(context.Background())
if err != nil {
l.Debugln(s.baseName, "conn:", err)
return
}
defer conn.Close()
if _, err := conn.ExecContext(context.Background(), `PRAGMA journal_size_limit = 8388608`); err != nil {
l.Debugln(s.baseName, "PRAGMA journal_size_limit:", err)
}
// Every 50th checkpoint becomes a truncate, in an effort to bring
// down the size now and then.
checkpointType := "RESTART"
if s.checkpointsCount > 50 {
checkpointType = "TRUNCATE"
}
cmd := fmt.Sprintf(`PRAGMA wal_checkpoint(%s)`, checkpointType)
row := conn.QueryRowContext(context.Background(), cmd)
var res, modified, moved int
if row.Err() != nil {
l.Debugln(s.baseName, cmd+":", err)
} else if err := row.Scan(&res, &modified, &moved); err != nil {
l.Debugln(s.baseName, cmd+" (scan):", err)
} else {
l.Debugln(s.baseName, cmd, s.checkpointsCount, "at", s.updatePoints, "returned", res, modified, moved)
}
// Reset the truncate counter when a truncate succeeded. If it
// failed, we'll keep trying it until we succeed. Increase it faster
// when we fail to checkpoint, as it's more likely the WAL is
// growing and will need truncation when we get out of this state.
if res == 1 {
s.checkpointsCount += 10
} else if res == 0 && checkpointType == "TRUNCATE" {
s.checkpointsCount = 0
} else {
s.checkpointsCount++
}
s.updatePoints = 0
}
}


@@ -1,8 +0,0 @@
These SQL scripts are embedded in the binary.
Scripts in `schema/` are run at every startup, in alphanumerical order.
Scripts in `migrations/` are run when a migration is needed; they must begin
with a number that equals the schema version that results from that
migration. Migrations are not run on initial database creation, so the
scripts in `schema/` should create the latest version.
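For example, a hypothetical script `migrations/02-example.sql` would be applied when upgrading an existing database to schema version 2, while a freshly created database reaches the same end state directly from the `schema/` scripts.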


@@ -1,7 +0,0 @@
-- Copyright (C) 2025 The Syncthing Authors.
--
-- This Source Code Form is subject to the terms of the Mozilla Public
-- License, v. 2.0. If a copy of the MPL was not distributed with this file,
-- You can obtain one at https://mozilla.org/MPL/2.0/.
-- The next migration should be number two.


@@ -1,14 +0,0 @@
-- Copyright (C) 2025 The Syncthing Authors.
--
-- This Source Code Form is subject to the terms of the Mozilla Public
-- License, v. 2.0. If a copy of the MPL was not distributed with this file,
-- You can obtain one at https://mozilla.org/MPL/2.0/.
-- Schema migrations hold the list of historical migrations applied
CREATE TABLE IF NOT EXISTS schemamigrations (
schema_version INTEGER NOT NULL,
applied_at INTEGER NOT NULL, -- unix nanos
syncthing_version TEXT NOT NULL COLLATE BINARY,
PRIMARY KEY(schema_version)
) STRICT
;
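-- Example with illustrative values: recording that schema version 2 was
-- applied at a given Unix-nanosecond timestamp by a given version:
-- INSERT INTO schemamigrations (schema_version, applied_at, syncthing_version)
-- VALUES (2, 1750000000000000000, 'v2.0.0');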


@@ -1,13 +0,0 @@
-- Copyright (C) 2025 The Syncthing Authors.
--
-- This Source Code Form is subject to the terms of the Mozilla Public
-- License, v. 2.0. If a copy of the MPL was not distributed with this file,
-- You can obtain one at https://mozilla.org/MPL/2.0/.
-- Simple KV store. This backs the "miscDB" we use for certain minor pieces
-- of data.
CREATE TABLE IF NOT EXISTS kv (
key TEXT NOT NULL PRIMARY KEY COLLATE BINARY,
value BLOB NOT NULL
) STRICT
;


@@ -1,12 +0,0 @@
-- Copyright (C) 2025 The Syncthing Authors.
--
-- This Source Code Form is subject to the terms of the Mozilla Public
-- License, v. 2.0. If a copy of the MPL was not distributed with this file,
-- You can obtain one at https://mozilla.org/MPL/2.0/.
-- devices map device IDs as used by Syncthing to database device indexes
CREATE TABLE IF NOT EXISTS devices (
idx INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
device_id TEXT NOT NULL UNIQUE COLLATE BINARY
) STRICT
;


@@ -1,60 +0,0 @@
-- Copyright (C) 2025 The Syncthing Authors.
--
-- This Source Code Form is subject to the terms of the Mozilla Public
-- License, v. 2.0. If a copy of the MPL was not distributed with this file,
-- You can obtain one at https://mozilla.org/MPL/2.0/.
-- Files
--
-- The files table contains all files announced by any device. Files present
-- on this device are filed under the LocalDeviceID, not the actual current
-- device ID, for simplicity, consistency, and portability. One announced
-- version of each file is considered the "global" version - the latest one,
-- which all other devices strive to replicate. This instance gets the Global
-- flag bit set. There may be other identical instances of this file
-- announced by other devices, but only one instance gets the Global flag;
-- this simplifies accounting. If the current device has the Global version,
-- the LocalDeviceID instance of the file is the one that has the Global
-- bit.
--
-- If the current device does not have that version of the file it gets the
-- Need bit set. Only Global files announced by another device can have the
-- Need bit. This allows for very efficient lookup of files needing handling
-- on this device, which is a common query.
CREATE TABLE IF NOT EXISTS files (
device_idx INTEGER NOT NULL, -- actual device ID or LocalDeviceID
sequence INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT, -- our local database sequence, for each and every entry
remote_sequence INTEGER, -- remote device's sequence number, null for local or synthetic entries
name TEXT NOT NULL COLLATE BINARY,
type INTEGER NOT NULL, -- protocol.FileInfoType
modified INTEGER NOT NULL, -- Unix nanos
size INTEGER NOT NULL,
version TEXT NOT NULL COLLATE BINARY,
deleted INTEGER NOT NULL, -- boolean
invalid INTEGER NOT NULL, -- boolean
local_flags INTEGER NOT NULL,
blocklist_hash BLOB, -- null when there are no blocks
FOREIGN KEY(device_idx) REFERENCES devices(idx) ON DELETE CASCADE
) STRICT
;
-- FileInfos store the actual protobuf object. We do this separately to keep
-- the files rows smaller and more efficient.
CREATE TABLE IF NOT EXISTS fileinfos (
sequence INTEGER NOT NULL PRIMARY KEY, -- our local database sequence from the files table
fiprotobuf BLOB NOT NULL,
FOREIGN KEY(sequence) REFERENCES files(sequence) ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED
) STRICT
;
-- There can be only one file per folder, device, and remote sequence number
CREATE UNIQUE INDEX IF NOT EXISTS files_remote_sequence ON files (device_idx, remote_sequence)
WHERE remote_sequence IS NOT NULL
;
-- There can be only one file per folder, device, and name
CREATE UNIQUE INDEX IF NOT EXISTS files_device_name ON files (device_idx, name)
;
-- We want to be able to look up & iterate files based on just folder and name
CREATE INDEX IF NOT EXISTS files_name_only ON files (name)
;
-- We want to be able to look up & iterate files based on blocks hash
CREATE INDEX IF NOT EXISTS files_blocklist_hash_only ON files (blocklist_hash, device_idx) WHERE blocklist_hash IS NOT NULL
;
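-- Example (simplified): the "files needed locally" lookup described in the
-- comment above. NEED_BIT stands for the numeric value of
-- protocol.FlagLocalNeeded, which is defined in the Go code; the real
-- queries also exclude ignored files.
-- SELECT name, size FROM files WHERE local_flags & NEED_BIT != 0;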


@@ -1,22 +0,0 @@
-- Copyright (C) 2025 The Syncthing Authors.
--
-- This Source Code Form is subject to the terms of the Mozilla Public
-- License, v. 2.0. If a copy of the MPL was not distributed with this file,
-- You can obtain one at https://mozilla.org/MPL/2.0/.
-- indexids holds the index ID and maximum sequence for a given device and folder
CREATE TABLE IF NOT EXISTS indexids (
device_idx INTEGER NOT NULL,
index_id TEXT NOT NULL COLLATE BINARY,
sequence INTEGER NOT NULL DEFAULT 0,
PRIMARY KEY(device_idx),
FOREIGN KEY(device_idx) REFERENCES devices(idx) ON DELETE CASCADE
) STRICT, WITHOUT ROWID
;
CREATE TRIGGER IF NOT EXISTS indexids_seq AFTER INSERT ON files
BEGIN
INSERT INTO indexids (device_idx, index_id, sequence)
VALUES (NEW.device_idx, "", COALESCE(NEW.remote_sequence, NEW.sequence))
ON CONFLICT DO UPDATE SET sequence = COALESCE(NEW.remote_sequence, NEW.sequence);
END
;


@@ -1,47 +0,0 @@
-- Copyright (C) 2025 The Syncthing Authors.
--
-- This Source Code Form is subject to the terms of the Mozilla Public
-- License, v. 2.0. If a copy of the MPL was not distributed with this file,
-- You can obtain one at https://mozilla.org/MPL/2.0/.
-- Counts
--
-- Counts and sizes are maintained for each device, folder, type, flag bits
-- combination.
CREATE TABLE IF NOT EXISTS counts (
device_idx INTEGER NOT NULL,
type INTEGER NOT NULL,
local_flags INTEGER NOT NULL,
count INTEGER NOT NULL,
size INTEGER NOT NULL,
deleted INTEGER NOT NULL, -- boolean
PRIMARY KEY(device_idx, type, local_flags, deleted),
FOREIGN KEY(device_idx) REFERENCES devices(idx) ON DELETE CASCADE
) STRICT, WITHOUT ROWID
;
-- Maintain counts when files are added and removed using triggers
CREATE TRIGGER IF NOT EXISTS counts_insert AFTER INSERT ON files
BEGIN
INSERT INTO counts (device_idx, type, local_flags, count, size, deleted)
VALUES (NEW.device_idx, NEW.type, NEW.local_flags, 1, NEW.size, NEW.deleted)
ON CONFLICT DO UPDATE SET count = count + 1, size = size + NEW.size;
END
;
CREATE TRIGGER IF NOT EXISTS counts_delete AFTER DELETE ON files
BEGIN
UPDATE counts SET count = count - 1, size = size - OLD.size
WHERE device_idx = OLD.device_idx AND type = OLD.type AND local_flags = OLD.local_flags AND deleted = OLD.deleted;
END
;
CREATE TRIGGER IF NOT EXISTS counts_update AFTER UPDATE OF local_flags ON files
WHEN NEW.local_flags != OLD.local_flags
BEGIN
INSERT INTO counts (device_idx, type, local_flags, count, size, deleted)
VALUES (NEW.device_idx, NEW.type, NEW.local_flags, 1, NEW.size, NEW.deleted)
ON CONFLICT DO UPDATE SET count = count + 1, size = size + NEW.size;
UPDATE counts SET count = count - 1, size = size - OLD.size
WHERE device_idx = OLD.device_idx AND type = OLD.type AND local_flags = OLD.local_flags AND deleted = OLD.deleted;
END
;
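-- Example (illustrative): non-deleted file count and total size per
-- device, read from the maintained counts instead of scanning files:
-- SELECT device_idx, sum(count) AS files, sum(size) AS bytes
-- FROM counts WHERE deleted = 0 GROUP BY device_idx;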


@@ -1,34 +0,0 @@
-- Copyright (C) 2025 The Syncthing Authors.
--
-- This Source Code Form is subject to the terms of the Mozilla Public
-- License, v. 2.0. If a copy of the MPL was not distributed with this file,
-- You can obtain one at https://mozilla.org/MPL/2.0/.
-- Block lists
--
-- The block lists are extracted from FileInfos and stored separately. This
-- reduces the database size by reusing the same block list entry for all
-- devices announcing the same file. Doing it for all block lists instead of
-- using a size cutoff simplifies queries. Block lists are garbage collected
-- "manually", not using a trigger as that was too performance impacting.
CREATE TABLE IF NOT EXISTS blocklists (
blocklist_hash BLOB NOT NULL PRIMARY KEY,
blprotobuf BLOB NOT NULL
) STRICT
;
-- Blocks
--
-- For all local files we store the blocks individually for quick lookup. A
-- given block can exist in multiple blocklists and at multiple offsets in a
-- blocklist.
CREATE TABLE IF NOT EXISTS blocks (
hash BLOB NOT NULL,
blocklist_hash BLOB NOT NULL,
idx INTEGER NOT NULL,
offset INTEGER NOT NULL,
size INTEGER NOT NULL,
PRIMARY KEY (hash, blocklist_hash, idx),
FOREIGN KEY(blocklist_hash) REFERENCES blocklists(blocklist_hash) ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED
) STRICT
;
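-- Example (simplified): find every placement of a block with a given
-- hash. The corresponding Go query additionally joins the files table to
-- skip entries whose blocklists await garbage collection:
-- SELECT blocklist_hash, idx, offset, size FROM blocks WHERE hash = ?;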

Some files were not shown because too many files have changed in this diff.