Compare commits

...

20 Commits

Author SHA1 Message Date
Simon Frei
22e09334ec lib/model: Fix incoming request on receive-enc (fixes #7699) (#7702) 2021-05-22 21:38:49 +02:00
Simon Frei
58592e3ef1 lib/db: Add logging for GC (#7707) 2021-05-22 21:36:43 +02:00
Simon Frei
0126188ba7 lib/config: Set DisableTempIndexes to true on receive-encrypted (#7701) 2021-05-20 22:33:23 +02:00
Simon Frei
5bdb6798a9 all: Regenerate proto (#7696) 2021-05-19 13:30:20 +02:00
Jakob Borg
ab2729ab79 gui, man, authors: Update docs, translations, and contributors 2021-05-19 07:45:35 +02:00
Audrius Butkevicius
58e81fdffb cmd/syncthing/cli: Update recli, fix stdin handling (fixes #7685, fixes #7673) (#7694) 2021-05-18 20:09:48 +01:00
tomasz1986
0619a27872 cmd/strelaypoolsrv: Fix minor grammar, use https in links (#7695)
* cmd/strelaypoolsrv: Fix minor grammar, use https in links

Add a few minor grammatical/stylistic fixes. Use `https` instead of
`http` in the MaxMind link in the footer.

Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>

* wip

Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>
2021-05-18 19:07:09 +01:00
greatroar
0e52ce830a lib/fs: Fix UnicodeLowercaseNormalized on lowercase NFD (#7692)
Co-authored-by: greatroar <@>
2021-05-17 20:43:07 +02:00
Jakob Borg
97437cad64 lib/fs: Ignore normalization differences in case insensitive lookup (fixes #7677) (#7678) 2021-05-17 12:35:03 +02:00
Simon Frei
5b90a98650 lib/model: Fix addFakeConn and other test improvements (#7684) 2021-05-16 17:23:27 +02:00
Audrius Butkevicius
96dae7bfec cmd/uraggregate: Optimise queries (#7679)
* cmd/uraggregate: Optimise queries

* Update main.go
2021-05-16 12:34:46 +01:00
Simon Frei
93a02c677e lib/scanner: Do not drop all not-exist-errors and debug logging (#7675) 2021-05-15 11:51:35 +02:00
Simon Frei
0d054f9b64 lib/model: Don't use empty folder cfg for index sender (fixes #7649) (#7671) 2021-05-15 11:13:39 +02:00
Audrius Butkevicius
1107f6eb5f lib/connections: Reduce default quic redial interval (fixes #7471) (#7672)
* lib/connections: Reduce default quic redial interval (fixes #7471)

* Update quic_dial.go
2021-05-14 14:26:02 +01:00
Simon Frei
3650364017 Merge branch 'release' 2021-05-13 11:44:59 +02:00
bt90
086508f51a docker: Remove sysctl from README (#7670) 2021-05-12 22:17:51 +02:00
Audrius Butkevicius
4ace451013 Update main.go (#7667) 2021-05-12 08:01:18 +01:00
Jakob Borg
c9ea773a22 gui, man, authors: Update docs, translations, and contributors 2021-05-12 07:45:34 +02:00
Simon Frei
0f4ae7636d build: Upgrade pfilter (fixes #7664) (#7666) 2021-05-11 20:57:38 +02:00
Simon Frei
87d3a8363b build: Upgrade pfilter (fixes #7664) 2021-05-11 20:45:35 +02:00
61 changed files with 470 additions and 408 deletions


@@ -16,8 +16,7 @@ the name of the Syncthing instance can be optionally defined by using
**Docker cli**
```
$ docker pull syncthing/syncthing
$ docker run --sysctl net.core.rmem_max=2097152 \
-p 8384:8384 -p 22000:22000/tcp -p 22000:22000/udp \
$ docker run -p 8384:8384 -p 22000:22000/tcp -p 22000:22000/udp \
-v /wherever/st-sync:/var/syncthing \
--hostname=my-syncthing \
syncthing/syncthing:latest
@@ -41,8 +40,6 @@ services:
- 8384:8384
- 22000:22000/tcp
- 22000:22000/udp
sysctls:
- net.core.rmem_max=2097152
restart: unless-stopped
```


@@ -510,10 +510,7 @@ func (m *DatabaseRecord) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthDatabase
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthDatabase
}
if (iNdEx + skippy) > l {
@@ -648,10 +645,7 @@ func (m *ReplicationRecord) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthDatabase
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthDatabase
}
if (iNdEx + skippy) > l {
@@ -752,10 +746,7 @@ func (m *DatabaseAddress) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthDatabase
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthDatabase
}
if (iNdEx + skippy) > l {


@@ -46,18 +46,18 @@
<h1>Relay Pool Data</h1>
<div ng-if="relays === undefined" class="text-center">
<img src="https://cdnjs.cloudflare.com/ajax/libs/galleriffic/2.0.1/css/loader.gif" alt=""/>
<p>Please wait while we gather data</p>
<p>Please wait while we gather data</p>
</div>
<div>
<div ng-show="relays !== undefined" class="ng-hide">
<p>
The relays listed on this page are not managed or vetted by the Syncthing project.
Each relay is the responsibility of the relay operator.
Currently {{ relays.length }} relays online.
Currently {{ relays.length }} relays are online.
</p>
</div>
<div id="map"></div> <!-- Can't hide the map, otherwise it freaks out -->
<p>The circle size represents how much bytes the relay transferred relative to other relays</p>
<p>The circle size represents how much bytes the relay has transferred relatively to other relays.</p>
</div>
<div>
<table class="table table-striped table-condensed table">
@@ -188,7 +188,7 @@
<hr>
<p>
This product includes GeoLite2 data created by MaxMind, available from
<a href="http://www.maxmind.com">http://www.maxmind.com</a>.
<a href="https://www.maxmind.com">https://www.maxmind.com</a>.
</p>
</div>


@@ -8,13 +8,13 @@ package cli
import (
"bufio"
"fmt"
"io/ioutil"
"os"
"strings"
"github.com/alecthomas/kong"
"github.com/flynn-archive/go-shlex"
"github.com/mattn/go-isatty"
"github.com/pkg/errors"
"github.com/urfave/cli"
@@ -97,38 +97,41 @@ func Run() error {
operationCommand,
errorsCommand,
debugCommand,
{
Name: "-",
HideHelp: true,
Usage: "Read commands from stdin",
Action: func(ctx *cli.Context) error {
if ctx.NArg() > 0 {
return errors.New("command does not expect any arguments")
}
// Drop the `-` not to recurse into self.
args := make([]string, len(os.Args)-1)
copy(args, os.Args)
fmt.Println("Reading commands from stdin...", args)
scanner := bufio.NewScanner(os.Stdin)
for scanner.Scan() {
input, err := shlex.Split(scanner.Text())
if err != nil {
return errors.Wrap(err, "parsing input")
}
if len(input) == 0 {
continue
}
err = app.Run(append(args, input...))
if err != nil {
return err
}
}
return scanner.Err()
},
},
},
}}
tty := isatty.IsTerminal(os.Stdin.Fd()) || isatty.IsCygwinTerminal(os.Stdin.Fd())
if !tty {
// Not a TTY, consume from stdin
scanner := bufio.NewScanner(os.Stdin)
for scanner.Scan() {
input, err := shlex.Split(scanner.Text())
if err != nil {
return errors.Wrap(err, "parsing input")
}
if len(input) == 0 {
continue
}
err = app.Run(append(os.Args, input...))
if err != nil {
return err
}
}
err = scanner.Err()
if err != nil {
return err
}
} else {
err = app.Run(os.Args)
if err != nil {
return err
}
}
return nil
return app.Run(os.Args)
}
func parseFlags(c *preCli) error {


@@ -47,7 +47,7 @@ func main() {
func runAggregation(db *sql.DB) {
since := maxIndexedDay(db, "VersionSummary")
log.Println("Aggregating VersionSummary data since", since)
rows, err := aggregateVersionSummary(db, since)
rows, err := aggregateVersionSummary(db, since.Add(24*time.Hour))
if err != nil {
log.Println("aggregate:", err)
}
@@ -62,7 +62,7 @@ func runAggregation(db *sql.DB) {
since = maxIndexedDay(db, "Performance")
log.Println("Aggregating Performance data since", since)
rows, err = aggregatePerformance(db, since)
rows, err = aggregatePerformance(db, since.Add(24*time.Hour))
if err != nil {
log.Println("aggregate:", err)
}
@@ -70,7 +70,7 @@ func runAggregation(db *sql.DB) {
since = maxIndexedDay(db, "BlockStats")
log.Println("Aggregating BlockStats data since", since)
rows, err = aggregateBlockStats(db, since)
rows, err = aggregateBlockStats(db, since.Add(24*time.Hour))
if err != nil {
log.Println("aggregate:", err)
}
@@ -163,7 +163,7 @@ func setupDB(db *sql.DB) error {
func maxIndexedDay(db *sql.DB, table string) time.Time {
var t time.Time
row := db.QueryRow("SELECT MAX(Day) FROM " + table)
row := db.QueryRow("SELECT MAX(DATE_TRUNC('day', Day)) FROM " + table)
err := row.Scan(&t)
if err != nil {
return time.Time{}
@@ -179,8 +179,8 @@ func aggregateVersionSummary(db *sql.DB, since time.Time) (int64, error) {
COUNT(*) AS Count
FROM ReportsJson
WHERE
DATE_TRUNC('day', Received) > $1
AND DATE_TRUNC('day', Received) < DATE_TRUNC('day', NOW())
Received > $1
AND Received < DATE_TRUNC('day', NOW())
AND Report->>'version' like 'v_.%'
GROUP BY Day, Ver
);
@@ -199,7 +199,7 @@ func aggregateUserMovement(db *sql.DB) (int64, error) {
FROM ReportsJson
WHERE
Report->>'uniqueID' IS NOT NULL
AND DATE_TRUNC('day', Received) < DATE_TRUNC('day', NOW())
AND Received < DATE_TRUNC('day', NOW())
AND Report->>'version' like 'v_.%'
ORDER BY Day
`)
@@ -284,9 +284,11 @@ func aggregatePerformance(db *sql.DB, since time.Time) (int64, error) {
AVG((Report->>'memoryUsageMiB')::numeric) As MemoryUsageMiB
FROM ReportsJson
WHERE
DATE_TRUNC('day', Received) > $1
AND DATE_TRUNC('day', Received) < DATE_TRUNC('day', NOW())
Received > $1
AND Received < DATE_TRUNC('day', NOW())
AND Report->>'version' like 'v_.%'
/* Some custom implementation reported bytes when we expect megabytes, cap at petabyte */
AND (Report->>'memorySize')::numeric < 1073741824
GROUP BY Day
);
`, since)
@@ -313,8 +315,8 @@ func aggregateBlockStats(db *sql.DB, since time.Time) (int64, error) {
SUM((Report->'blockStats'->>'copyElsewhere')::numeric) AS CopyElsewhere
FROM ReportsJson
WHERE
DATE_TRUNC('day', Received) > $1
AND DATE_TRUNC('day', Received) < DATE_TRUNC('day', NOW())
Received > $1
AND Received < DATE_TRUNC('day', NOW())
AND (Report->>'urVersion')::numeric >= 3
AND Report->>'version' like 'v_.%'
AND Report->>'version' NOT LIKE 'v0.14.40%'

go.mod

@@ -1,8 +1,8 @@
module github.com/syncthing/syncthing
require (
github.com/AudriusButkevicius/pfilter v0.0.0-20210510194644-fad42c10c5ac
github.com/AudriusButkevicius/recli v0.0.5
github.com/AudriusButkevicius/pfilter v0.0.0-20210511165305-e9aaf99ab213
github.com/AudriusButkevicius/recli v0.0.6
github.com/alecthomas/kong v0.2.16
github.com/bkaradzic/go-lz4 v0.0.0-20160924222819-7224d8d8f27e
github.com/calmh/xdr v1.1.0
@@ -30,7 +30,6 @@ require (
github.com/lib/pq v1.10.1
github.com/lucas-clemente/quic-go v0.19.3
github.com/maruel/panicparse v1.6.1
github.com/mattn/go-isatty v0.0.12
github.com/maxbrunsfeld/counterfeiter/v6 v6.3.0
github.com/minio/sha256-simd v1.0.0
github.com/miscreant/miscreant.go v0.0.0-20200214223636-26d376326b75

go.sum

@@ -7,10 +7,10 @@ dmitri.shuralyov.com/html/belt v0.0.0-20180602232347-f7d459c86be0/go.mod h1:JLBr
dmitri.shuralyov.com/service/change v0.0.0-20181023043359-a85b471d5412/go.mod h1:a1inKt/atXimZ4Mv927x+r7UpyzRUf4emIoiiSC2TN4=
dmitri.shuralyov.com/state v0.0.0-20180228185332-28bcc343414c/go.mod h1:0PRwlb0D6DFvNNtx+9ybjezNCa8XF0xaYcETyp6rHWU=
git.apache.org/thrift.git v0.0.0-20180902110319-2566ecd5d999/go.mod h1:fPE2ZNJGynbRyZ4dJvy6G277gSllfV2HJqblrnkyeyg=
github.com/AudriusButkevicius/pfilter v0.0.0-20210510194644-fad42c10c5ac h1:ua8XsAiW9JrUa97ioh+ZaZu2JeSMrhcQ2peBxSMrqSs=
github.com/AudriusButkevicius/pfilter v0.0.0-20210510194644-fad42c10c5ac/go.mod h1:EEEtt5r8y0gGHlRFF2+cLx0WUy/rKHnjALmom5E0+74=
github.com/AudriusButkevicius/recli v0.0.5 h1:xUa55PvWTHBm17T6RvjElRO3y5tALpdceH86vhzQ5wg=
github.com/AudriusButkevicius/recli v0.0.5/go.mod h1:Q2E26yc6RvWWEz/TJ/goUp6yXvipYdJI096hpoaqsNs=
github.com/AudriusButkevicius/pfilter v0.0.0-20210511165305-e9aaf99ab213 h1:9E6vGKdipZ+AAkU19TUb5JQKMf44CGAYMtXDAyfonO4=
github.com/AudriusButkevicius/pfilter v0.0.0-20210511165305-e9aaf99ab213/go.mod h1:EEEtt5r8y0gGHlRFF2+cLx0WUy/rKHnjALmom5E0+74=
github.com/AudriusButkevicius/recli v0.0.6 h1:hY9KH09vIbx0fYpkvdWbvnh67uDiuJEVDGhXlefysDQ=
github.com/AudriusButkevicius/recli v0.0.6/go.mod h1:Nhfib1j/VFnLrXL9cHgA+/n2O6P5THuWelOnbfPNd78=
github.com/Azure/go-ntlmssp v0.0.0-20200615164410-66371956d46c h1:/IBSNwUN8+eKzUzbJPqhK839ygXJ82sde8x3ogr6R28=
github.com/Azure/go-ntlmssp v0.0.0-20200615164410-66371956d46c/go.mod h1:chxPXzSsl7ZWRAuOIE23GDNzjWuZquvFlgA8xmpunjU=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=


@@ -18,7 +18,7 @@
"Advanced": "Avanceret",
"Advanced Configuration": "Avanceret konfiguration",
"All Data": "Alt data",
"All folders shared with this device must be protected by a password, such that all sent data is unreadable without the given password.": "All folders shared with this device must be protected by a password, such that all sent data is unreadable without the given password.",
"All folders shared with this device must be protected by a password, such that all sent data is unreadable without the given password.": "Alle mapper delt med denne enhed, skal beskyttes med adgangskode, således at alle sendte data er ikke-læsbare uden den angivne adgangskode.",
"Allow Anonymous Usage Reporting?": "Tillad anonym brugerstatistik?",
"Allowed Networks": "Tilladte netværk",
"Alphabetic": "Alfabetisk",


@@ -62,7 +62,7 @@
"Currently Shared With Devices": "現在共有中のデバイス",
"Danger!": "危険!",
"Debugging Facilities": "デバッグ機能",
"Default Configuration": "Default Configuration",
"Default Configuration": "デフォルトの設定",
"Default Device": "Default Device",
"Default Folder": "Default Folder",
"Default Folder Path": "デフォルトのフォルダーパス",
@@ -102,9 +102,9 @@
"Downloading": "ダウンロード中",
"Edit": "編集",
"Edit Device": "デバイスの編集",
"Edit Device Defaults": "Edit Device Defaults",
"Edit Device Defaults": "デバイスのデフォルトの編集",
"Edit Folder": "フォルダーの編集",
"Edit Folder Defaults": "Edit Folder Defaults",
"Edit Folder Defaults": "フォルダーのデフォルトの編集",
"Editing {%path%}.": "{{path}} を編集中",
"Enable Crash Reporting": "クラッシュレポートを有効にする",
"Enable NAT traversal": "NATトラバーサルを有効にする",
@@ -153,7 +153,7 @@
"Help": "ヘルプ",
"Home page": "ホームページ",
"However, your current settings indicate you might not want it enabled. We have disabled automatic crash reporting for you.": "However, your current settings indicate you might not want it enabled. We have disabled automatic crash reporting for you.",
"If untrusted, enter encryption password": "If untrusted, enter encryption password",
"If untrusted, enter encryption password": "信頼しない場合、暗号化パスワードを入力",
"If you want to prevent other users on this computer from accessing Syncthing and through it your files, consider setting up authentication.": "If you want to prevent other users on this computer from accessing Syncthing and through it your files, consider setting up authentication.",
"Ignore": "無視",
"Ignore Patterns": "無視するファイル名",
@@ -202,7 +202,7 @@
"No File Versioning": "バージョン管理をしない",
"No files will be deleted as a result of this operation.": "この操作を行っても、ファイルが削除されることはありません。",
"No upgrades": "アップグレードしない",
"Not shared": "Not shared",
"Not shared": "非共有",
"Notice": "通知",
"OK": "OK",
"Off": "オフ",
@@ -279,7 +279,7 @@
"Share Folder": "フォルダーを共有する",
"Share Folders With Device": "このデバイスと共有するフォルダー",
"Share this folder?": "このフォルダーを共有しますか?",
"Shared Folders": "Shared Folders",
"Shared Folders": "共有中のフォルダー",
"Shared With": "共有中のデバイス",
"Sharing": "共有",
"Show ID": "IDを表示",
@@ -367,8 +367,8 @@
"Unknown": "不明",
"Unshared": "非共有",
"Unshared Devices": "非共有のデバイス",
"Unshared Folders": "Unshared Folders",
"Untrusted": "Untrusted",
"Unshared Folders": "非共有のフォルダー",
"Untrusted": "信頼しない",
"Up to Date": "最新",
"Updated": "更新",
"Upgrade": "アップグレード",


@@ -42,7 +42,7 @@
"Bugs": "Ошибки",
"Changelog": "Журнал изменений",
"Clean out after": "Очистить после",
"Cleaning Versions": "Cleaning Versions",
"Cleaning Versions": "Очистка Версий",
"Cleanup Interval": "Интервал очистки",
"Click to see discovery failures": "Щёлкните, чтобы посмотреть ошибки",
"Close": "Закрыть",
@@ -77,7 +77,7 @@
"Device ID": "ID устройства",
"Device Identification": "Идентификация устройства",
"Device Name": "Имя устройства",
"Device is untrusted, enter encryption password": "Device is untrusted, enter encryption password",
"Device is untrusted, enter encryption password": "Устройство ненадёжно, укажите пароль шифрования",
"Device rate limits": "Ограничения скорости для устройства",
"Device that last modified the item": "Устройство, последним изменившее объект",
"Devices": "Устройства",
@@ -102,9 +102,9 @@
"Downloading": "Загрузка",
"Edit": "Редактировать",
"Edit Device": "Редактирование устройства",
"Edit Device Defaults": "Edit Device Defaults",
"Edit Device Defaults": "Изменить умолчания устройства",
"Edit Folder": "Редактирование папки",
"Edit Folder Defaults": "Edit Folder Defaults",
"Edit Folder Defaults": "Изменить умолчания папки",
"Editing {%path%}.": "Правка {{path}}.",
"Enable Crash Reporting": "Включить отчёты о сбоях",
"Enable NAT traversal": "Включить NAT traversal",
@@ -134,8 +134,8 @@
"Folder Label": "Ярлык папки",
"Folder Path": "Путь к папке",
"Folder Type": "Тип папки",
"Folder type \"{%receiveEncrypted%}\" can only be set when adding a new folder.": "Folder type \"{{receiveEncrypted}}\" can only be set when adding a new folder.",
"Folder type \"{%receiveEncrypted%}\" cannot be changed after adding the folder. You need to remove the folder, delete or decrypt the data on disk, and add the folder again.": "Folder type \"{{receiveEncrypted}}\" cannot be changed after adding the folder. You need to remove the folder, delete or decrypt the data on disk, and add the folder again.",
"Folder type \"{%receiveEncrypted%}\" can only be set when adding a new folder.": "Тип папки \"{{receiveEncrypted}}\" может быть указан только при создании новой.",
"Folder type \"{%receiveEncrypted%}\" cannot be changed after adding the folder. You need to remove the folder, delete or decrypt the data on disk, and add the folder again.": "Тип папки \"{{receiveEncrypted}}\" не может быть изменён после добавления. Вам необходимо убрать папку, удалить или дешифровать данные на диске, а затем добавить папку заново.",
"Folders": "Папки",
"For the following folders an error occurred while starting to watch for changes. It will be retried every minute, so the errors might go away soon. If they persist, try to fix the underlying issue and ask for help if you can't.": "Для следующих папок произошла ошибка при запуске отслеживания изменений. Попытки будут повторяться раз в минуту, и ошибки скоро могут быть устранены. Если этого не произойдёт, попробуйте разобраться в причинах и попросите поддержки, если у вас не получится.",
"Full Rescan Interval (s)": "Интервал полного сканирования (в секундах)",
@@ -153,7 +153,7 @@
"Help": "Помощь",
"Home page": "Сайт",
"However, your current settings indicate you might not want it enabled. We have disabled automatic crash reporting for you.": "Ваши настройки указывают что вы не хотите, чтобы эта функция была включена. Мы отключили отправку отчетов о сбоях.",
"If untrusted, enter encryption password": "If untrusted, enter encryption password",
"If untrusted, enter encryption password": "Если ненадёжное, укажите пароль шифрования",
"If you want to prevent other users on this computer from accessing Syncthing and through it your files, consider setting up authentication.": "Если вы хотите запретить другим пользователям на этом компьютере доступ к Syncthing и через него к вашим файлам, подумайте о настройке аутентификации.",
"Ignore": "Игнорировать",
"Ignore Patterns": "Шаблоны игнорирования",
@@ -328,8 +328,8 @@
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "Введён недопустимый ID устройства. Он должен состоять из букв и цифр, может включать пробелы и дефисы, длина должна быть 52 или 56 символов.",
"The folder ID cannot be blank.": "ID папки не может быть пустым.",
"The folder ID must be unique.": "ID папки должен быть уникальным.",
"The folder content on other devices will be overwritten to become identical with this device. Files not present here will be deleted on other devices.": "The folder content on other devices will be overwritten to become identical with this device. Files not present here will be deleted on other devices.",
"The folder content on this device will be overwritten to become identical with other devices. Files newly added here will be deleted.": "The folder content on this device will be overwritten to become identical with other devices. Files newly added here will be deleted.",
"The folder content on other devices will be overwritten to become identical with this device. Files not present here will be deleted on other devices.": "Содержание папки на других устройствах будет перезаписано и станет идентично этому устройству. Файлы, отсутствующие здесь, будут удалены на других устройствах.",
"The folder content on this device will be overwritten to become identical with other devices. Files newly added here will be deleted.": "Содержание папки на этом устройстве будет перезаписано и станет идентично другим устройствам. Новые файлы на этом устройстве будут удалены.",
"The folder path cannot be blank.": "Путь к папке не должен быть пустым.",
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "Используются следующие интервалы: в первый час версия меняется каждые 30 секунд, в первый день - каждый час, первые 30 дней - каждый день, после, до максимального срока - каждую неделю.",
"The following items could not be synchronized.": "Невозможно синхронизировать следующие объекты",
@@ -368,7 +368,7 @@
"Unshared": "Необщедоступно",
"Unshared Devices": "Устройства без общего доступа",
"Unshared Folders": "Необщедоступные папки",
"Untrusted": "Untrusted",
"Untrusted": "Ненадёжный",
"Up to Date": "В актуальном состоянии",
"Updated": "Обновлено",
"Upgrade": "Обновить",
@@ -414,5 +414,5 @@
"seconds": "сек.",
"{%device%} wants to share folder \"{%folder%}\".": "{{device}} хочет поделиться папкой «{{folder}}».",
"{%device%} wants to share folder \"{%folderlabel%}\" ({%folder%}).": "{{device}} хочет поделиться папкой «{{folderlabel}}» ({{folder}}).",
"{%reintroducer%} might reintroduce this device.": "{{reintroducer}} might reintroduce this device."
"{%reintroducer%} might reintroduce this device.": "{{reintroducer}} может повторно рекомендовать это устройство."
}


@@ -721,10 +721,7 @@ func (m *Configuration) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthConfig
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthConfig
}
if (iNdEx + skippy) > l {
@@ -840,10 +837,7 @@ func (m *Defaults) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthConfig
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthConfig
}
if (iNdEx + skippy) > l {


@@ -1343,3 +1343,30 @@ func TestInternalVersioningConfiguration(t *testing.T) {
}
}
}
func TestReceiveEncryptedFolderFixed(t *testing.T) {
cfg := Configuration{
Folders: []FolderConfiguration{
{
ID: "foo",
Path: "testdata",
Type: FolderTypeReceiveEncrypted,
DisableTempIndexes: false,
IgnorePerms: false,
},
},
}
cfg.prepare(device1)
if len(cfg.Folders) != 1 {
t.Fatal("Expected one folder")
}
f := cfg.Folders[0]
if !f.DisableTempIndexes {
t.Error("DisableTempIndexes should be true")
}
if !f.IgnorePerms {
t.Error("IgnorePerms should be true")
}
}


@@ -924,10 +924,7 @@ func (m *DeviceConfiguration) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthDeviceconfiguration
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthDeviceconfiguration
}
if (iNdEx + skippy) > l {


@@ -214,6 +214,7 @@ func (f *FolderConfiguration) prepare(myID protocol.DeviceID, existingDevices ma
}
if f.Type == FolderTypeReceiveEncrypted {
f.DisableTempIndexes = true
f.IgnorePerms = true
}
}
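The hunk above makes `prepare` force both flags on receive-encrypted folders (matching the `DisableTempIndexes` commit and the new config test earlier in this compare). A self-contained sketch of the behaviour, with types reduced to the fields involved (the real `FolderConfiguration` has many more):

```go
package main

import "fmt"

type FolderType int

const (
	FolderTypeSendReceive FolderType = iota
	FolderTypeReceiveEncrypted
)

type FolderConfiguration struct {
	ID                 string
	Type               FolderType
	DisableTempIndexes bool
	IgnorePerms        bool
}

// prepare mirrors the new clause in FolderConfiguration.prepare:
// receive-encrypted folders always get both flags forced on,
// regardless of what the stored config says.
func (f *FolderConfiguration) prepare() {
	if f.Type == FolderTypeReceiveEncrypted {
		f.DisableTempIndexes = true
		f.IgnorePerms = true
	}
}

func main() {
	f := FolderConfiguration{ID: "foo", Type: FolderTypeReceiveEncrypted}
	f.prepare()
	fmt.Println(f.DisableTempIndexes, f.IgnorePerms) // true true
}
```

Forcing the flags at prepare time means configs loaded from disk are normalised the same way as newly created ones, which is exactly what `TestReceiveEncryptedFolderFixed` asserts.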


@@ -968,10 +968,7 @@ func (m *FolderDeviceConfiguration) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthFolderconfiguration
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthFolderconfiguration
}
if (iNdEx + skippy) > l {
@@ -1823,10 +1820,7 @@ func (m *FolderConfiguration) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthFolderconfiguration
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthFolderconfiguration
}
if (iNdEx + skippy) > l {


@@ -702,10 +702,7 @@ func (m *GUIConfiguration) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthGuiconfiguration
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthGuiconfiguration
}
if (iNdEx + skippy) > l {


@@ -425,10 +425,7 @@ func (m *LDAPConfiguration) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthLdapconfiguration
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthLdapconfiguration
}
if (iNdEx + skippy) > l {


@@ -7,9 +7,9 @@ import (
fmt "fmt"
proto "github.com/gogo/protobuf/proto"
github_com_gogo_protobuf_types "github.com/gogo/protobuf/types"
_ "github.com/golang/protobuf/ptypes/timestamp"
github_com_syncthing_syncthing_lib_protocol "github.com/syncthing/syncthing/lib/protocol"
_ "github.com/syncthing/syncthing/proto/ext"
_ "google.golang.org/protobuf/types/known/timestamppb"
io "io"
math "math"
math_bits "math/bits"
@@ -435,10 +435,7 @@ func (m *ObservedFolder) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthObserved
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthObserved
}
if (iNdEx + skippy) > l {
@@ -618,10 +615,7 @@ func (m *ObservedDevice) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthObserved
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthObserved
}
if (iNdEx + skippy) > l {


@@ -2475,10 +2475,7 @@ func (m *OptionsConfiguration) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthOptionsconfiguration
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthOptionsconfiguration
}
if (iNdEx + skippy) > l {


@@ -235,10 +235,7 @@ func (m *Size) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthSize
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthSize
}
if (iNdEx + skippy) > l {


@@ -403,7 +403,7 @@ func (m *VersioningConfiguration) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthVersioningconfiguration
}
if (iNdEx + skippy) > postIndex {
@@ -490,10 +490,7 @@ func (m *VersioningConfiguration) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthVersioningconfiguration
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthVersioningconfiguration
}
if (iNdEx + skippy) > l {


@@ -93,8 +93,15 @@ func (d *quicDialer) Dial(ctx context.Context, _ protocol.DeviceID, uri *url.URL
type quicDialerFactory struct{}
func (quicDialerFactory) New(opts config.OptionsConfiguration, tlsCfg *tls.Config) genericDialer {
// So the idea is that we should probably try dialing every 20 seconds.
// However it would still be nice if this was adjustable/proportional to ReconnectIntervalS
// But prevent something silly like 1/3 = 0 etc.
quicInterval := opts.ReconnectIntervalS / 3
if quicInterval < 10 {
quicInterval = 10
}
return &quicDialer{commonDialer{
reconnectInterval: time.Duration(opts.ReconnectIntervalS) * time.Second,
reconnectInterval: time.Duration(quicInterval) * time.Second,
tlsCfg: tlsCfg,
}}
}


@@ -587,6 +587,18 @@ func (db *Lowlevel) dropFolderIndexIDs(folder []byte) error {
return t.Commit()
}
func (db *Lowlevel) dropIndexIDs() error {
t, err := db.newReadWriteTransaction()
if err != nil {
return err
}
defer t.close()
if err := t.deleteKeyPrefix([]byte{KeyTypeIndexID}); err != nil {
return err
}
return t.Commit()
}
func (db *Lowlevel) dropMtimes(folder []byte) error {
key, err := db.keyer.GenerateMtimesKey(nil, folder)
if err != nil {
@@ -663,7 +675,7 @@ func (db *Lowlevel) timeUntil(key string, every time.Duration) time.Duration {
return sleepTime
}
func (db *Lowlevel) gcIndirect(ctx context.Context) error {
func (db *Lowlevel) gcIndirect(ctx context.Context) (err error) {
// The indirection GC uses bloom filters to track used block lists and
// versions. This means iterating over all items, adding their hashes to
// the filter, then iterating over the indirected items and removing
@@ -677,6 +689,26 @@ func (db *Lowlevel) gcIndirect(ctx context.Context) error {
db.gcMut.Lock()
defer db.gcMut.Unlock()
l.Debugln("Started database GC")
var discardedBlocks, matchedBlocks, discardedVersions, matchedVersions int
internalCtx, cancel := context.WithCancel(ctx)
defer cancel()
go func() {
// Only print something if the process takes more than "a moment".
select {
case <-internalCtx.Done():
case <-time.After(10 * time.Second):
l.Infoln("Database GC started - many Syncthing operations will be unresponsive until it's finished")
<-internalCtx.Done()
if err != nil || ctx.Err() != nil {
return
}
l.Infof("Database GC done (discarded/remaining: %v/%v blocks, %v/%v versions)", discardedBlocks, matchedBlocks, discardedVersions, matchedVersions)
}
}()
t, err := db.newReadWriteTransaction()
if err != nil {
return err
@@ -734,7 +766,6 @@ func (db *Lowlevel) gcIndirect(ctx context.Context) error {
return err
}
defer it.Release()
matchedBlocks := 0
for it.Next() {
select {
case <-ctx.Done():
@@ -750,6 +781,7 @@ func (db *Lowlevel) gcIndirect(ctx context.Context) error {
if err := t.Delete(key); err != nil {
return err
}
discardedBlocks++
}
it.Release()
if err := it.Error(); err != nil {
@@ -763,7 +795,6 @@ func (db *Lowlevel) gcIndirect(ctx context.Context) error {
if err != nil {
return err
}
matchedVersions := 0
for it.Next() {
select {
case <-ctx.Done():
@@ -779,6 +810,7 @@ func (db *Lowlevel) gcIndirect(ctx context.Context) error {
if err := t.Delete(key); err != nil {
return err
}
discardedVersions++
}
it.Release()
if err := it.Error(); err != nil {
@@ -795,6 +827,8 @@ func (db *Lowlevel) gcIndirect(ctx context.Context) error {
return err
}
l.Debugf("Finished GC, starting compaction (discarded/remaining: %v/%v blocks, %v/%v versions)", discardedBlocks, matchedBlocks, discardedVersions, matchedVersions)
return db.Compact()
}

View File

@@ -20,7 +20,7 @@ import (
// do not put restrictions on downgrades (e.g. for repairs after a bugfix).
const (
dbVersion = 14
dbMigrationVersion = 17
dbMigrationVersion = 18
dbMinSyncthingVersion = "v1.9.0"
)
@@ -102,6 +102,7 @@ func (db *schemaUpdater) updateSchema() error {
{14, 14, "v1.9.0", db.updateSchemaTo14},
{14, 16, "v1.9.0", db.checkRepairMigration},
{14, 17, "v1.9.0", db.migration17},
{14, 18, "v1.9.0", db.dropIndexIDsMigration},
}
for _, m := range migrations {
@@ -831,6 +832,10 @@ func (db *schemaUpdater) migration17(prev int) error {
return nil
}
func (db *schemaUpdater) dropIndexIDsMigration(_ int) error {
return db.dropIndexIDs()
}
func (db *schemaUpdater) rewriteGlobals(t readWriteTransaction) error {
it, err := t.NewPrefixIterator([]byte{KeyTypeGlobal})
if err != nil {


@@ -451,21 +451,12 @@ func DropDeltaIndexIDs(db *Lowlevel) {
}
opStr := "DropDeltaIndexIDs"
l.Debugf(opStr)
dbi, err := db.NewPrefixIterator([]byte{KeyTypeIndexID})
err := db.dropIndexIDs()
if backend.IsClosed(err) {
return
} else if err != nil {
fatalError(err, opStr, db)
}
defer dbi.Release()
for dbi.Next() {
if err := db.Delete(dbi.Key()); err != nil && !backend.IsClosed(err) {
fatalError(err, opStr, db)
}
}
if err := dbi.Error(); err != nil && !backend.IsClosed(err) {
fatalError(err, opStr, db)
}
}
func normalizeFilenamesAndDropDuplicates(fs []protocol.FileInfo) []protocol.FileInfo {

View File

@@ -8,10 +8,10 @@ import (
_ "github.com/gogo/protobuf/gogoproto"
proto "github.com/gogo/protobuf/proto"
github_com_gogo_protobuf_types "github.com/gogo/protobuf/types"
_ "github.com/golang/protobuf/ptypes/timestamp"
github_com_syncthing_syncthing_lib_protocol "github.com/syncthing/syncthing/lib/protocol"
protocol "github.com/syncthing/syncthing/lib/protocol"
_ "github.com/syncthing/syncthing/proto/ext"
_ "google.golang.org/protobuf/types/known/timestamppb"
io "io"
math "math"
math_bits "math/bits"
@@ -1652,10 +1652,7 @@ func (m *FileVersion) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthStructs
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthStructs
}
if (iNdEx + skippy) > l {
@@ -1739,10 +1736,7 @@ func (m *VersionList) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthStructs
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthStructs
}
if (iNdEx + skippy) > l {
@@ -2222,10 +2216,7 @@ func (m *FileInfoTruncated) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthStructs
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthStructs
}
if (iNdEx + skippy) > l {
@@ -2309,10 +2300,7 @@ func (m *BlockList) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthStructs
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthStructs
}
if (iNdEx + skippy) > l {
@@ -2430,10 +2418,7 @@ func (m *IndirectionHashesOnly) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthStructs
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthStructs
}
if (iNdEx + skippy) > l {
@@ -2650,10 +2635,7 @@ func (m *Counts) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthStructs
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthStructs
}
if (iNdEx + skippy) > l {
@@ -2756,10 +2738,7 @@ func (m *CountsSet) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthStructs
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthStructs
}
if (iNdEx + skippy) > l {
@@ -2916,10 +2895,7 @@ func (m *FileVersionDeprecated) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthStructs
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthStructs
}
if (iNdEx + skippy) > l {
@@ -3003,10 +2979,7 @@ func (m *VersionListDeprecated) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthStructs
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthStructs
}
if (iNdEx + skippy) > l {
@@ -3141,10 +3114,7 @@ func (m *ObservedFolder) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthStructs
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthStructs
}
if (iNdEx + skippy) > l {
@@ -3291,10 +3261,7 @@ func (m *ObservedDevice) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthStructs
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthStructs
}
if (iNdEx + skippy) > l {

View File

@@ -298,10 +298,7 @@ func (m *Announce) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthLocal
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthLocal
}
if (iNdEx + skippy) > l {

View File

@@ -157,9 +157,9 @@ func (f *BasicFilesystem) Roots() ([]string, error) {
// pathseparator.
func (f *BasicFilesystem) unrootedChecked(absPath string, roots []string) (string, error) {
absPath = f.resolveWin83(absPath)
lowerAbsPath := UnicodeLowercase(absPath)
lowerAbsPath := UnicodeLowercaseNormalized(absPath)
for _, root := range roots {
lowerRoot := UnicodeLowercase(root)
lowerRoot := UnicodeLowercaseNormalized(root)
if lowerAbsPath+string(PathSeparator) == lowerRoot {
return ".", nil
}
@@ -171,7 +171,7 @@ func (f *BasicFilesystem) unrootedChecked(absPath string, roots []string) (strin
}
func rel(path, prefix string) string {
lowerRel := strings.TrimPrefix(strings.TrimPrefix(UnicodeLowercase(path), UnicodeLowercase(prefix)), string(PathSeparator))
lowerRel := strings.TrimPrefix(strings.TrimPrefix(UnicodeLowercaseNormalized(path), UnicodeLowercaseNormalized(prefix)), string(PathSeparator))
return path[len(path)-len(lowerRel):]
}
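The `rel` helper above strips the prefix case-insensitively but returns the suffix in the path's *original* casing, by measuring the length of the folded remainder and slicing that many characters off the end of the untouched path. A rough Python sketch of the same trick (illustrative only; the Go code folds with `UnicodeLowercaseNormalized`, and the length trick assumes folding preserves character counts, which holds for the common cases here):

```python
import unicodedata

def fold(s: str) -> str:
    """Lowercase and NFC-normalize, approximating UnicodeLowercaseNormalized."""
    return unicodedata.normalize("NFC", s.lower())

def rel(path: str, prefix: str, sep: str = "/") -> str:
    """Strip prefix from path ignoring case, keeping path's original casing."""
    lower_rel = fold(path)
    lower_prefix = fold(prefix)
    if lower_rel.startswith(lower_prefix):
        lower_rel = lower_rel[len(lower_prefix):]
    lower_rel = lower_rel.lstrip(sep)
    # The folded remainder has the same length as the original suffix,
    # so slicing the original path preserves its casing.
    return path[len(path) - len(lower_rel):]
```

For example, `rel("/Home/User/Docs", "/home/user")` returns `"Docs"` with its original capital D, even though the comparison itself was case-insensitive.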
@@ -193,8 +193,8 @@ func (f *BasicFilesystem) resolveWin83(absPath string) string {
}
// Failed getting the long path. Return the part of the path which is
// already a long path.
lowerRoot := UnicodeLowercase(f.root)
for absPath = filepath.Dir(absPath); strings.HasPrefix(UnicodeLowercase(absPath), lowerRoot); absPath = filepath.Dir(absPath) {
lowerRoot := UnicodeLowercaseNormalized(f.root)
for absPath = filepath.Dir(absPath); strings.HasPrefix(UnicodeLowercaseNormalized(absPath), lowerRoot); absPath = filepath.Dir(absPath) {
if !isMaybeWin83(absPath) {
return absPath
}

View File

@@ -15,6 +15,7 @@ import (
"time"
lru "github.com/hashicorp/golang-lru"
"golang.org/x/text/unicode/norm"
)
const (
@@ -375,7 +376,10 @@ func (f *caseFilesystem) checkCaseExisting(name string) error {
if err != nil {
return err
}
if realName != name {
// We normalize the normalization (hah!) of the strings before
// comparing, as we don't want to treat a normalization difference as a
// case conflict.
if norm.NFC.String(realName) != norm.NFC.String(name) {
return &ErrCaseConflict{name, realName}
}
return nil
@@ -424,7 +428,7 @@ func (r *defaultRealCaser) realCase(name string) (string, error) {
lastLower := ""
for _, n := range dirNames {
node.children[n] = struct{}{}
lower := UnicodeLowercase(n)
lower := UnicodeLowercaseNormalized(n)
if lower != lastLower {
node.lowerToReal[lower] = n
lastLower = n
@@ -437,7 +441,7 @@ func (r *defaultRealCaser) realCase(name string) (string, error) {
// Try to find a direct or case match
if _, ok := node.children[comp]; !ok {
comp, ok = node.lowerToReal[UnicodeLowercase(comp)]
comp, ok = node.lowerToReal[UnicodeLowercaseNormalized(comp)]
if !ok {
return "", ErrNotExist
}

View File

@@ -186,7 +186,7 @@ type fakeEntry struct {
func (fs *fakeFS) entryForName(name string) *fakeEntry {
// bug: lookup doesn't work through symlinks.
if fs.insens {
name = UnicodeLowercase(name)
name = UnicodeLowercaseNormalized(name)
}
name = filepath.ToSlash(name)
@@ -285,7 +285,7 @@ func (fs *fakeFS) create(name string) (*fakeEntry, error) {
}
if fs.insens {
base = UnicodeLowercase(base)
base = UnicodeLowercaseNormalized(base)
}
if fs.withContent {
@@ -373,7 +373,7 @@ func (fs *fakeFS) Mkdir(name string, perm FileMode) error {
return os.ErrExist
}
if fs.insens {
key = UnicodeLowercase(key)
key = UnicodeLowercaseNormalized(key)
}
if _, ok := entry.children[key]; ok {
return os.ErrExist
@@ -402,7 +402,7 @@ func (fs *fakeFS) MkdirAll(name string, perm FileMode) error {
for _, comp := range comps {
key := comp
if fs.insens {
key = UnicodeLowercase(key)
key = UnicodeLowercaseNormalized(key)
}
next, ok := entry.children[key]
@@ -465,7 +465,7 @@ func (fs *fakeFS) OpenFile(name string, flags int, mode FileMode) (File, error)
}
if fs.insens {
key = UnicodeLowercase(key)
key = UnicodeLowercaseNormalized(key)
}
if flags&os.O_EXCL != 0 {
if _, ok := entry.children[key]; ok {
@@ -508,7 +508,7 @@ func (fs *fakeFS) Remove(name string) error {
time.Sleep(fs.latency)
if fs.insens {
name = UnicodeLowercase(name)
name = UnicodeLowercaseNormalized(name)
}
entry := fs.entryForName(name)
@@ -531,7 +531,7 @@ func (fs *fakeFS) RemoveAll(name string) error {
time.Sleep(fs.latency)
if fs.insens {
name = UnicodeLowercase(name)
name = UnicodeLowercaseNormalized(name)
}
entry := fs.entryForName(filepath.Dir(name))
@@ -555,8 +555,8 @@ func (fs *fakeFS) Rename(oldname, newname string) error {
newKey := filepath.Base(newname)
if fs.insens {
oldKey = UnicodeLowercase(oldKey)
newKey = UnicodeLowercase(newKey)
oldKey = UnicodeLowercaseNormalized(oldKey)
newKey = UnicodeLowercaseNormalized(newKey)
}
p0 := fs.entryForName(filepath.Dir(oldname))
@@ -651,7 +651,7 @@ func (fs *fakeFS) SameFile(fi1, fi2 FileInfo) bool {
// where ModTime is not that precise
var ok bool
if fs.insens {
ok = UnicodeLowercase(fi1.Name()) == UnicodeLowercase(fi2.Name())
ok = UnicodeLowercaseNormalized(fi1.Name()) == UnicodeLowercaseNormalized(fi2.Name())
} else {
ok = fi1.Name() == fi2.Name()
}

View File

@@ -10,12 +10,16 @@ import (
"strings"
"unicode"
"unicode/utf8"
"golang.org/x/text/unicode/norm"
)
func UnicodeLowercase(s string) string {
// UnicodeLowercaseNormalized returns the Unicode lower case variant of s,
// having also normalized it to normalization form C.
func UnicodeLowercaseNormalized(s string) string {
i := firstCaseChange(s)
if i == -1 {
return s
return norm.NFC.String(s)
}
var rs strings.Builder
@@ -28,7 +32,7 @@ func UnicodeLowercase(s string) string {
for _, r := range s[i:] {
rs.WriteRune(unicode.ToLower(unicode.ToUpper(r)))
}
return rs.String()
return norm.NFC.String(rs.String())
}
// Byte index of the first rune r s.t. lower(upper(r)) != r.

View File

@@ -44,13 +44,16 @@ var caseCases = [][2]string{
{"チャーハン", "チャーハン"},
// Some special Unicode characters, however, are folded by OSes.
{"\u212A", "k"},
// Folding renormalizes to NFC
{"A\xCC\x88", "\xC3\xA4"}, // ä
{"a\xCC\x88", "\xC3\xA4"}, // ä
}
func TestUnicodeLowercase(t *testing.T) {
func TestUnicodeLowercaseNormalized(t *testing.T) {
for _, tc := range caseCases {
res := UnicodeLowercase(tc[0])
res := UnicodeLowercaseNormalized(tc[0])
if res != tc[1] {
t.Errorf("UnicodeLowercase(%q) => %q, expected %q", tc[0], res, tc[1])
t.Errorf("UnicodeLowercaseNormalized(%q) => %q, expected %q", tc[0], res, tc[1])
}
}
}
@@ -60,7 +63,7 @@ func BenchmarkUnicodeLowercaseMaybeChange(b *testing.B) {
for i := 0; i < b.N; i++ {
for _, s := range caseCases {
UnicodeLowercase(s[0])
UnicodeLowercaseNormalized(s[0])
}
}
}
@@ -70,7 +73,7 @@ func BenchmarkUnicodeLowercaseNoChange(b *testing.B) {
for i := 0; i < b.N; i++ {
for _, s := range caseCases {
UnicodeLowercase(s[1])
UnicodeLowercaseNormalized(s[1])
}
}
}

View File

@@ -157,7 +157,7 @@ func (f *mtimeFS) wrapperType() filesystemWrapperType {
func (f *mtimeFS) save(name string, real, virtual time.Time) {
if f.caseInsensitive {
name = UnicodeLowercase(name)
name = UnicodeLowercaseNormalized(name)
}
if real.Equal(virtual) {
@@ -177,7 +177,7 @@ func (f *mtimeFS) save(name string, real, virtual time.Time) {
func (f *mtimeFS) load(name string) (MtimeMapping, error) {
if f.caseInsensitive {
name = UnicodeLowercase(name)
name = UnicodeLowercaseNormalized(name)
}
data, exists, err := f.db.Bytes(name)

View File

@@ -154,17 +154,17 @@ func (f *fakeConnection) sendIndexUpdate() {
f.model.IndexUpdate(f.id, f.folder, toSend)
}
func addFakeConn(m *testModel, dev protocol.DeviceID) *fakeConnection {
func addFakeConn(m *testModel, dev protocol.DeviceID, folderID string) *fakeConnection {
fc := newFakeConnection(dev, m)
m.AddConnection(fc, protocol.Hello{})
m.ClusterConfig(dev, protocol.ClusterConfig{
Folders: []protocol.Folder{
{
ID: "default",
ID: folderID,
Devices: []protocol.Device{
{ID: myID},
{ID: device1},
{ID: dev},
},
},
},

View File

@@ -43,7 +43,7 @@ func TestRecvOnlyRevertDeletes(t *testing.T) {
// Send an index update for the known stuff
m.Index(device1, "ro", knownFiles)
must(t, m.Index(device1, "ro", knownFiles))
f.updateLocalsFromScanning(knownFiles)
size := globalSize(t, m, "ro")
@@ -119,7 +119,7 @@ func TestRecvOnlyRevertNeeds(t *testing.T) {
// Send an index update for the known stuff
m.Index(device1, "ro", knownFiles)
must(t, m.Index(device1, "ro", knownFiles))
f.updateLocalsFromScanning(knownFiles)
// Scan the folder.
@@ -208,7 +208,7 @@ func TestRecvOnlyUndoChanges(t *testing.T) {
// Send an index update for the known stuff
m.Index(device1, "ro", knownFiles)
must(t, m.Index(device1, "ro", knownFiles))
f.updateLocalsFromScanning(knownFiles)
// Scan the folder.
@@ -277,7 +277,7 @@ func TestRecvOnlyDeletedRemoteDrop(t *testing.T) {
// Send an index update for the known stuff
m.Index(device1, "ro", knownFiles)
must(t, m.Index(device1, "ro", knownFiles))
f.updateLocalsFromScanning(knownFiles)
// Scan the folder.
@@ -341,7 +341,7 @@ func TestRecvOnlyRemoteUndoChanges(t *testing.T) {
// Send an index update for the known stuff
m.Index(device1, "ro", knownFiles)
must(t, m.Index(device1, "ro", knownFiles))
f.updateLocalsFromScanning(knownFiles)
// Scan the folder.
@@ -396,7 +396,7 @@ func TestRecvOnlyRemoteUndoChanges(t *testing.T) {
return true
})
snap.Release()
m.IndexUpdate(device1, "ro", files)
must(t, m.IndexUpdate(device1, "ro", files))
// Ensure the pull to resolve conflicts (content identical) happened
must(t, f.doInSync(func() error {

View File

@@ -1297,7 +1297,7 @@ func TestPullSymlinkOverExistingWindows(t *testing.T) {
if !ok {
t.Fatal("file missing")
}
m.Index(device1, f.ID, []protocol.FileInfo{{Name: name, Type: protocol.FileInfoTypeSymlink, Version: file.Version.Update(device1.Short())}})
must(t, m.Index(device1, f.ID, []protocol.FileInfo{{Name: name, Type: protocol.FileInfoTypeSymlink, Version: file.Version.Update(device1.Short())}}))
scanChan := make(chan string)

View File

@@ -247,6 +247,7 @@ func newIndexSenderRegistry(conn protocol.Connection, closed chan struct{}, sup
func (r *indexSenderRegistry) add(folder config.FolderConfiguration, fset *db.FileSet, startInfo *indexSenderStartInfo) {
r.mut.Lock()
r.addLocked(folder, fset, startInfo)
l.Debugf("Started index sender for device %v and folder %v", r.deviceID.Short(), folder.ID)
r.mut.Unlock()
}
@@ -336,15 +337,17 @@ func (r *indexSenderRegistry) addLocked(folder config.FolderConfiguration, fset
// addPending stores the given info to start an index sender once resume is called
// for this folder.
// If an index sender is already running, it will be stopped.
func (r *indexSenderRegistry) addPending(folder config.FolderConfiguration, startInfo *indexSenderStartInfo) {
func (r *indexSenderRegistry) addPending(folder string, startInfo *indexSenderStartInfo) {
r.mut.Lock()
defer r.mut.Unlock()
if is, ok := r.indexSenders[folder.ID]; ok {
if is, ok := r.indexSenders[folder]; ok {
r.sup.RemoveAndWait(is.token, 0)
delete(r.indexSenders, folder.ID)
delete(r.indexSenders, folder)
l.Debugf("Removed index sender for device %v and folder %v due to added pending", r.deviceID.Short(), folder)
}
r.startInfos[folder.ID] = startInfo
r.startInfos[folder] = startInfo
l.Debugf("Pending index sender for device %v and folder %v", r.deviceID.Short(), folder)
}
// remove stops a running index sender or removes one pending to be started.
@@ -358,6 +361,7 @@ func (r *indexSenderRegistry) remove(folder string) {
delete(r.indexSenders, folder)
}
delete(r.startInfos, folder)
l.Debugf("Removed index sender for device %v and folder %v", r.deviceID.Short(), folder)
}
// removeAllExcept stops all running index senders and removes those pending to be started,
@@ -371,11 +375,13 @@ func (r *indexSenderRegistry) removeAllExcept(except map[string]struct{}) {
if _, ok := except[folder]; !ok {
r.sup.RemoveAndWait(is.token, 0)
delete(r.indexSenders, folder)
l.Debugf("Removed index sender for device %v and folder %v (removeAllExcept)", r.deviceID.Short(), folder)
}
}
for folder := range r.startInfos {
if _, ok := except[folder]; !ok {
delete(r.startInfos, folder)
l.Debugf("Removed pending index sender for device %v and folder %v (removeAllExcept)", r.deviceID.Short(), folder)
}
}
}
@@ -388,6 +394,9 @@ func (r *indexSenderRegistry) pause(folder string) {
if is, ok := r.indexSenders[folder]; ok {
is.pause()
l.Debugf("Paused index sender for device %v and folder %v", r.deviceID.Short(), folder)
} else {
l.Debugf("No index sender for device %v and folder %v to pause", r.deviceID.Short(), folder)
}
}
@@ -403,11 +412,16 @@ func (r *indexSenderRegistry) resume(folder config.FolderConfiguration, fset *db
if isOk {
r.sup.RemoveAndWait(is.token, 0)
delete(r.indexSenders, folder.ID)
l.Debugf("Removed index sender for device %v and folder %v in resume", r.deviceID.Short(), folder.ID)
}
r.addLocked(folder, fset, info)
delete(r.startInfos, folder.ID)
l.Debugf("Started index sender for device %v and folder %v in resume", r.deviceID.Short(), folder.ID)
} else if isOk {
is.resume(fset)
l.Debugf("Resumed index sender for device %v and folder %v", r.deviceID.Short(), folder.ID)
} else {
l.Debugf("Not resuming index sender for device %v and folder %v as none is paused and there is no start info", r.deviceID.Short(), folder.ID)
}
}

View File

@@ -1341,7 +1341,7 @@ func (m *model) ccHandleFolders(folders []protocol.Folder, deviceCfg config.Devi
if err := m.db.AddOrUpdatePendingFolder(folder.ID, of, deviceID); err != nil {
l.Warnf("Failed to persist pending folder entry to database: %v", err)
}
indexSenders.addPending(cfg, ccDeviceInfos[folder.ID])
indexSenders.addPending(folder.ID, ccDeviceInfos[folder.ID])
updatedPending = append(updatedPending, updatedPendingFolder{
FolderID: folder.ID,
FolderLabel: folder.Label,
@@ -1365,7 +1365,7 @@ func (m *model) ccHandleFolders(folders []protocol.Folder, deviceCfg config.Devi
}
if cfg.Paused {
indexSenders.addPending(cfg, ccDeviceInfos[folder.ID])
indexSenders.addPending(folder.ID, ccDeviceInfos[folder.ID])
continue
}
@@ -1410,7 +1410,7 @@ func (m *model) ccHandleFolders(folders []protocol.Folder, deviceCfg config.Devi
// Shouldn't happen because !cfg.Paused, but might happen
// if the folder is about to be unpaused, but not yet.
l.Debugln("ccH: no fset", folder.ID)
indexSenders.addPending(cfg, ccDeviceInfos[folder.ID])
indexSenders.addPending(folder.ID, ccDeviceInfos[folder.ID])
continue
}
@@ -1941,7 +1941,7 @@ func (m *model) Request(deviceID protocol.DeviceID, folder, name string, blockNo
return nil, protocol.ErrGeneric
}
if len(hash) > 0 && !scanner.Validate(res.data[:n], hash, weakHash) {
if folderCfg.Type != config.FolderTypeReceiveEncrypted && len(hash) > 0 && !scanner.Validate(res.data[:n], hash, weakHash) {
m.recheckFile(deviceID, folder, name, offset, hash, weakHash)
l.Debugf("%v REQ(in) failed validating data: %s: %q / %q o=%d s=%d", m, deviceID, folder, name, offset, size)
return nil, protocol.ErrNoSuchFile

View File

@@ -224,11 +224,11 @@ func benchmarkIndex(b *testing.B, nfiles int) {
defer cleanupModel(m)
files := genFiles(nfiles)
m.Index(device1, "default", files)
must(b, m.Index(device1, "default", files))
b.ResetTimer()
for i := 0; i < b.N; i++ {
m.Index(device1, "default", files)
must(b, m.Index(device1, "default", files))
}
b.ReportAllocs()
}
@@ -252,11 +252,11 @@ func benchmarkIndexUpdate(b *testing.B, nfiles, nufiles int) {
files := genFiles(nfiles)
ufiles := genFiles(nufiles)
m.Index(device1, "default", files)
must(b, m.Index(device1, "default", files))
b.ResetTimer()
for i := 0; i < b.N; i++ {
m.IndexUpdate(device1, "default", ufiles)
must(b, m.IndexUpdate(device1, "default", ufiles))
}
b.ReportAllocs()
}
@@ -273,7 +273,7 @@ func BenchmarkRequestOut(b *testing.B) {
fc.addFile(f.Name, 0644, protocol.FileInfoTypeFile, []byte("some data to return"))
}
m.AddConnection(fc, protocol.Hello{})
m.Index(device1, "default", files)
must(b, m.Index(device1, "default", files))
b.ResetTimer()
for i := 0; i < b.N; i++ {
@@ -1799,7 +1799,7 @@ func TestGlobalDirectoryTree(t *testing.T) {
return string(bytes)
}
m.Index(device1, "default", testdata)
must(t, m.Index(device1, "default", testdata))
result, _ := m.GlobalDirectoryTree("default", "", -1, false)
@@ -2006,7 +2006,7 @@ func benchmarkTree(b *testing.B, n1, n2 int) {
m.ScanFolder("default")
files := genDeepFiles(n1, n2)
m.Index(device1, "default", files)
must(b, m.Index(device1, "default", files))
b.ResetTimer()
for i := 0; i < b.N; i++ {
@@ -2293,8 +2293,8 @@ func TestIssue3496(t *testing.T) {
m.ScanFolder("default")
addFakeConn(m, device1)
addFakeConn(m, device2)
addFakeConn(m, device1, "default")
addFakeConn(m, device2, "default")
// Reach into the model and grab the current file list...
@@ -2327,7 +2327,7 @@ func TestIssue3496(t *testing.T) {
Version: protocol.Vector{Counters: []protocol.Counter{{ID: device1.Short(), Value: 42}}},
})
m.IndexUpdate(device1, "default", localFiles)
must(t, m.IndexUpdate(device1, "default", localFiles))
// Check that the completion percentage for us makes sense
@@ -2400,8 +2400,8 @@ func TestNoRequestsFromPausedDevices(t *testing.T) {
t.Errorf("should not be available, no connections")
}
addFakeConn(m, device1)
addFakeConn(m, device2)
addFakeConn(m, device1, "default")
addFakeConn(m, device2, "default")
// !!! This is not what I'd expect to happen, as we don't even know if the peer has the original index !!!
@@ -2440,8 +2440,8 @@ func TestNoRequestsFromPausedDevices(t *testing.T) {
// Test that remote paused folders are not used.
addFakeConn(m, device1)
addFakeConn(m, device2)
addFakeConn(m, device1, "default")
addFakeConn(m, device2, "default")
m.ClusterConfig(device1, cc)
ccp := cc
@@ -2654,7 +2654,7 @@ func TestRemoveDirWithContent(t *testing.T) {
file.Deleted = true
file.Version = file.Version.Update(device1.Short()).Update(device1.Short())
m.IndexUpdate(device1, "default", []protocol.FileInfo{dir, file})
must(t, m.IndexUpdate(device1, "default", []protocol.FileInfo{dir, file}))
// Is there something we could trigger on instead of just waiting?
timeout := time.NewTimer(5 * time.Second)
@@ -3784,7 +3784,7 @@ func TestScanDeletedROChangedOnSR(t *testing.T) {
}
// A remote must have the file, otherwise the deletion below is
// automatically resolved as not a ro-changed item.
m.IndexUpdate(device1, fcfg.ID, []protocol.FileInfo{file})
must(t, m.IndexUpdate(device1, fcfg.ID, []protocol.FileInfo{file}))
must(t, ffs.Remove(name))
m.ScanFolders()
@@ -3895,9 +3895,9 @@ func TestIssue6961(t *testing.T) {
version := protocol.Vector{}.Update(device1.Short())
// Remote, valid and existing file
m.Index(device1, fcfg.ID, []protocol.FileInfo{{Name: name, Version: version, Sequence: 1}})
must(t, m.Index(device1, fcfg.ID, []protocol.FileInfo{{Name: name, Version: version, Sequence: 1}}))
// Remote, invalid (receive-only) and existing file
m.Index(device2, fcfg.ID, []protocol.FileInfo{{Name: name, RawInvalid: true, Sequence: 1}})
must(t, m.Index(device2, fcfg.ID, []protocol.FileInfo{{Name: name, RawInvalid: true, Sequence: 1}}))
// Create a local file
if fd, err := tfs.OpenFile(name, fs.OptCreate, 0666); err != nil {
t.Fatal(err)
@@ -3923,7 +3923,7 @@ func TestIssue6961(t *testing.T) {
m.ScanFolders()
// Drop the remote index, add some other file.
m.Index(device2, fcfg.ID, []protocol.FileInfo{{Name: "bar", RawInvalid: true, Sequence: 1}})
must(t, m.Index(device2, fcfg.ID, []protocol.FileInfo{{Name: "bar", RawInvalid: true, Sequence: 1}}))
// Pause and unpause folder to create new db.FileSet and thus recalculate everything
pauseFolder(t, wcfg, fcfg.ID, true)
@@ -3947,7 +3947,7 @@ func TestCompletionEmptyGlobal(t *testing.T) {
m.fmut.Unlock()
files[0].Deleted = true
files[0].Version = files[0].Version.Update(device1.Short())
m.IndexUpdate(device1, fcfg.ID, files)
must(t, m.IndexUpdate(device1, fcfg.ID, files))
comp := m.testCompletion(protocol.LocalDeviceID, fcfg.ID)
if comp.CompletionPct != 95 {
t.Error("Expected completion of 95%, got", comp.CompletionPct)
@@ -3966,26 +3966,26 @@ func TestNeedMetaAfterIndexReset(t *testing.T) {
// Start with two remotes having one file, then both deleting it, then
// only one adding it again.
m.Index(device1, fcfg.ID, files)
m.Index(device2, fcfg.ID, files)
must(t, m.Index(device1, fcfg.ID, files))
must(t, m.Index(device2, fcfg.ID, files))
seq++
files[0].SetDeleted(device2.Short())
files[0].Sequence = seq
m.IndexUpdate(device2, fcfg.ID, files)
m.IndexUpdate(device1, fcfg.ID, files)
must(t, m.IndexUpdate(device2, fcfg.ID, files))
must(t, m.IndexUpdate(device1, fcfg.ID, files))
seq++
files[0].Deleted = false
files[0].Size = 20
files[0].Version = files[0].Version.Update(device1.Short())
files[0].Sequence = seq
m.IndexUpdate(device1, fcfg.ID, files)
must(t, m.IndexUpdate(device1, fcfg.ID, files))
if comp := m.testCompletion(device2, fcfg.ID); comp.NeedItems != 1 {
t.Error("Expected one needed item for device2, got", comp.NeedItems)
}
// Pretend we had an index reset on device 1
m.Index(device1, fcfg.ID, files)
must(t, m.Index(device1, fcfg.ID, files))
if comp := m.testCompletion(device2, fcfg.ID); comp.NeedItems != 1 {
t.Error("Expected one needed item for device2, got", comp.NeedItems)
}

View File

@@ -24,6 +24,7 @@ import (
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/rand"
)
func TestRequestSimple(t *testing.T) {
@@ -57,7 +58,11 @@ func TestRequestSimple(t *testing.T) {
contents := []byte("test file contents\n")
fc.addFile("testfile", 0644, protocol.FileInfoTypeFile, contents)
fc.sendIndexUpdate()
<-done
select {
case <-done:
case <-time.After(10 * time.Second):
t.Fatal("timed out")
}
// Verify the contents
if err := equalContents(filepath.Join(tfs.URI(), "testfile"), contents); err != nil {
@@ -305,7 +310,7 @@ func pullInvalidIgnored(t *testing.T, ft config.FolderType) {
folderIgnoresAlwaysReload(t, m, fcfg)
fc := addFakeConn(m, device1)
fc := addFakeConn(m, device1, fcfg.ID)
fc.folder = "default"
if err := m.SetIgnores("default", []string{"*ignored*"}); err != nil {
@@ -1037,7 +1042,7 @@ func TestIgnoreDeleteUnignore(t *testing.T) {
folderIgnoresAlwaysReload(t, m, fcfg)
m.ScanFolders()
fc := addFakeConn(m, device1)
fc := addFakeConn(m, device1, fcfg.ID)
fc.folder = "default"
fc.mut.Lock()
fc.mut.Unlock()
@@ -1295,7 +1300,7 @@ func TestRequestIndexSenderClusterConfigBeforeStart(t *testing.T) {
stopped: make(chan struct{}),
}
defer cleanupModel(m)
fc := addFakeConn(m, device1)
fc := addFakeConn(m, device1, fcfg.ID)
done := make(chan struct{})
defer close(done) // Must be the last thing to be deferred, thus first to run.
indexChan := make(chan []protocol.FileInfo, 1)
@@ -1335,7 +1340,7 @@ func TestRequestIndexSenderClusterConfigBeforeStart(t *testing.T) {
}
}
func TestRequestReceiveEncryptedLocalNoSend(t *testing.T) {
func TestRequestReceiveEncrypted(t *testing.T) {
if testing.Short() {
t.Skip("skipping on short testing - scrypt is too slow")
}
@@ -1360,10 +1365,11 @@ func TestRequestReceiveEncryptedLocalNoSend(t *testing.T) {
m.fmut.RUnlock()
fset.Update(protocol.LocalDeviceID, files)
indexChan := make(chan []protocol.FileInfo, 1)
indexChan := make(chan []protocol.FileInfo, 10)
done := make(chan struct{})
defer close(done)
fc := newFakeConnection(device1, m)
fc.folder = fcfg.ID
fc.setIndexFn(func(_ context.Context, _ string, fs []protocol.FileInfo) error {
select {
case indexChan <- fs:
@@ -1398,6 +1404,32 @@ func TestRequestReceiveEncryptedLocalNoSend(t *testing.T) {
case <-time.After(5 * time.Second):
t.Fatal("timed out before receiving index")
}
// Detects deletion, as we never really created the file on disk
// Shouldn't send anything because receive-encrypted
must(t, m.ScanFolder(fcfg.ID))
// One real file to be sent
name := "foo"
data := make([]byte, 2000)
rand.Read(data)
fc.addFile(name, 0664, protocol.FileInfoTypeFile, data)
fc.sendIndexUpdate()
select {
case fs := <-indexChan:
if len(fs) != 1 {
t.Error("Expected index with one file, got", fs)
}
if got := fs[0].Name; got != name {
t.Errorf("Expected file %v, got %v", name, got)
}
case <-time.After(5 * time.Second):
t.Fatal("timed out before receiving index")
}
// Simulate request from device that is untrusted too, i.e. with non-empty, but garbage hash
_, err := m.Request(device1, fcfg.ID, name, 0, 1064, 0, []byte("garbage"), 0, false)
must(t, err)
}
func TestRequestGlobalInvalidToValid(t *testing.T) {
@@ -1489,6 +1521,7 @@ func TestRequestGlobalInvalidToValid(t *testing.T) {
if gotInvalid {
t.Fatal("Received two invalid index updates")
}
t.Log("got index with invalid file")
gotInvalid = true
}
}

View File

@@ -126,7 +126,7 @@ func setupModelWithConnectionFromWrapper(t testing.TB, w config.Wrapper) (*testM
t.Helper()
m := setupModel(t, w)
fc := addFakeConn(m, device1)
fc := addFakeConn(m, device1, "default")
fc.folder = "default"
_ = m.ScanFolder("default")

View File

@@ -2629,10 +2629,7 @@ func (m *Hello) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthBep
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthBep
}
if (iNdEx + skippy) > l {
@@ -2720,10 +2717,7 @@ func (m *Header) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthBep
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthBep
}
if (iNdEx + skippy) > l {
@@ -2807,10 +2801,7 @@ func (m *ClusterConfig) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthBep
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthBep
}
if (iNdEx + skippy) > l {
@@ -3058,10 +3049,7 @@ func (m *Folder) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthBep
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthBep
}
if (iNdEx + skippy) > l {
@@ -3371,10 +3359,7 @@ func (m *Device) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthBep
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthBep
}
if (iNdEx + skippy) > l {
@@ -3490,10 +3475,7 @@ func (m *Index) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthBep
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthBep
}
if (iNdEx + skippy) > l {
@@ -3609,10 +3591,7 @@ func (m *IndexUpdate) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthBep
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthBep
}
if (iNdEx + skippy) > l {
@@ -4126,10 +4105,7 @@ func (m *FileInfo) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthBep
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthBep
}
if (iNdEx + skippy) > l {
@@ -4270,10 +4246,7 @@ func (m *BlockInfo) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthBep
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthBep
}
if (iNdEx + skippy) > l {
@@ -4357,10 +4330,7 @@ func (m *Vector) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthBep
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthBep
}
if (iNdEx + skippy) > l {
@@ -4448,10 +4418,7 @@ func (m *Counter) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthBep
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthBep
}
if (iNdEx + skippy) > l {
@@ -4714,10 +4681,7 @@ func (m *Request) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthBep
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthBep
}
if (iNdEx + skippy) > l {
@@ -4839,10 +4803,7 @@ func (m *Response) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthBep
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthBep
}
if (iNdEx + skippy) > l {
@@ -4958,10 +4919,7 @@ func (m *DownloadProgress) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthBep
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthBep
}
if (iNdEx + skippy) > l {
@@ -5190,10 +5148,7 @@ func (m *FileDownloadProgressUpdate) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthBep
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthBep
}
if (iNdEx + skippy) > l {
@@ -5243,10 +5198,7 @@ func (m *Ping) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthBep
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthBep
}
if (iNdEx + skippy) > l {
@@ -5328,10 +5280,7 @@ func (m *Close) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthBep
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthBep
}
if (iNdEx + skippy) > l {
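The hunk repeated throughout this file collapses two separate negative checks (`skippy < 0` and `iNdEx + skippy < 0`) into a single condition. A minimal sketch of the combined bounds check, with illustrative names (`checkSkip` and the error value are not from the generated code):

```go
package main

import "fmt"

// errInvalidLength stands in for the sentinel error the generated
// unmarshaling code returns (ErrInvalidLengthBep).
var errInvalidLength = fmt.Errorf("proto: negative length found during unmarshaling")

// checkSkip validates a field-skip offset the way the regenerated code does:
// one condition rejects both a negative skippy and an index overflow, where
// the old code used two separate if statements.
func checkSkip(iNdEx, skippy int) error {
	if (skippy < 0) || (iNdEx+skippy) < 0 {
		return errInvalidLength
	}
	return nil
}

func main() {
	fmt.Println(checkSkip(10, 5))  // valid skip: nil
	fmt.Println(checkSkip(10, -1)) // negative skippy: error
}
```

Note that the second half of the condition still matters on its own: a large positive `skippy` added to `iNdEx` can wrap around to a negative value.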


@@ -297,10 +297,7 @@ func (m *TestOldDeviceID) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthDeviceidTest
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthDeviceidTest
}
if (iNdEx + skippy) > l {
@@ -383,10 +380,7 @@ func (m *TestNewDeviceID) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthDeviceidTest
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthDeviceidTest
}
if (iNdEx + skippy) > l {


@@ -109,7 +109,7 @@ type walker struct {
// Walk returns the list of files found in the local folder by scanning the
// file system. Files are blockwise hashed.
func (w *walker) walk(ctx context.Context) chan ScanResult {
l.Debugln("Walk", w.Subs, w.Matcher)
l.Debugln(w, "Walk", w.Subs, w.Matcher)
toHashChan := make(chan protocol.FileInfo)
finishedChan := make(chan ScanResult)
@@ -162,13 +162,13 @@ func (w *walker) walk(ctx context.Context) chan ScanResult {
for {
select {
case <-done:
l.Debugln("Walk progress done", w.Folder, w.Subs, w.Matcher)
l.Debugln(w, "Walk progress done", w.Folder, w.Subs, w.Matcher)
ticker.Stop()
return
case <-ticker.C:
current := progress.Total()
rate := progress.Rate()
l.Debugf("Walk %s %s current progress %d/%d at %.01f MiB/s (%d%%)", w.Folder, w.Subs, current, total, rate/1024/1024, current*100/total)
l.Debugf("%v: Walk %s %s current progress %d/%d at %.01f MiB/s (%d%%)", w, w.Folder, w.Subs, current, total, rate/1024/1024, current*100/total)
w.EventLogger.Log(events.FolderScanProgress, map[string]interface{}{
"folder": w.Folder,
"current": current,
@@ -184,7 +184,7 @@ func (w *walker) walk(ctx context.Context) chan ScanResult {
loop:
for _, file := range filesToHash {
l.Debugln("real to hash:", file.Name)
l.Debugln(w, "real to hash:", file.Name)
select {
case realToHashChan <- file:
case <-ctx.Done():
@@ -198,7 +198,7 @@ func (w *walker) walk(ctx context.Context) chan ScanResult {
}
func (w *walker) walkWithoutHashing(ctx context.Context) chan ScanResult {
l.Debugln("Walk without hashing", w.Subs, w.Matcher)
l.Debugln(w, "Walk without hashing", w.Subs, w.Matcher)
toHashChan := make(chan protocol.FileInfo)
finishedChan := make(chan ScanResult)
@@ -224,7 +224,7 @@ func (w *walker) scan(ctx context.Context, toHashChan chan<- protocol.FileInfo,
} else {
for _, sub := range w.Subs {
if err := osutil.TraversesSymlink(w.Filesystem, filepath.Dir(sub)); err != nil {
l.Debugf("Skip walking %v as it is below a symlink", sub)
l.Debugf("%v: Skip walking %v as it is below a symlink", w, sub)
continue
}
w.Filesystem.Walk(sub, hashFiles)
@@ -258,21 +258,21 @@ func (w *walker) walkAndHashFiles(ctx context.Context, toHashChan chan<- protoco
}
if fs.IsTemporary(path) {
l.Debugln("temporary:", path, "err:", err)
l.Debugln(w, "temporary:", path, "err:", err)
if err == nil && info.IsRegular() && info.ModTime().Add(w.TempLifetime).Before(now) {
w.Filesystem.Remove(path)
l.Debugln("removing temporary:", path, info.ModTime())
l.Debugln(w, "removing temporary:", path, info.ModTime())
}
return nil
}
if fs.IsInternal(path) {
l.Debugln("ignored (internal):", path)
l.Debugln(w, "ignored (internal):", path)
return skip
}
if w.Matcher.Match(path).IsIgnored() {
l.Debugln("ignored (patterns):", path)
l.Debugln(w, "ignored (patterns):", path)
// Only descend if matcher says so and the current file is not a symlink.
if err != nil || w.Matcher.SkipIgnoredDirs() || info.IsSymlink() {
return skip
@@ -285,7 +285,11 @@ func (w *walker) walkAndHashFiles(ctx context.Context, toHashChan chan<- protoco
}
if err != nil {
handleError(ctx, "scan", path, err, finishedChan)
// No need to report errors for files that don't exist (e.g. scan
// due to filesystem watcher)
if !fs.IsNotExist(err) {
handleError(ctx, "scan", path, err, finishedChan)
}
return skip
}
@@ -384,6 +388,7 @@ func (w *walker) walkRegular(ctx context.Context, relPath string, info fs.FileIn
if hasCurFile {
if curFile.IsEquivalentOptional(f, w.ModTimeWindow, w.IgnorePerms, true, w.LocalFlags) {
l.Debugln(w, "unchanged:", curFile, info.ModTime().Unix(), info.Mode()&fs.ModePerm)
return nil
}
if curFile.ShouldConflict() {
@@ -394,10 +399,10 @@ func (w *walker) walkRegular(ctx context.Context, relPath string, info fs.FileIn
// conflict.
f.Version = f.Version.DropOthers(w.ShortID)
}
l.Debugln("rescan:", curFile, info.ModTime().Unix(), info.Mode()&fs.ModePerm)
l.Debugln(w, "rescan:", curFile, info.ModTime().Unix(), info.Mode()&fs.ModePerm)
}
l.Debugln("to hash:", relPath, f)
l.Debugln(w, "to hash:", relPath, f)
select {
case toHashChan <- f:
@@ -417,6 +422,7 @@ func (w *walker) walkDir(ctx context.Context, relPath string, info fs.FileInfo,
if hasCurFile {
if curFile.IsEquivalentOptional(f, w.ModTimeWindow, w.IgnorePerms, true, w.LocalFlags) {
l.Debugln(w, "unchanged:", curFile, info.ModTime().Unix(), info.Mode()&fs.ModePerm)
return nil
}
if curFile.ShouldConflict() {
@@ -429,7 +435,7 @@ func (w *walker) walkDir(ctx context.Context, relPath string, info fs.FileInfo,
}
}
l.Debugln("dir:", relPath, f)
l.Debugln(w, "dir:", relPath, f)
select {
case finishedChan <- ScanResult{File: f}:
@@ -461,6 +467,7 @@ func (w *walker) walkSymlink(ctx context.Context, relPath string, info fs.FileIn
if hasCurFile {
if curFile.IsEquivalentOptional(f, w.ModTimeWindow, w.IgnorePerms, true, w.LocalFlags) {
l.Debugln(w, "unchanged:", curFile, info.ModTime().Unix(), info.Mode()&fs.ModePerm)
return nil
}
if curFile.ShouldConflict() {
@@ -473,7 +480,7 @@ func (w *walker) walkSymlink(ctx context.Context, relPath string, info fs.FileIn
}
}
l.Debugln("symlink changedb:", relPath, f)
l.Debugln(w, "symlink changedb:", relPath, f)
select {
case finishedChan <- ScanResult{File: f}:
@@ -557,10 +564,6 @@ func (w *walker) updateFileInfo(file, curFile protocol.FileInfo) protocol.FileIn
}
func handleError(ctx context.Context, context, path string, err error, finishedChan chan<- ScanResult) {
// Ignore missing items, as deletions are not handled by the scanner.
if fs.IsNotExist(err) {
return
}
select {
case finishedChan <- ScanResult{
Err: fmt.Errorf("%s: %w", context, err),


@@ -251,6 +251,62 @@ func TestNormalization(t *testing.T) {
}
}
func TestNormalizationDarwinCaseFS(t *testing.T) {
// This tests that normalization works on Darwin, through a CaseFS.
if runtime.GOOS != "darwin" {
t.Skip("Normalization test not possible on non-Darwin")
return
}
testFs := fs.NewCaseFilesystem(testFs)
testFs.RemoveAll("normalization")
defer testFs.RemoveAll("normalization")
testFs.MkdirAll("normalization", 0755)
const (
inNFC = "\xC3\x84"
inNFD = "\x41\xCC\x88"
)
// Create dir in NFC
if err := testFs.Mkdir(filepath.Join("normalization", "dir-"+inNFC), 0755); err != nil {
t.Fatal(err)
}
// Create file in NFC
fd, err := testFs.Create(filepath.Join("normalization", "dir-"+inNFC, "file-"+inNFC))
if err != nil {
t.Fatal(err)
}
fd.Close()
// Walk, which should normalize and return
walkDir(testFs, "normalization", nil, nil, 0)
tmp := walkDir(testFs, "normalization", nil, nil, 0)
if len(tmp) != 3 {
t.Error("Expected one file and one dir scanned")
}
// Verify we see the normalized entries in the result
foundFile := false
foundDir := false
for _, f := range tmp {
if f.Name == filepath.Join("normalization", "dir-"+inNFD) {
foundDir = true
continue
}
if f.Name == filepath.Join("normalization", "dir-"+inNFD, "file-"+inNFD) {
foundFile = true
continue
}
}
if !foundFile || !foundDir {
t.Error("Didn't find expected normalization form")
}
}
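The two byte sequences used in the test encode the same visible character, which is exactly why a byte-wise path comparison is not enough. A minimal illustration (actual normalization would use golang.org/x/text/unicode/norm, omitted here to stay dependency-free):

```go
package main

import "fmt"

// The same "Ä" in both Unicode normalization forms, as in the test above.
const (
	inNFC = "\xC3\x84"     // precomposed: U+00C4 LATIN CAPITAL LETTER A WITH DIAERESIS
	inNFD = "\x41\xCC\x88" // decomposed: U+0041 "A" + U+0308 COMBINING DIAERESIS
)

func main() {
	fmt.Println(inNFC == inNFD)         // false: the byte sequences differ
	fmt.Println(len(inNFC), len(inNFD)) // 2 3
}
```

macOS (HFS+/APFS) stores names in decomposed form, which is why the walker must treat the NFC name it created and the NFD name it reads back as the same entry.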
func TestIssue1507(t *testing.T) {
w := &walker{}
w.Matcher = ignore.New(w.Filesystem)


@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "STDISCOSRV" "1" "Apr 26, 2021" "v1" "Syncthing"
.TH "STDISCOSRV" "1" "May 18, 2021" "v1" "Syncthing"
.SH NAME
stdiscosrv \- Syncthing Discovery Server
.SH SYNOPSIS


@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "STRELAYSRV" "1" "Apr 26, 2021" "v1" "Syncthing"
.TH "STRELAYSRV" "1" "May 18, 2021" "v1" "Syncthing"
.SH NAME
strelaysrv \- Syncthing Relay Server
.SH SYNOPSIS
@@ -50,7 +50,7 @@ strelaysrv [\-debug] [\-ext\-address=<address>] [\-global\-rate=<bytes/s>] [\-ke
Syncthing relies on a network of community\-contributed relay servers. Anyone
can run a relay server, and it will automatically join the relay pool and be
available to Syncthing users. The current list of relays can be found at
\fI\%http://relays.syncthing.net/\fP\&.
\fI\%https://relays.syncthing.net/\fP\&.
.SH OPTIONS
.INDENT 0.0
.TP
@@ -127,7 +127,7 @@ How often pings are sent (default 1m0s).
.TP
.B \-pools=<pool addresses>
Comma separated list of relay pool addresses to join (default
\fI\%http://relays.syncthing.net/endpoint\fP”). Blank to disable announcement to
\fI\%https://relays.syncthing.net/endpoint\fP”). Blank to disable announcement to
a pool, thereby remaining a private relay.
.UNINDENT
.INDENT 0.0
@@ -150,9 +150,9 @@ Status service is used by the relay pool server UI for displaying stats (data tr
.sp
Go to \fI\%releases\fP <\fBhttps://github.com/syncthing/relaysrv/releases\fP> and
download the file appropriate for your operating system. Unpacking it will
yield a binary called \fBrelaysrv\fP (or \fBrelaysrv.exe\fP on Windows).
yield a binary called \fBstrelaysrv\fP (or \fBstrelaysrv.exe\fP on Windows).
Start this in whatever way you are most comfortable with; double clicking
should work in any graphical environment. At first start, relaysrv will
should work in any graphical environment. At first start, strelaysrv will
generate certificate files and database in the current directory unless
given flags to the contrary. It will also join the default pools of relays,
which means that it is publicly visible and any client can connect to it.
@@ -176,16 +176,16 @@ system:
.sp
.nf
.ft C
$ sudo useradd relaysrv
$ sudo mkdir /etc/relaysrv
$ sudo chown relaysrv /etc/relaysrv
$ sudo \-u relaysrv /usr/local/bin/relaysrv \-keys /etc/relaysrv
$ sudo useradd strelaysrv
$ sudo mkdir /etc/strelaysrv
$ sudo chown strelaysrv /etc/strelaysrv
$ sudo \-u strelaysrv /usr/local/bin/strelaysrv \-keys /etc/strelaysrv
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
This creates a user \fBrelaysrv\fP and a directory \fB/etc/relaysrv\fP to store
This creates a user \fBstrelaysrv\fP and a directory \fB/etc/strelaysrv\fP to store
the keys. The keys are generated on first startup. The relay will join the
global relay pool, unless a \fB\-pools=""\fP argument is given.
.sp
@@ -247,8 +247,8 @@ COMMIT
.UNINDENT
.UNINDENT
.sp
You will need to start \fBrelaysrv\fP with \fB\-ext\-address ":443"\fP\&. This tells
\fBrelaysrv\fP that it can be contacted on port 443, even though it is listening
You will need to start \fBstrelaysrv\fP with \fB\-ext\-address ":443"\fP\&. This tells
\fBstrelaysrv\fP that it can be contacted on port 443, even though it is listening
on port 22067. You will also need to let both port 443 and 22067 through your
firewall.
.sp
@@ -257,7 +257,7 @@ although your mileage may vary.
.SH FIREWALL CONSIDERATIONS
.sp
The relay server listens on two ports by default. One for data connections and the other
for providing public statistics at \fI\%http://relays.syncthing.net/\fP\&. The firewall, such as
for providing public statistics at \fI\%https://relays.syncthing.net/\fP\&. The firewall, such as
\fBiptables\fP, must permit incoming TCP connections to the following ports:
.INDENT 0.0
.IP \(bu 2


@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-BEP" "7" "Apr 26, 2021" "v1" "Syncthing"
.TH "SYNCTHING-BEP" "7" "May 18, 2021" "v1" "Syncthing"
.SH NAME
syncthing-bep \- Block Exchange Protocol v1
.SH INTRODUCTION AND DEFINITIONS


@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-CONFIG" "5" "Apr 26, 2021" "v1" "Syncthing"
.TH "SYNCTHING-CONFIG" "5" "May 18, 2021" "v1" "Syncthing"
.SH NAME
syncthing-config \- Syncthing Configuration
.SH SYNOPSIS


@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-DEVICE-IDS" "7" "Apr 26, 2021" "v1" "Syncthing"
.TH "SYNCTHING-DEVICE-IDS" "7" "May 18, 2021" "v1" "Syncthing"
.SH NAME
syncthing-device-ids \- Understanding Device IDs
.sp


@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-EVENT-API" "7" "Apr 26, 2021" "v1" "Syncthing"
.TH "SYNCTHING-EVENT-API" "7" "May 18, 2021" "v1" "Syncthing"
.SH NAME
syncthing-event-api \- Event API
.SH DESCRIPTION


@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-FAQ" "7" "Apr 26, 2021" "v1" "Syncthing"
.TH "SYNCTHING-FAQ" "7" "May 18, 2021" "v1" "Syncthing"
.SH NAME
syncthing-faq \- Frequently Asked Questions
.INDENT 0.0


@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-GLOBALDISCO" "7" "Apr 26, 2021" "v1" "Syncthing"
.TH "SYNCTHING-GLOBALDISCO" "7" "May 18, 2021" "v1" "Syncthing"
.SH NAME
syncthing-globaldisco \- Global Discovery Protocol v3
.SH ANNOUNCEMENTS


@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-LOCALDISCO" "7" "Apr 26, 2021" "v1" "Syncthing"
.TH "SYNCTHING-LOCALDISCO" "7" "May 18, 2021" "v1" "Syncthing"
.SH NAME
syncthing-localdisco \- Local Discovery Protocol v4
.SH MODE OF OPERATION


@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-NETWORKING" "7" "Apr 26, 2021" "v1" "Syncthing"
.TH "SYNCTHING-NETWORKING" "7" "May 18, 2021" "v1" "Syncthing"
.SH NAME
syncthing-networking \- Firewall Setup
.SH ROUTER SETUP


@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-RELAY" "7" "Apr 26, 2021" "v1" "Syncthing"
.TH "SYNCTHING-RELAY" "7" "May 18, 2021" "v1" "Syncthing"
.SH NAME
syncthing-relay \- Relay Protocol v1
.SH WHAT IS A RELAY?


@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-REST-API" "7" "Apr 26, 2021" "v1" "Syncthing"
.TH "SYNCTHING-REST-API" "7" "May 18, 2021" "v1" "Syncthing"
.SH NAME
syncthing-rest-api \- REST API
.sp


@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-SECURITY" "7" "Apr 26, 2021" "v1" "Syncthing"
.TH "SYNCTHING-SECURITY" "7" "May 18, 2021" "v1" "Syncthing"
.SH NAME
syncthing-security \- Security Principles
.sp


@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-STIGNORE" "5" "Apr 26, 2021" "v1" "Syncthing"
.TH "SYNCTHING-STIGNORE" "5" "May 18, 2021" "v1" "Syncthing"
.SH NAME
syncthing-stignore \- Prevent files from being synchronized to other nodes
.SH SYNOPSIS


@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-VERSIONING" "7" "Apr 26, 2021" "v1" "Syncthing"
.TH "SYNCTHING-VERSIONING" "7" "May 18, 2021" "v1" "Syncthing"
.SH NAME
syncthing-versioning \- Keep automatic backups of deleted files by other nodes
.sp
@@ -88,16 +88,17 @@ that will be kept for each.
.INDENT 0.0
.TP
.B 1 Hour
For the first hour, the most recent version is kept every 30 seconds.
For the first hour, the oldest version in every 30\-seconds interval is
kept.
.TP
.B 1 Day
For the first day, the most recent version is kept every hour.
For the first day, the oldest version in every hour is kept.
.TP
.B 30 Days
For the first 30 days, the most recent version is kept every day.
For the first 30 days, the oldest version in every day is kept.
.TP
.B Until Maximum Age
Until maximum age, the most recent version is kept every week.
Until maximum age, the oldest version in every week is kept.
.TP
.B Maximum Age
The maximum time to keep a version in days. For example, to keep replaced or
@@ -105,6 +106,14 @@ deleted files in the “.stversions” folder for an entire year, use 365. If
only for 10 days, use 10.
\fBNote: Set to 0 to keep versions forever.\fP
.UNINDENT
.sp
This means that there is only one version in each interval and, as files age,
they will be deleted unless the interval they are entering is empty. By keeping
the oldest versions, this versioning scheme preserves the file if it is
overwritten.
.sp
For more info, check the \fI\%unit test file\fP <\fBhttps://github.com/syncthing/syncthing/blob/main/lib/versioner/staggered_test.go#L32\fP>
that shows which versions are deleted for a specific run.
.SH EXTERNAL FILE VERSIONING
.sp
This versioning method delegates the decision on what to do to an


@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING" "1" "Apr 26, 2021" "v1" "Syncthing"
.TH "SYNCTHING" "1" "May 18, 2021" "v1" "Syncthing"
.SH NAME
syncthing \- Syncthing
.SH SYNOPSIS
@@ -55,6 +55,14 @@ machine will automatically be replicated to your other devices. We believe your
data is your data alone and you deserve to choose where it is stored. Therefore
Syncthing does not upload your data to the cloud but exchanges your data across
your machines as soon as they are online at the same time.
.sp
The \fBsyncthing\fP core application is a command\-line program which usually runs
in the background and handles the synchronization. It provides a built\-in, HTML
and JavaScript based user interface to be controlled from a web browser. This
frontend communicates with the core application through some HTTP APIs, which
other apps like graphical system integration helpers can use as well, for
greatest flexibility. A link to reach the GUI and API is printed among the first
few log messages.
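The HTTP API mentioned above is how external helpers talk to the core. A minimal sketch of building such a request in Go; the `/rest/system/status` endpoint and `X-API-Key` header are from the Syncthing REST API documentation, while the address, port, and key here are placeholders:

```go
package main

import (
	"fmt"
	"net/http"
)

// buildStatusRequest prepares a GET against the local REST API. The request
// is only constructed here, not sent, so the sketch runs without a server.
func buildStatusRequest(apiKey string) (*http.Request, error) {
	req, err := http.NewRequest("GET", "http://127.0.0.1:8384/rest/system/status", nil)
	if err != nil {
		return nil, err
	}
	// Authentication is via the API key shown in the GUI settings.
	req.Header.Set("X-API-Key", apiKey)
	return req, nil
}

func main() {
	req, err := buildStatusRequest("example-key")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.Path)
}
```

An `http.Client` would then execute the request; the same pattern works for any of the `/rest/...` endpoints.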
.SH OPTIONS
.INDENT 0.0
.TP