Compare commits

...

423 Commits

Author SHA1 Message Date
Jakob Borg
daa2bcefad Translations and docs update 2015-08-09 11:56:22 +02:00
Jakob Borg
a71090df81 Enable browser caching of static resources
This sends the Cache-Control header to allow caching of static resources,
and checks the If-Modified-Since header to allow browser to use the
cached resource on refresh.
2015-08-09 11:36:06 +02:00
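The mechanism sketched in Go, for illustration only (serveStatic and its parameters are hypothetical, not the actual syncthing handler):

    package main

    import (
        "net/http"
        "time"
    )

    // serveStatic sends Cache-Control so the browser may cache the resource,
    // and answers 304 Not Modified when If-Modified-Since shows the cached
    // copy is still current, so a refresh needn't re-download it.
    func serveStatic(w http.ResponseWriter, r *http.Request, modTime time.Time, data []byte) {
        w.Header().Set("Cache-Control", "max-age=3600")
        if ims := r.Header.Get("If-Modified-Since"); ims != "" {
            if t, err := http.ParseTime(ims); err == nil && !modTime.Truncate(time.Second).After(t) {
                w.WriteHeader(http.StatusNotModified) // browser reuses its cache
                return
            }
        }
        w.Header().Set("Last-Modified", modTime.UTC().Format(http.TimeFormat))
        w.Write(data)
    }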
Jakob Borg
0bfcafc5c6 Handle multiple case insensitivity prefixes in ignores (fixes #2134) 2015-08-09 11:35:12 +02:00
Lode Hoste
161d5c8379 Make rescan available for unshared folders 2015-08-09 11:34:44 +02:00
Audrius Butkevicius
5cfb578170 Add timeout for peek (fixes #1035) 2015-08-09 11:34:30 +02:00
Lode Hoste
9b0d47e9eb Fix TestReset when Syncthing shuts down too fast 2015-08-09 11:34:21 +02:00
Lode Hoste
13f4706067 Clarify password in integration tests 2015-08-09 11:34:10 +02:00
Lode Hoste
7ebdb1736f Properly rename config files during integration tests (fixes #1769) 2015-08-09 11:34:05 +02:00
Jakob Borg
19451e0654 Translation and docs update 2015-08-02 09:19:32 +02:00
Jakob Borg
fa922d7792 Send index immediately on local change 2015-08-02 08:08:24 +02:00
Jakob Borg
bbe1de3119 Merge pull request #2100 from AudriusButkevicius/memory
Use protocol provided buffer for requests (fixes #1157)
2015-08-02 08:07:52 +02:00
Jakob Borg
f87e9b596d Merge pull request #2106 from Zillode/scan-deletes
Reduce scanning effort
2015-08-02 07:42:39 +02:00
Jakob Borg
917e12952e Merge pull request #2109 from Zillode/update-credits
Update credits for dependencies (fixes #2082)
2015-08-02 07:25:45 +02:00
Audrius Butkevicius
1977c526e4 Use protocol provided buffers for requests (fixes #1157) 2015-08-01 12:35:47 +01:00
Audrius Butkevicius
b63351074c Update protocol package 2015-08-01 12:22:19 +01:00
Lode Hoste
d78eb1247e Update credits for dependencies (fixes #2082) 2015-07-31 23:19:09 +02:00
Lode Hoste
9b9fe0d65c Reduce scanning effort 2015-07-31 21:32:49 +02:00
Jakob Borg
5a2db802d9 Fix TestReset 2015-07-28 21:31:01 +04:00
Jakob Borg
d3972b88f2 Remove set.ReplaceWithDelete (dead code) 2015-07-28 21:09:43 +04:00
Audrius Butkevicius
e62cf13760 Add stwatchfile 2015-07-27 19:00:22 +01:00
Jakob Borg
a5f6e3bba0 Update translations and docs 2015-07-26 11:25:40 +02:00
Jakob Borg
d170660c25 Usage -> Statistics 2015-07-24 12:04:16 +02:00
Audrius Butkevicius
2202aaed51 Merge pull request #2087 from calmh/norestart
Add folders without restart (fixes #2063)
2015-07-24 08:06:36 +01:00
Audrius Butkevicius
cbefcd50cf Merge pull request #2088 from calmh/usagedata
Link to usage data (ref syncthing/website#17)
2015-07-24 07:55:41 +01:00
Jakob Borg
1acfa291a0 Link to usage data (ref syncthing/website#17) 2015-07-24 08:53:33 +02:00
Jakob Borg
21accd534c Add folders without restart (fixes #2063) 2015-07-24 08:20:57 +02:00
Audrius Butkevicius
de89d7a976 Merge pull request #2084 from calmh/norestart
Add devices without restart (fixes #2083)
2015-07-23 11:07:27 +01:00
Jakob Borg
319abebd70 Update all dependencies 2015-07-22 12:09:44 +02:00
Jakob Borg
76480adda5 Add devices without restart (fixes #2083) 2015-07-22 10:43:47 +02:00
Jakob Borg
e205f8afbb Don't error integration tests on unexpected EOF at Stop() 2015-07-22 10:43:33 +02:00
Audrius Butkevicius
42dc51784e Merge pull request #2079 from snnd/bugfix_reloads
fix(core): prevent endless reload on cache requests
2015-07-21 23:01:15 +01:00
Dennis Wilson
a4a46f480d refactor(core): eliminate global state of guiVersion and deviceId 2015-07-21 23:47:35 +02:00
Dennis Wilson
e34be16237 style(core): add simple flow, hoisting, stacktrace infos 2015-07-21 23:41:10 +02:00
Dennis Wilson
dfcc166918 fix(core): prevent endless reload on cache requests 2015-07-21 22:35:51 +02:00
Audrius Butkevicius
895d56ed04 Merge pull request #2076 from calmh/ignoredelete
Optionally ignore remote deletes
2015-07-21 19:48:06 +01:00
Jakob Borg
12eab4a8ba Optionally ignore remote deletes (fixes #1254) 2015-07-21 20:39:19 +02:00
Zillode
3eb2b1f7a2 Fix Systemd readme link 2015-07-21 20:32:35 +02:00
Audrius Butkevicius
6ecc9bf93a Merge pull request #2077 from calmh/conflictwins
Determine conflict winner based on change type and modification time (fixes #1848)
2015-07-21 15:05:08 +01:00
Jakob Borg
da4ebb6535 Determine conflict winner based on change type and modification time (fixes #1848) 2015-07-21 15:39:20 +02:00
Jakob Borg
6e4d33c741 Don't run tests with audit on 2015-07-20 15:46:14 +02:00
Jakob Borg
d3387e2a28 Make sure CPU profile actually gets written before exiting 2015-07-20 15:34:40 +02:00
Jakob Borg
491452a19d Improve performance for syncing many small files quickly
Without this, as soon as we'd touched 1000 files in the last minute
(which can happen), we got stuck doing cache cleaning all the time,
burning a lot of CPU time.
2015-07-20 15:30:44 +02:00
Jakob Borg
7d3257b222 Use soft shutdown when running tests 2015-07-20 15:05:15 +02:00
Jakob Borg
1836ef2884 Squashed commit of pull request #1981
Conflicts:
	gui/scripts/syncthing/core/controllers/syncthingController.js
	internal/auto/gui.files.go
2015-07-20 14:48:03 +02:00
Jakob Borg
43d6322d0f Merge pull request #2061 from calmh/atomicwriter
Add osutil.AtomicWriter (take two)
2015-07-20 14:27:36 +02:00
Jakob Borg
f0684d83e9 Add osutil.AtomicWriter
This captures the common pattern of writing to a temp file and moving it
to its real name only if everything went well. It reduces the amount of
code in some places where we do this, but maybe not as much as I
expected because the upgrade thing is still a special snowflake...
2015-07-20 14:27:14 +02:00
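The pattern being captured, as a minimal sketch (writeAtomic is illustrative; osutil.AtomicWriter's actual API differs):

    package main

    import (
        "io/ioutil"
        "os"
        "path/filepath"
    )

    // writeAtomic writes data to a temp file in the target directory and
    // renames it into place only if every step succeeded, so readers never
    // observe a partially written file.
    func writeAtomic(path string, data []byte) error {
        tmp, err := ioutil.TempFile(filepath.Dir(path), "tmp-")
        if err != nil {
            return err
        }
        defer os.Remove(tmp.Name()) // no-op after a successful rename
        if _, err := tmp.Write(data); err != nil {
            tmp.Close()
            return err
        }
        if err := tmp.Close(); err != nil {
            return err
        }
        return os.Rename(tmp.Name(), path) // atomic replacement on POSIX
    }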
Jakob Borg
3f3170818d Merge pull request #2072 from dva/patch-1
Double curly brace notation displaying
2015-07-20 14:25:27 +02:00
Jakob Borg
7683096fe1 Add dva 2015-07-20 14:25:07 +02:00
Jakob Borg
bb438bfb17 Squashed commit of pull request #1990
commit 4eb3ff55ba
Merge: ddb3fea a04b005
Author: Brian R. Becker <brbecker@gmail.com>
Date:   Sat Jul 11 20:45:30 2015 -0700

    Merge remote-tracking branch 'upstream/master'

commit ddb3fea0d9
Author: Brian R. Becker <brbecker@gmail.com>
Date:   Mon Jun 22 11:36:58 2015 -0700

    Corrected spelling in GUI message
2015-07-20 14:22:36 +02:00
Jakob Borg
a11aa295de Squashed commit of pull request #1875
commit d60fbce311
Author: Jacek Szafarkiewicz <szafar@linux.pl>
Date:   Mon Jun 1 11:16:36 2015 +0200

    Correct order of deb files

commit 3b2ecfcc45
Merge: f4daebb c23a601
Author: Jacek Szafarkiewicz <szafar@linux.pl>
Date:   Mon Jun 1 11:15:06 2015 +0200

    Merge github.com:syncthing/syncthing

    Conflicts:
    	build.go

commit f4daebb851
Author: Jacek Szafarkiewicz <szafar@linux.pl>
Date:   Tue May 26 12:58:25 2015 +0200

    Add me to AUTHORS

commit 9e77f4bea0
Author: Jacek Szafarkiewicz <szafar@linux.pl>
Date:   Tue May 26 12:57:40 2015 +0200

    Add systemd files to deb package
2015-07-20 14:18:07 +02:00
Denis A.
9a50f4ac1f Double curly brace notation displaying
Prevent double curly brace notation from displaying momentarily before angular.js compiles/interpolates the document
2015-07-20 12:18:48 +03:00
Jakob Borg
59e829e595 Translation & docs update 2015-07-19 13:34:11 +02:00
Jakob Borg
78dca5fe8b Assets & translations 2015-07-16 14:03:21 +02:00
Jakob Borg
8c816f64e4 Merge pull request #2064 from uok/infotext
Improve info text for device addresses (fixes #2044)
2015-07-16 14:02:35 +02:00
Ben Schulz
22c525e3fe Improve info text for device addresses (fixes #2044)
2015-07-16 10:15:33 +02:00
Jakob Borg
f3f6b03d85 Don't let folder ID escape into HTML tag ID:s (fixes #2059) 2015-07-16 08:13:10 +02:00
Audrius Butkevicius
00bebc317e Merge pull request #2062 from canton7/feature/issue-2041
Allow #editIgnores to scroll in browser (fixes #2041)
2015-07-15 17:48:15 +01:00
Antony Male
8f38e83aaf Allow #editIgnores to scroll in browser (fixes #2041)
class modal-open is applied to <body>, which ultimately means that the
browser will scroll to the modal's content. However, #editFolder was
finishing its close animation (and removing this modal-open class)
after #editIgnores had set modal-open (and had started its open
animation). The end result is that <body> ends up without modal-open
when #editIgnores is open, and so the browser doesn't properly scroll.

Instead, only open the #editIgnores once #editFolder has finished closing.
2015-07-15 14:20:57 +01:00
Jakob Borg
8fab7ec5e3 Decrease timing sensitivity of ignore.TestCache 2015-07-14 12:12:57 +02:00
Jakob Borg
50eb968109 Merge pull request #2060 from brgmnn/master
Added select-on-click for remote 'Device ID' and 'API Key'
2015-07-14 10:55:32 +02:00
Daniel Bergmann
569314be45 Added select-on-click for remote 'Device ID' and 'API Key'
Now these uneditable boxes match the behaviour when clicking on the ID
box in Actions > Show ID.
2015-07-13 17:01:13 +01:00
Jakob Borg
909d60464e Revert "Merge pull request #2053 from calmh/atomicwriter" (fixes #2058)
This reverts commit b611f72e08, reversing
changes made to a04b005e93.
2015-07-13 12:47:32 +02:00
Jakob Borg
d855abf8b0 Translation & docs update 2015-07-13 08:24:04 +02:00
Audrius Butkevicius
b611f72e08 Merge pull request #2053 from calmh/atomicwriter
Add osutil.AtomicWriter
2015-07-12 12:10:54 +01:00
Jakob Borg
44e3bec42e Add osutil.AtomicWriter
This captures the common pattern of writing to a temp file and moving it
to its real name only if everything went well. It reduces the amount of
code in some places where we do this, but maybe not as much as I
expected because the upgrade thing is still a special snowflake...
2015-07-12 14:28:59 +10:00
Jakob Borg
a04b005e93 Revert "Let suture logging bubble upwards"
This reverts commit 1b837116e6.
2015-07-11 11:12:20 +10:00
Jakob Borg
1b837116e6 Let suture logging bubble upwards 2015-07-11 10:52:57 +10:00
Jakob Borg
d16b04b683 Protocol dep update because I screwed up previous one 2015-07-10 19:58:56 +10:00
Audrius Butkevicius
d2e7a8004d Merge pull request #2048 from calmh/clusterconfigrace
Make sure connection is added to m.protoConn and m.rawConn before it's Start()ed (fixes #2034)
2015-07-10 08:46:31 +01:00
Jakob Borg
0c28216ee5 Make sure connection is added to m.protoConn and m.rawConn before it's Start()ed (fixes #2034) 2015-07-10 16:43:41 +10:00
Jakob Borg
5bb8ea7449 Remove one of two emits of events.DeviceConnected (ref #2034) 2015-07-10 16:12:59 +10:00
Jakob Borg
b78c515724 Debug output on request errors 2015-07-10 16:01:10 +10:00
Audrius Butkevicius
cb1a7a7bdc Update panic instructions (fixes #2039) 2015-07-09 22:43:16 +01:00
Audrius Butkevicius
b8dcb7c884 Update protocol package (fixes #2040) 2015-07-09 22:39:22 +01:00
Audrius Butkevicius
1ded554a15 Fix advanced option saving (fixes #2042) 2015-07-09 21:58:58 +01:00
Jakob Borg
bc0ce7b820 launchd: Log to files 2015-07-06 21:45:49 +02:00
Jakob Borg
1da3a57fe7 Asset rebuild 2015-07-05 19:44:55 +02:00
Jakob Borg
8697982302 Add canton7 2015-07-05 19:44:19 +02:00
Jakob Borg
c4168cf855 Merge pull request #2030 from canton7/feature/issue-2029
Use bootstrap grid instead of column-count:3 for aligning checkboxes (fixes #2029)
2015-07-05 19:41:44 +02:00
Antony Male
7023d3ca2b Use bootstrap grid instead of column-count:3 for aligning checkboxes (fixes #2029)
Upgrading to bootstrap 3.3.5 meant that checkboxes inside a div with
column-count:3 set would be unclickable in Chrome: in fact, the entire
div appears to sit on top of its contents, making interaction impossible.

This affected both the 'show folder with these devices' and 'these devices
can access this folder' sections of the UI.

I'm not sure what the underlying cause is, but moving to Bootstrap's
grid system appears to work around the issue. Devices/folders have to be
explicitly split out into rows, otherwise the final element appears offset.

To do this grouping by row, a new filter (groupFilter) has been added, which
turns an input of e.g. [1, 2, 3, 4, 5] with a groupSize of 3 into
[[1, 2, 3], [4, 5]]. However, altering the collection in this way throws
Angular into an infinite watch loop, terminating in infdig. m59peacemaker's
pmkr.filterStabilize (MIT) was added to work around this issue.

This also has the nice side-effect of wrapping the list of devices/folders
when the screen width decreases.

See also:
 - #2027 (bootstrap update which triggered this issue)
 - #1121 (last time it happened)
2015-07-05 17:21:02 +01:00
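The chunking groupFilter performs, sketched in Go for clarity (group is a hypothetical helper; the real filter lives in the Angular GUI code):

    package main

    import "fmt"

    // group splits xs into consecutive chunks of at most size elements.
    func group(xs []int, size int) [][]int {
        var out [][]int
        for len(xs) > size {
            out = append(out, xs[:size])
            xs = xs[size:]
        }
        return append(out, xs)
    }

    func main() {
        fmt.Println(group([]int{1, 2, 3, 4, 5}, 3)) // [[1 2 3] [4 5]]
    }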
Jakob Borg
57a5d13c47 Translation and docs update 2015-07-05 11:24:21 +02:00
Jakob Borg
500b96240b Don't show Failed Items on folder masters 2015-07-05 11:21:15 +02:00
Jakob Borg
13d961d41d Correctly show Override button when out of sync 2015-07-05 11:20:59 +02:00
Jakob Borg
fddc4c2fc0 Rebuild assets 2015-07-05 11:13:35 +02:00
Jakob Borg
061ec7369f Merge pull request #2027 from calmh/bootstrap
Update bootstrap
2015-07-05 11:07:13 +02:00
Jakob Borg
8366dbd8e0 Add link to home page (fixes #1993, fixes #1999) 2015-07-05 11:06:29 +02:00
Jakob Borg
b02047e4b5 Update Bootstrap 3.1.0 -> 3.3.5 2015-07-05 11:05:38 +02:00
Jakob Borg
e9545c4961 Merge pull request #2022 from brgmnn/master
Preserve setgid bit on local directories (fixes #2012)
2015-07-04 19:37:21 +02:00
Audrius Butkevicius
966a2b1df5 Merge pull request #2023 from calmh/advedit
Advanced configuration dialog
2015-07-04 15:08:40 +01:00
Jakob Borg
dec6540967 Implement "advanced configuration" dialog (fixes #2010) 2015-07-04 13:47:43 +02:00
Daniel Bergmann
3fe1673ce9 Preserve setgid bit on local directories (fixes #2012)
When setting the permissions on directories with ignore permissions off,
keep the setgid bit at its original value instead of clearing it.
2015-07-04 09:01:34 +01:00
Jakob Borg
e9e13474c9 Merge pull request #2021 from brgmnn/master
Fixed add device button being overlapped by footer (fixes #1950)
2015-07-03 08:56:33 +02:00
Daniel Bergmann
aee9093848 Fixed add device button being overlapped by footer (fixes #1950) 2015-07-02 16:16:42 +01:00
Jakob Borg
76822c7c34 Merge pull request #2018 from brgmnn/master
Added select ID text on click to gui
2015-07-02 11:01:37 +02:00
Daniel Bergmann
5c18d34d89 Added a contact email address for myself.
Added myself to the AUTHORS, NICKS and GUI contributors files. Also
fixed the sort order in AUTHORS and NICKS when adding myself as there
were a couple of entries in both that were not quite in alphabetical
order.
2015-07-02 09:45:22 +01:00
Daniel Bergmann
970a9c7552 Added select ID text on click to gui 2015-07-02 09:34:12 +01:00
Audrius Butkevicius
37a42dc408 Fix CSRF tests (fixes #2009) 2015-06-30 19:38:27 +01:00
Audrius Butkevicius
a03c9f9457 Merge pull request #2001 from calmh/failed-files
Show failed files in web UI
2015-06-30 15:26:24 +01:00
Jakob Borg
60004ebff1 Show FolderErrors result in UI (fixes #1437) 2015-06-30 14:41:48 +02:00
Jakob Borg
2d9fcf6828 Collect puller errors, send FolderErrors event 2015-06-30 14:41:47 +02:00
Jakob Borg
c8ac9721d7 Translation and docs update 2015-06-28 21:10:57 +02:00
Audrius Butkevicius
b1b68b58fe Update protocol package 2015-06-28 11:40:53 +01:00
Jakob Borg
ca21db9481 Merge pull request #2006 from AudriusButkevicius/timeout
Make ping timeout configurable (fixes #1751)
2015-06-28 07:45:02 +02:00
Audrius Butkevicius
93ad803073 Make ping timeout configurable (fixes #1751) 2015-06-27 12:34:41 +01:00
Audrius Butkevicius
6cc7f70a65 Update protocol package 2015-06-27 12:07:42 +01:00
Jakob Borg
2b0c33f74d Merge pull request #1996 from AudriusButkevicius/checkrace
Potential race between folder being added and scan (fixes #1986)
2015-06-26 12:56:07 +02:00
Audrius Butkevicius
dae1d36a23 Trim string slices upon loading config (fixes #1750) 2015-06-25 16:50:27 +01:00
Audrius Butkevicius
824fa8f17a Fix go lint warnings 2015-06-24 22:05:27 +01:00
Audrius Butkevicius
31cd0b943c Potential race between folder being added and scan (potentially fixes #1986) 2015-06-24 21:59:03 +01:00
Jakob Borg
070eced2f6 Merge pull request #1985 from calmh/fix-reset
Fix reset DB
2015-06-24 14:07:15 +02:00
Audrius Butkevicius
986f8dfb2e Remove dead variable 2015-06-24 08:43:33 +01:00
Audrius Butkevicius
8c0c03eb38 Merge pull request #1989 from AudriusButkevicius/session
Use different session cookies per device
2015-06-23 13:56:14 +01:00
Audrius Butkevicius
fd9bc20bc5 Merge pull request #1988 from calmh/dups
Don't rename duplicate folders (fixes #1675)
2015-06-23 11:17:30 +01:00
Audrius Butkevicius
089fca2319 Use different session cookies per device 2015-06-22 19:51:46 +01:00
Jakob Borg
e936890927 Don't rename duplicate folders (fixes #1675)
Renaming them puts the user in a difficult situation as they can't
rename them back in the GUI. This way, they need to fix the config in
the same way it got broken (manual editing or external tool).
2015-06-22 11:27:47 +02:00
Zillode
0450d48f89 Merge pull request #1856 from calmh/fix-1391
Serialize scans (fixes #1391)
2015-06-21 14:26:17 +02:00
Jakob Borg
2463819a3d Update translations and docs 2015-06-21 11:45:54 +02:00
Jakob Borg
2b2cae2d50 Fix reset DB
The reset of all folders failed when there was no data for a given
folder, as it was not returned by db.ListFolders then. But we don't
really care about that, we can "reset" it anyway...
2015-06-21 09:35:41 +02:00
Zillode
0f1b40da71 Merge pull request #1982 from calmh/fix-1978
Sanitize rescan interval values (fixes #1978)
2015-06-20 23:28:18 +02:00
Jakob Borg
f73d5a9ab2 Serialize scans and pulls (fixes #1391) 2015-06-20 23:01:40 +02:00
Jakob Borg
4eb0e24c6e Sanitize rescan interval values (fixes #1978) 2015-06-20 23:01:30 +02:00
Jakob Borg
1d2235abe7 Model must be running for tests 2015-06-20 23:00:33 +02:00
Jakob Borg
d347e54acb Don't start model until services have been added (fixes #1969) 2015-06-20 20:04:47 +02:00
Jakob Borg
b5198d8119 Merge pull request #1968 from calmh/newtests
Refactored integration tests
2015-06-20 19:24:04 +02:00
Jakob Borg
b8b5c5ff34 Merge pull request #1913 from Zillode/fix-reset
Fix 'reset' Rest API on windows
2015-06-20 11:43:05 +02:00
Jakob Borg
a738490a3b Update translation strings 2015-06-20 11:40:05 +02:00
Jakob Borg
54a8de2059 Merge remote-tracking branch 'syncthing/pr/1979'
* syncthing/pr/1979:
  Display Local State Summary (All Folders)
2015-06-20 11:39:31 +02:00
Jakob Borg
cb2c0e7ac5 Add fti7 2015-06-20 11:32:55 +02:00
Frank Isemann
510d309b8a Display Local State Summary (All Folders) 2015-06-19 21:52:19 +02:00
Jakob Borg
a7ce2a7aa5 Merge pull request #1963 from wsgcsysadmin/master
Put thisDeviceName first in page title to make small browser tabs distinguishable
2015-06-19 08:51:24 +02:00
Jakob Borg
cfe24ecdd9 Add wsgcsysadmin 2015-06-19 08:50:36 +02:00
Jakob Borg
c5e9cb025c Merge pull request #1977 from Zillode/fix-1976
Corrected API response when resetting folder (fixes #1976)
2015-06-19 08:48:40 +02:00
Jakob Borg
c3d07d60ca Refactored integration tests
Added internal/rc to remote control a Syncthing process and made the
"awaiting sync" determination reliable.
2015-06-19 08:47:47 +02:00
Lode Hoste
a0897a7456 Corrected API response when resetting folder (fixes #1976) 2015-06-19 08:30:19 +02:00
WSGCSysadmin
c50ba9267c Put thisDeviceName first in page title 2015-06-18 10:53:37 -05:00
Jakob Borg
423e69916c Merge pull request #1962 from Zillode/fix-pull-order-gui
Set default pull order in webGUI
2015-06-18 16:51:10 +02:00
Lode Hoste
b56c76f8ad Fix 'reset' Rest API on windows 2015-06-18 12:45:08 +02:00
Lode Hoste
cb2d2f000f Set default pull order in webGUI 2015-06-18 12:42:00 +02:00
Audrius Butkevicius
69af77a3bd Merge pull request #1967 from calmh/woops
Incorrect error condition on shortcuts
2015-06-18 11:07:07 +01:00
Jakob Borg
7767746d3e Incorrect error condition on shortcuts 2015-06-18 11:55:43 +02:00
Audrius Butkevicius
7219aaeb89 Merge pull request #1966 from calmh/overrideevents
Generate LocalIndexUpdated events in Override
2015-06-18 10:17:38 +01:00
Jakob Borg
7af1863e81 Generate LocalIndexUpdated events in Override
The override should look like we detected the changes locally, or the
GUI and other things won't update correctly.

This is/was caught by the Override integration test, on my newly
refactored integration test suite which is soon ready for prime time, so
a test is coming. :)
2015-06-18 10:49:24 +02:00
Jakob Borg
4beb42bf45 Merge pull request #1959 from AudriusButkevicius/lastfile
Add label next to "Last file received" (fixes #1952)
2015-06-18 10:45:27 +02:00
Audrius Butkevicius
12a3086a9e Add label next to "Last file received" (fixes #1952) 2015-06-16 12:12:34 +01:00
Audrius Butkevicius
198725216f Merge pull request #1957 from calmh/myid
Include myID in the StartupComplete event
2015-06-16 08:46:50 +01:00
Audrius Butkevicius
08647f1267 Merge pull request #1956 from calmh/earlyevents
Fix API event subscription
2015-06-16 08:46:29 +01:00
Audrius Butkevicius
87811efc30 Merge pull request #1955 from calmh/localver
Add version to LocalIndexUpdate event.
2015-06-16 08:45:09 +01:00
Jakob Borg
82c3e6f87f Include myID in the StartupComplete event
Nice to have...
2015-06-16 09:27:06 +02:00
Jakob Borg
1ac40a3043 Fix API event subscription
The API never got the first few events ("Starting" etc) as it subscribed
too late. Instead, set up a subscription for it early on. If the API is
configured not to run this is unnecessary but doesn't hurt very much.
2015-06-16 09:17:58 +02:00
Jakob Borg
127b0c3332 Add version to LocalIndexUpdate event.
Allows correlating LocalIndexUpdate events on one device with RemoteIndexUpdated on another to make sure the cluster has converged.
2015-06-16 08:30:15 +02:00
Jakob Borg
a6d9150b14 Skip extra newline between assets 2015-06-15 23:13:43 +02:00
Jakob Borg
7e5197c566 Help link for folder master 2015-06-15 22:34:39 +02:00
Jakob Borg
2d217e72bd Dependency update 2015-06-15 21:10:33 +02:00
Jakob Borg
12331cc62b Merge pull request #1943 from ralder/webgui-events-in-service
webgui: moved events controller to events service
2015-06-15 20:33:27 +02:00
Sergey Mishin
2449f1e1b6 webgui: moved events controller to events service 2015-06-15 19:05:46 +03:00
Audrius Butkevicius
6a6593c656 Merge pull request #1948 from calmh/symwarning
Don't warn about irrelevant symlinks, print path to culprit (ref #1945)
2015-06-15 10:41:51 +01:00
Jakob Borg
ad220d61f9 Merge pull request #1951 from AudriusButkevicius/voodoo
Voodoo
2015-06-15 11:31:03 +02:00
Audrius Butkevicius
1e35383b4d Merge pull request #1947 from calmh/metadata
Differentiate between content and metadata updates in ItemStarted/ItemFinished
2015-06-15 10:26:09 +01:00
Audrius Butkevicius
c8457ab005 Voodoo 2015-06-15 10:22:44 +01:00
Jakob Borg
070a308217 Don't warn about irrelevant symlinks, print path to culprit (ref #1945)
This skips the warning about "unsupported symlinks" for invalid or
deleted symlinks (and as a side effect also accepts them into the index,
which should be fine). It also prints the affected file paths to the
log. This should be in the hypothetical list of "errored files" we
should present instead of the cryptic "puller stopped" message in the
future...
2015-06-15 00:47:13 +02:00
Jakob Borg
c1f4477376 ItemStarted can be map[string]string 2015-06-14 22:59:21 +02:00
Jakob Borg
d728320ece New ItemStart/Finished type 'metadata' for shortcut updates 2015-06-14 22:56:41 +02:00
Audrius Butkevicius
fee0d7168a Merge pull request #1946 from calmh/guiassets
Default GUI override dir
2015-06-14 21:40:56 +01:00
Jakob Borg
7c23b32de3 Default GUI override dir
If STGUIASSETS is not set, look for assets in $confdir/gui by default.
Simplifies deploying overrides and stuff.
2015-06-14 22:28:40 +02:00
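The lookup amounts to a simple environment fallback, roughly (assetDir is an illustrative name, not the actual function):

    package main

    import (
        "os"
        "path/filepath"
    )

    // assetDir prefers the STGUIASSETS override and falls back to
    // <confdir>/gui when the variable is unset.
    func assetDir(confDir string) string {
        if dir := os.Getenv("STGUIASSETS"); dir != "" {
            return dir
        }
        return filepath.Join(confDir, "gui")
    }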
Jakob Borg
1437952aee Translation and docs update 2015-06-14 13:52:00 +02:00
Jakob Borg
a162157301 Add translation strings for trash can versioning 2015-06-14 11:08:25 +02:00
Audrius Butkevicius
6316bf3582 Merge pull request #1941 from AudriusButkevicius/errors
Correctly set and clear errors for missing folders (fixes #1937)
2015-06-13 23:47:20 +01:00
Audrius Butkevicius
1d856b4723 Correctly set and clear errors for missing folders (fixes #1937) 2015-06-13 23:45:54 +01:00
Audrius Butkevicius
7f56d5c23a Merge pull request #1935 from calmh/trashcan
Add trash can file versioning (fixes #1931)
2015-06-12 13:20:37 +01:00
Jakob Borg
a778763851 Add trash can file versioning (fixes #1931) 2015-06-12 13:30:49 +02:00
Audrius Butkevicius
983d7ec265 Merge pull request #1933 from calmh/fix-1907
More resilient broadcast handling (fixes #1907)
2015-06-11 14:59:31 +01:00
Jakob Borg
297769ef57 More resilient broadcast handling (fixes #1907)
My theory is that some error condition on the socket results in it
blocking for writes, which maybe also blocks reads... This separates the
two into separate services with their own socket, with restarts and
retries as appropriate on write timeouts and read/write errors. It
should be more robust, hopefully, but I have a hard time testing the
actual error conditions...
2015-06-11 15:06:05 +02:00
Jakob Borg
885d050e5f Correct docs site link 2015-06-11 14:24:39 +02:00
Jakob Borg
8fb4ce6cad Merge pull request #1927 from ralder/patch-1
fix folder status disappearing after syncthing restart
2015-06-11 08:47:53 +02:00
Jakob Borg
42738ab54d Add missing copyright notice 2015-06-11 08:46:57 +02:00
ralder
7d1250620e fix folder status disappearing after syncthing restart
sometimes after restarting the syncthing process, '/rest/db/status' for a folder may return data with 'state' set to an empty string
2015-06-10 16:48:16 +03:00
Jakob Borg
5c49b93c67 Links are nice, too 2015-06-10 00:04:53 +02:00
Jakob Borg
9a11f81fd3 Point to contribution guidelines and docs 2015-06-10 00:02:39 +02:00
Audrius Butkevicius
cba2e972fd Merge pull request #1810 from calmh/cfg-commit
Configuration commit thingy
2015-06-09 15:09:21 +01:00
Jakob Borg
76ad925842 Refactor config commit stuff to support restartless updates better
Includes restartless updates of the GUI settings (listening port etc) as
a proof of concept.
2015-06-09 15:41:22 +02:00
Jakob Borg
ef6f52f688 Correctly handle nil error in verbose logging (fixes #1921) 2015-06-09 09:04:03 +02:00
Audrius Butkevicius
197bfa9f11 Merge pull request #1919 from calmh/fix-1918
Start folders before GUI/API (fixes #1918)
2015-06-08 11:13:02 +01:00
Jakob Borg
145f8c7435 Start folders before GUI/API (fixes #1918) 2015-06-08 11:04:09 +02:00
Jakob Borg
a8b43ae598 Translation / docs update 2015-06-07 12:57:26 +02:00
Audrius Butkevicius
567f19bf68 Do not overwrite error value 2015-06-06 08:30:01 +01:00
Jakob Borg
5cd4cd2271 Merge pull request #1912 from AudriusButkevicius/warnings
Silence discovery warnings (fixes #1388)
2015-06-04 19:29:42 +02:00
Audrius Butkevicius
4180569443 Silence discovery warnings (fixes #1388)
The net.InterfaceAddrs() check is no longer performed in the constructor, as failing
it there meant we wouldn't start the read loop, which completely kills discovery.
2015-06-04 12:59:06 +01:00
Jakob Borg
5f4a92c8e6 Correct link 2015-06-03 19:47:39 +02:00
Jakob Borg
ccf3fed950 Merge remote-tracking branch 'syncthing/pr/1911'
* syncthing/pr/1911:
  replaced (not all) wiki links to new location docs.syncthing.net
2015-06-03 19:24:30 +02:00
Lars K.W. Gohlke
3626003f68 replaced (not all) wiki links to new location docs.syncthing.net 2015-06-03 19:09:36 +02:00
Audrius Butkevicius
11cb040ad1 Merge pull request #1880 from calmh/itemfinished-err
Fix ItemFinished
2015-06-03 16:50:33 +01:00
Jakob Borg
c1761cab49 Trigger ItemFinished when temp file creation fails instead of failing silently 2015-06-03 16:28:31 +02:00
Jakob Borg
25b25b5434 Merge pull request #1885 from AudriusButkevicius/moar-checks
Additional cases for detecting folders disappearing
2015-06-03 08:40:21 +02:00
Jakob Borg
8bdf66d9c0 Merge pull request #1906 from ralder/fix-style-top-menu-for-small-devices
fix language menu for small screen devices
2015-06-03 08:38:40 +02:00
Sergey Mishin
ccebdd142a fix webgui top menu for small screen devices 2015-06-02 17:48:31 +03:00
Jakob Borg
e952da7f91 Ensure we always have an up to date list of language names 2015-06-02 08:47:26 +02:00
Jakob Borg
5bd1e4a167 Re-add mistakenly removed languages 2015-06-02 08:33:36 +02:00
Jakob Borg
6d3de41751 Merge pull request #1905 from ralder/fix-missing-languages
fix missing languages (fixes #1902)
2015-06-02 08:12:06 +02:00
Sergey Mishin
f11bac6705 fix missing languages (fixes #1902) 2015-06-02 01:19:21 +03:00
Jakob Borg
c23a601cc6 Random number is too large for 32 bit archs (fixes #1894) 2015-06-01 09:33:13 +02:00
Jakob Borg
36d4c69fd6 Translation update 2015-05-31 09:37:02 +02:00
Jakob Borg
ca7e7fa0c4 Docs link, capitalization 2015-05-31 09:35:17 +02:00
Jakob Borg
1f6dd5dbb9 Add man pages to Debian package 2015-05-30 13:11:17 +02:00
Jakob Borg
db52646655 Include generated man pages 2015-05-30 13:05:55 +02:00
Jakob Borg
8bb18fa988 Create 'prerelease' step to run before releases 2015-05-30 10:39:27 +02:00
Jakob Borg
f0edaf2f8c Merge pull request #1884 from calmh/helplink
Show help link, add icons, tweak icon spacing
2015-05-30 10:32:16 +02:00
Jakob Borg
d632e3aa1b Show help link, add icons, tweak icon spacing 2015-05-30 10:31:45 +02:00
Audrius Butkevicius
f0e58fa804 Additional cases for detecting folders disappearing 2015-05-27 22:46:10 +01:00
Jakob Borg
ceced09d02 Merge pull request #1882 from calmh/folderstats
Reduce db writes for small files
2015-05-27 22:02:04 +02:00
Jakob Borg
7d48115b90 Reduce db writes for small files
We introduced the dbUpdater routine to handle many small files
efficiently, but the folder stats call is almost equally expensive as it
results in two distinct write transactions to the database. This moves
it to the same routine.

(Doesn't make a *huge* difference with leveldb actually, but reduces the
50k-files benchmark time by 25% on my experimental bolt branch...)
2015-05-27 18:36:26 +02:00
Bart De Vries
d2205228fb Make syncthing honor both the ignorePerms and FlagNoPermBits settings (fixes #1871) 2015-05-26 16:27:26 +02:00
Audrius Butkevicius
5417fb7287 Merge pull request #1872 from calmh/large-file-transfer
Large file transfer
2015-05-25 17:13:58 +01:00
Jakob Borg
3f59d6daff Add validation cache 2015-05-25 13:19:18 +02:00
Jakob Borg
0d659933aa Merge pull request #1868 from mogwa1/umask
Change permissions of newly created files and directories (fixes #1339)
2015-05-25 08:34:34 +02:00
Jakob Borg
3e8eabe833 Add mogwa1 2015-05-25 08:07:59 +02:00
Bart De Vries
badfc77339 Change permissions of newly created files and directories (fixes #1339) 2015-05-25 00:12:51 +02:00
Audrius Butkevicius
c51d3e59ea Merge pull request #1862 from calmh/allocs
Reduce allocations during iteration
2015-05-24 12:31:39 +01:00
Jakob Borg
c0d02a65c3 Reduce allocations during iteration
Reuses key byte slices to reduce allocations:

	benchmark                    old allocs     new allocs     delta
	Benchmark10kReplace          178112         168615         -5.33%
	Benchmark10kUpdateChg        567954         561557         -1.13%
	Benchmark10kUpdateSme        238691         228680         -4.19%
	Benchmark10kNeed2k           105382         103383         -1.90%
	Benchmark10kHaveFullList     60230          60230          +0.00%
	Benchmark10kGlobal           237484         227493         -4.21%

	benchmark                    old bytes     new bytes     delta
	Benchmark10kReplace          8725368       7661788       -12.19%
	Benchmark10kUpdateChg        58155152      57441025      -1.23%
	Benchmark10kUpdateSme        16130875      14996717      -7.03%
	Benchmark10kNeed2k           6561685       6338283       -3.40%
	Benchmark10kHaveFullList     7611112       7611135       +0.00%
	Benchmark10kGlobal           21415056      20300723      -5.20%
2015-05-24 13:08:14 +02:00
Audrius Butkevicius
3a203b8d83 Merge pull request #1861 from calmh/fix-1860
Be more lenient against errors when deleting (fixes #1860)
2015-05-24 00:58:46 +01:00
Audrius Butkevicius
feecdcc7a4 Merge pull request #1854 from calmh/eventmemallocs
Reuse a timer instead of allocating a new one in subscription.Poll
2015-05-23 23:23:46 +01:00
Audrius Butkevicius
51ad533be6 Merge pull request #1859 from calmh/fix-1858
UPnP discovery results must not be collected in a background goroutine (fixes #1858)
2015-05-23 22:58:09 +01:00
Jakob Borg
29da0bc8f5 Be more lenient against errors when deleting (fixes #1860) 2015-05-23 23:57:41 +02:00
Jakob Borg
bccf7fc2a8 Merge pull request #1857 from calmh/testhax
Refactor integration tests to be a little cleaner and more stable, I hope
2015-05-23 23:57:21 +02:00
Jakob Borg
7b6b5981c4 UPnP discovery results must not be collected in a background goroutine (fixes #1858) 2015-05-23 23:27:02 +02:00
Jakob Borg
9463192224 Refactor integration tests to be a little cleaner and more stable, I hope 2015-05-23 23:26:23 +02:00
Audrius Butkevicius
d1689f0012 Merge pull request #1855 from calmh/dboptsagain
Reduce db write cache to (2*) 4 MiB
2015-05-23 20:38:41 +01:00
Jakob Borg
8ed67fe349 Reduce db write cache to (2*) 4 MiB
I haven't been able to reproduce any performance advantage of having it
set higher and it reduces the memory footprint a bit.
2015-05-23 21:05:52 +02:00
Jakob Borg
90a1d99785 Reuse a timer instead of allocating a new one in subscription.Poll
This is surprisingly memory expensive when Poll gets called a lot, such
as when syncing lots of small files generating itemstarted/itemfinished
events. It's line number three in this heap profile on the
TestBenchmarkManyFiles test:

	jb@syno:~/src/github.com/syncthing/syncthing/test (master) $ go tool pprof ../bin/syncthing heap-13194.pprof
	Entering interactive mode (type "help" for commands)
	(pprof) top
	80.91MB of 83.05MB total (97.42%)
	Dropped 1024 nodes (cum <= 0.42MB)
	Showing top 10 nodes out of 85 (cum >= 1.75MB)
	      flat  flat%   sum%        cum   cum%
	      32MB 38.53% 38.53%    32.01MB 38.54%  github.com/syndtr/goleveldb/leveldb/memdb.New
	   22.16MB 26.68% 65.21%    22.16MB 26.68%  github.com/syndtr/goleveldb/leveldb/util.(*BufferPool).Get
	   13.02MB 15.68% 80.89%    13.02MB 15.68%  time.NewTimer
	    6.94MB  8.35% 89.24%     6.94MB  8.35%  github.com/syndtr/goleveldb/leveldb/memdb.(*DB).Put
	    3.18MB  3.82% 93.06%     3.18MB  3.82%  github.com/calmh/xdr.(*Reader).ReadBytesMaxInto

With this change the allocation is removed entirely.
2015-05-23 20:38:41 +02:00
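The reuse pattern, sketched under assumed types (the real events.Subscription differs):

    package main

    import "time"

    type subscription struct {
        events chan string
        timer  *time.Timer
    }

    func newSubscription() *subscription {
        t := time.NewTimer(0)
        if !t.Stop() {
            <-t.C // empty the channel before the first Reset
        }
        return &subscription{events: make(chan string, 16), timer: t}
    }

    // Poll reuses the one timer across calls instead of allocating a new
    // one with time.NewTimer on every invocation.
    func (s *subscription) Poll(timeout time.Duration) (string, bool) {
        s.timer.Reset(timeout)
        select {
        case ev := <-s.events:
            if !s.timer.Stop() {
                <-s.timer.C // drain so the next Reset starts clean
            }
            return ev, true
        case <-s.timer.C:
            return "", false
        }
    }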
Jakob Borg
e215cf6fb8 Add some REST API benchmarks 2015-05-23 20:15:54 +02:00
Jakob Borg
e827c0bd94 Run benchmarks in docker-all instead 2015-05-23 15:17:19 +02:00
Jakob Borg
8dd7e4e6b5 Run benchmarks when running tests 2015-05-23 15:08:17 +02:00
Jakob Borg
a2b94f4e06 Build Debian armhf and armel 2015-05-23 13:10:33 +02:00
Jakob Borg
8b0037ffab Make check-contribs a little more generous in recognizing a copyright header 2015-05-21 21:42:46 +02:00
Jakob Borg
4ab03f3bef Update CONTRIBUTING to not encourage changing AUTHORS 2015-05-21 20:58:17 +02:00
Jakob Borg
3c3db52f49 Merge pull request #1842 from Zillode/fix-1822
Support the creation of top-level folders on Windows (fixes #1822)
2015-05-21 20:50:52 +02:00
Jakob Borg
5b1e884659 Merge pull request #1847 from Zillode/fix-1833
Show date and time for web GUI notification (fixes #1833)
2015-05-21 20:49:04 +02:00
Lode Hoste
7ec6740e26 Show date and time for web GUI notification (fixes #1833) 2015-05-21 19:44:45 +02:00
Lode Hoste
f12b8c19be Support the creation of top-level folders on Windows (fixes #1822) 2015-05-21 19:21:19 +02:00
Jakob Borg
76174d31ce Merge pull request #1843 from Zillode/fix-1831
Set permanent UPnP lease when required
2015-05-21 12:50:48 +02:00
Lode Hoste
5042248260 Set permanent UPnP lease when required (fixes #1831) 2015-05-21 10:48:40 +02:00
Lode Hoste
1df6589533 Add status code to SOAP response 2015-05-21 09:54:39 +02:00
Jakob Borg
12f76b448c Merge pull request #1826 from AudriusButkevicius/screwflags
Don't check interface flags on Windows
2015-05-19 14:59:16 +02:00
Audrius Butkevicius
f112ef34f6 Don't check interface flags on Windows 2015-05-17 16:42:26 +01:00
Jakob Borg
e4b57a978f Translation update 2015-05-15 10:46:47 +02:00
Audrius Butkevicius
b51e09e0e8 Merge pull request #1811 from calmh/bc
Further reduce maximum db block cache
2015-05-15 11:12:25 +03:00
Jakob Borg
a3ba3f895c Further reduce maximum db block cache 2015-05-14 20:59:59 +02:00
Jakob Borg
947a129e12 Use updateLocals to ensure event generation 2015-05-14 08:56:42 +02:00
Jakob Borg
f3fe6a6cbd Assure existence of a folder marker in the test 2015-05-14 08:46:00 +02:00
Jakob Borg
9d150bef9f Refactor GetMtime for early return 2015-05-14 08:26:21 +02:00
Jakob Borg
65be18cc93 Merge pull request #1804 from cdhowie/virtual-mtimes
Implement virtual mtime support for Android
2015-05-14 08:14:29 +02:00
Jakob Borg
e27ea63900 Add cdhowie 2015-05-14 08:11:02 +02:00
Chris Howie
aa96f7b660 Virtual mtime support for environments that don't support altering mtimes (fixes #831) 2015-05-13 14:57:29 +00:00
Jakob Borg
053690d885 Don't attempt reschedule with zero interval
Can happen on manual rescans of folders with zero interval.
2015-05-13 16:51:08 +02:00
Jakob Borg
b9fc6397a3 Merge pull request #1791 from Zillode/test-master
Add unit test for overriding ignored files (fixes #1701)
2015-05-13 14:50:12 +02:00
Audrius Butkevicius
19b9f15da3 Merge pull request #1803 from calmh/kvspaceclear
Reset a namespaced kv-store
2015-05-13 15:43:52 +03:00
Jakob Borg
0b9441e1a4 Reset a namespaced kv-store 2015-05-13 14:41:47 +02:00
Audrius Butkevicius
2324d7420c Merge pull request #1800 from calmh/ursvc
Break out usage reporting into a service
2015-05-13 15:41:44 +03:00
Jakob Borg
c6b2ca8b19 Break out usage reporting into a service 2015-05-13 14:39:27 +02:00
Audrius Butkevicius
83ea8dc577 Merge pull request #1801 from calmh/fix-1799
Only restart global discovery on UPnP change if it was enabled to start with (fixes #1799)
2015-05-12 18:49:24 +03:00
Jakob Borg
d898277f62 stindex: add some missing newlines 2015-05-12 11:47:30 +02:00
Jakob Borg
a289cfb986 Only restart global discovery on UPnP change if it was enabled to start with (fixes #1799) 2015-05-12 09:48:55 +02:00
Jakob Borg
d603998617 Debian maintainer is release team 2015-05-11 21:59:40 +02:00
Jakob Borg
8bf9f4f5ab Build debian skeleton 2015-05-11 21:34:43 +02:00
Jakob Borg
2c4e6f2926 wip 2015-05-11 19:04:39 +02:00
Jakob Borg
21fbbc50cd hax 2015-05-11 18:39:53 +02:00
Audrius Butkevicius
99512418da Merge pull request #1796 from calmh/adaptive-cache-tweak
Tweak the database block cache size, and add config for it
2015-05-11 11:06:31 +03:00
Jakob Borg
c2f2d8771f Tweak the database block cache size, and add config for it 2015-05-11 09:08:25 +02:00
Jakob Borg
179c9ee8cc Merge pull request #1792 from Zillode/tempname
Retain part of the meaningful filename, add a unit test, and do a small refactoring
2015-05-10 20:44:55 +02:00
Jakob Borg
eb8a505287 Merge pull request #1774 from Zillode/fix-rename-windows
If rename works we are happy (fixes #1767)
2015-05-10 19:04:03 +02:00
Lode Hoste
a7482a3644 Added simple unit test for temporary filenames 2015-05-10 18:57:27 +02:00
Lode Hoste
a692348336 Retain meaningful names for temporary files 2015-05-10 18:57:27 +02:00
Jakob Borg
285dcc30cf Use md5 hash of filename for temporary file (fixes #1786) 2015-05-10 18:57:27 +02:00
Jakob Borg
bcf51aed83 Translation update 2015-05-10 14:59:12 +02:00
Zillode
1a11ce6211 Merge pull request #1784 from calmh/fix-1782
Implement upnpSvc.Stop() (fixes #1782)
2015-05-10 14:47:02 +02:00
Lode Hoste
10021c97a6 Add unit test for overriding ignored files (fixes #1701) 2015-05-10 11:23:29 +02:00
Audrius Butkevicius
e869e3c534 Merge pull request #1789 from calmh/reduce-default-lease
Set default UPnP lease time to 60 minutes
2015-05-10 01:43:10 +03:00
Lode Hoste
f6416285db If rename works we are happy (fixes #1767) 2015-05-10 00:00:24 +02:00
Jakob Borg
7f0593cd2d Set default UPnP lease time to 60 minutes 2015-05-09 22:17:53 +02:00
Jakob Borg
e4c41718d8 Fix upnp mapping name 2015-05-09 20:04:15 +02:00
Jakob Borg
7234553990 Implement upnpSvc.Stop() (fixes #1782) 2015-05-09 12:49:58 +02:00
Jakob Borg
5761efdb73 Remove system/editor specific ignores (ref #1778) 2015-05-08 20:34:06 +02:00
Jakob Borg
1133192a86 Merge remote-tracking branch 'syncthing/pr/1779'
* syncthing/pr/1779:
  Remove stray VIM swap file
2015-05-08 20:32:02 +02:00
Chris Howie
af3288043a Remove stray VIM swap file 2015-05-08 16:11:32 +00:00
Jakob Borg
24a348f6a8 Use larger files for large file transfer benchmark 2015-05-08 14:59:37 +02:00
Audrius Butkevicius
5528b6c231 Merge pull request #1775 from calmh/fix-1765
Trigger pull check on remote index updates (fixes #1765)
2015-05-08 11:45:03 +03:00
Jakob Borg
245bd1eb17 Trigger pull check on remote index updates (fixes #1765)
Without this, when an index update comes in we only do a new pull if the
remote `localVersion` was increased. But it may not be, because the
index is sent alphabetically and the file with the highest local version
may come first. In that case we'll never do a new pull when the rest of
the index comes in, and we'll be stuck in idle but with lots of out of
sync data.
2015-05-08 10:02:46 +02:00
Jakob Borg
03506db76c Fix rename with capitalization test 2015-05-08 10:02:19 +02:00
Jakob Borg
cb5ef26020 Revert "Enforce line endings when cloning (fixes #1766)"
This reverts commit 2361a0dd6e.
2015-05-07 22:55:31 +02:00
Jakob Borg
2493ae4c2c Merge pull request #1773 from Zillode/fix-browser
Do not launch browser when running integration tests
2015-05-07 22:23:47 +02:00
Jakob Borg
2bd88344ad Merge pull request #1772 from Zillode/fix-lf
Enforce line endings when cloning (fixes #1766)
2015-05-07 22:03:20 +02:00
Lode Hoste
4950980d69 Don't launch browser for integration tests 2015-05-07 20:59:12 +02:00
Lode Hoste
2361a0dd6e Enforce line endings when cloning (fixes #1766) 2015-05-07 20:32:18 +02:00
Audrius Butkevicius
152cdd1750 Merge pull request #1771 from calmh/stindex
Simplify stindex
2015-05-07 11:13:49 +03:00
Jakob Borg
31797a5831 Simplify stindex 2015-05-07 09:59:04 +02:00
Jakob Borg
9308c42cff Skip boring concurrency test in internal/db 2015-05-07 09:57:49 +02:00
Audrius Butkevicius
c096cd34b9 Merge pull request #1763 from calmh/win-executable
Set the execute bit on Windows executables (fixes #1762)
2015-05-06 08:28:06 +03:00
Jakob Borg
5fc0808f28 Set the execute bit on Windows executables (fixes #1762) 2015-05-05 21:45:26 +02:00
Jakob Borg
e6866ee980 strings.TrimLeft is not actually TrimPrefix 2015-05-05 13:53:11 +02:00
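The gotcha, for the record: strings.TrimLeft strips any run of the characters in its cutset, while strings.TrimPrefix removes one exact leading string.

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        fmt.Println(strings.TrimLeft("banana", "ab"))   // "nana": strips chars from the set {a, b}
        fmt.Println(strings.TrimPrefix("banana", "ba")) // "nana": removes the exact prefix
        fmt.Println(strings.TrimPrefix("banana", "ab")) // "banana": prefix doesn't match
    }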
Jakob Borg
45631d30b0 Merge pull request #1756 from LordLandon/master
Fix #1728 by using latest version property
2015-05-04 18:34:34 +02:00
Lord Landon "Warbles" Agahnim
0f432a0844 Use actual release version for release note link (fixes #1728) 2015-05-04 08:23:24 -07:00
Jakob Borg
df59bc7194 Merge pull request #1747 from Zillode/fix-1743
Partial fix for #1743
2015-05-04 10:51:40 +02:00
Jakob Borg
ff4706e450 Merge branch 'pr-1748'
* pr-1748:
  Reschedule before scan
  Use a channel instead of locks
  Reschedule the next scan interval (fixes #1591)
2015-05-04 10:40:14 +02:00
Jakob Borg
fc013bd04c Add LordLandon 2015-05-04 10:36:40 +02:00
Audrius Butkevicius
7e29a8b927 Merge pull request #1755 from calmh/fix-1754
Wait for stdout/stderr to close (fixes #1754)
2015-05-03 22:45:00 +03:00
Jakob Borg
67ae7a0b6c Wait for stdout/stderr to close (fixes #1754) 2015-05-03 17:36:01 +02:00
Jakob Borg
bd5a64bac0 Reschedule before scan 2015-05-03 14:18:50 +02:00
Jakob Borg
1bd85d8baf Use a channel instead of locks 2015-05-03 14:18:32 +02:00
Lode Hoste
fe34b08ece Reschedule the next scan interval (fixes #1591) 2015-05-03 12:48:44 +02:00
Jakob Borg
33048f88b8 Merge pull request #1752 from alex2108/master
Distinguish files with same name but different extension in staggered versioner (fixes #1738)
2015-05-03 12:32:20 +02:00
Alexander Graf
0ec01f4e78 Distinguish files with same name but different extension in staggered versioner (fixes #1738) 2015-05-03 10:36:46 +02:00
Jakob Borg
d0ebf06ff8 Translation update 2015-05-03 08:29:29 +02:00
Jakob Borg
687b249034 Don't create stopped folder in staggered versioning (fixes #1749) 2015-05-03 08:22:11 +02:00
Jakob Borg
d1528dcff0 Remove old name conversion from staggered versioning 2015-05-03 08:22:11 +02:00
Jakob Borg
1c31cf6319 Merge pull request #1744 from Zillode/fix-1692
Upgrade running Syncthing instances (fixes #1692)
2015-05-02 15:17:45 +02:00
Lode Hoste
67f0c9bef0 Do not remove the file when renaming on a case-insensitive platform (fixes #1743) 2015-05-01 23:43:45 +02:00
Lode Hoste
d54c366150 Upgrade running Syncthing instances (fixes #1692) 2015-05-01 23:38:54 +02:00
Jakob Borg
4ff535f883 Twitter link (lazily wrong icon, because not in Glyphicons...) 2015-05-01 23:32:51 +02:00
Lode Hoste
58b15f9452 Limit alterfiles to a single operation per file 2015-05-01 13:03:03 +02:00
Lode Hoste
dedca59ac9 Add capitalization changes to integration tests 2015-05-01 12:11:57 +02:00
Jakob Borg
3cbddfe545 Merge pull request #1745 from Zillode/fix-1678
Added test for combining case insensitive and negated patterns (fixes #1678)
2015-05-01 09:20:51 +02:00
Lode Hoste
e3cae69495 Added test for combining case insensitive and negated patterns (fixes #1678) 2015-05-01 00:58:44 +02:00
Audrius Butkevicius
aee40316f8 Merge pull request #1732 from calmh/guisvc
Break out GUI into an API service
2015-04-30 22:15:47 +01:00
Audrius Butkevicius
d754f9ae89 Merge pull request #1742 from calmh/adaptive-cache
Adaptive database cache size
2015-04-30 21:56:40 +01:00
Audrius Butkevicius
3c50b3a9e0 Merge pull request #1741 from calmh/verbose
Verbose logging
2015-04-30 21:54:49 +01:00
Jakob Borg
fb312a71f7 Add verbose logging (fixes #179) 2015-04-30 20:47:21 +02:00
Jakob Borg
136d79eaa3 Break out GUI into an API service 2015-04-30 20:36:07 +02:00
Jakob Borg
834336499a Adaptive database cache size 2015-04-30 20:25:44 +02:00
Jakob Borg
c9da8237df Report usage statistics after transfer bench 2015-04-30 08:43:57 +02:00
Audrius Butkevicius
9638dcda0a Merge pull request #1735 from calmh/kinder-rehashing
Use fewer hasher routines when there are many folders.
2015-04-29 20:43:51 +01:00
Jakob Borg
756c5a2604 Use fewer hasher routines when there are many folders. 2015-04-29 21:26:08 +02:00
Jakob Borg
a9c31652b6 Merge branch 'pr-1725'
* pr-1725:
  typos and spelling correction
2015-04-29 17:09:30 +02:00
dartraiden
32a76901a9 typos and spelling correction 2015-04-29 15:59:47 +02:00
Audrius Butkevicius
a5e11c7489 Merge pull request #1730 from calmh/bug-1721
Don't hang when attempting multicast discovery on non-multicast interfaces
2015-04-29 11:01:25 +01:00
Audrius Butkevicius
f2b12014e1 Merge pull request #1729 from calmh/lint-clean
Run vet and lint. Make us lint clean.
2015-04-29 10:16:36 +01:00
Jakob Borg
60fcaebfdb Run vet and lint. Make us lint clean. 2015-04-29 10:38:02 +02:00
Audrius Butkevicius
32fe2cb659 Merge pull request #1700 from calmh/upnpsvc
Break out UPnP port mapping into a service
2015-04-29 00:18:49 +01:00
Jakob Borg
1207d54fdd Tweak UPnP discovery, avoid non-multicast interfaces (fixes #1721) 2015-04-28 22:32:10 +02:00
Stefan Tatschner
3932884688 Drop systemd README instruction in favor of the wiki 2015-04-28 14:11:12 +02:00
Audrius Butkevicius
19a2042746 Merge pull request #1723 from calmh/bug-1722
Handle conflict with local delete (fixes #1722)
2015-04-28 10:39:22 +01:00
Jakob Borg
4c6eb137da Merge pull request #1720 from AudriusButkevicius/ignores
Matcher is always there
2015-04-28 11:35:12 +02:00
Jakob Borg
57ec2ff915 Handle conflict with local delete (fixes #1722) 2015-04-28 11:34:16 +02:00
Audrius Butkevicius
77a161a087 Matcher checks nil receiver 2015-04-28 10:17:14 +01:00
Jakob Borg
0642402449 Break out UPnP port mapping into a service 2015-04-28 10:25:25 +02:00
Audrius Butkevicius
50d377d9fe Fix integration tests 2015-04-28 00:09:44 +01:00
Jakob Borg
f5211b0697 Add some more cache forbidding headers, for various user agents. 2015-04-27 09:08:55 +02:00
Jakob Borg
fd4ea46fd7 Merge pull request #1708 from jarlebring/upnp_close_conn_fix
Fix for routers that cannot handle many open HTTP connections
2015-04-26 22:54:47 +02:00
Elias Jarlebring
8d8546868d Fix for routers that cannot handle many open HTTP connections
Some routers do not respond when too many subsequent SOAP requests are sent, and generate an EOF error in DefaultClient.Do(req). This modification sets req.Close = true, following the description at http://stackoverflow.com/questions/17714494/golang-http-request-results-in-eof-errors-when-making-multiple-requests-successi
2015-04-26 22:31:43 +02:00
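A minimal sketch of the workaround (soapRequest is an illustrative helper, not the exact upnp package code):

    package main

    import (
        "net/http"
        "strings"
    )

    // soapRequest forces the TCP connection closed after each request, so
    // routers that mishandle kept-alive connections don't return EOF on
    // the next one.
    func soapRequest(url, body string) (*http.Response, error) {
        req, err := http.NewRequest("POST", url, strings.NewReader(body))
        if err != nil {
            return nil, err
        }
        req.Close = true // one connection per SOAP request
        return http.DefaultClient.Do(req)
    }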
Audrius Butkevicius
8ce547edeb Fix HTML (fixes #1707) 2015-04-26 20:14:34 +01:00
Jakob Borg
a17c48aed6 Merge pull request #1705 from AudriusButkevicius/glob
Add osutil.Glob to deal with Windows (fixes #1690)
2015-04-26 18:28:25 +02:00
Audrius Butkevicius
d12db3e7b8 Add osutil.Glob to deal with Windows (fixes #1690) 2015-04-26 16:37:50 +01:00
Jakob Borg
15b87ae297 Merge pull request #1704 from jarlebring/upnp_caps
Fix capitalization in HTTP-header in SOAP request (fixes #1696)
2015-04-26 20:10:14 +09:00
Jakob Borg
02fdf59839 Add jarlebring 2015-04-26 19:40:02 +09:00
jarlebring
d9da02b7a8 Formatting with gofmt 2015-04-26 12:37:37 +02:00
Elias Jarlebring
8f2ad6418d Fix capitalization in HTTP-header in SOAP request (fixes #1696)
Some routers are sensitive to the capitalization of "SOAPAction" in the HTTP header of a SOAP request. This modification follows the recommendation for preserving caps in HTTP headers in Go described at http://stackoverflow.com/questions/26351716/how-to-keep-key-case-sensitive-in-request-header-using-golang?lq=1
2015-04-26 12:16:40 +02:00
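The underlying Go detail: http.Header.Set canonicalizes keys, rewriting "SOAPAction" as "Soapaction", while assigning to the header map directly preserves the exact case. A sketch (setSOAPAction is an illustrative name):

    package main

    import "net/http"

    // setSOAPAction bypasses Header.Set's canonicalization by writing
    // to the underlying map, keeping the key exactly as "SOAPAction".
    func setSOAPAction(req *http.Request, action string) {
        req.Header["SOAPAction"] = []string{action}
    }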
Jakob Borg
ff984425a3 Merge pull request #1703 from AudriusButkevicius/page
Add pagination to Out of sync item list (fixes #1509)
2015-04-26 18:42:17 +09:00
Audrius Butkevicius
ac1058359f Rebuild assets 2015-04-26 00:22:30 +01:00
Audrius Butkevicius
9afbca3001 Add pagination to Out of sync item list (fixes #1509) 2015-04-26 00:22:26 +01:00
Audrius Butkevicius
ec3f17cb9c Add angular-dirPagination 2015-04-25 22:52:52 +01:00
Audrius Butkevicius
73b9d5c5f9 Merge pull request #1698 from calmh/pull-order
Configurable file pull order (alphabetic, random, by size or age)
2015-04-25 15:41:02 +01:00
Audrius Butkevicius
ecc8591c95 Merge pull request #1699 from calmh/connsvc
Break out connection handling into a service
2015-04-25 15:37:08 +01:00
Audrius Butkevicius
696b67e4b1 Merge pull request #1697 from calmh/auditsvc
Add audit log feature
2015-04-25 15:34:34 +01:00
Jakob Borg
266a5116a1 Break out connection handling into a service 2015-04-25 23:21:42 +09:00
Jakob Borg
131f2be857 Add audit log feature 2015-04-25 23:20:39 +09:00
Jakob Borg
be7b3a9952 Configurable file pull order (alphabetic, random, by size or age) 2015-04-25 23:20:21 +09:00
Jakob Borg
bb31b1785b Add a service manager to main (future use) 2015-04-25 23:16:46 +09:00
Jakob Borg
2a60f4b1e9 Add .gitattributes; normalize line endings 2015-04-25 23:16:46 +09:00
Jakob Borg
33a4fb5a1a Fix folder check tests 2015-04-25 23:16:46 +09:00
Audrius Butkevicius
aece6e8b6c Merge pull request #1689 from calmh/nolocks
events.Subscription.Poll does not seem to require locking
2015-04-24 10:26:58 +01:00
Jakob Borg
7bf55dd14f events.Subscription.Poll does not seem to require locking
This is a large source of output from the new lock logging, and it
doesn't seem to accomplish anything useful that I can see. Running
integration with the race detector to make sure...
2015-04-24 11:25:42 +09:00
Jakob Borg
e158f17c2b Adjust sync test intervals to be less latency sensitive 2015-04-24 11:25:24 +09:00
Jakob Borg
c5027d9478 Merge branch 'pr-1688'
* pr-1688:
  Minor fixup
  Add tests, fix getCaller, replace wg.Done with wg.Wait
2015-04-24 09:43:52 +09:00
Jakob Borg
36c1d82146 Minor fixup 2015-04-24 09:43:40 +09:00
Audrius Butkevicius
bd4f404d45 Add tests, fix getCaller, replace wg.Done with wg.Wait 2015-04-23 20:09:14 +01:00
Jakob Borg
43d39844f7 Merge pull request #1685 from AudriusButkevicius/mut
Add mutex logging
2015-04-23 21:16:23 +09:00
Audrius Butkevicius
e041a4d212 Track RUnlockers while locking a RWMutex 2015-04-23 11:29:23 +01:00
Audrius Butkevicius
433b923ea7 Add mutex logging 2015-04-23 10:54:14 +01:00
Audrius Butkevicius
f8f1c72b44 Merge pull request #1686 from calmh/major-upgrade-v11
Allow major upgrades
2015-04-23 09:31:58 +01:00
Jakob Borg
542716e216 Allow major upgrades 2015-04-23 17:13:11 +09:00
Jakob Borg
b35958d024 Avoid spurious request for /qr?text={{myID}} (fixes #1679) 2015-04-22 09:37:18 +09:00
Audrius Butkevicius
9ee3541655 Merge pull request #1673 from calmh/filestatus-json
Clean up REST JSON a little further
2015-04-21 17:11:06 +01:00
Jakob Borg
bf7d84c12a Clean up REST JSON a little further 2015-04-21 23:28:58 +09:00
Audrius Butkevicius
34c691087e Merge pull request #1674 from calmh/rc-upgrade
Loosen the requirements on what can be upgraded to what
2015-04-21 08:42:33 +01:00
Jakob Borg
08c383012f Loosen the requirements on what can be upgraded to what 2015-04-21 09:06:10 +09:00
Jakob Borg
e2420495f3 Fix type in device sort (fixes #1668) 2015-04-20 22:18:19 +09:00
Audrius Butkevicius
d530c5eda7 Merge pull request #1665 from calmh/wat
Don't initialize subscription in init()
2015-04-20 08:12:58 +01:00
Audrius Butkevicius
ef7420ecf6 Merge pull request #1666 from calmh/cpu-remind
Reminder in debug output to explain high CPU usage
2015-04-20 08:09:56 +01:00
Jakob Borg
c905a41e2a Reminder in debug output to explain high CPU usage 2015-04-20 14:29:38 +09:00
Jakob Borg
42ff4b5bf0 changelog.go should not be built 2015-04-20 14:03:50 +09:00
Jakob Borg
4fb74a32cc Don't initialize subscription in init()
By doing it in init(), the monitor process also gets a subscription thing
running, which is unnecessary (and really confused me when seeing it in
the debug output).
2015-04-20 12:58:58 +09:00
Jakob Borg
c741465328 Use versionString() in about modal (fixes #1663) 2015-04-20 08:23:59 +09:00
Jakob Borg
fbca537a40 Merge pull request #1655 from kamadak/fix-nil-deref
Fix nil pointer dereferences in REST with non-existent folders
2015-04-19 17:20:47 +09:00
Jakob Borg
83420b0199 Merge pull request #1654 from AudriusButkevicius/fixes
Fix capitalization (fixes #1652, fixes #1649)
2015-04-19 17:20:05 +09:00
KAMADA Ken'ichi
33d3ba1b45 Fix nil pointer dereferences in REST with non-existent folders 2015-04-18 22:41:47 +09:00
Audrius Butkevicius
497f85a236 Fix capitalization (fixes #1652, fixes #1649) 2015-04-18 11:23:21 +01:00
Audrius Butkevicius
a624c302ab Merge pull request #1648 from calmh/scanner-batches
Don't buffer large files a long time while scanning
2015-04-17 09:05:09 +01:00
Jakob Borg
cebe21a3af Don't buffer large files a long time while scanning 2015-04-17 16:40:09 +09:00
Audrius Butkevicius
9eb679d70a Merge pull request #1647 from calmh/fix-localindexupdated
Homogenize the LocalIndexUpdated event
2015-04-17 08:14:38 +01:00
Jakob Borg
6d84443db8 Homogenize the LocalIndexUpdated event
It had two different formats, and we use "items" instead of "numFiles"
in other places.

(Discovered while documenting :)
2015-04-17 14:22:06 +09:00
Jakob Borg
da8a1f242c Merge pull request #1646 from AudriusButkevicius/readonly
Make targets writeable before removal on Windows (fixes #1610)
2015-04-17 14:21:39 +09:00
Jakob Borg
946d98b71f Merge pull request #1645 from AudriusButkevicius/tests
Fix tests on Windows (fixes #1531)
2015-04-17 14:20:53 +09:00
Audrius Butkevicius
dff51fc707 Make targets writeable before removal on Windows (fixes #1610) 2015-04-16 22:53:53 +01:00
Audrius Butkevicius
7d954dd5d1 Fix tests on Windows (fixes #1531) 2015-04-16 21:18:17 +01:00
Jakob Borg
c6300a5da8 Tone down UPnP errors a little 2015-04-16 23:45:12 +09:00
Jakob Borg
9359daa0d9 Merge branch 'pr-1636'
* pr-1636:
  Store and use _localStorage object
  fix using detect localStorage
2015-04-16 23:44:59 +09:00
Jakob Borg
2322e9cff7 Store and use _localStorage object 2015-04-16 23:44:34 +09:00
Jakob Borg
a876e1e348 Merge remote-tracking branch 'syncthing/pr/1636' into pr-1636
* syncthing/pr/1636:
  fix localStorage detection
2015-04-16 23:32:48 +09:00
Jakob Borg
6a863c8f71 Translation update 2015-04-16 23:27:27 +09:00
Jakob Borg
392b006b06 Add Moter8 2015-04-16 23:23:34 +09:00
Audrius Butkevicius
96289f42b7 Merge pull request #1644 from syncthing/timeout
UPnP refactor/fixes
2015-04-16 14:32:16 +01:00
Audrius Butkevicius
1b69c2441c Make UPnP discovery requests on each interface explicitly (fixes #1113) 2015-04-16 14:23:36 +01:00
Audrius Butkevicius
8ca85a4918 Merge pull request #1639 from calmh/events
Improve event handling a little bit.
2015-04-16 14:18:52 +01:00
Audrius Butkevicius
2a31031cbc Add unit suffix to UPnP settings 2015-04-16 10:32:22 +01:00
Audrius Butkevicius
d148cd8ccc Make UPnP timeout configurable 2015-04-16 10:32:12 +01:00
Jakob Borg
d1cc1828b8 Improve ItemStarted/ItemFinished events
- Remove full details from ItemStarted (unnecessary, incorrect CamelCase)
- Add "type" ("file" or "dir") to both events
- Add "action" (what we tried to do - "delete" or "update") to both events.
2015-04-14 23:31:39 +09:00
Jakob Borg
069e8cf122 Don't schedule summaries on all state changes
Prior to this change we schedule summaries on each state change, i.e.
scanning->idle and idle->scanning, which is unnecessary. Now we only do
it on index updates, plus the immediate one on going syncing->idle.
2015-04-14 20:57:42 +09:00
Audrius Butkevicius
45cbcaca6d Merge pull request #1638 from calmh/lstat
Work around broken Lstat on Android
2015-04-14 11:59:20 +01:00
Jakob Borg
102a2db1f3 Work around broken Lstat on Android 2015-04-14 19:53:49 +09:00
Sergey Mishin
9f81c85ca7 Fix localStorage feature detection 2015-04-13 19:07:39 +03:00
Audrius Butkevicius
ba4a6fc0c5 Merge pull request #1633 from calmh/errorstate
Move folder errors to state
2015-04-13 00:48:13 +01:00
Jakob Borg
aa803ce2ff Move folder errors to state
The "Invalid" config attribute is retained for errors discovered during
config loading (empty path, duplicate ID). This can only be set or
cleared at config loading time.

Errors discovered during runtime (I/O problems, etc) are now in the
folder state instead. Changes to these are sent as any other folder
state change.
2015-04-13 07:43:45 +09:00
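A hedged sketch, not Syncthing's actual types, of the split this commit describes between load-time "Invalid" and runtime errors carried in folder state:

```go
package main

import "fmt"

// Not Syncthing's actual types; a sketch of the split described above.
type folderConfig struct {
	ID      string
	Path    string
	Invalid string // set only at config load time (empty path, duplicate ID)
}

type folderState struct {
	State string // e.g. "idle", "scanning", "error"
	Err   error  // runtime error (I/O problems etc.), nil when healthy
}

func main() {
	cfg := folderConfig{ID: "default", Path: "/data"}
	st := folderState{State: "error", Err: fmt.Errorf("read-only filesystem")}
	// Runtime errors travel with state changes, not with the config.
	fmt.Println(cfg.Invalid == "", st.State, st.Err)
}
```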
Jakob Borg
a027a60f5d Correctly feature detect localStorage (fixes #1632) 2015-04-13 06:50:07 +09:00
Jakob Borg
270649535e Merge pull request #1625 from Moter8/patch-1
Reword and clarify some sentences.
2015-04-10 13:48:14 +02:00
Carsten H
cf80ba71f4 Reword and clarify some sentences 2015-04-10 13:46:38 +02:00
276 changed files with 29968 additions and 8671 deletions

.gitattributes (vendored, new file, 9 changes)

@@ -0,0 +1,9 @@
# Text files use LF line endings in this repository
* text=auto
# Except the dependencies, which we leave alone
Godeps/** -text=auto
# Diffs on these files are meaningless
gui.files.go -diff
*.svg -diff

.gitignore (vendored, 4 changes)

@@ -3,8 +3,6 @@ syncthing.exe
*.tar.gz
*.zip
*.asc
*.sublime*
.idea/
.jshintrc
coverage.out
files/pidx
@@ -12,7 +10,7 @@ bin
perfstats*.csv
coverage.xml
!gui/scripts/syncthing
.DS_Store
syncthing.md5
syncthing.exe.md5
RELEASE
deb

AUTHORS (16 changes)

@@ -3,26 +3,37 @@
Aaron Bieber <qbit@deftly.net>
Alexander Graf <register-github@alex-graf.de>
Andrew Dunham <andrew@du.nham.ca>
Audrius Butkevicius <audrius.butkevicius@gmail.com>
Antony Male <antony.male@gmail.com>
Arthur Axel fREW Schmidt <frew@afoolishmanifesto.com> <frioux@gmail.com>
Audrius Butkevicius <audrius.butkevicius@gmail.com>
Bart De Vries <devriesb@gmail.com>
Ben Curthoys <ben@bencurthoys.com>
Ben Schulz <ueomkail@gmail.com> <uok@users.noreply.github.com>
Ben Sidhom <bsidhom@gmail.com>
Brandon Philips <brandon@ifup.org>
Brendan Long <self@brendanlong.com>
Brian R. Becker <brbecker@gmail.com>
Caleb Callaway <enlightened.despot@gmail.com>
Carsten Hagemann <moter8@gmail.com>
Cathryne Linenweaver <cathryne.linenweaver@gmail.com> <Cathryne@users.noreply.github.com>
Chris Howie <me@chrishowie.com>
Chris Joel <chris@scriptolo.gy>
Colin Kennedy <moshen.colin@gmail.com>
Daniel Bergmann <dan.arne.bergmann@gmail.com> <brgmnn@users.noreply.github.com>
Daniel Martí <mvdan@mvdan.cc>
Denis A. <denisva@gmail.com>
Dennis Wilson <dw@risu.io>
Dominik Heidler <dominik@heidler.eu>
Elias Jarlebring <jarlebring@gmail.com>
Emil Hessman <emil@hessman.se>
Erik Meitner <e.meitner@willystreet.coop>
Federico Castagnini <federico.castagnini@gmail.com>
Felix Ableitner <me@nutomic.com>
Felix Unterpaintner <bigbear2nd@gmail.com>
Francois-Xavier Gsell <fxgsell@gmail.com>
Frank Isemann <frank@isemann.name>
Gilli Sigurdsson <gilli@vx.is>
Jacek Szafarkiewicz <szafar@linux.pl>
Jakob Borg <jakob@nym.se>
James Patterson <jamespatterson@operamail.com> <jpjp@users.noreply.github.com>
Jaroslav Malec <dzardacz@gmail.com>
@@ -32,9 +43,10 @@ Johan Vromans <jvromans@squirrel.nl>
Karol Różycki <rozycki.karol@gmail.com>
Ken'ichi Kamada <kamada@nanohz.org>
Lode Hoste <zillode@zillode.be>
Marcin Dziadus <dziadus.marcin@gmail.com>
Lord Landon Agahnim <lordlandon@gmail.com>
Marc Laporte <marc@marclaporte.com> <marc@laporte.name>
Marc Pujol <kilburn@la3.org>
Marcin Dziadus <dziadus.marcin@gmail.com>
Michael Jephcote <rewt0r@gmx.com> <Rewt0r@users.noreply.github.com>
Michael Tilli <pyfisch@gmail.com>
Pascal Jungblut <github@pascalj.com> <mail@pascal-jungblut.com>


@@ -32,64 +32,15 @@ latest info on Transifex.
## Contributing Code
Every contribution is welcome. If you want to contribute but are unsure
where to start, any open issues are fair game! Be prepared for a
[certain amount of review](https://github.com/syncthing/syncthing/wiki/FAQ#why-are-you-being-so-hard-on-my-pull-request);
it's all in the name of quality. :) Following the points below will make this
a smoother process.
where to start, any open issues are fair game! See the [Contribution
Guidelines](http://docs.syncthing.net/dev/contributing.html) for the full
story on committing code.
Individuals making significant and valuable contributions are given
commit-access to the project. If you make a significant contribution and
are not considered for commit-access, please contact any of the
Syncthing core team members.
## Contributing Documentation
All nontrivial contributions should go through the pull request
mechanism for internal review. Determining what is "nontrivial" is left
at the discretion of the contributor.
### Authorship
All code authors are listed in the AUTHORS file. Commits must be made
with the same name and email as listed in the AUTHORS file. To
accomplish this, ensure that your git configuration is set correctly
prior to making your first commit;
$ git config --global user.name "Jane Doe"
$ git config --global user.email janedoe@example.com
You must be reachable on the given email address. If you do not wish to
use your real name for whatever reason, using a nickname or pseudonym is
perfectly acceptable.
### Core Team
The Syncthing core team currently consists of the following members;
- Jakob Borg (@calmh)
- Audrius Butkevicius (@AudriusButkevicius)
## Coding Style
- Follow the conventions laid out in [Effective Go](https://golang.org/doc/effective_go.html)
as much as makes sense.
- All text files use Unix line endings.
- Each commit should be `go fmt` clean.
- The commit message subject should be a single short sentence
describing the change, starting with a capital letter.
- Commits that resolve an existing issue must include the issue number
as `(fixes #123)` at the end of the commit message subject.
- Imports are grouped per `goimports` standard; that is, standard
library first, then third party libraries after a blank line.
- A contribution solving a single issue or introducing a single new
feature should probably be a single commit based on the current
`master` branch. You may be asked to "rebase" or "squash" your pull
request to make sure this is the case, especially if there have been
amendments during review.
Updates to the [documentation site](http://docs.syncthing.net/) can be
made as pull requests on the [documentation
repository](https://github.com/syncthing/docs).
## Licensing
@@ -99,42 +50,3 @@ strings which are licensed under the Creative Commons Attribution 4.0
International License. You retain the copyright to code you have
written.
When accepting your first contribution, the maintainer of the project
will ensure that you are added to the AUTHORS file. You are welcome to
add yourself as a separate commit in your first pull request.
## Building
[See the documentation](https://github.com/syncthing/syncthing/wiki/Building)
on how to get started with a build environment.
## Branches
- `master` is the main branch containing good code that will end up in
the next release. You should base your work on it. It won't ever be
rebased or force-pushed to.
- `vx.y` branches exist to make patch releases on otherwise obsolete
minor releases. Should only contain fixes cherry picked from master.
Don't base any work on them.
- Other branches are probably topic branches and may be subject to
rebasing. Don't base any work on them unless you specifically know
otherwise.
## Tags
All releases are tagged semver style as `vx.y.z`. Release tags are
signed by GPG key BCE524C7.
## Tests
Yes please!
## Documentation
[Over here!](https://github.com/syncthing/syncthing/wiki)
## License
MPLv2

Godeps/Godeps.json (generated, 32 changes)

@@ -1,17 +1,17 @@
{
"ImportPath": "github.com/syncthing/syncthing",
"GoVersion": "go1.4",
"GoVersion": "devel",
"Packages": [
"./cmd/..."
],
"Deps": [
{
"ImportPath": "github.com/bkaradzic/go-lz4",
"Rev": "93a831dcee242be64a9cc9803dda84af25932de7"
"Rev": "4f7c2045dbd17b802370e2e6022200468abf02ba"
},
{
"ImportPath": "github.com/calmh/logger",
"Rev": "f50d32b313bec2933a3e1049f7416a29f3413d29"
"Rev": "c96f6a1a8c7b6bf2f4860c667867d90174799eb2"
},
{
"ImportPath": "github.com/calmh/luhn",
@@ -21,29 +21,29 @@
"ImportPath": "github.com/calmh/xdr",
"Rev": "5f7208e86762911861c94f1849eddbfc0a60cbf0"
},
{
"ImportPath": "github.com/golang/snappy",
"Rev": "0c7f8a7704bfec561913f4df52c832f094ef56f0"
},
{
"ImportPath": "github.com/juju/ratelimit",
"Rev": "c5abe513796336ee2869745bff0638508450e9c5"
"Rev": "772f5c38e468398c4511514f4f6aa9a4185bc0a0"
},
{
"ImportPath": "github.com/kardianos/osext",
"Rev": "efacde03154693404c65e7aa7d461ac9014acd0c"
"Rev": "6e7f843663477789fac7c02def0d0909e969b4e5"
},
{
"ImportPath": "github.com/syncthing/protocol",
"Rev": "e7db2648034fb71b051902a02bc25d4468ed492e"
"Rev": "ebcdea63c07327a342f65415bbadc497462b8f1f"
},
{
"ImportPath": "github.com/syndtr/goleveldb/leveldb",
"Rev": "87e4e645d80ae9c537e8f2dee52b28036a5dd75e"
},
{
"ImportPath": "github.com/syndtr/gosnappy/snappy",
"Rev": "156a073208e131d7d2e212cb749feae7c339e846"
"Rev": "183614d6b32571e867df4cf086f5480ceefbdfac"
},
{
"ImportPath": "github.com/thejerf/suture",
"Rev": "ff19fb384c3fe30f42717967eaa69da91e5f317c"
"Rev": "fc7aaeabdc43fe41c5328efa1479ffea0b820978"
},
{
"ImportPath": "github.com/vitrun/qart/coding",
@@ -59,19 +59,19 @@
},
{
"ImportPath": "golang.org/x/crypto/bcrypt",
"Rev": "c57d4a71915a248dbad846d60825145062b4c18e"
"Rev": "7d5b0be716b9d6d4269afdaae10032bb296d3cdf"
},
{
"ImportPath": "golang.org/x/crypto/blowfish",
"Rev": "c57d4a71915a248dbad846d60825145062b4c18e"
"Rev": "7d5b0be716b9d6d4269afdaae10032bb296d3cdf"
},
{
"ImportPath": "golang.org/x/text/transform",
"Rev": "2076e9cab4147459c82bc81169e46c139d358547"
"Rev": "3eb7007b740b66a77f3c85f2660a0240b284115a"
},
{
"ImportPath": "golang.org/x/text/unicode/norm",
"Rev": "2076e9cab4147459c82bc81169e46c139d358547"
"Rev": "3eb7007b740b66a77f3c85f2660a0240b284115a"
}
]
}


@@ -0,0 +1,23 @@
// +build gofuzz
package lz4
import "encoding/binary"
func Fuzz(data []byte) int {
if len(data) < 4 {
return 0
}
ln := binary.LittleEndian.Uint32(data)
if ln > (1 << 21) {
return 0
}
if _, err := Decode(nil, data); err != nil {
return 0
}
return 1
}


@@ -141,7 +141,7 @@ func Decode(dst, src []byte) ([]byte, error) {
length += ln
}
if int(d.spos+length) > len(d.src) {
if int(d.spos+length) > len(d.src) || int(d.dpos+length) > len(d.dst) {
return nil, ErrCorrupt
}
@@ -179,7 +179,12 @@ func Decode(dst, src []byte) ([]byte, error) {
}
literal := d.dpos - d.ref
if literal < 4 {
if int(d.dpos+4) > len(d.dst) {
return nil, ErrCorrupt
}
d.cp(4, decr[literal])
} else {
length += 4


@@ -25,8 +25,10 @@
package lz4
import "encoding/binary"
import "errors"
import (
"encoding/binary"
"errors"
)
const (
minMatch = 4


@@ -6,6 +6,7 @@ package logger
import (
"fmt"
"io/ioutil"
"log"
"os"
"strings"
@@ -16,6 +17,7 @@ type LogLevel int
const (
LevelDebug LogLevel = iota
LevelVerbose
LevelInfo
LevelOK
LevelWarn
@@ -36,6 +38,13 @@ type Logger struct {
var DefaultLogger = New()
func New() *Logger {
if os.Getenv("LOGGER_DISCARD") != "" {
// Hack to completely disable logging, for example when running benchmarks.
return &Logger{
logger: log.New(ioutil.Discard, "", 0),
}
}
return &Logger{
logger: log.New(os.Stdout, "", log.Ltime),
}
@@ -83,6 +92,24 @@ func (l *Logger) Debugf(format string, vals ...interface{}) {
l.callHandlers(LevelDebug, s)
}
// Verboseln logs a line with a VERBOSE prefix.
func (l *Logger) Verboseln(vals ...interface{}) {
l.mut.Lock()
defer l.mut.Unlock()
s := fmt.Sprintln(vals...)
l.logger.Output(2, "VERBOSE: "+s)
l.callHandlers(LevelVerbose, s)
}
// Verbosef logs a formatted line with a VERBOSE prefix.
func (l *Logger) Verbosef(format string, vals ...interface{}) {
l.mut.Lock()
defer l.mut.Unlock()
s := fmt.Sprintf(format, vals...)
l.logger.Output(2, "VERBOSE: "+s)
l.callHandlers(LevelVerbose, s)
}
// Infoln logs a line with an INFO prefix.
func (l *Logger) Infoln(vals ...interface{}) {
l.mut.Lock()

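Assuming the calmh/logger package as vendored here, a minimal use of the new Verbose level and the LOGGER_DISCARD escape hatch added in this diff:

```go
package main

import (
	"os"

	"github.com/calmh/logger"
)

func main() {
	// Setting LOGGER_DISCARD before New() routes all output to ioutil.Discard,
	// the "hack to completely disable logging" mentioned in the diff.
	os.Setenv("LOGGER_DISCARD", "1")
	l := logger.New()
	l.Verboseln("this line is discarded")
	l.Verbosef("so is this: %d", 42)
}
```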
Godeps/_workspace/src/github.com/golang/snappy/AUTHORS (generated, vendored, new file, 14 changes)

@@ -0,0 +1,14 @@
# This is the official list of Snappy-Go authors for copyright purposes.
# This file is distinct from the CONTRIBUTORS files.
# See the latter for an explanation.
# Names should be added to this file as
# Name or Organization <email address>
# The email address is not required for organizations.
# Please keep the list sorted.
Damian Gryski <dgryski@gmail.com>
Google Inc.
Jan Mercl <0xjnml@gmail.com>
Sebastien Binet <seb.binet@gmail.com>


@@ -0,0 +1,36 @@
# This is the official list of people who can contribute
# (and typically have contributed) code to the Snappy-Go repository.
# The AUTHORS file lists the copyright holders; this file
# lists people. For example, Google employees are listed here
# but not in AUTHORS, because Google holds the copyright.
#
# The submission process automatically checks to make sure
# that people submitting code are listed in this file (by email address).
#
# Names should be added to this file only after verifying that
# the individual or the individual's organization has agreed to
# the appropriate Contributor License Agreement, found here:
#
# http://code.google.com/legal/individual-cla-v1.0.html
# http://code.google.com/legal/corporate-cla-v1.0.html
#
# The agreement for individuals can be filled out on the web.
#
# When adding J Random Contributor's name to this file,
# either J's name or J's organization's name should be
# added to the AUTHORS file, depending on whether the
# individual or corporate CLA was used.
# Names should be added to this file like so:
# Name <email address>
# Please keep the list sorted.
Damian Gryski <dgryski@gmail.com>
Jan Mercl <0xjnml@gmail.com>
Kai Backman <kaib@golang.org>
Marc-Antoine Ruel <maruel@chromium.org>
Nigel Tao <nigeltao@golang.org>
Rob Pike <r@golang.org>
Russ Cox <rsc@golang.org>
Sebastien Binet <seb.binet@gmail.com>

Godeps/_workspace/src/github.com/golang/snappy/LICENSE (generated, vendored, new file, 27 changes)

@@ -0,0 +1,27 @@
Copyright (c) 2011 The Snappy-Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


@@ -0,0 +1,7 @@
The Snappy compression format in the Go programming language.
To download and install from source:
$ go get github.com/golang/snappy
Unless otherwise noted, the Snappy-Go source files are distributed
under the BSD-style license found in the LICENSE file.


@@ -27,7 +27,7 @@ func DecodedLen(src []byte) (int, error) {
// that the length header occupied.
func decodedLen(src []byte) (blockLen, headerLen int, err error) {
v, n := binary.Uvarint(src)
if n == 0 {
if n <= 0 {
return 0, 0, ErrCorrupt
}
if uint64(int(v)) != v {
@@ -56,7 +56,7 @@ func Decode(dst, src []byte) ([]byte, error) {
x := uint(src[s] >> 2)
switch {
case x < 60:
s += 1
s++
case x == 60:
s += 2
if s > len(src) {
@@ -130,7 +130,7 @@ func Decode(dst, src []byte) ([]byte, error) {
// NewReader returns a new Reader that decompresses from r, using the framing
// format described at
// https://code.google.com/p/snappy/source/browse/trunk/framing_format.txt
// https://github.com/google/snappy/blob/master/framing_format.txt
func NewReader(r io.Reader) *Reader {
return &Reader{
r: r,
@@ -200,7 +200,7 @@ func (r *Reader) Read(p []byte) (int, error) {
}
// The chunk types are specified at
// https://code.google.com/p/snappy/source/browse/trunk/framing_format.txt
// https://github.com/google/snappy/blob/master/framing_format.txt
switch chunkType {
case chunkTypeCompressedData:
// Section 4.2. Compressed data (chunk type 0x00).
@@ -280,13 +280,11 @@ func (r *Reader) Read(p []byte) (int, error) {
// Section 4.5. Reserved unskippable chunks (chunk types 0x02-0x7f).
r.err = ErrUnsupported
return 0, r.err
} else {
// Section 4.4 Padding (chunk type 0xfe).
// Section 4.6. Reserved skippable chunks (chunk types 0x80-0xfd).
if !r.readFull(r.buf[:chunkLen]) {
return 0, r.err
}
}
// Section 4.4 Padding (chunk type 0xfe).
// Section 4.6. Reserved skippable chunks (chunk types 0x80-0xfd).
if !r.readFull(r.buf[:chunkLen]) {
return 0, r.err
}
}
}


@@ -79,7 +79,7 @@ func emitCopy(dst []byte, offset, length int) int {
// slice of dst if dst was large enough to hold the entire encoded block.
// Otherwise, a newly allocated slice will be returned.
// It is valid to pass a nil dst.
func Encode(dst, src []byte) ([]byte, error) {
func Encode(dst, src []byte) []byte {
if n := MaxEncodedLen(len(src)); len(dst) < n {
dst = make([]byte, n)
}
@@ -92,7 +92,7 @@ func Encode(dst, src []byte) ([]byte, error) {
if len(src) != 0 {
d += emitLiteral(dst[d:], src)
}
return dst[:d], nil
return dst[:d]
}
// Initialize the hash table. Its size ranges from 1<<8 to 1<<14 inclusive.
@@ -145,7 +145,7 @@ func Encode(dst, src []byte) ([]byte, error) {
if lit != len(src) {
d += emitLiteral(dst[d:], src[lit:])
}
return dst[:d], nil
return dst[:d]
}
// MaxEncodedLen returns the maximum length of a snappy block, given its
@@ -176,7 +176,7 @@ func MaxEncodedLen(srcLen int) int {
// NewWriter returns a new Writer that compresses to w, using the framing
// format described at
// https://code.google.com/p/snappy/source/browse/trunk/framing_format.txt
// https://github.com/google/snappy/blob/master/framing_format.txt
func NewWriter(w io.Writer) *Writer {
return &Writer{
w: w,
@@ -226,11 +226,7 @@ func (w *Writer) Write(p []byte) (n int, errRet error) {
// Compress the buffer, discarding the result if the improvement
// isn't at least 12.5%.
chunkType := uint8(chunkTypeCompressedData)
chunkBody, err := Encode(w.enc, uncompressed)
if err != nil {
w.err = err
return n, err
}
chunkBody := Encode(w.enc, uncompressed)
if len(chunkBody) >= len(uncompressed)-len(uncompressed)/8 {
chunkType, chunkBody = chunkTypeUncompressedData, uncompressed
}
@@ -244,11 +240,11 @@ func (w *Writer) Write(p []byte) (n int, errRet error) {
w.buf[5] = uint8(checksum >> 8)
w.buf[6] = uint8(checksum >> 16)
w.buf[7] = uint8(checksum >> 24)
if _, err = w.w.Write(w.buf[:]); err != nil {
if _, err := w.w.Write(w.buf[:]); err != nil {
w.err = err
return n, err
}
if _, err = w.w.Write(chunkBody); err != nil {
if _, err := w.w.Write(chunkBody); err != nil {
w.err = err
return n, err
}

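With the new snappy Encode signature above (no error return), a round trip becomes a single call each way; a minimal sketch:

```go
package main

import (
	"fmt"

	"github.com/golang/snappy"
)

func main() {
	src := []byte("hello, snappy")
	enc := snappy.Encode(nil, src) // no error return anymore
	dec, err := snappy.Decode(nil, enc)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", dec) // hello, snappy
}
```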

@@ -5,7 +5,7 @@
// Package snappy implements the snappy block-based compression format.
// It aims for very high speeds and reasonable compression.
//
// The C++ snappy implementation is at http://code.google.com/p/snappy/
// The C++ snappy implementation is at https://github.com/google/snappy
package snappy
import (
@@ -46,7 +46,7 @@ const (
chunkHeaderSize = 4
magicChunk = "\xff\x06\x00\x00" + magicBody
magicBody = "sNaPpY"
// https://code.google.com/p/snappy/source/browse/trunk/framing_format.txt says
// https://github.com/google/snappy/blob/master/framing_format.txt says
// that "the uncompressed data in a chunk must be no longer than 65536 bytes".
maxUncompressedChunkLen = 65536
)
@@ -61,7 +61,7 @@ const (
var crcTable = crc32.MakeTable(crc32.Castagnoli)
// crc implements the checksum specified in section 3 of
// https://code.google.com/p/snappy/source/browse/trunk/framing_format.txt
// https://github.com/google/snappy/blob/master/framing_format.txt
func crc(b []byte) uint32 {
c := crc32.Update(0, crcTable, b)
return uint32(c>>15|c<<17) + 0xa282ead8


@@ -24,11 +24,7 @@ var (
)
func roundtrip(b, ebuf, dbuf []byte) error {
e, err := Encode(ebuf, b)
if err != nil {
return fmt.Errorf("encoding error: %v", err)
}
d, err := Decode(dbuf, e)
d, err := Decode(dbuf, Encode(ebuf, b))
if err != nil {
return fmt.Errorf("decoding error: %v", err)
}
@@ -82,6 +78,16 @@ func TestSmallRegular(t *testing.T) {
}
}
func TestInvalidVarint(t *testing.T) {
data := []byte("\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x00")
if _, err := DecodedLen(data); err != ErrCorrupt {
t.Errorf("DecodedLen: got %v, want ErrCorrupt", err)
}
if _, err := Decode(nil, data); err != ErrCorrupt {
t.Errorf("Decode: got %v, want ErrCorrupt", err)
}
}
func cmp(a, b []byte) error {
if len(a) != len(b) {
return fmt.Errorf("got %d bytes, want %d", len(a), len(b))
@@ -197,10 +203,7 @@ func TestWriterReset(t *testing.T) {
}
func benchDecode(b *testing.B, src []byte) {
encoded, err := Encode(nil, src)
if err != nil {
b.Fatal(err)
}
encoded := Encode(nil, src)
// Bandwidth is in amount of uncompressed data.
b.SetBytes(int64(len(src)))
b.ResetTimer()
@@ -222,7 +225,7 @@ func benchEncode(b *testing.B, src []byte) {
func readFile(b testing.TB, filename string) []byte {
src, err := ioutil.ReadFile(filename)
if err != nil {
b.Fatalf("failed reading %s: %s", filename, err)
b.Skipf("skipping benchmark: %v", err)
}
if len(src) == 0 {
b.Fatalf("%s has zero length", filename)
@@ -284,14 +287,14 @@ var testFiles = []struct {
// The test data files are present at this canonical URL.
const baseURL = "https://raw.githubusercontent.com/google/snappy/master/testdata/"
func downloadTestdata(basename string) (errRet error) {
func downloadTestdata(b *testing.B, basename string) (errRet error) {
filename := filepath.Join(*testdata, basename)
if stat, err := os.Stat(filename); err == nil && stat.Size() != 0 {
return nil
}
if !*download {
return fmt.Errorf("test data not found; skipping benchmark without the -download flag")
b.Skipf("test data not found; skipping benchmark without the -download flag")
}
// Download the official snappy C++ implementation reference test data
// files for benchmarking.
@@ -326,7 +329,7 @@ func downloadTestdata(basename string) (errRet error) {
}
func benchFile(b *testing.B, n int, decode bool) {
if err := downloadTestdata(testFiles[n].filename); err != nil {
if err := downloadTestdata(b, testFiles[n].filename); err != nil {
b.Fatalf("failed to download testdata: %s", err)
}
data := readFile(b, filepath.Join(*testdata, testFiles[n].filename))


@@ -1,5 +1,8 @@
This package contains an efficient token-bucket-based rate limiter.
Copyright (C) 2015 Canonical Ltd.
All files in this repository are licensed as follows. If you contribute
to this repository, it is assumed that you license your contribution
under the same license unless you state otherwise.
All files Copyright (C) 2015 Canonical Ltd. unless otherwise specified in the file.
This software is licensed under the LGPLv3, included below.


@@ -20,7 +20,7 @@ token in the bucket represents one byte.
```go
func Writer(w io.Writer, bucket *Bucket) io.Writer
```
Writer returns a reader that is rate limited by the given token bucket. Each
Writer returns a writer that is rate limited by the given token bucket. Each
token in the bucket represents one byte.
#### type Bucket

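A short usage sketch of the token bucket as a rate-limited writer, using only the NewBucketWithRate and Writer functions shown in this diff:

```go
package main

import (
	"io"
	"os"

	"github.com/juju/ratelimit"
)

func main() {
	// A bucket holding 64 KiB, refilled at 64 KiB/s; one token per byte.
	bucket := ratelimit.NewBucketWithRate(64*1024, 64*1024)
	// Copy stdin to stdout at roughly 64 KiB/s.
	w := ratelimit.Writer(os.Stdout, bucket)
	io.Copy(w, os.Stdin)
}
```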

@@ -2,7 +2,8 @@
// Licensed under the LGPLv3 with static-linking exception.
// See LICENCE file for details.
// The ratelimit package provides an efficient token bucket implementation.
// The ratelimit package provides an efficient token bucket implementation
// that can be used to limit the rate of arbitrary things.
// See http://en.wikipedia.org/wiki/Token_bucket.
package ratelimit
@@ -10,6 +11,7 @@ import (
"strconv"
"sync"
"time"
"math"
)
// Bucket represents a token bucket that fills at a predetermined rate.
@@ -55,7 +57,7 @@ func NewBucketWithRate(rate float64, capacity int64) *Bucket {
continue
}
tb := NewBucketWithQuantum(fillInterval, capacity, quantum)
if diff := abs(tb.Rate() - rate); diff/rate <= rateMargin {
if diff := math.Abs(tb.Rate() - rate); diff/rate <= rateMargin {
return tb
}
}
@@ -217,10 +219,3 @@ func (tb *Bucket) adjust(now time.Time) (currentTick int64) {
tb.availTick = currentTick
return
}
func abs(f float64) float64 {
if f < 0 {
return -f
}
return f
}


@@ -4,7 +4,9 @@
There is sometimes utility in finding the current executable file
that is running. This can be used for upgrading the current executable
or finding resources located relative to the executable file.
or finding resources located relative to the executable file. Both the
working directory and the os.Args[0] value are arbitrary and cannot
be relied on; os.Args[0] can be "faked".
Multi-platform and supports:
* Linux

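A minimal sketch using the two osext functions touched by this diff; ExecutableFolder now returns the directory without a trailing slash:

```go
package main

import (
	"fmt"

	"github.com/kardianos/osext"
)

func main() {
	exe, err := osext.Executable()
	if err != nil {
		panic(err)
	}
	dir, err := osext.ExecutableFolder()
	if err != nil {
		panic(err)
	}
	fmt.Println("executable:", exe)
	fmt.Println("folder (no trailing slash):", dir)
}
```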

@@ -16,12 +16,12 @@ func Executable() (string, error) {
}
// Returns same path as Executable, returns just the folder
// path. Excludes the executable name.
// path. Excludes the executable name and any trailing slash.
func ExecutableFolder() (string, error) {
p, err := Executable()
if err != nil {
return "", err
}
folder, _ := filepath.Split(p)
return folder, nil
return filepath.Dir(p), nil
}


@@ -17,12 +17,14 @@ import (
func executable() (string, error) {
switch runtime.GOOS {
case "linux":
const deletedSuffix = " (deleted)"
const deletedTag = " (deleted)"
execpath, err := os.Readlink("/proc/self/exe")
if err != nil {
return execpath, err
}
return strings.TrimSuffix(execpath, deletedSuffix), nil
execpath = strings.TrimSuffix(execpath, deletedTag)
execpath = strings.TrimPrefix(execpath, deletedTag)
return execpath, nil
case "netbsd":
return os.Readlink("/proc/curproc/exe")
case "openbsd", "dragonfly":


@@ -24,6 +24,29 @@ const (
executableEnvValueDelete = "delete"
)
func TestPrintExecutable(t *testing.T) {
ef, err := Executable()
if err != nil {
t.Fatalf("Executable failed: %v", err)
}
t.Log("Executable:", ef)
}
func TestPrintExecutableFolder(t *testing.T) {
ef, err := ExecutableFolder()
if err != nil {
t.Fatalf("ExecutableFolder failed: %v", err)
}
t.Log("Executable Folder:", ef)
}
func TestExecutableFolder(t *testing.T) {
ef, err := ExecutableFolder()
if err != nil {
t.Fatalf("ExecutableFolder failed: %v", err)
}
if ef[len(ef)-1] == filepath.Separator {
t.Fatal("ExecutableFolder ends with a trailing slash.")
}
}
func TestExecutableMatch(t *testing.T) {
ep, err := Executable()
if err != nil {


@@ -31,7 +31,7 @@ func (t *TestModel) Index(deviceID DeviceID, folder string, files []FileInfo, fl
func (t *TestModel) IndexUpdate(deviceID DeviceID, folder string, files []FileInfo, flags uint32, options []Option) {
}
func (t *TestModel) Request(deviceID DeviceID, folder, name string, offset int64, size int, hash []byte, flags uint32, options []Option) ([]byte, error) {
func (t *TestModel) Request(deviceID DeviceID, folder, name string, offset int64, size int, hash []byte, flags uint32, options []Option, buf []byte) error {
t.folder = folder
t.name = name
t.offset = offset
@@ -39,7 +39,8 @@ func (t *TestModel) Request(deviceID DeviceID, folder, name string, offset int64
t.hash = hash
t.flags = flags
t.options = options
return t.data, nil
copy(buf, t.data)
return nil
}
func (t *TestModel) Close(deviceID DeviceID, err error) {


@@ -0,0 +1,23 @@
// Copyright (C) 2015 The Protocol Authors.
package protocol
import "testing"
func TestWinsConflict(t *testing.T) {
testcases := [][2]FileInfo{
// The first should always win over the second
{{Modified: 42}, {Modified: 41}},
{{Modified: 41}, {Modified: 42, Flags: FlagDeleted}},
{{Modified: 41, Version: Vector{{42, 2}, {43, 1}}}, {Modified: 41, Version: Vector{{42, 1}, {43, 2}}}},
}
for _, tc := range testcases {
if !tc[0].WinsConflict(tc[1]) {
t.Errorf("%v should win over %v", tc[0], tc[1])
}
if tc[1].WinsConflict(tc[0]) {
t.Errorf("%v should not win over %v", tc[1], tc[0])
}
}
}


@@ -58,6 +58,31 @@ func (f FileInfo) HasPermissionBits() bool {
return f.Flags&FlagNoPermBits == 0
}
// WinsConflict returns true if "f" is the one to choose when it is in
// conflict with "other".
func (f FileInfo) WinsConflict(other FileInfo) bool {
// If a modification is in conflict with a delete, we pick the
// modification.
if !f.IsDeleted() && other.IsDeleted() {
return true
}
if f.IsDeleted() && !other.IsDeleted() {
return false
}
// The one with the newer modification time wins.
if f.Modified > other.Modified {
return true
}
if f.Modified < other.Modified {
return false
}
// The modification times were equal. Use the device ID in the version
// vector as tie breaker.
return f.Version.Compare(other.Version) == ConcurrentGreater
}
type BlockInfo struct {
Offset int64 // noencode (cache only)
Size int32

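A hedged usage sketch of the new WinsConflict method; per the delete-versus-modification rule above, the modification wins even at equal timestamps:

```go
package main

import (
	"fmt"

	"github.com/syncthing/protocol"
)

func main() {
	mod := protocol.FileInfo{Modified: 42}
	del := protocol.FileInfo{Modified: 42, Flags: protocol.FlagDeleted}
	// A modification beats a delete, regardless of modification time.
	fmt.Println(mod.WinsConflict(del)) // true
	fmt.Println(del.WinsConflict(mod)) // false
}
```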

@@ -26,9 +26,9 @@ func (m nativeModel) IndexUpdate(deviceID DeviceID, folder string, files []FileI
m.next.IndexUpdate(deviceID, folder, files, flags, options)
}
func (m nativeModel) Request(deviceID DeviceID, folder string, name string, offset int64, size int, hash []byte, flags uint32, options []Option) ([]byte, error) {
func (m nativeModel) Request(deviceID DeviceID, folder string, name string, offset int64, hash []byte, flags uint32, options []Option, buf []byte) error {
name = norm.NFD.String(name)
return m.next.Request(deviceID, folder, name, offset, size, hash, flags, options)
return m.next.Request(deviceID, folder, name, offset, hash, flags, options, buf)
}
func (m nativeModel) ClusterConfig(deviceID DeviceID, config ClusterConfigMessage) {


@@ -18,8 +18,8 @@ func (m nativeModel) IndexUpdate(deviceID DeviceID, folder string, files []FileI
m.next.IndexUpdate(deviceID, folder, files, flags, options)
}
func (m nativeModel) Request(deviceID DeviceID, folder string, name string, offset int64, size int, hash []byte, flags uint32, options []Option) ([]byte, error) {
return m.next.Request(deviceID, folder, name, offset, size, hash, flags, options)
func (m nativeModel) Request(deviceID DeviceID, folder string, name string, offset int64, hash []byte, flags uint32, options []Option, buf []byte) error {
return m.next.Request(deviceID, folder, name, offset, hash, flags, options, buf)
}
func (m nativeModel) ClusterConfig(deviceID DeviceID, config ClusterConfigMessage) {


@@ -25,18 +25,18 @@ type nativeModel struct {
}
func (m nativeModel) Index(deviceID DeviceID, folder string, files []FileInfo, flags uint32, options []Option) {
fixupFiles(files)
fixupFiles(folder, files)
m.next.Index(deviceID, folder, files, flags, options)
}
func (m nativeModel) IndexUpdate(deviceID DeviceID, folder string, files []FileInfo, flags uint32, options []Option) {
fixupFiles(files)
fixupFiles(folder, files)
m.next.IndexUpdate(deviceID, folder, files, flags, options)
}
func (m nativeModel) Request(deviceID DeviceID, folder string, name string, offset int64, size int, hash []byte, flags uint32, options []Option) ([]byte, error) {
func (m nativeModel) Request(deviceID DeviceID, folder string, name string, offset int64, hash []byte, flags uint32, options []Option, buf []byte) error {
name = filepath.FromSlash(name)
return m.next.Request(deviceID, folder, name, offset, size, hash, flags, options)
return m.next.Request(deviceID, folder, name, offset, hash, flags, options, buf)
}
func (m nativeModel) ClusterConfig(deviceID DeviceID, config ClusterConfigMessage) {
@@ -47,7 +47,7 @@ func (m nativeModel) Close(deviceID DeviceID, err error) {
m.next.Close(deviceID, err)
}
func fixupFiles(files []FileInfo) {
func fixupFiles(folder string, files []FileInfo) {
for i, f := range files {
if strings.ContainsAny(f.Name, disallowedCharacters) {
if f.IsDeleted() {
@@ -56,7 +56,7 @@ func fixupFiles(files []FileInfo) {
continue
}
files[i].Flags |= FlagInvalid
l.Warnf("File name %q contains invalid characters; marked as invalid.", f.Name)
l.Warnf("File name %q (folder %q) contains invalid characters; marked as invalid.", f.Name, folder)
}
files[i].Name = filepath.FromSlash(files[i].Name)
}


@@ -31,8 +31,7 @@ const (
const (
stateInitial = iota
stateCCRcvd
stateIdxRcvd
stateReady
)
// FileInfo flags
@@ -82,7 +81,7 @@ type Model interface {
// An index update was received from the peer device
IndexUpdate(deviceID DeviceID, folder string, files []FileInfo, flags uint32, options []Option)
// A request was made by the peer device
Request(deviceID DeviceID, folder string, name string, offset int64, size int, hash []byte, flags uint32, options []Option) ([]byte, error)
Request(deviceID DeviceID, folder string, name string, offset int64, hash []byte, flags uint32, options []Option, buf []byte) error
// A cluster configuration message was received
ClusterConfig(deviceID DeviceID, config ClusterConfigMessage)
// The peer device closed the connection
@@ -90,6 +89,7 @@ type Model interface {
}
type Connection interface {
Start()
ID() DeviceID
Name() string
Index(folder string, files []FileInfo, flags uint32, options []Option) error
@@ -103,7 +103,6 @@ type rawConnection struct {
id DeviceID
name string
receiver Model
state int
cr *countingReader
cw *countingWriter
@@ -113,11 +112,11 @@ type rawConnection struct {
idxMut sync.Mutex // ensures serialization of Index calls
nextID chan int
outbox chan hdrMsg
closed chan struct{}
once sync.Once
nextID chan int
outbox chan hdrMsg
closed chan struct{}
once sync.Once
pool sync.Pool
compression Compression
rdbuf0 []byte // used & reused by readMessage
@@ -130,8 +129,9 @@ type asyncResult struct {
}
type hdrMsg struct {
hdr header
msg encodable
hdr header
msg encodable
done chan struct{}
}
type encodable interface {
@@ -142,9 +142,9 @@ type isEofer interface {
IsEOF() bool
}
const (
pingTimeout = 30 * time.Second
pingIdleTime = 60 * time.Second
var (
PingTimeout = 30 * time.Second
PingIdleTime = 60 * time.Second
)
func NewConnection(deviceID DeviceID, reader io.Reader, writer io.Writer, receiver Model, name string, compress Compression) Connection {
@@ -152,24 +152,32 @@ func NewConnection(deviceID DeviceID, reader io.Reader, writer io.Writer, receiv
cw := &countingWriter{Writer: writer}
c := rawConnection{
id: deviceID,
name: name,
receiver: nativeModel{receiver},
state: stateInitial,
cr: cr,
cw: cw,
outbox: make(chan hdrMsg),
nextID: make(chan int),
closed: make(chan struct{}),
id: deviceID,
name: name,
receiver: nativeModel{receiver},
cr: cr,
cw: cw,
outbox: make(chan hdrMsg),
nextID: make(chan int),
closed: make(chan struct{}),
pool: sync.Pool{
New: func() interface{} {
return make([]byte, BlockSize)
},
},
compression: compress,
}
return wireFormatConnection{&c}
}
// Start creates the goroutines for sending and receiving of messages. It must
// be called exactly once after creating a connection.
func (c *rawConnection) Start() {
go c.readerLoop()
go c.writerLoop()
go c.pingerLoop()
go c.idGenerator()
return wireFormatConnection{&c}
}
func (c *rawConnection) ID() DeviceID {
@@ -193,7 +201,7 @@ func (c *rawConnection) Index(folder string, idx []FileInfo, flags uint32, optio
Files: idx,
Flags: flags,
Options: options,
})
}, nil)
c.idxMut.Unlock()
return nil
}
@@ -211,7 +219,7 @@ func (c *rawConnection) IndexUpdate(folder string, idx []FileInfo, flags uint32,
Files: idx,
Flags: flags,
Options: options,
})
}, nil)
c.idxMut.Unlock()
return nil
}
@@ -241,7 +249,7 @@ func (c *rawConnection) Request(folder string, name string, offset int64, size i
Hash: hash,
Flags: flags,
Options: options,
})
}, nil)
if !ok {
return nil, ErrClosed
}
@@ -255,7 +263,7 @@ func (c *rawConnection) Request(folder string, name string, offset int64, size i
// ClusterConfig sends the cluster configuration message to the peer
func (c *rawConnection) ClusterConfig(config ClusterConfigMessage) {
c.send(-1, messageTypeClusterConfig, config)
c.send(-1, messageTypeClusterConfig, config, nil)
}
func (c *rawConnection) ping() bool {
@@ -271,7 +279,7 @@ func (c *rawConnection) ping() bool {
c.awaiting[id] = rc
c.awaitingMut.Unlock()
ok := c.send(id, messageTypePing, nil)
ok := c.send(id, messageTypePing, nil, nil)
if !ok {
return false
}
@@ -285,6 +293,7 @@ func (c *rawConnection) readerLoop() (err error) {
c.close(err)
}()
state := stateInitial
for {
select {
case <-c.closed:
@@ -298,47 +307,54 @@ func (c *rawConnection) readerLoop() (err error) {
}
switch msg := msg.(type) {
case ClusterConfigMessage:
if state != stateInitial {
return fmt.Errorf("protocol error: cluster config message in state %d", state)
}
go c.receiver.ClusterConfig(c.id, msg)
state = stateReady
case IndexMessage:
switch hdr.msgType {
case messageTypeIndex:
if c.state < stateCCRcvd {
return fmt.Errorf("protocol error: index message in state %d", c.state)
if state != stateReady {
return fmt.Errorf("protocol error: index message in state %d", state)
}
c.handleIndex(msg)
c.state = stateIdxRcvd
state = stateReady
case messageTypeIndexUpdate:
if c.state < stateIdxRcvd {
return fmt.Errorf("protocol error: index update message in state %d", c.state)
if state != stateReady {
return fmt.Errorf("protocol error: index update message in state %d", state)
}
c.handleIndexUpdate(msg)
state = stateReady
}
case RequestMessage:
if c.state < stateIdxRcvd {
return fmt.Errorf("protocol error: request message in state %d", c.state)
if state != stateReady {
return fmt.Errorf("protocol error: request message in state %d", state)
}
// Requests are handled asynchronously
go c.handleRequest(hdr.msgID, msg)
case ResponseMessage:
if c.state < stateIdxRcvd {
return fmt.Errorf("protocol error: response message in state %d", c.state)
if state != stateReady {
return fmt.Errorf("protocol error: response message in state %d", state)
}
c.handleResponse(hdr.msgID, msg)
case pingMessage:
c.send(hdr.msgID, messageTypePong, pongMessage{})
if state != stateReady {
return fmt.Errorf("protocol error: ping message in state %d", state)
}
c.send(hdr.msgID, messageTypePong, pongMessage{}, nil)
case pongMessage:
c.handlePong(hdr.msgID)
case ClusterConfigMessage:
if c.state != stateInitial {
return fmt.Errorf("protocol error: cluster config message in state %d", c.state)
if state != stateReady {
return fmt.Errorf("protocol error: pong message in state %d", state)
}
go c.receiver.ClusterConfig(c.id, msg)
c.state = stateCCRcvd
c.handlePong(hdr.msgID)
case CloseMessage:
return errors.New(msg.Reason)
@@ -509,12 +525,36 @@ func filterIndexMessageFiles(fs []FileInfo) []FileInfo {
}
func (c *rawConnection) handleRequest(msgID int, req RequestMessage) {
data, err := c.receiver.Request(c.id, req.Folder, req.Name, int64(req.Offset), int(req.Size), req.Hash, req.Flags, req.Options)
size := int(req.Size)
usePool := size <= BlockSize
c.send(msgID, messageTypeResponse, ResponseMessage{
Data: data,
Code: errorToCode(err),
})
var buf []byte
var done chan struct{}
if usePool {
buf = c.pool.Get().([]byte)[:size]
done = make(chan struct{})
} else {
buf = make([]byte, size)
}
err := c.receiver.Request(c.id, req.Folder, req.Name, int64(req.Offset), req.Hash, req.Flags, req.Options, buf)
if err != nil {
c.send(msgID, messageTypeResponse, ResponseMessage{
Data: nil,
Code: errorToCode(err),
}, done)
} else {
c.send(msgID, messageTypeResponse, ResponseMessage{
Data: buf,
Code: errorToCode(err),
}, done)
}
if usePool {
<-done
c.pool.Put(buf)
}
}
func (c *rawConnection) handleResponse(msgID int, resp ResponseMessage) {
@@ -537,7 +577,7 @@ func (c *rawConnection) handlePong(msgID int) {
c.awaitingMut.Unlock()
}
func (c *rawConnection) send(msgID int, msgType int, msg encodable) bool {
func (c *rawConnection) send(msgID int, msgType int, msg encodable, done chan struct{}) bool {
if msgID < 0 {
select {
case id := <-c.nextID:
@@ -554,7 +594,7 @@ func (c *rawConnection) send(msgID int, msgType int, msg encodable) bool {
}
select {
case c.outbox <- hdrMsg{hdr, msg}:
case c.outbox <- hdrMsg{hdr, msg, done}:
return true
case <-c.closed:
return false
@@ -573,6 +613,9 @@ func (c *rawConnection) writerLoop() {
if hm.msg != nil {
// Uncompressed message in uncBuf
uncBuf, err = hm.msg.AppendXDR(uncBuf[:0])
if hm.done != nil {
close(hm.done)
}
if err != nil {
c.close(err)
return
@@ -679,17 +722,17 @@ func (c *rawConnection) idGenerator() {
func (c *rawConnection) pingerLoop() {
var rc = make(chan bool, 1)
ticker := time.Tick(pingIdleTime / 2)
ticker := time.Tick(PingIdleTime / 2)
for {
select {
case <-ticker:
if d := time.Since(c.cr.Last()); d < pingIdleTime {
if d := time.Since(c.cr.Last()); d < PingIdleTime {
if debug {
l.Debugln(c.id, "ping skipped after rd", d)
}
continue
}
if d := time.Since(c.cw.Last()); d < pingIdleTime {
if d := time.Since(c.cw.Last()); d < PingIdleTime {
if debug {
l.Debugln(c.id, "ping skipped after wr", d)
}
@@ -709,7 +752,7 @@ func (c *rawConnection) pingerLoop() {
if !ok {
c.close(fmt.Errorf("ping failure"))
}
case <-time.After(pingTimeout):
case <-time.After(PingTimeout):
c.close(fmt.Errorf("ping timeout"))
case <-c.closed:
return


@@ -67,8 +67,12 @@ func TestPing(t *testing.T) {
ar, aw := io.Pipe()
br, bw := io.Pipe()
c0 := NewConnection(c0ID, ar, bw, nil, "name", CompressAlways).(wireFormatConnection).next.(*rawConnection)
c1 := NewConnection(c1ID, br, aw, nil, "name", CompressAlways).(wireFormatConnection).next.(*rawConnection)
c0 := NewConnection(c0ID, ar, bw, newTestModel(), "name", CompressAlways).(wireFormatConnection).next.(*rawConnection)
c0.Start()
c1 := NewConnection(c1ID, br, aw, newTestModel(), "name", CompressAlways).(wireFormatConnection).next.(*rawConnection)
c1.Start()
c0.ClusterConfig(ClusterConfigMessage{})
c1.ClusterConfig(ClusterConfigMessage{})
if ok := c0.ping(); !ok {
t.Error("c0 ping failed")
@@ -81,8 +85,8 @@ func TestPing(t *testing.T) {
func TestPingErr(t *testing.T) {
e := errors.New("something broke")
for i := 0; i < 16; i++ {
for j := 0; j < 16; j++ {
for i := 0; i < 32; i++ {
for j := 0; j < 32; j++ {
m0 := newTestModel()
m1 := newTestModel()
@@ -92,12 +96,18 @@ func TestPingErr(t *testing.T) {
ebw := &ErrPipe{PipeWriter: *bw, max: j, err: e}
c0 := NewConnection(c0ID, ar, ebw, m0, "name", CompressAlways).(wireFormatConnection).next.(*rawConnection)
NewConnection(c1ID, br, eaw, m1, "name", CompressAlways)
c0.Start()
c1 := NewConnection(c1ID, br, eaw, m1, "name", CompressAlways)
c1.Start()
c0.ClusterConfig(ClusterConfigMessage{})
c1.ClusterConfig(ClusterConfigMessage{})
res := c0.ping()
if (i < 8 || j < 8) && res {
// This should have resulted in failure, as there is no way an empty ClusterConfig plus a Ping message fits in eight bytes.
t.Errorf("Unexpected ping success; i=%d, j=%d", i, j)
} else if (i >= 12 && j >= 12) && !res {
} else if (i >= 28 && j >= 28) && !res {
// This should have worked though, as 28 bytes is plenty for both.
t.Errorf("Unexpected ping fail; i=%d, j=%d", i, j)
}
}
@@ -168,7 +178,11 @@ func TestVersionErr(t *testing.T) {
br, bw := io.Pipe()
c0 := NewConnection(c0ID, ar, bw, m0, "name", CompressAlways).(wireFormatConnection).next.(*rawConnection)
NewConnection(c1ID, br, aw, m1, "name", CompressAlways)
c0.Start()
c1 := NewConnection(c1ID, br, aw, m1, "name", CompressAlways)
c1.Start()
c0.ClusterConfig(ClusterConfigMessage{})
c1.ClusterConfig(ClusterConfigMessage{})
w := xdr.NewWriter(c0.cw)
w.WriteUint32(encodeHeader(header{
@@ -191,7 +205,11 @@ func TestTypeErr(t *testing.T) {
br, bw := io.Pipe()
c0 := NewConnection(c0ID, ar, bw, m0, "name", CompressAlways).(wireFormatConnection).next.(*rawConnection)
NewConnection(c1ID, br, aw, m1, "name", CompressAlways)
c0.Start()
c1 := NewConnection(c1ID, br, aw, m1, "name", CompressAlways)
c1.Start()
c0.ClusterConfig(ClusterConfigMessage{})
c1.ClusterConfig(ClusterConfigMessage{})
w := xdr.NewWriter(c0.cw)
w.WriteUint32(encodeHeader(header{
@@ -214,7 +232,11 @@ func TestClose(t *testing.T) {
br, bw := io.Pipe()
c0 := NewConnection(c0ID, ar, bw, m0, "name", CompressAlways).(wireFormatConnection).next.(*rawConnection)
NewConnection(c1ID, br, aw, m1, "name", CompressAlways)
c0.Start()
c1 := NewConnection(c1ID, br, aw, m1, "name", CompressAlways)
c1.Start()
c0.ClusterConfig(ClusterConfigMessage{})
c1.ClusterConfig(ClusterConfigMessage{})
c0.close(nil)


@@ -12,6 +12,10 @@ type wireFormatConnection struct {
next Connection
}
func (c wireFormatConnection) Start() {
c.next.Start()
}
func (c wireFormatConnection) ID() DeviceID {
return c.next.ID()
}


@@ -63,13 +63,14 @@ type DB struct {
journalAckC chan error
// Compaction.
tcompCmdC chan cCmd
tcompPauseC chan chan<- struct{}
mcompCmdC chan cCmd
compErrC chan error
compPerErrC chan error
compErrSetC chan error
compStats []cStats
tcompCmdC chan cCmd
tcompPauseC chan chan<- struct{}
mcompCmdC chan cCmd
compErrC chan error
compPerErrC chan error
compErrSetC chan error
compWriteLocking bool
compStats []cStats
// Close.
closeW sync.WaitGroup
@@ -108,28 +109,44 @@ func openDB(s *session) (*DB, error) {
closeC: make(chan struct{}),
}
if err := db.recoverJournal(); err != nil {
return nil, err
}
// Read-only mode.
readOnly := s.o.GetReadOnly()
// Remove any obsolete files.
if err := db.checkAndCleanFiles(); err != nil {
// Close journal.
if db.journal != nil {
db.journal.Close()
db.journalWriter.Close()
if readOnly {
// Recover journals (read-only mode).
if err := db.recoverJournalRO(); err != nil {
return nil, err
}
return nil, err
} else {
// Recover journals.
if err := db.recoverJournal(); err != nil {
return nil, err
}
// Remove any obsolete files.
if err := db.checkAndCleanFiles(); err != nil {
// Close journal.
if db.journal != nil {
db.journal.Close()
db.journalWriter.Close()
}
return nil, err
}
}
// Doesn't need to be included in the wait group.
go db.compactionError()
go db.mpoolDrain()
db.closeW.Add(3)
go db.tCompaction()
go db.mCompaction()
go db.jWriter()
if readOnly {
db.SetReadOnly()
} else {
db.closeW.Add(3)
go db.tCompaction()
go db.mCompaction()
go db.jWriter()
}
s.logf("db@open done T·%v", time.Since(start))
@@ -275,7 +292,7 @@ func recoverTable(s *session, o *opt.Options) error {
// We will drop corrupted table.
strict = o.GetStrict(opt.StrictRecovery)
rec = &sessionRecord{numLevel: o.GetNumLevel()}
rec = &sessionRecord{}
bpool = util.NewBufferPool(o.GetBlockSize() + 5)
)
buildTable := func(iter iterator.Iterator) (tmp storage.File, size int64, err error) {
@@ -450,132 +467,136 @@ func recoverTable(s *session, o *opt.Options) error {
}
func (db *DB) recoverJournal() error {
// Get all tables and sort it by file number.
journalFiles_, err := db.s.getFiles(storage.TypeJournal)
// Get all journals and sort it by file number.
allJournalFiles, err := db.s.getFiles(storage.TypeJournal)
if err != nil {
return err
}
journalFiles := files(journalFiles_)
journalFiles.sort()
files(allJournalFiles).sort()
// Discard older journal.
prev := -1
for i, file := range journalFiles {
if file.Num() >= db.s.stJournalNum {
if prev >= 0 {
i--
journalFiles[i] = journalFiles[prev]
}
journalFiles = journalFiles[i:]
break
} else if file.Num() == db.s.stPrevJournalNum {
prev = i
// Journals that will be recovered.
var recJournalFiles []storage.File
for _, jf := range allJournalFiles {
if jf.Num() >= db.s.stJournalNum || jf.Num() == db.s.stPrevJournalNum {
recJournalFiles = append(recJournalFiles, jf)
}
}
var jr *journal.Reader
var of storage.File
var mem *memdb.DB
batch := new(Batch)
cm := newCMem(db.s)
buf := new(util.Buffer)
// Options.
strict := db.s.o.GetStrict(opt.StrictJournal)
checksum := db.s.o.GetStrict(opt.StrictJournalChecksum)
writeBuffer := db.s.o.GetWriteBuffer()
recoverJournal := func(file storage.File) error {
db.logf("journal@recovery recovering @%d", file.Num())
reader, err := file.Open()
if err != nil {
return err
}
defer reader.Close()
var (
of storage.File // Obsolete file.
rec = &sessionRecord{}
)
// Create/reset journal reader instance.
if jr == nil {
jr = journal.NewReader(reader, dropper{db.s, file}, strict, checksum)
} else {
jr.Reset(reader, dropper{db.s, file}, strict, checksum)
}
// Flush memdb and remove obsolete journal file.
if of != nil {
if mem.Len() > 0 {
if err := cm.flush(mem, 0); err != nil {
return err
}
}
if err := cm.commit(file.Num(), db.seq); err != nil {
return err
}
cm.reset()
of.Remove()
of = nil
}
// Replay journal to memdb.
mem.Reset()
for {
r, err := jr.Next()
if err != nil {
if err == io.EOF {
break
}
return errors.SetFile(err, file)
}
buf.Reset()
if _, err := buf.ReadFrom(r); err != nil {
if err == io.ErrUnexpectedEOF {
// This is the error returned due to corruption, with strict == false.
continue
} else {
return errors.SetFile(err, file)
}
}
if err := batch.memDecodeAndReplay(db.seq, buf.Bytes(), mem); err != nil {
if strict || !errors.IsCorrupted(err) {
return errors.SetFile(err, file)
} else {
db.s.logf("journal error: %v (skipped)", err)
// We won't apply sequence number as it might be corrupted.
continue
}
}
// Save sequence number.
db.seq = batch.seq + uint64(batch.Len())
// Flush it if large enough.
if mem.Size() >= writeBuffer {
if err := cm.flush(mem, 0); err != nil {
return err
}
mem.Reset()
}
}
of = file
return nil
}
// Recover all journals.
if len(journalFiles) > 0 {
db.logf("journal@recovery F·%d", len(journalFiles))
// Recover journals.
if len(recJournalFiles) > 0 {
db.logf("journal@recovery F·%d", len(recJournalFiles))
// Mark file number as used.
db.s.markFileNum(journalFiles[len(journalFiles)-1].Num())
db.s.markFileNum(recJournalFiles[len(recJournalFiles)-1].Num())
mem = memdb.New(db.s.icmp, writeBuffer)
for _, file := range journalFiles {
if err := recoverJournal(file); err != nil {
var (
// Options.
strict = db.s.o.GetStrict(opt.StrictJournal)
checksum = db.s.o.GetStrict(opt.StrictJournalChecksum)
writeBuffer = db.s.o.GetWriteBuffer()
jr *journal.Reader
mdb = memdb.New(db.s.icmp, writeBuffer)
buf = &util.Buffer{}
batch = &Batch{}
)
for _, jf := range recJournalFiles {
db.logf("journal@recovery recovering @%d", jf.Num())
fr, err := jf.Open()
if err != nil {
return err
}
// Create or reset journal reader instance.
if jr == nil {
jr = journal.NewReader(fr, dropper{db.s, jf}, strict, checksum)
} else {
jr.Reset(fr, dropper{db.s, jf}, strict, checksum)
}
// Flush memdb and remove obsolete journal file.
if of != nil {
if mdb.Len() > 0 {
if _, err := db.s.flushMemdb(rec, mdb, -1); err != nil {
fr.Close()
return err
}
}
rec.setJournalNum(jf.Num())
rec.setSeqNum(db.seq)
if err := db.s.commit(rec); err != nil {
fr.Close()
return err
}
rec.resetAddedTables()
of.Remove()
of = nil
}
// Replay journal to memdb.
mdb.Reset()
for {
r, err := jr.Next()
if err != nil {
if err == io.EOF {
break
}
fr.Close()
return errors.SetFile(err, jf)
}
buf.Reset()
if _, err := buf.ReadFrom(r); err != nil {
if err == io.ErrUnexpectedEOF {
// This is the error returned due to corruption, with strict == false.
continue
}
fr.Close()
return errors.SetFile(err, jf)
}
if err := batch.memDecodeAndReplay(db.seq, buf.Bytes(), mdb); err != nil {
if !strict && errors.IsCorrupted(err) {
db.s.logf("journal error: %v (skipped)", err)
// We won't apply sequence number as it might be corrupted.
continue
}
fr.Close()
return errors.SetFile(err, jf)
}
// Save sequence number.
db.seq = batch.seq + uint64(batch.Len())
// Flush it if large enough.
if mdb.Size() >= writeBuffer {
if _, err := db.s.flushMemdb(rec, mdb, 0); err != nil {
fr.Close()
return err
}
mdb.Reset()
}
}
fr.Close()
of = jf
}
// Flush the last journal.
if mem.Len() > 0 {
if err := cm.flush(mem, 0); err != nil {
// Flush the last memdb.
if mdb.Len() > 0 {
if _, err := db.s.flushMemdb(rec, mdb, 0); err != nil {
return err
}
}
@@ -587,8 +608,10 @@ func (db *DB) recoverJournal() error {
}
// Commit.
if err := cm.commit(db.journalFile.Num(), db.seq); err != nil {
// Close journal.
rec.setJournalNum(db.journalFile.Num())
rec.setSeqNum(db.seq)
if err := db.s.commit(rec); err != nil {
// Close journal on error.
if db.journal != nil {
db.journal.Close()
db.journalWriter.Close()
@@ -604,6 +627,103 @@ func (db *DB) recoverJournal() error {
return nil
}
func (db *DB) recoverJournalRO() error {
// Get all journals and sort it by file number.
allJournalFiles, err := db.s.getFiles(storage.TypeJournal)
if err != nil {
return err
}
files(allJournalFiles).sort()
// Journals that will be recovered.
var recJournalFiles []storage.File
for _, jf := range allJournalFiles {
if jf.Num() >= db.s.stJournalNum || jf.Num() == db.s.stPrevJournalNum {
recJournalFiles = append(recJournalFiles, jf)
}
}
var (
// Options.
strict = db.s.o.GetStrict(opt.StrictJournal)
checksum = db.s.o.GetStrict(opt.StrictJournalChecksum)
writeBuffer = db.s.o.GetWriteBuffer()
mdb = memdb.New(db.s.icmp, writeBuffer)
)
// Recover journals.
if len(recJournalFiles) > 0 {
db.logf("journal@recovery RO·Mode F·%d", len(recJournalFiles))
var (
jr *journal.Reader
buf = &util.Buffer{}
batch = &Batch{}
)
for _, jf := range recJournalFiles {
db.logf("journal@recovery recovering @%d", jf.Num())
fr, err := jf.Open()
if err != nil {
return err
}
// Create or reset journal reader instance.
if jr == nil {
jr = journal.NewReader(fr, dropper{db.s, jf}, strict, checksum)
} else {
jr.Reset(fr, dropper{db.s, jf}, strict, checksum)
}
// Replay journal to memdb.
for {
r, err := jr.Next()
if err != nil {
if err == io.EOF {
break
}
fr.Close()
return errors.SetFile(err, jf)
}
buf.Reset()
if _, err := buf.ReadFrom(r); err != nil {
if err == io.ErrUnexpectedEOF {
// This is the error returned due to corruption, with strict == false.
continue
}
fr.Close()
return errors.SetFile(err, jf)
}
if err := batch.memDecodeAndReplay(db.seq, buf.Bytes(), mdb); err != nil {
if !strict && errors.IsCorrupted(err) {
db.s.logf("journal error: %v (skipped)", err)
// We won't apply sequence number as it might be corrupted.
continue
}
fr.Close()
return errors.SetFile(err, jf)
}
// Save sequence number.
db.seq = batch.seq + uint64(batch.Len())
}
fr.Close()
}
}
// Set memDB.
db.mem = &memDB{db: db, DB: mdb, ref: 1}
return nil
}
func (db *DB) get(key []byte, seq uint64, ro *opt.ReadOptions) (value []byte, err error) {
ikey := newIkey(key, seq, ktSeek)
@@ -614,7 +734,7 @@ func (db *DB) get(key []byte, seq uint64, ro *opt.ReadOptions) (value []byte, er
}
defer m.decref()
mk, mv, me := m.mdb.Find(ikey)
mk, mv, me := m.Find(ikey)
if me == nil {
ukey, _, kt, kerr := parseIkey(mk)
if kerr != nil {
@@ -652,7 +772,7 @@ func (db *DB) has(key []byte, seq uint64, ro *opt.ReadOptions) (ret bool, err er
}
defer m.decref()
mk, _, me := m.mdb.Find(ikey)
mk, _, me := m.Find(ikey)
if me == nil {
ukey, _, kt, kerr := parseIkey(mk)
if kerr != nil {
@@ -784,7 +904,7 @@ func (db *DB) GetProperty(name string) (value string, err error) {
const prefix = "leveldb."
if !strings.HasPrefix(name, prefix) {
return "", errors.New("leveldb: GetProperty: unknown property: " + name)
return "", ErrNotFound
}
p := name[len(prefix):]
@@ -798,7 +918,7 @@ func (db *DB) GetProperty(name string) (value string, err error) {
var rest string
n, _ := fmt.Sscanf(p[len(numFilesPrefix):], "%d%s", &level, &rest)
if n != 1 || int(level) >= db.s.o.GetNumLevel() {
err = errors.New("leveldb: GetProperty: invalid property: " + name)
err = ErrNotFound
} else {
value = fmt.Sprint(v.tLen(int(level)))
}
@@ -837,7 +957,7 @@ func (db *DB) GetProperty(name string) (value string, err error) {
case p == "aliveiters":
value = fmt.Sprintf("%d", atomic.LoadInt32(&db.aliveIters))
default:
err = errors.New("leveldb: GetProperty: unknown property: " + name)
err = ErrNotFound
}
return
@@ -900,6 +1020,9 @@ func (db *DB) Close() error {
var err error
select {
case err = <-db.compErrC:
if err == ErrReadOnly {
err = nil
}
default:
}

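Assuming goleveldb's opt.Options gained a ReadOnly field to match the GetReadOnly accessor used above (an assumption, not confirmed by this diff), opening a database read-only might look like:

```go
package main

import (
	"log"

	"github.com/syndtr/goleveldb/leveldb"
	"github.com/syndtr/goleveldb/leveldb/opt"
)

func main() {
	// In read-only mode the journals are replayed into memory only
	// (recoverJournalRO) and no compaction goroutines are started.
	db, err := leveldb.OpenFile("/tmp/db", &opt.Options{ReadOnly: true})
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if v, err := db.Get([]byte("key"), nil); err == nil {
		log.Printf("key = %s", v)
	}
}
```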

@@ -11,7 +11,6 @@ import (
"time"
"github.com/syndtr/goleveldb/leveldb/errors"
"github.com/syndtr/goleveldb/leveldb/memdb"
"github.com/syndtr/goleveldb/leveldb/opt"
)
@@ -62,58 +61,8 @@ func (p *cStatsStaging) stopTimer() {
}
}
type cMem struct {
s *session
level int
rec *sessionRecord
}
func newCMem(s *session) *cMem {
return &cMem{s: s, rec: &sessionRecord{numLevel: s.o.GetNumLevel()}}
}
func (c *cMem) flush(mem *memdb.DB, level int) error {
s := c.s
// Write memdb to table.
iter := mem.NewIterator(nil)
defer iter.Release()
t, n, err := s.tops.createFrom(iter)
if err != nil {
return err
}
// Pick level.
if level < 0 {
v := s.version()
level = v.pickLevel(t.imin.ukey(), t.imax.ukey())
v.release()
}
c.rec.addTableFile(level, t)
s.logf("mem@flush created L%d@%d N·%d S·%s %q:%q", level, t.file.Num(), n, shortenb(int(t.size)), t.imin, t.imax)
c.level = level
return nil
}
func (c *cMem) reset() {
c.rec = &sessionRecord{numLevel: c.s.o.GetNumLevel()}
}
func (c *cMem) commit(journal, seq uint64) error {
c.rec.setJournalNum(journal)
c.rec.setSeqNum(seq)
// Commit changes.
return c.s.commit(c.rec)
}
func (db *DB) compactionError() {
var (
err error
wlocked bool
)
var err error
noerr:
// No error.
for {
@@ -121,7 +70,7 @@ noerr:
case err = <-db.compErrSetC:
switch {
case err == nil:
case errors.IsCorrupted(err):
case err == ErrReadOnly, errors.IsCorrupted(err):
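// ErrReadOnly is treated like corruption: a persistent condition, so park
// in hasperr, where the write lock is held and writers fail fast.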
goto hasperr
default:
goto haserr
@@ -139,7 +88,7 @@ haserr:
switch {
case err == nil:
goto noerr
case errors.IsCorrupted(err):
case err == ErrReadOnly, errors.IsCorrupted(err):
goto hasperr
default:
}
@@ -155,9 +104,9 @@ hasperr:
case db.compPerErrC <- err:
case db.writeLockC <- struct{}{}:
// Hold write lock, so that write won't pass-through.
wlocked = true
db.compWriteLocking = true
case _, _ = <-db.closeC:
if wlocked {
if db.compWriteLocking {
// We should release the lock or Close will hang.
<-db.writeLockC
}
@@ -287,21 +236,18 @@ func (db *DB) compactionExitTransact() {
}
func (db *DB) memCompaction() {
mem := db.getFrozenMem()
if mem == nil {
mdb := db.getFrozenMem()
if mdb == nil {
return
}
defer mem.decref()
defer mdb.decref()
c := newCMem(db.s)
stats := new(cStatsStaging)
db.logf("mem@flush N·%d S·%s", mem.mdb.Len(), shortenb(mem.mdb.Size()))
db.logf("memdb@flush N·%d S·%s", mdb.Len(), shortenb(mdb.Size()))
// Don't compact empty memdb.
if mem.mdb.Len() == 0 {
db.logf("mem@flush skipping")
// drop frozen mem
if mdb.Len() == 0 {
db.logf("memdb@flush skipping")
// drop frozen memdb
db.dropFrozenMem()
return
}
@@ -317,13 +263,20 @@ func (db *DB) memCompaction() {
return
}
db.compactionTransactFunc("mem@flush", func(cnt *compactionTransactCounter) (err error) {
var (
rec = &sessionRecord{}
stats = &cStatsStaging{}
flushLevel int
)
db.compactionTransactFunc("memdb@flush", func(cnt *compactionTransactCounter) (err error) {
stats.startTimer()
defer stats.stopTimer()
return c.flush(mem.mdb, -1)
flushLevel, err = db.s.flushMemdb(rec, mdb.DB, -1)
stats.stopTimer()
return
}, func() error {
for _, r := range c.rec.addedTables {
db.logf("mem@flush revert @%d", r.num)
for _, r := range rec.addedTables {
db.logf("memdb@flush revert @%d", r.num)
f := db.s.getTableFile(r.num)
if err := f.Remove(); err != nil {
return err
@@ -332,20 +285,23 @@ func (db *DB) memCompaction() {
return nil
})
db.compactionTransactFunc("mem@commit", func(cnt *compactionTransactCounter) (err error) {
db.compactionTransactFunc("memdb@commit", func(cnt *compactionTransactCounter) (err error) {
stats.startTimer()
defer stats.stopTimer()
return c.commit(db.journalFile.Num(), db.frozenSeq)
rec.setJournalNum(db.journalFile.Num())
rec.setSeqNum(db.frozenSeq)
err = db.s.commit(rec)
stats.stopTimer()
return
}, nil)
db.logf("mem@flush committed F·%d T·%v", len(c.rec.addedTables), stats.duration)
db.logf("memdb@flush committed F·%d T·%v", len(rec.addedTables), stats.duration)
for _, r := range c.rec.addedTables {
for _, r := range rec.addedTables {
stats.write += r.size
}
db.compStats[c.level].add(stats)
db.compStats[flushLevel].add(stats)
// Drop frozen mem.
// Drop frozen memdb.
db.dropFrozenMem()
// Resume table compaction.
@@ -557,7 +513,7 @@ func (b *tableCompactionBuilder) revert() error {
func (db *DB) tableCompaction(c *compaction, noTrivial bool) {
defer c.release()
rec := &sessionRecord{numLevel: db.s.o.GetNumLevel()}
rec := &sessionRecord{}
rec.addCompPtr(c.level, c.imax)
if !noTrivial && c.trivial() {

View File

@@ -8,6 +8,7 @@ package leveldb
import (
"errors"
"math/rand"
"runtime"
"sync"
"sync/atomic"
@@ -39,11 +40,11 @@ func (db *DB) newRawIterator(slice *util.Range, ro *opt.ReadOptions) iterator.It
ti := v.getIterators(slice, ro)
n := len(ti) + 2
i := make([]iterator.Iterator, 0, n)
emi := em.mdb.NewIterator(slice)
emi := em.NewIterator(slice)
emi.SetReleaser(&memdbReleaser{m: em})
i = append(i, emi)
if fm != nil {
fmi := fm.mdb.NewIterator(slice)
fmi := fm.NewIterator(slice)
fmi.SetReleaser(&memdbReleaser{m: fm})
i = append(i, fmi)
}
@@ -80,6 +81,10 @@ func (db *DB) newIterator(seq uint64, slice *util.Range, ro *opt.ReadOptions) *d
return iter
}
func (db *DB) iterSamplingRate() int {
return rand.Intn(2 * db.s.o.GetIteratorSamplingRate())
}
type dir int
const (
@@ -98,11 +103,21 @@ type dbIter struct {
seq uint64
strict bool
dir dir
key []byte
value []byte
err error
releaser util.Releaser
samplingGap int
dir dir
key []byte
value []byte
err error
releaser util.Releaser
}
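// sampleSeek charges the size of the current entry against a randomized
// byte budget (samplingGap); whenever the budget runs out, the read is
// reported as a seek sample, which may schedule a seek-triggered compaction.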
func (i *dbIter) sampleSeek() {
ikey := i.iter.Key()
i.samplingGap -= len(ikey) + len(i.iter.Value())
for i.samplingGap < 0 {
i.samplingGap += i.db.iterSamplingRate()
i.db.sampleSeek(ikey)
}
}
func (i *dbIter) setErr(err error) {
@@ -175,6 +190,7 @@ func (i *dbIter) Seek(key []byte) bool {
func (i *dbIter) next() bool {
for {
if ukey, seq, kt, kerr := parseIkey(i.iter.Key()); kerr == nil {
i.sampleSeek()
if seq <= i.seq {
switch kt {
case ktDel:
@@ -225,6 +241,7 @@ func (i *dbIter) prev() bool {
if i.iter.Valid() {
for {
if ukey, seq, kt, kerr := parseIkey(i.iter.Key()); kerr == nil {
i.sampleSeek()
if seq <= i.seq {
if !del && i.icmp.uCompare(ukey, i.key) < 0 {
return true
@@ -266,6 +283,7 @@ func (i *dbIter) Prev() bool {
case dirForward:
for i.iter.Prev() {
if ukey, _, _, kerr := parseIkey(i.iter.Key()); kerr == nil {
i.sampleSeek()
if i.icmp.uCompare(ukey, i.key) < 0 {
goto cont
}

View File

@@ -15,8 +15,8 @@ import (
)
type memDB struct {
db *DB
mdb *memdb.DB
db *DB
*memdb.DB
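// The memdb is embedded rather than held in a named field, so its
// methods (Find, NewIterator, Free, ...) are promoted onto memDB.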
ref int32
}
@@ -27,12 +27,12 @@ func (m *memDB) incref() {
func (m *memDB) decref() {
if ref := atomic.AddInt32(&m.ref, -1); ref == 0 {
// Only put back memdb with std capacity.
if m.mdb.Capacity() == m.db.s.o.GetWriteBuffer() {
m.mdb.Reset()
m.db.mpoolPut(m.mdb)
if m.Capacity() == m.db.s.o.GetWriteBuffer() {
m.Reset()
m.db.mpoolPut(m.DB)
}
m.db = nil
m.mdb = nil
m.DB = nil
} else if ref < 0 {
panic("negative memdb ref")
}
@@ -48,6 +48,15 @@ func (db *DB) addSeq(delta uint64) {
atomic.AddUint64(&db.seq, delta)
}
func (db *DB) sampleSeek(ikey iKey) {
v := db.s.version()
if v.sampleSeek(ikey) {
// Trigger table compaction.
db.compSendTrigger(db.tcompCmdC)
}
v.release()
}
func (db *DB) mpoolPut(mem *memdb.DB) {
defer func() {
recover()
@@ -117,7 +126,7 @@ func (db *DB) newMem(n int) (mem *memDB, err error) {
}
mem = &memDB{
db: db,
mdb: mdb,
DB: mdb,
ref: 2,
}
db.mem = mem

View File

@@ -405,19 +405,21 @@ func (h *dbHarness) compactRange(min, max string) {
t.Log("DB range compaction done")
}
func (h *dbHarness) sizeAssert(start, limit string, low, hi uint64) {
t := h.t
db := h.db
s, err := db.SizeOf([]util.Range{
func (h *dbHarness) sizeOf(start, limit string) uint64 {
sz, err := h.db.SizeOf([]util.Range{
{[]byte(start), []byte(limit)},
})
if err != nil {
t.Error("SizeOf: got error: ", err)
h.t.Error("SizeOf: got error: ", err)
}
if s.Sum() < low || s.Sum() > hi {
t.Errorf("sizeof %q to %q not in range, want %d - %d, got %d",
shorten(start), shorten(limit), low, hi, s.Sum())
return sz.Sum()
}
func (h *dbHarness) sizeAssert(start, limit string, low, hi uint64) {
sz := h.sizeOf(start, limit)
if sz < low || sz > hi {
h.t.Errorf("sizeOf %q to %q not in range, want %d - %d, got %d",
shorten(start), shorten(limit), low, hi, sz)
}
}
@@ -2443,7 +2445,7 @@ func TestDB_TableCompactionBuilder(t *testing.T) {
if err != nil {
t.Fatal(err)
}
rec := &sessionRecord{numLevel: s.o.GetNumLevel()}
rec := &sessionRecord{}
rec.addTableFile(i, tf)
if err := s.commit(rec); err != nil {
t.Fatal(err)
@@ -2453,7 +2455,7 @@ func TestDB_TableCompactionBuilder(t *testing.T) {
// Build grandparent.
v := s.version()
c := newCompaction(s, v, 1, append(tFiles{}, v.tables[1]...))
rec := &sessionRecord{numLevel: s.o.GetNumLevel()}
rec := &sessionRecord{}
b := &tableCompactionBuilder{
s: s,
c: c,
@@ -2477,7 +2479,7 @@ func TestDB_TableCompactionBuilder(t *testing.T) {
// Build level-1.
v = s.version()
c = newCompaction(s, v, 0, append(tFiles{}, v.tables[0]...))
rec = &sessionRecord{numLevel: s.o.GetNumLevel()}
rec = &sessionRecord{}
b = &tableCompactionBuilder{
s: s,
c: c,
@@ -2521,7 +2523,7 @@ func TestDB_TableCompactionBuilder(t *testing.T) {
// Compaction with transient error.
v = s.version()
c = newCompaction(s, v, 1, append(tFiles{}, v.tables[1]...))
rec = &sessionRecord{numLevel: s.o.GetNumLevel()}
rec = &sessionRecord{}
b = &tableCompactionBuilder{
s: s,
c: c,
@@ -2577,3 +2579,123 @@ func TestDB_TableCompactionBuilder(t *testing.T) {
}
v.release()
}
func testDB_IterTriggeredCompaction(t *testing.T, limitDiv int) {
const (
vSize = 200 * opt.KiB
tSize = 100 * opt.MiB
mIter = 100
n = tSize / vSize
)
h := newDbHarnessWopt(t, &opt.Options{
Compression: opt.NoCompression,
DisableBlockCache: true,
})
defer h.close()
key := func(x int) string {
return fmt.Sprintf("v%06d", x)
}
// Fill.
value := strings.Repeat("x", vSize)
for i := 0; i < n; i++ {
h.put(key(i), value)
}
h.compactMem()
// Delete all.
for i := 0; i < n; i++ {
h.delete(key(i))
}
h.compactMem()
var (
limit = n / limitDiv
startKey = key(0)
limitKey = key(limit)
maxKey = key(n)
slice = &util.Range{Limit: []byte(limitKey)}
initialSize0 = h.sizeOf(startKey, limitKey)
initialSize1 = h.sizeOf(limitKey, maxKey)
)
t.Logf("inital size %s [rest %s]", shortenb(int(initialSize0)), shortenb(int(initialSize1)))
for r := 0; true; r++ {
if r >= mIter {
t.Fatal("taking too long to compact")
}
// Iterates.
iter := h.db.NewIterator(slice, h.ro)
for iter.Next() {
}
if err := iter.Error(); err != nil {
t.Fatalf("Iter err: %v", err)
}
iter.Release()
// Wait compaction.
h.waitCompaction()
// Check size.
size0 := h.sizeOf(startKey, limitKey)
size1 := h.sizeOf(limitKey, maxKey)
t.Logf("#%03d size %s [rest %s]", r, shortenb(int(size0)), shortenb(int(size1)))
if size0 < initialSize0/10 {
break
}
}
if initialSize1 > 0 {
h.sizeAssert(limitKey, maxKey, initialSize1/4-opt.MiB, initialSize1+opt.MiB)
}
}
func TestDB_IterTriggeredCompaction(t *testing.T) {
testDB_IterTriggeredCompaction(t, 1)
}
func TestDB_IterTriggeredCompactionHalf(t *testing.T) {
testDB_IterTriggeredCompaction(t, 2)
}
func TestDB_ReadOnly(t *testing.T) {
h := newDbHarness(t)
defer h.close()
h.put("foo", "v1")
h.put("bar", "v2")
h.compactMem()
h.put("xfoo", "v1")
h.put("xbar", "v2")
t.Log("Trigger read-only")
if err := h.db.SetReadOnly(); err != nil {
h.close()
t.Fatalf("SetReadOnly error: %v", err)
}
h.stor.SetEmuErr(storage.TypeAll, tsOpCreate, tsOpReplace, tsOpRemove, tsOpWrite, tsOpSync)
ro := func(key, value, wantValue string) {
if err := h.db.Put([]byte(key), []byte(value), h.wo); err != ErrReadOnly {
t.Fatalf("unexpected error: %v", err)
}
h.getVal(key, wantValue)
}
ro("foo", "vx", "v1")
h.o.ReadOnly = true
h.reopenDB()
ro("foo", "vx", "v1")
ro("bar", "vx", "v2")
h.assertNumKeys(4)
}

View File

@@ -63,24 +63,24 @@ func (db *DB) rotateMem(n int) (mem *memDB, err error) {
return
}
func (db *DB) flush(n int) (mem *memDB, nn int, err error) {
func (db *DB) flush(n int) (mdb *memDB, mdbFree int, err error) {
delayed := false
flush := func() (retry bool) {
v := db.s.version()
defer v.release()
mem = db.getEffectiveMem()
mdb = db.getEffectiveMem()
defer func() {
if retry {
mem.decref()
mem = nil
mdb.decref()
mdb = nil
}
}()
nn = mem.mdb.Free()
mdbFree = mdb.Free()
switch {
case v.tLen(0) >= db.s.o.GetWriteL0SlowdownTrigger() && !delayed:
delayed = true
time.Sleep(time.Millisecond)
case nn >= n:
case mdbFree >= n:
return false
case v.tLen(0) >= db.s.o.GetWriteL0PauseTrigger():
delayed = true
@@ -90,15 +90,15 @@ func (db *DB) flush(n int) (mem *memDB, nn int, err error) {
}
default:
// Allow memdb to grow if it has no entry.
if mem.mdb.Len() == 0 {
nn = n
if mdb.Len() == 0 {
mdbFree = n
} else {
mem.decref()
mem, err = db.rotateMem(n)
mdb.decref()
mdb, err = db.rotateMem(n)
if err == nil {
nn = mem.mdb.Free()
mdbFree = mdb.Free()
} else {
nn = 0
mdbFree = 0
}
}
return false
@@ -157,18 +157,18 @@ func (db *DB) Write(b *Batch, wo *opt.WriteOptions) (err error) {
}
}()
mem, memFree, err := db.flush(b.size())
mdb, mdbFree, err := db.flush(b.size())
if err != nil {
return
}
defer mem.decref()
defer mdb.decref()
// Calculate maximum size of the batch.
m := 1 << 20
if x := b.size(); x <= 128<<10 {
m = x + (128 << 10)
}
m = minInt(m, memFree)
m = minInt(m, mdbFree)
// Merge with other batch.
drain:
@@ -197,7 +197,7 @@ drain:
select {
case db.journalC <- b:
// Write into memdb
if berr := b.memReplay(mem.mdb); berr != nil {
if berr := b.memReplay(mdb.DB); berr != nil {
panic(berr)
}
case err = <-db.compPerErrC:
@@ -211,7 +211,7 @@ drain:
case err = <-db.journalAckC:
if err != nil {
// Revert memdb if error detected
if berr := b.revertMemReplay(mem.mdb); berr != nil {
if berr := b.revertMemReplay(mdb.DB); berr != nil {
panic(berr)
}
return
@@ -225,7 +225,7 @@ drain:
if err != nil {
return
}
if berr := b.memReplay(mem.mdb); berr != nil {
if berr := b.memReplay(mdb.DB); berr != nil {
panic(berr)
}
}
@@ -233,7 +233,7 @@ drain:
// Set last seq number.
db.addSeq(uint64(b.Len()))
if b.size() >= memFree {
if b.size() >= mdbFree {
db.rotateMem(0)
}
return
@@ -249,8 +249,7 @@ func (db *DB) Put(key, value []byte, wo *opt.WriteOptions) error {
return db.Write(b, wo)
}
// Delete deletes the value for the given key. It returns ErrNotFound if
// the DB does not contain the key.
// Delete deletes the value for the given key.
//
// It is safe to modify the contents of the arguments after Delete returns.
func (db *DB) Delete(key []byte, wo *opt.WriteOptions) error {
@@ -290,9 +289,9 @@ func (db *DB) CompactRange(r util.Range) error {
}
// Check for overlaps in memdb.
mem := db.getEffectiveMem()
defer mem.decref()
if isMemOverlaps(db.s.icmp, mem.mdb, r.Start, r.Limit) {
mdb := db.getEffectiveMem()
defer mdb.decref()
if isMemOverlaps(db.s.icmp, mdb.DB, r.Start, r.Limit) {
// Memdb compaction.
if _, err := db.rotateMem(0); err != nil {
<-db.writeLockC
@@ -309,3 +308,31 @@ func (db *DB) CompactRange(r util.Range) error {
// Table compaction.
return db.compSendRange(db.tcompCmdC, -1, r.Start, r.Limit)
}
// SetReadOnly makes DB read-only. It will stay read-only until reopened.
func (db *DB) SetReadOnly() error {
if err := db.ok(); err != nil {
return err
}
// Lock writer.
select {
case db.writeLockC <- struct{}{}:
db.compWriteLocking = true
case err := <-db.compPerErrC:
return err
case _, _ = <-db.closeC:
return ErrClosed
}
// Set compaction read-only.
select {
case db.compErrSetC <- ErrReadOnly:
case perr := <-db.compPerErrC:
return perr
case _, _ = <-db.closeC:
return ErrClosed
}
return nil
}
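From a client's perspective, a rough usage sketch (db is a hypothetical open *leveldb.DB handle; ErrReadOnly is defined in the errors block below):

// Switch the live DB to read-only; compactions stop and the write lock
// is held until the DB is closed and reopened.
if err := db.SetReadOnly(); err != nil {
	log.Fatal(err)
}
// Subsequent writes fail with ErrReadOnly.
if err := db.Put([]byte("k"), []byte("v"), nil); err != leveldb.ErrReadOnly {
	log.Fatalf("expected ErrReadOnly, got %v", err)
}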

View File

@@ -12,6 +12,7 @@ import (
var (
ErrNotFound = errors.ErrNotFound
ErrReadOnly = errors.New("leveldb: read-only mode")
ErrSnapshotReleased = errors.New("leveldb: snapshot released")
ErrIterReleased = errors.New("leveldb: iterator released")
ErrClosed = errors.New("leveldb: closed")

View File

@@ -206,6 +206,7 @@ func (p *DB) randHeight() (h int) {
return
}
// Must hold RW-lock if prev == true, as it uses the shared prevNode slice.
func (p *DB) findGE(key []byte, prev bool) (int, bool) {
node := 0
h := p.maxHeight - 1
@@ -302,7 +303,7 @@ func (p *DB) Put(key []byte, value []byte) error {
node := len(p.nodeData)
p.nodeData = append(p.nodeData, kvOffset, len(key), len(value), h)
for i, n := range p.prevNode[:h] {
m := n + 4 + i
m := n + nNext + i
p.nodeData = append(p.nodeData, p.nodeData[m])
p.nodeData[m] = node
}
@@ -434,20 +435,22 @@ func (p *DB) Len() int {
// Reset resets the DB to its initial empty state, allowing the buffer to be reused.
func (p *DB) Reset() {
p.mu.Lock()
p.rnd = rand.New(rand.NewSource(0xdeadbeef))
p.maxHeight = 1
p.n = 0
p.kvSize = 0
p.kvData = p.kvData[:0]
p.nodeData = p.nodeData[:4+tMaxHeight]
p.nodeData = p.nodeData[:nNext+tMaxHeight]
p.nodeData[nKV] = 0
p.nodeData[nKey] = 0
p.nodeData[nVal] = 0
p.nodeData[nHeight] = tMaxHeight
for n := 0; n < tMaxHeight; n++ {
p.nodeData[4+n] = 0
p.nodeData[nNext+n] = 0
p.prevNode[n] = 0
}
p.mu.Unlock()
}
// New creates a new initialized in-memory key/value DB. The capacity

View File

@@ -34,10 +34,11 @@ var (
DefaultCompactionTotalSize = 10 * MiB
DefaultCompactionTotalSizeMultiplier = 10.0
DefaultCompressionType = SnappyCompression
DefaultOpenFilesCacher = LRUCacher
DefaultOpenFilesCacheCapacity = 500
DefaultIteratorSamplingRate = 1 * MiB
DefaultMaxMemCompationLevel = 2
DefaultNumLevel = 7
DefaultOpenFilesCacher = LRUCacher
DefaultOpenFilesCacheCapacity = 500
DefaultWriteBuffer = 4 * MiB
DefaultWriteL0PauseTrigger = 12
DefaultWriteL0SlowdownTrigger = 8
@@ -249,6 +250,11 @@ type Options struct {
// The default value (DefaultCompression) uses snappy compression.
Compression Compression
// DisableBufferPool disables the use of util.BufferPool functionality.
//
// The default value is false.
DisableBufferPool bool
// DisableBlockCache disables the use of cache.Cache functionality on
// 'sorted table' blocks.
//
@@ -288,6 +294,13 @@ type Options struct {
// The default value is nil.
Filter filter.Filter
// IteratorSamplingRate defines the approximate gap (in bytes) between read
// samples taken by an iterator. The samples are used to determine when
// compaction should be triggered.
//
// The default is 1MiB.
IteratorSamplingRate int
// MaxMemCompationLevel defines the maximum level a newly compacted 'memdb'
// will be pushed into if it doesn't create overlap. This should be less than
// NumLevel. Use -1 for level-0.
@@ -313,6 +326,11 @@ type Options struct {
// The default value is 500.
OpenFilesCacheCapacity int
// ReadOnly, if true, opens the DB in read-only mode.
//
// The default value is false.
ReadOnly bool
// Strict defines the DB strict level.
Strict Strict
@@ -464,6 +482,20 @@ func (o *Options) GetCompression() Compression {
return o.Compression
}
func (o *Options) GetDisableBufferPool() bool {
if o == nil {
return false
}
return o.DisableBufferPool
}
func (o *Options) GetDisableBlockCache() bool {
if o == nil {
return false
}
return o.DisableBlockCache
}
func (o *Options) GetDisableCompactionBackoff() bool {
if o == nil {
return false
@@ -492,6 +524,13 @@ func (o *Options) GetFilter() filter.Filter {
return o.Filter
}
func (o *Options) GetIteratorSamplingRate() int {
if o == nil || o.IteratorSamplingRate <= 0 {
return DefaultIteratorSamplingRate
}
return o.IteratorSamplingRate
}
func (o *Options) GetMaxMemCompationLevel() int {
level := DefaultMaxMemCompationLevel
if o != nil {
@@ -533,6 +572,13 @@ func (o *Options) GetOpenFilesCacheCapacity() int {
return o.OpenFilesCacheCapacity
}
func (o *Options) GetReadOnly() bool {
if o == nil {
return false
}
return o.ReadOnly
}
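Taken together, a hedged sketch of the new knobs (values arbitrary; leveldb.OpenFile and the option names are as defined in this package, the path a placeholder):

o := &opt.Options{
	// Sample reads roughly every 512 KiB iterated instead of the
	// 1 MiB default, making seek-triggered compaction more eager.
	IteratorSamplingRate: 512 * opt.KiB,
	// Skip the util.BufferPool for block reads.
	DisableBufferPool: true,
	// Open without allowing writes or compactions.
	ReadOnly: true,
}
db, err := leveldb.OpenFile("/path/to/db", o)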
func (o *Options) GetStrict(strict Strict) bool {
if o == nil || o.Strict == 0 {
return DefaultStrict&strict != 0

View File

@@ -11,10 +11,8 @@ import (
"io"
"os"
"sync"
"sync/atomic"
"github.com/syndtr/goleveldb/leveldb/errors"
"github.com/syndtr/goleveldb/leveldb/iterator"
"github.com/syndtr/goleveldb/leveldb/journal"
"github.com/syndtr/goleveldb/leveldb/opt"
"github.com/syndtr/goleveldb/leveldb/storage"
@@ -127,11 +125,16 @@ func (s *session) recover() (err error) {
return
}
defer reader.Close()
strict := s.o.GetStrict(opt.StrictManifest)
jr := journal.NewReader(reader, dropper{s, m}, strict, true)
staging := s.stVersion.newStaging()
rec := &sessionRecord{numLevel: s.o.GetNumLevel()}
var (
// Options.
numLevel = s.o.GetNumLevel()
strict = s.o.GetStrict(opt.StrictManifest)
jr = journal.NewReader(reader, dropper{s, m}, strict, true)
rec = &sessionRecord{}
staging = s.stVersion.newStaging()
)
for {
var r io.Reader
r, err = jr.Next()
@@ -143,7 +146,7 @@ func (s *session) recover() (err error) {
return errors.SetFile(err, m)
}
err = rec.decode(r)
err = rec.decode(r, numLevel)
if err == nil {
// save compact pointers
for _, r := range rec.compPtrs {
@@ -206,250 +209,3 @@ func (s *session) commit(r *sessionRecord) (err error) {
return
}
// Pick a compaction based on current state; need external synchronization.
func (s *session) pickCompaction() *compaction {
v := s.version()
var level int
var t0 tFiles
if v.cScore >= 1 {
level = v.cLevel
cptr := s.stCompPtrs[level]
tables := v.tables[level]
for _, t := range tables {
if cptr == nil || s.icmp.Compare(t.imax, cptr) > 0 {
t0 = append(t0, t)
break
}
}
if len(t0) == 0 {
t0 = append(t0, tables[0])
}
} else {
if p := atomic.LoadPointer(&v.cSeek); p != nil {
ts := (*tSet)(p)
level = ts.level
t0 = append(t0, ts.table)
} else {
v.release()
return nil
}
}
return newCompaction(s, v, level, t0)
}
// Create compaction from given level and range; need external synchronization.
func (s *session) getCompactionRange(level int, umin, umax []byte) *compaction {
v := s.version()
t0 := v.tables[level].getOverlaps(nil, s.icmp, umin, umax, level == 0)
if len(t0) == 0 {
v.release()
return nil
}
// Avoid compacting too much in one shot in case the range is large.
// But we cannot do this for level-0 since level-0 files can overlap
// and we must not pick one file and drop another older file if the
// two files overlap.
if level > 0 {
limit := uint64(v.s.o.GetCompactionSourceLimit(level))
total := uint64(0)
for i, t := range t0 {
total += t.size
if total >= limit {
s.logf("table@compaction limiting F·%d -> F·%d", len(t0), i+1)
t0 = t0[:i+1]
break
}
}
}
return newCompaction(s, v, level, t0)
}
func newCompaction(s *session, v *version, level int, t0 tFiles) *compaction {
c := &compaction{
s: s,
v: v,
level: level,
tables: [2]tFiles{t0, nil},
maxGPOverlaps: uint64(s.o.GetCompactionGPOverlaps(level)),
tPtrs: make([]int, s.o.GetNumLevel()),
}
c.expand()
c.save()
return c
}
// compaction represents a compaction state.
type compaction struct {
s *session
v *version
level int
tables [2]tFiles
maxGPOverlaps uint64
gp tFiles
gpi int
seenKey bool
gpOverlappedBytes uint64
imin, imax iKey
tPtrs []int
released bool
snapGPI int
snapSeenKey bool
snapGPOverlappedBytes uint64
snapTPtrs []int
}
func (c *compaction) save() {
c.snapGPI = c.gpi
c.snapSeenKey = c.seenKey
c.snapGPOverlappedBytes = c.gpOverlappedBytes
c.snapTPtrs = append(c.snapTPtrs[:0], c.tPtrs...)
}
func (c *compaction) restore() {
c.gpi = c.snapGPI
c.seenKey = c.snapSeenKey
c.gpOverlappedBytes = c.snapGPOverlappedBytes
c.tPtrs = append(c.tPtrs[:0], c.snapTPtrs...)
}
func (c *compaction) release() {
if !c.released {
c.released = true
c.v.release()
}
}
// Expand compacted tables; need external synchronization.
func (c *compaction) expand() {
limit := uint64(c.s.o.GetCompactionExpandLimit(c.level))
vt0, vt1 := c.v.tables[c.level], c.v.tables[c.level+1]
t0, t1 := c.tables[0], c.tables[1]
imin, imax := t0.getRange(c.s.icmp)
// We expand t0 here just in case a ukey hops across tables.
t0 = vt0.getOverlaps(t0, c.s.icmp, imin.ukey(), imax.ukey(), c.level == 0)
if len(t0) != len(c.tables[0]) {
imin, imax = t0.getRange(c.s.icmp)
}
t1 = vt1.getOverlaps(t1, c.s.icmp, imin.ukey(), imax.ukey(), false)
// Get entire range covered by compaction.
amin, amax := append(t0, t1...).getRange(c.s.icmp)
// See if we can grow the number of inputs in "level" without
// changing the number of "level+1" files we pick up.
if len(t1) > 0 {
exp0 := vt0.getOverlaps(nil, c.s.icmp, amin.ukey(), amax.ukey(), c.level == 0)
if len(exp0) > len(t0) && t1.size()+exp0.size() < limit {
xmin, xmax := exp0.getRange(c.s.icmp)
exp1 := vt1.getOverlaps(nil, c.s.icmp, xmin.ukey(), xmax.ukey(), false)
if len(exp1) == len(t1) {
c.s.logf("table@compaction expanding L%d+L%d (F·%d S·%s)+(F·%d S·%s) -> (F·%d S·%s)+(F·%d S·%s)",
c.level, c.level+1, len(t0), shortenb(int(t0.size())), len(t1), shortenb(int(t1.size())),
len(exp0), shortenb(int(exp0.size())), len(exp1), shortenb(int(exp1.size())))
imin, imax = xmin, xmax
t0, t1 = exp0, exp1
amin, amax = append(t0, t1...).getRange(c.s.icmp)
}
}
}
// Compute the set of grandparent files that overlap this compaction
// (parent == level+1; grandparent == level+2)
if c.level+2 < c.s.o.GetNumLevel() {
c.gp = c.v.tables[c.level+2].getOverlaps(c.gp, c.s.icmp, amin.ukey(), amax.ukey(), false)
}
c.tables[0], c.tables[1] = t0, t1
c.imin, c.imax = imin, imax
}
// Check whether compaction is trivial.
func (c *compaction) trivial() bool {
return len(c.tables[0]) == 1 && len(c.tables[1]) == 0 && c.gp.size() <= c.maxGPOverlaps
}
func (c *compaction) baseLevelForKey(ukey []byte) bool {
for level, tables := range c.v.tables[c.level+2:] {
for c.tPtrs[level] < len(tables) {
t := tables[c.tPtrs[level]]
if c.s.icmp.uCompare(ukey, t.imax.ukey()) <= 0 {
// We've advanced far enough.
if c.s.icmp.uCompare(ukey, t.imin.ukey()) >= 0 {
// Key falls in this file's range, so definitely not base level.
return false
}
break
}
c.tPtrs[level]++
}
}
return true
}
func (c *compaction) shouldStopBefore(ikey iKey) bool {
for ; c.gpi < len(c.gp); c.gpi++ {
gp := c.gp[c.gpi]
if c.s.icmp.Compare(ikey, gp.imax) <= 0 {
break
}
if c.seenKey {
c.gpOverlappedBytes += gp.size
}
}
c.seenKey = true
if c.gpOverlappedBytes > c.maxGPOverlaps {
// Too much overlap for current output; start new output.
c.gpOverlappedBytes = 0
return true
}
return false
}
// Creates an iterator.
func (c *compaction) newIterator() iterator.Iterator {
// Creates iterator slice.
icap := len(c.tables)
if c.level == 0 {
// Special case for level-0
icap = len(c.tables[0]) + 1
}
its := make([]iterator.Iterator, 0, icap)
// Options.
ro := &opt.ReadOptions{
DontFillCache: true,
Strict: opt.StrictOverride,
}
strict := c.s.o.GetStrict(opt.StrictCompaction)
if strict {
ro.Strict |= opt.StrictReader
}
for i, tables := range c.tables {
if len(tables) == 0 {
continue
}
// Level-0 tables are not sorted and may overlap each other.
if c.level+i == 0 {
for _, t := range tables {
its = append(its, c.s.tops.newIterator(t, nil, ro))
}
} else {
it := iterator.NewIndexedIterator(tables.newIndexIterator(c.s.tops, c.s.icmp, nil, ro), strict)
its = append(its, it)
}
}
return iterator.NewMergedIterator(its, c.s.icmp, strict)
}

View File

@@ -0,0 +1,287 @@
// Copyright (c) 2012, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package leveldb
import (
"sync/atomic"
"github.com/syndtr/goleveldb/leveldb/iterator"
"github.com/syndtr/goleveldb/leveldb/memdb"
"github.com/syndtr/goleveldb/leveldb/opt"
)
func (s *session) pickMemdbLevel(umin, umax []byte) int {
v := s.version()
defer v.release()
return v.pickMemdbLevel(umin, umax)
}
func (s *session) flushMemdb(rec *sessionRecord, mdb *memdb.DB, level int) (level_ int, err error) {
// Create sorted table.
iter := mdb.NewIterator(nil)
defer iter.Release()
t, n, err := s.tops.createFrom(iter)
if err != nil {
return level, err
}
// Pick level and add to record.
if level < 0 {
level = s.pickMemdbLevel(t.imin.ukey(), t.imax.ukey())
}
rec.addTableFile(level, t)
s.logf("memdb@flush created L%d@%d N·%d S·%s %q:%q", level, t.file.Num(), n, shortenb(int(t.size)), t.imin, t.imax)
return level, nil
}
// Pick a compaction based on current state; need external synchronization.
func (s *session) pickCompaction() *compaction {
v := s.version()
var level int
var t0 tFiles
if v.cScore >= 1 {
level = v.cLevel
cptr := s.stCompPtrs[level]
tables := v.tables[level]
for _, t := range tables {
if cptr == nil || s.icmp.Compare(t.imax, cptr) > 0 {
t0 = append(t0, t)
break
}
}
if len(t0) == 0 {
t0 = append(t0, tables[0])
}
} else {
if p := atomic.LoadPointer(&v.cSeek); p != nil {
ts := (*tSet)(p)
level = ts.level
t0 = append(t0, ts.table)
} else {
v.release()
return nil
}
}
return newCompaction(s, v, level, t0)
}
// Create compaction from given level and range; need external synchronization.
func (s *session) getCompactionRange(level int, umin, umax []byte) *compaction {
v := s.version()
t0 := v.tables[level].getOverlaps(nil, s.icmp, umin, umax, level == 0)
if len(t0) == 0 {
v.release()
return nil
}
// Avoid compacting too much in one shot in case the range is large.
// But we cannot do this for level-0 since level-0 files can overlap
// and we must not pick one file and drop another older file if the
// two files overlap.
if level > 0 {
limit := uint64(v.s.o.GetCompactionSourceLimit(level))
total := uint64(0)
for i, t := range t0 {
total += t.size
if total >= limit {
s.logf("table@compaction limiting F·%d -> F·%d", len(t0), i+1)
t0 = t0[:i+1]
break
}
}
}
return newCompaction(s, v, level, t0)
}
func newCompaction(s *session, v *version, level int, t0 tFiles) *compaction {
c := &compaction{
s: s,
v: v,
level: level,
tables: [2]tFiles{t0, nil},
maxGPOverlaps: uint64(s.o.GetCompactionGPOverlaps(level)),
tPtrs: make([]int, s.o.GetNumLevel()),
}
c.expand()
c.save()
return c
}
// compaction represents a compaction state.
type compaction struct {
s *session
v *version
level int
tables [2]tFiles
maxGPOverlaps uint64
gp tFiles
gpi int
seenKey bool
gpOverlappedBytes uint64
imin, imax iKey
tPtrs []int
released bool
snapGPI int
snapSeenKey bool
snapGPOverlappedBytes uint64
snapTPtrs []int
}
func (c *compaction) save() {
c.snapGPI = c.gpi
c.snapSeenKey = c.seenKey
c.snapGPOverlappedBytes = c.gpOverlappedBytes
c.snapTPtrs = append(c.snapTPtrs[:0], c.tPtrs...)
}
func (c *compaction) restore() {
c.gpi = c.snapGPI
c.seenKey = c.snapSeenKey
c.gpOverlappedBytes = c.snapGPOverlappedBytes
c.tPtrs = append(c.tPtrs[:0], c.snapTPtrs...)
}
func (c *compaction) release() {
if !c.released {
c.released = true
c.v.release()
}
}
// Expand compacted tables; need external synchronization.
func (c *compaction) expand() {
limit := uint64(c.s.o.GetCompactionExpandLimit(c.level))
vt0, vt1 := c.v.tables[c.level], c.v.tables[c.level+1]
t0, t1 := c.tables[0], c.tables[1]
imin, imax := t0.getRange(c.s.icmp)
// We expand t0 here just in case a ukey hops across tables.
t0 = vt0.getOverlaps(t0, c.s.icmp, imin.ukey(), imax.ukey(), c.level == 0)
if len(t0) != len(c.tables[0]) {
imin, imax = t0.getRange(c.s.icmp)
}
t1 = vt1.getOverlaps(t1, c.s.icmp, imin.ukey(), imax.ukey(), false)
// Get entire range covered by compaction.
amin, amax := append(t0, t1...).getRange(c.s.icmp)
// See if we can grow the number of inputs in "level" without
// changing the number of "level+1" files we pick up.
if len(t1) > 0 {
exp0 := vt0.getOverlaps(nil, c.s.icmp, amin.ukey(), amax.ukey(), c.level == 0)
if len(exp0) > len(t0) && t1.size()+exp0.size() < limit {
xmin, xmax := exp0.getRange(c.s.icmp)
exp1 := vt1.getOverlaps(nil, c.s.icmp, xmin.ukey(), xmax.ukey(), false)
if len(exp1) == len(t1) {
c.s.logf("table@compaction expanding L%d+L%d (F·%d S·%s)+(F·%d S·%s) -> (F·%d S·%s)+(F·%d S·%s)",
c.level, c.level+1, len(t0), shortenb(int(t0.size())), len(t1), shortenb(int(t1.size())),
len(exp0), shortenb(int(exp0.size())), len(exp1), shortenb(int(exp1.size())))
imin, imax = xmin, xmax
t0, t1 = exp0, exp1
amin, amax = append(t0, t1...).getRange(c.s.icmp)
}
}
}
// Compute the set of grandparent files that overlap this compaction
// (parent == level+1; grandparent == level+2)
if c.level+2 < c.s.o.GetNumLevel() {
c.gp = c.v.tables[c.level+2].getOverlaps(c.gp, c.s.icmp, amin.ukey(), amax.ukey(), false)
}
c.tables[0], c.tables[1] = t0, t1
c.imin, c.imax = imin, imax
}
// Check whether compaction is trivial.
func (c *compaction) trivial() bool {
return len(c.tables[0]) == 1 && len(c.tables[1]) == 0 && c.gp.size() <= c.maxGPOverlaps
}
func (c *compaction) baseLevelForKey(ukey []byte) bool {
for level, tables := range c.v.tables[c.level+2:] {
for c.tPtrs[level] < len(tables) {
t := tables[c.tPtrs[level]]
if c.s.icmp.uCompare(ukey, t.imax.ukey()) <= 0 {
// We've advanced far enough.
if c.s.icmp.uCompare(ukey, t.imin.ukey()) >= 0 {
// Key falls in this file's range, so definitely not base level.
return false
}
break
}
c.tPtrs[level]++
}
}
return true
}
func (c *compaction) shouldStopBefore(ikey iKey) bool {
for ; c.gpi < len(c.gp); c.gpi++ {
gp := c.gp[c.gpi]
if c.s.icmp.Compare(ikey, gp.imax) <= 0 {
break
}
if c.seenKey {
c.gpOverlappedBytes += gp.size
}
}
c.seenKey = true
if c.gpOverlappedBytes > c.maxGPOverlaps {
// Too much overlap for current output; start new output.
c.gpOverlappedBytes = 0
return true
}
return false
}
// Creates an iterator.
func (c *compaction) newIterator() iterator.Iterator {
// Creates iterator slice.
icap := len(c.tables)
if c.level == 0 {
// Special case for level-0.
icap = len(c.tables[0]) + 1
}
its := make([]iterator.Iterator, 0, icap)
// Options.
ro := &opt.ReadOptions{
DontFillCache: true,
Strict: opt.StrictOverride,
}
strict := c.s.o.GetStrict(opt.StrictCompaction)
if strict {
ro.Strict |= opt.StrictReader
}
for i, tables := range c.tables {
if len(tables) == 0 {
continue
}
// Level-0 tables are not sorted and may overlap each other.
if c.level+i == 0 {
for _, t := range tables {
its = append(its, c.s.tops.newIterator(t, nil, ro))
}
} else {
it := iterator.NewIndexedIterator(tables.newIndexIterator(c.s.tops, c.s.icmp, nil, ro), strict)
its = append(its, it)
}
}
return iterator.NewMergedIterator(its, c.s.icmp, strict)
}

View File

@@ -52,8 +52,6 @@ type dtRecord struct {
}
type sessionRecord struct {
numLevel int
hasRec int
comparer string
journalNum uint64
@@ -230,7 +228,7 @@ func (p *sessionRecord) readBytes(field string, r byteReader) []byte {
return x
}
func (p *sessionRecord) readLevel(field string, r io.ByteReader) int {
func (p *sessionRecord) readLevel(field string, r io.ByteReader, numLevel int) int {
if p.err != nil {
return 0
}
@@ -238,14 +236,14 @@ func (p *sessionRecord) readLevel(field string, r io.ByteReader) int {
if p.err != nil {
return 0
}
if x >= uint64(p.numLevel) {
if x >= uint64(numLevel) {
p.err = errors.NewErrCorrupted(nil, &ErrManifestCorrupted{field, "invalid level number"})
return 0
}
return int(x)
}
func (p *sessionRecord) decode(r io.Reader) error {
func (p *sessionRecord) decode(r io.Reader, numLevel int) error {
br, ok := r.(byteReader)
if !ok {
br = bufio.NewReader(r)
@@ -286,13 +284,13 @@ func (p *sessionRecord) decode(r io.Reader) error {
p.setSeqNum(x)
}
case recCompPtr:
level := p.readLevel("comp-ptr.level", br)
level := p.readLevel("comp-ptr.level", br, numLevel)
ikey := p.readBytes("comp-ptr.ikey", br)
if p.err == nil {
p.addCompPtr(level, iKey(ikey))
}
case recAddTable:
level := p.readLevel("add-table.level", br)
level := p.readLevel("add-table.level", br, numLevel)
num := p.readUvarint("add-table.num", br)
size := p.readUvarint("add-table.size", br)
imin := p.readBytes("add-table.imin", br)
@@ -301,7 +299,7 @@ func (p *sessionRecord) decode(r io.Reader) error {
p.addTable(level, num, size, imin, imax)
}
case recDelTable:
level := p.readLevel("del-table.level", br)
level := p.readLevel("del-table.level", br, numLevel)
num := p.readUvarint("del-table.num", br)
if p.err == nil {
p.delTable(level, num)

View File

@@ -19,8 +19,8 @@ func decodeEncode(v *sessionRecord) (res bool, err error) {
if err != nil {
return
}
v2 := &sessionRecord{numLevel: opt.DefaultNumLevel}
err = v.decode(b)
v2 := &sessionRecord{}
err = v.decode(b, opt.DefaultNumLevel)
if err != nil {
return
}
@@ -34,7 +34,7 @@ func decodeEncode(v *sessionRecord) (res bool, err error) {
func TestSessionRecord_EncodeDecode(t *testing.T) {
big := uint64(1) << 50
v := &sessionRecord{numLevel: opt.DefaultNumLevel}
v := &sessionRecord{}
i := uint64(0)
test := func() {
res, err := decodeEncode(v)

View File

@@ -182,7 +182,7 @@ func (s *session) newManifest(rec *sessionRecord, v *version) (err error) {
defer v.release()
}
if rec == nil {
rec = &sessionRecord{numLevel: s.o.GetNumLevel()}
rec = &sessionRecord{}
}
s.fillRecord(rec, true)
v.fillRecord(rec)

View File

@@ -42,6 +42,8 @@ type tsOp uint
const (
tsOpOpen tsOp = iota
tsOpCreate
tsOpReplace
tsOpRemove
tsOpRead
tsOpReadAt
tsOpWrite
@@ -241,6 +243,10 @@ func (tf tsFile) Replace(newfile storage.File) (err error) {
if err != nil {
return
}
if tf.shouldErr(tsOpReplace) {
err = errors.New("leveldb.testStorage: emulated replace error")
return
}
err = tf.File.Replace(newfile.(tsFile).File)
if err != nil {
ts.t.Errorf("E: cannot replace file, num=%d type=%v: %v", tf.Num(), tf.Type(), err)
@@ -258,6 +264,10 @@ func (tf tsFile) Remove() (err error) {
if err != nil {
return
}
if tf.shouldErr(tsOpRemove) {
err = errors.New("leveldb.testStorage: emulated remove error")
return
}
err = tf.File.Remove()
if err != nil {
ts.t.Errorf("E: cannot remove file, num=%d type=%v: %v", tf.Num(), tf.Type(), err)

View File

@@ -441,22 +441,26 @@ func newTableOps(s *session) *tOps {
var (
cacher cache.Cacher
bcache *cache.Cache
bpool *util.BufferPool
)
if s.o.GetOpenFilesCacheCapacity() > 0 {
cacher = cache.NewLRU(s.o.GetOpenFilesCacheCapacity())
}
if !s.o.DisableBlockCache {
if !s.o.GetDisableBlockCache() {
var bcacher cache.Cacher
if s.o.GetBlockCacheCapacity() > 0 {
bcacher = cache.NewLRU(s.o.GetBlockCacheCapacity())
}
bcache = cache.NewCache(bcacher)
}
if !s.o.GetDisableBufferPool() {
bpool = util.NewBufferPool(s.o.GetBlockSize() + 5)
}
return &tOps{
s: s,
cache: cache.NewCache(cacher),
bcache: bcache,
bpool: util.NewBufferPool(s.o.GetBlockSize() + 5),
bpool: bpool,
}
}

View File

@@ -14,7 +14,7 @@ import (
"strings"
"sync"
"github.com/syndtr/gosnappy/snappy"
"github.com/golang/snappy"
"github.com/syndtr/goleveldb/leveldb/cache"
"github.com/syndtr/goleveldb/leveldb/comparer"

View File

@@ -12,7 +12,7 @@ import (
"fmt"
"io"
"github.com/syndtr/gosnappy/snappy"
"github.com/golang/snappy"
"github.com/syndtr/goleveldb/leveldb/comparer"
"github.com/syndtr/goleveldb/leveldb/filter"
@@ -167,11 +167,7 @@ func (w *Writer) writeBlock(buf *util.Buffer, compression opt.Compression) (bh b
if n := snappy.MaxEncodedLen(buf.Len()) + blockTrailerLen; len(w.compressionScratch) < n {
w.compressionScratch = make([]byte, n)
}
var compressed []byte
compressed, err = snappy.Encode(w.compressionScratch, buf.Bytes())
if err != nil {
return
}
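// github.com/golang/snappy's Encode never fails; it returns the encoded
// slice directly, so the error handling above is no longer needed.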
compressed := snappy.Encode(w.compressionScratch, buf.Bytes())
n := len(compressed)
b = compressed[:n+blockTrailerLen]
b[n] = blockTypeSnappyCompression

View File

@@ -136,9 +136,8 @@ func (v *version) get(ikey iKey, ro *opt.ReadOptions, noValue bool) (value []byt
if !tseek {
if tset == nil {
tset = &tSet{level, t}
} else if tset.table.consumeSeek() <= 0 {
} else {
tseek = true
tcomp = atomic.CompareAndSwapPointer(&v.cSeek, nil, unsafe.Pointer(tset))
}
}
@@ -203,6 +202,28 @@ func (v *version) get(ikey iKey, ro *opt.ReadOptions, noValue bool) (value []byt
return true
})
if tseek && tset.table.consumeSeek() <= 0 {
tcomp = atomic.CompareAndSwapPointer(&v.cSeek, nil, unsafe.Pointer(tset))
}
return
}
func (v *version) sampleSeek(ikey iKey) (tcomp bool) {
var tset *tSet
v.walkOverlapping(ikey, func(level int, t *tFile) bool {
if tset == nil {
tset = &tSet{level, t}
return true
} else {
if tset.table.consumeSeek() <= 0 {
tcomp = atomic.CompareAndSwapPointer(&v.cSeek, nil, unsafe.Pointer(tset))
}
return false
}
}, nil)
return
}
@@ -279,7 +300,7 @@ func (v *version) offsetOf(ikey iKey) (n uint64, err error) {
return
}
func (v *version) pickLevel(umin, umax []byte) (level int) {
func (v *version) pickMemdbLevel(umin, umax []byte) (level int) {
if !v.tables[0].overlaps(v.s.icmp, umin, umax, true) {
var overlaps tFiles
maxLevel := v.s.o.GetMaxMemCompationLevel()

View File

@@ -1,4 +1,4 @@
Copyright (c) 2014 Barracuda Networks, Inc.
Copyright (c) 2014-2015 Barracuda Networks, Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

View File

@@ -36,6 +36,13 @@ to your Supervisor. Supervisors are also services, so you can create a
tree structure here, depending on the exact combination of restarts
you want to create.
As a special case, when adding Supervisors to Supervisors, the "sub"
supervisor will have the "super" supervisor's Log function copied.
This allows you to set one log function on the "top" supervisor, and
have it propagate down to all the sub-supervisors. This also allows
libraries or modules to provide Supervisors without having to commit
their users to a particular logging method.
Finally, as what is probably the last line of your main() function, call
.Serve() on your top level supervisor. This will start all the services
you've defined.
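A small sketch of the supervisor tree described above, with the log function set once at the top; Worker is a hypothetical service, while New, NewSimple, Spec.Log, Add and Serve are the API used in the surrounding code:

package main

import (
	"log"

	"github.com/thejerf/suture"
)

type Worker struct{ stop chan struct{} }

func NewWorker() *Worker { return &Worker{stop: make(chan struct{})} }
func (w *Worker) Serve() { <-w.stop } // run until asked to stop
func (w *Worker) Stop()  { close(w.stop) }

func main() {
	// One log function, set on the top supervisor only.
	top := suture.New("top", suture.Spec{
		Log: func(msg string) { log.Println("suture:", msg) },
	})

	// The sub-supervisor picks up top's log functions when added.
	sub := suture.NewSimple("sub")
	sub.Add(NewWorker())
	top.Add(sub)

	top.Serve() // typically the last line of main()
}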
@@ -126,8 +133,10 @@ type Supervisor struct {
// If you ever come up with some need to get into these, submit a pull
// request to make them public and some smidge of justification, and
// I'll happily do it.
logBadStop func(Service)
logFailure func(service Service, currentFailures float64, failureThreshold float64, restarting bool, error interface{}, stacktrace []byte)
// But since I've now changed the signature on these once, I'm glad I
// didn't start with them public... :)
logBadStop func(*Supervisor, Service)
logFailure func(supervisor *Supervisor, service Service, currentFailures float64, failureThreshold float64, restarting bool, error interface{}, stacktrace []byte)
logBackoff func(*Supervisor, bool)
// avoid a dependency on github.com/thejerf/abtime by just implementing
@@ -233,10 +242,10 @@ func New(name string, spec Spec) (s *Supervisor) {
s.resumeTimer = make(chan time.Time)
// set up the default logging handlers
s.logBadStop = func(service Service) {
s.log(fmt.Sprintf("Service %s failed to terminate in a timely manner", serviceName(service)))
s.logBadStop = func(supervisor *Supervisor, service Service) {
s.log(fmt.Sprintf("%s: Service %s failed to terminate in a timely manner", serviceName(supervisor), serviceName(service)))
}
s.logFailure = func(service Service, failures float64, threshold float64, restarting bool, err interface{}, st []byte) {
s.logFailure = func(supervisor *Supervisor, service Service, failures float64, threshold float64, restarting bool, err interface{}, st []byte) {
var errString string
e, canError := err.(error)
@@ -246,7 +255,7 @@ func New(name string, spec Spec) (s *Supervisor) {
errString = fmt.Sprintf("%#v", err)
}
s.log(fmt.Sprintf("Failed service '%s' (%f failures of %f), restarting: %#v, error: %s, stacktrace: %s", serviceName(service), failures, threshold, restarting, errString, string(st)))
s.log(fmt.Sprintf("%s: Failed service '%s' (%f failures of %f), restarting: %#v, error: %s, stacktrace: %s", serviceName(supervisor), serviceName(service), failures, threshold, restarting, errString, string(st)))
}
s.logBackoff = func(s *Supervisor, entering bool) {
if entering {
@@ -346,12 +355,24 @@ will be started when the supervisor is.
The returned ServiceID may be passed to the Remove method of the Supervisor
to terminate the service.
As a special behavior, if the service added is itself a supervisor, the
supervisor being added will copy the Log function from the Supervisor it
is being added to. This allows factoring out providing a Supervisor
from its logging.
*/
func (s *Supervisor) Add(service Service) ServiceToken {
if s == nil {
panic("can't add service to nil *suture.Supervisor")
}
if supervisor, isSupervisor := service.(*Supervisor); isSupervisor {
supervisor.logBadStop = s.logBadStop
supervisor.logFailure = s.logFailure
supervisor.logBackoff = s.logBackoff
}
if s.state == notRunning {
id := s.serviceCounter
s.serviceCounter++
@@ -492,12 +513,12 @@ func (s *Supervisor) handleFailedService(id serviceID, err interface{}, stacktra
if monitored {
if s.state == normal {
s.runService(failedService, id)
s.logFailure(failedService, s.failures, s.failureThreshold, true, err, stacktrace)
s.logFailure(s, failedService, s.failures, s.failureThreshold, true, err, stacktrace)
} else {
// FIXME: When restarting, check that the service still
// exists (it may have been stopped in the meantime)
s.restartQueue = append(s.restartQueue, id)
s.logFailure(failedService, s.failures, s.failureThreshold, false, err, stacktrace)
s.logFailure(s, failedService, s.failures, s.failureThreshold, false, err, stacktrace)
}
}
}
@@ -536,7 +557,7 @@ func (s *Supervisor) removeService(id serviceID) {
case <-successChan:
// Life is good!
case <-failChan:
s.logBadStop(service)
s.logBadStop(s, service)
}
}()
}

View File

@@ -77,7 +77,7 @@ func TestFailures(t *testing.T) {
// to avoid deadlocks during shutdown, we have to not try to send
// things out on channels while we're shutting down (this undoes the
// logFailure override about 25 lines down)
s.logFailure = func(Service, float64, float64, bool, interface{}, []byte) {}
s.logFailure = func(*Supervisor, Service, float64, float64, bool, interface{}, []byte) {}
s.Stop()
}()
s.sync()
@@ -102,7 +102,7 @@ func TestFailures(t *testing.T) {
failNotify := make(chan bool)
// use this to synchronize on here
s.logFailure = func(s Service, cf float64, ft float64, r bool, error interface{}, stacktrace []byte) {
s.logFailure = func(supervisor *Supervisor, s Service, cf float64, ft float64, r bool, error interface{}, stacktrace []byte) {
failNotify <- r
}
@@ -276,8 +276,8 @@ func TestDefaultLogging(t *testing.T) {
serviceName(&BarelyService{})
s.logBadStop(service)
s.logFailure(service, 1, 1, true, errors.New("test error"), []byte{})
s.logBadStop(s, service)
s.logFailure(s, service, 1, 1, true, errors.New("test error"), []byte{})
s.Stop()
}
@@ -289,9 +289,17 @@ func TestNestedSupervisors(t *testing.T) {
super2 := NewSimple("Nested5")
service := NewService("Service5")
super2.logBadStop = func(*Supervisor, Service) {
panic("Failed to copy logBadStop")
}
super1.Add(super2)
super2.Add(service)
// test the functions got copied from super1; if this panics, it didn't
// get copied
super2.logBadStop(super2, service)
go super1.Serve()
super1.sync()
@@ -340,7 +348,7 @@ func TestStoppingStillWorksWithHungServices(t *testing.T) {
return resumeChan
}
failNotify := make(chan struct{})
s.logBadStop = func(s Service) {
s.logBadStop = func(supervisor *Supervisor, s Service) {
failNotify <- struct{}{}
}
@@ -438,7 +446,7 @@ func TestFailingSupervisors(t *testing.T) {
}
failNotify := make(chan string)
// use this to synchronize on here
s1.logFailure = func(s Service, cf float64, ft float64, r bool, error interface{}, stacktrace []byte) {
s1.logFailure = func(supervisor *Supervisor, s Service, cf float64, ft float64, r bool, error interface{}, stacktrace []byte) {
failNotify <- fmt.Sprintf("%s", s)
}

View File

@@ -471,29 +471,22 @@ func computeNonStarterCounts() {
if exp := c.forms[FCompatibility].expandedDecomp; len(exp) > 0 {
runes = exp
}
// We consider runes that combine backwards to be non-starters for the
// purpose of Stream-Safe Text Processing.
for _, r := range runes {
if chars[r].ccc == 0 {
if cr := &chars[r]; cr.ccc == 0 && !cr.forms[FCompatibility].combinesBackward {
break
}
c.nLeadingNonStarters++
}
for i := len(runes) - 1; i >= 0; i-- {
if chars[runes[i]].ccc == 0 {
if cr := &chars[runes[i]]; cr.ccc == 0 && !cr.forms[FCompatibility].combinesBackward {
break
}
c.nTrailingNonStarters++
}
// We consider runes that combine backwards to be non-starters for the
// purpose of Stream-Safe Text Processing.
for _, f := range c.forms {
if c.ccc == 0 && f.combinesBackward {
if len(c.forms[FCompatibility].expandedDecomp) > 0 {
log.Fatalf("%U: CCC==0 modifier with an expansion is not supported.", i)
}
c.nTrailingNonStarters = 1
c.nLeadingNonStarters = 1
}
if c.nTrailingNonStarters > 3 {
log.Fatalf("%U: Decomposition with more than 3 (%d) trailing modifiers (%U)", i, c.nTrailingNonStarters, runes)
}
if isHangul(rune(i)) {
@@ -839,10 +832,16 @@ func verifyComputed() {
continue
}
if a, b := c.nLeadingNonStarters > 0, (c.ccc > 0 || f.combinesBackward); a != b {
// We accept these two runes to be treated differently (it only affects
// segment breaking in iteration, most likely on inproper use), but
// We accept these runes to be treated differently (it only affects
// segment breaking in iteration, most likely on improper use), but
// reconsider if more characters are added.
if i != 0xFF9E && i != 0xFF9F {
// U+FF9E HALFWIDTH KATAKANA VOICED SOUND MARK;Lm;0;L;<narrow> 3099;;;;N;;;;;
// U+FF9F HALFWIDTH KATAKANA SEMI-VOICED SOUND MARK;Lm;0;L;<narrow> 309A;;;;N;;;;;
// U+3133 HANGUL LETTER KIYEOK-SIOS;Lo;0;L;<compat> 11AA;;;;N;HANGUL LETTER GIYEOG SIOS;;;;
// U+318E HANGUL LETTER ARAEAE;Lo;0;L;<compat> 11A1;;;;N;HANGUL LETTER ALAE AE;;;;
// U+FFA3 HALFWIDTH HANGUL LETTER KIYEOK-SIOS;Lo;0;L;<narrow> 3133;;;;N;HALFWIDTH HANGUL LETTER GIYEOG SIOS;;;;
// U+FFDC HALFWIDTH HANGUL LETTER I;Lo;0;L;<narrow> 3163;;;;N;;;;;
if i != 0xFF9E && i != 0xFF9F && !(0x3133 <= i && i <= 0x318E) && !(0xFFA3 <= i && i <= 0xFFDC) {
log.Fatalf("%U: nLead was %v; want %v", i, a, b)
}
}

View File

@@ -113,7 +113,25 @@ var decomposeSegmentTests = []PositionTest{
{"\u00C0b", 2, "A\u0300"},
// long
{grave(31), 60, grave(30) + cgj},
{"a" + grave(31), 61, "a" + grave(30) + cgj},
// Stability tests: see http://www.unicode.org/review/pr-29.html.
// U+0300 COMBINING GRAVE ACCENT;Mn;230;NSM;;;;;N;NON-SPACING GRAVE;;;;
// U+0B47 ORIYA VOWEL SIGN E;Mc;0;L;;;;;N;;;;;
// U+0B3E ORIYA VOWEL SIGN AA;Mc;0;L;;;;;N;;;;;
// U+1100 HANGUL CHOSEONG KIYEOK;Lo;0;L;;;;;N;;;;;
// U+1161 HANGUL JUNGSEONG A;Lo;0;L;;;;;N;;;;;
{"\u0B47\u0300\u0B3E", 8, "\u0B47\u0300\u0B3E"},
{"\u1100\u0300\u1161", 8, "\u1100\u0300\u1161"},
{"\u0B47\u0B3E", 6, "\u0B47\u0B3E"},
{"\u1100\u1161", 6, "\u1100\u1161"},
// U+0D4A MALAYALAM VOWEL SIGN O;Mc;0;L;0D46 0D3E;;;;N;;;;;
// Sequence of decomposing characters that are starters and modifiers.
{"\u0d4a" + strings.Repeat("\u0d3e", 31), 90, "\u0d46" + strings.Repeat("\u0d3e", 30) + cgj},
{grave(30), 60, grave(30)},
// U+FF9E is a starter, but decomposes to U+3099, which is not.
{grave(30) + "\uff9e", 60, grave(30) + cgj},
// ends with incomplete UTF-8 encoding
{"\xCC", 0, ""},
@@ -552,6 +570,44 @@ var appendTestsNFC = []AppendTest{
"a" + rep(0x0305, maxNonStarters+4) + "\u0316",
"a" + rep(0x0305, maxNonStarters) + cgj + "\u0316" + rep(0x305, 4),
},
{ // Combine across non-blocking non-starters.
// U+0327 COMBINING CEDILLA;Mn;202;NSM;;;;;N;NON-SPACING CEDILLA;;;;
// U+0325 COMBINING RING BELOW;Mn;220;NSM;;;;;N;NON-SPACING RING BELOW;;;;
"", "a\u0327\u0325", "\u1e01\u0327",
},
{ // Jamo V+T does not combine.
"",
"\u1161\u11a8",
"\u1161\u11a8",
},
// Stability tests: see http://www.unicode.org/review/pr-29.html.
{"", "\u0b47\u0300\u0b3e", "\u0b47\u0300\u0b3e"},
{"", "\u1100\u0300\u1161", "\u1100\u0300\u1161"},
{"", "\u0b47\u0b3e", "\u0b4b"},
{"", "\u1100\u1161", "\uac00"},
// U+0D4A MALAYALAM VOWEL SIGN O;Mc;0;L;0D46 0D3E;;;;N;;;;;
{ // 0d4a starts a new segment.
"",
"\u0d4a" + strings.Repeat("\u0d3e", 15) + "\u0d4a" + strings.Repeat("\u0d3e", 15),
"\u0d4a" + strings.Repeat("\u0d3e", 15) + "\u0d4a" + strings.Repeat("\u0d3e", 15),
},
{ // Split combining characters.
// TODO: don't insert CGJ before starters.
"",
"\u0d46" + strings.Repeat("\u0d3e", 31),
"\u0d4a" + strings.Repeat("\u0d3e", 29) + cgj + "\u0d3e",
},
{ // Split combining characters.
"",
"\u0d4a" + strings.Repeat("\u0d3e", 30),
"\u0d4a" + strings.Repeat("\u0d3e", 29) + cgj + "\u0d3e",
},
}
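The stability cases above can be reproduced through the package's public API; a minimal sketch, assuming the usual golang.org/x/text/unicode/norm import path for this vendored package:

package main

import (
	"fmt"

	"golang.org/x/text/unicode/norm"
)

func main() {
	// Jamo L+V composes into a Hangul syllable under NFC...
	fmt.Printf("%+q\n", norm.NFC.String("\u1100\u1161")) // "\uac00"
	// ...but with an intervening combining mark the sequence stays
	// apart, per the PR-29 stability rules exercised above.
	fmt.Printf("%+q\n", norm.NFC.String("\u1100\u0300\u1161")) // "\u1100\u0300\u1161"
}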
var appendTestsNFD = []AppendTest{

View File

File diff suppressed because it is too large.

NICKS (14 changed lines)
View File

@@ -3,6 +3,8 @@
AudriusButkevicius <audrius.butkevicius@gmail.com>
Cathryne <cathryne.linenweaver@gmail.com> <Cathryne@users.noreply.github.com>
KayoticSully <kayoticsully@gmail.com>
LordLandon <lordlandon@gmail.com>
Moter8 <moter8@gmail.com>
Nutomic <me@nutomic.com>
Rewt0r <rewt0r@gmx.com> <Rewt0r@users.noreply.github.com>
Vilbrekin <vilbrekin@gmail.com>
@@ -12,17 +14,25 @@ andrew-d <andrew@du.nham.ca>
asdil12 <dominik@heidler.eu>
bencurthoys <ben@bencurthoys.com>
bigbear2nd <bigbear2nd@gmail.com>
brbecker <brbecker@gmail.com>
brendanlong <self@brendanlong.com>
brgmnn <dan.arne.bergmann@gmail.com> <brgmnn@users.noreply.github.com>
bsidhom <bsidhom@gmail.com>
calmh <jakob@nym.se>
canton7 <antony.male@gmail.com>
cdata <chris@scriptolo.gy>
cdhowie <me@chrishowie.com>
ceh <emil@hessman.se>
cqcallaw <enlightened.despot@gmail.com>
dva <denisva@gmail.com>
dzarda <dzardacz@gmail.com>
facastagnini <federico.castagnini@gmail.com>
filoozoom <philippe@schommers.be>
frioux <frew@afoolishmanifesto.com> <frioux@gmail.com>
fti7 <frank@isemann.name>
gillisig <gilli@vx.is>
hadogenes <szafar@linux.pl>
jarlebring <jarlebring@gmail.com>
jedie <github.com@jensdiemer.de> <git@jensdiemer.de>
jpjp <jamespatterson@operamail.com> <jpjp@users.noreply.github.com>
kamadak <kamada@nanohz.org>
@@ -31,6 +41,7 @@ kozec <kozec@kozec.com>
krozycki <rozycki.karol@gmail.com>
marcindziadus <dziadus.marcin@gmail.com>
marclaporte <marc@marclaporte.com>
mogwa1 <devriesb@gmail.com>
moshen <moshen.colin@gmail.com>
mvdan <mvdan@mvdan.cc>
pascalj <github@pascalj.com> <mail@pascal-jungblut.com>
@@ -39,9 +50,9 @@ philips <brandon@ifup.org>
piobpl <piotrb10@gmail.com>
pluby <phill.luby@newredo.com>
pyfisch <pyfisch@gmail.com>
qbit <qbit@deftly.net>
ralder <ralder@yandex.ru>
rumpelsepp <stefan@sevenbyte.org>
qbit <qbit@deftly.net>
sciurius <jvromans@squirrel.nl>
seehuhn <voss@seehuhn.de>
snnd <dw@risu.io>
@@ -50,4 +61,5 @@ tnn2 <tnn@nygren.pp.se>
tojrobinson <tully@tojr.org>
uok <ueomkail@gmail.com> <uok@users.noreply.github.com>
veeti <veeti.paananen@rojekti.fi>
wsgcsysadmin <e.meitner@willystreet.coo>
zukoo <fxgsell@gmail.com>

View File

@@ -1,60 +1,58 @@
syncthing
Syncthing
=========
[![Latest Build](http://img.shields.io/jenkins/s/http/build.syncthing.net/syncthing.svg?style=flat-square)](http://build.syncthing.net/job/syncthing/lastBuild/)
[![API Documentation](http://img.shields.io/badge/api-Godoc-blue.svg?style=flat-square)](http://godoc.org/github.com/syncthing/syncthing)
[![MPLv2 License](http://img.shields.io/badge/license-MPLv2-blue.svg?style=flat-square)](https://www.mozilla.org/MPL/2.0/)
This is the `syncthing` project. The following are the project goals:
This is the Syncthing project, which pursues the following goals:
1. Define a protocol for synchronization of a folder between a number of
collaborating devices. The protocol should be well defined, unambiguous,
collaborating devices. This protocol should be well defined, unambiguous,
easily understood, free to use, efficient, secure and language neutral.
This is the [Block Exchange
This is called the [Block Exchange
Protocol](https://github.com/syncthing/specs/blob/master/BEPv1.md).
2. Provide the reference implementation to demonstrate the usability of
said protocol. This is the `syncthing` utility. It is the hope that
alternative, compatible implementations of the protocol will come to
exist.
said protocol. This is the `syncthing` utility. We hope that
alternative, compatible implementations of the protocol will arise.
The two are evolving together; the protocol is not to be considered
stable until syncthing 1.0 is released, at which point it is locked down
stable until Syncthing 1.0 is released, at which point it is locked down
for incompatible changes.
Getting Started
---------------
Take a look at the [getting started
guide](https://github.com/syncthing/syncthing/wiki/Getting-Started).
guide](http://docs.syncthing.net/intro/getting-started.html).
There are a few examples for keeping syncthing running in the background
There are a few examples for keeping Syncthing running in the background
on your system in [the etc directory](https://github.com/syncthing/syncthing/blob/master/etc).
There is an IRC channel, `#syncthing` on Freenode, for talking directly
to developers and users (when awake and present, etc.).
to developers and users.
Building
--------
Building Syncthing from source is easy, and there's a
[guide](https://github.com/syncthing/syncthing/wiki/Building).
that describes it for both Unix and Windows.
[guide](http://docs.syncthing.net/dev/building.html)
that describes it for both Unix and Windows systems.
Signed Releases
---------------
As of v0.10.15 and onwards, git tags and release binaries are GPG signed
with the key D26E6ED000654A3E (see http://syncthing.net/security.html).
with the key D26E6ED000654A3E (see https://syncthing.net/security.html).
For release binaries, MD5 and SHA1 checksums are calculated and signed,
available in the md5sum.txt.asc and sha1sum.txt.asc files.
Documentation
=============
The [syncthing
documentation](https://github.com/syncthing/syncthing/wiki/) is on the
Github wiki.
Please see the [Syncthing
documentation site](http://docs.syncthing.net/).
All code is licensed under the
[MPLv2](https://github.com/syncthing/syncthing/blob/master/LICENSE).
[MPLv2 License](https://github.com/syncthing/syncthing/blob/master/LICENSE).

View File

Binary file not shown (3.7 KiB before, 3.6 KiB after).

View File

Binary file not shown (1.8 KiB before, 1.7 KiB after).

View File

Binary file not shown (3.8 KiB before, 3.7 KiB after).

50
benchfilter.go Normal file
View File

@@ -0,0 +1,50 @@
// Copyright (C) 2015 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at http://mozilla.org/MPL/2.0/.
// +build ignore
// Neatly format benchmarking output which otherwise looks like crap.
package main
import (
"bufio"
"bytes"
"fmt"
"os"
"regexp"
"text/tabwriter"
)
var (
benchRe = regexp.MustCompile(`^Bench`)
spacesRe = regexp.MustCompile(`\s+`)
numbersRe = regexp.MustCompile(`\b[\d\.]+\b`)
)
func main() {
tw := tabwriter.NewWriter(os.Stdout, 1, 1, 1, ' ', 0)
br := bufio.NewScanner(os.Stdin)
n := 0
for br.Scan() {
line := br.Bytes()
if benchRe.Match(line) {
n++
line = spacesRe.ReplaceAllLiteral(line, []byte("\t"))
line = numbersRe.ReplaceAllFunc(line, func(n []byte) []byte {
return []byte(fmt.Sprintf("%12s", n))
})
tw.Write(line)
tw.Write([]byte("\n"))
} else if n > 0 && bytes.HasPrefix(line, []byte("ok")) {
n = 0
tw.Flush()
fmt.Printf("%s\n\n", line)
}
}
tw.Flush()
}
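
This filter reads `go test -bench` output on stdin and aligns the columns
with a tabwriter. A usage sketch, matching the pipeline that build.sh sets
up further down:

    go test -run NONE -bench . ./... | go run benchfilter.go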

162
build.go
View File

@@ -78,6 +78,11 @@ func main() {
tags = []string{"noupgrade"}
}
install("./cmd/...", tags)
vet("./cmd/syncthing")
vet("./internal/...")
lint("./cmd/syncthing")
lint("./internal/...")
return
}
@@ -103,8 +108,10 @@ func main() {
build(pkg, tags)
case "test":
pkg := "./..."
test(pkg)
test("./...")
case "bench":
bench("./...")
case "assets":
assets()
@@ -127,9 +134,20 @@ func main() {
case "zip":
buildZip()
case "deb":
buildDeb()
case "clean":
clean()
case "vet":
vet("./cmd/syncthing")
vet("./internal/...")
case "lint":
lint("./cmd/syncthing")
lint("./internal/...")
default:
log.Fatalf("Unknown command %q", cmd)
}
@@ -167,6 +185,11 @@ func test(pkg string) {
runPrint("go", "test", "-short", "-timeout", "60s", pkg)
}
func bench(pkg string) {
setBuildEnv()
runPrint("go", "test", "-run", "NONE", "-bench", ".", pkg)
}
func install(pkg string, tags []string) {
os.Setenv("GOBIN", "./bin")
args := []string{"install", "-v", "-ldflags", ldflags()}
@@ -260,6 +283,97 @@ func buildZip() {
log.Println(filename)
}
func buildDeb() {
os.RemoveAll("deb")
// "goarch" here is set to whatever the Debian packages expect. We correct
// "it to what we actually know how to build and keep the Debian variant
// "name in "debarch".
debarch := goarch
switch goarch {
case "i386":
goarch = "386"
case "armel", "armhf":
goarch = "arm"
}
build("./cmd/syncthing", []string{"noupgrade"})
files := []archiveFile{
{src: "README.md", dst: "deb/usr/share/doc/syncthing/README.txt", perm: 0644},
{src: "LICENSE", dst: "deb/usr/share/doc/syncthing/LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "deb/usr/share/doc/syncthing/AUTHORS.txt", perm: 0644},
{src: "syncthing", dst: "deb/usr/bin/syncthing", perm: 0755},
{src: "man/syncthing.1", dst: "deb/usr/share/man/man1/syncthing.1", perm: 0644},
{src: "man/syncthing-config.5", dst: "deb/usr/share/man/man5/syncthing-config.5", perm: 0644},
{src: "man/syncthing-stignore.5", dst: "deb/usr/share/man/man5/syncthing-stignore.5", perm: 0644},
{src: "man/syncthing-device-ids.7", dst: "deb/usr/share/man/man7/syncthing-device-ids.7", perm: 0644},
{src: "man/syncthing-event-api.7", dst: "deb/usr/share/man/man7/syncthing-event-api.7", perm: 0644},
{src: "man/syncthing-faq.7", dst: "deb/usr/share/man/man7/syncthing-faq.7", perm: 0644},
{src: "man/syncthing-networking.7", dst: "deb/usr/share/man/man7/syncthing-networking.7", perm: 0644},
{src: "man/syncthing-rest-api.7", dst: "deb/usr/share/man/man7/syncthing-rest-api.7", perm: 0644},
{src: "man/syncthing-security.7", dst: "deb/usr/share/man/man7/syncthing-security.7", perm: 0644},
{src: "man/syncthing-versioning.7", dst: "deb/usr/share/man/man7/syncthing-versioning.7", perm: 0644},
{src: "etc/linux-systemd/system/syncthing@.service", dst: "deb/lib/systemd/system/syncthing@.service", perm: 0644},
{src: "etc/linux-systemd/user/syncthing.service", dst: "deb/usr/lib/systemd/user/syncthing.service", perm: 0644},
}
for _, file := range listFiles("extra") {
files = append(files, archiveFile{src: file, dst: "deb/usr/share/doc/syncthing/" + filepath.Base(file), perm: 0644})
}
for _, af := range files {
if err := copyFile(af.src, af.dst, af.perm); err != nil {
log.Fatal(err)
}
}
control := `Package: syncthing
Architecture: {{arch}}
Depends: libc6
Version: {{version}}
Maintainer: Syncthing Release Management <release@syncthing.net>
Description: Open Source Continuous File Synchronization
Syncthing does bidirectional synchronization of files between two or
more computers.
`
changelog := `syncthing ({{version}}); urgency=medium
* Packaging of {{version}}.
-- Jakob Borg <jakob@nym.se> {{date}}
`
control = strings.Replace(control, "{{arch}}", debarch, -1)
control = strings.Replace(control, "{{version}}", version[1:], -1)
changelog = strings.Replace(changelog, "{{arch}}", debarch, -1)
changelog = strings.Replace(changelog, "{{version}}", version[1:], -1)
changelog = strings.Replace(changelog, "{{date}}", time.Now().Format(time.RFC1123), -1)
os.MkdirAll("deb/DEBIAN", 0755)
ioutil.WriteFile("deb/DEBIAN/control", []byte(control), 0644)
ioutil.WriteFile("deb/DEBIAN/compat", []byte("9\n"), 0644)
ioutil.WriteFile("deb/DEBIAN/changelog", []byte(changelog), 0644)
}
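
Note that buildDeb only stages the package tree under deb/ and writes the
DEBIAN control files; the diff does not show the step that produces the
final .deb. A hedged sketch of one plausible follow-up, assuming dpkg-deb
is available:

    go run build.go deb
    fakeroot dpkg-deb --build deb syncthing.deb   # hypothetical final step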
func copyFile(src, dst string, perm os.FileMode) error {
dstDir := filepath.Dir(dst)
os.MkdirAll(dstDir, 0755) // ignore error
srcFd, err := os.Open(src)
if err != nil {
return err
}
defer srcFd.Close()
dstFd, err := os.OpenFile(dst, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, perm)
if err != nil {
return err
}
defer dstFd.Close()
_, err = io.Copy(dstFd, srcFd)
return err
}
func listFiles(dir string) []string {
var res []string
filepath.Walk(dir, func(path string, fi os.FileInfo, err error) error {
@@ -437,10 +551,7 @@ func run(cmd string, args ...string) []byte {
func runError(cmd string, args ...string) ([]byte, error) {
ecmd := exec.Command(cmd, args...)
bs, err := ecmd.CombinedOutput()
if err != nil {
return nil, err
}
return bytes.TrimSpace(bs), nil
return bytes.TrimSpace(bs), err
}
func runPrint(cmd string, args ...string) {
@@ -471,8 +582,9 @@ func runPipe(file, cmd string, args ...string) {
}
type archiveFile struct {
src string
dst string
src string
dst string
perm os.FileMode
}
func tarGz(out string, files []archiveFile) {
@@ -615,3 +727,37 @@ func md5File(file string) error {
return out.Close()
}
func vet(pkg string) {
bs, err := runError("go", "vet", pkg)
if err != nil && err.Error() == "exit status 3" || bytes.Contains(bs, []byte("no such tool \"vet\"")) {
// Go said there is no go vet
log.Println(`- No go vet, no vetting. Try "go get -u golang.org/x/tools/cmd/vet".`)
return
}
falseAlarmComposites := regexp.MustCompile("composite literal uses unkeyed fields")
exitStatus := regexp.MustCompile("exit status 1")
for _, line := range bytes.Split(bs, []byte("\n")) {
if falseAlarmComposites.Match(line) || exitStatus.Match(line) {
continue
}
log.Printf("%s", line)
}
}
func lint(pkg string) {
bs, err := runError("golint", pkg)
if err != nil {
log.Println(`- No golint, not linting. Try "go get -u github.com/golang/lint/golint".`)
return
}
analCommentPolicy := regexp.MustCompile(`exported (function|method|const|type|var) [^\s]+ should have comment`)
for _, line := range bytes.Split(bs, []byte("\n")) {
if analCommentPolicy.Match(line) {
continue
}
log.Printf("%s", line)
}
}
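
The new vet and lint targets can also be invoked directly, per the switch
cases above; a quick sketch:

    go run build.go vet     # go vet over ./cmd/syncthing and ./internal/...
    go run build.go lint    # golint, with the comment-policy noise filtered
    ./build.sh docker-lint  # the same, inside the build container (added below)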

View File

@@ -17,8 +17,11 @@ case "${1:-default}" in
ulimit -t 60 &>/dev/null || true
ulimit -d 512000 &>/dev/null || true
ulimit -m 512000 &>/dev/null || true
go run build.go test
;;
go run build.go "$1"
bench)
LOGGER_DISCARD=1 go run build.go bench | go run benchfilter.go
;;
tar)
@@ -41,7 +44,17 @@ case "${1:-default}" in
go run build.go "$1"
;;
transifex)
prerelease)
go run build.go transifex
git add -A gui/assets/ internal/auto/
pushd man ; ./refresh.sh ; popd
git add -A man
echo
echo Changelog:
go run changelog.go
;;
deb)
go run build.go "$1"
;;
@@ -118,10 +131,9 @@ case "${1:-default}" in
-e "STTRACE=$STTRACE" \
syncthing/build:latest \
sh -c './build.sh clean \
&& go tool vet -composites=false cmd/*/*.go internal/*/*.go \
&& ( golint ./cmd/... ; golint ./internal/... ) | egrep -v "comment on exported|should have comment" \
&& ./build.sh all \
&& STTRACE=all ./build.sh test-cov'
&& ./build.sh test-cov \
&& ./build.sh bench \
&& ./build.sh all'
;;
docker-test)
@@ -138,6 +150,25 @@ case "${1:-default}" in
&& git clean -fxd .'
;;
docker-lint)
docker run --rm -h syncthing-builder -u $(id -u) -t \
-v $(pwd):/go/src/github.com/syncthing/syncthing \
-w /go/src/github.com/syncthing/syncthing \
-e "STTRACE=$STTRACE" \
syncthing/build:latest \
sh -euxc 'go run build.go lint'
;;
docker-vet)
docker run --rm -h syncthing-builder -u $(id -u) -t \
-v $(pwd):/go/src/github.com/syncthing/syncthing \
-w /go/src/github.com/syncthing/syncthing \
-e "STTRACE=$STTRACE" \
syncthing/build:latest \
sh -euxc 'go run build.go vet'
;;
*)
echo "Unknown build command $1"
;;

View File

@@ -4,6 +4,8 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at http://mozilla.org/MPL/2.0/.
// +build ignore
package main
import (

View File

@@ -17,7 +17,10 @@ no-docs-typos() {
grep -v a9339d0627fff439879d157c75077f02c9fac61b |\
grep -v 254c63763a3ad42fd82259f1767db526cff94a14 |\
grep -v 4b76ec40c07078beaa2c5e250ed7d9bd6276a718 |\
grep -v ffc39dfbcb34eacc3ea12327a02b6e7741a2c207
grep -v ffc39dfbcb34eacc3ea12327a02b6e7741a2c207 |\
grep -v 32a76901a91ff0f663db6f0830e0aedec946e4d0 |\
grep -v af3288043a49bcc28f8ae3060852a09de552fe5f |\
grep -v 3626003f680bad3e63677982576d3a05421e88e9
}
print-missing-authors() {
@@ -27,7 +30,7 @@ print-missing-authors() {
}
print-missing-copyright() {
find . -name \*.go | xargs egrep -L 'Copyright \(C\)|automatically generated' | grep -v Godeps | grep -v internal/auto/
find . -name \*.go | xargs egrep -L 'Copyright|automatically generated' | grep -v Godeps | grep -v internal/auto/
}
authors=$(print-missing-authors)

View File

@@ -36,8 +36,7 @@ const (
func Assets() map[string][]byte {
var assets = make(map[string][]byte, {{.Assets | len}})
{{range $asset := .Assets}}
assets["{{$asset.Name}}"], _ = base64.StdEncoding.DecodeString("{{$asset.Data}}")
{{end}}
assets["{{$asset.Name}}"], _ = base64.StdEncoding.DecodeString("{{$asset.Data}}"){{end}}
return assets
}

View File

@@ -7,6 +7,7 @@
package main
import (
"encoding/binary"
"flag"
"fmt"
"log"
@@ -15,41 +16,71 @@ import (
"github.com/syncthing/protocol"
"github.com/syncthing/syncthing/internal/db"
"github.com/syndtr/goleveldb/leveldb"
"github.com/syndtr/goleveldb/leveldb/opt"
)
func main() {
log.SetFlags(0)
log.SetOutput(os.Stdout)
folder := flag.String("folder", "default", "Folder ID")
device := flag.String("device", "", "Device ID (blank for global)")
flag.Parse()
ldb, err := leveldb.OpenFile(flag.Arg(0), nil)
ldb, err := leveldb.OpenFile(flag.Arg(0), &opt.Options{
ErrorIfMissing: true,
Strict: opt.StrictAll,
OpenFilesCacheCapacity: 100,
})
if err != nil {
log.Fatal(err)
}
fs := db.NewFileSet(*folder, ldb)
it := ldb.NewIterator(nil, nil)
var dev protocol.DeviceID
for it.Next() {
key := it.Key()
switch key[0] {
case db.KeyTypeDevice:
folder := nulString(key[1 : 1+64])
devBytes := key[1+64 : 1+64+32]
name := nulString(key[1+64+32:])
copy(dev[:], devBytes)
fmt.Printf("[device] F:%q N:%q D:%v\n", folder, name, dev)
if *device == "" {
log.Printf("*** Global index for folder %q", *folder)
fs.WithGlobalTruncated(func(fi db.FileIntf) bool {
f := fi.(db.FileInfoTruncated)
fmt.Println(f)
fmt.Println("\t", fs.Availability(f.Name))
return true
})
} else {
n, err := protocol.DeviceIDFromString(*device)
if err != nil {
log.Fatal(err)
var f protocol.FileInfo
err := f.UnmarshalXDR(it.Value())
if err != nil {
log.Fatal(err)
}
fmt.Printf(" N:%q\n F:%#o\n M:%d\n V:%v\n S:%d\n B:%d\n", f.Name, f.Flags, f.Modified, f.Version, f.Size(), len(f.Blocks))
case db.KeyTypeGlobal:
folder := nulString(key[1 : 1+64])
name := nulString(key[1+64:])
fmt.Printf("[global] F:%q N:%q V:%x\n", folder, name, it.Value())
case db.KeyTypeBlock:
folder := nulString(key[1 : 1+64])
hash := key[1+64 : 1+64+32]
name := nulString(key[1+64+32:])
fmt.Printf("[block] F:%q H:%x N:%q I:%d\n", folder, hash, name, binary.BigEndian.Uint32(it.Value()))
case db.KeyTypeDeviceStatistic:
fmt.Printf("[dstat]\n %x\n %x\n", it.Key(), it.Value())
case db.KeyTypeFolderStatistic:
fmt.Printf("[fstat]\n %x\n %x\n", it.Key(), it.Value())
default:
fmt.Printf("[???]\n %x\n %x\n", it.Key(), it.Value())
}
log.Printf("*** Have index for folder %q device %q", *folder, n)
fs.WithHaveTruncated(n, func(fi db.FileIntf) bool {
f := fi.(db.FileInfoTruncated)
fmt.Println(f)
return true
})
}
}
func nulString(bs []byte) string {
for i := range bs {
if bs[i] == 0 {
return string(bs[:i])
}
}
return string(bs)
}
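
Assuming this is the stindex utility, the rewrite switches from walking raw
leveldb keys to reading through db.NewFileSet. A usage sketch with
illustrative paths and device IDs:

    # Print the global index for a folder:
    go run cmd/stindex/main.go -folder default ~/.config/syncthing/index-v0.11.0.db
    # Print the index as announced by a specific device:
    go run cmd/stindex/main.go -folder default -device <DEVICE-ID> ~/.config/syncthing/index-v0.11.0.db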

98
cmd/stwatchfile/main.go Normal file
View File

@@ -0,0 +1,98 @@
// Copyright (C) 2015 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at http://mozilla.org/MPL/2.0/.
package main
import (
"bytes"
"crypto/md5"
"flag"
"fmt"
"io"
"os"
"time"
)
func getmd5(filePath string) ([]byte, error) {
var result []byte
file, err := os.Open(filePath)
if err != nil {
return result, err
}
defer file.Close()
hash := md5.New()
if _, err := io.Copy(hash, file); err != nil {
return result, err
}
return hash.Sum(result), nil
}
func main() {
period := flag.Duration("period", 200*time.Millisecond, "Sleep period between checks")
flag.Parse()
file := flag.Arg(0)
if file == "" {
fmt.Println("Expects a path as an argument")
return
}
exists := true
size := int64(0)
mtime := time.Time{}
hash := []byte{}
for {
time.Sleep(*period)
newExists := true
fi, err := os.Stat(file)
if err != nil && os.IsNotExist(err) {
newExists = false
} else if err != nil {
fmt.Println("stat:", err)
return
}
if newExists != exists {
exists = newExists
if !newExists {
fmt.Println(file, "does not exist")
} else {
fmt.Println(file, "appeared")
}
}
if !exists {
size = 0
mtime = time.Time{}
hash = []byte{}
continue
}
if fi.IsDir() {
fmt.Println(file, "is directory")
return
}
newSize := fi.Size()
newMtime := fi.ModTime()
newHash, err := getmd5(file)
if err != nil {
fmt.Println("getmd5:", err)
}
if newSize != size || newMtime != mtime || !bytes.Equal(newHash, hash) {
fmt.Println(file, "Size:", newSize, "Mtime:", newMtime, "Hash:", fmt.Sprintf("%x", newHash))
hash = newHash
size = newSize
mtime = newMtime
}
}
}
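
The tool polls a single file and prints a line whenever its size, mtime or
MD5 changes; a usage sketch (the path is illustrative):

    go run cmd/stwatchfile/main.go -period 500ms some/synced/file.txt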

69
cmd/syncthing/audit.go Normal file
View File

@@ -0,0 +1,69 @@
// Copyright (C) 2015 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at http://mozilla.org/MPL/2.0/.
package main
import (
"encoding/json"
"io"
"github.com/syncthing/syncthing/internal/events"
)
// The auditSvc subscribes to events and writes these in JSON format, one
// event per line, to the specified writer.
type auditSvc struct {
w io.Writer // audit destination
stop chan struct{} // signals time to stop
started chan struct{} // signals startup complete
stopped chan struct{} // signals stop complete
}
func newAuditSvc(w io.Writer) *auditSvc {
return &auditSvc{
w: w,
stop: make(chan struct{}),
started: make(chan struct{}),
stopped: make(chan struct{}),
}
}
// Serve runs the audit service.
func (s *auditSvc) Serve() {
defer close(s.stopped)
sub := events.Default.Subscribe(events.AllEvents)
defer events.Default.Unsubscribe(sub)
enc := json.NewEncoder(s.w)
// We're ready to start processing events.
close(s.started)
for {
select {
case ev := <-sub.C():
enc.Encode(ev)
case <-s.stop:
return
}
}
}
// Stop stops the audit service.
func (s *auditSvc) Stop() {
close(s.stop)
}
// WaitForStart returns once the audit service is ready to receive events, or
// immediately if it's already running.
func (s *auditSvc) WaitForStart() {
<-s.started
}
// WaitForStop returns once the audit service has stopped.
// (Needed by the tests.)
func (s *auditSvc) WaitForStop() {
<-s.stopped
}

View File

@@ -0,0 +1,54 @@
// Copyright (C) 2015 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at http://mozilla.org/MPL/2.0/.
package main
import (
"bytes"
"strings"
"testing"
"time"
"github.com/syncthing/syncthing/internal/events"
)
func TestAuditService(t *testing.T) {
buf := new(bytes.Buffer)
svc := newAuditSvc(buf)
// Event sent before start, will not be logged
events.Default.Log(events.Ping, "the first event")
go svc.Serve()
svc.WaitForStart()
// Event that should end up in the audit log
events.Default.Log(events.Ping, "the second event")
// We need to give the events time to arrive, since the channels are buffered etc.
time.Sleep(10 * time.Millisecond)
svc.Stop()
svc.WaitForStop()
// This event should not be logged, since we have stopped.
events.Default.Log(events.Ping, "the third event")
result := string(buf.Bytes())
t.Log(result)
if strings.Contains(result, "first event") {
t.Error("Unexpected first event")
}
if !strings.Contains(result, "second event") {
t.Error("Missing second event")
}
if strings.Contains(result, "third event") {
t.Error("Missing third event")
}
}

View File

@@ -15,23 +15,84 @@ import (
"time"
"github.com/syncthing/protocol"
"github.com/syncthing/syncthing/internal/config"
"github.com/syncthing/syncthing/internal/events"
"github.com/syncthing/syncthing/internal/model"
"github.com/thejerf/suture"
)
func listenConnect(myID protocol.DeviceID, m *model.Model, tlsCfg *tls.Config) {
var conns = make(chan *tls.Conn)
// The connection service listens on TLS and dials configured unconnected
// devices. Successful connections are handed to the model.
type connectionSvc struct {
*suture.Supervisor
cfg *config.Wrapper
myID protocol.DeviceID
model *model.Model
tlsCfg *tls.Config
conns chan *tls.Conn
}
// Listen
for _, addr := range cfg.Options().ListenAddress {
go listenTLS(conns, addr, tlsCfg)
func newConnectionSvc(cfg *config.Wrapper, myID protocol.DeviceID, model *model.Model, tlsCfg *tls.Config) *connectionSvc {
svc := &connectionSvc{
Supervisor: suture.NewSimple("connectionSvc"),
cfg: cfg,
myID: myID,
model: model,
tlsCfg: tlsCfg,
conns: make(chan *tls.Conn),
}
// Connect
go dialTLS(m, conns, tlsCfg)
// There are several moving parts here; one routine per listening address
// to handle incoming connections, one routine to periodically attempt
// outgoing connections, and lastly one routine to do the common handling
// regardless of whether the connection was incoming or outgoing. It ends
// up as in the diagram below. We embed a Supervisor to manage the
// routines (i.e. log and restart if they crash or exit, etc).
//
// +-----------------+
// Incoming | +---------------+-+ +-----------------+
// Connections | | | | | Outgoing
// -------------->| | svc.listen | | | Connections
// | | (1 per listen | | svc.connect |-------------->
// | | address) | | |
// +-+ | | |
// +-----------------+ +-----------------+
// v v
// | |
// | |
// +------------+-----------+
// |
// | svc.conns
// v
// +-----------------+
// | |
// | |
// | svc.handle |------> model.AddConnection()
// | |
// | |
// +-----------------+
//
// TODO: Clean shutdown, and/or handling config changes on the fly. We
// partly do this now - new devices and addresses will be picked up, but
// not new listen addresses and we don't support disconnecting devices
// that are removed and so on...
svc.Add(serviceFunc(svc.connect))
for _, addr := range svc.cfg.Options().ListenAddress {
addr := addr
listener := serviceFunc(func() {
svc.listen(addr)
})
svc.Add(listener)
}
svc.Add(serviceFunc(svc.handle))
return svc
}
func (s *connectionSvc) handle() {
next:
for conn := range conns {
for conn := range s.conns {
cs := conn.ConnectionState()
// We should have negotiated the next level protocol "bep/1.0" as part
@@ -55,7 +116,7 @@ next:
remoteID := protocol.NewDeviceID(remoteCert.Raw)
// The device ID should not be that of ourselves. It can happen
// though, especially in the presense of NAT hairpinning, multiple
// though, especially in the presence of NAT hairpinning, multiple
// clients between the same NAT gateway, and global discovery.
if remoteID == myID {
l.Infof("Connected to myself (%s) - should not happen", remoteID)
@@ -67,15 +128,15 @@ next:
// could use some better handling. If the old connection is dead but
// hasn't timed out yet we may want to drop *that* connection and keep
// this one. But in case we are two devices connecting to each other
// in parallell we don't want to do that or we end up with no
// in parallel we don't want to do that or we end up with no
// connections still established...
if m.ConnectedTo(remoteID) {
if s.model.ConnectedTo(remoteID) {
l.Infof("Connected to already connected device (%s)", remoteID)
conn.Close()
continue
}
for deviceID, deviceCfg := range cfg.Devices() {
for deviceID, deviceCfg := range s.cfg.Devices() {
if deviceID == remoteID {
// Verify the name on the certificate. By default we set it to
// "syncthing" when generating, but the user may have replaced
@@ -97,7 +158,7 @@ next:
// If rate limiting is set, and based on the address we should
// limit the connection, then we wrap it in a limiter.
limit := shouldLimit(conn.RemoteAddr())
limit := s.shouldLimit(conn.RemoteAddr())
wr := io.Writer(conn)
if limit && writeRateLimit != nil {
@@ -110,23 +171,19 @@ next:
}
name := fmt.Sprintf("%s-%s", conn.LocalAddr(), conn.RemoteAddr())
protoConn := protocol.NewConnection(remoteID, rd, wr, m, name, deviceCfg.Compression)
protoConn := protocol.NewConnection(remoteID, rd, wr, s.model, name, deviceCfg.Compression)
l.Infof("Established secure connection to %s at %s", remoteID, name)
if debugNet {
l.Debugf("cipher suite: %04X in lan: %t", conn.ConnectionState().CipherSuite, !limit)
}
events.Default.Log(events.DeviceConnected, map[string]string{
"id": remoteID.String(),
"addr": conn.RemoteAddr().String(),
})
m.AddConnection(conn, protoConn)
s.model.AddConnection(conn, protoConn)
continue next
}
}
if !cfg.IgnoredDevice(remoteID) {
if !s.cfg.IgnoredDevice(remoteID) {
events.Default.Log(events.DeviceRejected, map[string]string{
"device": remoteID.String(),
"address": conn.RemoteAddr().String(),
@@ -140,7 +197,7 @@ next:
}
}
func listenTLS(conns chan *tls.Conn, addr string, tlsCfg *tls.Config) {
func (s *connectionSvc) listen(addr string) {
if debugNet {
l.Debugln("listening on", addr)
}
@@ -166,9 +223,9 @@ func listenTLS(conns chan *tls.Conn, addr string, tlsCfg *tls.Config) {
}
tcpConn := conn.(*net.TCPConn)
setTCPOptions(tcpConn)
s.setTCPOptions(tcpConn)
tc := tls.Server(conn, tlsCfg)
tc := tls.Server(conn, s.tlsCfg)
err = tc.Handshake()
if err != nil {
l.Infoln("TLS handshake:", err)
@@ -176,21 +233,20 @@ func listenTLS(conns chan *tls.Conn, addr string, tlsCfg *tls.Config) {
continue
}
conns <- tc
s.conns <- tc
}
}
func dialTLS(m *model.Model, conns chan *tls.Conn, tlsCfg *tls.Config) {
func (s *connectionSvc) connect() {
delay := time.Second
for {
nextDevice:
for deviceID, deviceCfg := range cfg.Devices() {
for deviceID, deviceCfg := range s.cfg.Devices() {
if deviceID == myID {
continue
}
if m.ConnectedTo(deviceID) {
if s.model.ConnectedTo(deviceID) {
continue
}
@@ -238,9 +294,9 @@ func dialTLS(m *model.Model, conns chan *tls.Conn, tlsCfg *tls.Config) {
continue
}
setTCPOptions(conn)
s.setTCPOptions(conn)
tc := tls.Client(conn, tlsCfg)
tc := tls.Client(conn, s.tlsCfg)
err = tc.Handshake()
if err != nil {
l.Infoln("TLS handshake:", err)
@@ -248,20 +304,20 @@ func dialTLS(m *model.Model, conns chan *tls.Conn, tlsCfg *tls.Config) {
continue
}
conns <- tc
s.conns <- tc
continue nextDevice
}
}
time.Sleep(delay)
delay *= 2
if maxD := time.Duration(cfg.Options().ReconnectIntervalS) * time.Second; delay > maxD {
if maxD := time.Duration(s.cfg.Options().ReconnectIntervalS) * time.Second; delay > maxD {
delay = maxD
}
}
}
func setTCPOptions(conn *net.TCPConn) {
func (*connectionSvc) setTCPOptions(conn *net.TCPConn) {
var err error
if err = conn.SetLinger(0); err != nil {
l.Infoln(err)
@@ -277,8 +333,8 @@ func setTCPOptions(conn *net.TCPConn) {
}
}
func shouldLimit(addr net.Addr) bool {
if cfg.Options().LimitBandwidthInLan {
func (s *connectionSvc) shouldLimit(addr net.Addr) bool {
if s.cfg.Options().LimitBandwidthInLan {
return true
}
@@ -293,3 +349,24 @@ func shouldLimit(addr net.Addr) bool {
}
return !tcpaddr.IP.IsLoopback()
}
func (s *connectionSvc) VerifyConfiguration(from, to config.Configuration) error {
return nil
}
func (s *connectionSvc) CommitConfiguration(from, to config.Configuration) bool {
// We require a restart if a device has been removed.
newDevices := make(map[protocol.DeviceID]bool, len(to.Devices))
for _, dev := range to.Devices {
newDevices[dev.DeviceID] = true
}
for _, dev := range from.Devices {
if !newDevices[dev.DeviceID] {
return false
}
}
return true
}

View File

@@ -12,6 +12,7 @@ import (
)
var (
debugNet = strings.Contains(os.Getenv("STTRACE"), "net") || os.Getenv("STTRACE") == "all"
debugHTTP = strings.Contains(os.Getenv("STTRACE"), "http") || os.Getenv("STTRACE") == "all"
debugNet = strings.Contains(os.Getenv("STTRACE"), "net") || os.Getenv("STTRACE") == "all"
debugHTTP = strings.Contains(os.Getenv("STTRACE"), "http") || os.Getenv("STTRACE") == "all"
debugSuture = strings.Contains(os.Getenv("STTRACE"), "suture") || os.Getenv("STTRACE") == "all"
)

View File

File diff suppressed because it is too large.

View File

@@ -12,26 +12,27 @@ import (
"math/rand"
"net/http"
"strings"
"sync"
"time"
"github.com/syncthing/syncthing/internal/config"
"github.com/syncthing/syncthing/internal/sync"
"golang.org/x/crypto/bcrypt"
)
var (
sessions = make(map[string]bool)
sessionsMut sync.Mutex
sessionsMut = sync.NewMutex()
)
func basicAuthAndSessionMiddleware(cfg config.GUIConfiguration, next http.Handler) http.Handler {
func basicAuthAndSessionMiddleware(cookieName string, cfg config.GUIConfiguration, next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if cfg.APIKey != "" && r.Header.Get("X-API-Key") == cfg.APIKey {
next.ServeHTTP(w, r)
return
}
cookie, err := r.Cookie("sessionid")
cookie, err := r.Cookie(cookieName)
if err == nil && cookie != nil {
sessionsMut.Lock()
_, ok := sessions[cookie.Value]
@@ -42,6 +43,10 @@ func basicAuthAndSessionMiddleware(cfg config.GUIConfiguration, next http.Handle
}
}
if debugHTTP {
l.Debugln("Sessionless HTTP request with authentication; this is expensive.")
}
error := func() {
time.Sleep(time.Duration(rand.Intn(100)+100) * time.Millisecond)
w.Header().Set("WWW-Authenticate", "Basic realm=\"Authorization Required\"")
@@ -82,7 +87,7 @@ func basicAuthAndSessionMiddleware(cfg config.GUIConfiguration, next http.Handle
sessions[sessionid] = true
sessionsMut.Unlock()
http.SetCookie(w, &http.Cookie{
Name: "sessionid",
Name: cookieName,
Value: sessionid,
MaxAge: 0,
})

View File

@@ -12,19 +12,18 @@ import (
"net/http"
"os"
"strings"
"sync"
"time"
"github.com/syncthing/syncthing/internal/osutil"
"github.com/syncthing/syncthing/internal/sync"
)
var csrfTokens []string
var csrfMut sync.Mutex
var csrfMut = sync.NewMutex()
// Check for CSRF token on /rest/ URLs. If a correct one is not given, reject
// the request with 403. For / and /index.html, set a new CSRF cookie if none
// is currently set.
func csrfMiddleware(prefix, apiKey string, next http.Handler) http.Handler {
func csrfMiddleware(unique, prefix, apiKey string, next http.Handler) http.Handler {
loadCsrfTokens()
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Allow requests carrying a valid API key
@@ -35,10 +34,10 @@ func csrfMiddleware(prefix, apiKey string, next http.Handler) http.Handler {
// Allow requests for the front page, and set a CSRF cookie if there isn't already a valid one.
if !strings.HasPrefix(r.URL.Path, prefix) {
cookie, err := r.Cookie("CSRF-Token")
cookie, err := r.Cookie("CSRF-Token-" + unique)
if err != nil || !validCsrfToken(cookie.Value) {
cookie = &http.Cookie{
Name: "CSRF-Token",
Name: "CSRF-Token-" + unique,
Value: newCsrfToken(),
}
http.SetCookie(w, cookie)
@@ -54,7 +53,7 @@ func csrfMiddleware(prefix, apiKey string, next http.Handler) http.Handler {
}
// Verify the CSRF token
token := r.Header.Get("X-CSRF-Token")
token := r.Header.Get("X-CSRF-Token-" + unique)
if !validCsrfToken(token) {
http.Error(w, "CSRF Error", 403)
return
@@ -91,28 +90,20 @@ func newCsrfToken() string {
}
func saveCsrfTokens() {
name := locations[locCsrfTokens]
tmp := fmt.Sprintf("%s.tmp.%d", name, time.Now().UnixNano())
// We're ignoring errors in here. It's not super critical and there's
// nothing relevant we can do about them anyway...
f, err := os.OpenFile(tmp, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0644)
name := locations[locCsrfTokens]
f, err := osutil.CreateAtomic(name, 0600)
if err != nil {
return
}
defer os.Remove(tmp)
for _, t := range csrfTokens {
_, err := fmt.Fprintln(f, t)
if err != nil {
return
}
fmt.Fprintln(f, t)
}
err = f.Close()
if err != nil {
return
}
osutil.Rename(tmp, name)
f.Close()
}
func loadCsrfTokens() {

View File

@@ -11,6 +11,7 @@ import (
"path/filepath"
"runtime"
"strings"
"time"
"github.com/syncthing/syncthing/internal/osutil"
)
@@ -29,6 +30,8 @@ const (
locLogFile = "logFile"
locCsrfTokens = "csrfTokens"
locPanicLog = "panicLog"
locAuditLog = "auditLog"
locGUIAssets = "GUIAssets"
locDefFolder = "defFolder"
)
@@ -48,7 +51,9 @@ var locations = map[locationEnum]string{
locDatabase: "${config}/index-v0.11.0.db",
locLogFile: "${config}/syncthing.log", // -logfile on Windows
locCsrfTokens: "${config}/csrftokens.txt",
locPanicLog: "${config}/panic-20060102-150405.log", // passed through time.Format()
locPanicLog: "${config}/panic-${timestamp}.log",
locAuditLog: "${config}/audit-${timestamp}.log",
locGUIAssets: "${config}/gui",
locDefFolder: "${home}/Sync",
}
@@ -107,3 +112,14 @@ func homeDir() string {
}
return home
}
func timestampedLoc(key locationEnum) string {
// We take the roundtrip via "${timestamp}" instead of passing the path
// directly through time.Format() to avoid issues when the path we are
// expanding contains numbers; otherwise for example
// /home/user2006/.../panic-20060102-150405.log would get both instances of
// 2006 replaced by 2015...
tpl := locations[key]
now := time.Now().Format("20060102-150405")
return strings.Replace(tpl, "${timestamp}", now, -1)
}

View File

@@ -10,6 +10,7 @@ import (
"crypto/tls"
"flag"
"fmt"
"io/ioutil"
"log"
"net"
"net/http"
@@ -35,10 +36,10 @@ import (
"github.com/syncthing/syncthing/internal/osutil"
"github.com/syncthing/syncthing/internal/symlinks"
"github.com/syncthing/syncthing/internal/upgrade"
"github.com/syncthing/syncthing/internal/upnp"
"github.com/syndtr/goleveldb/leveldb"
"github.com/syndtr/goleveldb/leveldb/errors"
"github.com/syndtr/goleveldb/leveldb/opt"
"github.com/thejerf/suture"
"golang.org/x/crypto/bcrypt"
)
@@ -78,10 +79,15 @@ func init() {
}
}
// Check for a clean release build.
exp := regexp.MustCompile(`^v\d+\.\d+\.\d+(-beta[\d\.]+)?$`)
// Check for a clean release build. A release is something like "v0.1.2",
// with an optional suffix of letters and dot separated numbers like
// "-beta3.47". If there's more stuff, like a plus sign and a commit hash
// and so on, then it's not a release. If there's a dash anywhere in
// there, it's some kind of beta or prerelease version.
exp := regexp.MustCompile(`^v\d+\.\d+\.\d+(-[a-z]+[\d\.]+)?$`)
IsRelease = exp.MatchString(Version)
IsBeta = strings.Contains(Version, "beta")
IsBeta = strings.Contains(Version, "-")
stamp, _ := strconv.Atoi(BuildStamp)
BuildDate = time.Unix(int64(stamp), 0)
@@ -103,8 +109,6 @@ var (
readRateLimit *ratelimit.Bucket
stop = make(chan int)
discoverer *discover.Discoverer
externalPort int
igd *upnp.IGD
cert tls.Certificate
lans []*net.IPNet
)
@@ -146,10 +150,12 @@ are mostly useful for developers. Use with care.
- "events" (the events package)
- "files" (the files package)
- "http" (the main package; HTTP requests)
- "locks" (the sync package; trace long held locks)
- "net" (the main package; connections & network messages)
- "model" (the model package)
- "scanner" (the scanner package)
- "stats" (the stats package)
- "suture" (the suture package; service management)
- "upnp" (the upnp package)
- "xdr" (the xdr package)
- "all" (all of the above)
@@ -189,6 +195,8 @@ var (
noConsole bool
generateDir string
logFile string
auditEnabled bool
verbose bool
noRestart = os.Getenv("STNORESTART") != ""
noUpgrade = os.Getenv("STNOUPGRADE") != ""
guiAddress = os.Getenv("STGUIADDRESS") // legacy
@@ -225,6 +233,8 @@ func main() {
flag.BoolVar(&doUpgradeCheck, "upgrade-check", false, "Check for available upgrade")
flag.BoolVar(&showVersion, "version", false, "Show version")
flag.StringVar(&upgradeTo, "upgrade-to", upgradeTo, "Force upgrade directly from specified URL")
flag.BoolVar(&auditEnabled, "audit", false, "Write events to audit file")
flag.BoolVar(&verbose, "verbose", false, "Print verbose log output")
flag.Usage = usageFor(flag.CommandLine, usage, fmt.Sprintf(extraUsage, baseDirs["config"]))
flag.Parse()
@@ -242,6 +252,10 @@ func main() {
l.Fatalln(err)
}
if guiAssets == "" {
guiAssets = locations[locGUIAssets]
}
if runtime.GOOS == "windows" {
if logFile == "" {
// Use the default log file location
@@ -270,7 +284,7 @@ func main() {
l.Fatalln(dir, "is not a directory")
}
if err != nil && os.IsNotExist(err) {
err = os.MkdirAll(dir, 0700)
err = osutil.MkdirAll(dir, 0700)
if err != nil {
l.Fatalln("generate:", err)
}
@@ -341,7 +355,13 @@ func main() {
// Use leveldb database locks to protect against concurrent upgrades
_, err = leveldb.OpenFile(locations[locDatabase], &opt.Options{OpenFilesCacheCapacity: 100})
if err != nil {
l.Fatalln("Cannot upgrade, database seems to be locked. Is another copy of Syncthing already running?")
l.Infoln("Attempting upgrade through running Syncthing...")
err = upgradeViaRest()
if err != nil {
l.Fatalln("Upgrade:", err)
}
l.Okln("Syncthing upgrading")
return
}
err = upgrade.To(rel)
@@ -366,17 +386,76 @@ func main() {
}
}
func upgradeViaRest() error {
cfg, err := config.Load(locations[locConfigFile], protocol.LocalDeviceID)
if err != nil {
return err
}
target := cfg.GUI().Address
if cfg.GUI().UseTLS {
target = "https://" + target
} else {
target = "http://" + target
}
r, _ := http.NewRequest("POST", target+"/rest/system/upgrade", nil)
r.Header.Set("X-API-Key", cfg.GUI().APIKey)
tr := &http.Transport{
TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
}
client := &http.Client{
Transport: tr,
Timeout: 60 * time.Second,
}
resp, err := client.Do(r)
if err != nil {
return err
}
if resp.StatusCode != 200 {
bs, err := ioutil.ReadAll(resp.Body)
defer resp.Body.Close()
if err != nil {
return err
}
return errors.New(string(bs))
}
return err
}
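
The REST fallback is equivalent to POSTing to the running instance's API.
A hedged manual version, with the address and API key as placeholders read
from config.xml:

    # -k mirrors the InsecureSkipVerify transport above.
    curl -k -X POST -H "X-API-Key: <apikey>" https://127.0.0.1:8384/rest/system/upgrade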
func syncthingMain() {
var err error
// Create a main service manager. We'll add things to this as we go along.
// We want any logging it does to go through our log system.
mainSvc := suture.New("main", suture.Spec{
Log: func(line string) {
if debugSuture {
l.Debugln(line)
}
},
})
mainSvc.ServeBackground()
// Set a log prefix similar to the ID we will have later on, or early log
// lines look ugly.
l.SetPrefix("[start] ")
if auditEnabled {
startAuditing(mainSvc)
}
if verbose {
mainSvc.Add(newVerboseSvc())
}
// Event subscription for the API; must start early to catch the early events.
apiSub := events.NewBufferedSubscription(events.Default.Subscribe(events.AllEvents), 1000)
if len(os.Getenv("GOMAXPROCS")) == 0 {
runtime.GOMAXPROCS(runtime.NumCPU())
}
events.Default.Log(events.Starting, map[string]string{"home": baseDirs["config"]})
// Ensure that we have a certificate and key.
cert, err = tls.LoadX509KeyPair(locations[locCertFile], locations[locKeyFile])
cert, err := tls.LoadX509KeyPair(locations[locCertFile], locations[locKeyFile])
if err != nil {
cert, err = newCertificate(locations[locCertFile], locations[locKeyFile], tlsDefaultCommonName)
if err != nil {
@@ -394,6 +473,13 @@ func syncthingMain() {
l.Infoln(LongVersion)
l.Infoln("My ID:", myID)
// Emit the Starting event, now that we know who we are.
events.Default.Log(events.Starting, map[string]string{
"home": baseDirs["config"],
"myID": myID.String(),
})
// Prepare to be able to save configuration
cfgFile := locations[locConfigFile]
@@ -479,6 +565,9 @@ func syncthingMain() {
symlinks.Supported = false
}
protocol.PingTimeout = time.Duration(opts.PingTimeoutS) * time.Second
protocol.PingIdleTime = time.Duration(opts.PingIdleTimeS) * time.Second
if opts.MaxSendKbps > 0 {
writeRateLimit = ratelimit.NewBucketWithRate(float64(1000*opts.MaxSendKbps), int64(5*1000*opts.MaxSendKbps))
}
@@ -496,10 +585,9 @@ func syncthingMain() {
}
dbFile := locations[locDatabase]
dbOpts := &opt.Options{OpenFilesCacheCapacity: 100}
ldb, err := leveldb.OpenFile(dbFile, dbOpts)
ldb, err := leveldb.OpenFile(dbFile, dbOpts())
if err != nil && errors.IsCorrupted(err) {
ldb, err = leveldb.RecoverFile(dbFile, dbOpts)
ldb, err = leveldb.RecoverFile(dbFile, dbOpts())
}
if err != nil {
l.Fatalln("Cannot open database:", err, "- Is another copy of Syncthing already running?")
@@ -515,6 +603,7 @@ func syncthingMain() {
}
m := model.NewModel(cfg, myID, myName, "syncthing", Version, ldb)
cfg.Subscribe(m)
if t := os.Getenv("STDEADLOCKTIMEOUT"); len(t) > 0 {
it, err := strconv.Atoi(t)
@@ -525,10 +614,6 @@ func syncthingMain() {
m.StartDeadlockDetector(20 * 60 * time.Second)
}
// GUI
setupGUI(cfg, m)
// Clear out old indexes for other devices. Otherwise we'll start up and
// start needing a bunch of files which are nowhere to be found. This
// needs to be changed when we correctly do persistent indexes.
@@ -540,38 +625,44 @@ func syncthingMain() {
}
m.Index(device, folderCfg.ID, nil, 0, nil)
}
// Routine to pull blocks from other devices to synchronize the local
// folder. Does not run when we are in read only (publish only) mode.
if folderCfg.ReadOnly {
m.StartFolderRO(folderCfg.ID)
} else {
m.StartFolderRW(folderCfg.ID)
}
}
mainSvc.Add(m)
// GUI
setupGUI(mainSvc, cfg, m, apiSub)
// The default port we announce, possibly modified by setupUPnP next.
addr, err := net.ResolveTCPAddr("tcp", opts.ListenAddress[0])
if err != nil {
l.Fatalln("Bad listen address:", err)
}
externalPort = addr.Port
// UPnP
igd = nil
// Start discovery
localPort := addr.Port
discoverer = discovery(localPort)
// Start UPnP. The UPnP service will restart global discovery if the
// external port changes.
if opts.UPnPEnabled {
setupUPnP()
upnpSvc := newUPnPSvc(cfg, localPort)
mainSvc.Add(upnpSvc)
}
// Routine to connect out to configured devices
discoverer = discovery(externalPort)
go listenConnect(myID, m, tlsCfg)
for _, folder := range cfg.Folders() {
// Routine to pull blocks from other devices to synchronize the local
// folder. Does not run when we are in read only (publish only) mode.
if folder.ReadOnly {
l.Okf("Ready to synchronize %s (read only; no external updates accepted)", folder.ID)
m.StartFolderRO(folder.ID)
} else {
l.Okf("Ready to synchronize %s (read-write)", folder.ID)
m.StartFolderRW(folder.ID)
}
}
connectionSvc := newConnectionSvc(cfg, myID, m, tlsCfg)
cfg.Subscribe(connectionSvc)
mainSvc.Add(connectionSvc)
if cpuProfile {
f, err := os.Create(fmt.Sprintf("cpu-%d.pprof", os.Getpid()))
@@ -579,7 +670,6 @@ func syncthingMain() {
log.Fatal(err)
}
pprof.StartCPUProfile(f)
defer pprof.StopCPUProfile()
}
for _, device := range cfg.Devices() {
@@ -602,16 +692,13 @@ func syncthingMain() {
cfg.SetOptions(opts)
cfg.Save()
}
go usageReportingLoop(m)
go func() {
time.Sleep(10 * time.Minute)
err := sendUsageReport(m)
if err != nil {
l.Infoln("Usage report:", err)
}
}()
}
// The usageReportingManager registers itself to listen to configuration
// changes, and there's nothing more we need to tell it from the outside.
// Hence we don't keep the returned pointer.
newUsageReportingManager(m, cfg)
if opts.RestartOnWakeup {
go standbyMonitor()
}
@@ -626,18 +713,78 @@ func syncthingMain() {
}
}
events.Default.Log(events.StartupComplete, nil)
events.Default.Log(events.StartupComplete, map[string]string{
"myID": myID.String(),
})
go generatePingEvents()
cleanConfigDirectory()
code := <-stop
mainSvc.Stop()
l.Okln("Exiting")
if cpuProfile {
pprof.StopCPUProfile()
}
os.Exit(code)
}
func setupGUI(cfg *config.Wrapper, m *model.Model) {
func dbOpts() *opt.Options {
// Calculate a suitable database block cache capacity.
// Default is 8 MiB.
blockCacheCapacity := 8 << 20
// Increase block cache up to this maximum:
const maxCapacity = 64 << 20
// ... which we reach when the box has this much RAM:
const maxAtRAM = 8 << 30
if v := cfg.Options().DatabaseBlockCacheMiB; v != 0 {
// Use the value from the config, if it's set.
blockCacheCapacity = v << 20
} else if bytes, err := memorySize(); err == nil {
// We start at the default of 8 MiB and use larger values for machines
// with more memory.
if bytes > maxAtRAM {
// Cap the cache at maxCapacity when we reach maxAtRAM amount of memory
blockCacheCapacity = maxCapacity
} else if bytes > maxAtRAM/maxCapacity*int64(blockCacheCapacity) {
// Grow from the default to maxCapacity at maxAtRAM amount of memory
blockCacheCapacity = int(bytes * maxCapacity / maxAtRAM)
}
l.Infoln("Database block cache capacity", blockCacheCapacity/1024, "KiB")
}
return &opt.Options{
OpenFilesCacheCapacity: 100,
BlockCacheCapacity: blockCacheCapacity,
WriteBuffer: 4 << 20,
}
}
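
As a worked example of the scaling above: with no DatabaseBlockCacheMiB
set, a machine reporting 4 GiB of RAM gets 4 GiB × 64 MiB / 8 GiB = 32 MiB
of block cache; at 8 GiB or more the cache is capped at 64 MiB; and below
1 GiB (where the scaled value would fall under the default) the 8 MiB
default stands.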
func startAuditing(mainSvc *suture.Supervisor) {
auditFile := timestampedLoc(locAuditLog)
fd, err := os.OpenFile(auditFile, os.O_WRONLY|os.O_CREATE|os.O_EXCL, 0600)
if err != nil {
l.Fatalln("Audit:", err)
}
auditSvc := newAuditSvc(fd)
mainSvc.Add(auditSvc)
// We wait for the audit service to fully start before we return, to
// ensure we capture all events from the start.
auditSvc.WaitForStart()
l.Infoln("Audit log in", auditFile)
}
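
Auditing is switched on with the new command line flag; a sketch:

    # Writes one JSON event per line to audit-<timestamp>.log in the config
    # directory; old logs are removed after a week by cleanConfigDirectory.
    syncthing -audit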
func setupGUI(mainSvc *suture.Supervisor, cfg *config.Wrapper, m *model.Model, apiSub *events.BufferedSubscription) {
opts := cfg.Options()
guiCfg := overrideGUIConfig(cfg.GUI(), guiAddress, guiAuthentication, guiAPIKey)
@@ -666,10 +813,13 @@ func setupGUI(cfg *config.Wrapper, m *model.Model) {
urlShow := fmt.Sprintf("%s://%s/", proto, net.JoinHostPort(hostShow, strconv.Itoa(addr.Port)))
l.Infoln("Starting web GUI on", urlShow)
err := startGUI(guiCfg, guiAssets, m)
api, err := newAPISvc(myID, guiCfg, guiAssets, m, apiSub)
if err != nil {
l.Fatalln("Cannot start GUI:", err)
}
cfg.Subscribe(api)
mainSvc.Add(api)
if opts.StartBrowser && !noBrowser && !stRestarting {
urlOpen := fmt.Sprintf("%s://%s/", proto, net.JoinHostPort(hostOpen, strconv.Itoa(addr.Port)))
// Can potentially block if the utility we are invoking doesn't
@@ -719,108 +869,6 @@ func generatePingEvents() {
}
}
func setupUPnP() {
if opts := cfg.Options(); len(opts.ListenAddress) == 1 {
_, portStr, err := net.SplitHostPort(opts.ListenAddress[0])
if err != nil {
l.Warnln("Bad listen address:", err)
} else {
// Set up incoming port forwarding, if necessary and possible
port, _ := strconv.Atoi(portStr)
igds := upnp.Discover()
if len(igds) > 0 {
// Configure the first discovered IGD only. This is a work-around until we have a better mechanism
// for handling multiple IGDs, which will require changes to the global discovery service
igd = &igds[0]
externalPort = setupExternalPort(igd, port)
if externalPort == 0 {
l.Warnln("Failed to create UPnP port mapping")
} else {
l.Infof("Created UPnP port mapping for external port %d on UPnP device %s.", externalPort, igd.FriendlyIdentifier())
if opts.UPnPRenewal > 0 {
go renewUPnP(port)
}
}
}
}
} else {
l.Warnln("Multiple listening addresses; not attempting UPnP port mapping")
}
}
func setupExternalPort(igd *upnp.IGD, port int) int {
if igd == nil {
return 0
}
for i := 0; i < 10; i++ {
r := 1024 + predictableRandom.Intn(65535-1024)
err := igd.AddPortMapping(upnp.TCP, r, port, fmt.Sprintf("syncthing-%d", r), cfg.Options().UPnPLease*60)
if err == nil {
return r
}
}
return 0
}
func renewUPnP(port int) {
for {
opts := cfg.Options()
time.Sleep(time.Duration(opts.UPnPRenewal) * time.Minute)
// Make sure our IGD reference isn't nil
if igd == nil {
if debugNet {
l.Debugln("Undefined IGD during UPnP port renewal. Re-discovering...")
}
igds := upnp.Discover()
if len(igds) > 0 {
// Configure the first discovered IGD only. This is a work-around until we have a better mechanism
// for handling multiple IGDs, which will require changes to the global discovery service
igd = &igds[0]
} else {
if debugNet {
l.Debugln("Failed to discover IGD during UPnP port mapping renewal.")
}
continue
}
}
// Just renew the same port that we already have
if externalPort != 0 {
err := igd.AddPortMapping(upnp.TCP, externalPort, port, "syncthing", opts.UPnPLease*60)
if err != nil {
l.Warnf("Error renewing UPnP port mapping for external port %d on device %s: %s", externalPort, igd.FriendlyIdentifier(), err.Error())
} else if debugNet {
l.Debugf("Renewed UPnP port mapping for external port %d on device %s.", externalPort, igd.FriendlyIdentifier())
}
continue
}
// Something strange has happened. We didn't have an external port before?
// Or perhaps the gateway has changed?
// Retry the same port sequence from the beginning.
if debugNet {
l.Debugln("No UPnP port mapping defined, updating...")
}
forwardedPort := setupExternalPort(igd, port)
if forwardedPort != 0 {
externalPort = forwardedPort
discoverer.StopGlobal()
discoverer.StartGlobal(opts.GlobalAnnServers, uint16(forwardedPort))
if debugNet {
l.Debugf("Updated UPnP port mapping for external port %d on device %s.", forwardedPort, igd.FriendlyIdentifier())
}
} else {
l.Warnf("Failed to update UPnP port mapping for external port on device " + igd.FriendlyIdentifier() + ".")
}
}
}
func resetDB() error {
return os.RemoveAll(locations[locDatabase])
}
@@ -855,7 +903,7 @@ func discovery(extPort int) *discover.Discoverer {
func ensureDir(dir string, mode int) {
fi, err := os.Stat(dir)
if os.IsNotExist(err) {
err := os.MkdirAll(dir, 0700)
err := osutil.MkdirAll(dir, 0700)
if err != nil {
l.Fatalln(err)
}
@@ -1004,6 +1052,7 @@ func autoUpgrade() {
func cleanConfigDirectory() {
patterns := map[string]time.Duration{
"panic-*.log": 7 * 24 * time.Hour, // keep panic logs for a week
"audit-*.log": 7 * 24 * time.Hour, // keep audit logs for a week
"index": 14 * 24 * time.Hour, // keep old index format for two weeks
"config.xml.v*": 30 * 24 * time.Hour, // old config versions for a month
"*.idx.gz": 30 * 24 * time.Hour, // these should for sure no longer exist
@@ -1012,14 +1061,14 @@ func cleanConfigDirectory() {
for pat, dur := range patterns {
pat = filepath.Join(baseDirs["config"], pat)
files, err := filepath.Glob(pat)
files, err := osutil.Glob(pat)
if err != nil {
l.Infoln("Cleaning:", err)
continue
}
for _, file := range files {
info, err := os.Lstat(file)
info, err := osutil.Lstat(file)
if err != nil {
l.Infoln("Cleaning:", err)
continue

View File

@@ -20,6 +20,10 @@ import (
)
func TestFolderErrors(t *testing.T) {
// This test intentionally avoids starting the folders. If they are
// started, they will perform an initial scan, which will create missing
// folder markers and race with the stuff we do in the test.
fcfg := config.FolderConfiguration{
ID: "folder",
RawPath: "testdata/testfolder",
@@ -29,10 +33,8 @@ func TestFolderErrors(t *testing.T) {
})
for _, file := range []string{".stfolder", "testfolder/.stfolder", "testfolder"} {
os.Remove("testdata/" + file)
_, err := os.Stat("testdata/" + file)
if err == nil {
t.Error("Found unexpected file")
if err := os.Remove("testdata/" + file); err != nil && !os.IsNotExist(err) {
t.Fatal(err)
}
}
@@ -57,8 +59,12 @@ func TestFolderErrors(t *testing.T) {
t.Error(err)
}
os.Remove("testdata/testfolder/.stfolder")
os.Remove("testdata/testfolder/")
if err := os.Remove("testdata/testfolder/.stfolder"); err != nil {
t.Fatal(err)
}
if err := os.Remove("testdata/testfolder/"); err != nil {
t.Fatal(err)
}
// Case 2 - new folder, marker created
@@ -79,7 +85,9 @@ func TestFolderErrors(t *testing.T) {
t.Error(err)
}
os.Remove("testdata/.stfolder")
if err := os.Remove("testdata/.stfolder"); err != nil {
t.Fatal(err)
}
// Case 3 - Folder marker missing
@@ -91,7 +99,7 @@ func TestFolderErrors(t *testing.T) {
m = model.NewModel(cfg, protocol.LocalDeviceID, "device", "syncthing", "dev", ldb)
m.AddFolder(fcfg)
if err := m.CheckFolderHealth("folder"); err == nil || err.Error() != "Folder marker missing" {
if err := m.CheckFolderHealth("folder"); err == nil || err.Error() != "folder marker missing" {
t.Error("Incorrect error: Folder marker missing !=", m.CheckFolderHealth("folder"))
}
@@ -107,8 +115,12 @@ func TestFolderErrors(t *testing.T) {
// Case 4 - Folder path missing
os.Remove("testdata/testfolder/.stfolder")
os.Remove("testdata/testfolder/")
if err := os.Remove("testdata/testfolder/.stfolder"); err != nil && !os.IsNotExist(err) {
t.Fatal(err)
}
if err := os.Remove("testdata/testfolder"); err != nil && !os.IsNotExist(err) {
t.Fatal(err)
}
fcfg.RawPath = "testdata/testfolder"
cfg = config.Wrap("testdata/subfolder", config.Configuration{
@@ -118,15 +130,17 @@ func TestFolderErrors(t *testing.T) {
m = model.NewModel(cfg, protocol.LocalDeviceID, "device", "syncthing", "dev", ldb)
m.AddFolder(fcfg)
if err := m.CheckFolderHealth("folder"); err == nil || err.Error() != "Folder path missing" {
if err := m.CheckFolderHealth("folder"); err == nil || err.Error() != "folder path missing" {
t.Error("Incorrect error: Folder path missing !=", m.CheckFolderHealth("folder"))
}
// Case 4.1 - recover after folder path missing
os.Mkdir("testdata/testfolder", 0700)
if err := os.Mkdir("testdata/testfolder", 0700); err != nil {
t.Fatal(err)
}
if err := m.CheckFolderHealth("folder"); err == nil || err.Error() != "Folder marker missing" {
if err := m.CheckFolderHealth("folder"); err == nil || err.Error() != "folder marker missing" {
t.Error("Incorrect error: Folder marker missing !=", m.CheckFolderHealth("folder"))
}

View File

@@ -14,17 +14,17 @@ import (
"os/signal"
"runtime"
"strings"
"sync"
"syscall"
"time"
"github.com/syncthing/syncthing/internal/osutil"
"github.com/syncthing/syncthing/internal/sync"
)
var (
stdoutFirstLines []string // The first 10 lines of stdout
stdoutLastLines []string // The last 50 lines of stdout
stdoutMut sync.Mutex
stdoutMut = sync.NewMutex()
)
const (
@@ -106,12 +106,24 @@ func monitorMain() {
stdoutLastLines = make([]string, 0, 50)
stdoutMut.Unlock()
go copyStderr(stderr, dst)
go copyStdout(stdout, dst)
wg := sync.NewWaitGroup()
wg.Add(1)
go func() {
copyStderr(stderr, dst)
wg.Done()
}()
wg.Add(1)
go func() {
copyStdout(stdout, dst)
wg.Done()
}()
exit := make(chan error)
go func() {
wg.Wait()
exit <- cmd.Wait()
}()
@@ -124,7 +136,7 @@ func monitorMain() {
case err = <-exit:
if err == nil {
// Successfull exit indicates an intentional shutdown
// Successful exit indicates an intentional shutdown
return
} else if exiterr, ok := err.(*exec.ExitError); ok {
if status, ok := exiterr.Sys().(syscall.WaitStatus); ok {
@@ -149,7 +161,7 @@ func monitorMain() {
}
}
func copyStderr(stderr io.ReadCloser, dst io.Writer) {
func copyStderr(stderr io.Reader, dst io.Writer) {
br := bufio.NewReader(stderr)
var panicFd *os.File
@@ -163,14 +175,15 @@ func copyStderr(stderr io.ReadCloser, dst io.Writer) {
dst.Write([]byte(line))
if strings.HasPrefix(line, "panic:") || strings.HasPrefix(line, "fatal error:") {
panicFd, err = os.Create(time.Now().Format(locations[locPanicLog]))
panicFd, err = os.Create(timestampedLoc(locPanicLog))
if err != nil {
l.Warnln("Create panic log:", err)
continue
}
l.Warnf("Panic detected, writing to \"%s\"", panicFd.Name())
l.Warnln("Please create an issue at https://github.com/syncthing/syncthing/issues/ with the panic log attached")
l.Warnln("Please check for existing issues with similar panic message at https://github.com/syncthing/syncthing/issues/")
l.Warnln("If no issue with similar panic message exists, please create a new issue with the panic log attached")
stdoutMut.Lock()
for _, line := range stdoutFirstLines {
@@ -192,7 +205,7 @@ func copyStderr(stderr io.ReadCloser, dst io.Writer) {
}
}
func copyStdout(stdout io.ReadCloser, dst io.Writer) {
func copyStdout(stdout io.Reader, dst io.Writer) {
br := bufio.NewReader(stdout)
for {
line, err := br.ReadString('\n')

View File

@@ -7,6 +7,7 @@
package main
import (
"runtime"
"sync"
"testing"
)
@@ -14,10 +15,13 @@ import (
var predictableRandomTest sync.Once
func TestPredictableRandom(t *testing.T) {
if runtime.GOARCH != "amd64" {
t.Skip("Test only for 64 bit platforms; but if it works there, it should work on 32 bit")
}
predictableRandomTest.Do(func() {
// predictable random sequence is predictable
e := 3440579354231278675
if v := predictableRandom.Int(); v != e {
e := int64(3440579354231278675)
if v := int64(predictableRandom.Int()); v != e {
t.Errorf("Unexpected random value %d != %d", v, e)
}
})

View File

@@ -7,45 +7,51 @@
package main
import (
"sync"
"time"
"github.com/syncthing/syncthing/internal/events"
"github.com/syncthing/syncthing/internal/model"
"github.com/syncthing/syncthing/internal/sync"
"github.com/thejerf/suture"
)
// The folderSummarySvc adds summary information events (FolderSummary and
// FolderCompletion) into the event stream at certain intervals.
type folderSummarySvc struct {
*suture.Supervisor
model *model.Model
srv suture.Service
stop chan struct{}
immediate chan string
// For keeping track of folders to recalculate for
foldersMut sync.Mutex
folders map[string]struct{}
// For keeping track of when the last event request on the API was
lastEventReq time.Time
lastEventReqMut sync.Mutex
}
func (c *folderSummarySvc) Serve() {
srv := suture.NewSimple("folderSummarySvc")
srv.Add(serviceFunc(c.listenForUpdates))
srv.Add(serviceFunc(c.calculateSummaries))
func newFolderSummarySvc(m *model.Model) *folderSummarySvc {
svc := &folderSummarySvc{
Supervisor: suture.NewSimple("folderSummarySvc"),
model: m,
stop: make(chan struct{}),
immediate: make(chan string),
folders: make(map[string]struct{}),
foldersMut: sync.NewMutex(),
lastEventReqMut: sync.NewMutex(),
}
c.immediate = make(chan string)
c.stop = make(chan struct{})
c.folders = make(map[string]struct{})
c.srv = srv
svc.Add(serviceFunc(svc.listenForUpdates))
svc.Add(serviceFunc(svc.calculateSummaries))
srv.Serve()
return svc
}
func (c *folderSummarySvc) Stop() {
// c.srv.Stop() is mostly a no-op here, but we need to call it anyway so
// c.srv doesn't try to restart the serviceFuncs when they exit after we
// close the stop channel.
c.srv.Stop()
c.Supervisor.Stop()
close(c.stop)
}
@@ -66,21 +72,25 @@ func (c *folderSummarySvc) listenForUpdates() {
data := ev.Data.(map[string]interface{})
folder := data["folder"].(string)
if ev.Type == events.StateChanged && data["to"].(string) == "idle" && data["from"].(string) == "syncing" {
// The folder changed to idle from syncing. We should do an
// immediate refresh to update the GUI. The send to
// c.immediate must be nonblocking so that we can continue
// handling events.
switch ev.Type {
case events.StateChanged:
if data["to"].(string) == "idle" && data["from"].(string) == "syncing" {
// The folder changed to idle from syncing. We should do an
// immediate refresh to update the GUI. The send to
// c.immediate must be nonblocking so that we can continue
// handling events.
select {
case c.immediate <- folder:
c.foldersMut.Lock()
delete(c.folders, folder)
c.foldersMut.Unlock()
select {
case c.immediate <- folder:
c.foldersMut.Lock()
delete(c.folders, folder)
c.foldersMut.Unlock()
default:
default:
}
}
} else {
default:
// This folder needs to be refreshed whenever we do the next
// refresh.
@@ -127,16 +137,13 @@ func (c *folderSummarySvc) calculateSummaries() {
// foldersToHandle returns the list of folders needing a summary update, and
// clears the list.
func (c *folderSummarySvc) foldersToHandle() []string {
// We only recalculate sumamries if someone is listening to events
// We only recalculate summaries if someone is listening to events
// (a request to /rest/events has been made within the last
// pingEventInterval).
lastEventRequestMut.Lock()
// XXX: Reaching out to a global var here is very ugly :( Should
// we make the gui stuff a proper object with methods on it that
// we can query about this kind of thing?
last := lastEventRequest
lastEventRequestMut.Unlock()
c.lastEventReqMut.Lock()
last := c.lastEventReq
c.lastEventReqMut.Unlock()
if time.Since(last) > pingEventInterval {
return nil
}
@@ -182,6 +189,12 @@ func (c *folderSummarySvc) sendSummary(folder string) {
}
}
func (c *folderSummarySvc) gotEventRequest() {
c.lastEventReqMut.Lock()
c.lastEventReq = time.Now()
c.lastEventReqMut.Unlock()
}
// serviceFunc wraps a function to create a suture.Service without stop
// functionality.
type serviceFunc func()

View File

@@ -102,7 +102,9 @@ func (l *DowngradingListener) Accept() (net.Conn, error) {
}
br := bufio.NewReader(conn)
conn.SetReadDeadline(time.Now().Add(1 * time.Second))
bs, err := br.Peek(1)
conn.SetReadDeadline(time.Time{})
if err != nil {
// We hit a read error here, but the Accept() call succeeded so we must not return an error.
// We return the connection as is and let whoever tries to use it deal with the error.
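The two SetReadDeadline calls are the substance of the change (ref. "Add timeout for peek (fixes #1035)" above): without a deadline, a client that connects but never sends a byte would block the Peek, and with it Accept-side protocol detection, forever. A standalone sketch of the same pattern; peekWithTimeout is our name for it, not from the source:

package main

import (
    "bufio"
    "fmt"
    "log"
    "net"
    "time"
)

// peekWithTimeout bounds a Peek with a read deadline and then clears it,
// the same pattern as the DowngradingListener change above.
func peekWithTimeout(conn net.Conn, n int, d time.Duration) ([]byte, error) {
    br := bufio.NewReader(conn)
    conn.SetReadDeadline(time.Now().Add(d))
    bs, err := br.Peek(n)
    conn.SetReadDeadline(time.Time{}) // the zero value clears the deadline
    return bs, err
}

func main() {
    ln, err := net.Listen("tcp", "127.0.0.1:0")
    if err != nil {
        log.Fatal(err)
    }
    go func() {
        // A client that connects but never sends anything.
        c, _ := net.Dial("tcp", ln.Addr().String())
        defer c.Close()
        time.Sleep(3 * time.Second)
    }()
    conn, err := ln.Accept()
    if err != nil {
        log.Fatal(err)
    }
    bs, err := peekWithTimeout(conn, 1, time.Second)
    fmt.Println(bs, err) // after ~1s: [] ... i/o timeout
}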

120
cmd/syncthing/upnpsvc.go Normal file
View File

@@ -0,0 +1,120 @@
// Copyright (C) 2015 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at http://mozilla.org/MPL/2.0/.
package main
import (
"fmt"
"time"
"github.com/syncthing/syncthing/internal/config"
"github.com/syncthing/syncthing/internal/upnp"
)
// The UPnP service runs a loop for discovery of IGDs (Internet Gateway
// Devices) and setup/renewal of a port mapping.
type upnpSvc struct {
cfg *config.Wrapper
localPort int
stop chan struct{}
}
func newUPnPSvc(cfg *config.Wrapper, localPort int) *upnpSvc {
return &upnpSvc{
cfg: cfg,
localPort: localPort,
}
}
func (s *upnpSvc) Serve() {
extPort := 0
foundIGD := true
s.stop = make(chan struct{})
for {
igds := upnp.Discover(time.Duration(s.cfg.Options().UPnPTimeoutS) * time.Second)
if len(igds) > 0 {
foundIGD = true
extPort = s.tryIGDs(igds, extPort)
} else if foundIGD {
// Only print a notice if we've previously found an IGD or this is
// the first time around.
foundIGD = false
l.Infof("No UPnP device detected")
}
d := time.Duration(s.cfg.Options().UPnPRenewalM) * time.Minute
if d == 0 {
// We always want to do renewal, so let's just pick a nice sane number.
d = 30 * time.Minute
}
select {
case <-s.stop:
return
case <-time.After(d):
}
}
}
func (s *upnpSvc) Stop() {
close(s.stop)
}
func (s *upnpSvc) tryIGDs(igds []upnp.IGD, prevExtPort int) int {
// Let's try all the IGDs we found and use the first one that works.
// TODO: Use all of them, and sort out the resulting mess to the
// discovery announcement code...
for _, igd := range igds {
extPort, err := s.tryIGD(igd, prevExtPort)
if err != nil {
l.Warnf("Failed to set UPnP port mapping: external port %d on device %s.", extPort, igd.FriendlyIdentifier())
continue
}
if extPort != prevExtPort {
// External port changed; refresh the discovery announcement.
// TODO: Don't reach out to some magic global here?
l.Infof("New UPnP port mapping: external port %d to local port %d.", extPort, s.localPort)
if s.cfg.Options().GlobalAnnEnabled {
discoverer.StopGlobal()
discoverer.StartGlobal(s.cfg.Options().GlobalAnnServers, uint16(extPort))
}
}
if debugNet {
l.Debugf("Created/updated UPnP port mapping for external port %d on device %s.", extPort, igd.FriendlyIdentifier())
}
return extPort
}
return 0
}
func (s *upnpSvc) tryIGD(igd upnp.IGD, suggestedPort int) (int, error) {
var err error
leaseTime := s.cfg.Options().UPnPLeaseM * 60
if suggestedPort != 0 {
// First try renewing our existing mapping.
name := fmt.Sprintf("syncthing-%d", suggestedPort)
err = igd.AddPortMapping(upnp.TCP, suggestedPort, s.localPort, name, leaseTime)
if err == nil {
return suggestedPort, nil
}
}
for i := 0; i < 10; i++ {
// Then try up to ten random ports.
extPort := 1024 + predictableRandom.Intn(65535-1024)
name := fmt.Sprintf("syncthing-%d", extPort)
err = igd.AddPortMapping(upnp.TCP, extPort, s.localPort, name, leaseTime)
if err == nil {
return extPort, nil
}
}
return 0, err
}
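tryIGD encodes the port-selection strategy: renew the previously mapped external port if there was one, otherwise probe up to ten random unprivileged ports and give up with the last error. The same logic in a standalone sketch; addMapping is a hypothetical stand-in for igd.AddPortMapping that refuses ports below 30000 so the fallback path gets exercised:

package main

import (
    "errors"
    "fmt"
    "math/rand"
)

// addMapping is a hypothetical stand-in for igd.AddPortMapping.
func addMapping(extPort int) error {
    if extPort < 30000 {
        return errors.New("port refused")
    }
    return nil
}

// pickPort mirrors tryIGD: renew the previous mapping if any, otherwise try
// up to ten random ports, returning the last error on total failure.
func pickPort(prev int) (int, error) {
    var err error
    if prev != 0 {
        if err = addMapping(prev); err == nil {
            return prev, nil // renewal of the old mapping succeeded
        }
    }
    for i := 0; i < 10; i++ {
        extPort := 1024 + rand.Intn(65535-1024) // range [1024, 65534]
        if err = addMapping(extPort); err == nil {
            return extPort, nil
        }
    }
    return 0, err
}

func main() {
    port, err := pickPort(22000) // 22000 is refused, forcing the fallback
    fmt.Println(port, err)
}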

View File

@@ -11,12 +11,15 @@ import (
"crypto/rand"
"crypto/sha256"
"encoding/json"
"fmt"
"net"
"net/http"
"runtime"
"time"
"github.com/syncthing/syncthing/internal/config"
"github.com/syncthing/syncthing/internal/model"
"github.com/thejerf/suture"
)
// Current version number of the usage report, for acceptance purposes. If
@@ -24,8 +27,54 @@ import (
// are prompted for acceptance of the new report.
const usageReportVersion = 1
type usageReportingManager struct {
model *model.Model
sup *suture.Supervisor
}
func newUsageReportingManager(m *model.Model, cfg *config.Wrapper) *usageReportingManager {
mgr := &usageReportingManager{
model: m,
}
// Start UR if it's enabled.
mgr.CommitConfiguration(config.Configuration{}, cfg.Raw())
// Listen to future config changes so that we can start and stop as
// appropriate.
cfg.Subscribe(mgr)
return mgr
}
func (m *usageReportingManager) VerifyConfiguration(from, to config.Configuration) error {
return nil
}
func (m *usageReportingManager) CommitConfiguration(from, to config.Configuration) bool {
if to.Options.URAccepted >= usageReportVersion && m.sup == nil {
// Usage reporting was turned on; lets start it.
svc := &usageReportingService{
model: m.model,
}
m.sup = suture.NewSimple("usageReporting")
m.sup.Add(svc)
m.sup.ServeBackground()
} else if to.Options.URAccepted < usageReportVersion && m.sup != nil {
// Usage reporting was turned off
m.sup.Stop()
m.sup = nil
}
return true
}
func (m *usageReportingManager) String() string {
return fmt.Sprintf("usageReportingManager@%p", m)
}
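newUsageReportingManager shows the no-restart configuration pattern also used for folders and devices above: apply the current config once, then subscribe for future commits. A standalone sketch of that subscribe/commit shape, with a hypothetical miniature Wrapper standing in for the real config.Wrapper:

package main

import "fmt"

type Configuration struct{ URAccepted int }

// Committer mirrors the two-method subscriber shape used above.
type Committer interface {
    VerifyConfiguration(from, to Configuration) error
    CommitConfiguration(from, to Configuration) bool
}

type Wrapper struct {
    cfg  Configuration
    subs []Committer
}

func (w *Wrapper) Subscribe(c Committer) { w.subs = append(w.subs, c) }

// Replace installs a new configuration and notifies every subscriber.
func (w *Wrapper) Replace(to Configuration) {
    from := w.cfg
    w.cfg = to
    for _, s := range w.subs {
        s.CommitConfiguration(from, to)
    }
}

type urManager struct{ running bool }

func (m *urManager) VerifyConfiguration(from, to Configuration) error { return nil }

func (m *urManager) CommitConfiguration(from, to Configuration) bool {
    // Start or stop based on the new config, like the manager above.
    m.running = to.URAccepted >= 1
    fmt.Println("usage reporting running:", m.running)
    return true
}

func main() {
    w := &Wrapper{}
    mgr := &urManager{}
    mgr.CommitConfiguration(Configuration{}, w.cfg) // apply current config once
    w.Subscribe(mgr)                                // then follow future changes
    w.Replace(Configuration{URAccepted: 1})
}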
// reportData returns the data to be sent in a usage report. It's used in
// various places, so not part of the usageReportingSvc object.
func reportData(m *model.Model) map[string]interface{} {
res := make(map[string]interface{})
res["uniqueID"] = cfg.Options().URUniqueID
@@ -75,8 +124,13 @@ func reportData(m *model.Model) map[string]interface{} {
return res
}
type usageReportingService struct {
    model *model.Model
    stop  chan struct{}
}

func (s *usageReportingService) sendUsageReport() error {
    d := reportData(s.model)
    var b bytes.Buffer
    json.NewEncoder(&b).Encode(d)
@@ -94,32 +148,32 @@ func sendUsageReport(m *model.Model) error {
return err
}
func (s *usageReportingService) Serve() {
    s.stop = make(chan struct{})
    l.Infoln("Starting usage reporting")
    defer l.Infoln("Stopping usage reporting")
    t := time.NewTimer(10 * time.Minute) // time to initial report at start
    for {
        select {
        case <-s.stop:
            return
        case <-t.C:
            err := s.sendUsageReport()
            if err != nil {
                l.Infoln("Usage report:", err)
            }
            t.Reset(24 * time.Hour) // next report tomorrow
        }
    }
}

func (s *usageReportingService) Stop() {
    close(s.stop)
}
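The new Serve loop swaps the old 86400-second ticker for a single timer: it fires ten minutes after startup for the initial report, then is re-armed for 24 hours after each send. The same schedule in miniature, with durations shrunk so the sketch finishes in about a second:

package main

import (
    "fmt"
    "time"
)

func main() {
    stop := make(chan struct{})
    t := time.NewTimer(100 * time.Millisecond) // stands in for the 10-minute startup delay

    go func() { // stands in for Stop() being called later
        time.Sleep(time.Second)
        close(stop)
    }()

    for {
        select {
        case <-stop:
            fmt.Println("stopping usage reporting")
            return
        case <-t.C:
            fmt.Println("sending report")
            t.Reset(300 * time.Millisecond) // stands in for the 24-hour interval
        }
    }
}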
// cpuBench returns CPU performance as a measure of single threaded SHA-256 MiB/s
func cpuBench() float64 {
chunkSize := 100 * 1 << 10
h := sha256.New()

129
cmd/syncthing/verbose.go Normal file
View File

@@ -0,0 +1,129 @@
// Copyright (C) 2015 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at http://mozilla.org/MPL/2.0/.
package main
import (
"fmt"
"github.com/syncthing/syncthing/internal/events"
)
// The verbose logging service subscribes to events and prints them in a
// verbose format to the console at INFO level.
type verboseSvc struct {
stop chan struct{} // signals time to stop
started chan struct{} // signals startup complete
}
func newVerboseSvc() *verboseSvc {
return &verboseSvc{
stop: make(chan struct{}),
started: make(chan struct{}),
}
}
// Serve runs the verbose logging service.
func (s *verboseSvc) Serve() {
sub := events.Default.Subscribe(events.AllEvents)
defer events.Default.Unsubscribe(sub)
// We're ready to start processing events.
close(s.started)
for {
select {
case ev := <-sub.C():
formatted := s.formatEvent(ev)
if formatted != "" {
l.Verboseln(formatted)
}
case <-s.stop:
return
}
}
}
// Stop stops the verbose logging service.
func (s *verboseSvc) Stop() {
close(s.stop)
}
// WaitForStart returns once the verbose logging service is ready to receive
// events, or immediately if it's already running.
func (s *verboseSvc) WaitForStart() {
<-s.started
}
func (s *verboseSvc) formatEvent(ev events.Event) string {
switch ev.Type {
case events.Ping, events.DownloadProgress:
// Skip
return ""
case events.Starting:
return fmt.Sprintf("Starting up (%s)", ev.Data.(map[string]string)["home"])
case events.StartupComplete:
return "Startup complete"
case events.DeviceDiscovered:
data := ev.Data.(map[string]interface{})
return fmt.Sprintf("Discovered device %v at %v", data["device"], data["addrs"])
case events.DeviceConnected:
data := ev.Data.(map[string]string)
return fmt.Sprintf("Connected to device %v at %v", data["id"], data["addr"])
case events.DeviceDisconnected:
data := ev.Data.(map[string]string)
return fmt.Sprintf("Disconnected from device %v", data["id"])
case events.StateChanged:
data := ev.Data.(map[string]interface{})
return fmt.Sprintf("Folder %q is now %v", data["folder"], data["to"])
case events.RemoteIndexUpdated:
data := ev.Data.(map[string]interface{})
return fmt.Sprintf("Device %v sent an index update for %q with %d items", data["device"], data["folder"], data["items"])
case events.LocalIndexUpdated:
data := ev.Data.(map[string]interface{})
return fmt.Sprintf("Updated index for folder %q with %v items", data["folder"], data["items"])
case events.DeviceRejected:
data := ev.Data.(map[string]interface{})
return fmt.Sprintf("Rejected connection from device %v at %v", data["device"], data["address"])
case events.FolderRejected:
data := ev.Data.(map[string]string)
return fmt.Sprintf("Rejected unshared folder %q from device %v", data["folder"], data["device"])
case events.ItemStarted:
data := ev.Data.(map[string]interface{})
return fmt.Sprintf("Started syncing %q / %q (%v %v)", data["folder"], data["item"], data["action"], data["type"])
case events.ItemFinished:
data := ev.Data.(map[string]interface{})
if err, ok := data["error"].(*string); ok && err != nil {
// If the err interface{} is not nil, it is a string pointer.
// Dereference it to get the actual error or Sprintf will print
// the pointer value.
return fmt.Sprintf("Finished syncing %q / %q (%v %v): %v", data["folder"], data["item"], data["action"], data["type"], *err)
}
return fmt.Sprintf("Finished syncing %q / %q (%v %v): Success", data["folder"], data["item"], data["action"], data["type"])
case events.ConfigSaved:
return "Configuration was saved"
case events.FolderCompletion:
data := ev.Data.(map[string]interface{})
return fmt.Sprintf("Completion for folder %q on device %v is %v%%", data["folder"], data["device"], data["completion"])
case events.FolderSummary:
data := ev.Data.(map[string]interface{})
sum := data["summary"].(map[string]interface{})
delete(sum, "invalid")
delete(sum, "ignorePatterns")
delete(sum, "stateChanged")
return fmt.Sprintf("Summary for folder %q is %v", data["folder"], data["summary"])
}
return fmt.Sprintf("%s %#v", ev.Type, ev)
}
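The started channel is a one-shot readiness signal: Serve closes it once the event subscription exists, so callers can use WaitForStart to avoid emitting events into the void. A standalone sketch of the handshake:

package main

import (
    "fmt"
    "time"
)

// svc sketches the handshake above: Serve closes started when ready, and
// WaitForStart blocks on it. Receiving from a closed channel returns
// immediately, so later callers never block.
type svc struct {
    started chan struct{}
    stop    chan struct{}
    events  chan string
}

func (s *svc) Serve() {
    close(s.started) // we're now ready to receive events
    for {
        select {
        case ev := <-s.events:
            fmt.Println("event:", ev)
        case <-s.stop:
            return
        }
    }
}

func (s *svc) WaitForStart() { <-s.started }

func main() {
    s := &svc{
        started: make(chan struct{}),
        stop:    make(chan struct{}),
        events:  make(chan string),
    }
    go s.Serve()
    s.WaitForStart() // don't publish before the consumer is ready
    s.events <- "hello"
    time.Sleep(50 * time.Millisecond) // let the print happen (demo only)
    close(s.stop)
}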

View File

@@ -51,6 +51,8 @@ func main() {
}
resp.Body.Close()
names := make(map[string]string)
var langs []string
for code, stat := range stats {
code = strings.Replace(code, "_", "-", 1)
@@ -62,6 +64,7 @@ func main() {
}
langs = append(langs, code)
names[code] = languageName(code)
if code == "en" {
continue
}
@@ -85,6 +88,7 @@ func main() {
}
saveValidLangs(langs)
saveLanguageNames(names)
}
func saveValidLangs(langs []string) {
@@ -98,6 +102,16 @@ func saveValidLangs(langs []string) {
fd.Close()
}
func saveLanguageNames(names map[string]string) {
fd, err := os.Create("prettyprint.js")
if err != nil {
log.Fatal(err)
}
fmt.Fprint(fd, "var langPrettyprint = ")
json.NewEncoder(fd).Encode(names)
fd.Close()
}
func userPass() (string, string) {
user := os.Getenv("TRANSIFEX_USER")
pass := os.Getenv("TRANSIFEX_PASS")
@@ -131,7 +145,7 @@ func loadValidLangs() []string {
}
var langs []string
exp := regexp.MustCompile(`\[([a-zA-Z@",-]+)\]`)
if matches := exp.FindSubmatch(bs); len(matches) == 2 {
langs = strings.Split(string(matches[1]), ",")
for i := range langs {
@@ -142,3 +156,19 @@ func loadValidLangs() []string {
return langs
}
type languageResponse struct {
Code string
Name string
}
func languageName(code string) string {
var lang languageResponse
resp := req("https://www.transifex.com/api/2/language/" + code)
defer resp.Body.Close()
json.NewDecoder(resp.Body).Decode(&lang)
if lang.Name == "" {
return code
}
return lang.Name
}
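languageName falls back to the raw code whenever the lookup yields nothing. A standalone sketch using plain net/http in place of the tool's req() helper (which also attaches the Transifex credentials from userPass; omitted here, so an unauthenticated run simply prints the code back):

package main

import (
    "encoding/json"
    "fmt"
    "net/http"
)

type languageResponse struct {
    Code string
    Name string
}

// languageName mirrors the lookup above with plain http.Get; decode errors
// are ignored so any failure falls through to the raw-code fallback.
func languageName(code string) string {
    resp, err := http.Get("https://www.transifex.com/api/2/language/" + code)
    if err != nil {
        return code
    }
    defer resp.Body.Close()
    var lang languageResponse
    json.NewDecoder(resp.Body).Decode(&lang)
    if lang.Name == "" {
        return code
    }
    return lang.Name
}

func main() {
    fmt.Println(languageName("ca@valencia"))
}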

View File

@@ -1,27 +1,8 @@
# Systemd Configuration

This directory contains configuration files for running syncthing under the
"systemd" service manager on Linux, either as a systemd system service or a
systemd user service. For further documentation take a look at the [systemd
section][1] in the Syncthing documentation.

[1]: http://docs.syncthing.net/users/autostart.html#systemd

View File

@@ -1,6 +1,6 @@
[Unit]
Description=Syncthing - Open Source Continuous File Synchronization for %I
Documentation=http://docs.syncthing.net/
After=network.target
[Service]

View File

@@ -1,6 +1,6 @@
[Unit]
Description=Syncthing - Open Source Continuous File Synchronization
Documentation=http://docs.syncthing.net/
After=network.target
[Service]

View File

@@ -4,9 +4,8 @@ background under Mac OS X.
1. Install the `syncthing` binary in a directory called `bin` in your
home directory.
2. Edit the `syncthing.plist` by replacing `USERNAME` with your actual
   username, such as `jb`.
3. Copy the `syncthing.plist` file to `~/Library/LaunchAgents`.
@@ -15,3 +14,6 @@ background under Mac OS X.
You probably want to turn off "Start Browser" among the settings to
avoid it opening a browser window on each login.
Logs are in `~/Library/Logs/Syncthing.log` and, for crashes and exceptions,
`~/Library/Logs/Syncthing-Error.log`.

View File

@@ -1,5 +1,11 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<!--
Make sure the "syncthing" executable is located in ~/bin/syncthing.
Replace the string "USERNAME" in this file with your username, such as "jb".
Copy this file to ~/Library/LaunchAgents/syncthing.plist.
Execute "launchctl load ~/Library/LaunchAgents/syncthing.plist".
-->
<plist version="1.0">
<dict>
<key>Label</key>
@@ -7,13 +13,13 @@
<key>ProgramArguments</key>
<array>
<string>/Users/USERNAME/bin/syncthing</string>
</array>
<key>EnvironmentVariables</key>
<dict>
<key>HOME</key>
<string>/Users/USERNAME</string>
<key>STNORESTART</key>
<string>1</string>
</dict>
@@ -26,5 +32,11 @@
<key>ProcessType</key>
<string>Background</string>
<key>StandardOutPath</key>
<string>/Users/USERNAME/Library/Logs/Syncthing.log</string>
<key>StandardErrorPath</key>
<string>/Users/USERNAME/Library/Logs/Syncthing-Errors.log</string>
</dict>
</plist>

View File

@@ -68,7 +68,7 @@ identicon {
}
.panel-heading .glyphicon {
margin-right: 10px;
}
.panel-heading {
@@ -153,27 +153,33 @@ table.table-condensed td {
padding-right: 15px;
}
/**
* Menu for select language
*/
@media (min-width:480px) and (max-width:649px) {
*[language-select] > .dropdown-menu {
width: 230px;
}
}
@media (min-width:650px) {
*[language-select] > .dropdown-menu > li {
width: 50%;
float: left;
}
*[language-select] > .dropdown-menu {
width: 440px;
}
}
@media (max-width:479px) {
.dropdown-menu {
padding-top: 55px;
}
nav .dropdown-toggle {
font-size: 14px;
}
@@ -239,7 +245,7 @@ ul.three-columns li, ul.two-columns li {
/** Footer nav on small devices **/
@media (max-width: 991px) {
body {
padding-bottom: 0;
}

View File

Binary image file changed, not shown (3.7 KiB before, 3.6 KiB after).

View File

@@ -1,172 +0,0 @@
{
"API Key": "Ключ API",
"About": "Аб праграме",
"Add": "Дадаць",
"Add Device": "Дадаць прыладу",
"Add Folder": "Дадаць каталёг",
"Add new folder?": "Дадаць новы каталёг ?",
"Address": "Адрас",
"Addresses": "Адрасы",
"All Data": "All Data",
"Allow Anonymous Usage Reporting?": "Allow Anonymous Usage Reporting?",
"An external command handles the versioning. It has to remove the file from the synced folder.": "An external command handles the versioning. It has to remove the file from the synced folder.",
"Anonymous Usage Reporting": "Anonymous Usage Reporting",
"Any devices configured on an introducer device will be added to this device as well.": "Any devices configured on an introducer device will be added to this device as well.",
"Automatic upgrades": "Automatic upgrades",
"Bugs": "Памылкі",
"CPU Utilization": "Выкарыстаньне працэсара",
"Changelog": "Сьпіс зьменаў",
"Close": "Зачыніць",
"Command": "Command",
"Comment, when used at the start of a line": "Comment, when used at the start of a line",
"Compression": "Compression",
"Connection Error": "Connection Error",
"Copied from elsewhere": "Copied from elsewhere",
"Copied from original": "Copied from original",
"Copyright © 2015 the following Contributors:": "Copyright © 2015 the following Contributors:",
"Delete": "Выдаліць",
"Device ID": "ID прылады",
"Device Identification": "Ідэнтыфікацыя прылады",
"Device Name": "Назва прылады",
"Device {%device%} ({%address%}) wants to connect. Add new device?": "Device {{device}} ({{address}}) wants to connect. Add new device?",
"Devices": "Прылады",
"Disconnected": "Адлучана",
"Documentation": "Дакумэнтацыя",
"Download Rate": "Хуткасьць спампоўваньня",
"Downloaded": "Downloaded",
"Downloading": "Downloading",
"Edit": "Зьмяніць",
"Edit Device": "Зьмяніць прыладу",
"Edit Folder": "Зьмяніць каталёг",
"Editing": "Editing",
"Enable UPnP": "Enable UPnP",
"Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.": "Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.",
"Enter ignore patterns, one per line.": "Enter ignore patterns, one per line.",
"Error": "Error",
"External File Versioning": "External File Versioning",
"File Versioning": "Вэрсіі файлаў",
"File permission bits are ignored when looking for changes. Use on FAT file systems.": "File permission bits are ignored when looking for changes. Use on FAT file systems.",
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.": "Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.",
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.",
"Folder ID": "ID каталёгу",
"Folder Master": "Folder Master",
"Folder Path": "Шлях каталёгу",
"Folders": "Каталёгі",
"GUI Authentication Password": "GUI Authentication Password",
"GUI Authentication User": "GUI Authentication User",
"GUI Listen Addresses": "GUI Listen Addresses",
"Generate": "Сгенераваць",
"Global Discovery": "Глябальнае вызначэньне",
"Global Discovery Server": "Сэрвер глябальнага вызначэньня",
"Global State": "Глябальны стан",
"Ignore": "Ignore",
"Ignore Patterns": "Ігнараваць шаблёны",
"Ignore Permissions": "Ігнараваць правы",
"Incoming Rate Limit (KiB/s)": "Incoming Rate Limit (KiB/s)",
"Introducer": "Introducer",
"Inversion of the given condition (i.e. do not exclude)": "Inversion of the given condition (i.e. do not exclude)",
"Keep Versions": "Трымаць вэрсій",
"Last File Received": "Апошні атрыманы файл",
"Last seen": "Апошні раз бачылі",
"Later": "Later",
"Local Discovery": "Лякальнае вызначэньне",
"Local State": "Лякальны стан",
"Maximum Age": "Maximum Age",
"Metadata Only": "Metadata Only",
"Move to top of queue": "Move to top of queue",
"Multi level wildcard (matches multiple directory levels)": "Multi level wildcard (matches multiple directory levels)",
"Never": "Ніколі",
"New Device": "New Device",
"New Folder": "New Folder",
"No": "Не",
"No File Versioning": "Не захоўваць вэрсіі",
"Notice": "Notice",
"OK": "Добра",
"Off": "Off",
"Out Of Sync": "Несынхранізавана",
"Out of Sync Items": "Несынхранізаваныя складнікі",
"Outgoing Rate Limit (KiB/s)": "Outgoing Rate Limit (KiB/s)",
"Override Changes": "Override Changes",
"Path to the folder on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Path to the folder on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for",
"Path where versions should be stored (leave empty for the default .stversions folder in the folder).": "Path where versions should be stored (leave empty for the default .stversions folder in the folder).",
"Please wait": "Please wait",
"Preview": "Preview",
"Preview Usage Report": "Preview Usage Report",
"Quick guide to supported patterns": "Quick guide to supported patterns",
"RAM Utilization": "Выкарыстаньне памяці",
"Rescan": "Перачытаць",
"Rescan All": "Rescan All",
"Rescan Interval": "Інтэрвал перачытваньня",
"Restart": "Перастартаваць",
"Restart Needed": "Патрэбна перастартоўваньне",
"Restarting": "Перастартоўваньне",
"Reused": "Reused",
"Save": "Захаваць",
"Scanning": "Скануецца",
"Select the devices to share this folder with.": "Пазначце прылады зь якімі трэба абагуліць гэты каталёг.",
"Select the folders to share with this device.": "Пазначце каталёгі якія трэба абагуліць з гэтай прыладай.",
"Settings": "Налады",
"Share": "Абагуліць",
"Share Folder": "Share Folder",
"Share Folders With Device": "Share Folders With Device",
"Share With Devices": "Абагуліць з прыладамі",
"Share this folder?": "Абагуліць гэты каталёг ?",
"Shared With": "Абагульны з",
"Short identifier for the folder. Must be the same on all cluster devices.": "Short identifier for the folder. Must be the same on all cluster devices.",
"Show ID": "Паказаць ID",
"Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.": "Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.",
"Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.": "Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.",
"Shutdown": "Выключыць",
"Shutdown Complete": "Выключэньне завершанае",
"Simple File Versioning": "Простае захоўваньне вэрсій",
"Single level wildcard (matches within a directory only)": "Single level wildcard (matches within a directory only)",
"Source Code": "Зыходнікі",
"Staggered File Versioning": "Адаптыўнае захоўваньне вэрсій",
"Start Browser": "Start Browser",
"Stopped": "Спынена",
"Support": "Падтрымка",
"Sync Protocol Listen Addresses": "Sync Protocol Listen Addresses",
"Syncing": "Сынхранізуецца",
"Syncthing has been shut down.": "Syncthing has been shut down.",
"Syncthing includes the following software or portions thereof:": "Syncthing includes the following software or portions thereof:",
"Syncthing is restarting.": "Syncthing перастартоўвае.",
"Syncthing is upgrading.": "Syncthing is upgrading.",
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…",
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.",
"The aggregated statistics are publicly available at {%url%}.": "The aggregated statistics are publicly available at {{url}}.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.",
"The device ID cannot be blank.": "The device ID cannot be blank.",
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).",
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.",
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.",
"The first command line parameter is the folder path and the second parameter is the relative path in the folder.": "The first command line parameter is the folder path and the second parameter is the relative path in the folder.",
"The folder ID cannot be blank.": "The folder ID cannot be blank.",
"The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.": "The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.",
"The folder ID must be unique.": "The folder ID must be unique.",
"The folder path cannot be blank.": "The folder path cannot be blank.",
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.",
"The maximum age must be a number and cannot be blank.": "The maximum age must be a number and cannot be blank.",
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "The maximum time to keep a version (in days, set to 0 to keep versions forever).",
"The number of old versions to keep, per file.": "Колькі старых вэрсій трымаць, для кожнага файлу.",
"The number of versions must be a number and cannot be blank.": "The number of versions must be a number and cannot be blank.",
"The path cannot be blank.": "The path cannot be blank.",
"The rescan interval must be a non-negative number of seconds.": "The rescan interval must be a non-negative number of seconds.",
"Unknown": "Невядома",
"Unshared": "Unshared",
"Unused": "Unused",
"Up to Date": "Найноўшае",
"Upgrade To {%version%}": "Upgrade To {{version}}",
"Upgrading": "Абнаўленьне",
"Upload Rate": "Хуткасьць запампоўваньня",
"Use HTTPS for GUI": "Use HTTPS for GUI",
"Version": "Вэрсія",
"Versions Path": "Versions Path",
"Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.": "Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.",
"When adding a new device, keep in mind that this device must be added on the other side too.": "When adding a new device, keep in mind that this device must be added on the other side too.",
"When adding a new folder, keep in mind that the Folder ID is used to tie folders together between devices. They are case sensitive and must match exactly between all devices.": "When adding a new folder, keep in mind that the Folder ID is used to tie folders together between devices. They are case sensitive and must match exactly between all devices.",
"Yes": "Так",
"You must keep at least one version.": "You must keep at least one version.",
"full documentation": "full documentation",
"items": "складнікаў",
"{%device%} wants to share folder \"{%folder%}\".": "{{device}} wants to share folder \"{{folder}}\"."
}

View File

@@ -1,21 +1,29 @@
{
"A negative number of days doesn't make sense.": "Няма логика на зададен отрицателен брой дни.",
"A new major version may not be compatible with previous versions.": "Нова основна версия, която може да не е съвмеситима с предишни версии.",
"API Key": "API Ключ",
"About": "За Програмата",
"Actions": "Действия",
"Add": "Добави",
"Add Device": "Добави устройство",
"Add Folder": "Добави папка",
"Add new folder?": "Добави нова папка?",
"Address": "Адрес",
"Addresses": "Адреси",
"Advanced": "Допълнителни",
"Advanced Configuration": "Допълнителни настройки",
"All Data": "Всички данни",
"Allow Anonymous Usage Reporting?": "Разреши анонимен доклад за ползване на програмата?",
"Alphabetic": "Азбучен ред",
"An external command handles the versioning. It has to remove the file from the synced folder.": "Друга команда се занимава с версиите. Тази команда трябва да премахни файла от синхронизираната папка.",
"Anonymous Usage Reporting": "Анонимен Доклад",
"Any devices configured on an introducer device will be added to this device as well.": "Устройства настроени на introducer компютъра също ще бъдат добавени към този компютър.",
"Automatic upgrades": "Автоматични ъпдейти",
"Be careful!": "Внимавай!",
"Bugs": "Бъгове",
"CPU Utilization": "Натоварване на Процесора",
"Changelog": "Сипъск с промени",
"Clean out after": "Изчисти след",
"Close": "Затвори",
"Command": "Команда",
"Comment, when used at the start of a line": "Коментар, използван в началото на реда",
@@ -25,6 +33,7 @@
"Copied from original": "Копиран от оригинала",
"Copyright © 2015 the following Contributors:": "Правата запазени © 2015 Сътрудници:",
"Delete": "Изтрий",
"Deleted": "Изтрито",
"Device ID": "Идентификатор на устройство",
"Device Identification": "Идентификация на устройство",
"Device Name": "Име на устройство",
@@ -44,14 +53,19 @@
"Enter ignore patterns, one per line.": "Добави шаблони за игнориране, по един на ред.",
"Error": "Грешка",
"External File Versioning": "Външно упраление на версиите",
"Failed Items": "Неуспешни",
"File Pull Order": "По ред на дърпане",
"File Versioning": "Файлови Версии",
"File permission bits are ignored when looking for changes. Use on FAT file systems.": "Битовете за права за достъп са игнорирани, когато се проверява за промени. Използвай с файлови системи тип FAT.",
"Files are moved to .stversions folder when replaced or deleted by Syncthing.": "Файловете биват преместени в .stversions папка, когато са заменен или изтрити от Syncthing.",
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.": "Когато syncthing замени или изтрие файл той се премества в .stversions и преименува с дабавени дата и час.",
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Файловете са защитени от промени направени на други устройства, но промени направени на това устройство ще бъдат синхронизирани с другите устройства.",
"Folder": "Папка",
"Folder ID": "Идентификатор на папка",
"Folder Master": "Главна папка",
"Folder Path": "Път до папката",
"Folders": "Папки",
"GUI": "Потребителски интефейс",
"GUI Authentication Password": "Парола за Потребителския Интерфейс",
"GUI Authentication User": "Потребител за Потребителския Интерфейс",
"GUI Listen Addresses": "Адрес за Свързване с Потребителския Интерфейс",
@@ -59,41 +73,55 @@
"Global Discovery": "Глобавно Откриване",
"Global Discovery Server": "Сървър за Глобално Откриване",
"Global State": "Глобално състояние",
"Help": "Помощ",
"Home page": "Начална страница",
"Ignore": "Игнорирай",
"Ignore Patterns": "Шаблони за Игнориране",
"Ignore Permissions": "Игнорирай Права за Достъп",
"Incoming Rate Limit (KiB/s)": "Входящ Лимит на Скоростта (KiB/s)",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "Неправилни настройки могат да повредят съдържанието на папката и да попречат на по-нататъшно синхронизиране.",
"Introducer": "Introducer",
"Inversion of the given condition (i.e. do not exclude)": "Обратното на даденото условие (пр. не изключвай)",
"Keep Versions": "Пази Версии",
"Largest First": "Най-големите първо",
"Last File Received": "Последния получен файл",
"Last seen": "Последно видян",
"Later": "По-късно",
"Local Discovery": "Локално Откриване",
"Local State": "Локално състояние",
"Local State (Total)": "Локално Състояние (Общо)",
"Major Upgrade": "Основно Обновяване",
"Maximum Age": "Максимална Възраст",
"Metadata Only": "Само мета информация",
"Minimum Free Disk Space": "Минимално свободно дисково пространство",
"Move to top of queue": "Премести в началото на опашката",
"Multi level wildcard (matches multiple directory levels)": "Маска на много нива (покрива папки с много нива)",
"Never": "Никога",
"New Device": "Ново устройство",
"New Folder": "Нова папка",
"Newest First": "Най-новите първо",
"No": "Не",
"No File Versioning": "Няма Файлови Версии",
"Notice": "Известие",
"OK": "ОК",
"Off": "Изключено",
"Out Of Sync": "Не Синхронизиран",
"Oldest First": "Най-старите първо",
"Options": "Настройки",
"Out of Sync": "Не синхронизиран",
"Out of Sync Items": "Несинхронизирани елементи",
"Outgoing Rate Limit (KiB/s)": "Лимит на Изходящата Скорост (KiB/s)",
"Override Changes": "Замени Промените",
"Path to the folder on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Пътят до папката на този компютър. Ще бъде създадена ако не съществува. Символът тилда (~) може да бъде използван като заместител на",
"Path where versions should be stored (leave empty for the default .stversions folder in the folder).": "Пътят, където версиите да бъдат складирани(остави празно за папката .stversions).",
"Please consult the release notes before performing a major upgrade.": "Моля прочети бележките по обновяването преди да започнеш.",
"Please wait": "Моля изчакай",
"Preview": "Преглед",
"Preview Usage Report": "Разгледай Доклада за Използване",
"Quick guide to supported patterns": "Бърз наръчник към поддържаните шаблони",
"RAM Utilization": "RAM Натоварване",
"Random": "Произволно",
"Release Notes": "Бележки по обновяването",
"Remove": "Премахни",
"Rescan": "Повторно Сканиране",
"Rescan All": "Пълно повторно сканиране",
"Rescan Interval": "Интервал за Повторно Сканиране",
@@ -120,9 +148,11 @@
"Shutdown Complete": "Спирането завършено",
"Simple File Versioning": "Опростени Файлови Версии",
"Single level wildcard (matches within a directory only)": "Маска на едно ниво (покрива само в папка)",
"Smallest First": "Най-малките първо",
"Source Code": "Сорс Код",
"Staggered File Versioning": "Наслагващи се Файлови Версии",
"Start Browser": "Стартирай Браузъра",
"Statistics": "Статистика",
"Stopped": "Спряна",
"Support": "Помощ",
"Sync Protocol Listen Addresses": "Адрес за слушане на синхронизиращия протокол",
@@ -145,19 +175,29 @@
"The folder ID must be unique.": "Идентификаторът на папката тряба да бъде уникален.",
"The folder path cannot be blank.": "Пътят до папката не може да бъде празен.",
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "Използва се следния интервал: за първия час се пази версия всеки 30 секунди, за първия ден се пази версия всеки час, за първите 30 дена се пази версия всеки ден, до максимума се пази една версия всяка седмица.",
"The following items could not be synchronized.": "Следните не могат да бъдат синхронизирани.",
"The maximum age must be a number and cannot be blank.": "Максималната възраст трябва да е число и не може д ае празна.",
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "Максималното време да се пазят весрсии (в дни, сложи 0, за да пазиш версии завинаги).",
"The minimum free disk space percentage must be a non-negative number between 0 and 100 (inclusive).": "Минималното свободно дисково пространство в проценти трябва да е между 0 и 100 (включително).",
"The number of days must be a number and cannot be blank.": "Броят дни трябва да бъде число и неможе да бъде празно.",
"The number of days to keep files in the trash can. Zero means forever.": "Броят дни за запазване на файловете в кошчето. Нула значи завинаги.",
"The number of old versions to keep, per file.": "Броят стари версии, които да бъдат пазени за всеки файл.",
"The number of versions must be a number and cannot be blank.": "Броят версии трябва да бъде число и не може да бъде празно.",
"The path cannot be blank.": "Пътят неможе да бъде празен.",
"The rescan interval must be a non-negative number of seconds.": "Интервала на сканиране трябва да бъде не отрицателно число в секунди.",
"They are retried automatically and will be synced when the error is resolved.": "Ще бъдат спрени и автоматично синхронизирани, когато грешката бъде оправена.",
"This is a major version upgrade.": "Това е нова основна версия.",
"Trash Can File Versioning": "Версии на файлове в кошчето",
"Unknown": "Неясен",
"Unshared": "Споделянето прекратено",
"Unused": "Неизползван",
"Up to Date": "Актуален",
"Updated": "Обновено",
"Upgrade": "Обнови",
"Upgrade To {%version%}": "Обновен До {{version}}",
"Upgrading": "Обновяване",
"Upload Rate": "Скорост на Качване",
"Uptime": "Работи вече",
"Use HTTPS for GUI": "Използвай HTTPS за Потребителския Интерфейс",
"Version": "Версия",
"Versions Path": "Път до Версиите",

View File

@@ -1,30 +1,39 @@
{
"A negative number of days doesn't make sense.": "A negative number of days doesn't make sense.",
"A new major version may not be compatible with previous versions.": "Una nova versió major pot ser incompatible amb versions anteriors.",
"API Key": "Clau API",
"About": "Sobre",
"Actions": "Accions",
"Add": "Afegir",
"Add Device": "Afegir dispositiu",
"Add Folder": "Afegir carpeta",
"Add new folder?": "Afegir nova carpeta?",
"Address": "Adreça",
"Addresses": "Adreces",
"All Data": "All Data",
"Advanced": "Advanced",
"Advanced Configuration": "Advanced Configuration",
"All Data": "Totes les dades",
"Allow Anonymous Usage Reporting?": "Permetre l'enviament anònim d'informes d'ús?",
"An external command handles the versioning. It has to remove the file from the synced folder.": "An external command handles the versioning. It has to remove the file from the synced folder.",
"Alphabetic": "Alfabètic",
"An external command handles the versioning. It has to remove the file from the synced folder.": "Un comando extern s'encarrega del control de versions. Ha d'eliminar l'arxiu de la carpeta sincronitzada.",
"Anonymous Usage Reporting": "Informe anònim d'ús",
"Any devices configured on an introducer device will be added to this device as well.": "Qualsevol dispositiu configurat com a dispositiu introductor s'afegirà a aquest dispositiu tambè.",
"Any devices configured on an introducer device will be added to this device as well.": "Qualsevol dispositiu configurat en un dispositiu introductor també s'afegirà a aquest dispositiu.",
"Automatic upgrades": "Actualitzacions automàtiques",
"Be careful!": "Be careful!",
"Bugs": "Bugs",
"CPU Utilization": "Utilització del CPU",
"Changelog": "Historial de canvis",
"Clean out after": "Clean out after",
"Close": "Tancar",
"Command": "Command",
"Command": "Comando",
"Comment, when used at the start of a line": "Comentari quan és usat al principi d'una línia",
"Compression": "Compression",
"Compression": "Compressió",
"Connection Error": "Error de connexió",
"Copied from elsewhere": "Copiat d'un altre lloc",
"Copied from original": "Copiat de l'original",
"Copyright © 2015 the following Contributors:": "Copyright © 2015 the following Contributors:",
"Copyright © 2015 the following Contributors:": "Copyright © 2015 els següents col·laboradors:",
"Delete": "Esborrar",
"Deleted": "Deleted",
"Device ID": "ID del dispositiu",
"Device Identification": "Identificació del dispositiu",
"Device Name": "Nom del dispositiu",
@@ -41,17 +50,22 @@
"Editing": "Modificant",
"Enable UPnP": "Habilitat UPnP",
"Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.": "Introduir, separat per comes, adreces \"ip:port\" o \"dynamic\" per descobrir automàticament les adreces.",
"Enter ignore patterns, one per line.": "Introduïx els patrons d'ignoració, un per línia.",
"Enter ignore patterns, one per line.": "Introduex patrons a ignorar, un per línia.",
"Error": "Error",
"External File Versioning": "External File Versioning",
"External File Versioning": "Versionat de fitxers extern",
"Failed Items": "Failed Items",
"File Pull Order": "Ordre d'agafar fitxers",
"File Versioning": "Versionat de Fitxers",
"File permission bits are ignored when looking for changes. Use on FAT file systems.": "File permission bits are ignored when looking for changes. Use on FAT file systems.",
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.": "Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.",
"File permission bits are ignored when looking for changes. Use on FAT file systems.": "Els bits de permisos dels fitxers son ignorats quan es cerquen canvis. Utilitzar en sistemes de fitxers FAT.",
"Files are moved to .stversions folder when replaced or deleted by Syncthing.": "Files are moved to .stversions folder when replaced or deleted by Syncthing.",
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.": "Els fitxers es mouen amb l'estampat de la data a la carpeta .stversions quan son substituïts o esborrats per syncthing.",
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Els fitxers estan protegits de canvis fets per altres dispositius, però els canvis fets en aquest dispositiu seran enviats a la resta del cluster.",
"Folder": "Folder",
"Folder ID": "ID de carpeta",
"Folder Master": "Carpeta mestre",
"Folder Path": "Camí de carpeta",
"Folders": "Carpetes",
"GUI": "GUI",
"GUI Authentication Password": "Contrasenya d'autenticació GUI",
"GUI Authentication User": "Usuari d'autenticació GUI",
"GUI Listen Addresses": "Adreça d'escolta del GUI",
@@ -59,43 +73,57 @@
"Global Discovery": "Descobriment Global",
"Global Discovery Server": "Servidor de Descobriment Global",
"Global State": "Estat global",
"Help": "Ajuda",
"Home page": "Home page",
"Ignore": "Ignorar",
"Ignore Patterns": "Patrons d'ignoració",
"Ignore Permissions": "Ignora Permisos",
"Incoming Rate Limit (KiB/s)": "Tasca Límit d'Entrada (KiB/s)",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "Incorrect configuration may damage your folder contents and render Syncthing inoperable.",
"Introducer": "Introductor",
"Inversion of the given condition (i.e. do not exclude)": "Inversió del patrò introduït",
"Keep Versions": "Mantenir Versions",
"Largest First": "Més gran primer",
"Last File Received": "Últim fitxer rebut",
"Last seen": "Vist per última vegada",
"Later": "Després",
"Local Discovery": "Descobriment Local",
"Local State": "Estat local",
"Local State (Total)": "Local State (Total)",
"Major Upgrade": "Actualització major",
"Maximum Age": "Antiguitat Màxima",
"Metadata Only": "Metadata Only",
"Move to top of queue": "Move to top of queue",
"Metadata Only": "Només metadades",
"Minimum Free Disk Space": "Minimum Free Disk Space",
"Move to top of queue": "Moure al primer de la cua",
"Multi level wildcard (matches multiple directory levels)": "Caràcter comodí de nivell múltiple (aparella en carpetes de nivells múltiples)",
"Never": "Mai",
"New Device": "Nou dispositiu",
"New Folder": "Nova carpeta",
"Newest First": "Més nou primer",
"No": "No",
"No File Versioning": "Sense Versionat de Fitxer",
"Notice": "Avís",
"OK": "OK",
"Off": "Off",
"Out Of Sync": "Fora de la Sincronització",
"Off": "Desactivar",
"Oldest First": "Més antic primer",
"Options": "Options",
"Out of Sync": "Out of Sync",
"Out of Sync Items": "Arxius encara no sincronitzats",
"Outgoing Rate Limit (KiB/s)": "Tasca Límit de Sortida (KiB/s)",
"Override Changes": "Sobreescriure Canvis",
"Path to the folder on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Ruta de la carpeta a l'equip local. Si no existeix serà creada. El caràcter (~) es pot fer servir com a drecera de",
"Path where versions should be stored (leave empty for the default .stversions folder in the folder).": "Ruta on les versions s'haurien de guardar (deixa-ho buit per fer servir el directori .stversions per defecte a la carpeta)",
"Please consult the release notes before performing a major upgrade.": "Si us plau consulta les notes de llançament abans de realitzar una actualització major.",
"Please wait": "Si-us-plau espera",
"Preview": "Vista prèvia",
"Preview Usage Report": "Vista Prèvia de l'Informe d'Ús",
"Quick guide to supported patterns": "Guia ràpida per als possibles patrons",
"RAM Utilization": "Utilització de la RAM",
"Random": "Aleatori",
"Release Notes": "Notes de llançament",
"Remove": "Remove",
"Rescan": "Re-escanejar",
"Rescan All": "Rescan All",
"Rescan All": "Re-escanejar tot",
"Rescan Interval": "Interval de re-escaneig",
"Restart": "Reiniciar",
"Restart Needed": "És Necessari Reiniciar",
@@ -120,9 +148,11 @@
"Shutdown Complete": "Apagat complet",
"Simple File Versioning": "Versionat de Fitxers Senzill",
"Single level wildcard (matches within a directory only)": "Caràcter comodí de nivell singular (aparella sóls en una carpeta)",
"Smallest First": "Més petit primer",
"Source Code": "Codi Font",
"Staggered File Versioning": "Versionat de Fitxers Esglaonat",
"Start Browser": "Arrancar Navegador",
"Statistics": "Statistics",
"Stopped": "Aturat",
"Support": "Suport",
"Sync Protocol Listen Addresses": "Adreça d'escolta del Protocol Sync",
@@ -132,32 +162,42 @@
"Syncthing is restarting.": "Reiniciant syncthing.",
"Syncthing is upgrading.": "Actualitzant syncthing.",
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Synthing sembla parat, o hi ha algun problema amb la connexió a Internet. Reintentant...",
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.",
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Sembla ser que Syncthing està tinguent problemes per processar la teva petició. Si us plau, refresca la pàgina o reinicia Syncthing si el problema persisteix.",
"The aggregated statistics are publicly available at {%url%}.": "Les estadístiques agregades estan públicament disponibles a {{url}}.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "La configuració s'ha guardar però no s'ha activat. S'ha de reiniciar el synthing per activar la nova configuració.",
"The device ID cannot be blank.": "El ID del dispositiu no pot estar en blanc.",
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "El ID del dispositiu per introduir ací es pot trobar al diàleg \"Editar > Mostrar ID\" en l'altre dispositiu. Els espais i les barres son opcionals (s'ignoren).",
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "L'informe d'ús encriptat s'envia diàriament. Es fa servir per rastrejar plataformes habituals, mides de carpetes i versions de l'aplicació. Si es canvia el conjunt de dades reportades es demanarà amb aquest diàleg de nou.",
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "El ID del dispositiu introduït no sembla vàlid. Hauria de tenir 52 o 56 caràcters amb lletres i números, els espais i les barres son opcionals.",
"The first command line parameter is the folder path and the second parameter is the relative path in the folder.": "The first command line parameter is the folder path and the second parameter is the relative path in the folder.",
"The first command line parameter is the folder path and the second parameter is the relative path in the folder.": "El primer paràmetre de la línia de comandes és el camí a la carpeta i el segon paràmetre és el camí relatiu a la carpeta.",
"The folder ID cannot be blank.": "El ID del dispositiu no pot estar en blanc.",
"The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.": "El ID de la carpeta ha de ser un identificador curt (64 caràcters o menys) format només per lletres, nombres i el punt (.), barra (-) i barra baixa (_).",
"The folder ID must be unique.": "El ID de la carpeta ha de ser únic.",
"The folder path cannot be blank.": "El camí a la carpeta no pot estar en blanc.",
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "Es fan servir els següents intervals: per la primera hora es manté una versió cada 30 segons, pel primer dia es manté una versió cada hora, pel primer cada 30 dies es manté una versió cada dia, fins el màxim d'antiguitat es manté una versió cada setmana.",
"The following items could not be synchronized.": "The following items could not be synchronized.",
"The maximum age must be a number and cannot be blank.": "La màxima antiguitat ha de ser un número i no pot estar en blanc.",
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "Temps màxim en mantenir una versió (en dies, si es deixa en 0 es mantenen les versions per sempre).",
"The minimum free disk space percentage must be a non-negative number between 0 and 100 (inclusive).": "The minimum free disk space percentage must be a non-negative number between 0 and 100 (inclusive).",
"The number of days must be a number and cannot be blank.": "The number of days must be a number and cannot be blank.",
"The number of days to keep files in the trash can. Zero means forever.": "The number of days to keep files in the trash can. Zero means forever.",
"The number of old versions to keep, per file.": "El nombre de versions antigues que es mantenen per fitxer.",
"The number of versions must be a number and cannot be blank.": "El nombre de versions ha de ser un número i no es pot deixar en blanc.",
"The path cannot be blank.": "The path cannot be blank.",
"The path cannot be blank.": "El camí no pot estar en blanc.",
"The rescan interval must be a non-negative number of seconds.": "El interval de re-escaneig ha der ser un nombre positiu de segons.",
"They are retried automatically and will be synced when the error is resolved.": "They are retried automatically and will be synced when the error is resolved.",
"This is a major version upgrade.": "Aquesta és una actualització de versió major.",
"Trash Can File Versioning": "Trash Can File Versioning",
"Unknown": "Desconegut",
"Unshared": "No compartit",
"Unused": "No usat",
"Up to Date": "Actualitzat",
"Updated": "Updated",
"Upgrade": "Actualització",
"Upgrade To {%version%}": "Actualitzar a {{version}}",
"Upgrading": "Actualitzant",
"Upload Rate": "Tasca de Pujada",
"Uptime": "Temps funcionant",
"Use HTTPS for GUI": "Utilitzar HTTPS pel GUI",
"Version": "Versió",
"Versions Path": "Carpeta de les Versions",

View File

@@ -0,0 +1,212 @@
{
"A negative number of days doesn't make sense.": "Un nombre negatiu de dies no té sentit.",
"A new major version may not be compatible with previous versions.": "Una nova versión amb canvis importants pot no ser compatible amb versions prèvies.",
"API Key": "Clau API",
"About": "Sobre",
"Actions": "Accions",
"Add": "Afegir",
"Add Device": "Afegir dispositiu",
"Add Folder": "Afegir carpeta",
"Add new folder?": "Afegir nova carpeta?",
"Address": "Direcció",
"Addresses": "Direccions",
"Advanced": "Advanced",
"Advanced Configuration": "Advanced Configuration",
"All Data": "Totes les dades",
"Allow Anonymous Usage Reporting?": "Permetre informes d'ús anònim?",
"Alphabetic": "Alfabètic",
"An external command handles the versioning. It has to remove the file from the synced folder.": "Un comando extern controla el versionat. És necessari eliminar el fitxer de la carpeta sincronitzada.",
"Anonymous Usage Reporting": "Informe d'ús anònim",
"Any devices configured on an introducer device will be added to this device as well.": "Tots els dispositius configurats en un dispositiu presentador seràn afegits també a aquest dispositiu.",
"Automatic upgrades": "Actualitzacions automàtiques",
"Be careful!": "Be careful!",
"Bugs": "Errors (Bugs)",
"CPU Utilization": "Utilització de la CPU",
"Changelog": "Registre de canvis",
"Clean out after": "Netejar després de",
"Close": "Tancar",
"Command": "Comando",
"Comment, when used at the start of a line": "Comentar, quant s'utilitza al principi d'una línia",
"Compression": "Compresió",
"Connection Error": "Error de connexió",
"Copied from elsewhere": "Copiat de qualsevol lloc",
"Copied from original": "Copiat de l'original",
"Copyright © 2015 the following Contributors:": "Copyright © 2015 els següents Col·laboradors:",
"Delete": "Esborrar",
"Deleted": "Esborrat",
"Device ID": "ID del dispositiu",
"Device Identification": "Identificació del dispositiu",
"Device Name": "Nom del dispositiu",
"Device {%device%} ({%address%}) wants to connect. Add new device?": "El dispositiu {{device}} ({{address}}) vol connectar-se. Afegir nou dispositiu?",
"Devices": "Dispositius",
"Disconnected": "Desconnectat",
"Documentation": "Documentació",
"Download Rate": "Velocitat de descàrrega",
"Downloaded": "Descarregat",
"Downloading": "Descarregant",
"Edit": "Editar",
"Edit Device": "Editar dispositiu",
"Edit Folder": "Editar carpeta",
"Editing": "Editant",
"Enable UPnP": "Activar UPnp",
"Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.": "Introduïr direccions separades per coma com \"ip:port\" o \"dynamic\" per a fer un descobriment automàtic de la direcció.",
"Enter ignore patterns, one per line.": "Introduïr patrons a ignorar, un per línia.",
"Error": "Error",
"External File Versioning": "Versionat extern de fitxers",
"Failed Items": "Failed Items",
"File Pull Order": "Ordre de fitxers del pull",
"File Versioning": "Versionat de fitxer",
"File permission bits are ignored when looking for changes. Use on FAT file systems.": "Els bits de permís del fitxer són ignorats quant es busquen els canvis. Utilitzar en sistemes de fitxers FAT.",
"Files are moved to .stversions folder when replaced or deleted by Syncthing.": "Els arxius es menejen a la carpeta .stversions quant són substituïts o esborrats per Syncthing.",
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.": "Els fitxers són canviats a versions amb indicació de data en una carpeta \".stversions\" quant són reemplaçats o esborrats per Syncthing.",
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Els fitxers són protegits dels canvis fets en altres dispositius, però els canvis fets en aquest dispositiu seràn enviats a la resta del grup (cluster).",
"Folder": "Folder",
"Folder ID": "ID de carpeta",
"Folder Master": "Carpeta principal",
"Folder Path": "Ruta de la carpeta",
"Folders": "Carpetes",
"GUI": "GUI",
"GUI Authentication Password": "Password d'autenticació de l'Interfície Gràfica d'Usuari (GUI)",
"GUI Authentication User": "Autenticació de l'usuari de l'Interfície Gràfica d'Usuari (GUI)",
"GUI Listen Addresses": "Direcció d'escolta de l'Interfície Gràfica d'Usuari (GUI)",
"Generate": "Generar",
"Global Discovery": "Descobriment global",
"Global Discovery Server": "Servidor de descobriment global",
"Global State": "Estat global",
"Help": "Ajuda",
"Home page": "Home page",
"Ignore": "Ignorar",
"Ignore Patterns": "Patrons a ignorar",
"Ignore Permissions": "Permisos a ignorar",
"Incoming Rate Limit (KiB/s)": "Límit de descàrrega (KiB/s)",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "Incorrect configuration may damage your folder contents and render Syncthing inoperable.",
"Introducer": "Presentador",
"Inversion of the given condition (i.e. do not exclude)": "Inversió de la condició donada (per exemple no excloure)",
"Keep Versions": "Mantindre versions",
"Largest First": "El més gran primer",
"Last File Received": "Darrer fitxer rebut",
"Last seen": "Vist per última vegada",
"Later": "Més tard",
"Local Discovery": "Descobriment local",
"Local State": "Estat local",
"Local State (Total)": "Estat Local (Total)",
"Major Upgrade": "Actualització important",
"Maximum Age": "Edat màxima",
"Metadata Only": "Sols metadades",
"Minimum Free Disk Space": "Minimum Free Disk Space",
"Move to top of queue": "Moure al principi de la cua",
"Multi level wildcard (matches multiple directory levels)": "Comodí multinivell (coincideix amb múltiples nivells de directoris)",
"Never": "Mai",
"New Device": "Nou dispositiu",
"New Folder": "Nova carpeta",
"Newest First": "El més nou primer",
"No": "No",
"No File Versioning": "Sense versionat de fitxer",
"Notice": "Avís",
"OK": "OK",
"Off": "Off",
"Oldest First": "El més vell primer",
"Options": "Options",
"Out of Sync": "Out of Sync",
"Out of Sync Items": "Dispositius sense sincronitzar",
"Outgoing Rate Limit (KiB/s)": "Límit de pujada (KiB/s)",
"Override Changes": "Sobreescriure els canvis",
"Path to the folder on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Ruta a la carpeta local en l'ordinador. Es crearà si no existeix. El caràcter tilde (~) es pot utilitzar com a drecera",
"Path where versions should be stored (leave empty for the default .stversions folder in the folder).": "Ruta on les versions deurien estar emmagatzemades (deixar buit per a la carpeta .stversions en la carpeta).",
"Please consult the release notes before performing a major upgrade.": "Per favor, consultar les notes de la versió abans de fer una actualització important.",
"Please wait": "Per favor, espere",
"Preview": "Vista prèvia",
"Preview Usage Report": "Informe d'ús de vista prèvia",
"Quick guide to supported patterns": "Guía ràpida de patrons suportats",
"RAM Utilization": "Utilització de la RAM",
"Random": "Aleatori",
"Release Notes": "Notes de la versió",
"Remove": "Remove",
"Rescan": "Tornar a buscar",
"Rescan All": "Tornar a buscar tot",
"Rescan Interval": "Interval de nova busca",
"Restart": "Reiniciar",
"Restart Needed": "Reinici necesari",
"Restarting": "Reiniciant",
"Reused": "Reutilitzat",
"Save": "Gravar",
"Scanning": "Rastrejant",
"Select the devices to share this folder with.": "Selecciona els dispositius amb els que compartir aquesta carpeta.",
"Select the folders to share with this device.": "Selecciona les carpetes per a compartir amb aquest dispositiu.",
"Settings": "Ajustos",
"Share": "Compartir",
"Share Folder": "Compartir carpeta",
"Share Folders With Device": "Compartir carpetes amb el dispositiu",
"Share With Devices": "Compartir amb els dispositius",
"Share this folder?": "Compartir aquesta carpeta?",
"Shared With": "Compartit amb",
"Short identifier for the folder. Must be the same on all cluster devices.": "Identificador curt per a la carpeta. Deu ser el mateix en tots els dispositius del grup (cluster).",
"Show ID": "Mostrar ID",
"Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.": "Mostrat en lloc de l'ID del dispositiu en l'estat del grup (cluster). S'anunciarà als altres dispositius com el nom opcional per defecte.",
"Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.": "Mostrat en lloc de l'ID del dispositiu en l'estat del grup (cluster). S'actualitzarà al nom que el dispositiu anuncia si es deixa buit.",
"Shutdown": "Apagar",
"Shutdown Complete": "Apagar completament",
"Simple File Versioning": "Versionat de fitxers senzill",
"Single level wildcard (matches within a directory only)": "Comodí de nivell únic (coincideix sols dins d'un directori)",
"Smallest First": "El més xicotet primer",
"Source Code": "Codi font",
"Staggered File Versioning": "Versionat de fitxers escalonat",
"Start Browser": "Iniciar navegador",
"Statistics": "Statistics",
"Stopped": "Parat",
"Support": "Suport",
"Sync Protocol Listen Addresses": "Direccions d'escolta del protocol de sincronització",
"Syncing": "Sincronitzant",
"Syncthing has been shut down.": "Syncthing s'ha apagat",
"Syncthing includes the following software or portions thereof:": "Syncthing inclou el següent software o parts d'ell:",
"Syncthing is restarting.": "Syncthing està reiniciant.",
"Syncthing is upgrading.": "Syncthing està actualitzant-se.",
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing pareix apagat o hi ha un problema amb la connexió a Internet. Tornant a intentar...",
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing pareix que té un problema processant la seua sol·licitud. Per favor, refresque la pàgina o reinicie Syncthing si el problema persistix.",
"The aggregated statistics are publicly available at {%url%}.": "Les estadístiques agregades estan disponibles públicament en {{url}}.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "La configuració ha sigut gravada però no activada. Syncthing deu reiniciar per tal d'activar la nova configuració.",
"The device ID cannot be blank.": "L'ID del dispositiu no pot estar buida.",
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "L'ID del dispositiu que hi ha que introduïr ací es pot trobar en el menú \"Editar > Mostrar ID\" en l'altre dispositiu. Els espais i les barres son opcionals (ignorats).",
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "L'informe encriptat d'ús s'envia diariament. S'utilitza per a rastrejar plataformes comuns, tamanys de carpetes i versions de l'aplicació. Si el conjunt de dades enviat a l'informe es canvia, se li demanarà a vosté l'autorització altra vegada.\n",
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "L'ID del dispositiu introduïda no pareix vàlida. Deuria ser una cadena de 52 o 56 caracters consistents en lletres i nombre, amb espais i barres opcionals.",
"The first command line parameter is the folder path and the second parameter is the relative path in the folder.": "El primer paràmetre de la línia de comandos és la ruta de la carpeta i el segon és la ruta relativa en la carpeta.",
"The folder ID cannot be blank.": "L'ID de la carpeta no pot estar buit.",
"The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.": "L'ID de la carpeta deu ser un identificador curt (64 caracters o menys) consistint en lletres, nombres i únicament els caracters del punt (.), guió (-) i guió baix (_).",
"The folder ID must be unique.": "L'ID de la carpeta deu ser única.",
"The folder path cannot be blank.": "La ruta de la carpeta no pot estar buida.",
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "S'utilitzen els següents intervals: per a la primera hora es guarda una versió cada 30 segons, per al primer dia es guarda una versió cada hora, per als primers 30 dies es guarda una versió diaria, fins l'edat màxima es guarda una versió cada setmana.",
"The following items could not be synchronized.": "The following items could not be synchronized.",
"The maximum age must be a number and cannot be blank.": "L'edat màxima deu ser un nombre i no pot estar buida.",
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "El temps màxim per a guardar una versió (en dies, ficar 0 per a guardar les versions per a sempre).",
"The minimum free disk space percentage must be a non-negative number between 0 and 100 (inclusive).": "The minimum free disk space percentage must be a non-negative number between 0 and 100 (inclusive).",
"The number of days must be a number and cannot be blank.": "El nombre de dies deu ser un nombre i no pot estar en blanc.",
"The number of days to keep files in the trash can. Zero means forever.": "El nombre de dies per a mantindre els arxius a la paperera. Cero vol dir \"per a sempre\".",
"The number of old versions to keep, per file.": "El nombre de versions antigues per a guardar, per cada fitxer.",
"The number of versions must be a number and cannot be blank.": "El nombre de versions deu ser un nombre i no pot estar buit.",
"The path cannot be blank.": "La ruta no pot estar buida.",
"The rescan interval must be a non-negative number of seconds.": "L'interval de reescaneig deu ser un nombre positiu de segons.",
"They are retried automatically and will be synced when the error is resolved.": "They are retried automatically and will be synced when the error is resolved.",
"This is a major version upgrade.": "Aquesta és una actualització important de la versió.",
"Trash Can File Versioning": "Versionat d'arxius de la paperera",
"Unknown": "Desconegut",
"Unshared": "No compartit",
"Unused": "No utilitzat",
"Up to Date": "Actualitzat",
"Updated": "Actualitzat",
"Upgrade": "Actualitzar",
"Upgrade To {%version%}": "Actualitzar a {{version}}",
"Upgrading": "Actualitzant",
"Upload Rate": "Velocitat d'actualització",
"Uptime": "Temps de funcionament",
"Use HTTPS for GUI": "Utilitzar HTTP per a l'Interfície Gràfica d'Usuari (GUI)",
"Version": "Versió",
"Versions Path": "Ruta de les versions",
"Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.": "Les versions s'esborren automàticament si són més antigues que l'edat màxima o excedixen el nombre de fitxer permesos en un interval.",
"When adding a new device, keep in mind that this device must be added on the other side too.": "Quant s'afig un nou dispositiu, hi ha que tindre en compte que aquest dispositiu deu ser afegit també en l'altre costat.",
"When adding a new folder, keep in mind that the Folder ID is used to tie folders together between devices. They are case sensitive and must match exactly between all devices.": "Quant s'afig una nova carpeta, hi ha que tindre en compte que l'ID de la carpeta s'utilitza per a juntar les carpetes entre dispositius. Són sensibles a les majúscules i deuen coincidir exactament entre tots els dispositius.",
"Yes": "Sí",
"You must keep at least one version.": "Es deu mantindre al menys una versió.",
"full documentation": "Documentació completa",
"items": "Elements",
"{%device%} wants to share folder \"{%folder%}\".": "{{device}} vol compartit la carpeta \"{{folder}}\"."
}
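
Note on the placeholder convention visible in the strings above: the English source keys use {%name%} markers, while the translated values use {{name}}, and the GUI substitutes the variables at display time. The following Go sketch is purely illustrative of that substitution and is not code from this changeset (Syncthing's actual GUI performs it in JavaScript/AngularJS); the helper and example values are hypothetical.

    package main

    import (
        "fmt"
        "strings"
    )

    // translate looks up the English source string in a loaded locale map and
    // substitutes {{name}} placeholders with the supplied values. Untranslated
    // entries fall back to the English key, normalising {%name%} to {{name}}.
    func translate(locale map[string]string, msg string, vars map[string]string) string {
        out, ok := locale[msg]
        if !ok {
            out = strings.NewReplacer("{%", "{{", "%}", "}}").Replace(msg)
        }
        for name, val := range vars {
            out = strings.ReplaceAll(out, "{{"+name+"}}", val)
        }
        return out
    }

    func main() {
        locale := map[string]string{
            `{%device%} wants to share folder "{%folder%}".`: `{{device}} vol compartir la carpeta "{{folder}}".`,
        }
        fmt.Println(translate(locale, `{%device%} wants to share folder "{%folder%}".`, map[string]string{
            "device": "portàtil",
            "folder": "fotos",
        })) // portàtil vol compartir la carpeta "fotos".
    }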

Some files were not shown because too many files have changed in this diff.