Compare commits

...

36 Commits

Author SHA1 Message Date
Jakob Borg
ea5808d833 Only check specified paths in check-authors.go 2016-03-06 11:59:03 +01:00
Jakob Borg
8b045a826a Update docs & translations 2016-03-06 11:55:53 +01:00
Laurent Arnoud
1a6e078510 Add priority,section and homepage to debian/control 2016-03-06 11:54:43 +01:00
Laurent Arnoud
daff8010cd Fix description-contains-tabs and improve description 2016-03-06 11:54:39 +01:00
Audrius Butkevicius
4035930e0e Add kralo 2016-03-06 11:54:27 +01:00
Max Schulze
fc63a384b2 gui: add a lock icon to the folder title for easy overview (fixes #2703)
(to indicate it is a master directory)
2016-03-06 11:54:20 +01:00
Max Schulze
2f3449b651 gui: add html tooltips (title) to the folder path and syncthing version elements (fixes #2758) 2016-03-06 11:54:15 +01:00
Stefan Tatschner
31c65a39e4 systemd: Add syncthing-resume.service
This systemd service restarts Syncthing after resume from suspend
by sending SIGHUP. By default Syncthing detects resume from sleep
on its own by looking for jumps in the system clock. Since systemd
knows exactly when the system resumes from sleep, let's trigger
the Syncthing restart from there. Doing this in systemd eliminates
some annoying delay, as the service is restarted immediately after
resume. Also, using the systemd dependency mechanism, syncthing-inotify
is restarted as well.

$ journalctl -e --identifier syncthing --identifier syncthing-inotify --identifier systemd
Feb 22 09:44:27 kronos systemd[1]: Reached target Sleep.
Feb 22 09:44:27 kronos systemd[1]: Starting Suspend...
Feb 22 09:44:33 kronos systemd[1]: Time has been changed
Feb 22 09:44:33 kronos systemd[963]: Time has been changed
Feb 22 09:44:33 kronos systemd[1]: Started Suspend.
Feb 22 09:44:33 kronos systemd[1]: sleep.target: Unit not needed anymore. Stopping.
Feb 22 09:44:33 kronos systemd[1]: Stopped target Sleep.
Feb 22 09:44:33 kronos systemd[1]: Reached target Suspend.
Feb 22 09:44:33 kronos systemd[1]: suspend.target: Unit is bound to inactive unit systemd-suspend.service. Stopping, too.
Feb 22 09:44:33 kronos systemd[1]: Stopped target Suspend.
Feb 22 09:44:33 kronos systemd[1]: Starting Restart Syncthing after resume...
Feb 22 09:44:33 kronos syncthing[2561]: [35K66] OK: Exiting
Feb 22 09:44:33 kronos systemd[1]: Started Restart Syncthing after resume.
Feb 22 09:44:34 kronos systemd[963]: syncthing.service: Service hold-off time over, scheduling restart.
Feb 22 09:44:34 kronos systemd[963]: Stopping Syncthing Inotify File Watcher...
Feb 22 09:44:34 kronos systemd[963]: Stopped Syncthing Inotify File Watcher.
Feb 22 09:44:34 kronos systemd[963]: Stopped Syncthing - Open Source Continuous File Synchronization.
Feb 22 09:44:34 kronos systemd[963]: Started Syncthing - Open Source Continuous File Synchronization.
Feb 22 09:44:34 kronos systemd[963]: Started Syncthing Inotify File Watcher.
Feb 22 09:44:34 kronos syncthing[2836]: [35K66] INFO: syncthing v0.12.19 "Beryllium Bedbug" (go1.5.3 linux-amd64) builduser@svetlemodry 2016-02-14 19:26:33 UTC

This system service has to be located in "/etc/systemd/system/syncthing-resume.service",
and for packages in "/usr/lib/systemd/system/syncthing-resume.service". It can be
enabled using "systemctl enable syncthing-resume.service".
2016-03-06 11:54:08 +01:00
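The unit file itself is not part of this changeset. As a rough sketch of what such a resume hook looks like (the exact ExecStart, description and target names in the shipped syncthing-resume.service may differ), it only needs a few lines:

[Unit]
Description=Restart Syncthing after resume
After=suspend.target

[Service]
Type=oneshot
# Send SIGHUP to any running syncthing; the leading '-' tells systemd to
# ignore a non-zero exit status when no matching process is found.
ExecStart=-/usr/bin/pkill -HUP -x syncthing

[Install]
WantedBy=suspend.target

After copying it to one of the paths given in the commit message above, "systemctl enable syncthing-resume.service" hooks it into the suspend target as described.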
Jakob Borg
7ba20928c3 Only test with -race on supported platforms (fixes #2765) 2016-03-06 11:53:54 +01:00
Jakob Borg
c482cbbe70 Docs & translations 2016-02-14 20:26:33 +01:00
Jakob Borg
4042a3e406 Merge branch 'master' into v0.12
* master:
  Report versioning usage in usage report
  Swap the corsMiddleware and the csrfMiddleware so that unauthenticated OPTIONS requests are processed first.
  Return "No such object in the index" when /rest/db/file gets called on something that doesn't exist
  Revert "Add .arcconfig to project root"
  Add .arcconfig to project root
2016-02-14 19:39:59 +01:00
Jakob Borg
392132dc3b Merge commit '6f2de31' into v0.12 (no changes taken)
* commit '6f2de31':
  Use v2 of XDR package (actual changes)
  Use v2 of XDR package (auto generated)
  Use v2 of XDR package (deps)
2016-02-14 19:39:34 +01:00
Jakob Borg
459a3dc58c Update docs & translations 2016-02-08 17:39:45 +01:00
Jakob Borg
194a8b0922 Merge commit 'a7a9d7d' into v0.12
* commit 'a7a9d7d':
  Return correct content type for /rest/events
  Rename RawAPIKey -> APIKey in GUIConfiguration
  Add -paths option to print config, key, database paths
  Clean up error handling a bit in protocol.readMessage
  Remove old reference to moved protocol
  Support multiple API keys (command-line and config) (fixes #2747)
2016-02-08 17:38:52 +01:00
Jakob Borg
8a8336ae08 Merge branch 'master' into v0.12
* master:
  Update docs & translations
  build.sh prerelease should rebuild author credits in about dialog
  Use dialer in relay checks (fixes #2732)
  Handle null case for invalid ng-model value (fixes #2392)
  Return status code 307 instead of 302 when redirecting from HTTP to HTTPS
  Benchmark for single database update
  Add a CORS handler to deal with preflight OPTIONS requests
  Add letiemble
  Update docs and translations
  Correct order of pkill(1) arguments in debian script (fixes #2728)
2016-01-31 10:39:10 +01:00
Jakob Borg
458e0b3b8b Merge branch 'master' into v0.12
* master:
  Update docs and translations
  Model.internalScanFolder: Don't ignore special .stfolder and .stignore files.
  Model.internalScanFolderSubs: Scan only requested subs.
  A couple of protocol tests
  Humanize serialization of version vectors (again)
  FetchLatestReleases: fix the error log message
  Add postinst script to restart after upgrade
  Templatize Debian files
  Don't require restart for usage reporting changes (fixes #2704)
  RLimit comment typo
2016-01-24 08:09:28 +01:00
Jakob Borg
9758dc6422 Update docs and translations 2016-01-24 08:08:08 +01:00
Jakob Borg
e4a9fb8a27 Merge master into v0.12 (no changes) 2016-01-17 22:13:34 +01:00
Jakob Borg
f0473fde17 Docs and translations update 2016-01-17 10:57:20 +01:00
Jakob Borg
e2980a5210 Don't crash on folder remove while pulling (fixes #2705) 2016-01-17 10:55:39 +01:00
Jakob Borg
764da14440 Improve API/GUI shutdown handling (fixes #2694)
This fixes both a race condition where we could assign s.stop from one
goroutine and then read it from another without locking, and handles the
fact that listener may be nil at shutdown if we've had a bad
CommitConfiguration call in the meantime.
2016-01-17 10:55:26 +01:00
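The race described in this commit message is the classic unsynchronized-field pattern in Go: one goroutine assigns a field while another reads it. A minimal illustrative sketch of the kind of fix involved, using hypothetical names (apiService and stop are stand-ins, not syncthing's actual types):

package main

import "sync"

// apiService stands in for the service mentioned in the commit above; stop
// is written by one goroutine and read by others, so access goes through mut.
type apiService struct {
	mut  sync.Mutex
	stop chan struct{}
}

func (s *apiService) Stop() {
	s.mut.Lock()
	defer s.mut.Unlock()
	if s.stop != nil {
		close(s.stop) // signal the serve loop to return
		s.stop = nil
	}
}

func (s *apiService) stopped() bool {
	s.mut.Lock()
	defer s.mut.Unlock()
	return s.stop == nil
}

func main() {
	s := &apiService{stop: make(chan struct{})}
	go s.Stop() // without mut this assignment would race with the read below
	_ = s.stopped()
}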
Audrius Butkevicius
7da6c627fe Handle race within the job queue (fixes #1263) 2016-01-17 10:55:19 +01:00
Jakob Borg
0a092b5b7f Codesign binaries in Mac OS X distribution packages 2016-01-17 10:54:42 +01:00
Jakob Borg
9f8af2327d Update docs & translations 2016-01-13 21:11:02 +01:00
kluppy
345e24142e Update 'Edit' menu to 'Action' menu (fixes #2662) 2016-01-13 21:09:17 +01:00
kluppy
0fdd03ddee Fix location of build translation scripts. 2016-01-13 21:08:34 +01:00
Jakob Borg
a0fa288cb6 Undo the hash algorithm additions; retain flag checks 2016-01-12 16:03:06 +01:00
Jakob Borg
70bac24832 Always run relaying when enabled (fixes #2665) 2016-01-12 16:02:59 +01:00
Jakob Borg
1df40fbdeb Mend protocol tests, for sure 2016-01-12 16:02:38 +01:00
Jakob Borg
91e9ffff85 Rebuild assets 2016-01-12 14:15:00 +01:00
Jakob Borg
853df14e2f Improve protocol tests, close handling 2016-01-12 10:08:50 +01:00
Jakob Borg
e17a772bb6 Don't leak sendIndexes on disconnect (fixes #2589)
Adds a Closed() method on protocol.Connection and clears up
wireformatConnection a little too.
2016-01-12 10:08:41 +01:00
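A Closed() accessor of the kind mentioned above is commonly implemented in Go as a channel that is closed exactly once when the connection goes away, so per-connection goroutines such as index senders can wait on it and exit instead of leaking. The sketch below shows that generic pattern only; it is not syncthing's actual protocol.Connection code:

package main

import (
	"fmt"
	"sync"
)

// conn is a hypothetical connection wrapper illustrating the Closed() pattern.
type conn struct {
	closed    chan struct{}
	closeOnce sync.Once
}

func newConn() *conn { return &conn{closed: make(chan struct{})} }

// Close may be called any number of times; the channel is closed exactly once.
func (c *conn) Close() { c.closeOnce.Do(func() { close(c.closed) }) }

// Closed returns a channel that is closed when the connection is torn down.
func (c *conn) Closed() <-chan struct{} { return c.closed }

func main() {
	c := newConn()
	done := make(chan struct{})
	go func() {
		// A sendIndexes-style loop would live here; Closed() unblocks it
		// on disconnect so the goroutine can return.
		<-c.Closed()
		fmt.Println("connection closed, sender exiting")
		close(done)
	}()
	c.Close()
	<-done
}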
Audrius Butkevicius
90e027d9a4 Silence the linter 2016-01-12 10:08:32 +01:00
alessandro.g89
fdc9a5d8b0 Add dark theme by alessandro.g89
Source: https://userstyles.org/styles/122502/syncthing-dark
2016-01-12 10:08:23 +01:00
Audrius Butkevicius
543891a0a0 Add support for themes (fixes #1925) 2016-01-12 10:08:04 +01:00
Jakob Borg
06921443fc Docs & translations update 2016-01-10 10:10:53 +01:00
86 changed files with 2085 additions and 2777 deletions

View File

@@ -57,6 +57,7 @@ Marc Pujol <kilburn@la3.org>
Marcin Dziadus <dziadus.marcin@gmail.com>
Mateusz Naściszewski <matin1111@wp.pl>
Matt Burke <mburke@amplify.com> <burkemw3@gmail.com>
Max Schulze <max.schulze@online.de> <kralo@users.noreply.github.com>
Michael Jephcote <rewt0r@gmx.com> <Rewt0r@users.noreply.github.com>
Michael Ploujnikov <ploujj@gmail.com>
Michael Tilli <pyfisch@gmail.com>

5
Godeps/Godeps.json generated
View File

@@ -11,18 +11,15 @@
},
{
"ImportPath": "github.com/calmh/du",
"Comment": "v1.0.0",
"Rev": "3c0690cca16228b97741327b1b6781397afbdb24"
},
{
"ImportPath": "github.com/calmh/luhn",
"Comment": "v1.0.0",
"Rev": "0c8388ff95fa92d4094011e5a04fc99dea3d1632"
},
{
"ImportPath": "github.com/calmh/xdr",
"Comment": "v2.0.0",
"Rev": "b6e0c321c9b5b28ba5ee21e828323e4b982c6976"
"Rev": "9eb3e1a622d9364deb39c831f7e5f164393d7e37"
},
{
"ImportPath": "github.com/golang/snappy",

View File

@@ -6,5 +6,7 @@ xdr
[![API Documentation](http://img.shields.io/badge/api-Godoc-blue.svg?style=flat)](http://godoc.org/github.com/calmh/xdr)
[![MIT License](http://img.shields.io/badge/license-MIT-blue.svg?style=flat)](http://opensource.org/licenses/MIT)
This is an XDR marshalling/unmarshalling library. It uses code generation and
not reflection.
This is an XDR encoding/decoding library. It uses code generation and
not reflection. It supports the IPDR bastardized XDR format when built
with `-tags ipdr`.
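The README above is terse, so here is a short usage sketch; it relies only on the Writer/Reader API visible in the vendored files further down this diff (NewWriter, WriteString, WriteUint32, Tot, Error, NewReader, ReadStringMax, ReadUint32), and building with -tags ipdr switches to the IPDR flavour mentioned above:

package main

import (
	"bytes"
	"fmt"

	"github.com/calmh/xdr"
)

func main() {
	var buf bytes.Buffer

	// Encode a string and a uint32; errors accumulate and are checked once.
	xw := xdr.NewWriter(&buf)
	xw.WriteString("hello")
	xw.WriteUint32(42)
	if err := xw.Error(); err != nil {
		panic(err)
	}
	fmt.Println("wrote", xw.Tot(), "bytes") // 4 (length) + 5 + 3 (pad) + 4 = 16

	// Decode in the same order.
	xr := xdr.NewReader(&buf)
	s := xr.ReadStringMax(64)
	n := xr.ReadUint32()
	if err := xr.Error(); err != nil {
		panic(err)
	}
	fmt.Println(s, n) // hello 42
}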

View File

@@ -36,73 +36,31 @@ type structInfo struct {
Fields []fieldInfo
}
func (i structInfo) SizeExpr() string {
var xdrSizes = map[string]int{
"int8": 4,
"uint8": 4,
"int16": 4,
"uint16": 4,
"int32": 4,
"uint32": 4,
"int64": 8,
"uint64": 8,
"int": 8,
"bool": 4,
}
var terms []string
nl := ""
for _, f := range i.Fields {
if size := xdrSizes[f.FieldType]; size > 0 {
if f.IsSlice {
terms = append(terms, nl+"4+len(o."+f.Name+")*"+strconv.Itoa(size))
} else {
terms = append(terms, strconv.Itoa(size))
}
} else {
switch f.FieldType {
case "string", "[]byte":
if f.IsSlice {
terms = append(terms, nl+"4+xdr.SizeOfSlice(o."+f.Name+")")
} else {
terms = append(terms, nl+"4+len(o."+f.Name+")+xdr.Padding(len(o."+f.Name+"))")
}
default:
if f.IsSlice {
terms = append(terms, nl+"4+xdr.SizeOfSlice(o."+f.Name+")")
} else {
terms = append(terms, nl+"o."+f.Name+".XDRSize()")
}
}
}
nl = "\n"
}
return strings.Join(terms, "+")
}
var headerData = `// ************************************************************
var headerTpl = template.Must(template.New("header").Parse(`// ************************************************************
// This file is automatically generated by genxdr. Do not edit.
// ************************************************************
package {{.Package}}
import (
"bytes"
"io"
"github.com/calmh/xdr"
)
`
`))
var encoderData = `
func (o {{.Name}}) XDRSize() int {
return {{.SizeExpr}}
var encodeTpl = template.Must(template.New("encoder").Parse(`
func (o {{.TypeName}}) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.EncodeXDRInto(xw)
}//+n
func (o {{.Name}}) MarshalXDR() ([]byte, error) {
buf:= make([]byte, o.XDRSize())
m := &xdr.Marshaller{Data: buf}
return buf, o.MarshalXDRInto(m)
func (o {{.TypeName}}) MarshalXDR() ([]byte, error) {
return o.AppendXDR(make([]byte, 0, 128))
}//+n
func (o {{.Name}}) MustMarshalXDR() []byte {
func (o {{.TypeName}}) MustMarshalXDR() []byte {
bs, err := o.MarshalXDR()
if err != nil {
panic(err)
@@ -110,155 +68,141 @@ func (o {{.Name}}) MustMarshalXDR() []byte {
return bs
}//+n
func (o {{.Name}}) MarshalXDRInto(m *xdr.Marshaller) error {
{{range $fi := .Fields}}
{{if $fi.IsSlice}}
{{template "marshalSlice" $fi}}
{{else}}
{{template "marshalValue" $fi}}
{{end}}
{{end}}
return m.Error
func (o {{.TypeName}}) AppendXDR(bs []byte) ([]byte, error) {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
_, err := o.EncodeXDRInto(xw)
return []byte(aw), err
}//+n
{{define "marshalValue"}}
{{if ne .Convert ""}}
m.Marshal{{.Encoder}}({{.Convert}}(o.{{.Name}}))
{{else if .IsBasic}}
{{if ge .Max 1}}
if l := len(o.{{.Name}}); l > {{.Max}} {
return xdr.ElementSizeExceeded("{{.Name}}", l, {{.Max}})
func (o {{.TypeName}}) EncodeXDRInto(xw *xdr.Writer) (int, error) {
{{range $fieldInfo := .Fields}}
{{if not $fieldInfo.IsSlice}}
{{if ne $fieldInfo.Convert ""}}
xw.Write{{$fieldInfo.Encoder}}({{$fieldInfo.Convert}}(o.{{$fieldInfo.Name}}))
{{else if $fieldInfo.IsBasic}}
{{if ge $fieldInfo.Max 1}}
if l := len(o.{{$fieldInfo.Name}}); l > {{$fieldInfo.Max}} {
return xw.Tot(), xdr.ElementSizeExceeded("{{$fieldInfo.Name}}", l, {{$fieldInfo.Max}})
}
{{end}}
xw.Write{{$fieldInfo.Encoder}}(o.{{$fieldInfo.Name}})
{{else}}
_, err := o.{{$fieldInfo.Name}}.EncodeXDRInto(xw)
if err != nil {
return xw.Tot(), err
}
{{end}}
{{else}}
{{if ge $fieldInfo.Max 1}}
if l := len(o.{{$fieldInfo.Name}}); l > {{$fieldInfo.Max}} {
return xw.Tot(), xdr.ElementSizeExceeded("{{$fieldInfo.Name}}", l, {{$fieldInfo.Max}})
}
{{end}}
xw.WriteUint32(uint32(len(o.{{$fieldInfo.Name}})))
for i := range o.{{$fieldInfo.Name}} {
{{if ne $fieldInfo.Convert ""}}
xw.Write{{$fieldInfo.Encoder}}({{$fieldInfo.Convert}}(o.{{$fieldInfo.Name}}[i]))
{{else if $fieldInfo.IsBasic}}
xw.Write{{$fieldInfo.Encoder}}(o.{{$fieldInfo.Name}}[i])
{{else}}
_, err := o.{{$fieldInfo.Name}}[i].EncodeXDRInto(xw)
if err != nil {
return xw.Tot(), err
}
{{end}}
}
{{end}}
m.Marshal{{.Encoder}}(o.{{.Name}})
{{else}}
if err := o.{{.Name}}.MarshalXDRInto(m); err != nil {
return err
}
{{end}}
{{end}}
{{define "marshalSlice"}}
{{if ge .Max 1}}
if l := len(o.{{.Name}}); l > {{.Max}} {
return xdr.ElementSizeExceeded("{{.Name}}", l, {{.Max}})
}
{{end}}
m.MarshalUint32(uint32(len(o.{{.Name}})))
for i := range o.{{.Name}} {
{{if ne .Convert ""}}
m.Marshal{{.Encoder}}({{.Convert}}(o.{{.Name}}[i]))
{{else if .IsBasic}}
m.Marshal{{.Encoder}}(o.{{.Name}}[i])
{{else}}
if err := o.{{.Name}}[i].MarshalXDRInto(m); err != nil {
return err
}
{{end}}
}
{{end}}
func (o *{{.Name}}) UnmarshalXDR(bs []byte) error {
u := &xdr.Unmarshaller{Data: bs}
return o.UnmarshalXDRFrom(u)
}
func (o *{{.Name}}) UnmarshalXDRFrom(u *xdr.Unmarshaller) error {
{{range $fi := .Fields}}
{{if $fi.IsSlice}}
{{template "unmarshalSlice" $fi}}
{{else}}
{{template "unmarshalValue" $fi}}
{{end}}
{{end}}
return u.Error
return xw.Tot(), xw.Error()
}//+n
{{define "unmarshalValue"}}
{{if ne .Convert ""}}
o.{{.Name}} = {{.FieldType}}(u.Unmarshal{{.Encoder}}())
{{else if .IsBasic}}
{{if ge .Max 1}}
o.{{.Name}} = u.Unmarshal{{.Encoder}}Max({{.Max}})
{{else}}
o.{{.Name}} = u.Unmarshal{{.Encoder}}()
{{end}}
{{else}}
(&o.{{.Name}}).UnmarshalXDRFrom(u)
{{end}}
{{end}}
func (o *{{.TypeName}}) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.DecodeXDRFrom(xr)
}//+n
{{define "unmarshalSlice"}}
_{{.Name}}Size := int(u.UnmarshalUint32())
if _{{.Name}}Size < 0 {
return xdr.ElementSizeExceeded("{{.Name}}", _{{.Name}}Size, {{.Max}})
} else if _{{.Name}}Size == 0 {
o.{{.Name}} = nil
} else {
{{if ge .Max 1}}
if _{{.Name}}Size > {{.Max}} {
return xdr.ElementSizeExceeded("{{.Name}}", _{{.Name}}Size, {{.Max}})
}
{{end}}
if _{{.Name}}Size <= len(o.{{.Name}}) {
{{if eq .FieldType "string"}}
for i := _{{.Name}}Size; i < len(o.{{.Name}}); i++ { o.{{.Name}}[i] = "" }
{{end}}
{{if eq .FieldType "[]byte"}}
for i := _{{.Name}}Size; i < len(o.{{.Name}}); i++ { o.{{.Name}}[i] = nil }
{{end}}
o.{{.Name}} = o.{{.Name}}[:_{{.Name}}Size]
} else {
o.{{.Name}} = make([]{{.FieldType}}, _{{.Name}}Size)
}
for i := range o.{{.Name}} {
{{if ne .Convert ""}}
o.{{.Name}}[i] = {{.FieldType}}(u.Unmarshal{{.Encoder}}())
{{else if .IsBasic}}
{{if ge .Submax 1}}
o.{{.Name}}[i] = u.Unmarshal{{.Encoder}}Max({{.Submax}})
func (o *{{.TypeName}}) UnmarshalXDR(bs []byte) error {
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.DecodeXDRFrom(xr)
}//+n
func (o *{{.TypeName}}) DecodeXDRFrom(xr *xdr.Reader) error {
{{range $fieldInfo := .Fields}}
{{if not $fieldInfo.IsSlice}}
{{if ne $fieldInfo.Convert ""}}
o.{{$fieldInfo.Name}} = {{$fieldInfo.FieldType}}(xr.Read{{$fieldInfo.Encoder}}())
{{else if $fieldInfo.IsBasic}}
{{if ge $fieldInfo.Max 1}}
o.{{$fieldInfo.Name}} = xr.Read{{$fieldInfo.Encoder}}Max({{$fieldInfo.Max}})
{{else}}
o.{{.Name}}[i] = u.Unmarshal{{.Encoder}}()
o.{{$fieldInfo.Name}} = xr.Read{{$fieldInfo.Encoder}}()
{{end}}
{{else}}
(&o.{{.Name}}[i]).UnmarshalXDRFrom(u)
(&o.{{$fieldInfo.Name}}).DecodeXDRFrom(xr)
{{end}}
}
}
{{end}}
`
var (
encodeTpl = template.Must(template.New("encoder").Parse(encoderData))
headerTpl = template.Must(template.New("header").Parse(headerData))
)
{{else}}
_{{$fieldInfo.Name}}Size := int(xr.ReadUint32())
if _{{$fieldInfo.Name}}Size < 0 {
return xdr.ElementSizeExceeded("{{$fieldInfo.Name}}", _{{$fieldInfo.Name}}Size, {{$fieldInfo.Max}})
}
{{if ge $fieldInfo.Max 1}}
if _{{$fieldInfo.Name}}Size > {{$fieldInfo.Max}} {
return xdr.ElementSizeExceeded("{{$fieldInfo.Name}}", _{{$fieldInfo.Name}}Size, {{$fieldInfo.Max}})
}
{{end}}
o.{{$fieldInfo.Name}} = make([]{{$fieldInfo.FieldType}}, _{{$fieldInfo.Name}}Size)
for i := range o.{{$fieldInfo.Name}} {
{{if ne $fieldInfo.Convert ""}}
o.{{$fieldInfo.Name}}[i] = {{$fieldInfo.FieldType}}(xr.Read{{$fieldInfo.Encoder}}())
{{else if $fieldInfo.IsBasic}}
{{if ge $fieldInfo.Submax 1}}
o.{{$fieldInfo.Name}}[i] = xr.Read{{$fieldInfo.Encoder}}Max({{$fieldInfo.Submax}})
{{else}}
o.{{$fieldInfo.Name}}[i] = xr.Read{{$fieldInfo.Encoder}}()
{{end}}
{{else}}
(&o.{{$fieldInfo.Name}}[i]).DecodeXDRFrom(xr)
{{end}}
}
{{end}}
{{end}}
return xr.Error()
}`))
var emptyTypeTpl = template.Must(template.New("encoder").Parse(`
func (o {{.Name}}) XDRSize() int {
return 0
}
func (o {{.TypeName}}) EncodeXDR(w io.Writer) (int, error) {
return 0, nil
}//+n
func (o {{.Name}}) MarshalXDR() ([]byte, error) {
func (o {{.TypeName}}) MarshalXDR() ([]byte, error) {
return nil, nil
}//+n
func (o {{.Name}}) MustMarshalXDR() []byte {
func (o {{.TypeName}}) MustMarshalXDR() []byte {
return nil
}//+n
func (o {{.Name}}) MarshalXDRInto(m *xdr.Marshaller) error {
func (o {{.TypeName}}) AppendXDR(bs []byte) ([]byte, error) {
return bs, nil
}//+n
func (o {{.TypeName}}) EncodeXDRInto(xw *xdr.Writer) (int, error) {
return xw.Tot(), xw.Error()
}//+n
func (o *{{.TypeName}}) DecodeXDR(r io.Reader) error {
return nil
}//+n
func (o *{{.Name}}) UnmarshalXDR(bs []byte) error {
func (o *{{.TypeName}}) UnmarshalXDR(bs []byte) error {
return nil
}//+n
func (o *{{.Name}}) UnmarshalXDRFrom(u *xdr.Unmarshaller) error {
return nil
}//+n
`))
func (o *{{.TypeName}}) DecodeXDRFrom(xr *xdr.Reader) error {
return xr.Error()
}`))
var maxRe = regexp.MustCompile(`(?:\Wmax:)(\d+)(?:\s*,\s*(\d+))?`)
@@ -342,7 +286,7 @@ func handleStruct(t *ast.StructType) []fieldInfo {
f = fieldInfo{
Name: fn,
IsBasic: true,
FieldType: "[]" + tn,
FieldType: tn,
Encoder: enc.Encoder,
Convert: enc.Type,
Max: max1,
@@ -385,14 +329,17 @@ func handleStruct(t *ast.StructType) []fieldInfo {
}
func generateCode(output io.Writer, s structInfo) {
name := s.Name
fs := s.Fields
var buf bytes.Buffer
var err error
if len(s.Fields) == 0 {
if len(fs) == 0 {
// This is an empty type. We can create a quite simple codec for it.
err = emptyTypeTpl.Execute(&buf, s)
err = emptyTypeTpl.Execute(&buf, map[string]interface{}{"TypeName": name})
} else {
// Generate with the default template.
err = encodeTpl.Execute(&buf, s)
err = encodeTpl.Execute(&buf, map[string]interface{}{"TypeName": name, "Fields": fs})
}
if err != nil {
panic(err)
@@ -400,7 +347,12 @@ func generateCode(output io.Writer, s structInfo) {
bs := regexp.MustCompile(`(\s*\n)+`).ReplaceAll(buf.Bytes(), []byte("\n"))
bs = bytes.Replace(bs, []byte("//+n"), []byte("\n"), -1)
output.Write(bs)
bs, err = format.Source(bs)
if err != nil {
panic(err)
}
fmt.Fprintln(output, string(bs))
}
func uncamelize(s string) string {
@@ -432,46 +384,46 @@ func generateDiagram(output io.Writer, s structInfo) {
tn := f.FieldType
name := uncamelize(f.Name)
suffix := ""
if f.IsSlice {
fmt.Fprintf(output, "| %s |\n", center("Number of "+name, 61))
fmt.Fprintln(output, line)
suffix = " (n items)"
fmt.Fprintf(output, "/ %s /\n", center("", 61))
}
switch tn {
case "bool":
fmt.Fprintf(output, "| %s |V|\n", center(name+" (V=0 or 1)", 59))
fmt.Fprintln(output, line)
case "int16", "uint16":
fmt.Fprintf(output, "| %s | %s |\n", center("16 zero bits", 29), center(name, 29))
case "int8", "uint8":
fmt.Fprintf(output, "| %s | %s |\n", center("24 zero bits", 45), center(name, 13))
fmt.Fprintf(output, "| %s | %s |\n", center("0x0000", 29), center(name, 29))
fmt.Fprintln(output, line)
case "int32", "uint32":
fmt.Fprintf(output, "| %s |\n", center(name+suffix, 61))
fmt.Fprintf(output, "| %s |\n", center(name, 61))
fmt.Fprintln(output, line)
case "int64", "uint64":
fmt.Fprintf(output, "| %-61s |\n", "")
fmt.Fprintf(output, "+ %s +\n", center(name+" (64 bits)", 61))
fmt.Fprintf(output, "| %-61s |\n", "")
case "string", "[]byte":
fmt.Fprintln(output, line)
case "string", "byte": // XXX We assume slice of byte!
fmt.Fprintf(output, "| %s |\n", center("Length of "+name, 61))
fmt.Fprintln(output, line)
fmt.Fprintf(output, "/ %61s /\n", "")
fmt.Fprintf(output, "\\ %s \\\n", center(name+" (length + padded data)", 61))
fmt.Fprintf(output, "\\ %s \\\n", center(name+" (variable length)", 61))
fmt.Fprintf(output, "/ %61s /\n", "")
fmt.Fprintln(output, line)
default:
if f.IsSlice {
tn = "Zero or more " + tn + " Structures"
fmt.Fprintf(output, "/ %s /\n", center("", 61))
fmt.Fprintf(output, "\\ %s \\\n", center(tn, 61))
fmt.Fprintf(output, "/ %s /\n", center("", 61))
} else {
tn = tn + " Structure"
fmt.Fprintf(output, "/ %s /\n", center("", 61))
fmt.Fprintf(output, "\\ %s \\\n", center(tn, 61))
fmt.Fprintf(output, "/ %s /\n", center("", 61))
}
fmt.Fprintln(output, line)
}
if f.IsSlice {
fmt.Fprintf(output, "/ %s /\n", center("", 61))
}
fmt.Fprintln(output, line)
}
fmt.Fprintln(output)
fmt.Fprintln(output)
@@ -496,9 +448,9 @@ func generateXdr(output io.Writer, s structInfo) {
}
switch tn {
case "int8", "int16", "int32":
case "int16", "int32":
fmt.Fprintf(output, "\tint %s%s;\n", fn, suf)
case "uint8", "uint16", "uint32":
case "uint16", "uint32":
fmt.Fprintf(output, "\tunsigned int %s%s;\n", fn, suf)
case "int64":
fmt.Fprintf(output, "\thyper %s%s;\n", fn, suf)
@@ -506,7 +458,7 @@ func generateXdr(output io.Writer, s structInfo) {
fmt.Fprintf(output, "\tunsigned hyper %s%s;\n", fn, suf)
case "string":
fmt.Fprintf(output, "\tstring %s<%s>;\n", fn, l)
case "[]byte":
case "byte":
fmt.Fprintf(output, "\topaque %s<%s>;\n", fn, l)
default:
fmt.Fprintf(output, "\t%s %s%s;\n", tn, fn, suf)
@@ -558,22 +510,6 @@ func main() {
i := inspector(&structs)
ast.Inspect(f, i)
buf := new(bytes.Buffer)
headerTpl.Execute(buf, map[string]string{"Package": f.Name.Name})
for _, s := range structs {
fmt.Fprintf(buf, "\n/*\n\n")
generateDiagram(buf, s)
generateXdr(buf, s)
fmt.Fprintf(buf, "*/\n")
generateCode(buf, s)
}
bs, err := format.Source(buf.Bytes())
if err != nil {
log.Print(buf.String())
log.Fatal(err)
}
var output io.Writer = os.Stdout
if *outputFile != "" {
fd, err := os.Create(*outputFile)
@@ -582,5 +518,13 @@ func main() {
}
output = fd
}
output.Write(bs)
headerTpl.Execute(output, map[string]string{"Package": f.Name.Name})
for _, s := range structs {
fmt.Fprintf(output, "\n/*\n\n")
generateDiagram(output, s)
generateXdr(output, s)
fmt.Fprintf(output, "*/\n")
generateCode(output, s)
}
}

View File

@@ -1,59 +0,0 @@
// Copyright (C) 2014 Jakob Borg. All rights reserved. Use of this source code
// is governed by an MIT-style license that can be found in the LICENSE file.
package xdr
import (
"fmt"
"reflect"
)
var padBytes = []byte{0, 0, 0}
// Padding returns the number of bytes that should be added to an item of length l
// bytes to conform to the XDR padding standard. This function is used by the
// generated marshalling code.
func Padding(l int) int {
d := l % 4
if d == 0 {
return 0
}
return 4 - d
}
// ElementSizeExceeded returns an error describing the violated size
// constraint. This function is used by the generated marshalling code.
func ElementSizeExceeded(field string, size, limit int) error {
return fmt.Errorf("%s exceeds size limit; %d > %d", field, size, limit)
}
type XDRSizer interface {
XDRSize() int
}
// SizeOfSlice returns the XDR encoded size of the given []T. Supported types
// for T are string, []byte and types implementing XDRSizer. SizeOfSlice
// panics if the parameter is not a slice or if T is not one of the supported
// types. This function is used by the generated marshalling code.
func SizeOfSlice(ss interface{}) int {
l := 0
switch ss := ss.(type) {
case []string:
for _, s := range ss {
l += 4 + len(s) + Padding(len(s))
}
case [][]byte:
for _, s := range ss {
l += 4 + len(s) + Padding(len(s))
}
default:
v := reflect.ValueOf(ss)
for i := 0; i < v.Len(); i++ {
l += v.Index(i).Interface().(XDRSizer).XDRSize()
}
}
return l
}
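For a feel of the arithmetic involved: XDR aligns opaque and string data to four-byte boundaries, which is what Padding above (and the unexported pad helper in the files added below) computes. A small self-contained illustration:

package main

import "fmt"

// padding mirrors the XDR rule implemented above: data is padded out to a
// multiple of four bytes.
func padding(l int) int {
	if d := l % 4; d != 0 {
		return 4 - d
	}
	return 0
}

func main() {
	// On the wire a string costs a 4-byte length, the data, then padding,
	// so "ab" takes 4+2+2 = 8 bytes and "hello" takes 4+5+3 = 12 bytes.
	for _, s := range []string{"ab", "cde", "hello"} {
		fmt.Printf("%-5s -> %d bytes\n", s, 4+len(s)+padding(len(s)))
	}
}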

16
Godeps/_workspace/src/github.com/calmh/xdr/debug.go generated vendored Normal file
View File

@@ -0,0 +1,16 @@
// Copyright (C) 2014 Jakob Borg. All rights reserved. Use of this source code
// is governed by an MIT-style license that can be found in the LICENSE file.
package xdr
import (
"log"
"os"
)
var (
debug = len(os.Getenv("XDRTRACE")) > 0
dl = log.New(os.Stdout, "xdr: ", log.Lshortfile|log.Ltime|log.Lmicroseconds)
)
const maxDebugBytes = 32

View File

@@ -1,5 +1,5 @@
// Copyright (C) 2014 Jakob Borg. All rights reserved. Use of this source code
// is governed by an MIT-style license that can be found in the LICENSE file.
// Package xdr implements an XDR (RFC 4506) marshaller/unmarshaller.
// Package xdr implements an XDR (RFC 4506) encoder/decoder.
package xdr

View File

@@ -1,123 +0,0 @@
// Copyright (C) 2014 Jakob Borg. All rights reserved. Use of this source code
// is governed by an MIT-style license that can be found in the LICENSE file.
package xdr
import "io"
// The Marshaller is a thin wrapper around a byte buffer. The buffer must be
// of sufficient size to hold the complete marshalled object, or an
// io.ErrShortBuffer error will result. The Marshal... methods don't
// individually return an error - the intention is that multiple fields are
// marshalled in rapid succession, followed by a check of the Error field on
// the Marshaller.
type Marshaller struct {
Data []byte
Error error
offset int
}
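// Illustrative sketch, not part of the original file: the doc comment above
// describes an error-accumulation pattern -- marshal several fields back to
// back, then check Error once. From calling code that looks roughly like:
//
//	buf := make([]byte, 4+4+len("hello")+3) // uint32 + length + data + padding
//	m := &xdr.Marshaller{Data: buf}
//	m.MarshalUint32(42)
//	m.MarshalString("hello")
//	if m.Error != nil {
//		// e.g. io.ErrShortBuffer if buf was sized too small
//	}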
// MarshalRaw copies the raw bytes to the buffer, without a size prefix or
// padding. This is suitable for appending data already in XDR format from
// another source.
func (m *Marshaller) MarshalRaw(bs []byte) {
if m.Error != nil {
return
}
if len(m.Data) < m.offset+len(bs) {
m.Error = io.ErrShortBuffer
return
}
m.offset += copy(m.Data[m.offset:], bs)
}
// MarshalString appends the string to the buffer, with a size prefix and
// correct padding.
func (m *Marshaller) MarshalString(s string) {
if m.Error != nil {
return
}
if len(m.Data) < m.offset+4+len(s)+Padding(len(s)) {
m.Error = io.ErrShortBuffer
return
}
m.MarshalUint32(uint32(len(s)))
m.offset += copy(m.Data[m.offset:], s)
m.offset += copy(m.Data[m.offset:], padBytes[:Padding(len(s))])
}
// MarshalBytes appends the bytes to the buffer, with a size prefix and
// correct padding.
func (m *Marshaller) MarshalBytes(bs []byte) {
if m.Error != nil {
return
}
if len(m.Data) < m.offset+4+len(bs)+Padding(len(bs)) {
m.Error = io.ErrShortBuffer
return
}
m.MarshalUint32(uint32(len(bs)))
m.offset += copy(m.Data[m.offset:], bs)
m.offset += copy(m.Data[m.offset:], padBytes[:Padding(len(bs))])
}
// MarshalBool appends the bool to the buffer, as a uint32.
func (m *Marshaller) MarshalBool(v bool) {
if v {
m.MarshalUint8(1)
} else {
m.MarshalUint8(0)
}
}
// MarshalUint8 appends the uint8 to the buffer, as a uint32.
func (m *Marshaller) MarshalUint8(v uint8) {
m.MarshalUint32(uint32(v))
}
// MarshalUint16 appends the uint16 to the buffer, as a uint32.
func (m *Marshaller) MarshalUint16(v uint16) {
m.MarshalUint32(uint32(v))
}
// MarshalUint32 appends the uint32 to the buffer.
func (m *Marshaller) MarshalUint32(v uint32) {
if m.Error != nil {
return
}
if len(m.Data) < m.offset+4 {
m.Error = io.ErrShortBuffer
return
}
m.Data[m.offset+0] = byte(v >> 24)
m.Data[m.offset+1] = byte(v >> 16)
m.Data[m.offset+2] = byte(v >> 8)
m.Data[m.offset+3] = byte(v)
m.offset += 4
}
// MarshalUint64 appends the uint64 to the buffer.
func (m *Marshaller) MarshalUint64(v uint64) {
if m.Error != nil {
return
}
if len(m.Data) < m.offset+8 {
m.Error = io.ErrShortBuffer
return
}
m.Data[m.offset+0] = byte(v >> 56)
m.Data[m.offset+1] = byte(v >> 48)
m.Data[m.offset+2] = byte(v >> 40)
m.Data[m.offset+3] = byte(v >> 32)
m.Data[m.offset+4] = byte(v >> 24)
m.Data[m.offset+5] = byte(v >> 16)
m.Data[m.offset+6] = byte(v >> 8)
m.Data[m.offset+7] = byte(v)
m.offset += 8
}

10
Godeps/_workspace/src/github.com/calmh/xdr/pad_ipdr.go generated vendored Normal file
View File

@@ -0,0 +1,10 @@
// Copyright (C) 2014 Jakob Borg. All rights reserved. Use of this source code
// is governed by an MIT-style license that can be found in the LICENSE file.
// +build ipdr
package xdr
func pad(l int) int {
return 0
}

14
Godeps/_workspace/src/github.com/calmh/xdr/pad_xdr.go generated vendored Normal file
View File

@@ -0,0 +1,14 @@
// Copyright (C) 2014 Jakob Borg. All rights reserved. Use of this source code
// is governed by an MIT-style license that can be found in the LICENSE file.
// +build !ipdr
package xdr
func pad(l int) int {
d := l % 4
if d == 0 {
return 0
}
return 4 - d
}

171
Godeps/_workspace/src/github.com/calmh/xdr/reader.go generated vendored Normal file
View File

@@ -0,0 +1,171 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
package xdr
import (
"fmt"
"io"
"reflect"
"unsafe"
)
type Reader struct {
r io.Reader
err error
b [8]byte
}
func NewReader(r io.Reader) *Reader {
return &Reader{
r: r,
}
}
func (r *Reader) ReadRaw(bs []byte) (int, error) {
if r.err != nil {
return 0, r.err
}
var n int
n, r.err = io.ReadFull(r.r, bs)
return n, r.err
}
func (r *Reader) ReadString() string {
return r.ReadStringMax(0)
}
func (r *Reader) ReadStringMax(max int) string {
buf := r.ReadBytesMaxInto(max, nil)
bh := (*reflect.SliceHeader)(unsafe.Pointer(&buf))
sh := reflect.StringHeader{
Data: bh.Data,
Len: bh.Len,
}
return *((*string)(unsafe.Pointer(&sh)))
}
func (r *Reader) ReadBytes() []byte {
return r.ReadBytesInto(nil)
}
func (r *Reader) ReadBytesMax(max int) []byte {
return r.ReadBytesMaxInto(max, nil)
}
func (r *Reader) ReadBytesInto(dst []byte) []byte {
return r.ReadBytesMaxInto(0, dst)
}
func (r *Reader) ReadBytesMaxInto(max int, dst []byte) []byte {
if r.err != nil {
return nil
}
l := int(r.ReadUint32())
if r.err != nil {
return nil
}
if l < 0 || max > 0 && l > max {
// l may be negative on 32 bit builds
r.err = ElementSizeExceeded("bytes field", l, max)
return nil
}
if fullLen := l + pad(l); fullLen > len(dst) {
dst = make([]byte, fullLen)
} else {
dst = dst[:fullLen]
}
var n int
n, r.err = io.ReadFull(r.r, dst)
if r.err != nil {
if debug {
dl.Printf("rd bytes (%d): %v", len(dst), r.err)
}
return nil
}
if debug {
if n > maxDebugBytes {
dl.Printf("rd bytes (%d): %x...", len(dst), dst[:maxDebugBytes])
} else {
dl.Printf("rd bytes (%d): %x", len(dst), dst)
}
}
return dst[:l]
}
func (r *Reader) ReadBool() bool {
return r.ReadUint8() != 0
}
func (r *Reader) ReadUint32() uint32 {
if r.err != nil {
return 0
}
_, r.err = io.ReadFull(r.r, r.b[:4])
if r.err != nil {
if debug {
dl.Printf("rd uint32: %v", r.err)
}
return 0
}
v := uint32(r.b[3]) | uint32(r.b[2])<<8 | uint32(r.b[1])<<16 | uint32(r.b[0])<<24
if debug {
dl.Printf("rd uint32=%d (0x%08x)", v, v)
}
return v
}
func (r *Reader) ReadUint64() uint64 {
if r.err != nil {
return 0
}
_, r.err = io.ReadFull(r.r, r.b[:8])
if r.err != nil {
if debug {
dl.Printf("rd uint64: %v", r.err)
}
return 0
}
v := uint64(r.b[7]) | uint64(r.b[6])<<8 | uint64(r.b[5])<<16 | uint64(r.b[4])<<24 |
uint64(r.b[3])<<32 | uint64(r.b[2])<<40 | uint64(r.b[1])<<48 | uint64(r.b[0])<<56
if debug {
dl.Printf("rd uint64=%d (0x%016x)", v, v)
}
return v
}
type XDRError struct {
op string
err error
}
func (e XDRError) Error() string {
return "xdr " + e.op + ": " + e.err.Error()
}
func (e XDRError) IsEOF() bool {
return e.err == io.EOF
}
func (r *Reader) Error() error {
if r.err == nil {
return nil
}
return XDRError{"read", r.err}
}
func ElementSizeExceeded(field string, size, limit int) error {
return fmt.Errorf("%s exceeds size limit; %d > %d", field, size, limit)
}

View File

@@ -0,0 +1,49 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build ipdr
package xdr
import "io"
func (r *Reader) ReadUint8() uint8 {
if r.err != nil {
return 0
}
_, r.err = io.ReadFull(r.r, r.b[:1])
if r.err != nil {
if debug {
dl.Printf("rd uint8: %v", r.err)
}
return 0
}
if debug {
dl.Printf("rd uint8=%d (0x%02x)", r.b[0], r.b[0])
}
return r.b[0]
}
func (r *Reader) ReadUint16() uint16 {
if r.err != nil {
return 0
}
_, r.err = io.ReadFull(r.r, r.b[:2])
if r.err != nil {
if debug {
dl.Printf("rd uint16: %v", r.err)
}
return 0
}
v := uint16(r.b[1]) | uint16(r.b[0])<<8
if debug {
dl.Printf("rd uint16=%d (0x%04x)", v, v)
}
return v
}

View File

@@ -0,0 +1,15 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build !ipdr
package xdr
func (r *Reader) ReadUint8() uint8 {
return uint8(r.ReadUint32())
}
func (r *Reader) ReadUint16() uint16 {
return uint16(r.ReadUint32())
}

View File

@@ -1,147 +0,0 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
package xdr
import (
"io"
"reflect"
"unsafe"
)
type Unmarshaller struct {
Error error
Data []byte
}
func (u *Unmarshaller) UnmarshalRaw(l int) []byte {
if u.Error != nil {
return nil
}
if len(u.Data) < l {
u.Error = io.ErrUnexpectedEOF
return nil
}
v := u.Data[:l]
u.Data = u.Data[l:]
return v
}
func (u *Unmarshaller) UnmarshalString() string {
return u.UnmarshalStringMax(0)
}
func (u *Unmarshaller) UnmarshalStringMax(max int) string {
buf := u.UnmarshalBytesMax(max)
if len(buf) == 0 || u.Error != nil {
return ""
}
var v string
p := (*reflect.StringHeader)(unsafe.Pointer(&v))
p.Data = uintptr(unsafe.Pointer(&buf[0]))
p.Len = len(buf)
return v
}
func (u *Unmarshaller) UnmarshalBytes() []byte {
return u.UnmarshalBytesMax(0)
}
func (u *Unmarshaller) UnmarshalBytesMax(max int) []byte {
if u.Error != nil {
return nil
}
if len(u.Data) < 4 {
u.Error = io.ErrUnexpectedEOF
return nil
}
l := int(u.Data[3]) | int(u.Data[2])<<8 | int(u.Data[1])<<16 | int(u.Data[0])<<24
if l == 0 {
u.Data = u.Data[4:]
return nil
}
if l < 0 || max > 0 && l > max {
// l may be negative on 32 bit builds
u.Error = ElementSizeExceeded("bytes field", l, max)
return nil
}
if len(u.Data) < l+4 {
u.Error = io.ErrUnexpectedEOF
return nil
}
v := u.Data[4 : 4+l]
u.Data = u.Data[4+l+Padding(l):]
return v
}
func (u *Unmarshaller) UnmarshalBool() bool {
return u.UnmarshalUint8() != 0
}
func (u *Unmarshaller) UnmarshalUint8() uint8 {
if u.Error != nil {
return 0
}
if len(u.Data) < 4 {
u.Error = io.ErrUnexpectedEOF
return 0
}
v := uint8(u.Data[3])
u.Data = u.Data[4:]
return v
}
func (u *Unmarshaller) UnmarshalUint16() uint16 {
if u.Error != nil {
return 0
}
if len(u.Data) < 4 {
u.Error = io.ErrUnexpectedEOF
return 0
}
v := uint16(u.Data[3]) | uint16(u.Data[2])<<8
u.Data = u.Data[4:]
return v
}
func (u *Unmarshaller) UnmarshalUint32() uint32 {
if u.Error != nil {
return 0
}
if len(u.Data) < 4 {
u.Error = io.ErrUnexpectedEOF
return 0
}
v := uint32(u.Data[3]) | uint32(u.Data[2])<<8 | uint32(u.Data[1])<<16 | uint32(u.Data[0])<<24
u.Data = u.Data[4:]
return v
}
func (u *Unmarshaller) UnmarshalUint64() uint64 {
if u.Error != nil {
return 0
}
if len(u.Data) < 8 {
u.Error = io.ErrUnexpectedEOF
return 0
}
v := uint64(u.Data[7]) | uint64(u.Data[6])<<8 | uint64(u.Data[5])<<16 | uint64(u.Data[4])<<24 |
uint64(u.Data[3])<<32 | uint64(u.Data[2])<<40 | uint64(u.Data[1])<<48 | uint64(u.Data[0])<<56
u.Data = u.Data[8:]
return v
}

146
Godeps/_workspace/src/github.com/calmh/xdr/writer.go generated vendored Normal file
View File

@@ -0,0 +1,146 @@
// Copyright (C) 2014 Jakob Borg. All rights reserved. Use of this source code
// is governed by an MIT-style license that can be found in the LICENSE file.
package xdr
import (
"io"
"reflect"
"unsafe"
)
var padBytes = []byte{0, 0, 0}
type Writer struct {
w io.Writer
tot int
err error
b [8]byte
}
type AppendWriter []byte
func (w *AppendWriter) Write(bs []byte) (int, error) {
*w = append(*w, bs...)
return len(bs), nil
}
func NewWriter(w io.Writer) *Writer {
return &Writer{
w: w,
}
}
func (w *Writer) WriteRaw(bs []byte) (int, error) {
if w.err != nil {
return 0, w.err
}
var n int
n, w.err = w.w.Write(bs)
return n, w.err
}
func (w *Writer) WriteString(s string) (int, error) {
sh := *((*reflect.StringHeader)(unsafe.Pointer(&s)))
bh := reflect.SliceHeader{
Data: sh.Data,
Len: sh.Len,
Cap: sh.Len,
}
return w.WriteBytes(*(*[]byte)(unsafe.Pointer(&bh)))
}
func (w *Writer) WriteBytes(bs []byte) (int, error) {
if w.err != nil {
return 0, w.err
}
w.WriteUint32(uint32(len(bs)))
if w.err != nil {
return 0, w.err
}
if debug {
if len(bs) > maxDebugBytes {
dl.Printf("wr bytes (%d): %x...", len(bs), bs[:maxDebugBytes])
} else {
dl.Printf("wr bytes (%d): %x", len(bs), bs)
}
}
var l, n int
n, w.err = w.w.Write(bs)
l += n
if p := pad(len(bs)); w.err == nil && p > 0 {
n, w.err = w.w.Write(padBytes[:p])
l += n
}
w.tot += l
return l, w.err
}
func (w *Writer) WriteBool(v bool) (int, error) {
if v {
return w.WriteUint8(1)
} else {
return w.WriteUint8(0)
}
}
func (w *Writer) WriteUint32(v uint32) (int, error) {
if w.err != nil {
return 0, w.err
}
if debug {
dl.Printf("wr uint32=%d", v)
}
w.b[0] = byte(v >> 24)
w.b[1] = byte(v >> 16)
w.b[2] = byte(v >> 8)
w.b[3] = byte(v)
var l int
l, w.err = w.w.Write(w.b[:4])
w.tot += l
return l, w.err
}
func (w *Writer) WriteUint64(v uint64) (int, error) {
if w.err != nil {
return 0, w.err
}
if debug {
dl.Printf("wr uint64=%d", v)
}
w.b[0] = byte(v >> 56)
w.b[1] = byte(v >> 48)
w.b[2] = byte(v >> 40)
w.b[3] = byte(v >> 32)
w.b[4] = byte(v >> 24)
w.b[5] = byte(v >> 16)
w.b[6] = byte(v >> 8)
w.b[7] = byte(v)
var l int
l, w.err = w.w.Write(w.b[:8])
w.tot += l
return l, w.err
}
func (w *Writer) Tot() int {
return w.tot
}
func (w *Writer) Error() error {
if w.err == nil {
return nil
}
return XDRError{"write", w.err}
}

View File

@@ -0,0 +1,41 @@
// Copyright (C) 2014 Jakob Borg. All rights reserved. Use of this source code
// is governed by an MIT-style license that can be found in the LICENSE file.
// +build ipdr
package xdr
func (w *Writer) WriteUint8(v uint8) (int, error) {
if w.err != nil {
return 0, w.err
}
if debug {
dl.Printf("wr uint8=%d", v)
}
w.b[0] = byte(v)
var l int
l, w.err = w.w.Write(w.b[:1])
w.tot += l
return l, w.err
}
func (w *Writer) WriteUint16(v uint16) (int, error) {
if w.err != nil {
return 0, w.err
}
if debug {
dl.Printf("wr uint8=%d", v)
}
w.b[0] = byte(v >> 8)
w.b[1] = byte(v)
var l int
l, w.err = w.w.Write(w.b[:2])
w.tot += l
return l, w.err
}

View File

@@ -0,0 +1,14 @@
// Copyright (C) 2014 Jakob Borg. All rights reserved. Use of this source code
// is governed by an MIT-style license that can be found in the LICENSE file.
// +build !ipdr
package xdr
func (w *Writer) WriteUint8(v uint8) (int, error) {
return w.WriteUint32(uint32(v))
}
func (w *Writer) WriteUint16(v uint16) (int, error) {
return w.WriteUint32(uint32(v))
}

View File

@@ -1,24 +0,0 @@
Copyright 2012 Suryandaru Triandana <syndtr@gmail.com>
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

View File

@@ -1,201 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@@ -1,27 +0,0 @@
Copyright (c) 2009 The Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

View File

@@ -1,22 +0,0 @@
Additional IP Rights Grant (Patents)
"This implementation" means the copyrightable works distributed by
Google as part of the Go project.
Google hereby grants to You a perpetual, worldwide, non-exclusive,
no-charge, royalty-free, irrevocable (except as stated in this section)
patent license to make, have made, use, offer to sell, sell, import,
transfer and otherwise run, modify and propagate the contents of this
implementation of Go, where such license applies only to those patent
claims, both currently owned or controlled by Google and acquired in
the future, licensable by Google that are necessarily infringed by this
implementation of Go. This grant does not include claims that would be
infringed only as a consequence of further modification of this
implementation. If you or your agent or exclusive licensee institute or
order or agree to the institution of patent litigation against any
entity (including a cross-claim or counterclaim in a lawsuit) alleging
that this implementation of Go or any code incorporated within this
implementation of Go constitutes direct or contributory patent
infringement, or inducement of patent infringement, then any patent
rights granted to you under this License for this implementation of Go
shall terminate as of the date such litigation is filed.

View File

@@ -1,27 +0,0 @@
Copyright (c) 2009 The Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

View File

@@ -1,22 +0,0 @@
Additional IP Rights Grant (Patents)
"This implementation" means the copyrightable works distributed by
Google as part of the Go project.
Google hereby grants to You a perpetual, worldwide, non-exclusive,
no-charge, royalty-free, irrevocable (except as stated in this section)
patent license to make, have made, use, offer to sell, sell, import,
transfer and otherwise run, modify and propagate the contents of this
implementation of Go, where such license applies only to those patent
claims, both currently owned or controlled by Google and acquired in
the future, licensable by Google that are necessarily infringed by this
implementation of Go. This grant does not include claims that would be
infringed only as a consequence of further modification of this
implementation. If you or your agent or exclusive licensee institute or
order or agree to the institution of patent litigation against any
entity (including a cross-claim or counterclaim in a lawsuit) alleging
that this implementation of Go or any code incorporated within this
implementation of Go constitutes direct or contributory patent
infringement, or inducement of patent infringement, then any patent
rights granted to you under this License for this implementation of Go
shall terminate as of the date such litigation is filed.

View File

@@ -1,156 +0,0 @@
// Created by cgo -godefs - DO NOT EDIT
// cgo -godefs defs_linux.go
// +build linux,mips64
package ipv6
const (
sysIPV6_ADDRFORM = 0x1
sysIPV6_2292PKTINFO = 0x2
sysIPV6_2292HOPOPTS = 0x3
sysIPV6_2292DSTOPTS = 0x4
sysIPV6_2292RTHDR = 0x5
sysIPV6_2292PKTOPTIONS = 0x6
sysIPV6_CHECKSUM = 0x7
sysIPV6_2292HOPLIMIT = 0x8
sysIPV6_NEXTHOP = 0x9
sysIPV6_FLOWINFO = 0xb
sysIPV6_UNICAST_HOPS = 0x10
sysIPV6_MULTICAST_IF = 0x11
sysIPV6_MULTICAST_HOPS = 0x12
sysIPV6_MULTICAST_LOOP = 0x13
sysIPV6_ADD_MEMBERSHIP = 0x14
sysIPV6_DROP_MEMBERSHIP = 0x15
sysMCAST_JOIN_GROUP = 0x2a
sysMCAST_LEAVE_GROUP = 0x2d
sysMCAST_JOIN_SOURCE_GROUP = 0x2e
sysMCAST_LEAVE_SOURCE_GROUP = 0x2f
sysMCAST_BLOCK_SOURCE = 0x2b
sysMCAST_UNBLOCK_SOURCE = 0x2c
sysMCAST_MSFILTER = 0x30
sysIPV6_ROUTER_ALERT = 0x16
sysIPV6_MTU_DISCOVER = 0x17
sysIPV6_MTU = 0x18
sysIPV6_RECVERR = 0x19
sysIPV6_V6ONLY = 0x1a
sysIPV6_JOIN_ANYCAST = 0x1b
sysIPV6_LEAVE_ANYCAST = 0x1c
sysIPV6_FLOWLABEL_MGR = 0x20
sysIPV6_FLOWINFO_SEND = 0x21
sysIPV6_IPSEC_POLICY = 0x22
sysIPV6_XFRM_POLICY = 0x23
sysIPV6_RECVPKTINFO = 0x31
sysIPV6_PKTINFO = 0x32
sysIPV6_RECVHOPLIMIT = 0x33
sysIPV6_HOPLIMIT = 0x34
sysIPV6_RECVHOPOPTS = 0x35
sysIPV6_HOPOPTS = 0x36
sysIPV6_RTHDRDSTOPTS = 0x37
sysIPV6_RECVRTHDR = 0x38
sysIPV6_RTHDR = 0x39
sysIPV6_RECVDSTOPTS = 0x3a
sysIPV6_DSTOPTS = 0x3b
sysIPV6_RECVPATHMTU = 0x3c
sysIPV6_PATHMTU = 0x3d
sysIPV6_DONTFRAG = 0x3e
sysIPV6_RECVTCLASS = 0x42
sysIPV6_TCLASS = 0x43
sysIPV6_ADDR_PREFERENCES = 0x48
sysIPV6_PREFER_SRC_TMP = 0x1
sysIPV6_PREFER_SRC_PUBLIC = 0x2
sysIPV6_PREFER_SRC_PUBTMP_DEFAULT = 0x100
sysIPV6_PREFER_SRC_COA = 0x4
sysIPV6_PREFER_SRC_HOME = 0x400
sysIPV6_PREFER_SRC_CGA = 0x8
sysIPV6_PREFER_SRC_NONCGA = 0x800
sysIPV6_MINHOPCOUNT = 0x49
sysIPV6_ORIGDSTADDR = 0x4a
sysIPV6_RECVORIGDSTADDR = 0x4a
sysIPV6_TRANSPARENT = 0x4b
sysIPV6_UNICAST_IF = 0x4c
sysICMPV6_FILTER = 0x1
sysICMPV6_FILTER_BLOCK = 0x1
sysICMPV6_FILTER_PASS = 0x2
sysICMPV6_FILTER_BLOCKOTHERS = 0x3
sysICMPV6_FILTER_PASSONLY = 0x4
sysSizeofKernelSockaddrStorage = 0x80
sysSizeofSockaddrInet6 = 0x1c
sysSizeofInet6Pktinfo = 0x14
sysSizeofIPv6Mtuinfo = 0x20
sysSizeofIPv6FlowlabelReq = 0x20
sysSizeofIPv6Mreq = 0x14
sysSizeofGroupReq = 0x88
sysSizeofGroupSourceReq = 0x108
sysSizeofICMPv6Filter = 0x20
)
type sysKernelSockaddrStorage struct {
Family uint16
X__data [126]int8
}
type sysSockaddrInet6 struct {
Family uint16
Port uint16
Flowinfo uint32
Addr [16]byte /* in6_addr */
Scope_id uint32
}
type sysInet6Pktinfo struct {
Addr [16]byte /* in6_addr */
Ifindex int32
}
type sysIPv6Mtuinfo struct {
Addr sysSockaddrInet6
Mtu uint32
}
type sysIPv6FlowlabelReq struct {
Dst [16]byte /* in6_addr */
Label uint32
Action uint8
Share uint8
Flags uint16
Expires uint16
Linger uint16
X__flr_pad uint32
}
type sysIPv6Mreq struct {
Multiaddr [16]byte /* in6_addr */
Ifindex int32
}
type sysGroupReq struct {
Interface uint32
Pad_cgo_0 [4]byte
Group sysKernelSockaddrStorage
}
type sysGroupSourceReq struct {
Interface uint32
Pad_cgo_0 [4]byte
Group sysKernelSockaddrStorage
Source sysKernelSockaddrStorage
}
type sysICMPv6Filter struct {
Data [8]uint32
}

View File

@@ -1,156 +0,0 @@
// Created by cgo -godefs - DO NOT EDIT
// cgo -godefs defs_linux.go
// +build linux,mips64le
package ipv6
const (
sysIPV6_ADDRFORM = 0x1
sysIPV6_2292PKTINFO = 0x2
sysIPV6_2292HOPOPTS = 0x3
sysIPV6_2292DSTOPTS = 0x4
sysIPV6_2292RTHDR = 0x5
sysIPV6_2292PKTOPTIONS = 0x6
sysIPV6_CHECKSUM = 0x7
sysIPV6_2292HOPLIMIT = 0x8
sysIPV6_NEXTHOP = 0x9
sysIPV6_FLOWINFO = 0xb
sysIPV6_UNICAST_HOPS = 0x10
sysIPV6_MULTICAST_IF = 0x11
sysIPV6_MULTICAST_HOPS = 0x12
sysIPV6_MULTICAST_LOOP = 0x13
sysIPV6_ADD_MEMBERSHIP = 0x14
sysIPV6_DROP_MEMBERSHIP = 0x15
sysMCAST_JOIN_GROUP = 0x2a
sysMCAST_LEAVE_GROUP = 0x2d
sysMCAST_JOIN_SOURCE_GROUP = 0x2e
sysMCAST_LEAVE_SOURCE_GROUP = 0x2f
sysMCAST_BLOCK_SOURCE = 0x2b
sysMCAST_UNBLOCK_SOURCE = 0x2c
sysMCAST_MSFILTER = 0x30
sysIPV6_ROUTER_ALERT = 0x16
sysIPV6_MTU_DISCOVER = 0x17
sysIPV6_MTU = 0x18
sysIPV6_RECVERR = 0x19
sysIPV6_V6ONLY = 0x1a
sysIPV6_JOIN_ANYCAST = 0x1b
sysIPV6_LEAVE_ANYCAST = 0x1c
sysIPV6_FLOWLABEL_MGR = 0x20
sysIPV6_FLOWINFO_SEND = 0x21
sysIPV6_IPSEC_POLICY = 0x22
sysIPV6_XFRM_POLICY = 0x23
sysIPV6_RECVPKTINFO = 0x31
sysIPV6_PKTINFO = 0x32
sysIPV6_RECVHOPLIMIT = 0x33
sysIPV6_HOPLIMIT = 0x34
sysIPV6_RECVHOPOPTS = 0x35
sysIPV6_HOPOPTS = 0x36
sysIPV6_RTHDRDSTOPTS = 0x37
sysIPV6_RECVRTHDR = 0x38
sysIPV6_RTHDR = 0x39
sysIPV6_RECVDSTOPTS = 0x3a
sysIPV6_DSTOPTS = 0x3b
sysIPV6_RECVPATHMTU = 0x3c
sysIPV6_PATHMTU = 0x3d
sysIPV6_DONTFRAG = 0x3e
sysIPV6_RECVTCLASS = 0x42
sysIPV6_TCLASS = 0x43
sysIPV6_ADDR_PREFERENCES = 0x48
sysIPV6_PREFER_SRC_TMP = 0x1
sysIPV6_PREFER_SRC_PUBLIC = 0x2
sysIPV6_PREFER_SRC_PUBTMP_DEFAULT = 0x100
sysIPV6_PREFER_SRC_COA = 0x4
sysIPV6_PREFER_SRC_HOME = 0x400
sysIPV6_PREFER_SRC_CGA = 0x8
sysIPV6_PREFER_SRC_NONCGA = 0x800
sysIPV6_MINHOPCOUNT = 0x49
sysIPV6_ORIGDSTADDR = 0x4a
sysIPV6_RECVORIGDSTADDR = 0x4a
sysIPV6_TRANSPARENT = 0x4b
sysIPV6_UNICAST_IF = 0x4c
sysICMPV6_FILTER = 0x1
sysICMPV6_FILTER_BLOCK = 0x1
sysICMPV6_FILTER_PASS = 0x2
sysICMPV6_FILTER_BLOCKOTHERS = 0x3
sysICMPV6_FILTER_PASSONLY = 0x4
sysSizeofKernelSockaddrStorage = 0x80
sysSizeofSockaddrInet6 = 0x1c
sysSizeofInet6Pktinfo = 0x14
sysSizeofIPv6Mtuinfo = 0x20
sysSizeofIPv6FlowlabelReq = 0x20
sysSizeofIPv6Mreq = 0x14
sysSizeofGroupReq = 0x88
sysSizeofGroupSourceReq = 0x108
sysSizeofICMPv6Filter = 0x20
)
type sysKernelSockaddrStorage struct {
Family uint16
X__data [126]int8
}
type sysSockaddrInet6 struct {
Family uint16
Port uint16
Flowinfo uint32
Addr [16]byte /* in6_addr */
Scope_id uint32
}
type sysInet6Pktinfo struct {
Addr [16]byte /* in6_addr */
Ifindex int32
}
type sysIPv6Mtuinfo struct {
Addr sysSockaddrInet6
Mtu uint32
}
type sysIPv6FlowlabelReq struct {
Dst [16]byte /* in6_addr */
Label uint32
Action uint8
Share uint8
Flags uint16
Expires uint16
Linger uint16
X__flr_pad uint32
}
type sysIPv6Mreq struct {
Multiaddr [16]byte /* in6_addr */
Ifindex int32
}
type sysGroupReq struct {
Interface uint32
Pad_cgo_0 [4]byte
Group sysKernelSockaddrStorage
}
type sysGroupSourceReq struct {
Interface uint32
Pad_cgo_0 [4]byte
Group sysKernelSockaddrStorage
Source sysKernelSockaddrStorage
}
type sysICMPv6Filter struct {
Data [8]uint32
}
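These generated constants back the socket options exposed through the golang.org/x/net/ipv6 package vendored here; as a rough usage sketch (standalone, not part of the diff), SetMulticastHopLimit corresponds to the sysIPV6_MULTICAST_HOPS value above:

package main

import (
	"fmt"
	"net"

	"golang.org/x/net/ipv6"
)

func main() {
	// Open any IPv6 UDP socket; the loopback address keeps the example local.
	c, err := net.ListenPacket("udp6", "[::1]:0")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer c.Close()

	p := ipv6.NewPacketConn(c)
	// Under the hood this sets the option numbered by sysIPV6_MULTICAST_HOPS.
	if err := p.SetMulticastHopLimit(5); err != nil {
		fmt.Println(err)
		return
	}
	hl, _ := p.MulticastHopLimit()
	fmt.Println("multicast hop limit:", hl)
}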

View File

@@ -1,27 +0,0 @@
Copyright (c) 2009 The Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

View File

@@ -1,22 +0,0 @@
Additional IP Rights Grant (Patents)
"This implementation" means the copyrightable works distributed by
Google as part of the Go project.
Google hereby grants to You a perpetual, worldwide, non-exclusive,
no-charge, royalty-free, irrevocable (except as stated in this section)
patent license to make, have made, use, offer to sell, sell, import,
transfer and otherwise run, modify and propagate the contents of this
implementation of Go, where such license applies only to those patent
claims, both currently owned or controlled by Google and acquired in
the future, licensable by Google that are necessarily infringed by this
implementation of Go. This grant does not include claims that would be
infringed only as a consequence of further modification of this
implementation. If you or your agent or exclusive licensee institute or
order or agree to the institution of patent litigation against any
entity (including a cross-claim or counterclaim in a lawsuit) alleging
that this implementation of Go or any code incorporated within this
implementation of Go constitutes direct or contributory patent
infringement, or inducement of patent infringement, then any patent
rights granted to you under this License for this implementation of Go
shall terminate as of the date such litigation is filed.

NICKS (1 line changed)
View File

@@ -41,6 +41,7 @@ KayoticSully <kayoticsully@gmail.com>
kilburn <kilburn@la3.org>
kluppy <kluppy@going2blue.com>
kozec <kozec@kozec.com>
kralo <max.schulze@online.de>
krozycki <rozycki.karol@gmail.com>
letiemble <laurent.etiemble@gmail.com> <laurent.etiemble@monobjc.net>
LordLandon <lordlandon@gmail.com>

View File

@@ -188,7 +188,18 @@ func setup() {
func test(pkg string) {
setBuildEnv()
runPrint("go", "test", "-short", "-race", "-timeout", "60s", pkg)
useRace := runtime.GOARCH == "amd64"
switch runtime.GOOS {
case "darwin", "linux", "freebsd", "windows":
default:
useRace = false
}
if useRace {
runPrint("go", "test", "-short", "-race", "-timeout", "60s", pkg)
} else {
runPrint("go", "test", "-short", "-timeout", "60s", pkg)
}
}
func bench(pkg string) {
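The new test body only passes -race where the race detector is available; a self-contained sketch of the same gating logic (the function name here is illustrative, not from the repository):

package main

import (
	"fmt"
	"runtime"
)

// raceSupported mirrors the check in the diff: -race needs amd64 and one of
// the listed operating systems.
func raceSupported() bool {
	if runtime.GOARCH != "amd64" {
		return false
	}
	switch runtime.GOOS {
	case "darwin", "linux", "freebsd", "windows":
		return true
	default:
		return false
	}
}

func main() {
	fmt.Println("run tests with -race:", raceSupported())
}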

View File

@@ -48,7 +48,7 @@ var locations = map[locationEnum]string{
locKeyFile: "${config}/key.pem",
locHTTPSCertFile: "${config}/https-cert.pem",
locHTTPSKeyFile: "${config}/https-key.pem",
locDatabase: "${config}/index-v0.13.0.db",
locDatabase: "${config}/index-v0.11.0.db",
locLogFile: "${config}/syncthing.log", // -logfile on Windows
locCsrfTokens: "${config}/csrftokens.txt",
locPanicLog: "${config}/panic-${timestamp}.log",
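The "${config}" placeholders in the locations map are expanded against the configuration directory at runtime; a minimal sketch of that kind of substitution using os.Expand (the directory value is hypothetical, and the actual helper in the codebase may differ):

package main

import (
	"fmt"
	"os"
)

func main() {
	loc := "${config}/index-v0.11.0.db"
	expanded := os.Expand(loc, func(key string) string {
		if key == "config" {
			return "/home/user/.config/syncthing" // hypothetical config dir
		}
		return ""
	})
	fmt.Println(expanded) // /home/user/.config/syncthing/index-v0.11.0.db
}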

View File

@@ -49,16 +49,15 @@ import (
)
var (
Version = "unknown-dev"
Codename = "Copper Cockroach"
BuildStamp = "0"
BuildDate time.Time
BuildHost = "unknown"
BuildUser = "unknown"
IsRelease bool
IsBeta bool
LongVersion string
allowedVersionExp = regexp.MustCompile(`^v\d+\.\d+\.\d+(-[a-z0-9]+)*(\.\d+)*(\+\d+-g[0-9a-f]+)?(-dirty)?$`)
Version = "unknown-dev"
Codename = "Beryllium Bedbug"
BuildStamp = "0"
BuildDate time.Time
BuildHost = "unknown"
BuildUser = "unknown"
IsRelease bool
IsBeta bool
LongVersion string
)
const (
@@ -90,8 +89,9 @@ const (
func init() {
if Version != "unknown-dev" {
// If not a generic dev build, version string should come from git describe
if !allowedVersionExp.MatchString(Version) {
l.Fatalf("Invalid version string %q;\n\tdoes not match regexp %v", Version, allowedVersionExp)
exp := regexp.MustCompile(`^v\d+\.\d+\.\d+(-[a-z0-9]+)*(\+\d+-g[0-9a-f]+)?(-dirty)?$`)
if !exp.MatchString(Version) {
l.Fatalf("Invalid version string %q;\n\tdoes not match regexp %v", Version, exp)
}
}
@@ -340,7 +340,7 @@ func main() {
if err != nil {
l.Fatalln("Upgrade:", err) // exits 1
}
l.Infoln("Upgraded from", options.upgradeTo)
l.Okln("Upgraded from", options.upgradeTo)
return
}
@@ -464,14 +464,14 @@ func performUpgrade(release upgrade.Release) {
if err != nil {
l.Fatalln("Upgrade:", err)
}
l.Infof("Upgraded to %q", release.Tag)
l.Okf("Upgraded to %q", release.Tag)
} else {
l.Infoln("Attempting upgrade through running Syncthing...")
err = upgradeViaRest()
if err != nil {
l.Fatalln("Upgrade:", err)
}
l.Infoln("Syncthing upgrading")
l.Okln("Syncthing upgrading")
os.Exit(exitUpgrading)
}
}
@@ -640,7 +640,6 @@ func syncthingMain(runtimeOptions RuntimeOptions) {
dbFile := locations[locDatabase]
ldb, err := db.Open(dbFile)
if err != nil {
l.Fatalln("Cannot open database:", err, "- Is another copy of Syncthing already running?")
}
@@ -852,7 +851,7 @@ func syncthingMain(runtimeOptions RuntimeOptions) {
mainService.Stop()
l.Infoln("Exiting")
l.Okln("Exiting")
if runtimeOptions.cpuProfile {
pprof.StopCPUProfile()
@@ -1171,13 +1170,12 @@ func autoUpgrade(cfg *config.Wrapper) {
// suitable time after they have gone out of fashion.
func cleanConfigDirectory() {
patterns := map[string]time.Duration{
"panic-*.log": 7 * 24 * time.Hour, // keep panic logs for a week
"audit-*.log": 7 * 24 * time.Hour, // keep audit logs for a week
"index": 14 * 24 * time.Hour, // keep old index format for two weeks
"index*.converted": 14 * 24 * time.Hour, // keep old converted indexes for two weeks
"config.xml.v*": 30 * 24 * time.Hour, // old config versions for a month
"*.idx.gz": 30 * 24 * time.Hour, // these should for sure no longer exist
"backup-of-v0.8": 30 * 24 * time.Hour, // these neither
"panic-*.log": 7 * 24 * time.Hour, // keep panic logs for a week
"audit-*.log": 7 * 24 * time.Hour, // keep audit logs for a week
"index": 14 * 24 * time.Hour, // keep old index format for two weeks
"config.xml.v*": 30 * 24 * time.Hour, // old config versions for a month
"*.idx.gz": 30 * 24 * time.Hour, // these should for sure no longer exist
"backup-of-v0.8": 30 * 24 * time.Hour, // these neither
}
for pat, dur := range patterns {
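The pattern map above pairs config-directory globs with retention periods; a minimal, self-contained sketch of how such a map could be applied (cleanOld and its directory argument are illustrative, not the actual cleanConfigDirectory implementation):

package main

import (
	"os"
	"path/filepath"
	"time"
)

// cleanOld removes entries in dir that match a pattern and are older than the
// associated retention period.
func cleanOld(dir string, patterns map[string]time.Duration) error {
	now := time.Now()
	for pat, maxAge := range patterns {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			return err
		}
		for _, m := range matches {
			info, err := os.Stat(m)
			if err != nil {
				continue // already gone or unreadable; skip
			}
			if now.Sub(info.ModTime()) > maxAge {
				os.RemoveAll(m)
			}
		}
	}
	return nil
}

func main() {
	_ = cleanOld(os.TempDir(), map[string]time.Duration{
		"panic-*.log": 7 * 24 * time.Hour,
	})
}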

View File

@@ -175,29 +175,3 @@ func TestShortIDCheck(t *testing.T) {
t.Error("Should have gotten an error")
}
}
func TestAllowedVersions(t *testing.T) {
testcases := []struct {
ver string
allowed bool
}{
{"v0.13.0", true},
{"v0.12.11+22-gabcdef0", true},
{"v0.13.0-beta0", true},
{"v0.13.0-beta47", true},
{"v0.13.0-beta47+1-gabcdef0", true},
{"v0.13.0-beta.0", true},
{"v0.13.0-beta.47", true},
{"v0.13.0-beta.0+1-gabcdef0", true},
{"v0.13.0-beta.47+1-gabcdef0", true},
{"v0.13.0-some-weird-but-allowed-tag", true},
{"v0.13.0-not.allowed.to.do.this", false},
{"v0.13.0+not.allowed.to.do.this", false},
}
for i, c := range testcases {
if allowed := allowedVersionExp.MatchString(c.ver); allowed != c.allowed {
t.Errorf("%d: incorrect result %v != %v for %q", i, allowed, c.allowed, c.ver)
}
}
}
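The inline pattern in the diff (the variant without the dotted pre-release segment) rejects forms like v0.13.0-beta.0 that the removed test exercised; a small standalone check, with versions chosen for illustration:

package main

import (
	"fmt"
	"regexp"
)

// The stricter version pattern from the diff, without the (\.\d+)* segment.
var versionExp = regexp.MustCompile(`^v\d+\.\d+\.\d+(-[a-z0-9]+)*(\+\d+-g[0-9a-f]+)?(-dirty)?$`)

func main() {
	for _, v := range []string{"v0.12.19", "v0.12.19+22-gabcdef0", "v0.13.0-beta.0"} {
		fmt.Printf("%-22s %v\n", v, versionExp.MatchString(v))
	}
}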

debian/control (14 lines changed)

View File

@@ -1,8 +1,16 @@
Package: syncthing
Version: {{.version}}
Priority: optional
Section: net
Architecture: {{.arch}}
Depends: libc6, procps
Version: {{.version}}
Homepage: https://syncthing.net/
Maintainer: Syncthing Release Management <release@syncthing.net>
Description: Open Source Continuous File Synchronization
Syncthing does bidirectional synchronization of files between two or
more computers.
Syncthing is an application that lets you synchronize your files across
multiple devices. This means the creation, modification or deletion of files
on one machine will automatically be replicated to your other devices. We
believe your data is your data alone and you deserve to choose where it is
stored. Therefore Syncthing does not upload your data to the cloud but
exchanges your data across your machines as soon as they are online at the
same time.
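The {{.version}} and {{.arch}} placeholders appear to be Go text/template actions filled in at package-build time; a minimal sketch of rendering such a template (field values made up for illustration):

package main

import (
	"os"
	"text/template"
)

const control = `Package: syncthing
Architecture: {{.arch}}
Version: {{.version}}
Homepage: https://syncthing.net/
`

func main() {
	tpl := template.Must(template.New("control").Parse(control))
	// A map keeps the lowercase placeholder keys accessible to the template.
	// Hypothetical values; the real build script supplies these.
	_ = tpl.Execute(os.Stdout, map[string]string{
		"version": "v0.12.20",
		"arch":    "amd64",
	})
}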

View File

@@ -0,0 +1,10 @@
[Unit]
Description=Restart Syncthing after resume
After=suspend.target
[Service]
Type=oneshot
ExecStart=/usr/bin/pkill -HUP -x syncthing
[Install]
WantedBy=suspend.target

View File

@@ -181,7 +181,7 @@
"The aggregated statistics are publicly available at {%url%}.": "Τα στατιστικά που έχουν συλλεγεί είναι δημόσια διαθέσιμα στο {{url}}.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "Οι ρυθμίσεις έχουν αποθηκευτεί αλλά δεν έχουν ενεργοποιηθεί. Πρέπει να επανεκκινήσεις το Syncthing για να ισχύσουν οι νέες ρυθμίσεις.",
"The device ID cannot be blank.": "Η ταυτότητα της συσκευής δεν μπορεί να είναι κενή",
"The device ID to enter here can be found in the \"Actions > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "The device ID to enter here can be found in the \"Actions > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).",
"The device ID to enter here can be found in the \"Actions > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "Η ταυτότητα της συσκευής που θα μπει εδώ βρίσκεται στο μενού «Ενέργειες > Εμφάνιση ταυτότητας» στην άλλη συσκευή. Κενοί χαρακτήρες και παύλες είναι προαιρετικοί (θα αγνοηθούν).",
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "Η ταυτότητα της συσκευής που θα μπει εδώ βρίσκεται στο μενού «Επεξεργασία > Εμφάνιση ταυτότητας» στην άλλη συσκευή. Κενοί χαρακτήρες και παύλες είναι προαιρετικοί (απλά θα αγνοηθούν).",
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "Η κρυπτογραφημένη αναφορά χρήσης στέλνεται καθημερινά. Χρησιμοποιείται για να παραχθούν στατιστικές για τα λειτουργικά συστήματα που χρησιμοποιούνται, τα μεγέθη των φακέλων και τις εκδόσεις των προγραμμάτων. Αν στο μέλλον συμπεριληφθούν και άλλα δεδομένα στην αναφορά χρήσης, τότε αυτό το παράθυρο θα εμφανιστεί ξανά.",
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "Η ταυτότητα συσκευής που έδωσες δε φαίνεται έγκυρη. Θα πρέπει να είναι μια σειρά από 52 ή 56 χαρακτήρες (γράμματα και αριθμοί). Τα κενά και οι παύλες είναι προαιρετικά (αδιάφορα).",
@@ -194,16 +194,16 @@
"The following items could not be synchronized.": "Δεν ήταν δυνατόν να συγχρονιστούν τα παρακάτω αρχεία.",
"The maximum age must be a number and cannot be blank.": "Η μέγιστη ηλικία πρέπει να είναι αριθμός και σίγουρα όχι κενό.",
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "Η μέγιστη ηλικία παλιότερων εκδόσεων (σε ημέρες, αν δώσεις 0 οι παλιότερες εκδόσεις θα διατηρούνται για πάντα).",
"The minimum free disk space percentage must be a non-negative number between 0 and 100 (inclusive).": "The minimum free disk space percentage must be a non-negative number between 0 and 100 (inclusive).",
"The minimum free disk space percentage must be a non-negative number between 0 and 100 (inclusive).": "Το ποσοστό του ελάχιστου διαθέσιμου αποθηκευτικόυ χώρου πρέπει να είναι έναν μη-αρνητικός αριθμός μεταξύ του 0 και του 100 (συμπεριλαμβανομένων)",
"The number of days must be a number and cannot be blank.": "Ο αριθμός ημερών πρέπει να είναι αριθμός και σίγουρα όχι κενό.",
"The number of days to keep files in the trash can. Zero means forever.": "Ο αριθμός ημερών που θα διατηρούντα τα αρχεία στον κάδο. Μηδέν σημαίνει διατήρηση για πάντα.",
"The number of old versions to keep, per file.": "Πόσες παλιότερες εκδόσεις θα διατηρούνται, ανά αρχείο.",
"The number of versions must be a number and cannot be blank.": "Ο αριθμός εκδόσεων πρέπει να είναι αριθμός και σίγουρα όχι κενό.",
"The path cannot be blank.": "Το μονοπάτι δεν μπορεί να είναι κενό.",
"The rate limit must be a non-negative number (0: no limit)": "The rate limit must be a non-negative number (0: no limit)",
"The rate limit must be a non-negative number (0: no limit)": "Το όριο ταχύτητας πρέπει να είναι ένας μη-αρνητικός αριθμός (0: χωρίς όριο)",
"The rescan interval must be a non-negative number of seconds.": "Ο χρόνος επανελέγχου για αλλαγές είναι σε δευτερόλεπτα (δηλ. θετικός αριθμός).",
"They are retried automatically and will be synced when the error is resolved.": "Όταν επιλυθεί το σφάλμα θα κατεβούν και θα συχρονιστούν αυτόματα.",
"This can easily give hackers access to read and change any files on your computer.": "This can easily give hackers access to read and change any files on your computer.",
"This can easily give hackers access to read and change any files on your computer.": "Αυτό μπορεί εύκολα να δώσει πρόσβαση ανάγνωσης και επεξεργασίας αρχείων του υπολογιστή σας σε χάκερς.",
"This is a major version upgrade.": "Αυτή είναι μιας σημαντική αναβάθμιση.",
"Trash Can File Versioning": "Ο κάδος μπορεί να τηρεί εκδόσεις",
"Unknown": "Άγνωστο",

View File

@@ -76,7 +76,7 @@
"Generate": "Generoi",
"Global Discovery": "Globaali etsintä",
"Global Discovery Server": "Globaali etsintäpalvelin",
"Global Discovery Servers": "Global Discovery Servers",
"Global Discovery Servers": "Globaalit etsintäpalvelimet",
"Global State": "Globaali tila",
"Help": "Apua",
"Home page": "Kotisivu",

View File

@@ -18,7 +18,7 @@
"Alphabetic": "Alphabétique",
"An external command handles the versioning. It has to remove the file from the synced folder.": "Une commande externe gère les versions de fichiers. Elle supprime les fichiers dans le dossier synchronisé.",
"Anonymous Usage Reporting": "Rapport anonyme de statistiques d'utilisation",
"Any devices configured on an introducer device will be added to this device as well.": "Toute machine ajoutée depuis une machine introductrice sera aussi ajoutée sur cette machine.",
"Any devices configured on an introducer device will be added to this device as well.": "Toute machine ajoutée depuis une machine initiatrice sera aussi ajoutée sur cette machine.",
"Automatic upgrades": "Mises à jour automatiques",
"Be careful!": "Faites attention !",
"Bugs": "Bugs",
@@ -44,7 +44,7 @@
"Disconnected": "Déconnecté",
"Discovery": "Découverte",
"Documentation": "Documentation",
"Download Rate": "Débit de réception",
"Download Rate": "Vitesse de réception",
"Downloaded": "Téléchargé",
"Downloading": "En cours de téléchargement",
"Edit": "Éditer",
@@ -83,7 +83,7 @@
"Ignore": "Ignorer",
"Ignore Patterns": "Modèles à éviter",
"Ignore Permissions": "Ignorer les permissions",
"Incoming Rate Limit (KiB/s)": "Limite du débit entrant (KiB/s)",
"Incoming Rate Limit (KiB/s)": "Limite du débit de réception (Ko/s)",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "Une configuration incorrecte peut créer des dommages dans vos dossiers et mettre hors-service Syncthing",
"Introducer": "Initiateur",
"Inversion of the given condition (i.e. do not exclude)": "Inverser la condition donnée (i.e. ne pas exclure)",
@@ -114,7 +114,7 @@
"Options": "Options",
"Out of Sync": "Désynchronisé",
"Out of Sync Items": "Fichiers non synchronisés",
"Outgoing Rate Limit (KiB/s)": "Limite du débit sortant (KiB/s)",
"Outgoing Rate Limit (KiB/s)": "Limite du débit d'émission (Ko/s)",
"Override Changes": "Écraser les changements",
"Path to the folder on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Le chemin du dossier sur l'ordinateur local sera créé si il n'existe pas. Le caractère tilde (~) peut être utilisé comme raccourci vers",
"Path where versions should be stored (leave empty for the default .stversions folder in the folder).": "Chemin où les versions doivent être conservées (laisser vide pour le chemin par défaut de .stversions dans le répertoire)",
@@ -214,7 +214,7 @@
"Upgrade": "Mettre à jour",
"Upgrade To {%version%}": "Mettre à jour vers {{version}}",
"Upgrading": "Mise à jour de Syncthing",
"Upload Rate": "Débit d'envoi",
"Upload Rate": "Vitesse d'émission",
"Uptime": "Durée de fonctionnement",
"Use HTTPS for GUI": "Utiliser l'HTTPS pour le GUI",
"Version": "Version",

View File

@@ -169,7 +169,7 @@
"Statistics": "Statistiken",
"Stopped": "Stoppe",
"Support": "Understeuning",
"Sync Protocol Listen Addresses": "Sync-protokol harkadres",
"Sync Protocol Listen Addresses": "Sync-protokolharkadressen",
"Syncing": "Oan it Syncen",
"Syncthing has been shut down.": "Syncthing is útsetten",
"Syncthing includes the following software or portions thereof:": "Syncthing befettet de folgende sêftguod of parten dêrfan:",

View File

@@ -12,7 +12,7 @@
"Address": "Adresas",
"Addresses": "Adresai",
"Advanced": "Pažangus",
"Advanced Configuration": "Pažangus nustatymai",
"Advanced Configuration": "Išplėstinė konfigūracija",
"All Data": "Visiems duomenims",
"Allow Anonymous Usage Reporting?": "Siųsti anonimišką vartojimo ataskaitą?",
"Alphabetic": "Abėcėlės tvarka",
@@ -34,7 +34,7 @@
"Copied from original": "Nukopijuota iš originalo",
"Copyright © 2015 the following Contributors:": "Visos teisės saugomos © 2015 šių bendraautorių:",
"Danger!": "Pavojus!",
"Delete": "Trinti",
"Delete": "Ištrinti",
"Deleted": "Ištrinta",
"Device ID": "Įrenginio ID",
"Device Identification": "Įrenginio identifikacija",
@@ -111,15 +111,15 @@
"OK": "Gerai",
"Off": "Netaikoma",
"Oldest First": "Seniausi pirmiau",
"Options": "Nustatymai",
"Options": "Parametrai",
"Out of Sync": "Išsisinchronizavę",
"Out of Sync Items": "Nesutikrinta",
"Outgoing Rate Limit (KiB/s)": "Išeinančio srauto maksimalus greitis (KiB/s)",
"Override Changes": "Perrašyti pakeitimus",
"Path to the folder on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Kelias iki aplanko šiame kompiuteryje. Bus sukurtas, jei neegzistuoja. Tildės simbolis (~) gali būti naudojamas kaip trumpinys",
"Path where versions should be stored (leave empty for the default .stversions folder in the folder).": "Kelias, kur bus saugomos versijos (palikite tuščią numatytam .stversions aplankui).",
"Pause": "Sustabdyti",
"Paused": "Sustabdyta",
"Pause": "Pristabdyti",
"Paused": "Pristabdyta",
"Please consult the release notes before performing a major upgrade.": "Peržvelkite laidos informaciją prieš atlikdami stambų atnaujinimą.",
"Please set a GUI Authentication User and Password in the Settings dialog.": "Prašome nustatymų dialoge nustatyti valdymo skydelio vartotojo vardą ir slaptažodį.",
"Please wait": "Prašome palaukti",

View File

@@ -1,5 +1,5 @@
{
"A device with that ID is already added.": "A device with that ID is already added.",
"A device with that ID is already added.": "Urządzenie o tym ID jest już dodane.",
"A negative number of days doesn't make sense.": "Ujemna ilość dni nie ma sensu.",
"A new major version may not be compatible with previous versions.": "Nowa wersja może być niekompatybilna z poprzednimi wersjami.",
"API Key": "Klucz API",
@@ -33,7 +33,7 @@
"Copied from elsewhere": "Skopiowane z innego miejsca ",
"Copied from original": "Skopiowane z oryginału",
"Copyright © 2015 the following Contributors:": "Copyright © 2015: ",
"Danger!": "Danger!",
"Danger!": "Niebezpieczne!",
"Delete": "Usuń",
"Deleted": "Usunięto",
"Device ID": "ID urządzenia",
@@ -51,7 +51,7 @@
"Edit Device": "Edytuj urządzenie",
"Edit Folder": "Edytuj folder",
"Editing": "Edytowanie",
"Enable Relaying": "Enable Relaying",
"Enable Relaying": "Włącz przekazywanie",
"Enable UPnP": "Włącz UPnP",
"Enter comma separated (\"tcp://ip:port\", \"tcp://host:port\") addresses or \"dynamic\" to perform automatic discovery of the address.": "Wpisz oddzielone przecinkiem adresy (\"tcp://ip:port\", \"tcp://host:port\") lub \"dynamic\" by przeprowadzić automatyczne odnalezienie adresu.",
"Enter ignore patterns, one per line.": "Wprowadź wzorce ignorowania, jeden w każdej linii.",
@@ -76,7 +76,7 @@
"Generate": "Generuj",
"Global Discovery": "Globalne odnajdywanie",
"Global Discovery Server": "Globalny serwer rozgłoszeniowy",
"Global Discovery Servers": "Global Discovery Servers",
"Global Discovery Servers": "Globalne serwery odkrywania",
"Global State": "Status globalny",
"Help": "Pomoc",
"Home page": "Strona domowa",
@@ -121,14 +121,14 @@
"Pause": "Zatrzymaj",
"Paused": "Zatrzymany",
"Please consult the release notes before performing a major upgrade.": "Zaleca się przeanalizowanie \"release notes\" przed przeprowadzeniem znaczącej aktualizacji.",
"Please set a GUI Authentication User and Password in the Settings dialog.": "Please set a GUI Authentication User and Password in the Settings dialog.",
"Please set a GUI Authentication User and Password in the Settings dialog.": "Ustaw proszę użytkownika i hasło dostępowe do GUI w Ustawieniach",
"Please wait": "Proszę czekać",
"Preview": "Podgląd",
"Preview Usage Report": "Podgląd raportu użycia.",
"Quick guide to supported patterns": "Krótki przewodnik po obsługiwanych wzorcach",
"RAM Utilization": "Użycie pamięci RAM",
"Random": "Losowo",
"Relay Servers": "Relay Servers",
"Relay Servers": "Serwery przekazywania",
"Relayed via": "Przekazane przez",
"Relays": "Przekaźniki",
"Release Notes": "Informacje o wydaniu",
@@ -142,7 +142,7 @@
"Resume": "Wznów",
"Reused": "Ponownie użyte",
"Save": "Zapisz",
"Scan Time Remaining": "Scan Time Remaining",
"Scan Time Remaining": "Pozostały czas skanowania",
"Scanning": "Skanowanie",
"Select the devices to share this folder with.": "Wybierz urządzenie, któremu udostępnić folder.",
"Select the folders to share with this device.": "Wybierz foldery do współdzielenia z tym urządzeniem.",
@@ -155,7 +155,7 @@
"Shared With": "Współdzielony z",
"Short identifier for the folder. Must be the same on all cluster devices.": "Krótki identyfikator folderu. Musi być taki sam na wszystkich urządzeniach.",
"Show ID": "Pokaż ID",
"Show QR": "Show QR",
"Show QR": "Pokaż kod QR",
"Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.": "Pokazane w statusie zamiast ID urządzenia.Zostanie wysłane do innych urządzeń jako opcjonalna domyślna nazwa.",
"Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.": "Pokazane w statusie zamiast ID urządzenia. Zostanie zaktualizowane do nazwy urządzenia jeżeli pozostanie puste.",
"Shutdown": "Wyłącz",
@@ -177,11 +177,11 @@
"Syncthing is upgrading.": "Aktualizowanie Syncthing",
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing wydaje się być wyłączony lub jest problem z twoim połączeniem internetowym. Próbuje ponownie...",
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing nie może przetworzyć twojego zapytania. Proszę przeładuj stronę lub zrestartuj Syncthing, jeśli problem pozostanie.",
"The Syncthing admin interface is configured to allow remote access without a password.": "The Syncthing admin interface is configured to allow remote access without a password.",
"The Syncthing admin interface is configured to allow remote access without a password.": "Interfejs administracyjny Syncthing jest skonfigurowany w sposób pozwalający na zdalny dostęp bez hasła.",
"The aggregated statistics are publicly available at {%url%}.": "Zebrane statystyki są publicznie dostępne pod adresem {{url}}.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "Konfiguracja została zapisana lecz nie jest aktywna. Syncthing musi zostać zrestartowany aby aktywować nową konfiguracje.",
"The device ID cannot be blank.": "ID urządzenia nie może być puste.",
"The device ID to enter here can be found in the \"Actions > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "The device ID to enter here can be found in the \"Actions > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).",
"The device ID to enter here can be found in the \"Actions > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "ID urządzenia może być znalezione w \"Akcja > Pokaż ID\" na innym urządzeniu. Spacje i myślniki są opcjonalne (są one ignorowane).",
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "ID urządzenia można znaleźć w \"Edytuj -> Pokaż ID\" na zdalnym urządzeniu.\nOdstępy i myślniki są opcjonalne (ignorowane)",
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "Zaszyfrowane raporty użycia są wysyłane codziennie. Są one używane w celach statystycznych platform, rozmiarów katalogów i wersji programu. Jeżeli zgłaszane dane ulegną zmianie, ponownie wyświetli się ta informacja.",
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "Wprowadzone ID urządzenia wygląda na niepoprawne. Musi zawierać 52 lub 56 znaków składających się z liter i cyfr. Odstępy i myślniki są opcjonalne.",
@@ -203,7 +203,7 @@
"The rate limit must be a non-negative number (0: no limit)": "Ograniczenie prędkości powinno być nieujemną liczbą całkowitą (0: brak ograniczeń)",
"The rescan interval must be a non-negative number of seconds.": "Interwał skanowania musi być niezerową liczbą sekund.",
"They are retried automatically and will be synced when the error is resolved.": "Ponowne próby zachodzą automatycznie, synchronizacja nastąpi po usunięciu usterki.",
"This can easily give hackers access to read and change any files on your computer.": "This can easily give hackers access to read and change any files on your computer.",
"This can easily give hackers access to read and change any files on your computer.": "Może to umożliwić osobom trzecim dostęp do odczytu i zmian dowolnych plików na urządzeniu.",
"This is a major version upgrade.": "To jest ważna aktualizacja",
"Trash Can File Versioning": "Kontrola werjsi plików w koszu",
"Unknown": "Nieznany",

View File

@@ -1,5 +1,5 @@
{
"A device with that ID is already added.": "A device with that ID is already added.",
"A device with that ID is already added.": "En enhet med det ID är redan tillagt.",
"A negative number of days doesn't make sense.": "Negativt antal dagar är inte troligt.",
"A new major version may not be compatible with previous versions.": "En ny huvudversion kan eventuellt vara inkompatibel med tidigare versioner.",
"API Key": "API-nyckel",
@@ -33,7 +33,7 @@
"Copied from elsewhere": "Kopierat utifrån",
"Copied from original": "Oförändrat",
"Copyright © 2015 the following Contributors:": "Copyright © 2015 följande medverkande:",
"Danger!": "Danger!",
"Danger!": "Fara!",
"Delete": "Radera",
"Deleted": "Borttaget",
"Device ID": "Enhets-ID",
@@ -142,7 +142,7 @@
"Resume": "Återuppta",
"Reused": "Återanvänt",
"Save": "Spara",
"Scan Time Remaining": "Scan Time Remaining",
"Scan Time Remaining": "Skanna Återstående Tid",
"Scanning": "Uppdaterar",
"Select the devices to share this folder with.": "Ange enheterna att dela den här katalogen med.",
"Select the folders to share with this device.": "Välj kataloger att dela med den här enheten.",
@@ -155,7 +155,7 @@
"Shared With": "Delad med",
"Short identifier for the folder. Must be the same on all cluster devices.": "Kort identifieringssträng för katalogen. Måste vara samma på alla enheter i klustret.",
"Show ID": "Visa ID",
"Show QR": "Show QR",
"Show QR": "Visa QR",
"Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.": "Visas i stället för enhets-ID. Skickas till andra enheter som namn på denna enhet.",
"Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.": "Visas i stället för enhets-ID. Sätts till namnet på den andra enheten vid första anslutning om det lämnas tomt.",
"Shutdown": "Stäng av",
@@ -177,11 +177,11 @@
"Syncthing is upgrading.": "Syncthing uppgraderas.",
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing verkar avstängd, eller finns det problem med din Internetanslutning. Försöker igen...",
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing verkar ha drabbats av ett problem. Uppdatera sidan eller starta om Syncthing om problemet kvarstår.",
"The Syncthing admin interface is configured to allow remote access without a password.": "The Syncthing admin interface is configured to allow remote access without a password.",
"The Syncthing admin interface is configured to allow remote access without a password.": "Syncthing administratör gränssnittet är konfigurerat för att tillåta fjärrtillträde utan ett lösenord.",
"The aggregated statistics are publicly available at {%url%}.": "Sammanställd statistik finns publikt tillgänglig på {{url}}.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "Konfigurationen har sparats men inte aktiverats. Syncthing måste startas om för att aktivera den nya konfigurationen.",
"The device ID cannot be blank.": "Enhets-ID kan inte vara tomt.",
"The device ID to enter here can be found in the \"Actions > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "The device ID to enter here can be found in the \"Actions > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).",
"The device ID to enter here can be found in the \"Actions > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "Enhets-ID som behövs här kan du hitta i \"Redigera > Visa ID\"-dialogen på den andra enheten. Mellanrum och bindestreck är valfria (ignoreras).",
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "Enhets-ID som behövs här kan du hitta i \"Redigera > Visa ID\"-dialogen på den andra enheten. Mellanrum och bindestreck är valfria (ignoreras).",
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "Den krypterade användarstatistiken skickas dagligen. Den används för att spåra vanliga plattformar, katalogstorlekar och versioner. Om datan som rapporteras ändras så kommer du att bli tillfrågad igen.",
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "Det inmatade enhets-ID:t verkar inte korrekt. Det ska vara en 52 eller 56 teckens sträng bestående av siffror och bokstäver, eventuellt med mellanrum och bindestreck.",

View File

@@ -181,7 +181,7 @@
"The aggregated statistics are publicly available at {%url%}.": "Toplanan halka açık istatistiklere ulaşabileceğiniz adres {{url}}.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "Yapılandırma kaydedildi ancak etkinleştirilmedi. Etkinleştirmek için Syncthing yeniden başlatılmalı.",
"The device ID cannot be blank.": "Cihaz ID boş olamaz.",
"The device ID to enter here can be found in the \"Actions > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "The device ID to enter here can be found in the \"Actions > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).",
"The device ID to enter here can be found in the \"Actions > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "Buraya girilecek olan aygıt ID'si diğer cihazlarda, \"Eylemler > ID Göster\" penceresinde bulunabilir. Boşluklar ve çizgiler isteğe bağlıdır (yoksayılmış).",
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "Buraya girilecek cihaz ID'si diğer düğümde \"Düzenle > ID Göster\" menüsünden bulunabilir. Boşluk ve kısa çizginin olup olmaması önemli değildir. (İhmal edilir)",
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "Şifrelenmiş kullanım bilgisi günlük olarak gönderilir. Platform, klasör büyüklüğü ve uygulama sürümü hakkında bilgi toplanır. Toplanan bilgi çeşidi değişecek olursa, sizden tekrar onay istenecek.",
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "Girilen cihaz ID'si geçerli gibi gözükmüyor. 52 ya da 56 karakter uzunluğunda, harf ve rakamlardan oluşmalı. Boşlukların ve kısa çizgilerin olup olmaması önemli değildir.",

View File

@@ -1,5 +1,5 @@
{
"A device with that ID is already added.": "Пристрій з таким ID вже додано.",
"A device with that ID is already added.": "Пристрій з таким ID вже додано раніше.",
"A negative number of days doesn't make sense.": "Від'ємна кількість днів немає сенсу.",
"A new major version may not be compatible with previous versions.": "Нова мажорна версія може бути несумісною із попередніми версіями.",
"API Key": "API ключ",
@@ -42,7 +42,7 @@
"Device {%device%} ({%address%}) wants to connect. Add new device?": "Пристрій {{device}} ({{address}}) намагається під’єднатися. Додати новий пристрій?",
"Devices": "Пристрої",
"Disconnected": "З’єднання відсутнє",
"Discovery": "Виявлення",
"Discovery": "Сервери обходу NAT",
"Documentation": "Документація",
"Download Rate": "Швидкість завантаження",
"Downloaded": "Завантажено",
@@ -51,7 +51,7 @@
"Edit Device": "Редагувати пристрій",
"Edit Folder": "Редагувати директорію",
"Editing": "Редагування",
"Enable Relaying": "Enable Relaying",
"Enable Relaying": "Увімкнути ретрансляцію (relaying)",
"Enable UPnP": "Увімкнути UPnP",
"Enter comma separated (\"tcp://ip:port\", \"tcp://host:port\") addresses or \"dynamic\" to perform automatic discovery of the address.": "Введіть розділені комою (\"tcp://ip:port\", \"tcp://host:port\") адреси або \"dynamic\" для автоматичного визначення адреси.",
"Enter ignore patterns, one per line.": "Введіть шаблони ігнорування, по одному на рядок.",
@@ -63,10 +63,10 @@
"File permission bits are ignored when looking for changes. Use on FAT file systems.": "Біти прав доступу до файлів будуть проігноровані під час пошуку змін. Використовуйте на файлових системах FAT.",
"Files are moved to .stversions folder when replaced or deleted by Syncthing.": "Файли, що замінюються або видаляються Syncthing, переміщуються у директорію .stversions. ",
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.": "Файли будуть поміщатися у директорію .stversions із відповідною позначкою часу, коли вони будуть замінятися або видалятися програмою.",
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Файли захищено від змін зроблених на інших пристроях, але зміни зроблені на цьому пристрої будуть надіслані решті кластеру.",
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Вміст папки захищено від змін, зроблених на інших пристроях, але зміни зроблені на цьому пристрої можна розіслати решті пристроїв кластеру.",
"Folder": "Директорія",
"Folder ID": "ID директорії",
"Folder Master": "Центральна директорія",
"Folder Master": "Вважати за оригінал",
"Folder Path": "Шлях до директорії",
"Folders": "Директорії",
"GUI": "Графічний інтерфейс",
@@ -74,14 +74,14 @@
"GUI Authentication User": "Логін користувача для доступу до панелі управління",
"GUI Listen Addresses": "Адреса доступу до панелі управління",
"Generate": "Згенерувати",
"Global Discovery": "Глобальне виявлення",
"Global Discovery": "Глобальне виявлення (internet)",
"Global Discovery Server": "Сервер для глобального виявлення",
"Global Discovery Servers": "Global Discovery Servers",
"Global Discovery Servers": "Сервери глобального виявлення \n(координації обходу NAT)",
"Global State": "Глобальний статус",
"Help": "Допомога",
"Home page": "Домашня сторінка",
"Ignore": "Ігнорувати",
"Ignore Patterns": "Ігнорувати шаблони",
"Ignore Patterns": "Шаблони винятків",
"Ignore Permissions": "Ігнорувати права доступу до файлів",
"Incoming Rate Limit (KiB/s)": "Ліміт швидкості завантаження (КіБ/с)",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "Невірна конфігурація може пошкодити вміст вашої директорії та зробити Syncthing недієздатним.",
@@ -92,7 +92,7 @@
"Last File Received": "Останній завантажений файл",
"Last seen": "З’являвся останній раз",
"Later": "Пізніше",
"Local Discovery": "Локальне виявлення",
"Local Discovery": "Локальне виявлення (LAN)",
"Local State": "Локальний статус",
"Local State (Total)": "Локальний статус (загалом)",
"Major Upgrade": "Мажорне оновлення",
@@ -115,7 +115,7 @@
"Out of Sync": "Не синхронізовано",
"Out of Sync Items": "Не синхронізовані елементи",
"Outgoing Rate Limit (KiB/s)": "Ліміт швидкості віддачі (КіБ/с)",
"Override Changes": "Перезаписати зміни",
"Override Changes": "Розіслати мою версію",
"Path to the folder on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Шлях до директорії на локальному комп’ютері. Буде створений, якщо такий не існує. Символ тильди (~) може бути використаний як ярлик для",
"Path where versions should be stored (leave empty for the default .stversions folder in the folder).": "Шлях, де повинні зберігатися версії (залиште порожнім для зберігання в .stversions в середині директорії)",
"Pause": "Пауза",
@@ -128,9 +128,9 @@
"Quick guide to supported patterns": "Швидкий посібник по шаблонам, що підтримуються",
"RAM Utilization": "Використання RAM",
"Random": "Випадково",
"Relay Servers": "Relay Servers",
"Relay Servers": "Сервери ретрансляції",
"Relayed via": "Ретранслювати через",
"Relays": "Ретрансляція",
"Relays": "Ретранслятори",
"Release Notes": "Примітки до випуску",
"Remove": "Видалити",
"Rescan": "Пересканувати",
@@ -169,7 +169,7 @@
"Statistics": "Статистика",
"Stopped": "Зупинено",
"Support": "Підтримка",
"Sync Protocol Listen Addresses": "Адреса панелі управління",
"Sync Protocol Listen Addresses": "Адреса і вхідний порт протоколу синхронізації",
"Syncing": "Синхронізація",
"Syncthing has been shut down.": "Syncthing вимкнено (закрито).",
"Syncthing includes the following software or portions thereof:": "Syncthing містить наступне програмне забезпечення (або його частини):",
@@ -181,7 +181,7 @@
"The aggregated statistics are publicly available at {%url%}.": "Зібрана статистика публічно доступна за посиланням {{url}}.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "Конфігурацію збережено, але не активовано. Необхідно перезапустити Syncthing для того, щоби активувати нову конфігурацію.",
"The device ID cannot be blank.": "ID пристрою не може бути порожнім.",
"The device ID to enter here can be found in the \"Actions > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "The device ID to enter here can be found in the \"Actions > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).",
"The device ID to enter here can be found in the \"Actions > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "ID пристрою, який необхідно додати. Може бути знайдений у вікні \"Редагувати > Показати ID\" на іншому пристрої. Пробіли та тире опціональні (будуть видалені програмою).",
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "ID пристрою, який необхідно додати. Може бути знайдений у вікні \"Редагувати > Показати ID\" на іншому пристрої. Пробіли та тире опціональні (вони ігноруються програмою).",
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "Зашифрована статистика використання відсилається щоденно. Вона використовується для того, щоб розробники розуміли, на яких платформах працює програма, розміри директорій та версії програми. Якщо набір даних, що збирається зазнає змін, ви обов’язково будете повідомлені через це діалогове вікно.",
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "Введений ID пристрою невалідний. Ідентифікатор має вигляд строки довжиною 52 або 56 символів, що містить цифри та літери, із опціональними пробілами та тире.",
@@ -220,7 +220,7 @@
"Version": "Версія",
"Versions Path": "Шлях до версій",
"Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.": "Версії автоматично видаляються, якщо вони старше, ніж максимальний вік, або перевищують допустиму кількість файлів за інтервал.",
"When adding a new device, keep in mind that this device must be added on the other side too.": "Коли додаєте новий пристрій, пам’ятайте, що цей пристрій повинен бути доданий і на іншій стороні.",
"When adding a new device, keep in mind that this device must be added on the other side too.": "Коли додаєте новий вузол, пам’ятайте, що цей вузол повинен бути доданий і на іншій стороні.",
"When adding a new folder, keep in mind that the Folder ID is used to tie folders together between devices. They are case sensitive and must match exactly between all devices.": "Коли додаєте нову директорію, пам’ятайте, що ID цієї директорії використовується для того, щоб зв’язувати директорії разом між вузлами. Назви є чутливими до регістра та повинні співпадати точно між усіма вузлами.",
"Yes": "Так",
"You must keep at least one version.": "Ви повинні зберігати щонайменше одну версію.",

View File

@@ -222,7 +222,7 @@
<div class="panel-progress" ng-show="folderStatus(folder) == 'syncing'" ng-attr-style="width: {{syncPercentage(folder.id)}}%"></div>
<div class="panel-progress" ng-show="folderStatus(folder) == 'scanning' && scanProgress[folder.id] != undefined" ng-attr-style="width: {{scanPercentage(folder.id)}}%"></div>
<h3 class="panel-title">
<span class="fa fa-folder hidden-xs"></span>{{folder.id}}
<span class="fa hidden-xs fa-fw" ng-class="[folder.readOnly ? 'fa-lock' : 'fa-folder']"></span>{{folder.id}}
<span class="pull-right text-{{folderClass(folder)}}" ng-switch="folderStatus(folder)">
<span ng-switch-when="unknown"><span class="hidden-xs" translate>Unknown</span><span class="visible-xs">&#9724;</span></span>
<span ng-switch-when="unshared"><span class="hidden-xs" translate>Unshared</span><span class="visible-xs">&#9724;</span></span>
@@ -249,7 +249,7 @@
<tbody>
<tr>
<th><span class="fa fa-fw fa-folder-open"></span>&nbsp;<span translate>Folder Path</span></th>
<td class="text-right">{{folder.path}}</td>
<td class="text-right" title="{{folder.path}}">{{folder.path}}</td>
</tr>
<tr ng-if="model[folder.id].invalid || model[folder.id].error">
<th><span class="fa fa-fw fa-exclamation-triangle"></span>&nbsp;<span translate>Error</span></th>
@@ -442,7 +442,7 @@
</tr>
<tr>
<th><span class="fa fa-fw fa-tag"></span>&nbsp;<span translate>Version</span></th>
<td class="text-right">{{versionString()}}</td>
<td class="text-right" title="{{versionString()}}">{{versionString()}}</td>
</tr>
</tbody>
</table>

View File

@@ -69,6 +69,7 @@
<li class="auto-generated">Marcin Dziadus</li>
<li class="auto-generated">Mateusz Naściszewski</li>
<li class="auto-generated">Matt Burke</li>
<li class="auto-generated">Max Schulze</li>
<li class="auto-generated">Michael Jephcote</li>
<li class="auto-generated">Michael Ploujnikov</li>
<li class="auto-generated">Michael Tilli</li>

View File

File diff suppressed because one or more lines are too long

lib/db/.gitignore (1 line changed)
View File

@@ -1,2 +1 @@
!*.zip
testdata/*.db

View File

@@ -4,9 +4,16 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at http://mozilla.org/MPL/2.0/.
// Package db provides a set type to track local/remote files with newness
// checks. We must do a certain amount of normalization in here. We will get
// fed paths with either native or wire-format separators and encodings
// depending on who calls us. We transform paths to wire-format (NFC and
// slashes) on the way to the database, and transform to native format
// (varying separator and encoding) on the way back out.
package db
import (
"bytes"
"encoding/binary"
"fmt"
@@ -23,10 +30,10 @@ const maxBatchSize = 256 << 10
type BlockMap struct {
db *Instance
folder uint32
folder string
}
func NewBlockMap(db *Instance, folder uint32) *BlockMap {
func NewBlockMap(db *Instance, folder string) *BlockMap {
return &BlockMap{
db: db,
folder: folder,
@@ -116,7 +123,7 @@ func (m *BlockMap) Discard(files []protocol.FileInfo) error {
// Drop block map, removing all entries related to this block map from the db.
func (m *BlockMap) Drop() error {
batch := new(leveldb.Batch)
iter := m.db.NewIterator(util.BytesPrefix(m.blockKeyInto(nil, nil, "")[:keyPrefixLen+keyFolderLen]), nil)
iter := m.db.NewIterator(util.BytesPrefix(m.blockKeyInto(nil, nil, "")[:1+64]), nil)
defer iter.Release()
for iter.Next() {
if batch.Len() > maxBatchSize {
@@ -166,13 +173,12 @@ func (f *BlockFinder) String() string {
func (f *BlockFinder) Iterate(folders []string, hash []byte, iterFn func(string, string, int32) bool) bool {
var key []byte
for _, folder := range folders {
folderID := f.db.folderIdx.ID([]byte(folder))
key = blockKeyInto(key, hash, folderID, "")
key = blockKeyInto(key, hash, folder, "")
iter := f.db.NewIterator(util.BytesPrefix(key), nil)
defer iter.Release()
for iter.Next() && iter.Error() == nil {
file := blockKeyName(iter.Key())
folder, file := fromBlockKey(iter.Key())
index := int32(binary.BigEndian.Uint32(iter.Value()))
if iterFn(folder, osutil.NativeFilename(file), index) {
return true
@@ -188,41 +194,48 @@ func (f *BlockFinder) Fix(folder, file string, index int32, oldHash, newHash []b
buf := make([]byte, 4)
binary.BigEndian.PutUint32(buf, uint32(index))
folderID := f.db.folderIdx.ID([]byte(folder))
batch := new(leveldb.Batch)
batch.Delete(blockKeyInto(nil, oldHash, folderID, file))
batch.Put(blockKeyInto(nil, newHash, folderID, file), buf)
batch.Delete(blockKeyInto(nil, oldHash, folder, file))
batch.Put(blockKeyInto(nil, newHash, folder, file), buf)
return f.db.Write(batch, nil)
}
// m.blockKey returns a byte slice encoding the following information:
// keyTypeBlock (1 byte)
// folder (4 bytes)
// folder (64 bytes)
// block hash (32 bytes)
// file name (variable size)
func blockKeyInto(o, hash []byte, folder uint32, file string) []byte {
reqLen := keyPrefixLen + keyFolderLen + keyHashLen + len(file)
func blockKeyInto(o, hash []byte, folder, file string) []byte {
reqLen := 1 + 64 + 32 + len(file)
if cap(o) < reqLen {
o = make([]byte, reqLen)
} else {
o = o[:reqLen]
}
o[0] = KeyTypeBlock
binary.BigEndian.PutUint32(o[keyPrefixLen:], folder)
copy(o[keyPrefixLen+keyFolderLen:], []byte(hash))
copy(o[keyPrefixLen+keyFolderLen+keyHashLen:], []byte(file))
copy(o[1:], []byte(folder))
for i := len(folder); i < 64; i++ {
o[1+i] = 0
}
copy(o[1+64:], []byte(hash))
copy(o[1+64+32:], []byte(file))
return o
}
// blockKeyName returns the file name from the block key
func blockKeyName(data []byte) string {
if len(data) < keyPrefixLen+keyFolderLen+keyHashLen+1 {
func fromBlockKey(data []byte) (string, string) {
if len(data) < 1+64+32+1 {
panic("Incorrect key length")
}
if data[0] != KeyTypeBlock {
panic("Incorrect key type")
}
file := string(data[keyPrefixLen+keyFolderLen+keyHashLen:])
return file
file := string(data[1+64+32:])
slice := data[1 : 1+64]
izero := bytes.IndexByte(slice, 0)
if izero > -1 {
return string(slice[:izero]), file
}
return string(slice), file
}
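For reference, the key written by blockKeyInto above is a fixed-layout byte slice: one type byte, a 64-byte zero-padded folder label, the 32-byte block hash, then the file name. A self-contained sketch of that layout (the key-type value is illustrative):

package main

import "fmt"

const keyTypeBlock = 2 // illustrative; the real value comes from the KeyType* iota block

// blockKey lays out a key as described in the comment above: type byte,
// zero-padded folder, block hash, file name.
func blockKey(hash []byte, folder, file string) []byte {
	o := make([]byte, 1+64+32+len(file))
	o[0] = keyTypeBlock
	copy(o[1:1+64], folder) // shorter folder labels stay zero-padded
	copy(o[1+64:1+64+32], hash)
	copy(o[1+64+32:], file)
	return o
}

func main() {
	key := blockKey(make([]byte, 32), "default", "photos/cat.jpg")
	fmt.Println(len(key)) // 1 + 64 + 32 + 14 = 111
}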

View File

@@ -10,7 +10,6 @@ import (
"testing"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syndtr/goleveldb/leveldb/util"
)
func genBlocks(n int) []protocol.BlockInfo {
@@ -56,7 +55,7 @@ func setup() (*Instance, *BlockFinder) {
}
func dbEmpty(db *Instance) bool {
iter := db.NewIterator(util.BytesPrefix([]byte{KeyTypeBlock}), nil)
iter := db.NewIterator(nil, nil)
defer iter.Release()
if iter.Next() {
return false
@@ -71,7 +70,7 @@ func TestBlockMapAddUpdateWipe(t *testing.T) {
t.Fatal("db not empty")
}
m := NewBlockMap(db, db.folderIdx.ID([]byte("folder1")))
m := NewBlockMap(db, "folder1")
f3.Flags |= protocol.FlagDirectory
@@ -153,8 +152,8 @@ func TestBlockMapAddUpdateWipe(t *testing.T) {
func TestBlockFinderLookup(t *testing.T) {
db, f := setup()
m1 := NewBlockMap(db, db.folderIdx.ID([]byte("folder1")))
m2 := NewBlockMap(db, db.folderIdx.ID([]byte("folder2")))
m1 := NewBlockMap(db, "folder1")
m2 := NewBlockMap(db, "folder2")
err := m1.Add([]protocol.FileInfo{f1})
if err != nil {
@@ -222,7 +221,7 @@ func TestBlockFinderFix(t *testing.T) {
return true
}
m := NewBlockMap(db, db.folderIdx.ID([]byte("folder1")))
m := NewBlockMap(db, "folder1")
err := m.Add([]protocol.FileInfo{f1})
if err != nil {
t.Fatal(err)

View File

@@ -42,8 +42,6 @@ const (
KeyTypeDeviceStatistic
KeyTypeFolderStatistic
KeyTypeVirtualMtime
KeyTypeFolderIdx
KeyTypeDeviceIdx
)
type fileVersion struct {

View File

@@ -1,114 +0,0 @@
// Copyright (C) 2015 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at http://mozilla.org/MPL/2.0/.
package db
import (
"bytes"
"github.com/syndtr/goleveldb/leveldb"
)
// convertKeyFormat converts from the v0.12 to the v0.13 database format, to
// avoid having to do a rescan. The change is in the key format for folder
// labels, so we basically just iterate over the database rewriting keys as
// necessary.
func convertKeyFormat(from, to *leveldb.DB) error {
l.Infoln("Converting database key format")
blocks, files, globals, unchanged := 0, 0, 0, 0
dbi := newDBInstance(to)
i := from.NewIterator(nil, nil)
for i.Next() {
key := i.Key()
switch key[0] {
case KeyTypeBlock:
folder, file := oldFromBlockKey(key)
folderIdx := dbi.folderIdx.ID([]byte(folder))
hash := key[1+64:]
newKey := blockKeyInto(nil, hash, folderIdx, file)
if err := to.Put(newKey, i.Value(), nil); err != nil {
return err
}
blocks++
case KeyTypeDevice:
newKey := dbi.deviceKey(oldDeviceKeyFolder(key), oldDeviceKeyDevice(key), oldDeviceKeyName(key))
if err := to.Put(newKey, i.Value(), nil); err != nil {
return err
}
files++
case KeyTypeGlobal:
newKey := dbi.globalKey(oldGlobalKeyFolder(key), oldGlobalKeyName(key))
if err := to.Put(newKey, i.Value(), nil); err != nil {
return err
}
globals++
case KeyTypeVirtualMtime:
// Cannot be converted, we drop it instead :(
default:
if err := to.Put(key, i.Value(), nil); err != nil {
return err
}
unchanged++
}
}
l.Infof("Converted %d blocks, %d files, %d globals (%d unchanged).", blocks, files, globals, unchanged)
return nil
}
func oldDeviceKeyFolder(key []byte) []byte {
folder := key[1 : 1+64]
izero := bytes.IndexByte(folder, 0)
if izero < 0 {
return folder
}
return folder[:izero]
}
func oldDeviceKeyDevice(key []byte) []byte {
return key[1+64 : 1+64+32]
}
func oldDeviceKeyName(key []byte) []byte {
return key[1+64+32:]
}
func oldGlobalKeyName(key []byte) []byte {
return key[1+64:]
}
func oldGlobalKeyFolder(key []byte) []byte {
folder := key[1 : 1+64]
izero := bytes.IndexByte(folder, 0)
if izero < 0 {
return folder
}
return folder[:izero]
}
func oldFromBlockKey(data []byte) (string, string) {
if len(data) < 1+64+32+1 {
panic("Incorrect key length")
}
if data[0] != KeyTypeBlock {
panic("Incorrect key type")
}
file := string(data[1+64+32:])
slice := data[1 : 1+64]
izero := bytes.IndexByte(slice, 0)
if izero > -1 {
return string(slice[:izero]), file
}
return string(slice), file
}
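The comment above describes the conversion as a single pass over the old database, rewriting each key into the new format. A hedged, self-contained sketch of that iterate-and-rewrite pattern against goleveldb follows, using in-memory storage; the `convert` helper and its trivial "add a prefix" rewrite are illustrative stand-ins, not the real key translation shown above.

```go
// Minimal sketch of the iterate-and-rewrite pattern used by convertKeyFormat:
// walk every key/value in one goleveldb database and write a translated key
// into another. The translation here (a version prefix) is purely
// illustrative.
package main

import (
	"fmt"

	"github.com/syndtr/goleveldb/leveldb"
	"github.com/syndtr/goleveldb/leveldb/storage"
)

func convert(from, to *leveldb.DB, rewrite func(k []byte) []byte) error {
	it := from.NewIterator(nil, nil)
	defer it.Release()
	for it.Next() {
		if err := to.Put(rewrite(it.Key()), it.Value(), nil); err != nil {
			return err
		}
	}
	return it.Error()
}

func main() {
	from, _ := leveldb.Open(storage.NewMemStorage(), nil)
	to, _ := leveldb.Open(storage.NewMemStorage(), nil)
	from.Put([]byte("k1"), []byte("v1"), nil)

	if err := convert(from, to, func(k []byte) []byte {
		return append([]byte("v2/"), k...)
	}); err != nil {
		panic(err)
	}
	v, _ := to.Get([]byte("v2/k1"), nil)
	fmt.Println(string(v)) // v1
}
```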

View File

@@ -1,136 +0,0 @@
// Copyright (C) 2015 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at http://mozilla.org/MPL/2.0/.
package db
import (
"archive/zip"
"io"
"os"
"path/filepath"
"testing"
"github.com/syndtr/goleveldb/leveldb"
)
func TestLabelConversion(t *testing.T) {
os.RemoveAll("testdata/oldformat.db")
defer os.RemoveAll("testdata/oldformat.db")
os.RemoveAll("testdata/newformat.db")
defer os.RemoveAll("testdata/newformat.db")
if err := unzip("testdata/oldformat.db.zip", "testdata"); err != nil {
t.Fatal(err)
}
odb, err := leveldb.OpenFile("testdata/oldformat.db", nil)
if err != nil {
t.Fatal(err)
}
ldb, err := leveldb.OpenFile("testdata/newformat.db", nil)
if err != nil {
t.Fatal(err)
}
if err = convertKeyFormat(odb, ldb); err != nil {
t.Fatal(err)
}
ldb.Close()
odb.Close()
inst, err := Open("testdata/newformat.db")
if err != nil {
t.Fatal(err)
}
fs := NewFileSet("default", inst)
files, deleted, _ := fs.GlobalSize()
if files+deleted != 953 {
// Expected number of global entries determined by
// ../../bin/stindex testdata/oldformat.db/ | grep global | grep -c default
t.Errorf("Conversion error, global list differs (%d != 953)", files+deleted)
}
files, deleted, _ = fs.LocalSize()
if files+deleted != 953 {
t.Errorf("Conversion error, device list differs (%d != 953)", files+deleted)
}
f := NewBlockFinder(inst)
// [block] F:"default" H:1c25dea9003cc16216e2a22900be1ec1cc5aaf270442904e2f9812c314e929d8 N:"f/f2/f25f1b3e6e029231b933531b2138796d" I:3
h := []byte{0x1c, 0x25, 0xde, 0xa9, 0x00, 0x3c, 0xc1, 0x62, 0x16, 0xe2, 0xa2, 0x29, 0x00, 0xbe, 0x1e, 0xc1, 0xcc, 0x5a, 0xaf, 0x27, 0x04, 0x42, 0x90, 0x4e, 0x2f, 0x98, 0x12, 0xc3, 0x14, 0xe9, 0x29, 0xd8}
found := 0
f.Iterate([]string{"default"}, h, func(folder, file string, idx int32) bool {
if folder == "default" && file == filepath.FromSlash("f/f2/f25f1b3e6e029231b933531b2138796d") && idx == 3 {
found++
}
return true
})
if found != 1 {
t.Errorf("Found %d blocks instead of expected 1", found)
}
inst.Close()
}
func unzip(src, dest string) error {
r, err := zip.OpenReader(src)
if err != nil {
return err
}
defer func() {
if err := r.Close(); err != nil {
panic(err)
}
}()
os.MkdirAll(dest, 0755)
// Closure to address the file descriptor issue caused by all the deferred .Close() methods
extractAndWriteFile := func(f *zip.File) error {
rc, err := f.Open()
if err != nil {
return err
}
defer func() {
if err := rc.Close(); err != nil {
panic(err)
}
}()
path := filepath.Join(dest, f.Name)
if f.FileInfo().IsDir() {
os.MkdirAll(path, f.Mode())
} else {
f, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, f.Mode())
if err != nil {
return err
}
defer func() {
if err := f.Close(); err != nil {
panic(err)
}
}()
_, err = io.Copy(f, rc)
if err != nil {
return err
}
}
return nil
}
for _, f := range r.File {
err := extractAndWriteFile(f)
if err != nil {
return err
}
}
return nil
}
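The per-file closure in `unzip` above exists because a `defer` inside a plain loop only fires when the surrounding function returns, so every archive member would stay open until the whole extraction finished. A tiny sketch of the pattern, with hypothetical names (`processAll`, `processOne`):

```go
// Deferring inside a per-item closure releases each file as soon as that
// iteration finishes, instead of holding every descriptor until the outer
// function returns.
package main

import (
	"fmt"
	"os"
)

func processAll(paths []string) error {
	processOne := func(path string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close() // runs when processOne returns, not at the very end

		// ... read from f ...
		return nil
	}

	for _, p := range paths {
		if err := processOne(p); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	fmt.Println(processAll([]string{os.Args[0]}))
}
```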

View File

@@ -8,15 +8,11 @@ package db
import (
"bytes"
"encoding/binary"
"os"
"path/filepath"
"sort"
"strings"
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/sync"
"github.com/syndtr/goleveldb/leveldb"
"github.com/syndtr/goleveldb/leveldb/errors"
"github.com/syndtr/goleveldb/leveldb/iterator"
@@ -29,33 +25,14 @@ type deletionHandler func(t readWriteTransaction, folder, device, name []byte, d
type Instance struct {
*leveldb.DB
folderIdx *smallIndex
deviceIdx *smallIndex
}
const (
keyPrefixLen = 1
keyFolderLen = 4 // indexed
keyDeviceLen = 4 // indexed
keyHashLen = 32
)
func Open(file string) (*Instance, error) {
opts := &opt.Options{
OpenFilesCacheCapacity: 100,
WriteBuffer: 4 << 20,
}
if _, err := os.Stat(file); os.IsNotExist(err) {
// The file we are looking to open does not exist. This may be the
// first launch so we should look for an old version and try to
// convert it.
if err := checkConvertDatabase(file); err != nil {
l.Infoln("Converting old database:", err)
l.Infoln("Will rescan from scratch.")
}
}
db, err := leveldb.OpenFile(file, opts)
if leveldbIsCorrupted(err) {
db, err = leveldb.RecoverFile(file, opts)
@@ -83,12 +60,9 @@ func OpenMemory() *Instance {
}
func newDBInstance(db *leveldb.DB) *Instance {
i := &Instance{
return &Instance{
DB: db,
}
i.folderIdx = newSmallIndex(i, []byte{KeyTypeFolderIdx})
i.deviceIdx = newSmallIndex(i, []byte{KeyTypeDeviceIdx})
return i
}
func (db *Instance) Compact() error {
@@ -98,10 +72,13 @@ func (db *Instance) Compact() error {
func (db *Instance) genericReplace(folder, device []byte, fs []protocol.FileInfo, localSize, globalSize *sizeTracker, deleteFn deletionHandler) int64 {
sort.Sort(fileList(fs)) // sort list on name, same as in the database
start := db.deviceKey(folder, device, nil) // before all folder/device files
limit := db.deviceKey(folder, device, []byte{0xff, 0xff, 0xff, 0xff}) // after all folder/device files
t := db.newReadWriteTransaction()
defer t.close()
dbi := t.NewIterator(util.BytesPrefix(db.deviceKey(folder, device, nil)[:keyPrefixLen+keyFolderLen+keyDeviceLen]), nil)
dbi := t.NewIterator(&util.Range{Start: start, Limit: limit}, nil)
defer dbi.Release()
moreDb := dbi.Next()
@@ -260,18 +237,17 @@ func (db *Instance) updateFiles(folder, device []byte, fs []protocol.FileInfo, l
}
func (db *Instance) withHave(folder, device []byte, truncate bool, fn Iterator) {
start := db.deviceKey(folder, device, nil) // before all folder/device files
limit := db.deviceKey(folder, device, []byte{0xff, 0xff, 0xff, 0xff}) // after all folder/device files
t := db.newReadOnlyTransaction()
defer t.close()
dbi := t.NewIterator(util.BytesPrefix(db.deviceKey(folder, device, nil)[:keyPrefixLen+keyFolderLen+keyDeviceLen]), nil)
dbi := t.NewIterator(&util.Range{Start: start, Limit: limit}, nil)
defer dbi.Release()
for dbi.Next() {
// The iterator function may keep a reference to the unmarshalled
// struct, which in turn references the buffer it was unmarshalled
// from. dbi.Value() just returns an internal slice that it reuses, so
// we need to copy it.
f, err := unmarshalTrunc(append([]byte{}, dbi.Value()...), truncate)
f, err := unmarshalTrunc(dbi.Value(), truncate)
if err != nil {
panic(err)
}
@@ -282,20 +258,19 @@ func (db *Instance) withHave(folder, device []byte, truncate bool, fn Iterator)
}
func (db *Instance) withAllFolderTruncated(folder []byte, fn func(device []byte, f FileInfoTruncated) bool) {
start := db.deviceKey(folder, nil, nil) // before all folder/device files
limit := db.deviceKey(folder, protocol.LocalDeviceID[:], []byte{0xff, 0xff, 0xff, 0xff}) // after all folder/device files
t := db.newReadWriteTransaction()
defer t.close()
dbi := t.NewIterator(util.BytesPrefix(db.deviceKey(folder, nil, nil)[:keyPrefixLen+keyFolderLen]), nil)
dbi := t.NewIterator(&util.Range{Start: start, Limit: limit}, nil)
defer dbi.Release()
for dbi.Next() {
device := db.deviceKeyDevice(dbi.Key())
var f FileInfoTruncated
// The iterator function may keep a reference to the unmarshalled
// struct, which in turn references the buffer it was unmarshalled
// from. dbi.Value() just returns an internal slice that it reuses, so
// we need to copy it.
err := f.UnmarshalXDR(append([]byte{}, dbi.Value()...))
err := f.UnmarshalXDR(dbi.Value())
if err != nil {
panic(err)
}
@@ -384,10 +359,7 @@ func (db *Instance) withGlobal(folder, prefix []byte, truncate bool, fn Iterator
l.Debugf("vl.versions[0].device: %x", vl.versions[0].device)
l.Debugf("name: %q (%x)", name, name)
l.Debugf("fk: %q", fk)
l.Debugf("fk: %x %x %x",
fk[keyPrefixLen:keyPrefixLen+keyFolderLen],
fk[keyPrefixLen+keyFolderLen:keyPrefixLen+keyFolderLen+keyDeviceLen],
fk[keyPrefixLen+keyFolderLen+keyDeviceLen:])
l.Debugf("fk: %x %x %x", fk[1:1+64], fk[1+64:1+64+32], fk[1+64+32:])
panic(err)
}
@@ -431,10 +403,13 @@ func (db *Instance) availability(folder, file []byte) []protocol.DeviceID {
}
func (db *Instance) withNeed(folder, device []byte, truncate bool, fn Iterator) {
start := db.globalKey(folder, nil)
limit := db.globalKey(folder, []byte{0xff, 0xff, 0xff, 0xff})
t := db.newReadOnlyTransaction()
defer t.close()
dbi := t.NewIterator(util.BytesPrefix(db.globalKey(folder, nil)[:keyPrefixLen+keyFolderLen]), nil)
dbi := t.NewIterator(&util.Range{Start: start, Limit: limit}, nil)
defer dbi.Release()
var fk []byte
@@ -571,7 +546,9 @@ func (db *Instance) checkGlobals(folder []byte, globalSize *sizeTracker) {
t := db.newReadWriteTransaction()
defer t.close()
dbi := t.NewIterator(util.BytesPrefix(db.globalKey(folder, nil)[:keyPrefixLen+keyFolderLen]), nil)
start := db.globalKey(folder, nil)
limit := db.globalKey(folder, []byte{0xff, 0xff, 0xff, 0xff})
dbi := t.NewIterator(&util.Range{Start: start, Limit: limit}, nil)
defer dbi.Release()
var fk []byte
@@ -621,72 +598,71 @@ func (db *Instance) checkGlobals(folder []byte, globalSize *sizeTracker) {
// deviceKey returns a byte slice encoding the following information:
// keyTypeDevice (1 byte)
// folder (4 bytes)
// device (4 bytes)
// folder (64 bytes)
// device (32 bytes)
// name (variable size)
func (db *Instance) deviceKey(folder, device, file []byte) []byte {
return db.deviceKeyInto(nil, folder, device, file)
}
func (db *Instance) deviceKeyInto(k []byte, folder, device, file []byte) []byte {
reqLen := keyPrefixLen + keyFolderLen + keyDeviceLen + len(file)
reqLen := 1 + 64 + 32 + len(file)
if len(k) < reqLen {
k = make([]byte, reqLen)
}
k[0] = KeyTypeDevice
binary.BigEndian.PutUint32(k[keyPrefixLen:], db.folderIdx.ID(folder))
binary.BigEndian.PutUint32(k[keyPrefixLen+keyFolderLen:], db.deviceIdx.ID(device))
copy(k[keyPrefixLen+keyFolderLen+keyDeviceLen:], []byte(file))
if len(folder) > 64 {
panic("folder name too long")
}
copy(k[1:], []byte(folder))
copy(k[1+64:], device[:])
copy(k[1+64+32:], []byte(file))
return k[:reqLen]
}
// deviceKeyName returns the device ID from the key
func (db *Instance) deviceKeyName(key []byte) []byte {
return key[keyPrefixLen+keyFolderLen+keyDeviceLen:]
return key[1+64+32:]
}
// deviceKeyFolder returns the folder name from the key
func (db *Instance) deviceKeyFolder(key []byte) []byte {
folder, ok := db.folderIdx.Val(binary.BigEndian.Uint32(key[keyPrefixLen:]))
if !ok {
panic("bug: lookup of nonexistent folder ID")
folder := key[1 : 1+64]
izero := bytes.IndexByte(folder, 0)
if izero < 0 {
return folder
}
return folder
return folder[:izero]
}
// deviceKeyDevice returns the device ID from the key
func (db *Instance) deviceKeyDevice(key []byte) []byte {
device, ok := db.deviceIdx.Val(binary.BigEndian.Uint32(key[keyPrefixLen+keyFolderLen:]))
if !ok {
panic("bug: lookup of nonexistent device ID")
}
return device
return key[1+64 : 1+64+32]
}
// globalKey returns a byte slice encoding the following information:
// keyTypeGlobal (1 byte)
// folder (4 bytes)
// folder (64 bytes)
// name (variable size)
func (db *Instance) globalKey(folder, file []byte) []byte {
k := make([]byte, keyPrefixLen+keyFolderLen+len(file))
k := make([]byte, 1+64+len(file))
k[0] = KeyTypeGlobal
binary.BigEndian.PutUint32(k[keyPrefixLen:], db.folderIdx.ID(folder))
copy(k[keyPrefixLen+keyFolderLen:], []byte(file))
if len(folder) > 64 {
panic("folder name too long")
}
copy(k[1:], []byte(folder))
copy(k[1+64:], []byte(file))
return k
}
// globalKeyName returns the filename from the key
func (db *Instance) globalKeyName(key []byte) []byte {
return key[keyPrefixLen+keyFolderLen:]
return key[1+64:]
}
// globalKeyFolder returns the folder name from the key
func (db *Instance) globalKeyFolder(key []byte) []byte {
folder, ok := db.folderIdx.Val(binary.BigEndian.Uint32(key[keyPrefixLen:]))
if !ok {
panic("bug: lookup of nonexistent folder ID")
folder := key[1 : 1+64]
izero := bytes.IndexByte(folder, 0)
if izero < 0 {
return folder
}
return folder
return folder[:izero]
}
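The hunks in this file swap prefix-based iteration (`util.BytesPrefix` over a short indexed key prefix) for explicit start/limit ranges built from the padded key, with a limit ending in `0xff` bytes so it sorts after every real key. A hedged sketch of both iteration styles against goleveldb, using in-memory storage and illustrative keys:

```go
// Two ways to walk a key range in goleveldb: a prefix iterator, and an
// explicit [start, limit) range whose limit carries an 0xff suffix.
package main

import (
	"fmt"

	"github.com/syndtr/goleveldb/leveldb"
	"github.com/syndtr/goleveldb/leveldb/storage"
	"github.com/syndtr/goleveldb/leveldb/util"
)

func main() {
	db, _ := leveldb.Open(storage.NewMemStorage(), nil)
	for _, k := range []string{"folderA/a", "folderA/b", "folderB/a"} {
		db.Put([]byte(k), nil, nil)
	}

	// Style 1: everything that starts with "folderA/".
	it := db.NewIterator(util.BytesPrefix([]byte("folderA/")), nil)
	for it.Next() {
		fmt.Println("prefix:", string(it.Key()))
	}
	it.Release()

	// Style 2: explicit range whose limit ends in 0xff bytes, so it sorts
	// after every real key sharing the prefix.
	start := []byte("folderA/")
	limit := append([]byte("folderA/"), 0xff, 0xff, 0xff, 0xff)
	it = db.NewIterator(&util.Range{Start: start, Limit: limit}, nil)
	for it.Next() {
		fmt.Println("range:", string(it.Key()))
	}
	it.Release()
}
```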
func unmarshalTrunc(bs []byte, truncate bool) (FileIntf, error) {
@@ -716,132 +692,3 @@ func leveldbIsCorrupted(err error) bool {
return false
}
// checkConvertDatabase tries to convert an existing old (v0.11) database to
// new (v0.13) format.
func checkConvertDatabase(dbFile string) error {
oldLoc := filepath.Join(filepath.Dir(dbFile), "index-v0.11.0.db")
if _, err := os.Stat(oldLoc); os.IsNotExist(err) {
// The old database file does not exist; that's ok, continue as if
// everything succeeded.
return nil
} else if err != nil {
// Any other error is weird.
return err
}
// There exists a database in the old format. We run a one time
// conversion from old to new.
fromDb, err := leveldb.OpenFile(oldLoc, nil)
if err != nil {
return err
}
toDb, err := leveldb.OpenFile(dbFile, nil)
if err != nil {
return err
}
err = convertKeyFormat(fromDb, toDb)
if err != nil {
return err
}
err = toDb.Close()
if err != nil {
return err
}
// We've done this one, we don't want to do it again (if the user runs
// -reset or so). We don't care too much about errors any more at this stage.
fromDb.Close()
osutil.Rename(oldLoc, oldLoc+".converted")
return nil
}
// A smallIndex is an in memory bidirectional []byte to uint32 map. It gives
// fast lookups in both directions and persists to the database. Don't use for
// storing more items than fit comfortably in RAM.
type smallIndex struct {
db *Instance
prefix []byte
id2val map[uint32]string
val2id map[string]uint32
nextID uint32
mut sync.Mutex
}
func newSmallIndex(db *Instance, prefix []byte) *smallIndex {
idx := &smallIndex{
db: db,
prefix: prefix,
id2val: make(map[uint32]string),
val2id: make(map[string]uint32),
mut: sync.NewMutex(),
}
idx.load()
return idx
}
// load iterates over the prefix space in the database and populates the in
// memory maps.
func (i *smallIndex) load() {
tr := i.db.newReadOnlyTransaction()
it := tr.NewIterator(util.BytesPrefix(i.prefix), nil)
for it.Next() {
val := string(it.Value())
id := binary.BigEndian.Uint32(it.Key()[len(i.prefix):])
i.id2val[id] = val
i.val2id[val] = id
if id >= i.nextID {
i.nextID = id + 1
}
}
it.Release()
tr.close()
}
// ID returns the index number for the given byte slice, allocating a new one
// and persisting this to the database if necessary.
func (i *smallIndex) ID(val []byte) uint32 {
i.mut.Lock()
// intentionally avoiding defer here as we want this call to be as fast as
// possible in the general case (folder ID already exists). The map lookup
// with the conversion of []byte to string is compiler optimized to not
// copy the []byte, which is why we don't assign it to a temp variable
// here.
if id, ok := i.val2id[string(val)]; ok {
i.mut.Unlock()
return id
}
id := i.nextID
i.nextID++
valStr := string(val)
i.val2id[valStr] = id
i.id2val[id] = valStr
key := make([]byte, len(i.prefix)+8) // prefix plus uint32 id
copy(key, i.prefix)
binary.BigEndian.PutUint32(key[len(i.prefix):], id)
i.db.Put(key, val, nil)
i.mut.Unlock()
return id
}
// Val returns the value for the given index number, or (nil, false) if there
// is no such index number.
func (i *smallIndex) Val(id uint32) ([]byte, bool) {
i.mut.Lock()
val, ok := i.id2val[id]
i.mut.Unlock()
if !ok {
return nil, false
}
return []byte(val), true
}
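The smallIndex above is described as a bidirectional []byte-to-uint32 map that also persists itself under a key prefix. A stripped-down, in-memory-only sketch of the bidirectional mapping follows; `tinyIndex` and its methods are illustrative names and omit the locking and database persistence.

```go
// Toy version of the value<->ID mapping that smallIndex provides.
package main

import "fmt"

type tinyIndex struct {
	id2val map[uint32]string
	val2id map[string]uint32
	nextID uint32
}

func newTinyIndex() *tinyIndex {
	return &tinyIndex{
		id2val: make(map[uint32]string),
		val2id: make(map[string]uint32),
	}
}

// ID returns the numeric ID for val, allocating a new one if needed.
func (i *tinyIndex) ID(val string) uint32 {
	if id, ok := i.val2id[val]; ok {
		return id
	}
	id := i.nextID
	i.nextID++
	i.val2id[val] = id
	i.id2val[id] = val
	return id
}

// Val returns the value for id, or false if it was never allocated.
func (i *tinyIndex) Val(id uint32) (string, bool) {
	v, ok := i.id2val[id]
	return v, ok
}

func main() {
	idx := newTinyIndex()
	a := idx.ID("folderA") // 0
	b := idx.ID("folderB") // 1
	fmt.Println(a, b, idx.ID("folderA")) // 0 1 0
	fmt.Println(idx.Val(1))              // folderB true
}
```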

View File

@@ -16,9 +16,7 @@ func TestDeviceKey(t *testing.T) {
dev := []byte("device67890123456789012345678901")
name := []byte("name")
db := OpenMemory()
db.folderIdx.ID(fld)
db.deviceIdx.ID(dev)
db := &Instance{}
key := db.deviceKey(fld, dev, name)
@@ -40,8 +38,7 @@ func TestGlobalKey(t *testing.T) {
fld := []byte("folder6789012345678901234567890123456789012345678901234567890123")
name := []byte("name")
db := OpenMemory()
db.folderIdx.ID(fld)
db := &Instance{}
key := db.globalKey(fld, name)

View File

@@ -5,6 +5,9 @@
package db
import (
"bytes"
"io"
"github.com/calmh/xdr"
)
@@ -19,8 +22,10 @@ fileVersion Structure:
\ Vector Structure \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Length of device |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ device (length + padded data) \
\ device (variable length) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
@@ -32,15 +37,13 @@ struct fileVersion {
*/
func (o fileVersion) XDRSize() int {
return o.version.XDRSize() +
4 + len(o.device) + xdr.Padding(len(o.device))
func (o fileVersion) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.EncodeXDRInto(xw)
}
func (o fileVersion) MarshalXDR() ([]byte, error) {
buf := make([]byte, o.XDRSize())
m := &xdr.Marshaller{Data: buf}
return buf, o.MarshalXDRInto(m)
return o.AppendXDR(make([]byte, 0, 128))
}
func (o fileVersion) MustMarshalXDR() []byte {
@@ -51,22 +54,37 @@ func (o fileVersion) MustMarshalXDR() []byte {
return bs
}
func (o fileVersion) MarshalXDRInto(m *xdr.Marshaller) error {
if err := o.version.MarshalXDRInto(m); err != nil {
return err
func (o fileVersion) AppendXDR(bs []byte) ([]byte, error) {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
_, err := o.EncodeXDRInto(xw)
return []byte(aw), err
}
func (o fileVersion) EncodeXDRInto(xw *xdr.Writer) (int, error) {
_, err := o.version.EncodeXDRInto(xw)
if err != nil {
return xw.Tot(), err
}
m.MarshalBytes(o.device)
return m.Error
xw.WriteBytes(o.device)
return xw.Tot(), xw.Error()
}
func (o *fileVersion) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.DecodeXDRFrom(xr)
}
func (o *fileVersion) UnmarshalXDR(bs []byte) error {
u := &xdr.Unmarshaller{Data: bs}
return o.UnmarshalXDRFrom(u)
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.DecodeXDRFrom(xr)
}
func (o *fileVersion) UnmarshalXDRFrom(u *xdr.Unmarshaller) error {
(&o.version).UnmarshalXDRFrom(u)
o.device = u.UnmarshalBytes()
return u.Error
func (o *fileVersion) DecodeXDRFrom(xr *xdr.Reader) error {
(&o.version).DecodeXDRFrom(xr)
o.device = xr.ReadBytes()
return xr.Error()
}
/*
@@ -90,14 +108,13 @@ struct versionList {
*/
func (o versionList) XDRSize() int {
return 4 + xdr.SizeOfSlice(o.versions)
func (o versionList) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.EncodeXDRInto(xw)
}
func (o versionList) MarshalXDR() ([]byte, error) {
buf := make([]byte, o.XDRSize())
m := &xdr.Marshaller{Data: buf}
return buf, o.MarshalXDRInto(m)
return o.AppendXDR(make([]byte, 0, 128))
}
func (o versionList) MustMarshalXDR() []byte {
@@ -108,35 +125,43 @@ func (o versionList) MustMarshalXDR() []byte {
return bs
}
func (o versionList) MarshalXDRInto(m *xdr.Marshaller) error {
m.MarshalUint32(uint32(len(o.versions)))
func (o versionList) AppendXDR(bs []byte) ([]byte, error) {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
_, err := o.EncodeXDRInto(xw)
return []byte(aw), err
}
func (o versionList) EncodeXDRInto(xw *xdr.Writer) (int, error) {
xw.WriteUint32(uint32(len(o.versions)))
for i := range o.versions {
if err := o.versions[i].MarshalXDRInto(m); err != nil {
return err
_, err := o.versions[i].EncodeXDRInto(xw)
if err != nil {
return xw.Tot(), err
}
}
return m.Error
return xw.Tot(), xw.Error()
}
func (o *versionList) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.DecodeXDRFrom(xr)
}
func (o *versionList) UnmarshalXDR(bs []byte) error {
u := &xdr.Unmarshaller{Data: bs}
return o.UnmarshalXDRFrom(u)
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.DecodeXDRFrom(xr)
}
func (o *versionList) UnmarshalXDRFrom(u *xdr.Unmarshaller) error {
_versionsSize := int(u.UnmarshalUint32())
func (o *versionList) DecodeXDRFrom(xr *xdr.Reader) error {
_versionsSize := int(xr.ReadUint32())
if _versionsSize < 0 {
return xdr.ElementSizeExceeded("versions", _versionsSize, 0)
} else if _versionsSize == 0 {
o.versions = nil
} else {
if _versionsSize <= len(o.versions) {
o.versions = o.versions[:_versionsSize]
} else {
o.versions = make([]fileVersion, _versionsSize)
}
for i := range o.versions {
(&o.versions[i]).UnmarshalXDRFrom(u)
}
}
return u.Error
o.versions = make([]fileVersion, _versionsSize)
for i := range o.versions {
(&o.versions[i]).DecodeXDRFrom(xr)
}
return xr.Error()
}
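The structure diagrams in this file refer to variable-length fields written as a length followed by padded data, which is the usual XDR convention: a 4-byte big-endian length, the raw bytes, then zero padding up to the next multiple of four. A stdlib-only sketch of that encoding (`appendXDRBytes` is a hypothetical helper, not part of the xdr package):

```go
// XDR-style opaque field: 4-byte big-endian length, data, zero padding to a
// multiple of four bytes.
package main

import (
	"encoding/binary"
	"fmt"
)

func appendXDRBytes(dst, data []byte) []byte {
	var lenBuf [4]byte
	binary.BigEndian.PutUint32(lenBuf[:], uint32(len(data)))
	dst = append(dst, lenBuf[:]...)
	dst = append(dst, data...)
	if pad := (4 - len(data)%4) % 4; pad > 0 {
		dst = append(dst, make([]byte, pad)...)
	}
	return dst
}

func main() {
	enc := appendXDRBytes(nil, []byte("device"))
	fmt.Printf("% x\n", enc) // 00 00 00 06 64 65 76 69 63 65 00 00
}
```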

View File

@@ -97,7 +97,7 @@ func NewFileSet(folder string, db *Instance) *FileSet {
localVersion: make(map[protocol.DeviceID]int64),
folder: folder,
db: db,
blockmap: NewBlockMap(db, db.folderIdx.ID([]byte(folder))),
blockmap: NewBlockMap(db, folder),
mutex: sync.NewMutex(),
}
@@ -244,7 +244,7 @@ func DropFolder(db *Instance, folder string) {
db.dropFolder([]byte(folder))
bm := &BlockMap{
db: db,
folder: db.folderIdx.ID([]byte(folder)),
folder: folder,
}
bm.Drop()
NewVirtualMtimeRepo(db, folder).Drop()

View File

Binary file not shown.

View File

@@ -7,6 +7,8 @@
package db
import (
"bytes"
"github.com/calmh/xdr"
"github.com/syncthing/syncthing/lib/protocol"
)
@@ -16,32 +18,32 @@ type FileInfoTruncated struct {
}
func (o *FileInfoTruncated) UnmarshalXDR(bs []byte) error {
return o.UnmarshalXDRFrom(&xdr.Unmarshaller{Data: bs})
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.DecodeXDRFrom(xr)
}
func (o *FileInfoTruncated) UnmarshalXDRFrom(u *xdr.Unmarshaller) error {
o.Name = u.UnmarshalStringMax(8192)
o.Flags = u.UnmarshalUint32()
o.Modified = int64(u.UnmarshalUint64())
(&o.Version).UnmarshalXDRFrom(u)
o.LocalVersion = int64(u.UnmarshalUint64())
_BlocksSize := int(u.UnmarshalUint32())
func (o *FileInfoTruncated) DecodeXDRFrom(xr *xdr.Reader) error {
o.Name = xr.ReadStringMax(8192)
o.Flags = xr.ReadUint32()
o.Modified = int64(xr.ReadUint64())
(&o.Version).DecodeXDRFrom(xr)
o.LocalVersion = int64(xr.ReadUint64())
_BlocksSize := int(xr.ReadUint32())
if _BlocksSize < 0 {
return xdr.ElementSizeExceeded("Blocks", _BlocksSize, 1000000)
} else if _BlocksSize == 0 {
o.Blocks = nil
} else {
if _BlocksSize > 1000000 {
return xdr.ElementSizeExceeded("Blocks", _BlocksSize, 1000000)
}
for i := 0; i < _BlocksSize; i++ {
size := int64(u.UnmarshalUint32())
o.CachedSize += size
u.UnmarshalBytes()
}
}
if _BlocksSize > 1000000 {
return xdr.ElementSizeExceeded("Blocks", _BlocksSize, 1000000)
}
return u.Error
buf := make([]byte, 64)
for i := 0; i < _BlocksSize; i++ {
size := xr.ReadUint32()
o.CachedSize += int64(size)
xr.ReadBytesMaxInto(64, buf)
}
return xr.Error()
}
func BlocksToSize(num int) int64 {

View File

@@ -7,7 +7,6 @@
package db
import (
"encoding/binary"
"fmt"
"time"
)
@@ -25,12 +24,10 @@ type VirtualMtimeRepo struct {
}
func NewVirtualMtimeRepo(ldb *Instance, folder string) *VirtualMtimeRepo {
var prefix [5]byte // key type + 4 bytes folder idx number
prefix[0] = KeyTypeVirtualMtime
binary.BigEndian.PutUint32(prefix[1:], ldb.folderIdx.ID([]byte(folder)))
prefix := string(KeyTypeVirtualMtime) + folder
return &VirtualMtimeRepo{
ns: NewNamespacedKV(ldb, string(prefix[:])),
ns: NewNamespacedKV(ldb, prefix),
}
}

View File

@@ -5,6 +5,9 @@
package discover
import (
"bytes"
"io"
"github.com/calmh/xdr"
)
@@ -37,16 +40,13 @@ struct Announce {
*/
func (o Announce) XDRSize() int {
return 4 +
o.This.XDRSize() +
4 + xdr.SizeOfSlice(o.Extra)
func (o Announce) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.EncodeXDRInto(xw)
}
func (o Announce) MarshalXDR() ([]byte, error) {
buf := make([]byte, o.XDRSize())
m := &xdr.Marshaller{Data: buf}
return buf, o.MarshalXDRInto(m)
return o.AppendXDR(make([]byte, 0, 128))
}
func (o Announce) MustMarshalXDR() []byte {
@@ -57,49 +57,58 @@ func (o Announce) MustMarshalXDR() []byte {
return bs
}
func (o Announce) MarshalXDRInto(m *xdr.Marshaller) error {
m.MarshalUint32(o.Magic)
if err := o.This.MarshalXDRInto(m); err != nil {
return err
func (o Announce) AppendXDR(bs []byte) ([]byte, error) {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
_, err := o.EncodeXDRInto(xw)
return []byte(aw), err
}
func (o Announce) EncodeXDRInto(xw *xdr.Writer) (int, error) {
xw.WriteUint32(o.Magic)
_, err := o.This.EncodeXDRInto(xw)
if err != nil {
return xw.Tot(), err
}
if l := len(o.Extra); l > 16 {
return xdr.ElementSizeExceeded("Extra", l, 16)
return xw.Tot(), xdr.ElementSizeExceeded("Extra", l, 16)
}
m.MarshalUint32(uint32(len(o.Extra)))
xw.WriteUint32(uint32(len(o.Extra)))
for i := range o.Extra {
if err := o.Extra[i].MarshalXDRInto(m); err != nil {
return err
_, err := o.Extra[i].EncodeXDRInto(xw)
if err != nil {
return xw.Tot(), err
}
}
return m.Error
return xw.Tot(), xw.Error()
}
func (o *Announce) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.DecodeXDRFrom(xr)
}
func (o *Announce) UnmarshalXDR(bs []byte) error {
u := &xdr.Unmarshaller{Data: bs}
return o.UnmarshalXDRFrom(u)
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.DecodeXDRFrom(xr)
}
func (o *Announce) UnmarshalXDRFrom(u *xdr.Unmarshaller) error {
o.Magic = u.UnmarshalUint32()
(&o.This).UnmarshalXDRFrom(u)
_ExtraSize := int(u.UnmarshalUint32())
func (o *Announce) DecodeXDRFrom(xr *xdr.Reader) error {
o.Magic = xr.ReadUint32()
(&o.This).DecodeXDRFrom(xr)
_ExtraSize := int(xr.ReadUint32())
if _ExtraSize < 0 {
return xdr.ElementSizeExceeded("Extra", _ExtraSize, 16)
} else if _ExtraSize == 0 {
o.Extra = nil
} else {
if _ExtraSize > 16 {
return xdr.ElementSizeExceeded("Extra", _ExtraSize, 16)
}
if _ExtraSize <= len(o.Extra) {
o.Extra = o.Extra[:_ExtraSize]
} else {
o.Extra = make([]Device, _ExtraSize)
}
for i := range o.Extra {
(&o.Extra[i]).UnmarshalXDRFrom(u)
}
}
return u.Error
if _ExtraSize > 16 {
return xdr.ElementSizeExceeded("Extra", _ExtraSize, 16)
}
o.Extra = make([]Device, _ExtraSize)
for i := range o.Extra {
(&o.Extra[i]).DecodeXDRFrom(xr)
}
return xr.Error()
}
/*
@@ -109,8 +118,10 @@ Device Structure:
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Length of ID |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ ID (length + padded data) \
\ ID (variable length) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Number of Addresses |
@@ -135,16 +146,13 @@ struct Device {
*/
func (o Device) XDRSize() int {
return 4 + len(o.ID) + xdr.Padding(len(o.ID)) +
4 + xdr.SizeOfSlice(o.Addresses) +
4 + xdr.SizeOfSlice(o.Relays)
func (o Device) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.EncodeXDRInto(xw)
}
func (o Device) MarshalXDR() ([]byte, error) {
buf := make([]byte, o.XDRSize())
m := &xdr.Marshaller{Data: buf}
return buf, o.MarshalXDRInto(m)
return o.AppendXDR(make([]byte, 0, 128))
}
func (o Device) MustMarshalXDR() []byte {
@@ -155,75 +163,77 @@ func (o Device) MustMarshalXDR() []byte {
return bs
}
func (o Device) MarshalXDRInto(m *xdr.Marshaller) error {
func (o Device) AppendXDR(bs []byte) ([]byte, error) {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
_, err := o.EncodeXDRInto(xw)
return []byte(aw), err
}
func (o Device) EncodeXDRInto(xw *xdr.Writer) (int, error) {
if l := len(o.ID); l > 32 {
return xdr.ElementSizeExceeded("ID", l, 32)
return xw.Tot(), xdr.ElementSizeExceeded("ID", l, 32)
}
m.MarshalBytes(o.ID)
xw.WriteBytes(o.ID)
if l := len(o.Addresses); l > 16 {
return xdr.ElementSizeExceeded("Addresses", l, 16)
return xw.Tot(), xdr.ElementSizeExceeded("Addresses", l, 16)
}
m.MarshalUint32(uint32(len(o.Addresses)))
xw.WriteUint32(uint32(len(o.Addresses)))
for i := range o.Addresses {
if err := o.Addresses[i].MarshalXDRInto(m); err != nil {
return err
_, err := o.Addresses[i].EncodeXDRInto(xw)
if err != nil {
return xw.Tot(), err
}
}
if l := len(o.Relays); l > 16 {
return xdr.ElementSizeExceeded("Relays", l, 16)
return xw.Tot(), xdr.ElementSizeExceeded("Relays", l, 16)
}
m.MarshalUint32(uint32(len(o.Relays)))
xw.WriteUint32(uint32(len(o.Relays)))
for i := range o.Relays {
if err := o.Relays[i].MarshalXDRInto(m); err != nil {
return err
_, err := o.Relays[i].EncodeXDRInto(xw)
if err != nil {
return xw.Tot(), err
}
}
return m.Error
return xw.Tot(), xw.Error()
}
func (o *Device) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.DecodeXDRFrom(xr)
}
func (o *Device) UnmarshalXDR(bs []byte) error {
u := &xdr.Unmarshaller{Data: bs}
return o.UnmarshalXDRFrom(u)
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.DecodeXDRFrom(xr)
}
func (o *Device) UnmarshalXDRFrom(u *xdr.Unmarshaller) error {
o.ID = u.UnmarshalBytesMax(32)
_AddressesSize := int(u.UnmarshalUint32())
func (o *Device) DecodeXDRFrom(xr *xdr.Reader) error {
o.ID = xr.ReadBytesMax(32)
_AddressesSize := int(xr.ReadUint32())
if _AddressesSize < 0 {
return xdr.ElementSizeExceeded("Addresses", _AddressesSize, 16)
} else if _AddressesSize == 0 {
o.Addresses = nil
} else {
if _AddressesSize > 16 {
return xdr.ElementSizeExceeded("Addresses", _AddressesSize, 16)
}
if _AddressesSize <= len(o.Addresses) {
o.Addresses = o.Addresses[:_AddressesSize]
} else {
o.Addresses = make([]Address, _AddressesSize)
}
for i := range o.Addresses {
(&o.Addresses[i]).UnmarshalXDRFrom(u)
}
}
_RelaysSize := int(u.UnmarshalUint32())
if _AddressesSize > 16 {
return xdr.ElementSizeExceeded("Addresses", _AddressesSize, 16)
}
o.Addresses = make([]Address, _AddressesSize)
for i := range o.Addresses {
(&o.Addresses[i]).DecodeXDRFrom(xr)
}
_RelaysSize := int(xr.ReadUint32())
if _RelaysSize < 0 {
return xdr.ElementSizeExceeded("Relays", _RelaysSize, 16)
} else if _RelaysSize == 0 {
o.Relays = nil
} else {
if _RelaysSize > 16 {
return xdr.ElementSizeExceeded("Relays", _RelaysSize, 16)
}
if _RelaysSize <= len(o.Relays) {
o.Relays = o.Relays[:_RelaysSize]
} else {
o.Relays = make([]Relay, _RelaysSize)
}
for i := range o.Relays {
(&o.Relays[i]).UnmarshalXDRFrom(u)
}
}
return u.Error
if _RelaysSize > 16 {
return xdr.ElementSizeExceeded("Relays", _RelaysSize, 16)
}
o.Relays = make([]Relay, _RelaysSize)
for i := range o.Relays {
(&o.Relays[i]).DecodeXDRFrom(xr)
}
return xr.Error()
}
/*
@@ -233,8 +243,10 @@ Address Structure:
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Length of URL |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ URL (length + padded data) \
\ URL (variable length) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
@@ -245,14 +257,13 @@ struct Address {
*/
func (o Address) XDRSize() int {
return 4 + len(o.URL) + xdr.Padding(len(o.URL))
func (o Address) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.EncodeXDRInto(xw)
}
func (o Address) MarshalXDR() ([]byte, error) {
buf := make([]byte, o.XDRSize())
m := &xdr.Marshaller{Data: buf}
return buf, o.MarshalXDRInto(m)
return o.AppendXDR(make([]byte, 0, 128))
}
func (o Address) MustMarshalXDR() []byte {
@@ -263,21 +274,35 @@ func (o Address) MustMarshalXDR() []byte {
return bs
}
func (o Address) MarshalXDRInto(m *xdr.Marshaller) error {
func (o Address) AppendXDR(bs []byte) ([]byte, error) {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
_, err := o.EncodeXDRInto(xw)
return []byte(aw), err
}
func (o Address) EncodeXDRInto(xw *xdr.Writer) (int, error) {
if l := len(o.URL); l > 2083 {
return xdr.ElementSizeExceeded("URL", l, 2083)
return xw.Tot(), xdr.ElementSizeExceeded("URL", l, 2083)
}
m.MarshalString(o.URL)
return m.Error
xw.WriteString(o.URL)
return xw.Tot(), xw.Error()
}
func (o *Address) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.DecodeXDRFrom(xr)
}
func (o *Address) UnmarshalXDR(bs []byte) error {
u := &xdr.Unmarshaller{Data: bs}
return o.UnmarshalXDRFrom(u)
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.DecodeXDRFrom(xr)
}
func (o *Address) UnmarshalXDRFrom(u *xdr.Unmarshaller) error {
o.URL = u.UnmarshalStringMax(2083)
return u.Error
func (o *Address) DecodeXDRFrom(xr *xdr.Reader) error {
o.URL = xr.ReadStringMax(2083)
return xr.Error()
}
/*
@@ -287,8 +312,10 @@ Relay Structure:
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Length of URL |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ URL (length + padded data) \
\ URL (variable length) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Latency |
@@ -302,14 +329,13 @@ struct Relay {
*/
func (o Relay) XDRSize() int {
return 4 + len(o.URL) + xdr.Padding(len(o.URL)) + 4
func (o Relay) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.EncodeXDRInto(xw)
}
func (o Relay) MarshalXDR() ([]byte, error) {
buf := make([]byte, o.XDRSize())
m := &xdr.Marshaller{Data: buf}
return buf, o.MarshalXDRInto(m)
return o.AppendXDR(make([]byte, 0, 128))
}
func (o Relay) MustMarshalXDR() []byte {
@@ -320,21 +346,35 @@ func (o Relay) MustMarshalXDR() []byte {
return bs
}
func (o Relay) MarshalXDRInto(m *xdr.Marshaller) error {
func (o Relay) AppendXDR(bs []byte) ([]byte, error) {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
_, err := o.EncodeXDRInto(xw)
return []byte(aw), err
}
func (o Relay) EncodeXDRInto(xw *xdr.Writer) (int, error) {
if l := len(o.URL); l > 2083 {
return xdr.ElementSizeExceeded("URL", l, 2083)
return xw.Tot(), xdr.ElementSizeExceeded("URL", l, 2083)
}
m.MarshalString(o.URL)
m.MarshalUint32(uint32(o.Latency))
return m.Error
xw.WriteString(o.URL)
xw.WriteUint32(uint32(o.Latency))
return xw.Tot(), xw.Error()
}
func (o *Relay) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.DecodeXDRFrom(xr)
}
func (o *Relay) UnmarshalXDR(bs []byte) error {
u := &xdr.Unmarshaller{Data: bs}
return o.UnmarshalXDRFrom(u)
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.DecodeXDRFrom(xr)
}
func (o *Relay) UnmarshalXDRFrom(u *xdr.Unmarshaller) error {
o.URL = u.UnmarshalStringMax(2083)
o.Latency = int32(u.UnmarshalUint32())
return u.Error
func (o *Relay) DecodeXDRFrom(xr *xdr.Reader) error {
o.URL = xr.ReadStringMax(2083)
o.Latency = int32(xr.ReadUint32())
return xr.Error()
}

View File

@@ -23,6 +23,7 @@ const (
LevelDebug LogLevel = iota
LevelVerbose
LevelInfo
LevelOK
LevelWarn
LevelFatal
NumLevels
@@ -41,6 +42,8 @@ type Logger interface {
Verbosef(format string, vals ...interface{})
Infoln(vals ...interface{})
Infof(format string, vals ...interface{})
Okln(vals ...interface{})
Okf(format string, vals ...interface{})
Warnln(vals ...interface{})
Warnf(format string, vals ...interface{})
Fatalln(vals ...interface{})
@@ -162,6 +165,24 @@ func (l *logger) Infof(format string, vals ...interface{}) {
l.callHandlers(LevelInfo, s)
}
// Okln logs a line with an OK prefix.
func (l *logger) Okln(vals ...interface{}) {
l.mut.Lock()
defer l.mut.Unlock()
s := fmt.Sprintln(vals...)
l.logger.Output(2, "OK: "+s)
l.callHandlers(LevelOK, s)
}
// Okf logs a formatted line with an OK prefix.
func (l *logger) Okf(format string, vals ...interface{}) {
l.mut.Lock()
defer l.mut.Unlock()
s := fmt.Sprintf(format, vals...)
l.logger.Output(2, "OK: "+s)
l.callHandlers(LevelOK, s)
}
// Warnln logs a line with a WARNING prefix.
func (l *logger) Warnln(vals ...interface{}) {
l.mut.Lock()

View File

@@ -19,6 +19,8 @@ func TestAPI(t *testing.T) {
l.AddHandler(LevelDebug, checkFunc(t, LevelDebug, &debug))
info := 0
l.AddHandler(LevelInfo, checkFunc(t, LevelInfo, &info))
ok := 0
l.AddHandler(LevelOK, checkFunc(t, LevelOK, &ok))
warn := 0
l.AddHandler(LevelWarn, checkFunc(t, LevelWarn, &warn))
@@ -26,15 +28,20 @@ func TestAPI(t *testing.T) {
l.Debugln("test", 0)
l.Infof("test %d", 1)
l.Infoln("test", 1)
l.Okf("test %d", 2)
l.Okln("test", 2)
l.Warnf("test %d", 3)
l.Warnln("test", 3)
if debug != 6 {
if debug != 8 {
t.Errorf("Debug handler called %d != 8 times", debug)
}
if info != 4 {
if info != 6 {
t.Errorf("Info handler called %d != 6 times", info)
}
if ok != 4 {
t.Errorf("Ok handler called %d != 4 times", ok)
}
if warn != 2 {
t.Errorf("Warn handler called %d != 2 times", warn)
}
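The updated test asserts that a handler registered at a given level also sees lines logged at higher severities (the debug handler counts all eight calls, the new OK handler counts four). A self-contained toy sketch of that handlers-per-level idea; this is not the project's logger package, and the dispatch details are an assumption based only on the counts the test checks.

```go
// Toy leveled logger: handlers registered at a level also receive messages
// logged at higher severities, mirroring what the test above asserts.
package main

import "fmt"

type LogLevel int

const (
	LevelDebug LogLevel = iota
	LevelInfo
	LevelOK
	LevelWarn
)

type handler func(l LogLevel, msg string)

type logger struct {
	handlers [LevelWarn + 1][]handler
}

// AddHandler registers fn for messages at level l and every higher level.
func (lg *logger) AddHandler(l LogLevel, fn handler) {
	for lvl := l; lvl <= LevelWarn; lvl++ {
		lg.handlers[lvl] = append(lg.handlers[lvl], fn)
	}
}

func (lg *logger) log(l LogLevel, msg string) {
	for _, fn := range lg.handlers[l] {
		fn(l, msg)
	}
}

func (lg *logger) Okln(msg string)   { lg.log(LevelOK, "OK: "+msg) }
func (lg *logger) Warnln(msg string) { lg.log(LevelWarn, "WARNING: "+msg) }

func main() {
	var lg logger
	ok := 0
	lg.AddHandler(LevelOK, func(LogLevel, string) { ok++ })
	lg.Okln("ready")
	lg.Warnln("careful")
	fmt.Println(ok) // 2: the OK handler also sees the warning
}
```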

View File

@@ -189,7 +189,7 @@ func (m *Model) StartFolderRW(folder string) {
m.folderRunnerTokens[folder] = append(m.folderRunnerTokens[folder], token)
m.fmut.Unlock()
l.Infoln("Ready to synchronize", folder, "(read-write)")
l.Okln("Ready to synchronize", folder, "(read-write)")
}
func (m *Model) warnAboutOverwritingProtectedFiles(folder string) {
@@ -241,7 +241,7 @@ func (m *Model) StartFolderRO(folder string) {
m.folderRunnerTokens[folder] = append(m.folderRunnerTokens[folder], token)
m.fmut.Unlock()
l.Infoln("Ready to synchronize", folder, "(read only; no external updates accepted)")
l.Okln("Ready to synchronize", folder, "(read only; no external updates accepted)")
}
func (m *Model) RemoveFolder(folder string) {

View File

@@ -11,16 +11,15 @@ type header struct {
compression bool
}
func (h header) MarshalXDRInto(m *xdr.Marshaller) error {
v := encodeHeader(h)
m.MarshalUint32(v)
return m.Error
func (h header) encodeXDR(xw *xdr.Writer) (int, error) {
u := encodeHeader(h)
return xw.WriteUint32(u)
}
func (h *header) UnmarshalXDRFrom(u *xdr.Unmarshaller) error {
v := u.UnmarshalUint32()
*h = decodeHeader(v)
return u.Error
func (h *header) decodeXDR(xr *xdr.Reader) error {
u := xr.ReadUint32()
*h = decodeHeader(u)
return xr.Error()
}
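encodeHeader and decodeHeader pack the header fields into a single uint32; the property test later in this compare constrains version to 4 bits, msgID to 12 bits and msgType to 8 bits. A hypothetical sketch of such a packing follows. The bit positions are an assumption for illustration only, and the real header also carries a compression flag that is omitted here.

```go
// Hypothetical packing of a small header into one uint32, using the field
// widths the header round-trip test constrains.
package main

import "fmt"

type header struct {
	version int
	msgID   int
	msgType int
}

func encodeHeader(h header) uint32 {
	return uint32(h.version&0xf)<<28 |
		uint32(h.msgID&0xfff)<<16 |
		uint32(h.msgType&0xff)<<8
}

func decodeHeader(u uint32) header {
	return header{
		version: int(u >> 28 & 0xf),
		msgID:   int(u >> 16 & 0xfff),
		msgType: int(u >> 8 & 0xff),
	}
}

func main() {
	h := header{version: 2, msgID: 4095, msgType: 42}
	fmt.Println(decodeHeader(encodeHeader(h)) == h) // true
}
```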
func encodeHeader(h header) uint32 {

View File

File diff suppressed because it is too large

View File

@@ -12,7 +12,6 @@ import (
"time"
lz4 "github.com/bkaradzic/go-lz4"
"github.com/calmh/xdr"
)
const (
@@ -131,7 +130,8 @@ type rawConnection struct {
pool sync.Pool
compression Compression
readerBuf []byte // used & reused by readMessage
rdbuf0 []byte // used & reused by readMessage
rdbuf1 []byte // used & reused by readMessage
}
type asyncResult struct {
@@ -146,8 +146,7 @@ type hdrMsg struct {
}
type encodable interface {
MarshalXDRInto(m *xdr.Marshaller) error
XDRSize() int
AppendXDR([]byte) ([]byte, error)
}
type isEofer interface {
@@ -375,14 +374,18 @@ func (c *rawConnection) readerLoop() (err error) {
}
func (c *rawConnection) readMessage() (hdr header, msg encodable, err error) {
hdrBuf := make([]byte, 8)
_, err = io.ReadFull(c.cr, hdrBuf)
if cap(c.rdbuf0) < 8 {
c.rdbuf0 = make([]byte, 8)
} else {
c.rdbuf0 = c.rdbuf0[:8]
}
_, err = io.ReadFull(c.cr, c.rdbuf0)
if err != nil {
return
}
hdr = decodeHeader(binary.BigEndian.Uint32(hdrBuf[:4]))
msglen := int(binary.BigEndian.Uint32(hdrBuf[4:]))
hdr = decodeHeader(binary.BigEndian.Uint32(c.rdbuf0[0:4]))
msglen := int(binary.BigEndian.Uint32(c.rdbuf0[4:8]))
l.Debugf("read header %v (msglen=%d)", hdr, msglen)
@@ -396,40 +399,27 @@ func (c *rawConnection) readMessage() (hdr header, msg encodable, err error) {
return
}
// c.readerBuf contains a buffer we can reuse. But once we've unmarshalled
// a message from the buffer we can't reuse it again as the unmarshalled
// message refers to the contents of the buffer. The only case where a buffer
// ends up in readerBuf for reuse is when the message is compressed, as we
// then decompress into a new buffer instead.
var msgBuf []byte
if cap(c.readerBuf) >= msglen {
// If we have a buffer ready in rdbuf we just use that.
msgBuf = c.readerBuf[:msglen]
if cap(c.rdbuf0) < msglen {
c.rdbuf0 = make([]byte, msglen)
} else {
// Otherwise we allocate a new buffer.
msgBuf = make([]byte, msglen)
c.rdbuf0 = c.rdbuf0[:msglen]
}
_, err = io.ReadFull(c.cr, msgBuf)
_, err = io.ReadFull(c.cr, c.rdbuf0)
if err != nil {
return
}
l.Debugf("read %d bytes", len(msgBuf))
l.Debugf("read %d bytes", len(c.rdbuf0))
msgBuf := c.rdbuf0
if hdr.compression && msglen > 0 {
// We're going to decompress msgBuf into a different newly allocated
// buffer, so keep msgBuf around for reuse on the next message.
c.readerBuf = msgBuf
msgBuf, err = lz4.Decode(nil, msgBuf)
c.rdbuf1 = c.rdbuf1[:cap(c.rdbuf1)]
c.rdbuf1, err = lz4.Decode(c.rdbuf1, c.rdbuf0)
if err != nil {
return
}
msgBuf = c.rdbuf1
l.Debugf("decompressed to %d bytes", len(msgBuf))
} else {
c.readerBuf = nil
}
if shouldDebug() {
@@ -611,14 +601,7 @@ func (c *rawConnection) writerLoop() {
case hm := <-c.outbox:
if hm.msg != nil {
// Uncompressed message in uncBuf
msgLen := hm.msg.XDRSize()
if cap(uncBuf) >= msgLen {
uncBuf = uncBuf[:msgLen]
} else {
uncBuf = make([]byte, msgLen)
}
m := &xdr.Marshaller{Data: uncBuf}
err = hm.msg.MarshalXDRInto(m)
uncBuf, err = hm.msg.AppendXDR(uncBuf[:0])
if hm.done != nil {
close(hm.done)
}
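readMessage switches to two reusable buffers: one for the raw (possibly compressed) message bytes and a second kept around as the lz4 decode destination, so neither is reallocated per message. A hedged sketch of that pattern using the same go-lz4 package the diff imports; the `reader` type and buffer sizes here are illustrative, not the connection code above.

```go
// Two-buffer reuse: rdbuf0 holds the raw message, rdbuf1 is reused as the
// lz4 decode destination and only grows when a larger message arrives.
package main

import (
	"fmt"

	lz4 "github.com/bkaradzic/go-lz4"
)

type reader struct {
	rdbuf0 []byte // raw message bytes
	rdbuf1 []byte // decompressed bytes
}

func (r *reader) decode(compressed []byte) ([]byte, error) {
	// Grow rdbuf0 only when the incoming message is larger than before.
	if cap(r.rdbuf0) < len(compressed) {
		r.rdbuf0 = make([]byte, len(compressed))
	} else {
		r.rdbuf0 = r.rdbuf0[:len(compressed)]
	}
	copy(r.rdbuf0, compressed)

	// lz4.Decode writes into the destination slice when it is big enough,
	// otherwise it returns a larger one that we keep for next time.
	var err error
	r.rdbuf1 = r.rdbuf1[:cap(r.rdbuf1)]
	r.rdbuf1, err = lz4.Decode(r.rdbuf1, r.rdbuf0)
	return r.rdbuf1, err
}

func main() {
	payload := []byte("hello hello hello hello hello hello")
	compressed, _ := lz4.Encode(nil, payload)

	var r reader
	out, err := r.decode(compressed)
	fmt.Println(string(out), err, len(out) == len(payload))
}
```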

View File

@@ -3,6 +3,7 @@
package protocol
import (
"bytes"
"encoding/binary"
"encoding/hex"
"encoding/json"
@@ -54,13 +55,14 @@ func TestHeaderMarshalUnmarshal(t *testing.T) {
ver = int(uint(ver) % 16)
id = int(uint(id) % 4096)
typ = int(uint(typ) % 256)
buf := make([]byte, 4)
buf := new(bytes.Buffer)
xw := xdr.NewWriter(buf)
h0 := header{version: ver, msgID: id, msgType: typ}
h0.MarshalXDRInto(&xdr.Marshaller{Data: buf})
h0.encodeXDR(xw)
xr := xdr.NewReader(buf)
var h1 header
h1.UnmarshalXDRFrom(&xdr.Unmarshaller{Data: buf})
h1.decodeXDR(xr)
return h0 == h1
}
if err := quick.Check(f, nil); err != nil {
@@ -126,7 +128,8 @@ func TestVersionErr(t *testing.T) {
c0.ClusterConfig(ClusterConfigMessage{})
c1.ClusterConfig(ClusterConfigMessage{})
timeoutWriteHeader(c0.cw, header{
w := xdr.NewWriter(c0.cw)
timeoutWriteHeader(w, header{
version: 2, // higher than supported
msgID: 0,
msgType: messageTypeIndex,
@@ -151,7 +154,8 @@ func TestTypeErr(t *testing.T) {
c0.ClusterConfig(ClusterConfigMessage{})
c1.ClusterConfig(ClusterConfigMessage{})
timeoutWriteHeader(c0.cw, header{
w := xdr.NewWriter(c0.cw)
timeoutWriteHeader(w, header{
version: 0,
msgID: 0,
msgType: 42, // unknown type
@@ -201,7 +205,7 @@ func TestElementSizeExceededNested(t *testing.T) {
m := ClusterConfigMessage{
ClientName: "longstringlongstringlongstringinglongstringlongstringlonlongstringlongstringlon",
}
_, err := m.MarshalXDR()
_, err := m.EncodeXDR(ioutil.Discard)
if err == nil {
t.Errorf("ID length %d > max 64, but no error", len(m.Folders[0].ID))
}
@@ -209,19 +213,12 @@ func TestElementSizeExceededNested(t *testing.T) {
func TestMarshalIndexMessage(t *testing.T) {
f := func(m1 IndexMessage) bool {
if len(m1.Options) == 0 {
m1.Options = nil
}
for i, f := range m1.Files {
m1.Files[i].CachedSize = 0
if len(f.Blocks) == 0 {
m1.Files[i].Blocks = nil
} else {
for j := range f.Blocks {
f.Blocks[j].Offset = 0
if len(f.Blocks[j].Hash) == 0 {
f.Blocks[j].Hash = nil
}
for j := range f.Blocks {
f.Blocks[j].Offset = 0
if len(f.Blocks[j].Hash) == 0 {
f.Blocks[j].Hash = nil
}
}
}
@@ -236,9 +233,6 @@ func TestMarshalIndexMessage(t *testing.T) {
func TestMarshalRequestMessage(t *testing.T) {
f := func(m1 RequestMessage) bool {
if len(m1.Options) == 0 {
m1.Options = nil
}
return testMarshal(t, "request", &m1, &RequestMessage{})
}
@@ -262,9 +256,6 @@ func TestMarshalResponseMessage(t *testing.T) {
func TestMarshalClusterConfigMessage(t *testing.T) {
f := func(m1 ClusterConfigMessage) bool {
if len(m1.Options) == 0 {
m1.Options = nil
}
return testMarshal(t, "clusterconfig", &m1, &ClusterConfigMessage{})
}
@@ -284,11 +275,13 @@ func TestMarshalCloseMessage(t *testing.T) {
}
type message interface {
MarshalXDR() ([]byte, error)
UnmarshalXDR([]byte) error
EncodeXDR(io.Writer) (int, error)
DecodeXDR(io.Reader) error
}
func testMarshal(t *testing.T, prefix string, m1, m2 message) bool {
var buf bytes.Buffer
failed := func(bc []byte) {
bs, _ := json.MarshalIndent(m1, "", " ")
ioutil.WriteFile(prefix+"-1.txt", bs, 0644)
@@ -301,7 +294,7 @@ func testMarshal(t *testing.T, prefix string, m1, m2 message) bool {
}
}
buf, err := m1.MarshalXDR()
_, err := m1.EncodeXDR(&buf)
if err != nil && strings.Contains(err.Error(), "exceeds size") {
return true
}
@@ -310,20 +303,23 @@ func testMarshal(t *testing.T, prefix string, m1, m2 message) bool {
t.Fatal(err)
}
err = m2.UnmarshalXDR(buf)
bc := make([]byte, len(buf.Bytes()))
copy(bc, buf.Bytes())
err = m2.DecodeXDR(&buf)
if err != nil {
failed(buf)
failed(bc)
t.Fatal(err)
}
ok := reflect.DeepEqual(m1, m2)
if !ok {
failed(buf)
failed(bc)
}
return ok
}
func timeoutWriteHeader(w io.Writer, hdr header) {
func timeoutWriteHeader(w *xdr.Writer, hdr header) {
// This tries to write a message header to w, but times out after a while.
// This is useful because in testing, with a PipeWriter, it will block
// forever if the other side isn't reading any more. On the other hand we
@@ -336,7 +332,8 @@ func timeoutWriteHeader(w io.Writer, hdr header) {
done := make(chan struct{})
go func() {
w.Write(buf[:])
w.WriteRaw(buf[:])
l.Infoln("write completed")
close(done)
}()
select {

View File

@@ -7,29 +7,37 @@ import "github.com/calmh/xdr"
// This stuff is hacked up manually because genxdr doesn't support 'type
// Vector []Counter' declarations and it was tricky when I tried to add it...
func (v Vector) MarshalXDRInto(m *xdr.Marshaller) error {
m.MarshalUint32(uint32(len(v)))
for i := range v {
m.MarshalUint64(uint64(v[i].ID))
m.MarshalUint64(v[i].Value)
}
return m.Error
type xdrWriter interface {
WriteUint32(uint32) (int, error)
WriteUint64(uint64) (int, error)
}
type xdrReader interface {
ReadUint32() uint32
ReadUint64() uint64
}
func (v *Vector) UnmarshalXDRFrom(u *xdr.Unmarshaller) error {
l := int(u.UnmarshalUint32())
// EncodeXDRInto encodes the vector as an XDR object into the given XDR
// encoder.
func (v Vector) EncodeXDRInto(w xdrWriter) (int, error) {
w.WriteUint32(uint32(len(v)))
for i := range v {
w.WriteUint64(uint64(v[i].ID))
w.WriteUint64(v[i].Value)
}
return 4 + 16*len(v), nil
}
// DecodeXDRFrom decodes the XDR objects from the given reader into itself.
func (v *Vector) DecodeXDRFrom(r xdrReader) error {
l := int(r.ReadUint32())
if l > 1e6 {
return xdr.ElementSizeExceeded("number of counters", l, 1e6)
}
n := make(Vector, l)
for i := range n {
n[i].ID = ShortID(u.UnmarshalUint64())
n[i].Value = u.UnmarshalUint64()
n[i].ID = ShortID(r.ReadUint64())
n[i].Value = r.ReadUint64()
}
*v = n
return u.Error
}
func (v Vector) XDRSize() int {
return 4 + 16*len(v)
return nil
}
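The hand-written Vector codec above emits a uint32 counter count followed by an (ID, Value) pair of uint64s per counter, i.e. 4 + 16*len(v) bytes, because genxdr cannot generate it. A stdlib sketch of that wire layout, with local stand-in types (`counter`, `encodeVector`, `decodeVector`) rather than the protocol package's:

```go
// Wire layout of the manual Vector codec: big-endian uint32 count, then one
// (uint64 ID, uint64 Value) pair per counter.
package main

import (
	"encoding/binary"
	"fmt"
)

type counter struct {
	ID    uint64
	Value uint64
}

func encodeVector(v []counter) []byte {
	buf := make([]byte, 4+16*len(v))
	binary.BigEndian.PutUint32(buf, uint32(len(v)))
	for i, c := range v {
		binary.BigEndian.PutUint64(buf[4+16*i:], c.ID)
		binary.BigEndian.PutUint64(buf[4+16*i+8:], c.Value)
	}
	return buf
}

func decodeVector(buf []byte) []counter {
	n := int(binary.BigEndian.Uint32(buf))
	v := make([]counter, n)
	for i := range v {
		v[i].ID = binary.BigEndian.Uint64(buf[4+16*i:])
		v[i].Value = binary.BigEndian.Uint64(buf[4+16*i+8:])
	}
	return v
}

func main() {
	v := []counter{{ID: 1, Value: 7}, {ID: 2, Value: 3}}
	enc := encodeVector(v)
	fmt.Println(len(enc), decodeVector(enc)) // 36 [{1 7} {2 3}]
}
```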

View File

@@ -5,6 +5,9 @@
package protocol
import (
"bytes"
"io"
"github.com/calmh/xdr"
)
@@ -31,14 +34,13 @@ struct header {
*/
func (o header) XDRSize() int {
return 4 + 4 + 4
func (o header) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.EncodeXDRInto(xw)
}
func (o header) MarshalXDR() ([]byte, error) {
buf := make([]byte, o.XDRSize())
m := &xdr.Marshaller{Data: buf}
return buf, o.MarshalXDRInto(m)
return o.AppendXDR(make([]byte, 0, 128))
}
func (o header) MustMarshalXDR() []byte {
@@ -49,22 +51,36 @@ func (o header) MustMarshalXDR() []byte {
return bs
}
func (o header) MarshalXDRInto(m *xdr.Marshaller) error {
m.MarshalUint32(o.magic)
m.MarshalUint32(uint32(o.messageType))
m.MarshalUint32(uint32(o.messageLength))
return m.Error
func (o header) AppendXDR(bs []byte) ([]byte, error) {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
_, err := o.EncodeXDRInto(xw)
return []byte(aw), err
}
func (o header) EncodeXDRInto(xw *xdr.Writer) (int, error) {
xw.WriteUint32(o.magic)
xw.WriteUint32(uint32(o.messageType))
xw.WriteUint32(uint32(o.messageLength))
return xw.Tot(), xw.Error()
}
func (o *header) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.DecodeXDRFrom(xr)
}
func (o *header) UnmarshalXDR(bs []byte) error {
u := &xdr.Unmarshaller{Data: bs}
return o.UnmarshalXDRFrom(u)
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.DecodeXDRFrom(xr)
}
func (o *header) UnmarshalXDRFrom(u *xdr.Unmarshaller) error {
o.magic = u.UnmarshalUint32()
o.messageType = int32(u.UnmarshalUint32())
o.messageLength = int32(u.UnmarshalUint32())
return u.Error
func (o *header) DecodeXDRFrom(xr *xdr.Reader) error {
o.magic = xr.ReadUint32()
o.messageType = int32(xr.ReadUint32())
o.messageLength = int32(xr.ReadUint32())
return xr.Error()
}
/*
@@ -78,9 +94,10 @@ struct Ping {
*/
func (o Ping) XDRSize() int {
return 0
func (o Ping) EncodeXDR(w io.Writer) (int, error) {
return 0, nil
}
func (o Ping) MarshalXDR() ([]byte, error) {
return nil, nil
}
@@ -89,7 +106,15 @@ func (o Ping) MustMarshalXDR() []byte {
return nil
}
func (o Ping) MarshalXDRInto(m *xdr.Marshaller) error {
func (o Ping) AppendXDR(bs []byte) ([]byte, error) {
return bs, nil
}
func (o Ping) EncodeXDRInto(xw *xdr.Writer) (int, error) {
return xw.Tot(), xw.Error()
}
func (o *Ping) DecodeXDR(r io.Reader) error {
return nil
}
@@ -97,8 +122,8 @@ func (o *Ping) UnmarshalXDR(bs []byte) error {
return nil
}
func (o *Ping) UnmarshalXDRFrom(u *xdr.Unmarshaller) error {
return nil
func (o *Ping) DecodeXDRFrom(xr *xdr.Reader) error {
return xr.Error()
}
/*
@@ -112,9 +137,10 @@ struct Pong {
*/
func (o Pong) XDRSize() int {
return 0
func (o Pong) EncodeXDR(w io.Writer) (int, error) {
return 0, nil
}
func (o Pong) MarshalXDR() ([]byte, error) {
return nil, nil
}
@@ -123,7 +149,15 @@ func (o Pong) MustMarshalXDR() []byte {
return nil
}
func (o Pong) MarshalXDRInto(m *xdr.Marshaller) error {
func (o Pong) AppendXDR(bs []byte) ([]byte, error) {
return bs, nil
}
func (o Pong) EncodeXDRInto(xw *xdr.Writer) (int, error) {
return xw.Tot(), xw.Error()
}
func (o *Pong) DecodeXDR(r io.Reader) error {
return nil
}
@@ -131,8 +165,8 @@ func (o *Pong) UnmarshalXDR(bs []byte) error {
return nil
}
func (o *Pong) UnmarshalXDRFrom(u *xdr.Unmarshaller) error {
return nil
func (o *Pong) DecodeXDRFrom(xr *xdr.Reader) error {
return xr.Error()
}
/*
@@ -146,9 +180,10 @@ struct JoinRelayRequest {
*/
func (o JoinRelayRequest) XDRSize() int {
return 0
func (o JoinRelayRequest) EncodeXDR(w io.Writer) (int, error) {
return 0, nil
}
func (o JoinRelayRequest) MarshalXDR() ([]byte, error) {
return nil, nil
}
@@ -157,7 +192,15 @@ func (o JoinRelayRequest) MustMarshalXDR() []byte {
return nil
}
func (o JoinRelayRequest) MarshalXDRInto(m *xdr.Marshaller) error {
func (o JoinRelayRequest) AppendXDR(bs []byte) ([]byte, error) {
return bs, nil
}
func (o JoinRelayRequest) EncodeXDRInto(xw *xdr.Writer) (int, error) {
return xw.Tot(), xw.Error()
}
func (o *JoinRelayRequest) DecodeXDR(r io.Reader) error {
return nil
}
@@ -165,8 +208,8 @@ func (o *JoinRelayRequest) UnmarshalXDR(bs []byte) error {
return nil
}
func (o *JoinRelayRequest) UnmarshalXDRFrom(u *xdr.Unmarshaller) error {
return nil
func (o *JoinRelayRequest) DecodeXDRFrom(xr *xdr.Reader) error {
return xr.Error()
}
/*
@@ -180,9 +223,10 @@ struct RelayFull {
*/
func (o RelayFull) XDRSize() int {
return 0
func (o RelayFull) EncodeXDR(w io.Writer) (int, error) {
return 0, nil
}
func (o RelayFull) MarshalXDR() ([]byte, error) {
return nil, nil
}
@@ -191,7 +235,15 @@ func (o RelayFull) MustMarshalXDR() []byte {
return nil
}
func (o RelayFull) MarshalXDRInto(m *xdr.Marshaller) error {
func (o RelayFull) AppendXDR(bs []byte) ([]byte, error) {
return bs, nil
}
func (o RelayFull) EncodeXDRInto(xw *xdr.Writer) (int, error) {
return xw.Tot(), xw.Error()
}
func (o *RelayFull) DecodeXDR(r io.Reader) error {
return nil
}
@@ -199,8 +251,8 @@ func (o *RelayFull) UnmarshalXDR(bs []byte) error {
return nil
}
func (o *RelayFull) UnmarshalXDRFrom(u *xdr.Unmarshaller) error {
return nil
func (o *RelayFull) DecodeXDRFrom(xr *xdr.Reader) error {
return xr.Error()
}
/*
@@ -210,8 +262,10 @@ JoinSessionRequest Structure:
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Length of Key |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ Key (length + padded data) \
\ Key (variable length) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
@@ -222,14 +276,13 @@ struct JoinSessionRequest {
*/
func (o JoinSessionRequest) XDRSize() int {
return 4 + len(o.Key) + xdr.Padding(len(o.Key))
func (o JoinSessionRequest) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.EncodeXDRInto(xw)
}
func (o JoinSessionRequest) MarshalXDR() ([]byte, error) {
buf := make([]byte, o.XDRSize())
m := &xdr.Marshaller{Data: buf}
return buf, o.MarshalXDRInto(m)
return o.AppendXDR(make([]byte, 0, 128))
}
func (o JoinSessionRequest) MustMarshalXDR() []byte {
@@ -240,21 +293,35 @@ func (o JoinSessionRequest) MustMarshalXDR() []byte {
return bs
}
func (o JoinSessionRequest) MarshalXDRInto(m *xdr.Marshaller) error {
func (o JoinSessionRequest) AppendXDR(bs []byte) ([]byte, error) {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
_, err := o.EncodeXDRInto(xw)
return []byte(aw), err
}
func (o JoinSessionRequest) EncodeXDRInto(xw *xdr.Writer) (int, error) {
if l := len(o.Key); l > 32 {
return xdr.ElementSizeExceeded("Key", l, 32)
return xw.Tot(), xdr.ElementSizeExceeded("Key", l, 32)
}
m.MarshalBytes(o.Key)
return m.Error
xw.WriteBytes(o.Key)
return xw.Tot(), xw.Error()
}
func (o *JoinSessionRequest) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.DecodeXDRFrom(xr)
}
func (o *JoinSessionRequest) UnmarshalXDR(bs []byte) error {
u := &xdr.Unmarshaller{Data: bs}
return o.UnmarshalXDRFrom(u)
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.DecodeXDRFrom(xr)
}
func (o *JoinSessionRequest) UnmarshalXDRFrom(u *xdr.Unmarshaller) error {
o.Key = u.UnmarshalBytesMax(32)
return u.Error
func (o *JoinSessionRequest) DecodeXDRFrom(xr *xdr.Reader) error {
o.Key = xr.ReadBytesMax(32)
return xr.Error()
}
/*
@@ -266,8 +333,10 @@ Response Structure:
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Code |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Length of Message |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ Message (length + padded data) \
\ Message (variable length) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
@@ -279,15 +348,13 @@ struct Response {
*/
func (o Response) XDRSize() int {
return 4 +
4 + len(o.Message) + xdr.Padding(len(o.Message))
func (o Response) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.EncodeXDRInto(xw)
}
func (o Response) MarshalXDR() ([]byte, error) {
buf := make([]byte, o.XDRSize())
m := &xdr.Marshaller{Data: buf}
return buf, o.MarshalXDRInto(m)
return o.AppendXDR(make([]byte, 0, 128))
}
func (o Response) MustMarshalXDR() []byte {
@@ -298,20 +365,34 @@ func (o Response) MustMarshalXDR() []byte {
return bs
}
func (o Response) MarshalXDRInto(m *xdr.Marshaller) error {
m.MarshalUint32(uint32(o.Code))
m.MarshalString(o.Message)
return m.Error
func (o Response) AppendXDR(bs []byte) ([]byte, error) {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
_, err := o.EncodeXDRInto(xw)
return []byte(aw), err
}
func (o Response) EncodeXDRInto(xw *xdr.Writer) (int, error) {
xw.WriteUint32(uint32(o.Code))
xw.WriteString(o.Message)
return xw.Tot(), xw.Error()
}
func (o *Response) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.DecodeXDRFrom(xr)
}
func (o *Response) UnmarshalXDR(bs []byte) error {
u := &xdr.Unmarshaller{Data: bs}
return o.UnmarshalXDRFrom(u)
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.DecodeXDRFrom(xr)
}
func (o *Response) UnmarshalXDRFrom(u *xdr.Unmarshaller) error {
o.Code = int32(u.UnmarshalUint32())
o.Message = u.UnmarshalString()
return u.Error
func (o *Response) DecodeXDRFrom(xr *xdr.Reader) error {
o.Code = int32(xr.ReadUint32())
o.Message = xr.ReadString()
return xr.Error()
}
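As a worked example of the Response layout in the diagram above, here is a hedged, self-contained sketch that lays out Code and Message by hand using plain XDR rules (big-endian 32-bit words, string as length word plus bytes plus zero padding). It deliberately bypasses the xdr package and is illustrative only:

package main

import (
    "bytes"
    "encoding/binary"
    "fmt"
)

// encodeResponse writes Code, then the Message length, the Message bytes,
// and finally zero padding to the next 4-byte boundary.
func encodeResponse(code int32, message string) []byte {
    var buf bytes.Buffer
    binary.Write(&buf, binary.BigEndian, uint32(code))
    binary.Write(&buf, binary.BigEndian, uint32(len(message)))
    buf.WriteString(message)
    buf.Write(make([]byte, (4-len(message)%4)%4)) // zero padding
    return buf.Bytes()
}

func main() {
    bs := encodeResponse(0, "success")
    fmt.Println(len(bs)) // 16 = 4 (code) + 4 (length) + 7 ("success") + 1 (pad)
    fmt.Printf("% x\n", bs)
}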
/*
@@ -321,8 +402,10 @@ ConnectRequest Structure:
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Length of ID |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ ID (length + padded data) \
\ ID (variable length) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
@@ -333,14 +416,13 @@ struct ConnectRequest {
*/
func (o ConnectRequest) XDRSize() int {
return 4 + len(o.ID) + xdr.Padding(len(o.ID))
func (o ConnectRequest) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.EncodeXDRInto(xw)
}
func (o ConnectRequest) MarshalXDR() ([]byte, error) {
buf := make([]byte, o.XDRSize())
m := &xdr.Marshaller{Data: buf}
return buf, o.MarshalXDRInto(m)
return o.AppendXDR(make([]byte, 0, 128))
}
func (o ConnectRequest) MustMarshalXDR() []byte {
@@ -351,21 +433,35 @@ func (o ConnectRequest) MustMarshalXDR() []byte {
return bs
}
func (o ConnectRequest) MarshalXDRInto(m *xdr.Marshaller) error {
func (o ConnectRequest) AppendXDR(bs []byte) ([]byte, error) {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
_, err := o.EncodeXDRInto(xw)
return []byte(aw), err
}
func (o ConnectRequest) EncodeXDRInto(xw *xdr.Writer) (int, error) {
if l := len(o.ID); l > 32 {
return xdr.ElementSizeExceeded("ID", l, 32)
return xw.Tot(), xdr.ElementSizeExceeded("ID", l, 32)
}
m.MarshalBytes(o.ID)
return m.Error
xw.WriteBytes(o.ID)
return xw.Tot(), xw.Error()
}
func (o *ConnectRequest) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.DecodeXDRFrom(xr)
}
func (o *ConnectRequest) UnmarshalXDR(bs []byte) error {
u := &xdr.Unmarshaller{Data: bs}
return o.UnmarshalXDRFrom(u)
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.DecodeXDRFrom(xr)
}
func (o *ConnectRequest) UnmarshalXDRFrom(u *xdr.Unmarshaller) error {
o.ID = u.UnmarshalBytesMax(32)
return u.Error
func (o *ConnectRequest) DecodeXDRFrom(xr *xdr.Reader) error {
o.ID = xr.ReadBytesMax(32)
return xr.Error()
}
/*
@@ -375,19 +471,25 @@ SessionInvitation Structure:
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ From (length + padded data) \
/ /
| Length of From |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ Key (length + padded data) \
\ From (variable length) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Length of Key |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ Address (length + padded data) \
\ Key (variable length) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| 16 zero bits | Port |
| Length of Address |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ Address (variable length) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| 0x0000 | Port |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Server Socket (V=0 or 1) |V|
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
@@ -403,16 +505,13 @@ struct SessionInvitation {
*/
func (o SessionInvitation) XDRSize() int {
return 4 + len(o.From) + xdr.Padding(len(o.From)) +
4 + len(o.Key) + xdr.Padding(len(o.Key)) +
4 + len(o.Address) + xdr.Padding(len(o.Address)) + 4 + 4
func (o SessionInvitation) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.EncodeXDRInto(xw)
}
func (o SessionInvitation) MarshalXDR() ([]byte, error) {
buf := make([]byte, o.XDRSize())
m := &xdr.Marshaller{Data: buf}
return buf, o.MarshalXDRInto(m)
return o.AppendXDR(make([]byte, 0, 128))
}
func (o SessionInvitation) MustMarshalXDR() []byte {
@@ -423,33 +522,47 @@ func (o SessionInvitation) MustMarshalXDR() []byte {
return bs
}
func (o SessionInvitation) MarshalXDRInto(m *xdr.Marshaller) error {
func (o SessionInvitation) AppendXDR(bs []byte) ([]byte, error) {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
_, err := o.EncodeXDRInto(xw)
return []byte(aw), err
}
func (o SessionInvitation) EncodeXDRInto(xw *xdr.Writer) (int, error) {
if l := len(o.From); l > 32 {
return xdr.ElementSizeExceeded("From", l, 32)
return xw.Tot(), xdr.ElementSizeExceeded("From", l, 32)
}
m.MarshalBytes(o.From)
xw.WriteBytes(o.From)
if l := len(o.Key); l > 32 {
return xdr.ElementSizeExceeded("Key", l, 32)
return xw.Tot(), xdr.ElementSizeExceeded("Key", l, 32)
}
m.MarshalBytes(o.Key)
xw.WriteBytes(o.Key)
if l := len(o.Address); l > 32 {
return xdr.ElementSizeExceeded("Address", l, 32)
return xw.Tot(), xdr.ElementSizeExceeded("Address", l, 32)
}
m.MarshalBytes(o.Address)
m.MarshalUint16(o.Port)
m.MarshalBool(o.ServerSocket)
return m.Error
xw.WriteBytes(o.Address)
xw.WriteUint16(o.Port)
xw.WriteBool(o.ServerSocket)
return xw.Tot(), xw.Error()
}
func (o *SessionInvitation) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.DecodeXDRFrom(xr)
}
func (o *SessionInvitation) UnmarshalXDR(bs []byte) error {
u := &xdr.Unmarshaller{Data: bs}
return o.UnmarshalXDRFrom(u)
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.DecodeXDRFrom(xr)
}
func (o *SessionInvitation) UnmarshalXDRFrom(u *xdr.Unmarshaller) error {
o.From = u.UnmarshalBytesMax(32)
o.Key = u.UnmarshalBytesMax(32)
o.Address = u.UnmarshalBytesMax(32)
o.Port = u.UnmarshalUint16()
o.ServerSocket = u.UnmarshalBool()
return u.Error
func (o *SessionInvitation) DecodeXDRFrom(xr *xdr.Reader) error {
o.From = xr.ReadBytesMax(32)
o.Key = xr.ReadBytesMax(32)
o.Address = xr.ReadBytesMax(32)
o.Port = xr.ReadUint16()
o.ServerSocket = xr.ReadBool()
return xr.Error()
}
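The generated MarshalXDR/UnmarshalXDR pair still round-trips a message through a byte slice after this change; only the plumbing underneath moved to AppendXDR and DecodeXDRFrom. A minimal usage sketch, assuming the relay protocol package is importable at the path below (adjust to wherever it lives in your checkout):

package main

import (
    "fmt"
    "log"

    // Import path assumed for illustration; the relay protocol package may
    // live elsewhere (e.g. under a vendored relaysrv tree).
    "github.com/syncthing/syncthing/lib/relay/protocol"
)

func main() {
    inv := protocol.SessionInvitation{
        From:         []byte{0x01, 0x02},
        Key:          []byte{0xaa, 0xbb, 0xcc, 0xdd},
        Address:      []byte{192, 0, 2, 42},
        Port:         22067,
        ServerSocket: true,
    }

    // MarshalXDR now appends into a fresh slice via AppendXDR ...
    bs, err := inv.MarshalXDR()
    if err != nil {
        log.Fatal(err)
    }

    // ... and UnmarshalXDR decodes it back through an xdr.Reader.
    var back protocol.SessionInvitation
    if err := back.UnmarshalXDR(bs); err != nil {
        log.Fatal(err)
    }
    fmt.Println(back.Port, back.ServerSocket)
}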

View File

@@ -74,13 +74,7 @@ func WriteMessage(w io.Writer, message interface{}) error {
func ReadMessage(r io.Reader) (interface{}, error) {
var header header
buf := make([]byte, header.XDRSize())
if _, err := io.ReadFull(r, buf); err != nil {
return nil, err
}
if err := header.UnmarshalXDR(buf); err != nil {
if err := header.DecodeXDR(r); err != nil {
return nil, err
}
@@ -88,43 +82,38 @@ func ReadMessage(r io.Reader) (interface{}, error) {
return nil, fmt.Errorf("magic mismatch")
}
buf = make([]byte, int(header.messageLength))
if _, err := io.ReadFull(r, buf); err != nil {
return nil, err
}
switch header.messageType {
case messageTypePing:
var msg Ping
err := msg.UnmarshalXDR(buf)
err := msg.DecodeXDR(r)
return msg, err
case messageTypePong:
var msg Pong
err := msg.UnmarshalXDR(buf)
err := msg.DecodeXDR(r)
return msg, err
case messageTypeJoinRelayRequest:
var msg JoinRelayRequest
err := msg.UnmarshalXDR(buf)
err := msg.DecodeXDR(r)
return msg, err
case messageTypeJoinSessionRequest:
var msg JoinSessionRequest
err := msg.UnmarshalXDR(buf)
err := msg.DecodeXDR(r)
return msg, err
case messageTypeResponse:
var msg Response
err := msg.UnmarshalXDR(buf)
err := msg.DecodeXDR(r)
return msg, err
case messageTypeConnectRequest:
var msg ConnectRequest
err := msg.UnmarshalXDR(buf)
err := msg.DecodeXDR(r)
return msg, err
case messageTypeSessionInvitation:
var msg SessionInvitation
err := msg.UnmarshalXDR(buf)
err := msg.DecodeXDR(r)
return msg, err
case messageTypeRelayFull:
var msg RelayFull
err := msg.UnmarshalXDR(buf)
err := msg.DecodeXDR(r)
return msg, err
}
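The change above makes ReadMessage decode the header and each message body straight off the stream instead of first reading a length-delimited buffer and unmarshalling it. A hedged sketch of how a caller drives WriteMessage/ReadMessage over a connection, using net.Pipe so the example is self-contained (import path assumed as above, and Ping is assumed to be accepted by value, as ReadMessage returns it by value):

package main

import (
    "fmt"
    "log"
    "net"

    // Import path assumed for illustration.
    "github.com/syncthing/syncthing/lib/relay/protocol"
)

func main() {
    a, b := net.Pipe() // stands in for a relay connection
    defer a.Close()
    defer b.Close()

    go func() {
        // WriteMessage frames the message with the XDR header ...
        if err := protocol.WriteMessage(a, protocol.Ping{}); err != nil {
            log.Println("write:", err)
        }
    }()

    // ... and ReadMessage now decodes header and body directly from the
    // stream, with no intermediate per-message buffer.
    msg, err := protocol.ReadMessage(b)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%T\n", msg) // protocol.Ping
}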

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-BEP" "7" "January 30, 2016" "v0.12" "Syncthing"
.TH "SYNCTHING-BEP" "7" "March 04, 2016" "v0.12" "Syncthing"
.SH NAME
syncthing-bep \- Block Exchange Protocol v1
.
@@ -56,7 +56,7 @@ level protocols providing encryption and authentication.
.sp
.nf
.ft C
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-|
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-+
| Block Exchange Protocol |
|\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-|
| Encryption & Auth (TLS 1.2) |
@@ -498,14 +498,14 @@ struct ClusterConfigMessage {
string ClientVersion<64>;
Folder Folders<1000000>;
Option Options<64>;
}
};
struct Folder {
string ID<256>;
Device Devices<1000000>;
unsigned int Flags;
Option Options<64>;
}
};
struct Device {
opaque ID<32>;
@@ -516,12 +516,12 @@ struct Device {
hyper MaxLocalVersion;
unsigned int Flags;
Option Options<64>;
}
};
struct Option {
string Key<64>;
string Value<1024>;
}
};
.ft P
.fi
.UNINDENT
@@ -750,7 +750,7 @@ struct IndexMessage {
FileInfo Files<1000000>;
unsigned int Flags;
Option Options<64>;
}
};
struct FileInfo {
string Name<8192>;
@@ -758,22 +758,22 @@ struct FileInfo {
hyper Modified;
Vector Version;
hyper LocalVersion;
BlockInfo Blocks<1000000>;
}
BlockInfo Blocks<10000000>;
};
struct Vector {
Counter Counters<>
}
Counter Counters<>;
};
struct Counter {
unsigned hyper ID
unsigned hyper Value
}
unsigned hyper ID;
unsigned hyper Value;
};
struct BlockInfo {
unsigned int Size;
opaque Hash<64>;
}
};
.ft P
.fi
.UNINDENT
@@ -858,7 +858,7 @@ struct RequestMessage {
opaque Hash<64>;
unsigned int Flags;
Option Options<64>;
}
};
.ft P
.fi
.UNINDENT
@@ -1054,7 +1054,7 @@ T{
T} T{
Total length
T} T{
64 MiB
512 MiB
T}
_
T{
@@ -1098,7 +1098,7 @@ T{
T} T{
Number of Blocks
T} T{
1.000.000
10.000.000
T}
_
T{
@@ -1214,6 +1214,10 @@ T} T{
T}
_
.TE
.sp
The currently defined values allow a maximum file size of roughly 1220 GiB
(10.000.000 x 128 KiB). The maximum message size is large enough to cover an
Index message describing a file of that maximum size.
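A quick sanity check of that figure as a stand-alone sketch; the 128 KiB block size and the 10.000.000 block limit come from the tables above:

package main

import "fmt"

func main() {
    const (
        blockSize = 128 * 1024 // bytes per block
        maxBlocks = 10000000   // "Number of Blocks" limit above
    )
    maxFile := float64(maxBlocks) * blockSize / (1 << 30) // bytes -> GiB
    fmt.Printf("%.1f GiB\n", maxFile)                     // 1220.7, rounded to 1220 in the text
}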
.SH EXAMPLE EXCHANGE
.TS
center;

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-CONFIG" "5" "January 30, 2016" "v0.12" "Syncthing"
.TH "SYNCTHING-CONFIG" "5" "March 04, 2016" "v0.12" "Syncthing"
.SH NAME
syncthing-config \- Syncthing Configuration
.
@@ -284,6 +284,9 @@ conflict copies altogether.
<device id="5SYI2FS\-LW6YAXI\-JJDYETS\-NDBBPIO\-256MWBO\-XDPXWVG\-24QPUM4\-PDW4UQU" name="syno" compression="metadata" introducer="false">
<address>dynamic</address>
</device>
<device id="2CYF2WQ\-AKZO2QZ\-JAKWLYD\-AGHMQUM\-BGXUOIS\-GYILW34\-HJG3DUK\-LRRYQAR" name="syno local" compression="metadata" introducer="false">
<address>tcp://192.0.2.1:22001</address>
</device>
.ft P
.fi
.UNINDENT
@@ -327,25 +330,32 @@ should copy their list of devices per folder when connecting.
.UNINDENT
.sp
In addition, one or more \fBaddress\fP child elements must be present. Each
contains an address to use when attempting to connect to this device and will
be tried in order. Accepted formats are:
contains an address or host name to use when attempting to connect to this device and will
be tried in order. Entries other than \fBdynamic\fP must be prefixed with \fBtcp://\fP\&. Accepted formats are:
.INDENT 0.0
.TP
.B IPv4 address (\fB192.0.2.42\fP)
.B IPv4 address (\fBtcp://192.0.2.42\fP)
The default port (22000) is used.
.TP
.B IPv4 address and port (\fB192.0.2.42:12345\fP)
.B IPv4 address and port (\fBtcp://192.0.2.42:12345\fP)
The address and port is used as given.
.TP
.B IPv6 address (\fB2001:db8::23:42\fP)
The default port (22000) is used.
.B IPv6 address (\fBtcp://[2001:db8::23:42]\fP)
The default port (22000) is used. The address must be enclosed in
square brackets.
.TP
.B IPv6 address and port (\fB[2001:db8::23:42]:12345\fP)
.B IPv6 address and port (\fBtcp://[2001:db8::23:42]:12345\fP)
The address and port is used as given. The address must be enclosed in
square brackets.
.TP
.B Host name (\fBtcp://fileserver\fP)
The host name will be used on the default port (22000) and connections will be attempted via both IPv4 and IPv6, depending on name resolution.
.TP
.B Host name and port (\fBtcp://fileserver:12345\fP)
The host name will be used on the given port and connections will be attempted via both IPv4 and IPv6, depending on name resolution.
.TP
.B \fBdynamic\fP
The word \fBdynamic\fP means to use local and global discovery to find the
The word \fBdynamic\fP (without \fBtcp://\fP prefix) means to use local and global discovery to find the
device.
.UNINDENT
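All of the tcp:// forms above reduce to a host plus an optional port, with 22000 filled in when no port is given and "dynamic" handled by discovery instead. A minimal sketch of that interpretation, as an illustrative helper only and not Syncthing's own address handling:

package main

import (
    "fmt"
    "net"
    "net/url"
)

// resolveAddress maps a configured address string to "host:port",
// applying the documented default port of 22000.
func resolveAddress(addr string) (string, error) {
    if addr == "dynamic" {
        return "", fmt.Errorf("dynamic: found via local/global discovery instead")
    }
    u, err := url.Parse(addr)
    if err != nil || u.Scheme != "tcp" {
        return "", fmt.Errorf("expected a tcp:// address, got %q", addr)
    }
    port := u.Port()
    if port == "" {
        port = "22000" // documented default
    }
    return net.JoinHostPort(u.Hostname(), port), nil
}

func main() {
    for _, a := range []string{
        "tcp://192.0.2.42",
        "tcp://[2001:db8::23:42]:12345",
        "tcp://fileserver",
        "dynamic",
    } {
        hp, err := resolveAddress(a)
        fmt.Println(a, "->", hp, err)
    }
}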
.SH IGNOREDDEVICE ELEMENT

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-DEVICE-IDS" "7" "January 30, 2016" "v0.12" "Syncthing"
.TH "SYNCTHING-DEVICE-IDS" "7" "March 04, 2016" "v0.12" "Syncthing"
.SH NAME
syncthing-device-ids \- Understanding Device IDs
.

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-EVENT-API" "7" "January 30, 2016" "v0.12" "Syncthing"
.TH "SYNCTHING-EVENT-API" "7" "March 04, 2016" "v0.12" "Syncthing"
.SH NAME
syncthing-event-api \- Event API
.

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-FAQ" "7" "January 30, 2016" "v0.12" "Syncthing"
.TH "SYNCTHING-FAQ" "7" "March 04, 2016" "v0.12" "Syncthing"
.SH NAME
syncthing-faq \- Frequently Asked Questions
.
@@ -156,6 +156,9 @@ causes a conflict on change you\(aqll end up with \fBsync\-conflict\-...sync\-co
.sp
Each user should run their own Syncthing instance. Be aware that you might need
to configure ports such that they do not overlap (see the config.xml).
.SS Does Syncthing support syncing between folders on the same system?
.sp
Syncthing is not designed to sync locally, and the overhead involved in doing so wastes resources. There are better programs for this, such as rsync or Unison.
.SS Is Syncthing my ideal backup application?
.sp
No, Syncthing is not a backup application because all changes to your files

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-GLOBALDISCO" "7" "January 30, 2016" "v0.12" "Syncthing"
.TH "SYNCTHING-GLOBALDISCO" "7" "March 04, 2016" "v0.12" "Syncthing"
.SH NAME
syncthing-globaldisco \- Global Discovery Protocol v3
.

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-LOCALDISCO" "7" "January 30, 2016" "v0.12" "Syncthing"
.TH "SYNCTHING-LOCALDISCO" "7" "March 04, 2016" "v0.12" "Syncthing"
.SH NAME
syncthing-localdisco \- Local Discovery Protocol v3
.

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-NETWORKING" "7" "January 30, 2016" "v0.12" "Syncthing"
.TH "SYNCTHING-NETWORKING" "7" "March 04, 2016" "v0.12" "Syncthing"
.SH NAME
syncthing-networking \- Firewall Setup
.

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-RELAY" "7" "January 30, 2016" "v0.12" "Syncthing"
.TH "SYNCTHING-RELAY" "7" "March 04, 2016" "v0.12" "Syncthing"
.SH NAME
syncthing-relay \- Relay Protocol v1
.

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-REST-API" "7" "January 30, 2016" "v0.12" "Syncthing"
.TH "SYNCTHING-REST-API" "7" "March 04, 2016" "v0.12" "Syncthing"
.SH NAME
syncthing-rest-api \- REST API
.

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-SECURITY" "7" "January 30, 2016" "v0.12" "Syncthing"
.TH "SYNCTHING-SECURITY" "7" "March 04, 2016" "v0.12" "Syncthing"
.SH NAME
syncthing-security \- Security Principles
.

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-STIGNORE" "5" "January 30, 2016" "v0.12" "Syncthing"
.TH "SYNCTHING-STIGNORE" "5" "March 04, 2016" "v0.12" "Syncthing"
.SH NAME
syncthing-stignore \- Prevent files from being synchronized to other nodes
.

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "TODO" "7" "January 30, 2016" "v0.12" "Syncthing"
.TH "TODO" "7" "March 04, 2016" "v0.12" "Syncthing"
.SH NAME
Todo \- Keep automatic backups of deleted files by other nodes
.

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING" "1" "January 30, 2016" "v0.12" "Syncthing"
.TH "SYNCTHING" "1" "March 04, 2016" "v0.12" "Syncthing"
.SH NAME
syncthing \- Syncthing
.
@@ -38,7 +38,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.ft C
syncthing [\-audit] [\-generate=<dir>] [\-gui\-address=<address>] [\-gui\-apikey=<key>]
[\-home=<dir>] [\-logfile=<filename>] [\-logflags=<flags>] [\-no\-browser]
[\-no\-console] [\-no\-restart] [\-reset] [\-upgrade] [\-upgrade\-check]
[\-no\-console] [\-no\-restart] [\-paths] [\-reset] [\-upgrade] [\-upgrade\-check]
[\-upgrade\-to=<url>] [\-verbose] [\-version]
.ft P
.fi
@@ -123,6 +123,11 @@ Do not restart; just exit.
.UNINDENT
.INDENT 0.0
.TP
.B \-paths
Print the paths used for configuration, keys, database, GUI overrides, default sync folder and the log file.
.UNINDENT
.INDENT 0.0
.TP
.B \-reset
Reset the database.
.UNINDENT

View File

@@ -42,7 +42,7 @@ func init() {
}
func main() {
actual := actualAuthorEmails()
actual := actualAuthorEmails("cmd/", "lib/", "gui/", "test/", "script/")
listed := listedAuthorEmails()
missing := actual.except(listed)
if len(missing) > 0 {
@@ -56,8 +56,9 @@ func main() {
// actualAuthorEmails returns the set of author emails found in the actual git
// commit log, except those in excluded commits.
func actualAuthorEmails() stringSet {
cmd := exec.Command("git", "log", "--format=%H %ae")
func actualAuthorEmails(paths ...string) stringSet {
args := append([]string{"log", "--format=%H %ae"}, paths...)
cmd := exec.Command("git", args...)
bs, err := cmd.Output()
if err != nil {
log.Fatal("authorEmails:", err)

View File

@@ -14,6 +14,7 @@ import (
"errors"
"fmt"
"io"
"io/ioutil"
"log"
"math/rand"
"os"
@@ -191,34 +192,30 @@ func alterFiles(dir string) error {
return osutil.TryRename(path, newPath)
}
/*
This doesn't in fact work. Sometimes it appears to. We need to get this sorted...
*/
// Switch between files and directories
case r == 3 && comps > 3 && rand.Float64() < 0.2:
if !info.Mode().IsRegular() {
err = removeAll(path)
if err != nil {
return err
}
d1 := []byte("I used to be a dir: " + path)
err := ioutil.WriteFile(path, d1, 0644)
if err != nil {
return err
}
} else {
err := osutil.Remove(path)
if err != nil {
return err
}
err = os.MkdirAll(path, 0755)
if err != nil {
return err
}
generateFiles(path, 10, 20, "../LICENSE")
}
return err
/*
This fails. Bug?