We've always, since the introduction of conflicts, had the policy that
deletes lose against any other change, for safety's sake. This is a
problem, however, because it means the sort order of versions is not a
total order.
That is, given two versions `A` and `B` that are currently in conflict,
we will sort them in a given order (let's say `A, B`, so `A < B` for
ordering purposes: we say "A wins over B" or "A is newer than B") and
consider the first in the list the winner. The losing device (which has
`B` on disk) will at some point process the conflict, move its file to a
conflict copy, and announce `A'` as the resolved version. The winning
device (with `A` on disk) doesn't do anything.
However, if `A` is deleted, the ordering changes. We still have `A < B`
and, of course, `Adel < A` (this is not even a conflict, just linear
order). In most sane systems this would imply the ordering `Adel < A <
B`; in our case, however, we in fact have `B < Adel`, because any
version wins over a deleted one. So there is no consistent ordering of
the files at all: `Adel < A < B < Adel ???` In practice the deleted
version may end up at the head or the tail of the list, depending on
the order in which we do the comparisons.
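To make the cycle concrete, here is a toy Go sketch of the old rule;
the types and helpers (`version`, `descends`, `newerThan`) are
hypothetical simplifications, not the real FileInfo or version-vector
code:

```go
package main

import "fmt"

// version is a toy stand-in for a FileInfo: a name, a version vector,
// a deleted flag, and a device ID used as the conflict tie breaker.
type version struct {
	name    string
	vv      map[int]int // device ID -> counter
	deleted bool
	device  int
}

// descends reports whether a's version vector strictly dominates b's,
// i.e. a is a linear descendant of b.
func descends(a, b version) bool {
	for dev, n := range b.vv {
		if a.vv[dev] < n {
			return false
		}
	}
	for dev, n := range a.vv {
		if n > b.vv[dev] {
			return true
		}
	}
	return false
}

// newerThan implements the old policy: linear order when one version
// descends from the other; otherwise it's a conflict, where a deleted
// version always loses and the tie breaker decides among the rest.
func newerThan(a, b version) bool {
	if descends(a, b) {
		return true
	}
	if descends(b, a) {
		return false
	}
	if a.deleted != b.deleted {
		return !a.deleted
	}
	return a.device < b.device
}

func main() {
	a := version{"A", map[int]int{1: 1}, false, 1}
	b := version{"B", map[int]int{2: 1}, false, 2}
	adel := version{"Adel", map[int]int{1: 2}, true, 1}

	fmt.Println(newerThan(adel, a)) // true: Adel < A (linear order)
	fmt.Println(newerThan(a, b))    // true: A < B (tie breaker)
	fmt.Println(newerThan(b, adel)) // true: B < Adel -- the cycle
}
```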
Hence, at this point, "whatever" happens and it's not guaranteed to make
any sense. 😬
I propose that we resolve this by simply letting deletes be versions
like anything else and maintaining a total ordering based on just
version vectors with the existing tie breakers, as always. That means a
delete can win in a conflict situation, and the result should be that
the file is moved to a conflict copy on the losing device. I think this
retains data safety to almost the same degree as before, while probably
removing an entire class of strange out-of-sync bugs...
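Sketched against the same toy model as above, the proposed rule simply
drops the special case:

```go
// newerThanProposed ignores the deleted flag entirely: the version
// vector decides linear order, and the existing tie breakers decide
// conflicts, deleted or not. This yields a total order.
func newerThanProposed(a, b version) bool {
	if descends(a, b) {
		return true
	}
	if descends(b, a) {
		return false
	}
	return a.device < b.device
}
```

In the example above this gives the consistent order `Adel < A < B`:
the delete wins, and the device holding `B` makes a conflict copy.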
---
(A potential wrinkle here is that, ideally, we wouldn't even create the
conflict copy when the delete and the losing version represent the same
data -- same as when we handle normal modification conflicts. However,
the deleted FileInfo doesn't carry any information on what the contents
were, so we can't do that right now. A possible future extension would
be to carry the block list hash of the deleted data in the deleted
FileInfo and use that for this purpose, but I don't want to complicate
this PR with that. The block list hash itself also isn't a
protocol-defined thing at the moment; it's something
implementation-dependent that we just use locally.)
This is a draft because I haven't adjusted all the tests yet; I'd like
to get feedback on the change overall first, before spending time on
that.
In my opinion the main win of this change is its lower complexity,
i.e. fewer moving parts. It should also be faster, as it does only one
query instead of two, but I have no idea whether that's practically
relevant.
This also mirrors the v1 DB, where a block map key had the name
appended. Not that this is an argument for the change; it mostly
reassured me that I might not be missing something key here
conceptually (I might still be, of course, please tell me :) ).
The change isn't mainly intrinsically motivated either; it came up
while fixing a bug in the copier. The nested nature of that code made
the fix harder, and "un-nesting" it required me to understand what's
happening. This change fell out of that.
This changes the database structure to use one database per folder,
with a small main database to coordinate. It also reverts the prior
change to buffer all files in memory when pulling, meaning there is now
a phase where the WAL file will grow significantly, at least during the
initial sync of folders with many directories.
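A minimal sketch of the layout, assuming the pure-Go modernc.org/sqlite
driver and entirely hypothetical file names and schema: a small main
database that only coordinates, plus one SQLite file per folder.

```go
package db

import (
	"database/sql"
	"fmt"
	"path/filepath"

	_ "modernc.org/sqlite"
)

// OpenMain opens the small coordinating database, which just tracks
// which folder databases exist.
func OpenMain(dir string) (*sql.DB, error) {
	m, err := sql.Open("sqlite", filepath.Join(dir, "main.db"))
	if err != nil {
		return nil, err
	}
	_, err = m.Exec(`CREATE TABLE IF NOT EXISTS folders (
		id   TEXT PRIMARY KEY,
		path TEXT NOT NULL
	)`)
	return m, err
}

// OpenFolder opens (or creates) the per-folder database. WAL mode is
// what makes the WAL file grow during the initial pull phase noted
// above.
func OpenFolder(dir, folderID string) (*sql.DB, error) {
	f, err := sql.Open("sqlite", filepath.Join(dir, fmt.Sprintf("folder.%s.db", folderID)))
	if err != nil {
		return nil, err
	}
	_, err = f.Exec(`PRAGMA journal_mode = WAL`)
	return f, err
}
```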
---------
Co-authored-by: bt90 <btom1990@googlemail.com>
This reduces the number of file entries we carry in the database,
sometimes significantly. The downside is that if files are deleted
while a device is offline, and that device comes back more than the
cutoff interval (six months) later, those files will get resurrected at
some point.
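A minimal sketch of the cutoff rule, with a hypothetical constant and
function name: deleted entries older than roughly six months are
dropped from the database.

```go
package db

import "time"

const deletedRetention = 182 * 24 * time.Hour // ~six months

// shouldForget reports whether a deleted entry is past the cutoff and
// can be removed.
func shouldForget(deleted bool, modified time.Time) bool {
	return deleted && time.Since(modified) > deletedRetention
}
```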
The current limit is far too low for our workloads. Perhaps we should
aim even higher than the 64MiB this patch proposes?
Citing the SQLite docs:
> [...] after committing a transaction the rollback journal file may
> remain in the file-system. **This increases performance for subsequent
> transactions since overwriting an existing file is faster than
> appending to a file**, but it also consumes file-system space.
tl;dr: if the limit is too low, we're shooting ourselves in the foot in
terms of performance.
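For reference, the limit is set with SQLite's standard
`journal_size_limit` pragma, which takes a byte count. A minimal sketch
of applying the proposed 64 MiB (the function name is hypothetical):

```go
package db

import "database/sql"

// setJournalSizeLimit caps how large the rollback journal / WAL file
// may remain after a commit or checkpoint. 64 MiB = 67108864 bytes.
func setJournalSizeLimit(db *sql.DB) error {
	_, err := db.Exec(`PRAGMA journal_size_limit = 67108864`)
	return err
}
```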
Switch the database from LevelDB to SQLite, for greater stability and
simpler code.
Co-authored-by: Tommy van der Vorst <tommy@pixelspark.nl>
Co-authored-by: bt90 <btom1990@googlemail.com>