This whole method is a bit broken. It assumes that, when we import room keys
from a backup, the backup they came from is the same as the "current" version.
We *could* add another argument, but to be honest I find the whole method a bit
pointless. It's not particularly tied to the `BackupMachine` abstraction. Let's
instead expose `Store::import_room_keys` and
`ExportedRoomKey::from_backed_up_room_key` publicly, and tell callers to use
those directly.
When we are importing a batch of room keys, use the newly-added
`CryptoStore::save_inbound_group_sessions` method instead of
`CryptoStore::save_changes`.
To do this, we need to pass the backup version into `Store::import_room_keys`
instead of just a `from_backup` flag.
When we add a batch of inbound group sessions to the store, if they came from a
backup, we need to record which backup version they came from.
`CryptoStore::save_changes` doesn't give us an easy way to set the backup
version (we could add it to the `Changes` struct, but ugh).
So, we add a new method `save_inbound_group_sessions` which takes the backup
version as a separate param. This works fine, because whenever we import
sessions there are no other changes to store.
The new method isn't used outside of tests yet -- that comes in a follow-up
commit.
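The shape described above can be sketched as follows. This is a minimal, hypothetical stand-in, not the real `CryptoStore` trait: the `InboundGroupSession` struct, the `backed_up_to` field, and the in-memory store are all illustrative, assuming the new method takes the batch of sessions plus an optional backup version.

```rust
use std::collections::HashMap;

/// Illustrative stand-in for an inbound group session; `backed_up_to`
/// records which backup version the session came from, if any.
struct InboundGroupSession {
    session_id: String,
    backed_up_to: Option<String>,
}

/// Hypothetical slice of the store API described above.
trait CryptoStore {
    /// Save a batch of inbound group sessions, optionally marking them as
    /// coming from the given backup version.
    fn save_inbound_group_sessions(
        &mut self,
        sessions: Vec<InboundGroupSession>,
        backed_up_to_version: Option<&str>,
    );
}

struct MemoryStore {
    sessions: HashMap<String, InboundGroupSession>,
}

impl CryptoStore for MemoryStore {
    fn save_inbound_group_sessions(
        &mut self,
        sessions: Vec<InboundGroupSession>,
        backed_up_to_version: Option<&str>,
    ) {
        for mut session in sessions {
            // Record the backup version each imported session came from.
            if let Some(version) = backed_up_to_version {
                session.backed_up_to = Some(version.to_owned());
            }
            self.sessions.insert(session.session_id.clone(), session);
        }
    }
}
```

Taking the version as a separate parameter (rather than threading it through `Changes`) works because, as noted above, a room-key import has no other changes to store.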
This moves code around, and tweaks the behavior:
- `WeakRoom::get()` returns an `Option`, which will be `None` if either the
  client or the room is missing.
- `PaginableRoom` for `WeakRoom` will now return a default response if
the room could not be reconstructed from the weak room. It's fine to do
so because we're in a shutdown context.
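The `WeakRoom::get()` behaviour above follows the usual weak-reference upgrade pattern. A minimal sketch, assuming hypothetical `Client` and `Room` types (the real `WeakRoom` holds SDK types, not these stand-ins):

```rust
use std::sync::{Arc, Weak};

struct Client;
struct Room;

/// Stand-in for the `WeakRoom` described above: it holds only weak
/// references, so `get()` observes either side being dropped.
struct WeakRoom {
    client: Weak<Client>,
    room: Weak<Room>,
}

impl WeakRoom {
    /// Returns `None` if the client is missing or the room is missing.
    fn get(&self) -> Option<Arc<Room>> {
        // Both weak references must still upgrade for the room to be usable.
        self.client.upgrade()?;
        self.room.upgrade()
    }
}
```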
It's possible the timeline starts with a read marker, in which case the
first item won't be a day divider; in that case, check the next item, if
present. (See `test_start_with_read_marker`.)
This patch adds another check to ensure `AsVector` generates
`VectorDiff`s that, once combined, produce the expected `Vector`. This
catches errors that would be missed when unit testing each `VectorDiff`
alone.
This patch ensures that an underflow is not possible when the length
of `chunks` is 0. In practice, it's not possible because there is
_always_ one chunk inside `LinkedChunk`, but it's better to have good
code habits.
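A minimal sketch of that kind of guard (the function name is illustrative, not the actual code): `checked_sub` turns the would-be underflow on an empty collection into a `None` instead of a panic (or a wrap-around in release builds).

```rust
/// Index of the last chunk, or `None` if there are no chunks.
fn last_chunk_index(chunks_len: usize) -> Option<usize> {
    // `usize::checked_sub` returns `None` rather than underflowing
    // when `chunks_len` is 0.
    chunks_len.checked_sub(1)
}
```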
This patch removes the `unsafe` part of `as_vector`. The idea is to
pass `Iter` (the forward iterator of `Chunk`) to `AsVector` so that it
internally computes `initial_chunk_lengths`. The caller no longer has to
guarantee the shape of this data.
This patch goes a bit further: `UpdateToVectorDiff` has
a new constructor which consumes this `Iter` and builds
`initial_chunk_lengths` itself. Even better!
Finally, `Updates::as_vector` is removed. It's clearly no longer
necessary and it was creating borrowing issues anyway with the new code
structure.
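The constructor change can be sketched like this. It is a simplified stand-in: the real `UpdateToVectorDiff` and `Iter` carry more state, and the chunk item type here is illustrative. The point is that the constructor consumes the iterator and derives `initial_chunk_lengths` itself.

```rust
/// Simplified stand-in for `UpdateToVectorDiff`.
struct UpdateToVectorDiff {
    initial_chunk_lengths: Vec<usize>,
}

impl UpdateToVectorDiff {
    /// Consumes a forward iterator over chunks (a stand-in for `Iter`)
    /// and computes the initial chunk lengths internally, so the caller
    /// cannot supply mismatched data.
    fn new<'a, I>(chunks: I) -> Self
    where
        I: Iterator<Item = &'a [char]>,
    {
        Self {
            initial_chunk_lengths: chunks.map(|chunk| chunk.len()).collect(),
        }
    }
}
```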
Allow applications to skip the PBKDF2 operation if they already have a cryptographically secure key,
instead using a simple HKDF to derive a key.
In order to maintain compatibility with existing element-web sessions, if we discover an
existing store that was encrypted with a PBKDF2-derived key, then we reproduce what
element-web used to do: specifically, we base64-encode the key to obtain the "passphrase" that
was previously passed in. If that matches, we know we've got the right key, and can update the
meta store accordingly.
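The base64 step of the compatibility check can be illustrated as below. This is a self-contained sketch, not the SDK's code: the function names are hypothetical, the encoder is a minimal standard-alphabet base64 implementation included only to keep the example dependency-free, and the real check goes on to verify the reconstructed passphrase against the store.

```rust
/// Minimal standard-alphabet base64 encoder (illustrative; a real
/// implementation would use a base64 crate).
fn base64_encode(input: &[u8]) -> String {
    const ALPHABET: &[u8] =
        b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    let mut out = String::new();
    for chunk in input.chunks(3) {
        // Pack up to 3 bytes into a 24-bit group, then emit 6-bit indices,
        // padding with '=' when the chunk is short.
        let b = [chunk[0], *chunk.get(1).unwrap_or(&0), *chunk.get(2).unwrap_or(&0)];
        let n = ((b[0] as u32) << 16) | ((b[1] as u32) << 8) | b[2] as u32;
        out.push(ALPHABET[(n >> 18) as usize & 63] as char);
        out.push(ALPHABET[(n >> 12) as usize & 63] as char);
        out.push(if chunk.len() > 1 { ALPHABET[(n >> 6) as usize & 63] as char } else { '=' });
        out.push(if chunk.len() > 2 { ALPHABET[n as usize & 63] as char } else { '=' });
    }
    out
}

/// Hypothetical name: reconstruct the "passphrase" that element-web would
/// previously have passed in for this raw key.
fn legacy_passphrase_for_key(key: &[u8]) -> String {
    base64_encode(key)
}
```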
Part of a resolution to element-hq/element-web#26821.
Signed-off-by: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
Co-authored-by: Damir Jelić <poljar@termina.org.uk>