It has an account field in addition to the issuer field.
Also store it as immutable; the data used for authentication will be
stored in another variable.
Signed-off-by: Kévin Commaille <zecakeh@tedomum.fr>
`SlidingSync::sync` had a legacy behaviour when an `M_UNKNOWN_POS`
error was received from the server: it reset `pos` and the sticky
parameters before re-running a sync-loop iteration, hoping to get a
valid response from the server after that. This retry mechanism ran
up to 3 times in a row (represented by the
`MAXIMUM_SLIDING_SYNC_SESSION_EXPIRATION` constant) before stopping
the `sync` for real.
While it seemed like a good idea, it actually brought several problems:
1. Each iteration of the sync-loop generates a new request, making
the `ranges` of the requests move forward. For a `SlidingSyncList`
in the `Selective` sync-mode there is no problem, but in the
`Growing` or `Paging` sync-modes the `ranges` kept increasing. Thus,
when a `SlidingSync` session expired, instead of returning to
the caller to do something clever (like what `RoomList` does: start
again with a small range so that the “first” sync after a session
expiration is guaranteed to be fast), it ran larger and
larger requests, up to 3 times.
2. `M_UNKNOWN_POS` _is_ an error. Yes, `SlidingSync` must reset `pos`
and must invalidate sticky parameters, but the `sync` must be
stopped, and the error must be returned so that the caller can do
something about it. Until now, this error was likely to be missed by
the caller.
3. This legacy mechanism prevented `SlidingSync::sync`'s caller from
doing anything with this special error. And since the caller was
blind to this error, smarter error management was impossible.
This patch removes this legacy retry mechanism entirely. When
`M_UNKNOWN_POS` is received from the server, `pos` is reset and sticky
parameters are invalidated as before, but the error is returned like any
other error.
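For illustration, here is a minimal caller-side sketch of the new behaviour, assuming `SlidingSync::sync` returns a stream of results; the types and error handling are simplified, not the SDK's exact API:

```rust
use futures_util::{pin_mut, StreamExt};

// Minimal sketch: drive the sync stream and react to errors ourselves,
// since `SlidingSync` no longer retries internally on `M_UNKNOWN_POS`.
async fn drive_sync(sliding_sync: &matrix_sdk::SlidingSync) {
    let stream = sliding_sync.sync();
    pin_mut!(stream);

    while let Some(result) = stream.next().await {
        match result {
            Ok(update_summary) => {
                // Handle list and room updates as usual.
                let _ = update_summary;
            }
            Err(error) => {
                // On session expiration, `pos` and the sticky parameters
                // have already been reset by `SlidingSync`; the caller can
                // now react, e.g. restart with a small `Selective` range
                // like `RoomList` does, so the first sync stays fast.
                eprintln!("sync stopped: {error}");
                break;
            }
        }
    }
}
```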
This patch renames and updates an existing test that was testing
sticky parameter invalidation on `M_UNKNOWN_POS`: it now includes
assertions on `pos` and ensures that the sync-loop is stopped
accordingly.
This is the first PR towards splitting the sync loop in two. It offers a new high-level API, `NotificationApi`, that makes use of a separate `SlidingSync` instance whose sole role is to listen to to-device events and e2ee; it's pre-configured to do so. That means we're no longer force-enabling e2ee and to-device by default for every sliding sync instance, and as a result we won't generate Olm requests to the homeserver in general.
In the future, this new high-level API will hide some low-level dirty details so that it can be instantiated in multiple processes at the same time (locking across processes, invalidating and refilling crypto caches, etc.).
An embedder who would want to make use of this would need the following:
- a main sliding sync instance, without e2ee and to-device. Using the `matrix_sdk_ui::RoomList` would be the best bet, at this time.
- an instance of this `matrix_sdk_ui::NotificationApi`, with a different identifier.
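As a sketch, the wiring could look like the following; the constructor names and parameters are assumptions for illustration, not the actual `matrix_sdk_ui` signatures:

```rust
use matrix_sdk::Client;
use matrix_sdk_ui::{NotificationApi, RoomList};

// Hypothetical setup: `RoomList::new` and `NotificationApi::new` stand in
// for whatever the real constructors look like.
async fn setup(client: Client) -> anyhow::Result<()> {
    // Main sliding sync instance, without e2ee and to-device.
    let room_list = RoomList::new(client.clone()).await?;

    // Dedicated instance for to-device events and e2ee, registered under a
    // different sliding sync identifier (the string here is illustrative).
    let notification_api = NotificationApi::new(client, "encryption".to_owned()).await?;

    // Both sync loops are then driven independently.
    let _ = (room_list, notification_api);
    Ok(())
}
```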
Note that this is not ready to be used in an external process yet; doing so would cause the same kind of issues that we're seeing as of today: invalid crypto caches resulting in UTDs, etc.
Fixes https://github.com/matrix-org/matrix-rust-sdk/issues/1961.
Usually it's impossible to create an Olm session from a pre-key
message twice: the one-time key that should be used for the 3DH step
has been used up, and we throw a `MissingOneTimeKey` error.
This used to be true and unproblematic until we added fallback keys;
these keys do not get discarded immediately after they have been used
once.
This means that a pre-key message for which we already have a
Session, but whose decryption fails, might create a new Session that
overwrites the existing one, essentially resetting the ratchet.
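A self-contained sketch of the guard this calls for, using placeholder types (the real check lives in the Olm machinery, and these names are illustrative):

```rust
// Placeholder types standing in for the real Olm session machinery.
struct Session {
    session_id: String,
}

struct PreKeyMessage {
    session_id: String,
}

#[derive(Debug)]
enum DecryptionError {
    /// A session matching this pre-key message already exists; recreating
    /// it would reset the ratchet, so we refuse.
    ExistingSession,
}

/// Refuse to create a second session from a pre-key message when a session
/// with the same id already exists, even if decrypting with it failed.
fn check_pre_key_message(
    existing: &[Session],
    message: &PreKeyMessage,
) -> Result<(), DecryptionError> {
    if existing.iter().any(|s| s.session_id == message.session_id) {
        return Err(DecryptionError::ExistingSession);
    }

    Ok(())
}
```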
This implements a value-based lock in the crypto stores. The intent is to use that for multiple processes to be able to make writes into the store concurrently, while still cooperating on who does them. In particular, we need this for #1928, since we may have up to two different processes trying to write into the crypto store at the same time.
## New methods in the `CryptoStore` trait
The idea is to introduce two new methods touching **custom values** in the crypto store:
- one to atomically insert a value, only if it was missing (so, not following the `upsert` semantics used in `set_custom_value`)
- one to atomically remove a custom value
Those two operations match the semantics we want:
- take the lock only if it isn't already taken == insert an entry only if it was missing
- release the lock == remove the entry
By looking at the number of lines affected by the query, we can infer whether the insert/remove happened or not, that is, if we managed to take the lock or not.
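In Rust terms, the two methods could look like this; the names and signatures are illustrative, not necessarily the exact trait additions:

```rust
use async_trait::async_trait;

// Illustrative shape of the two new `CryptoStore` methods.
#[async_trait]
pub trait CryptoStoreLockOps {
    type Error;

    /// Atomically insert `value` under `key`, only if `key` is missing
    /// (so, no `upsert` semantics). A `true` return value means the insert
    /// happened, i.e. we took the lock.
    async fn insert_custom_value_if_missing(
        &self,
        key: &str,
        value: Vec<u8>,
    ) -> Result<bool, Self::Error>;

    /// Atomically remove the custom value under `key`, releasing the lock.
    /// A `true` return value means an entry was actually removed, based on
    /// the number of lines affected by the query.
    async fn remove_custom_value(&self, key: &str) -> Result<bool, Self::Error>;
}
```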
## High-level APIs
I've also added a high-level API, `CryptoStoreLock`, that helps manage such a lock and adds some niceties on top of it:
- exponential backoff to retry attempts at acquiring the lock, when it was already taken
- attempt to gracefully recover when the lock has been taken by an app that's been killed by the environment
- full configuration of the key / value / backoff parameters
While it'd be nice to have something like a `CryptoStoreLockGuard`, it's hard to implement without being racy, because of the `async` operations that would have to happen in the `Drop` implementation (and async drop isn't stable yet).
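As a minimal sketch of the acquire path, reusing the illustrative trait from above (the real `CryptoStoreLock` also handles recovery from killed holders and exposes more configuration):

```rust
use std::time::Duration;

// Sketch of the lock-acquisition loop with exponential backoff; the backoff
// parameters are illustrative.
async fn lock<S: CryptoStoreLockOps>(
    store: &S,
    key: &str,
    holder: &str,
) -> Result<(), S::Error> {
    let mut backoff = Duration::from_millis(50);
    let max_backoff = Duration::from_secs(2);

    loop {
        // Take the lock only if it isn't already taken: insert if missing.
        let taken = store
            .insert_custom_value_if_missing(key, holder.as_bytes().to_vec())
            .await?;

        if taken {
            return Ok(());
        }

        // Someone else holds the lock; wait and retry, backing off.
        tokio::time::sleep(backoff).await;
        backoff = std::cmp::min(backoff * 2, max_backoff);
    }
}
```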
## Test program
There's also a test program in which I shamelessly show my rudimentary unix skills; I've put it in the `labs/` directory, but it could as well be a large integration test. A parent program initially fills a custom crypto store, then creates a `pipe()` for one-way communication with a child created with `fork()`; the parent then sends commands to the child. These commands consist of reading from and writing into the crypto store, using a lock. While the child attempts to perform these operations, the parent tries hard to grab the lock at the same time. This helped figure out a few issues and make sure that cross-process locking works as intended.
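For reference, the rough shape of that harness (using nix 0.26-style `fork`/`pipe` APIs; the actual program in `labs/` may differ):

```rust
use nix::sys::wait::waitpid;
use nix::unistd::{close, fork, pipe, read, write, ForkResult};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // One-way channel: the parent writes commands, the child reads them.
    let (read_fd, write_fd) = pipe()?;

    match unsafe { fork()? } {
        ForkResult::Child => {
            close(write_fd)?;
            let mut buf = [0u8; 128];
            loop {
                let n = read(read_fd, &mut buf)?;
                if n == 0 {
                    break; // The parent closed its end: we're done.
                }
                // Parse the command and run it against the crypto store,
                // taking the `CryptoStoreLock` around each operation.
            }
        }
        ForkResult::Parent { child } => {
            close(read_fd)?;
            // Send a command, then contend for the lock while the child
            // performs the operation.
            write(write_fd, b"write key=foo value=bar\n")?;
            close(write_fd)?;
            waitpid(child, None)?;
        }
    }

    Ok(())
}
```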