Before this patch, the meta field would be mutated even when the transaction was aborted. This patch changes the meta update scheme
as follows:
- when creating the transaction, clone the meta (but keep the pointer location to the previous one)
- all the transaction's methods operate on the WIP meta
- when committing, replace the previous meta with the current one
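A minimal sketch of the scheme, under assumed names (illustrative only, not the actual SDK types):

```rust
/// Illustrative sketch of the clone-on-transaction / swap-on-commit scheme;
/// the names here are assumptions, not the actual SDK types.
#[derive(Clone, Default)]
struct Meta {
    // ... whatever the meta actually holds ...
}

struct Store {
    meta: Meta,
}

struct Transaction<'a> {
    store: &'a mut Store,
    /// Work-in-progress copy of the meta; all the transaction's methods
    /// mutate this copy only.
    wip_meta: Meta,
}

impl Store {
    fn transaction(&mut self) -> Transaction<'_> {
        // Clone the meta when creating the transaction.
        let wip_meta = self.meta.clone();
        Transaction { store: self, wip_meta }
    }
}

impl Transaction<'_> {
    fn commit(self) {
        // Only on commit is the previous meta replaced with the WIP one.
        self.store.meta = self.wip_meta;
    }
    // Dropping the transaction without calling `commit` discards `wip_meta`,
    // so an aborted transaction no longer mutates the meta.
}
```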
This patch is a work-in-progress. It explores an experimental data
structure to store events in an efficient way.
Note: in this comment, I will use the term _store_ to mean _database_
or _storage_.
The biggest constraint is the following: events can be ordered in
multiple ways, either in topological order or in sync order. The problem is
that, when syncing events (with `/sync`), or when fetching events (with
`/messages`), we **don't know** how to order the newly received events
compared to the already downloaded events. A reconciliation algorithm
must be written (see #3058). However, from the “storage” point of view,
events must be read, written and re-ordered efficiently.
The simplest approach would be to use an `ordering_index`, for example.
Every time a new event is inserted, it takes the position of the last
event, increments it by one, and done.
However, inserting a new event in _the middle_ of existing events would
shift all events on one side of the insertion point: given `a`, `b`,
`c`, `d`, `e`, `f` with `f` being the most recent event, if `g` needs
to be inserted between `b` and `c`, then `c`, `d`, `e`, `f`'s ordering
positions need to be shifted. That's not optimal at all as it would
imply a lot of updates in the store.
Example with a relational database:
| ordering_index | event |
|----------------|-------|
| 0 | `a` |
| 1 | `b` |
| 2 | `g` |
| 3 | `c` |
| … | … |
An insertion can be O(n), and it can happen more frequently than one
might think. Let's imagine a permalink to an old message: the user
opens it, a couple of events are fetched (with `/messages`), and these
events must be inserted in the store, thus potentially shifting a lot of
existing events. Another example: imagine the SDK has a search API for
events; as long as no search result is found, the SDK will back-paginate
until reaching the beginning of the room; every time there is a
back-pagination, a block of events will be inserted: there are more and
more events to shift at each back-pagination.
OK, let's forget the `ordering_index`. Let's use a linked list then? Each
event has a _link_ to the _previous_ and to the _next_ event.
Inserting an event would be at worst O(3) in this case: if the previous
event exists, it must be updated; if the next event exists, it must be
updated; finally, insert the new event.
Example with a relational database:
| previous | id | event | next |
|----------|---------|-------|---------|
| null | `id(a)` | `a` | `id(b)` |
| `id(a)` | `id(b)` | `b` | `id(c)` |
| `id(b)` | `id(c)` | `c` | null |
This approach ensures fast _writing_, but terribly slow _reading_.
Indeed, reading N events requires N queries in the store. Events aren't
contiguous in the store, and cannot be ordered by the database engine
(e.g. with `ORDER BY` for SQL-based databases). So it really requires one
query per event. That's a no-go.
In the two scenarios above, another problem arises. How to represent
a gap? Indeed, when new events are synced (via `/sync`), sometimes the
response contains a `limited` flag, which means that the results are
_partial_.
Let's take the following example: the store contains `a`, `b`, `c`.
After a long offline period (during which the room has been pretty
active), a sync is started, which provides the following events: `x`,
`y`, `z` + the _limited_ flag. The app is killed and reopened later.
The event cache store will contain `a`, `b`, `c`, `x`, `y`, `z`. How
do we know that there is a hole/a gap between `c` and `x`? This is
important information! When `z`, `y` and `x` are displayed and the user
scrolls up, the SDK must know that it must back-paginate before
providing `c`, `b` and `a`.
So the data structure we use must also represent gaps. This information
is also crucial for the event reconciliation algorithm.
What about a mix between the two? Here is the _Linked Chunk_.
A _linked chunk_ is like a linked list, except that each node is either
a _Gap_ or an _Items_. A _Gap_ contains nothing, it's just a gap. An
_Items_ contains _several_ events. A node is called a _Chunk_. A _chunk_
has a maximum size, which is called its _capacity_. When a chunk is full,
a new chunk is created and linked appropriately. Inside a chunk, an
ordering index is used to order events. At this point, it becomes a
trade-off to find the appropriate chunk size to balance the performance
between reading and writing. Nonetheless, if the chunk size is 50, then
reading events is 50 times more efficient with a linked chunk than with
a linked list, and writing events is at worst O(49), compared to the O(n
- 1) of the ordering index.
Example with a relational database. First table is `events`, second
table is `chunks`.
| chunk id | index | event |
|----------|-------|-------|
| `$0` | 0 | `a` |
| `$0` | 1 | `b` |
| `$0` | 2 | `c` |
| `$0` | 3 | `d` |
| `$2` | 0 | `e` |
| `$2` | 1 | `f` |
| `$2` | 2 | `g` |
| `$2` | 3 | `h` |
| chunk id | type | previous | next |
|----------|-------|----------|------|
| `$0` | items | null | `$1` |
| `$1` | gap | `$0` | `$2` |
| `$2` | items | `$1` | null |
Reading the last chunk consists of reading all events where the
chunk ID is `$2`, for example; it contains events `e`, `f`, `g` and
`h`. We can sort them easily by using the `index` column. The
previous chunk is a gap. The chunk before the gap contains events `a`,
`b`, `c` and `d`.
Being able to read events by chunk clearly limits the amount of reading
and writing in the store. It is also close to how the store will really
be used in practice. It also allows representing gaps. We can
replace a gap with a new chunk pretty easily, with few writes.
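To make the shape of the data structure concrete, here is a minimal Rust sketch of the types involved (illustrative only, not the draft implementation contained in this patch):

```rust
/// Illustrative sketch only; not the draft implementation in this patch.
/// Identifier of a chunk, e.g. `$0`, `$1`, `$2` in the tables above.
struct ChunkId(u64);

/// A node of the linked chunk.
struct Chunk<Event> {
    previous: Option<ChunkId>,
    next: Option<ChunkId>,
    content: ChunkContent<Event>,
}

enum ChunkContent<Event> {
    /// A gap: we know nothing about what lies between the surrounding
    /// chunks; it must be filled by a back-pagination.
    Gap,
    /// An items chunk: events ordered by their index inside the chunk,
    /// holding at most `CAPACITY` events.
    Items(Vec<Event>),
}

/// The maximum number of events per items chunk: the trade-off knob between
/// reading and writing performance.
const CAPACITY: usize = 50;
```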
A summary:
| Data structure | Reading | Writing |
|----------------|-------------------|-----------------|
| Ordering index | “O(1)”[^1] (fast) | O(n - 1) (slow) |
| Linked list | O(n) (slow) | O(3) (fast) |
| Linked chunk | O(n / capacity) | O(capacity - 1) |
This patch contains a draft implementation of a linked chunk. It
strictly contains only the API required by the `EventCache`; understand
that it _is not_ designed as a generic data structure type.
[^1]: O(1) because it's simply one query to run; the database engine
does the sorting for us in a very efficient way, particularly if the
`ordering_index` is an unsigned integer.
This adds a new mechanism in the UI crate (since re-attempts to decrypt happen in the timeline as of today; later that will happen in the event cache) to notify whenever we run into a UTD (an event that couldn't be decrypted) or a late decryption (a UTD that could eventually be decrypted after some time).
This new hook will deduplicate pings for the same event (identifying events by their event ID), and also implements an optional grace period (see the sketch after the list). If an event was a UTD:
- if it's still a UTD after the grace period, then it's reported with a `None` `time_to_decrypt`,
- if it's not a UTD anymore (i.e. it's been decrypted in the meantime), then it's reported with a `time_to_decrypt` set to the time it took to decrypt the event (approximate, since it starts counting from the time the timeline receives it, not the time the SDK first fails to decrypt it).
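A rough sketch of the deduplication and grace-period bookkeeping follows; the names and fields are assumptions made for illustration and differ from the actual hook in the UI crate:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Rough sketch only; the actual hook in the UI crate differs in detail.
struct UtdReport {
    event_id: String,
    /// `None` if the event is still a UTD after the grace period.
    time_to_decrypt: Option<Duration>,
}

struct UtdTracker {
    grace_period: Duration,
    /// UTDs seen so far, keyed by event ID, with the time we first saw them.
    pending: HashMap<String, Instant>,
}

impl UtdTracker {
    /// Called when the timeline fails to decrypt an event.
    fn on_utd(&mut self, event_id: &str) {
        // Deduplicate: only record the first time we see this event as a UTD.
        // In the real hook, a task is also scheduled to report the UTD with
        // `time_to_decrypt: None` once the grace period has elapsed.
        self.pending.entry(event_id.to_owned()).or_insert_with(Instant::now);
    }

    /// Called when a previously-UTD event is eventually decrypted.
    fn on_late_decryption(&mut self, event_id: &str) -> Option<UtdReport> {
        self.pending.remove(event_id).map(|first_seen| UtdReport {
            event_id: event_id.to_owned(),
            // Approximate, since counting started when the timeline first
            // received the event.
            time_to_decrypt: Some(first_seen.elapsed()),
        })
    }
}
```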
It's configurable as an optional hook on timeline builders. For the FFI, it's configurable at the sync service level with a "delegate": the sync service forwards the hook to the timelines it creates, and the hook forwards the UTD info to the delegate.
Part of https://github.com/element-hq/element-meta/issues/2300.
---
* ui: add a new module and trait for subscribing to unable-to-decrypt notifications
and late decryptions (i.e. the key came in after the event that required it for decryption).
* timeline: hook in (!) the unable-to-decrypt hook
* timeline: prefix some test names with test_
* utd hook: delay reporting UTDs
* ffi: add support for configuring the utd hook
* utd hook: switch strategy, have a single hook
And have the data structure contain extra information about late-decryption events.
* utd hook: rename `SmartUtdHook` to `UtdHookManager`
* ffi: configure the UTD hook with a grace period of 60 seconds
And ignore UTDs that have been late-decrypted in less than 4 seconds.
* utd hook: update documentation and satisfy the clippy gods
* ffi: introduce another UnableToDecryptInfo FFI struct that exposes simplified fields from the SDK's version
* review: introduce type alias for pending utd reports
* review: address other review comments
This adds support for back-pagination to the event cache, with enough features to integrate with the timeline (which is going to happen in a separate PR).
The idea is to provide two new primitives:
- one to get (or wait for, if we don't have any handy) the latest pagination token received by the sync,
- one to run a single back-pagination, given a token (or no token, in which case it back-paginates from the end of the room's timeline).
The timeline code can then use those two primitives in a loop to replicate its current behavior (next PR to be opened Soon™), as sketched below.
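Here is a rough sketch of that loop; everything in it (names and signatures alike) is an assumption for illustration, not the exact API added by this PR:

```rust
/// Rough sketch of the loop described above; names and signatures are
/// assumptions, not the exact API added by this PR.
struct Outcome {
    events: Vec<String>,
    reached_start: bool,
}

trait BackPaginator {
    /// Primitive 1: the latest pagination token received by the sync, if any.
    fn latest_backpagination_token(&self) -> Option<String>;

    /// Primitive 2: run a single back-pagination; a `None` token means
    /// "back-paginate from the end of the room's timeline".
    fn backpaginate(&mut self, batch_size: u16, token: Option<String>) -> Outcome;
}

fn paginate_until_start(cache: &mut dyn BackPaginator, batch_size: u16) -> Vec<String> {
    let mut all_events = Vec::new();

    loop {
        let token = cache.latest_backpagination_token();
        let outcome = cache.backpaginate(batch_size, token);
        all_events.extend(outcome.events);

        if outcome.reached_start {
            break;
        }
    }

    all_events
}
```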
The representation of events in the store is changed, so that a timeline can have *entries*, which are one of two things:
- either an event, as before
- or a gap, identified by a backpagination token (at the moment)
This allows us to avoid a lot of complexity from the back-pagination code in the timeline, where we'd attach the backpagination token to an event that only had an event_id. We don't have to do this here, and I suppose we could even attach the backpagination token to the next event itself.
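A minimal sketch of what an entry could look like (illustrative names, not the exact SDK types):

```rust
/// Illustrative sketch of a timeline entry in the store; the names here are
/// assumptions, not the exact SDK types.
enum TimelineEntry {
    /// A regular event, as before (the `String` stands in for the full event
    /// payload in this sketch).
    Event(String),
    /// A gap that must be filled by a back-pagination, identified (at the
    /// moment) by its back-pagination token.
    Gap { prev_token: String },
}
```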
This doesn't do reconciliation yet; the plan is to add it as a next step.
It's a bit unclear whether the crypto-store generation counter is doing the right thing
in terms of causing us to reload the OlmMachine. There is a suspicion that things
might be keeping hold of references to the old OlmMachine.
This PR attempts to add the generation number to the logging for any operations that
hold the cross-process lock. It's obviously not bulletproof: for example, it is possible
for the OlmMachine to be replaced without holding the lock; but hopefully this will
at least help us understand what's going on.
This patch enables the `compat-tag-info` feature on Ruma, so that
`TagInfo::order` can be deserialized from either an `f64` or a `string`
representing an `f64`[^1].
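For illustration, here is a sketch of the kind of compat deserialization this enables; this is not Ruma's actual implementation:

```rust
use serde::{Deserialize, Deserializer};

/// Sketch only; not Ruma's actual implementation. Accepts both `0.5` and
/// `"0.5"` for the `order` field.
#[derive(Deserialize)]
struct TagInfoCompat {
    #[serde(default, deserialize_with = "deserialize_order")]
    order: Option<f64>,
}

#[derive(Deserialize)]
#[serde(untagged)]
enum F64OrString {
    Number(f64),
    String(String),
}

fn deserialize_order<'de, D>(deserializer: D) -> Result<Option<f64>, D::Error>
where
    D: Deserializer<'de>,
{
    Ok(match Option::<F64OrString>::deserialize(deserializer)? {
        None => None,
        Some(F64OrString::Number(n)) => Some(n),
        Some(F64OrString::String(s)) => {
            Some(s.parse().map_err(serde::de::Error::custom)?)
        }
    })
}
```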
[^1]: f24cae17f5/crates/ruma-events/src/tag.rs (L180-L185)
Wiremock doesn't allow overwriting a mock, so if we want to mock
different sync response bodies for the same path, we have to mount the
mock as a scoped mock.
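For reference, a hedged example of the pattern using wiremock's scoped mocks; the endpoint path and bodies are placeholders:

```rust
use serde_json::json;
use wiremock::matchers::{method, path};
use wiremock::{Mock, MockServer, ResponseTemplate};

// Placeholder example: a scoped mock is automatically unmounted when its
// guard is dropped, so a later step can mount a different response for the
// same path.
#[tokio::test]
async fn test_two_sync_responses() {
    let server = MockServer::start().await;

    {
        let _guard = Mock::given(method("GET"))
            .and(path("/_matrix/client/v3/sync"))
            .respond_with(ResponseTemplate::new(200).set_body_json(json!({ "next_batch": "s1" })))
            .mount_as_scoped(&server)
            .await;

        // ... run the first sync against `server.uri()` ...
    } // `_guard` is dropped here: the first mock is unmounted.

    Mock::given(method("GET"))
        .and(path("/_matrix/client/v3/sync"))
        .respond_with(ResponseTemplate::new(200).set_body_json(json!({ "next_batch": "s2" })))
        .mount(&server)
        .await;

    // ... run the second sync, which now sees the second body ...
}
```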
This also removes the need to access some internal OlmMachine state to
get us notified about changed devices.
When all room members are loaded, we do not need an incremental member update. We know that parsing the /members response will only lead to more ambiguous names, not less. And because /members returns the complete list, we can directly use that list as the disambiguation map (a sketch of the idea is shown below).
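```rust
use std::collections::{BTreeMap, BTreeSet};

/// Illustrative sketch, not the exact SDK code: when the complete member list
/// is known, the disambiguation map can be rebuilt from scratch instead of
/// being updated incrementally, member by member.
fn build_disambiguation_map(
    members: &[(String, Option<String>)], // (user ID, display name)
) -> BTreeMap<String, BTreeSet<String>> {
    let mut map: BTreeMap<String, BTreeSet<String>> = BTreeMap::new();

    for (user_id, display_name) in members {
        if let Some(name) = display_name {
            // A display name is ambiguous as soon as two distinct users share it.
            map.entry(name.clone()).or_default().insert(user_id.clone());
        }
    }

    map
}
```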
This improves the performance in my emulator from 56s to 9s, and on a less performant device from 11 minutes to 11s (tested experimentally on Matrix HQ using log statements in Element Android; if I have time, I will write a proper benchmark tomorrow).
See also https://github.com/matrix-org/matrix-rust-sdk/pull/3184#issuecomment-1986170631 for a more detailed benchmark run.
---
* members: Simplify disambiguation logic
* members: Prevent api misuse for receive_members
* members: Benchmark receive_all_members performance
* sdk: remove unused import
* sdk-base: rename `ApiMisuse` error to `InvalidReceiveMembersParameters`
* benchmarks: extract the member loading benchmark to `room_bench.rs`
* benchmarks: remove wiremock
* sdk-base: fix format
* sdk-base: try fixing tests
* benchmark: Provide some data to the store so the search and disambiguation happen
* benchmark: fix clippy
* benchmark: use a constant for `MEMBERS_IN_ROOM`
* sdk(style): reduce indent in `receive_all_members`
---------
Co-authored-by: Jorge Martín <jorgem@element.io>
Co-authored-by: Benjamin Bouvier <public@benj.me>
The idea of this patch is to explore the possibility of unifying the
`all_rooms` list with the `invites` list in `RoomListService`.
Since this is entirely experimental, it's behind a new feature
flag. The feature itself can be configured at runtime by using the
new `SyncServiceBuilder::with_unified_invites_in_room_list` builder
method, or directly with the `RoomListService::new_with_unified_invites`
constructor.
This is needed to be able to diff between increases and decreases of power levels ("user Alice was promoted to Admin", etc.).
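A sketch of what that diff enables (illustrative, not the actual FFI types):

```rust
use std::cmp::Ordering;
use std::collections::HashMap;

/// Illustrative sketch, not the actual FFI types: with the previous power
/// levels available alongside the new ones, a client can tell whether a user
/// was promoted or demoted.
fn power_level_change(
    user_id: &str,
    previous: &HashMap<String, i64>,
    new: &HashMap<String, i64>,
) -> Ordering {
    let before = previous.get(user_id).copied().unwrap_or(0);
    let after = new.get(user_id).copied().unwrap_or(0);

    // `Greater` means promoted, `Less` means demoted, `Equal` means no change.
    after.cmp(&before)
}
```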
---
* ffi: add `previous` power levels to `OtherState::RoomPowerLevels`
This is needed to be able to diff between increases and decreases of power levels.
* ffi: please clippy
* ffi: inline initialization of `previous` and `users`