The `RoomEvents` type no longer holds the `Deduplicator` instance; that is now the role of `RoomEventCacheState`. This slightly simplifies the code in a few places.
This patch removes the invariant stating that a `LinkedChunk` must start with a chunk of type items. This invariant has never been really useful, and it has become a hindrance: with iterative loading of a `LinkedChunk` via the `EventCache`, it is now possible for the first chunk to be a gap. Let's remove this invariant.
We encountered this warning a lot in the logs after upgrading the SDK today.
My understanding is that this path is expected if the event is not yet in the timeline, so it's nothing to warn about.
Signed-off-by: Kévin Commaille <zecakeh@tedomum.fr>
A room can be associated with a lot of data, depending on the number of members in the room, so freeing space on the filesystem should be worth it in some cases.
An (extreme) example: I have a test account that is in ~60 rooms, a few of those being big public rooms, including Matrix HQ. The size of the `matrix-sdk-state.sqlite3` file is 542 MB. Using this PR and leaving, then forgetting, Matrix HQ brings the DB down to 255 MB.
The `EventTimelineItem` would go from a `bool` to an `Option<bool>`. If the metadata's `is_room_encrypted` was set to `None`, it would use `false` as the value before wrapping it again into an `Option`.
Since the only reader of `EventTimelineItem::is_room_encrypted()`, namely `EventTimelineItem::get_shield()`, doesn't really care about the difference between `Some(false)` and `None`, we can use a plain `bool` instead of an `Option` and not distinguish the two.
At worst, this means that when we don't yet know the room's encryption status, we consider the room unencrypted; as soon as we get an update about the encryption state, all the items will be marked as encrypted anyway, if need be.
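The collapse described above boils down to one line; here is a hypothetical sketch (the free function is illustrative, not the SDK's actual method):

```rust
// Hypothetical sketch: collapsing an `Option<bool>` encryption flag into a
// plain `bool`, treating "not yet known" the same as "unencrypted".
fn is_room_encrypted(raw: Option<bool>) -> bool {
    // `None` (status unknown) and `Some(false)` both map to `false`; items
    // are re-marked once an encryption state update arrives.
    raw.unwrap_or(false)
}

fn main() {
    assert!(!is_room_encrypted(None));
    assert!(!is_room_encrypted(Some(false)));
    assert!(is_room_encrypted(Some(true)));
}
```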
## Some context

An aggregation is an event that relates to another event: for instance, a reaction, a poll response, and so on.

## Some requirements

Because of the sync mechanisms and federation, it can happen that a related event is received *before* the event it relates to. Those events must be accounted for: stashed somewhere and reapplied later, if/when the related-to event shows up.

In addition, a room's event cache can also decide to move events around in its own internal representation (likely because it ran into some duplicate events, or it managed to decrypt a previously UTD event). When that happens, a timeline opened on the given room will see a removal then a re-insertion of the given event. If that event was the target of aggregations, those aggregations must be reapplied when the event is reinserted.

## Some solution

To satisfy both requirements, the [`Aggregations`] "manager" object provided by this PR takes care of memoizing aggregations, **for the entire lifetime of the timeline** (or until it's cleared by some caller). Aggregations are saved in memory and have the same lifetime as the timeline. This makes it possible to apply pending aggregations, catering for the first use case, and to never lose any aggregations, covering the second use case.
## Some points for the reviewer

- I think the most controversial point is that all aggregations are memoized for the entire lifetime of the timeline. Should that become an issue, we can go back to an incremental scheme in the future: instead of memoizing aggregations for the entire lifetime of the timeline, we'd attach them to a single timeline item. When that item is removed, we'd put the aggregations back into a "pending" stash. If the item is reinserted later, we could peek at the pending stash, remove any matching aggregations, and reapply them to the reinserted event. This is what the [first version of this patch](ec64b9e0bc) did, in a much more ad hoc way, for reactions only; based on the current PR, we could do the same in a simpler manner.
- While the PR has small commits, they don't quite make sense to review individually, I'm afraid, as I was trying to find a general system that would work not only for reactions, but also for poll responses and poll ends. As a matter of fact, the first commits may have introduced code that is changed in subsequent commits, making the review a bit hazardous. Happy to have a live reviewing party over Element Call, if that helps, considering the size of the patch.
- Future work may include using the aggregations manager for edits too, leading to more code removal.
The test ends by checking that the redact endpoint has been hit once. It's actually the send queue doing the redaction, as a dependent send request, and it doesn't provide any notification mechanism in this case, so we can't really know when it's done.
One solution would be to not check the number of calls to the redact endpoint, but that means checking fewer things. Instead, I made it so that when hit, the endpoint signals the main task using a oneshot channel; the main task then waits, with a long timeout, for the receiving end to get the notification, which should be sufficient.
This would help find test failures specific to experimental-oidc, as well as failing doctests (which would have prevented the failures fixed in https://github.com/matrix-org/matrix-rust-sdk/pull/4614 from happening in the first place).
This patch removes the `Clear` variant of the `RoomEventCacheUpdate` enum. It is no longer needed now that we have `UpdateTimelineEvents`, which carries updates as `Vec<VectorDiff<_>>`, and `VectorDiff` _has_ a `Clear` variant. Keeping both resulted in a double clear every time.
This patch updates `RoomEventCacheInner::reset` and `RoomEventCacheInner::with_events_mut` to annotate them with `#[must_use]`. Since they return the updates as `VectorDiff`s, those updates **must** be broadcast/propagated somewhere, likely via a `RoomEventCacheUpdate`. This mechanism ensures no updates are missed.
This patch fixes an issue where 0 was used as the initial value for
the `Skip` higher-order stream in the `TimelineSubscriber`. This is
wrong, as the `SkipCount` value may have been modified before the
`TimelineSubscriber` is created.
This patch provides a test to reproduce the problem.
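The gist of the fix can be sketched as follows (types and function names are illustrative, not the SDK's API): the subscriber must read the live skip count at creation time instead of assuming it starts at 0.

```rust
// Illustrative stand-in for the shared skip count.
struct SkipCount(usize);

// Before the fix, the `Skip` stream was effectively seeded with a
// hard-coded 0; the fix reads the current value at subscription time.
fn initial_skip(skip: &SkipCount) -> usize {
    skip.0
}

fn main() {
    // The count was already bumped before the subscriber exists.
    let skip = SkipCount(5);
    assert_eq!(initial_skip(&skip), 5);
}
```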
This patch gathers the logic of `Timeline::subscribe` into a single type: `TimelineSubscriber`.
The `TimelineController::subscribe_batched_and_limited` method is renamed `subscribe` to match `Timeline::subscribe`. Things are easier to grasp this way.
The `TimelineSubscriber` type configures the subscriber/stream in a single place. It takes an `&ObservableItems` and a `&SkipCount`, and configures everything. It also provides a single place to document the behaviour of the subscriber, with the `Skip` higher-order stream.