## Some context
An aggregation is an event that relates to another event: for instance, a reaction, a poll response, and so on.
## Some requirements
Because of the sync mechanisms and federation, it can happen that a related event is received *before* the event it relates to. Those events must be accounted for, stashed somewhere, and reapplied later, if/when the related-to event shows up.
In addition to that, a room's event cache can also decide to move events around in its own internal representation (likely because it ran into some duplicate events, or it managed to decrypt a previously UTD event).
When that happens, a timeline opened on the given room will see a removal then a re-insertion of the given event. If that event was the target of aggregations, then those aggregations must be re-applied when the given event is reinserted.
## Some solution
To satisfy both requirements, the [`Aggregations`] "manager" object provided by this PR takes care of memoizing aggregations, **for the entire lifetime of the timeline** (or until it's cleared by some caller). Aggregations are saved in memory and have the same lifetime as the timeline itself. This makes it possible to apply pending aggregations, covering the first use case, and to never lose any aggregations, covering the second use case.
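To make the idea concrete, here is a minimal, illustrative sketch of what such a memoizing manager could look like (names and shapes are simplified and are not the actual API):

```rust
use std::collections::HashMap;

use ruma::OwnedEventId;

/// Simplified model of an aggregation (a reaction, a poll response, …).
#[derive(Clone)]
struct Aggregation {
    /// The event this aggregation relates to.
    related_to: OwnedEventId,
    /// The event id of the aggregation event itself.
    own_event_id: OwnedEventId,
}

/// Hypothetical stash of aggregations, kept for the whole timeline lifetime.
#[derive(Default)]
struct Aggregations {
    /// All known aggregations, grouped by the event they relate to.
    by_target: HashMap<OwnedEventId, Vec<Aggregation>>,
}

impl Aggregations {
    /// Remember an aggregation, whether or not its target is known yet.
    fn add(&mut self, aggregation: Aggregation) {
        self.by_target
            .entry(aggregation.related_to.clone())
            .or_default()
            .push(aggregation);
    }

    /// All aggregations targeting `event_id`, to be (re-)applied whenever the
    /// target event is inserted or re-inserted into the timeline.
    fn for_target(&self, event_id: &OwnedEventId) -> &[Aggregation] {
        self.by_target.get(event_id).map(Vec::as_slice).unwrap_or(&[])
    }

    /// Forget everything, e.g. when the timeline is cleared.
    fn clear(&mut self) {
        self.by_target.clear();
    }
}
```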
## Some points for the reviewer
- I think the most controversial point is that all aggregations are memoized for the entire lifetime of the timeline. Should that become an issue, we can switch back to an incremental scheme in the future: instead of memoizing aggregations for the entire lifetime of the timeline, we'd attach them to a single timeline item. When that item is removed, we'd put the aggregations back into a "pending" stash. If the item is reinserted later, we could peek at the pending stash, remove any aggregations that are in there, and reapply them to the reinserted event. This is what the [first version of this patch](ec64b9e0bc) did, in a much more ad hoc way, for reactions only; based on the current PR, we could do the same in a simpler manner
- while the PR has small commits, they don't quite make sense to review individually, I'm afraid, as I was trying to find a general system that would work not only for reactions, but also for poll responses and poll ends. As a matter of fact, the first commits may have introduced code that is changed in subsequent commits, making the review a bit hazardous. Happy to have a live reviewing party over Element Call, if that helps, considering the size of the patch.
- future work may include using the aggregations manager for edits too, leading to more code removal.
The `SyncService::stop()` method could fail for the following reasons:
1. The supervisor was not properly started up; this is a programmer error.
2. The supervisor task won't shut down and instead returns a `JoinError`.
3. We couldn't notify the supervisor task that it should shut down, because the channel was closed.
None of those cases should ever happen, and the supervisor task will be stopped in all of them:
1. Since there is no supervisor to be stopped, we can safely just log an error; our tests ensure that `SyncService::start()` does create a supervisor.
2. A `JoinError` can be returned if the task has been cancelled or if the supervisor task has panicked. Since we never cancel the task, nor have any panics in the supervisor task, we can assume that this won't happen.
3. The supervisor task holds on to a reference to the receiving end of the channel; as long as the task is alive, the channel cannot be closed.
In conclusion, it doesn't seem useful to forward these error cases to the user.
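For illustration, here is a rough sketch (with made-up field and type names, not the actual implementation) of a `stop()` that logs these cases instead of returning them:

```rust
use tokio::{
    sync::{oneshot, Mutex},
    task::JoinHandle,
};

/// Hypothetical handle onto the supervisor task, for illustration only.
struct SupervisorHandle {
    shutdown_sender: oneshot::Sender<()>,
    task: JoinHandle<()>,
}

struct SyncService {
    supervisor: Mutex<Option<SupervisorHandle>>,
}

impl SyncService {
    /// Stop the supervisor; all three failure modes are logged, not returned.
    async fn stop(&self) {
        // Case 1: no supervisor task was ever started (programmer error).
        let Some(supervisor) = self.supervisor.lock().await.take() else {
            tracing::error!("SyncService::stop() called without a running supervisor");
            return;
        };

        // Case 3: the task holds the receiving end, so this can only fail if
        // the task is already gone.
        if supervisor.shutdown_sender.send(()).is_err() {
            tracing::error!("Couldn't notify the supervisor task to shut down");
        }

        // Case 2: a `JoinError` means the task was cancelled or panicked,
        // neither of which should happen.
        if let Err(join_error) = supervisor.task.await {
            tracing::error!("Supervisor task didn't shut down cleanly: {}", join_error);
        }
    }
}
```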
This patch renames `Timeline::subscribe_batched` to `Timeline::subscribe`. Since the original `Timeline::subscribe` method has been removed because it was unused, it no longer makes sense to have a “batched” variant here. Let's simplify things!
As the comment noted, they're essentially doing the same thing. A `TimelineEvent` may not have computed push actions, and in that regard it seemed more correct than `SyncTimelineEvent`, so another commit will make the field optional.
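As a rough sketch of where that's heading (the field and type shape here are assumptions, not the final code), the push actions would become optional:

```rust
use ruma::push::Action;

/// Sketch only: distinguishing "push actions not computed yet" (`None`)
/// from "no actions apply" (`Some(vec![])`).
struct TimelineEvent {
    // … other fields elided …
    push_actions: Option<Vec<Action>>,
}
```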
This patch replaces all calls to `Timeline::subscribe` with `Timeline::subscribe_batched`. Most of them are in tests. It's the first step of a plan to remove `Timeline::subscribe`.
The rest of the patch updates all the tests to use `Timeline::subscribe_batched`.
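For reference, a typical call site after this change looks roughly like the following (a sketch; the exact item and stream types are simplified):

```rust
use futures_util::StreamExt as _;
use matrix_sdk_ui::timeline::Timeline;

/// Sketch of consuming batched timeline updates.
async fn follow_timeline(timeline: &Timeline) {
    // `subscribe_batched` yields the current items plus a stream where each
    // stream item is a whole batch of `VectorDiff`s to apply at once.
    let (initial_items, mut diff_stream) = timeline.subscribe_batched().await;
    println!("timeline starts with {} items", initial_items.len());

    while let Some(diff_batch) = diff_stream.next().await {
        for diff in diff_batch {
            // Apply each diff to a local copy of the items, update the UI, etc.
            println!("got diff: {diff:?}");
        }
    }
}
```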
In some cases, it was used in places where we could make use of other helpers instead. This also introduces the `room_avatar` helper to create the room avatar state event.
Regenerate Dan's data with new cross-signing and device keys, for which I know the private keys.
The signatures are manually calculated for now; this will be improved in a later commit.
This removes one sync that happens in the background, because it's likely spurious and may be confusing the server about what's been seen by the current client.
We have quite a few `allow(dead_code)` annotations. While it's OK to use them in situations where the Cargo-feature combination explodes and makes it hard to reason about when something is actually used or not, in other situations the annotation can be avoided, and removing it reveals actual dead code.
A previous patch deduplicates the remote events conditionally. This patch does the same, but for timeline items.
The `Timeline` has its own deduplication algorithm (for remote events, and for timeline items). The `Timeline` is about to receive its updates via the `EventCache`, which has its own deduplication mechanism (`matrix_sdk::event_cache::Deduplicator`). To avoid conflicts between the two, we conditionally deduplicate timeline items based on `TimelineSettings::vectordiffs_as_inputs`.
This patch takes the liberty to refactor the deduplication mechanism of the timeline items, making it explicit with its own methods so that it can be re-used for `TimelineItemPosition::At`. A specific short-circuit was present before, which is no longer possible with the rewrite to a generic mechanism. Consequently, when a local timeline item becomes a remote timeline item, it was previously updated in place (via `ObservableItems::replace`), but now the local timeline item is removed (via `ObservableItems::remove`) and then the remote timeline item is inserted (via `ObservableItems::insert`). Depending on whether a virtual timeline item like a date divider is around, the position of the removal and the insertion might not be the same (!), which is perfectly fine as the date divider will be re-computed anyway. The result is exactly the same, but the `VectorDiff` updates emitted by the `Timeline` are a bit different (different paths, same result).
This is why this patch needs to update a couple of tests.
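To illustrate what the updated tests now observe, here's a hypothetical helper capturing the new shape of the diffs (illustrative only, not code from this patch):

```rust
use eyeball_im::VectorDiff;

/// Hypothetical test helper: with the generic mechanism, a local echo being
/// promoted to a remote item shows up as a `Remove` followed by an `Insert`
/// (possibly at a different index when a date divider is recomputed),
/// instead of the previous single in-place `Set`.
fn is_local_to_remote_transition<T>(batch: &[VectorDiff<T>]) -> bool {
    matches!(batch, [VectorDiff::Remove { .. }, VectorDiff::Insert { .. }])
}
```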
When starting to back-paginate, in this test, we:
- either have a previous-batch token, that points to the first event *before* the message was sent,
- or have no previous-batch token, because we stopped sync before receiving the first sync result.
Because of the behavior introduced in 944a9220, we don't restart back-paginating from the end if we've reached the start. Now, if we are in the case described by the first bullet item, then we may back-paginate until the start of the room, and stop there, because we've back-paginated all events. As a result, we'll never see the message sent by Alice after we stopped syncing.
One solution to get to the desired state is to clear the internal state of the room event cache, thus deleting the previous-batch token and causing the situation described in the second bullet item. This achieves what we want, that is, back-paginating from the end of the timeline.
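In practice, the test does something along these lines (a sketch, assuming an event-cache clearing API; exact names may differ):

```rust
use matrix_sdk::Room;
use matrix_sdk_ui::timeline::Timeline;

/// Sketch only: forget the previous-batch token by clearing the room's event
/// cache, so the next back-pagination starts again from the end.
async fn restart_pagination_from_the_end(
    room: &Room,
    timeline: &Timeline,
) -> anyhow::Result<()> {
    // Dropping the cached state also drops the stored previous-batch token.
    let (room_event_cache, _drop_handles) = room.event_cache().await?;
    room_event_cache.clear().await?;

    // With no token stored, this behaves like the "never synced" case and
    // paginates backwards from the most recent events.
    timeline.paginate_backwards(20).await?;
    Ok(())
}
```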
Adds new `UtdCause` variants for withheld keys, enabling applications to display customised messages when an Unable-To-Decrypt message is expected.
refactor(crypto): Move WithheldCode from crypto to common crate
These are all coming from macro invocations of macros that are defined in other crates. It's likely a clippy issue. We should try to revert this the next time we bump the nightly version we're using.