Rather than attempting to trigger Alice's encryption sync loop with an incoming
message from Bob, have Alice send a message once Bob has joined the room.
This patch renames the fields in `RoomUpdate`: `join` becomes `joined`,
`leave` becomes `left`, and `invite` becomes `invited`, to match their
associated types.
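As an illustrative sketch only (placeholder types, not the SDK's actual definitions), the renamed fields now read like the update types they hold:

```rust
// Illustrative sketch; the real matrix-sdk types differ in detail.
struct JoinedRoomUpdate;
struct LeftRoomUpdate;
struct InvitedRoomUpdate;

struct RoomUpdate {
    joined: Vec<JoinedRoomUpdate>,   // previously `join`
    left: Vec<LeftRoomUpdate>,       // previously `leave`
    invited: Vec<InvitedRoomUpdate>, // previously `invite`
}
```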
In preparation for threads we have realised that the `reactions`,
`thread_root` and `in_reply_to` fields were only available on `Message` types,
which doesn't play well with Stickers and Polls.
This PR introduces a new `Aggregated` `TimelineItemContent` variant
which holds the message `kind` (Message, Sticker, Poll) as well
as any related aggregated data. It will help treat them all in a similar
fashion and account for future changes.
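As a hedged, simplified sketch (placeholder types, not the SDK's actual definitions), the new variant bundles the kind together with the aggregation data that used to live only on `Message`:

```rust
// Illustrative sketch only; simplified placeholder types.

/// The message-like kinds that can carry aggregations.
enum MsgKind {
    Message(String),
    Sticker(String),
    Poll(String),
}

/// Aggregation data that previously lived only on `Message`.
struct Aggregated {
    kind: MsgKind,
    reactions: Vec<(String, String)>, // (reaction key, sender), simplified
    thread_root: Option<String>,      // thread root event id, if any
    in_reply_to: Option<String>,      // replied-to event id, if any
}
```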
There are no functional changes; it's mostly about moving code around,
and the FFI interfaces haven't changed.
Part of #4833.
This patch introduces the new `EncryptionState` to represent the 3
possible states: `Encrypted`, `NotEncrypted` or `Unknown`. All the
`is_encrypted` methods have been replaced by `encryption_state`.
The most noticeable change is in `matrix_sdk::Room` where `async fn
is_encrypted(&self) -> Result<bool>` has been replaced by `fn
encryption_state(&self) -> EncryptionState`. However, a new `async
fn latest_encryption_state(&self) -> Result<EncryptionState>` method
“restores” the previous behaviour by calling `request_encryption_state`
if necessary.
The idea is that the caller is now responsible for calling
`request_encryption_state` if desired, or for using `latest_encryption_state`
to automate the call if necessary. `encryption_state` is now non-async
and infallible everywhere.
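Below is a minimal, self-contained sketch of the idea; the enum and the caller pattern are illustrative, and the real SDK methods live on `matrix_sdk::Room` with possibly different signatures:

```rust
// Hedged sketch of the three-state enum described above, not the SDK's
// actual definition.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum EncryptionState {
    Encrypted,
    NotEncrypted,
    Unknown,
}

impl EncryptionState {
    /// Convenience helper mirroring the old boolean getter.
    fn is_encrypted(self) -> bool {
        matches!(self, EncryptionState::Encrypted)
    }
}

fn main() {
    // `encryption_state()` is now sync and infallible: it reports what the
    // client currently knows, which may be `Unknown`.
    let state = EncryptionState::Unknown;

    if state == EncryptionState::Unknown {
        // A real caller would `await room.latest_encryption_state()` here,
        // which requests the state from the server if necessary.
        println!("state unknown, fetch it with latest_encryption_state()");
    } else {
        println!("encrypted: {}", state.is_encrypted());
    }
}
```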
`matrix-sdk-ffi` has been updated but no methods have been added for
the moment.
It's unusual to have the method on the parent type when the field's type
could also hold the method. In fact, this was the only bool getter
inspecting the timeline's content, so let's move the method next to
its siblings, for consistency, and spell it out fully for clarity.
This is done to fix an issue where these events were received and processed twice when `NotificationProcessSetup` is `SingleProcess`, causing problems with user verification.
This can be used to ignore verification requests in this sliding sync instance, preventing an issue where several sliding sync instances sharing the same client process events simultaneously and re-process the same verification request events during their initial syncs.
## Some context

An aggregation is an event that relates to another event: for instance, a reaction, a poll response, and so on and so forth.
## Some requirements

Because of the sync mechanisms and federation, it can happen that a related event is received *before* receiving the event it relates to. Those events must be accounted for, stashed somewhere, and reapplied later, if/when the related-to event shows up.
In addition to that, a room's event cache can also decide to move events around, in its own internal representation (likely because it ran into some duplicate events, or it managed to decrypt a previously UTD event). When that happens, a timeline opened on the given room will see a removal then re-insertion of the given event. If that event was the target of aggregations, then those aggregations must be re-applied when the given event is reinserted.
## Some solution

To satisfy both requirements, the [`Aggregations`] "manager" object provided by this PR will take care of memoizing aggregations, **for the entire lifetime of the timeline** (or until it's cleared by some caller). Aggregations are saved in memory, and have the same lifetime as that of a timeline. This makes it possible to apply pending aggregations to cater for the first use case, and to never lose any aggregations in the second use case.
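Below is a minimal, self-contained sketch of that idea, assuming simplified string-based identifiers and payloads rather than the SDK's real types; it is not the actual `Aggregations` implementation:

```rust
use std::collections::HashMap;

/// A simplified aggregation: which event it targets, plus an opaque payload.
/// (Illustrative only; the real types are richer.)
struct Aggregation {
    target_event_id: String,
    payload: String, // e.g. a reaction key or a poll response
}

/// Memoizes aggregations for the whole lifetime of the timeline.
#[derive(Default)]
struct Aggregations {
    by_target: HashMap<String, Vec<Aggregation>>,
}

impl Aggregations {
    /// Record an aggregation, whether or not its target event is known yet.
    fn add(&mut self, aggregation: Aggregation) {
        self.by_target
            .entry(aggregation.target_event_id.clone())
            .or_default()
            .push(aggregation);
    }

    /// Called when an event is inserted (or re-inserted after the event cache
    /// shuffled it around): returns every known aggregation to (re)apply.
    fn pending_for(&self, target_event_id: &str) -> &[Aggregation] {
        self.by_target
            .get(target_event_id)
            .map(Vec::as_slice)
            .unwrap_or(&[])
    }

    /// Forget everything, e.g. when the timeline is cleared.
    fn clear(&mut self) {
        self.by_target.clear();
    }
}
```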
## Some points for the reviewer
- I think the most controversial point is that all aggregations are memoized for the entire lifetime of the timeline. Should that become an issue, we can fall back to an incremental scheme in the future: instead of memoizing aggregations for the entire lifetime of the timeline, we'd attach them to a single timeline item. When that item is removed, we'd put the aggregations back into a "pending" stash of aggregations. If the item is reinserted later, we could peek at the pending stash of aggregations, remove any that's in there, and reapply them to the reinserted event. This is what the [first version of this patch](ec64b9e0bc) did, in a much more ad hoc way, for reactions only; based on the current PR, we could do the same in a simpler manner
- while the PR has small commits, they don't quite make sense to review individually, I'm afraid, as I was trying to find a way to make a general system that would work not only for reactions, but also for poll responses and poll ends. As a matter of fact, the first commits may have introduced code that is changed in subsequent commits, making the review a bit hazardous. Happy to have a live reviewing party over Element Call, if that helps, considering the size of the patch.
- future work may include using the aggregations manager for edits too,
leading to more code removal.
The `SyncService::stop()` method could fail for the following reasons:

1. The supervisor was not properly started up; this is a programmer error.
2. The supervisor task wouldn't shut down and instead returned a `JoinError`.
3. We couldn't notify the supervisor task that it should shut down because the channel was closed.

None of those cases should ever happen, and the supervisor task will be stopped in all of them:
1. Since there is no supervisor to be stopped, we can safely just log an error; our tests ensure that `SyncService::start()` does create a supervisor.
2. A `JoinError` can be returned if the task has been cancelled or if the supervisor task has panicked. Since we never cancel the task, nor have any panics in the supervisor task, we can assume that this won't happen.
3. The supervisor task holds on to a reference to the receiving end of the channel; as long as the task is alive, the channel cannot be closed.

In conclusion, it doesn't seem to be useful to forward these error cases
to the user.
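As a hedged illustration of why swallowing these failures is reasonable, here is a self-contained sketch (not the actual `SyncService` code; the field and type names are made up) of a `stop()` that logs each of the three cases instead of returning an error:

```rust
// Illustrative sketch only, using tokio primitives.
use tokio::{sync::mpsc, task::JoinHandle};

struct Supervisor {
    shutdown_tx: mpsc::Sender<()>,
    handle: JoinHandle<()>,
}

struct SyncService {
    supervisor: Option<Supervisor>,
}

impl SyncService {
    async fn stop(&mut self) {
        // Case 1: no supervisor means start() was never called; a programmer
        // error that we only log.
        let Some(supervisor) = self.supervisor.take() else {
            eprintln!("stop() called without a running supervisor");
            return;
        };

        // Case 3: the receiving end lives inside the supervisor task, so the
        // channel can only be closed if that task is already gone.
        if supervisor.shutdown_tx.send(()).await.is_err() {
            eprintln!("supervisor task already gone, shutdown channel closed");
        }

        // Case 2: a JoinError means the task was cancelled or panicked,
        // neither of which we expect; log and move on.
        if let Err(join_error) = supervisor.handle.await {
            eprintln!("supervisor task failed to shut down cleanly: {join_error}");
        }
    }
}
```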
This patch renames `Timeline::subscribe_batched` to
`Timeline::subscribe`. Since the old `Timeline::subscribe` method has been
removed because it was unused, it no longer makes sense to have a “batched”
variant here. Let's simplify things!