This introduces a new helper object to run arbitrary pagination requests, backwards or forwards. At the moment it's disconnected from the event cache, although I've put the files there for future convenience, since at some point we may want to merge the retrieved events into the cache.
This little state machine makes it possible to retrieve the initial data, given an initial event id, using the /context endpoint, and then allows stateful pagination through a paginator-style API. Paginating in the timeline indicates whether we've reached the start or end of the timeline.
The test for the state subscription is quite extensive and makes sure the basic functionality works as intended.
Some testing helpers have been (re)introduced in the SDK crate, simplifying the code and introducing a better `EventFactory`/`EventBuilder` pattern than the existing one in the `matrix-sdk-test` crate. In particular, it can make use of some types from `matrix-sdk`, notably `SyncTimelineEvent` and `TimelineEvent`, and I've found the API simpler to use as well.
Part of #3234.
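The state machine described above could look roughly like the following std-only sketch. All names (`Paginator`, `PaginatorState`, `start_from`, `paginate_backwards`) are hypothetical stand-ins for illustration, not the crate's actual API:

```rust
/// Hypothetical states of the paginator (names are illustrative only).
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum PaginatorState {
    /// No request has been made yet.
    Initial,
    /// The initial /context request is in flight.
    FetchingTargetEvent,
    /// Ready to paginate, tracking whether we've hit either end of the timeline.
    Idle { hit_start: bool, hit_end: bool },
}

struct Paginator {
    state: PaginatorState,
}

impl Paginator {
    fn new() -> Self {
        Self { state: PaginatorState::Initial }
    }

    /// Simulates the initial /context request for a given event id.
    fn start_from(&mut self, _event_id: &str) {
        self.state = PaginatorState::FetchingTargetEvent;
        // ... run the /context request here; on success:
        self.state = PaginatorState::Idle { hit_start: false, hit_end: false };
    }

    /// Simulates one backwards pagination; returns whether the start of the
    /// timeline has been reached (here fed in as a fake server response).
    fn paginate_backwards(&mut self, reached_start: bool) -> bool {
        if let PaginatorState::Idle { hit_start, hit_end } = self.state {
            if !hit_start {
                self.state = PaginatorState::Idle { hit_start: reached_start, hit_end };
            }
            reached_start
        } else {
            false
        }
    }
}

fn main() {
    let mut paginator = Paginator::new();
    paginator.start_from("$some_event_id");
    assert_eq!(
        paginator.state,
        PaginatorState::Idle { hit_start: false, hit_end: false }
    );
    // A later pagination reports that the start of the timeline was reached.
    assert!(paginator.paginate_backwards(true));
    println!("final state: {:?}", paginator.state);
}
```

The point of the `Idle { hit_start, hit_end }` shape is that consumers subscribing to state changes can tell at a glance whether further pagination in either direction is useful.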
This protects against the sliding sync proxy (or Synapse) sending lots
of duplicate fully read marker events for the same room in case of a
reset. That would lead to forwarding lots and lots of
`RoomEventCacheUpdate`s to the timeline, cluttering the channel and
resulting in a lag. The lag would then clear the timeline, seemingly
causing missing chunks of history.
https://github.com/matrix-org/matrix-rust-sdk/pull/3312 is likely the
long-term fix (after a `clear()`, the timeline should retrieve all the
memoized events from the event cache), but this stops the spamming at
its root in the first place, getting us back to a safe state.
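A minimal std-only sketch of the de-duplication idea: remember the last fully read event id per room, and only forward an update when it actually changes. The names (`FullyReadTracker`, `should_forward`) are hypothetical, not the SDK's real API:

```rust
use std::collections::HashMap;

/// Tracks the last fully read event id seen per room, so that duplicate
/// fully read marker events don't each produce a `RoomEventCacheUpdate`.
/// (Hypothetical helper, for illustration.)
#[derive(Default)]
struct FullyReadTracker {
    last_seen: HashMap<String, String>,
}

impl FullyReadTracker {
    /// Returns true if this marker is new for the room and should be
    /// forwarded to the timeline; false if it's a duplicate to drop.
    fn should_forward(&mut self, room_id: &str, event_id: &str) -> bool {
        match self.last_seen.get(room_id) {
            Some(prev) if prev == event_id => false,
            _ => {
                self.last_seen.insert(room_id.to_owned(), event_id.to_owned());
                true
            }
        }
    }
}

fn main() {
    let mut tracker = FullyReadTracker::default();
    assert!(tracker.should_forward("!room:example.org", "$ev1"));
    // A proxy reset re-sends the same marker many times: forward only once.
    assert!(!tracker.should_forward("!room:example.org", "$ev1"));
    // A genuinely new marker still goes through.
    assert!(tracker.should_forward("!room:example.org", "$ev2"));
    println!("duplicates filtered");
}
```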
We do see lots of "broken channel" log lines for these two log
statements, and since I'm unclear why they happen, I'd like to add a
bit more logging around them.
This also makes the log level consistent: both are set to `warn` instead
of one warn and one error. Usually it's not a big deal, because the only
error that may happen is that the channel is broken, indicating the task
died earlier, so there's no need to stop it manually.
Those two issues really aren't errors we should be too scared of:
- on the one hand, they don't prevent correct usage of the timeline,
they only slightly decrease the quality of the UX;
- on the other hand, they may indicate that read receipts haven't been
received yet, or that some events are unknown (which can happen if the
previous read receipt refers to an event the timeline doesn't know
about).
So I lowered their severity from error to debug, since in my opinion
only actionable issues should be errors or warnings.
`force_update_sender_profiles` goes through all the timeline items even
when the `ambiguity_changes` map is empty, leading to wasted work. It
might not be a big deal, but it's nice to avoid awaiting locks when we
don't have to.
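The fix amounts to an early return before touching any items or locks. A simplified std-only sketch (the function below is a hypothetical stand-in for the real method, which deals with timeline items and profile locks):

```rust
use std::collections::HashMap;

/// Simplified stand-in for the real `force_update_sender_profiles`:
/// returns how many items were updated.
fn force_update_sender_profiles(
    items: &mut [String],
    ambiguity_changes: &HashMap<String, bool>,
) -> usize {
    // Nothing changed: skip iterating the items (and, in the real code,
    // skip awaiting the profile locks) entirely.
    if ambiguity_changes.is_empty() {
        return 0;
    }

    let mut updated = 0;
    for item in items.iter_mut() {
        if ambiguity_changes.contains_key(item.as_str()) {
            // ... recompute the sender profile for this item ...
            updated += 1;
        }
    }
    updated
}

fn main() {
    let mut items = vec!["@alice:example.org".to_owned(), "@bob:example.org".to_owned()];

    // Empty map: the early return avoids the whole walk.
    assert_eq!(force_update_sender_profiles(&mut items, &HashMap::new()), 0);

    // Non-empty map: only affected senders are updated.
    let changes = HashMap::from([("@alice:example.org".to_owned(), true)]);
    assert_eq!(force_update_sender_profiles(&mut items, &changes), 1);
    println!("ok");
}
```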
First, add a log line: it's a pretty big deal if we start lagging
behind, and we should be able to see that clearly in rageshakes.
Second: don't remove existing `RoomEventCache`s, but clear them instead.
We shouldn't re-create a new `RoomEventCache` for the same room id,
otherwise a previous subscriber to that `RoomEventCache`'s updates would
not get newer updates coming later on.
Third: let consumers know that we cleared the events from the
`RoomEventCaches`.
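The second point can be illustrated with a std-only sketch: a subscriber holds a handle to a given cache, so clearing in place keeps that handle live, whereas removing and re-creating the entry would leave the subscriber attached to a stale object. The `RoomEventCache` below is a simplified stand-in, not the real type:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

/// Simplified stand-in for a per-room event cache.
#[derive(Default)]
struct RoomEventCache {
    events: Mutex<Vec<String>>,
}

impl RoomEventCache {
    /// Drops the memoized events but keeps the cache object itself alive.
    fn clear(&self) {
        self.events.lock().unwrap().clear();
    }
}

fn main() {
    let mut caches: HashMap<String, Arc<RoomEventCache>> = HashMap::new();

    let cache = caches.entry("!room:example.org".to_owned()).or_default();
    cache.events.lock().unwrap().push("$ev1".to_owned());

    // A subscriber keeps its own handle to this cache. If we *removed* the
    // entry and re-created it later, this handle would point at a dead
    // cache and the subscriber would miss all further updates.
    let subscriber_handle = Arc::clone(cache);

    // Clearing in place empties the events but leaves the subscriber's
    // handle pointing at the very same cache.
    caches.get("!room:example.org").unwrap().clear();
    assert!(subscriber_handle.events.lock().unwrap().is_empty());
    assert!(Arc::ptr_eq(
        &subscriber_handle,
        caches.get("!room:example.org").unwrap()
    ));
    println!("subscriber still attached after clear");
}
```

In the real code the consumer is additionally told about the clear (the third point above), so it can react, e.g. by re-fetching events.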
* ffi: Use async functions in AuthenticationService
* Fix swift tests
* Set async_runtime for uniffi::export attribute
* Rename RwLocks
* Get rid of unnecessary map_err
---------
Signed-off-by: Kévin Commaille <zecakeh@tedomum.fr>
Before this patch, only local echoes for messages not yet sent, or
messages successfully sent (but not yet ack'd by sync), would be pinned
to the bottom.
This fixes it by also pinning events that failed to send.
Fixes #3287.
The algorithm works on the basis that we remove the previous day divider
if it was spurious. But we'd never do this for a final item that is
itself a day divider! So handle these explicitly, and also skip over
read markers that would be in the way.
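A std-only sketch of that trailing cleanup, with a simplified item enum standing in for the real timeline item types: walk backwards from the end, remove trailing day dividers, skip over read markers, and stop at the first real event.

```rust
/// Simplified timeline items (hypothetical, mirroring the description).
#[derive(Debug, PartialEq, Clone, Copy)]
enum TimelineItem {
    Event,
    DayDivider,
    ReadMarker,
}

/// Removes day dividers left trailing at the end of the timeline, looking
/// through read markers that would otherwise be in the way.
fn remove_trailing_day_dividers(items: &mut Vec<TimelineItem>) {
    let mut i = items.len();
    while i > 0 {
        i -= 1;
        match items[i] {
            // A trailing day divider is spurious: drop it.
            TimelineItem::DayDivider => {
                items.remove(i);
            }
            // A read marker doesn't count as content: look past it.
            TimelineItem::ReadMarker => continue,
            // A real event ends the trailing run: nothing more to do.
            TimelineItem::Event => break,
        }
    }
}

fn main() {
    let mut items = vec![
        TimelineItem::Event,
        TimelineItem::DayDivider,
        TimelineItem::ReadMarker,
        TimelineItem::DayDivider,
    ];
    remove_trailing_day_dividers(&mut items);
    // Both trailing dividers are gone; the read marker survives.
    assert_eq!(items, vec![TimelineItem::Event, TimelineItem::ReadMarker]);
    println!("{:?}", items);
}
```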