The test relied on the fact that sending an event from a given timeline
is not observable from another timeline. Indeed, it sent a message
using a first timeline object, then constructed a second timeline
object and expected only the remote event to show up in it.
Now, the sending queue is shared across all instances of a Room, thus
all instances of a Timeline, and the second timeline can see the local
echo for the message sent by the first timeline.
The "fix" is thus in the test structure itself: when waiting for the
remote echo to be there, check that the timeline item doesn't pertain to
a local echo, i.e. is a remote echo.
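A minimal sketch of the adjusted wait, assuming the `Timeline::items()`
and `EventTimelineItem::is_local_echo()` accessors from `matrix-sdk-ui`
(the helper itself is hypothetical):

```rust
use matrix_sdk_ui::timeline::{EventTimelineItem, Timeline};

// Hypothetical test helper: wait until the timeline contains an event
// item that is a remote echo, skipping any local echo leaked in by the
// shared sending queue.
async fn wait_for_remote_echo(timeline: &Timeline) -> EventTimelineItem {
    loop {
        if let Some(event) = timeline
            .items()
            .await
            .iter()
            .filter_map(|item| item.as_event())
            .find(|event| !event.is_local_echo())
        {
            return event.clone();
        }
        // A real test would await the subscription stream instead of
        // polling like this.
        tokio::time::sleep(std::time::Duration::from_millis(50)).await;
    }
}
```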
Technically, the test already passed before the change in the builder,
because `TimelineEventHandler::handle_event` already filters out local
events on non-live timelines, but it's a waste of resources to even
spawn the local echo listener task in that case, so let's not do it.
By keeping a reference to the FFI Client, we make sure that the SDK
Client is dropped while in a tokio runtime context. This makes it
possible for an `Encryption` (FFI) object to outlive the `Client` (FFI)
object without crashing (because deadpool requires being in a tokio
runtime context when dropped).
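A hedged sketch of the ownership pattern, with simplified stand-ins for
the actual FFI types:

```rust
use std::sync::Arc;

// Hypothetical sketch of the FFI-layer types. The FFI `Client` enters a
// tokio runtime context in its `Drop` impl before tearing down the
// inner SDK client (and thus its sqlite connection pool).
struct Client {
    inner: Option<matrix_sdk::Client>,
    runtime: Arc<tokio::runtime::Runtime>,
}

impl Drop for Client {
    fn drop(&mut self) {
        // Dropping the SDK client inside the runtime satisfies deadpool.
        let _guard = self.runtime.enter();
        drop(self.inner.take());
    }
}

struct Encryption {
    inner: matrix_sdk::encryption::Encryption,
    // Holding this reference guarantees the FFI `Client` outlives the
    // `Encryption` object, so the final drop above always happens in a
    // runtime context.
    _client: Arc<Client>,
}
```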
Following https://github.com/matrix-org/matrix-rust-sdk/pull/3473, we
made it so that if there's a base-path but no username, then we'll
create a sqlite database.
Unfortunately, this introduced a bad side-effect: for the temporary
client used during homeserver resolution, this would create a temporary
sqlite database.
The "fix" is to not set the base path for the temporary client, and only
for the other caller of `new_client_builder()`, manually. The long term
fix is to get rid of the `AuthenticationService` so we can test it
properly.
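A hedged sketch of the two call sites, assuming the `sqlite_store` and
`server_name_or_homeserver_url` builder methods from matrix-sdk; the
helper functions are hypothetical:

```rust
use std::path::Path;

use matrix_sdk::{Client, ClientBuilder};

// Only the long-lived client gets a base path, so homeserver resolution
// no longer leaves a temporary sqlite database behind.
fn new_client_builder() -> ClientBuilder {
    Client::builder()
}

fn resolution_client_builder(server_name: &str) -> ClientBuilder {
    // No base path here: the temporary client must not persist anything.
    new_client_builder().server_name_or_homeserver_url(server_name)
}

fn session_client_builder(server_name: &str, base_path: &Path) -> ClientBuilder {
    // The real client sets its sqlite store manually.
    new_client_builder()
        .server_name_or_homeserver_url(server_name)
        .sqlite_store(base_path, None)
}
```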
This patch updates Ruma to 75e8829 so that `RoomSummary::heroes` is now
a `Vec<OwnedUserId>` instead of a `Vec<String>`, and updates our code
accordingly by removing the now-unneeded parsing of heroes into
`OwnedUserId`.
Without this, the configs from `.cargo/config.toml` were not read in CI
tasks, so Clippy runs on CI silently missed issues (i.e. there were
lints observed when compiling locally that were not reported when
compiling remotely).
Not entirely sure why it's needed, because I'm seeing the issues when
I'm using `cargo xtask ci clippy` locally, with nothing changed.
It turns out that the failure came when using the known-room path in
the room preview: Alice knows about the room, but for some reason the
client hadn't retrieved all the state events from the sliding sync
proxy yet.
Before, the sync would be considered stable after 2 seconds. This is
too short, considering that events come from the proxy, which itself
listens to events from synapse. Raising this threshold to 15 seconds
should help retrieve all the room information from the proxy, and thus
get all the information we expect in the client.
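A sketch of what the stability wait amounts to, with a hypothetical
helper and update channel:

```rust
use std::time::Duration;

use tokio::sync::mpsc::Receiver;

// The window is raised from 2 to 15 seconds so the sliding sync proxy
// has time to catch up with synapse before the test inspects the room.
const SYNC_STABILITY_WINDOW: Duration = Duration::from_secs(15);

async fn wait_until_sync_is_stable(mut updates: Receiver<()>) {
    loop {
        match tokio::time::timeout(SYNC_STABILITY_WINDOW, updates.recv()).await {
            // Another sync update arrived: not stable yet, keep waiting.
            Ok(Some(())) => continue,
            // The window elapsed without an update (or the sender was
            // dropped): consider the sync stable.
            _ => break,
        }
    }
}
```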
While this mutex is held, `oldest_token()` can get hold of the
RoomEvent mutex, making it so that the `waited_...` lock's critical
region covers the critical region of the room events.
Unfortunately, in `clear()`, we take these two locks in the opposite
order, implying that the room events critical region now overlaps that
of the `waited_...` lock.
The way to break the cycle is to hold the `waited_` lock for as short a
time as possible.
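A minimal sketch of the shape of the fix, with stand-in types and lock
contents:

```rust
use std::sync::Mutex;

// Stand-ins for the two locks involved in the cycle.
struct PaginationState {
    waited: Mutex<Option<String>>,
    room_events: Mutex<Vec<String>>,
}

impl PaginationState {
    fn oldest_token(&self) -> Option<String> {
        // Copy the token out and drop the `waited` guard immediately...
        let token = self.waited.lock().unwrap().clone();
        // ...so that grabbing the room events here can no longer form a
        // cycle with `clear()`, which takes the locks in the opposite
        // order.
        let _events = self.room_events.lock().unwrap();
        token
    }
}
```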
So as to not hold sync mutexes across await points, we have to use an
async mutex, BUT it can't be awaited directly in a wiremock responder
(which is synchronous), so we need to shoe-horn a bit: create a new
tokio runtime, which can only be done from another thread, etc. It's a
bit ugly, but I couldn't find another mechanism to block the responder
from returning a Response immediately; happy to change that anytime if
there's a simpler way.
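A sketch of the responder, assuming wiremock's synchronous `Respond`
trait; the gate itself is a stand-in for the actual test state:

```rust
use std::sync::Arc;

use tokio::sync::Mutex;
use wiremock::{Request, Respond, ResponseTemplate};

// `Respond::respond` is synchronous, and a new runtime can't be blocked
// on from a thread that is already running one, hence the hop to a
// dedicated thread.
struct GatedResponder {
    gate: Arc<Mutex<()>>,
}

impl Respond for GatedResponder {
    fn respond(&self, _request: &Request) -> ResponseTemplate {
        let gate = self.gate.clone();
        std::thread::spawn(move || {
            tokio::runtime::Builder::new_current_thread()
                .build()
                .unwrap()
                .block_on(async {
                    // Blocks until the test releases the gate, delaying
                    // the Response from being returned.
                    let _guard = gate.lock().await;
                });
        })
        .join()
        .unwrap();
        ResponseTemplate::new(200)
    }
}
```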
A client requires this to know that the sending queue has been disabled
globally, across all rooms; otherwise it couldn't know that the queue
needs to be re-enabled later on.
This implements the following features:
- enabling/disabling the sending queue at a client-wide granularity,
- one sending queue per room, which lives until the client shuts down,
- retrying to send an event up to three times if it previously failed,
  with exponential backoff between attempts (see the sketch after this
  list),
- allowing to cancel sending an event that hasn't been sent yet.
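A hedged sketch of the retry policy from the list above; the helper and
its error handling are hypothetical:

```rust
use std::future::Future;
use std::time::Duration;

// Up to three attempts, with exponential backoff between them.
async fn send_with_retries<F, Fut, T, E>(mut send: F) -> Result<T, E>
where
    F: FnMut() -> Fut,
    Fut: Future<Output = Result<T, E>>,
{
    const MAX_ATTEMPTS: u32 = 3;
    let mut attempt = 0;
    loop {
        match send().await {
            Ok(response) => return Ok(response),
            Err(error) => {
                attempt += 1;
                if attempt == MAX_ATTEMPTS {
                    // Give up after the third failure; the event stays
                    // in the queue, marked as failed.
                    return Err(error);
                }
                // Back off exponentially: 2s, then 4s.
                tokio::time::sleep(Duration::from_secs(1 << attempt)).await;
            }
        }
    }
}
```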
This can be equivalently retrieved using:
`get_timeline_event_by_event_id(event_id).content().as_message()?.content()`
As of the commit date, this is used in one place in both EXA and EXI,
so it seems fine to change the call sites.
And this avoids one explicit call site of
`Timeline::item_by_event_id()`.
If an event cannot be decrypted after a grace period, it is reported to the application (and thence Posthog) as a UTD event. Currently, if it is then successfully decrypted, the event is then re-reported as a "late decryption".
This does not match the expected behaviour in Posthog - an event is *either* a UTD, or a late decryption; it makes no sense for it to be both. This PR fixes the problem.
I've attempted to make the commits sensible, but I'm not entirely sure I've succeeded. The tests do pass after each commit, though. The interesting change itself is somewhere in the middle; there is non-functional groundwork before and cleanup afterwards.
---
* ui: Factor out UTD report code to a closure
For now, this doesn't help much, but in future there will be more logic here,
and it helps reduce the repetition between the delay and no-delay cases.
* ui: Convert UtdHookManager::pending_delayed to a HashMap
* ui: Store decryption time in `UtdHookManager::pending_delayed`
This is a step towards getting rid of `known_utds`.
* ui: Fix re-reporting of late decryptions
This fixes the problem where a message that was previously reported as a UTD,
and was then subsequently successfully decrypted, is then re-reported as a late
decryption. This artificially inflated the UTD metrics.
We do this by checking the `pending_delayed` list in `on_late_decrypt`,
instead of the `known_utds` list (see the sketch after this commit
list). There is some associated reordering of code to get the locking
right.
* ui: Remove unused "utd report time" from `UtdHookManager::known_utds`
* ui: Replace `UtdHookManager::known_utds` with `reported_utds`
Keep a list of the UTDs we've actually reported, rather than the union of those
we've reported together with those we might report in a while.
I find this much easier to reason about.
* Address minor review comments
* Reinstate assertion in UTD hook tests
* Reinstate `known_utds`
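A hedged sketch of the bookkeeping after this series; field and method
names are approximate:

```rust
use std::collections::{HashMap, HashSet};
use std::time::Instant;

use ruma::OwnedEventId;

// An event is reported *either* as a UTD *or* as a late decryption,
// never both.
struct UtdHookManager {
    // UTDs in their grace period, not yet reported to the hook.
    pending_delayed: HashMap<OwnedEventId, Instant>,
    // UTDs we have actually reported; these must never be re-reported.
    reported_utds: HashSet<OwnedEventId>,
}

impl UtdHookManager {
    fn on_late_decrypt(&mut self, event_id: &OwnedEventId) {
        // Only an event still pending (i.e. whose UTD report never
        // fired) counts as a late decryption. Anything already in
        // `reported_utds` stays a plain UTD and is left alone.
        if self.pending_delayed.remove(event_id).is_some() {
            // ... report the late decryption here ...
        }
    }
}
```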
This differentiates the SAS state between the party that sent the
verification start and the party that received it.
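A hypothetical sketch of what such a distinction can look like; the
actual enum differs:

```rust
// Carry a flag distinguishing the sender of the verification start
// from its receiver.
enum SasState {
    Started {
        // `true` on the side that sent `m.key.verification.start`,
        // `false` on the side that received it.
        we_started: bool,
    },
    // ... remaining states elided ...
}
```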
Signed-off-by: Kévin Commaille <zecakeh@tedomum.fr>