Prior to this patch, in `RoomEventCacheInner::backpaginate`, when the
`token` was checked and found to be invalid:
* before calling `/messages`, `Err(EventCacheError::UnknownBackpaginationToken)` was returned,
* after calling `/messages`, `Ok(BackPaginationOutput::UnknownBackpaginationToken)` was returned.
This patch makes this uniform by only returning
`Ok(BackPaginationOutput::UnknownBackpaginationToken)`.
That's a tradeoff; it will probably be refactored later.
The idea is also to call `/messages` **before** taking the write-lock
of `RoomEvents`, otherwise the lock can be held for up to 30 seconds in
this case. Also, checking the validity of the `token` both **before** and
**after** `/messages` is unnecessary: it only needs to be done after.
This patch ensures that operations on `RoomEvents` happen in one block,
by sharing the same lock.
Two new methods are introduced: `replace_all_events_by` and
`append_new_events`.
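To make the locking story concrete, here's a minimal sketch with toy types standing in for the real ones (a `Vec`-backed `RoomEvents`, a stub for `/messages`); the two method names come from this patch, everything else is illustrative:

```rust
use std::sync::Arc;
use tokio::sync::RwLock;

// Toy stand-in for the SDK's `RoomEvents`; only the two new methods
// introduced by this patch are sketched, over a plain `Vec`.
struct RoomEvents(Vec<String>);

impl RoomEvents {
    /// Drop all existing events and replace them with `events`.
    fn replace_all_events_by(&mut self, events: Vec<String>) {
        self.0 = events;
    }

    /// Append `events` after the existing ones.
    fn append_new_events(&mut self, events: Vec<String>) {
        self.0.extend(events);
    }
}

// Stub for the `/messages` request; in the real code, this may take a
// long time.
async fn fetch_messages() -> Vec<String> {
    vec!["$new_event".to_owned()]
}

async fn backpaginate(room_events: Arc<RwLock<RoomEvents>>, must_reset: bool) {
    // Call `/messages` *before* taking the write-lock, so the lock
    // isn't held for the duration of the request.
    let new_events = fetch_messages().await;

    // All operations on `RoomEvents` happen in one block, under the
    // same write-lock. (The `must_reset` branch is purely illustrative.)
    let mut events = room_events.write().await;
    if must_reset {
        events.replace_all_events_by(new_events);
    } else {
        events.append_new_events(new_events);
    }
}
```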
The test `test_reset_while_backpaginating` was expecting a
race condition which no longer exists: it initially asserted a
workaround for that race condition, which no longer holds.
Rewrite the test to assert the (correct) new behaviour.
This patch implements the following wrapper methods (over
`LinkedChunk`): `push_gap`, `replace_gap_at` and `events`. This patch
also implements the `reset` method that clears/drops all chunks in the
`LinkedChunk`.
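Roughly, the new surface could look like this; a toy model with a `Vec` of chunk variants is enough to illustrate the four operations (the real `LinkedChunk` links fixed-size chunks, and the exact signatures here are assumptions):

```rust
// Toy model: a chunk either holds events or a gap (a pagination token).
enum Chunk {
    Events(Vec<String>),
    Gap { token: String },
}

struct LinkedChunk {
    chunks: Vec<Chunk>,
}

impl LinkedChunk {
    /// Push a gap at the end of the linked chunk.
    fn push_gap(&mut self, token: String) {
        self.chunks.push(Chunk::Gap { token });
    }

    /// Replace the gap at `index` with a chunk of events.
    fn replace_gap_at(&mut self, index: usize, events: Vec<String>) {
        self.chunks[index] = Chunk::Events(events);
    }

    /// Iterate over all events, in order, skipping gaps.
    fn events(&self) -> impl Iterator<Item = &String> {
        self.chunks.iter().flat_map(|chunk| match chunk {
            Chunk::Events(events) => events.as_slice(),
            Chunk::Gap { .. } => &[],
        })
    }

    /// Clear/drop all chunks.
    fn reset(&mut self) {
        self.chunks.clear();
    }
}
```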
A message was sent, *then* we waited for the remote echo. That's not ideal: if setting up the wait takes long enough, and the server is fast
enough, the remote echo could arrive *before* we started waiting for it, resulting in timeouts. This fixes it by spawning the waiting task first, and only then sending the message.
Let's see how this helps with this test.
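As a sketch of the fix (a `oneshot` channel stands in for the actual remote echo; names are illustrative):

```rust
use tokio::sync::oneshot;

#[tokio::main]
async fn main() {
    let (tx, rx) = oneshot::channel();

    // Spawn the waiting task *first*, mirroring the fix described
    // above: the waiter is in place before anything is sent.
    let waiter = tokio::spawn(async move {
        rx.await.expect("remote echo never arrived")
    });

    // Only then "send the message" (here, the channel stands in for
    // the server echoing it back).
    tx.send("remote echo").unwrap();

    assert_eq!(waiter.await.unwrap(), "remote echo");
}
```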
This adds additional checks for each room update, and works around a few race conditions, notably one where the server would send a remote echo for a message but not immediately
update the computed `unread_notification_counts`. This tends to make the test more stable, in that each response is well known and now properly tested.
It's not worth panicking the whole timeline because we removed the wrong item; worst case, users will complain and can send a rageshake that contains all the information
needed to debug what went wrong.
There could be situations where we have a day divider, then a read marker, then an event. In that case, when looking at the event, if the previous day divider is wrong and needs
to be removed, we would assume the "previous item" (the day divider) was at position `i - 1`, which could be the position of the read marker, and we'd remove the read marker
instead of the day divider.
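A toy illustration of the faulty indexing (strings standing in for timeline items):

```rust
fn main() {
    // Hypothetical simplified timeline: day divider, read marker, event.
    let mut items = vec!["day divider", "read marker", "event"];

    // When looking at the event (`i = 2`), the day divider that needs
    // removing is at index 0, but the buggy code assumed it was the
    // "previous item" at `i - 1`...
    let i = 2;
    items.remove(i - 1);

    // ...which removes the read marker instead of the day divider.
    assert_eq!(items, ["day divider", "event"]);
}
```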
Closes #3265.
There is currently no way to get the URL of a homeserver's sliding sync proxy before logging in on the homeserver.
I suggest exposing the URL via the `HomeserverLoginDetail` struct after configuring the homeserver (through `configure_homeserver`).
Since the homeserver may not declare a corresponding sliding sync proxy, this value is an `Optional`. It is used later in `configure_homeserver` to check whether a sliding sync proxy exists, throwing an error otherwise. Previously, this check was done against the client's `discovered_sliding_sync_proxy` function.
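In terms of shape, the exposed value could look like this (the field follows the description above; the rest of the struct is assumed):

```rust
// Illustrative only: the actual struct lives in the FFI bindings.
struct HomeserverLoginDetail {
    url: String,
    /// `None` when the homeserver doesn't declare a sliding sync proxy.
    sliding_sync_proxy: Option<String>,
}

fn main() {
    let detail = HomeserverLoginDetail {
        url: "https://example.org".to_owned(),
        sliding_sync_proxy: Some("https://sync.example.org".to_owned()),
    };
    println!("homeserver: {}", detail.url);

    // `configure_homeserver` can error out early when no proxy is
    // declared, instead of consulting `discovered_sliding_sync_proxy`.
    match &detail.sliding_sync_proxy {
        Some(url) => println!("sliding sync proxy: {url}"),
        None => eprintln!("error: no sliding sync proxy declared"),
    }
}
```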
Signed-off-by: Thomas Völkl <thomas@vollkorntomate.de>
Signed-off-by: Benjamin Bouvier <public@benj.me>
Co-authored-by: Benjamin Bouvier <public@benj.me>
Local echoes (which haven't received a remote echo yet) may not have an event id, so when computing the report, don't unwrap the event id but use a sensible
default instead.
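For instance (a minimal sketch; the types and the default string are made up):

```rust
fn main() {
    // A local echo that hasn't received its remote echo yet has no
    // event id.
    let event_id: Option<&str> = None;

    // Don't `unwrap()` it; fall back to a sensible default when
    // building the report.
    let id_for_report = event_id.unwrap_or("<missing event id (local echo)>");
    println!("item: {id_for_report}");
}
```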
Also tweaks comments from a previous version of another PR, and renames `DayDividerAdjuster::maybe_adjust_day_dividers` to `run`.
The previous API relied on callers not forgetting to adjust day dividers after handling an event.
This makes it statically impossible, by requiring that `TimelineEventHandler` takes a `&mut DayDividerAdjuster` when operating, which starts out not "consumed". Later, the caller must call `DayDividerAdjuster::maybe_adjust_day_dividers()` to mark it as consumed. When dropping, we check that it's been consumed, and panic otherwise, as it's a developer error to not call `maybe_adjust_day_dividers()`.
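The guard could be shaped like this (a toy sketch; the actual adjustment logic and handler plumbing are omitted):

```rust
struct DayDividerAdjuster {
    consumed: bool,
}

impl DayDividerAdjuster {
    fn new() -> Self {
        Self { consumed: false }
    }

    /// Adjust the day dividers, marking the adjuster as consumed.
    fn maybe_adjust_day_dividers(mut self) {
        // ...the actual adjustment work would happen here...
        self.consumed = true;
        // `self` is dropped here, with `consumed == true`.
    }
}

impl Drop for DayDividerAdjuster {
    fn drop(&mut self) {
        // Not calling `maybe_adjust_day_dividers()` is a developer
        // error, so panic loudly.
        assert!(self.consumed, "DayDividerAdjuster dropped without being consumed");
    }
}

fn main() {
    let adjuster = DayDividerAdjuster::new();
    // Forgetting this call would panic when `adjuster` is dropped:
    adjuster.maybe_adjust_day_dividers();
}
```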
Notably, split the code into smaller functions, and revamp the high-level signatures so the individual `handle_` functions don't take a bazillion arguments.
The reports now include the final state as well as the set of operations to run, so we can really debug all the steps just from looking at a rageshake.
When we're removing or inserting any day divider, we also update the offset, so that subsequent operations happen at the right positions.
The previous code made the mistake of clamping the offset when assigning it, instead of letting it go "out of bounds" and clamping at the point of use, which is
the correct way to implement this.
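A small sketch of the difference (the bookkeeping here is hypothetical):

```rust
fn main() {
    // `offset` tracks how many items were inserted (+) or removed (-)
    // before the position currently being inspected.
    let position: isize = 1;
    let mut offset: isize = 0;

    // Two items before `position` are removed:
    offset -= 2;

    // Wrong: clamping `offset` itself on assignment (e.g. to -1) would
    // have lost the second removal. Right: let it go "out of bounds"
    // and clamp only at the point of use:
    let adjusted = (position + offset).max(0) as usize;
    assert_eq!(adjusted, 0);
}
```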
The test has been failing with a timeout recently, several times. Let's see whether it was a fluke caused by the low threshold (because the server might be
busy handling other requests from other tests), or an actual issue.
It appears that tarpaulin complains if the symbol information is stripped from
the binary, and as of Rust 1.77, `debug=0` causes Cargo to strip all debug
info.
To fix this, set `debug=1`.
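Concretely, the change boils down to a profile setting like the following (which profile carries it is an assumption here; the commit only states the value):

```toml
[profile.dev]
# Keep enough debug info (line tables) for tarpaulin; as of Rust 1.77,
# `debug = 0` strips all of it.
debug = 1
```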