The method
`RoomEventCacheStateLockWriteGuard::load_more_events_backwards` is used
in only one place: `RoomPagination::load_more_events_backwards`. This
patch inlines the method there, as its logic belongs in
`RoomPagination`, not elsewhere.
It's purely a code move. Nothing else has changed.
This patch also updates a test that was accessing
`load_more_events_backwards` directly. Now it runs it via
`RoomPagination`.
This patch updates `ThreadPagination` to hold a `ThreadEventCacheInner`
instead of a `RoomEventCacheInner`! It makes more sense and it further
splits and isolates the types.
`RoomEventCache::thread_pagination` is now async and returns a
`Result<ThreadPagination>` because it needs to load its state to fetch
a `ThreadEventCache`. Later, accessing a thread will happen in `Caches`
rather than in `RoomEventCache`; one step at a time.
This patch adds the `weak_room: WeakRoom` field to
`ThreadEventCacheInner`. This is a prerequisite to have
`ThreadPagination` use `ThreadEventCacheInner` instead of
`RoomEventCacheInner`!
This patch creates the `ThreadEventCacheState` type. It uses
`caches::lock::StateLock`, just like `RoomEventCacheState`. It provides
the `read()` and `write()` methods to access the state, and to reload
it when necessary; see the `caches::lock::Store` implementation.
This patch thus creates `ThreadEventCacheStateLockReadGuard` and
`ThreadEventCacheStateLockWriteGuard`. The methods touching the state in
`ThreadEventCacheInner` are moved to these lock types.
These are purely code moves (plus the changes needed to reach the
correct data): no change in semantics.
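The `StateLock` pattern described above can be sketched as follows. This is a minimal illustration using a plain `std::sync::RwLock`, not the actual `caches::lock::StateLock`: in the real implementation, the `caches::lock::Store` trait knows how to (re)load the state from the store, which is simulated here with a plain function pointer. All names besides `read`/`write` are illustrative.

```rust
use std::sync::{RwLock, RwLockReadGuard, RwLockWriteGuard};

/// A lock that lazily (re)loads its state before handing out guards.
struct StateLock<S> {
    state: RwLock<Option<S>>,
    /// Stand-in for the store implementation that knows how to load the
    /// state when it's missing (e.g. after it has been unloaded).
    load: fn() -> S,
}

impl<S> StateLock<S> {
    fn new(load: fn() -> S) -> Self {
        Self { state: RwLock::new(None), load }
    }

    /// Ensure the state is loaded, then hand out a read guard.
    fn read(&self) -> RwLockReadGuard<'_, Option<S>> {
        self.ensure_loaded();
        self.state.read().unwrap()
    }

    /// Ensure the state is loaded, then hand out a write guard.
    fn write(&self) -> RwLockWriteGuard<'_, Option<S>> {
        self.ensure_loaded();
        self.state.write().unwrap()
    }

    fn ensure_loaded(&self) {
        let mut guard = self.state.write().unwrap();
        if guard.is_none() {
            *guard = Some((self.load)());
        }
    }
}

fn main() {
    let lock = StateLock::new(|| vec!["event"]);
    assert_eq!(lock.read().as_ref().unwrap().len(), 1);
    lock.write().as_mut().unwrap().push("another");
    assert_eq!(lock.read().as_ref().unwrap().len(), 2);
    println!("ok");
}
```

The point of wrapping the guards into dedicated `ThreadEventCacheStateLockReadGuard`/`ThreadEventCacheStateLockWriteGuard` types is that the methods touching the state live on the guard types themselves, so the state is provably locked whenever they run.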
This patch creates `ThreadEventCacheInner` so that `ThreadEventCache`
can be shallow cloned (which will be useful for `ThreadPagination`).
That's also the first step to introduce `ThreadEventCacheState`!
The new integration tests for the event cache cover the same situations
as the previous tests on `compute_unread_count_legacy`, so we're fine
here.
A gappy sync may cause a linked chunk to shrink, waiting for callers to
lazily reload it later. But because the unread counts computation
relies on the in-memory linked chunk, the unread counts computed after
such a shrink may be incorrect (and decrease).
Fortunately, this situation is easy to detect: the latest active read
receipt doesn't change in this case, so we can first check that, and
then manually readjust the unread counts if they've decreased.
Future work should trigger back-pagination in those cases, so the
unread counts keep being precise despite the gappy sync.
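The readjustment logic boils down to a monotonicity clamp. Here is a hedged sketch of that idea; the type and function names are illustrative, not the actual SDK API.

```rust
/// Illustrative stand-in for the unread counts carried by a `RoomInfo`.
#[derive(Clone, Copy, PartialEq, Debug)]
struct UnreadCounts {
    num_unread: u64,
    num_notifications: u64,
}

/// Decide which unread counts to keep after a recomputation.
///
/// * If the latest active read receipt moved, the fresh computation is
///   trustworthy, even if it's smaller.
/// * If the receipt did NOT move, a smaller value can only come from the
///   linked chunk having been shrunk by a gappy sync, so never let the
///   counts decrease.
fn adjust_unread_counts(
    previous: UnreadCounts,
    computed: UnreadCounts,
    receipt_changed: bool,
) -> UnreadCounts {
    if receipt_changed {
        computed
    } else {
        UnreadCounts {
            num_unread: computed.num_unread.max(previous.num_unread),
            num_notifications: computed
                .num_notifications
                .max(previous.num_notifications),
        }
    }
}

fn main() {
    let prev = UnreadCounts { num_unread: 5, num_notifications: 2 };
    let shrunk = UnreadCounts { num_unread: 1, num_notifications: 0 };
    // Receipt unchanged: counts must not decrease.
    assert_eq!(adjust_unread_counts(prev, shrunk, false), prev);
    // Receipt changed: accept the smaller, fresh values.
    assert_eq!(adjust_unread_counts(prev, shrunk, true), shrunk);
    println!("ok");
}
```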
This test exhibits an edge case: when a room event cache is shrunk
(because of a gappy/limited sync), the unread count might decrease
without the latest active read receipt changing.
`test_room_notification_count` started to fail intermittently on main,
because the computation of unread counts has moved from the sliding
sync processing to the event cache. As a result, new irrelevant
`RoomInfo` updates (related to the unread counts) can happen, and they
might happen quickly enough that the server response for sending an
event arrives after the 2-second window (remember, we need to factor in
the time to do the E2EE key exchange, and so on and so forth).
Bumping the allowed time between two `RoomInfo` updates should be
sufficient to avoid the intermittent failure.
This patch fixes a bug in `xtask log sync` that could miss a
`sync_once` log line when the `pos` field is absent.
Example where `pos` is absent before `timeout`. Note the double space
before `timeout`:
```
… > sync_once{conn_id="room-list" timeout=0} > send{request_id="REQ-15" …
```
When `pos` is present, it's:
```
… > sync_once{conn_id="room-list" pos="0/m67590980…" timeout=30000} > send{request_id="REQ-23" …
```
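The pitfall is a parser that assumes `pos` always sits between `conn_id` and `timeout`. A hedged sketch of the fix, scanning the span's fields instead of relying on a fixed field order (this only illustrates the idea; the actual `xtask log sync` code may parse differently, and the `pos` value below is made up):

```rust
/// Extract the `pos` value from a `sync_once{…}` span in a log line, if
/// present. Returns `None` when the field is absent, instead of failing
/// to match the whole line.
fn extract_pos(line: &str) -> Option<&str> {
    let start = line.find("sync_once{")? + "sync_once{".len();
    let end = start + line[start..].find('}')?;
    // Scan the space-separated `key="value"` / `key=value` fields, so a
    // missing `pos` (and the resulting double space) is not a problem.
    line[start..end]
        .split_whitespace()
        .find_map(|field| field.strip_prefix("pos="))
        .map(|value| value.trim_matches('"'))
}

fn main() {
    let with_pos =
        r#"… > sync_once{conn_id="room-list" pos="0/m123" timeout=30000} > send"#;
    let without_pos =
        r#"… > sync_once{conn_id="room-list"  timeout=0} > send"#;
    assert_eq!(extract_pos(with_pos), Some("0/m123"));
    assert_eq!(extract_pos(without_pos), None);
    println!("ok");
}
```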
This is a basic implementation that works, but it already unlocks
improvements (getting the unread count updated whenever a UTD has been
resolved) and it paves the way for future ones (notably with respect to
performance).
This commit is almost only code motion.
At this point, some tests don't pass, as the support for using the
read-receipt code in the event cache isn't plugged into the event cache
itself yet.
This patch makes `RoomEventCacheInner` more private: its visibility
shrinks from `event_cache` to `event_cache::caches`.
Consequently, `PinnedEventCacheState`, `RoomPagination::new` and
`ThreadPagination::new` follow the same restriction.
This patch makes `RoomEventCache::new`,
`RoomEventCache::handle_joined_room_update` and
`RoomEventCache::handle_left_room_update` more private. They are no
longer accessible from `event_cache`, only from `event_cache::caches`.
This patch moves `test_uniq_read_marker` from `event_cache` to
`event_cache::caches::room`.
This is a prerequisite for the next patch, where
`RoomEventCache::handle_joined_room_update` will become more private;
moving this test is necessary for that.