This makes it possible to factor out the implementations of functions
that should be available to both kinds.
It also moves a bit of code that can easily live in the
`RoomEventCacheState` impl block, instead of remaining as free functions.
There was a subtle bug: a receipt could be considered active, and
then, on a subsequent call to `select_best_receipt`, it could be
forgotten in favor of an older receipt. The regression test shows one
such case, where before this patch, the count would incorrectly say 3
instead of 2, because the active read receipt moved backwards to the
implicit receipt.
The solution is to stop looking for a better receipt as soon as we run
into the latest active read receipt. Having `found` set to `None` in
this case means we hadn't found any better read receipt anyway.
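A self-contained sketch of the shape of the fix, with stand-in types; the real code walks the event cache's own structures and the actual `select_best_receipt` signature differs:

```rust
/// Stand-in for the real event type; only the fields the sketch needs.
struct Event {
    id: String,
    /// A read receipt from our user attached to this event, if any.
    receipt: Option<String>,
}

/// Walk events from newest to oldest, looking for a read receipt newer
/// than the latest active one.
fn select_best_receipt<'a>(events: &'a [Event], latest_active_id: &str) -> Option<&'a str> {
    let mut found = None;
    for event in events {
        if event.id == latest_active_id {
            // Stop looking for a better receipt: everything past this
            // point is older, so picking it would move the receipt
            // *backwards* (the bug). `found` being `None` here means we
            // hadn't found any better read receipt anyway.
            break;
        }
        if let Some(receipt) = event.receipt.as_deref() {
            found = Some(receipt);
            break;
        }
    }
    found
}
```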
It turns out that on Android, rustls needs [a custom setup](https://github.com/rustls/rustls-platform-verifier#android) plus the `rustls-platform-verifier-android` library, which is [not available on Maven](https://github.com/rustls/rustls-platform-verifier/issues/115).
Normally, Android clients would then need to call some exposed JNI function to provide a JVM context, from which Rust can take the `Application` component and access its contents to read its credentials storage. Thanks to some tricks, we can use `libloading` to simulate this call from Rust itself and properly initialise the platform verifier.
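A rough sketch of the `libloading` part of the trick. The library and symbol names, and the overall approach, are assumptions about how such a setup might look, not the exact code of this change:

```rust
use jni::sys::{jint, jsize, JavaVM as RawJavaVM};

/// Recover the process's `JavaVM` without a hand-written JNI entry
/// point, by resolving `JNI_GetCreatedJavaVMs` from the Android
/// runtime that's already loaded. Error handling is kept minimal.
unsafe fn current_java_vm() -> Option<jni::JavaVM> {
    type GetCreatedJavaVms =
        unsafe extern "system" fn(*mut *mut RawJavaVM, jsize, *mut jsize) -> jint;

    // Assumption: recent Android exposes the symbol from
    // `libnativehelper.so`, older versions from `libart.so`.
    let lib = libloading::Library::new("libnativehelper.so")
        .or_else(|_| libloading::Library::new("libart.so"))
        .ok()?;
    let get_vms: libloading::Symbol<'_, GetCreatedJavaVms> =
        lib.get(b"JNI_GetCreatedJavaVMs").ok()?;

    // There is at most one JVM per Android process.
    let mut vm: *mut RawJavaVM = std::ptr::null_mut();
    let mut count: jsize = 0;
    if get_vms(&mut vm, 1, &mut count) != 0 || count == 0 {
        return None;
    }
    jni::JavaVM::from_raw(vm).ok()
}
```

From the recovered `JavaVM`, the current thread can be attached and the `Application` context fetched via JNI, then handed to the platform verifier's Android initialisation; that last call is not spelled out here, since its exact signature depends on the `rustls-platform-verifier` version.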
Note that self-signed certificates will no longer work with these changes on Android: providing them via `ClientBuilder::add_root_certificates` will make most requests fail. This can be handled separately.
This method is now only used by tests, so I opted to lock it
behind the test configuration to appease CI.
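A minimal sketch of that gating, assuming plain `cfg(test)`; a `cfg(any(test, feature = "testing"))` variant is common when integration tests in other crates also need the helper. The function name is made up:

```rust
// Illustrative only: the method name is hypothetical.
#[cfg(test)]
fn helper_only_used_by_tests() {
    // Without the gate, this now-unused function would trip CI's
    // dead-code lints on non-test builds.
}
```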
Signed-off-by: Skye Elliot <actuallyori@gmail.com>
This patch removes the `enable_latest_event_sorter` flag in
`RoomList::entry_with_dynamic_adapters_with`. This sorter is now stable
enough that we can always enable it.
The method
`RoomEventCacheStateLockWriteGuard::load_more_events_backwards` is used
in only one place: `RoomPagination::load_more_events_backwards`. This
patch inlines it there, as it belongs in `RoomPagination`, not
somewhere else.
It's purely a code move. Nothing else has changed.
This patch also updates a test that was accessing
`load_more_events_backwards` directly; it now goes through
`RoomPagination`.
This patch updates `ThreadPagination` to hold a `ThreadEventCacheInner`
instead of a `RoomEventCacheInner`! It makes more sense and
splits/isolates the types even more.
`RoomEventCache::thread_pagination` is now async and returns a
`Result<ThreadPagination>`, because it needs to load its state to fetch
a `ThreadEventCache`. Later, accessing a thread won't happen in
`RoomEventCache` but in `Caches`; one step at a time.
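The new shape looks roughly like the following self-contained sketch. Only the `async` + `Result<ThreadPagination>` shape comes from this patch; the parameter, the stub types, and the `load_thread_event_cache` helper are all hypothetical:

```rust
// Stub types so the sketch compiles on its own; the real ones live in
// the event cache module.
type Result<T> = std::result::Result<T, Box<dyn std::error::Error>>;
type OwnedEventId = String;

struct ThreadPagination;

struct ThreadEventCache;

impl ThreadEventCache {
    fn pagination(&self) -> ThreadPagination {
        ThreadPagination
    }
}

struct RoomEventCache;

impl RoomEventCache {
    /// Hypothetical helper standing in for however the thread's state
    /// actually gets loaded (possibly from the store).
    async fn load_thread_event_cache(&self, _root: OwnedEventId) -> Result<ThreadEventCache> {
        Ok(ThreadEventCache)
    }

    /// Now `async` and fallible: the state may need to be loaded before
    /// a `ThreadEventCache` (and thus a `ThreadPagination`) exists.
    async fn thread_pagination(&self, thread_root: OwnedEventId) -> Result<ThreadPagination> {
        let thread = self.load_thread_event_cache(thread_root).await?;
        Ok(thread.pagination())
    }
}
```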
This patch adds the `weak_room: WeakRoom` field to
`ThreadEventCacheInner`. This is a prerequisite for having
`ThreadPagination` use `ThreadEventCacheInner` instead of
`RoomEventCacheInner`!
This patch creates the `ThreadEventCacheState` type. It uses
`caches::lock::StateLock`, just like `RoomEventCacheState`. This
provides the `read()` and `write()` methods to access the state, and to
reload it when necessary; see the `caches::lock::Store` implementation.
The patch thus creates `ThreadEventCacheStateLockReadGuard` and
`ThreadEventCacheStateLockWriteGuard`. The methods touching the state in
`ThreadEventCacheInner` are moved to these lock types.
These are purely code moves (plus changes to reach the correct data): no
change in the semantics.
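For intuition, the guard-based access resembles an async `RwLock` whose guards expose the state-touching methods. A self-contained analogy follows; it is not the actual `caches::lock::StateLock` API, which additionally knows how to reload the state from the store:

```rust
use std::sync::Arc;
use tokio::sync::{RwLock, RwLockReadGuard, RwLockWriteGuard};

/// Stand-in for the thread's cached state.
struct ThreadState {
    events: Vec<String>,
}

// Analogy only: unlike a plain `RwLock`, the real `StateLock` can
// notice that the state was unloaded and reload it from the store.
#[derive(Clone)]
struct ThreadEventCacheStateLike {
    inner: Arc<RwLock<ThreadState>>,
}

impl ThreadEventCacheStateLike {
    /// Shared access; read-only methods hang off the read guard.
    async fn read(&self) -> RwLockReadGuard<'_, ThreadState> {
        self.inner.read().await
    }

    /// Exclusive access; mutating methods hang off the write guard.
    async fn write(&self) -> RwLockWriteGuard<'_, ThreadState> {
        self.inner.write().await
    }
}
```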
This patch creates `ThreadEventCacheInner` so that `ThreadEventCache`
can be shallow-cloned (which will be useful for `ThreadPagination`).
That's also the first step towards introducing `ThreadEventCacheState`!
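Such shallow cloning typically follows the `Arc`-wrapped inner pattern; a minimal sketch, with the field contents assumed:

```rust
use std::sync::Arc;

// Contents are assumptions; the point is the `Arc` indirection.
struct ThreadEventCacheInner {
    // state, weak room handle, …
}

/// Cloning is shallow: it only bumps the `Arc`'s refcount, so a future
/// `ThreadPagination` can keep its own cheap handle to the same cache.
#[derive(Clone)]
struct ThreadEventCache {
    inner: Arc<ThreadEventCacheInner>,
}
```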
The new integration tests for the event cache cover the same situations
that were tested by the previous tests on `compute_unread_count_legacy`,
so we're fine here.
A gappy sync may cause a linked chunk to shrink, waiting for callers to
lazy-reload it again in the future. But because the unread counts
computation relies on the in-memory linked chunk, the values computed
for the unread counts may be incorrect (and decrease).
Fortunately, this situation is rather easy to detect: the latest active
read receipt doesn't change in this case, so we can first check that,
and then manually readjust the unread counts if they've decreased.
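A minimal sketch of that readjustment, as an illustrative free function; the real check lives inside the event cache's receipt handling:

```rust
/// If the latest active read receipt hasn't changed, an unread count
/// can only have decreased because the linked chunk was shrunk, so
/// keep the previous, larger value.
fn readjust_unread_count(
    receipt_unchanged: bool,
    previous_count: u64,
    new_count: u64,
) -> u64 {
    if receipt_unchanged && new_count < previous_count {
        // The decrease is an artifact of the shrunken in-memory chunk,
        // not of the user actually reading messages.
        previous_count
    } else {
        new_count
    }
}
```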
Future work should trigger back-pagination in those cases, so the unread
counts keep on being precise despite the gappy sync.
This test exhibits an edge case: when a room event cache is shrunk
(because of a gappy/limited sync), the unread count might decrease
without the latest active read receipt changing.