Tokens are not necessary for the restoration of the crypto store; only
the meta is. They are, however, required for OpenID Connect support, where we
need the tokens to get the meta.
When the verification support was initially bound, Uniffi did not
support objects (the OlmMachine) returning other objects (our
verification objects). Instead, we converted all the verification objects
into pure data structs which the other side had to poll for changes.
Now that Uniffi supports returning objects, we can refactor this and make
the API on the other side the same as on the Rust side.
With these changes, users of sliding sync can configure the SlidingSyncBuilder to store and recover its state from storage. To do so, they call cold_cache("my-sliding-sync-key") with their preferred storage key at setup time on the SlidingSyncBuilder. If a cached version is found under that key, the internal state of the room list, as well as of each view found (identified by its name), is restored from storage immediately, allowing the user to show the last cached state. The builder then also uses the same key to store its latest state after every update received from the remote.
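As a minimal self-contained sketch of the restore/persist cycle around that key: the `Storage`, `FrozenState`, `restore`, and `persist` names below are illustrative assumptions, not the actual matrix-sdk types, and the "serialization" is a stand-in for the real format.

```rust
// Hypothetical model of the cold-cache flow: state is frozen under a
// user-chosen key and restored from the same key on the next startup.
use std::collections::HashMap;

type Storage = HashMap<String, String>; // key -> serialized state

#[derive(Debug, PartialEq)]
struct FrozenState {
    views: Vec<String>, // view names restored from cache
}

/// Restore previously frozen state stored under `key`, if any.
fn restore(storage: &Storage, key: &str) -> Option<FrozenState> {
    storage.get(key).map(|raw| FrozenState {
        views: raw.split(',').map(str::to_owned).collect(),
    })
}

/// Persist the latest state under the same key after a remote update.
fn persist(storage: &mut Storage, key: &str, state: &FrozenState) {
    storage.insert(key.to_owned(), state.views.join(","));
}
```

The key doubles as both the lookup handle at startup and the write target after every update, which is why the same string is passed only once, at builder time.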
👉 Note on room list:
As we store a disconnected state, we save all RoomList entries as either Empty or Invalidated. This makes updating easier when the server comes back with results, as we don't track the ranges in the cache - from the server's point of view, we haven't seen anything yet.
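The downgrade at freeze time could look like the following sketch; the `RoomListEntry` variants mirror the ones named above, but `freeze` and the exact shape of the type are assumptions for illustration.

```rust
// Convert a live room list into the cacheable, disconnected form:
// everything we knew becomes Invalidated, unknown slots stay Empty.
#[derive(Debug, Clone, PartialEq)]
enum RoomListEntry {
    Empty,
    Invalidated(String), // a room id we saw, but no longer trust
    Filled(String),
}

fn freeze(entries: &[RoomListEntry]) -> Vec<RoomListEntry> {
    entries
        .iter()
        .map(|e| match e {
            RoomListEntry::Filled(id) | RoomListEntry::Invalidated(id) => {
                RoomListEntry::Invalidated(id.clone())
            }
            RoomListEntry::Empty => RoomListEntry::Empty,
        })
        .collect()
}
```

Since no entry is cached as Filled, the first server response can overwrite any slot without range bookkeeping.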
👉 Note on timeline events:
This does store all existing timeline events - both the initial ones and those seen during the session (as we keep track of them right now) - and will recover that state. However, as we can't be sure whether we have gaps in the timeline, the timeline items are reset upon receiving updates for them from the server.
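That "reset on first live update" rule can be sketched as below; `CachedTimeline`, its fields, and its methods are illustrative assumptions, not the real timeline types.

```rust
// Cached events are shown immediately, but the first batch of live
// events replaces them, because the cache may contain gaps.
#[derive(Debug)]
struct CachedTimeline {
    items: Vec<String>, // event ids, restored from the cold cache
    restored_from_cache: bool,
}

impl CachedTimeline {
    fn from_cache(items: Vec<String>) -> Self {
        Self { items, restored_from_cache: true }
    }

    fn apply_live_events(&mut self, events: Vec<String>) {
        if self.restored_from_cache {
            // Possible gap between cached and live events: start over.
            self.items.clear();
            self.restored_from_cache = false;
        }
        self.items.extend(events);
    }
}
```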
👉 Note on (full-sync-)view:
While we cache the server-returned results (view list, room count and room info) as is, we do not cache the internal state (whether we are catching up) nor the ranges (as they are likely to be out of date) - these behave the same way you configured them, just with preloaded results. So a full-sync view will still issue requests in batches and replace the corresponding set of rooms. This could mean you see the same room appear twice, though the cached one would be marked as invalidated. This might be problematic if the list the UI shows is longer than the batches fetched, but it resolves quickly once we have caught up.
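The temporary duplicate can be demonstrated with a toy batched replacement; `Entry` and `apply_batch` are assumptions sketching the behaviour described above, not the actual implementation.

```rust
// Each batch replaces one range of slots with fresh server results,
// leaving the cached (invalidated) entries outside the range untouched.
#[derive(Debug, Clone, PartialEq)]
enum Entry {
    Invalidated(String),
    Filled(String),
}

fn apply_batch(list: &mut Vec<Entry>, range: std::ops::Range<usize>, fresh: Vec<Entry>) {
    list.splice(range, fresh);
}
```

If the server now sorts a room into a slot that an earlier batch covers, the fresh copy and the cached invalidated copy coexist until the batch for the old slot arrives:

```rust
// (usage, assuming the sketch above)
// let mut list = vec![Entry::Invalidated("!a".into()), Entry::Invalidated("!b".into())];
// apply_batch(&mut list, 0..1, vec![Entry::Filled("!b".into())]);
// -> "!b" appears twice until the next batch replaces slot 1.
```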
👉 On recovery:
When the sliding sync receives an M_UNKNOWN_POS, indicating the server has expired our session, it now transparently retries up to three times to restart the sliding sync with the full set of extensions and the latest views at their existing windows; the current room state is kept. For a full sync this starts a new sync-up with the existing room list staying intact. This also works when starting offline.
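The retry loop can be sketched as follows; `SyncError`, `sync_once` (a stand-in that fails twice before succeeding), and `sync_with_recovery` are illustrative assumptions, not the real API.

```rust
// On M_UNKNOWN_POS, restart the sync from scratch (dropping the stale
// position but keeping views and room state) up to three times.
#[derive(Debug, PartialEq)]
enum SyncError {
    UnknownPos, // server replied with errcode M_UNKNOWN_POS
    Fatal(String),
}

fn sync_once(attempt: u32) -> Result<(), SyncError> {
    // Mock for one sliding-sync round trip: expired twice, then fine.
    if attempt < 2 { Err(SyncError::UnknownPos) } else { Ok(()) }
}

/// Returns the attempt number that finally succeeded.
fn sync_with_recovery() -> Result<u32, SyncError> {
    let mut attempt = 0;
    loop {
        match sync_once(attempt) {
            Ok(()) => return Ok(attempt),
            Err(SyncError::UnknownPos) if attempt < 3 => attempt += 1,
            Err(e) => return Err(e),
        }
    }
}
```

Bounding the loop at three attempts keeps a persistently broken session from retrying forever, while still covering the common single-expiry case transparently.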