Creating many threads can use a noticeable amount of memory: on a machine
with N cores, up to N * 2 MB of memory may be consumed.
That might be a lot for an NSE process on iOS, which can only have up to
16 MB of RAM allocated for it. For this case, we introduce a new FFI
method `setup_lightweight_tokio_runtime` which will spawn at most 4
worker threads and 1 blocking thread. This should be sufficient for most
use cases.
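
For illustration, here is a minimal sketch of how such a lightweight runtime could be configured with Tokio's builder; the function name is made up and the exact configuration of the real FFI method may differ:

```rust
use tokio::runtime::{Builder, Runtime};

/// Illustrative sketch only: build a multi-threaded Tokio runtime capped at
/// 4 worker threads and a single blocking thread, to keep memory usage low
/// in constrained processes such as an iOS NSE.
fn build_lightweight_runtime() -> std::io::Result<Runtime> {
    Builder::new_multi_thread()
        .worker_threads(4)
        .max_blocking_threads(1)
        .enable_all()
        .build()
}
```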
Always using the first redirect URI in the client metadata seems to be
very specific to the FFI bindings.
For example, clients that need to bind a port on localhost need to
provide a custom redirect URI each time.
So we now ask for the redirect URI explicitly, and keep the current
behavior only for the bindings.
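
As an illustration of the localhost case mentioned above, here is a hypothetical sketch; the helper name and URI format are not part of the SDK:

```rust
use std::net::TcpListener;

/// Hypothetical sketch: bind an ephemeral localhost port and derive the
/// redirect URI that would then be passed explicitly to the login call.
/// A real client would keep the listener around to receive the callback.
fn localhost_redirect_uri() -> std::io::Result<String> {
    // Port 0 asks the OS for any free port.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let port = listener.local_addr()?.port();
    Ok(format!("http://127.0.0.1:{port}/callback"))
}
```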
Signed-off-by: Kévin Commaille <zecakeh@tedomum.fr>
This patch introduces the new `EncryptionState` to represent the 3
possible states: `Encrypted`, `NotEncrypted` or `Unknown`. All the
`is_encrypted` methods have been replaced by `encryption_state`.
The most noticeable change is in `matrix_sdk::Room`, where `async fn
is_encrypted(&self) -> Result<bool>` has been replaced by `fn
encryption_state(&self) -> EncryptionState`. However, a new `async
fn latest_encryption_state(&self) -> Result<EncryptionState>` method
“restores” the previous behaviour by calling `request_encryption_state`
if necessary.
The idea is that the caller is now responsible for calling
`request_encryption_state` if desired, or for using `latest_encryption_state`
to automate the call if necessary. `encryption_state` is now non-async
and infallible everywhere.
`matrix-sdk-ffi` has been updated but no methods have been added for
the moment.
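
For reference, a simplified sketch of the shape of the API described above; the actual definitions in the SDK may differ in details:

```rust
/// Simplified sketch of the new type; the real definition lives in the SDK.
pub enum EncryptionState {
    /// The room is encrypted.
    Encrypted,
    /// The room is not encrypted.
    NotEncrypted,
    /// The encryption state is not known yet, e.g. it was never requested
    /// from the server.
    Unknown,
}

// Sketch of the new `matrix_sdk::Room` surface, as described above:
//
//     fn encryption_state(&self) -> EncryptionState;                      // sync, infallible
//     async fn latest_encryption_state(&self) -> Result<EncryptionState>; // may call
//                                                                         // `request_encryption_state`
```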
Instead of keeping a `Paginator` instance around as state, we create one
when needed, in the `run_backwards_impl` method, and initialize it
with a previous-batch token. This is simpler than keeping one alive and
making sure that we reset it in the right places.
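
A rough sketch of the pattern, with hypothetical stand-in types rather than the real SDK ones:

```rust
/// Hypothetical stand-ins for the real SDK types, to illustrate the flow only.
struct PrevBatchToken(String);

struct Paginator {
    token: Option<PrevBatchToken>,
}

impl Paginator {
    /// Create a paginator seeded with the latest previous-batch token.
    fn new(token: Option<PrevBatchToken>) -> Self {
        Self { token }
    }

    /// Fetch older events, starting from the stored token (or from the end
    /// of the timeline when there is none).
    fn paginate_backwards(&self, _batch_size: u16) {
        match &self.token {
            Some(PrevBatchToken(token)) => println!("paginating from {token}"),
            None => println!("paginating from the end of the timeline"),
        }
    }
}

/// Sketch of `run_backwards_impl`: the paginator is created on demand and
/// dropped at the end of the call, so there is nothing to reset elsewhere.
fn run_backwards_impl(stored_token: Option<PrevBatchToken>, batch_size: u16) {
    let paginator = Paginator::new(stored_token);
    paginator.paginate_backwards(batch_size);
}
```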
This patch adds support for the `shared_history` flag from MSC3061 to
the `m.room_key` content, exported room keys, and backed-up room keys.
The flag is now persisted in our `InboundGroupSession`. Additionally,
when creating a new `InboundGroupSession`, we ensure the
`shared_history` flag is set appropriately.
MSC3061: https://github.com/matrix-org/matrix-spec-proposals/pull/3061
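
As a rough illustration of where the flag ends up, here is a hedged sketch; the surrounding fields are simplified and the real structs contain more data:

```rust
use serde::{Deserialize, Serialize};

/// Simplified sketch of an exported room key carrying the MSC3061 flag; only
/// the fields relevant to this change are shown.
#[derive(Serialize, Deserialize)]
struct ExportedRoomKey {
    room_id: String,
    session_id: String,
    session_key: String,
    /// MSC3061: whether this key was shared while the room's history
    /// visibility allowed sharing it with newly invited users.
    #[serde(default)]
    shared_history: bool,
}
```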
Token revocation was split out from MSC2964 to MSC4254, and RP-Initiated
logout is now mentioned only as an alternative.
Signed-off-by: Kévin Commaille <zecakeh@tedomum.fr>
This is the method to get the server metadata in the latest draft of
[MSC2965](https://github.com/matrix-org/matrix-spec-proposals/pull/2965).
We still keep the old behavior with `GET /auth_issuer` as a fallback for
now because it has wider server support.
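
A hedged sketch of the fallback logic; the function names and error variants are illustrative, not the SDK's actual API:

```rust
/// Hypothetical stand-ins, just to show the control flow of the fallback.
struct AuthMetadata;

enum DiscoveryError {
    /// The homeserver does not know the new metadata endpoint.
    EndpointNotSupported,
    Other(String),
}

async fn fetch_new_metadata_endpoint() -> Result<AuthMetadata, DiscoveryError> {
    // ... GET the server metadata endpoint from the latest MSC2965 draft ...
    Err(DiscoveryError::EndpointNotSupported)
}

async fn fetch_auth_issuer() -> Result<String, DiscoveryError> {
    // ... GET /auth_issuer (older draft), returning the issuer URL ...
    Ok("https://auth.example.org".to_owned())
}

async fn fetch_metadata_from_issuer(_issuer: &str) -> Result<AuthMetadata, DiscoveryError> {
    // ... fetch the OAuth server metadata from the issuer ...
    Ok(AuthMetadata)
}

/// Prefer the new endpoint; fall back to `/auth_issuer` when unsupported.
async fn server_metadata() -> Result<AuthMetadata, DiscoveryError> {
    match fetch_new_metadata_endpoint().await {
        Ok(metadata) => Ok(metadata),
        Err(DiscoveryError::EndpointNotSupported) => {
            let issuer = fetch_auth_issuer().await?;
            fetch_metadata_from_issuer(&issuer).await
        }
        Err(other) => Err(other),
    }
}
```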
There are some pre-main commit cleanups to simplify the main commit.
This can be reviewed commit by commit.
The changes were tested with the oidc_cli example on beta.matrix.org.
Closes #4550.
---------
Signed-off-by: Kévin Commaille <zecakeh@tedomum.fr>
This should be the most common case, and is already the only case
supported by the higher-level APIs like `url_for_oidc` and
`login_with_qr_code`. It simplifies the API because we can call
`restore_registered_client` directly from `register_client`, which was a
TODO.
- [x] Public API changes documented in changelogs (optional)
---------
Signed-off-by: Kévin Commaille <zecakeh@tedomum.fr>
## Some context
An aggregation is an event that relates to another event: for instance, a
reaction, a poll response, and so on and so forth.
## Some requirements
Because of the sync mechanisms and federation, it can happen that a
related event is received *before* the event it relates to. Those events
must be accounted for, stashed somewhere, and reapplied later, if/when
the related-to event shows up.
In addition to that, a room's event cache can also decide to move events
around, in its own internal representation (likely because it ran into
some duplicate events, or it managed to decrypt a previously UTD event).
When that happens, a timeline opened on the given room will see a removal
then re-insertion of the given event. If that event was the target of
aggregations, then those aggregations must be re-applied when the given
event is reinserted.
## Some solution
To satisfy both requirements, the [`Aggregations`] "manager" object
provided by this PR will take care of memoizing aggregations, **for the
entire lifetime of the timeline** (or until it's cleared by some caller).
Aggregations are saved in memory, and have the same lifetime as that of a
timeline. This makes it possible to apply pending aggregations to cater
for the first use case, and to never lose any aggregations in the second
use case.
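
To make the mechanism concrete, here is a minimal sketch of such a manager, with hypothetical names, memoizing aggregations keyed by the event they relate to:

```rust
use std::collections::HashMap;

/// Hypothetical identifier type, standing in for the SDK's own.
type EventId = String;

#[derive(Clone)]
enum Aggregation {
    Reaction { key: String },
    PollResponse { answers: Vec<String> },
    // ... poll end, etc.
}

/// Sketch of the manager: aggregations are memoized for the whole lifetime
/// of the timeline, keyed by the event they relate to.
#[derive(Default)]
struct Aggregations {
    by_target: HashMap<EventId, Vec<Aggregation>>,
}

impl Aggregations {
    /// Record an aggregation; the target may not be known to the timeline yet.
    fn add(&mut self, target: EventId, aggregation: Aggregation) {
        self.by_target.entry(target).or_default().push(aggregation);
    }

    /// When the target event shows up (or is re-inserted after a removal),
    /// return all aggregations that must be (re-)applied to it.
    fn pending_for(&self, target: &EventId) -> &[Aggregation] {
        self.by_target.get(target).map(Vec::as_slice).unwrap_or(&[])
    }

    /// Forget everything, e.g. when the timeline is cleared.
    fn clear(&mut self) {
        self.by_target.clear();
    }
}
```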
## Some points for the reviewer
- I think the most controversial point is that all aggregations are
memoized for the entire lifetime of the timeline. Should that become an
issue, we can fall back to some incremental scheme in the future:
instead of memoizing aggregations for the entire lifetime of the
timeline, we'd attach them to a single timeline item. When that item is
removed, we'd put the aggregations back into a "pending" stash of
aggregations. If the item is reinserted later, we could peek at the
pending stash of aggregations, remove any that are in there, and reapply
them to the reinserted event. This is what the [first version of this
patch](ec64b9e0bc) did, in a much more ad hoc way, for reactions only;
based on the current PR, we could do the same in a simpler manner.
- while the PR has small commits, they don't quite make sense to review
individually, I'm afraid, as I was trying to find a way to make a
general system that would work not only for reactions, but also for
poll responses and poll ends. As a matter of fact, the first commits may
have introduced code that is changed in subsequent commits, making the
review a bit hazardous. Happy to have a live reviewing party over Element
Call, if that helps, considering the size of the patch.
- future work may include using the aggregations manager for edits too,
leading to more code removal.
The `SyncService::stop()` method could fail for the following reasons:
1. The supervisor was not properly started up; this is a programmer error.
2. The supervisor task wouldn't shut down and instead returned a `JoinError`.
3. We couldn't notify the supervisor task that it should shut down because the channel was closed.
None of those cases should ever happen, and the supervisor task will be
stopped in all of them:
1. Since there is no supervisor to be stopped, we can safely just log an
   error; our tests ensure that `SyncService::start()` does create a
   supervisor.
2. A `JoinError` can be returned if the task has been cancelled or if the
supervisor task has panicked. Since we never cancel the task, nor
have any panics in the supervisor task, we can assume that this won't
happen.
3. The supervisor task holds on to a reference to the receiving end of
   the channel; as long as the task is alive, the channel cannot be
   closed.
In conclusion, it doesn't seem to be useful to forward these error cases
to the user.
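
A hedged sketch of what handling these cases internally (instead of surfacing them) could look like; the types and field names are illustrative, not the actual implementation:

```rust
use tokio::sync::{oneshot, Mutex};
use tokio::task::JoinHandle;

/// Hypothetical shape of the supervisor state, for illustration only.
struct Supervisor {
    shutdown_tx: oneshot::Sender<()>,
    join_handle: JoinHandle<()>,
}

struct SyncService {
    supervisor: Mutex<Option<Supervisor>>,
}

impl SyncService {
    /// Sketch: `stop()` logs the three "impossible" failures instead of
    /// returning them to the caller.
    async fn stop(&self) {
        // Case 1: no supervisor means `start()` was never called.
        let Some(supervisor) = self.supervisor.lock().await.take() else {
            eprintln!("stop() called but the sync service was never started");
            return;
        };

        // Case 3: the send only fails if the receiving end is gone, i.e. the
        // supervisor task already exited.
        if supervisor.shutdown_tx.send(()).is_err() {
            eprintln!("the supervisor task has already exited");
        }

        // Case 2: a JoinError means the task was cancelled or panicked,
        // neither of which we ever do.
        if let Err(join_error) = supervisor.join_handle.await {
            eprintln!("the supervisor task did not shut down cleanly: {join_error}");
        }
    }
}
```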