We don't need the `hyper` feature as we use our own HTTP client,
and the `keystore` will not be used in most cases.
Signed-off-by: Kévin Commaille <zecakeh@tedomum.fr>
Until now, fallback keys have been rotated on the basis that the
homeserver tells us that a fallback key has been used.
This leads to various problems if the server tells us too often that
it has been used, i.e. if we receive the same sync response too often.
It also leaves us open to the homeserver being dishonest and never
telling us that the fallback key has been used.
Let's avoid all these problems by rotating the fallback key in a
time-based manner.
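The idea can be sketched like this, with a hypothetical `FallbackKey` record carrying a creation timestamp (illustrative names, not the SDK's actual types):

```rust
use std::time::{Duration, Instant};

/// Hypothetical fallback key record; the real SDK types differ.
struct FallbackKey {
    created_at: Instant,
}

/// Rotate once the key is older than the threshold, regardless of
/// whether the server claimed the key was used.
fn should_rotate(key: &FallbackKey, max_age: Duration) -> bool {
    key.created_at.elapsed() >= max_age
}

fn main() {
    let key = FallbackKey { created_at: Instant::now() };
    // A freshly created key does not need rotation yet.
    println!("rotate now? {}", should_rotate(&key, Duration::from_secs(3600)));
}
```

Rotation no longer depends on (possibly repeated or withheld) "key used" signals from the server.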
Signed-off-by: Damir Jelić <poljar@termina.org.uk>
Co-authored-by: Andy Balaam <andy.balaam@matrix.org>
Now that there is some support for [MSC2530](https://github.com/matrix-org/matrix-spec-proposals/pull/2530), I gave sending captions a try. (This is my first time with Rust 😄)
I tried it on Element X with a hardcoded caption and it seems to work well

(It even got forwarded through mautrix-whatsapp and the caption was visible on the WhatsApp side.)
---
* ffi: Expose filename and formatted body fields for media captions
Relates to MSC2530
* MSC2530: added the ability to send media with captions
Signed-off-by: Marco Antonio Alvarez <surakin@gmail.com>
* signoff
Signed-off-by: Marco Antonio Alvarez <surakin@gmail.com>
* fixing the import messup
* fix missing parameters in documentation
* fix formatting
* move optional parameters to the end
* more formatting fixes
* more formatting fixes
* rename url parameter to filename in send_attachment and helpers
* fix send_attachment documentation example
* move caption and formatted_caption into attachmentconfig
* fix formatting
* fix formatting
* fix formatting (hopefully the last one)
* updated stale comments
* simplify attachment message comments
---------
Signed-off-by: Marco Antonio Alvarez <surakin@gmail.com>
Co-authored-by: SpiritCroc <dev@spiritcroc.de>
This patch updates `ChunkContent::Gap` to hold a content `U`. Thus,
`Chunk` and `LinkedChunk` both get a new generic parameter `U`. Some
methods like `new_gap` or `insert_gap_at` take a new `content: U`
parameter.
The `RoomEvents` type (which uses `LinkedChunk`) is also updated
accordingly.
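A condensed sketch of the shape described above, with simplified names (the real `LinkedChunk` carries more machinery, and links are elided here):

```rust
/// The content held by a chunk: either a block of items, or a gap
/// carrying caller-provided content `U` (e.g. a pagination token).
enum ChunkContent<T, U> {
    Items(Vec<T>),
    Gap(U),
}

/// A chunk in the linked structure; the previous/next links are elided.
struct Chunk<T, U> {
    content: ChunkContent<T, U>,
}

fn main() {
    // A gap chunk now carries its own content, here a token string.
    let gap: Chunk<&str, String> = Chunk {
        content: ChunkContent::Gap("prev_batch_token".to_owned()),
    };
    match gap.content {
        ChunkContent::Gap(token) => println!("gap token: {token}"),
        ChunkContent::Items(_) => println!("items"),
    }
}
```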
The small first commit reintroduces `sanitize_server_name` in the public API. In Fractal, we use it to validate the user's input before allowing the user to trigger the discovery.
The main commit changes the discovery process a bit: before, server names like `example.org` would only be checked for the presence of a well-known, and only URLs like `https://example.org` would be checked as a homeserver. Now, providing `example.org` will also check if `https://example.org` is the URL of a homeserver.
Sadly I don't think it's possible to add tests for it, as it would require having a homeserver accessible via HTTPS.
---
* sdk: Restore sanitize_server_name in the public API
Signed-off-by: Kévin Commaille <zecakeh@tedomum.fr>
* sdk: Check if a provided server name points to a homeserver during discovery
Before, only URLs like `https://example.org` would be checked as a homeserver.
Providing `example.org` will check if `https://example.org` is the URL of a homeserver.
Signed-off-by: Kévin Commaille <zecakeh@tedomum.fr>
* Refactor to avoid duplication
Signed-off-by: Kévin Commaille <zecakeh@tedomum.fr>
---------
Signed-off-by: Kévin Commaille <zecakeh@tedomum.fr>
Before this patch, the meta field would be mutated even when the transaction was aborted. This patch changes the update scheme for the meta field
as follows:
- when creating the transaction, clone the meta (but keep the pointer location to the previous one),
- all the transaction's methods operate on the WIP meta,
- when committing, replace the previous meta with the current one.
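The scheme amounts to a copy-on-write transaction, roughly like this (illustrative types, not the actual store code):

```rust
#[derive(Clone, Debug)]
struct Meta {
    counter: u64,
}

struct Store {
    meta: Meta,
}

/// A transaction works on a private clone of the meta.
struct Transaction<'a> {
    store: &'a mut Store,
    wip_meta: Meta,
}

impl Store {
    fn begin(&mut self) -> Transaction<'_> {
        // Clone the meta when the transaction is created.
        let wip_meta = self.meta.clone();
        Transaction { store: self, wip_meta }
    }
}

impl Transaction<'_> {
    fn bump(&mut self) {
        // All methods mutate the WIP meta only.
        self.wip_meta.counter += 1;
    }

    fn commit(self) {
        // Only commit publishes the WIP meta; dropping the
        // transaction without committing aborts it.
        self.store.meta = self.wip_meta;
    }
}

fn main() {
    let mut store = Store { meta: Meta { counter: 0 } };

    // Aborted transaction: the store's meta is untouched.
    {
        let mut txn = store.begin();
        txn.bump();
    }
    assert_eq!(store.meta.counter, 0);

    // Committed transaction: the WIP meta replaces the old one.
    let mut txn = store.begin();
    txn.bump();
    txn.commit();
    assert_eq!(store.meta.counter, 1);
    println!("ok");
}
```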
This patch is a work-in-progress. It explores an experimental data
structure to store events in an efficient way.
Note: in this comment, I will use the term _store_ to mean _database_
or _storage_.
The biggest constraint is the following: events can be ordered in
multiple ways, either topological order, or sync order. The problem is
that, when syncing events (with `/sync`), or when fetching events (with
`/messages`), we **don't know** how to order the newly received events
compared to the already downloaded events. A reconciliation algorithm
must be written (see #3058). However, from the “storage” point of view,
events must be read, written and re-ordered efficiently.
The simplest approach would be to use an `order_index`, for example:
every time a new event is inserted, it takes the position of the last
event, increments it by one, and done.
However, inserting a new event in _the middle_ of existing events would
shift all events on one side of the insertion point: given `a`, `b`,
`c`, `d`, `e`, `f` with `f` being the most recent event, if `g` needs
to be inserted between `b` and `c`, then `c`, `d`, `e`, `f`'s ordering
positions need to be shifted. That's not optimal at all as it would
imply a lot of updates in the store.
Example of a relational database:
| ordering_index | event |
|----------------|-------|
| 0 | `a` |
| 1 | `b` |
| 2 | `g` |
| 3 | `c` |
| … | … |
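The shifting cost is the same as inserting into the middle of a plain vector, where every later element moves (a toy illustration):

```rust
fn main() {
    // `f` is the most recent event.
    let mut events = vec!["a", "b", "c", "d", "e", "f"];

    // Inserting `g` between `b` and `c` shifts `c`, `d`, `e`, `f`:
    // with a store-backed `order_index`, each shifted event would be
    // one UPDATE query.
    events.insert(2, "g");

    assert_eq!(events, ["a", "b", "g", "c", "d", "e", "f"]);
    println!("{events:?}");
}
```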
An insertion can be O(n), and it can happen more frequently than one
might think. Let's imagine a permalink to an old message: the user
opens it, a couple of events are fetched (with `/messages`), and these
events must be inserted in the store, thus potentially shifting a lot of
existing events. Another example: imagine the SDK has a search API for
events; as long as no search result is found, the SDK will back-paginate
until reaching the beginning of the room; every time there is a
back-pagination, a block of events will be inserted: there are more and
more events to shift at each back-pagination.
OK, let's forget the `order_index`. Let's use a linked list then? Each
event has a _link_ to the _previous_ and to the _next_ event.
Inserting an event would be at worst O(3) in this case: if the previous
event exists, it must be updated; if the next event exists, it must be
updated; finally, insert the new event.
Example with a relational database:
| previous | id | event | next |
|----------|---------|-------|---------|
| null | `id(a)` | `a` | `id(b)` |
| `id(a)` | `id(b)` | `b` | `id(c)` |
| `id(b)` | `id(c)` | `c` | null |
This approach ensures fast _writing_, but terribly slow _reading_.
Indeed, reading N events requires N queries in the store. Events aren't
contiguous in the store, and cannot be ordered by the database engine
(e.g. with `ORDER BY` for SQL-based databases). So it really requires
one query per event. That's a no-go.
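The O(3) write bound can be illustrated by simulating the table with a map from event IDs to rows; each mutation below corresponds to one UPDATE/INSERT in a real store (a toy sketch):

```rust
use std::collections::HashMap;

/// A row in the hypothetical `events` table (the `event` column is elided).
struct Row {
    previous: Option<&'static str>,
    next: Option<&'static str>,
}

fn main() {
    // Store containing a → b, simulated as a key-value table.
    let mut table: HashMap<&str, Row> = HashMap::new();
    table.insert("a", Row { previous: None, next: Some("b") });
    table.insert("b", Row { previous: Some("a"), next: None });

    // Insert `g` between `a` and `b`: three writes at most,
    // regardless of how many events the store contains.
    table.get_mut("a").unwrap().next = Some("g"); // update the previous event
    table.get_mut("b").unwrap().previous = Some("g"); // update the next event
    table.insert("g", Row { previous: Some("a"), next: Some("b") }); // insert

    assert_eq!(table["a"].next, Some("g"));
    assert_eq!(table["g"].next, Some("b"));
    println!("ok");
}
```

Writes stay constant, but reading the list back still means chasing one link per event.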
In the two scenarios above, another problem arises. How to represent
a gap? Indeed, when new events are synced (via `/sync`), sometimes the
response contains a `limited` flag, which means that the results are
_partial_.
Let's take the following example: the store contains `a`, `b`, `c`.
After a long offline period (during which the room has been pretty
active), a sync is started, which provides the following events: `x`,
`y`, `z` + the _limited_ flag. The app is killed and reopened later.
The event cache store will contain `a`, `b`, `c`, `x`, `y`, `z`. How
do we know that there is a hole/a gap between `c` and `x`? This is
important information! When `z`, `y` and `x` are displayed, and the user
wants to scroll up, the SDK must know that it must back-paginate
before providing `c`, `b` and `a`.
So the data structure we use must also represent gaps. This information
is also crucial for the events reconciliation algorithm.
What about a mix between the two? Here is _Linked Chunk_.
A _linked chunk_ is like a linked list, except that each node is either
a _Gap_ or an _Items_. A _Gap_ contains nothing, it's just a gap. An
_Items_ contains _several_ events. A node is called a _Chunk_. A _chunk_
has a maximum size, which is called a _capacity_. When a chunk is full,
a new chunk is created and linked appropriately. Inside a chunk, an
ordering index is used to order events. At this point, it becomes a
trade-off to find the appropriate chunk size to balance the performance
between reading and writing. Nonetheless, if the chunk size is 50, then
reading events is 50 times more efficient with a linked chunk than with
a linked list, and writing events is at worst O(49), compared to the O(n
- 1) of the ordering index.
Example with a relational database. First table is `events`, second
table is `chunks`.
| chunk id | index | event |
|----------|-------|-------|
| `$0` | 0 | `a` |
| `$0` | 1 | `b` |
| `$0` | 2 | `c` |
| `$0` | 3 | `d` |
| `$2` | 0 | `e` |
| `$2` | 1 | `f` |
| `$2` | 2 | `g` |
| `$2` | 3 | `h` |
| chunk id | type | previous | next |
|----------|-------|----------|------|
| `$0` | items | null | `$1` |
| `$1` | gap | `$0` | `$2` |
| `$2` | items | `$1` | null |
Reading the last chunk consists of reading all events where the
`chunk id` is `$2`, for example, which contains events `e`, `f`, `g` and
`h`. We can sort them easily by using the `index` column. The
previous chunk is a gap. The chunk before that contains events `a`, `b`,
`c` and `d`.
Being able to read events by chunk clearly limits the amount of reading
and writing in the store. It is also close to how the store will really
be used in practice. It also makes it possible to represent gaps. We can
replace a gap with a new chunk pretty easily, with few writes.
A summary:
| Data structure | Reading | Writing |
|----------------|-------------------|-----------------|
| Ordering index | “O(1)”[^1] (fast) | O(n - 1) (slow) |
| Linked list | O(n) (slow) | O(3) (fast) |
| Linked chunk | O(n / capacity) | O(capacity - 1) |
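The structure can be sketched as follows; this is a simplified toy (links flattened into a `Vec`, tiny capacity), not the actual `LinkedChunk` API:

```rust
const CAPACITY: usize = 3; // toy capacity; the text above uses 50

/// A chunk is either a gap or a block of up to `CAPACITY` items.
enum Chunk<T> {
    Gap,
    Items(Vec<T>),
}

/// A linked chunk; the previous/next links are flattened into a `Vec`
/// for brevity.
struct LinkedChunk<T> {
    chunks: Vec<Chunk<T>>,
}

impl<T> LinkedChunk<T> {
    fn new() -> Self {
        Self { chunks: Vec::new() }
    }

    /// Push at the back; open a new chunk when the last one is full.
    fn push(&mut self, item: T) {
        let needs_new_chunk = match self.chunks.last() {
            Some(Chunk::Items(items)) => items.len() >= CAPACITY,
            _ => true,
        };
        if needs_new_chunk {
            self.chunks.push(Chunk::Items(Vec::new()));
        }
        if let Some(Chunk::Items(items)) = self.chunks.last_mut() {
            items.push(item);
        }
    }

    /// Record a hole in the timeline.
    fn push_gap(&mut self) {
        self.chunks.push(Chunk::Gap);
    }
}

fn main() {
    let mut lc = LinkedChunk::new();
    for event in ["a", "b", "c", "d"] {
        lc.push(event);
    }
    lc.push_gap();
    lc.push("x");
    // 4 items with capacity 3 → two item chunks, then a gap, then one
    // more item chunk.
    assert_eq!(lc.chunks.len(), 4);
    println!("ok");
}
```

Reading then proceeds chunk by chunk, and replacing a gap only touches the gap chunk and its two neighbours.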
This patch contains a draft implementation of a linked chunk. It
strictly contains only the API required by the `EventCache`; understand
that it _is not_ designed as a generic data structure type.
[^1]: O(1) because it's simply one query to run; the database engine
does the sorting for us in a very efficient way, particularly if the
`ordering_index` is an unsigned integer.
This adds a new mechanism in the UI crate (since re-attempts to decrypt happen in the timeline as of today; later that'll happen in the event cache) to notify whenever we run into a UTD (an event that couldn't be decrypted) or a late-decryption event (a UTD that could be decrypted after some time).
This new hook will deduplicate pings for the same event (identifying events by their event ID), and also implements an optional grace period. If an event was a UTD:
- if it's still a UTD after the grace period, then it's reported with a `None` `time_to_decrypt`,
- if it's not a UTD anymore (i.e. it's been decrypted in the meanwhile), then it's reported with a `time_to_decrypt` set to the time it took to decrypt the event (approximate, since it starts counting from the time the timeline receives it, not the time the SDK fails to decrypt it at first).
It's configurable as an optional hook on timeline builders. For the FFI, it's configurable at the sync service's level with a "delegate", and then the sync service will forward the hook to the timelines it creates, and the hook will forward the UTD info to the delegate.
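The deduplication and grace-period logic can be sketched roughly like this (hypothetical names and a synchronous shape; the real hook is async and timer-driven):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Hypothetical report; the SDK's actual `UnableToDecryptInfo` differs.
#[derive(Debug)]
struct UtdReport {
    event_id: String,
    /// `None`: still a UTD after the grace period.
    /// `Some(d)`: decrypted after `d` (counted from timeline receipt).
    time_to_decrypt: Option<Duration>,
}

struct UtdHookManager {
    grace_period: Duration,
    /// event_id → when the event was first seen as a UTD.
    pending: HashMap<String, Instant>,
    reported: Vec<UtdReport>,
}

impl UtdHookManager {
    fn on_utd(&mut self, event_id: &str) {
        // Deduplicate on the event ID: a repeated UTD for the same
        // event keeps the original timestamp.
        self.pending.entry(event_id.to_owned()).or_insert_with(Instant::now);
    }

    fn on_late_decrypt(&mut self, event_id: &str) {
        if let Some(first_seen) = self.pending.remove(event_id) {
            self.reported.push(UtdReport {
                event_id: event_id.to_owned(),
                time_to_decrypt: Some(first_seen.elapsed()),
            });
        }
    }

    /// Called periodically: anything still pending after the grace
    /// period is reported as a definitive UTD.
    fn flush_expired(&mut self) {
        let grace = self.grace_period;
        let expired: Vec<_> = self
            .pending
            .iter()
            .filter(|(_, seen)| seen.elapsed() >= grace)
            .map(|(id, _)| id.clone())
            .collect();
        for id in expired {
            self.pending.remove(&id);
            self.reported.push(UtdReport { event_id: id, time_to_decrypt: None });
        }
    }
}

fn main() {
    let mut hook = UtdHookManager {
        grace_period: Duration::ZERO, // zero just for the demo
        pending: HashMap::new(),
        reported: Vec::new(),
    };
    hook.on_utd("$ev1");
    hook.on_utd("$ev1"); // duplicate ping: ignored
    hook.on_late_decrypt("$ev1");
    assert_eq!(hook.reported.len(), 1);
    assert!(hook.reported[0].time_to_decrypt.is_some());
    println!("ok");
}
```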
Part of https://github.com/element-hq/element-meta/issues/2300.
---
* ui: add a new module and trait for subscribing to unable-to-decrypt notifications
and late decryptions (i.e. the key came in after the event that required it for decryption).
* timeline: hook in (!) the unable-to-decrypt hook
* timeline: prefix some test names with test_
* utd hook: delay reporting UTDs
* ffi: add support for configuring the utd hook
* utd hook: switch strategy, have a single hook
And have the data structure contain extra information about late-decryption events.
* utd hook: rename `SmartUtdHook` to `UtdHookManager`
* ffi: configure the UTD hook with a grace period of 60 seconds
And ignore UTDs that have been late-decrypted in less than 4 seconds.
* utd hook: update documentation and satisfy the clippy gods
* ffi: introduce another UnableToDecryptInfo FFI struct that exposes simplified fields from the SDK's version
* review: introduce type alias for pending utd reports
* review: address other review comments
This adds support for back-pagination into the event cache, supporting enough features for integrating with the timeline (which is going to happen in a separate PR).
The idea is to provide two new primitives:
- one to get (or wait, if we don't have any handy) the latest pagination token received by the sync,
- one to run a single back-pagination, given a token (or not — which will backpaginate from the end of the room's timeline)
The timeline code can then use those two primitives in a loop to replicate the current behavior it has (next PR to be open Soon™).
The representation of events in the store is changed, so that a timeline can have *entries*, which are one of two things:
- either an event, as before
- or a gap, identified by a backpagination token (at the moment)
This allows us to avoid a lot of complexity from the back-pagination code in the timeline, where we'd attach the backpagination token to an event that only had an event_id. We don't have to do this here, and I suppose we could even attach the backpagination token to the next event itself.
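A sketch of the new entry representation (illustrative names, not the SDK's actual types):

```rust
/// Hypothetical timeline entry in the event cache store; real names differ.
#[derive(Debug)]
enum TimelineEntry {
    /// A known event, as before.
    Event { event_id: String },
    /// A hole in the timeline, identified by a back-pagination token.
    Gap { prev_token: String },
}

fn main() {
    let timeline = vec![
        TimelineEntry::Event { event_id: "$a".to_owned() },
        TimelineEntry::Gap { prev_token: "t0".to_owned() },
        TimelineEntry::Event { event_id: "$x".to_owned() },
    ];
    // Back-pagination resolves a gap by paginating with its token,
    // replacing the gap entry with the fetched events.
    for entry in &timeline {
        match entry {
            TimelineEntry::Event { event_id } => println!("event {event_id}"),
            TimelineEntry::Gap { prev_token } => println!("gap, token {prev_token}"),
        }
    }
}
```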
This doesn't do reconciliation yet; the plan is to add it as a next step.