A room can be associated with a lot of data, depending on the number of members in the room, so freeing that space on the filesystem can be worthwhile in some cases.
An (extreme) example: I have a test account that is in ~60 rooms, a few of them big public rooms, including Matrix HQ. The size of the `matrix-sdk-state.sqlite3` file is 542 MB. Using this PR and leaving, then forgetting, Matrix HQ brings the DB down to 255 MB.
The WAL file can grow depending on the transactions that are run. A
critical case is VACUUM which basically writes the content of the DB
file to the WAL file before writing it back to the DB file.
SQLite doesn't try to reduce the size of the file after that unless we
set an explicit limit,
so we could end up taking twice the size of the database on the
filesystem.
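The idea can be sketched with Python's stdlib `sqlite3` (the actual store is Rust; the pragma shown, `journal_size_limit`, is the SQLite mechanism assumed here for capping the WAL file — note it is a per-connection setting):

```python
import sqlite3, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "store.sqlite3")
conn = sqlite3.connect(path)
conn.execute("PRAGMA journal_mode = WAL")
# Cap the WAL file: after a checkpoint, SQLite truncates the -wal file
# back towards this many bytes instead of leaving it at its peak size.
conn.execute("PRAGMA journal_size_limit = 1048576")  # 1 MiB

conn.execute("CREATE TABLE blobs (data BLOB)")
conn.executemany("INSERT INTO blobs VALUES (?)",
                 [(b"x" * 4096,) for _ in range(1000)])
conn.commit()

# VACUUM rewrites the whole database through the WAL; without the limit
# above, the -wal file could stay roughly as large as the DB itself.
conn.execute("VACUUM")
conn.close()
```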
Signed-off-by: Kévin Commaille <zecakeh@tedomum.fr>
It should have been done in the migration of version 7, to reduce the
size of the database on the filesystem after the media cache was moved
to the SqliteEventCacheStore. Better late than never.
Signed-off-by: Kévin Commaille <zecakeh@tedomum.fr>
Add a new `created_at` column to the `send_queue_events` and
`dependent_send_queue_events` stored records. This will allow clients to
understand how stale a pending message might be in the event that the
queue encounters an error and becomes wedged.
This change is exposed through the FFI on the `EventTimelineItem` struct
as a new optional field named `local_created_at`. It will be `None` for
any Remote event, and `Some` for Local events (except for those that
were enqueued before the migrations were run).
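A minimal sketch of the migration semantics in Python's stdlib `sqlite3` (the real store is Rust, and the column set shown here is illustrative, not the SDK's actual schema):

```python
import sqlite3, time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE send_queue_events (room_id TEXT, content BLOB)")
conn.execute("INSERT INTO send_queue_events VALUES ('!room', x'00')")

# Migration: rows enqueued before the migration keep a NULL created_at,
# which surfaces as `None` in `local_created_at` on the client side.
conn.execute("ALTER TABLE send_queue_events ADD COLUMN created_at INTEGER")

# New enqueues record a timestamp (milliseconds since the epoch here).
conn.execute(
    "INSERT INTO send_queue_events (room_id, content, created_at)"
    " VALUES (?, ?, ?)",
    ("!room", b"\x01", int(time.time() * 1000)),
)

rows = conn.execute(
    "SELECT created_at FROM send_queue_events ORDER BY rowid"
).fetchall()
# rows[0][0] is None (pre-migration row), rows[1][0] is a timestamp.
```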
Signed-off-by: Daniel Salinas
---------
Signed-off-by: Daniel Salinas <zzorba@users.noreply.github.com>
Co-authored-by: Daniel Salinas <danielsalinas@daniels-mbp-2.myfiosgateway.com>
Co-authored-by: Benjamin Bouvier <benjamin@bouvier.cc>
Co-authored-by: Daniel Salinas <danielsalinas@Daniels-MBP-2.attlocal.net>
We have quite a few `allow(dead_code)` annotations. While that's OK in
situations where the combination of Cargo features explodes and makes it
hard to reason about whether something is actually used or not, in other
situations it can be avoided, and removing it may reveal actual dead code.
This will allow us to keep track of which room join requests are marked as 'seen' by the current user, and to return them as such.
Also, add some methods to `Room` to mark new join requests as seen and to get the IDs of the join requests already seen.
If a linked chunk update starts with a RemoveChunk update, then the
transaction may start with a SELECT query and be considered a read
transaction. Soon enough, it will be upgraded into a write transaction,
because of the next UPDATE/DELETE operations that happen thereafter. If
there's another write transaction already happening, this may result in
a SQLITE_BUSY error, according to
https://www.sqlite.org/lang_transaction.html#deferred_immediate_and_exclusive_transactions
One solution is to always start the transaction in immediate mode. This
may also fail with SQLITE_BUSY according to the documentation, but it's
unclear whether it will happen in general, since we're using WAL mode
too. Let's try it out.
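The deferred-vs-immediate distinction can be demonstrated with Python's stdlib `sqlite3` (the store itself is Rust; table names here are illustrative). With `isolation_level=None` the module issues no implicit `BEGIN`s, so we control the transaction mode explicitly:

```python
import sqlite3, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "store.sqlite3")
conn = sqlite3.connect(path, isolation_level=None)  # explicit BEGINs
conn.execute("PRAGMA journal_mode = WAL")
conn.execute("CREATE TABLE IF NOT EXISTS chunks (id INTEGER PRIMARY KEY)")

# With a deferred BEGIN (the default), the transaction starts as a read
# transaction on the first SELECT and is only upgraded to a write
# transaction at the first UPDATE/DELETE -- which is the point where
# SQLITE_BUSY can strike, mid-transaction.
#
# BEGIN IMMEDIATE takes the write lock up front, so any SQLITE_BUSY
# happens at BEGIN, before any work has been done.
conn.execute("BEGIN IMMEDIATE")
conn.execute("SELECT COUNT(*) FROM chunks")     # already a write txn
conn.execute("DELETE FROM chunks WHERE id = 1")
conn.execute("COMMIT")
```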
- rename RawLinkedChunk -> RawChunk
- rename RawChunk::id -> RawChunk::identifier
- clarify that a `RawChunk` is mostly a `Chunk` with different
  previous/next links.
And let the caller rebuild the linked chunk. This is slightly nicer in
that it allows us to display the raw representation of a reloaded linked
chunk before checking that its internal state is consistent; this will
allow for better debugging of issues related to the linked chunk's
internal state.
No functional changes.
Extracted from #4329. This does not change the `MediaFormat` of the
request used in the media cache by the send queue.
---------
Signed-off-by: Kévin Commaille <zecakeh@tedomum.fr>
This required adding support for *reading* out of the event cache, for
the sqlite backend. This paves the way for the next PR (reload from the
cache), and it should also help with testing at the `EventCacheStore`
trait layer some day.
These all come from invocations of macros that are defined in other
crates, so it's likely a clippy issue. We should try to revert this the
next time we bump the nightly version we're using.
This patch updates `EventCacheStore::handle_linked_chunk_updates` to
take a `Vec<Update<Item, Gap>>` instead of `&[Update<Item, Gap>]`.
In fact, `linked_chunk::ObservableUpdates::take()` already returns a
`Vec<Update<Item, Gap>>`; we can simply forward this `Vec` up to here
without any further clones.
Prior to this patch, the send queue would not maintain the ordering of
sending a media *then* a text, because it would push back a dependent
request graduating into a queued request.
The solution implemented here consists of adding a new priority column
to the send queue, defaulting to 0 for existing events, and using higher
priorities for media uploads, so they're considered before other
requests.
A high priority is also used for aggregation events that are sent late,
so they're sent as soon as possible, before other subsequent events.
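The ordering trick can be sketched with Python's stdlib `sqlite3` (the real queue is Rust, and the schema below is illustrative, not the SDK's actual one):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# `priority` defaults to 0, so rows from before the migration keep their
# plain insertion order among themselves.
conn.execute("""
    CREATE TABLE send_queue_events (
        id INTEGER PRIMARY KEY,               -- insertion order
        kind TEXT,
        priority INTEGER NOT NULL DEFAULT 0
    )
""")
conn.execute("INSERT INTO send_queue_events (kind) VALUES ('text')")
# The media upload graduates from a dependent request into a queued one
# *after* the text above, but a higher priority moves it to the front.
conn.execute(
    "INSERT INTO send_queue_events (kind, priority) VALUES ('media', 10)"
)

order = [k for (k,) in conn.execute(
    "SELECT kind FROM send_queue_events ORDER BY priority DESC, id ASC"
)]
# order == ['media', 'text']: the media is sent first despite being
# enqueued last, so the media-then-text ordering is preserved.
```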