2ebf672866 introduced special handling to
mark exported GNOME Shell search provider .ini files as disabled by
default. This functionality was not previously tested.
We can only support this if the host bwrap is not setuid (at least for
now). This allows callers to detect this case ahead of time. We also
detect this case when called and return a more specific error code that
callers can check for.
This uses the new bwrap feature via flatpak run --parent-expose-pids to
make the new sandbox's pid namespace a child of the caller's sandbox.
Pretty obvious; the only weird thing is that we can't get the peer pid
directly from the caller (as it goes via the dbus proxy), so we have
to look it up from the instance data.
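The instance-data lookup can be sketched roughly like this. The layout here (one per-instance directory containing a pid file) is an assumption for illustration only, not flatpak's exact on-disk format:

```python
import os

def find_instance_for_pid(instances_dir, peer_pid):
    """Return the instance id whose recorded pid matches peer_pid, or None.

    Assumes a hypothetical layout: instances_dir/<instance-id>/pid
    containing the sandbox's pid as text.
    """
    for instance_id in os.listdir(instances_dir):
        pid_file = os.path.join(instances_dir, instance_id, "pid")
        try:
            with open(pid_file) as f:
                if int(f.read().strip()) == peer_pid:
                    return instance_id
        except (FileNotFoundError, ValueError):
            continue
    return None
```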
Given the pid of an existing flatpak process, if --parent-expose-pids is
specified, the new sandbox is run such that its processes are visible in
the specified sandbox.
In all other senses the two are disjoint though. The new sandbox is
still isolated from the host and the existing sandbox.
This allows the authenticator to directly do UI and parent it to the
relevant window. The actual parent string is specified just like
the xdg-desktop-portal one.
There is a new flatpak_transaction_set_parent_window() function that
clients can use to signal what window they want to be parented to.
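The parent string follows the xdg-desktop-portal convention: "x11:" plus the XID in hexadecimal, or "wayland:" plus an exported surface handle. A sketch of building one (the helper name is ours, not a flatpak API):

```python
def make_parent_window(kind, handle):
    """Build a portal-style parent window identifier (illustrative helper)."""
    if kind == "x11":
        return "x11:%x" % handle       # XID as lowercase hex
    if kind == "wayland":
        return "wayland:%s" % handle   # exported surface handle string
    raise ValueError("unknown windowing system: %r" % kind)
```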
This allows the authenticator to handle each token type differently.
For example, this allows a "purchase" type to run the donation
webflow, but not require login (and then store the fact that this was
run locally).
This makes it very easy to reuse a single authenticator for several
remotes. This is useful for a default authenticator implementation
that we can ship with flatpak and use for e.g. flathub.
These signals are emitted when the authenticator needs some kind of
web-based authentication. If the caller implements webflow-start and
returns TRUE, then it needs to show the user the URL and allow the user
to interact with it.
Typically this ends with the web page being redirected to a url on
localhost or similar, which tells the authenticator the result of the
operation. This will cause the webflow-done signal to be emitted and
the transaction operation to continue. If something goes wrong (or the
signal is not handled) it will also report webflow-done, but then the
transaction will fail in the normal way.
Generally, all that users of FlatpakTransaction need to do is:
- On webflow-start, show a browser window with the url and return TRUE.
- On webflow-done, close the browser window if it's still visible.
- If the user closes the browser window early, call
  flatpak_transaction_abort_webflow().
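The contract above can be modeled with plain callbacks standing in for the GObject signals. This WebflowTransaction class is a stand-in to show the control flow, not the libflatpak API:

```python
class WebflowTransaction:
    """Illustrative stand-in for FlatpakTransaction's webflow signals."""

    def __init__(self, on_webflow_start, on_webflow_done):
        self.on_webflow_start = on_webflow_start
        self.on_webflow_done = on_webflow_done

    def run_webflow(self, url):
        # webflow-start: a TRUE return means "I will show this url".
        handled = self.on_webflow_start(url)
        # webflow-done is reported either way; if the start signal was
        # not handled, the transaction then fails in the normal way.
        self.on_webflow_done()
        return handled

events = []
tx = WebflowTransaction(
    on_webflow_start=lambda url: events.append(("show", url)) or True,
    on_webflow_done=lambda: events.append(("close",)),
)
tx.run_webflow("https://example.com/login")
```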
If the request-webflow file exists, then the authenticator will listen
on a local socket and start a webflow request with a uri pointing to it.
If anything connects to the uri it will consider the flow ok and continue.
If the client calls close() instead, it will silently succeed anyway
if require-webflow doesn't exist, and fail if it exists.
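The "anything connects means ok" behavior can be sketched with a throwaway localhost listener. Port selection and threading details here are ours; the real test authenticator's socket handling may differ:

```python
import socket
import threading

def wait_for_redirect(result):
    """Listen on an ephemeral localhost port; the first connection of
    any kind is treated as a successful end of the webflow."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))   # port 0: let the kernel pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def accept_one():
        conn, _ = srv.accept()   # any connection at all counts as "ok"
        conn.close()
        srv.close()
        result.append("ok")

    threading.Thread(target=accept_one, daemon=True).start()
    return port                  # the webflow uri would point here
```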
When resolving the transactions we call RequestRefTokens as needed
to get bearer tokens for some refs. These calls can also emit
the Webflow signal on the request object with a url. It is then
up to the client to show this url in some way.
Once the required operations are done in the browser it will redirect
to some url that will reach the authenticator, telling it that the
operation is done and what its final result was. At that point the
authenticator will emit the WebflowDone signal and continue.
If the client doesn't want to do the web flow it can call the close
operation on the request object.
When we need a bearer token, look up the configured authenticator for
the remote and ask it for tokens. Also updates the test-auth test
to use the new test authenticator instead of the previous
env var hack.
This is a trivial implementation of org.freedesktop.Flatpak.Authenticator
that just reads the contents of the "required-token" file and returns
that as the token for all refs.
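The core of such an authenticator fits in a few lines. A sketch (the "required-token" file name comes from the text above; the function shape is ours, not the D-Bus method signature):

```python
def request_ref_tokens(token_file, refs):
    """Read one token from token_file and return it for every ref."""
    with open(token_file) as f:
        token = f.read().strip()
    return {ref: token for ref in refs}
```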
Unfortunately we lose some error information when we pull multiple
refs, ending up with a generic "something failed" error rather than the
401 error, so in the p2p case we can't verify that we get the right
errors.
The p2p case is kinda weird wrt tokens. We can do most of the basics,
like finding which refs need updating, using the partial summary from
the p2p mirrors, but we can't rely 100% on the ostree-metadata info for
core info like permissions or dependencies, since it may be out-of-sync.
So, if the information in the ostree-metadata doesn't match the
commit we're resolving, the p2p resolve code pulls the actual
commit objects as part of the resolve.
Now, the commit objects are protected by bearer tokens, so we need to
pass them while doing this pull. Unfortunately the information about
which refs require tokens is part of the commit, which is a circular
dependency. We resolve this by relying on the (possibly stale, but
probably ok) copy of the need-token info in the ostree-repo metadata.
So, we do the first part of the p2p resolve, then for all the
not-yet-resolved ops (i.e. ones that actually need updates) we look
in the ostree-metadata for which refs need tokens, generate tokens
and then do the pulling with the tokens.
This is an iterative process, because resolving a ref can create more
update operations, which may need more tokens.
Also, since the lower level APIs don't allow you to pass different tokens
for different parts, change this function to support passing a subset
of the resolves, so that we can pass all the resolves that need a
specific token in one go, and then call this multiple times. The way we
handle this is by saving all the original ref_to_checksum hashtables for
all results and then re-creating them with the subset of refs needed
when pulling.
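The iterative shape of this loop can be sketched as follows; plain dicts and lists stand in for the real transaction ops and ref_to_checksum tables:

```python
def resolve_all(initial_refs, token_for_ref, pull):
    """Group unresolved refs by the token they need and pull each group
    in one go; pulling may surface new refs (e.g. dependencies), so
    repeat until nothing is left.

    token_for_ref: ref -> token (or None for refs needing no token)
    pull(refs, token) -> list of newly discovered refs
    """
    resolved = []
    unresolved = list(initial_refs)
    while unresolved:
        groups = {}
        for ref in unresolved:
            groups.setdefault(token_for_ref.get(ref), []).append(ref)
        unresolved = []
        for token, refs in groups.items():
            unresolved.extend(pull(refs, token))
            resolved.extend(refs)
    return resolved
```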
If the commit is available in the ostree-metadata and it matches the
latest available commit in the p2p results, then resolve to that, so
we don't have to download the commit object.
This tries to resolve the p2p resolve operation from the info in
an ostree-metadata commit. This only works if the resolve ended up
on the same commit id as what was available in the ostree-metadata,
which may not be correct if the two are not synchronized.
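A minimal sketch of that check (names illustrative):

```python
def try_resolve_from_metadata(p2p_latest_checksum, metadata_checksum, cached_info):
    """Reuse cached ostree-metadata info only when it describes exactly
    the commit the p2p results settled on; None means the caller must
    fall back to pulling the commit object itself."""
    if p2p_latest_checksum == metadata_checksum:
        return cached_info
    return None   # out of sync: don't trust the cached info
```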
We extract the need-token key from the summary and if set we
calculate a token to use for the operation, which we then pass
to install/update.
For now the actual token just comes from the FLATPAK_TEST_TOKEN
environment var. The details of this will be fleshed out later.
Additionally, this does not support the p2p case, because there
we need the token in order to request the commit during the resolve.
This will also be added later.
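A sketch of that decision, with the env-var token source from the text; the summary layout (a per-ref dict with a "need-token" key) is an assumption for illustration:

```python
import os

def token_for_op(summary_entry):
    """Return a token only for refs whose summary entry sets need-token."""
    if not summary_entry.get("need-token"):
        return None
    # For now, the token just comes from the environment:
    return os.environ.get("FLATPAK_TEST_TOKEN")
```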
This is in the same order as the xa.cache array and contains the id of
the commit that the cached data is about. This is not necessary in the
non-p2p summary metadata, because in that we always have a matching
ref -> commit array.
However, in the p2p case this information can be useful.
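Sketch of the lookup this enables; plain lists stand in for the actual summary metadata arrays:

```python
def cached_data_for(refs, cache, cache_commits, ref, resolved_commit):
    """Return the xa.cache entry for ref, but only if it was generated
    from the commit we actually resolved to; None means the cached
    data is about a different commit and can't be trusted."""
    i = refs.index(ref)
    if cache_commits[i] != resolved_commit:
        return None
    return cache[i]
```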