When you use podman logs with --until and --follow, it should exit
after the requested until time and not keep hanging forever.
This fixes the behavior for the k8s-file backend.
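A minimal sketch of the intended semantics in Go, with illustrative names (this is not podman's actual code): stop following once the deadline passes instead of blocking on the log stream forever.

```go
package logsketch

import (
	"fmt"
	"time"
)

// followUntil prints log lines until the stream closes or the --until
// deadline is reached, whichever comes first.
func followUntil(lines <-chan string, until time.Time) {
	timer := time.NewTimer(time.Until(until))
	defer timer.Stop()
	for {
		select {
		case line, ok := <-lines:
			if !ok {
				return // log stream closed (container gone)
			}
			fmt.Println(line)
		case <-timer.C:
			return // --until deadline reached: exit instead of hanging
		}
	}
}
```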
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
When you use podman logs with --until and --follow, it should exit
after the requested until time and not keep hanging forever.
To make this work, I reworked the code to use the better journald
event-reading code for logs as well. This correctly uses the sd_journal
API without having to compare cursors to find the EOF.
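Roughly, the sd_journal pattern looks like the following sketch, using the go-systemd sdjournal bindings as an assumed stand-in: Next() returning 0 means "no entry right now", and Wait() blocks until the journal changes, so no cursor comparison is needed to detect the end.

```go
package journaldsketch

import "github.com/coreos/go-systemd/v22/sdjournal"

// followJournal tails the journal using the sd_journal API directly.
func followJournal(j *sdjournal.Journal, print func(*sdjournal.JournalEntry)) error {
	for {
		n, err := j.Next()
		if err != nil {
			return err
		}
		if n == 0 {
			// Caught up with the journal: wait for new entries
			// instead of comparing cursors to find the EOF.
			j.Wait(sdjournal.IndefiniteWait)
			continue
		}
		entry, err := j.GetEntry()
		if err != nil {
			return err
		}
		print(entry)
	}
}
```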
The same problem exists for the k8s-file driver; I will fix this in the
next commit.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
The `containerCouldBeLogging` bool should not be false by default: when
--since is used we seek in the journal and can miss the start event, so
the bool would stay false forever. This means that a running container
is not followed even when it should be.
To fix this, set the `containerCouldBeLogging` bool based on the
current container state.
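A sketch of the idea (the state names are illustrative stand-ins for podman's container states, not copied from its code):

```go
package logstatesketch

// ContainerState is a stand-in for podman's container state type.
type ContainerState int

const (
	StateRunning ContainerState = iota
	StateStopped
)

// couldBeLogging derives the initial flag from the actual container
// state, so seeking past the start event with --since cannot leave it
// stuck at false for a running container.
func couldBeLogging(state ContainerState) bool {
	return state == StateRunning
}
```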
Fixes #16950
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Do not allow removing containers that are in the stopping state;
otherwise it can lead to a race condition where a "podman rm" removes
the container from storage while another process is stopping the
same container.
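Conceptually, the check is as simple as this sketch (types and names are illustrative):

```go
package removesketch

import "errors"

// ContainerState is a stand-in for podman's container state type.
type ContainerState int

const (
	StateStopping ContainerState = iota
	StateExited
)

var errStopping = errors.New("container is being stopped: cannot remove")

// checkRemovable rejects removal while another process is still
// stopping the container, avoiding the remove-vs-stop race.
func checkRemovable(state ContainerState) error {
	if state == StateStopping {
		return errStopping
	}
	return nil
}
```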
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2155828
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Reasoning
---------
When the log-driver is passthrough, the journal socket is passed to the containers as-is, which has two advantages:
1. journald can see who the actual sender of the log event is,
rather than thinking everything comes from the conmon process
2. conmon will not have to copy all the log data
Code Changes
------------
If the log-driver was not set by the user and service-container is set,
use passthrough as the default log-driver.
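The defaulting rule might be sketched like this (the journald fallback is an assumption; podman's default may be computed differently):

```go
package quadletsketch

// defaultLogDriver picks passthrough only when running under a service
// container and the user did not choose a driver explicitly.
func defaultLogDriver(userDriver string, serviceContainer bool) string {
	if userDriver != "" {
		return userDriver // always honor an explicit choice
	}
	if serviceContainer {
		return "passthrough"
	}
	return "journald" // assumed fallback
}
```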
Update the system tests
- explicitly set logdriver in sdnotify and play tests
- podman-kube template test: Verify the default log driver for service-container
Signed-off-by: Ygal Blum <ygal.blum@gmail.com>
New tests got added since I've been on PTO. Some of those tests
weren't doing cleanup, resulting in nasty red logs. Fix.
Signed-off-by: Ed Santiago <santiago@redhat.com>
In 'run_podman ? ...', the question mark will _usually_ be
interpreted as a literal question mark, meaning "ignore
exit status". But if there are one or more single-character
filenames in the working directory, such as droppings from
a command like 'my-test-command > a', Very Bad Things
will happen: the test will fail with an incomprehensible
error message. Prevent that.
Signed-off-by: Ed Santiago <santiago@redhat.com>
...based on f37, not f31. And make it fedora-minimal so it's
smaller. And clean up dnf so it's even smaller. And tag it
with our proper YMD tag, and commit the script that builds it.
This broke the system-df tests. In the process of resolving
that, I found those tests a little lacking. So, improve their
coverage a little bit.
Signed-off-by: Ed Santiago <santiago@redhat.com>
- Make sure the Container unit correctly references the volume
- Start the Container service and not the Volume one
- Remove the volume
- Print the name of the service when the status does not match
Signed-off-by: Ygal Blum <ygal.blum@gmail.com>
The test was only waiting for the port to be ready, but that doesn't
imply that the server is ready to serve requests. Hence, add a loop
waiting for the `info` call to succeed.
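The actual test is a shell loop; in Go terms the pattern is roughly this sketch (the info callback is a stand-in for the API call): an accepting port is not enough, so poll until a real request succeeds.

```go
package readysketch

import (
	"fmt"
	"time"
)

// waitReady polls an API call until it succeeds; a port accepting TCP
// connections does not yet mean the server can serve requests.
func waitReady(info func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := info(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("server did not become ready within %s", timeout)
		}
		time.Sleep(250 * time.Millisecond)
	}
}
```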
Fixes: #16916
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
If you are running temporary containers within podman play kube, we
should really be running these in read-only mode. For automotive, they
plan on running all of their containers in read-only temporal mode.
Adding this option guarantees that the container image is not modified
while the container runs.
The containers can only write to tmpfs-mounted directories.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
The function grew into a big hairy ball over time and I personally
refrained from touching it as it seemed fragile. Hence, refactor
the function into something more comprehensible and maintainable.
There is still potential for improvement but I want to tackle one
thing at a time.
[NO NEW TESTS NEEDED] as it shouldn't change behavior.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
These just run once and are considered successful on exit. Not much is
needed to support them, but we have to avoid overwriting the type
with Type=notify.
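The guard is essentially this sketch (the unit-file API shown is hypothetical, not quadlet's actual one):

```go
package unitsketch

// UnitFile is a minimal stand-in for a parsed systemd unit file.
type UnitFile struct {
	keys map[string]string
}

func NewUnitFile() *UnitFile {
	return &UnitFile{keys: map[string]string{}}
}

func (u *UnitFile) HasKey(section, key string) bool {
	_, ok := u.keys[section+"/"+key]
	return ok
}

func (u *UnitFile) Set(section, key, value string) {
	u.keys[section+"/"+key] = value
}

// applyServiceType only defaults to Type=notify when the user did not
// already set a type, e.g. Type=oneshot for run-once containers.
func applyServiceType(u *UnitFile) {
	if !u.HasKey("Service", "Type") {
		u.Set("Service", "Type", "notify")
	}
}
```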
Signed-off-by: Alexander Larsson <alexl@redhat.com>
False is the assumed default value, so inspect and podman generate kube
are being cluttered with a ton of annotations that convey nothing.
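In sketch form (names illustrative), the rule is simply to omit annotations whose value equals the default:

```go
package annotationsketch

// addAnnotation records an annotation only when it differs from the
// assumed default of "false", keeping inspect output and generated
// kube YAML free of no-op entries.
func addAnnotation(annotations map[string]string, key string, value bool) {
	if value {
		annotations[key] = "true"
	}
	// "false" is the default: writing it would add no information.
}
```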
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
This adds basic container and volume system tests for quadlet. These
install and run actual systemd units and ensure they work.
Signed-off-by: Alexander Larsson <alexl@redhat.com>
In the recent past, I frequently needed to wait for a container to
exit that, at the same time, may get removed (e.g., system tests in [1]).
Add an `--ignore` option to podman-wait which will ignore errors when a
specified container is missing and mark its exit code as -1. Also
remove the ID field from the WaitReport; it is not actually used by
callers, and removing it makes the code simpler and faster.
Once merged, we can go over the tests and simplify them.
[1] github.com/containers/podman/pull/16852
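The --ignore behavior boils down to this sketch (the error value and wait helper are illustrative):

```go
package waitsketch

import "errors"

var errNoSuchCtr = errors.New("no such container")

// waitWithIgnore maps a missing container to exit code -1 instead of
// an error when --ignore is set.
func waitWithIgnore(wait func() (int32, error), ignore bool) (int32, error) {
	code, err := wait()
	if err != nil && ignore && errors.Is(err, errNoSuchCtr) {
		return -1, nil
	}
	return code, err
}
```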
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Also update the vendored containers/storage and containers/image.
Clean up the display of added/dropped capabilities as well.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Whenever the podman process is launched, it runs any file found in
these directories:
- /etc/containers/auth-scripts
- /usr/libexec/podman/auth-scripts
The current podman command line is passed as arguments to the
process.
If any of the processes fail, the error is immediately reported back
by podman, which exits with the same error code.
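A sketch of the described behavior (the directory paths come from the text above; everything else is illustrative):

```go
package authsketch

import (
	"errors"
	"os"
	"os/exec"
	"path/filepath"
)

// runAuthScripts executes every file in the auth-script directories,
// passing podman's own command line as arguments; on failure, podman
// exits with the script's exit code.
func runAuthScripts() {
	dirs := []string{
		"/etc/containers/auth-scripts",
		"/usr/libexec/podman/auth-scripts",
	}
	for _, dir := range dirs {
		entries, err := os.ReadDir(dir)
		if err != nil {
			continue // directory may simply not exist
		}
		for _, e := range entries {
			cmd := exec.Command(filepath.Join(dir, e.Name()), os.Args[1:]...)
			if err := cmd.Run(); err != nil {
				var exitErr *exec.ExitError
				if errors.As(err, &exitErr) {
					os.Exit(exitErr.ExitCode())
				}
				os.Exit(1)
			}
		}
	}
}
```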
[NO NEW TESTS NEEDED] requires a system-wide configuration.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Init containers are removed once they exit, but podman reports an
error that the container does not exist when it was already removed.
Stop reporting missing containers when removing.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
With the 4.0 network rewrite I introduced a regression in 094e1d70de:
it only covered the case where a checkpoint is restored via --import.
The normal restore path was not covered since the static ip/mac are now
part of an extra db bucket. This commit fixes that by changing the
config in the db.
Note that there were no tests for --ignore-static-ip/mac, so I added a
big system test which should cover all cases (even the ones that
already work). This is not exactly pretty, but I don't have enough time
to come up with something better at the moment.
Fixes #16666
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
The remote client should be allowed to specify if the container should
be run with the proxy env vars. It will still use the proxy vars from
the server process and not the client. This makes podman-remote more
consistent with the local version and easier to use in environments
where a proxy is required.
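The server-side lookup is roughly this sketch (the set of variables mirrors the common proxy conventions; not copied from podman):

```go
package proxysketch

import "os"

// proxyEnv collects the standard proxy variables from the server
// process's environment, since podman-remote uses the server's proxy
// settings, not the client's.
func proxyEnv() map[string]string {
	env := make(map[string]string)
	for _, name := range []string{
		"http_proxy", "https_proxy", "ftp_proxy", "no_proxy",
		"HTTP_PROXY", "HTTPS_PROXY", "FTP_PROXY", "NO_PROXY",
	} {
		if val, ok := os.LookupEnv(name); ok {
			env[name] = val
		}
	}
	return env
}
```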
Fixes #16520
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
As outlined in #16076, a subsequent BARRIER *may* follow the READY
message sent by a container. To correctly imitate the behavior of
systemd's NOTIFY_SOCKET, the notify proxies spun up by `kube play` must
hence process messages for the entirety of the workload.
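In sketch form, handling a BARRIER means closing the file descriptor that arrives with the datagram; the socket handling below uses the standard library, but the surrounding details are illustrative, not podman's actual proxy code.

```go
package notifysketch

import (
	"bytes"
	"net"
	"syscall"
)

// readNotify processes one datagram from the proxied NOTIFY_SOCKET.
// A BARRIER=1 message carries an fd via SCM_RIGHTS; the receiver
// closes it to unblock the sender, so the proxy must keep reading
// even after READY=1 was seen.
func readNotify(conn *net.UnixConn, onReady func()) error {
	buf := make([]byte, 4096)
	oob := make([]byte, 4096)
	n, oobn, _, _, err := conn.ReadMsgUnix(buf, oob)
	if err != nil {
		return err
	}
	if bytes.Contains(buf[:n], []byte("BARRIER=1")) {
		msgs, err := syscall.ParseSocketControlMessage(oob[:oobn])
		if err != nil {
			return err
		}
		for i := range msgs {
			fds, err := syscall.ParseUnixRights(&msgs[i])
			if err != nil {
				continue
			}
			for _, fd := range fds {
				syscall.Close(fd) // releases the sender's barrier
			}
		}
	}
	if bytes.Contains(buf[:n], []byte("READY=1")) {
		onReady()
	}
	return nil
}
```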
We know that the workload is done and that all containers and pods have
exited when the service container exits. Hence, all proxies are closed
at that time.
The above changes imply that Podman runs for the entirety of the
workload and will henceforth act as the MAINPID when running inside of
systemd. Prior to this change, the service container acted as the
MAINPID, which is no longer possible; Podman would be killed
immediately on exit of the service container and could not clean up.
The kube template now correctly transitions to inactive instead of
failed in systemd.
Fixes: #16076
Fixes: #16515
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
The flake in #16076 is likely related to the notify message not being
delivered/read correctly. Move sending the message into an exec session
such that flakes will reveal an error message.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
The containers should be able to write to tmpfs-mounted directories.
Also clean up the output of podman kube generate to not show default values.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
When the new `events_container_create_inspect_data` option is enabled in
containers.conf, set the `ContainersInspectData` event field for each
container-create event.
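In sketch form (the config field and event type are assumed, not podman's actual identifiers):

```go
package eventsketch

import "encoding/json"

// createEvent is a stand-in for the container-create event type.
type createEvent struct {
	ContainersInspectData string
}

// attachInspectData fills the event field only when the (assumed)
// containers.conf knob is enabled.
func attachInspectData(enabled bool, ev *createEvent, inspect any) error {
	if !enabled {
		return nil
	}
	data, err := json.Marshal(inspect)
	if err != nil {
		return err
	}
	ev.ContainersInspectData = string(data)
	return nil
}
```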
The data was requested for the purpose of auditing (e.g., intrusion
detection).
Jira: https://issues.redhat.com/browse/RUN-1702
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
When restarting a container, clean up the healthcheck state by removing
the old log on disk. Carrying over the old state can lead to various
issues, for instance a wrong failing streak and hence wrong behaviour
after the restart.
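The cleanup amounts to this sketch (the log path is supplied by an illustrative caller):

```go
package healthsketch

import (
	"errors"
	"os"
)

// resetHealthCheckLog removes the on-disk healthcheck log so a restart
// begins with a clean failing streak.
func resetHealthCheckLog(logPath string) error {
	if err := os.Remove(logPath); err != nil && !errors.Is(err, os.ErrNotExist) {
		return err
	}
	return nil
}
```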
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=2144754
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>