I am really not sure why the caller should even have the option to set
this. We should always use the correct isolation type based on the
privileges the server runs under, never the client's. podman-remote build
seems to send a default based on its local privileges, which was wrong as
well. To fix this I also changed the client to send the default when the
isolation flag is not set.
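Roughly the shape of the logic, as a sketch with illustrative names (not Podman's actual API): the server derives the isolation from its own effective privileges and ignores whatever the client sent.

```go
// Sketch only; the Isolation type and values here are illustrative,
// not Podman's actual API.
package isolation

import "os"

type Isolation string

const (
	IsolationOCI         Isolation = "oci"
	IsolationRootlessOCI Isolation = "rootless"
)

// defaultIsolation derives the isolation from the privileges the server
// process runs under, ignoring whatever the client sent.
func defaultIsolation() Isolation {
	if os.Geteuid() != 0 {
		return IsolationRootlessOCI
	}
	return IsolationOCI
}
```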
Fixes #22109
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
We need to take another lock to prevent concurrent starts from different
machines.
I manually tested it by starting three VMs in parallel with:
podman machine start & podman machine start test1 & podman machine start test2
I also added a CI test that seems to work as expected (it failed with the
old binary and worked with the new one).
Before this patch I was able to start more than one VM at the same time;
with this patch only one of them starts and the other ones fail to start
with a proper error.
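The idea looks roughly like this sketch built on flock(2) via golang.org/x/sys/unix (illustrative, not the code as merged): whoever holds the lock may start; a second concurrent start fails with a clear error instead of racing.

```go
// Sketch of the idea with flock(2); not the code as merged.
package machine

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

func acquireStartLock(path string) (*os.File, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return nil, err
	}
	if err := unix.Flock(int(f.Fd()), unix.LOCK_EX|unix.LOCK_NB); err != nil {
		f.Close()
		return nil, fmt.Errorf("another machine start is already in progress: %w", err)
	}
	return f, nil // closing the file releases the lock once the VM is up
}
```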
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
This function is no longer used; it has been refactored into the general
start code higher up the stack.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Currently we first read the config and then take the lock. This is racy:
while we wait for the lock another process might change the state, so the
only way to see the actual current state is to read the file while holding
the lock.
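In sketch form (stand-in names, not the actual machine code), the ordering becomes lock first, read second:

```go
// Sketch only: loadConfig stands in for the real machine config loader.
package machine

import (
	"os"
	"sync"
)

func loadConfig(path string) ([]byte, error) { return os.ReadFile(path) }

// Take the lock first, then read, so the state cannot change underneath us
// while we wait for the lock.
func withCurrentConfig(lock sync.Locker, path string, fn func([]byte) error) error {
	lock.Lock()
	defer lock.Unlock()
	cfg, err := loadConfig(path)
	if err != nil {
		return err
	}
	return fn(cfg)
}
```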
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
- Fixes conflicts such as the removal of a second machine deleting a socket of
the first machine while it is running
- Move API socket into runtime directory for consistency
- Add API and gvproxy sockets to removal list
- Cleanup related logic
Signed-off-by: Jason T. Greene <jason.greene@redhat.com>
we are having second thoughts about *requiring* a policy.json on podman
machine hosts. we are concerned that we need to work out some more use
cases to be sure we do not make choices now that limit us in the near
term future. for example, should the policy files be the same for
container images and machine images? And should one live on the host
machine and the other live in the machine?
therefore, if a policy.json *is* present in the correct location, we will use and honor it; however, if it is not, we will allow the machine image to be pulled without a policy.
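roughly, the lookup then behaves like this sketch (the path handling and helper name are illustrative, not the actual code):

```go
// Sketch only; the real lookup location differs per platform.
package machine

import (
	"errors"
	"io/fs"
	"os"
)

// policyPathIfPresent returns the policy.json path when the file exists,
// or "" to signal that the image may be pulled without a policy.
func policyPathIfPresent(path string) (string, error) {
	_, err := os.Stat(path)
	if errors.Is(err, fs.ErrNotExist) {
		return "", nil // no policy file: do not require one
	}
	if err != nil {
		return "", err
	}
	return path, nil // policy exists: use and honor it
}
```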
Signed-off-by: Brent Baude <baude@redhat.com>
Co-authored-by: Paul Holzinger <45212748+Luap99@users.noreply.github.com>
Signed-off-by: Brent Baude <bbaude@redhat.com>
1. Added xz decompression unit tests
2. Removed the xz implementation to use the one from c/image
3. Removed the macOS-specific gzip and zstd decompressors and use the
   generic decompressor, but with a SparseWriter if GOOS == darwin
Signed-off-by: Mario Loriedo <mario.loriedo@gmail.com>
Adding the final machine image endpoint as quay.io/podman/machine-os in the
Podman code. As a reminder, we decided we would set this in containers.conf
once things settle down, and this code would then be removed.
Signed-off-by: Brent Baude <bbaude@redhat.com>
The annotations should be maintained by CRI-O itself to decouple the
projects from a dependency perspective.
[NO NEW TESTS NEEDED]
Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
Also update the website to display the correct swagger doc for the right
version. The 5.0 swagger file will not exist until we branch, but I added
it anyway so we do not forget it.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Healthchecks defined in a .yaml file as a livenessProbe did not have any
effect. They were executed as intended and containers were marked as
unhealthy, yet no action was taken. This was never the intended
behaviour, as noted by the comment:
> if restart policy is in place, ensure the health check enforces it
A minimal example is tracked in containers/podman#20903 [1] with the
following YAML:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ubi-httpd-24
spec:
  restartPolicy: Always
  containers:
    - name: ubi8-httpd
      image: registry.access.redhat.com/rhscl/httpd-24-rhel7:2.4-217
      livenessProbe:
        httpGet:
          path: "/"
          port: 8081
```
By passing down the restart policy (and using constants instead of the
previously incorrect hard-coded values), Podman now actually restarts the
container.
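Conceptually the mapping looks like this sketch (the constant names are illustrative, not libpod's):

```go
// Sketch, not libpod's actual constants: map the pod-level restartPolicy to
// the action taken when a health check reports unhealthy.
package kube

func onFailureAction(restartPolicy string) string {
	switch restartPolicy {
	case "Always", "OnFailure":
		return "restart" // restart the container instead of only marking it unhealthy
	default:
		return "none"
	}
}
```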
[1]: https://github.com/containers/podman/issues/20903
Closes #20903.
Signed-off-by: Jasmin Oster <nachtjasmin@posteo.de>
Move the writes into the shim level to make sure they happen while we
hold the machine lock, preventing any race conditions when reading or
writing the file.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
First, make sure we check whether a given VM exists while holding the VM lock
for it. The check in cmd/podman/machine/init.go is a nice quick out but
not enough to ensure that two processes do not create the same VM at the
same time. The only way to ensure this is to hold the lock and check
whether the VM config file exists.
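In sketch form (stand-in names for the lock and config path), the check under the lock is:

```go
// Sketch only; lock and configPath stand in for the per-VM lock and config file.
package machine

import (
	"fmt"
	"os"
	"sync"
)

// createMachine checks for an existing config file only while holding the
// lock; that is the only reliable way to see whether another process
// already created the same VM.
func createMachine(lock sync.Locker, configPath string) error {
	lock.Lock()
	defer lock.Unlock()
	if _, err := os.Stat(configPath); err == nil {
		return fmt.Errorf("machine already exists: %s", configPath)
	}
	// ... write the new config while still holding the lock
	return nil
}
```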
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Neither of the SparseWriter users actually _wants_ the underlying
WriteSeeker to be closed; so, don't.
That makes it clear where the responsibility for closing the file
lies, and allows us to remove the reliance on the destinations
reliably returning ErrClosed.
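As a sketch (not the actual SparseWriter), the ownership rule reads like:

```go
// Sketch, not the actual SparseWriter: the wrapper never closes the
// destination it was given, so the caller that opened it stays responsible.
package sparse

import "io"

type Writer struct {
	dst io.WriteSeeker
}

func NewWriter(dst io.WriteSeeker) *Writer { return &Writer{dst: dst} }

func (w *Writer) Write(p []byte) (int, error) { return w.dst.Write(p) }

// Close finalizes the sparse layout but deliberately leaves w.dst open.
func (w *Writer) Close() error {
	// e.g. seek to the final offset / flush any pending hole here
	return nil
}
```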
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Make sure we only update the machine config while we hold the lock.
While it doesn't make a functional difference for CPU and memory, it was a
problem for the disk size. The new disk size must be larger than the previous
one, so we must have accurate data on the previous value.
Thus change the settings only while locked and refresh the config so we
have the current, up-to-date values.
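A sketch of that ordering with stand-in types (not the actual machine code):

```go
// Sketch with stand-in types: refresh the config under the lock so the new
// disk size is compared against the actual current value.
package machine

import (
	"fmt"
	"sync"
)

type config struct{ DiskSizeGB uint64 }

func setDiskSize(lock sync.Locker, load func() (*config, error), newSize uint64) error {
	lock.Lock()
	defer lock.Unlock()
	cfg, err := load() // re-read so we see changes made while we waited
	if err != nil {
		return err
	}
	if newSize <= cfg.DiskSizeGB {
		return fmt.Errorf("new disk size %d must be larger than the current %d", newSize, cfg.DiskSizeGB)
	}
	cfg.DiskSizeGB = newSize
	// ... persist cfg while still holding the lock
	return nil
}
```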
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
It is unused, and it clearly doesn't work (it closes dest
before writing anything to it).
Just drop it; it can always be re-added.
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
When a relative path (e.g. ".") is set, it should be resolved next to the
binary, so we need to get the binary's directory first. If we join it
directly, as the code did before, we end up with a path like
.../podman/policy.json where podman is the podman executable; that is not a
directory and thus cannot contain the policy.json file.
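A sketch of the resolution using only the standard library (not the exact code):

```go
// Sketch using only the standard library; not the exact code.
package machine

import (
	"os"
	"path/filepath"
)

// resolveRelativeToBinary turns a relative configured path (e.g. ".") into a
// path next to the podman executable.
func resolveRelativeToBinary(p string) (string, error) {
	if filepath.IsAbs(p) {
		return p, nil
	}
	exe, err := os.Executable()
	if err != nil {
		return "", err
	}
	// Join against the directory containing the binary, not the binary path
	// itself, otherwise we would end up below a file instead of a directory.
	return filepath.Join(filepath.Dir(exe), p), nil
}
```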
ref #21964
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
While working on #21592 we figured out that the full VM file was loaded
into memory when detecting the file format, even though only a few bytes
are needed.
This commit addresses that.
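A sketch of reading just the magic bytes (illustrative, not the actual implementation):

```go
// Sketch; not the actual implementation. Only the magic bytes at the start
// of the file are read to pick the decompressor.
package compression

import (
	"bytes"
	"os"
)

func detectFormat(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	buf := make([]byte, 6)
	n, err := f.Read(buf)
	if err != nil && n == 0 {
		return "", err
	}
	buf = buf[:n]
	switch {
	case bytes.HasPrefix(buf, []byte{0xFD, '7', 'z', 'X', 'Z', 0x00}):
		return "xz", nil
	case bytes.HasPrefix(buf, []byte{0x1F, 0x8B}):
		return "gzip", nil
	case bytes.HasPrefix(buf, []byte{0x28, 0xB5, 0x2F, 0xFD}):
		return "zstd", nil
	}
	return "uncompressed", nil
}
```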
[NO NEW TESTS NEEDED]
Signed-off-by: Mario Loriedo <mario.loriedo@gmail.com>