Rather than returning a reference to the `OnceCell` wrapping the
allocator, return a static reference to the allocator itself. This
allows flexibility in how the allocator is wrapped.
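A minimal sketch of the accessor pattern, using `std::sync::OnceLock` as a stand-in for the kernel's cell type; the `BumpAllocator` name and its field are hypothetical:

```rust
use std::sync::OnceLock;

// Hypothetical allocator type standing in for the kernel's real allocator.
pub struct BumpAllocator {
    pub total_bytes: usize,
}

// The cell stays a private implementation detail of the module.
static ALLOCATOR: OnceLock<BumpAllocator> = OnceLock::new();

pub fn init_allocator(total_bytes: usize) {
    ALLOCATOR
        .set(BumpAllocator { total_bytes })
        .ok()
        .expect("allocator already initialized");
}

// Callers see only `&'static BumpAllocator`, never the wrapper, so the
// wrapping (OnceLock, spin::Once, a lock, ...) can change without
// touching any call site.
pub fn allocator() -> &'static BumpAllocator {
    ALLOCATOR.get().expect("allocator not initialized")
}
```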
Create a new `allocators` submodule within `memory` that contains all
memory allocators. Also split the `Frame` struct out of the `pg_alloc`
module so it can be used by other modules.
This commit refactors the core process representation to decouple
"Identity/Resources" from "Execution/Scheduling". Previously, a
monolithic `Task` struct wrapped in `Arc<SpinLock<>>` caused lock
contention during hot scheduling paths and conflated shared state with
CPU-local state.
The `Task` struct has been split into:
1. `Task` (Shared): Holds process-wide resources (VM, FileTable,
Credentials). Managed via `Arc` and internal fine-grained locking.
2. `OwnedTask` (Private): Holds execution state (Context, v_runtime,
signal mask). Strictly owned by a specific CPU (via the Scheduler) and
accessed lock-free.
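The split can be pictured with a minimal sketch; the field names are illustrative and `std` types stand in for kernel primitives:

```rust
use std::sync::{Arc, Mutex};

// Shared identity/resources: reference-counted, internally locked.
struct Task {
    pid: u32,
    file_table: Mutex<Vec<String>>, // stand-in for VM/FileTable/Credentials
}

// Private execution state: owned by exactly one CPU's scheduler,
// so its fields are read and written without any lock.
struct OwnedTask {
    shared: Arc<Task>,
    v_runtime: u64,
}

impl OwnedTask {
    fn tick(&mut self, delta: u64) {
        self.v_runtime += delta; // lock-free: we are the sole owner
    }
}
```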
Key changes:
* Scheduler:
- `SchedState` now owns tasks via `Box<OwnedTask>`.
- Transitions between `run_queue` and `running_task` strictly move
ownership of the `Box`, ensuring pointer stability.
- The EEVDF comparison logic now explicitly handles comparisons
between the queued candidates and the currently running task (which is
not in the queue).
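A sketch of the ownership discipline described above; `SchedState` follows the text, but the EEVDF policy is reduced to a plain `v_runtime` comparison for illustration:

```rust
use std::collections::VecDeque;

struct OwnedTask {
    id: u32,
    v_runtime: u64,
}

struct SchedState {
    run_queue: VecDeque<Box<OwnedTask>>,
    running_task: Option<Box<OwnedTask>>,
}

impl SchedState {
    // Pick the task with the smallest v_runtime, explicitly comparing the
    // best queued candidate against the currently running task, which is
    // NOT in the queue. Boxes are moved, never copied, so the heap address
    // of each OwnedTask is stable across the transition.
    fn schedule(&mut self) {
        let queued = self.run_queue.front().map(|t| t.v_runtime);
        let running = self.running_task.as_ref().map(|t| t.v_runtime);
        let should_switch = match (queued, running) {
            (Some(q), Some(r)) => q < r,
            (Some(_), None) => true,
            _ => false,
        };
        if should_switch {
            let next = self.run_queue.pop_front().unwrap();
            if let Some(prev) = self.running_task.replace(next) {
                self.run_queue.push_back(prev); // ownership moves back
            }
        }
    }
}
```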
* Current Task Access:
- `current()` now returns a `CurrentTaskGuard` which:
1. Disables preemption (preventing context switches while holding
the reference).
2. Performs a runtime borrow check (panic on double-mutable borrow).
3. Dereferences a cached Per-CPU raw pointer for O(1) access.
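The guard's three duties can be modeled in userspace, with a `thread_local!` standing in for per-CPU data and a counter standing in for the preemption disable; all names here are illustrative:

```rust
use std::cell::{Cell, UnsafeCell};
use std::ops::{Deref, DerefMut};

struct OwnedTask {
    v_runtime: u64,
}

thread_local! {
    // Stand-ins for per-CPU state.
    static PREEMPT_COUNT: Cell<u32> = Cell::new(0);
    static BORROWED: Cell<bool> = Cell::new(false);
    static CURRENT: UnsafeCell<OwnedTask> =
        UnsafeCell::new(OwnedTask { v_runtime: 0 });
}

struct CurrentTaskGuard {
    ptr: *mut OwnedTask, // 3. cached raw pointer: O(1) access
}

fn current() -> CurrentTaskGuard {
    PREEMPT_COUNT.with(|c| c.set(c.get() + 1)); // 1. disable preemption
    BORROWED.with(|b| {
        // 2. runtime borrow check: panic on a double mutable borrow
        assert!(!b.get(), "current(): task already borrowed");
        b.set(true);
    });
    CurrentTaskGuard { ptr: CURRENT.with(|c| c.get()) }
}

impl Drop for CurrentTaskGuard {
    fn drop(&mut self) {
        BORROWED.with(|b| b.set(false));
        PREEMPT_COUNT.with(|c| c.set(c.get() - 1)); // re-enable preemption
    }
}

impl Deref for CurrentTaskGuard {
    type Target = OwnedTask;
    fn deref(&self) -> &OwnedTask {
        unsafe { &*self.ptr }
    }
}

impl DerefMut for CurrentTaskGuard {
    fn deref_mut(&mut self) -> &mut OwnedTask {
        unsafe { &mut *self.ptr }
    }
}
```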
In an SMP environment, two threads sharing an address space may trigger
a page fault on the same address simultaneously. Previously, the loser
of this race would receive an `AlreadyMapped` error from the page table
mapper, causing the kernel to treat a valid execution flow as an error.
This patch modifies `handle_demand_fault` to gracefully handle these
spurious faults by:
1. Accepting `AlreadyMapped` as a successful resolution. If another CPU
has already mapped the page while we were waiting for the lock
(or performing I/O), we consider the fault handled.
2. Fixing a memory leak in the race path. We now only `leak()` the
allocated `ClaimedPage` (surrendering ownership to the page tables) if
the mapping actually succeeds. If we lose the race, the `ClaimedPage` is
allowed to go out of scope, causing the `Drop` impl to return the unused
physical frame to the allocator.
3. Applying this logic to both the anonymous mapping path and the
deferred file-backed path.
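A reduced model of the leak fix: the `MapError` and `ClaimedPage` shapes are assumed, and an atomic counter stands in for the frame allocator's free list:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Stand-in for the frame allocator's free list.
static FRAMES_FREED: AtomicUsize = AtomicUsize::new(0);

#[derive(Debug, PartialEq)]
enum MapError {
    AlreadyMapped,
    OutOfMemory,
}

struct ClaimedPage {
    frame: usize,
}

impl ClaimedPage {
    // Surrender ownership of the frame to the page tables.
    fn leak(self) -> usize {
        let frame = self.frame;
        std::mem::forget(self); // suppress Drop: frame now lives in the tables
        frame
    }
}

impl Drop for ClaimedPage {
    fn drop(&mut self) {
        // Race lost (or error): hand the unused frame back.
        FRAMES_FREED.fetch_add(1, Ordering::Relaxed);
    }
}

fn handle_demand_fault(
    page: ClaimedPage,
    map: impl FnOnce(usize) -> Result<(), MapError>,
) -> Result<(), MapError> {
    match map(page.frame) {
        // We won the race: only now do we leak, surrendering the frame.
        Ok(()) => {
            page.leak();
            Ok(())
        }
        // Another CPU mapped the address first: the fault is resolved;
        // `page` drops here and the frame returns to the allocator.
        Err(MapError::AlreadyMapped) => Ok(()),
        Err(e) => Err(e),
    }
}
```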
Implement a new function, `try_copy_from_user`, which does not sleep
when a fault occurs, making it safe to call while a spinlock is held.
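The intended semantics can be sketched as follows; the `UserRange` model and the `WouldFault` error are illustrative, not the kernel's actual types:

```rust
#[derive(Debug, PartialEq)]
enum CopyError {
    // The source page is not resident; copying would fault and sleep.
    WouldFault,
}

// Userspace model of a user mapping: bytes plus a residency flag.
struct UserRange {
    data: Vec<u8>,
    resident: bool,
}

// Unlike a sleeping copy_from_user, this bails out immediately if
// touching the source would fault, so it is safe to call while holding
// a spinlock. The caller can drop the lock, fault the page in, and retry.
fn try_copy_from_user(dst: &mut [u8], src: &UserRange) -> Result<usize, CopyError> {
    if !src.resident {
        return Err(CopyError::WouldFault);
    }
    let n = dst.len().min(src.data.len());
    dst[..n].copy_from_slice(&src.data[..n]);
    Ok(n)
}
```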